640KB 100% ought to be enough for everyone. I am talking about the Task Manager CPU utilization graph and why the 100% shown in the overall CPU % Utilization view is sometimes wrong, often misleading, and nearly always useless for performance troubleshooting, even though it works correctly according to the specs.
Update2
The fixed Task Manager has arrived in Windows 11 24H2 and later. CPU utilization now represents actual core usage in the CPU graph. CPU Utility was added to the Details view and is no longer capped at 100%.


This is great news for everyone who wants to quickly see if some process is using more CPU than it should without the exaggeration of frequency scaling.
Update2 End
Update 1
I do not know if enough people have complained, but Microsoft decided to roll back this strange behavior and aims for consistency and usable numbers. See https://blogs.windows.com/windows-insider/2025/02/28/announcing-windows-11-insider-preview-build-26120-3360-dev-and-beta-channels/ where the work has started.
We are beginning to roll out a change to the way Task Manager calculates CPU utilization for the Processes, Performance, and Users pages. Task Manager will now use the standard metrics to display CPU workload consistently across all pages and aligning with industry standards and third-party tools. For backward compatibility, a new optional column called CPU Utility is available (hidden by default) on the Details tab showing the previous CPU value used on the Processes page.
Thanks!
Update 1 End

My HP laptop has an i5-1345U CPU with 2 hyper-threaded Performance cores and 8 Efficiency cores, which adds up to 10 cores and 12 logical processors (+2 for the hyper-threaded cores) according to Task Manager. The CPU % Utilization value is 100%, which would mean that all 12 logical processors are working at 100%. Really?
Let’s see more by switching from the Overall Utilization to Logical Processors view:


5 cores are at 100%, but 7 cores are nearly idle. That is unexpected and causes a lot of confusion when someone uses Task Manager to check for CPU hogs. 100% utilization in Task Manager does not mean that all logical processors are in use, as one (naively) would expect.
I am by far not the first to find this issue:
- https://aaron-margosis.medium.com/task-managers-cpu-numbers-are-all-but-meaningless-2d165b421e43
- https://illuminati.services/2021/03/17/windows-10-task-manager-cpu-inaccurate-a-tale-of-two-metrics/
What really bugs me is that even with the latest Windows 11 version, this issue has not been addressed.
If 100% utilization does not mean 100% of all logical processors are in use, what is displayed here? Let's take a closer look at what Task Manager shows when I max out 5 of 12 logical processors with the CPUSTRESS tool from Sysinternals.

Calculating the % logical processor utilization (which is the % Processor Time performance counter value) we arrive at 5/12 * 100% = 41%. This value is shown in the Details view of Task Manager. If you sum up the numbers in the Details view, you arrive at ca. 43% in total. But Task Manager shows 100% utilization in the CPU overview. These inconsistent numbers in Task Manager (by design™) get worse with every new CPU generation and make the overall CPU graph useless. I have asked some colleagues if they had noticed these inconsistencies. Yes, they had, but they assumed they were just not smart enough to make sense of the displayed data.
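The arithmetic above can be sketched in a few lines of Python. The per-core values are illustrative, matching the 5-of-12 CPUSTRESS scenario:

```python
# Overall % Processor Time is just the average of the per-core values.
# Illustrative numbers: 5 logical processors pegged at 100%, 7 nearly idle.
per_core_processor_time = [100.0] * 5 + [0.0] * 7

overall_processor_time = sum(per_core_processor_time) / len(per_core_processor_time)
print(f"{overall_processor_time:.1f} %")  # ~41.7 %, shown as 41% in Task Manager
```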
In effect, Task Manager shows 100% CPU utilization for anything at or above 41% logical processor utilization. The main task of Task Manager, checking whether something uses too much CPU, becomes much harder than it should be.
OK, the number may not be meaningful, but it is correct. What do we see here? The value visualized by Task Manager in the overall CPU % Utilization graph is the performance counter % Processor Utility, which is different from the % Processor Time values shown in the Details view. Its description is:
Processor Utility is the amount of work a processor is completing, as a percentage of the amount of work the processor could complete if it were running at its nominal performance and never idle. On some processors, Processor Utility may exceed 100%.
If the CPU goes into any Turbo/Throttle mode, Task Manager will show a scaled value which is proportional to the current clock rate. Based on my measurements, it seems to be roughly:
%ProcessorUtility = %ProcessorTime * fCurrent/fBase
The intention of this value is that if the CPU clocks higher, an application can run faster. It reflects the actual CPU processing performance. While well-meaning, this clashes with today's CPUs, which aim for power efficiency. The base frequency gets lower, but the turbo frequencies get higher, making the scaling factor (fCurrent/fBase) large. If, for example, one core of a 10-core system is fully utilized, you have 10% utilization (% Processor Time). With a huge multiplier fCurrent/fBase of e.g. 10, that single turbo-boosted core would show as 100% Processor Utility in Task Manager, which is correct by design but misses the point that people use Task Manager to identify CPU hogs.
My current laptop CPU has a low 1.6 GHz base frequency but can boost up to 4.12 GHz. This is a factor of 2.5 above the base frequency. The value displayed in the graph view of Task Manager is the logical processor utilization of 41% (5/12) multiplied by the frequency factor of 2.5, which results in 102%. Yes, this is bigger than 100%. Utilization values > 100% are unintuitive to the average user, so Microsoft decided to display a value capped at 100% in Task Manager. This is why it shows 100% CPU utilization at >= 41% logical processor utilization, and why the value does not increase any further when more CPUs are put to use.
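A minimal sketch of the scaling and capping described above. The formula is my approximation from measurements, not a documented algorithm, and the frequency values are rounded for illustration:

```python
def processor_utility(processor_time_pct: float,
                      f_current_ghz: float,
                      f_base_ghz: float) -> float:
    """Approximate % Processor Utility: % Processor Time scaled by the clock ratio."""
    return processor_time_pct * (f_current_ghz / f_base_ghz)

def task_manager_cpu_graph(processor_time_pct: float,
                           f_current_ghz: float,
                           f_base_ghz: float) -> float:
    """Task Manager's overall CPU graph shows the utility value capped at 100%."""
    return min(100.0, processor_utility(processor_time_pct, f_current_ghz, f_base_ghz))

# i5-1345U example: 41% Processor Time, 1.6 GHz base, ~4 GHz turbo (factor ~2.5)
raw = processor_utility(41.0, 4.0, 1.6)         # ~102.5, i.e. above 100%
shown = task_manager_cpu_graph(41.0, 4.0, 1.6)  # capped at 100.0
print(raw, shown)
```

This also reproduces the 10-core thought experiment: with a multiplier of 10, a single busy core (10% Processor Time) already yields 100% Processor Utility.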
If you visualize the performance counters
- % Processor Time (unscaled per-core utilization; used everywhere else in Task Manager; 100% is the maximum)
- % Processor Utility (frequency-scaled core utilization; used in the overall CPU Utilization view; can go significantly above 100%)
in Perfmon, you find that % Processor Utility is not capped at 100%. It rises at the beginning and then drops a bit, because the CPU gets hot and thermal throttling kicks in, which reduces the CPU frequency. Below is data for a different CPU (i5-13600KF):

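The shape of that Perfmon curve can be reproduced with a toy model, assuming the linear frequency scaling from above. The clock samples are made up for illustration and only mimic a turbo boost followed by thermal throttling:

```python
# Toy model: all cores busy (% Processor Time = 100) while the clock first
# turbo-boosts and then throttles down as the CPU heats up.
base_ghz = 3.5                                        # assumed base frequency
clock_over_time_ghz = [5.1, 5.1, 4.9, 4.6, 4.3, 4.3]  # hypothetical samples

utility_over_time = [100.0 * f / base_ghz for f in clock_over_time_ghz]
peak, settled = utility_over_time[0], utility_over_time[-1]
print(f"peak ~{peak:.0f}%, settled ~{settled:.0f}%")  # rises above 100%, then drops
```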
When all cores are in use, I want to see it easily in the overall CPU utilization graph of Task Manager. The per-core view is cumbersome: if not all cores are active, interpreting the graph becomes difficult, and the 100%-capped Processor Utility number adds even more confusion.
To make things even more complicated, Intel mainstream CPUs have for several years shipped with Performance and Efficiency cores, which differ in performance by roughly a factor of 3. P and E cores also usually run at different top clock speeds, which makes calculating an average frequency of the busy cores challenging. Added complexity enters the arena with smart per-core scheduling by the OS, based on policies which can prefer P cores, E cores, or a mixture of both.
The frequency-scaled CPU utilization view, originally meant to visualize application speed, has become a meaningless metric on today's CPUs.
The best solution from my point of view is to switch back to the old-fashioned % Processor Time graph and be done with it. Adding more smartness will just cause more confusion and inconsistencies across the displayed numbers in Task Manager.
A single, meaningful number representing application speed might be possible, but calculating it would be highly complex. Several factors must be considered, including:
- Core Type (P-core vs. E-core)
- Maximum Core Frequency
Performance varies significantly between P-cores and E-cores depending on the instruction set used, with a rough scaling factor of about 3. Additionally, maximum core frequency is influenced by factors like CPU cooling efficiency and ambient temperature.
This complexity makes it difficult to create a CPU performance graph that accurately represents application performance on a 0-100% scale. Furthermore, it is a moving target: what qualifies as "100%" can vary based on operating conditions. For instance, a CPU running at 0°C vs. 40°C may experience different thermal throttling states, affecting frequency and overall speed.
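To illustrate why a single 0-100% number is hard, here is a hypothetical capacity calculation for the i5-1345U, assuming the rough P-to-E performance factor of 3 mentioned above (the weights are illustrative; the real factor varies with the instruction mix):

```python
# Hypothetical "work capacity" normalization: weight each core by its
# assumed relative throughput (P-core vs. E-core, rough factor 3).
P_CORES, E_CORES = 2, 8
P_WEIGHT, E_WEIGHT = 3.0, 1.0   # assumed relative per-core throughput

total_capacity = P_CORES * P_WEIGHT + E_CORES * E_WEIGHT  # 14 "E-core units"

# Fully loading just the two P-cores uses 6/14 of the machine's capacity,
# although it is only 2 of 10 cores (20%) in a plain core count.
p_only_share = (P_CORES * P_WEIGHT) / total_capacity
print(f"{p_only_share:.0%}")  # ~43% of total capacity
```

Even this toy calculation shows that "percent of the machine" depends entirely on which weight you pick, and that weight itself shifts with workload and thermals.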
Now you see why Task Manager can be confusing. Let's hope Microsoft does not turn this into even more rocket science and gives performance practitioners easily understandable, uncapped, and consistent diagnostics.