..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

Power Management
================

The DPDK Power Management feature allows user space applications to save power
by dynamically adjusting CPU frequency or entering into different C-States.

*   Adjusting the CPU frequency dynamically according to the utilization of the RX queue.

*   Entering into deeper C-States, according to adaptive algorithms that speculate on
    brief periods of time during which the application can be suspended because no packets are being received.

The interfaces for adjusting the operating CPU frequency are in the power management library.
C-State control is implemented in applications according to the different use cases.

CPU Frequency Scaling
---------------------

The Linux kernel provides a cpufreq module for CPU frequency scaling for each lcore.
For example, for cpuX, /sys/devices/system/cpu/cpuX/cpufreq/ has the following sys files for frequency scaling:

*   affected_cpus

*   bios_limit

*   cpuinfo_cur_freq

*   cpuinfo_max_freq

*   cpuinfo_min_freq

*   cpuinfo_transition_latency

*   related_cpus

*   scaling_available_frequencies

*   scaling_available_governors

*   scaling_cur_freq

*   scaling_driver

*   scaling_governor

*   scaling_max_freq

*   scaling_min_freq

*   scaling_setspeed

In the DPDK, scaling_governor is configured in user space.
Then, a user space application can prompt the kernel by writing to scaling_setspeed to adjust the CPU frequency
according to the strategies defined by the user space application.

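The power library performs these sysfs writes on behalf of the application.
Below is a minimal sketch of preparing an lcore for frequency scaling with
``librte_power``; selecting the ACPI cpufreq environment is an illustrative
assumption, and other environments (for example the pstate driver) may be the
right choice on a given platform.

.. code-block:: c

    #include <rte_power.h>

    /* Prepare one lcore for userspace frequency scaling. */
    static int
    setup_freq_scaling(unsigned int lcore_id)
    {
        /* Assumption: the acpi-cpufreq driver is in use; rte_power_init()
         * can also auto-detect a suitable environment if none is set. */
        if (rte_power_set_env(PM_ENV_ACPI_CPUFREQ) < 0)
            return -1;

        /* Sets an appropriate scaling governor for this lcore and reads
         * the list of available frequencies from sysfs. */
        return rte_power_init(lcore_id);
    }
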
Core-load Throttling through C-States
-------------------------------------

Core state can be altered by speculative sleeps whenever the specified lcore has nothing to do.
In the DPDK, if no packet is received after polling,
speculative sleeps can be triggered according to the strategies defined by the user space application.

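A minimal sketch of one such strategy is shown below: the lcore counts
consecutive empty polls and, past an application-defined threshold, yields the
CPU to the kernel so that the core can enter a C-State. The threshold and
sleep period are illustrative values, not values defined by the library.

.. code-block:: c

    #include <unistd.h>
    #include <rte_ethdev.h>

    #define EMPTY_POLL_THRESHOLD 512 /* illustrative, application-defined */
    #define SLEEP_TIME_US        100 /* illustrative, application-defined */

    /* RX loop fragment with speculative sleeps on idle. */
    static void
    rx_loop_with_sleep(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[32];
        unsigned int idle = 0;

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

            if (nb_rx == 0) {
                if (++idle > EMPTY_POLL_THRESHOLD)
                    usleep(SLEEP_TIME_US); /* let the core enter a C-State */
            } else {
                idle = 0;
                /* process the received packets ... */
            }
        }
    }
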
Per-core Turbo Boost
--------------------

Individual cores can be allowed to enter a Turbo Boost state on a per-core
basis. This is achieved by enabling Turbo Boost Technology in the BIOS, then
looping through the relevant cores and enabling/disabling Turbo Boost on each
core.

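A minimal sketch of enabling Turbo Boost on every worker lcore is shown below;
it assumes each lcore has already been initialized with ``rte_power_init()``.

.. code-block:: c

    #include <stdio.h>
    #include <rte_lcore.h>
    #include <rte_power.h>

    /* Allow each worker lcore to enter Turbo Boost. */
    static void
    enable_turbo_on_workers(void)
    {
        unsigned int lcore_id;

        RTE_LCORE_FOREACH_WORKER(lcore_id) {
            if (rte_power_freq_enable_turbo(lcore_id) < 0)
                printf("Cannot enable turbo on lcore %u\n", lcore_id);
        }
    }
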
Use of Power Library in a Hyper-Threaded Environment
----------------------------------------------------

In the case where the power library is in use on a system with Hyper-Threading enabled,
the frequency on the physical core is set to the highest frequency of the Hyper-Thread siblings.
So even though an application may request a scale down, the core frequency will
remain at the highest frequency until all Hyper-Threads on that core request a scale down.

API Overview of the Power Library
---------------------------------

The main methods exported by the power library are for CPU frequency scaling and include the following
(a usage sketch follows the list):

*   **Freq up**: Prompt the kernel to scale up the frequency of the specific lcore.

*   **Freq down**: Prompt the kernel to scale down the frequency of the specific lcore.

*   **Freq max**: Prompt the kernel to scale up the frequency of the specific lcore to the maximum.

*   **Freq min**: Prompt the kernel to scale down the frequency of the specific lcore to the minimum.

*   **Get available freqs**: Read the available frequencies of the specific lcore from the sys file.

*   **Freq get**: Get the current frequency of the specific lcore.

*   **Freq set**: Prompt the kernel to set the frequency for the specific lcore.

*   **Enable turbo**: Prompt the kernel to enable Turbo Boost for the specific lcore.

*   **Disable turbo**: Prompt the kernel to disable Turbo Boost for the specific lcore.

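The sketch below shows how these methods are typically combined in an RX loop:
the core is scaled down after a run of empty polls and scaled back up as soon
as traffic returns. The threshold is an illustrative assumption, not a value
defined by the library.

.. code-block:: c

    #include <stdint.h>
    #include <rte_power.h>

    #define SCALE_DOWN_THRESHOLD 1024 /* illustrative, application-defined */

    /* One iteration of a simple frequency-scaling strategy for an lcore. */
    static void
    adjust_core_freq(unsigned int lcore_id, uint16_t nb_rx, unsigned int *idle)
    {
        if (nb_rx == 0) {
            if (++(*idle) > SCALE_DOWN_THRESHOLD) {
                rte_power_freq_down(lcore_id); /* "Freq down" */
                *idle = 0;
            }
        } else {
            *idle = 0;
            rte_power_freq_max(lcore_id);      /* "Freq max" on traffic */
        }
    }
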
Use Cases
---------

The power management mechanism is used to save power when performing L3 forwarding.


Empty Poll API
--------------

Removal Warning
~~~~~~~~~~~~~~~

The experimental empty poll API will be removed from the library
in a future DPDK release.
The empty poll mechanism is superseded by the power PMD modes,
i.e. monitor, pause and scale.


Abstract
~~~~~~~~

For packet processing workloads such as DPDK, polling is continuous.
This means CPU cores always show 100% busy, independent of how much work
those cores are doing. It is hugely important to accurately determine how busy
a core is, for the following reasons:

        * There is no indication of overload conditions.
        * The user does not know how much real load is on a system, resulting
          in wasted energy as no power management is utilized.

Compared to the original l3fwd-power design, instead of going to sleep
after detecting an empty poll, the new mechanism just lowers the core frequency.
As a result, the application does not stop polling the device, which leads
to improved handling of bursts of traffic.

When the system becomes busy, the empty poll mechanism can also increase the core
frequency (including turbo) to make a best effort for intensive traffic. This gives
more flexible and balanced traffic awareness than the standard l3fwd-power
application.


Proposed Solution
~~~~~~~~~~~~~~~~~
The proposed solution focuses on how many times empty polls are executed.
A low number of empty polls means the current core is busy processing its
workload, and therefore a higher frequency is needed. A high empty poll count
indicates the current core is not doing any real work, and therefore the
frequency can be lowered to save power.

In the current implementation, each core has one empty-poll counter, which assumes
that one core is dedicated to one queue. This will need to be expanded in the future to
support multiple queues per core.

Power state definition:
^^^^^^^^^^^^^^^^^^^^^^^

* LOW:  Not currently used, reserved for future use.

* MED:  The frequency is used to process a modest traffic workload.

* HIGH: The frequency is used to process a busy traffic workload.

There are two phases to establish the power management system:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Training phase. This phase is used to measure the optimal frequency
  change thresholds for a given system. The thresholds will differ from
  system to system due to differences in processor micro-architecture,
  cache and device configurations.
  In this phase, the user must ensure that no traffic can enter the
  system so that counts can be measured for empty polls at low, medium
  and high frequencies. Each frequency is measured for two seconds.
  Once the training phase is complete, the threshold numbers are
  displayed, normal mode resumes, and traffic can be allowed into
  the system. These threshold numbers can be used on the command line
  when starting the application in normal mode to avoid re-training
  every time.

* Normal phase. Every 10ms the run-time counters are compared
  to the supplied threshold values, and the decision will be made
  whether to move to a different power state (by adjusting the
  frequency).

API Overview for Empty Poll Power Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **State Init**: initialize the power management system.

* **State Free**: free the resources held by the power management system.

* **Update Empty Poll Counter**: update the empty poll counter.

* **Update Valid Poll Counter**: update the valid poll counter.

* **Set the Frequency Index**: update the power state/frequency mapping.

* **Detect empty poll state change**: run the empty poll state change detection algorithm, then take action.

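A minimal sketch of how the counter-update calls fit into an RX loop is shown
below, following the usage in the l3fwd-power sample application. State
initialization with ``rte_power_empty_poll_stat_init()`` and the periodic
detection callback are omitted for brevity, and, as noted above, this API is
deprecated.

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_ethdev.h>
    #include <rte_power_empty_poll.h>

    /* RX loop fragment updating the empty/valid poll counters. */
    static void
    empty_poll_rx_loop(uint16_t port_id, uint16_t queue_id)
    {
        unsigned int lcore_id = rte_lcore_id();
        struct rte_mbuf *pkts[32];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

            if (nb_rx == 0)
                rte_power_empty_poll_stat_update(lcore_id);  /* empty poll */
            else
                rte_power_poll_stat_update(lcore_id, nb_rx); /* valid poll */

            /* process the received packets ... */
        }
    }
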
Use Cases
---------
The mechanism can be applied to any device which is based on polling, e.g. a NIC or FPGA.

Ethernet PMD Power Management API
---------------------------------

Abstract
~~~~~~~~

Existing power management mechanisms require developers to change application
design or change code to make use of them. The PMD power management API provides a
convenient alternative by utilizing Ethernet PMD RX callbacks, and triggering
power saving whenever the empty poll count reaches a certain number.

* Monitor
   This power saving scheme will put the CPU into an optimized power state and
   monitor the Ethernet PMD RX descriptor address, waking the CPU up whenever
   there's new traffic. Support for this scheme may not be available on all
   platforms, and further limitations may apply (see below).

* Pause
   This power saving scheme will avoid busy polling by either entering a
   power-optimized sleep state with the ``rte_power_pause()`` function, or, if it's
   not supported by the underlying platform, by using ``rte_pause()``.

* Frequency scaling
   This power saving scheme will use ``librte_power`` library functionality to
   scale the core frequency up/down depending on traffic volume.
   The reaction time of the frequency scaling mode is longer
   than that of the pause and monitor modes.

232The "monitor" mode is only supported in the following configurations and scenarios:
233
234* On Linux* x86_64, `rte_power_monitor()` requires WAITPKG instruction set being
235  supported by the CPU, while `rte_power_monitor_multi()` requires WAITPKG and
236  RTM instruction sets being supported by the CPU. RTM instruction set may also
237  require booting the Linux with `tsx=on` command line parameter. Please refer
238  to your platform documentation for further information.
239
240* If ``rte_cpu_get_intrinsics_support()`` function indicates that
241  ``rte_power_monitor_multi()`` function is supported by the platform, then
242  monitoring multiple Ethernet Rx queues for traffic will be supported.
243
244* If ``rte_cpu_get_intrinsics_support()`` function indicates that only
245  ``rte_power_monitor()`` is supported by the platform, then monitoring will be
246  limited to a mapping of 1 core 1 queue (thus, each Rx queue will have to be
247  monitored from a different lcore).
248
249* If ``rte_cpu_get_intrinsics_support()`` function indicates that neither of the
250  two monitoring functions are supported, then monitor mode will not be supported.
251
252* Not all Ethernet drivers support monitoring, even if the underlying
253  platform may support the necessary CPU instructions. Please refer to
254  :doc:`../nics/overview` for more information.
255
256
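A minimal sketch of enabling and later disabling one of these schemes on an Rx
queue is shown below. The port, queue and mode values are illustrative; error
handling and the surrounding port setup are omitted.

.. code-block:: c

    #include <rte_power_pmd_mgmt.h>

    #define PORT_ID  0 /* illustrative */
    #define QUEUE_ID 0 /* illustrative */

    /* Enable the monitor scheme on one Rx queue polled from lcore_id.
     * RTE_POWER_MGMT_TYPE_PAUSE or RTE_POWER_MGMT_TYPE_SCALE can be used
     * instead, subject to the platform support described above. */
    static int
    enable_pmd_power_mgmt(unsigned int lcore_id)
    {
        return rte_power_ethdev_pmgmt_queue_enable(lcore_id, PORT_ID,
                QUEUE_ID, RTE_POWER_MGMT_TYPE_MONITOR);
    }

    /* Disable the scheme again, e.g. before stopping the port. */
    static int
    disable_pmd_power_mgmt(unsigned int lcore_id)
    {
        return rte_power_ethdev_pmgmt_queue_disable(lcore_id, PORT_ID,
                QUEUE_ID);
    }
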
API Overview for Ethernet PMD Power Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* **Queue Enable**: Enable specific power scheme for certain queue/port/core.

* **Queue Disable**: Disable power scheme for certain queue/port/core.

* **Get Emptypoll Max**: Get the configured number of empty polls to wait before
  entering sleep state.

* **Set Emptypoll Max**: Set the number of empty polls to wait before entering
  sleep state.

* **Get Pause Duration**: Get the configured duration (microseconds) to be used
  in the Pause callback.

* **Set Pause Duration**: Set the duration of the pause (microseconds) used in
  the Pause mode callback.

* **Get Scaling Min Freq**: Get the configured minimum frequency (kHz) to be used
  in Frequency Scaling mode.

* **Set Scaling Min Freq**: Set the minimum frequency (kHz) to be used in Frequency
  Scaling mode.

* **Get Scaling Max Freq**: Get the configured maximum frequency (kHz) to be used
  in Frequency Scaling mode.

* **Set Scaling Max Freq**: Set the maximum frequency (kHz) to be used in Frequency
  Scaling mode.

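The sketch below shows how an application might tune these knobs before
enabling a power scheme. The values are arbitrary examples, and the function
names follow ``rte_power_pmd_mgmt.h``; availability of the getters/setters may
vary between DPDK releases.

.. code-block:: c

    #include <rte_power_pmd_mgmt.h>

    /* Tune PMD power management behaviour; all values are examples only. */
    static int
    tune_pmd_power_mgmt(unsigned int lcore_id)
    {
        /* Sleep only after 256 consecutive empty polls. */
        if (rte_power_pmd_mgmt_set_emptypoll_max(256) < 0)
            return -1;

        /* In Pause mode, pause for roughly 5 microseconds at a time. */
        if (rte_power_pmd_mgmt_set_pause_duration(5) < 0)
            return -1;

        /* In Frequency Scaling mode, constrain the frequency range (kHz). */
        if (rte_power_pmd_mgmt_set_scaling_freq_min(lcore_id, 1000000) < 0)
            return -1;
        if (rte_power_pmd_mgmt_set_scaling_freq_max(lcore_id, 2200000) < 0)
            return -1;

        return 0;
    }
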
Intel Uncore API
----------------

Abstract
~~~~~~~~

Uncore is a term used by Intel to describe the functions of a microprocessor
that are not in the core, but which must be closely connected to the core
to achieve high performance: L3 cache, on-die memory controller, etc.
Significant power savings can be achieved by reducing the uncore frequency
to its lowest value.

The Linux kernel provides the driver "intel-uncore-frequency"
to control the uncore frequency limits for x86 platforms.
The driver is available from kernel version 5.6 and above.
The CONFIG_INTEL_UNCORE_FREQ_CONTROL kernel option, also added in 5.6,
will need to be enabled.
This manipulates the contents of MSR 0x620,
which sets the min/max uncore frequency for the SKU.

API Overview for Intel Uncore
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Overview of each function in the Intel Uncore API,
with an explanation of what each one does.
These functions should not be called in the fast path.

Uncore Power Init
  Initialize uncore power, populate frequency array
  and record original min & max for die on pkg.

Uncore Power Exit
  Exit uncore power, restoring original min & max for die on pkg.

Get Uncore Power Freq
  Get current uncore freq index for die on pkg.

Set Uncore Power Freq
  Set min & max uncore freq index for die on pkg
  to specified index value (min and max will be the same).

Uncore Power Max
  Set min & max uncore freq to maximum frequency index for die on pkg
  (min and max will be the same).

Uncore Power Min
  Set min & max uncore freq to minimum frequency index for die on pkg
  (min and max will be the same).

Get Num Freqs
  Get the number of frequencies in the index array.

Get Num Pkgs
  Get the number of packages (CPUs) on the system.

Get Num Dies
  Get the number of dies on a given package.

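A minimal sketch of using the API to pin the uncore to its minimum frequency
is shown below. The ``rte_power_intel_uncore_*`` names are an assumption based
on the ``rte_power_intel_uncore.h`` header; later DPDK releases expose the same
functionality under generic ``rte_power_uncore_*`` names.

.. code-block:: c

    #include <rte_power_intel_uncore.h>

    /* Pin the uncore of every die on every package to its minimum frequency.
     * Call at startup, not in the fast path; rte_power_intel_uncore_exit()
     * restores the original limits at shutdown. */
    static int
    minimize_uncore_freq(void)
    {
        unsigned int pkg, die;

        for (pkg = 0; pkg < rte_power_intel_uncore_get_num_pkgs(); pkg++) {
            for (die = 0; die < rte_power_intel_uncore_get_num_dies(pkg); die++) {
                if (rte_power_intel_uncore_init(pkg, die) < 0)
                    return -1;
                if (rte_power_intel_uncore_freq_min(pkg, die) < 0)
                    return -1;
            }
        }
        return 0;
    }
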
References
----------

*   The :doc:`../sample_app_ug/l3_forward_power_man`
    chapter in the :doc:`../sample_app_ug/index` section.

*   The :doc:`../sample_app_ug/vm_power_management`
    chapter in the :doc:`../sample_app_ug/index` section.

*   The :doc:`../nics/overview` chapter in the :doc:`../nics/index` section.
356