..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

L3 Forwarding with Power Management Sample Application
======================================================

Introduction
------------

The L3 Forwarding with Power Management application is an example of power-aware packet processing using the DPDK.
The application is based on the existing L3 Forwarding sample application,
with power management algorithms added to control the P-states and
C-states of the Intel processor via a power management library.

Overview
--------

The application demonstrates the use of the Power libraries in the DPDK to implement packet forwarding.
The initialization and run-time paths are very similar to those of the :doc:`l3_forward`.
The main difference from the L3 Forwarding sample application is that this application introduces power-aware optimization algorithms
by leveraging the Power library to control the P-state and C-state of the processor based on packet load.

The DPDK includes poll-mode drivers (PMDs) to configure Intel NIC devices and their receive (Rx) and transmit (Tx) queues.
The design principle of the PMD is to access the Rx and Tx descriptors directly, without any interrupts, to quickly receive,
process and deliver packets in user space.

In general, the DPDK executes an endless packet processing loop on dedicated IA cores that includes the following steps:

* Retrieve input packets through the PMD to poll Rx queues

* Process each received packet or provide received packets to other processing cores through software queues

* Send pending output packets to Tx queues through the PMD

In this way, the PMD achieves better performance than a traditional interrupt-mode driver,
at the cost of keeping the cores active and running at the highest frequency,
and hence consuming the maximum power all the time.
However, during periods of light network traffic,
which occur regularly in communication infrastructure systems due to the well-known "tidal effect",
the PMD is still busy waiting for network packets, which wastes a lot of power.

Processor performance states (P-states) are the capability of an Intel processor
to switch between different supported operating frequencies and voltages.
If configured correctly according to system workload, this feature provides power savings.
CPUFreq is the infrastructure provided by the Linux* kernel to control the processor performance state capability.
CPUFreq supports a user space governor that enables setting the frequency by manipulating a virtual file device from a user space application.
The Power library in the DPDK provides a set of APIs for manipulating this virtual file device, allowing a user space application
to set the CPUFreq governor and the frequency of specific cores.

This application includes a P-state power management algorithm that generates a frequency hint to be sent to CPUFreq.
The algorithm uses the number of received and available Rx packets on recent polls to make a heuristic decision to scale frequency up or down.
Specifically, some thresholds are checked to see whether a specific core running a DPDK polling thread needs to increase its frequency
by a step based on the near-to-full trend of the polled Rx queues.
It also decreases the frequency by a step if the number of packets processed per loop is far less than the expected threshold
or if the thread's sleeping time exceeds a threshold.
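
As an illustration of the Power library calls involved, the sketch below shows how a user space application might put one lcore under the userspace CPUFreq governor and adjust its frequency. This is a minimal, hypothetical example rather than code taken from the sample itself, and most error handling is omitted.

.. code-block:: c

    #include <rte_power.h>

    /* Minimal sketch: take control of one lcore's frequency via the DPDK
     * Power library, which switches that core's CPUFreq governor to
     * "userspace" and exposes frequency up/down/max/min helpers. */
    static int
    power_control_example(unsigned int lcore_id)
    {
        /* set the userspace governor for this lcore */
        if (rte_power_init(lcore_id) < 0)
            return -1;

        rte_power_freq_max(lcore_id);   /* run at the highest frequency */
        /* ... heavy traffic processed here ... */
        rte_power_freq_down(lcore_id);  /* step down when the load drops */

        /* restore the original governor */
        return rte_power_exit(lcore_id);
    }
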

C-states are also known as sleep states.
They allow software to put an Intel core into a low power idle state from which it is possible to exit via an event, such as an interrupt.
However, there is a tradeoff between the power consumed in the idle state and the time required to wake up from the idle state (exit latency).
Therefore, as you go into deeper C-states, the power consumed is lower but the exit latency is increased. Each C-state has a target residency.
It is essential that, when entering a C-state, the core remains in that C-state for at least as long as the target residency in order
to fully realize the benefits of entering the C-state.
CPUIdle is the infrastructure provided by the Linux kernel to control the processor C-state capability.
Unlike CPUFreq, CPUIdle does not provide a mechanism that allows the application to change the C-state directly.
Instead, it has its own heuristic algorithms in kernel space that select the target C-state to enter by executing privileged instructions like HLT and MWAIT,
based on the speculative sleep duration of the core.
In this application, we introduce a heuristic algorithm that allows packet processing cores to sleep for a short period
if no Rx packets have been received on recent polls.
In this way, CPUIdle automatically forces the corresponding cores to enter deeper C-states
instead of always running in the C0 state waiting for packets.

.. note::

    To fully demonstrate the power saving capability of using C-states,
    it is recommended to enable the deeper C3 and C6 states in the BIOS during system boot up.

Compiling the Application
-------------------------

To compile the sample application, see :doc:`compiling`.

The application is located in the ``l3fwd-power`` sub-directory.

Running the Application
-----------------------

The application has a number of command line options:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l3fwd-power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]

where,

* -p PORTMASK: Hexadecimal bitmask of ports to configure

* -P: Sets all ports to promiscuous mode so that packets are accepted regardless of the packet's Ethernet MAC destination address.
  Without this option, only packets with the Ethernet MAC destination address set to the Ethernet address of the port are accepted.

* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.

* --enable-jumbo: optional, enables jumbo frames

* --max-pkt-len: optional, maximum packet length in decimal (64-9600)

* --no-numa: optional, disables NUMA awareness

* --empty-poll: Traffic-aware power management. See below for details.

* --telemetry: Telemetry mode.

See :doc:`l3_forward` for details.
The L3fwd-power example reuses the L3fwd command line options.
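
For example, a typical invocation forwarding between two ports, with one Rx queue per port handled by lcores 1 and 2, might look as follows. The port mask, queue and lcore numbers are placeholders to be adapted to the target system:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l3fwd-power -l 1,2 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
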

Explanation
-----------

The following sections provide some explanation of the sample application code.
As mentioned in the overview section,
the initialization and run-time paths are very similar to those of the L3 Forwarding sample application.
The following sections describe aspects that are specific to the L3 Forwarding with Power Management sample application.

Power Library Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Power library is initialized in the main routine.
It changes the P-state governor to userspace for the specific cores that are under its control.
The Timer library is also initialized, and several timers are created later on;
these are responsible for checking whether the frequency needs to be scaled down at run time by checking CPU utilization statistics.

.. note::

    Only the power management related initialization is shown.

.. code-block:: c

    int main(int argc, char **argv)
    {
        struct lcore_conf *qconf;
        int ret;
        unsigned nb_ports;
        uint16_t queueid, portid;
        unsigned lcore_id;
        uint64_t hz;
        uint32_t n_tx_queue, nb_lcores;
        uint8_t nb_rx_queue, queue, socketid;

        // ...

        /* init RTE timer library to be used to initialize per-core timers */

        rte_timer_subsystem_init();

        // ...

        /* per-core initialization */

        for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
            if (rte_lcore_is_enabled(lcore_id) == 0)
                continue;

            /* init power management library for a specified core */

            ret = rte_power_init(lcore_id);
            if (ret)
                rte_exit(EXIT_FAILURE, "Power management library "
                    "initialization failed on core%d\n", lcore_id);

            /* init timer structures for each enabled lcore */

            rte_timer_init(&power_timers[lcore_id]);

            hz = rte_get_hpet_hz();

            rte_timer_reset(&power_timers[lcore_id], hz/TIMER_NUMBER_PER_SECOND,
                SINGLE, lcore_id, power_timer_cb, NULL);

            // ...
        }

        // ...
    }
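
The timers armed above fire ``TIMER_NUMBER_PER_SECOND`` times per second and invoke power_timer_cb(), which decides whether the core can step its frequency down. The following is only an illustrative sketch of that callback, based on the heuristic described later in this document (sleeping for more than 25% of the sampling period, or handling fewer packets per polling loop than expected, triggers a step down). The ``struct lcore_stats`` layout, ``nb_iteration_looped`` and ``EXPECTED_PKTS_PER_LOOP`` are assumed names for this sketch, not necessarily those used in the sample code.

.. code-block:: c

    /* Sketch of a per-lcore frequency scale-down check. */
    static void
    power_timer_cb(struct rte_timer *tim __rte_unused, void *arg __rte_unused)
    {
        unsigned int lcore_id = rte_lcore_id();
        /* length of one sampling period in microseconds */
        uint64_t period_us = US_PER_S / TIMER_NUMBER_PER_SECOND;
        struct lcore_stats *s = &stats[lcore_id];   /* hypothetical stats struct */

        /* slept for more than 25% of the sampling period ... */
        if (s->sleep_time * 4 > period_us)
            rte_power_freq_down(lcore_id);
        /* ... or handled fewer packets per polling loop than expected */
        else if (s->nb_iteration_looped > 0 &&
                s->nb_rx_processed / s->nb_iteration_looped < EXPECTED_PKTS_PER_LOOP)
            rte_power_freq_down(lcore_id);

        /* reset the counters and re-arm the timer for the next period */
        s->sleep_time = 0;
        s->nb_rx_processed = 0;
        s->nb_iteration_looped = 0;
        rte_timer_reset(&power_timers[lcore_id],
            rte_get_hpet_hz()/TIMER_NUMBER_PER_SECOND,
            SINGLE, lcore_id, power_timer_cb, NULL);
    }
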

Monitoring Loads of Rx Queues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In general, the polling nature of the DPDK prevents the OS power management subsystem from knowing
whether the network load is actually heavy or light.
In this sample, the network load is sampled by monitoring the received and
available descriptors on the NIC Rx queues in recent polls.
Based on the number of returned and available Rx descriptors,
this example implements algorithms to generate frequency scaling hints and speculative sleep durations,
and uses them to control the P-state and C-state of processors via the power management library.
Frequency (P-state) control and sleep state (C-state) control work independently for each logical core,
and their combination contributes to a power efficient packet processing solution when serving light network loads.

The rte_eth_rx_burst() and rte_eth_rx_queue_count() functions are used in the endless packet processing loop
to return the number of received and available Rx descriptors.
Those numbers for a specific queue are passed to the P-state and C-state heuristic algorithms
to generate hints based on recent network load trends.

.. note::

    Only power control related code is shown.

.. code-block:: c

    static
    __rte_noreturn int main_loop(__rte_unused void *dummy)
    {
        // ...

        while (1) {
            // ...

            /**
             * Read packets from RX queues
             */

            lcore_scaleup_hint = FREQ_CURRENT;
            lcore_rx_idle_count = 0;

            for (i = 0; i < qconf->n_rx_queue; ++i) {
                rx_queue = &(qconf->rx_queue_list[i]);
                rx_queue->idle_hint = 0;
                portid = rx_queue->port_id;
                queueid = rx_queue->queue_id;

                nb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst, MAX_PKT_BURST);
                stats[lcore_id].nb_rx_processed += nb_rx;

                if (unlikely(nb_rx == 0)) {
                    /**
                     * no packet received from rx queue, try to
                     * sleep for a while, forcing the CPU to enter
                     * deeper C-states.
                     */

                    rx_queue->zero_rx_packet_count++;

                    if (rx_queue->zero_rx_packet_count <= MIN_ZERO_POLL_COUNT)
                        continue;

                    rx_queue->idle_hint = power_idle_heuristic(rx_queue->zero_rx_packet_count);
                    lcore_rx_idle_count++;
                } else {
                    rx_ring_length = rte_eth_rx_queue_count(portid, queueid);

                    rx_queue->zero_rx_packet_count = 0;

                    /**
                     * do not scale up frequency immediately as
                     * user to kernel space communication is costly,
                     * which might impact packet I/O for received
                     * packets.
                     */

                    rx_queue->freq_up_hint = power_freq_scaleup_heuristic(lcore_id, rx_ring_length);
                }

                /* Prefetch and forward packets */

                // ...
            }

            if (likely(lcore_rx_idle_count != qconf->n_rx_queue)) {
                for (i = 1, lcore_scaleup_hint = qconf->rx_queue_list[0].freq_up_hint;
                        i < qconf->n_rx_queue; ++i) {
                    rx_queue = &(qconf->rx_queue_list[i]);

                    if (rx_queue->freq_up_hint > lcore_scaleup_hint)
                        lcore_scaleup_hint = rx_queue->freq_up_hint;
                }

                if (lcore_scaleup_hint == FREQ_HIGHEST)
                    rte_power_freq_max(lcore_id);
                else if (lcore_scaleup_hint == FREQ_HIGHER)
                    rte_power_freq_up(lcore_id);
            } else {
                /**
                 * All Rx queues empty in recent consecutive polls,
                 * sleep in a conservative manner, meaning sleep as
                 * little as possible.
                 */

                for (i = 1, lcore_idle_hint = qconf->rx_queue_list[0].idle_hint;
                        i < qconf->n_rx_queue; ++i) {
                    rx_queue = &(qconf->rx_queue_list[i]);
                    if (rx_queue->idle_hint < lcore_idle_hint)
                        lcore_idle_hint = rx_queue->idle_hint;
                }

                if (lcore_idle_hint < SLEEP_GEAR1_THRESHOLD)
                    /**
                     * execute "pause" instructions to avoid a context
                     * switch for a short sleep.
                     */
                    rte_delay_us(lcore_idle_hint);
                else
                    /* long sleep, force the running thread to suspend */
                    usleep(lcore_idle_hint);

                stats[lcore_id].sleep_time += lcore_idle_hint;
            }
        }
    }

P-State Heuristic Algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The power_freq_scaleup_heuristic() function is responsible for generating a frequency hint for the specified logical core
according to the number of available descriptors returned by rte_eth_rx_queue_count().
On every poll for new packets, the number of available descriptors on an Rx queue is evaluated,
and the algorithm used for frequency hinting is as follows:

* If the number of available descriptors exceeds 96, the maximum frequency is hinted.

* If the number of available descriptors exceeds 64, a trend counter is incremented by 100.

* If the number of available descriptors exceeds 32, the trend counter is incremented by 1.

* When the trend counter reaches 10000, the frequency hint is changed to the next higher frequency.

.. note::

    The assumption is that the Rx queue size is 128; the thresholds specified above
    must be adjusted accordingly based on the actual hardware Rx queue size,
    which is configured via the rte_eth_rx_queue_setup() function.
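
Putting the above rules together, power_freq_scaleup_heuristic() might be sketched as shown below. The threshold macro names, the per-lcore ``trend`` array and the ``enum freq_scale_hint_t`` return type are illustrative assumptions for this sketch rather than the sample's exact definitions.

.. code-block:: c

    #define FREQ_GEAR1_RX_PACKET_THRESHOLD  32
    #define FREQ_GEAR2_RX_PACKET_THRESHOLD  64
    #define FREQ_GEAR3_RX_PACKET_THRESHOLD  96
    #define FREQ_UP_TREND1_ACC              1
    #define FREQ_UP_TREND2_ACC              100
    #define FREQ_UP_THRESHOLD               10000

    static uint32_t trend[RTE_MAX_LCORE];   /* hypothetical per-lcore trend counters */

    static enum freq_scale_hint_t
    power_freq_scaleup_heuristic(unsigned int lcore_id, uint16_t rx_ring_length)
    {
        /* queue almost full: hint the highest frequency right away */
        if (rx_ring_length > FREQ_GEAR3_RX_PACKET_THRESHOLD)
            return FREQ_HIGHEST;

        /* moderately loaded queues accumulate a trend towards scaling up */
        if (rx_ring_length > FREQ_GEAR2_RX_PACKET_THRESHOLD)
            trend[lcore_id] += FREQ_UP_TREND2_ACC;
        else if (rx_ring_length > FREQ_GEAR1_RX_PACKET_THRESHOLD)
            trend[lcore_id] += FREQ_UP_TREND1_ACC;

        /* once the trend is strong enough, hint one frequency step up */
        if (trend[lcore_id] >= FREQ_UP_THRESHOLD) {
            trend[lcore_id] = 0;
            return FREQ_HIGHER;
        }

        return FREQ_CURRENT;
    }
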

In general, a thread needs to poll packets from multiple Rx queues.
Most likely, different queues have different loads, so they return different frequency hints.
The algorithm evaluates all the hints and then scales up frequency in an aggressive manner,
scaling up to the highest frequency as long as at least one Rx queue requires it.
In this way, we can minimize any negative performance impact.

On the other hand, frequency scaling down is controlled in the timer callback function.
Specifically, if the sleep time of a logical core indicates that it was sleeping for more than 25% of the sampling period,
or if the average number of packets per iteration is less than expected, the frequency is decreased by one step.

C-State Heuristic Algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Whenever recent rte_eth_rx_burst() polls return 5 consecutive zero packets,
an idle counter begins incrementing for each successive zero poll.
At the same time, the function power_idle_heuristic() is called to generate a speculative sleep duration
in order to force the logical core to enter a deeper sleeping C-state.
There is no way to control the C-state directly; the CPUIdle subsystem in the OS is intelligent enough
to select the C-state to enter based on the actual sleep period of the given logical core.
The algorithm has the following sleeping behavior, depending on the idle counter:

* If the idle count is less than 100, the counter value is used as a microsecond sleep value through rte_delay_us(),
  which executes pause instructions to avoid a costly context switch while still saving power.

* If the idle count is between 100 and 999, a fixed sleep interval of 100 μs is used.
  A 100 μs sleep interval allows the core to enter the C1 state while keeping a fast response time in case new traffic arrives.

* If the idle count reaches 1000 or more, a fixed sleep value of 1 ms is used until the next timer expiration.
  This allows the core to enter the C3/C6 states.

.. note::

    The thresholds specified above need to be adjusted for different Intel processors and traffic profiles.

If a thread polls multiple Rx queues and different queues return different sleep duration values,
the algorithm controls the sleep time in a conservative manner by sleeping for the least possible time
in order to avoid a potential performance impact.
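
Following the three ranges above, power_idle_heuristic() can be sketched as shown below. The constant names and values are illustrative assumptions for this sketch and should be tuned per platform, as the note above indicates.

.. code-block:: c

    #define SLEEP_GEAR1_THRESHOLD    100    /* below this: busy-wait with pause */
    #define SLEEP_GEAR2_THRESHOLD   1000    /* below this: 100 us sleep (C1) */
    #define SLEEP_GEAR1_INTERVAL     100    /* us */
    #define SLEEP_GEAR2_INTERVAL    1000    /* us, allows C3/C6 until the next timer */

    /* Return a speculative sleep duration in microseconds, based on how many
     * consecutive polls of the queue returned zero packets. */
    static uint32_t
    power_idle_heuristic(uint32_t zero_rx_packet_count)
    {
        if (zero_rx_packet_count < SLEEP_GEAR1_THRESHOLD)
            return zero_rx_packet_count;    /* short sleep via rte_delay_us() */
        else if (zero_rx_packet_count < SLEEP_GEAR2_THRESHOLD)
            return SLEEP_GEAR1_INTERVAL;    /* 100 us, core may enter C1 */
        else
            return SLEEP_GEAR2_INTERVAL;    /* 1 ms via usleep() */
    }
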

Empty Poll Mode
---------------

Additionally, there is a traffic-aware mode of operation called "Empty
Poll", where the number of empty polls is monitored to keep track
of how busy the application is. Empty poll mode can be enabled with the
command line option --empty-poll.

See the :doc:`Power Management<../prog_guide/power_man>` chapter in the DPDK Programmer's Guide for empty poll mode details.

.. code-block:: console

    ./<build_dir>/examples/dpdk-l3fwd-power -l xxx -n 4 -w 0000:xx:00.0 -w 0000:xx:00.1 -- -p 0x3 -P --config="(0,0,xx),(1,0,xx)" --empty-poll="0,0,0" -l 14 -m 9 -h 1

Where,

--empty-poll: Enable the empty poll mode instead of the original algorithm.

--empty-poll="training_flag, med_threshold, high_threshold"

* ``training_flag`` : optional, enables/disables training mode. The default value is 0.
  If the training_flag is set to 1 (true), the application starts in training mode and prints out the trained threshold values.
  If the training_flag is set to 0 (false), the application starts in normal mode and uses either the default thresholds or those supplied on the command line.
  The trained threshold values are specific to the user's system and may give a better power profile than the default threshold values.

* ``med_threshold`` : optional, sets the empty poll threshold of a modestly busy system state.
  If this is not supplied, the application applies the default value of 350000.

* ``high_threshold`` : optional, sets the empty poll threshold of a busy system state.
  If this is not supplied, the application applies the default value of 580000.

* -l : optional, sets the LOW power state frequency index

* -m : optional, sets the MED power state frequency index

* -h : optional, sets the HIGH power state frequency index

Empty Poll Mode Example Usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To initially obtain the ideal thresholds for the system, the training
mode should be run first. This is achieved by running the l3fwd-power
app with the training flag set to "1" and the other parameters set to 0.

.. code-block:: console

    ./<build_dir>/examples/dpdk-l3fwd-power -l 1-3 -- -p 0x0f --config="(0,0,2),(0,1,3)" --empty-poll "1,0,0" -P

This will run the training algorithm for a number of seconds on each core (cores 2
and 3), and then print out the recommended threshold values for those
cores. The thresholds should be very similar for each core.

.. code-block:: console

    POWER: Bring up the Timer
    POWER: set the power freq to MED
    POWER: Low threshold is 230277
    POWER: MED threshold is 335071
    POWER: HIGH threshold is 523769
    POWER: Training is Complete for 2
    POWER: set the power freq to MED
    POWER: Low threshold is 236814
    POWER: MED threshold is 344567
    POWER: HIGH threshold is 538580
    POWER: Training is Complete for 3

Once the values have been measured for a particular system, the app can
then be started without the training mode so traffic can start immediately.

.. code-block:: console

    ./<build_dir>/examples/dpdk-l3fwd-power -l 1-3 -- -p 0x0f --config="(0,0,2),(0,1,3)" --empty-poll "0,340000,540000" -P

Telemetry Mode
--------------

The telemetry mode support for ``l3fwd-power`` is a standalone mode. In this mode,
``l3fwd-power`` does simple L3 forwarding along with counting empty polls and full polls
and calculating a busy percentage for each forwarding core. The aggregation of these
values for all cores is reported as application level telemetry to the metrics
library every 500 ms from the main core.

The busy percentage is calculated by recording the ``poll_count`` and,
when the count reaches a defined value, measuring the total number of
cycles it took. This is compared with minimum and maximum reference
cycles, and the busy rate is accordingly set to 0%, 50% or 100%.

.. code-block:: console

    ./<build_dir>/examples/dpdk-l3fwd-power --telemetry -l 1-3 -- -p 0x0f --config="(0,0,2),(0,1,3)" --telemetry

The new stats ``empty_poll``, ``full_poll`` and ``busy_percent`` can be viewed by running the script
``/usertools/dpdk-telemetry-client.py`` and selecting the menu option ``Send for global Metrics``.
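
The busy percentage classification described above could be expressed roughly as follows. This is only a sketch: the function name, the direction of the comparisons and the reference cycle parameters are assumptions for illustration, not the sample's actual implementation.

.. code-block:: c

    /* Sketch: classify how busy a forwarding core is, given the number of
     * cycles a fixed number of polls took, against two reference values
     * calibrated for a mostly idle core and a fully loaded core. */
    static uint16_t
    classify_busy_percent(uint64_t total_cycles, uint64_t min_ref_cycles,
            uint64_t max_ref_cycles)
    {
        if (total_cycles <= min_ref_cycles)
            return 0;       /* polls completed quickly: mostly empty polls */
        else if (total_cycles < max_ref_cycles)
            return 50;      /* in between: partially loaded */
        else
            return 100;     /* polls took at least the busy reference time */
    }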