xref: /minix3/minix/lib/libsys/cpuavg.c (revision 1f3ef2b206fbc798d1aa54783fe2c426e1276fb5)
/*
 * Routines to maintain a decaying average of per-process CPU utilization, in a
 * way that results in numbers that are (hopefully) similar to those produced
 * by NetBSD.  Once a second, NetBSD performs the following basic computation
 * for each process:
 *
 *   avg = ccpu * avg + (1 - ccpu) * (run / hz)
 *
 * In this formula, 'avg' is the running average, 'hz' is the number of clock
 * ticks per second, 'run' is the number of ticks during which the process was
 * found running in the last second, and 'ccpu' is a decay value chosen such
 * that only 5% of the original average remains after 60 seconds: e**(-1/20).
 *
 * Here, the idea is that we update the average lazily, namely, only when the
 * process is running when the kernel processes a clock tick - no matter how
 * long it had not been running before that.  The result is that at any given
 * time, the average may be out of date.  For that reason, this code is shared
 * between the kernel and the MIB service: the latter occasionally obtains the
 * raw kernel process table, for example because a user runs ps(1), and it then
 * needs to bring the values up to date.  The kernel could do that itself just
 * before copying out the process table, but the MIB service is equally capable
 * of doing it post-copy - while also being preemptible during the computation.
 * There is more to be said about this, but the summary is that it is not clear
 * which of the two options is better in practice.  We simply chose this one.
 *
 * In addition, we deliberately delay updating the actual average by one
 * second, keeping the last second's number of process run ticks in a separate
 * variable 'last'.  This allows us to produce an estimate of short-term
 * activity of the process as well.  We use this to generate a "CPU estimate"
 * value.  BSD generates such a value for the purpose of scheduling, but we
 * have no actual use for that, and generating the value just for userland is
 * a bit too costly in our case.  Our inaccurate value should suffice for most
 * practical purposes though (e.g., comparisons between active processes).
 *
 * Overall, our approach should produce the same values as NetBSD, with at
 * worst the same overhead as NetBSD, and much less overhead on average.  Even
 * in the worst case, our computation is spread out across each second, rather
 * than all done at once.  In terms of implementation, since this code runs in
 * the kernel, we make use of small tables of precomputed values, and we try
 * to save on computation as much as possible.  We copy much of the NetBSD
 * approach of avoiding divisions using FSCALE.
 *
 * Another difference from NetBSD is that our kernel does not actually call
 * this function from its clock interrupt handler, but rather when a process
 * has spent a number of CPU cycles that adds up to one clock tick worth of
 * execution time.  The result is better accuracy (no process can escape
 * accounting by yielding just before each clock interrupt), but due to the
 * inaccuracy of converting CPU cycles to clock ticks, a process may end up
 * using more than 'hz' clock ticks per second.  We could correct for this;
 * however, it has not yet been shown to be a problem.
 *
 * Zooming out a bit again, the current average is fairly accurate but not
 * very precise.  There are two reasons for this.  First, the accounting is in
 * clock tick fractions, which means that a per-second CPU usage below 1/hz
 * cannot be measured.  Second, the NetBSD FSCALE and ccpu values are such that
 * (FSCALE - ccpu) equals 100, which means that a per-second CPU usage below
 * 1/100 cannot be measured either.  Both issues can be resolved by switching
 * to a CPU cycle based accounting approach, which requires 64-bit arithmetic
 * and a MINIX3-specific FSCALE value.  For now, this is just not worth doing.
 *
 * Finally, it should be noted that in terms of overall operating system
 * functionality, the CPU averages feature is entirely optional; as of writing,
 * the produced values are only used in the output of utilities such as ps(1).
 * If computing the CPU average becomes too burdensome in terms of either
 * performance or maintenance, it can simply be removed again.
 *
 * Original author: David van Moolenbroek <david@minix3.org>
 */

#include "sysutil.h"
#include <sys/param.h>

#define CCPUTAB_SHIFT	3				/* 2**3 == 8 */
#define CCPUTAB_MASK	((1 << CCPUTAB_SHIFT) - 1)

#define F(n) ((uint32_t)((n) * FSCALE))

/* e**(-1/20*n)*FSCALE for n=1..(2**CCPUTAB_SHIFT-1) */
static const uint32_t ccpu_low[CCPUTAB_MASK] = {
	F(0.951229424501), F(0.904837418036), F(0.860707976425),
	F(0.818730753078), F(0.778800783071), F(0.740818220682),
	F(0.704688089719)
};
#define ccpu		(ccpu_low[0])

/* e**(-1/20*8*n)*FSCALE for n=1.. until the value is zero (for FSCALE=2048) */
static const uint32_t ccpu_high[] = {
	F(0.670320046036), F(0.449328964117), F(0.301194211912),
	F(0.201896517995), F(0.135335283237), F(0.090717953289),
	F(0.060810062625), F(0.040762203978), F(0.027323722447),
	F(0.018315638889), F(0.012277339903), F(0.008229747049),
	F(0.005516564421), F(0.003697863716), F(0.002478752177),
	F(0.001661557273), F(0.001113775148), F(0.000746585808),
	F(0.000500451433)
};

/*
 * Initialize the per-process CPU average structure.  To be called when the
 * process is started, that is, as part of a fork call.
 */
void
cpuavg_init(struct cpuavg * ca)
{

	ca->ca_base = 0;
	ca->ca_run = 0;
	ca->ca_last = 0;
	ca->ca_avg = 0;
}

/*
 * Return a new CPU usage average value, resulting from decaying the old value
 * by the given number of seconds, using the formula (avg * ccpu**secs).
 * We use two-level lookup tables to limit the computational expense to two
 * multiplications while keeping the tables themselves relatively small.
 */
static uint32_t
cpuavg_decay(uint32_t avg, uint32_t secs)
{
	unsigned int slot;

	/*
	 * The ccpu_high table is set up such that with the default FSCALE, the
	 * values of any array entries beyond the end would be zero.  That is,
	 * the average would be decayed to a value that, if represented in
	 * FSCALE units, would be zero.  Thus, if it has been at least that
	 * long since we last updated the average, we can just reset it to
	 * zero.
	 */
	if (secs > (__arraycount(ccpu_high) << CCPUTAB_SHIFT))
		return 0;

	if (secs > CCPUTAB_MASK) {
		slot = (secs >> CCPUTAB_SHIFT) - 1;

		avg = (ccpu_high[slot] * avg) >> FSHIFT;	/* decay #3 */

		secs &= CCPUTAB_MASK;
	}

	if (secs > 0)
		avg = (ccpu_low[secs - 1] * avg) >> FSHIFT;	/* decay #4 */

	return avg;
}

/*
 * Update the CPU average value, either because the kernel is processing a
 * clock tick, or because the MIB service updates obtained averages.  We
 * perform the decay in at most four computation steps (shown as "decay #n"),
 * and thus, this algorithm is O(1).
 */
static void
cpuavg_update(struct cpuavg * ca, clock_t now, clock_t hz)
{
	clock_t delta;
	uint32_t secs;

	delta = now - ca->ca_base;

	/*
	 * If at least a second elapsed since we last updated the average, we
	 * must do so now.  If not, we need not do anything for now.
	 */
	if (delta < hz)
		return;

	/*
	 * Decay the average by one second, and merge in the run fraction of
	 * the previous second, as though that second only just ended - even
	 * though the real time is at least one whole second ahead.  By doing
	 * so, we roll the statistics time forward by one virtual second.
	 */
	ca->ca_avg = (ccpu * ca->ca_avg) >> FSHIFT;		/* decay #1 */
	ca->ca_avg += (FSCALE - ccpu) * (ca->ca_last / hz) >> FSHIFT;

	ca->ca_last = ca->ca_run;	/* move 'run' into 'last' */
	ca->ca_run = 0;

	ca->ca_base += hz;		/* move forward by a second */
	delta -= hz;

	if (delta < hz)
		return;

	/*
	 * At least a whole second more elapsed since the start of the recorded
	 * second.  That means that our current 'run' counter (now moved into
	 * 'last') is also outdated, and we need to merge it in as well, before
	 * performing the next decay steps.
	 */
	ca->ca_avg = (ccpu * ca->ca_avg) >> FSHIFT;		/* decay #2 */
	ca->ca_avg += (FSCALE - ccpu) * (ca->ca_last / hz) >> FSHIFT;

	ca->ca_last = 0;		/* 'run' is already zero now */

	ca->ca_base += hz;		/* move forward by a second */
	delta -= hz;

	if (delta < hz)
		return;

	/*
	 * If additional whole seconds elapsed since the start of the last
	 * second slot, roll forward in time by that many whole seconds, thus
	 * decaying the value properly while maintaining alignment to whole-
	 * second slots.  The decay takes up to another two computation steps.
	 */
	secs = delta / hz;

	ca->ca_avg = cpuavg_decay(ca->ca_avg, secs);

	ca->ca_base += secs * hz;	/* move forward by whole seconds */
}

/*
 * The clock ticked, and this last clock tick is accounted to the process for
 * which the CPU average statistics are stored in 'ca'.  Update the statistics
 * accordingly, decaying the average as necessary.  The current system uptime
 * must be given as 'now', and the number of clock ticks per second must be
 * given as 'hz'.
 */
void
cpuavg_increment(struct cpuavg * ca, clock_t now, clock_t hz)
{

	if (ca->ca_base == 0)
		ca->ca_base = now;
	else
		cpuavg_update(ca, now, hz);

	/*
	 * Register that the process was running at this clock tick.  We could
	 * avoid one division above by precomputing (FSCALE/hz), but this is
	 * typically not a clean division and would therefore result in (more)
	 * loss of accuracy.
	 */
	ca->ca_run += FSCALE;
}

/*
 * Retrieve the decaying CPU utilization average (as return value), the number
 * of CPU run ticks in the current second so far (stored in 'cpticks'), and an
 * opaque CPU utilization estimate (stored in 'estcpu').  The caller must
 * provide the CPU average structure ('ca_orig'), which will not be modified,
 * as well as the current uptime in clock ticks ('now') and the number of clock
 * ticks per second ('hz').
 */
uint32_t
cpuavg_getstats(const struct cpuavg * ca_orig, uint32_t * cpticks,
	uint32_t * estcpu, clock_t now, clock_t hz)
{
	struct cpuavg ca;

	ca = *ca_orig;

	/* Update the average as necessary. */
	cpuavg_update(&ca, now, hz);

	/* Merge the last second into the average. */
	ca.ca_avg = (ccpu * ca.ca_avg) >> FSHIFT;
	ca.ca_avg += (FSCALE - ccpu) * (ca.ca_last / hz) >> FSHIFT;

	*cpticks = ca.ca_run >> FSHIFT;

	/*
	 * NetBSD's estcpu value determines a scheduling queue, and decays to
	 * 10% in 5*(the current load average) seconds.  Our 'estcpu' simply
	 * reports the process's percentage of CPU usage in the last second,
	 * thus yielding a value in the range 0..100 with a decay of 100% after
	 * one second.  This should be good enough for most practical purposes.
	 */
	*estcpu = (ca.ca_last / hz * 100) >> FSHIFT;

	return ca.ca_avg;
}

/*
 * Return the ccpu decay value, in FSCALE units.
 */
uint32_t
cpuavg_getccpu(void)
{

	return ccpu;
}