Lines Matching +full:average +full:- +full:on
23 .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
32 When 4.2BSD was first installed on several large timesharing systems
35 of 4.1BSD (based on load averages observed under a normal timesharing load).
39 Later work concentrated on the operation of the kernel itself.
51 on one machine, a VAX-11/780 with eight megabytes of memory.\**
57 traffic is usually between users on the same machine and ranges from
58 person-to-person telephone messages to per-organization distribution
69 showed \fIsendmail\fP as the top cpu user on the system.
83 pseudo-terminal handler in addition to the local hardware terminal
85 character involves four processes on two machines.
94 period on one of our general timesharing machines.
97 This test was run on several occasions over a three month period.
113 Micro-operation benchmarks
117 programs was constructed and run on a VAX-11/750 with 4.5 megabytes
118 of physical memory and two disks on a MASSBUS controller.
127 were run twice with the results shown as the average of
159 pipeself4 send 10,000 4-byte messages to yourself
160 pipeself512 send 10,000 512-byte messages to yourself
161 pipediscard4 send 10,000 4-byte messages to child who discards
162 pipediscard512 send 10,000 512-byte messages to child who discards
163 pipeback4 exchange 10,000 4-byte messages with child
164 pipeback512 exchange 10,000 512-byte messages with child
165 forks0 fork-exit-wait 1,000 times
166 forks1k sbrk(1024), fault page, fork-exit-wait 1,000 times
167 forks100k sbrk(102400), fault pages, fork-exit-wait 1,000 times
168 vforks0 vfork-exit-wait 1,000 times
169 vforks1k sbrk(1024), fault page, vfork-exit-wait 1,000 times
170 vforks100k sbrk(102400), fault pages, vfork-exit-wait 1,000 times
171 execs0null fork-exec ``null job''-exit-wait 1,000 times
173 execs1knull sbrk(1024), fault page, fork-exec ``null job''-exit-wait 1,000 times
175 execs100knull sbrk(102400), fault pages, fork-exec ``null job''-exit-wait 1,000 times
176 vexecs0null vfork-exec ``null job''-exit-wait 1,000 times
177 vexecs1knull sbrk(1024), fault page, vfork-exec ``null job''-exit-wait 1,000 times
178 vexecs100knull sbrk(102400), fault pages, vfork-exec ``null job''-exit-wait 1,000 times
179 execs0big fork-exec ``big job''-exit-wait 1,000 times
180 execs1kbig sbrk(1024), fault page, fork-exec ``big job''-exit-wait 1,000 times
181 execs100kbig sbrk(102400), fault pages, fork-exec ``big job''-exit-wait 1,000 times
182 vexecs0big vfork-exec ``big job''-exit-wait 1,000 times
183 vexecs1kbig sbrk(1024), fault pages, vfork-exec ``big job''-exit-wait 1,000 times
184 vexecs100kbig sbrk(102400), fault pages, vfork-exec ``big job''-exit-wait 1,000 times
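As an illustration only (the original benchmark sources are not reproduced here), a fork-exit-wait loop and a pipe self-send loop along the lines of forks0 and pipeself4 above might look roughly like the C sketch below. The structure and counts are taken from the one-line descriptions in the table; everything else is an assumption, and timing would be taken externally, e.g. with time(1).

	#include <sys/types.h>
	#include <sys/wait.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	/* forks0: fork-exit-wait 1,000 times */
	static void
	forks0(void)
	{
		int i, status;
		pid_t pid;

		for (i = 0; i < 1000; i++) {
			pid = fork();
			if (pid < 0) {
				perror("fork");
				exit(1);
			}
			if (pid == 0)
				_exit(0);		/* child exits immediately */
			while (wait(&status) != pid)
				;			/* parent waits for that child */
		}
	}

	/* pipeself4: send 10,000 4-byte messages to yourself */
	static void
	pipeself4(void)
	{
		int fds[2], i;
		char buf[4];

		memset(buf, 0, sizeof(buf));
		if (pipe(fds) < 0) {
			perror("pipe");
			exit(1);
		}
		for (i = 0; i < 10000; i++) {
			if (write(fds[1], buf, sizeof(buf)) != sizeof(buf) ||
			    read(fds[0], buf, sizeof(buf)) != sizeof(buf)) {
				perror("pipe I/O");
				exit(1);
			}
		}
	}

	int
	main(void)
	{
		forks0();
		pipeself4();
		return 0;
	}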
192 are scaled to reflect their being run on a VAX-11/750, they
195 \** We assume that a VAX-11/750 runs at 60% of the speed of a VAX-11/780
256 4.2BSD because of their implementation on top of the interprocess
303 have the effect of increasing the average number of components
322 costly string comparisons are only done on names that are the
325 The net effect of the changes is that the average time to
329 or 11% of all cycles executed on the machine.
386 and empties each silo on each clock interrupt.
387 This allows high input rates without the cost of per-character interrupts
389 However, as character input rates on most machines are usually
412 in \fIwait\fP when searching for \fB\s-2ZOMBIE\s+2\fP and
413 \fB\s-2STOPPED\s+2\fP processes;
443 that as much as 20% of the time spent in the kernel on a loaded
444 system (a VAX-11/780) can be spent in \fIschedcpu\fP and, on average,
445 5-10% of the kernel time is spent in \fIschedcpu\fP.
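The cost arises because the once-per-second scan touches every slot in the process table, aging each process's recent-cpu estimate and recomputing its priority whether or not the process has run. The user-level sketch below shows that style of scan; the table size, field names, decay filter, and priority formula here are illustrative assumptions, not the kernel's actual code.

	#define NPROC	200		/* assumed process table size */
	#define PUSER	50		/* assumed base user priority */

	struct proc {
		int	p_cpu;		/* recent cpu usage estimate */
		int	p_nice;		/* nice value */
		int	p_pri;		/* scheduling priority */
	};

	static struct proc proc[NPROC];

	/*
	 * Illustrative once-per-second scan in the style of schedcpu():
	 * decay each process's cpu estimate and derive a new priority.
	 * The work done is proportional to the size of the process table.
	 */
	static void
	schedcpu_like_scan(double loadav)
	{
		struct proc *p;
		double decay = (2.0 * loadav) / (2.0 * loadav + 1.0);

		for (p = proc; p < &proc[NPROC]; p++) {
			p->p_cpu = (int)(decay * p->p_cpu) + p->p_nice;
			p->p_pri = PUSER + p->p_cpu / 4 + 2 * p->p_nice;
		}
	}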
453 to gather statistics on the performance of the buffer cache.
455 cache and the read-ahead policies.
458 that large amounts of read-ahead might be performed without
464 a peak mid-afternoon period on a VAX-11/780 with four megabytes
468 (the actual number of buffers depends on the size mix of
494 During the test period the load average ranged from 2 to 13
495 with an average of 5.
504 over the period ranged from 2 to 6 megabytes with an average
509 On average, 250 requests to read disk blocks were initiated
518 On average, an 85% cache hit rate was observed for read requests.
520 In addition, 5 read-ahead requests were made each second
522 Despite the policies to rapidly reuse read-ahead buffers
523 that remain unclaimed, more than 90% of the read-ahead
528 of the buffer cache may be reduced significantly on memory-poor
543 concern. Results from a profiled kernel show an average
547 This figure can vary significantly depending on
548 the network hardware used, the average message
550 layer. On one machine we profiled over a 17 hour
558 The performance of TCP over slower long-haul networks
560 The first problem was a bug that prevented round-trip timing measurements
563 that was well-tuned for Ethernet, but was poorly chosen for
597 seqpage-v as above, but first make \fIvadvise\fP\|(2) call
599 randpage-v as above, but first make \fIvadvise\fP call
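The -v variants differ from seqpage and randpage only in first calling vadvise(2) to declare the expected access pattern. A minimal sketch of the sequential case follows, assuming the historical <sys/vadvise.h> interface with VA_SEQL for sequential and VA_ANOM for anomalous (random) access; the region and page sizes are placeholders, and the call compiles only on a system that still provides vadvise.

	#include <sys/vadvise.h>	/* historical 4BSD header */
	#include <stdlib.h>

	#define PAGESIZE	1024	/* placeholder page size */
	#define NPAGES		4096	/* placeholder region size */

	int
	main(void)
	{
		char *region;
		long i;

		region = malloc((long)NPAGES * PAGESIZE);
		if (region == NULL)
			return 1;

		/* seqpage-v: declare sequential access, then touch each page in order */
		vadvise(VA_SEQL);
		for (i = 0; i < NPAGES; i++)
			region[i * PAGESIZE] = 1;

		/*
		 * randpage-v would instead call vadvise(VA_ANOM) and touch
		 * pages in a pseudo-random order.
		 */
		return 0;
	}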
632 To establish a common ground on which to compare the paging
633 routines of each system, we check instead the average page fault
653 seqpage-v 579 812 3.8 5.3 216.0 237.7 8394 8351
655 randpage-v 572 562 6.1 7.3 62.2 77.5 8126 9852
682 randpage-v 8126 9852 765 786