Where tradeoffs were necessary, implementations were
chosen to optimize running time at the expense of memory.
Decreases in the running time of the system may be unnoticeable
without careful measurement;
kernel profiling showed that a large fraction
of the time in the system was spent in the
pathname translation routine, \fInamei\fP,
translating path names to inodes\u\s-21\s0\d\**.
.FS
\** \u\s-21\s0\d Inode is an abbreviation for ``Index node''.
.FE
Changing directories invalidates the cache, as
does modifying the directory being searched.
For a process that creates $N$ files,
search time decreases from $O(N sup 2)$ to $O(N)$.
The cost of the cache is about 20 lines of code
and a small amount of per-process state.
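The idea can be sketched in C; the structure and routine names below are illustrative, not those of the actual system.
.DS
#include <sys/types.h>

/*
 * Per-process cache of the offset at which the last pathname
 * search ended (illustrative sketch).
 */
struct dircache {
	ino_t	dc_dirino;	/* inode number of the cached directory */
	off_t	dc_offset;	/* offset at which the last search ended */
};

/*
 * Begin a linear directory scan.  If the process is still working
 * in the same directory, resume from the cached offset; otherwise
 * restart from the beginning.
 */
off_t
dirscan_start(struct dircache *dc, ino_t dirino)
{
	if (dc->dc_dirino == dirino)
		return (dc->dc_offset);		/* cache hit */
	dc->dc_dirino = dirino;			/* new directory */
	dc->dc_offset = 0;
	return (0);
}
.DE
A process creating files one after another in the same directory thus begins each search where the previous one stopped, rather than rescanning the entire directory from its start.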
To measure the effect of the per-process
cache we ran ``ls \-l''
on a large directory.
Before the per-process cache this command
used 22.3 seconds of system time.
After adding the cache the program used the same amount
of user time, but the system time dropped to 3.3 seconds.
The results showed that the time in \fInamei\fP
dropped by only 2.6 ms/call and
still accounted for 36% of the system call time
and 18% of the time in the kernel.
This amounted to a drop in system time from 57% to about 55%.
.TS
center box;
l l l.
part	time	% of kernel
_
self	11.0 ms/call	9.2%
child	10.6 ms/call	8.9%
_
total	21.6 ms/call	18.1%
.TE
.ce
Table 9. Call times for \fInamei\fP with per-process cache.
The modest improvement in \fInamei\fP
was caused by a low cache hit ratio.
Although the cache was 90% effective when hit,
it was usable on only a fraction of the names being translated.
In addition,
although the amount of time spent in \fInamei\fP itself decreased,
more time was spent in the routines that it called.
\fInamei\fP was therefore augmented
with a cache of recent name translations\**.
.FS
\** The cache is keyed on a name and the
inode and device number of the directory that contains it.
.FE
\fInamei\fP first looks in its cache of recent translations
for the needed name;
only if the name is not found does it scan the directory.
The system already maintained a cache of recently accessed inodes,
so the initial name cache
maintained a simple name-inode association that was used to
locate the in-core inode for a pathname component without
scanning its directory.
We considered implementing the cache by tagging each inode
with its most recently translated name,
but rejected that approach because many inodes remain in
the inode table for a long period of time, but are never looked
up by name again.
By keeping a separate table of names, the cache can
be sized independently of the inode table;
systems with limited memory
can reduce the size of the cache (or even eliminate it)
without affecting the inode layout.
Another issue to be considered is how the name cache should
hold references to the inodes that it maps.
However, if the name cache holds hard references,
an inode with cached names can never be reused until its
cache entries are purged.
The name cache therefore holds soft references:
an inode is identified
by a \fIcapability\fP \- a 32-bit number
guaranteed to be unique\u\s-22\s0\d \**.
.FS
\** \u\s-22\s0\d When all the numbers have been exhausted, all outstanding
capabilities are purged and numbering starts over.
.FE
When an entry is made in the name cache,
the capability of its inode is copied to the name cache entry.
When a name cache hit occurs,
the capability of the name cache entry is compared
with the capability of the inode.
If the capabilities do not match, the name cache entry is invalid.
Since the name cache holds only soft references,
all the cached translations for an inode can be invalidated
simply by issuing the inode a new capability, without
searching through the entire cache;
stale entries are rejected by the capability comparison.
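The mechanism is small enough to sketch in full; the structure and routine names below are illustrative assumptions, not the kernel's own.
.DS
#include <sys/types.h>

typedef u_long capability_t;		/* 32-bit unique identifier */

struct cinode {				/* stand-in for the in-core inode */
	capability_t	 i_cap;		/* capability currently issued */
};

struct ncache {				/* a name cache entry */
	ino_t		 nc_dirino;	/* parent directory's inode number */
	char		 nc_name[14];	/* cached component name */
	struct cinode	*nc_ip;		/* soft reference to the inode */
	capability_t	 nc_cap;	/* capability copied at entry time */
};

static capability_t nextcap = 1;	/* next capability to issue */

/*
 * Invalidate every cached translation for an inode in constant
 * time by issuing the inode a new capability; no search of the
 * cache is required.
 */
void
cache_purge(struct cinode *ip)
{
	ip->i_cap = nextcap++;
}

/*
 * A hit is valid only while the entry's saved capability still
 * matches the inode's current capability.
 */
int
cache_valid(struct ncache *nc)
{
	return (nc->nc_cap == nc->nc_ip->i_cap);
}
.DE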
The cost of the name cache is about 200 lines of code
and 48 bytes per cache entry.
Depending on the size of the system,
the cache is allocated at boot time,
using 10-50 kilobytes of physical memory.
The name cache is resident in memory at all times.
After adding the system wide name cache we reran ``ls \-l''
on the same directory.
The user time remained the same;
however, the system time rose slightly to 3.7 seconds.
This was not surprising, as \fInamei\fP
now had to maintain the cache,
but was unable to make any use of it:
each name in the directory was translated exactly once.
Profiling the system with this benchmark
showed a 13 ms/call decrease in \fInamei\fP, with
\fInamei\fP accounting for only 26% of the system call time
and 13% of the time in the kernel.
System time dropped from 55% to about 49%.
.TS
center box;
l l l.
part	time	% of kernel
_
self	4.2 ms/call	6.2%
child	4.4 ms/call	6.6%
_
total	8.6 ms/call	12.8%
.TE
.ce
Table 10. Call times for \fInamei\fP with both caches.
On our general time sharing systems we find that during the twelve-hour
working day the name cache has a hit rate of 70%-80%,
while the directory offset cache gets a hit rate of only 5%-15%.
Consequently,
the percentage of system time devoted to name translation has
dropped substantially.
While the system wide cache reduces both the amount of time in
\fInamei\fP itself and in the routines that it calls,
the percentage of
time spent in \fInamei\fP itself increases even though the
actual time per call decreases.
This is because less total time is being spent in the kernel,
hence a smaller absolute time becomes a larger total percentage.
When a terminal multiplexer receives characters,
it can either generate an interrupt each time a character is received,
or collect the characters in a silo that the system periodically drains.
At high data rates the silo is now used,
avoiding the overhead of
per-character interrupts.
The software-interrupt level portion of the clock routine is only
invoked when there is work for it to do.
The kernel process table is now multi-threaded to allow selective searching
of active and zombie processes;
a third list threads the unused slots.
Free slots can be obtained in constant time by taking one
from the front of the free list.
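The constant-time allocation is the standard free-list technique, sketched below with illustrative names.
.DS
struct proc {
	struct proc	*p_nxt;		/* next entry on this thread */
	/* ... */
};

static struct proc *freeproc;		/* head of the free-slot list */

/*
 * Allocate a process slot in constant time by popping the front
 * of the free list, instead of scanning the table for an empty
 * entry.  Returns NULL when the table is full.
 */
struct proc *
proc_alloc(void)
{
	struct proc *p;

	if ((p = freeproc) != NULL)
		freeproc = p->p_nxt;
	return (p);
}

/*
 * Release a slot by pushing it back on the free list.
 */
void
proc_free(struct proc *p)
{
	p->p_nxt = freeproc;
	freeproc = p;
}
.DE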
The system previously scanned the entire process table each time it created
a new process.
Periodically, the scheduler also recomputed process priorities:
processes that had run for their entire time slice had their
priority lowered, while
processes that had not used their time slice, or that had
been sleeping, had their priority raised.
On busy systems this recomputation in
the scheduler represented nearly 20% of the system time.
As most of the clock-based events need not be done at high priority,
they are now handled at a lower-priority software-interrupt level,
leaving only
time-critical events such as cpu scheduling and timeout processing
at high priority.
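The split can be pictured as two cooperating routines; the sketch below is illustrative (a pending flag stands in for a real software-interrupt request), not the actual clock code.
.DS
static volatile int softclock_pending;	/* software interrupt requested */

/*
 * High-priority hardware clock tick: do only the time-critical
 * work (cpu scheduling and timeout bookkeeping), then post a
 * software interrupt for everything else.
 */
void
hardclock(void)
{
	/* ... charge the running process, check for expired timeouts ... */
	softclock_pending = 1;		/* stand-in for a software interrupt */
}

/*
 * Low-priority software clock: handle the remaining clock-based
 * events (statistics, profiling) when nothing more urgent is
 * runnable.
 */
void
softclock(void)
{
	if (!softclock_pending)
		return;
	softclock_pending = 0;
	/* ... deferred, non-critical clock work ... */
}
.DE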
To minimize the number of full-sized blocks that must be broken
up to satisfy requests for fragments,
the allocation policy was refined.
The file system still uses a best fit strategy the first time
a fragment is allocated.
However, the first time that the file system is forced to copy a growing
fragment, it now places the fragment at the beginning of a full-sized block,
so that continued growth can proceed without further copying.
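A toy version of the best-fit scan illustrates the policy; the free-map representation here is invented for the example.
.DS
#define NFRAG	64		/* fragments covered by this toy free map */

/*
 * free_run[i] holds the length of the free run starting at
 * fragment i, or 0 if i does not begin a free run.
 */
static int free_run[NFRAG];

/*
 * Best fit: pick the smallest free run that still holds the
 * request, so full-sized blocks are broken up only as a last
 * resort.  Returns the starting fragment, or -1 if none fits.
 */
int
bestfit_frags(int nfrags)
{
	int i, best = -1;

	for (i = 0; i < NFRAG; i++)
		if (free_run[i] >= nfrags &&
		    (best < 0 || free_run[i] < free_run[best]))
			best = i;
	return (best);
}
.DE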
For large directories, this scan is time consuming.
Because directories are never reduced in size,
a directory that is once over-filled will increase the cost
of file creation even after the over-filling is corrected.
The maximum size of a segment is now selected according to the destination
and interface in use; non-local connections use a more conservative size
for long-haul networks.
On multiply-homed hosts, the local address bound by TCP now always corresponds
to the interface that will be used in transmitting data for the connection.
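The segment-size choice reduces to a simple test; the constants and the notion of ``local'' below are illustrative assumptions, not the actual TCP code.
.DS
#define TCP_HDRS	40	/* IP and TCP headers without options */
#define MSS_LONGHAUL	512	/* assumed conservative long-haul size */

/*
 * Use the full interface MTU for destinations on a directly
 * attached network; otherwise fall back to a conservative size.
 */
int
choose_mss(int if_mtu, int dst_is_local)
{
	if (dst_is_local)
		return (if_mtu - TCP_HDRS);
	return (MSS_LONGHAUL);
}
.DE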
Routing has been modified to include a one element cache of the last
route computed,
so that successive lookups for the same destination avoid a full
search of the routing tables.
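A one-element cache needs only a comparison on the fast path; the sketch below uses invented types and a stub in place of the full table search.
.DS
struct rt_ent {
	unsigned long	r_dst;		/* destination address */
	int		r_if;		/* resolved outgoing interface */
};

static struct rt_ent last_rt;		/* the one-element cache */

static int
rt_lookup_full(unsigned long dst)	/* stand-in for the table search */
{
	return ((int)(dst % 4));	/* dummy resolution */
}

/*
 * Check the cache first; on a miss do a full lookup and remember
 * the answer, so successive packets to one destination pay for a
 * single search.
 */
int
rt_lookup(unsigned long dst)
{
	if (dst != 0 && dst == last_rt.r_dst)
		return (last_rt.r_if);
	last_rt.r_dst = dst;
	last_rt.r_if = rt_lookup_full(dst);
	return (last_rt.r_if);
}
.DE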
When \fIexec\fP-ing a new process, the kernel creates the new
image's argument list by copying the arguments and environment
out of the old address space and into the new one.
These two copy operations were done one byte at a time, but
are now done a string at a time.
This optimization reduced the time to process
an argument list substantially;
the average time to do an \fIexec\fP call decreased by 25%.
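The difference between the two copy disciplines is easy to see in miniature; ordinary memory copies below stand in for the kernel's user-space copy primitives.
.DS
#include <string.h>

/*
 * Old style: a separate operation for every byte of every
 * argument, paying the per-operation overhead each time.
 */
char *
copyarg_bytewise(char *dst, const char *src)
{
	do {
		*dst++ = *src;
	} while (*src++ != '\0');
	return (dst);
}

/*
 * New style: move each NUL-terminated string in one call,
 * amortizing the overhead over the whole argument.
 */
char *
copyarg_string(char *dst, const char *src)
{
	size_t len = strlen(src) + 1;	/* include the terminating NUL */

	memcpy(dst, src, len);
	return (dst + len);
}
.DE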
At some later time the process would again
be selected to run.
The event would cause the scheduler to be invoked a second time.
In a similar vein, the routine that saves a process's
context in preparation for a non-local goto used to save many more
registers than are actually needed.
.DS
if (scanc(map[i], 1, 47, i - 63))
.DE
.DS
subl3	$64,_i,\-(sp)	subl3	$64,_i,\-(sp)	subl3	$64,_i,r5
pushl	\-56(fp)[r3]	pushl	\-56(fp)[r3]	movl	\-56(fp)[r3],r2
.DE
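For reference, the semantics of \fIscanc\fP can be rendered in portable C as below; this shows what the primitive computes, not the assembly-language implementation the kernel uses.
.DS
/*
 * Scan up to size bytes starting at cp, stopping at the first
 * byte whose table entry has any of the mask bits set; return
 * the number of bytes left unscanned (0 if none stopped the scan).
 */
int
scanc(unsigned int size, const unsigned char *cp,
    const unsigned char table[], int mask)
{
	const unsigned char *end = cp + size;

	while (cp < end && (table[*cp] & mask) == 0)
		cp++;
	return ((int)(end - cp));
}
.DE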
The running time of the low level buffer management primitives was
reduced substantially.
One heavily used program was found to spend most of its
running time doing single character read system calls;
buffering its input cut its
running time by a factor of five!
Thus, while most of our time has been spent tuning the kernel,
comparable gains are available from tuning heavily used programs.
The old database routines could manage only a single open
database at a time. These routines were generalized to handle
several databases simultaneously;
among other uses, password file lookups now go through them,
which reduced the running
time of many important programs such as the mail subsystem,
the C-shell (in doing tilde expansion), \fIls \-l\fP, etc.
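The generalized interface identifies each database by a handle, so several can be consulted at once; the fragment below shows the style of use, with example database names.
.DS
#include <fcntl.h>
#include <ndbm.h>
#include <stdio.h>

int
main(void)
{
	DBM *pw, *al;
	datum key, val;

	/* Two databases open simultaneously (example paths). */
	pw = dbm_open("/etc/passwd", O_RDONLY, 0);
	al = dbm_open("/usr/lib/aliases", O_RDONLY, 0);
	if (pw == NULL || al == NULL)
		return (1);

	key.dptr = "root";
	key.dsize = 4;
	val = dbm_fetch(pw, key);	/* goes to the right database */
	if (val.dptr != NULL)
		printf("found %d-byte entry\n", (int)val.dsize);

	dbm_close(al);
	dbm_close(pw);
	return (0);
}
.DE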
The standard I/O library was changed to allocate
appropriately-sized buffers based on the underlying file system's block size.
Some C library routines and commonly-used programs use low-level
system calls directly, and were changed to do their I/O in larger units.
The inordinate expense of sending single-byte packets through
stream connections was also reduced.
Much of the remaining activity traced to the
buffer cache and explained, to a large
extent, the behavior observed.
A program that repeatedly looked up hosts on behalf of
a user was changed to cache host table lookups, resulting in a similar
improvement.
Several programs were changed to take
advantage of the name cache maintained by the system.
The translation is
efficient because it is all done with in-memory tables.
Rather than having many servers started at boot time, a single server,
\fIinetd\fP, listens on all service ports and starts the appropriate
server when a connection request arrives.
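The structure of such a server is simple: one listening socket per service, a select loop, and a fork-and-exec per connection. The sketch below is illustrative (error handling omitted, daemon paths invented), not the actual \fIinetd\fP source.
.DS
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>

struct svc {
	int	 s_port;		/* TCP port to listen on */
	char	*s_path;		/* server program to run */
	int	 s_fd;			/* listening socket */
};

static struct svc svcs[] = {		/* hypothetical daemons */
	{ 7,	"/usr/libexec/echod" },
	{ 13,	"/usr/libexec/daytimed" },
};
#define NSVC	(int)(sizeof(svcs) / sizeof(svcs[0]))

int
main(void)
{
	struct sockaddr_in sin;
	fd_set rd;
	int i, conn, maxfd = -1;

	for (i = 0; i < NSVC; i++) {	/* one listener per service */
		svcs[i].s_fd = socket(AF_INET, SOCK_STREAM, 0);
		memset(&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(svcs[i].s_port);
		bind(svcs[i].s_fd, (struct sockaddr *)&sin, sizeof(sin));
		listen(svcs[i].s_fd, 5);
		if (svcs[i].s_fd > maxfd)
			maxfd = svcs[i].s_fd;
	}
	for (;;) {
		FD_ZERO(&rd);
		for (i = 0; i < NSVC; i++)
			FD_SET(svcs[i].s_fd, &rd);
		if (select(maxfd + 1, &rd, NULL, NULL, NULL) <= 0)
			continue;
		for (i = 0; i < NSVC; i++) {
			if (!FD_ISSET(svcs[i].s_fd, &rd))
				continue;
			conn = accept(svcs[i].s_fd, NULL, NULL);
			if (fork() == 0) {	/* child becomes the server */
				dup2(conn, 0);
				dup2(conn, 1);
				dup2(conn, 2);
				execl(svcs[i].s_path, svcs[i].s_path,
				    (char *)0);
				_exit(1);
			}
			close(conn);		/* parent keeps listening */
		}
	}
}
.DE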
This reduces the amount of network traffic and the time required
to complete the operation.
Routines that accounted for significant amounts
of time were optimized,
notably with inline expansions of the ubiquitous byte-swapping functions.
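Expanding a byte swap inline removes a function call from every field conversion; the difference, sketched for a little-endian machine:
.DS
/*
 * Function-call version: one call per 16-bit field converted.
 */
unsigned short
ntohs_fn(unsigned short x)
{
	return ((unsigned short)(((x >> 8) & 0xff) | ((x & 0xff) << 8)));
}

/*
 * Inline version: the same computation expanded in place, so the
 * cost is a couple of instructions and no call overhead.  (On a
 * big-endian machine it would expand to nothing at all.)
 */
#define	NTOHS(x) \
	((unsigned short)((((x) >> 8) & 0xff) | (((x) & 0xff) << 8)))
.DE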
The C Run-time Library
In particular the running time of the string routines can be
reduced significantly with more careful coding.
Certain library routines that did file input in one-character reads
were converted to use the buffered standard I/O routines.
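The payoff comes from replacing a system call per character with a few in-line instructions; compare the two loops below.
.DS
#include <stdio.h>
#include <unistd.h>

/* Old style: one read() system call per input character. */
long
count_syscall(int fd)
{
	char c;
	long n = 0;

	while (read(fd, &c, 1) == 1)
		n++;
	return (n);
}

/*
 * New style: getc() consumes a buffer filled by large reads, so
 * the per-character cost is a few instructions, not a system call.
 */
long
count_stdio(FILE *fp)
{
	long n = 0;

	while (getc(fp) != EOF)
		n++;
	return (n);
}
.DE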
The C-shell was converted to run on 4.2BSD by
emulating the old signal facilities.
While this provided a functioning C-shell,
the emulation was inefficient.
The C-shell has been modified to use the new signal
interface directly.
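The style of the new interface is shown below; the fragment assumes the 4.2BSD \fIsigvec\fP/\fIsigblock\fP calls and is a simplified illustration, not the shell's actual code.
.DS
#include <signal.h>

void
onintr(int sig)
{
	/* ... note the interrupt for the main loop ... */
}

int
main(void)
{
	struct sigvec sv;
	int omask;

	/* Install a handler; SIGQUIT is masked while it runs. */
	sv.sv_handler = onintr;
	sv.sv_mask = sigmask(SIGQUIT);
	sv.sv_onstack = 0;
	sigvec(SIGINT, &sv, (struct sigvec *)0);

	/* Critical section: hold SIGINT, then restore the old mask. */
	omask = sigblock(sigmask(SIGINT));
	/* ... manipulate job state ... */
	sigsetmask(omask);
	return (0);
}
.DE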