Motivations for a New Virtual Memory System
The virtual memory system distributed with Berkeley UNIX has served
However, the relentless advance of technology has begun to render it
and attempts to define the new design considerations that should
be taken into account in a new virtual memory design.
Implementation of 4.3BSD virtual memory
have used the same virtual memory design.
if there is insufficient swap space left to honor the allocation,
Thus, the limit to available virtual memory is established by the
amount of swap space allocated to the system.
Memory pages are used in a sort of shell game to contain the
As the supply of free pages begins to run out, dirty pages are
pushed to the previously allocated swap space so that they can be reused
If a previously accessed page that has been pushed to swap is once
Design assumptions for 4.3BSD virtual memory
The design criteria for the current virtual memory implementation
At that time the cost of memory was about a thousand times greater per
These machines had far more disk storage than they had memory
and given the cost tradeoff between memory and disk storage,
wanted to make maximal use of the memory even at the cost of
The primary motivation for virtual memory was to allow the
system to run individual programs whose address space exceeded
the memory capacity of the machine.
Thus the virtual memory capability allowed programs to be run that
was the ability to allow the sum of the memory requirements of
all active processes to exceed the amount of physical memory on
was to have the sum of active virtual memory be one and a half
to two times the physical memory on the machine.
At the time that the virtual memory system was designed,
directly connected to the machine.
Similarly, all the disk space devoted to swap space was also
were roughly equivalent to the speed and latency with which swap
Given the high cost of memory, there was little incentive to have
In the ten years since the current virtual memory system was designed,
microprocessor has become powerful enough to allow users to have their
time-sharing model to an environment in which users have a
This workstation is linked through a network to a centralized
The workstations tend to have a large quantity of memory,
Because users do not want to be bothered with backing up their disks,
In this same period of time the cost per byte of memory has dropped
Thus the cost per byte of memory compared to the cost per byte of disk is
is beginning to change dramatically.
As the amount of physical memory on machines increases and the number of
memory than physical memory to that of having a surplus of memory that can
Because many machines will have more physical memory than they do swap
it is no longer reasonable to limit the maximum virtual memory
Consequently, the new design will allow the maximum virtual memory
to be the sum of physical memory plus swap space.
For machines with no swap space, the maximum virtual memory will
be governed by the amount of physical memory.
rather than to a locally-attached disk.
One use of the surplus memory would be to
the file server that the data was up to date.
while the free memory is maintained in a separate pool.
The new design should have only a single memory pool so that any
free memory can be used to cache recently accessed files.
Another portion of the memory will be used to keep track of the contents
of the blocks on any locally-attached swap space, analogously
to the way that memory pages are handled.
Thus inactive swap blocks can also be used to cache less-recently-used
This design allows users to simply allocate their entire local disk
to swap space, thus allowing the system to decide which files should
be cached to maximize its usefulness.
It also ensures that all modified files are migrated back to the
file server in a timely fashion, thus eliminating the need to dump
This section outlines our new virtual memory interface as it is
The virtual memory interface is designed to support both large,
sparse address spaces and small, densely-used address spaces.
size of the physical memory on the machine,
while ``large'' may extend up to the maximum addressability of the machine.
a shared read-only fill-on-demand region with its text,
a private fill-on-demand region for its initialized data,
a private zero-fill-on-demand region for its uninitialized data and heap,
and a private zero-fill-on-demand region for its stack.
In addition to these regions, a process may allocate new ones.
When it is privately mapped, changes to the contents of the region
are not reflected to any other process that maps the same region.
Regions may be mapped read-only or read-write.
a shared read-only region for the text, and a private read-write
It may map some memory hardware on the machine, such as a frame buffer.
If the region is mapped read-write and shared, changes to the
If the region is read-write but private,
changes to the region are copied to a private page that is not
visible to other processes mapping the file,
and these modified pages are not reflected back to the file.
The final type of region is ``anonymous memory''.
it is zero-fill-on-demand and its contents are abandoned
written to a disk unless memory is short and part of the region
must be paged to a swap area.
is much higher than simply zeroing memory.
If several processes wish to share a region,
nor be willing to pay the overhead associated with them.
For anonymous memory they must use some other rendezvous point.
Our current interface allows processes to associate a
descriptor with a region, which they may then pass to other
processes that wish to attach to the region.
Shared memory as high-speed interprocess communication
The primary use envisioned for shared memory is to
require a system call to hand off a set
by the recipient process to receive the data.
to avoid a memory-to-memory copy, the overhead of doing the system
Shared memory, by contrast, allows processes to share data at any
However, to maintain all but the simplest of data structures,
the processes must serialize their modifications to shared
data structures if they are to avoid corrupting them.
Thus processes are once again limited by the need to do two
system calls per transaction, one to lock the semaphore, the
second to release it.
The net effect is that the shared memory model provides little if
requires a large decrease in the number of system calls needed to
Thus if one can find a way to avoid doing a system call in the case
one would expect to be able to dramatically reduce the number
of system calls needed to achieve serialization.
In the typical case, a process executes an atomic test-and-set instruction
to acquire a lock, finds it free, and thus is able to get it.
need to do a system call to wait for the lock to clear.
it necessary for the process to do a system call to cause the other
process(es) to be awakened.
Some computers require access to special hardware to implement
atomic interprocessor test-and-set.
all have to be done with system calls;
though they would tend to run slowly.
System V allows entire sets of semaphores to be set concurrently.
If any of the locks are unavailable, the process is put to sleep
that serializes access to the set of semaphores being simulated.
inspected to see if the desired ones are available.
unavailable semaphores for which to wait.
In all the above examples, there appears to be a race condition.
and the time that it manages to call the system to sleep on the
on the physical byte of memory that is being used for the semaphore.
The system call to put a process to sleep takes a pointer
the kernel, the kernel can repeat the test-and-set.
did the test-and-set and eventually got into the sleep request system call,
it to sleep.
Thus the only problem to solve is how the kernel interlocks between testing
a semaphore and going to sleep;