.HTML "Plan 9 from Bell Labs"
.TL
Plan 9 from Bell Labs
.AU
Rob Pike
Dave Presotto
Sean Dorward
Bob Flandrena
Ken Thompson
Howard Trickey
Phil Winterbottom
.AI
.MH
USA
.SH
Motivation
.PP
.FS
Appeared in a slightly different form in
.I
Computing Systems,
.R
Vol 8 #3, Summer 1995, pp. 221-254.
.FE
By the mid 1980's, the trend in computing was
away from large centralized time-shared computers towards
networks of smaller, personal machines,
typically UNIX `workstations'.
People had grown weary of overloaded, bureaucratic timesharing machines
and were eager to move to small, self-maintained systems, even if that
meant a net loss in computing power.
As microcomputers became faster, even that loss was recovered, and
this style of computing remains popular today.
.PP
In the rush to personal workstations, though, some of their weaknesses
were overlooked.
First, the operating system they run, UNIX, is itself an old timesharing system and
has had trouble adapting to ideas
born after it.  Graphics and networking were added to UNIX well into
its lifetime and remain poorly integrated and difficult to administer.
More important, the early focus on having private machines
made it difficult for networks of machines to serve as seamlessly as the old
monolithic timesharing systems.
Timesharing centralized the management
and amortization of costs and resources;
personal computing fractured, democratized, and ultimately amplified
administrative problems.
The choice of
an old timesharing operating system to run those personal machines
made it difficult to bind things together smoothly.
.PP
Plan 9 began in the late 1980's as an attempt to have it both
ways: to build a system that was centrally administered and cost-effective
using cheap modern microcomputers as its computing elements.
The idea was to build a time-sharing system out of workstations, but in a novel way.
Different computers would handle
different tasks: small, cheap machines in people's offices would serve
as terminals providing access to large, central, shared resources such as computing
servers and file servers.  For the central machines, the coming wave of
shared-memory multiprocessors seemed obvious candidates.
The philosophy is much like that of the Cambridge
Distributed System [NeHe82].
The early catch phrase was to build a UNIX out of a lot of little systems,
not a system out of a lot of little UNIXes.
.PP
The problems with UNIX were too deep to fix, but some of its ideas could be
brought along.  The best was its use of the file system to coordinate
naming of and access to resources, even those, such as devices, not traditionally
treated as files.
For Plan 9, we adopted this idea by designing a network-level protocol, called 9P,
to enable machines to access files on remote systems.
Above this, we built a naming
system that lets people and their computing agents build customized views
of the resources in the network.
This is where Plan 9 first began to look different:
a Plan 9 user builds a private computing environment and recreates it wherever
desired, rather than doing all computing on a private machine.
It soon became clear that this model was richer
than we had foreseen, and the ideas of per-process name spaces
and file-system-like resources were extended throughout
the system\(emto processes, graphics, even the network itself.
.PP
By 1989 the system had become solid enough
that some of us began using it as our exclusive computing environment.
This meant bringing along many of the services and applications we had
used on UNIX.  We used this opportunity to revisit many issues, not just
kernel-resident ones, that we felt UNIX addressed badly.
Plan 9 has new compilers,
languages,
libraries,
window systems,
and many new applications.
Many of the old tools were dropped, while those brought along have
been polished or rewritten.
.PP
Why be so all-encompassing?
The distinction between operating system, library, and application
is important to the operating system researcher but uninteresting to the
user.  What matters is clean functionality.
By building a complete new system,
we were able to solve problems where we thought they should be solved.
For example, there is no real `tty driver' in the kernel; that is the job of the window
system.
In the modern world, multi-vendor and multi-architecture computing
are essential, yet the usual compilers and tools assume the program is being
built to run locally; we needed to rethink these issues.
Most important, though, the test of a system is the computing
environment it provides.
Producing a more efficient way to run the old UNIX warhorses
is empty engineering;
we were more interested in whether the new ideas suggested by
the architecture of the underlying system encourage a more effective way of working.
Thus, although Plan 9 provides an emulation environment for
running POSIX commands, it is a backwater of the system.
The vast majority
of system software is developed in the `native' Plan 9 environment.
.PP
There are benefits to having an all-new system.
First, our laboratory has a history of building experimental peripheral boards.
To make it easy to write device drivers,
we want a system that is available in source form
(no longer guaranteed with UNIX, even
in the laboratory in which it was born).
Also, we want to redistribute our work, which means the software
must be locally produced.  For example, we could have used some vendors'
C compilers for our system, but even had we overcome the problems with
cross-compilation, we would have difficulty
redistributing the result.
.PP
This paper serves as an overview of the system.  It discusses the architecture
from the lowest building blocks to the computing environment seen by users.
It also serves as an introduction to the rest of the Plan 9 Programmer's Manual,
which it accompanies.  More detail about topics in this paper
can be found elsewhere in the manual.
.SH
Design
.PP
The view of the system is built upon three principles.
First, resources are named and accessed like files in a hierarchical file system.
Second, there is a standard protocol, called 9P, for accessing these
resources.
Third, the disjoint hierarchies provided by different services are
joined together into a single private hierarchical file name space.
The unusual properties of Plan 9 stem from the consistent, aggressive
application of these principles.
.PP
A large Plan 9 installation has a number of computers networked
together, each providing a particular class of service.
Shared multiprocessor servers provide computing cycles;
other large machines offer file storage.
These machines are located in an air-conditioned machine
room and are connected by high-performance networks.
Lower bandwidth networks such as Ethernet or ISDN connect these
servers to office- and home-resident workstations or PCs, called terminals
in Plan 9 terminology.
Figure 1 shows the arrangement.
.KF
.PS < network.pic
.IP
.ps -1
.in .25i
.ll -.25i
.ps -1
.vs -1
.I "Figure 1. Structure of a large Plan 9 installation.
CPU servers and file servers share fast local-area networks,
while terminals use slower wider-area networks such as Ethernet,
Datakit, or telephone lines to connect to them.
Gateway machines, which are just CPU servers connected to multiple
networks, allow machines on one network to see another.
.ps +1
.vs +1
.ll +.25i
.in 0
.ps
.sp
.KE
.PP
The modern style of computing offers each user a dedicated workstation or PC.
Plan 9's approach is different.
The various machines with screens, keyboards, and mice all provide
access to the resources of the network, so they are functionally equivalent,
in the manner of the terminals attached to old timesharing systems.
When someone uses the system, though,
the terminal is temporarily personalized by that user.
Instead of customizing the hardware, Plan 9 offers the ability to customize
one's view of the system provided by the software.
That customization is accomplished by giving local, personal names for the
publicly visible resources in the network.
Plan 9 provides the mechanism to assemble a personal view of the public
space with local names for globally accessible resources.
Since the most important resources of the network are files, the model
of that view is file-oriented.
.PP
The client's local name space provides a way to customize the user's
view of the network.  The services available in the network all export file
hierarchies.
Those important to the user are gathered together into
a custom name space; those of no immediate interest are ignored.
This is a different style of use from the idea of a `uniform global name space'.
In Plan 9, there are known names for services and uniform names for
files exported by those services,
but the view is entirely local.  As an analogy, consider the difference
between the phrase `my house' and the precise address of the speaker's
home.  The latter may be used by anyone but the former is easier to say and
makes sense when spoken.
It also changes meaning depending on who says it,
yet that does not cause confusion.
Similarly, in Plan 9 the name
.CW /dev/cons
always refers to the user's terminal and
.CW /bin/date
the correct version of the date
command to run,
but which files those names represent depends on circumstances such as the
architecture of the machine executing
.CW date .
Plan 9, then, has local name spaces that obey globally understood
conventions;
it is the conventions that guarantee sane behavior in the presence
of local names.
.PP
The 9P protocol is structured as a set of transactions that
send a request from a client to a (local or remote) server and return the result.
9P controls file systems, not just files:
it includes procedures to resolve file names and traverse the name
hierarchy of the file system provided by the server.
On the other hand,
the client's name space is held by the client system alone, not on or with the server,
a distinction from systems such as Sprite [OCDNW88].
Also, file access is at the level of bytes, not blocks, which distinguishes
9P from protocols like NFS and RFS.
A paper by Welch compares Sprite, NFS, and Plan 9's network file system structures [Welc94].
.PP
This approach was designed with traditional files in mind,
but can be extended
to many other resources.
Plan 9 services that export file hierarchies include I/O devices,
backup services,
the window system,
network interfaces,
and many others.
One example is the process file system,
.CW /proc ,
which provides a clean way
to examine and control running processes.
Precursor systems had a similar idea [Kill84], but Plan 9 pushes the
file metaphor much further [PPTTW93].
The file system model is well-understood, both by system builders and general users,
so services that present file-like interfaces are easy to build, easy to understand,
and easy to use.
Files come with agreed-upon rules for
protection,
naming,
and access both local and remote,
so services built this way are ready-made for a distributed system.
(This is a distinction from `object-oriented' models, where these issues
must be faced anew for every class of object.)
Examples in the sections that follow illustrate these ideas in action.
.SH
The Command-level View
.PP
Plan 9 is meant to be used from a machine with a screen running
the window system.
It has no notion of `teletype' in the UNIX sense.  The keyboard handling of
the bare system is rudimentary, but once the window system, 8½ [Pike91],
is running,
text can be edited with `cut and paste' operations from a pop-up menu,
copied between windows, and so on.
8½ permits editing text from the past, not just on the current input line.
The text-editing capabilities of 8½ are strong enough to displace
special features such as history in the shell,
paging and scrolling,
and mail editors.
8½ windows do not support cursor addressing and,
except for one terminal emulator to simplify connecting to traditional systems,
there is no cursor-addressing software in Plan 9.
.PP
Each window is created in a separate name space.
Adjustments made to the name space in a window do not affect other windows
or programs, making it safe to experiment with local modifications to the name
space, for example
to substitute files from the dump file system when debugging.
Once the debugging is done, the window can be deleted and all trace of the
experimental apparatus is gone.
Similar arguments apply to the private space each window has for environment
variables, notes (analogous to UNIX signals), etc.
.PP
Each window is created running an application, such as the shell, with
standard input and output connected to the editable text of the window.
Each window also has a private bitmap and multiplexed access to the
keyboard, mouse, and other graphical resources through files like
.CW /dev/mouse ,
.CW /dev/bitblt ,
and
.CW /dev/cons
(analogous to UNIX's
.CW /dev/tty ).
These files are provided by 8½, which is implemented as a file server.
Unlike X windows, where a new application typically creates a new window
to run in, an 8½ graphics application usually runs in the window where it starts.
It is possible and efficient for an application to create a new window, but
that is not the style of the system.
Again contrasting to X, in which a remote application makes a network
call to the X server to start running,
a remote 8½ application sees the
.CW mouse ,
.CW bitblt ,
and
.CW cons
files for the window as usual in
.CW /dev ;
it does not know whether the files are local.
It just reads and writes them to control the window;
the network connection is already there and multiplexed.
.PP
The intended style of use is to run interactive applications such as the window
system and text editor on the terminal and to run computation- or file-intensive
applications on remote servers.
Different windows may be running programs on different machines over
different networks, but by making the name space equivalent in all windows,
this is transparent: the same commands and resources are available, with the same names,
wherever the computation is performed.
.PP
The command set of Plan 9 is similar to that of UNIX.
The commands fall into several broad classes.  Some are new programs for
old jobs: programs like
.CW ls ,
.CW cat ,
and
.CW who
have familiar names and functions but are new, simpler implementations.
.CW Who ,
for example, is a shell script, while
.CW ps
is just 95 lines of C code.
Some commands are essentially the same as their UNIX ancestors:
.CW awk ,
.CW troff ,
and others have been converted to ANSI C and extended to handle
Unicode, but are still the familiar tools.
Some are entirely new programs for old niches: the shell
.CW rc ,
text editor
.CW sam ,
debugger
.CW acid ,
and others
displace the better-known UNIX tools with similar jobs.
Finally, about half the commands are new.
.PP
Compatibility was not a requirement for the system.
Where the old commands or notation seemed good enough, we
kept them.  When they didn't, we replaced them.
.SH
The File Server
.PP
A central file server stores permanent files and presents them to the network
as a file hierarchy exported using 9P.
The server is a stand-alone system, accessible only over the network,
designed to do its one job well.
It runs no user processes, only a fixed set of routines compiled into the
boot image.
Rather than a set of disks or separate file systems,
the main hierarchy exported by the server is a single
tree, representing files on many disks.
That hierarchy is
shared by many users over a wide area on a variety of networks.
Other file trees exported by
the server include
special-purpose systems such as temporary storage and, as explained
below, a backup service.
.PP
The file server has three levels of storage.
The central server in our installation has
about 100 megabytes of memory buffers,
27 gigabytes of magnetic disks,
and 350 gigabytes of
bulk storage in a write-once-read-many (WORM) jukebox.
The disk is a cache for the WORM and the memory is a cache for the disk;
each is much faster, and sees about an order of magnitude more traffic,
than the level it caches.
The addressable data in the file system can be larger than the size of the
magnetic disks, because they are only a cache;
our main file server has about 40 gigabytes of active storage.
.PP
The most unusual feature of the file server
comes from its use of a WORM device for
stable storage.
Every morning at 5 o'clock, a
.I dump
of the file system occurs automatically.
The file system is frozen and
all blocks modified since the last dump
are queued to be written to the WORM.
Once the blocks are queued,
service is restored and
the read-only root of the dumped
file system appears in a
hierarchy of all dumps ever taken, named by its date.
For example, the directory
.CW /n/dump/1995/0315
is the root directory of an image of the file system
as it appeared in the early morning of March 15, 1995.
It takes a few minutes to queue the blocks,
but the process to copy blocks to the WORM, which runs in the background, may take hours.
.PP
There are two ways the dump file system is used.
The first is by the users themselves, who can browse the
dump file system directly or attach pieces of
it to their name space.
For example, to track down a bug,
it is straightforward to try the compiler from three months ago
or to link a program with yesterday's library.
With daily snapshots of all files,
it is easy to find when a particular change was
made or what changes were made on a particular date.
People feel free to make large speculative changes
to files in the knowledge that they can be backed
out with a single
copy command.
There is no backup system as such;
instead, because the dump
is in the file name space,
backup problems can be solved with
standard tools
such as
.CW cp ,
.CW ls ,
.CW grep ,
and
.CW diff .
.PP
The other (very rare) use is complete system backup.
In the event of disaster,
the active file system can be initialized from any dump by clearing the
disk cache and setting the root of
the active file system to be a copy
of the dumped root.
Although easy to do, this is not to be taken lightly:
besides losing any change made after the date of the dump, this recovery method
results in a very slow system.
The cache must be reloaded from WORM, which is much
slower than magnetic disks.
The file system takes a few days to reload the working
set and regain its full performance.
.PP
Access permissions of files in the dump are the same
as they were when the dump was made.
Normal utilities have normal
permissions in the dump without any special arrangement.
The dump file system is read-only, though,
which means that files in the dump cannot be written regardless of their permission bits;
in fact, since directories are part of the read-only structure,
even the permissions cannot be changed.
.PP
Once a file is written to WORM, it cannot be removed,
so our users never see
``please clean up your files''
messages and there is no
.CW df
command.
We regard the WORM jukebox as an unlimited resource.
The only issue is how long it will take to fill.
Our WORM has served a community of about 50 users
for five years and has absorbed daily dumps, consuming a total of
65% of the storage in the jukebox.
In that time, the manufacturer has improved the technology,
doubling the capacity of the individual disks.
If we were to upgrade to the new media,
we would have more free space than in the original empty jukebox.
Technology has created storage faster than we can use it.
.SH
Unusual file servers
.PP
Plan 9 is characterized by a variety of servers that offer
a file-like interface to unusual services.
Many of these are implemented by user-level processes, although the distinction
is unimportant to their clients; whether a service is provided by the kernel,
a user process, or a remote server is irrelevant to the way it is used.
There are dozens of such servers; in this section we present three representative ones.
.PP
Perhaps the most remarkable file server in Plan 9 is 8½, the window system.
It is discussed at length elsewhere [Pike91], but deserves a brief explanation here.
8½ provides two interfaces: to the user seated at the terminal, it offers a traditional
style of interaction with multiple windows, each running an application, all controlled
by a mouse and keyboard.
To the client programs, the view is also fairly traditional:
programs running in a window see a set of files in
.CW /dev
with names like
.CW mouse ,
.CW screen ,
and
.CW cons .
Programs that want to print text to their window write to
.CW /dev/cons ;
to read the mouse, they read
.CW /dev/mouse .
In the Plan 9 style, bitmap graphics is implemented by providing a file
.CW /dev/bitblt
on which clients write encoded messages to execute graphical operations such as
.CW bitblt
(RasterOp).
What is unusual is how this is done:
8½ is a file server, serving the files in
.CW /dev
to the clients running in each window.
Although every window looks the same to its client,
each window has a distinct set of files in
.CW /dev .
8½ multiplexes its clients' access to the resources of the terminal
by serving multiple sets of files.  Each client is given a private name space
with a
.I different
set of files that behave the same as in all other windows.
There are many advantages to this structure.
One is that 8½ serves the same files it needs for its own implementation\(emit
multiplexes its own interface\(emso it may be run, recursively, as a client of itself.
Also, consider the implementation of
.CW /dev/tty
in UNIX, which requires special code in the kernel to redirect
.CW open
calls to the appropriate device.
Instead, in 8½ the equivalent service falls out
automatically: 8½ serves
.CW /dev/cons
as its basic function; there is nothing extra to do.
When a program wants to
read from the keyboard, it opens
.CW /dev/cons ,
but it is a private file, not a shared one with special properties.
Again, local name spaces make this possible; conventions about the consistency of
the files within them make it natural.
.PP
8½ has a unique feature made possible by its design.
Because it is implemented as a file server,
it has the power to postpone answering read requests for a particular window.
This behavior is toggled by a reserved key on the keyboard.
Toggling once suspends client reads from the window;
toggling again resumes normal reads, which absorb whatever text has been prepared,
one line at a time.
This allows the user to edit multi-line input text on the screen before the application sees it,
obviating the need to invoke a separate editor to prepare text such as mail
messages.
A related property is that reads are answered directly from the
data structure defining the text on the display: text may be edited until
its final newline makes the prepared line of text readable by the client.
Even then, until the line is read, the text the client will read can be changed.
For example, after typing
.P1
% make
rm *
.P2
to the shell, the user can backspace over the final newline at any time until
.CW make
finishes, holding off execution of the
.CW rm
command, or even point with the mouse
before the
.CW rm
and type another command to be executed first.
.PP
There is no
.CW ftp
command in Plan 9.  Instead, a user-level file server called
.CW ftpfs
dials the FTP site, logs in on behalf of the user, and uses the FTP protocol
to examine files in the remote directory.
To the local user, it offers a file hierarchy, attached to
.CW /n/ftp
in the local name space, mirroring the contents of the FTP site.
In other words, it translates the FTP protocol into 9P to offer Plan 9 access to FTP sites.
The implementation is tricky;
.CW ftpfs
must do some sophisticated caching for efficiency and
use heuristics to decode remote directory information.
But the result is worthwhile:
all the local file management tools such as
.CW cp ,
.CW grep ,
.CW diff ,
and of course
.CW ls
are available to FTP-served files exactly as if they were local files.
Other systems such as Jade and Prospero
have exploited the same opportunity [Rao81, Neu92],
but because of local name spaces and the simplicity of implementing 9P,
this approach
fits more naturally into Plan 9 than into other environments.
.PP
One server,
.CW exportfs ,
is a user process that takes a portion of its own name space and
makes it available to other processes by
translating 9P requests into system calls to the Plan 9 kernel.
The file hierarchy it exports may contain files from multiple servers.
.CW Exportfs
is usually run as a remote server
started by a local program,
either
.CW import
or
.CW cpu .
.CW Import
makes a network call to the remote machine, starts
.CW exportfs
there, and attaches its 9P connection to the local name space.  For example,
.P1
import helix /net
.P2
makes Helix's network interfaces visible in the local
.CW /net
directory.  Helix is a central server and
has many network interfaces, so this permits a machine with one network to
access any of Helix's networks.  After such an import, the local
machine may make calls on any of the networks connected to Helix.
Another example is
.P1
import helix /proc
.P2
which makes Helix's processes visible in the local
.CW /proc ,
permitting local debuggers to examine remote processes.
.PP
The
.CW cpu
command connects the local terminal to a remote
CPU server.
It works in the opposite direction to
.CW import :
after calling the server, it starts a
.I local
.CW exportfs
and mounts it in the name space of a process, typically a newly created shell, on the
server.
It then rearranges the name space
to make local device files (such as those served by
the terminal's window system) visible in the server's
.CW /dev
directory.
The effect of running a
.CW cpu
command is therefore to start a shell on a fast machine, one more tightly
coupled to the file server,
with a name space analogous
to the local one.
All local device files are visible remotely, so remote applications have full
access to local services such as bitmap graphics,
.CW /dev/cons ,
and so on.
This is not the same as
.CW rlogin ,
which does nothing to reproduce the local name space on the remote system,
nor is it the same as
file sharing with, say, NFS, which can achieve some name space equivalence but
not the combination of access to local hardware devices, remote files, and remote
CPU resources.
The
.CW cpu
command is a uniquely transparent mechanism.
For example, it is reasonable
to start a window system in a window running a
.CW cpu
command; all windows created there automatically start processes on the CPU server.
.SH
Configurability and administration
.PP
The uniform interconnection of components in Plan 9 makes it possible to configure
a Plan 9 installation many different ways.
A single laptop PC can function as a stand-alone Plan 9 system;
at the other extreme, our setup has central multiprocessor CPU
servers and file servers and scores of terminals ranging from small PCs to
high-end graphics workstations.
It is such large installations that best represent how Plan 9 operates.
.PP
The system software is portable and the same
operating system runs on all hardware.
Except for performance, the appearance of the system on, say,
an SGI workstation is the same
as on a laptop.
Since computing and file services are centralized, and terminals have
no permanent file storage, all terminals are functionally identical.
In this way, Plan 9 has one of the good properties of old timesharing systems, where
a user could sit in front of any machine and see the same system.  In the modern
workstation community, machines tend to be owned by people who customize them
by storing private information on local disk.
We reject this style of use,
although the system itself can be used this way.
In our group, we have a laboratory with many public-access machines\(ema terminal
room\(emand a user may sit down at any one of them and work.
.PP
Central file servers centralize not just the files, but also their administration
and maintenance.
In fact, one server is the main server, holding all system files; other servers provide
extra storage or are available for debugging and other special uses, but the system
software resides on one machine.
This means that each program
has a single copy of the binary for each architecture, so it is
trivial to install updates and bug fixes.
There is also a single user database; there is no need to synchronize distinct
.CW /etc/passwd
files.
On the other hand, depending on a single central server does limit the size of an installation.
.PP
Another example of the power of centralized file service
is the way Plan 9 administers network information.
On the central server there is a directory,
.CW /lib/ndb ,
that contains all the information necessary to administer the local Ethernet and
other networks.
All the machines use the same database to talk to the network; there is no
need to manage a distributed naming system or keep parallel files up to date.
To install a new machine on the local Ethernet, choose a
name and IP address and add these to a single file in
.CW /lib/ndb ;
all the machines in the installation will be able to talk to it immediately.
To start running, plug the machine into the network, turn it on, and use BOOTP
and TFTP to load the kernel.
All else is automatic.
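.PP
Entries in the database are lines of attribute/value pairs, with indented
continuation lines belonging to the same entry.  A sample entry in the
style of
.CW ndb
(the machine name and addresses below are made up for illustration):

```
sys=spindle dom=spindle.research.bell-labs.com
	ip=135.104.9.42 ether=0800690204f3
	bootf=/386/9pc
```

Adding one such entry is all the administration a new terminal requires.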
721.PP
722Finally,
723the automated dump file system frees all users from the need to maintain
724their systems, while providing easy access to backup files without
725tapes, special commands, or the involvement of support staff.
726It is difficult to overstate the improvement in lifestyle afforded by this service.
727.PP
728Plan 9 runs on a variety of hardware without
729constraining how to configure an installation.
730In our laboratory, we
731chose to use central servers because they amortize costs and administration.
732A sign that this is a good decision is that our cheap
733terminals remain comfortable places
734to work for about five years, much longer than workstations that must provide
735the complete computing environment.
736We do, however, upgrade the central machines, so
737the computation available from even old Plan 9 terminals improves with time.
738The money saved by avoiding regular upgrades of terminals
739is instead spent on the newest, fastest multiprocessor servers.
740We estimate this costs about half the money of networked workstations
741yet provides general access to more powerful machines.
742.SH
743C Programming
744.PP
745Plan 9 utilities are written in several languages.
746Some are scripts for the shell,
747.CW rc
748[Duff90]; a handful
749are written in a new C-like concurrent language called Alef [Wint95], described below.
750The great majority, though, are written in a dialect of ANSI C [ANSIC].
751Of these, most are entirely new programs, but some
752originate in pre-ANSI C code
753from our research UNIX system [UNIX85].
754These have been updated to ANSI C
755and reworked for portability and cleanliness.
756.PP
757The Plan 9 C dialect has some minor extensions,
758described elsewhere [Pike95], and a few major restrictions.
759The most important restriction is that the compiler demands that
760all function definitions have ANSI prototypes
761and all function calls appear in the scope of a prototyped declaration
762of the function.
763As a stylistic rule,
764the prototyped declaration is placed in a header file
765included by all files that call the function.
766Each system library has an associated header file, declaring all
767functions in that library.
768For example, the standard Plan 9 library is called
769.CW libc ,
770so all C source files include
771.CW <libc.h> .
772These rules guarantee that all functions
773are called with arguments having the expected types \(em something
774that was not true with pre-ANSI C programs.
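.PP
The effect of these rules can be sketched in a few lines; the function and its name are invented for illustration, and the prototyped declaration would normally live in the library's header file rather than beside the definition:

```c
#include <assert.h>

/* In Plan 9 this declaration would appear in a header file,
   included by every file that calls the function; the compiler
   rejects any call made outside the scope of such a prototype. */
int sum(int *a, int n);

int
sum(int *a, int n)
{
	int i, t;

	t = 0;
	for(i = 0; i < n; i++)
		t += a[i];
	return t;
}
```

Because every call site sees the prototype, passing the wrong argument types is a compile-time error rather than a silent bug.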
775.PP
776Another restriction is that the C compilers accept only a subset of the
777preprocessor directives required by ANSI.
778The main omission is
779.CW #if ,
780since we believe it
781is never necessary and often abused.
782Also, its effect is
783better achieved by other means.
784For instance, an
785.CW #if
786used to toggle a feature at compile time can be written
787as a regular
788.CW if
789statement, relying on compile-time constant folding and
790dead code elimination to discard object code.
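.PP
A minimal sketch of the technique (the names here are invented): the feature test becomes an ordinary compile-time constant, so the code inside the branch is still parsed and type-checked even when a compiler's dead-code elimination discards it from the object file.

```c
#include <assert.h>

/* Instead of #if VERBOSE ... #endif, a compile-time constant
   and a plain if statement; with VERBOSE set to 0 a compiler
   can fold the test and discard the call, but the guarded code
   must still compile. */
enum { VERBOSE = 0 };

static int logged;

static void
logmsg(const char *s)
{
	(void)s;
	logged++;
}

static int
compute(int x)
{
	if(VERBOSE)
		logmsg("computing");
	return x * 2;
}
```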
791.PP
792Conditional compilation, even with
793.CW #ifdef ,
794is used sparingly in Plan 9.
795The only architecture-dependent
796.CW #ifdefs
797in the system are in low-level routines in the graphics library.
798Instead, we avoid such dependencies or, when necessary, isolate
799them in separate source files or libraries.
800Besides making code hard to read,
801.CW #ifdefs
802make it impossible to know what source is compiled into the binary
803or whether source protected by them will compile or work properly.
804They make it harder to maintain software.
805.PP
806The standard Plan 9 library overlaps much of
807ANSI C and POSIX [POSIX], but diverges
808when appropriate to Plan 9's goals or implementation.
809When the semantics of a function
810change, we also change the name.
811For instance, instead of UNIX's
812.CW creat ,
813Plan 9 has a
814.CW create
815function that takes three arguments,
816the original two plus a third that, like the second
817argument of
818.CW open ,
819defines whether the returned file descriptor is to be opened for reading,
820writing, or both.
821This design was forced by the way 9P implements creation,
822but it also simplifies the common use of
823.CW create
824to initialize a temporary file.
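.PP
The idea can be approximated on POSIX systems (this is an analog for illustration, not Plan 9's implementation): the extra argument sets the open mode of the returned descriptor, so a freshly created temporary file can be written and then read back without a second open, unlike
.CW creat ,
which always returns a write-only descriptor.

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* POSIX sketch of Plan 9's three-argument create: omode chooses
   reading, writing, or both for the returned descriptor. */
static int
create9(const char *name, int omode, mode_t perm)
{
	return open(name, O_CREAT|O_TRUNC|omode, perm);
}
```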
825.PP
826Another departure from ANSI C is that Plan 9 uses a 16-bit character set
827called Unicode [ISO10646, Unicode].
828Although we stopped short of full internationalization,
829Plan 9 treats the representation
830of all major languages uniformly throughout all its
831software.
832To simplify the exchange of text between programs, the characters are packed into
833a byte stream by an encoding we designed, called UTF-8,
834which is now
835becoming accepted as a standard [FSSUTF].
836It has several attractive properties,
837including byte-order independence,
838backwards compatibility with ASCII,
839and ease of implementation.
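.PP
The encoding itself is small enough to sketch. For the 16-bit characters of the original design, each character occupies one to three bytes; ASCII characters encode as themselves, and every non-initial byte has the form
.CW 10xxxxxx ,
so the encoding is byte-order independent and easy to resynchronize.

```c
/* Sketch of UTF-8 encoding for a 16-bit character, after the
   scheme described in the text. Writes 1 to 3 bytes into s and
   returns the count. ASCII passes through unchanged. */
static int
runetochar(char *s, unsigned int c)
{
	if(c < 0x80){				/* 0xxxxxxx */
		s[0] = c;
		return 1;
	}
	if(c < 0x800){				/* 110xxxxx 10xxxxxx */
		s[0] = 0xC0 | (c>>6);
		s[1] = 0x80 | (c & 0x3F);
		return 2;
	}
	/* 1110xxxx 10xxxxxx 10xxxxxx */
	s[0] = 0xE0 | (c>>12);
	s[1] = 0x80 | ((c>>6) & 0x3F);
	s[2] = 0x80 | (c & 0x3F);
	return 3;
}
```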
840.PP
841There are many problems in adapting existing software to a large
842character set with an encoding that represents characters with
843a variable number of bytes.
844ANSI C addresses some of the issues but
845falls short of
846solving them all.
847It does not pick a character set encoding and does not
848define all the necessary I/O library routines.
849Furthermore, the functions it
850.I does
851define have engineering problems.
852Since the standard left too many problems unsolved,
853we decided to build our own interface.
854A separate paper has the details [Pike93].
855.PP
856A small class of Plan 9 programs do not follow the conventions
857discussed in this section.
858These are programs imported from and maintained by
859the UNIX community;
860.CW tex
861is a representative example.
862To avoid reconverting such programs every time a new version
863is released,
864we built a porting environment, called the ANSI C/POSIX Environment, or APE [Tric95].
865APE comprises separate include files, libraries, and commands,
866conforming as much as possible to the strict ANSI C and base-level
867POSIX specifications.
868To port network-based software such as X Windows, it was necessary to add
869some extensions to those
870specifications, such as the BSD networking functions.
871.SH
872Portability and Compilation
873.PP
874Plan 9 is portable across a variety of processor architectures.
875Within a single computing session, it is common to use
876several architectures: perhaps the window system running on
877an Intel processor connected to a MIPS-based CPU server with files
878resident on a SPARC system.
879For this heterogeneity to be transparent, there must be conventions
880about data interchange between programs; for software maintenance
881to be straightforward, there must be conventions about cross-architecture
882compilation.
883.PP
884To avoid byte order problems,
885data is communicated between programs as text whenever practical.
886Sometimes, though, the amount of data is high enough that a binary
887format is necessary;
888such data is communicated as a byte stream with a pre-defined encoding
889for multi-byte values.
890In the rare cases where a format
891is complex enough to be defined by a data structure,
892the structure is never
893communicated as a unit; instead, it is decomposed into
894individual fields, encoded as an ordered byte stream, and then
895reassembled by the recipient.
896These conventions affect data
897ranging from kernel or application program state information to object file
898intermediates generated by the compiler.
899.PP
900Programs, including the kernel, often present their data
901through a file system interface,
902an access mechanism that is inherently portable.
903For example, the system clock is represented by a decimal number in the file
904.CW /dev/time ;
905the
906.CW time
907library function (there is no
908.CW time
909system call) reads the file and converts it to binary.
910Similarly, instead of encoding the state of an application
911process in a series of flags and bits in private memory,
912the kernel
913presents a text string in the file named
914.CW status
915in the
916.CW /proc
917file system associated with each process.
918The Plan 9
919.CW ps
920command is trivial: it prints the contents of
921the desired status files after some minor reformatting; moreover, after
922.P1
923import helix /proc
924.P2
925a local
926.CW ps
927command reports on the status of Helix's processes.
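.PP
A sketch of the pattern (the function name and the path parameter are invented; on Plan 9 the library routine would read
.CW /dev/time ):
a client obtains binary data from a textual device file simply by reading the text and converting it.

```c
#include <stdio.h>
#include <stdlib.h>

/* Read a decimal number from a textual device file and convert
   it to binary, as the time library function does with /dev/time.
   Returns -1 if the file cannot be opened. */
static long
readnum(const char *path)
{
	char buf[32];
	FILE *f;

	f = fopen(path, "r");
	if(f == NULL)
		return -1;
	if(fgets(buf, sizeof buf, f) == NULL)
		buf[0] = '\0';
	fclose(f);
	return strtol(buf, NULL, 10);
}
```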
928.PP
929Each supported architecture has its own compilers and loader.
930The C and Alef compilers produce intermediate files that
931are portably encoded; the contents
are unique to the target architecture but the format of the
file is independent of the type of the compiling processor.
934When a compiler for a given architecture is compiled on
935another type of processor and then used to compile a program
936there,
937the intermediate produced on
938the new architecture is identical to the intermediate
939produced on the native processor.  From the compiler's
940point of view, every compilation is a cross-compilation.
941.PP
942Although each architecture's loader accepts only intermediate files produced
943by compilers for that architecture,
944such files could have been generated by a compiler executing
945on any type of processor.
946For instance, it is possible to run
947the MIPS compiler on a 486, then use the MIPS loader on a
948SPARC to produce a MIPS executable.
949.PP
950Since Plan 9 runs on a variety of architectures, even in a single installation,
951distinguishing the compilers and intermediate names
952simplifies multi-architecture
953development from a single source tree.
954The compilers and the loader for each architecture are
955uniquely named; there is no
956.CW cc
957command.
958The names are derived by concatenating a code letter
959associated with the target architecture with the name of the
960compiler or loader.  For example, the letter `8' is
961the code letter for Intel
962.I x 86
963processors; the C compiler is named
964.CW 8c ,
965the Alef compiler
966.CW 8al ,
967and the loader is called
968.CW 8l .
969Similarly, the compiler intermediate files are suffixed
970.CW .8 ,
971not
972.CW .o .
973.PP
974The Plan 9
975build program
976.CW mk ,
977a relative of
978.CW make ,
979reads the names of the current and target
980architectures from environment variables called
981.CW $cputype
982and
983.CW $objtype .
984By default the current processor is the target, but setting
985.CW $objtype
986to the name of another architecture
987before invoking
988.CW mk
989results in a cross-build:
990.P1
991% objtype=sparc mk
992.P2
993builds a program for the SPARC architecture regardless of the executing machine.
994The value of
995.CW $objtype
996selects a
997file of architecture-dependent variable definitions
998that configures the build to use the appropriate compilers and loader.
999Although simple-minded, this technique works well in practice:
1000all applications in Plan 9 are built from a single source tree
1001and it is possible to build the various architectures in parallel without conflict.
1002.SH
1003Parallel programming
1004.PP
1005Plan 9's support for parallel programming has two aspects.
1006First, the kernel provides
1007a simple process model and a few carefully designed system calls for
1008synchronization and sharing.
1009Second, a new parallel programming language called Alef
1010supports concurrent programming.
1011Although it is possible to write parallel
1012programs in C, Alef is the parallel language of choice.
1013.PP
1014There is a trend in new operating systems to implement two
1015classes of processes: normal UNIX-style processes and light-weight
1016kernel threads.
1017Instead, Plan 9 provides a single class of process but allows fine control of the
1018sharing of a process's resources such as memory and file descriptors.
1019A single class of process is a
1020feasible approach in Plan 9 because the kernel has an efficient system
1021call interface and cheap process creation and scheduling.
1022.PP
1023Parallel programs have three basic requirements:
1024management of resources shared between processes,
1025an interface to the scheduler,
1026and fine-grain process synchronization using spin locks.
1027On Plan 9,
1028new processes are created using the
1029.CW rfork
1030system call.
1031.CW Rfork
1032takes a single argument,
1033a bit vector that specifies
1034which of the parent process's resources should be shared,
1035copied, or created anew
1036in the child.
1037The resources controlled by
1038.CW rfork
1039include the name space,
1040the environment,
1041the file descriptor table,
1042memory segments,
1043and notes (Plan 9's analog of UNIX signals).
1044One of the bits controls whether the
1045.CW rfork
1046call will create a new process; if the bit is off, the resulting
1047modification to the resources occurs in the process making the call.
1048For example, a process calls
1049.CW rfork(RFNAMEG)
1050to disconnect its name space from its parent's.
1051Alef uses a
1052fine-grained fork in which all the resources, including
1053memory, are shared between parent
1054and child, analogous to creating a kernel thread in many systems.
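.PP
The flavor of the interface can be modeled in a few lines; the flag names echo Plan 9's but the bit values here are invented for illustration, and real
.CW rfork
of course acts on kernel state rather than on an integer.

```c
/* Illustrative model of rfork's bit vector: each bit selects one
   resource to share, copy, or create anew. Values are invented. */
enum {
	RFPROC  = 1<<0,	/* create a new process */
	RFNAMEG = 1<<1,	/* give a fresh name space */
	RFFDG   = 1<<2,	/* copy, rather than share, file descriptors */
	RFMEM   = 1<<3,	/* share data and bss segments */
};

/* With RFPROC clear, the modifications apply to the caller itself. */
static int
newproc(int flags)
{
	return (flags & RFPROC) != 0;
}

static int
sharesmem(int flags)
{
	return (flags & RFMEM) != 0;
}
```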
1055.PP
1056An indication that
1057.CW rfork
1058is the right model is the variety of ways it is used.
1059Other than the canonical use in the library routine
1060.CW fork ,
1061it is hard to find two calls to
1062.CW rfork
1063with the same bits set; programs
1064use it to create many different forms of sharing and resource allocation.
1065A system with just two types of processes\(emregular processes and threads\(emcould
1066not handle this variety.
1067.PP
1068There are two ways to share memory.
1069First, a flag to
1070.CW rfork
1071causes all the memory segments of the parent to be shared with the child
1072(except the stack, which is
1073forked copy-on-write regardless).
1074Alternatively, a new segment of memory may be
1075attached using the
1076.CW segattach
1077system call; such a segment
1078will always be shared between parent and child.
1079.PP
1080The
1081.CW rendezvous
1082system call provides a way for processes to synchronize.
1083Alef uses it to implement communication channels,
1084queuing locks,
1085multiple reader/writer locks, and
1086the sleep and wakeup mechanism.
1087.CW Rendezvous
1088takes two arguments, a tag and a value.
1089When a process calls
1090.CW rendezvous
1091with a tag it sleeps until another process
1092presents a matching tag.
1093When a pair of tags match, the values are exchanged
1094between the two processes and both
1095.CW rendezvous
1096calls return.
1097This primitive is sufficient to implement the full set of synchronization routines.
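.PP
The exchange can be modeled in a single thread (a sketch only: real
.CW rendezvous
blocks the first caller in the kernel, while here it simply parks its value and returns a would-block indication):

```c
/* Single-threaded model of rendezvous: the first arrival with a
   tag parks its value; the second arrival with a matching tag
   completes the exchange, each side receiving the other's value. */
enum { NTAG = 16, WOULDBLOCK = -1 };

static struct {
	int inuse;
	unsigned long tag;
	unsigned long val;
} parked[NTAG];

static long
rendezvous(unsigned long tag, unsigned long val, unsigned long *out)
{
	int i;

	for(i = 0; i < NTAG; i++)
		if(parked[i].inuse && parked[i].tag == tag){
			*out = parked[i].val;	/* take partner's value */
			parked[i].val = val;	/* partner would wake with ours */
			parked[i].inuse = 0;
			return 0;
		}
	for(i = 0; i < NTAG; i++)
		if(!parked[i].inuse){
			parked[i].inuse = 1;
			parked[i].tag = tag;
			parked[i].val = val;
			return WOULDBLOCK;	/* first arrival would sleep */
		}
	return WOULDBLOCK;
}
```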
1098.PP
1099Finally, spin locks are provided by
1100an architecture-dependent library at user level.
1101Most processors provide atomic test and set instructions that
1102can be used to implement locks.
1103A notable exception is the MIPS R3000, so the SGI
1104Power series multiprocessors have special lock hardware on the bus.
1105User processes gain access to the lock hardware
1106by mapping pages of hardware locks
1107into their address space using the
1108.CW segattach
1109system call.
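.PP
On processors with an atomic test-and-set, the user-level lock library reduces to a few lines; this sketch uses the C11
.CW <stdatomic.h>
primitives as a portable stand-in for the architecture-dependent instruction, and the name
.CW canlock
follows the Plan 9 convention for a non-blocking attempt.

```c
#include <stdatomic.h>

/* User-level spin lock built on an atomic test-and-set. */
typedef struct Lock {
	atomic_flag f;
} Lock;

static void
lock(Lock *l)
{
	while(atomic_flag_test_and_set(&l->f))
		;	/* spin until the holder clears the flag */
}

static int
canlock(Lock *l)
{
	return !atomic_flag_test_and_set(&l->f);	/* nonzero on success */
}

static void
unlock(Lock *l)
{
	atomic_flag_clear(&l->f);
}
```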
1110.PP
1111A Plan 9 process in a system call will block regardless of its `weight'.
This means that when a program wishes to read from a slow
device without blocking the entire computation, it must fork a satellite
process that does the I/O and delivers the answer to the main program
through shared memory or perhaps a pipe.
1117This sounds onerous but works easily and efficiently in practice; in fact,
1118most interactive Plan 9 applications, even relatively ordinary ones written
1119in C, such as
1120the text editor Sam [Pike87], run as multiprocess programs.
1121.PP
1122The kernel support for parallel programming in Plan 9 is a few hundred lines
1123of portable code; a handful of simple primitives enable the problems to be handled
1124cleanly at user level.
1125Although the primitives work fine from C,
1126they are particularly expressive from within Alef.
1127The creation
1128and management of slave I/O processes can be written in a few lines of Alef,
1129providing the foundation for a consistent means of multiplexing
1130data flows between arbitrary processes.
1131Moreover, implementing it in a language rather than in the kernel
1132ensures consistent semantics between all devices
1133and provides a more general multiplexing primitive.
1134Compare this to the UNIX
1135.CW select
1136system call:
1137.CW select
1138applies only to a restricted set of devices,
1139legislates a style of multiprogramming in the kernel,
1140does not extend across networks,
1141is difficult to implement, and is hard to use.
1142.PP
1143Another reason
1144parallel programming is important in Plan 9 is that
1145multi-threaded user-level file servers are the preferred way
1146to implement services.
1147Examples of such servers include the programming environment
1148Acme [Pike94],
1149the name space exporting tool
1150.CW exportfs
1151[PPTTW93],
1152the HTTP daemon,
1153and the network name servers
1154.CW cs
1155and
1156.CW dns
1157[PrWi93].
1158Complex applications such as Acme prove that
1159careful operating system support can reduce the difficulty of writing
1160multi-threaded applications without moving threading and
1161synchronization primitives into the kernel.
1162.SH
1163Implementation of Name Spaces
1164.PP
1165User processes construct name spaces using three system calls:
1166.CW mount ,
1167.CW bind ,
1168and
1169.CW unmount .
1170The
1171.CW mount
1172system call attaches a tree served by a file server to
1173the current name space.  Before calling
1174.CW mount ,
1175the client must (by outside means) acquire a connection to the server in
1176the form of a file descriptor that may be written and read to transmit 9P messages.
1177That file descriptor represents a pipe or network connection.
1178.PP
1179The
1180.CW mount
1181call attaches a new hierarchy to the existing name space.
1182The
1183.CW bind
1184system call, on the other hand, duplicates some piece of existing name space at
1185another point in the name space.
1186The
1187.CW unmount
1188system call allows components to be removed.
1189.PP
1190Using
1191either
1192.CW bind
1193or
1194.CW mount ,
1195multiple directories may be stacked at a single point in the name space.
1196In Plan 9 terminology, this is a
1197.I union
1198directory and behaves like the concatenation of the constituent directories.
1199A flag argument to
1200.CW bind
1201and
1202.CW mount
1203specifies the position of a new directory in the union,
1204permitting new elements
1205to be added either at the front or rear of the union or to replace it entirely.
1206When a file lookup is performed in a union directory, each component
1207of the union is searched in turn and the first match taken; likewise,
when a union directory is read, the contents of each of the component directories
are read in turn.
1210Union directories are one of the most widely used organizational features
1211of the Plan 9 name space.
1212For instance, the directory
1213.CW /bin
1214is built as a union of
1215.CW /$cputype/bin
1216(program binaries),
1217.CW /rc/bin
1218(shell scripts),
1219and perhaps more directories provided by the user.
1220This construction makes the shell
1221.CW $PATH
1222variable unnecessary.
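.PP
The lookup rule is simple enough to model directly (directory contents here are invented; a real union searches the bound directories through the mount table):

```c
#include <string.h>

/* Model of lookup in a union directory: search each component in
   order and take the first match. Directories are modeled as
   NULL-terminated name lists, in union order. */
static const char *archbin[] = { "ls", "rc", NULL };	/* /$cputype/bin */
static const char *rcbin[]   = { "man", "ls", NULL };	/* /rc/bin */
static const char **unions[] = { archbin, rcbin, NULL };

static int
lookup(const char *name)
{
	int i, j;

	for(i = 0; unions[i] != NULL; i++)
		for(j = 0; unions[i][j] != NULL; j++)
			if(strcmp(unions[i][j], name) == 0)
				return i;	/* index of the winning component */
	return -1;
}
```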
1223.PP
1224One question raised by union directories
1225is which element of the union receives a newly created file.
1226After several designs, we decided on the following.
1227By default, directories in unions do not accept new files, although the
1228.CW create
1229system call applied to an existing file succeeds normally.
1230When a directory is added to the union, a flag to
1231.CW bind
1232or
1233.CW mount
1234enables create permission (a property of the name space) in that directory.
1235When a file is being created with a new name in a union, it is created in the
1236first directory of the union with create permission; if that creation fails,
1237the entire
1238.CW create
1239fails.
1240This scheme enables the common use of placing a private directory anywhere
1241in a union of public ones,
1242while allowing creation only in the private directory.
1243.PP
1244By convention, kernel device file systems
1245are bound into the
1246.CW /dev
1247directory, but to bootstrap the name space building process it is
1248necessary to have a notation that permits
1249direct access to the devices without an existing name space.
1250The root directory
1251of the tree served by a device driver can be accessed using the syntax
1252.CW # \f2c\f1,
1253where
1254.I c
1255is a unique character (typically a letter) identifying the
1256.I type
1257of the device.
Simple device drivers serve a single-level directory containing a few files.
1259As an example,
1260each serial port is represented by a data and a control file:
1261.P1
1262% bind -a '#t' /dev
1263% cd /dev
1264% ls -l eia*
1265--rw-rw-rw- t 0 bootes bootes 0 Feb 24 21:14 eia1
1266--rw-rw-rw- t 0 bootes bootes 0 Feb 24 21:14 eia1ctl
1267--rw-rw-rw- t 0 bootes bootes 0 Feb 24 21:14 eia2
1268--rw-rw-rw- t 0 bootes bootes 0 Feb 24 21:14 eia2ctl
1269.P2
1270The
1271.CW bind
1272program is an encapsulation of the
1273.CW bind
1274system call; its
1275.CW -a
1276flag positions the new directory at the end of the union.
1277The data files
1278.CW eia1
1279and
1280.CW eia2
1281may be read and written to communicate over the serial line.
1282Instead of using special operations on these files to control the devices,
1283commands written to the files
1284.CW eia1ctl
1285and
1286.CW eia2ctl
1287control the corresponding device;
1288for example,
1289writing the text string
1290.CW b1200
1291to
1292.CW /dev/eia1ctl
1293sets the speed of that line to 1200 baud.
1294Compare this to the UNIX
1295.CW ioctl
1296system call: in Plan 9, devices are controlled by textual messages,
1297free of byte order problems, with clear semantics for reading and writing.
1298It is common to configure or debug devices using shell scripts.
1299.PP
1300It is the universal use of the 9P protocol that
1301connects Plan 9's components together to form a
1302distributed system.
1303Rather than inventing a unique protocol for each
1304service such as
1305.CW rlogin ,
1306FTP, TFTP, and X windows,
1307Plan 9 implements services
1308in terms of operations on file objects,
1309and then uses a single, well-documented protocol to exchange information between
1310computers.
1311Unlike NFS, 9P treats files as a sequence of bytes rather than blocks.
1312Also unlike NFS, 9P is stateful: clients perform
1313remote procedure calls to establish pointers to objects in the remote
1314file server.
1315These pointers are called file identifiers or
1316.I fids .
1317All operations on files supply a fid to identify an object in the remote file system.
1318.PP
1319The 9P protocol defines 17 messages, providing
1320means to authenticate users, navigate fids around
1321a file system hierarchy, copy fids, perform I/O, change file attributes,
1322and create and delete files.
1323Its complete specification is in Section 5 of the Programmer's Manual [9man].
1324Here is the procedure to gain access to the name hierarchy supplied by a server.
1325A file server connection is established via a pipe or network connection.
1326An initial
1327.CW session
1328message performs a bilateral authentication between client and server.
1329An
1330.CW attach
1331message then connects a fid suggested by the client to the root of the server file
1332tree.
1333The
1334.CW attach
1335message includes the identity of the user performing the attach; henceforth all
1336fids derived from the root fid will have permissions associated with
1337that user.
1338Multiple users may share the connection, but each must perform an attach to
1339establish his or her identity.
1340.PP
1341The
1342.CW walk
1343message moves a fid through a single level of the file system hierarchy.
1344The
1345.CW clone
1346message takes an established fid and produces a copy that points
1347to the same file as the original.
1348Its purpose is to enable walking to a file in a directory without losing the fid
1349on the directory.
1350The
1351.CW open
1352message locks a fid to a specific file in the hierarchy,
1353checks access permissions,
1354and prepares the fid
1355for I/O.
1356The
1357.CW read
1358and
1359.CW write
1360messages allow I/O at arbitrary offsets in the file;
1361the maximum size transferred is defined by the protocol.
1362The
1363.CW clunk
1364message indicates the client has no further use for a fid.
1365The
1366.CW remove
1367message behaves like
1368.CW clunk
1369but causes the file associated with the fid to be removed and any associated
1370resources on the server to be deallocated.
1371.PP
13729P has two forms: RPC messages sent on a pipe or network connection and a procedural
1373interface within the kernel.
1374Since kernel device drivers are directly addressable,
1375there is no need to pass messages to
1376communicate with them;
1377instead each 9P transaction is implemented by a direct procedure call.
1378For each fid,
1379the kernel maintains a local representation in a data structure called a
1380.I channel ,
1381so all operations on files performed by the kernel involve a channel connected
1382to that fid.
1383The simplest example is a user process's file descriptors, which are
1384indexes into an array of channels.
1385A table in the kernel provides a list
1386of entry points corresponding one to one with the 9P messages for each device.
1387A system call such as
1388.CW read
1389from the user translates into one or more procedure calls
1390through that table, indexed by the type character stored in the channel:
1391.CW procread ,
1392.CW eiaread ,
1393etc.
1394Each call takes at least
1395one channel as an argument.
1396A special kernel driver, called the
1397.I mount
1398driver, translates procedure calls to messages, that is,
1399it converts local procedure calls to remote ones.
1400In effect, this special driver
1401becomes a local proxy for the files served by a remote file server.
1402The channel pointer in the local call is translated to the associated fid
1403in the transmitted message.
1404.PP
1405The mount driver is the sole RPC mechanism employed by the system.
1406The semantics of the supplied files, rather than the operations performed upon
1407them, create a particular service such as the
1408.CW cpu
1409command.
1410The mount driver demultiplexes protocol
1411messages between clients sharing a communication channel
1412with a file server.
1413For each outgoing RPC message,
1414the mount driver allocates a buffer labeled by a small unique integer,
1415called a
1416.I tag .
1417The reply to the RPC is labeled with the same tag, which is used by
1418the mount driver to match the reply with the request.
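.PP
The tag discipline amounts to a small allocation table; a sketch (sizes and return conventions invented for illustration):

```c
/* Sketch of the mount driver's tag matching: each outgoing RPC
   takes a small unique tag; the reply carries the same tag, which
   locates the outstanding request. */
enum { NTAGS = 8 };

static int pending[NTAGS];	/* nonzero: request with this tag outstanding */

static int
alloctag(void)
{
	int t;

	for(t = 0; t < NTAGS; t++)
		if(!pending[t]){
			pending[t] = 1;
			return t;
		}
	return -1;	/* all tags in use; caller must wait */
}

static int
reply(int tag)
{
	if(tag < 0 || tag >= NTAGS || !pending[tag])
		return -1;	/* no matching request */
	pending[tag] = 0;
	return tag;
}
```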
1419.PP
1420The kernel representation of the name space
1421is called the
1422.I "mount table" ,
1423which stores a list of bindings between channels.
1424Each entry in the mount table contains a pair of channels: a
1425.I from
1426channel and a
1427.I to
1428channel.
1429Every time a walk succeeds in moving a channel to a new location in the name space,
1430the mount table is consulted to see if a `from' channel matches the new name; if
1431so the `to' channel is cloned and substituted for the original.
1432Union directories are implemented by converting the `to'
1433channel into a list of channels:
1434a successful walk to a union directory returns a `to' channel that forms
1435the head of
1436a list of channels, each representing a component directory
1437of the union.
1438If a walk
1439fails to find a file in the first directory of the union, the list is followed,
1440the next component cloned, and walk tried on that directory.
1441.PP
1442Each file in Plan 9 is uniquely identified by a set of integers:
1443the type of the channel (used as the index of the function call table),
1444the server or device number
1445distinguishing the server from others of the same type (decided locally by the driver),
1446and a
1447.I qid
1448formed from two 32-bit numbers called
1449.I path
1450and
1451.I version .
1452The path is a unique file number assigned by a device driver or
1453file server when a file is created.
1454The version number is updated whenever
1455the file is modified; as described in the next section,
1456it can be used to maintain cache coherency between
1457clients and servers.
1458.PP
1459The type and device number are analogous to UNIX major and minor
1460device numbers;
1461the qid is analogous to the i-number.
1462The device and type
1463connect the channel to a device driver and the qid
1464identifies the file within that device.
1465If the file recovered from a walk has the same type, device, and qid path
1466as an entry in the mount table, they are the same file and the
1467corresponding substitution from the mount table is made.
1468This is how the name space is implemented.
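.PP
The comparison at the heart of that mechanism is a three-way equality test; a sketch (field layout simplified from the kernel's channel structure):

```c
/* The triple that names a file uniquely: channel type (index into
   the device function table), device number, and the path half of
   the qid. The mount table substitutes a channel exactly when all
   three match. */
typedef struct Chan {
	int type;
	int dev;
	unsigned long qidpath;
} Chan;

static int
eqchan(Chan *a, Chan *b)
{
	return a->type == b->type &&
		a->dev == b->dev &&
		a->qidpath == b->qidpath;
}
```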
1469.SH
1470File Caching
1471.PP
1472The 9P protocol has no explicit support for caching files on a client.
1473The large memory of the central file server acts as a shared cache for all its clients,
1474which reduces the total amount of memory needed across all machines in the network.
1475Nonetheless, there are sound reasons to cache files on the client, such as a slow
1476connection to the file server.
1477.PP
1478The version field of the qid is changed whenever the file is modified,
1479which makes it possible to do some weakly coherent forms of caching.
1480The most important is client caching of text and data segments of executable files.
1481When a process
1482.CW execs
1483a program, the file is re-opened and the qid's version is compared with that in the cache;
1484if they match, the local copy is used.
1485The same method can be used to build a local caching file server.
1486This user-level server interposes on the 9P connection to the remote server and
1487monitors the traffic, copying data to a local disk.
1488When it sees a read of known data, it answers directly,
1489while writes are passed on immediately\(emthe cache is write-through\(emto keep
1490the central copy up to date.
1491This is transparent to processes on the terminal and requires no change to 9P;
1492it works well on home machines connected over serial lines.
1493A similar method can be applied to build a general client cache in unused local
1494memory, but this has not been done in Plan 9.
1495.SH
1496Networks and Communication Devices
1497.PP
1498Network interfaces are kernel-resident file systems, analogous to the EIA device
1499described earlier.
1500Call setup and shutdown are achieved by writing text strings to the control file
1501associated with the device;
1502information is sent and received by reading and writing the data file.
The structure and semantics of the devices are common to all networks so,
other than a file name substitution,
the same procedure makes a call using TCP over Ethernet as using URP over Datakit [Fra80].
1506.PP
1507This example illustrates the structure of the TCP device:
1508.P1
1509% ls -lp /net/tcp
1510d-r-xr-xr-x I 0 bootes bootes 0 Feb 23 20:20 0
1511d-r-xr-xr-x I 0 bootes bootes 0 Feb 23 20:20 1
1512--rw-rw-rw- I 0 bootes bootes 0 Feb 23 20:20 clone
1513% ls -lp /net/tcp/0
1514--rw-rw---- I 0 rob    bootes 0 Feb 23 20:20 ctl
1515--rw-rw---- I 0 rob    bootes 0 Feb 23 20:20 data
1516--rw-rw---- I 0 rob    bootes 0 Feb 23 20:20 listen
1517--r--r--r-- I 0 bootes bootes 0 Feb 23 20:20 local
1518--r--r--r-- I 0 bootes bootes 0 Feb 23 20:20 remote
1519--r--r--r-- I 0 bootes bootes 0 Feb 23 20:20 status
1520%
1521.P2
1522The top directory,
1523.CW /net/tcp ,
1524contains a
1525.CW clone
1526file and a directory for each connection, numbered
1527.CW 0
1528to
1529.I n .
Each connection directory corresponds to a TCP/IP connection.
1531Opening
1532.CW clone
1533reserves an unused connection and returns its control file.
1534Reading the control file returns the textual connection number, so the user
1535process can construct the full name of the newly allocated
1536connection directory.
1537The
1538.CW local ,
1539.CW remote ,
1540and
1541.CW status
1542files are diagnostic; for example,
1543.CW remote
1544contains the address (for TCP, the IP address and port number) of the remote side.
.PP
A call is initiated by writing a connect message with a network-specific address as
its argument; for example, to open a Telnet session (port 23) to a remote machine
with IP address 135.104.9.52,
the string is:
.P1
connect 135.104.9.52!23
.P2
The write to the control file blocks until the connection is established;
if the destination is unreachable, the write returns an error.
Once the connection is established, the
.CW telnet
application reads and writes the
.CW data
file
to talk to the remote Telnet daemon.
On the other end, the Telnet daemon would start by writing
.P1
announce 23
.P2
to its control file to indicate its willingness to receive calls to this port.
Such a daemon is called a
.I listener
in Plan 9.
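.PP
Because call setup is plain text, the control messages above can be built with trivial
string formatting.
This Python sketch, which is not part of Plan 9, shows the two messages used in the
Telnet example:

```python
def connect_msg(ipaddr, port):
    """Text written to a connection's ctl file to place a call,
    e.g. 'connect 135.104.9.52!23' for the Telnet example above."""
    return "connect %s!%d" % (ipaddr, port)

def announce_msg(port):
    """Text a listener writes to its ctl file to indicate its
    willingness to receive calls to the given port."""
    return "announce %d" % port
```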
.PP
A uniform structure for network devices cannot hide all the details
of addressing and communication for dissimilar networks.
For example, Datakit uses textual, hierarchical addresses unlike IP's 32-bit addresses, so
an application given a control file must still know what network it represents.
Rather than make every application know the addressing of every network,
Plan 9 hides these details in a
.I connection
.I server ,
called
.CW cs .
.CW Cs
is a file system mounted in a known place.
It supplies a single control file that an application uses to discover how to connect
to a host.
The application writes the symbolic address and service name for
the connection it wishes to make,
and reads back the name of the
.CW clone
file to open and the address to present to it.
If there are multiple networks between the machines,
.CW cs
presents a list of possible networks and addresses to be tried in sequence;
it uses heuristics to decide the order.
For instance, it presents the highest-bandwidth choice first.
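.PP
The answer read back from
.CW cs
is, in effect, a list of (clone file, address) pairs to be tried in order.
The following Python sketch parses such a list; the exact line format assumed here
(clone path, a space, then the dial address) is an illustration, not the precise
.CW cs
syntax:

```python
def parse_cs_answers(text):
    """Split a cs-style answer into (clone file, address) pairs,
    preserving the order in which they should be tried.
    Line format is assumed for illustration."""
    pairs = []
    for line in text.splitlines():
        if line.strip():
            clone, addr = line.split(None, 1)
            pairs.append((clone, addr))
    return pairs

# Hypothetical answer offering an IL connection before a TCP one.
answers = parse_cs_answers(
    "/net/il/clone 135.104.9.52!17008\n"
    "/net/tcp/clone 135.104.9.52!564\n")
```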
.PP
A single library function called
.CW dial
talks to
.CW cs
to establish the connection.
An application that uses
.CW dial
needs no changes, not even recompilation, to adapt to new networks;
the interface to
.CW cs
hides the details.
.PP
The uniform structure for networks in Plan 9 makes the
.CW import
command all that is needed to construct gateways.
.SH
Kernel structure for networks
.PP
The kernel plumbing used to build Plan 9 communications
channels is called
.I streams
[Rit84][Presotto].
A stream is a bidirectional channel connecting a
physical or pseudo-device to a user process.
The user process inserts and removes data at one end of the stream;
a kernel process acting on behalf of a device operates at
the other end.
A stream comprises a linear list of
.I "processing modules" .
Each module has both an upstream (toward the process) and
downstream (toward the device)
.I "put routine" .
Calling the put routine of the module on either end of the stream
inserts data into the stream.
Each module calls the succeeding one to send data up or down the stream.
Like UNIX streams [Rit84],
Plan 9 streams can be dynamically configured.
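.PP
The arrangement can be pictured as a doubly linked list of modules, each with a put
routine in both directions.
The Python below is a toy model of that structure, including dynamic insertion of a
module; it is a sketch of the idea, not the kernel implementation:

```python
# Toy model of a stream: a linear list of processing modules, each
# with an upstream and a downstream put routine.

class Module:
    def __init__(self, name, transform=lambda d: d):
        self.name = name
        self.transform = transform  # per-module processing step
        self.up = None              # next module toward the user process
        self.down = None            # next module toward the device

    def put_down(self, data):
        """Downstream put routine: process, then pass toward the device."""
        data = self.transform(data)
        return self.down.put_down(data) if self.down else data

    def put_up(self, data):
        """Upstream put routine: process, then pass toward the process."""
        data = self.transform(data)
        return self.up.put_up(data) if self.up else data

def push(top, mod):
    """Dynamically insert mod just below top, as streams permit."""
    mod.down = top.down
    mod.up = top
    if top.down:
        top.down.up = mod
    top.down = mod

proc_end = Module("proc")      # end operated by the user process
dev_end = Module("device")     # end operated by a kernel process
proc_end.down = dev_end
dev_end.up = proc_end

# Insert an uppercasing module into the configured stream.
push(proc_end, Module("upper", lambda d: d.upper()))
```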
.SH
The IL Protocol
.PP
The 9P protocol must run above a reliable transport protocol with delimited messages.
9P has no mechanism to recover from transmission errors and
the system assumes that each read from a communication channel will
return a single 9P message;
it does not parse the data stream to discover message boundaries.
Pipes and some network protocols already have these properties but
the standard IP protocols do not.
TCP does not delimit messages, while
UDP [RFC768] does not provide reliable in-order delivery.
.PP
We designed a new protocol, called IL (Internet Link), to transmit 9P messages over IP.
It is a connection-based protocol that provides
reliable transmission of sequenced messages between machines.
Since a process can have only a single outstanding 9P request,
there is no need for flow control in IL.
Like TCP, IL has adaptive timeouts: it scales acknowledgement and retransmission times
to match the network speed.
This allows the protocol to perform well on both the Internet and on local Ethernets.
Also, IL does no blind retransmission,
to avoid adding to the congestion of busy networks.
Full details are in another paper [PrWi95].
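.PP
A generic adaptive timeout of this flavor keeps an exponentially smoothed estimate of
the round-trip time and derives the retransmission interval from it.
The sketch below is illustrative only; the constants are invented and the exact
algorithm IL uses is given in [PrWi95]:

```python
def smoothed_timeout(samples, srtt=None, alpha=0.125, factor=2.0):
    """Generic adaptive-timeout sketch: fold each round-trip
    sample into an exponentially smoothed estimate, then derive
    the retransmission timeout as a multiple of the estimate.
    Constants are illustrative, not IL's."""
    for s in samples:
        srtt = s if srtt is None else (1 - alpha) * srtt + alpha * s
    return srtt * factor
```

On a fast local Ethernet the samples, and hence the timeout, stay small; on a slow
wide-area path the same code yields proportionally longer timeouts.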
.PP
In Plan 9, the implementation of IL is smaller and faster than TCP.
IL is our main Internet transport protocol.
.SH
Overview of authentication
.PP
Authentication establishes the identity of a
user accessing a resource.
The user requesting the resource is called the
.I client
and the user granting access to the resource is called the
.I server .
This is usually done under the auspices of a 9P attach message.
A user may be a client in one authentication exchange and a server in another.
Servers always act on behalf of some user,
either a normal client or some administrative entity, so authentication
is defined to be between users, not machines.
.PP
Each Plan 9 user has an associated DES [NBS77] authentication key;
the user's identity is verified by the ability to
encrypt and decrypt special messages called challenges.
Since knowledge of a user's key gives access to that user's resources,
the Plan 9 authentication protocols never transmit a message containing
a cleartext key.
.PP
Authentication is bilateral:
at the end of the authentication exchange,
each side is convinced of the other's identity.
Every machine begins the exchange with a DES key in memory.
In the case of CPU and file servers, the key, user name, and domain name
for the server are read from permanent storage,
usually non-volatile RAM.
In the case of terminals,
the key is derived from a password typed by the user at boot time.
A special machine, known as the
.I authentication
.I server ,
maintains a database of keys for all users in its administrative domain and
participates in the authentication protocols.
.PP
The authentication protocol is as follows:
after exchanging challenges, one party
contacts the authentication server to create
permission-granting
.I tickets
encrypted with
each party's secret key and containing a new conversation key.
Each
party decrypts its own ticket and uses the conversation key to
encrypt the other party's challenge.
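.PP
The following toy walk-through shows the shape of the exchange, with a one-byte XOR
standing in for DES purely so the steps can be executed; the keys, user names, and
challenge are invented and nothing here is cryptographically meaningful:

```python
# Toy walk-through of the ticket exchange described above.
# XOR is NOT encryption; it merely makes the steps runnable.

def enc(key, msg):
    """Toy 'encryption': XOR each byte of msg with a one-byte key."""
    return bytes(b ^ key for b in msg)

dec = enc  # XOR is its own inverse

# Long-term secret keys known to the authentication server (invented).
KEYS = {"client": 0x21, "server": 0x42}

def make_tickets(conv_key):
    """The authentication server encrypts a fresh conversation key
    once under each party's secret key."""
    return {who: enc(k, bytes([conv_key])) for who, k in KEYS.items()}

challenge = b"nonce-1234"               # one party's challenge (invented)
tickets = make_tickets(conv_key=0x7e)   # issued after the challenge exchange

# Each party decrypts its own ticket to recover the conversation key...
ck_client = dec(KEYS["client"], tickets["client"])[0]
ck_server = dec(KEYS["server"], tickets["server"])[0]

# ...and proves possession of it by encrypting the other's challenge.
proof = enc(ck_client, challenge)
```

Since only the holder of a party's secret key can recover the conversation key from
its ticket, a correct proof convinces the other side of its identity.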
.PP
This structure is somewhat like Kerberos [MBSS87], but avoids
its reliance on synchronized clocks.
Also
unlike Kerberos, Plan 9 authentication supports a `speaks for'
relation [LABW91] that enables one user to have the authority
of another;
this is how a CPU server runs processes on behalf of its clients.
.PP
Plan 9's authentication structure builds
secure services rather than depending on firewalls.
Whereas firewalls require special code for every service penetrating the wall,
the Plan 9 approach permits authentication to be done in a single place\(em9P\(emfor
all services.
For example, the
.CW cpu
command works securely across the Internet.
.SH
Authenticating external connections
.PP
The regular Plan 9 authentication protocol is not suitable for text-based services such as
Telnet
or FTP.
In such cases, Plan 9 users authenticate with hand-held DES calculators called
.I authenticators .
The authenticator holds a key for the user, distinct from
the user's normal authentication key.
The user `logs on' to the authenticator using a 4-digit PIN.
A correct PIN enables the authenticator for a challenge/response exchange with the server.
Since a correct challenge/response exchange is valid only once
and keys are never sent over the network,
this procedure is not susceptible to replay attacks, yet
is compatible with protocols like Telnet and FTP.
.SH
Special users
.PP
Plan 9 has no super-user.
Each server is responsible for maintaining its own security, usually permitting
access only from the console, which is protected by a password.
For example, file servers have a unique administrative user called
.CW adm ,
with special privileges that apply only to commands typed at the server's
physical console.
These privileges concern the day-to-day maintenance of the server,
such as adding new users and configuring disks and networks.
The privileges do
.I not
include the ability to modify, examine, or change the permissions of any files.
If a file is read-protected by a user, only that user may grant access to others.
.PP
CPU servers have an equivalent user name that allows administrative access to
resources on that server such as the control files of user processes.
Such permission is necessary, for example, to kill rogue processes, but
does not extend beyond that server.
On the other hand, by means of a key
held in protected non-volatile RAM,
the identity of the administrative user is proven to the
authentication server.
This allows the CPU server to authenticate remote users, both
for access to the server itself and when the CPU server is acting
as a proxy on their behalf.
.PP
Finally, a special user called
.CW none
has no password and is always allowed to connect;
anyone may claim to be
.CW none .
.CW None
has restricted permissions; for example, it is not allowed to examine dump files
and can read only world-readable files.
.PP
The idea behind
.CW none
is analogous to the anonymous user in FTP
services.
On Plan 9, guest FTP servers are further confined within a special
restricted name space.
This space disconnects guest users from system programs, such as the contents of
.CW /bin ,
but local files can still be made available to guests
by binding them explicitly into the space.
A restricted name space is more secure than the usual technique of exporting
an ad hoc directory tree; the result is a kind of cage around untrusted users.
.SH
The cpu command and proxied authentication
.PP
When a call is made to a CPU server for a user, say Peter,
the intent is that Peter wishes to run processes with his own authority.
To implement this property,
the CPU server does the following when the call is received.
First, the listener forks off a process to handle the call.
This process changes to the user
.CW none
to avoid giving away permissions if it is compromised.
It then performs the authentication protocol to verify that the
calling user really is Peter, and to prove to Peter that
the machine is itself trustworthy.
Finally, it reattaches to all relevant file servers using the
authentication protocol to identify itself as Peter.
In this case, the CPU server is a client of the file server and performs the
client portion of the authentication exchange on behalf of Peter.
The authentication server will give the process tickets to
accomplish this only if the CPU server's administrative user name is allowed to
.I "speak for"
Peter.
.PP
The
.I "speaks for"
relation [LABW91] is kept in a table on the authentication server.
To simplify the management of users computing in different authentication domains,
it also contains mappings between user names in different domains,
for example saying that user
.CW rtm
in one domain is the same person as user
.CW rtmorris
in another.
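.PP
Conceptually the server keeps two maps: the speaks-for relation and the cross-domain
name equivalences.
The Python sketch below uses invented entries (echoing the
.CW rtm /\c
.CW rtmorris
example) and is not the actual database format:

```python
# Sketch of the two tables kept on the authentication server.
# All entries are invented for illustration.

# Which users a server's administrative user may speak for.
SPEAKS_FOR = {"bootes": {"peter", "rob"}}

# Cross-domain equivalences: (domain, name) -> (domain, name).
SAME_PERSON = {("research", "rtm"): ("cs", "rtmorris")}

def may_speak_for(host_user, client):
    """May host_user obtain tickets on client's behalf?"""
    return client in SPEAKS_FOR.get(host_user, set())
```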
.SH
File Permissions
.PP
One of the advantages of constructing services as file systems
is that the solutions to ownership and permission problems fall out naturally.
As in UNIX,
each file or directory has separate read, write, and execute/search permissions
for the file's owner, the file's group, and anyone else.
The idea of group is unusual:
any user name is potentially a group name.
A group is just a user with a list of other users in the group.
Conventions make the distinction: most people have user names without group members,
while groups have long lists of attached names.  For example, the
.CW sys
group traditionally has all the system programmers,
and system files are accessible
by group
.CW sys .
Consider the following two lines of a user database stored on a server:
.P1
pjw:pjw:
sys::pjw,ken,philw,presotto
.P2
The first establishes user
.CW pjw
as a regular user.  The second establishes user
.CW sys
as a group and lists four users who are
.I members
of that group.
The empty colon-separated field is space for a user to be named as the
.I group
.I leader .
If a group has a leader, that user has special permissions for the group,
such as freedom to change the group permissions
of files in that group.
If no leader is specified, each member of the group is considered equal, as if each were
the leader.
In our example, only
.CW pjw
can add members to his group, but all of
.CW sys 's
members are equal partners in that group.
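.PP
The database lines above have the shape
.CW name:leader:member,member,... .
A small parser, written here in Python as a sketch rather than the server's actual
code, makes the convention concrete:

```python
def parse_user_line(line):
    """Parse one line of the user database shown above:
    name:leader:member,member,...
    An empty leader field means the members are equal partners."""
    fields = line.split(":")
    name = fields[0]
    leader = fields[1] if len(fields) > 1 and fields[1] else None
    members = [m for m in fields[2].split(",") if m] if len(fields) > 2 else []
    return name, leader, members
```

Applied to the two example lines,
.CW pjw
comes back as leader of his own (empty) group, while
.CW sys
has no leader and four equal members.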
.PP
Regular files are owned by the user that creates them.
The group name is inherited from the directory holding the new file.
Device files are treated specially:
the kernel may arrange the ownership and permissions of
a file appropriate to the user accessing the file.
.PP
A good example of the generality this offers is process files,
which are owned and read-protected by the owner of the process.
If the owner wants to let someone else access the memory of a process,
for example to let the author of a program debug a broken image, the standard
.CW chmod
command applied to the process files does the job.
.PP
Another unusual application of file permissions
is the dump file system, which is not only served by the same file
server as the original data, but represented by the same user database.
Files in the dump are therefore given identical protection as files in the regular
file system;
if a file is owned by
.CW pjw
and read-protected, once it is in the dump file system it is still owned by
.CW pjw
and read-protected.
Also, since the dump file system is immutable, the file cannot be changed;
it is read-protected forever.
Drawbacks are that if the file is readable but should have been read-protected,
it is readable forever, and that user names are hard to re-use.
.SH
Performance
.PP
As a simple measure of the performance of the Plan 9 kernel,
we compared the
time to do some simple operations on Plan 9 and on SGI's IRIX Release 5.3
running on an SGI Challenge M with a 100MHz MIPS R4400 and a 1-megabyte
secondary cache.
The test program was written in Alef,
compiled with the same compiler,
and run on identical hardware,
so the only variables are the operating system and libraries.
.PP
The program tests the time to do a context switch
.CW rendezvous "" (
on Plan 9,
.CW blockproc
on IRIX);
a trivial system call
.CW rfork(0) "" (
and
.CW nap(0) );
and
lightweight fork
.CW rfork(RFPROC) "" (
and
.CW sproc(PR_SFDS|PR_SADDR) ).
It also measures the time to send a byte on a pipe from one process
to another and the throughput on a pipe between two processes.
The results appear in Table 1.
.KS
.TS
center,box;
ccc
lnn.
Test	Plan 9	IRIX
_
Context switch	39 µs	150 µs
System call	6 µs	36 µs
Light fork	1300 µs	2200 µs
Pipe latency	110 µs	200 µs
Pipe bandwidth	11678 KB/s	14545 KB/s
.TE
.ce
.I
Table 1.  Performance comparison.
.R
.KE
.LP
Although the Plan 9 times are not spectacular, they show that the kernel is
competitive with commercial systems.
.SH
Discussion
.PP
Plan 9 has a relatively conventional kernel;
the system's novelty lies in the pieces outside the kernel and the way they interact.
When building Plan 9, we considered all aspects
of the system together, solving problems where the solution fit best.
Sometimes the solution spanned many components.
An example is the problem of heterogeneous instruction architectures,
which is addressed by the compilers (different code characters, portable
object code),
the environment
.CW $cputype "" (
and
.CW $objtype ),
the name space
(binding in
.CW /bin ),
and other components.
Sometimes many issues could be solved in a single place.
The best example is 9P,
which centralizes naming, access, and authentication.
9P is really the core
of the system;
it is fair to say that the Plan 9 kernel is primarily a 9P multiplexer.
.PP
Plan 9's focus on files and naming is central to its expressiveness.
Particularly in distributed computing, the way things are named has profound
influence on the system [Nee89].
The combination of
local name spaces and global conventions to interconnect networked resources
avoids the difficulty of maintaining a global uniform name space,
while naming everything like a file makes the system easy to understand, even for
novices.
Consider the dump file system, which is trivial to use for anyone familiar with
hierarchical file systems.
At a deeper level, building all the resources above a single uniform interface
makes interoperability easy.
Once a resource exports a 9P interface,
it can combine transparently
with any other part of the system to build unusual applications;
the details are hidden.
This may sound object-oriented, but there are distinctions.
First, 9P defines a fixed set of `methods'; it is not an extensible protocol.
More important,
files are well-defined and well-understood
and come prepackaged with familiar methods of access, protection, naming, and
networking.
Objects, despite their generality, do not come with these attributes defined.
By reducing `object' to `file', Plan 9 gets some technology for free.
.PP
Nonetheless, it is possible to push the idea of file-based computing too far.
Converting every resource in the system into a file system is a kind of metaphor,
and metaphors can be abused.
A good example of restraint is
.CW /proc ,
which is only a view of a process, not a representation.
To run processes, the usual
.CW fork
and
.CW exec
calls are still necessary, rather than doing something like
.P1
cp /bin/date /proc/clone/mem
.P2
The problem with such examples is that they require the server to do things
not under its control.
The ability to assign meaning to a command like this does not
imply the meaning will fall naturally out of the structure of answering the 9P requests
it generates.
As a related example, Plan 9 does not put machines' network names in the file
name space.
The network interfaces provide a very different model of naming, because using
.CW open ,
.CW create ,
.CW read ,
and
.CW write
on such files would not offer a suitable place to encode all the details of call
setup for an arbitrary network.
This does not mean that the network interface cannot be file-like, just that it must
have a more tightly defined structure.
.PP
What would we do differently next time?
Some elements of the implementation are unsatisfactory.
Using streams to implement network interfaces in the kernel
allows protocols to be connected together dynamically,
such as to attach the same TTY driver to TCP, URP, and
IL connections,
but Plan 9 makes no use of this configurability.
(It was exploited, however, in the research UNIX system for which
streams were invented.)
Replacing streams by static I/O queues would
simplify the code and make it faster.
.PP
Although the main Plan 9 kernel is portable across many machines,
the file server is implemented separately.
This has caused several problems:
drivers that must be written twice,
bugs that must be fixed twice,
and weaker portability of the file system code.
The solution is easy: the file server kernel should be maintained
as a variant of the regular operating system, with no user processes and
special compiled-in
kernel processes to implement file service.
Another improvement to the file system would be a change of internal structure.
The WORM jukebox is the least reliable piece of the hardware, but because
it holds the metadata of the file system, it must be present in order to serve files.
The system could be restructured so the WORM is a backup device only, with the
file system proper residing on magnetic disks.
This would require no change to the external interface.
.PP
Although Plan 9 has per-process name spaces, it has no mechanism to give the
description of a process's name space to another process except by direct inheritance.
The
.CW cpu
command, for example, cannot in general reproduce the terminal's name space;
it can only re-interpret the user's login profile and make substitutions for things like
the name of the binary directory to load.
This misses any local modifications made before running
.CW cpu .
It should instead be possible to capture the terminal's name space and transmit
its description to a remote process.
.PP
Despite these problems, Plan 9 works well.
It has matured into the system that supports our research,
rather than being the subject of the research itself.
Experimental new work includes developing interfaces to faster networks,
file caching in the client kernel,
encapsulating and exporting name spaces,
and the ability to re-establish the client state after a server crash.
Attention is now focusing on using the system to build distributed applications.
.PP
One reason for Plan 9's success is that we use it for our daily work, not just as a research tool.
Active use forces us to address shortcomings as they arise and to adapt the system
to solve our problems.
Through this process, Plan 9 has become a comfortable, productive programming
environment, as well as a vehicle for further systems research.
.SH
References
.nr PS -1
.nr VS -2
.IP [9man] 9
.I
Plan 9 Programmer's Manual,
Volume 1,
.R
AT&T Bell Laboratories,
Murray Hill, NJ,
1995.
.IP [ANSIC] 9
\f2American National Standard for Information Systems \-
Programming Language C\f1, American National Standards Institute, Inc.,
New York, 1990.
.IP [Duff90] 9
Tom Duff, ``Rc - A Shell for Plan 9 and UNIX systems'',
.I
Proc. of the Summer 1990 UKUUG Conf.,
.R
London, July, 1990, pp. 21-33, reprinted, in a different form, in this volume.
.IP [Fra80] 9
A.G. Fraser,
``Datakit \- A Modular Network for Synchronous and Asynchronous Traffic'',
.I
Proc. Int. Conf. on Commun.,
.R
June 1980, Boston, MA.
.IP [FSSUTF] 9
.I
File System Safe UCS Transformation Format (FSS-UTF),
.R
X/Open Preliminary Specification, 1993.
ISO designation is
ISO/IEC JTC1/SC2/WG2 N 1036, dated 1994-08-01.
.IP "[ISO10646] " 9
ISO/IEC DIS 10646-1:1993
.I
Information technology \-
Universal Multiple-Octet Coded Character Set (UCS) \(em
Part 1: Architecture and Basic Multilingual Plane.
.R
.IP [Kill84] 9
T.J. Killian,
``Processes as Files'',
.I
USENIX Summer 1984 Conf. Proc.,
.R
June 1984, Salt Lake City, UT.
.IP "[LABW91] " 9
Butler Lampson,
Martín Abadi,
Michael Burrows, and
Edward Wobber,
``Authentication in Distributed Systems: Theory and Practice'',
.I
Proc. 13th ACM Symp. on Op. Sys. Princ.,
.R
Asilomar, 1991,
pp. 165-182.
.IP "[MBSS87] " 9
S. P. Miller,
B. C. Neumann,
J. I. Schiller, and
J. H. Saltzer,
``Kerberos Authentication and Authorization System'',
Massachusetts Institute of Technology,
1987.
.IP [NBS77] 9
National Bureau of Standards (U.S.),
.I
Federal Information Processing Standard 46,
.R
National Technical Information Service, Springfield, VA, 1977.
.IP [Nee89] 9
R. Needham, ``Names'', in
.I
Distributed systems,
.R
S. Mullender, ed.,
Addison Wesley, 1989.
.IP "[NeHe82] " 9
R.M. Needham and A.J. Herbert,
.I
The Cambridge Distributed Computing System,
.R
Addison-Wesley, London, 1982.
.IP [Neu92] 9
B. Clifford Neuman,
``The Prospero File System'',
.I
USENIX File Systems Workshop Proc.,
.R
Ann Arbor, 1992, pp. 13-28.
.IP "[OCDNW88] " 9
John Ousterhout, Andrew Cherenson, Fred Douglis, Mike Nelson, and Brent Welch,
``The Sprite Network Operating System'',
.I
IEEE Computer,
.R
21(2), 23-38, Feb. 1988.
.IP [Pike87] 9
Rob Pike, ``The Text Editor \f(CWsam\fP'',
.I
Software - Practice and Experience,
.R
Nov 1987, \f3\&17\f1(11), pp. 813-845; reprinted in this volume.
.IP [Pike91] 9
Rob Pike, ``8½, the Plan 9 Window System'',
.I
USENIX Summer Conf. Proc.,
.R
Nashville, June, 1991, pp. 257-265,
reprinted in this volume.
.IP [Pike93] 9
Rob Pike and Ken Thompson, ``Hello World or Καλημέρα κόσμε or
\f(Jpこんにちは 世界\fP'',
.I
USENIX Winter Conf. Proc.,
.R
San Diego, 1993, pp. 43-50,
reprinted in this volume.
.IP [Pike94] 9
Rob Pike,
``Acme: A User Interface for Programmers'',
.I
USENIX Proc. of the Winter 1994 Conf.,
.R
San Francisco, CA,
.IP [Pike95] 9
Rob Pike,
``How to Use the Plan 9 C Compiler'',
.I
Plan 9 Programmer's Manual,
Volume 2,
.R
AT&T Bell Laboratories,
Murray Hill, NJ,
1995.
.IP [POSIX] 9
.I
Information Technology\(emPortable Operating
System Interface (POSIX) Part 1:
System Application Program Interface (API)
[C Language],
.R
IEEE, New York, 1990.
.IP "[PPTTW93] " 9
Rob Pike, Dave Presotto, Ken Thompson, Howard Trickey, and Phil Winterbottom, ``The Use of Name Spaces in Plan 9'',
.I
Op. Sys. Rev.,
.R
Vol. 27, No. 2, April 1993, pp. 72-76,
reprinted in this volume.
.IP [Presotto] 9
Dave Presotto,
``Multiprocessor Streams for Plan 9'',
.I
UKUUG Summer 1990 Conf. Proc.,
.R
July 1990, pp. 11-19.
.IP [PrWi93] 9
Dave Presotto and Phil Winterbottom,
``The Organization of Networks in Plan 9'',
.I
USENIX Proc. of the Winter 1993 Conf.,
.R
San Diego, CA,
pp. 43-50,
reprinted in this volume.
.IP [PrWi95] 9
Dave Presotto and Phil Winterbottom,
``The IL Protocol'',
.I
Plan 9 Programmer's Manual,
Volume 2,
.R
AT&T Bell Laboratories,
Murray Hill, NJ,
1995.
.IP "[RFC768] " 9
J. Postel, RFC768,
.I "User Datagram Protocol,
.I "DARPA Internet Program Protocol Specification,
August 1980.
.IP "[RFC793] " 9
RFC793,
.I "Transmission Control Protocol,
.I "DARPA Internet Program Protocol Specification,
September 1981.
.IP [Rao91] 9
Herman Chung-Hwa Rao,
.I
The Jade File System,
.R
(Ph. D. Dissertation),
Dept. of Comp. Sci,
University of Arizona,
TR 91-18.
.IP [Rit84] 9
D.M. Ritchie,
``A Stream Input-Output System'',
.I
AT&T Bell Laboratories Technical Journal,
.R
\f363\f1(8), October, 1984.
.IP [Tric95] 9
Howard Trickey,
``APE \(em The ANSI/POSIX Environment'',
.I
Plan 9 Programmer's Manual,
Volume 2,
.R
AT&T Bell Laboratories,
Murray Hill, NJ,
1995.
.IP [Unicode] 9
.I
The Unicode Standard,
Worldwide Character Encoding,
Version 1.0, Volume 1,
.R
The Unicode Consortium,
Addison Wesley,
New York,
1991.
.IP [UNIX85] 9
.I
UNIX Time-Sharing System Programmer's Manual,
Research Version, Eighth Edition, Volume 1.
.R
AT&T Bell Laboratories, Murray Hill, NJ, 1985.
.IP [Welc94] 9
Brent Welch,
``A Comparison of Three Distributed File System Architectures: Vnode, Sprite, and Plan 9'',
.I
Computing Systems,
.R
7(2), pp. 175-199, Spring, 1994.
.IP [Wint95] 9
Phil Winterbottom,
``Alef Language Reference Manual'',
.I
Plan 9 Programmer's Manual,
Volume 2,
.R
AT&T Bell Laboratories,
Murray Hill, NJ,
1995.