1.\"     $NetBSD: raidctl.8,v 1.67 2014/04/03 18:54:10 christos Exp $
2.\"
3.\" Copyright (c) 1998, 2002 The NetBSD Foundation, Inc.
4.\" All rights reserved.
5.\"
6.\" This code is derived from software contributed to The NetBSD Foundation
7.\" by Greg Oster
8.\"
9.\" Redistribution and use in source and binary forms, with or without
10.\" modification, are permitted provided that the following conditions
11.\" are met:
12.\" 1. Redistributions of source code must retain the above copyright
13.\"    notice, this list of conditions and the following disclaimer.
14.\" 2. Redistributions in binary form must reproduce the above copyright
15.\"    notice, this list of conditions and the following disclaimer in the
16.\"    documentation and/or other materials provided with the distribution.
17.\"
18.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
19.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
20.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
21.\" PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
22.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
23.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
24.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
25.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
26.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
27.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
28.\" POSSIBILITY OF SUCH DAMAGE.
29.\"
30.\"
31.\" Copyright (c) 1995 Carnegie-Mellon University.
32.\" All rights reserved.
33.\"
34.\" Author: Mark Holland
35.\"
36.\" Permission to use, copy, modify and distribute this software and
37.\" its documentation is hereby granted, provided that both the copyright
38.\" notice and this permission notice appear in all copies of the
39.\" software, derivative works or modified versions, and any portions
40.\" thereof, and that both notices appear in supporting documentation.
41.\"
42.\" CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
43.\" CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
44.\" FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
45.\"
46.\" Carnegie Mellon requests users of this software to return to
47.\"
48.\"  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
49.\"  School of Computer Science
50.\"  Carnegie Mellon University
51.\"  Pittsburgh PA 15213-3890
52.\"
53.\" any improvements or extensions that they make and grant Carnegie the
54.\" rights to redistribute these changes.
55.\"
56.Dd April 3, 2014
57.Dt RAIDCTL 8
58.Os
59.Sh NAME
60.Nm raidctl
61.Nd configuration utility for the RAIDframe disk driver
62.Sh SYNOPSIS
63.Nm
64.Op Fl v
65.Fl a Ar component Ar dev
66.Nm
67.Op Fl v
68.Fl A Op yes | no | forceroot | softroot
69.Ar dev
70.Nm
71.Op Fl v
72.Fl B Ar dev
73.Nm
74.Op Fl v
75.Fl c Ar config_file Ar dev
76.Nm
77.Op Fl v
78.Fl C Ar config_file Ar dev
79.Nm
80.Op Fl v
81.Fl f Ar component Ar dev
82.Nm
83.Op Fl v
84.Fl F Ar component Ar dev
85.Nm
86.Op Fl v
87.Fl g Ar component Ar dev
88.Nm
89.Op Fl v
90.Fl G Ar dev
91.Nm
92.Op Fl v
93.Fl i Ar dev
94.Nm
95.Op Fl v
96.Fl I Ar serial_number Ar dev
97.Nm
98.Op Fl v
99.Fl m Ar dev
100.Nm
101.Op Fl v
102.Fl M
103.Oo yes | no | set
104.Ar params
105.Oc
106.Ar dev
107.Nm
108.Op Fl v
109.Fl p Ar dev
110.Nm
111.Op Fl v
112.Fl P Ar dev
113.Nm
114.Op Fl v
115.Fl r Ar component Ar dev
116.Nm
117.Op Fl v
118.Fl R Ar component Ar dev
119.Nm
120.Op Fl v
121.Fl s Ar dev
122.Nm
123.Op Fl v
124.Fl S Ar dev
125.Nm
126.Op Fl v
127.Fl u Ar dev
128.Sh DESCRIPTION
129.Nm
130is the user-land control program for
131.Xr raid 4 ,
132the RAIDframe disk device.
133.Nm
134is primarily used to dynamically configure and unconfigure RAIDframe disk
135devices.
136For more information about the RAIDframe disk device, see
137.Xr raid 4 .
138.Pp
139This document assumes the reader has at least rudimentary knowledge of
140RAID and RAID concepts.
141.Pp
142The command-line options for
143.Nm
144are as follows:
145.Bl -tag -width indent
146.It Fl a Ar component Ar dev
147Add
148.Ar component
149as a hot spare for the device
150.Ar dev .
151Component labels (which identify the location of a given
152component within a particular RAID set) are automatically added to the
153hot spare after it has been used and are not required for
154.Ar component
155before it is used.
156.It Fl A Ic yes Ar dev
157Make the RAID set auto-configurable.
158The RAID set will be automatically configured at boot
159.Em before
160the root file system is mounted.
161Note that all components of the set must be of type
162.Dv RAID
163in the disklabel.
164.It Fl A Ic no Ar dev
165Turn off auto-configuration for the RAID set.
166.It Fl A Ic forceroot Ar dev
167Make the RAID set auto-configurable, and also mark the set as being
168eligible to be the root partition.
169A RAID set configured this way will
170.Em override
171the use of the boot disk as the root device.
172All components of the set must be of type
173.Dv RAID
174in the disklabel.
175Note that only certain architectures
176.Pq currently alpha, amd64, i386, pmax, sandpoint, sparc, sparc64, and vax
177support booting a kernel directly from a RAID set.
178.It Fl A Ic softroot Ar dev
179Like
180.Ic forceroot ,
181but only change the root device if the boot device is part of the RAID set.
182.It Fl B Ar dev
183Initiate a copyback of reconstructed data from a spare disk to
184its original disk.
185This is performed after a component has failed,
186and the failed drive has been reconstructed onto a spare drive.
187.It Fl c Ar config_file Ar dev
188Configure the RAIDframe device
189.Ar dev
190according to the configuration given in
191.Ar config_file .
192A description of the contents of
193.Ar config_file
194is given later.
195.It Fl C Ar config_file Ar dev
196As for
197.Fl c ,
198but forces the configuration to take place.
199Fatal errors due to uninitialized components are ignored.
200This is required the first time a RAID set is configured.
201.It Fl f Ar component Ar dev
202This marks the specified
203.Ar component
204as having failed, but does not initiate a reconstruction of that component.
205.It Fl F Ar component Ar dev
206Fails the specified
207.Ar component
208of the device, and immediately begins a reconstruction of the failed
209disk onto an available hot spare.
210This is one of the mechanisms used to start
211the reconstruction process if a component does have a hardware failure.
212.It Fl g Ar component Ar dev
213Get the component label for the specified component.
214.It Fl G Ar dev
215Generate the configuration of the RAIDframe device in a format suitable for
216use with the
217.Fl c
218or
219.Fl C
220options.
221.It Fl i Ar dev
222Initialize the RAID device.
223In particular, (re-)write the parity on the selected device.
224This
225.Em MUST
226be done for
227.Em all
228RAID sets before the RAID device is labeled and before
229file systems are created on the RAID device.
230.It Fl I Ar serial_number Ar dev
231Initialize the component labels on each component of the device.
232.Ar serial_number
233is used as one of the keys in determining whether a
234particular set of components belongs to the same RAID set.
235While not strictly enforced, different serial numbers should be used for
236different RAID sets.
237This step
238.Em MUST
239be performed when a new RAID set is created.
240.It Fl m Ar dev
241Display status information about the parity map on the RAID set, if any.
242If used with
243.Fl v
244then the current contents of the parity map will be output (in
245hexadecimal format) as well.
246.It Fl M Ic yes Ar dev
247.\"XXX should there be a section with more info on the parity map feature?
248Enable the use of a parity map on the RAID set; this is the default,
249and greatly reduces the time taken to check parity after unclean
250shutdowns at the cost of some very slight overhead during normal
251operation.
252Changes to this setting will take effect the next time the set is
253configured.
254Note that RAID-0 sets, having no parity, will not use a parity map in
255any case.
256.It Fl M Ic no Ar dev
257Disable the use of a parity map on the RAID set; doing this is not
258recommended.
259This will take effect the next time the set is configured.
260.It Fl M Ic set Ar cooldown Ar tickms Ar regions Ar dev
261Alter the parameters of the parity map; parameters to leave unchanged
262can be given as 0, and trailing zeroes may be omitted.
263.\"XXX should this explanation be deferred to another section as well?
264The RAID set is divided into
265.Ar regions
266regions; each region is marked dirty for at most
267.Ar cooldown
268intervals of
269.Ar tickms
270milliseconds each after a write to it, and at least
271.Ar cooldown
272\- 1 such intervals.
273Changes to
274.Ar regions
275take effect the next time the set is configured, while changes to the other
276parameters are applied immediately.
277The default parameters are expected to be reasonable for most workloads.
An example is given at the end of this list of options.
278.It Fl p Ar dev
279Check the status of the parity on the RAID set.
280Displays a status message,
281and returns successfully if the parity is up-to-date.
282.It Fl P Ar dev
283Check the status of the parity on the RAID set, and initialize
284(re-write) the parity if the parity is not known to be up-to-date.
285This is normally used after a system crash (and before a
286.Xr fsck 8 )
287to ensure the integrity of the parity.
288.It Fl r Ar component Ar dev
289Remove the spare disk specified by
290.Ar component
291from the set of available spare components.
292.It Fl R Ar component Ar dev
293Fails the specified
294.Ar component ,
295if necessary, and immediately begins a reconstruction back to
296.Ar component .
297This is useful for reconstructing back onto a component after
298it has been replaced following a failure.
299.It Fl s Ar dev
300Display the status of the RAIDframe device for each of the components
301and spares.
302.It Fl S Ar dev
303Check the status of parity re-writing, component reconstruction, and
304component copyback.
305The output indicates the amount of progress
306achieved in each of these areas.
307.It Fl u Ar dev
308Unconfigure the RAIDframe device.
309This does not remove any component labels or change any configuration
310settings (e.g. auto-configuration settings) for the RAID set.
311.It Fl v
312Be more verbose.
313For operations such as reconstructions, parity
314re-writing, and copybacks, provide a progress indicator.
315.El
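.Pp
As an illustration of the
.Fl M Ic set
parameters described above, the number of regions alone could be
changed (the value 4096 is chosen purely for illustration) by leaving
the other two parameters at 0:
.Bd -literal -offset indent
raidctl -M set 0 0 4096 raid0
.Ed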
316.Pp
317The device used by
318.Nm
319is specified by
320.Ar dev .
321.Ar dev
322may be either the full name of the device, e.g.,
323.Pa /dev/rraid0d ,
324for the i386 architecture, or
325.Pa /dev/rraid0c
326for many others, or just simply
327.Pa raid0
328(for
329.Pa /dev/rraid0[cd] ) .
330It is recommended that the partitions used to represent the
331RAID device are not used for file systems.
332.Ss Configuration file
333The format of the configuration file is complex, and
334only an abbreviated treatment is given here.
335In the configuration files, a
336.Sq #
337indicates the beginning of a comment.
338.Pp
339There are 4 required sections of a configuration file, and 2
340optional sections.
341Each section begins with a
342.Sq START ,
343followed by the section name,
344and the configuration parameters associated with that section.
345The first section is the
346.Sq array
347section, and it specifies
348the number of rows, columns, and spare disks in the RAID set.
349For example:
350.Bd -literal -offset indent
351START array
3521 3 0
353.Ed
354.Pp
355indicates an array with 1 row, 3 columns, and 0 spare disks.
356Note that although multi-dimensional arrays may be specified, they are
357.Em NOT
358supported in the driver.
359.Pp
360The second section, the
361.Sq disks
362section, specifies the actual components of the device.
363For example:
364.Bd -literal -offset indent
365START disks
366/dev/sd0e
367/dev/sd1e
368/dev/sd2e
369.Ed
370.Pp
371specifies the three component disks to be used in the RAID device.
372If any of the specified drives cannot be found when the RAID device is
373configured, then they will be marked as
374.Sq failed ,
375and the system will operate in degraded mode.
376Note that it is
377.Em imperative
378that the order of the components in the configuration file does not
379change between configurations of a RAID device.
380Changing the order of the components will result in data loss
381if the set is configured with the
382.Fl C
383option.
384In normal circumstances, the RAID set will not configure if only
385.Fl c
386is specified, and the components are out-of-order.
387.Pp
388The next section, which is the
389.Sq spare
390section, is optional, and, if present, specifies the devices to be used as
391.Sq hot spares
392\(em devices which are on-line,
393but are not actively used by the RAID driver unless
394one of the main components fails.
395A simple
396.Sq spare
397section might be:
398.Bd -literal -offset indent
399START spare
400/dev/sd3e
401.Ed
402.Pp
403for a configuration with a single spare component.
404If no spare drives are to be used in the configuration, then the
405.Sq spare
406section may be omitted.
407.Pp
408The next section is the
409.Sq layout
410section.
411This section describes the general layout parameters for the RAID device,
412and provides such information as
413sectors per stripe unit,
414stripe units per parity unit,
415stripe units per reconstruction unit,
416and the parity configuration to use.
417This section might look like:
418.Bd -literal -offset indent
419START layout
420# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
42132 1 1 5
422.Ed
423.Pp
424The sectors per stripe unit specifies, in blocks, the interleave
425factor; i.e., the number of contiguous sectors to be written to each
426component for a single stripe.
427Appropriate selection of this value (32 in this example)
428is the subject of much research in RAID architectures.
429The stripe units per parity unit and
430stripe units per reconstruction unit are normally each set to 1.
431While certain values above 1 are permitted, a discussion of valid
432values and the consequences of using anything other than 1 is outside
433the scope of this document.
434The last value in this section (5 in this example)
435indicates the parity configuration desired.
436Valid entries include:
437.Bl -tag -width inde
438.It 0
439RAID level 0.
440No parity, only simple striping.
441.It 1
442RAID level 1.
443Mirroring.
444The parity is the mirror.
445.It 4
446RAID level 4.
447Striping across components, with parity stored on the last component.
448.It 5
449RAID level 5.
450Striping across components, parity distributed across all components.
451.El
452.Pp
453There are other valid entries here, including those for Even-Odd
454parity, RAID level 5 with rotated sparing, Chained declustering,
455and Interleaved declustering, but as of this writing the code for
456those parity operations has not been tested with
457.Nx .
458.Pp
459The next required section is the
460.Sq queue
461section.
462This is most often specified as:
463.Bd -literal -offset indent
464START queue
465fifo 100
466.Ed
467.Pp
468where the queuing method is specified as fifo (first-in, first-out),
469and the size of the per-component queue is limited to 100 requests.
470Other queuing methods may also be specified, but a discussion of them
471is beyond the scope of this document.
472.Pp
473The final section, the
474.Sq debug
475section, is optional.
476For more details on this the reader is referred to
477the RAIDframe documentation discussed in the
478.Sx HISTORY
479section.
480.Pp
481See
482.Sx EXAMPLES
483for a more complete configuration file example.
484.Sh FILES
485.Bl -tag -width /dev/XXrXraidX -compact
486.It Pa /dev/{,r}raid*
487.Cm raid
488device special files.
489.El
490.Sh EXAMPLES
491It is highly recommended that, before using the RAID driver for real
492file systems, the system administrator(s) become quite familiar
493with the use of
494.Nm ,
495and that they understand how the component reconstruction process works.
496The examples in this section will focus on configuring a
497number of different RAID sets of varying degrees of redundancy.
498By working through these examples, administrators should be able to
499develop a good feel for how to configure a RAID set, and how to
500initiate reconstruction of failed components.
501.Pp
502In the following examples
503.Sq raid0
504will be used to denote the RAID device.
505Depending on the architecture,
506.Pa /dev/rraid0c
507or
508.Pa /dev/rraid0d
509may be used in place of
510.Pa raid0 .
511.Ss Initialization and Configuration
512The initial step in configuring a RAID set is to identify the components
513that will be used in the RAID set.
514All components should be the same size.
515Each component should have a disklabel type of
516.Dv FS_RAID ,
517and a typical disklabel entry for a RAID component might look like:
518.Bd -literal -offset indent
519f:  1800000  200495     RAID              # (Cyl.  405*- 4041*)
520.Ed
521.Pp
522While
523.Dv FS_BSDFFS
524will also work as the component type, the type
525.Dv FS_RAID
526is preferred for RAIDframe use, as it is required for features such as
527auto-configuration.
528As part of the initial configuration of each RAID set,
529each component will be given a
530.Sq component label .
531A
532.Sq component label
533contains important information about the component, including a
534user-specified serial number, the row and column of that component in
535the RAID set, the redundancy level of the RAID set, a
536.Sq modification counter ,
537and whether the parity information (if any) on that
538component is known to be correct.
539Component labels are an integral part of the RAID set,
540since they are used to ensure that components
541are configured in the correct order, and used to keep track of other
542vital information about the RAID set.
543Component labels are also required for the auto-detection
544and auto-configuration of RAID sets at boot time.
545For a component label to be considered valid, that
546particular component label must be in agreement with the other
547component labels in the set.
548For example, the serial number,
549.Sq modification counter ,
550number of rows and number of columns must all be in agreement.
551If any of these are different, then the component is
552not considered to be part of the set.
553See
554.Xr raid 4
555for more information about component labels.
556.Pp
557Once the components have been identified, and the disks have
558appropriate labels,
559.Nm
560is then used to configure the
561.Xr raid 4
562device.
563To configure the device, a configuration file which looks something like:
564.Bd -literal -offset indent
565START array
566# numRow numCol numSpare
5671 3 1
568
569START disks
570/dev/sd1e
571/dev/sd2e
572/dev/sd3e
573
574START spare
575/dev/sd4e
576
577START layout
578# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
57932 1 1 5
580
581START queue
582fifo 100
583.Ed
584.Pp
585is created.
586The above configuration file specifies a RAID 5
587set consisting of the components
588.Pa /dev/sd1e ,
589.Pa /dev/sd2e ,
590and
591.Pa /dev/sd3e ,
592with
593.Pa /dev/sd4e
594available as a
595.Sq hot spare
596in case one of the three main drives should fail.
597A RAID 0 set would be specified in a similar way:
598.Bd -literal -offset indent
599START array
600# numRow numCol numSpare
6011 4 0
602
603START disks
604/dev/sd10e
605/dev/sd11e
606/dev/sd12e
607/dev/sd13e
608
609START layout
610# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
61164 1 1 0
612
613START queue
614fifo 100
615.Ed
616.Pp
617In this case, devices
618.Pa /dev/sd10e ,
619.Pa /dev/sd11e ,
620.Pa /dev/sd12e ,
621and
622.Pa /dev/sd13e
623are the components that make up this RAID set.
624Note that there are no hot spares for a RAID 0 set,
625since there is no way to recover data if any of the components fail.
626.Pp
627For a RAID 1 (mirror) set, the following configuration might be used:
628.Bd -literal -offset indent
629START array
630# numRow numCol numSpare
6311 2 0
632
633START disks
634/dev/sd20e
635/dev/sd21e
636
637START layout
638# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
639128 1 1 1
640
641START queue
642fifo 100
643.Ed
644.Pp
645In this case,
646.Pa /dev/sd20e
647and
648.Pa /dev/sd21e
649are the two components of the mirror set.
650While no hot spares have been specified in this
651configuration, they easily could be, just as they were specified in
652the RAID 5 case above.
653Note as well that RAID 1 sets are currently limited to only 2 components.
654At present, n-way mirroring is not possible.
655.Pp
656The first time a RAID set is configured, the
657.Fl C
658option must be used:
659.Bd -literal -offset indent
660raidctl -C raid0.conf raid0
661.Ed
662.Pp
663where
664.Pa raid0.conf
665is the name of the RAID configuration file.
666The
667.Fl C
668forces the configuration to succeed, even if any of the component
669labels are incorrect.
670The
671.Fl C
672option should not be used lightly in
673situations other than initial configurations; if
674the system is refusing to configure a RAID set, there is probably a
675very good reason for it.
676After the initial configuration is done (and
677appropriate component labels are added with the
678.Fl I
679option), raid0 can be configured normally with:
680.Bd -literal -offset indent
681raidctl -c raid0.conf raid0
682.Ed
683.Pp
684When the RAID set is configured for the first time, it is
685necessary to initialize the component labels, and to initialize the
686parity on the RAID set.
687Initializing the component labels is done with:
688.Bd -literal -offset indent
689raidctl -I 112341 raid0
690.Ed
691.Pp
692where
693.Sq 112341
694is a user-specified serial number for the RAID set.
695This initialization step is
696.Em required
697for all RAID sets.
698As well, using different serial numbers between RAID sets is
699.Em strongly encouraged ,
700as using the same serial number for all RAID sets will only serve to
701decrease the usefulness of the component label checking.
702.Pp
703Initializing the RAID set is done via the
704.Fl i
705option.
706This initialization
707.Em MUST
708be done for
709.Em all
710RAID sets, since among other things it verifies that the parity (if
711any) on the RAID set is correct.
712Since this initialization may be quite time-consuming, the
713.Fl v
714option may also be used in conjunction with
715.Fl i :
716.Bd -literal -offset indent
717raidctl -iv raid0
718.Ed
719.Pp
720This will give more verbose output on the
721status of the initialization:
722.Bd -literal -offset indent
723Initiating re-write of parity
724Parity Re-write status:
725 10% |****                                   | ETA:    06:03 /
726.Ed
727.Pp
728The output provides a
729.Sq Percent Complete
730in both a numeric and graphical format, as well as an estimated time
731to completion of the operation.
732.Pp
733Since it is the parity that provides the
734.Sq redundancy
735part of RAID, it is critical that the parity be kept correct as much as possible.
736If the parity is not correct, then there is no
737guarantee that data will not be lost if a component fails.
738.Pp
739Once the parity is known to be correct, it is then safe to perform
740.Xr disklabel 8 ,
741.Xr newfs 8 ,
742or
743.Xr fsck 8
744on the device or its file systems, and then to mount the file systems
745for use.
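.Pp
For example (the partition letter used here is purely illustrative), a
first file system might be created and mounted with a sequence such as:
.Bd -literal -offset indent
disklabel raid0 \*[Gt] /tmp/label
vi /tmp/label
disklabel -R -r raid0 /tmp/label
newfs /dev/rraid0e
mount /dev/raid0e /mnt
.Ed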
746.Pp
747Under certain circumstances (e.g., the additional component has not
748arrived, or data is being migrated off a disk destined to become a
749component) it may be desirable to configure a RAID 1 set with only
750a single component.
751This can be achieved by using the word
752.Dq absent
753to indicate that a particular component is not present.
754In the following:
755.Bd -literal -offset indent
756START array
757# numRow numCol numSpare
7581 2 0
759
760START disks
761absent
762/dev/sd0e
763
764START layout
765# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
766128 1 1 1
767
768START queue
769fifo 100
770.Ed
771.Pp
772.Pa /dev/sd0e
773is the real component, and will be the second disk of a RAID 1 set.
774The first component is simply marked as being absent.
775Configuration (using
776.Fl C
777and
778.Fl I Ar 12345
779as above) proceeds normally, but initialization of the RAID set will
780have to wait until all physical components are present.
781After configuration, this set can be used normally, but will be operating
782in degraded mode.
783Once a second physical component is obtained, it can be hot-added,
784the existing data mirrored, and normal operation resumed.
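.Pp
For example, assuming the absent component shows up as
.Sq component0
and the new disk is
.Pa /dev/sd1e
(both names are illustrative), the disk could be hot-added and the
existing data mirrored onto it with:
.Bd -literal -offset indent
raidctl -a /dev/sd1e raid0
raidctl -F component0 raid0
.Ed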
785.Pp
786The size of the resulting RAID set will depend on the number of data
787components in the set.
788Space is automatically reserved for the component labels, and
789the actual amount of space used
790for data on a component will be rounded down to the largest possible
791multiple of the sectors per stripe unit (sectPerSU) value.
792Thus, the amount of space provided by the RAID set will be less
793than the sum of the size of the components.
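.Pp
As a rough illustration, assume each component is the 1800000-sector
RAID partition shown in the disklabel example above, and that 64
sectors per component are reserved for the component label (an
illustrative figure).
With a sectPerSU value of 32, a three-component RAID 5 set would then
provide:
.Bd -literal -offset indent
usable space per component: 1800000 - 64 = 1799936 sectors
                            (a multiple of sectPerSU = 32)
data space for the set:     2 x 1799936 = 3599872 sectors
.Ed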
794.Ss Maintenance of the RAID set
795After the parity has been initialized for the first time, the command:
796.Bd -literal -offset indent
797raidctl -p raid0
798.Ed
799.Pp
800can be used to check the current status of the parity.
801To check the parity and rebuild it if necessary (for example,
802after an unclean shutdown) the command:
803.Bd -literal -offset indent
804raidctl -P raid0
805.Ed
806.Pp
807is used.
808Note that re-writing the parity can be done while
809other operations on the RAID set are taking place (e.g., while doing a
810.Xr fsck 8
811on a file system on the RAID set).
812However: for maximum effectiveness of the RAID set, the parity should be
813known to be correct before any data on the set is modified.
814.Pp
815To see how the RAID set is doing, the following command can be used to
816show the RAID set's status:
817.Bd -literal -offset indent
818raidctl -s raid0
819.Ed
820.Pp
821The output will look something like:
822.Bd -literal -offset indent
823Components:
824           /dev/sd1e: optimal
825           /dev/sd2e: optimal
826           /dev/sd3e: optimal
827Spares:
828           /dev/sd4e: spare
829Component label for /dev/sd1e:
830   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
831   Version: 2 Serial Number: 13432 Mod Counter: 65
832   Clean: No Status: 0
833   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
834   RAID Level: 5  blocksize: 512 numBlocks: 1799936
835   Autoconfig: No
836   Last configured as: raid0
837Component label for /dev/sd2e:
838   Row: 0 Column: 1 Num Rows: 1 Num Columns: 3
839   Version: 2 Serial Number: 13432 Mod Counter: 65
840   Clean: No Status: 0
841   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
842   RAID Level: 5  blocksize: 512 numBlocks: 1799936
843   Autoconfig: No
844   Last configured as: raid0
845Component label for /dev/sd3e:
846   Row: 0 Column: 2 Num Rows: 1 Num Columns: 3
847   Version: 2 Serial Number: 13432 Mod Counter: 65
848   Clean: No Status: 0
849   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
850   RAID Level: 5  blocksize: 512 numBlocks: 1799936
851   Autoconfig: No
852   Last configured as: raid0
853Parity status: clean
854Reconstruction is 100% complete.
855Parity Re-write is 100% complete.
856Copyback is 100% complete.
857.Ed
858.Pp
859This indicates that all is well with the RAID set.
860Of importance here are the component lines which read
861.Sq optimal ,
862and the
863.Sq Parity status
864line.
865.Sq Parity status: clean
866indicates that the parity is up-to-date for this RAID set,
867whether the RAID set is in redundant or degraded mode.
868.Sq Parity status: DIRTY
869indicates that it is not known if the parity information is
870consistent with the data, and that the parity information needs
871to be checked.
872Note that if there are file systems open on the RAID set,
873the individual components will not be
874.Sq clean
875but the set as a whole can still be clean.
876.Pp
877To check the component label of
878.Pa /dev/sd1e ,
879the following is used:
880.Bd -literal -offset indent
881raidctl -g /dev/sd1e raid0
882.Ed
883.Pp
884The output of this command will look something like:
885.Bd -literal -offset indent
886Component label for /dev/sd1e:
887   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
888   Version: 2 Serial Number: 13432 Mod Counter: 65
889   Clean: No Status: 0
890   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
891   RAID Level: 5  blocksize: 512 numBlocks: 1799936
892   Autoconfig: No
893   Last configured as: raid0
894.Ed
895.Ss Dealing with Component Failures
896If for some reason
897(perhaps to test reconstruction) it is necessary to pretend a drive
898has failed, the following will perform that function:
899.Bd -literal -offset indent
900raidctl -f /dev/sd2e raid0
901.Ed
902.Pp
903The system will then be performing all operations in degraded mode,
904where missing data is re-computed from existing data and the parity.
905In this case, obtaining the status of raid0 will return (in part):
906.Bd -literal -offset indent
907Components:
908           /dev/sd1e: optimal
909           /dev/sd2e: failed
910           /dev/sd3e: optimal
911Spares:
912           /dev/sd4e: spare
913.Ed
914.Pp
915Note that the use of
916.Fl f
917does not start a reconstruction.
918To both fail the disk and start a reconstruction, the
919.Fl F
920option must be used:
921.Bd -literal -offset indent
922raidctl -F /dev/sd2e raid0
923.Ed
924.Pp
925The
926.Fl f
927option may be used first, and then the
928.Fl F
929option used later, on the same disk, if desired.
930Immediately after the reconstruction is started, the status will report:
931.Bd -literal -offset indent
932Components:
933           /dev/sd1e: optimal
934           /dev/sd2e: reconstructing
935           /dev/sd3e: optimal
936Spares:
937           /dev/sd4e: used_spare
938[...]
939Parity status: clean
940Reconstruction is 10% complete.
941Parity Re-write is 100% complete.
942Copyback is 100% complete.
943.Ed
944.Pp
945This indicates that a reconstruction is in progress.
946To find out how the reconstruction is progressing, the
947.Fl S
948option may be used.
949This will indicate the progress in terms of the
950percentage of the reconstruction that is completed.
951When the reconstruction is finished the
952.Fl s
953option will show:
954.Bd -literal -offset indent
955Components:
956           /dev/sd1e: optimal
957           /dev/sd2e: spared
958           /dev/sd3e: optimal
959Spares:
960           /dev/sd4e: used_spare
961[...]
962Parity status: clean
963Reconstruction is 100% complete.
964Parity Re-write is 100% complete.
965Copyback is 100% complete.
966.Ed
967.Pp
968At this point there are at least two options.
969First, if
970.Pa /dev/sd2e
971is known to be good (i.e., the failure was either caused by
972.Fl f
973or
974.Fl F ,
975or the failed disk was replaced), then a copyback of the data can
976be initiated with the
977.Fl B
978option.
979In this example, this would copy the entire contents of
980.Pa /dev/sd4e
981to
982.Pa /dev/sd2e .
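The copyback itself would be started with:
.Bd -literal -offset indent
raidctl -B raid0
.Ed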
983Once the copyback procedure is complete, the
984status of the device would be (in part):
985.Bd -literal -offset indent
986Components:
987           /dev/sd1e: optimal
988           /dev/sd2e: optimal
989           /dev/sd3e: optimal
990Spares:
991           /dev/sd4e: spare
992.Ed
993.Pp
994and the system is back to normal operation.
995.Pp
996The second option after the reconstruction is to simply use
997.Pa /dev/sd4e
998in place of
999.Pa /dev/sd2e
1000in the configuration file.
1001For example, the configuration file (in part) might now look like:
1002.Bd -literal -offset indent
1003START array
10041 3 0
1005
1006START disks
1007/dev/sd1e
1008/dev/sd4e
1009/dev/sd3e
1010.Ed
1011.Pp
1012This can be done as
1013.Pa /dev/sd4e
1014is completely interchangeable with
1015.Pa /dev/sd2e
1016at this point.
1017Note that extreme care must be taken when
1018changing the order of the drives in a configuration.
1019This is one of the few instances where the devices and/or
1020their orderings can be changed without loss of data!
1021In general, the ordering of components in a configuration file should
1022.Em never
1023be changed.
1024.Pp
1025If a component fails and there are no hot spares
1026available on-line, the status of the RAID set might (in part) look like:
1027.Bd -literal -offset indent
1028Components:
1029           /dev/sd1e: optimal
1030           /dev/sd2e: failed
1031           /dev/sd3e: optimal
1032No spares.
1033.Ed
1034.Pp
1035In this case there are a number of options.
1036The first option is to add a hot spare using:
1037.Bd -literal -offset indent
1038raidctl -a /dev/sd4e raid0
1039.Ed
1040.Pp
1041After the hot add, the status would then be:
1042.Bd -literal -offset indent
1043Components:
1044           /dev/sd1e: optimal
1045           /dev/sd2e: failed
1046           /dev/sd3e: optimal
1047Spares:
1048           /dev/sd4e: spare
1049.Ed
1050.Pp
1051Reconstruction could then take place using
1052.Fl F
1053as described above.
1054.Pp
1055A second option is to rebuild directly onto
1056.Pa /dev/sd2e .
1057Once the disk containing
1058.Pa /dev/sd2e
1059has been replaced, one can simply use:
1060.Bd -literal -offset indent
1061raidctl -R /dev/sd2e raid0
1062.Ed
1063.Pp
1064to rebuild the
1065.Pa /dev/sd2e
1066component.
1067As the rebuilding is in progress, the status will be:
1068.Bd -literal -offset indent
1069Components:
1070           /dev/sd1e: optimal
1071           /dev/sd2e: reconstructing
1072           /dev/sd3e: optimal
1073No spares.
1074.Ed
1075.Pp
1076and when completed, will be:
1077.Bd -literal -offset indent
1078Components:
1079           /dev/sd1e: optimal
1080           /dev/sd2e: optimal
1081           /dev/sd3e: optimal
1082No spares.
1083.Ed
1084.Pp
1085In circumstances where a particular component is completely
1086unavailable after a reboot, a special component name will be used to
1087indicate the missing component.
1088For example:
1089.Bd -literal -offset indent
1090Components:
1091           /dev/sd2e: optimal
1092          component1: failed
1093No spares.
1094.Ed
1095.Pp
1096indicates that the second component of this RAID set was not detected
1097at all by the auto-configuration code.
1098The name
1099.Sq component1
1100can be used anywhere a normal component name would be used.
1101For example, to add a hot spare to the above set, and rebuild to that hot
1102spare, the following could be done:
1103.Bd -literal -offset indent
1104raidctl -a /dev/sd3e raid0
1105raidctl -F component1 raid0
1106.Ed
1107.Pp
1108at which point the data missing from
1109.Sq component1
1110would be reconstructed onto
1111.Pa /dev/sd3e .
1112.Pp
1113When more than one component is marked as
1114.Sq failed
1115due to a non-component hardware failure (e.g., loss of power to two
1116components, adapter problems, termination problems, or cabling issues) it
1117is quite possible to recover the data on the RAID set.
1118The first thing to be aware of is that the first disk to fail will
1119almost certainly be out-of-sync with the remainder of the array.
1120If any IO was performed between the time the first component is considered
1121.Sq failed
1122and when the second component is considered
1123.Sq failed ,
1124then the first component to fail will
1125.Em not
1126contain correct data, and should be ignored.
1127When the second component is marked as failed, however, the RAID device will
1128(currently) panic the system.
1129At this point the data on the RAID set
1130(not including the first failed component) is still self-consistent,
1131and will be in no worse state of repair than had the power gone out in
1132the middle of a write to a file system on a non-RAID device.
1133The problem, however, is that the component labels may now have 3 different
1134.Sq modification counters
1135(one value on the first component that failed, one value on the second
1136component that failed, and a third value on the remaining components).
1137In such a situation, the RAID set will not autoconfigure,
1138and can only be forcibly re-configured
1139with the
1140.Fl C
1141option.
1142To recover the RAID set, one must first remedy whatever physical
1143problem caused the multiple-component failure.
1144After that is done, the RAID set can be restored by forcibly
1145configuring the RAID set
1146.Em without
1147the component that failed first.
1148For example, if
1149.Pa /dev/sd1e
1150and
1151.Pa /dev/sd2e
1152fail (in that order) in a RAID set of the following configuration:
1153.Bd -literal -offset indent
1154START array
11551 4 0
1156
1157START disks
1158/dev/sd1e
1159/dev/sd2e
1160/dev/sd3e
1161/dev/sd4e
1162
1163START layout
1164# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
116564 1 1 5
1166
1167START queue
1168fifo 100
1170.Ed
1171.Pp
1172then the following configuration (say "recover_raid0.conf")
1173.Bd -literal -offset indent
1174START array
11751 4 0
1176
1177START disks
1178absent
1179/dev/sd2e
1180/dev/sd3e
1181/dev/sd4e
1182
1183START layout
1184# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
118564 1 1 5
1186
1187START queue
1188fifo 100
1189.Ed
1190.Pp
1191can be used with
1192.Bd -literal -offset indent
1193raidctl -C recover_raid0.conf raid0
1194.Ed
1195.Pp
1196to force the configuration of raid0.
1197A
1198.Bd -literal -offset indent
1199raidctl -I 12345 raid0
1200.Ed
1201.Pp
1202will be required in order to synchronize the component labels.
1203At this point the file systems on the RAID set can then be checked and
1204corrected.
1205To complete the re-construction of the RAID set,
1206.Pa /dev/sd1e
1207is simply hot-added back into the array, and reconstructed
1208as described earlier.
1209.Ss RAID on RAID
1210RAID sets can be layered to create more complex and much larger RAID sets.
1211A RAID 0 set, for example, could be constructed from four RAID 5 sets.
1212The following configuration file shows such a setup:
1213.Bd -literal -offset indent
1214START array
1215# numRow numCol numSpare
12161 4 0
1217
1218START disks
1219/dev/raid1e
1220/dev/raid2e
1221/dev/raid3e
1222/dev/raid4e
1223
1224START layout
1225# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
1226128 1 1 0
1227
1228START queue
1229fifo 100
1230.Ed
1231.Pp
1232A similar configuration file might be used for a RAID 0 set
1233constructed from components on RAID 1 sets.
1234In such a configuration, the mirroring provides a high degree
1235of redundancy, while the striping provides additional speed benefits.
1236.Ss Auto-configuration and Root on RAID
1237RAID sets can also be auto-configured at boot.
1238To make a set auto-configurable,
1239simply prepare the RAID set as above, and then do a:
1240.Bd -literal -offset indent
1241raidctl -A yes raid0
1242.Ed
1243.Pp
1244to turn on auto-configuration for that set.
1245To turn off auto-configuration, use:
1246.Bd -literal -offset indent
1247raidctl -A no raid0
1248.Ed
1249.Pp
1250RAID sets which are auto-configurable will be configured before the
1251root file system is mounted.
1252These RAID sets are thus available for
1253use as a root file system, or for any other file system.
1254A primary advantage of using the auto-configuration is that RAID components
1255become more independent of the disks they reside on.
1256For example, SCSI IDs can change, but auto-configured sets will always be
1257configured correctly, even if the SCSI IDs of the component disks
1258have become scrambled.
1259.Pp
1260Having a system's root file system
1261.Pq Pa /
1262on a RAID set is also allowed, with the
1263.Sq a
1264partition of such a RAID set being used for
1265.Pa / .
1266To use raid0a as the root file system, simply use:
1267.Bd -literal -offset indent
1268raidctl -A forceroot raid0
1269.Ed
1270.Pp
1271To return raid0a to being just an auto-configuring set, simply use the
1272.Fl A Ic yes
1273arguments.
1274.Pp
1275Note that kernels can only be directly read from RAID 1 components on
1276architectures that support that
1277.Pq currently alpha, amd64, i386, pmax, sandpoint, sparc, sparc64, and vax .
1278On those architectures, the
1279.Dv FS_RAID
1280file system is recognized by the bootblocks, and will properly load the
1281kernel directly from a RAID 1 component.
1282For other architectures, or to support the root file system
1283on other RAID sets, some other mechanism must be used to get a kernel booting.
1284For example, a small partition containing only the secondary boot-blocks
1285and an alternate kernel (or two) could be used.
1286Once a kernel is booting, however, and an auto-configuring RAID set is
1287found that is eligible to be root, then that RAID set will be
1288auto-configured and used as the root device.
1289If two or more RAID sets claim to be root devices, then the
1290user will be prompted to select the root device.
1291At this time, RAID 0, 1, 4, and 5 sets are all supported as root devices.
1292.Pp
1293A typical RAID 1 setup with root on RAID might be as follows:
1294.Bl -enum
1295.It
1296wd0a - a small partition, which contains a complete, bootable, basic
1297.Nx
1298installation.
1299.It
1300wd1a - also contains a complete, bootable, basic
1301.Nx
1302installation.
1303.It
1304wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
1305.It
1306wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
1307swap space.
1308.It
1309wd0g and wd1g - a RAID 1 set, raid2, used for
1310.Pa /usr ,
1311.Pa /home ,
1312or other data, if desired.
1313.It
1314wd0h and wd1h - a RAID 1 set, raid3, if desired.
1315.El
1316.Pp
1317RAID sets raid0, raid1, and raid2 are all marked as auto-configurable.
1318raid0 is marked as being a root file system.
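In this layout, those settings might be applied with commands such as:
.Bd -literal -offset indent
raidctl -A forceroot raid0
raidctl -A yes raid1
raidctl -A yes raid2
.Ed
.Pp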
1319When new kernels are installed, the kernel is not only copied to
1320.Pa / ,
1321but also to wd0a and wd1a.
1322The kernel on wd0a is required, since that
1323is the kernel the system boots from.
1324The kernel on wd1a is also
1325required, since that will be the kernel used should wd0 fail.
1326The important point here is to have redundant copies of the kernel
1327available, in the event that one of the drives fails.
1328.Pp
1329There is no requirement that the root file system be on the same disk
1330as the kernel.
1331For example, obtaining the kernel from wd0a, while using
1332sd0e and sd1e for raid0 and the root file system, is fine.
1333It
1334.Em is
1335critical, however, that there be multiple kernels available, in the
1336event of media failure.
1337.Pp
1338Multi-layered RAID devices (such as a RAID 0 set made
1339up of RAID 1 sets) are
1340.Em not
1341supported as root devices or auto-configurable devices at this point.
1342(Multi-layered RAID devices
1343.Em are
1344supported in general, however, as mentioned earlier.)
1345Note that in order to enable component auto-detection and
1346auto-configuration of RAID devices, the line:
1347.Bd -literal -offset indent
1348options    RAID_AUTOCONFIG
1349.Ed
1350.Pp
1351must be in the kernel configuration file.
1352See
1353.Xr raid 4
1354for more details.
1355.Ss Swapping on RAID
1356A RAID device can be used as a swap device.
1357In order to ensure that a RAID device used as a swap device
1358is correctly unconfigured when the system is shut down or rebooted,
1359it is recommended that the line
1360.Bd -literal -offset indent
1361swapoff=YES
1362.Ed
1363.Pp
1364be added to
1365.Pa /etc/rc.conf .
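.Pp
The swap space itself is enabled in the usual way, for example with an
.Xr fstab 5
entry along the following lines (the partition letter is illustrative):
.Bd -literal -offset indent
/dev/raid1b none swap sw 0 0
.Ed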
1366.Ss Unconfiguration
1367The final operation performed by
1368.Nm
1369is to unconfigure a
1370.Xr raid 4
1371device.
1372This is accomplished via a simple:
1373.Bd -literal -offset indent
1374raidctl -u raid0
1375.Ed
1376.Pp
1377at which point the device is ready to be reconfigured.
1378.Ss Performance Tuning
1379Selection of the various parameter values which result in the best
1380performance can be quite tricky, and often requires a bit of
1381trial-and-error to get those values most appropriate for a given system.
1382A whole range of factors come into play, including:
1383.Bl -enum
1384.It
1385Types of components (e.g., SCSI vs. IDE) and their bandwidth
1386.It
1387Types of controller cards and their bandwidth
1388.It
1389Distribution of components among controllers
1390.It
1391IO bandwidth
1392.It
1393File system access patterns
1394.It
1395CPU speed
1396.El
1397.Pp
1398As with most performance tuning, benchmarking under real-life loads
1399may be the only way to measure expected performance.
1400Understanding some of the underlying technology is also useful in tuning.
1401The goal of this section is to provide pointers to those parameters which may
1402make significant differences in performance.
1403.Pp
1404For a RAID 1 set, a SectPerSU value of 64 or 128 is typically sufficient.
1405Since data in a RAID 1 set is arranged in a linear
1406fashion on each component, selecting an appropriate stripe size is
1407somewhat less critical than it is for a RAID 5 set.
1408However: a stripe size that is too small will cause large IOs to be
1409broken up into a number of smaller ones, hurting performance.
1410At the same time, a large stripe size may cause problems with
1411concurrent accesses to stripes, which may also affect performance.
1412Thus values in the range of 32 to 128 are often the most effective.
1413.Pp
1414Tuning RAID 5 sets is trickier.
1415In the best case, IO is presented to the RAID set one stripe at a time.
1416Since the entire stripe is available at the beginning of the IO,
1417the parity of that stripe can be calculated before the stripe is written,
1418and then the stripe data and parity can be written in parallel.
1419When the amount of data being written is less than a full stripe worth, the
1420.Sq small write
1421problem occurs.
1422Since a
1423.Sq small write
1424means only a portion of the stripe on the components is going to
1425change, the data (and parity) on the components must be updated
1426slightly differently.
1427First, the
1428.Sq old parity
1429and
1430.Sq old data
1431must be read from the components.
1432Then the new parity is constructed,
1433using the new data to be written, and the old data and old parity.
1434Finally, the new data and new parity are written.
1435All this extra data shuffling results in a serious loss of performance,
1436and is typically 2 to 4 times slower than a full stripe write (or read).
1437To combat this problem in the real world, it may be useful
1438to ensure that stripe sizes are small enough that a
1439.Sq large IO
1440from the system will use exactly one large stripe write.
1441As is seen later, there are some file system dependencies
1442which may come into play here as well.
1443.Pp
1444Since the size of a
1445.Sq large IO
1446is often (currently) only 32K or 64K, on a 5-drive RAID 5 set it may
1447be desirable to select a SectPerSU value of 16 blocks (8K) or 32
1448blocks (16K).
1449Since there are 4 data stripe units per stripe, the maximum
1450data per stripe is 64 blocks (32K) or 128 blocks (64K).
1451Again, empirical measurement will provide the best indicators of which
1452values will yield better performance.
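.Pp
For instance, a layout section for such a 5-drive RAID 5 set with a
SectPerSU value of 16 (an illustrative starting point, to be confirmed
by benchmarking) would look like:
.Bd -literal -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
16 1 1 5
.Ed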
1453.Pp
1454The parameters used for the file system are also critical to good performance.
1455For
1456.Xr newfs 8 ,
1457for example, increasing the block size to 32K or 64K may improve
1458performance dramatically.
1459As well, changing the cylinders-per-group
1460parameter from 16 to 32 or higher is often not only necessary for
1461larger file systems, but may also have positive performance implications.
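.Pp
For example, a file system with a 32K block size might be created with
a command along the lines of the following; consult
.Xr newfs 8
for the exact option names, including the option which controls the
cylinders-per-group value:
.Bd -literal -offset indent
newfs -b 32768 /dev/rraid0e
.Ed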
1462.Ss Summary
1463Despite the length of this man-page, configuring a RAID set is a
1464relatively straightforward process.
1465All that is needed is to follow these steps:
1466.Bl -enum
1467.It
1468Use
1469.Xr disklabel 8
1470to create the components (of type RAID).
1471.It
1472Construct a RAID configuration file: e.g.,
1473.Pa raid0.conf
1474.It
1475Configure the RAID set with:
1476.Bd -literal -offset indent
1477raidctl -C raid0.conf raid0
1478.Ed
1479.Pp
1480.It
1481Initialize the component labels with:
1482.Bd -literal -offset indent
1483raidctl -I 123456 raid0
1484.Ed
1485.Pp
1486.It
1487Initialize other important parts of the set with:
1488.Bd -literal -offset indent
1489raidctl -i raid0
1490.Ed
1491.Pp
1492.It
1493Get the default label for the RAID set:
1494.Bd -literal -offset indent
1495disklabel raid0 \*[Gt] /tmp/label
1496.Ed
1497.Pp
1498.It
1499Edit the label:
1500.Bd -literal -offset indent
1501vi /tmp/label
1502.Ed
1503.Pp
1504.It
1505Put the new label on the RAID set:
1506.Bd -literal -offset indent
1507disklabel -R -r raid0 /tmp/label
1508.Ed
1509.Pp
1510.It
1511Create the file system:
1512.Bd -literal -offset indent
1513newfs /dev/rraid0e
1514.Ed
1515.Pp
1516.It
1517Mount the file system:
1518.Bd -literal -offset indent
1519mount /dev/raid0e /mnt
1520.Ed
1521.Pp
1522.It
1523Use:
1524.Bd -literal -offset indent
1525raidctl -c raid0.conf raid0
1526.Ed
1527.Pp
1528to re-configure the RAID set the next time it is needed, or put
1529.Pa raid0.conf
1530into
1531.Pa /etc
1532where the RAID set will automatically be configured by the
1533.Pa /etc/rc.d
1534scripts.
1535.El
1536.Sh SEE ALSO
1537.Xr ccd 4 ,
1538.Xr raid 4 ,
1539.Xr rc 8
1540.Sh HISTORY
1541RAIDframe is a framework for rapid prototyping of RAID structures
1542developed by the folks at the Parallel Data Laboratory at Carnegie
1543Mellon University (CMU).
1544A more complete description of the internals and functionality of
1545RAIDframe is found in the paper "RAIDframe: A Rapid Prototyping Tool
1546for RAID Systems", by William V. Courtright II, Garth Gibson, Mark
1547Holland, LeAnn Neal Reilly, and Jim Zelenka, and published by the
1548Parallel Data Laboratory of Carnegie Mellon University.
1549.Pp
1550The
1551.Nm
1552command first appeared as a program in CMU's RAIDframe v1.1 distribution.
1553This version of
1554.Nm
1555is a complete re-write, and first appeared in
1556.Nx 1.4 .
1557.Sh COPYRIGHT
1558.Bd -literal
1559The RAIDframe Copyright is as follows:
1560
1561Copyright (c) 1994-1996 Carnegie-Mellon University.
1562All rights reserved.
1563
1564Permission to use, copy, modify and distribute this software and
1565its documentation is hereby granted, provided that both the copyright
1566notice and this permission notice appear in all copies of the
1567software, derivative works or modified versions, and any portions
1568thereof, and that both notices appear in supporting documentation.
1569
1570CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
1571CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
1572FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
1573
1574Carnegie Mellon requests users of this software to return to
1575
1576 Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
1577 School of Computer Science
1578 Carnegie Mellon University
1579 Pittsburgh PA 15213-3890
1580
1581any improvements or extensions that they make and grant Carnegie the
1582rights to redistribute these changes.
1583.Ed
1584.Sh WARNINGS
1585Certain RAID levels (1, 4, 5, 6, and others) can protect against some
1586data loss due to component failure.
1587However, the loss of two components of a RAID 4 or 5 system,
1588or the loss of a single component of a RAID 0 system will
1589result in the entire file system being lost.
1590RAID is
1591.Em NOT
1592a substitute for good backup practices.
1593.Pp
1594Recomputation of parity
1595.Em MUST
1596be performed whenever there is a chance that it may have been compromised.
1597This includes after system crashes, or before a RAID
1598device has been used for the first time.
1599Failure to keep parity correct will be catastrophic should a
1600component ever fail \(em it is better to use RAID 0 and get the
1601additional space and speed than it is to use parity but
1602not keep it correct.
1603At least with RAID 0 there is no perception of increased data security.
1604.Pp
1605When replacing a failed component of a RAID set, it is a good
1606idea to zero out the first 64 blocks of the new component to ensure the
1607RAIDframe driver doesn't erroneously detect a component label in the
1608new component.
1609This is particularly true on
1610.Em RAID 1
1611sets: a failed RAID 1 installation has at most one correct
1612component label, and when choosing which component label to use to
1613configure the set, the RAIDframe driver picks the one with the
1614highest serial number and modification counter as the
1615authoritative source.
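.Pp
For example, assuming 512-byte blocks and a replacement component named
.Pa /dev/sd2e
(an illustrative name), the first 64 blocks could be zeroed with:
.Bd -literal -offset indent
dd if=/dev/zero of=/dev/rsd2e bs=512 count=64
.Ed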
1616.Sh BUGS
1617Hot-spare removal is currently not available.
1618