1.\"     $NetBSD: raidctl.8,v 1.51 2007/12/14 07:24:01 explorer Exp $
2.\"
3.\" Copyright (c) 1998, 2002 The NetBSD Foundation, Inc.
4.\" All rights reserved.
5.\"
6.\" This code is derived from software contributed to The NetBSD Foundation
7.\" by Greg Oster
8.\"
9.\" Redistribution and use in source and binary forms, with or without
10.\" modification, are permitted provided that the following conditions
11.\" are met:
12.\" 1. Redistributions of source code must retain the above copyright
13.\"    notice, this list of conditions and the following disclaimer.
14.\" 2. Redistributions in binary form must reproduce the above copyright
15.\"    notice, this list of conditions and the following disclaimer in the
16.\"    documentation and/or other materials provided with the distribution.
17.\" 3. All advertising materials mentioning features or use of this software
18.\"    must display the following acknowledgement:
19.\"        This product includes software developed by the NetBSD
20.\"        Foundation, Inc. and its contributors.
21.\" 4. Neither the name of The NetBSD Foundation nor the names of its
22.\"    contributors may be used to endorse or promote products derived
23.\"    from this software without specific prior written permission.
24.\"
25.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
26.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
27.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
28.\" PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
29.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
30.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
31.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
32.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
33.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
34.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
35.\" POSSIBILITY OF SUCH DAMAGE.
36.\"
37.\"
38.\" Copyright (c) 1995 Carnegie-Mellon University.
39.\" All rights reserved.
40.\"
41.\" Author: Mark Holland
42.\"
43.\" Permission to use, copy, modify and distribute this software and
44.\" its documentation is hereby granted, provided that both the copyright
45.\" notice and this permission notice appear in all copies of the
46.\" software, derivative works or modified versions, and any portions
47.\" thereof, and that both notices appear in supporting documentation.
48.\"
49.\" CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
50.\" CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
51.\" FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
52.\"
53.\" Carnegie Mellon requests users of this software to return to
54.\"
55.\"  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
56.\"  School of Computer Science
57.\"  Carnegie Mellon University
58.\"  Pittsburgh PA 15213-3890
59.\"
60.\" any improvements or extensions that they make and grant Carnegie the
61.\" rights to redistribute these changes.
62.\"
63.Dd August 6, 2007
64.Dt RAIDCTL 8
65.Os
66.Sh NAME
67.Nm raidctl
68.Nd configuration utility for the RAIDframe disk driver
69.Sh SYNOPSIS
70.Nm
71.Op Fl v
72.Fl a Ar component Ar dev
73.Nm
74.Op Fl v
75.Fl A Op yes | no | root
76.Ar dev
77.Nm
78.Op Fl v
79.Fl B Ar dev
80.Nm
81.Op Fl v
82.Fl c Ar config_file Ar dev
83.Nm
84.Op Fl v
85.Fl C Ar config_file Ar dev
86.Nm
87.Op Fl v
88.Fl f Ar component Ar dev
89.Nm
90.Op Fl v
91.Fl F Ar component Ar dev
92.Nm
93.Op Fl v
94.Fl g Ar component Ar dev
95.Nm
96.Op Fl v
97.Fl G Ar dev
98.Nm
99.Op Fl v
100.Fl i Ar dev
101.Nm
102.Op Fl v
103.Fl I Ar serial_number Ar dev
104.Nm
105.Op Fl v
106.Fl p Ar dev
107.Nm
108.Op Fl v
109.Fl P Ar dev
110.Nm
111.Op Fl v
112.Fl r Ar component Ar dev
113.Nm
114.Op Fl v
115.Fl R Ar component Ar dev
116.Nm
117.Op Fl v
118.Fl s Ar dev
119.Nm
120.Op Fl v
121.Fl S Ar dev
122.Nm
123.Op Fl v
124.Fl u Ar dev
125.Sh DESCRIPTION
126.Nm
127is the user-land control program for
128.Xr raid 4 ,
129the RAIDframe disk device.
130.Nm
131is primarily used to dynamically configure and unconfigure RAIDframe disk
132devices.
133For more information about the RAIDframe disk device, see
134.Xr raid 4 .
135.Pp
136This document assumes the reader has at least rudimentary knowledge of
137RAID and RAID concepts.
138.Pp
139The command-line options for
140.Nm
141are as follows:
142.Bl -tag -width indent
143.It Fl a Ar component Ar dev
144Add
145.Ar component
146as a hot spare for the device
147.Ar dev .
148Component labels (which identify the location of a given
149component within a particular RAID set) are automatically added to the
150hot spare after it has been used and are not required for
151.Ar component
152before it is used.
153.It Fl A Ic yes Ar dev
154Make the RAID set auto-configurable.
155The RAID set will be automatically configured at boot
.Em before
157the root file system is mounted.
158Note that all components of the set must be of type
159.Dv RAID
160in the disklabel.
161.It Fl A Ic no Ar dev
162Turn off auto-configuration for the RAID set.
163.It Fl A Ic root Ar dev
164Make the RAID set auto-configurable, and also mark the set as being
165eligible to be the root partition.
166A RAID set configured this way will
.Em override
168the use of the boot disk as the root device.
169All components of the set must be of type
170.Dv RAID
171in the disklabel.
172Note that only certain architectures
173.Pq currently alpha, i386, pmax, sparc, sparc64, and vax
174support booting a kernel directly from a RAID set.
175.It Fl B Ar dev
176Initiate a copyback of reconstructed data from a spare disk to
177its original disk.
178This is performed after a component has failed,
179and the failed drive has been reconstructed onto a spare drive.
180.It Fl c Ar config_file Ar dev
181Configure the RAIDframe device
182.Ar dev
183according to the configuration given in
184.Ar config_file .
185A description of the contents of
186.Ar config_file
187is given later.
188.It Fl C Ar config_file Ar dev
189As for
190.Fl c ,
191but forces the configuration to take place.
192This is required the first time a RAID set is configured.
193.It Fl f Ar component Ar dev
194This marks the specified
195.Ar component
196as having failed, but does not initiate a reconstruction of that component.
197.It Fl F Ar component Ar dev
198Fails the specified
199.Ar component
of the device, and immediately begins a reconstruction of the failed
201disk onto an available hot spare.
202This is one of the mechanisms used to start
203the reconstruction process if a component does have a hardware failure.
204.It Fl g Ar component Ar dev
205Get the component label for the specified component.
206.It Fl G Ar dev
207Generate the configuration of the RAIDframe device in a format suitable for
208use with the
209.Fl c
210or
211.Fl C
212options.
213.It Fl i Ar dev
214Initialize the RAID device.
215In particular, (re-)write the parity on the selected device.
216This
217.Em MUST
218be done for
219.Em all
220RAID sets before the RAID device is labeled and before
221file systems are created on the RAID device.
222.It Fl I Ar serial_number Ar dev
223Initialize the component labels on each component of the device.
224.Ar serial_number
225is used as one of the keys in determining whether a
particular set of components belongs to the same RAID set.
227While not strictly enforced, different serial numbers should be used for
228different RAID sets.
229This step
230.Em MUST
231be performed when a new RAID set is created.
232.It Fl p Ar dev
233Check the status of the parity on the RAID set.
234Displays a status message,
235and returns successfully if the parity is up-to-date.
236.It Fl P Ar dev
237Check the status of the parity on the RAID set, and initialize
238(re-write) the parity if the parity is not known to be up-to-date.
239This is normally used after a system crash (and before a
240.Xr fsck 8 )
241to ensure the integrity of the parity.
242.It Fl r Ar component Ar dev
243Remove the spare disk specified by
244.Ar component
245from the set of available spare components.
246.It Fl R Ar component Ar dev
247Fails the specified
248.Ar component ,
249if necessary, and immediately begins a reconstruction back to
250.Ar component .
251This is useful for reconstructing back onto a component after
252it has been replaced following a failure.
253.It Fl s Ar dev
254Display the status of the RAIDframe device for each of the components
255and spares.
256.It Fl S Ar dev
257Check the status of parity re-writing, component reconstruction, and
258component copyback.
259The output indicates the amount of progress
260achieved in each of these areas.
261.It Fl u Ar dev
262Unconfigure the RAIDframe device.
263.It Fl v
264Be more verbose.
265For operations such as reconstructions, parity
266re-writing, and copybacks, provide a progress indicator.
267.El
268.Pp
269The device used by
270.Nm
271is specified by
272.Ar dev .
273.Ar dev
274may be either the full name of the device, e.g.,
275.Pa /dev/rraid0d ,
276for the i386 architecture, or
277.Pa /dev/rraid0c
278for many others, or just simply
279.Pa raid0
280(for
281.Pa /dev/rraid0[cd] ) .
282It is recommended that the partitions used to represent the
283RAID device are not used for file systems.
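.Pp
For example, on the i386 architecture the following two commands are
equivalent:
.Bd -literal -offset indent
raidctl -s raid0
raidctl -s /dev/rraid0d
.Ed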
284.Ss Configuration file
285The format of the configuration file is complex, and
286only an abbreviated treatment is given here.
287In the configuration files, a
288.Sq #
289indicates the beginning of a comment.
290.Pp
291There are 4 required sections of a configuration file, and 2
292optional sections.
293Each section begins with a
294.Sq START ,
295followed by the section name,
296and the configuration parameters associated with that section.
297The first section is the
298.Sq array
299section, and it specifies
300the number of rows, columns, and spare disks in the RAID set.
301For example:
302.Bd -literal -offset indent
303START array
3041 3 0
305.Ed
306.Pp
307indicates an array with 1 row, 3 columns, and 0 spare disks.
308Note that although multi-dimensional arrays may be specified, they are
309.Em NOT
310supported in the driver.
311.Pp
312The second section, the
313.Sq disks
314section, specifies the actual components of the device.
315For example:
316.Bd -literal -offset indent
317START disks
318/dev/sd0e
319/dev/sd1e
320/dev/sd2e
321.Ed
322.Pp
323specifies the three component disks to be used in the RAID device.
324If any of the specified drives cannot be found when the RAID device is
325configured, then they will be marked as
326.Sq failed ,
327and the system will operate in degraded mode.
328Note that it is
329.Em imperative
330that the order of the components in the configuration file does not
331change between configurations of a RAID device.
332Changing the order of the components will result in data loss
333if the set is configured with the
334.Fl C
335option.
336In normal circumstances, the RAID set will not configure if only
337.Fl c
338is specified, and the components are out-of-order.
339.Pp
340The next section, which is the
341.Sq spare
342section, is optional, and, if present, specifies the devices to be used as
343.Sq hot spares
344\(em devices which are on-line,
345but are not actively used by the RAID driver unless
one of the main components fails.
347A simple
348.Sq spare
349section might be:
350.Bd -literal -offset indent
351START spare
352/dev/sd3e
353.Ed
354.Pp
355for a configuration with a single spare component.
356If no spare drives are to be used in the configuration, then the
357.Sq spare
358section may be omitted.
359.Pp
360The next section is the
361.Sq layout
362section.
363This section describes the general layout parameters for the RAID device,
364and provides such information as
365sectors per stripe unit,
366stripe units per parity unit,
367stripe units per reconstruction unit,
368and the parity configuration to use.
369This section might look like:
370.Bd -literal -offset indent
371START layout
372# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
37332 1 1 5
374.Ed
375.Pp
376The sectors per stripe unit specifies, in blocks, the interleave
377factor; i.e., the number of contiguous sectors to be written to each
378component for a single stripe.
379Appropriate selection of this value (32 in this example)
380is the subject of much research in RAID architectures.
381The stripe units per parity unit and
382stripe units per reconstruction unit are normally each set to 1.
383While certain values above 1 are permitted, a discussion of valid
384values and the consequences of using anything other than 1 are outside
385the scope of this document.
386The last value in this section (5 in this example)
387indicates the parity configuration desired.
388Valid entries include:
389.Bl -tag -width inde
390.It 0
391RAID level 0.
392No parity, only simple striping.
393.It 1
394RAID level 1.
395Mirroring.
396The parity is the mirror.
397.It 4
398RAID level 4.
399Striping across components, with parity stored on the last component.
400.It 5
401RAID level 5.
402Striping across components, parity distributed across all components.
403.El
404.Pp
405There are other valid entries here, including those for Even-Odd
406parity, RAID level 5 with rotated sparing, Chained declustering,
407and Interleaved declustering, but as of this writing the code for
408those parity operations has not been tested with
409.Nx .
410.Pp
411The next required section is the
412.Sq queue
413section.
414This is most often specified as:
415.Bd -literal -offset indent
416START queue
417fifo 100
418.Ed
419.Pp
420where the queuing method is specified as fifo (first-in, first-out),
421and the size of the per-component queue is limited to 100 requests.
422Other queuing methods may also be specified, but a discussion of them
423is beyond the scope of this document.
424.Pp
425The final section, the
426.Sq debug
427section, is optional.
428For more details on this the reader is referred to
429the RAIDframe documentation discussed in the
430.Sx HISTORY
431section.
432.Pp
433See
434.Sx EXAMPLES
435for a more complete configuration file example.
436.Sh FILES
437.Bl -tag -width /dev/XXrXraidX -compact
438.It Pa /dev/{,r}raid*
439.Cm raid
440device special files.
441.El
442.Sh EXAMPLES
It is highly recommended that, before using the RAID driver for real
file systems, the system administrator(s) become quite familiar
445with the use of
446.Nm ,
447and that they understand how the component reconstruction process works.
448The examples in this section will focus on configuring a
449number of different RAID sets of varying degrees of redundancy.
450By working through these examples, administrators should be able to
451develop a good feel for how to configure a RAID set, and how to
452initiate reconstruction of failed components.
453.Pp
454In the following examples
455.Sq raid0
456will be used to denote the RAID device.
457Depending on the architecture,
458.Pa /dev/rraid0c
459or
460.Pa /dev/rraid0d
461may be used in place of
462.Pa raid0 .
463.Ss Initialization and Configuration
464The initial step in configuring a RAID set is to identify the components
465that will be used in the RAID set.
466All components should be the same size.
467Each component should have a disklabel type of
468.Dv FS_RAID ,
469and a typical disklabel entry for a RAID component might look like:
470.Bd -literal -offset indent
471f:  1800000  200495     RAID              # (Cyl.  405*- 4041*)
472.Ed
473.Pp
474While
475.Dv FS_BSDFFS
476will also work as the component type, the type
477.Dv FS_RAID
478is preferred for RAIDframe use, as it is required for features such as
479auto-configuration.
480As part of the initial configuration of each RAID set,
481each component will be given a
482.Sq component label .
483A
484.Sq component label
485contains important information about the component, including a
486user-specified serial number, the row and column of that component in
487the RAID set, the redundancy level of the RAID set, a
488.Sq modification counter ,
489and whether the parity information (if any) on that
490component is known to be correct.
491Component labels are an integral part of the RAID set,
492since they are used to ensure that components
493are configured in the correct order, and used to keep track of other
494vital information about the RAID set.
495Component labels are also required for the auto-detection
496and auto-configuration of RAID sets at boot time.
497For a component label to be considered valid, that
498particular component label must be in agreement with the other
499component labels in the set.
500For example, the serial number,
501.Sq modification counter ,
502number of rows and number of columns must all be in agreement.
503If any of these are different, then the component is
504not considered to be part of the set.
505See
506.Xr raid 4
507for more information about component labels.
508.Pp
509Once the components have been identified, and the disks have
510appropriate labels,
511.Nm
512is then used to configure the
513.Xr raid 4
514device.
515To configure the device, a configuration file which looks something like:
516.Bd -literal -offset indent
517START array
518# numRow numCol numSpare
5191 3 1
520
521START disks
522/dev/sd1e
523/dev/sd2e
524/dev/sd3e
525
526START spare
527/dev/sd4e
528
529START layout
530# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
53132 1 1 5
532
533START queue
534fifo 100
535.Ed
536.Pp
537is created in a file.
538The above configuration file specifies a RAID 5
539set consisting of the components
540.Pa /dev/sd1e ,
541.Pa /dev/sd2e ,
542and
543.Pa /dev/sd3e ,
544with
545.Pa /dev/sd4e
546available as a
547.Sq hot spare
548in case one of the three main drives should fail.
549A RAID 0 set would be specified in a similar way:
550.Bd -literal -offset indent
551START array
552# numRow numCol numSpare
5531 4 0
554
555START disks
556/dev/sd10e
557/dev/sd11e
558/dev/sd12e
559/dev/sd13e
560
561START layout
562# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
56364 1 1 0
564
565START queue
566fifo 100
567.Ed
568.Pp
569In this case, devices
570.Pa /dev/sd10e ,
571.Pa /dev/sd11e ,
572.Pa /dev/sd12e ,
573and
574.Pa /dev/sd13e
575are the components that make up this RAID set.
576Note that there are no hot spares for a RAID 0 set,
577since there is no way to recover data if any of the components fail.
578.Pp
579For a RAID 1 (mirror) set, the following configuration might be used:
580.Bd -literal -offset indent
581START array
582# numRow numCol numSpare
5831 2 0
584
585START disks
586/dev/sd20e
587/dev/sd21e
588
589START layout
590# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
591128 1 1 1
592
593START queue
594fifo 100
595.Ed
596.Pp
597In this case,
598.Pa /dev/sd20e
599and
600.Pa /dev/sd21e
601are the two components of the mirror set.
602While no hot spares have been specified in this
603configuration, they easily could be, just as they were specified in
604the RAID 5 case above.
605Note as well that RAID 1 sets are currently limited to only 2 components.
606At present, n-way mirroring is not possible.
607.Pp
608The first time a RAID set is configured, the
609.Fl C
610option must be used:
611.Bd -literal -offset indent
612raidctl -C raid0.conf raid0
613.Ed
614.Pp
615where
616.Pa raid0.conf
617is the name of the RAID configuration file.
618The
619.Fl C
620forces the configuration to succeed, even if any of the component
621labels are incorrect.
622The
623.Fl C
624option should not be used lightly in
625situations other than initial configurations, as if
626the system is refusing to configure a RAID set, there is probably a
627very good reason for it.
628After the initial configuration is done (and
629appropriate component labels are added with the
630.Fl I
631option) then raid0 can be configured normally with:
632.Bd -literal -offset indent
633raidctl -c raid0.conf raid0
634.Ed
635.Pp
636When the RAID set is configured for the first time, it is
637necessary to initialize the component labels, and to initialize the
638parity on the RAID set.
639Initializing the component labels is done with:
640.Bd -literal -offset indent
641raidctl -I 112341 raid0
642.Ed
643.Pp
644where
645.Sq 112341
646is a user-specified serial number for the RAID set.
647This initialization step is
648.Em required
649for all RAID sets.
650As well, using different serial numbers between RAID sets is
651.Em strongly encouraged ,
652as using the same serial number for all RAID sets will only serve to
653decrease the usefulness of the component label checking.
654.Pp
655Initializing the RAID set is done via the
656.Fl i
657option.
658This initialization
659.Em MUST
660be done for
661.Em all
662RAID sets, since among other things it verifies that the parity (if
663any) on the RAID set is correct.
664Since this initialization may be quite time-consuming, the
665.Fl v
666option may be also used in conjunction with
667.Fl i :
668.Bd -literal -offset indent
669raidctl -iv raid0
670.Ed
671.Pp
672This will give more verbose output on the
673status of the initialization:
674.Bd -literal -offset indent
675Initiating re-write of parity
676Parity Re-write status:
677 10% |****                                   | ETA:    06:03 /
678.Ed
679.Pp
680The output provides a
681.Sq Percent Complete
682in both a numeric and graphical format, as well as an estimated time
683to completion of the operation.
684.Pp
685Since it is the parity that provides the
686.Sq redundancy
part of RAID, it is critical that the parity be kept correct as much as possible.
688If the parity is not correct, then there is no
689guarantee that data will not be lost if a component fails.
690.Pp
691Once the parity is known to be correct, it is then safe to perform
692.Xr disklabel 8 ,
693.Xr newfs 8 ,
694or
695.Xr fsck 8
696on the device or its file systems, and then to mount the file systems
697for use.
698.Pp
699Under certain circumstances (e.g., the additional component has not
700arrived, or data is being migrated off of a disk destined to become a
701component) it may be desirable to configure a RAID 1 set with only
702a single component.
703This can be achieved by using the word
704.Dq absent
705to indicate that a particular component is not present.
706In the following:
707.Bd -literal -offset indent
708START array
709# numRow numCol numSpare
7101 2 0
711
712START disks
713absent
714/dev/sd0e
715
716START layout
717# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
718128 1 1 1
719
720START queue
721fifo 100
722.Ed
723.Pp
724.Pa /dev/sd0e
725is the real component, and will be the second disk of a RAID 1 set.
726The first component is simply marked as being absent.
727Configuration (using
728.Fl C
729and
730.Fl I Ar 12345
731as above) proceeds normally, but initialization of the RAID set will
732have to wait until all physical components are present.
733After configuration, this set can be used normally, but will be operating
734in degraded mode.
735Once a second physical component is obtained, it can be hot-added,
736the existing data mirrored, and normal operation resumed.
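.Pp
A sketch of that final step, assuming the new disk appears as
.Pa /dev/sd1e
and the absent component shows up in the status output as
.Sq component0 :
.Bd -literal -offset indent
raidctl -a /dev/sd1e raid0
raidctl -F component0 raid0
.Ed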
737.Pp
738The size of the resulting RAID set will depend on the number of data
739components in the set.
740Space is automatically reserved for the component labels, and
741the actual amount of space used
742for data on a component will be rounded down to the largest possible
743multiple of the sectors per stripe unit (sectPerSU) value.
744Thus, the amount of space provided by the RAID set will be less
745than the sum of the size of the components.
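.Pp
As a rough sketch, using the 1800000-sector components from the disklabel
example above with a sectPerSU value of 32:
.Bd -literal -offset indent
1800000 - 64 (label reserve)  = 1799936 data sectors per component
1799936 / 32 (sectPerSU)      = 56248 stripe units, no further rounding
2 data components x 1799936   = 3599872 sectors in the RAID 5 set
.Ed
.Pp
The per-component figure of 1799936 is the value reported as
.Sq numBlocks
in the component labels shown in the status output below.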
746.Ss Maintenance of the RAID set
747After the parity has been initialized for the first time, the command:
748.Bd -literal -offset indent
749raidctl -p raid0
750.Ed
751.Pp
752can be used to check the current status of the parity.
To check the parity and rebuild it if necessary (for example,
754after an unclean shutdown) the command:
755.Bd -literal -offset indent
756raidctl -P raid0
757.Ed
758.Pp
759is used.
760Note that re-writing the parity can be done while
761other operations on the RAID set are taking place (e.g., while doing a
762.Xr fsck 8
763on a file system on the RAID set).
764However: for maximum effectiveness of the RAID set, the parity should be
765known to be correct before any data on the set is modified.
766.Pp
767To see how the RAID set is doing, the following command can be used to
768show the RAID set's status:
769.Bd -literal -offset indent
770raidctl -s raid0
771.Ed
772.Pp
773The output will look something like:
774.Bd -literal -offset indent
775Components:
776           /dev/sd1e: optimal
777           /dev/sd2e: optimal
778           /dev/sd3e: optimal
779Spares:
780           /dev/sd4e: spare
781Component label for /dev/sd1e:
782   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
783   Version: 2 Serial Number: 13432 Mod Counter: 65
784   Clean: No Status: 0
785   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
786   RAID Level: 5  blocksize: 512 numBlocks: 1799936
787   Autoconfig: No
788   Last configured as: raid0
789Component label for /dev/sd2e:
790   Row: 0 Column: 1 Num Rows: 1 Num Columns: 3
791   Version: 2 Serial Number: 13432 Mod Counter: 65
792   Clean: No Status: 0
793   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
794   RAID Level: 5  blocksize: 512 numBlocks: 1799936
795   Autoconfig: No
796   Last configured as: raid0
797Component label for /dev/sd3e:
798   Row: 0 Column: 2 Num Rows: 1 Num Columns: 3
799   Version: 2 Serial Number: 13432 Mod Counter: 65
800   Clean: No Status: 0
801   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
802   RAID Level: 5  blocksize: 512 numBlocks: 1799936
803   Autoconfig: No
804   Last configured as: raid0
805Parity status: clean
806Reconstruction is 100% complete.
807Parity Re-write is 100% complete.
808Copyback is 100% complete.
809.Ed
810.Pp
811This indicates that all is well with the RAID set.
812Of importance here are the component lines which read
813.Sq optimal ,
814and the
815.Sq Parity status
816line.
817.Sq Parity status: clean
818indicates that the parity is up-to-date for this RAID set,
whether the RAID set is in redundant or degraded mode.
820.Sq Parity status: DIRTY
821indicates that it is not known if the parity information is
822consistent with the data, and that the parity information needs
823to be checked.
824Note that if there are file systems open on the RAID set,
825the individual components will not be
826.Sq clean
827but the set as a whole can still be clean.
828.Pp
829To check the component label of
830.Pa /dev/sd1e ,
831the following is used:
832.Bd -literal -offset indent
833raidctl -g /dev/sd1e raid0
834.Ed
835.Pp
836The output of this command will look something like:
837.Bd -literal -offset indent
838Component label for /dev/sd1e:
839   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
840   Version: 2 Serial Number: 13432 Mod Counter: 65
841   Clean: No Status: 0
842   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
843   RAID Level: 5  blocksize: 512 numBlocks: 1799936
844   Autoconfig: No
845   Last configured as: raid0
846.Ed
847.Ss Dealing with Component Failures
848If for some reason
849(perhaps to test reconstruction) it is necessary to pretend a drive
850has failed, the following will perform that function:
851.Bd -literal -offset indent
852raidctl -f /dev/sd2e raid0
853.Ed
854.Pp
855The system will then be performing all operations in degraded mode,
856where missing data is re-computed from existing data and the parity.
857In this case, obtaining the status of raid0 will return (in part):
858.Bd -literal -offset indent
859Components:
860           /dev/sd1e: optimal
861           /dev/sd2e: failed
862           /dev/sd3e: optimal
863Spares:
864           /dev/sd4e: spare
865.Ed
866.Pp
867Note that with the use of
868.Fl f
869a reconstruction has not been started.
870To both fail the disk and start a reconstruction, the
871.Fl F
872option must be used:
873.Bd -literal -offset indent
874raidctl -F /dev/sd2e raid0
875.Ed
876.Pp
877The
878.Fl f
879option may be used first, and then the
880.Fl F
881option used later, on the same disk, if desired.
882Immediately after the reconstruction is started, the status will report:
883.Bd -literal -offset indent
884Components:
885           /dev/sd1e: optimal
886           /dev/sd2e: reconstructing
887           /dev/sd3e: optimal
888Spares:
889           /dev/sd4e: used_spare
890[...]
891Parity status: clean
892Reconstruction is 10% complete.
893Parity Re-write is 100% complete.
894Copyback is 100% complete.
895.Ed
896.Pp
897This indicates that a reconstruction is in progress.
898To find out how the reconstruction is progressing the
899.Fl S
900option may be used.
901This will indicate the progress in terms of the
902percentage of the reconstruction that is completed.
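For example:
.Bd -literal -offset indent
raidctl -S raid0
.Ed
.Pp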
903When the reconstruction is finished the
904.Fl s
905option will show:
906.Bd -literal -offset indent
907Components:
908           /dev/sd1e: optimal
909           /dev/sd2e: spared
910           /dev/sd3e: optimal
911Spares:
912           /dev/sd4e: used_spare
913[...]
914Parity status: clean
915Reconstruction is 100% complete.
916Parity Re-write is 100% complete.
917Copyback is 100% complete.
918.Ed
919.Pp
920At this point there are at least two options.
921First, if
922.Pa /dev/sd2e
923is known to be good (i.e., the failure was either caused by
924.Fl f
925or
926.Fl F ,
927or the failed disk was replaced), then a copyback of the data can
928be initiated with the
929.Fl B
930option.
931In this example, this would copy the entire contents of
932.Pa /dev/sd4e
933to
934.Pa /dev/sd2e .
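The copyback itself is started with:
.Bd -literal -offset indent
raidctl -B raid0
.Ed
.Pp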
935Once the copyback procedure is complete, the
936status of the device would be (in part):
937.Bd -literal -offset indent
938Components:
939           /dev/sd1e: optimal
940           /dev/sd2e: optimal
941           /dev/sd3e: optimal
942Spares:
943           /dev/sd4e: spare
944.Ed
945.Pp
946and the system is back to normal operation.
947.Pp
948The second option after the reconstruction is to simply use
949.Pa /dev/sd4e
950in place of
951.Pa /dev/sd2e
952in the configuration file.
953For example, the configuration file (in part) might now look like:
954.Bd -literal -offset indent
955START array
9561 3 0
957
958START drives
959/dev/sd1e
960/dev/sd4e
961/dev/sd3e
962.Ed
963.Pp
964This can be done as
965.Pa /dev/sd4e
966is completely interchangeable with
967.Pa /dev/sd2e
968at this point.
969Note that extreme care must be taken when
970changing the order of the drives in a configuration.
971This is one of the few instances where the devices and/or
972their orderings can be changed without loss of data!
973In general, the ordering of components in a configuration file should
974.Em never
975be changed.
976.Pp
977If a component fails and there are no hot spares
978available on-line, the status of the RAID set might (in part) look like:
979.Bd -literal -offset indent
980Components:
981           /dev/sd1e: optimal
982           /dev/sd2e: failed
983           /dev/sd3e: optimal
984No spares.
985.Ed
986.Pp
987In this case there are a number of options.
988The first option is to add a hot spare using:
989.Bd -literal -offset indent
990raidctl -a /dev/sd4e raid0
991.Ed
992.Pp
993After the hot add, the status would then be:
994.Bd -literal -offset indent
995Components:
996           /dev/sd1e: optimal
997           /dev/sd2e: failed
998           /dev/sd3e: optimal
999Spares:
1000           /dev/sd4e: spare
1001.Ed
1002.Pp
1003Reconstruction could then take place using
1004.Fl F
as described above.
1006.Pp
1007A second option is to rebuild directly onto
1008.Pa /dev/sd2e .
1009Once the disk containing
1010.Pa /dev/sd2e
1011has been replaced, one can simply use:
1012.Bd -literal -offset indent
1013raidctl -R /dev/sd2e raid0
1014.Ed
1015.Pp
1016to rebuild the
1017.Pa /dev/sd2e
1018component.
1019As the rebuilding is in progress, the status will be:
1020.Bd -literal -offset indent
1021Components:
1022           /dev/sd1e: optimal
1023           /dev/sd2e: reconstructing
1024           /dev/sd3e: optimal
1025No spares.
1026.Ed
1027.Pp
1028and when completed, will be:
1029.Bd -literal -offset indent
1030Components:
1031           /dev/sd1e: optimal
1032           /dev/sd2e: optimal
1033           /dev/sd3e: optimal
1034No spares.
1035.Ed
1036.Pp
1037In circumstances where a particular component is completely
1038unavailable after a reboot, a special component name will be used to
1039indicate the missing component.
1040For example:
1041.Bd -literal -offset indent
1042Components:
1043           /dev/sd2e: optimal
1044          component1: failed
1045No spares.
1046.Ed
1047.Pp
1048indicates that the second component of this RAID set was not detected
1049at all by the auto-configuration code.
1050The name
1051.Sq component1
1052can be used anywhere a normal component name would be used.
1053For example, to add a hot spare to the above set, and rebuild to that hot
1054spare, the following could be done:
1055.Bd -literal -offset indent
1056raidctl -a /dev/sd3e raid0
1057raidctl -F component1 raid0
1058.Ed
1059.Pp
1060at which point the data missing from
1061.Sq component1
1062would be reconstructed onto
1063.Pa /dev/sd3e .
1064.Pp
1065When more than one component is marked as
1066.Sq failed
1067due to a non-component hardware failure (e.g., loss of power to two
1068components, adapter problems, termination problems, or cabling issues) it
1069is quite possible to recover the data on the RAID set.
1070The first thing to be aware of is that the first disk to fail will
1071almost certainly be out-of-sync with the remainder of the array.
1072If any IO was performed between the time the first component is considered
1073.Sq failed
1074and when the second component is considered
1075.Sq failed ,
1076then the first component to fail will
1077.Em not
1078contain correct data, and should be ignored.
1079When the second component is marked as failed, however, the RAID device will
1080(currently) panic the system.
1081At this point the data on the RAID set
1082(not including the first failed component) is still self consistent,
1083and will be in no worse state of repair than had the power gone out in
1084the middle of a write to a file system on a non-RAID device.
1085The problem, however, is that the component labels may now have 3 different
1086.Sq modification counters
1087(one value on the first component that failed, one value on the second
1088component that failed, and a third value on the remaining components).
1089In such a situation, the RAID set will not autoconfigure,
1090and can only be forcibly re-configured
1091with the
1092.Fl C
1093option.
1094To recover the RAID set, one must first remedy whatever physical
1095problem caused the multiple-component failure.
1096After that is done, the RAID set can be restored by forcibly
configuring the RAID set
1098.Em without
1099the component that failed first.
1100For example, if
1101.Pa /dev/sd1e
1102and
1103.Pa /dev/sd2e
1104fail (in that order) in a RAID set of the following configuration:
1105.Bd -literal -offset indent
1106START array
11071 4 0
1108
1109START drives
1110/dev/sd1e
1111/dev/sd2e
1112/dev/sd3e
1113/dev/sd4e
1114
1115START layout
1116# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
111764 1 1 5
1118
1119START queue
1120fifo 100
1121
1122.Ed
1123.Pp
1124then the following configuration (say "recover_raid0.conf")
1125.Bd -literal -offset indent
1126START array
11271 4 0
1128
1129START drives
1130/dev/sd6e
1131/dev/sd2e
1132/dev/sd3e
1133/dev/sd4e
1134
1135START layout
1136# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
113764 1 1 5
1138
1139START queue
1140fifo 100
1141.Ed
1142.Pp
1143(where
1144.Pa /dev/sd6e
1145has no physical device) can be used with
1146.Bd -literal -offset indent
1147raidctl -C recover_raid0.conf raid0
1148.Ed
1149.Pp
1150to force the configuration of raid0.
1151A
1152.Bd -literal -offset indent
1153raidctl -I 12345 raid0
1154.Ed
1155.Pp
1156will be required in order to synchronize the component labels.
1157At this point the file systems on the RAID set can then be checked and
1158corrected.
1159To complete the re-construction of the RAID set,
1160.Pa /dev/sd1e
1161is simply hot-added back into the array, and reconstructed
1162as described earlier.
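.Pp
A sketch of that final step, assuming the missing component still appears
in the status output as
.Pa /dev/sd6e :
.Bd -literal -offset indent
raidctl -a /dev/sd1e raid0
raidctl -F /dev/sd6e raid0
.Ed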
1163.Ss RAID on RAID
1164RAID sets can be layered to create more complex and much larger RAID sets.
1165A RAID 0 set, for example, could be constructed from four RAID 5 sets.
1166The following configuration file shows such a setup:
1167.Bd -literal -offset indent
1168START array
1169# numRow numCol numSpare
11701 4 0
1171
1172START disks
1173/dev/raid1e
1174/dev/raid2e
1175/dev/raid3e
1176/dev/raid4e
1177
1178START layout
1179# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
1180128 1 1 0
1181
1182START queue
1183fifo 100
1184.Ed
1185.Pp
1186A similar configuration file might be used for a RAID 0 set
1187constructed from components on RAID 1 sets.
1188In such a configuration, the mirroring provides a high degree
1189of redundancy, while the striping provides additional speed benefits.
1190.Ss Auto-configuration and Root on RAID
1191RAID sets can also be auto-configured at boot.
1192To make a set auto-configurable,
1193simply prepare the RAID set as above, and then do a:
1194.Bd -literal -offset indent
1195raidctl -A yes raid0
1196.Ed
1197.Pp
1198to turn on auto-configuration for that set.
1199To turn off auto-configuration, use:
1200.Bd -literal -offset indent
1201raidctl -A no raid0
1202.Ed
1203.Pp
1204RAID sets which are auto-configurable will be configured before the
1205root file system is mounted.
1206These RAID sets are thus available for
1207use as a root file system, or for any other file system.
1208A primary advantage of using the auto-configuration is that RAID components
1209become more independent of the disks they reside on.
1210For example, SCSI ID's can change, but auto-configured sets will always be
1211configured correctly, even if the SCSI ID's of the component disks
1212have become scrambled.
1213.Pp
1214Having a system's root file system
1215.Pq Pa /
1216on a RAID set is also allowed, with the
1217.Sq a
1218partition of such a RAID set being used for
1219.Pa / .
1220To use raid0a as the root file system, simply use:
1221.Bd -literal -offset indent
1222raidctl -A root raid0
1223.Ed
1224.Pp
To return raid0 to being just an auto-configuring set, simply use the
.Fl A Ic yes
arguments.
1228.Pp
1229Note that kernels can only be directly read from RAID 1 components on
1230architectures that support that
1231.Pq currently alpha, i386, pmax, sparc, sparc64, and vax .
1232On those architectures, the
1233.Dv FS_RAID
1234file system is recognized by the bootblocks, and will properly load the
1235kernel directly from a RAID 1 component.
1236For other architectures, or to support the root file system
1237on other RAID sets, some other mechanism must be used to get a kernel booting.
1238For example, a small partition containing only the secondary boot-blocks
1239and an alternate kernel (or two) could be used.
Once a kernel is booting, however, and an auto-configuring RAID set is
1241found that is eligible to be root, then that RAID set will be
1242auto-configured and used as the root device.
1243If two or more RAID sets claim to be root devices, then the
1244user will be prompted to select the root device.
1245At this time, RAID 0, 1, 4, and 5 sets are all supported as root devices.
1246.Pp
1247A typical RAID 1 setup with root on RAID might be as follows:
1248.Bl -enum
1249.It
1250wd0a - a small partition, which contains a complete, bootable, basic
1251.Nx
1252installation.
1253.It
1254wd1a - also contains a complete, bootable, basic
1255.Nx
1256installation.
1257.It
1258wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
1259.It
1260wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
1261swap space.
1262.It
1263wd0g and wd1g - a RAID 1 set, raid2, used for
1264.Pa /usr ,
1265.Pa /home ,
1266or other data, if desired.
1267.It
1268wd0h and wd1h - a RAID 1 set, raid3, if desired.
1269.El
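.Pp
A configuration file for raid0 in this layout might look like the
following sketch (device names and the stripe size should be adjusted to
the actual hardware):
.Bd -literal -offset indent
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd0e
/dev/wd1e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1

START queue
fifo 100
.Ed
.Pp
The raid1, raid2, and raid3 sets would be configured analogously on the
corresponding partitions.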
1270.Pp
1271RAID sets raid0, raid1, and raid2 are all marked as auto-configurable.
1272raid0 is marked as being a root file system.
1273When new kernels are installed, the kernel is not only copied to
1274.Pa / ,
1275but also to wd0a and wd1a.
1276The kernel on wd0a is required, since that
1277is the kernel the system boots from.
1278The kernel on wd1a is also
1279required, since that will be the kernel used should wd0 fail.
1280The important point here is to have redundant copies of the kernel
available, in the event that one of the drives fails.
1282.Pp
1283There is no requirement that the root file system be on the same disk
1284as the kernel.
1285For example, obtaining the kernel from wd0a, and using
sd0e and sd1e for raid0 and the root file system, is fine.
1287It
1288.Em is
1289critical, however, that there be multiple kernels available, in the
1290event of media failure.
1291.Pp
1292Multi-layered RAID devices (such as a RAID 0 set made
1293up of RAID 1 sets) are
1294.Em not
1295supported as root devices or auto-configurable devices at this point.
1296(Multi-layered RAID devices
1297.Em are
1298supported in general, however, as mentioned earlier.)
1299Note that in order to enable component auto-detection and
1300auto-configuration of RAID devices, the line:
1301.Bd -literal -offset indent
1302options    RAID_AUTOCONFIG
1303.Ed
1304.Pp
1305must be in the kernel configuration file.
1306See
1307.Xr raid 4
1308for more details.
1309.Ss Swapping on RAID
1310A RAID device can be used as a swap device.
1311In order to ensure that a RAID device used as a swap device
is correctly unconfigured when the system is shut down or rebooted,
1313it is recommended that the line
1314.Bd -literal -offset indent
1315swapoff=YES
1316.Ed
1317.Pp
1318be added to
1319.Pa /etc/rc.conf .
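.Pp
The swap device itself is then listed in
.Pa /etc/fstab
like any other; a sketch, assuming the
.Sq b
partition of raid1 from the layout described above is used for swap:
.Bd -literal -offset indent
/dev/raid1b  none  swap  sw  0 0
.Ed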
1320.Ss Unconfiguration
1321The final operation performed by
1322.Nm
1323is to unconfigure a
1324.Xr raid 4
1325device.
1326This is accomplished via a simple:
1327.Bd -literal -offset indent
1328raidctl -u raid0
1329.Ed
1330.Pp
1331at which point the device is ready to be reconfigured.
1332.Ss Performance Tuning
1333Selection of the various parameter values which result in the best
1334performance can be quite tricky, and often requires a bit of
1335trial-and-error to get those values most appropriate for a given system.
1336A whole range of factors come into play, including:
1337.Bl -enum
1338.It
1339Types of components (e.g., SCSI vs. IDE) and their bandwidth
1340.It
1341Types of controller cards and their bandwidth
1342.It
1343Distribution of components among controllers
1344.It
1345IO bandwidth
1346.It
1347file system access patterns
1348.It
1349CPU speed
1350.El
1351.Pp
1352As with most performance tuning, benchmarking under real-life loads
1353may be the only way to measure expected performance.
1354Understanding some of the underlying technology is also useful in tuning.
1355The goal of this section is to provide pointers to those parameters which may
1356make significant differences in performance.
1357.Pp
1358For a RAID 1 set, a SectPerSU value of 64 or 128 is typically sufficient.
1359Since data in a RAID 1 set is arranged in a linear
1360fashion on each component, selecting an appropriate stripe size is
1361somewhat less critical than it is for a RAID 5 set.
1362However: a stripe size that is too small will cause large IO's to be
1363broken up into a number of smaller ones, hurting performance.
1364At the same time, a large stripe size may cause problems with
1365concurrent accesses to stripes, which may also affect performance.
1366Thus values in the range of 32 to 128 are often the most effective.
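For example, a RAID 1
.Sq layout
section using a value from the middle of that range might be:
.Bd -literal -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
64 1 1 1
.Ed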
1367.Pp
1368Tuning RAID 5 sets is trickier.
1369In the best case, IO is presented to the RAID set one stripe at a time.
1370Since the entire stripe is available at the beginning of the IO,
1371the parity of that stripe can be calculated before the stripe is written,
1372and then the stripe data and parity can be written in parallel.
1373When the amount of data being written is less than a full stripe worth, the
1374.Sq small write
1375problem occurs.
1376Since a
1377.Sq small write
1378means only a portion of the stripe on the components is going to
1379change, the data (and parity) on the components must be updated
1380slightly differently.
1381First, the
1382.Sq old parity
1383and
1384.Sq old data
1385must be read from the components.
1386Then the new parity is constructed,
1387using the new data to be written, and the old data and old parity.
1388Finally, the new data and new parity are written.
1389All this extra data shuffling results in a serious loss of performance,
1390and is typically 2 to 4 times slower than a full stripe write (or read).
1391To combat this problem in the real world, it may be useful
1392to ensure that stripe sizes are small enough that a
1393.Sq large IO
1394from the system will use exactly one large stripe write.
1395As is seen later, there are some file system dependencies
1396which may come into play here as well.
1397.Pp
1398Since the size of a
1399.Sq large IO
1400is often (currently) only 32K or 64K, on a 5-drive RAID 5 set it may
1401be desirable to select a SectPerSU value of 16 blocks (8K) or 32
1402blocks (16K).
Since there are 4 data stripe units per stripe, the maximum
1404data per stripe is 64 blocks (32K) or 128 blocks (64K).
1405Again, empirical measurement will provide the best indicators of which
values will yield better performance.
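.Pp
As a concrete sketch for such a 5-drive RAID 5 set (4 data components plus
parity), a
.Sq layout
section of:
.Bd -literal -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
16 1 1 5
.Ed
.Pp
gives 16 sectors (8K) of data per component per stripe, or 4 x 16 = 64
sectors (32K) of data per full stripe, so that a 32K write from the file
system maps onto exactly one full-stripe write.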
1407.Pp
1408The parameters used for the file system are also critical to good performance.
1409For
1410.Xr newfs 8 ,
1411for example, increasing the block size to 32K or 64K may improve
1412performance dramatically.
1413As well, changing the cylinders-per-group
1414parameter from 16 to 32 or higher is often not only necessary for
1415larger file systems, but may also have positive performance implications.
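.Pp
For example, a larger block size can be requested when the file system is
created (a sketch only; see
.Xr newfs 8
for the complete set of tuning options):
.Bd -literal -offset indent
newfs -b 65536 -f 8192 /dev/rraid0e
.Ed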
1416.Ss Summary
Despite the length of this man-page, configuring a RAID set is a
relatively straightforward process.
All that needs to be done is to follow these steps:
1420.Bl -enum
1421.It
1422Use
1423.Xr disklabel 8
1424to create the components (of type RAID).
1425.It
1426Construct a RAID configuration file: e.g.,
1427.Pa raid0.conf
1428.It
1429Configure the RAID set with:
1430.Bd -literal -offset indent
1431raidctl -C raid0.conf raid0
1432.Ed
1433.Pp
1434.It
1435Initialize the component labels with:
1436.Bd -literal -offset indent
1437raidctl -I 123456 raid0
1438.Ed
1439.Pp
1440.It
1441Initialize other important parts of the set with:
1442.Bd -literal -offset indent
1443raidctl -i raid0
1444.Ed
1445.Pp
1446.It
1447Get the default label for the RAID set:
1448.Bd -literal -offset indent
1449disklabel raid0 \*[Gt] /tmp/label
1450.Ed
1451.Pp
1452.It
1453Edit the label:
1454.Bd -literal -offset indent
1455vi /tmp/label
1456.Ed
1457.Pp
1458.It
1459Put the new label on the RAID set:
1460.Bd -literal -offset indent
1461disklabel -R -r raid0 /tmp/label
1462.Ed
1463.Pp
1464.It
1465Create the file system:
1466.Bd -literal -offset indent
1467newfs /dev/rraid0e
1468.Ed
1469.Pp
1470.It
1471Mount the file system:
1472.Bd -literal -offset indent
1473mount /dev/raid0e /mnt
1474.Ed
1475.Pp
1476.It
1477Use:
1478.Bd -literal -offset indent
1479raidctl -c raid0.conf raid0
1480.Ed
1481.Pp
to re-configure the RAID set the next time it is needed, or put
1483.Pa raid0.conf
1484into
1485.Pa /etc
1486where it will automatically be started by the
1487.Pa /etc/rc.d
1488scripts.
1489.El
1490.Sh SEE ALSO
1491.Xr ccd 4 ,
1492.Xr raid 4 ,
1493.Xr rc 8
1494.Sh HISTORY
1495RAIDframe is a framework for rapid prototyping of RAID structures
1496developed by the folks at the Parallel Data Laboratory at Carnegie
1497Mellon University (CMU).
1498A more complete description of the internals and functionality of
1499RAIDframe is found in the paper "RAIDframe: A Rapid Prototyping Tool
1500for RAID Systems", by William V. Courtright II, Garth Gibson, Mark
1501Holland, LeAnn Neal Reilly, and Jim Zelenka, and published by the
1502Parallel Data Laboratory of Carnegie Mellon University.
1503.Pp
1504The
1505.Nm
1506command first appeared as a program in CMU's RAIDframe v1.1 distribution.
1507This version of
1508.Nm
1509is a complete re-write, and first appeared in
1510.Nx 1.4 .
1511.Sh COPYRIGHT
1512.Bd -literal
1513The RAIDframe Copyright is as follows:
1514
1515Copyright (c) 1994-1996 Carnegie-Mellon University.
1516All rights reserved.
1517
1518Permission to use, copy, modify and distribute this software and
1519its documentation is hereby granted, provided that both the copyright
1520notice and this permission notice appear in all copies of the
1521software, derivative works or modified versions, and any portions
1522thereof, and that both notices appear in supporting documentation.
1523
1524CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
1525CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
1526FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
1527
1528Carnegie Mellon requests users of this software to return to
1529
1530 Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
1531 School of Computer Science
1532 Carnegie Mellon University
1533 Pittsburgh PA 15213-3890
1534
1535any improvements or extensions that they make and grant Carnegie the
1536rights to redistribute these changes.
1537.Ed
1538.Sh WARNINGS
1539Certain RAID levels (1, 4, 5, 6, and others) can protect against some
1540data loss due to component failure.
1541However the loss of two components of a RAID 4 or 5 system,
1542or the loss of a single component of a RAID 0 system will
1543result in the entire file system being lost.
1544RAID is
1545.Em NOT
1546a substitute for good backup practices.
1547.Pp
1548Recomputation of parity
1549.Em MUST
1550be performed whenever there is a chance that it may have been compromised.
1551This includes after system crashes, or before a RAID
1552device has been used for the first time.
1553Failure to keep parity correct will be catastrophic should a
1554component ever fail \(em it is better to use RAID 0 and get the
1555additional space and speed, than it is to use parity, but
1556not keep the parity correct.
1557At least with RAID 0 there is no perception of increased data security.
1558.Sh BUGS
1559Hot-spare removal is currently not available.
1560