.\"     $NetBSD: raidctl.8,v 1.24 2001/07/10 01:30:52 lukem Exp $
.\"
.\" Copyright (c) 1998 The NetBSD Foundation, Inc.
.\" All rights reserved.
.\"
.\" This code is derived from software contributed to The NetBSD Foundation
.\" by Greg Oster
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. All advertising materials mentioning features or use of this software
.\"    must display the following acknowledgement:
.\"        This product includes software developed by the NetBSD
.\"        Foundation, Inc. and its contributors.
.\" 4. Neither the name of The NetBSD Foundation nor the names of its
.\"    contributors may be used to endorse or promote products derived
.\"    from this software without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
.\" PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.\"
.\" Copyright (c) 1995 Carnegie-Mellon University.
.\" All rights reserved.
.\"
.\" Author: Mark Holland
.\"
.\" Permission to use, copy, modify and distribute this software and
.\" its documentation is hereby granted, provided that both the copyright
.\" notice and this permission notice appear in all copies of the
.\" software, derivative works or modified versions, and any portions
.\" thereof, and that both notices appear in supporting documentation.
.\"
.\" CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
.\" CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
.\" FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
.\"
.\" Carnegie Mellon requests users of this software to return to
.\"
.\"  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
.\"  School of Computer Science
.\"  Carnegie Mellon University
.\"  Pittsburgh PA 15213-3890
.\"
.\" any improvements or extensions that they make and grant Carnegie the
.\" rights to redistribute these changes.
.\"
.Dd July 10, 2001
.Dt RAIDCTL 8
.Os
.Sh NAME
.Nm raidctl
.Nd configuration utility for the RAIDframe disk driver
.Sh SYNOPSIS
.Nm ""
.Op Fl v
.Fl a Ar component Ar dev
.Nm ""
.Op Fl v
.Fl A Op yes | no | root
.Ar dev
.Nm ""
.Op Fl v
.Fl B Ar dev
.Nm ""
.Op Fl v
.Fl c Ar config_file Ar dev
.Nm ""
.Op Fl v
.Fl C Ar config_file Ar dev
.Nm ""
.Op Fl v
.Fl f Ar component Ar dev
.Nm ""
.Op Fl v
.Fl F Ar component Ar dev
.Nm ""
.Op Fl v
.Fl g Ar component Ar dev
.Nm ""
.Op Fl v
.Fl G Ar dev
.Nm ""
.Op Fl v
.Fl i Ar dev
.Nm ""
.Op Fl v
.Fl I Ar serial_number Ar dev
.Nm ""
.Op Fl v
.Fl p Ar dev
.Nm ""
.Op Fl v
.Fl P Ar dev
.Nm ""
.Op Fl v
.Fl r Ar component Ar dev
.Nm ""
.Op Fl v
.Fl R Ar component Ar dev
.Nm ""
.Op Fl v
.Fl s Ar dev
.Nm ""
.Op Fl v
.Fl S Ar dev
.Nm ""
.Op Fl v
.Fl u Ar dev
.Sh DESCRIPTION
.Nm ""
is the user-land control program for
.Xr raid 4 ,
the RAIDframe disk device.
.Nm ""
is primarily used to dynamically configure and unconfigure RAIDframe disk
devices.  For more information about the RAIDframe disk device, see
.Xr raid 4 .
.Pp
This document assumes the reader has at least rudimentary knowledge of
RAID and RAID concepts.
.Pp
The command-line options for
.Nm
are as follows:
.Bl -tag -width indent
.It Fl a Ar component Ar dev
Add
.Ar component
as a hot spare for the device
.Ar dev .
.It Fl A Ic yes Ar dev
Make the RAID set auto-configurable.  The RAID set will be
automatically configured at boot
.Ar before
the root file system is
mounted.  Note that all components of the set must be of type RAID in the
disklabel.
.It Fl A Ic no Ar dev
Turn off auto-configuration for the RAID set.
.It Fl A Ic root Ar dev
Make the RAID set auto-configurable, and also mark the set as being
eligible to be the root partition.  A RAID set configured this way
will
.Ar override
the use of the boot disk as the root device.  All components of the
set must be of type RAID in the disklabel.  Note that the kernel being
booted must currently reside on a non-RAID set.
.It Fl B Ar dev
Initiate a copyback of reconstructed data from a spare disk to
its original disk.  This is performed after a component has failed,
and the failed drive has been reconstructed onto a spare drive.
.It Fl c Ar config_file Ar dev
Configure the RAIDframe device
.Ar dev
according to the configuration given in
.Ar config_file .
A description of the contents of
.Ar config_file
is given later.
.It Fl C Ar config_file Ar dev
As for
.Fl c ,
but forces the configuration to take place.  This is required the
first time a RAID set is configured.
.It Fl f Ar component Ar dev
This marks the specified
.Ar component
as having failed, but does not initiate a reconstruction of that
component.
.It Fl F Ar component Ar dev
Fail the specified
.Ar component
of the device, and immediately begin a reconstruction of the failed
disk onto an available hot spare.  This is one of the mechanisms used to start
the reconstruction process if a component has a hardware failure.
.It Fl g Ar component Ar dev
Get the component label for the specified component.
.It Fl G Ar dev
Generate the configuration of the RAIDframe device in a format suitable for
use with
.Nm
.Fl c
or
.Fl C .
.It Fl i Ar dev
Initialize the RAID device.  In particular, (re-)write the parity on
the selected device.  This
.Ar MUST
be done for
.Ar all
RAID sets before the RAID device is labeled and before
file systems are created on the RAID device.
.It Fl I Ar serial_number Ar dev
Initialize the component labels on each component of the device.
.Ar serial_number
is used as one of the keys in determining whether a
particular set of components belongs to the same RAID set.  While not
strictly enforced, different serial numbers should be used for
different RAID sets.  This step
.Ar MUST
be performed when a new RAID set is created.
.It Fl p Ar dev
Check the status of the parity on the RAID set.  Displays a status
message, and returns successfully if the parity is up-to-date.
.It Fl P Ar dev
Check the status of the parity on the RAID set, and initialize
(re-write) the parity if the parity is not known to be up-to-date.
This is normally used after a system crash (and before a
.Xr fsck 8 )
to ensure the integrity of the parity.
.It Fl r Ar component Ar dev
Remove the spare disk specified by
.Ar component
from the set of available spare components.
.It Fl R Ar component Ar dev
Fail the specified
.Ar component ,
if necessary, and immediately begin a reconstruction back to
.Ar component .
This is useful for reconstructing back onto a component after
it has been replaced following a failure.
.It Fl s Ar dev
Display the status of the RAIDframe device for each of the components
and spares.
.It Fl S Ar dev
Check the status of parity re-writing, component reconstruction, and
component copyback.  The output indicates the amount of progress
achieved in each of these areas.
.It Fl u Ar dev
Unconfigure the RAIDframe device.
.It Fl v
Be more verbose.  For operations such as reconstructions, parity
re-writing, and copybacks, provide a progress indicator.
.El
.Pp
The device used by
.Nm
is specified by
.Ar dev .
.Ar dev
may be either the full name of the device, e.g. /dev/rraid0d
on the i386 architecture or /dev/rraid0c on all others, or simply
raid0 (a shorthand for the full name).
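.Pp
For example, on an i386 machine (the architecture is assumed here only
for illustration), the following two commands are equivalent ways of
requesting the status of the same set:
.Bd -unfilled -offset indent
raidctl -s raid0
raidctl -s /dev/rraid0d
.Ed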
.Pp
.Ss Configuration file
The format of the configuration file is complex, and
only an abbreviated treatment is given here.  In the configuration
files, a
.Sq #
indicates the beginning of a comment.
.Pp
There are 4 required sections of a configuration file, and 2
optional sections.  Each section begins with a
.Sq START ,
followed by
the section name, and the configuration parameters associated with that
section.  The first section is the
.Sq array
section, and it specifies
the number of rows, columns, and spare disks in the RAID set.  For
example:
.Bd -unfilled -offset indent
START array
1 3 0
.Ed
.Pp
indicates an array with 1 row, 3 columns, and 0 spare disks.  Note
that although multi-dimensional arrays may be specified, they are
.Ar NOT
supported in the driver.
.Pp
The second section, the
.Sq disks
section, specifies the actual
components of the device.  For example:
.Bd -unfilled -offset indent
START disks
/dev/sd0e
/dev/sd1e
/dev/sd2e
.Ed
.Pp
specifies the three component disks to be used in the RAID device.  If
any of the specified drives cannot be found when the RAID device is
configured, then they will be marked as
.Sq failed ,
and the system will
operate in degraded mode.  Note that it is
.Ar imperative
that the order of the components in the configuration file does not
change between configurations of a RAID device.  Changing the order
of the components will result in data loss if the set is configured
with the
.Fl C
option.  In normal circumstances, the RAID set will not configure if
only
.Fl c
is specified, and the components are out-of-order.
.Pp
The next section, which is the
.Sq spare
section, is optional, and, if
present, specifies the devices to be used as
.Sq hot spares
-- devices
which are on-line, but are not actively used by the RAID driver unless
one of the main components fails.  A simple
.Sq spare
section might be:
.Bd -unfilled -offset indent
START spare
/dev/sd3e
.Ed
.Pp
for a configuration with a single spare component.  If no spare drives
are to be used in the configuration, then the
.Sq spare
section may be omitted.
.Pp
The next section is the
.Sq layout
section.  This section describes the
general layout parameters for the RAID device, and provides such
information as sectors per stripe unit, stripe units per parity unit,
stripe units per reconstruction unit, and the parity configuration to
use.  This section might look like:
.Bd -unfilled -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
32 1 1 5
.Ed
.Pp
The sectors per stripe unit specifies, in blocks, the interleave
factor; i.e. the number of contiguous sectors to be written to each
component for a single stripe.  Appropriate selection of this value
(32 in this example) is the subject of much research in RAID
architectures.  The stripe units per parity unit and
stripe units per reconstruction unit are normally each set to 1.
While certain values above 1 are permitted, a discussion of valid
values and the consequences of using anything other than 1 are outside
the scope of this document.  The last value in this section (5 in this
example) indicates the parity configuration desired.  Valid entries
include:
.Bl -tag -width inde
.It 0
RAID level 0.  No parity, only simple striping.
.It 1
RAID level 1.  Mirroring.  The parity is the mirror.
.It 4
RAID level 4.  Striping across components, with parity stored on the
last component.
.It 5
RAID level 5.  Striping across components, parity distributed across
all components.
.El
.Pp
There are other valid entries here, including those for Even-Odd
parity, RAID level 5 with rotated sparing, Chained declustering,
and Interleaved declustering, but as of this writing the code for
those parity operations has not been tested with
.Nx .
.Pp
The next required section is the
.Sq queue
section.  This is most often
specified as:
.Bd -unfilled -offset indent
START queue
fifo 100
.Ed
.Pp
where the queuing method is specified as fifo (first-in, first-out),
and the size of the per-component queue is limited to 100 requests.
Other queuing methods may also be specified, but a discussion of them
is beyond the scope of this document.
.Pp
The final section, the
.Sq debug
section, is optional.  For more details
on this the reader is referred to the RAIDframe documentation
discussed in the
.Sx HISTORY
section.
.Pp
See
.Sx EXAMPLES
for a more complete configuration file example.
.Sh EXAMPLES
It is highly recommended that, before using the RAID driver for real
file systems, the system administrator(s) become quite familiar
with the use of
.Nm "" ,
and that they understand how the component reconstruction process
works.  The examples in this section will focus on configuring a
number of different RAID sets of varying degrees of redundancy.
By working through these examples, administrators should be able to
develop a good feel for how to configure a RAID set, and how to
initiate reconstruction of failed components.
.Pp
In the following examples
.Sq raid0
will be used to denote the RAID device.  Depending on the
architecture,
.Sq /dev/rraid0c
or
.Sq /dev/rraid0d
may be used in place of
.Sq raid0 .
.Pp
.Ss Initialization and Configuration
The initial step in configuring a RAID set is to identify the components
that will be used in the RAID set.  All components should be the same
size.  Each component should have a disklabel type of
.Dv FS_RAID ,
and a typical disklabel entry for a RAID component
might look like:
.Bd -unfilled -offset indent
f:  1800000  200495     RAID              # (Cyl.  405*- 4041*)
.Ed
.Pp
While
.Dv FS_BSDFFS
will also work as the component type, the type
.Dv FS_RAID
is preferred for RAIDframe use, as it is required for features such as
auto-configuration.  As part of the initial configuration of each RAID
set, each component will be given a
.Sq component label .
A
.Sq component label
contains important information about the component, including a
user-specified serial number, the row and column of that component in
the RAID set, the redundancy level of the RAID set, a
.Sq modification counter ,
and whether the parity information (if any) on that
component is known to be correct.  Component labels are an integral
part of the RAID set, since they are used to ensure that components
are configured in the correct order, and used to keep track of other
vital information about the RAID set.  Component labels are also
required for the auto-detection and auto-configuration of RAID sets at
boot time.  For a component label to be considered valid, that
particular component label must be in agreement with the other
component labels in the set.  For example, the serial number,
.Sq modification counter ,
number of rows and number of columns must all
be in agreement.  If any of these are different, then the component is
not considered to be part of the set.  See
.Xr raid 4
for more information about component labels.
.Pp
Once the components have been identified, and the disks have
appropriate labels,
.Nm ""
is then used to configure the
.Xr raid 4
device.  To configure the device, a configuration
file which looks something like:
.Bd -unfilled -offset indent
START array
# numRow numCol numSpare
1 3 1

START disks
/dev/sd1e
/dev/sd2e
/dev/sd3e

START spare
/dev/sd4e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
32 1 1 5

START queue
fifo 100
.Ed
.Pp
is first created.  The above configuration file specifies a RAID 5
set consisting of the components /dev/sd1e, /dev/sd2e, and /dev/sd3e,
with /dev/sd4e available as a
.Sq hot spare
in case one of
the three main drives should fail.  A RAID 0 set would be specified in
a similar way:
.Bd -unfilled -offset indent
START array
# numRow numCol numSpare
1 4 0

START disks
/dev/sd10e
/dev/sd11e
/dev/sd12e
/dev/sd13e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
64 1 1 0

START queue
fifo 100
.Ed
.Pp
In this case, devices /dev/sd10e, /dev/sd11e, /dev/sd12e, and /dev/sd13e
are the components that make up this RAID set.  Note that there are no
hot spares for a RAID 0 set, since there is no way to recover data if
any of the components fail.
.Pp
For a RAID 1 (mirror) set, the following configuration might be used:
.Bd -unfilled -offset indent
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/sd20e
/dev/sd21e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1

START queue
fifo 100
.Ed
.Pp
In this case, /dev/sd20e and /dev/sd21e are the two components of the
mirror set.  While no hot spares have been specified in this
configuration, they easily could be, just as they were specified in
the RAID 5 case above.  Note as well that RAID 1 sets are currently
limited to only 2 components.  At present, n-way mirroring is not
possible.
.Pp
The first time a RAID set is configured, the
.Fl C
option must be used:
.Bd -unfilled -offset indent
raidctl -C raid0.conf raid0
.Ed
.Pp
where
.Sq raid0.conf
is the name of the RAID configuration file.  The
.Fl C
option forces the configuration to succeed, even if any of the component
labels are incorrect.  The
.Fl C
option should not be used lightly in
situations other than initial configurations, as, if
the system is refusing to configure a RAID set, there is probably a
very good reason for it.  After the initial configuration is done (and
appropriate component labels are added with the
.Fl I
option) then raid0 can be configured normally with:
.Bd -unfilled -offset indent
raidctl -c raid0.conf raid0
.Ed
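.Pp
If the original configuration file is ever lost, the
.Fl G
option described above can be used to regenerate one from the
configured set.  A usage sketch (which assumes the generated
configuration is simply redirected to a file) might be:
.Bd -unfilled -offset indent
raidctl -G raid0 > raid0.conf
.Ed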
.Pp
When the RAID set is configured for the first time, it is
necessary to initialize the component labels, and to initialize the
parity on the RAID set.  Initializing the component labels is done with:
.Bd -unfilled -offset indent
raidctl -I 112341 raid0
.Ed
.Pp
where
.Sq 112341
is a user-specified serial number for the RAID set.  This
initialization step is
.Ar required
for all RAID sets.  As well, using different
serial numbers between RAID sets is
.Ar strongly encouraged ,
as using the same serial number for all RAID sets will only serve to
decrease the usefulness of the component label checking.
.Pp
Initializing the RAID set is done via the
.Fl i
option.  This initialization
.Ar MUST
be done for
.Ar all
RAID sets, since among other things it verifies that the parity (if
any) on the RAID set is correct.  Since this initialization may be
quite time-consuming, the
.Fl v
option may also be used in conjunction with
.Fl i :
.Bd -unfilled -offset indent
raidctl -iv raid0
.Ed
.Pp
This will give more verbose output on the
status of the initialization:
.Bd -unfilled -offset indent
Initiating re-write of parity
Parity Re-write status:
 10% |****                                   | ETA:    06:03 /
.Ed
.Pp
The output provides a
.Sq Percent Complete
in both a numeric and graphical format, as well as an estimated time
to completion of the operation.
.Pp
Since it is the parity that provides the
.Sq redundancy
part of RAID, it is critical that the parity be correct
as much as possible.  If the parity is not correct, then there is no
guarantee that data will not be lost if a component fails.
.Pp
Once the parity is known to be correct,
it is then safe to perform
.Xr disklabel 8 ,
.Xr newfs 8 ,
or
.Xr fsck 8
on the device or its file systems, and then to mount the file systems
for use.
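.Pp
For example, a minimal sequence (a sketch, assuming the
.Sq e
partition of the set is to hold the file system, as in the summary at
the end of this section) might be:
.Bd -unfilled -offset indent
disklabel raid0 > /tmp/label
vi /tmp/label
disklabel -R -r raid0 /tmp/label
newfs /dev/rraid0e
.Ed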
.Pp
Under certain circumstances (e.g. the additional component has not
arrived, or data is being migrated off of a disk destined to become a
component) it may be desirable to configure a RAID 1 set with only
a single component.  This can be achieved by configuring the set with
a physically existing component (as either the first or second
component) and with a
.Sq fake
component.  In the following:
.Bd -unfilled -offset indent
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/sd6e
/dev/sd0e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1

START queue
fifo 100
.Ed
.Pp
/dev/sd0e is the real component, and will be the second disk of a RAID 1
set.  The component /dev/sd6e, which must exist, but have no physical
device associated with it, is simply used as a placeholder.
Configuration (using
.Fl C
and
.Fl I Ar 12345
as above) proceeds normally, but initialization of the RAID set will
have to wait until all physical components are present.  After
configuration, this set can be used normally, but will be operating
in degraded mode.  Once a second physical component is obtained, it
can be hot-added, the existing data mirrored, and normal operation
resumed.
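.Pp
Assuming the new disk appears as /dev/sd5e (a hypothetical name used
only for illustration), the hot-add and rebuild might look like:
.Bd -unfilled -offset indent
raidctl -a /dev/sd5e raid0
raidctl -F /dev/sd6e raid0
.Ed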
.Pp
.Ss Maintenance of the RAID set
After the parity has been initialized for the first time, the command:
.Bd -unfilled -offset indent
raidctl -p raid0
.Ed
.Pp
can be used to check the current status of the parity.  To check the
parity and rebuild it if necessary (for example, after an unclean
shutdown) the command:
.Bd -unfilled -offset indent
raidctl -P raid0
.Ed
.Pp
is used.  Note that re-writing the parity can be done while
other operations on the RAID set are taking place (e.g. while doing a
.Xr fsck 8
on a file system on the RAID set).  However: for maximum effectiveness
of the RAID set, the parity should be known to be correct before any
data on the set is modified.
.Pp
To see how the RAID set is doing, the following command can be used to
show the RAID set's status:
.Bd -unfilled -offset indent
raidctl -s raid0
.Ed
.Pp
The output will look something like:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: optimal
           /dev/sd3e: optimal
Spares:
           /dev/sd4e: spare
Component label for /dev/sd1e:
   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
   Version: 2 Serial Number: 13432 Mod Counter: 65
   Clean: No Status: 0
   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
   RAID Level: 5  blocksize: 512 numBlocks: 1799936
   Autoconfig: No
   Last configured as: raid0
Component label for /dev/sd2e:
   Row: 0 Column: 1 Num Rows: 1 Num Columns: 3
   Version: 2 Serial Number: 13432 Mod Counter: 65
   Clean: No Status: 0
   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
   RAID Level: 5  blocksize: 512 numBlocks: 1799936
   Autoconfig: No
   Last configured as: raid0
Component label for /dev/sd3e:
   Row: 0 Column: 2 Num Rows: 1 Num Columns: 3
   Version: 2 Serial Number: 13432 Mod Counter: 65
   Clean: No Status: 0
   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
   RAID Level: 5  blocksize: 512 numBlocks: 1799936
   Autoconfig: No
   Last configured as: raid0
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
.Ed
.Pp
This indicates that all is well with the RAID set.  Of importance here
are the component lines which read
.Sq optimal ,
and the
.Sq Parity status
line which indicates that the parity is up-to-date.  Note that if
there are file systems open on the RAID set, the individual components
will not be
.Sq clean
but the set as a whole can still be clean.
.Pp
To check the component label of /dev/sd1e, the following is used:
.Bd -unfilled -offset indent
raidctl -g /dev/sd1e raid0
.Ed
.Pp
The output of this command will look something like:
.Bd -unfilled -offset indent
Component label for /dev/sd1e:
   Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
   Version: 2 Serial Number: 13432 Mod Counter: 65
   Clean: No Status: 0
   sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
   RAID Level: 5  blocksize: 512 numBlocks: 1799936
   Autoconfig: No
   Last configured as: raid0
.Ed
.Pp
.Ss Dealing with Component Failures
If for some reason
(perhaps to test reconstruction) it is necessary to pretend a drive
has failed, the following will perform that function:
.Bd -unfilled -offset indent
raidctl -f /dev/sd2e raid0
.Ed
.Pp
The system will then be performing all operations in degraded mode,
where missing data is re-computed from existing data and the parity.
In this case, obtaining the status of raid0 will return (in part):
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: failed
           /dev/sd3e: optimal
Spares:
           /dev/sd4e: spare
.Ed
.Pp
Note that with the use of
.Fl f
a reconstruction has not been started.  To both fail the disk and
start a reconstruction, the
.Fl F
option must be used:
.Bd -unfilled -offset indent
raidctl -F /dev/sd2e raid0
.Ed
.Pp
The
.Fl f
option may be used first, and then the
.Fl F
option used later, on the same disk, if desired.
Immediately after the reconstruction is started, the status will report:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: reconstructing
           /dev/sd3e: optimal
Spares:
           /dev/sd4e: used_spare
[...]
Parity status: clean
Reconstruction is 10% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
.Ed
.Pp
This indicates that a reconstruction is in progress.  To find out how
the reconstruction is progressing, the
.Fl S
option may be used.  This will indicate the progress in terms of the
percentage of the reconstruction that is completed.  When the
reconstruction is finished the
.Fl s
option will show:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: spared
           /dev/sd3e: optimal
Spares:
           /dev/sd4e: used_spare
[...]
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
.Ed
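.Pp
For example, while the rebuild is running, its progress could be
polled with:
.Bd -unfilled -offset indent
raidctl -S raid0
.Ed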
.Pp
At this point there are at least two options.  First, if /dev/sd2e is
known to be good (i.e. the failure was either caused by
.Fl f
or
.Fl F ,
or the failed disk was replaced), then a copyback of the data can
be initiated with the
.Fl B
option.  In this example, this would copy the entire contents of
/dev/sd4e to /dev/sd2e.  Once the copyback procedure is complete, the
status of the device would be (in part):
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: optimal
           /dev/sd3e: optimal
Spares:
           /dev/sd4e: spare
.Ed
.Pp
and the system is back to normal operation.
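.Pp
For reference, the copyback in this example would be initiated with:
.Bd -unfilled -offset indent
raidctl -B raid0
.Ed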
.Pp
The second option after the reconstruction is to simply use /dev/sd4e
in place of /dev/sd2e in the configuration file.  For example, the
configuration file (in part) might now look like:
.Bd -unfilled -offset indent
START array
1 3 0

START disks
/dev/sd1e
/dev/sd4e
/dev/sd3e
.Ed
.Pp
This can be done as /dev/sd4e is completely interchangeable with
/dev/sd2e at this point.  Note that extreme care must be taken when
changing the order of the drives in a configuration.  This is one of
the few instances where the devices and/or their orderings can be
changed without loss of data!  In general, the ordering of components
in a configuration file should
.Ar never
be changed.
.Pp
If a component fails and there are no hot spares
available on-line, the status of the RAID set might (in part) look like:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: failed
           /dev/sd3e: optimal
No spares.
.Ed
.Pp
In this case there are a number of options.  The first option is to add a hot
spare using:
.Bd -unfilled -offset indent
raidctl -a /dev/sd4e raid0
.Ed
.Pp
After the hot add, the status would then be:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: failed
           /dev/sd3e: optimal
Spares:
           /dev/sd4e: spare
.Ed
.Pp
Reconstruction could then take place using
.Fl F
as described above.
.Pp
A second option is to rebuild directly onto /dev/sd2e.  Once the disk
containing /dev/sd2e has been replaced, one can simply use:
.Bd -unfilled -offset indent
raidctl -R /dev/sd2e raid0
.Ed
.Pp
to rebuild the /dev/sd2e component.  As the rebuilding is in progress,
the status will be:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: reconstructing
           /dev/sd3e: optimal
No spares.
.Ed
.Pp
and when completed, will be:
.Bd -unfilled -offset indent
Components:
           /dev/sd1e: optimal
           /dev/sd2e: optimal
           /dev/sd3e: optimal
No spares.
.Ed
.Pp
In circumstances where a particular component is completely
unavailable after a reboot, a special component name will be used to
indicate the missing component.  For example:
.Bd -unfilled -offset indent
Components:
           /dev/sd2e: optimal
          component1: failed
No spares.
.Ed
.Pp
indicates that the second component of this RAID set was not detected
at all by the auto-configuration code.  The name
.Sq component1
can be used anywhere a normal component name would be used.  For
example, to add a hot spare to the above set, and rebuild to that hot
spare, the following could be done:
.Bd -unfilled -offset indent
raidctl -a /dev/sd3e raid0
raidctl -F component1 raid0
.Ed
.Pp
at which point the data missing from
.Sq component1
would be reconstructed onto /dev/sd3e.
.Pp
.Ss RAID on RAID
RAID sets can be layered to create more complex and much larger RAID
sets.  A RAID 0 set, for example, could be constructed from four RAID
5 sets.  The following configuration file shows such a setup:
.Bd -unfilled -offset indent
START array
# numRow numCol numSpare
1 4 0

START disks
/dev/raid1e
/dev/raid2e
/dev/raid3e
/dev/raid4e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
128 1 1 0

START queue
fifo 100
.Ed
.Pp
A similar configuration file might be used for a RAID 0 set
constructed from components on RAID 1 sets.  In such a configuration,
the mirroring provides a high degree of redundancy, while the striping
provides additional speed benefits.
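.Pp
A sketch of such a configuration, assuming raid1 and raid2 are
themselves RAID 1 sets, might be:
.Bd -unfilled -offset indent
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/raid1e
/dev/raid2e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
128 1 1 0

START queue
fifo 100
.Ed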
.Pp
.Ss Auto-configuration and Root on RAID
RAID sets can also be auto-configured at boot.  To make a set
auto-configurable, simply prepare the RAID set as above, and then do
a:
.Bd -unfilled -offset indent
raidctl -A yes raid0
.Ed
.Pp
to turn on auto-configuration for that set.  To turn off
auto-configuration, use:
.Bd -unfilled -offset indent
raidctl -A no raid0
.Ed
.Pp
RAID sets which are auto-configurable will be configured before the
root file system is mounted.  These RAID sets are thus available for
use as a root file system, or for any other file system.  A primary
advantage of using the auto-configuration is that RAID components
become more independent of the disks they reside on.  For example,
SCSI ID's can change, but auto-configured sets will always be
configured correctly, even if the SCSI ID's of the component disks
have become scrambled.
.Pp
Having a system's root file system
.Pq Pa /
on a RAID set is also allowed,
with the
.Sq a
partition of such a RAID set being used for
.Pa / .
To use raid0a as the root file system, simply use:
.Bd -unfilled -offset indent
raidctl -A root raid0
.Ed
.Pp
To return raid0 to being just an auto-configuring set, simply use the
.Fl A Ar yes
arguments.
.Pp
Note that kernels can only be directly read from RAID 1 components on
alpha and pmax architectures.  On those architectures, the
.Dv FS_RAID
file system is recognized by the bootblocks, and will properly load the
kernel directly from a RAID 1 component.  For other architectures, or
to support the root file system on other RAID sets, some other
mechanism must be used to get a kernel booting.  For example, a small
partition containing only the secondary boot-blocks and an alternate
kernel (or two) could be used.  Once a kernel is booting however, and
an auto-configuring RAID set is found that is eligible to be root,
then that RAID set will be auto-configured and used as the root
device.  If two or more RAID sets claim to be root devices, then the
user will be prompted to select the root device.  At this time, RAID
0, 1, 4, and 5 sets are all supported as root devices.
.Pp
A typical RAID 1 setup with root on RAID might be as follows:
.Bl -enum
.It
wd0a - a small partition, which contains a complete, bootable, basic
NetBSD installation.
.It
wd1a - also contains a complete, bootable, basic NetBSD installation.
.It
wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
.It
wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
swap space.
.It
wd0g and wd1g - a RAID 1 set, raid2, used for
.Pa /usr ,
.Pa /home ,
or other data, if desired.
.It
wd0h and wd1h - a RAID 1 set, raid3, if desired.
.El
.Pp
RAID sets raid0, raid1, and raid2 are all marked as
auto-configurable.  raid0 is marked as being a root file system.
When new kernels are installed, the kernel is not only copied to
.Pa / ,
but also to wd0a and wd1a.  The kernel on wd0a is required, since that
is the kernel the system boots from.  The kernel on wd1a is also
required, since that will be the kernel used should wd0 fail.  The
important point here is to have redundant copies of the kernel
available, in the event that one of the drives fails.
.Pp
There is no requirement that the root file system be on the same disk
as the kernel.  For example, obtaining the kernel from wd0a, and using
sd0e and sd1e for raid0, and the root file system, is fine.  It
.Ar is
critical, however, that there be multiple kernels available, in the
event of media failure.
.Pp
Multi-layered RAID devices (such as a RAID 0 set made
up of RAID 1 sets) are
.Ar not
supported as root devices or auto-configurable devices at this point.
(Multi-layered RAID devices
.Ar are
supported in general, however, as mentioned earlier.)  Note that in
order to enable component auto-detection and auto-configuration of
RAID devices, the line:
.Bd -unfilled -offset indent
options    RAID_AUTOCONFIG
.Ed
.Pp
must be in the kernel configuration file.  See
.Xr raid 4
for more details.
.Pp
.Ss Unconfiguration
The final operation performed by
.Nm
is to unconfigure a
.Xr raid 4
device.  This is accomplished via a simple:
.Bd -unfilled -offset indent
raidctl -u raid0
.Ed
.Pp
at which point the device is ready to be reconfigured.
.Pp
.Ss Performance Tuning
Selection of the various parameter values which result in the best
performance can be quite tricky, and often requires a bit of
trial-and-error to get those values most appropriate for a given system.
A whole range of factors come into play, including:
.Bl -enum
.It
Types of components (e.g. SCSI vs. IDE) and their bandwidth
.It
Types of controller cards and their bandwidth
.It
Distribution of components among controllers
.It
IO bandwidth
.It
File system access patterns
.It
CPU speed
.El
.Pp
As with most performance tuning, benchmarking under real-life loads
may be the only way to measure expected performance.  Understanding
some of the underlying technology is also useful in tuning.  The goal
of this section is to provide pointers to those parameters which may
make significant differences in performance.
.Pp
For a RAID 1 set, a SectPerSU value of 64 or 128 is typically
sufficient.  Since data in a RAID 1 set is arranged in a linear
fashion on each component, selecting an appropriate stripe size is
somewhat less critical than it is for a RAID 5 set.  However: a stripe
size that is too small will cause large IO's to be broken up into a
number of smaller ones, hurting performance.  At the same time, a
large stripe size may cause problems with concurrent accesses to
stripes, which may also affect performance.  Thus values in the range
of 32 to 128 are often the most effective.
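.Pp
For instance, a RAID 1
.Sq layout
section using a stripe unit of 128 sectors (matching the mirror
example earlier in this page) would read:
.Bd -unfilled -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1
.Ed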
.Pp
Tuning RAID 5 sets is trickier.  In the best case, IO is presented to
the RAID set one stripe at a time.  Since the entire stripe is
available at the beginning of the IO, the parity of that stripe can
be calculated before the stripe is written, and then the stripe data
and parity can be written in parallel.  When the amount of data being
written is less than a full stripe worth, the
.Sq small write
problem occurs.  Since a
.Sq small write
means only a portion of the stripe on the components is going to
change, the data (and parity) on the components must be updated
slightly differently.  First, the
.Sq old parity
and
.Sq old data
must be read from the components.  Then the new parity is constructed,
using the new data to be written, and the old data and old parity.
Finally, the new data and new parity are written.  All this extra data
shuffling results in a serious loss of performance, and is typically 2
to 4 times slower than a full stripe write (or read).  To combat this
problem in the real world, it may be useful to ensure that stripe
sizes are small enough that a
.Sq large IO
from the system will use exactly one large stripe write.  As is seen
later, there are some file system dependencies which may come into play
here as well.
.Pp
Since the size of a
.Sq large IO
is often (currently) only 32K or 64K, on a 5-drive RAID 5 set it may
be desirable to select a SectPerSU value of 16 blocks (8K) or 32
blocks (16K).  Since there are 4 data stripe units per stripe, the maximum
data per stripe is 64 blocks (32K) or 128 blocks (64K).  Again,
empirical measurement will provide the best indicators of which
values will yield better performance.
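.Pp
A corresponding 5-drive RAID 5
.Sq layout
section using a SectPerSU value of 16 blocks (8K) would be:
.Bd -unfilled -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
16 1 1 5
.Ed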
.Pp
The parameters used for the file system are also critical to good
performance.  For
.Xr newfs 8 ,
for example, increasing the block size to 32K or 64K may improve
performance dramatically.  As well, changing the cylinders-per-group
parameter from 16 to 32 or higher is often not only necessary for
larger file systems, but may also have positive performance
implications.
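.Pp
For example (a sketch only; the right values depend on the file
system size and the stripe size chosen above), a 32K block size, with
a matching 4K fragment size, and 32 cylinders per group might be
requested with:
.Bd -unfilled -offset indent
newfs -b 32768 -f 4096 -c 32 /dev/rraid0e
.Ed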
.Pp
.Ss Summary
Despite the length of this man-page, configuring a RAID set is a
relatively straightforward process.  All that needs to be done is the
following steps:
.Bl -enum
.It
Use
.Xr disklabel 8
to create the components (of type RAID).
.It
Construct a RAID configuration file: e.g.
.Sq raid0.conf
.It
Configure the RAID set with:
.Bd -unfilled -offset indent
raidctl -C raid0.conf raid0
.Ed
.Pp
.It
Initialize the component labels with:
.Bd -unfilled -offset indent
raidctl -I 123456 raid0
.Ed
.Pp
.It
Initialize other important parts of the set with:
.Bd -unfilled -offset indent
raidctl -i raid0
.Ed
.Pp
.It
Get the default label for the RAID set:
.Bd -unfilled -offset indent
disklabel raid0 > /tmp/label
.Ed
.Pp
.It
Edit the label:
.Bd -unfilled -offset indent
vi /tmp/label
.Ed
.Pp
.It
Put the new label on the RAID set:
.Bd -unfilled -offset indent
disklabel -R -r raid0 /tmp/label
.Ed
.Pp
.It
Create the file system:
.Bd -unfilled -offset indent
newfs /dev/rraid0e
.Ed
.Pp
.It
Mount the file system:
.Bd -unfilled -offset indent
mount /dev/raid0e /mnt
.Ed
.Pp
.It
Use:
.Bd -unfilled -offset indent
raidctl -c raid0.conf raid0
.Ed
.Pp
to re-configure the RAID set the next time it is needed, or put
raid0.conf into /etc where it will automatically be started by
the /etc/rc scripts.
.El
.Pp
.Sh WARNINGS
Certain RAID levels (1, 4, 5, 6, and others) can protect against some
data loss due to component failure.  However, the loss of two
components of a RAID 4 or 5 system, or the loss of a single component
of a RAID 0 system, will result in the entire file system being lost.
RAID is
.Ar NOT
a substitute for good backup practices.
.Pp
Recomputation of parity
.Ar MUST
be performed whenever there is a chance that it may have been
compromised.  This includes after system crashes, or before a RAID
device has been used for the first time.  Failure to keep parity
correct will be catastrophic should a component ever fail -- it is
better to use RAID 0 and get the additional space and speed, than it
is to use parity, but not keep the parity correct.  At least with RAID
0 there is no perception of increased data security.
.Pp
.Sh FILES
.Bl -tag -width /dev/XXrXraidX -compact
.It Pa /dev/{,r}raid*
.Cm raid
device special files.
.El
.Pp
.Sh SEE ALSO
.Xr raid 4 ,
.Xr ccd 4 ,
.Xr rc 8
.Sh BUGS
Hot-spare removal is currently not available.
.Sh HISTORY
RAIDframe is a framework for rapid prototyping of RAID structures
developed by the folks at the Parallel Data Laboratory at Carnegie
Mellon University (CMU).
A more complete description of the internals and functionality of
RAIDframe is found in the paper "RAIDframe: A Rapid Prototyping Tool
for RAID Systems", by William V. Courtright II, Garth Gibson, Mark
Holland, LeAnn Neal Reilly, and Jim Zelenka, and published by the
Parallel Data Laboratory of Carnegie Mellon University.
.Pp
The
.Nm
command first appeared as a program in CMU's RAIDframe v1.1 distribution.  This
version of
.Nm
is a complete re-write, and first appeared in
.Nx 1.4 .
.Sh COPYRIGHT
.Bd -unfilled
The RAIDframe Copyright is as follows:

Copyright (c) 1994-1996 Carnegie-Mellon University.
All rights reserved.

Permission to use, copy, modify and distribute this software and
its documentation is hereby granted, provided that both the copyright
notice and this permission notice appear in all copies of the
software, derivative works or modified versions, and any portions
thereof, and that both notices appear in supporting documentation.

CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.

Carnegie Mellon requests users of this software to return to

 Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 School of Computer Science
 Carnegie Mellon University
 Pittsburgh PA 15213-3890

any improvements or extensions that they make and grant Carnegie the
rights to redistribute these changes.
.Ed