Veritas FS

A short tutorial on Veritas FS, VERITAS file system management.

Last Updated: Thu May 10 14:23:37 CDT 2001

A1000 Dual Port Setup and Configuration
with Veritas Volume Manager and Filesystem
-----------------------------------------------------------

Overview:

The following tutorial documents the steps used to configure a single
A1000 array to be shared between two hosts. It then documents the
steps used to create simple Veritas volumes and filesystems on each
of the logical array drives, as well as the steps necessary to deport
and import the volumes between the two hosts.

This tutorial is based on an actual install of the configuration used
in the examples.

Hardware configs:

Sun StorEdge A1000 – SCSI Target ID 1
w/ 7 18GB disk drives

   d0   d1   d2   HS      d0   d1   d2
------------------------------------------
|  2,0  2,1  2,2  2,3  |  1,0  1,1  1,2  |
------------------------------------------
           |                    |
           | controller 1       | controller 1
           |                    |
    --------------       --------------
    |   E220R    |       |   E220R    |
    --------------       --------------
       node1                node2
           |                    |
        c0t0d0               c0t0d0     Internal root drive
        c0t1d0               c0t1d0     Internal root mirror

– Both E220Rs have a dual-channel Differential Ultra-SCSI card, with the
first port (controller 1) connected to the A1000 array.

– The A1000 array was configured as 3 logical drives, plus one hot spare.
Each logical drive consists of two 18GB disk drives mirrored (RAID 1).

– The A1000 logical drives are seen as:

c1t1d0, c1t1d1, c1t1d2

– Final configs are set up as:

Internal Drives:

Boot drive c0t0d0 encapsulated as rootdisk and mirrored to disk01 (c0t1d0)

The A1000 drives are all simple (concatenated) Veritas volumes with VxFS filesystems:

/mh = disk02 = c1t1d0 = RAID 1 hardware mirror using array drives 1,0 and 2,0

/ms = disk03 = c1t1d1 = RAID 1 hardware mirror using array drives 1,1 and 2,1

/amt = disk04 = c1t1d2 = RAID 1 hardware mirror using array drives 1,2 and 2,2

Hardware RAID Hot Spare drive is 2,3

– Steps used to configure RAID:

1. Installed A1000 in Rack

2. Cabled A1000 to port 1 of dual-port differential SCSI card on node1

3. Set SCSI ID switch on back of A1000 to target 1

4. Powered on the A1000

5. Powered on the E220R

6. At boot prompt, verified the array was seen:

setenv auto-boot? false
reset-all
probe-scsi-all

7. Booted E220R

8. Verified OS and app patch requirements per Sun Infodoc 20029
"A1000/A3x00/A3500FC Software/Firmware Configuration Matrix"
(only patch required was the RAIDManager version 6.22 jumbo
patch 108834-09)

9. Installed the RAIDManager ver. 6.22 software from the included CD:

mount -r -F hsfs /dev/sr0 /cdrom
cd /cdrom/… (don't remember the exact path)
pkgadd -d . SUNWosar SUNWosafw SUNWosamn SUNWosau

This installs the following packages:

system SUNWosafw Open Storage Array Firmware
system SUNWosamn Open Storage Array Man Pages
system SUNWosar Open Storage Array (Root)
system SUNWosau Open Storage Array (Usr)
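
As an optional sanity check, confirm the packages registered correctly
using the standard pkginfo utility:

pkginfo -l SUNWosar SUNWosafw SUNWosamn SUNWosau | grep STATUS

Each package should report "completely installed".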

10. Verified that /etc/osa/mnf does not have a period (.) in the name.
There is a known problem with this. If it does, change to an
underscore (_). This was per the A1000 install notes at:
http://www.eng.auburn.edu/pub/mail-lists/ssastuff/Solaris8-A1000.html

11. Installed the RAIDManager jumbo patch 108834-09

12. Performed a reconfiguration reboot:
touch /reconfigure
init 6
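
Equivalently, a reconfiguration boot can be requested in a single
command (either method works; the touch /reconfigure approach was
the one used here):

reboot -- -r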

13. Verified the A1000 could be seen, and what firmware revision level
it was:

/usr/lib/osa/bin/raidutil -c c1t1d0 -i

Firmware was currently at:
03.01.02.35

14. Upgraded firmware to current versions. The current versions
are included as part of the patch install, and are stored in
/usr/lib/osa/fw.
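
To see which firmware image files the patch delivered (the two used
below came from this directory):

ls /usr/lib/osa/fw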

To install the latest boot code:

/usr/lib/osa/bin/fwutil 03010304.bwd c1t1d0

To install the latest firmware:

/usr/lib/osa/bin/fwutil 03010363.apd c1t1d0

15. Verified the A1000 was updated:

/usr/lib/osa/bin/raidutil -c c1t1d0 -i

Reports something similar to:

LUNs found on c1t1d0.
LUN 0 RAID 5 103311 MB

Vendor ID Symbios
ProductID StorEDGE A1000
Product Revision 0301
Boot Level 03.01.03.04
Boot Level Date 07/06/00
Firmware Level 03.01.03.63
Firmware Date 03/15/01
raidutil succeeded!

16. Verified that all of the drives could be seen:

/usr/lib/osa/bin/drivutil -i c1t1d0

This will report something similar to:

Drive Information for ig028_002

Location  Capacity  Status       Vendor  Product          Firmware  Serial
          (MB)                           ID               Version   Number
[1,0]     17274     Optimal      IBM     DDYST1835SUN18G  S96H      010811E164
[2,0]     17274     Optimal      IBM     DDYST1835SUN18G  S96H      0108109219
[1,1]     17274     Optimal      IBM     DDYST1835SUN18G  S96H      0108115692
[2,1]     17274     Optimal      IBM     DDYST1835SUN18G  S96H      010811E211
[1,2]     17274     Optimal      IBM     DDYST1835SUN18G  S96H      010810V958
[2,2]     17274     Optimal      IBM     DDYST1835SUN18G  S96H      01081WH714
[2,3]     17274     Spare-Stdby  IBM     DDYST1835SUN18G  S96H      010810V946

17. Deleted the default LUN 0 configuration:

/usr/lib/osa/bin/raidutil -c c1t1d0 -D 0

Reports something similar to:

LUNs found on c1t1d0.
LUN 0 RAID 5 103644 MB
Deleting LUN 0.
Press Control C to abort.

LUNs successfully deleted

raidutil succeeded!

18. Created the RAID 1 drive mirrors by mirroring the following
pairs of drives:

1,0 --> 2,0 mirrored as logical unit 0 (d0)
1,1 --> 2,1 mirrored as logical unit 1 (d1)
1,2 --> 2,2 mirrored as logical unit 2 (d2)

Commands used:

/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 0 -s 0 -r fast -g 10,20

/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 1 -s 0 -r fast -g 11,21

/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 2 -s 0 -r fast -g 12,22

19. Created the hot spare drive using drive 2,3:

/usr/lib/osa/bin/raidutil -c c1t1d0 -h 23

20. Partitioned and labeled each new logical drive with the format command,
using a single slice 2 partition covering the whole disk.
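
A quick way to double-check the label on each of the three LUNs
afterwards (plain Bourne shell; device names as used in this install):

for d in c1t1d0 c1t1d1 c1t1d2
do
    prtvtoc /dev/rdsk/${d}s2
done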

At this point the A1000 was ready to go on the first system. If this were
the only system, you would simply build your filesystems and mount the
drives now, the same as for any other drive.
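
For example, a hypothetical single-host setup (not what was done here;
/data0 is an arbitrary mount point and a plain UFS filesystem via newfs
is assumed) would be as simple as:

newfs /dev/rdsk/c1t1d0s2
mkdir /data0
mount /dev/dsk/c1t1d0s2 /data0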

Because we were dual-porting this between two nodes of a future cluster,
we now needed to configure the second node. This requires changing the
scsi-initiator-id of the second node's controller to 6 (from the default
of 7), so that both SCSI controllers can be connected to the array at the
same time.

To configure the second node:

1. Leave the array disconnected from the second system for now

2. Power on the second E220R

3. Update the nvramrc to set the controller to id 6 per
Sun Infodoc 20704 "Setting the scsi-initiator-id on PCI
Systems with Sun Cluster Software". (This applies to any
dual-ported system, cluster or not.)

A. From the OBP prompt, get list of controllers:

ok setenv auto-boot? false
ok reset-all
ok probe-scsi-all

B. Edit the nvramrc, using the device path(s) of the SCSI
controller(s) that you are changing:

ok nvedit
0: probe-all install-console banner
1: cd /pci@1f,4000/scsi@2,1
2: 6 " scsi-initiator-id" integer-property
3: device-end
4: cd /pci@1f,4000/scsi@2
5: 6 " scsi-initiator-id" integer-property
6: device-end
7: banner (Control C)

C. Press Ctrl-C to exit nvedit, then store the nvramrc:

ok nvstore

D. Set the system to use the nvramrc:

ok setenv use-nvramrc? true

E. Do a reset:

ok reset-all

4. Verify the nvramrc settings were saved and that the
scsi-initiator-id was changed to 6 on the card:

ok cd /pci@1f,4000/scsi@2,1
ok .properties

It should report something like:

"scsi-initiator-id 000000006"

5. Cable the second system (node2) via port 1 of the dual-port
differential SCSI card to the second SCSI port on the A1000.

6. Reset the system again and then probe the scsi bus
to verify it sees the array:

ok reset-all
ok probe-scsi-all

7. Reset the auto-boot parameter, and then reset the
system and allow it to boot:

ok setenv auto-boot? true
ok reset-all

8. Install the RAIDManager ver 6.22 software and jumbo patch
as you did on the first node.

9. Verify the RAIDManager software can see the configured array:

/usr/lib/osa/bin/raidutil -c c1t1d0 -i
/usr/lib/osa/bin/drivutil -i c1t1d0

*** DO NOT CONFIGURE THE RAID – IT IS ALREADY CONFIGURED ***

10. Verify that the OS utilities (format, prtvtoc, etc.) can see the drives.
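
One quick way to do this without walking through the interactive
format menus:

format < /dev/null
prtvtoc /dev/rdsk/c1t1d0s2

format will list the disks it sees (including the three array LUNs)
and then exit at the disk-selection prompt.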

At this point, the hardware is all configured. Next we need to
configure the Volume Manager and File System software.

VXVM/VXFS configs:

1. Install the VXVM and VXFS software:

mount -r -F hsfs /dev/sr0 /cdrom
cd /cdrom/… (don't remember the exact path)
pkgadd -d . VRTSvxvm VRTSvmdev VRTSvmdoc VRTSvmman VRTSvmsa VRTSvxfs VRTSfsdoc

This installs the following packages:

system VRTSfsdoc VERITAS File System Documentation Package
system VRTSvmdev VERITAS Volume Manager, Header and Library Files
system VRTSvmdoc VERITAS Volume Manager (user documentation)
system VRTSvmman VERITAS Volume Manager, Manual Pages
system VRTSvmsa VERITAS Volume Manager Storage Administrator
system VRTSvxfs VERITAS File System
system VRTSvxvm VERITAS Volume Manager, Binaries

NOTE: The manpages and docs are all optional. Also, the latest
packages can be obtained via the Veritas ftp site after
contacting Veritas.

2 – Install the Veritas licenses:

vxserial -c

Enter the license key for each product. At a minimum, you need the
base Volume Manager key and the Veritas File System key. If you will
be using RAID 5, also enter that key.
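
Depending on the release, the installed keys can usually be reviewed
afterwards with the vxlicense utility (output format varies by version):

vxlicense -p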

3 – Run vxinstall to complete the installation:

vxinstall

This will prompt for a quick or custom install. Select Quick Install

1. Quick Installation
– encapsulate the boot drive c0t0d0
– use default disk names
– initialize the mirror drive c0t1d0
– initialize all drives on controller 1 (the array)

NOTE: In order to properly encapsulate the boot drive, you need to have:

– an unused cylinder at the beginning or end of drive
– slices 3 and 4 must be unused

To do this, I usually label the boot drive to have the root
slice 0 start at cylinder 1, and then only use slices 1,5,6,7 for
swap and the other filesystems.
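
Before running vxinstall, the boot drive layout can be checked against
these requirements with prtvtoc, e.g.:

prtvtoc /dev/rdsk/c0t0d0s2

Confirm that slices 3 and 4 are not allocated and that a free cylinder
exists at the start or end of the disk.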

4 – Reboot the system when prompted

5 – Verify all drives were configured and are seen:

vxdisk list

6 – Verify the root drive was encapsulated. The /etc/vfstab file and
df should both show that the root filesystems are now using /dev/vx
devices.
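
For example:

df -k / /usr /var /opt
grep vx /etc/vfstab

Both should now show /dev/vx devices rather than the original c0t0d0
slices.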

7 – Delete the array drives from rootdg (the Quick Installation creates only
rootdg and places all of the drives in it):

vxdg -g rootdg rmdisk disk02
vxdg -g rootdg rmdisk disk03
vxdg -g rootdg rmdisk disk04

8 – Now initialize the datadg disk group. You do this by naming the first
disk that will be in the group:

vxdg init datadg disk02=c1t1d0s2

9 – Now add the remaining drives to datadg:

vxdg -g datadg adddisk disk03=c1t1d1s2
vxdg -g datadg adddisk disk04=c1t1d2s2
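
To confirm the group membership:

vxdg list datadg
vxdisk list

The vxdisk listing should show disk02, disk03 and disk04 as online in
the datadg group.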

10 – Mirror the root drive. First we mirror the root filesystem and
make the mirror drive bootable:

/etc/vx/bin/vxrootmir disk01

11 – Now mirror the remainder of the root volumes:

vxassist -g rootdg mirror swapvol disk01
vxassist -g rootdg mirror usr disk01
vxassist -g rootdg mirror opt disk01
vxassist -g rootdg mirror var disk01

NOTE: You cannot reboot until the volumes have completed the mirroring
process. If you do, you will have to start the mirroring again. To verify
the mirrors are complete, run:

vxprint -ht

and verify that each volume shows as ENABLED and ACTIVE, e.g.:

v usr – ENABLED ACTIVE 4283208 fsgen – ROUND
pl usr-01 usr ENABLED ACTIVE 4283208 CONCAT – RW
sd rootdisk-03 usr-01 rootdisk 31080352 4283208 0 c0t0d0 ENA
pl usr-02 usr ENABLED ACTIVE 4283208 CONCAT – RW
sd disk01-03 usr-02 disk01 2709400 4283208 0 c0t1d0 ENA
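
A trivial Bourne-shell loop can be used to keep an eye on the plex
states while the resync runs (the 300 second interval is arbitrary;
interrupt it with Ctrl-C when finished):

while :
do
    vxprint -g rootdg -ht | grep '^pl'
    sleep 300
done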

12 – Now build the datadg volumes. In this example, we are building 3 simple
volumes, one per disk, and we use the filesystem name as the volume name.

A. Get the maxsize of the drive(s):

vxassist -g datadg maxsize disk02

B. Now create the mh volume using the maxsize returned above (17228m)
on disk02:

vxassist -g datadg make mh 17228m layout=concat disk02

C. Now create the ms volume using the maxsize returned above (17228m)
on disk03:

vxassist -g datadg make ms 17228m layout=concat disk03

D. Now create the amt volume using the maxsize returned above (17228m)
on disk04:

vxassist -g datadg make amt 17228m layout=concat disk04

13 – Now build the VXFS filesystems on each volume:

mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/mh
mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/ms
mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/amt
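
The standard Solaris fstyp utility can be used to confirm that each
volume now carries a vxfs filesystem:

fstyp /dev/vx/rdsk/datadg/mh
fstyp /dev/vx/rdsk/datadg/ms
fstyp /dev/vx/rdsk/datadg/amt

Each command should simply print vxfs.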

14 – At this point the volumes and filesystems are ready to go. Use vxprint
to verify all configs:

vxprint -ht | more

The output will look like:

Disk group: rootdg

DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE

dg rootdg default default 0 989258071.1025.node1

dm disk01 c0t1d0s2 sliced 4711 35363560 –
dm rootdisk c0t0d0s2 sliced 4711 35363560 –

v opt – ENABLED ACTIVE 20662120 fsgen – ROUND
pl opt-01 opt ENABLED ACTIVE 20662120 CONCAT – RW
sd rootdisk-04 opt-01 rootdisk 10418232 20662120 0 c0t0d0 ENA
pl opt-02 opt ENABLED ACTIVE 20662120 CONCAT – RW
sd disk01-04 opt-02 disk01 6992608 20662120 0 c0t1d0 ENA

v rootvol – ENABLED ACTIVE 607848 root – ROUND
pl rootvol-01 rootvol ENABLED ACTIVE 607848 CONCAT – RW
sd rootdisk-02 rootvol-01 rootdisk 0 607848 0 c0t0d0 ENA
pl rootvol-02 rootvol ENABLED ACTIVE 607848 CONCAT – RW
sd disk01-01 rootvol-02 disk01 0 607848 0 c0t1d0 ENA

v swapvol – ENABLED ACTIVE 2101552 swap – ROUND
pl swapvol-01 swapvol ENABLED ACTIVE 2101552 CONCAT – RW
sd rootdisk-01 swapvol-01 rootdisk 607848 2101552 0 c0t0d0 ENA
pl swapvol-02 swapvol ENABLED ACTIVE 2101552 CONCAT – RW
sd disk01-02 swapvol-02 disk01 607848 2101552 0 c0t1d0 ENA

v usr – ENABLED ACTIVE 4283208 fsgen – ROUND
pl usr-01 usr ENABLED ACTIVE 4283208 CONCAT – RW
sd rootdisk-03 usr-01 rootdisk 31080352 4283208 0 c0t0d0 ENA
pl usr-02 usr ENABLED ACTIVE 4283208 CONCAT – RW
sd disk01-03 usr-02 disk01 2709400 4283208 0 c0t1d0 ENA

v var – ENABLED ACTIVE 7708832 fsgen – ROUND
pl var-01 var ENABLED ACTIVE 7708832 CONCAT – RW
sd rootdisk-05 var-01 rootdisk 2709400 7708832 0 c0t0d0 ENA
pl var-02 var ENABLED ACTIVE 7708832 CONCAT – RW
sd disk01-05 var-02 disk01 27654728 7708832 0 c0t1d0 ENA

Disk group: datadg

DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE

dg datadg default default 59000 989266501.1167.node1

dm disk02 c1t1d0s2 sliced 4095 35282944 –
dm disk03 c1t1d1s2 sliced 4095 35282944 –
dm disk04 c1t1d2s2 sliced 4095 35282944 –

v amt – ENABLED ACTIVE 35282944 fsgen – SELECT
pl amt-01 amt ENABLED ACTIVE 35282944 CONCAT – RW
sd disk04-01 amt-01 disk04 0 35282944 0 c1t1d2 ENA

v mh – ENABLED ACTIVE 35282944 fsgen – SELECT
pl mh-01 mh ENABLED ACTIVE 35282944 CONCAT – RW
sd disk02-01 mh-01 disk02 0 35282944 0 c1t1d0 ENA

v ms – ENABLED ACTIVE 35282944 fsgen – SELECT
pl ms-01 ms ENABLED ACTIVE 35282944 CONCAT – RW
sd disk03-01 ms-01 disk03 0 35282944 0 c1t1d1 ENA

15 – If the volumes/filesystems are to be managed as part of a cluster,
they should now be added to it. If not, the filesystems can be
added to node 1 using standard methods:

mkdir /mh
mkdir /ms
mkdir /amt
edit /etc/vfstab and add the 3 new volumes/filesystems
mountall
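
The vfstab entries added above would look something like the following
(the fsck pass and mount-at-boot values shown are typical choices; set
mount-at-boot to no if the filesystems will instead be brought online
by cluster software):

/dev/vx/dsk/datadg/mh   /dev/vx/rdsk/datadg/mh   /mh   vxfs  3  yes  -
/dev/vx/dsk/datadg/ms   /dev/vx/rdsk/datadg/ms   /ms   vxfs  3  yes  -
/dev/vx/dsk/datadg/amt  /dev/vx/rdsk/datadg/amt  /amt  vxfs  3  yes  -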

At this point, all filesystems should be online and operational. To manually
fail over the array between nodes:

1 – Unmount the filesystems on node 1

umount /mh
umount /ms
umount /amt

2 – Deport the datadg disk group from node 1 (you can also use
the vxdiskadm menu for this):

vxdg deport datadg

3 – Import the datadg disk group on node 2 (you can also use
the vxdiskadm menu for this):

vxdg import datadg

NOTE: If the disks show as offline on node 2 (vxdisk list),
then you need to put them online before importing, e.g.:

vxdisk online c1t1d0s2

Also, if the first node has failed and you were unable
to deport the disks first, you would need to force the
import with the -f option.
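
For example:

vxdg -f import datadg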

4 – Enable the imported volumes:

vxrecover -s

This will attempt to enable all volumes. If you wish to
enable only certain volumes, use:

vxrecover -s [volume_name]

If you forget this step, you will get an error about the
volume not existing when you try to mount/access it.

5 – Mount the filesystems

mount /dev/vx/dsk/datadg/mh /mh
mount /dev/vx/dsk/datadg/ms /ms
mount /dev/vx/dsk/datadg/amt /amt
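
If node 2 does not have vfstab entries for these filesystems, the
filesystem type may need to be specified explicitly, e.g.:

mount -F vxfs /dev/vx/dsk/datadg/mh /mh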

To fail back, use the same procedure with the node roles reversed.