{"id":106,"date":"2004-08-09T08:30:34","date_gmt":"2004-08-09T06:30:34","guid":{"rendered":""},"modified":"2007-07-03T09:47:28","modified_gmt":"2007-07-03T07:47:28","slug":"veritas-fs","status":"publish","type":"post","link":"http:\/\/www.lookit.org\/blog\/?p=106","title":{"rendered":"Veritas FS"},"content":{"rendered":"<p>Petit tutorial sur Veritas FS, gestion du file system de VERITAS.<br \/><!--more--><br \/>Last Updated: Thu May 10 14:23:37 CDT 2001<\/p>\n<p>\t\t\tA1000 Dual Port Setup and Configuration<br \/>\t\t\twith Veritas Volume Manager and Filesystem<br \/>\t\t&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<\/p>\n<p>Overview:<\/p>\n<p>The following tutorial documents the steps used to configure a single<br \/>A1000 array to be shared between two hosts.  It then documents the<br \/>steps used to create simple Veritas volumes and filesystems on each<br \/>of the logical array drives, as well as the steps necessary to deport<br \/>and import the volumes between the two hosts.<\/p>\n<p>This tutorial is based on an actual install of the configuration used<br \/>in the examples.<\/p>\n<p>Hardware configs:<\/p>\n<p>\t\t     Sun StorEdge A1000 &#8211; SCSI Target ID 1<br \/>\t\t\t   w\/ 7 18GB disk drives<\/p>\n<p>                   d0   d1   d2   HS         d0   d1   d2<br \/>\t\t &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<br \/>\t\t|  2,0  2,1  2,2  2,3     |  1,0  1,1  1,2      |<br \/>\t\t &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<br \/>\t\t\t|\t\t\t|<br \/>\t\t\t| controller 1\t\t| controller 1<br \/>\t\t\t|\t\t\t|<br \/>\t\t &#8212;&#8212;&#8212;&#8212;&#8211;\t\t&#8212;&#8212;&#8212;&#8212;-<br \/>\t\t |   E220R    |\t\t|   E220R   |<br \/>\t\t &#8212;&#8212;&#8212;&#8212;&#8211;\t\t&#8212;&#8212;&#8212;&#8212;-<br \/>\t\t      node1\t\t      node2<br \/>                        |                       |<br \/>                    c0t0d0                  c0t0d0      Internal root drive<br \/>                    c0t1d0                  c0t1d0      Internal root mirror<\/p>\n<p>&#8211; Both E220R&#39;s have a dual-channel Differential Ultra-SCSI card, with the                           <br \/>  first port (controller 1) connected to the A1000 array.                            <\/p>\n<p>&#8211; The A1000 array was configured as 3 logical drives, plus one hot spare.  <br \/>  The logical drives consist of 2 18GB disk drives mirrored (RAID 1).<\/p>\n<p>&#8211; The A1000 logical drives are seen as:<\/p>\n<p>\tc1t1d0, c1t1d1, c1t1d2<\/p>\n<p>&#8211; Final configs are setup as:<\/p>\n<p>  Internal Drives:<\/p>\n<p>\tBoot drive c0t0d0 encapsulated as rootdisk and mirrored to disk01 (c0t1d0)<\/p>\n<p>   A1000 Drives are all simple (concatenated) veritas volumes with vxfs filesystems:<\/p>\n<p>\t\/mh = disk02 = c1t1d0 = RAID 1 hardware mirror using array drives 1,0 and 2,0<\/p>\n<p>\t\/ms = disk03 = c1t1d1 = RAID 1 hardware mirror using array drives 1,1 and 2,1<\/p>\n<p>\t\/amt = disk04 = c1t1d2 = RAID 1 hardware mirror using array drives 1,2 and 2,2<\/p>\n<p>\tHardware RAID Hot Spare drive is 2,3<\/p>\n<p>&#8211; Steps used to configure RAID:<\/p>\n<p>\t1. Installed A1000 in Rack<\/p>\n<p>\t2. Cabled A1000 to port 1 of dual-port differential SCSI card on node1<\/p>\n<p>\t3. Set SCSI ID switch on back of A1000 to target 1<\/p>\n<p>\t4. 
	10. Verified that /etc/osa/mnf does not have a period (.) in the
	    name.  There is a known problem with this; if it does, change
	    it to an underscore (_).  This was per the A1000 install notes at:
	    http://www.eng.auburn.edu/pub/mail-lists/ssastuff/Solaris8-A1000.html

	11. Installed the RAID Manager jumbo patch 108834-09

	12. Performed a reconfiguration reboot:

		touch /reconfigure
		init 6

	13. Verified the A1000 could be seen, and what firmware revision
	    level it was at:

		/usr/lib/osa/bin/raidutil -c c1t1d0 -i

	    Firmware was currently at:
		03.01.02.35

	14. Upgraded the firmware to the current versions.  The current
	    versions are included as part of the patch install, and are
	    stored in /usr/lib/osa/fw.

	    To install the latest boot code:

		/usr/lib/osa/bin/fwutil 03010304.bwd c1t1d0

	    To install the latest firmware:

		/usr/lib/osa/bin/fwutil 03010363.apd c1t1d0

	15. Verified the A1000 was updated:

		/usr/lib/osa/bin/raidutil -c c1t1d0 -i

	    Reports something similar to:

		LUNs found on c1t1d0.
		LUN 0    RAID 5    103311 MB

		Vendor ID         Symbios
		ProductID         StorEDGE A1000
		Product Revision  0301
		Boot Level        03.01.03.04
		Boot Level Date   07/06/00
		Firmware Level    03.01.03.63
		Firmware Date     03/15/01
		raidutil succeeded!

	16. Verified that all the drives can be seen:

		/usr/lib/osa/bin/drivutil -i c1t1d0

	    This will report something similar to:

Drive Information for ig028_002

Location  Capacity   Status         Vendor  Product          Firmware 	Serial
            (MB)                              ID             Version  	Number
[1,0]     17274      Optimal        IBM     DDYST1835SUN18G  S96H       010811E164
[2,0]     17274      Optimal        IBM     DDYST1835SUN18G  S96H       0108109219
[1,1]     17274      Optimal        IBM     DDYST1835SUN18G  S96H       0108115692
[2,1]     17274      Optimal        IBM     DDYST1835SUN18G  S96H       010811E211
[1,2]     17274      Optimal        IBM     DDYST1835SUN18G  S96H       010810V958
[2,2]     17274      Optimal        IBM     DDYST1835SUN18G  S96H       01081WH714
[2,3]     17274      Spare-Stdby    IBM     DDYST1835SUN18G  S96H       010810V946

	17. Deleted the default LUN 0 configuration:

		/usr/lib/osa/bin/raidutil -c c1t1d0 -D 0

	    Reports something similar to:

		LUNs found on c1t1d0.
		LUN 0    RAID 5    103644 MB
		Deleting LUN 0.
		Press Control C to abort.

 		LUNs successfully deleted

		raidutil succeeded!

	18. Created the RAID 1 drive mirrors by mirroring the following
	    pairs of drives:

		1,0 --> 2,0   mirrored as logical unit 0  (d0)
		1,1 --> 2,1   mirrored as logical unit 1  (d1)
		1,2 --> 2,2   mirrored as logical unit 2  (d2)

	    Commands used:

		/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 0 -s 0 -r fast -g 10,20

		/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 1 -s 0 -r fast -g 11,21

		/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 2 -s 0 -r fast -g 12,22

	19. Created the hot spare drive using drive 2,3:

		/usr/lib/osa/bin/raidutil -c c1t1d0 -h 23
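	    For reference, here is the first LUN-creation command from
	    step 18 again with each flag glossed.  The meanings are
	    inferred from how the flags are used in this procedure, so
	    treat the comments as a best-effort reading:

		# -c c1t1d0   controller device to operate on
		# -l 1        RAID level of the new LUN (1 = mirror)
		# -n 0        LUN number to create (seen by the host as d0)
		# -s 0        LUN size; 0 appears to mean all available capacity
		# -r fast     reconstruction rate for the mirror
		# -g 10,20    member drives, here [1,0] and [2,0]
		/usr/lib/osa/bin/raidutil -c c1t1d0 -l 1 -n 0 -s 0 -r fast -g 10,20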
	20. Partitioned and labeled each new logical drive with a single
	    slice 2 covering the whole disk, using the format command.
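	    To confirm the new labels without walking back through format
	    on each drive, a small loop over prtvtoc works (a minimal
	    sketch; adjust the device names if your controller numbering
	    differs):

		# Dump the VTOC of each logical array drive; slice 2
		# should cover the whole disk
		for d in c1t1d0s2 c1t1d1s2 c1t1d2s2; do
			prtvtoc /dev/rdsk/$d
		done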
At this point the A1000 was ready to go on the first system.  If this
were the only system, you would just build your filesystems and mount
the drives at this point, the same as any other drive.

Because we were dual-porting this between two nodes of a future cluster,
we now needed to configure the second node.  We need to change the
scsi-initiator-id of the second controller to 6 (from the default of 7)
so that both SCSI controllers can be connected to the array at the
same time.

To configure the second node:

	1. Leave the array disconnected from the second system for now

	2. Power on the second E220R

	3. Update the nvramrc to set the controller to id 6 per Sun
	   Infodoc 20704, "Setting the scsi-initiator-id on PCI Systems
	   with Sun Cluster Software".  (This applies to any dual-ported
	   system, cluster or not.)

		A. From the OBP prompt, get the list of controllers:

			ok  setenv auto-boot? false
			ok  reset-all
			ok  probe-scsi-all

		B. Edit the nvramrc using the path for the SCSI
		   controller(s) that you are changing:

      			ok  nvedit
      			0:  probe-all install-console banner
      			1:  cd /pci@1f,4000/scsi@2,1
      			2:  6 " scsi-initiator-id" integer-property
      			3:  device-end
      			4:  cd /pci@1f,4000/scsi@2
      			5:  6 " scsi-initiator-id" integer-property
      			6:  device-end
      			7:  banner (Control C)

		C. Press ctrl-c, then store the nvramrc:

      			ok  nvstore

		D. Set the system to use the nvramrc:

      			ok  setenv use-nvramrc? true

		E. Do a reset:

      			ok  reset-all

	4. Verify the nvramrc settings were saved and that the
	   scsi-initiator-id was changed to 6 on the card:

		ok  cd /pci@1f,4000/scsi@2,1
		ok  .properties

	   It should report something like:

		"scsi-initiator-id    000000006"

	5. Cable the second system (node2) via port 1 of its dual-port
	   differential SCSI card to the second SCSI port on the A1000.

	6. Reset the system again and then probe the SCSI bus to verify
	   it sees the array:

		ok  reset-all
		ok  probe-scsi-all

	7. Reset the auto-boot parameter, and then reset the system and
	   allow it to boot:

		ok  setenv auto-boot? true
		ok  reset-all

	8. Install the RAID Manager 6.22 software and jumbo patch as you
	   did on the first node.

	9. Verify the RAID Manager software can see the configured array:

	   /usr/lib/osa/bin/raidutil -c c1t1d0 -i
	   /usr/lib/osa/bin/drivutil -i c1t1d0

	   *** DO NOT CONFIGURE THE RAID - IT IS ALREADY CONFIGURED ***

	10. Verify that the OS utilities (format, prtvtoc, etc.) can see
	    the drives.

At this point, the hardware is all configured.  Next we need to
configure the Volume Manager and File System software.

VXVM/VXFS configs:

	1. Install the VXVM and VXFS software:

		mount -r -F hsfs /dev/sr0 /cdrom
		cd /cdrom/... (don't remember the exact path)
		pkgadd -d . VRTSvxvm VRTSvmdev VRTSvmdoc VRTSvmman VRTSvmsa VRTSvxfs VRTSfsdoc

	   This installs the following packages:

		system      VRTSfsdoc      VERITAS File System Documentation Package
		system      VRTSvmdev      VERITAS Volume Manager, Header and Library Files
		system      VRTSvmdoc      VERITAS Volume Manager (user documentation)
		system      VRTSvmman      VERITAS Volume Manager, Manual Pages
		system      VRTSvmsa       VERITAS Volume Manager Storage Administrator
		system      VRTSvxfs       VERITAS File System
		system      VRTSvxvm       VERITAS Volume Manager, Binaries

	   NOTE: The man pages and docs are all optional.  Also, the
		 latest packages can be obtained via the Veritas ftp site
		 after contacting Veritas.

	2 - Install the Veritas licenses:

		vxserial -c

	    Enter the license key for each product.  At a minimum, you
	    need the base Volume Manager key and the Veritas filesystem
	    key.  If you will be using RAID 5, also enter that key.

	3 - Run vxinstall to complete the installation:

		vxinstall

	    This will prompt for a quick or custom install.  Select Quick
	    Installation:

		1. Quick Installation
			- encapsulate the boot drive c0t0d0
			- use default disk names
			- initialize the mirror drive c0t1d0
			- initialize all drives on controller 1 (the array)

	   NOTE: In order to properly encapsulate the boot drive, you
		 need to have:

		- an unused cylinder at the beginning or end of the drive
		- slices 3 and 4 unused

		To do this, I usually label the boot drive to have the
		root slice 0 start at cylinder 1, and then only use
		slices 1,5,6,7 for swap and the other filesystems.

	4 - Reboot the system when prompted

	5 - Verify all drives were configured and are seen:

		vxdisk list

	6 - Verify the root drive was encapsulated.  The /etc/vfstab file
	    and df should both show that the root filesystems are now
	    using /dev/vx devices.

	7 - Delete the array drives from rootdg (the quick install
	    creates only rootdg and places all drives in it):

		vxdg -g rootdg rmdisk disk02
		vxdg -g rootdg rmdisk disk03
		vxdg -g rootdg rmdisk disk04

	8 - Now initialize the datadg disk group.  You do this by naming
	    the first disk that will be in the group:

		vxdg init datadg disk02=c1t1d0s2

	9 - Now add the remaining drives to datadg:

		vxdg -g datadg adddisk disk03=c1t1d1s2
		vxdg -g datadg adddisk disk04=c1t1d2s2
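	    Before building volumes, it is worth confirming the group
	    membership.  This quick check is not in the original write-up:

		vxdg list datadg	# group should show as enabled
		vxdisk list		# disk02-disk04 should list datadg as their group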
	10 - Mirror the root drive.  First we mirror the root filesystem
	     and make the mirror drive bootable:

		/etc/vx/bin/vxrootmir disk01

	11 - Now mirror the remainder of the root volumes:

		vxassist -g rootdg mirror swapvol disk01
		vxassist -g rootdg mirror usr disk01
		vxassist -g rootdg mirror opt disk01
		vxassist -g rootdg mirror var disk01

	     NOTE:  You cannot reboot until the volumes have completed
		    the mirroring process.  If you do, you have to start
		    them again.  To verify the mirrors are done, run:

			vxprint -ht

		    Each volume should show as ENABLED and ACTIVE, e.g.:

	v  usr          -            ENABLED  ACTIVE   4283208  fsgen     -        ROUND
	pl usr-01       usr          ENABLED  ACTIVE   4283208  CONCAT    -        RW
	sd rootdisk-03  usr-01       rootdisk 31080352 4283208  0         c0t0d0   ENA
	pl usr-02       usr          ENABLED  ACTIVE   4283208  CONCAT    -        RW
	sd disk01-03    usr-02       disk01   2709400  4283208  0         c0t1d0   ENA
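	     The four vxassist calls in step 11 can also be issued as a
	     loop; this is just a compact equivalent of the same commands:

		# Mirror each remaining rootdg volume onto disk01
		for v in swapvol usr opt var; do
			vxassist -g rootdg mirror $v disk01
		done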
	12 - Now build the datadg volumes.  In this example, we build 3
	     simple volumes, one per disk, and use the filesystem name as
	     the volume name.

		A. Get the maxsize available on the drive(s):

			vxassist -g datadg maxsize disk02

		B. Now create the mh volume on disk02 using the maxsize
		   returned above (17228m):

			vxassist -g datadg make mh 17228m layout=concat disk02

		C. Now create the ms volume on disk03 using the same
		   maxsize:

			vxassist -g datadg make ms 17228m layout=concat disk03

		D. Now create the amt volume on disk04 using the same
		   maxsize:

			vxassist -g datadg make amt 17228m layout=concat disk04

	13 - Now build the VXFS filesystems on each volume:

		mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/mh
		mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/ms
		mkfs -F vxfs -o largefiles /dev/vx/rdsk/datadg/amt
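	     If you want to confirm that the filesystems were actually
	     written, fstyp on the raw devices should answer vxfs (a
	     small sanity check, assuming the standard Solaris fstyp
	     utility):

		# Each of these should print "vxfs"
		for v in mh ms amt; do
			fstyp /dev/vx/rdsk/datadg/$v
		done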
	14 - At this point the volumes and filesystems are ready to go.
	     Use vxprint to verify all configs:

		vxprint -ht | more

	     The output will look like:

Disk group: rootdg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg rootdg       default      default  0        989258071.1025.node1

dm disk01       c0t1d0s2     sliced   4711     35363560 -
dm rootdisk     c0t0d0s2     sliced   4711     35363560 -

v  opt          -            ENABLED  ACTIVE   20662120 fsgen     -        ROUND
pl opt-01       opt          ENABLED  ACTIVE   20662120 CONCAT    -        RW
sd rootdisk-04  opt-01       rootdisk 10418232 20662120 0         c0t0d0   ENA
pl opt-02       opt          ENABLED  ACTIVE   20662120 CONCAT    -        RW
sd disk01-04    opt-02       disk01   6992608  20662120 0         c0t1d0   ENA

v  rootvol      -            ENABLED  ACTIVE   607848   root      -        ROUND
pl rootvol-01   rootvol      ENABLED  ACTIVE   607848   CONCAT    -        RW
sd rootdisk-02  rootvol-01   rootdisk 0        607848   0         c0t0d0   ENA
pl rootvol-02   rootvol      ENABLED  ACTIVE   607848   CONCAT    -        RW
sd disk01-01    rootvol-02   disk01   0        607848   0         c0t1d0   ENA

v  swapvol      -            ENABLED  ACTIVE   2101552  swap      -        ROUND
pl swapvol-01   swapvol      ENABLED  ACTIVE   2101552  CONCAT    -        RW
sd rootdisk-01  swapvol-01   rootdisk 607848   2101552  0         c0t0d0   ENA
pl swapvol-02   swapvol      ENABLED  ACTIVE   2101552  CONCAT    -        RW
sd disk01-02    swapvol-02   disk01   607848   2101552  0         c0t1d0   ENA

v  usr          -            ENABLED  ACTIVE   4283208  fsgen     -        ROUND
pl usr-01       usr          ENABLED  ACTIVE   4283208  CONCAT    -        RW
sd rootdisk-03  usr-01       rootdisk 31080352 4283208  0         c0t0d0   ENA
pl usr-02       usr          ENABLED  ACTIVE   4283208  CONCAT    -        RW
sd disk01-03    usr-02       disk01   2709400  4283208  0         c0t1d0   ENA

v  var          -            ENABLED  ACTIVE   7708832  fsgen     -        ROUND
pl var-01       var          ENABLED  ACTIVE   7708832  CONCAT    -        RW
sd rootdisk-05  var-01       rootdisk 2709400  7708832  0         c0t0d0   ENA
pl var-02       var          ENABLED  ACTIVE   7708832  CONCAT    -        RW
sd disk01-05    var-02       disk01   27654728 7708832  0         c0t1d0   ENA

Disk group: datadg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg datadg       default      default  59000    989266501.1167.node1

dm disk02       c1t1d0s2     sliced   4095     35282944 -
dm disk03       c1t1d1s2     sliced   4095     35282944 -
dm disk04       c1t1d2s2     sliced   4095     35282944 -

v  amt          -            ENABLED  ACTIVE   35282944 fsgen     -        SELECT
pl amt-01       amt          ENABLED  ACTIVE   35282944 CONCAT    -        RW
sd disk04-01    amt-01       disk04   0        35282944 0         c1t1d2   ENA

v  mh           -            ENABLED  ACTIVE   35282944 fsgen     -        SELECT
pl mh-01        mh           ENABLED  ACTIVE   35282944 CONCAT    -        RW
sd disk02-01    mh-01        disk02   0        35282944 0         c1t1d0   ENA

v  ms           -            ENABLED  ACTIVE   35282944 fsgen     -        SELECT
pl ms-01        ms           ENABLED  ACTIVE   35282944 CONCAT    -        RW
sd disk03-01    ms-01        disk03   0        35282944 0         c1t1d1   ENA

	15 - If the volumes/filesystems are to be managed as part of a
	     cluster, they should now be added to it.  If not, the
	     filesystems can be added to node 1 using standard methods
	     (sample vfstab entries are sketched below):

		mkdir /mh
		mkdir /ms
		mkdir /amt
		edit /etc/vfstab and add the 3 new volumes/filesystems
		mountall
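	     The vfstab entries themselves are not shown above, so the
	     lines below are only a plausible sketch.  Note that on a
	     dual-ported setup you may prefer "no" in the mount-at-boot
	     field, so a booting node does not grab filesystems that its
	     peer currently owns:

	#device to mount        device to fsck           mount point  FS type  fsck pass  mount at boot  options
	/dev/vx/dsk/datadg/mh   /dev/vx/rdsk/datadg/mh   /mh          vxfs     2          yes            -
	/dev/vx/dsk/datadg/ms   /dev/vx/rdsk/datadg/ms   /ms          vxfs     2          yes            -
	/dev/vx/dsk/datadg/amt  /dev/vx/rdsk/datadg/amt  /amt         vxfs     2          yes            -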
To manually<br \/>failover the array between nodes:<\/p>\n<p>\t1 &#8211; Unmount the filesystems on node 1<\/p>\n<p>\t\tumount \/mh<br \/>\t\tumount \/ms<br \/>\t\tumount \/amt<\/p>\n<p>\t2 &#8211; Deport the datadg disk group from node 1 (you can also use<br \/>\t    the vxdiskadm menu for this):<\/p>\n<p>\t\tvxdg deport datadg<\/p>\n<p>\t3 &#8211; Import the datadg disk group on node 2 (you can also use<br \/>\t    the vxdiskadm menu for this):<\/p>\n<p>\t\tvxdg import datadg<\/p>\n<p>\t    NOTE: If the disks show as offline on node 2 (vxdisk list),<br \/>\t\t  then you need to put them online before importing, i.e:<\/p>\n<p>\t\t\tvxdisk online c1t1d0s2<\/p>\n<p>                  Also, if the first node has failed and you were unable<br \/>                  to deport the disks first, you would need to force the<br \/>                  import with the -f option.<\/p>\n<p>\t4 &#8211; Enable the imported volumes:<\/p>\n<p>\t\tvxrecover -s<\/p>\n<p>\t    This will attempt to enable all volumes.  If you wish to<br \/>\t    enable only certain volumes, use:<\/p>\n<p>\t\tvxrecover -s [volume_name]<\/p>\n<p>\t    If you forget this step, you will get an error about the<br \/>\t    volume not existing when you try to mount\/access it.<\/p>\n<p>\t5 &#8211; Mount the filesystems<\/p>\n<p>\t\tmount \/dev\/vx\/dsk\/datadg\/mh \/mh<br \/>\t\tmount \/dev\/vx\/dsk\/datadg\/ms \/ms<br \/>\t\tmount \/dev\/vx\/dsk\/datadg\/amt \/amt<\/p>\n<p>To fail back, you would use the same procedure.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Petit tutorial sur Veritas FS, gestion du file system de VERITAS.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[8],"tags":[],"class_list":["post-106","post","type-post","status-publish","format-standard","hentry","category-informatique"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pWrTo-1I","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=\/wp\/v2\/posts\/106","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=106"}],"version-history":[{"count":0,"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=\/wp\/v2\/posts\/106\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=106"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=106"},{"ta
xonomy":"post_tag","embeddable":true,"href":"http:\/\/www.lookit.org\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}