Tuesday, November 4, 2014
Today a colleague asked me about a strange situation when adding new disks to an existing disk group in AIX 7.1. He had two new disks from two different storage arrays. The first disk could be added as usual:
# vxdg -g oradg adddisk emc_clariion1_79
But when he tried to add the second disk, he got the following error:
# vxdg -g oradg adddisk emc_clariion0_80
VxVM vxdg ERROR V-5-1-0 Disk Group oradg has only standard disks and trying to add cloned disk to diskgroup.Mix of standard and cloned disks in a diskgroup is not allowed. Please follow the vxdg (1M) man page.
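So VxVM considered the second disk a clone disk (its UDID did not match, as happens after hardware replication). One plausible fix, assuming the disk really is the one you want and not an unintended copy, is to refresh the UDID and clear the clone flag before adding it again:
# vxdisk updateudid emc_clariion0_80
# vxdisk set emc_clariion0_80 clone=off
# vxdg -g oradg adddisk emc_clariion0_80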
Wednesday, April 16, 2014
Destroy a deported disk group
Today I wanted to destroy a disk group that was deported. An import failed because there were missing disks (it was just a test environment, so losing the data was acceptable):
# vxdg import testdg
VxVM vxdg ERROR V-5-1-10978 Disk group testdg: import failed:
Disk for disk group not found
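Since the data was expendable anyway, one plausible way out (a sketch; behavior may differ between VxVM releases) is to force the import despite the missing disks and destroy the group afterwards:
# vxdg -Cf import testdg
# vxdg destroy testdg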
Wednesday, December 11, 2013
VxVM vxassist ERROR V-5-1-15304 Cannot allocate space for dco volume
That one gave me a headache, and I couldn't find any solution on the internet. So maybe this article will help someone some day.
Story so far: a customer has a RAC environment based on Veritas CVM. One day a disk was in state FAILING like this:
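For illustration (the device, disk, and group names here are made up), such a disk shows up in vxdisk output roughly like this:
# vxdisk list | grep failing
emc0_01      auto:cdsdisk    oradg01      oradg        online failing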
Thursday, November 14, 2013
plex in RECOVER state, subdisk in RELOCATE state
After a 'crash', plexes were in RECOVER state and subdisks in RELOCATE state (the customer had updated and rebooted his storage arrays online, which caused Veritas to fail).
The system was an HP-UX 11.31. The situation was like that:
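For illustration (the disk group name here is a placeholder), vxprint shows such states like this, and vxrecover can resynchronize the affected volumes in the background afterwards:
# vxprint -g oradg -ht
...
# vxrecover -g oradg -sb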
Tuesday, June 25, 2013
VxVM vxdisksetup ERROR V-5-2-5716 Disk c1t1d0 is in use by ZFS.
Haha, this is a stupid one, since the error message already contains the solution.
If you ever want or need to reuse a disk that was previously used by ZFS as a disk for VxVM, you might encounter the following error:
# vxdisksetup -i c1t1d0
VxVM vxdisksetup ERROR V-5-2-5716 Disk c1t1d0 is in use by ZFS. Slice(s) 0 are in use as ZFS zpool (or former) devices.
If you still want to initialize this device for VxVM use, please destroy the zpool by running 'zpool' command if it is still active, and then remove the ZFS signature from each of these slice(s) as follows:
dd if=/dev/zero of=/dev/vx/rdmp/c1t1d0s[n] oseek=31 bs=512 count=1
[n] is the slice number.
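For slice 0 from the example above, the whole sequence might look like this (the pool name is hypothetical; skip the zpool step if the pool is already gone):
# zpool destroy testpool
# dd if=/dev/zero of=/dev/vx/rdmp/c1t1d0s0 oseek=31 bs=512 count=1
# vxdisksetup -i c1t1d0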
Monday, June 24, 2013
VxVM vxvol ERROR V-5-1-607 Diskgroup oracledg not found
Last night a customer had a blackout in his small datacenter. Luckily, all machines came up again without any problem. The only problem he noticed was his Veritas environment: volumes seemed to be mounted without content, disk groups could not be found, and so on.
The first thing I decided to do was to unmount all volumes, starting with a list of all volumes in the Oracle disk group.
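A sketch of those steps (the mount point is a placeholder; yours will differ): list the volumes, unmount the file systems, then deport and re-import the disk group and start its volumes again:
# vxprint -g oracledg -v
...
# umount /ora01
# vxdg deport oracledg
# vxdg import oracledg
# vxvol -g oracledg startall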
Monday, May 27, 2013
VxVM vxvol ERROR V-5-1-10128 Configuration daemon error 441
When you get the following error while trying to stop a Veritas volume:
# vxvol -g testdg stop testvol01
VxVM vxvol ERROR V-5-1-10128 Configuration daemon error 441
Then run a flush on the disk group that contains the volume:
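With the names from the example above, that is:
# vxdg flush testdg
# vxvol -g testdg stop testvol01
After the flush, the stop should go through.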
Wednesday, February 13, 2013
Upgrade VxFS
First get the current version:
# fstyp -v /dev/vx/dsk/testdg/testvol
vxfs
magic a501fcf5 version 7 ctime Mon Sep 10 03:06:00 2012
...
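The upgrade itself is done with vxupgrade against the mount point (the mount point /test here is a placeholder). Note that vxupgrade may only step the layout one version at a time, so repeat and re-check as needed:
# vxupgrade -n 8 /test
# fstyp -v /dev/vx/dsk/testdg/testvol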
Thursday, January 31, 2013
Scan for new LUN(s) in AIX
After creating a LUN on the storage array and mapping the new LUN to an AIX server, you should list the current disks:
# vxdisk -e list
...
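To make AIX and VxVM actually see the new LUN, a typical sequence (a sketch; details vary with your setup) is to run the AIX configuration manager and then let VxVM rescan:
# cfgmgr
# vxdctl enable
# vxdisk -e list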
Monday, November 26, 2012
VxVM enable failed: License has expired
When you work with Veritas Storage Foundation, you need a valid license key. Otherwise you might encounter the following message:
VxVM vxconfigd ERROR V-5-1-1589 enable failed: License has expired or is not available for operation
transactions are disabled.
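A sketch of the way back to a working system (the key string is obviously a placeholder): check the installed licenses, install a valid key, and restart the configuration daemon:
# vxlicrep
# vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
# vxconfigd -k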
Tuesday, November 20, 2012
vxdisk list: state=disabled
Today I had a strange situation with VxVM. When running vxprint I saw several disks in an error state. A closer look at the devices gave me a clue:
# vxdisk list stor1_42
...
c1t20140080E518D5C0d26s2 state=disabled type=primary
c3t20140080E518D5C0d26s2 state=disabled type=primary
c4t20150080E518D5C0d26s2 state=disabled type=secondary
c2t20150080E518D5C0d26s2 state=disabled type=secondary
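All paths to the device were disabled. As a sketch (the controller name is taken from the first path above), the DMP controllers can be checked and re-enabled one by one, followed by a rescan:
# vxdmpadm listctlr all
# vxdmpadm enable ctlr=c1
# vxdctl enable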
Monday, November 12, 2012
vxprint: NODEVICE
Today I had a broken Veritas mirror. In the middle of the night, one path to a storage array went missing, and when I took a look at the Veritas configuration, I got the following (output truncated):
# vxprint -g oracledg
Disk group: oracledg
...
v oravol03 fsgen ENABLED 1570766848 - ACTIVE - -
pl oravol03-01 oravol03 ENABLED 1570766848 - NODEVICE - -
sd stor1-vol003-01 oravol03-01 ENABLED 1570766848 0 NODEVICE - -
pl oravol03-02 oravol03 ENABLED 1570766848 - ACTIVE - -
sd stor2-vol003-01 oravol03-02 ENABLED 1570766848 0 - - -
...
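Once the failed path is back, a plausible recovery (disk group name from the output above) is to rescan the devices, reattach the disks, and resynchronize the mirror in the background:
# vxdctl enable
# vxreattach
# vxrecover -g oracledg -sb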