I have some FC hardware and thought I'd try to set up a small SAN. Here is my hardware list:
Server (hq01): Slackware64
FC Adapter: QLogic 2300 (1 FC Port)
SAN switch (ssw01): Compaq StorageWorks SAN Switch 8 (Brocade)
Storage (stor01): Sun StorEdge 6120 (14x33GB)
The server is connected with one FC cable to port 0 of the SAN switch, and the StorEdge 6120 is also connected with one FC cable, to port 1. This article covers the following topics:
Setting up the FC adapter
Configuring the Sun StorEdge 6120
Bringing it all together: setting up the SAN switch
Using the new LUN
I had some trouble setting it all up, especially with the QLogic adapter. The following chapters are straightforward, though. Hope you enjoy!
Setting up the FC adapter
Setting up the FC adapter gave me a slight headache. I had several adapters available, but I couldn't bring any of them up and running inside my Slackware machine. In the end I bought a QLogic 2300 adapter, and even this adapter was tricky to handle. I tried a lot: reloading the driver, setting up an initial ramdisk with the qla2xxx driver, checking the firmware, trying another firmware, etc. Nothing worked. All I saw was the following log:
# dmesg
...
[ 61.920135] qla2xxx [0000:02:09.0]-0063:0: Failed to load firmware image (ql2300_fw.bin).
[ 61.924957] qla2xxx [0000:02:09.0]-0083:0: Firmware image unavailable.
...
The final hint was to use modprobe -r instead of rmmod for unloading the driver:
# modprobe -r qla2xxx
FATAL: Module qla2xxx is builtin.
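Whether a driver is built in can also be read from the kernel configuration; CONFIG_SCSI_QLA_FC is the option behind qla2xxx. A small sketch, assuming CONFIG_IKCONFIG_PROC is enabled so the running kernel exports its config:

```shell
# "=y" means built into the kernel, "=m" means built as a module.
zcat /proc/config.gz | grep CONFIG_SCSI_QLA_FC
```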
If you try to unload the qla2xxx driver with rmmod, it won't give you this nice hint. So I decided to recompile the kernel. The fun part is that the qla2xxx driver is already configured as a module, and the module resides in /lib/modules/3.2.29/:
# ls /lib/modules/3.2.29/kernel/drivers/scsi/qla2xxx/
qla2xxx.ko
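For the record: when qla2xxx runs as a module, the usual fix for this firmware error is simply to install the blob; ql2300_fw.bin is part of the linux-firmware collection. A sketch (it assumes you obtained the file; a builtin driver probes before the root filesystem, and thus /lib/firmware, is available, which is presumably why nothing short of rebuilding helped here):

```shell
# Put the firmware blob where the kernel's firmware loader looks,
# then reload the driver so it retries the firmware request.
cp ql2300_fw.bin /lib/firmware/
modprobe -r qla2xxx
modprobe qla2xxx
```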
Anyway, after one frustrating day I recompiled the kernel (without any configuration changes):
# cd /usr/src/linux
# make all && make modules_install
And copied the new image to /boot:
# cp /usr/src/linux/arch/x86/boot/bzImage /boot/qla
Added the new kernel to lilo.conf:
# vi /etc/lilo.conf
...
image = /boot/qla
root = /dev/sda2
label = Linux-qla
read-only
...
Installed lilo again:
# lilo
Added Linux-qla
...
And rebooted:
# shutdown -r now
...
And voila, the driver loaded without any errors:
# lspci -v -s `lspci | awk '/Fibre/ {print $1}'`
02:09.0 Fibre Channel: QLogic Corp. QLA2300 64-bit Fibre Channel Adapter (rev 01)
Subsystem: QLogic Corp. Device 0106
Flags: bus master, 66MHz, medium devsel, latency 128, IRQ 17
I/O ports at 4000 [size=256]
Memory at f0220000 (64-bit, non-prefetchable) [size=4K]
[virtual] Expansion ROM at 40000000 [disabled] [size=128K]
Capabilities: [44] Power Management version 2
Capabilities: [4c] PCI-X non-bridge device
Capabilities: [54] MSI: Enable- Count=1/8 Maskable- 64bit+
Kernel driver in use: qla2xxx
The server is ready for some FC LUNs now.
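The link state can also be verified through the FC transport class in sysfs. A sketch, assuming the driver registered an fc_host entry; "Online" means the port sees the fabric:

```shell
# Print the port state of every FC HBA known to the kernel
# (typical values: Online, Linkdown, Offline).
# The loop simply does nothing on machines without FC hardware.
for s in /sys/class/fc_host/host*/port_state; do
    if [ -r "$s" ]; then
        cat "$s"
    fi
done
```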
Configuring the Sun StorEdge 6120
First, get a little familiar with the storage and display the available disks:
stor01:/:<1>fru stat
...
DISK STATUS STATE ROLE PORT1 PORT2 TEMP VOLUME
------ ------- ---------- ---------- --------- --------- ---- ------
u1d01 ready enabled unassigned ready ready 14 -
...
u1d14 ready enabled unassigned ready ready 16 -
...
Then create a RAID 1 volume with six disks and one spare disk:
stor01:/:<2>vol add v0 data u1d1-6 raid 1 standby u1d14
The above command creates a RAID 1 volume named v0 from the first six disks (d1-6) of the first enclosure (u1) and uses disk 14 as a hot spare. Since RAID 1 mirrors the disks, the usable capacity is half the raw capacity, which is why six 33 GB disks yield the roughly 100 GB listed further down. Next, initialize and mount the new volume (initializing may take some time):
stor01:/:<3>vol init v0 data
WARNING - Volume data will be initialized to zero.
WARNING - Volume initialization can take a significant amount of time.
Continue ? [N]: Y
Volume initialization in progress...
stor01:/:<4>vol mount v0
Check that everything is ready:
stor01:/:<5>vol list
volume capacity raid data standby
v0 101.167 GB 1 u1d01-06 u1d14
stor01:/:<6>vol stat
v0: mounted
u1d01: mounted
u1d02: mounted
u1d03: mounted
u1d04: mounted
u1d05: mounted
u1d06: mounted
Standby: u1d14: mounted
On the volume created above you need to create a slice; the slice is what your host will finally use:
stor01:/:<8>volslice create s0 -z 10GB v0
1 out of Max. 64 slices created, 63 available.
I chose to create a simple 10 GB slice (s0) on the 100 GB volume (v0):
stor01:/:<9>volslice list
Slice Slice Num Start Blk Size Blks Capacity Volume
s0 0 0 20984832 10.005 GB v0
- - 20984832 191180544 91.161 GB v0
Now your host needs access to the slice. The following example gives read/write access to the LUN to my previously configured server hq01:
stor01:/:<10>lun perm lun 0 rw wwn 210000e08b1bb434
stor01:/:<11>lun perm list
lun slice WWN Group Name Group Perm WWN Perm Effective Perm
--------------------------------------------------------------------------------------------------------
0 0 default -- -- none none
0 0 210000e08b1bb434 -- -- rw rw
--------------------------------------------------------------------------------------------------------
Here 21:00:00:e0:8b:1b:b4:34 is the WWN of the QLogic adapter built into my server hq01.
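On the Linux side, the adapter's WWN can be read from sysfs; note that the 6120 expects it without colons. A minimal sketch (the fc_host entries assume the qla2xxx driver registered the adapter with the FC transport class):

```shell
# Print each FC HBA's WWPN as the kernel exports it (e.g. 0x210000e08b1bb434).
for p in /sys/class/fc_host/host*/port_name; do
    if [ -r "$p" ]; then
        cat "$p"
    fi
done

# The colon-separated notation used on the switch can be converted by
# simply stripping the colons:
echo "21:00:00:e0:8b:1b:b4:34" | tr -d ':'   # -> 210000e08b1bb434
```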
Bringing all together: setting up the San Switch
The first thing that needs to be done is to create a configuration:
ssw01:admin> cfgCreate "karellen_cfg", "karellen_zone"
The configuration is named karellen_cfg, and I added the zone karellen_zone, which I will create later. Next I created a few aliases: one for my host hq01, one for the storage stor01, and two more for the first two ports of the SAN switch:
ssw01:admin> alicreate "hq01_p0", "21:00:00:e0:8b:1b:b4:34"
ssw01:admin> alicreate "stor01_p0", "20:03:00:03:ba:4e:83:64"
ssw01:admin> alicreate "ssw01_p0", "20:00:00:60:69:22:32:ea"
ssw01:admin> alicreate "ssw01_p1", "20:01:00:60:69:22:32:ea"
Then I created the zone karellen_zone with four members (the aliases I created above):
ssw01:admin> zoneCreate "karellen_zone", "hq01_p0; stor01_p0; ssw01_p0; ssw01_p1"
The final step is to activate the configuration:
ssw01:admin> cfgenable "karellen_cfg"
0x102a7510 (tRcs): Jan 19 17:46:43
INFO ZONE-MSGSAVE, 4, cfgSave completes successfully.
cfgEnable successfully completed
ss01:admin> 0x10247a70 (tThad): Jan 19 17:46:43
INFO FW-CHANGED, 4, fabricZC000 (Fabric Zoning change) value has changed. current value : 2 Zone Change(s). (info)
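To double-check the result, Fabric OS can display the defined and effective configuration as well as the individual zones (cfgShow and zoneShow are standard Fabric OS show commands):

```
ssw01:admin> cfgshow
ssw01:admin> zoneshow "karellen_zone"
```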
Everything is set up now; time to use the new LUN.
Using the new LUN
First, check that the new LUN is available on your Linux system:
# lsscsi
...
[8:0:0:0] disk SUN T4 0301 /dev/sdc
...
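If the new disk does not show up in lsscsi right away, the SCSI hosts can be asked to rescan. A sketch: writing to sysfs requires root, and the loop simply does nothing on machines without SCSI hosts:

```shell
# Ask every SCSI host to rescan all channels/targets/LUNs ("- - -" is the
# wildcard triple the kernel expects in the sysfs scan attribute).
for scan in /sys/class/scsi_host/host*/scan; do
    if [ -w "$scan" ]; then
        echo "- - -" > "$scan"
    fi
done
```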
Check the capacity:
# fdisk -l /dev/sdc
Disk /dev/sdc: 10.7 GB, 10744233984 bytes
64 heads, 32 sectors/track, 10246 cylinders, total 20984832 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
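The raw disk still needs a partition and a filesystem before it can be mounted. A minimal sketch, where the device name /dev/sdc, the msdos label, ext4 and the mountpoint are all assumptions:

```shell
DEV=/dev/sdc           # assumption: the new LUN appeared as sdc (see lsscsi)
# WARNING: this destroys any data on $DEV.
parted -s "$DEV" mklabel msdos mkpart primary 1MiB 100%
mkfs.ext4 "${DEV}1"    # first partition, e.g. /dev/sdc1
mkdir -p /mnt/stor01
mount "${DEV}1" /mnt/stor01
df -h /mnt/stor01
```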
That's all!