How do you provide access to shared data using Linux-based servers and a cluster file system? In this post we present a quick and painless way to configure an environment consisting of a few servers running Linux Debian Squeeze and one more running DSS V6 as the storage provider. To make it more interesting, we also present an alternative way to manage DSS V6.
Suppose you have several servers based on Linux Debian Squeeze, and you want to provide storage for them, e.g. to hold application data shared between the servers (such as static data). For this purpose you can use DSS V6, software specializing in storage management: export a volume via its iSCSI Target, and on the Debian Squeeze servers import it via an iSCSI Initiator.
1. Creating iSCSI Target
This step covers creating an iSCSI Target with Open-E DSS V6. An alternative to the GUI for managing DSS V6 is the SSH access channel. It helps automate system management and also lets you act on the DSS V6 system when the graphical environment is unavailable. To activate this channel, go to SETUP->administrator->API Configuration, then enable and configure it according to your preferences (specify the destination port for connecting to the service, a password, allowed IP addresses, or generate a key that allows authentication without a password). Once this option is active, use the ssh command to create an iSCSI logical volume and export it via an iSCSI Target.
NOTE: You can list all commands available over the SSH channel with the ‘help’ command. In addition, each command accepts the ‘-h’ parameter for more details.
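As a minimal sketch of using that SSH channel from one of the Debian servers: the host, port and user name below are placeholders you must adjust to whatever you entered in API Configuration, and the command names themselves are whatever ‘help’ lists on your system. The `echo` makes this a dry run that only prints the command line it would execute:

```shell
# Placeholders -- adjust to match your API Configuration settings.
DSS_HOST=192.168.2.14
DSS_PORT=22223          # assumption: the port you chose in the GUI
DSS_USER=api            # assumption: the API account name

# Compose the ssh command; echo is a dry run, remove it to really connect.
run_dss() {
    echo ssh -p "$DSS_PORT" "$DSS_USER@$DSS_HOST" "$@"
}

run_dss help            # list the available commands
```

Removing the `echo` turns each `run_dss` call into a real remote invocation, which is convenient for scripting volume creation from cron jobs or deployment scripts.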
2. Configuring Debian Squeeze servers
At this stage we will need the iSCSI Initiator (Open-iSCSI) and cluster file system (ocfs2).
You can install the necessary software using the following command:
apt-get install open-iscsi ocfs2-tools
Connecting to the iSCSI Target (this step has to be performed on all servers):
iscsiadm -m discovery -t st -p 192.168.2.14
192.168.2.14:3260,1 mytarget
iscsiadm -m node -T mytarget -p 192.168.2.14 --login
Logging in to [iface: default, target: mytarget, portal: 192.168.2.14,3260]
Login to [iface: default, target: mytarget, portal: 192.168.2.14,3260]: successful
NOTE: To make the iSCSI Initiator connect to the iSCSI Target automatically when the Open-iSCSI service starts (in practice, at server startup), set the node.startup option to automatic in the configuration of this target. You can do this with iscsiadm -m node -T mytarget --op update -n node.startup -v automatic, where mytarget is the name of the iSCSI Target (you can also set this option globally for all targets in the /etc/iscsi/iscsid.conf configuration file).
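For the global variant, the relevant line in /etc/iscsi/iscsid.conf looks like this (stock Debian path; the value becomes the default for targets discovered afterwards):

```
# /etc/iscsi/iscsid.conf -- default startup behaviour for all targets
node.startup = automatic
```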
Configuring the ocfs2 cluster file system (this step has to be performed on all servers): the next step is to configure ocfs2. It is very simple and is based on the configuration file /etc/ocfs2/cluster.conf, which should be placed on all nodes. It may have the following content (assuming our two-node configuration):

node:
	ip_port = 7777
	ip_address = 192.168.2.8
	number = 0
	name = node-first
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.2.9
	number = 1
	name = node-second
	cluster = ocfs2

cluster:
	node_count = 2
	name = ocfs2
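Since the file must be identical on every node, it can help to generate it from a script instead of editing it by hand on each server. A small sketch using the node names and addresses from the example above (it writes to a temporary path for illustration; in real use you would write to /etc/ocfs2/cluster.conf and copy it to all nodes):

```shell
#!/bin/sh
# Generate the two-node cluster.conf shown above.
# OUT is a temporary path for illustration; use /etc/ocfs2/cluster.conf in practice.
OUT=${OUT:-/tmp/cluster.conf}

cat > "$OUT" <<'EOF'
node:
	ip_port = 7777
	ip_address = 192.168.2.8
	number = 0
	name = node-first
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.2.9
	number = 1
	name = node-second
	cluster = ocfs2

cluster:
	node_count = 2
	name = ocfs2
EOF

# Sanity check: node_count must match the number of node: stanzas.
nodes=$(grep -c '^node:' "$OUT")
echo "nodes: $nodes"
```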
In the next step we will make sure that the o2cb service starts at system boot. You can do this through dpkg-reconfigure ocfs2-tools. Then (assuming the o2cb service is running) format the device that will store the shared data using the mkfs.ocfs2 command (e.g. mkfs.ocfs2 /dev/sdb, assuming /dev/sdb is the external device connected via the iSCSI Initiator) and mount it on all nodes (mount /dev/sdb /mnt/common). Optionally you can add an entry to /etc/fstab:

/dev/sdb /mnt/common ocfs2 defaults 0 0

which will mount the device automatically at system boot.
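Since /dev/sdb only appears once the iSCSI session is up, a common refinement (not part of the original setup, but standard practice for network block devices) is to add the _netdev mount option, so the mount is deferred until networking is available:

```
# /etc/fstab -- _netdev delays the mount until the network is up
/dev/sdb  /mnt/common  ocfs2  _netdev,defaults  0  0
```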
NOTE: I noticed some problems with the boot script sequence in rcS.d. I changed the LSB header dependencies for the scripts /etc/init.d/ocfs2 and /etc/init.d/o2cb so that they now list open-iscsi in the Required-Start field. After that I ran the insserv command, which reordered the scripts in the rcS.d directory so that the services start correctly at system boot.
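For illustration, the modified LSB header in /etc/init.d/o2cb would look roughly like this; only the Required-Start line is the actual change, the rest of the header is a sketch and may differ on your system:

```
### BEGIN INIT INFO
# Provides:          o2cb
# Required-Start:    $network open-iscsi
# Required-Stop:
# Default-Start:     S
# Default-Stop:
# Short-Description: Load the o2cb cluster stack for ocfs2
### END INIT INFO
```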
With that, we have gone through the configuration of all the servers. Good luck!