How to provide access to shared data using Linux-based servers and a cluster file system? In this post we present a quick and painless way to configure an environment consisting of a few servers running Debian Squeeze and one more running DSS V6 as the storage provider. To make things more interesting, we will also present an alternative way to manage DSS V6.
Suppose you have several servers based on Debian Squeeze and want to provide them with storage, for example to hold application data shared between the servers (such as static data). For this purpose you can use DSS V6, software specializing in storage management: you export the data via an iSCSI Target and import it on the Debian Squeeze servers via an iSCSI Initiator.
1. Creating iSCSI Target
This step covers creating an iSCSI Target on Open-E DSS V6. An alternative to the GUI for managing DSS V6 is the SSH access channel. It helps automate system management and lets you perform actions on the DSS V6 system when the graphical environment is not available. To activate this channel, go to SETUP -> administrator -> API Configuration, then enable and configure it according to your preferences (specify the destination port for the service, a password, the allowed IP addresses, or generate a key that allows authentication without a password). Once the option is active, use the ssh command to create an iSCSI logical volume and export it via an iSCSI Target.
NOTE: You can list all commands available over the SSH channel with the ‘help’ command. In addition, each command accepts the ‘-h’ parameter, which prints more details.
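As a sketch of how such a session might look, assuming the API channel was configured on port 22223 under the api user on our DSS V6 box at 192.168.2.14 (port, user and address are all examples from this setup, so substitute your own values):

# List the commands exposed by the DSS V6 SSH/API channel
ssh -p 22223 api@192.168.2.14 help
# Then inspect any command printed by 'help' before running it
ssh -p 22223 api@192.168.2.14 <command> -h

where <command> stands for one of the names printed by ‘help’; use the volume and target commands listed there to create the iSCSI logical volume and export it.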
2. Configuring Debian Squeeze servers
At this stage we will need the iSCSI Initiator (Open-iSCSI) and the cluster file system (OCFS2).
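Both are available as standard Debian packages, and the target has to be discovered before it can be logged in to. A minimal sketch, using the same DSS V6 portal address as above:

# Install the iSCSI initiator and the OCFS2 userspace tools
apt-get install open-iscsi ocfs2-tools
# Ask the portal which targets it exports
iscsiadm -m discovery -t sendtargets -p 192.168.2.14

Discovery prints the names of the exported targets; the next command then logs in to one of them.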
iscsiadm -m node -T mytarget -p 192.168.2.14 --login
Logging in to [iface: default, target: mytarget, portal: 192.168.2.14,3260]
Login to [iface: default, target: mytarget, portal: 192.168.2.14,3260]: successful
NOTE: To make the iSCSI Initiator connect to the iSCSI Target automatically when the Open-iSCSI service starts (in effect, at server startup), set the node.startup option to automatic in the configuration of this target. You can do this with iscsiadm -m node -T mytarget --op update -n node.startup -v automatic, where mytarget is the name of the iSCSI Target (you can also set this option globally for all targets in the /etc/iscsi/iscsid.conf configuration file).
Now configure the OCFS2 cluster. Every node keeps the cluster description in /etc/ocfs2/cluster.conf, and the file must be identical on all nodes. The fragment below shows the entry for the second node together with the cluster section (the entry for the first node, with number = 0, looks analogous):

node:
        ip_port = 7777
        ip_address = 192.168.2.9
        number = 1
        name = node-second
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
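With the cluster described, the O2CB stack can be brought online and the imported device formatted and mounted. A minimal sketch, assuming the iSCSI disk showed up as /dev/sdb and we mount it under /mnt/shared (both names are examples; check dmesg for the real device name on your servers):

# Enable the O2CB cluster stack (sets O2CB_ENABLED in /etc/default/o2cb)
dpkg-reconfigure ocfs2-tools
/etc/init.d/o2cb online ocfs2
# Format once, on one node only; -N 2 reserves slots for two nodes
mkfs.ocfs2 -N 2 -L shared /dev/sdb
# Mount on every node
mkdir -p /mnt/shared
mount -t ocfs2 /dev/sdb /mnt/shared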
Finally, add an entry for the device to /etc/fstab that will be responsible for automatic device mount at system boot.
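The exact line depends on your device and mount point; with the example names used above it could look like this (the _netdev option keeps the system from trying to mount the volume before the network, and with it the iSCSI session, is up):

/dev/sdb  /mnt/shared  ocfs2  _netdev  0  0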
NOTE: I’ve noticed some problems with the boot script sequence in rcS.d. I changed the LSB header dependencies of the scripts /etc/init.d/ocfs2 and /etc/init.d/o2cb so that they now list open-iscsi in the Required-Start field. After that change I ran the insserv command, which reordered the scripts in the rcS.d directory and made the services start correctly at system boot.
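For illustration, the edited LSB header of /etc/init.d/ocfs2 might look roughly like this; only the Required-Start line is the change described above, and the surrounding fields are indicative, so check them against the stock script shipped with your package:

### BEGIN INIT INFO
# Provides:          ocfs2
# Required-Start:    $network o2cb open-iscsi
# Required-Stop:     $network o2cb
# Default-Start:     S
# Default-Stop:      0 6
# Short-Description: Mount OCFS2 filesystems at boot
### END INIT INFO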
In this way we have gone through the configuration of all the servers. Good luck!