Here we present two ways of handling boot media with Open-E DSS V6. There are two options to consider:
The first option is to use a small boot volume that resides on the hardware RAID controller. This is a very reliable and stable boot medium, and there is no need to buy extra boot media. The small 2 GB logical volume can be created on the RAID array right after creating the RAID set; the data volume is then created in the remaining space. The result is one RAID set with two logical volumes: the first for boot and the second for user data.
The DSS software installer places the DSS boot image onto the small volume during installation. Once the software is installed and the system is rebooted, the WEB GUI shows the user data volume as available for formatting. In this setup the data and the operating system are separated on two RAID logical volumes. In case of complete RAID corruption, both the data and the DSS boot image are lost. If such an event occurs, DSS can be re-installed from scratch and the data restored from a backup. If the administrator has saved the DSS settings, those can be restored as well from the settings.cnf file.
Pros:
- Very reliable thanks to RAID
- Fewer RMA cases
Cons:
- Logical Volume Manager restoration cannot be used in the event of a RAID crash
- The user must download LOGs every time the Volume Group and/or Logical Volume configuration is changed
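As an illustration, the two-volume split in the first option can be modeled as a simple calculation. This is a sketch only: the actual logical volumes are created in the RAID controller's management utility, and `raid_volume_layout` is a hypothetical helper name, not part of DSS.

```python
def raid_volume_layout(raid_set_gb, boot_gb=2):
    """Model of option one: one RAID set carved into a small boot
    volume for the DSS image plus a data volume for user data."""
    if raid_set_gb <= boot_gb:
        raise ValueError("RAID set too small for a boot and a data volume")
    # Everything beyond the 2 GB boot volume becomes the user data volume.
    return {"boot_gb": boot_gb, "data_gb": raid_set_gb - boot_gb}

# Example: a 1000 GB RAID set yields a 2 GB boot volume and a 998 GB data volume.
print(raid_volume_layout(1000))
```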
The second option is a separate boot device such as a SATA DOM, ATA DOM, HDD, SSD, or even an extra RAID 1 array built from small HDDs or SSDs.
The reliability depends on the media quality. The DSS OS image requires a minimum of 1 GB and a maximum of 2 GB of space; the recommended size is 2 GB. Space above 2 GB on a given boot device cannot be used for user data or to create volumes, which is why the ideal boot device size is 2 GB. If the boot device is flash based (SATA DOM, ATA DOM, SSD) and larger than 2 GB, for example 8 GB, it is recommended to assign 2 GB to DSS and leave the remaining 6 GB unused. This increases flash lifetime, as the SSD's wear-leveling technique uses the unused space to prolong the service life.
Pros:
- In case of a RAID crash, the LVM restore can be run from the console
- Easy restoration of a totally crashed Failover node
Cons:
- An extra boot device must be purchased
- Potential RMA in case of failure
- Only 2 GB of space is used; the rest cannot be used
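The 2 GB sizing rule for flash boot devices can be sketched as a quick layout calculation. This is illustrative only: the DSS installer handles the boot image placement itself, and `flash_boot_layout` is a hypothetical helper, assuming 512-byte sectors and a partition starting at sector 2048.

```python
SECTOR = 512
GIB = 1024 ** 3

def flash_boot_layout(device_bytes, boot_bytes=2 * GIB, start_sector=2048):
    """Compute the last sector of a 2 GiB boot partition and how many
    bytes stay unpartitioned for SSD wear leveling."""
    end_sector = start_sector + boot_bytes // SECTOR - 1
    unused = device_bytes - (end_sector + 1) * SECTOR
    return end_sector, max(unused, 0)

# Example: on an 8 GiB device, roughly 6 GiB remains unused,
# which the SSD's wear-leveling logic can draw on.
end, unused = flash_boot_layout(8 * GIB)
print(end, unused)
```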
For systems using the Failover function, the preferred method is separate boot media, because it makes it easier to re-create a Failover member after a total RAID crash. This matters only if the cluster services cannot be stopped: after a total RAID crash, the failed system must be recovered so that it is ready for failback.
If a Failover node is totally destroyed, the user will still need the LOGs in order to re-create the destroyed node. Again, this is important only if the cluster services cannot be stopped while the missing cluster node is being added back after re-creation.
Single systems can use either boot method. In every case, it is a very good idea to download and save the system LOGs after every change to the Volume Group or Logical Volume configuration.
NOTE: We do NOT recommend any kind of USB-based DOM. USB-based media today have very limited reliability: their lifetime is mostly in the range of 2-3 years, and they generate a high number of RMA cases.