Virtualization is the process of creating a simulated (virtual) computer environment that runs on physical resources allocated during configuration – that’s the short definition.
A virtualization system and its underlying storage should guarantee high availability, data safety, and good performance. To be clear: storage for virtualization is a set of physical devices, managed by software, where virtual machine images and user data are actually stored. As storage is the foundation of any virtualization solution, the hardware and software used for such implementations must meet the highest standards.
Open-E JovianDSS should undoubtedly be taken into consideration when choosing software for virtualization storage, as it specializes in working with virtualization platforms as their storage back end. 80% of Open-E implementations involve virtualization, which is proof of its outstanding usability in this scenario; the software can be set up on physical servers or as a virtual storage appliance.
In fact, Open-E JovianDSS is a perfect choice for any solution with virtualization, whether it’s storage for virtualization or virtualized storage, as it supports VMware, Citrix, Microsoft Hyper-V, and Proxmox.
In today’s article, we’d like to highlight the technical tips and recommendations for Open-E JovianDSS as storage for virtualization. We’ll answer the following questions:
What does it look like in practice?
What issues might you encounter during installation?
What are the hardware recommendations?
What are the technical tips and tricks?
Let’s start with the problems, which can be fixed quickly:
Poor overall performance of storage servers caused by unsuitable hardware.
Insufficient total network-layer throughput when a large number of machines operate on large data sets.
No redundancy of network connections between storage and the virtualization system.
Storage is not scalable enough for the constantly increasing number of virtual machines.
As the list above shows, the issues that might come up mostly involve hardware. Therefore, it is critical to use proper, high-quality hardware. Fortunately, Open-E specialists have prepared a comprehensive list of hardware tips to help you avoid those issues and make your Open-E JovianDSS storage for virtualization solution as optimal as possible, in an uncomplicated and affordable way.
General hardware recommendations – virtualization
So, let’s talk about the hardware recommended for an optimal solution.
For the data groups, SAS HDDs are recommended, preferably 10k RPM. For more demanding environments, we’d suggest All-Flash storage based on dual-port SSDs for shared storage clusters, or All-Flash storage based on high-capacity, multi-layer 3D NAND SSDs for non-shared storage clusters.
For the read cache, a fast, read-intensive SSD is recommended; its capacity depends on the hot-data footprint (in practice, the number of virtual machines). For All-Flash storage, a read cache is not required, or you can consider using an L2ARC read cache for metadata only.
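Since Open-E JovianDSS is ZFS-based, restricting the read cache to metadata can be expressed with a standard ZFS property. A minimal sketch, assuming a pool named Pool-0 with an L2ARC device already attached (the pool name is hypothetical):

```shell
# Cache only metadata in L2ARC for the whole pool;
# datasets and zvols inherit the setting
zfs set secondarycache=metadata Pool-0

# Verify the setting
zfs get secondarycache Pool-0
```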
As for the write log (SLOG), our specialists have collected the following recommendations:
For data groups on HDDs, a fast, low-latency, write-intensive SSD is recommended.
For All-Flash, a write log is usually not necessary. It may be beneficial when the SSD storage is relatively slow (e.g., a small number of QLC NAND disks) and the write log device is very fast, e.g., Intel Optane (note: such solutions always have to be tested before implementation).
Random performance may improve when using a write log (SLOG) with All-Flash disks, but sequential performance may suffer. In that case, if possible, select the zvols for which sequential performance is the priority and set ZFS logbias to throughput for them. Write operations on these zvols will then bypass the SLOG.
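Because JovianDSS uses ZFS underneath, the logbias switch described above maps to a per-zvol property. A sketch with hypothetical pool and zvol names:

```shell
# Sequential-priority zvol: bypass the SLOG, optimize for throughput
zfs set logbias=throughput Pool-0/vm-backup

# Latency-sensitive zvol: keep the default, which uses the SLOG
zfs set logbias=latency Pool-0/vm-db
```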
On the CPU side, for extremely high-load installations we recommend a fast processor around 3.0 GHz (preferably the Intel Xeon Gold line or an AMD equivalent). For standard-load installations, an Intel Xeon Silver CPU with a 2.4 GHz clock is enough. Keep in mind that the required number of cores depends on the number of storage controllers, network adapters, and other devices, such as NVMe disks, in the storage server.
For RAM, use a large (at least 64 GB) and fast (matched to the CPU’s memory controller) configuration for even better IOPS. As for the storage controller, there are no special requirements.
For network controllers, we recommend high-speed network adapters with RDMA support for the mirror path, with enough ports to use MPIO on the connection to the client.
In the case of network switches, they should definitely be of high quality and high speed with Rapid Spanning Tree Protocol (RSTP) support to prevent any bottlenecks in network connectivity.
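On a Linux client, the MPIO mentioned above is typically provided by dm-multipath. A minimal /etc/multipath.conf fragment as an illustration; the values are generic assumptions, not Open-E-specific settings:

```
# /etc/multipath.conf (fragment)
defaults {
    user_friendly_names yes
    path_grouping_policy multibus   # spread I/O across all active paths
    path_checker         tur
    no_path_retry        queue      # queue I/O during a path or HA switchover
}
```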
How should you configure Open-E JovianDSS to make your solution optimal?
Take a look at the 8 points below and remember them for the future configuration of the Open-E software for use as storage for virtualization:
A 2-way mirror or 4-way mirror (especially in the case of a non-shared storage cluster) is a must for optimal redundancy and performance.
Set up thin provisioning in zvol configurations for optimal use of storage capacity.
For better performance and connection redundancy, set up MPIO on the iSCSI connection to the client system.
Zvol volblocksize should be matched to the application/client system requirements: use a smaller volblocksize for higher IOPS and a larger one for higher throughput.
When configuring the architecture and storage parameters, don’t forget to use the best practices document dedicated to storage, prepared by the manufacturer of the virtualization platform you are going to use.
Use tunings for SAN protocols available in the Open-E JovianDSS Release Notes.
Use several volumes instead of one and attach up to 4 volumes per target because of a separate command queue for each iSCSI target – this recommendation applies only to iSCSI TCP connections and does not apply to RDMA connections.
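Several of the points above (thin provisioning, volblocksize) map directly to ZFS zvol options. A sketch with hypothetical names; note that volblocksize can only be set at creation time:

```shell
# -s creates a sparse (thin-provisioned) zvol that consumes pool
# space only as data is written; small blocks favor IOPS
zfs create -s -V 500G -o volblocksize=16K Pool-0/vm-db

# Larger blocks favor sequential throughput
zfs create -s -V 4T -o volblocksize=128K Pool-0/vm-backup

# Compare provisioned size with actual usage
zfs get volsize,used,volblocksize Pool-0/vm-db
```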
High Availability implementation precautions
In the case of the Open-E Non-shared Storage High Availability Clusters, fast NICs on the mirror path (25+ GbE recommended) can be helpful to achieve a good throughput on the HA cluster replication. NICs with RDMA support are recommended for even better performance in large data operations. The general rule is that network bandwidth should be balanced with storage performance.
Staying with High Availability, use static discovery in all SAN initiators and extend the timeouts in them as well. Also, make sure the resource switch time is within an acceptable range, especially for non-shared storage HA clusters with a large number of disks. If the switchover time is too long due to the number of disks, this can be mitigated with a RAID controller.
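On a Linux initiator with open-iscsi, static discovery and an extended timeout look roughly like this; the IQN and portal address are hypothetical:

```shell
# Create a static node record instead of relying on sendtargets discovery
iscsiadm -m node -o new \
  -T iqn.2024-01.com.example:storage.vmpool \
  -p 192.168.10.10:3260

# Extend the replacement timeout so sessions survive an HA switchover
iscsiadm -m node -T iqn.2024-01.com.example:storage.vmpool \
  -o update -n node.session.timeo.replacement_timeout -v 120

# Log in to the target
iscsiadm -m node -T iqn.2024-01.com.example:storage.vmpool -l
```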
After you finish the configuration, run pre-production tests: check performance and the basic failover operations/triggers (system restart, power off, manual move of resources). Don’t forget that a second ring is recommended in the HA cluster, along with up to six ping nodes.
All in all, virtualization is a technology that provides not only high performance, great efficiency, and flexibility, but also a range of other benefits. Investing in a proven storage for virtualization solution while following strict implementation rules results in a future-proof, reliable, and high-performing storage solution that will last for years.