The latest version of Open-E JovianDSS, Up30, released in January 2024, comes with many sophisticated features and enhancements designed to boost the performance, reliability, safety, and cost-effectiveness of your data storage. A standout feature of this release is the introduction of ZFS special devices, a functionality that makes it possible to store particular data types on high-speed storage devices. Other functions worth mentioning in this update of Open-E JovianDSS include macOS Time Machine support, Active Directory with RID range, Active Directory support for RFC 2307 (AD backend for Samba), SED support for non-shared storage clusters, and Samba Recycle Bin support for Windows, among others.
Open-E JovianDSS Up30 also introduces NVMe partition utilization, a feature many IT administrators highlight as extremely helpful, since it can significantly increase system performance when NVMe drives are used in the storage setup. It allows the disk space of a single NVMe device to be segmented into partitions serving various functions, such as read cache, write log, or ZFS special devices. This enables users to maximize disk space usage, reduce costs, and streamline data storage administration and upkeep.
Write Log / Read Cache / ZFS Special Devices – Open-E Partitioning Guidelines
Following our webinar on Open-E JovianDSS Up30, inquiries emerged regarding NVMe partition guidelines, specifically the capacity distribution between the write log, read cache, and ZFS special devices. This article provides Open-E’s best practices and key considerations for optimizing the size and function of each partition.
Write Log Partition Guidelines
Optimal write log disk sizing hinges on the volume of data that can be transferred to the server during three consecutive ZFS transaction groups, a limit typically dictated by network bandwidth. Given the default ZFS transaction group duration of 5 seconds, the write log device must be capable of storing 15 seconds’ worth of incoming data (the equivalent of three transaction groups). From an economic standpoint, oversizing offers no advantage, yet insufficient sizing can impede synchronous write performance. A practical recommendation for this NVMe partition is a 100 GB write log.
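To make the rule concrete, here is a minimal sketch of this calculation in Python. The 10 GbE link speed is an assumed example value, not a JovianDSS default; substitute your own network bandwidth:

```python
# Sketch of the write log sizing rule: the device must absorb up to three
# consecutive ZFS transaction groups' worth of incoming data, and the
# inflow rate is bounded by network bandwidth.

network_bandwidth = 10e9 / 8   # assumed 10 Gbit/s link, in bytes per second
txg_interval = 5               # default ZFS transaction group duration, seconds
txg_groups = 3                 # number of consecutive groups to buffer

# Worst case: the link runs at full speed for three whole transaction groups.
write_log_size = network_bandwidth * txg_interval * txg_groups

print(f"Minimum write log size: {write_log_size / 1e9:.1f} GB")
# ~18.8 GB for a single 10 GbE link, so the 100 GB partition recommended
# above leaves generous headroom for faster or bonded links.
```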
For enhanced data security, we advise implementing write log redundancy, such as a mirrored configuration. This safeguards data integrity even if one of the write log disks fails.
Read Cache Partition Guidelines
The optimal read cache size can be approximated using the formula below. It takes into account the available RAM, the bytes reserved by a single read cache header structure (l2hdr), and the volblocksize (or recordsize):
Read cache size = RAM owned × (volblocksize or recordsize) / bytes reserved by the l2hdr structure
Volblocksize is a fixed value that ensures any data written to a ZFS volume (zvol) is stored in blocks of the specified size.
The bytes reserved by a single read cache header structure (l2hdr) are the portion of RAM that must be set aside for each cached record.
Let’s apply this formula to an example:
- 57,981,809,664 B – RAM owned (approximately 54 GB)
- 70 B – bytes reserved by a single read cache header structure (l2hdr)
- 8,192 B – volblocksize or recordsize
Read cache size = 57,981,809,664 B × 8,192 B / 70 B
Working this out gives approximately 6,785,528,353,821 bytes, which converts to roughly 6.79 TB (about 6.17 TiB) of read cache for 54 GB of RAM.
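For convenience, here is the same calculation as a short Python sketch; the 70-byte header size is the approximation used above and may vary between ZFS versions:

```python
# Approximate the maximum useful read cache (L2ARC) size from available RAM.

ram_owned = 57_981_809_664   # bytes of RAM available for cache headers (~54 GB)
l2hdr_size = 70              # bytes of RAM reserved per cached record (l2hdr)
record_size = 8_192          # volblocksize (zvols) or recordsize (datasets), bytes

# Each cached record costs l2hdr_size bytes of RAM and caches record_size
# bytes of data, so the RAM budget bounds the total cacheable capacity.
max_records = ram_owned // l2hdr_size
read_cache_size = max_records * record_size

print(f"Read cache size: {read_cache_size:,} B "
      f"(~{read_cache_size / 1e12:.2f} TB, ~{read_cache_size / 2**40:.2f} TiB)")
```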
ZFS Special Devices Guidelines
In the case of ZFS special devices, sizing is not that simple. However, we can provide some basic guidelines to help you understand what to consider when creating the NVMe partition for this purpose.
Understand Workload and Data Type:
- The performance improvement percentage provided by ZFS special devices can vary significantly based on the workload and data type. Consider the specific use case, whether it involves heavy metadata operations, large file transfers, or a mix of both.
- Remember that there isn’t a universal percentage increase; it depends on the specifics of your workload.
Hardware Considerations:
- Evaluate the performance characteristics of the storage devices used for the special vdev. If possible, use high-performance NVMe drives for ZFS special devices.
- Contrast this with the primary pool storage. For instance, if your main pool consists of slower HDDs, the performance boost for metadata operations and small-file access can be substantial when using NVMe for the special vdev.
Benchmark Testing:
- Conduct benchmark tests that mimic your actual use case to measure the performance difference accurately.
- Compare performance with and without the special vdev under identical conditions and workloads. This will provide insight into the impact of ZFS special devices on overall system performance.
ZFS Special Devices Group:
Devices placed in the ZFS special devices group serve specific purposes:
- Metadata: Store metadata efficiently. Metadata-intensive workloads benefit from faster access to metadata.
- Indirect blocks: These blocks are crucial for data access. Optimize their storage within the ZFS special devices group.
- Deduplication tables: Decide whether to include deduplication tables in the ZFS special devices group or place them in a separate group – the deduplication group.
Provisioning for Small File Blocks:
- Configure ZFS special devices to accept small file blocks. Adjust the size threshold as needed using the relevant settings (in ZFS, this behavior is controlled by the special_small_blocks dataset property).
As can be concluded from the above guidelines, there is no single figure for the partition capacity. However, as a rule of thumb, the ZFS special devices partition may require between 1% and 10% of the entire data storage capacity, as illustrated in the sketch below.
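A rough estimate of that range, assuming a hypothetical 100 TB pool (the capacity is an example value, not a recommendation):

```python
# Rough capacity estimate for the ZFS special devices partition, using the
# 1%-10% rule of thumb from the guidelines above.

pool_capacity_tb = 100   # assumed total pool capacity in TB (example value)

low = pool_capacity_tb * 0.01    # metadata-light workloads, mostly large records
high = pool_capacity_tb * 0.10   # metadata-heavy workloads, many small files

print(f"Special devices partition: {low:.0f}-{high:.0f} TB "
      f"for a {pool_capacity_tb} TB pool")
```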
Remember that the effectiveness of ZFS special devices depends on your unique environment, workload, and hardware configuration.
Video Tutorial – Setup in Practice
If you’re interested in maximizing the efficiency of your NVMe disks with Open-E JovianDSS Up30, check out the video below. This recent update allows you to create partitions on your NVMe drives for read cache, write log, and ZFS special devices. Doing so can save costs, reduce hardware requirements, and give you greater flexibility in designing and maintaining your data storage system!