1.1 HA Cluster Ring can now be configured as two single connections
1.2 HA cluster Ping-nodes can now be configured on any available interfaces and subnetworks
1.3 Static routing configuration is available in Web-GUI
1.4 Custom SSL/TLS certificates can now be manually imported in Web-GUI
1.5 ZFS Datasets can now use a record size value in the range of 4KiB up to 16MiB (default record size is 1MiB)
1.6 Fibre Channel Target mode is now available for the ATTO Fibre Channel Adapter (supported only with VMware client)
1.7 Improved performance of LDAP database replication mechanism
1.8 Storage performance test tool is available in TUI (System console -> Ctrl+Alt+t -> Add-ons -> Storage performance tool)
1.9 HPE tools for managing HP Smart Array controllers are now available in Web-GUI and TUI
1.10 MacOS Spotlight search support allows users to quickly locate files and search through their contents
1.11 The installer now creates a 128GB boot medium partition (more space for future upgrade processes)
1.12 New filtering options for Event Viewer (selection by: error, warning, information and the date ranges)
1.13 Kdump (kernel crash dumping mechanism)
1.14 The default SCSI ID for iSCSI and FC LUNs can now be manually set in Web-GUI
1.15 Deduplication statistics for zpool are now available in Web-GUI
1.16 Detailed Ethernet card statistics (amount of data sent and received) are now available in the system logs
1.17 Statistics for MPIO devices are displayed on the GUI (Diagnostics -> Disk usage)
1.18 Linux iostat and S.M.A.R.T. data are now available in the Checkmk monitoring system
2.1 Samba 4.9.4
2.2 Mellanox ConnectX-3 driver (mlx4_core, v4.4-2.0.7)
2.3 Mellanox ConnectX-4/5 driver (mlx5_core, v4.4-2.0.7)
2.4 Intel 10/40GbE driver i40e (i40e, v2.9.21)
2.5 Broadcom BCM5706/5708/5709/5716 driver (bnx2, v2.2.5x)
2.6 Broadcom BCM57710/57711/57711E/57712/57712_MF/57800/57800_MF/57810/57810_MF/57840/57840_MF driver (bnx2x, v.1.715.0)
2.7 ATTO Celerity 16Gb/32Gb Fibre Channel Adapter driver (celerity16fc, v1.76.0f1)
2.8 Microsemi Adaptec RAID and HBA driver (aacraid, v220.127.116.11013src)
2.9 Microsemi Adaptec SmartRAID and SmartHBA driver (smartpqi, v1.2.6-015)
2.10 Broadcom MegaRAID SAS Driver (megaraid_sas, v07.709.08.00)
2.11 Microsemi Adaptec MaxView tool (v3.02-23600)
2.12 Areca SAS/SATA RAID Controller Driver (arcmsr, v1.40.0X.10-20181227)
2.13 Smartmontools 7.0
2.14 VMware tools v10.3.10.10540
2.15 Page cache for zvol File I/O mode is reduced to 50%
3.1 RSS does not check if a gateway is set up and if the RSS server is available
3.2 System activation on XEN VSA does not work
3.3 Cannot use XEN drives for Metro Cluster in XEN VSA
3.4 Zvol configured as a destination in OODP can still be set as a LUN for a target
3.5 Dataset configured as a destination in OODP can still be used as a location for a Share
3.6 VMware VCenter/VSphere snapshot autoremove mechanism deletes all ESX snapshots
3.7 Listing of OODP snapshots takes a very long time
3.8 activation.xml is cleared when the activation server is unavailable, e.g. because of firewall settings
3.9 Watchdog restarts the system for processes that run longer than 300 sec.
3.10 Problems with SSH and jumbo frames (MTU)
3.11 The SIDs are not mapped to usernames and groups for shares in Windows (fixed for new JovianDSS installations only)
3.12 Unstable operation of Intel X710/XL710 and Intel X722 network cards configured in LACP or Balance Round Robin bonding mode
4 Important notes for JovianDSS HA configuration
4.1 It is necessary to use the sync always option for zvols and datasets in a cluster
4.2 It is strongly recommended not to use more than eight ping nodes
4.3 It is strongly recommended to configure each IP address in a separate subnetwork
4.4 It is necessary to run the Scrub scanner after a failover action triggered by a power failure (unclean system shutdown)
4.5 It is strongly recommended to use a UPS unit for each cluster node
4.6 It is necessary to use static discovery in all iSCSI initiators
4.7 It is strongly recommended not to change any settings while the nodes are running different JovianDSS versions, for example during a software update
4.8 It is necessary to use different Server names for cluster nodes
4.9 HA cluster does not work properly with Infiniband controllers
4.10 HA cluster does not work stably with ALB bonding mode
4.11 FC Target HA cluster does not support Persistent Reservation Synchronization and cannot be used as storage for a Microsoft Hyper-V cluster. This problem will be solved in future releases.
4.12 When using certain Broadcom (previously LSI) SAS HBA controllers with SAS MPIO, Broadcom recommends installing specific firmware from the SAS vendor.
5 Known issues
It is strongly recommended to use the Mozilla Firefox browser to navigate the system’s GUI. When using other browsers, some slight problems with displaying content may occur.
Web browser’s cache
After updating from previous versions, some problems with WebGUI content and navigation may occur. To resolve these problems, please clear the Web browser's cache.
System as a guest in virtual environments
In case of installing the system as a Hyper-V guest, please use the following settings:
- Number of virtual processors: 4
- Memory: Minimum 8GB
- Boot Disk: 20GB IDE Disk
- Add at least 6 virtual disks
In case of installing the system as a VMware ESXi guest, please use the following settings:
- Guest OS: Other 2.6.x Linux (64-bit)
- Number of Cores: 4
- Memory: Minimum 8GB
- Network Adapter: VMXNET 3
- SCSI Controller Type: Paravirtual or LSI Logic SAS
- Boot Disk: 20GB Thick Provision Eager Zeroed
- Add at least 6 virtual disks
- Edit Settings -> Options -> Advanced-General -> Configuration -> Add row: disk.EnableUUID : TRUE
Reclaim deleted blocks on thin-provisioned LUNs in various systems
When deleting large amounts of data, reclaiming deleted blocks on thin-provisioned LUNs in Windows 2012 can significantly slow down system performance. If you expect frequent deletions of large amounts of data, we recommend turning off the automatic reclaim function in Windows 2012. This can be done by disabling the "file-delete notification" feature in the system registry. To do so, follow the steps below:
- Start Registry Editor.
- Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
- Double-click DisableDeleteNotification.
- In the Value data box, enter a value of 1, and then click OK.
In order to reclaim the free space in Windows 2012, please change the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification key value back to 0 and use the "Optimize" tool located in Disk Management->[disk]->Properties->Tools. As the operation can generate a very high load on the system, it is recommended to perform it after-hours.
In case of VMware ESXi, the automatic reclaim feature is disabled by default. To reclaim the space of deleted blocks on thin-provisioned LUNs, please use vmkfstools. For details, please refer to the VMware Knowledge Base:
For VMware ESXi 5.0: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014849
For VMware ESXi 5.5 and newer: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513
When using Windows 2008, it is not possible to reclaim the space released by data deleted from thin-provisioned LUNs.
Deduplication issues and recommendations
Please be aware that deleting a zvol with deduplication enabled can generate a very high load on the system and lead to unstable behavior. It is strongly recommended to perform such an operation only after-hours. To avoid this issue, please use (if possible) a single zvol on a zpool dedicated to deduplication, and delete the whole zpool that includes the single zvol.
To determine the amount of System RAM required for deduplication, use this formula:
(Size of Zvol / Volume block size) * 320B / 0.75 / 0.25
320B - size of an entry in the DDT table
0.75 - percentage of RAM reserved for ARC (75%)
0.25 - percentage of the ARC reserved for the DDT (25%)
Example for 1TB data and 64KB Volume block size:
(1099511627776B / 65536B) * 320B / 0.75 / 0.25 = 28633115306.67B
28633115306.67B / 1024 / 1024 / 1024 = 26.67GB
so for every extra 1TB of storage, the system needs an extra 26.67GB of RAM.
Example for 1TB data and 128KB Volume block size:
(1099511627776B / 131072B) * 320B / 0.75 / 0.25 = 14316557653.33B
14316557653.33B / 1024 / 1024 / 1024 = 13.33GB
so for every extra 1TB of storage, the system needs an extra 13.33GB of RAM.
Example for 1TB data and 1MB Volume block size:
(1099511627776B / 1048576B) * 320B / 0.75 / 0.25 = 1789569706.67B
1789569706.67B / 1024 / 1024 / 1024 = 1.67GB
so for every extra 1TB of storage, the system needs an extra 1.67GB of RAM.
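The worst-case formula above can be sketched in a few lines of Python (the function name is illustrative, not part of the product):

```python
def dedup_ram_bytes(zvol_size_bytes, block_size_bytes):
    """Worst-case RAM needed by the DDT: 320B per table entry,
    with 75% of RAM usable by ARC and 25% of ARC for the DDT."""
    entries = zvol_size_bytes / block_size_bytes
    return entries * 320 / 0.75 / 0.25

TIB = 1024 ** 4
GIB = 1024 ** 3
# Reproduce the three examples above: 64KB, 128KB and 1MB block sizes.
for block in (64 * 1024, 128 * 1024, 1024 * 1024):
    print(round(dedup_ram_bytes(TIB, block) / GIB, 2))
```

Running this prints the per-terabyte RAM figures quoted in the examples (26.67, 13.33 and 1.67 GB).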
IMPORTANT: The above calculations apply only to the worst-case scenario, where the data is completely unique and will not be deduplicated. For deduplicable data, the need for RAM decreases drastically. If an SSD-based Read Cache is present, part of the deduplication table will be moved to the SSD and deduplication will work with good performance using less RAM.
IMPORTANT: With SAN (iSCSI) it is CRITICAL to match the user file system format block size with the zvol volume block size. A simple example is a Windows NTFS file system with the default format block size of 4k on a zvol with the default volume block size of 128k. With such defaults, deduplication will mostly NOT match, because files can be aligned at 32 (128/4) different positions on the pool. If the NTFS format block size is increased to 64k while the zvol volume block size stays at 128k, a deduplication match can fail only once, because a file can be aligned at 2 (128/64) different positions on the pool; every subsequent write will match, as both alignment options already exist on the pool. To have all files match with efficient memory usage, NTFS must use a 64k format block size and the zvol volume block size must also equal 64k. Another option is NTFS=32k and zvol=32k, but in this case the deduplication table will be twice as large. That is why NTFS=64k and zvol=64k is the most efficient setting for deduplication.
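The alignment counts quoted above (32 and 2 positions) follow directly from the block-size ratio; a quick sketch (the function name is illustrative):

```python
def alignment_positions(zvol_block, fs_block):
    """Number of distinct offsets a file system block can occupy
    inside one zvol block; 1 means deduplication always matches."""
    return zvol_block // fs_block

print(alignment_positions(128 * 1024, 4 * 1024))   # NTFS 4k on a 128k zvol -> 32
print(alignment_positions(128 * 1024, 64 * 1024))  # NTFS 64k on a 128k zvol -> 2
print(alignment_positions(64 * 1024, 64 * 1024))   # matched 64k/64k -> 1
```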
IMPORTANT: With NAS (NFS, SMB/CIFS) deduplication matching always works, because the data blocks are aligned by ZFS natively.
IMPORTANT: Deduplication works at the pool level, across the whole pool. This is why the zvol Physical size cannot show the deduplication benefit. To verify that deduplication saved space, run a scrub and note the current physical data space on the pool reported by the scrub. Next, copy new data and run the scrub again; the scrub will now show the new physical data space. Comparing the data size seen from the storage client side with the data space growth reported by the scrub gives the deduplication advantage. The exact deduplication ratio of the pool can be found in the logs, in zfs.log.
Zvols configuration issues and recommendations
It is strongly recommended to set the client file system block size to the same value as the zvol volume block size. For example, when using a 64k zvol volume block size, the Windows NTFS Allocation unit size should be set to 64k.
Target number limit
With more than 60 targets, the GUI will not be displayed correctly. This issue will be fixed in the next releases.
Targets with the same name are not assigned correctly
Having two or more targets with the same name but belonging to different Zpools will cause all targets with that name to be assigned to one Zpool during the import process.
Installation on disks containing LVM metadata
It is not possible to install the system on disks containing LVM metadata. You will need to clear those disks before installation. To do so, use the “Remove ZFS data structures and disks partitions” function located in the Extended tools. To access this function, boot the system from a temporary medium like a USB drive or DVD.
Import Zpool with broken write log
There is no option to import a Zpool with a broken write log disk using the system’s functions. This is why it is STRONGLY recommended to use mirrored disks for write logs. In case it is necessary to import a Zpool with a broken write log, please contact technical support.
Replacing disks in data groups with larger ones can cause your storage license capacity to be exceeded
In case of replacing damaged disks with larger ones, the size of the entire Zpool will increase. Make sure that the new size does not exceed your purchased storage license.
Periodically after some operations, the GUI needs to be manually refreshed
After performing some operations, e.g. resilvering, the GUI may show outdated information. In this case, refresh the web page manually by pressing F5 on your keyboard. This issue will be fixed in next releases.
Replacing disks in data groups with smaller ones can cause an error and make the disk disappear from the list of available disks
Replacing a disk in a data group with a smaller one will cause the error "zpool unknown error, exit code 255", and the disk will become unavailable. In order to reuse this disk, please use the "Remove ZFS data structures and disks partitions" function located in the Extended tools on the Console screen.
It is strongly recommended to use 64KB or higher Volume block size
Block sizes smaller than 64KB used with deduplication or read cache will cause very high memory consumption.
RAM recommendations for Read Cache
To determine how much System RAM is required for Read Cache, use the following formula:
RAM needed = (Size of Read Cache - reserved size and labels) * bytes reserved by l2hdr structure / Volume block size
For 8KB Volume block size and 1TB Read Cache:
RAM needed = (1099511627776B - 4718592B) * 432B / 8192B = 57981809664B
57981809664B / 1024 / 1024 / 1024 = 54GB
1099511627776B - 1TB Read Cache
4718592B - reserved size and labels
432B - bytes reserved by l2hdr structure
8192B - Volume block size
For 64KB Volume block size and 1TB Read Cache:
RAM needed = (1099511627776B - 4718592B) * 432B / 65536B = 7247726208B
7247726208B / 1024 / 1024 / 1024 = 6.75GB
For 128KB Volume block size and 1TB Read Cache:
RAM needed = (1099511627776B - 4718592B) * 432B / 131072B = 3623863104B
3623863104B / 1024 / 1024 / 1024 = 3.37GB
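The Read Cache formula can be checked with the same kind of sketch as the deduplication one (function and constant names are illustrative; the reserved-size and l2hdr values are taken from the figures above):

```python
def l2arc_ram_bytes(cache_size_bytes, block_size_bytes,
                    reserved_bytes=4718592, l2hdr_bytes=432):
    """RAM needed to index a Read Cache device: one 432B l2hdr
    structure per cached block, excluding reserved space and labels."""
    return (cache_size_bytes - reserved_bytes) * l2hdr_bytes // block_size_bytes

TIB = 1024 ** 4
GIB = 1024 ** 3
# Reproduce the three examples above: 8KB, 64KB and 128KB block sizes.
for block in (8192, 65536, 131072):
    print(round(l2arc_ram_bytes(TIB, block) / GIB, 2))
```

This reproduces the quoted figures of roughly 54, 6.75 and 3.37 GB of RAM per 1TB of Read Cache.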
Multiple GUI disk operations may result in an inaccurate available disks list
Multiple operations of adding and detaching disks from groups can cause the next detach operation to fail while the disk is still shown on the list of available disks. Trying to add this disk to a group will then fail with the following error: "[zfslib-wrap-zpool-ZpoolCmdError-1] invalid vdev specification". In this case, detach the disk once again.
After removing disks from groups they may not be displayed on the list of available disks
Sometimes after removing disks from groups, Spare/Read Cache/Write Log disks are displayed on the list of unassigned disks but not on the list of available disks. In this case, click the Rescan button located in the add group form.
Reusing disks from an exported and deleted Zpool
After deleting an exported Zpool, not all disks which were part of the Zpool become immediately available. Before you can reuse disks which were previously used as a Spare or a Read Cache, you must first clean them. Use the “Remove ZFS data structures and disks partitions” function located in the “Extended tools”.
Negotiated speed of network interfaces may not display correctly
For some network interfaces, the negotiated speed field may display an incorrect value in the GUI and Console. This issue will be fixed in next releases.
Limited ability of the GUI to display a large number of elements
After creating multiple snapshots, clones or zvols, some forms in the GUI work very slowly. If you need to create many snapshots, clones or zvols, it is strongly recommended to use the CLI to perform operations on them.
Open-E VSS Hardware Provider system recommendations
It is strongly recommended to use Windows Server 2012. On other Windows systems, the Open-E VSS Hardware Provider works unstably.
Exceeded quota on a dataset does not allow files to be removed
Files located on datasets with an exceeded quota cannot be removed. In this case, please increase the quota and then remove the unnecessary files.
Slow WebGUI with multiple datagroups
A Zpool with more than 20 datagroups causes some WebGUI forms to work very slowly. If you need to create many datagroups, it is strongly recommended to use the CLI API.
Slow WebGUI with multiple datasets
More than 25 datasets cause the WebGUI to work slowly.
For Open-E JovianDSS users, it is recommended to upgrade Zpools to the latest ZFS file system. Although the file system upgrade is absolutely safe for your data and takes only a few minutes, please be aware that this operation cannot be undone. In order to upgrade a single Zpool, please use "WebGUI -> Zpool options -> Upgrade file system" from the Zpool's option menu.
Intel® Ethernet Controller XL710 Family
When using Open-E JovianDSS with the Intel® Ethernet Controller XL710 Family, it is necessary to update the network controller's firmware to the version: f4.33.31377 a1.2 n4.42 e1932.
Motherboards with x2APIC technology
When using a motherboard with x2APIC technology enabled, it is necessary to disable x2APIC in the BIOS. Otherwise, problems with CPU cores will occur.
NFS FSIDs and Zpool name
The Zpool name is one of the factors taken into account when NFS FSIDs are generated. This means that when the Zpool name is changed, e.g. during export and import with a different name, the FSIDs for NFS Shares located on this Zpool will also change.
High Availability shared storage cluster does not work with Infiniband controllers
Due to technical reasons, the High Availability shared storage cluster does not work properly when Infiniband controllers are used for VIP interface configuration. This limitation will be removed in future releases.
Static routing functionality was removed
Starting from up10, it is not possible to configure static routing in the TUI. If static routing was configured in a previous version, this configuration will be removed from the system.
Disks with LVM data cannot be used to create a Zpool
An attempt to create a Zpool with drives that contain LVM data will fail with the following error:
"cannot open 'lvm-pv-uuid-R25lTS-kcDc-eiAN-eAlf-ppgi-rAqu-Oxy1Si': no such device in /dev must be a full path or shorthand device name"
In this case, if you want to use those disks, please use the “Remove ZFS data structures and disks partitions” function located in “Extended tools”.
Unexpectedly long failover time, especially in an HA-Cluster with two or more pools
The current failover procedure moves pools in sequence. Since the up27 release, up to 3 pools are supported in an HA-cluster. If all pools are active on a single node and failover needs to move all 3 pools, the failover may take longer than 60 seconds, which is the default iSCSI timeout in Hyper-V Clusters. In some environments under heavy load, cluster resource switching may also take too long. If the switching time exceeds the iSCSI initiator timeout, it is strongly recommended to increase the timeout to 600 seconds.
In case of using Windows, to increase the iSCSI initiator timeout, please perform the following steps:
1. Run the regedit tool and find the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\...\Parameters\MaxRequestHoldTime registry key
2. Change the value of the key from the default 60 sec to 600 sec (decimal)
In case of using VMware, to increase iSCSI initiator timeout, please perform following steps:
1. Select the host in the vSphere Web Client navigator
2. Go to Settings in the Manage tab
3. Under System, select Advanced System Settings
4. Choose the Misc.APDTimeout attribute and click the Edit icon
5. Change the value from the default 140 to 600 sec.
In case of using XenServer, to increase the iSCSI initiator timeout, please perform the following steps:
A. For existing Storage Repositories (SR):
1. Edit /etc/iscsi/iscsid.conf
2. Find the line: node.session.timeo.replacement_timeout = 120
3. Change the value from the default 120 to 600 sec.
4. Detach and reattach the SRs. This will apply the new iSCSI timeout settings to the existing SRs.
B. For new Storage Repositories (SR):
1. Edit /etc/iscsi/iscsid.conf
2. Find the line: node.session.timeo.replacement_timeout = 120
3. Change the value from the default 120 to 600 sec.
4. Create the new SR. New and existing SRs will be updated with the new iSCSI timeout settings.
Activation may be lost after update
In some environments, after updating to up11 the system may require re-activation. This issue will be removed in future releases.
Bonding ALB and Round-Robin do not work in Hyper-V and VMware environments
When using JovianDSS as a Hyper-V or VMware guest, ALB and Round-Robin bonding are not supported. Please use another type of bonding.
Continuous writing in a VMware guest can cause deleting a VMware snapshot to take a long time
Using On- & Off-site Data Protection on a zvol/dataset with a VMware guest where many I/O operations are performed can cause the process of deleting a VMware snapshot to take a long time. Please take this into consideration when setting up the scheduler for an Off-site Data Protection task.
Enabling quota on a dataset can cause a file transfer interrupt
Enabling the quota functionality on a dataset can interrupt ongoing file transfers. Please enable the quota on the dataset before using it in a production environment, or make sure that no file transfers are active when enabling it.
Nodes connected to the same AD server must have unique Server names
If JovianDSS nodes are connected to the same AD server, they cannot have the same Server names.
A share cannot have the same name as the Zpool
If a share has the same name as the Pool, connection problems will occur. Please use different names.
No persistent rules for network cards in virtual environment
Changing the settings of virtual network cards (deleting, changing the MAC, etc.) can cause unstable system behaviour. Please do not change these settings on a production system. This issue will be fixed in next releases.
Downgrade to up17 or earlier is not possible
Starting from up18, the bootable medium always has a SW RAID structure. Downgrading to an earlier version is therefore not possible. If you need to go back to an earlier version, you must reinstall it.
System cannot be installed on NVMe disks and on cciss based controllers
This issue will be fixed in next releases.
Interrupting the process of adding a second disk to SW RAID (bootable medium) can cause the system to run from a disk with incomplete data
Performing operations like reboot, shutdown, or power off while data is being mirrored onto the newly added disk can cause the system to boot from the new disk, which has incomplete data. In this case, the SW RAID function shows an empty status and a wrong number of RAID members. To resolve this issue, please unplug the disk which has incomplete data, boot the system, plug the disk back in and add it once again to the SW RAID.
SAS-MPIO cannot be used with Cluster over Ethernet
It is strongly recommended not to use Cluster over Ethernet with the SAS-MPIO functionality. Such a configuration can lead to very unstable cluster behavior.
On- & Off-site Data Protection backward compatibility problem
When using the On- & Off-site Data Protection functionality in up21 or earlier, it is strongly recommended to remove all backup tasks created by the CLI API and re-create them using the GUI.
Wrong state of storage devices in VMware after power cycle of both nodes in HA FC Target
In an FC Target HA environment, a power cycle of both nodes simultaneously may lead to a situation where VMware is not able to restore the proper state of the storage devices. In the vSphere GUI, LUNs are displayed as Error, Unknown or Normal,Degraded. Moving the affected pools to another node and back to their native node should bring the LUNs back to normal. A second option is to restart Failover in Jovian’s GUI. Refresh vSphere’s Adapters and Devices tab afterwards.
Problem with maintenance in case of disk failure
In case of a disk failure, please remove the damaged disks from the system before starting administrative work to replace the disk. The order of actions is important.
Separated mode after update from JovianDSS up24 to JovianDSS up25
In an HA cluster environment, after updating one node from JovianDSS up24 to JovianDSS up25, the other node can fall into separated mode and the mirror path might indicate a disconnected status. In such a case, go to Failover Settings and, in the Failover status section, select Stop Failover on both nodes. Once this operation is finished, select Start Failover.
Different Write Cache default setting for zvols in early beta versions of JovianDSS up25
In the early beta versions of JovianDSS up25, the default value of the Write Cache Log bias of zvols was set to “In Pool (Throughput)”. In the final release of JovianDSS up25, the Log bias is set to “Write log device (Latency)”.
Please note that the “In Pool (Throughput)” setting may cause a performance drop in environments with many random access workloads, which is a common factor in the majority of production environments.
Target alias name is required while configuring HA FC Target in case of adding two or more ports to one FC group
If you want to have more than one port in each FC group (in an HA FC configuration), it is necessary to type in a Target alias name for every port. Otherwise, an error message “Target alias is already used” can show up while setting up remote port mapping for FC targets in (pool name) -> Fibre Channel -> Targets and initiators assigned to this zpool. This issue will be resolved in a future release.
New default value for qlini_mode parameter for FC kernel module qla2xxx_scst
In order to configure an FC Target, the kernel module parameter qlini_mode should be set to “exclusive” (in some early beta versions of JovianDSS up25, qlini_mode was set to “enabled”). To verify the value of this parameter, open the Jovian TUI and use the CTRL+ALT+W key combination to launch Hardware configuration. Press "Yes" to acknowledge the initial warning message. Type in the password. Choose the option: Kernel module parameters. Select qla2xxx_scst QLogic Fibre Channel HBA Driver and make sure the value of this parameter is set to “exclusive”.
Please note that in order to change this parameter Failover must be stopped first.
Very low performance of FIO/WT in case of mixed FIO/WT and FIO/WB zvol configurations over Fibre Channel
In case of mixed FIO/WT and FIO/WB zvol configurations over FC, significantly decreased performance can be observed on the FIO/WT zvols.
More than one zvol with FIO/WB mode can cause instability of the Fibre Channel connection
If there is more than one FIO/WB zvol, or a FIO/WB zvol coexists with other types of zvols, this can cause instability of the FC connection with client machines. As a result, client machines may unexpectedly lose FC-connected resources.
In certain situations the system page cache is not able to flush File I/O errors by itself and cache flushing has to be performed manually
Under certain conditions (like overfilling a zvol and then expanding its size), some File I/O errors may be held by the system page cache and require manual flushing (in the GUI, use Storage -> Rescan).
Updating nodes of the Jovian cluster from up24 and earlier versions changes FC ports to target mode, resulting in a lost connection to storage connected via FC initiator
There is a significant difference in FC configuration between up24 and earlier versions and later ones. The earlier versions allowed the FC ports to be configured in initiator mode only, while later versions allow both target and initiator mode, with target as the default. Therefore, when using storage connected via an FC initiator, the FC port(s) must be manually corrected in the GUI of the updated node.
Updating Metro Cluster node with NVMe disks as read cache from JovianDSS up26 or earlier can cause the system to lose access to the NVMe disks
The process of updating a Metro Cluster node from JovianDSS up26 or earlier changes the NVMe disk IDs. As a consequence, moving the pool back to the updated node is possible, but the read cache is gone (ID mismatch). In order to bring the read cache back to the pool, we recommend using the console tools in the following way: press Ctrl+Alt+x -> “Remove ZFS data structures and disks partitions”, locate and select the missing NVMe disk and press OK to remove all ZFS metadata on the disk. After this operation, click the Rescan button in GUI -> Storage. The missing NVMe disk should now appear in Unassigned disks at the bottom of the page, which allows selecting that disk in the pool’s Disk groups tab. Open the Disk groups tab of the pool, press the Add group button and select Add read cache. The missing disk should now be available to select as a read cache.
LDAP synchronization fails and cannot be resumed in case of a power cycle of the source node while synchronization is in progress
A temporary solution is to restart the destination node and, if synchronization is not resumed automatically, use the Reset button in GUI -> User Management -> Lightweight Directory Access Protocol (LDAP). This problem will be solved in future releases.
Synchronization of a large LDAP database can take a long time (e.g. 10h for 380K users) and can be associated with a high system load
This problem will be solved in future releases.
Cluster node does not join after an update if a restart was performed while LDAP synchronisation was in progress
Updating a cluster node while LDAP synchronisation is in progress may corrupt the LDAP database, which causes the node joining process to fail. It is recommended to wait until the LDAP database is fully synchronized before updating nodes (check the synchronization status in GUI -> User Management -> Lightweight Directory Access Protocol (LDAP)).
Long failover procedure time in case of a Xen client with iSCSI MPIO configuration
In a scenario where a Xen client is an iSCSI initiator in an MPIO configuration, powering off one node starts a failover procedure that takes a very long time. The pool is finally moved successfully, but many errors show up in dmesg in the meantime. For such an environment, we recommend adding the following entry in the device section of the configuration file /etc/multipath.conf:
The structure of the device section should look as follows:
path_selector "round-robin 0"
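The note above gives only the path_selector attribute itself; placed inside a multipath.conf device section, it might look as follows (a sketch only; the device-matching attributes, elided here, depend on your environment):

```
device {
        ...
        path_selector "round-robin 0"
}
```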