Much like many things in IT, “provisioning” can mean several different things. All of these meanings refer to different aspects of IT infrastructure, so it’s important to clarify what kind of provisioning is going to be discussed here. In this case, we’re referring to the provisioning that takes place when space is allocated in data storage virtualization systems, centralized disk storage systems, and storage area networks (SANs). With Open-E JovianDSS, this space can initially be allocated in two different ways: through thick-provisioning or thin-provisioning. Over-provisioning is also available when the thin-provisioning option is chosen.
Thick-provisioning, sometimes called “fat provisioning”, is the most conventional way to allocate storage space. With this method, any time space is logically assigned, it is also physically reserved on the disks and therefore cannot be used by anything other than the volume it’s allocated to. Thick-provisioning comes in two widely used variants – lazy-zeroed and eager-zeroed – and it’s important to note that Open-E JovianDSS uses the lazy-zeroing approach.
Before going any further, let’s clarify what zeroing is. When data is deleted, it generally isn’t erased from the disk right away. Instead, the old blocks remain in place and are simply marked as free to be overwritten. Zeroing is the process of overwriting those old blocks with zeroes so they are guaranteed to be empty before new data is written to them. With eager zeroing, every block is zeroed up front when the volume is created; with lazy zeroing, each block is zeroed only when it is written to for the first time.
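To make the difference concrete, here is a minimal, purely illustrative Python sketch of the two approaches – a toy model with invented names, not how Open-E JovianDSS or any real storage stack is implemented:

```python
# Toy model of eager-zeroed vs. lazy-zeroed thick provisioning.
# Purely illustrative: block counts, class and method names are invented.

class ThickVolume:
    def __init__(self, blocks: int, eager: bool):
        # "stale" stands in for whatever old data was left on the disk.
        self.disk = ["stale"] * blocks
        self.zeroed = [False] * blocks
        if eager:
            # Eager zeroing: wipe every block at creation time, so creating
            # the volume is slow but every later first write is fast.
            for i in range(blocks):
                self._zero(i)

    def _zero(self, block: int):
        self.disk[block] = 0
        self.zeroed[block] = True

    def write(self, block: int, data):
        if not self.zeroed[block]:
            # Lazy zeroing: the block is wiped on first use, so volume
            # creation is instant but the first write to a block is slower.
            self._zero(block)
        self.disk[block] = data


lazy_vol = ThickVolume(blocks=8, eager=False)   # created instantly
lazy_vol.write(3, "new data")                   # block 3 is zeroed, then written
```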
Pros:
There are multiple benefits to using thick-provisioning, some that apply to both variants and some that apply to only one of them. Let’s start with the general advantages of thick-provisioning.
Cons:
There are also multiple downsides to using thick-provisioning, some associated with a specific variant and others with thick-provisioning as a whole. Let’s take a look at some of those now.
Recommendation:
Suppose you run a massive enterprise with a budget to match, and you rely on getting the most performance out of every part of your IT system. In that case, it may be worth using eager-zeroed thick-provisioned disks for faster write performance and the assurance that the allocated storage will always be there. If you depend more on being able to create disks quickly, don’t mind an initially slower write speed, and still want the certainty that the storage space will be available when you need it, lazy-zeroed thick-provisioned disks will work just fine for your purposes. Thick-provisioning is not recommended for the budget-conscious or for those looking for storage space efficiency, because it offers no efficient way to manage space or power consumption, which in turn raises costs.
Thin-provisioning was the tech community’s response to the wasted-space problem caused by thick-provisioning. In essence, thin-provisioning allows logical space to be allocated without physically reserving it until it’s actually used. So if an IT administrator wanted to allocate 50GB of storage for a user, they could, and if that user then only used 10GB of it, only that 10GB would be reserved physically. The rest could still be used by somebody else, even though the original user has 50GB reserved logically. In this way, there’s far less wasted space.
As mentioned before, this is done by assigning the space logically but not actually reserving it physically until something is written. In this way, you could actually create as many volumes as you’d like on a system, and they’d all function until the physical cap is met. So, for instance, an administrator could create one hundred volumes of 100GB despite only having a total of 100GB of physical storage, and each of those volumes could be used until the aggregate total reached 100GB. With thick-provisioning, if you only had 100GB of physical storage and you created a volume for 100GB, then you wouldn’t be able to create another volume because all of that space would have been reserved both logically and physically whether it was used or not.
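The accounting behind this is easy to picture with a small sketch. The snippet below is a hypothetical toy model of thin-provisioned bookkeeping – the class, method names, and numbers are invented for illustration and have nothing to do with JovianDSS internals:

```python
# Toy bookkeeping for a thin-provisioned pool: logical promises vs. physical use.

class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb    # real disk capacity backing the pool
        self.logical_gb = 0               # total size promised to all volumes
        self.used_gb = 0                  # space actually consumed by writes

    def create_volume(self, size_gb: int):
        # Only the logical promise is recorded; nothing is reserved physically.
        self.logical_gb += size_gb

    def write(self, gb: int):
        # Physical space is consumed only when data is actually written.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("Pool is full: writes have hit the physical cap")
        self.used_gb += gb


pool = ThinPool(physical_gb=100)
for _ in range(100):
    pool.create_volume(100)      # one hundred 100GB volumes on 100GB of disk
pool.write(60)                   # fine: only 60GB of real data exists so far
print(pool.logical_gb, pool.used_gb)   # 10000 60
try:
    pool.write(50)               # would push real usage past the 100GB cap
except RuntimeError as err:
    print(err)
```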
Pros:
Cons:
Recommendation:
Thin-provisioning is absolutely fantastic for any organization looking to save money on its storage needs, with one caveat: any organization that wants to use thin-provisioning first needs people who can handle the responsibility of managing and monitoring the thin-provisioned disks. If those individuals are already on staff, or the organization is willing to spend a bit more upfront to hire them, then thin-provisioned disks are a great way to save money on both electricity and IT bills. The savings really add up in larger organizations that could potentially have hundreds or thousands of active volumes, each of which would otherwise waste quite a bit of storage space on thick-provisioned disks.
Over-provisioning, also sometimes referred to as over-allocation, is a feature of some thin-provisioning implementations that makes it possible to assign more space to virtual disks than is actually physically available. Giving a user 50GB of storage space while only having physical disks totaling 25GB, for instance, is exactly what over-provisioning refers to in this case.
Using the over-provisioning feature of thin-provisioning is recommended for any company or enterprise with highly experienced staff who can make sure the storage system never runs out of physical space while everything depends on it. If that staff is in place, or the company is willing to invest in acquiring it, over-provisioning lets a company benefit from the savings of thin-provisioned volumes while also giving it the flexibility to handle sudden spikes in storage requirements. It’s quite amazing if you can keep it all under control and in order. Alternatively, the company can invest in solutions that mitigate this problem. This is actually one of the reasons why Open-E JovianDSS is so valuable: it does a lot of this work for companies, provides the tools needed to properly monitor the system, and actively sends appropriate warnings and events to administrators, giving them more room for error.
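As a rough illustration of the kind of check such monitoring boils down to, here is a hypothetical helper an administrator might script for themselves – the threshold and numbers are made up, and this is not JovianDSS’s actual monitoring interface:

```python
# Hypothetical over-provisioning health check; all values are illustrative.

def overcommit_report(physical_gb: float, logical_gb: float, used_gb: float,
                      warn_at: float = 0.8) -> str:
    ratio = logical_gb / physical_gb    # how heavily the pool is over-provisioned
    fill = used_gb / physical_gb        # share of real capacity already consumed
    status = "WARNING: add capacity or migrate volumes" if fill >= warn_at else "OK"
    return f"overcommit {ratio:.1f}x, physical usage {fill:.0%} -> {status}"


# 25GB of physical disk backing 50GB of thin-provisioned volumes, 21GB written:
print(overcommit_report(physical_gb=25, logical_gb=50, used_gb=21))
# -> overcommit 2.0x, physical usage 84% -> WARNING: add capacity or migrate volumes
```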
This concludes our brief guide on provisioning. Now it’s your turn. What do you think – is thick-provisioning better than thin-provisioning? Is there actually a performance difference between eager-zeroed and lazy-zeroed thick-provisioned disks? Is over-provisioning a trap? Let us know in the comments below!