Data corruption occurs when files are damaged as a result of a hardware or software failure, and it is one of the main problems hosting companies face: the larger a hard drive is and the more data it stores, the more likely it is that some of that data will be corrupted. There are a couple of fail-safes, but the data often gets corrupted silently, so neither the file system nor the administrators notice a thing. As a result, a damaged file is handled as a regular one, and if the hard drive is part of a RAID, the file is duplicated on all the other drives. In theory this provides redundancy, but in practice it makes the damage worse. Once a file is corrupted, it becomes partially or fully unreadable: a text document will no longer open, an image file will display a random mix of colors if it opens at all, and an archive will be impossible to unpack, so you risk losing your content. Although the most widely used server file systems include various integrity checks, they often fail to identify a problem early enough, or they need so much time to check all the files that the server will not be operational in the meantime.
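The core idea behind detecting silent corruption is simple: record a checksum (a digital fingerprint) of the data when it is written, and compare the current fingerprint against it on every read. A minimal Python sketch of that idea, using SHA-256 as an illustrative checksum (the data and helper names here are examples, not any particular file system's internals):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Digital fingerprint of the data (SHA-256)."""
    return hashlib.sha256(data).hexdigest()

# Record a fingerprint at write time.
original = b"important customer data"
recorded = checksum(original)

# Simulate silent corruption: a single changed byte,
# with no error reported by the hardware.
corrupted = b"important customer dsta"

# An ordinary read returns the bytes as-is; only a
# checksum comparison reveals the damage.
print(checksum(original) == recorded)   # True  - intact copy
print(checksum(corrupted) == recorded)  # False - silently damaged
```

Any change to the data, however small, produces a completely different fingerprint, which is what makes the comparison reliable.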
No Data Corruption & Data Integrity in Hosting
We guarantee the integrity of the data uploaded to every hosting account created on our cloud platform, because we employ the advanced ZFS file system. ZFS was designed specifically to prevent silent data corruption by keeping a unique checksum for every file. We store your data on multiple SSD drives that work in a RAID, so identical copies of your files exist in several places at the same time. ZFS checks the digital fingerprint of every file on all the drives in real time, and if the checksum of any copy differs from what it should be, the file system replaces that copy with an undamaged one from another drive in the RAID. Most other file systems do not verify data against checksums on every read, so data can be silently damaged and the bad file replicated across all the drives over time; since that cannot happen on a server running ZFS, you won't have to worry about the integrity of your information.
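The repair step described above, comparing each mirrored copy against the recorded checksum and overwriting a damaged copy with a healthy one, can be sketched roughly as follows. This is a simplified illustration of the self-healing concept, not ZFS internals; the mirror list and repair logic are hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint, standing in for a file-system checksum."""
    return hashlib.sha256(data).hexdigest()

def read_with_self_heal(mirrors: list, expected: str) -> bytes:
    """Return a verified copy and overwrite any damaged mirror with it."""
    # Find the first replica whose fingerprint matches the expected checksum.
    good = next(bytes(m) for m in mirrors if fingerprint(bytes(m)) == expected)
    # Heal every replica that no longer matches.
    for m in mirrors:
        if fingerprint(bytes(m)) != expected:
            m[:] = good
    return good

data = b"website content"
expected = fingerprint(data)
mirrors = [bytearray(data), bytearray(data), bytearray(data)]

mirrors[1][0] ^= 0xFF  # silently corrupt one replica

result = read_with_self_heal(mirrors, expected)
print(result == data)                            # True - caller gets clean data
print(all(bytes(m) == data for m in mirrors))    # True - bad replica repaired
```

The caller never sees the damaged bytes: the bad replica is detected by its mismatched fingerprint and replaced before the read completes.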
No Data Corruption & Data Integrity in Semi-dedicated Servers
You will not have to deal with silent data corruption if you choose one of our semi-dedicated server plans, because the ZFS file system our cloud hosting platform runs on uses checksums to make sure all of your files stay intact at all times. A checksum is a unique digital fingerprint assigned to every file stored on a server. Since we keep all content on multiple drives simultaneously, the same file carries the same checksum on each drive, and ZFS compares the checksums across the drives in real time. If it detects that a copy is corrupted and its checksum differs from what it should be, it immediately replaces that copy with a healthy one, preventing the corrupted version from being synchronized to the remaining drives. Few other file systems verify data this way, which makes ZFS much more dependable than file systems that cannot detect silent data corruption and end up copying bad files across hard drives.
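Beyond checking files as they are read, this approach also allows a periodic sweep over everything stored, re-verifying each file's fingerprint to catch bit rot in data that has not been touched for a long time (ZFS calls such a sweep a scrub). A hypothetical sketch of that sweep, with an invented in-memory store standing in for the real file system:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint, standing in for a file-system checksum."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical store: filename -> (current bytes, checksum recorded at write time)
files = {
    "index.html": b"<html>...</html>",
    "logo.png":   b"\x89PNG....",
}
store = {name: (data, fingerprint(data)) for name, data in files.items()}

# Simulate bit rot in one file that nobody has read recently.
store["logo.png"] = (b"\x89PNQ....", store["logo.png"][1])

def scrub(store) -> list:
    """Return the names of files whose contents no longer match their checksum."""
    return [name for name, (data, fp) in store.items() if fingerprint(data) != fp]

print(scrub(store))  # ['logo.png']
```

A sweep like this is how damage in rarely accessed files gets discovered and repaired before every redundant copy has a chance to degrade.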