zeroaccess
From 2009 onward I have only bought solid state disks for my computers. Fast forward to today and I bought my first mechanical hard disk in 12 years. Why?
Because as much as I love them, this drive is for detachable backup, and the value proposition of SSDs just isn't there for that use case.
This disk will exclusively host backup archives, currently .tib files from Acronis. Acronis writes a single full backup file of your machine (unless you tell it to split them up), which is currently ~1.5 TB but set to grow. I then take a weekly incremental backup nine times; the 10th backup is another full backup, which wipes the previous incremental chain. At any rate, all the files are very large.
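The rotation described above boils down to a modular counter. Here is a minimal sketch of that schedule (a hypothetical helper for illustration, not Acronis's actual scheduler):

```python
def backup_type(week: int) -> str:
    """Return the backup type for a given week of the rotation.

    Weeks are numbered from 0. Every 10th week starts a fresh cycle
    with a full backup; the nine weeks in between are incrementals.
    """
    return "full" if week % 10 == 0 else "incremental"

# First cycle plus the start of the next: 1 full, 9 incrementals,
# then another full that supersedes the old incremental chain.
schedule = [backup_type(w) for w in range(12)]
```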
When formatting the disk, I decided to spring for the new 2 MB NTFS cluster size, as I'm not aware of any downsides for this workload. Unless I'm misunderstanding, the backup software creates a compressed archive as one long, continuous write to the disk, which is exactly where a large cluster size can pay off. If disaster recovery is ever needed, I'd be looking at an equally long, continuous read of the same file.
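A quick back-of-envelope check of what a 2 MB cluster costs versus the old 64 KB maximum, using the ~1.5 TB archive figure from the post (sizes are illustrative):

```python
# Compare 64 KB vs 2 MB clusters for a single ~1.5 TB archive file.
TB = 1024**4
archive_size = int(1.5 * TB)

for cluster in (64 * 1024, 2 * 1024**2):
    clusters = -(-archive_size // cluster)      # ceiling division
    slack = clusters * cluster - archive_size   # waste in the final cluster
    print(f"{cluster // 1024:5d} KB clusters: {clusters:>10,} allocations, "
          f"{slack} bytes of slack")
```

The slack is bounded by one cluster (at most 2 MB) per file, which is noise next to a 1.5 TB archive, while the volume has to track roughly 32x fewer allocations.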
Has anyone else worked with large cluster sizes? Allocation sizes beyond 64 KB are fairly new, introduced in Windows 10 version 1709.
Shown is the disk resting on my case while formatting (a very long operation). I eventually moved it over the top case fan which reduced temperatures from 42°C to 32°C.