So the ZFS experiment continues. Upon the release of b129 I set off into the unknown on a voyage of dedupe, which at first held the promise of lower disk usage, faster IO speeds and a warm fuzzy feeling deep down that you only get from awesome ideas becoming reality. Ahem.
Most sources say you need more RAM, and that is true; what they don't say is how much RAM for what size of data set, which would be far more useful to home users like me. My boxes have 2 GB of RAM each, and that is not enough for dedupe, nowhere near enough, not if you have 6 TB of random-ish data. I might retry when I get to 8 GB of RAM, but not before. You see, if ZFS can't keep the whole dedupe table in RAM all the time, any write to a dedupe-enabled volume results in reads (or at least seeks) to pull the rest of the table off disk. So what I saw was a gradual slowdown while writing to the volume. I was determined to let it finish, to see what savings I would make, and then scrap it due to performance, but after waiting 16 days for the copy I cancelled it.
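To put rough numbers on that, here's my own back-of-the-envelope, using the commonly quoted figure of roughly 320 bytes of in-core DDT per unique block and assuming a 64K average block size (neither of which is exact for my data):

    # ~6 TB of data at ~64K per block is about 100 million unique blocks
    BLOCKS=$((6 * 1024 * 1024 * 1024 * 1024 / (64 * 1024)))
    # at ~320 bytes of DDT per block, that's roughly 30 GB of table to keep in RAM
    echo $((BLOCKS * 320 / 1024 / 1024 / 1024))

Even if the real average block size is closer to 128K, that's still around 15 GB of table for 6 TB, so 2 GB of RAM was never going to hold it.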
The only way I found to even see the contents/size of the dedupe table (DDT) is zdb -DD <poolname>, which dumps the DDT statistics for the pool.
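If all you want is the overall ratio rather than the size of the table itself, the pool reports it as a property too; the pool name tank below is just an example:

    zpool get dedupratio tank
    zpool list tank    # the DEDUP column shows the same ratio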
Savings of around 80 GB with dedupe and compression (this is a backup box, so no real-world performance requirement) are just not worth needing 3x or more the RAM, and possibly an SSD for the L2ARC cache to speed things up. Yep, the suggestion, and the observed behaviour, was to hook up a cheap small (30 GB) SSD as cache to accelerate it. I wouldn't mind that so much for a primary box, but this is my backup/second-copy box, so it's not really ideal. Certainly not for 80 GB of savings, which at current prices is around $5 of disk.
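For reference, hooking such an SSD up as L2ARC is a one-liner; the pool and device names below are made up:

    # add a small SSD to the pool as a cache (L2ARC) device
    zpool add tank cache c2t1d0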
My second attempt is now underway. This time I've sliced my data up into more volumes, and by more I mean smaller on average, around 2 TB max per volume, which from experience at work I've learned is a good rule of thumb. Now I can enable compression+dedupe on only specific bits, hopefully where the most savings are to be made, and the rest is just stored raw. This way the savings might be similar, but without the major write-speed penalty. I've also realised that if I want screaming performance from the production box, I'll throw an SSD in there, but that means more SATA ports, which means a major change. I need to work on power management too.
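A sketch of what that split looks like, assuming plain ZFS filesystems and made-up names (tank/docs for the dedupe-friendly stuff, tank/media for the raw bulk):

    # compression + dedupe only where duplicates are likely
    zfs create -o compression=on -o dedup=on tank/docs
    # bulk media stored raw -- little duplication, and no write penalty
    zfs create tank/media
    # double-check what's enabled where
    zfs get compression,dedup tank/docs tank/media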
One thing that has gone right this time is that I'm now using a CF-to-IDE adaptor and booting off that. This way the OS thinks it's on a 2 GB HDD, so booting avoids the complexity of USB boot, uses less power, and doesn't take up a SATA port. Of course, new boards don't have PATA anymore, so I might need a CF-to-SATA adaptor in future.
Another thing that must be said: Solaris's CIFS server is fast.