I'm thinking about building a server that uses an SSD vdev as a landing zone for writes, before moving the data to a much larger HDD vdev for long-term storage.
For implementing the HSM layer, wouldn't it basically be: take a ZFS snapshot of the dataset on the SSD vdev, move the data to the HDD vdev, take a snapshot on the HDD side, and run zfs diff? If they don't match, move the non-matching data from the SSDs to the HDDs again.
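For what it's worth, the staging step above might look roughly like this on the command line. The pool and dataset names (ssdpool/ingest, hddpool/archive) are made up for illustration; one caveat worth knowing is that zfs diff compares snapshots of the *same* dataset, not datasets in different pools, while zfs send/recv already checksums the replication stream end to end:

```shell
# Hypothetical names: ssdpool/ingest (SSD tier), hddpool/archive (HDD tier).

# 1. Freeze the SSD-side dataset at a point in time.
zfs snapshot ssdpool/ingest@stage

# 2. Replicate the snapshot to the HDD tier; the send stream itself
#    is checksummed, so a completed recv implies an intact copy.
zfs send ssdpool/ingest@stage | zfs recv -F hddpool/archive

# 3. zfs diff only works between two snapshots of the SAME dataset,
#    e.g. to see what changed on the SSD side since the last stage:
zfs diff ssdpool/ingest@previous ssdpool/ingest@stage
```

Incremental follow-ups would use `zfs send -i @previous @stage` so only the delta crosses to the HDDs.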
Years ago, I was reading Mike Acton while the C++ devs around me were telling me to use managed pointers. But Mike Acton gave a lecture about C++ devs writing slow code. "You just need a char*. Throw the STL out," he said. So I spent three months debugging my char* class until it could ingest an entire XML file and I could manipulate it with other code without bugs. It was basically: call malloc once at the start of the program, read the entire file in, then use memmove with start_block + (end_block - start_block) +/- 1, and free the buffer before the program exits.
Shouldn't all of the data transfers be sequential reads and writes, and couldn't my existing string class potentially work for correcting differences between snapshots? Maybe I'd have to add some glue to make it work with ZFS, but I believe I already have most of the pointer math and bit twiddling needed to correct errors between snapshots. If there's data corruption from sequential writes, won't it look like { good_block } { bad_block } { good_block }, so that I can just do some pointer math between the two good_blocks to overwrite the bad_block?