I set up a 16 TB ZFS pool (vdevs are all 2-way mirrors) as a "life preserver" array for a colleague who needed iSCSI storage to get his 12 TB of life's work online within a matter of weeks. I understand there are different phases to an iSCSI transfer, but I don't see how anything the target would normally send could come anywhere close to 10 TB upstream (that's almost the size of the dataset itself!). Especially since these are mainly 500 MB image files, there shouldn't be much per-file overhead. This seems rather up-heavy compared to my (admittedly limited) experience, but that's all been Unix-to-Unix, never iSCSI. So I really have 3 unknowns:
1) iSCSI - is there something inherently busy about the protocol?
2) the iSCSI initiator is a Windows Server 2003 box. Could it be NTFS that's generating this much traffic?
3) Within Windows, I'm using robocopy. I know this is a FreeBSD forum, but I'm guessing more than a few folks providing ZFS services have experience with a heterogeneous network that includes Windows servers. I'm only checking for file existence, file size, and write date, but could my robocopy options be the problem?
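For context, the kind of robocopy invocation I mean looks roughly like this (the paths are placeholders and the switches are illustrative, not my exact command):

```shell
rem Mirror a source tree to the iSCSI-backed NTFS volume.
rem By default robocopy decides whether to copy a file using
rem existence, size, and last-modified timestamp -- the same
rem three checks I described above.
rem /MIR   mirror the tree (implies /E and /PURGE)
rem /R:2   retry a failed copy twice
rem /W:5   wait 5 seconds between retries
robocopy \\fileserver\images E:\images /MIR /R:2 /W:5 /LOG:C:\logs\copy.log
```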