Alternative to clonehdd?

Prior to FreeBSD 9.0, I successfully cloned disks with sysutils/clonehdd.
Unfortunately, that utility does not work with modern GPT partitions.
Is there a convenient alternative for backing up or cloning a disk under FreeBSD, one that creates an exact copy of the disk?
Perhaps someone has written their own automation script that works like the old clonehdd.
 
dd or cstream the whole disk (all partitions and the partition table)?
Yes, I need to create a copy (clone) of the hard drive with all its data and partitions, to serve as a mirrored cold-swappable spare drive.
That is also how clonehdd worked.
 
If you're cloning a disk onto an SSD with dd,
ensure the SSD is really completely clean first ("reset to factory state"),
otherwise [not-so-]funny things may happen later.

camcontrol security ... for SATA SSDs; see camcontrol(8)
(you'll first need to set a password and activate security, then erase the disk)
or
nvmecontrol sanitize ... for NVMe drives; see nvmecontrol(8)
 
I don't use dd, and I have no SSDs;
my disks are HDDs.
I need a working and convenient tool for cloning an HDD 1:1 on servers, like clonehdd.
Possibly a script based on dump/restore.
 
The tool works, but for the purposes of disk cloning it is very, very slow because of its sector-by-sector algorithm.

With the source disk filled as shown below, the job was only 5% complete after 30 minutes.
Code:
# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ada0p3    969G    252M    891G     0%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/ada0p4    484G     32M    446G     0%    /home
/dev/ada0p5    969G     16G    875G     2%    /usr
/dev/ada0p7    2.8T     45M    2.6T     0%    /usr2
/dev/ada0p6    1.9T    316M    1.7T     0%    /var

I did not find any command-line options in its documentation that would speed up the job.
Any ideas?
 
I don't use dd, and I have no SSDs;
my disks are HDDs.
I need a working and convenient tool for cloning an HDD 1:1 on servers, like clonehdd.
Possibly a script based on dump/restore.

With that level of 'emptiness', dump and restore are about all that makes sense, especially if you make gz- or xz-compressed tars, which will be relatively tiny.

First use # gpart show -p ada0 to record exact size & start sector of each partition (including boot & swap) so you can easily recreate these on the new disk with gpart.

Then newfs the new partitions and restore the dumps.

That shouldn't take more than an hour or so, even if you dump and restore via USB 3 memsticks.
 
5% complete in 30 minutes
That works out to about 10 hours total, which isn't bad.
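The extrapolation above can be sketched in a few lines of Python (the 5%/30-minute figures are the ones reported in this thread):

```python
def eta_hours(fraction_done: float, elapsed_min: float) -> float:
    """Linear extrapolation of total runtime from progress so far."""
    return elapsed_min / fraction_done / 60.0

# 5% done after 30 minutes -> about 10 hours for the whole disk
print(eta_hours(0.05, 30.0))
```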

In my pre-FreeBSD days I cloned a lot of disks with gparted-live (which can handle neither freebsd-ufs nor zfs).
Overnight runs are absolutely common when cloning disks of up to 1 TB.

Half a year ago I moved my data storage to another computer.
Copying (not cloning sector by sector) took two and a half days. (Some pros here working on real machines may laugh at this.)
Does impatience kill data? Large disks and large amounts of data simply need time.

That's one reason why I'm using FreeBSD and ZFS:
to avoid cloning as far as possible.

As smithi already pointed out:
if you do not strictly need exact 1:1 sector-level clones,
there are several other, much quicker ways to move your data.
 
I also clone disks with dd, if I have to clone disks at all, which, as I said, I try to avoid.
Once you've dealt with dd a bit, it's a very powerful tool, because it operates at such a low level.

I just had an issue making an ISO image of a CD.
It turned out to be a nasty copy protection that caused dd to stop silently (no error or warning, since it's so low-level; you have to check some other way whether the job was done).
So I was looking for alternatives for dumping the CD to an ISO image, stumbled over recoverdisk,
played with it a bit, and thought of this thread here...

Nice tool, though it cannot outsmart the copy protection; but that's another story and shall not be told here.

But since we're talking about dd anyway:
if I enlarge the block size, e.g. dd ... bs=16M, its speed can be increased significantly.
But then it would no longer produce a 1:1 copy of the disk, because for that the block size would need to match the sector size, right?
 
No, no. dd makes bit-exact copies, including in length; only the internal chunk/buffer size used while working is set larger, and that decreases CPU usage. A lot of hardware now uses 4K sectors, where it used to be 512 bytes. When I have enough RAM and have to dd a lot, I set bs=1G.
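To illustrate that point, here is a small Python simulation of dd's read/write loop (not dd itself, just a sketch of its chunking): the final short chunk is written unpadded, so the output is byte-identical to the input no matter how large the buffer is.

```python
import io

def dd_like_copy(src: bytes, bs: int) -> bytes:
    """Copy src in bs-sized chunks, as dd does with bs=...;
    the last partial chunk is written as-is, never padded."""
    inp, out = io.BytesIO(src), io.BytesIO()
    while True:
        chunk = inp.read(bs)
        if not chunk:
            break
        out.write(chunk)
    return out.getvalue()

# Length deliberately not a multiple of the 1 MiB buffer:
data = bytes(range(256)) * 4096 + b"odd tail"
assert dd_like_copy(data, 1 << 20) == data  # bit-exact, same length
```

The real speed-up from a large bs comes from doing fewer, bigger IOs and syscalls; the data itself is untouched.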
 
That's very interesting and very good to know, thanks!
I'll check that out.
If this is true, it means:
a) the copying/cloning process can be sped up significantly;
b) using bs=2048 to produce CD ISO images would make no sense... (?)
 
With that level of 'emptiness', dump and restore are about all that makes sense, especially if you make gz- or xz-compressed tars, which will be relatively tiny.

First use # gpart show -p ada0 to record exact size & start sector of each partition (including boot & swap) so you can easily recreate these on the new disk with gpart.

Then newfs the new partitions and restore the dumps.

That shouldn't take more than an hour or so, even if you dump and restore via USB 3 memsticks.

Yes, agreed: dump and restore with gpart is the correct option. But how do I use them correctly, for example, to clone a disk like this 1:1?
Code:
# gpart show -p da0
=>      40  83886000    da0  GPT  (40G)
        40      1024  da0p1  freebsd-boot  (512K)
      1064   6291456  da0p2  freebsd-swap  (3.0G)
   6292520  10485760  da0p3  freebsd-ufs  (5.0G)
  16778280   2097152  da0p4  freebsd-ufs  (1.0G)
  18875432  31457280  da0p5  freebsd-ufs  (15G)
  50332712  16777216  da0p6  freebsd-ufs  (8.0G)
  67109928  16775168  da0p7  freebsd-ufs  (8.0G)
  83885096       944         - free -  (472K)
 
That works out to about 10 hours total, which isn't bad.
If you do not strictly need exact 1:1 sector-level clones,
there are several other, much quicker ways to move your data.
11.5 hours is an unacceptable runtime for a production server, even bearing in mind that I try to stop all running daemons and services beforehand.
And yes, I don't need exact 1:1 clones across disk sectors.
It is strange that no port similar to the old clonehdd has been written for the modern FreeBSD environment using the standard dump, restore, and gpart tools.
 
11.5 hours is an unacceptable runtime for a production server, even bearing in mind that I try to stop all running daemons and services beforehand.
And yes, I don't need exact 1:1 clones across disk sectors.
It is strange that no port similar to the old clonehdd has been written for the modern FreeBSD environment using the standard dump, restore, and gpart tools.
Judging by the output of your df command given above, this is probably an 8TB disk (the file systems listed there add up to 7.2TB). With perfect tools, you can read or write a disk at about 150-200 MB/second. Long division tells us that copying an 8TB disk will take between 11.1 and 14.8 hours, assuming those IO rates. In reality, those IO rates are somewhat difficult to achieve: you need to use large IOs (512 bytes or 4KiB is out, I would recommend 1MiB and up), you need to overlap reads and writes (so both disks are simultaneously busy), and for best performance you need to have multiple outstanding writes at all times. If you got 11.5 hours, you are doing very well.
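The long division above, as a quick Python check (the 8 TB size and 150-200 MB/s rates are the assumptions from this post, not measured values):

```python
TB = 10**12
MB = 10**6

def copy_hours(size_bytes: int, rate_mb_per_s: float) -> float:
    """Time to stream size_bytes at a sustained rate, in hours."""
    return size_bytes / (rate_mb_per_s * MB) / 3600.0

print(round(copy_hours(8 * TB, 200), 1))  # fast case, about 11.1 hours
print(round(copy_hours(8 * TB, 150), 1))  # slow case, about 14.8 hours
```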

What this really points out is that doing image (byte accurate) copies of disks can be very inefficient, if the source file systems are quite empty (which is the case in your situation, judging by the df output). Theoretically, something like dump and restore would be a lot faster in this situation (since most of the disk is unallocated, and doesn't need to be copied). But note that this is specific to your situation of mostly empty file systems on the source disk; if instead the file systems were mostly full, then a byte-wise copy using very large IOs would be significantly faster, because the disk would be used sequentially and queued, not random. The exact cutover point for this tradeoff is hard to guess, and depends on specifics (like disk technology and file system fragmentation).

Why has nobody implemented such a tool recently? I don't know. This kind of copy can be done relatively easily by hand: partition the target disk according to your tastes, then iterate over all source file systems. If the source is pretty full, and the target partition is the same size or larger than the source partition, use a good binary copy tool (dd is decent but not perfect; a multi-threaded one can be hacked up in an afternoon in C or Python) to perform an image copy. If the target partition is larger than the source, use the standard commands to resize the target file system afterwards. On the other hand, if the source is pretty empty, run newfs on the target partition and use dump/restore to copy the data.
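As a sketch of the "overlap reads and writes" idea mentioned above, here is a hypothetical two-threaded copier in Python (overlapped_copy is an illustrative name, not an existing tool): one thread reads ahead into a bounded queue while the main thread writes, so both disks stay busy at once.

```python
import queue
import threading

def overlapped_copy(src_path: str, dst_path: str,
                    bs: int = 1 << 20, depth: int = 4) -> None:
    """Copy a file with one reader thread and one writer,
    keeping up to `depth` chunks in flight between them."""
    q: "queue.Queue[bytes]" = queue.Queue(maxsize=depth)

    def reader() -> None:
        with open(src_path, "rb") as f:
            while True:
                chunk = f.read(bs)
                q.put(chunk)          # empty chunk doubles as EOF marker
                if not chunk:
                    return

    t = threading.Thread(target=reader)
    t.start()
    with open(dst_path, "wb") as out:
        while True:
            chunk = q.get()
            if not chunk:             # EOF marker from the reader
                break
            out.write(chunk)
    t.join()
```

A real tool would add error handling and unbuffered IO to reach the quoted rates; this only demonstrates the pipelining.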
 
Yes, agree, dump and restore with gpart is the correct option, only how to use them correctly, for example, to clone such a disk 1 in 1?
Code:
# gpart show -p da0
=>      40  83886000    da0  GPT  (40G)
        40      1024  da0p1  freebsd-boot  (512K)
      1064   6291456  da0p2  freebsd-swap  (3.0G)
   6292520  10485760  da0p3  freebsd-ufs  (5.0G)
  16778280   2097152  da0p4  freebsd-ufs  (1.0G)
  18875432  31457280  da0p5  freebsd-ufs  (15G)
  50332712  16777216  da0p6  freebsd-ufs  (8.0G)
  67109928  16775168  da0p7  freebsd-ufs  (8.0G)
  83885096       944         - free -  (472K)

All my machines are and have been MBR / BSD partitioned, so I shouldn't advise about GPT.

That said, the EXAMPLES for GPT in gpart(8) seem clear and thorough to me, so that procedure is what I'd use.

About time: I expect the newfs(8) steps will take by far the longest, so after partitioning you could script the multiple newfs steps and have a good sleep <&^}=
 