UFS Gmirror and GPT

Is it possible to mirror an entire disk using /sbin/gmirror when partitioning with GPT?

I know there were issues in the past, but I'm wondering whether that has been resolved in newer versions of FreeBSD (I plan to install 14.2).

If not, what are my alternatives (besides ZFS)?

When I try to partition using MBR, I get a "This partition scheme is not bootable on this platform" message.
 
Using RAID, I'm worried about what will happen if I switch servers and there is a different RAID controller.

It's probably a dumb question, but if I use graid and I have to replace one SSD, will the remaining one still boot if graid is not used?
 
Well, covacat uses graid, which will work with any graid metadata scheme.
Originally graid was for Intel motherboard RAID; somewhere along the line it morphed into a software RAID usable on any hardware.

I use gmirror and think it is worth checking out for EFI.
Thread 71904
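For reference, the classic gmirror-plus-GPT conflict is that gmirror stores its metadata in the last sector of the provider, which is exactly where GPT keeps its backup header, so a whole-disk mirror and GPT fight over that sector. The common workaround is to mirror the partitions rather than the whole disk. A sketch, assuming two disks ada0/ada1 and a typical EFI layout (device names and sizes are examples, not from this thread):

```shell
# Partition disk 1 with GPT, then clone the layout to disk 2
gpart create -s gpt ada0
gpart add -t efi -s 260m ada0
gpart add -t freebsd-swap -s 4g ada0
gpart add -t freebsd-ufs ada0
gpart backup ada0 | gpart restore -F ada1

# Mirror the partitions, not the raw disks -- gmirror's metadata then
# lives in the last sector of each partition, clear of the GPT backup header
gmirror label -v swap ada0p2 ada1p2
gmirror label -v root ada0p3 ada1p3
newfs -U /dev/mirror/root
```

The EFI system partition itself is not mirrored here; it would be populated on each disk separately so either disk can boot.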
 
will the remining one still boot up if graid is not used?
Yes indeed.
My scheme on my firewall uses old 16GB Innodisk industrial SSDs.
I swap them out whenever I like; a monthly swap is what I aim for.
My aim is to keep a 3-disk RAID1 with one drive as a cold spare, trying to keep wear leveling even across the drives.
I can use the cold spare in any machine. Interface names might need adjusting.

So I also have a spare firewall with cold spare drive from my main firewall. Just in case.
Looks like I have at least 4 usable drives in my RAID1 setup for the firewall.

To me that one machine would be the hardest to reproduce quickly, so I use RAID1.
 
Using raid, I'm worried what will happen if I switch servers and there is a different raid controller.
Well, in my example above they were all motherboard-attached drives of the "ada" type.

So for storage controllers you would need to assume "da" drive assignments.

The only glitch could be how the OS adds the EFI boot entry to the firmware. I am not sure there.

I do use dual Samsung NVMe AIC drives in a gmirror array in one server for redundancy.
 
So for storage controllers you would need to assume "da" drive assignments.
And if you are booting off a hardware controller, you might need to kldload the storage driver in the bsdinstall shell before creating the array.

You would also need to add a line for your controller to your new installation, like this:
echo 'mrsas_load="YES"' >> /tmp/bsdinstall_boot/loader.conf
Note the '>>' so you append to loader.conf rather than overwrite it. That way it works OOB on first boot. The mrsas driver is just used as an example here.

Usually storage controllers are auto-detected, but you should be aware of what's going on. You may need to load a driver.
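The steps above can be sketched like so for the bsdinstall shell (mrsas is just the example driver used in this thread; substitute whatever your controller needs):

```shell
# In the bsdinstall shell: load the controller driver by hand so the
# da* disks appear before you create the array
kldload mrsas
camcontrol devlist          # confirm the controller's disks showed up

# Persist the driver for the installed system's first boot -- '>>'
# appends so the installer-generated loader.conf is not clobbered
echo 'mrsas_load="YES"' >> /tmp/bsdinstall_boot/loader.conf
```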
 
A graid mirror is just like gmirror: metadata is stored at the end of the provider.
I just used graid because it was the successor of ataraid, which I used before it,
and vinum and ccd before that; those would not boot from the RAID (metadata was stored in a file on the filesystem and was not autoconfigured by the kernel at boot).

graid does not need any special hardware; it will work with anything, just like gmirror.
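Creating such a mirror is a one-liner. A sketch, assuming two NVMe drives nda0/nda1 and Intel-format metadata (the volume name is an example):

```shell
# Build a software RAID1 out of two plain drives; "Intel" is the
# on-disk metadata format, no Intel RAID hardware is required
graid label Intel myraid RAID1 nda0 nda1

# Verify both components came up
graid status
```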
 
The nice thing about extra RAID1 disks for spares is that when it's time to upgrade FreeBSD I do a swap. Pull out a good drive and slip in a cold spare, making sure it is online. Now run freebsd-update and breathe easy. You have a whole bootable disk in case the upgrade goes south. You can't have your firewall go down.

Before I slip in the cold spare I zero the drive. That way there is no worrying about which version of the files to use.

Upgrades are a tedious time. Having a spare disk as backup is reassuring.
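The zeroing step might look like this (a sketch; ada0 is an example device name, and zeroing the whole drive takes a while):

```shell
# Blank the cold spare so no stale gmirror metadata or old filesystem
# survives before it goes back into the array
dd if=/dev/zero of=/dev/ada0 bs=1m status=progress

# Quicker alternative: wipe only gmirror's metadata, which lives in
# the provider's last sector
gmirror clear ada0
```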
 
Graid is up and running. The only thing that concerns me is that it stays marked "dirty". MySQL is constantly writing to the drive, so I'm not sure if that has anything to do with it.

Code:
$ graid status
   Name   Status  Components
raid/r0  OPTIMAL  nda0 (ACTIVE (ACTIVE))
                  nda1 (ACTIVE (ACTIVE))
                 
$ graid list
Geom name: Intel-c4c3e6cb
State: OPTIMAL
Metadata: Intel
Providers:
1. Name: raid/r0
   Mediasize: 256060510208 (238G)
   Sectorsize: 512
   Mode: r4w4e7
   Subdisks: nda0 (ACTIVE), nda1 (ACTIVE)
   Dirty: Yes
   State: OPTIMAL
   Strip: 131072
   Components: 2
   Transformation: RAID1
   RAIDLevel: RAID1
   Label: myraid
   descr: Intel RAID1 volume
Consumers:
1. Name: nda0
   Mediasize: 256060514304 (238G)
   Sectorsize: 512
   Mode: r1w1e1
   ReadErrors: 0
   Subdisks: r0(myraid):0@0
   State: ACTIVE (ACTIVE)
2. Name: nda1
   Mediasize: 256060514304 (238G)
   Sectorsize: 512
   Mode: r1w1e1
   ReadErrors: 0
   Subdisks: r0(myraid):1@0
   State: ACTIVE (ACTIVE)
 
It's in my output above. Here's a snippet of it:

Code:
$ graid list
Geom name: Intel-c4c3e6cb
State: OPTIMAL
Metadata: Intel
Providers:
1. Name: raid/r0
   Mediasize: 256060510208 (238G)
   Sectorsize: 512
   Mode: r4w4e7
   Subdisks: nda0 (ACTIVE), nda1 (ACTIVE)
   Dirty: Yes
 
I wanted to leave my procedure for adding a cold spare drive back into the system.

First off, when changing disks I like to shut down the machine and swap them.

On restart the array will be degraded:

Only one consumer is showing; ada0 was replaced with a blanked drive and is no longer found.
Code:
/home/firewall # gmirror list
Geom name: gm0
State: DEGRADED
Components: 2
Balance: load
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 2
ID: 2401551351
Type: AUTOMATIC
Providers:
1. Name: mirror/gm0
   Mediasize: 16013852160 (15G)
   Sectorsize: 512
   Mode: r1w1e2
Consumers:
1. Name: ada1
   Mediasize: 16013852672 (15G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 2
   ID: 1413621284

So then I must do the following (forget drops the record of the missing consumer so the blank disk can be inserted):
gmirror forget gm0
gmirror insert gm0 ada0

Then it rebuilds:
Code:
/home/firewall # gmirror status
      Name    Status  Components
mirror/gm0  DEGRADED  ada1 (ACTIVE)
                      ada0 (SYNCHRONIZING, 5%)
 