ZFS: from device name to GPT name

I've got a zpool with the name "ZUSB4" and the device "da1p1".
How do I give it a GPT label, i.e. no BIOS numbering,
and then import the zpool with that specific GPT label?
[I want ZFS to be independent of disk ordering.]
The device da1p1 looks hard-wired at first glance.
 
Still having problems:
Code:
zpool list -v
NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ZHD             904G   313G   592G        -         -     8%    34%  1.00x    ONLINE  -
  ada0s2        834G   313G   519G        -         -     8%  37.6%      -    ONLINE
special            -      -      -        -         -      -      -      -         -
  gpt/special    73G   434M  72.1G        -         -    30%  0.58%      -    ONLINE
ZT              330G   152G   178G        -         -    49%    46%  1.00x    ONLINE  -
  ada2p3        150G  70.8G  78.2G        -         -    48%  47.5%      -    ONLINE
  ada2p10       182G  81.4G  99.6G        -         -    50%  45.0%      -    ONLINE
logs               -      -      -        -         -      -      -      -         -
  ada1s3         14G  1.68M  13.5G        -         -     0%  0.01%      -    ONLINE
cache              -      -      -        -         -      -      -      -         -
  ada1s1         27G  26.9G   103M        -         -     0%  99.6%      -    ONLINE
All those order-dependent device IDs. It's no good. I should switch to GPT labels, without any ordering number for the partition/slice, just a name.
How do I give a GPT label xxx to ada0s2 and then export/import that specific label?
Same for ada2p3, ada2p10 and ada1s3.
The purpose is to have no BIOS ordering, just name and label.
 
If something is mirrored, would removing/detaching the member that is referenced by its plain partition name and then re-attaching it via /dev/gpt work?

This is partly in the context of mirrored boot pools; you can't really export and import those.
 
GPT labeling worked for the external drives (daXpY), but not for the internal ones (adaXpY). I have no explanation.
How do I set a GPT label? Do I use glabel or gpart?
I should find the devices immediately in /dev/*/*.
I tried
Code:
gpart modify -i 3 -l ada2p3 ada2
but nothing shows up in /dev/.
A bug?
 
How do I set a GPT label? Do I use glabel or gpart?
I see you have GPT partitions (ada2p3, ada2p10) in pool "ZT" and MBR slices (ada1s3, ada1s1). Only glabel(8) can label both GPT partitions and MBR slices. The device nodes will be created under /dev/label.

Import the pool by executing zpool import -d /dev/label ZT.

The same applies to the "ZHD" pool.
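
A minimal sketch of that glabel route, using the cache slice because its contents are disposable. The label name "zt-cache" is made up here, and note that glabel(8) stores its metadata in the provider's last sector, so don't label a slice that still holds data you need:
Code:
zpool remove ZT ada1s1               # drop the cache vdev first
glabel label zt-cache ada1s1         # node appears as /dev/label/zt-cache
zpool add ZT cache label/zt-cache    # re-add it by label (use -f if a stale
                                     # ZFS label makes zpool complain)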
 
I should find the devices immediately in /dev/*/*.
I don't think so.
Device Withering is the operative term.
If you gpart a device, say one that looks like this:
Code:
gpart show -l
=>        40  488397088  ada0  GPT  (233G)
          40       1024     1  gptboot0  [bootme]  (512K)
        1064        984        - free -  (492K)
        2048    4194304     2  swap0  (2.0G)
     4196352  484200448     3  zfs0  (231G)
   488396800        328        - free -  (164K)

this means that ada0p2 can be referenced either as /dev/gpt/zfs0 or as /dev/ada0p2.
Now, when you want to import or otherwise reference ada0p2, you can do it via /dev/gpt/zfs0 or via /dev/ada0p2. If you use /dev/ada0p2 (for example when adding it to a mirror), the kernel makes /dev/gpt/zfs0 disappear, meaning that if you "ls /dev/gpt" you won't see zfs0.

Withering comes about from exclusive access. Until the first exclusive open, all ways of referring to the provider are available, but once you use one of them the others become invisible. The mount command and zpool import are exclusive accesses.
In my output, if I reference /dev/gpt/zfs0 in the commands (import), then /dev/ada0p2 goes away.

You have something imported using the ada0 path; if that is the root pool you can't export and re-import it, so the GPT paths are not going to be visible.
Even for the external drives, I'm guessing the /dev/gpt paths weren't visible until you had exported the pool. Exporting the pool gets rid of the exclusive access and makes all paths visible.
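
A short sketch of that effect for a pool that can be exported (so not the root pool), reusing the zfs0 label from the gpart output above; the pool name "tank" is an assumption:
Code:
ls /dev/gpt                       # zfs0 is absent while the pool holds ada0p2
zpool export tank                 # releases the exclusive access
ls /dev/gpt                       # zfs0 reappears
zpool import -d /dev/gpt tank     # from now on the vdev shows up as gpt/zfs0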
 
T-Daemon, thanks.
I already had things labeled with gpart; one side of the mirror was already using the GPT label, so

Code:
zpool detach storage ada2p1
zpool attach storage gpt/<existing-label> gpt/<new-label-for-ada2p1>

seems to be working as desired. I just need to wait for the resilvering to finish.

To tie that in with my post #9:
Before doing the zpool detach, /dev/gpt did not show the label for ada2p1.
After the zpool detach, /dev/gpt did show the label for ada2p1.
That's what the withering is all about.
 
You need to remove ada2p3 from the pool, glabel(8) it, and add it back.
If it's a mirror, the steps I did would work. But if it's not a mirror (zpool add vs. zpool attach, a mistake I've made in the past), doesn't removing a device require a "replace" of the device while the pool is in a degraded condition?
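
For what it's worth, a hedged sketch of the non-mirror case: a single data vdev cannot be detached, but if every vdev in the pool already carries a label (e.g. set earlier with gpart modify -l, or glabel for slices), an export/import cycle is enough to switch to the label paths. Multiple -d directories can be combined:
Code:
zpool export ZT
zpool import -d /dev/gpt -d /dev/label ZT   # searches only these dirs, so every
                                            # vdev needs a label of some kind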
 
As of now, no export/import dance is needed and it even works without rebooting:

# zpool set path=/dev/gpt/<label> <pool> <vdev>
Apparently the setting doesn't survive a reboot, and more importantly, it seems the value of "path" can be not only a disk device name or label but any device name under /dev, even a non-existent one:
Code:
# zpool status -P tank
  pool: tank
 state: ONLINE
config:

    NAME           STATE     READ WRITE CKSUM
    tank           ONLINE       0     0     0
      /dev/ada1p1  ONLINE       0     0     0

errors: No known data errors
Code:
# zpool set path=/dev/some_random_name tank ada1p1
Code:
# zpool status -P tank
  pool: tank
 state: ONLINE
config:

    NAME                     STATE     READ WRITE CKSUM
    tank                     ONLINE       0     0     0
      /dev/some_random_name  ONLINE       0     0     0

errors: No known data errors
Code:
# ls /dev/some_random_name
ls: /dev/some_random_name: No such file or directory

Do you happen to know of any documentation for the "path" property? I couldn't find any.
 
Interesting. It worked like a charm for me: the change was instantaneous and survived a reboot, on several installations, both bare metal and VMs, and both when the disk vdevs were part of the pool directly and when they were part of mirror vdevs. FreeBSD 14.1.

The only documentation for the path property that I have is vdevprops(7). I learned about it from Allan Jude's video about vdev properties; starting from 14:03, the link points to that section.
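
For reference, the vdevprops(7) syntax takes the pool and then the vdev; a quick sketch using the names from the output earlier in the thread (the gpt/zfs0 label is an assumption):
Code:
zpool get path tank ada1p1                 # read the current vdev path
zpool set path=/dev/gpt/zfs0 tank ada1p1   # point it at the GPT label
zpool get all tank ada1p1                  # lists every vdev property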
 
As of now, no export/import dance is needed and it even works without rebooting:

# zpool set path=/dev/gpt/<label> <pool> <vdev>
Thanks. It did work for zroot, but not after a reboot. Maybe that's because I use GELI. I tried setting geli_devices="gpt/zfs0" in /etc/rc.conf to make GELI attach the provider by its GPT label, but that didn't work either. :(

Code:
% freebsd-version -kru ; uname -aKU
14.1-RELEASE-p5
14.1-RELEASE-p5
14.1-RELEASE-p5
FreeBSD hale 14.1-RELEASE-p5 FreeBSD 14.1-RELEASE-p5 GENERIC amd64 1401000 1401000
 
I wonder what the difference is. I'm not using anything special here, just normal ZFS-on-root installations with either mirrors or single disks, VMs and bare metal.
 
I also wonder what the use of this is, other than informational. I switched to GPT labels mainly to put serial numbers into the vdev names, as Allan Jude suggested.

In my experience, and as I've seen other people mention on this forum, ZFS doesn't care about the device paths. The paths are not hard-coded in the pool; the identifying information is stored on the devices themselves. When I swapped the disk image files of a VM and booted, ZFS picked up the change correctly.

On the other hand, there are numerous posts where ZFS got confused after disk path changes. Was that on older versions of ZFS? Or was it not just an ordering or interface change but something more?
 
Here are my guesses as to why it might not survive the reboot and why it accepts any device name, even a non-existent one.

It probably doesn't survive the reboot because of the same mechanism that hides the GPT device nodes when the plain name is used first. At some point during boot something else accesses the plain device name, and ZFS keeps using that because it doesn't care about the specific path. I guess you don't see the GPT labels in /dev/gpt after a reboot, as I do. It works for me because my installations are rather straightforward in terms of disk usage.

It probably accepts any device name because, again, it doesn't really care about the path; it cares about other on-disk information that lets it identify the disk as part of the existing pool. And I guess this fake device name doesn't survive a reboot either.
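
Two quick checks after a reboot can tell which of those guesses applies (nothing here is specific to my setup):
Code:
ls /dev/gpt       # missing label nodes mean something opened the plain device first
zpool status -P   # -P prints the full path ZFS actually opened each vdev with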
 
I can confirm that the vdev property path can be set to "/dev/some_random_name":
Code:
[1] # zpool status -P tank
  pool: tank
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          /dev/da0p1  ONLINE       0     0     0

errors: No known data errors
[2] # zpool set path=/dev/some_random_name tank /dev/da0p1
[3] # echo $?
0
[4] # zpool status -P tank
  pool: tank
 state: ONLINE
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     ONLINE       0     0     0
          /dev/some_random_name  ONLINE       0     0     0

errors: No known data errors
That this does not result in an error, and isn't documented in vdevprops(7), is, IMO, less than ideal.

That same pool gets successfully imported with zpool import -d /dev/diskid/DISK-9000162464A85009p1.
My testing also shows that setting the path property of this imported zpool to /dev/diskid/DISK-9000162464A85009p1, when it was imported as /dev/da0p1, is not persistent across a reboot (shutdown -r now) and a subsequent manual import (zpool import -a).

Based on:
the intended use, persistently setting a vdev path property, is clear. It cannot be that such a setting is both persistent and not persistent.
This feels like a bug.

___
* Mark Maybee shows an interesting example of the allocating property.
 
Based on:
[...] so basically allows you to change the path in the zpool config that it will try first for opening that vdev so next time you import it it should pick up from the correct path.
That's a great find! It confirms my suspicion above that the path property is not strict but rather a hint about which device path to try first.

My second guess above was that it doesn't survive the reboot when something else accesses the disk via the plain /dev/<disk> path first. I bet it would work for you on a fresh install (as it does for me) with the default ZFS settings and then changing path to the automatically created /dev/gpt/zfs0 GPT label. At the moment you set the vdev path, the node /dev/gpt/zfs0 does not exist on a fresh install (though the label itself exists, because the installer creates it). After a reboot the setting persists and the node /dev/gpt/zfs0 exists, because at boot it was accessed before the plain /dev/<disk> path.
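
Roughly the sequence I mean, with the pool name zroot and the partition ada0p4 as assumptions (the installer creates the zfs0 label itself):
Code:
ls /dev/gpt                                 # zfs0 node absent, but the label exists
zpool set path=/dev/gpt/zfs0 zroot ada0p4
# after a reboot the vdev is opened through the label before the plain node:
ls /dev/gpt                                 # zfs0 node present
zpool status -P zroot                       # shows /dev/gpt/zfs0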
 
Apparently the setting doesn't survive a reboot,
I bet it would work for you on a fresh install (as it does for me) with the default ZFS settings and then changing path to the automatically created /dev/gpt/zfs0 GPT label.
I don't know why it didn't work last time when I tested in a preexisting VM, but I can't reproduce the issue on a fresh (VM) installation. The path setting is persistent this time.

One more thing I noticed: the zpool set path command isn't logged in zpool-history(8).
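
A hedged way to check that (the pool name "tank" is an assumption):
Code:
zpool history tank | grep 'zpool set'   # the path change from above does not show up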

Setting the path to a fake device name is still possible, though.
 