The following is the zdata array in a FreeBSD 13.2-RELEASE system:
Code:
HD Device   Passthrough Device  Serial    GPT Label      Model                 Firmware  Slot #
mfisyspd0   pass0               V8H9UVMR  data_disk11    HGST HUS726T6TALE6L4  40H       0
mfisyspd1   pass1               V9G5S89L  data_disk12_1  HGST HUS726T6TALE6L4  984       1
mfisyspd2   pass2               V9HTSEEL  data_disk13_1  HGST HUS726T6TALE6L4  984       2
mfisyspd3   pass3               V8H9DH1R  data_disk14    HGST HUS726T6TALE6L4  40H       3
mfisyspd4   pass4               V9G81MWL  data_disk15_1  HGST HUS726T6TALE6L4  460       4
mfisyspd5   pass5               V8H9US9R  data_disk16    HGST HUS726T6TALE6L4  40H       5
mfisyspd6   pass6               V8H9V1LR  data_disk17    HGST HUS726T6TALE6L4  40H       6
mfisyspd7   pass7               V9H3L26L  data_disk18_2  HGST HUS726T6TALE6L4  984       7
mfisyspd8   pass8               V8KZXZWF  data_disk19_1  HGST HUS726T6TALE6L4  984       8
mfisyspd9   pass10              V8H9G39R  data_disk10    HGST HUS726T6TALE6L4  40H       9
The failing drive is the one in slot 9 with the HD device name of mfisyspd9 and the passthrough name of pass10.
This is the procedure I normally use to replace a failing drive:
1) pull the failing drive
2) insert the replacement drive
3) partition it with gpart:
# gpart create -s gpt mfisyspd9
# gpart add -t freebsd-zfs -l data_disk10_1 mfisyspd9
4) confirm the GPT structure:
# gpart backup mfisyspd9
5) start the replacement (resilvering may take a day or so):
# zpool replace zdata gpt/data_disk10 gpt/data_disk10_1
I've successfully replaced a few drives this way in the past. This time, however, the new drive was not recognized as mfisyspd9 when I inserted it. When I attempted 'gpart create -s gpt mfisyspd9', it returned the following error:
gpart: arg0 'mfisyspd9': invalid argument
Running 'camcontrol devlist -v' shows the new disk as pass10, which is expected, but when I list the /dev directory there is no entry for the mfisyspd9 device. I also notice that the new drive has newer firmware than the rest of the drives, which carry three different firmware versions (40H, the original; 460; and 984). The firmware on this new drive is 9G0. Could this explain why the drive fails to show up as mfisyspd9?
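For completeness, this is roughly how I've been poking at it. The mfiutil lines are an assumption on my part — they only apply if the controller is driven by mfi(4), which may not be the case here:

```shell
# Hedged diagnostic sketch -- assumes an mfi(4)-driven RAID controller;
# adjust if the HBA uses a different driver.

# List what CAM sees (the new disk shows up as pass10 here):
camcontrol devlist -v

# Check which mfisyspd* and pass* device nodes actually exist:
ls -l /dev/mfisyspd* /dev/pass*

# If mfiutil(8) applies to this controller, ask it directly which
# physical drives it knows about and whether the slot 9 drive is
# still configured as a JBOD/syspd volume:
mfiutil show drives
mfiutil show config
```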
To compound the issue, while researching it I came across the 'zpool offline' command and used it to offline the failing drive. I then swapped the bad drive out for the good one. During troubleshooting, I accidentally ran 'zpool online' against the good drive. I offlined it again, swapped the failing drive back in, and tried to online that one; it failed, saying it was not the expected drive.
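In case it helps, this is the sort of state check I can run. The zdb line is an assumption — it only works if the old partition's label node still exists at /dev/gpt/data_disk10:

```shell
# Hedged sketch of checking where the pool thinks things stand.

# Current pool view: which vdev is OFFLINE/UNAVAIL and under what name:
zpool status -v zdata

# Inspect the ZFS label on the old partition (pool name, vdev GUIDs) --
# assumes the /dev/gpt/data_disk10 node is still present:
zdb -l /dev/gpt/data_disk10
```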
At this point, I am not sure what to do next. Do I need to run the following command:
# zpool replace zdata gpt/data_disk10 gpt/data_disk10
It seems counterintuitive in that I'd be using the same GPT label...
Please advise.
~Doug