Simple answer: No, ZFS will not control replacement LEDs. Not without additional software, which AFAIK is not standardized and for which no pre-cooked package exists. As a matter of fact, a general solution is nearly impossible. Writing a specific solution for specific hardware is not terribly hard; with a few hours of scripting, a competent sysadmin can do it.
Complex answer: ZFS does not know that LEDs exist. And even if it knew about mechanisms to control LEDs, it would not know which LED corresponds to which disk. This is where a RAID controller can do better.
Explanation: Say we have a system with a disk enclosure that contains multiple disks, typically hot-swappable (which often implies that the enclosure can turn power to a disk off and on), and that has multiple LEDs per disk. Typically, there are up to three LEDs per drive slot in the enclosure:
- A power LED. This one requires no intelligence to control: it is wired directly to the power pins that feed the disk, perhaps in conjunction with a simple transistor that detects whether a disk is present.
- A disk activity LED, driven electrically by the disk drive itself.
- An error or replace LED, driven by the enclosure controller. If you think about it, it is obvious that this one has to be controlled by the enclosure controller, not by the disk: a disk may have to be replaced because it is dead, or the admin may have to insert a disk into an empty slot, so one cannot rely on the disk drive to control the error/replace LED.
And the way that last LED is controlled is the problem. From a data path point of view, separate devices are involved. The disk talks to the computer over the SATA protocol or over a SCSI block protocol; this is what causes the OS to create the many devices with names like /dev/da... and /dev/ada..., which are then used by the file system (for example ZFS). The enclosure controller talks to the computer over the SCSI SES (enclosure services) protocol, which causes a single device /dev/ses... to be created for the whole enclosure.
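For a concrete picture, this is how the two kinds of devices show up side by side on FreeBSD; the listing below is illustrative, not from a real machine, and the device names will differ on yours:

```sh
# FreeBSD: enumerate CAM devices. Disks appear as ada.../da...,
# the enclosure controller as ses... (illustrative output).
$ camcontrol devlist
<ATA WDC WD40EFRX-68N 0A82>       at scbus0 target 0 lun 0 (pass0,ada0)
<SEAGATE ST4000NM0023 0004>       at scbus2 target 5 lun 0 (da5,pass6)
<AHCI SGPIO Enclosure 2.00 0001>  at scbus3 target 0 lun 0 (ses0,pass7)
```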
To turn the error/replace LEDs on and off, the following logic has to be implemented (a rough sketch of these steps with FreeBSD tools follows the list):
- Make a list of all the disks that ZFS needs, translating their identities to block device names like /dev/ada... or /dev/da... that can be opened, but also storing each disk drive's unique identifier, the WWN (which plays the role of a "serial number").
- Persistently remember that list (you'll see below why we need to store it permanently).
- Find all enclosures that are connected, with names like /dev/ses...
- Ask each enclosure how many disk drive slots it has.
- For each slot in each enclosure, ask the enclosure whether that slot is occupied, and if yes, which WWN is in that slot.
- Store that mapping persistently as well.
- Ask each enclosure whether it has a controllable LED for that slot.
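As a rough sketch of what those steps look like in practice, here are the corresponding FreeBSD commands, assuming the stock sesutil(8) and geom(8) utilities; the device names da5 and ses0 are placeholders for whatever your system shows:

```sh
# Sketch of the enumeration steps above, assuming FreeBSD with sesutil(8).
sesutil map             # walk every /dev/ses* enclosure: element types, slots, occupancy
geom disk list da5      # print the drive's unique ident for the persistent mapping
sesutil fault da5 on    # light the error/replace LED of the slot holding da5
sesutil fault da5 off   # clear it again (fails if the enclosure has no such LED)
```

On Linux, sg_ses from the sg3_utils package plays a similar role.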
Now, if a drive fails, we can do the following: ask our stored mapping of disks to slots where the disk is, or where it was last seen (if it is currently not communicating). If we know the slot, and that slot has a controllable LED, turn the LED on. There is an enormous number of special cases to consider to make this work in general, and implementing it correctly takes literally person-years of engineering. On the other hand, a quick hack script that mostly works for one specific piece of hardware can be put together quickly by a person experienced in scripting who knows the SCSI protocols.
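To make the "quick hack" concrete, here is the sort of throwaway script such an admin might write for one specific FreeBSD machine. It assumes whole-disk vdevs whose names match their device nodes (da*/ada*) and an enclosure that sesutil(8) can drive, and it deliberately ignores all the special cases above:

```sh
#!/bin/sh
# Quick-hack sketch (hypothetical): light the fault LED of every vdev
# that zpool reports as anything other than ONLINE.
zpool status | awk '/^[[:space:]]+(ada|da)[0-9]/ && $2 != "ONLINE" { print $1 }' |
while read -r disk; do
    # Works only while the device node is still visible; a disk that has
    # dropped off the bus needs the stored WWN-to-slot map described above.
    sesutil fault "$disk" on || echo "cannot set fault LED for $disk" >&2
done
```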
The problem with ZFS doing all this is: ZFS is a general-purpose file system. It works on all manner of hardware, including disks that are not in enclosure slots (for example, I use ZFS on my home server with 4 disks, none of which have LEDs, none of which are in an enclosure, and there are no /dev/ses... devices). Implementing all that logic correctly would be a heck of a lot of work, and it would benefit only a small number of users; and those users either have their own methods of dealing with it (they write their own scripts, do not replace disk drives, track disk locations manually, or spend money on hardware/software/services to take care of it). Most importantly, ZFS does not make any money (it is not sold as a product), so there is no funding for implementing all this, unless someone donates or volunteers.
For a RAID controller, a lot of this is easier. RAID controllers already live in the world of SCSI protocols, so they know how to talk to disk drives and enclosures (someone has to). They already need to store information about disks (namely each disk's identity), and unlike ZFS, they can specialize that to storing the WWN. And they have a revenue stream (namely selling hardware) that can fund these features, a revenue stream that exists precisely because their users want them.
Just to be clear: I am heavily in favor of using ZFS as the RAID layer, and not using hardware RAID.