no nvme module loaded in installer

I noticed that the 14.2 installer comes without the nvme driver. Any idea what could be the issue?

(attached screenshot: rpviewer.png)


I am trying to install using the disc1 ISO on a Dell R640, updated with the latest BIOS and iDRAC, with 2 U.2 Intel NVMe disks in slots 6 and 7. The disks are visible in iDRAC.
 
Well, I used the latest ISO. I was also expecting them to be there. I will retry from another download.
 
If your 2 NVMe cards are recognized correctly on 14.2, /dev/nda0 and /dev/nda1 should appear, so you would be able to create partition(s) and install there.
If neither appears, there is likely an incompatibility.

For example, my NVMe drive is seen as below in dmesg.
Code:
    (snip)
nvme0: <Generic NVMe Device> mem 0xa4100000-0xa4103fff at device 0.0 on pci3
    (snip)
nda0 at nvme0 bus 0 scbus2 target 0 lun 1
nda0: <Samsung SSD 970 EVO Plus 2TB 1B2QEXM7 ***************>
nda0: Serial Number ***************
nda0: nvme version 1.3
nda0: 1907729MB (3907029168 512 byte sectors)
    (snip)

And in /dev/
Code:
% ls -l /dev/nvme*
crw-------  1 root wheel 0x42 12月 19 01:34 /dev/nvme0
crw-------  1 root wheel 0x61 12月 19 01:34 /dev/nvme0ns1
% ls -l /dev/nda*
crw-r-----  1 root operator 0x7f 12月 19 01:34 /dev/nda0
crw-r-----  1 root operator 0x81 12月 19 01:34 /dev/nda0p1
crw-r-----  1 root operator 0x83 12月 19 01:34 /dev/nda0p2
crw-r-----  1 root operator 0x85 12月 19 01:34 /dev/nda0p3
crw-r-----  1 root operator 0x87 12月 19 01:34 /dev/nda0p4

And gpart
Code:
% gpart show nda0
=>        40  3907029088  nda0  GPT  (1.8T)
          40        2008        - free -  (1.0M)
        2048     1126400     1  efi  (550M)
     1128448        2048     2  freebsd-boot  (1.0M)
     1130496  3770679296     3  freebsd-zfs  (1.8T)
  3771809792   135219200     4  freebsd-swap  (64G)
  3907028992         136        - free -  (68K)

Note that I created the partitions manually via gpart subcommands (without using the installer) and created the ZFS pool manually, too.
(This was done from my previous, now-dead computer while it was still running.)

Actually, for UEFI-only boot, the freebsd-boot partition (nda0p2) is not needed at all.
It is just a remnant of testing not-yet-ready UEFI boot code.
On the other hand, the ESP (the efi partition, nda0p1) is not needed at all for legacy (BIOS-only) boots.
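For reference, creating a layout like the one above by hand goes roughly like this (a sketch only; the sizes are illustrative, and the freebsd-boot step can be skipped on a UEFI-only system as noted above):
Code:
gpart create -s gpt nda0
gpart add -a 1M -t efi -s 550M nda0
gpart add -a 1M -t freebsd-boot -s 1M nda0    # optional; only needed for legacy BIOS boot
gpart add -a 1M -t freebsd-zfs -s 1790G nda0  # sized to leave room for swap at the end
gpart add -a 1M -t freebsd-swap nda0          # remaining space (~64G here) becomes swap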
 
Why don't you drop to a shell from the installer and check pciconf -lv to see if your NVMe shows up there.
The device name will also show whether the driver attached.
If you see nvme in the name, then the driver has attached.

For example:
nvme@pci0:0:0

No driver attached:
none@pci0:0:0
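For reference, an entry for an NVMe controller with its driver attached looks roughly like this (an example from a Samsung drive; your vendor/device strings will differ):
Code:
nvme0@pci0:3:0:0:  class=0x010802 rev=0x00 hdr=0x00 vendor=0x144d device=0xa808 subvendor=0x144d subdevice=0xa801
    vendor     = 'Samsung Electronics Co Ltd'
    device     = 'NVMe SSD Controller SM981/PM981/PM983'
    class      = mass storage
    subclass   = NVM Express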
 
Well, did you try what SirDice told you to do?
yes.
Why don't you drop to a shell from the installer and check pciconf -lv to see if your NVMe shows up there.
The device name will also show whether the driver attached.
If you see nvme in the name, then the driver has attached.

For example:
nvme@pci0:0:0

No driver attached:
none@pci0:0:0
I did it of course. No NVMe controller is found, and kldstat doesn't list nvme or nvd/nda. When I try to load them, the modules are not found.

I have checked on the ISO and they are present there, though.
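(For anyone hitting the same thing, a quick sanity check from the installer's live shell would be something like the following; paths assume the stock install media layout.)
Code:
ls /boot/kernel/nvme.ko /boot/kernel/nvd.ko   # are the module files on the media?
kldstat -v | grep -i nvme                     # is the driver already built into the loaded kernel?
pciconf -lv | grep -B3 -i nvm                 # does the controller show up on the PCI bus at all?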

Loading the module manually at boot returns the following:


Code:
OK load /boot/kernel/nvme.ko
elf64_obj_loadfile: can't load module before kernel
elf64_obj_loadfile: can't load module before kernel
can't load file '/boot/kernel/nvme.ko': operation not permitted
OK
 
After loading the kernel with `load /boot/kernel/kernel` I was able to load nvme.ko and nvd.ko. But after booting I still have no nvme module loaded; running kldload failed as well. Any idea is welcome.
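For anyone following along, the loader sequence described above looks roughly like this at the OK prompt (paths assume the stock install media):
Code:
OK load /boot/kernel/kernel
OK load /boot/kernel/nvme.ko
OK load /boot/kernel/nvd.ko
OK boot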
 
So is it like the installer can't see the NVMe devices, but once you get FreeBSD actually loaded, FreeBSD can?

I've not had any trouble like this with NVMe devices - they "just work", and I think that's how it is meant to be.

I had some faffing about (I think it was the PCIe-to-NVMe cards I was experimenting with) with an R450, but if the Dell BIOS could see it, so could FreeBSD.

I've got a couple of R640s here, but I think they are set up for SATA drives, so I can try with a PCIe card, though I'm not sure how close that is to your set-up.
 
Erichans, it is empty. So is the PCI list.

Apparently, according to the supplier, it's due to the lack of a second CPU: two CPUs are needed to have NVMe drives in this machine. I will check when I get it and will update. If that's the case, this is kind of interesting ... it would be good to have a warning at some point.
 
I am trying to install using the disc1 ISO on a Dell R640, updated with the latest BIOS and iDRAC, with 2 U.2 Intel NVMe disks in slots 6 and 7. The disks are visible in iDRAC.
Apparently, according to the supplier, it's due to the lack of a second CPU.
A second CPU may not be necessary.

Based on the Dell EMC PowerEdge R640 Installation and Service Manual I can't find any reference to "slot 6 and 7". Are these actually 2 U.2 disks or, as I see in the manual, 2 M.2 NVMe disks placed on a riser card? The manual shows in Table 27 (Expansion card riser configurations) that, depending on the type of riser card, it can be located in either Slot 1 or Slot 2; each slot connects to a different CPU socket. If you can move the riser card to the slot that connects to your currently installed CPU, you should be able to use it in your system in its current configuration.

If that's the case, this is kind of interesting ... it would be good to have a warning at some point.
The PCIe lanes being divided over two Intel CPU sockets is the cause of this "problem". The BIOS/firmware is designed in such a way that the BMC reports its analysis bypassing either of the two Intel CPUs. The BMC is a CPU in its own right and is designed to scan for all or most connected peripherals. The straightforward conclusion would be that the BMC detects the NVMe drives and "is of the opinion" that it is up to the user to "connect the dots" and to make sure the drives are addressable by the desired CPU.

Regarding your "warning": I too think that the messaging could be improved. However, to be clear, I don't think this can be blamed on FreeBSD (and its loader); this is the domain of Dell and its specific firmware implementation.
 