NVMe adapter with HP EX900 SSD (PCIe NVMe)

Hello,
I have a small FreeBSD server (an old Intel Xeon E3-1200 with 8GB DDR3) at my home that serves as a Samba server.
It is configured with 4 disks in RAID10 using ZFS. Everything works fine; I have been using this setup for over a year.
Recently I bought a PCIe NVMe M.2 adapter + an HP SSD EX900 M.2 128GB to use as a cache disk for ZFS.

OS: FreeBSD 13.1

The problem is that my card/SSD is not detected.

Code:
nvmecontrol devlist

No NVMe controllers found.

camcontrol devlist returns:

Code:
camcontrol devlist


<ST2000DM008-2FR102 0001>          at scbus0 target 0 lun 0 (pass0,ada0)
<ST2000DM008-2FR102 0001>          at scbus1 target 0 lun 0 (pass1,ada1)
<ST2000DM008-2FR102 0001>          at scbus2 target 0 lun 0 (pass2,ada2)
<ST2000DM008-2FR102 0001>          at scbus3 target 0 lun 0 (pass3,ada3)
<TOSHIBA DT01ACA200 MX4OABB0>      at scbus4 target 0 lun 0 (pass4,ada4)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus5 target 0 lun 0 (pass5,ses0)

Does the BIOS need to be UEFI for this to work?

Any ideas?

Thanks!
 
Hello SirDice, and thank you for your quick answer...

So... my loader.conf looks like this:

Code:
# cat /boot/loader.conf
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
coretemp_load="YES"
ahci_load="YES"
nvme_load="YES"
nvd_load="YES"
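(As far as I know, nvme(4) and nvd(4) are compiled into the GENERIC kernel on 13.1 anyway, so the last two lines should not strictly be needed; kldstat can confirm what is actually loaded:)

Code:
kldstat -v | grep -i nvme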

pciconf -lv lists the following:

Code:
 pciconf -lv
hostb0@pci0:0:0:0:    class=0x060000 rev=0x09 hdr=0x00 vendor=0x8086 device=0x0108 subvendor=0x8086 subdevice=0x2010
    vendor     = 'Intel Corporation'
    device     = 'Xeon E3-1200 Processor Family DRAM Controller'
    class      = bridge
    subclass   = HOST-PCI
em0@pci0:0:25:0:    class=0x020000 rev=0x05 hdr=0x00 vendor=0x8086 device=0x1502 subvendor=0x8086 subdevice=0x3578
    vendor     = 'Intel Corporation'
    device     = '82579LM Gigabit Network Connection (Lewisville)'
    class      = network
    subclass   = ethernet
ehci0@pci0:0:26:0:    class=0x0c0320 rev=0x05 hdr=0x00 vendor=0x8086 device=0x1c2d subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family USB Enhanced Host Controller'
    class      = serial bus
    subclass   = USB
pcib1@pci0:0:28:0:    class=0x060400 rev=0xb5 hdr=0x01 vendor=0x8086 device=0x1c10 subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family PCI Express Root Port 1'
    class      = bridge
    subclass   = PCI-PCI
pcib2@pci0:0:28:4:    class=0x060400 rev=0xb5 hdr=0x01 vendor=0x8086 device=0x1c18 subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family PCI Express Root Port 5'
    class      = bridge
    subclass   = PCI-PCI
pcib3@pci0:0:28:5:    class=0x060400 rev=0xb5 hdr=0x01 vendor=0x8086 device=0x1c1a subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family PCI Express Root Port 6'
    class      = bridge
    subclass   = PCI-PCI
ehci1@pci0:0:29:0:    class=0x0c0320 rev=0x05 hdr=0x00 vendor=0x8086 device=0x1c26 subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family USB Enhanced Host Controller'
    class      = serial bus
    subclass   = USB
pcib4@pci0:0:30:0:    class=0x060401 rev=0xa5 hdr=0x01 vendor=0x8086 device=0x244e subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '82801 PCI Bridge'
    class      = bridge
    subclass   = PCI-PCI
isab0@pci0:0:31:0:    class=0x060100 rev=0x05 hdr=0x00 vendor=0x8086 device=0x1c54 subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = 'C204 Chipset LPC Controller'
    class      = bridge
    subclass   = PCI-ISA
ahci0@pci0:0:31:2:    class=0x010601 rev=0x05 hdr=0x00 vendor=0x8086 device=0x1c02 subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family 6 port Desktop SATA AHCI Controller'
    class      = mass storage
    subclass   = SATA
ichsmb0@pci0:0:31:3:    class=0x0c0500 rev=0x05 hdr=0x00 vendor=0x8086 device=0x1c22 subvendor=0x8086 subdevice=0x7270
    vendor     = 'Intel Corporation'
    device     = '6 Series/C200 Series Chipset Family SMBus Controller'
    class      = serial bus
    subclass   = SMBus
em1@pci0:2:0:0:    class=0x020000 rev=0x00 hdr=0x00 vendor=0x8086 device=0x10d3 subvendor=0x8086 subdevice=0x3578
    vendor     = 'Intel Corporation'
    device     = '82574L Gigabit Network Connection'
    class      = network
    subclass   = ethernet
vgapci0@pci0:3:0:0:    class=0x030000 rev=0x04 hdr=0x00 vendor=0x102b device=0x0522 subvendor=0x8086 subdevice=0x0102
    vendor     = 'Matrox Electronics Systems Ltd.'
    device     = 'MGA G200e [Pilot] ServerEngines (SEP1)'
    class      = display
    subclass   = VGA

Honestly, I don't see the card anywhere... I'm thinking of trying another PCIe slot... I tested the card on another machine with EL8 and it is detected correctly, including the SSD.
 
I tested the card on another machine with EL8 and it is detected correctly, including the SSD.
Linux has a command similar to pciconf(8), lspci(8) from pciutils; have a look at how the card is identified there. The IDs, classes, etc. should be the same even if the output looks a little different.

The cool thing about pciconf(8) is that it just enumerates all the devices on the bus, regardless of whether the OS supports them or not. If the card doesn't show up, it's a hardware issue (or some BIOS/UEFI setting) that's preventing the card from being detected.
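For instance, on the EL8 box something like this should print the same numeric vendor/device IDs that pciconf shows here (the grep pattern is just a guess at how the class is named):

Code:
lspci -nn | grep -iE 'non-volatile|nvme'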
 
Intel Xeon E3-1200
Depending on the CPU version, NVMe might not work.

E3-12xx V1 is Sandy Bridge, and very few NVMe drives work there. It depends on the motherboard.
Sandy Bridge was PCIe 2.x.

Not until Ivy Bridge (E3-12xx V2) did PCIe 3.x arrive.
Even then, NVMe drives were not widely recognized or bootable. It depended on the motherboard manufacturer.

With Haswell (E3-12xx V3) they generally hit full speed.
Have you tried updating your BIOS? That is the first stop.
I see a Matrox VGA, so it is probably a server board.
What are the details?
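A quick way to confirm exactly which CPU model that box has (just the standard sysctl, nothing board-specific):

Code:
sysctl hw.model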
 
Hi Phishfry,

Yes, it is a server board... it is an Intel S1200BTL and the CPU is an E3-1200 v1... I have another SFF Fujitsu server with an E3-1200 v2, and it looks like it works like a charm with this NVMe SSD.

I'll keep you updated... stay tuned... I plan to swap the CPUs to see if the problem comes from there... I see that the boards have almost the same chipset... the Fujitsu has a C202 and the Intel board has a C204.

Dumb question... can an E3-12xx v3 work on this board (Intel S1200BTL)?

PS: it has the latest BIOS update...
 
Generally, good Sandy Bridge boards (C202/204/206) allowed Ivy Bridge with a BIOS upgrade.

V3 Xeons take another board (C206, C222/224/226). They can take V4 with a BIOS upgrade from the manufacturer.
 
Intel S1200BTL
I think Intel might have released another SKU for the Ivy Bridge version, so dig deep and see what you really have.
 
OK, so in case other people run into the same problem:
1. The M.2 NVMe SSD will not work with an Intel Xeon E3-1220 v1. It will work with any Intel Xeon E3 v2 or newer (thanks Phishfry).
2. The cheapest solution is to upgrade the processor by replacing it with an Intel Xeon E3-1220 v2 and keeping your motherboard and the rest of the setup.

Now that I have sorted that out, I have another problem... in fact, two questions:

First question:

This mainboard has 2x SATA3 (6 Gb/s) and 4x SATA2 (3 Gb/s) ports. Of course, for my RAID10 array the system now uses 2 drives on SATA3 and the other 2 on SATA2... so basically the whole system is somewhat limited.
I have a RAID card (IBM M5015)... can this be flashed to IT mode? Or is it better to create a hardware RAID10 array and use ZFS on top of it as a file system? In that case, what would be the advantages/disadvantages? (I know that if the card fails, I need an identical one to replace it.) Can I use my M.2 NVMe for both LOG and CACHE (ZIL and L2ARC)?



Second question:
I tried to use this M.2 NVMe SSD for both LOG and CACHE by creating two partitions... (I know this is not good practice, I just want to run some tests).
I did the following:

Code:
root@bsd# gpart create -s gpt /dev/nvd0
nvd0 created
root@bsd # gpart add -s 16G -t freebsd-zfs -l log nvd0
nvd0p1 added
root@bsd# gpart add -t freebsd-zfs -l cache nvd0
nvd0p2 added
root@bsd# zpool add zroot log /dev/nvd0p1 
root@bsd# zpool add zroot cache /dev/nvd0p2
root@bsd# zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:00:30 with 0 errors on Tue Jul 12 11:26:56 2022
remove: Removal of vdev 6 copied 92K in 0h0m, completed on Tue Jul 12 11:07:35 2022
	360 memory used for removed device mappings
config:

	NAME          STATE     READ WRITE CKSUM
	zroot         ONLINE       0     0     0
	  ada0p3      ONLINE       0     0     0
	  ada1p3      ONLINE       0     0     0
	logs	
	  nvd0p1      ONLINE       0     0     0
	cache
	  nvd0p2      ONLINE       0     0     0

errors: No known data errors


Code:
zpool iostat -v
              capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       3.48G   917G      1      1  21.7K  18.3K
  ada0p3    1.64G   458G      0      0  8.60K  7.58K
  ada1p3    1.84G   458G      0      0  12.8K  8.10K
  indirect-3      -      -      0      0      0      0
  indirect-6      -      -      0      0      0      0
logs            -      -      -      -      -      -
  nvd0p1        0  15.5G      0      0  2.36K  27.3K
cache           -      -      -      -      -      -
  nvd0p2    17.2M  95.8G      0      2  4.84K   181K
----------  -----  -----  -----  -----  -----  -----


Everything looks OK up to this point... but if I restart the system, at some point during boot it restarts itself automatically, and the system basically never boots up. What did I do wrong? What am I missing?


Thanks a lot!
 
I guess it will remain a mystery... I will try to recreate the ZFS pools, reinstall the OS and add the cache again...
 
Hi, I sorted it out... here is what I did...

Code:
gpart destroy -F nvd0
gpart create -s gpt nvd0
gpart add -b 2048 -a 4k -s 50G -t freebsd-zfs -l log nvd0
gpart add -a 4k -s 50G -t freebsd-zfs -l cache nvd0
gnop create -S 4096 /dev/gpt/log
zpool add zroot log /dev/gpt/log.nop
zpool add zroot cache /dev/gpt/cache
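(To double-check the labels and partition offsets afterwards, gpart can show them; this is just a verification step, not part of the fix:)

Code:
gpart show -l nvd0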

I rebooted the machine and it booted correctly... and the array now looks like this:

Code:
zpool status
  pool: zroot
 state: ONLINE
config:

	NAME         STATE     READ WRITE CKSUM
	zroot        ONLINE       0     0     0
	  ada0p3     ONLINE       0     0     0
	  ada1p3     ONLINE       0     0     0
	logs	
	  nvd0p1     ONLINE       0     0     0
	cache
	  gpt/cache  ONLINE       0     0     0

The only question that remains is the one about hardware RAID with ZFS and SSD caching... thanks!
 
Hi, I really need your help... I was trying the same scenario as the one above on another machine running FreeBSD 13.1, ZFS with a RAID10 array... after I add the log device, the system works perfectly until the first reboot... after that, it loads the bootloader, starts loading the kernel and at some point restarts. I tried adding separate SLOG and L2ARC devices, tried destroying the GPT and recreating it... tried adding the log device from a live CD shell... the result is the same... the server starts loading the kernel and at some point restarts, without a kernel panic or anything...

any idea?
 
Note: I don't know if gnop is still useful today.

Boot from a USB stick, import the pool with altroot, check the root filesystem, /boot, the kernel, loader.conf, rc.conf.
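A minimal sketch of that rescue import, assuming the pool is named zroot as in the earlier posts:

Code:
zpool import -R /mnt zroot   # altroot keeps the pool from mounting over the live USB system
zpool status -v zroot        # then inspect the pool and look around /mnt/boot
# -f may be needed on the import if the pool was not cleanly exported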

Try to drop into the bootloader during the boot process.
 
Hi Alain, I tried with and without gnop... same result...
If I boot from USB, import the zpool, remove the log device, export the zpool and then reboot from the hard drive, it works... so I doubt it is a problem with the filesystem...
 
If there is a problem with the filesystem, you will be able to see it with
Code:
zpool status -v
After a correct boot you can try:
Code:
cp /etc/zfs/zpool.cache /boot/zfs/zpool.cache
 
OK... I found something very strange... if the SSD is new (unformatted) and I run the following commands:

Code:
gpart create -s gpt nvd0
gpart add -b 2048 -a 4k -s 50G -t freebsd-zfs -l log nvd0
gpart add -a 4k -s 50G -t freebsd-zfs -l cache nvd0
zpool add zroot log /dev/gpt/log
zpool add zroot cache /dev/gpt/cache

it works like a charm... then, if I do the following...

Code:
zpool remove zroot /dev/gpt/log
zpool remove zroot /dev/gpt/cache
reboot

this removes the log and cache and reboots from the hard drive without the caching devices... good... it works...

Then, if I try to re-add the log and cache using the following commands:

Code:
gpart create -s gpt nvd0
gpart add -b 2048 -a 4k -s 50G -t freebsd-zfs -l log nvd0
gpart add -a 4k -s 50G -t freebsd-zfs -l cache nvd0
zpool add zroot log /dev/gpt/log
zpool add zroot cache /dev/gpt/cache
I reboot the system and bang... the problem reappears. So... I guess it has something to do with that GPT label not being wiped properly... or I don't know... what am I missing?
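Maybe the old ZFS labels have to be cleared explicitly before recreating the partitions; an untested sketch of what I mean, using zpool-labelclear(8) on the same nvd0 device:

Code:
zpool labelclear -f /dev/gpt/log       # clear stale ZFS vdev labels first
zpool labelclear -f /dev/gpt/cache
gpart destroy -F nvd0                  # then drop the partition table
dd if=/dev/zero of=/dev/nvd0 bs=1m count=4   # and wipe any leftover metadata at the start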

Thanks
 
It rather looks like booting from a pool that has a cache and/or log device has a problem/bug.
What if you have just one of them, cache or log?
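For instance, test each one in isolation (a sketch reusing the labels from the earlier posts):

Code:
zpool add zroot cache /dev/gpt/cache   # cache alone, then reboot and check
zpool remove zroot /dev/gpt/cache
zpool add zroot log /dev/gpt/log       # log alone, then reboot and check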
 
The problem appears only when adding the log device. If I add only the cache device, there is no problem.
 
I tried on a new test server; this time the SSD cache was a SATA2 drive connected to the same controller as the disks... same routine... on the first add of the SSD cache (log and cache) it worked like a charm... I rebooted the machine, then removed the log and cache and rebooted again straight from the disk (it worked)... I ran gpart destroy -F <disk>, recreated the log and cache partitions again, rebooted the machine, bang... when it wants to mount the root filesystem, it reboots.

So either there is a bug somewhere in the kernel, or I am doing something extremely wrong and I can't figure out what.
 
You could try to create your cache & log devices with just:
Code:
gpart add  -s 50G -t freebsd-zfs -l log nvd0
gpart add  -s 50G -t freebsd-zfs -l cache nvd0
 
You could try to create your cache & log devices with just:
Code:
gpart add  -s 50G -t freebsd-zfs -l log nvd0
gpart add  -s 50G -t freebsd-zfs -l cache nvd0

Hi Alain, I also tried that... same result.

covacat,
hi, yes... same result...

Phishfry... well, the device I use is less important at this point... I just want to figure out where the problem is... and how I can sort it out... after that, I will invest in a fast SSD/NVMe drive.
 