Excruciatingly slow writes on VNET jail NFS export

Hi,

I've been trying to set up a new NFS share in a VNET jail (bare metal, 14.2-RELEASE-p0), but writes are so painfully slow that it's simply unusable.
Code:
nei@linbox$ dd if=/dev/urandom of=/mnt/nas/slow.img bs=128k count=4
4+0 records in
4+0 records out
524288 bytes (512.0KB) copied, 25.508981 seconds, 20.1KB/s

The weird thing is, reads perform just fine:
Code:
nei@linbox$ dd if=/mnt/nas/bigfile.img of=/dev/null bs=1M
4094+1 records in
4094+1 records out
4293386238 bytes (4.0GB) copied, 36.910001 seconds, 110.9MB/s

When I run nfsd outside the jail, everything works perfectly:
Code:
nei@linbox$ dd if=/dev/urandom of=/mnt/nas/slow.img bs=128k count=4
4+0 records in
4+0 records out
524288 bytes (512.0KB) copied, 0.019164 seconds, 26.1MB/s

nei@linbox$ dd if=/dev/urandom of=/mnt/nas/file.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.0GB) copied, 16.460509 seconds, 62.2MB/s

nei@linbox$ dd if=/mnt/nas/bigfile.img of=/dev/null bs=1M
4094+1 records in
4094+1 records out
4293386238 bytes (4.0GB) copied, 36.908626 seconds, 110.9MB/s

And that's using the same configuration in both cases:

/etc/exports
Code:
V4: /mnt/files -network 192.168.1.0/24
/mnt/files -mapall=1000:1000 -network 192.168.1.0/24

/etc/rc.conf
Code:
nfs_server_enable="YES"
nfs_server_flags="-t"
nfsv4_server_enable="YES"
nfsv4_server_only="YES"

Here's the jail's configuration:
/etc/jail.conf
Code:
nfsserver {
        exec.start += "/bin/sh /etc/rc";
        exec.stop += "/bin/sh /etc/rc.shutdown";
        exec.consolelog = "/var/log/jail.log";

        exec.clean;
        allow.nfsd;
        enforce.statfs = 1;
        allow.set_hostname = 0;
        allow.reserved_ports = 0;

        host.hostname = "nfsserver.home.local";
        path = "/jails/netshare";

        vnet;
        vnet.interface = "epair0b";
        exec.prestart = "/sbin/ifconfig epair0 create up";
        exec.prestart += "/sbin/ifconfig epair0a up";
        exec.prestart += "/sbin/ifconfig bridge0 addm epair0a up";
        
        exec.start += "/sbin/ifconfig epair0b 192.168.10.100 netmask 255.255.255.0 up";
        exec.start += "/sbin/route add default 192.168.10.1";
        
        exec.poststop += "/sbin/ifconfig bridge0 deletem epair0a";
        exec.poststop += "/sbin/ifconfig epair0a destroy";
}

Mounting devfs in the jail and applying the devfsrules_jail_vnet ruleset didn't seem to make a difference.
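In case it matters, this is roughly the devfs snippet I added to the jail block when testing (assuming the stock devfsrules_jail_vnet ruleset, which is number 5 in the default /etc/defaults/devfs.rules):
Code:
nfsserver {
        # ... rest of the configuration as above ...
        mount.devfs;
        devfs_ruleset = 5;   # devfsrules_jail_vnet in /etc/defaults/devfs.rules
}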

Currently, I have one ZFS dataset for /jails and another for /mnt/files. I haven't changed the "jailed" property of the exported dataset, since as far as I can tell it only affects whether the dataset can be managed from inside the jail, which I don't need. I did try disabling synchronous writes, both on the dataset and in nfsd (sysctl vfs.nfsd.async=1), but it didn't change anything.
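Concretely, these are the commands I used to disable sync writes (zroot/files is a placeholder; substitute the actual dataset backing /mnt/files):
Code:
# Disable synchronous writes on the exported dataset
zfs set sync=disabled zroot/files

# Let nfsd acknowledge writes before they reach stable storage
sysctl vfs.nfsd.async=1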

On the client side (a Linux machine), fstab looks like this:

/etc/fstab
Code:
nfsserver.home.local:/ /mnt/nas nfs4 rw,nodev,noexec,nosuid,vers=4.2,_netdev,rsize=1048576,wsize=1048576
I've tried changing NFS version, wsize, and other parameters, to no avail.
There didn't seem to be anything of note in the logs on the server side. On the client side, however, "kernel: nfs: server nfsserver.home.local not responding, still trying" is logged occasionally, as though the server were frequently hanging.

I also tried disabling TSO on the physical network interface used for the jail (since it seemed to cause similar issues in a past release), but it didn't improve performance.
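For the record, this is how I disabled it (igb0 is a placeholder for the actual physical NIC):
Code:
# Disable TCP segmentation offload on the physical interface
ifconfig igb0 -tso

# To persist across reboots, in /etc/rc.conf:
# ifconfig_igb0="DHCP -tso"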

I'm quite new to FreeBSD, and am unsure whether I'm missing something obvious... Is there anything else I could try?

Thanks a lot!
 