A FreeBSD box for home

Many of us here ask questions, but few say what they are trying to achieve.

I want to share this project involving FreeBSD. My goal was to set up a firewall / router right behind my internet box and also use the machine as a media player on my TV.

I have almost completed the configuration. The main components are operational and under testing at the moment.

The hardware is a fanless (0 decibel) PC built around an Asus H170I-PRO motherboard, an i3-7300T and 8 GiB of RAM. The machine has to be as silent as possible because it sits next to the TV and must always be running. I chose this motherboard because it has two wired network ports and a wifi device. I added two 250 GB SSDs for system redundancy (ZFS mirror).
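For reference, a two-disk ZFS mirror like that can be created with a single zpool command. A minimal sketch, assuming whole-disk devices (the pool name and the device names ada0/ada1 are placeholders; a real install would usually partition first and use GPT labels):

```shell
# Create a mirrored pool from the two SSDs (names are examples)
zpool create tank mirror ada0 ada1

# Verify: the output shows a mirror-0 vdev containing both disks
zpool status tank
```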

You can see an example of such a PC on this site: https://www.pcvert.fr/15-20-watts/444-euler-m-h170.html

The main problem is that FreeBSD doesn't have a working driver for the wifi device (QCA6174). Someone is working on porting the Linux drivers to FreeBSD, but it currently doesn't work with the QCA6174 (it works on the QCA988X): https://github.com/erikarn/athp

So I used a Debian VM to bring wifi networking to the box. The firewall / router part is handled by a pfSense VM (based on FreeBSD 11.2). For this, I had to pass two network devices through to these VMs. On an Intel processor, you need the patch from PR 229852; without it, the kernel crashes as soon as you run a bhyve VM with a passthru device.
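For anyone wanting to try the same: PCI passthrough with bhyve means reserving the devices for the ppt driver at boot, then handing them to the guest. A sketch (the bus/slot/function values 2/0/0 and 3/0/0 are placeholders; find your own with `pciconf -lv`):

```shell
# /boot/loader.conf -- reserve the two NICs for bhyve passthrough
# (2/0/0 and 3/0/0 are example PCI addresses, not real ones)
pptdevs="2/0/0 3/0/0"
vmm_load="YES"
```

bhyve then attaches a reserved device to the guest with an option like `-s 5:0,passthru,2/0/0`.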


Finally, the hardest part is the multimedia one. I'm facing some minor issues I haven't solved so far.
 
Good stuff! I like your idea of passing through the wireless device to a Linux VM, probably for better driver support?
 
Would it have been less work to just use a temporary USB wireless device?
Possibly. But I would have had to look for a device that is recognised by pfSense and can create a hotspot; I'm not sure it's that simple. Besides, buying another wifi device when I've already spent money on a brand-new one doesn't appeal to me.

As I said in another post, it's a general method for making network devices work with FreeBSD. Installing Debian in a bhyve VM is pretty straightforward, and howtos and tutorials for configuring that OS are easy to find online.

It was trickier to install the pfSense VM...
 
After two weeks in real use, I can say that the network / server part is perfect.

But... the multimedia side is disappointing. No sound over HDMI, and video rendering is poor (tearing).
And believe me, I tried a bunch of things to fix these problems...

I think I'll have to wait several years, until the drm drivers work better on this "new" hardware...
 
I have trodden around that path myself, looking to cut down the hardware, and move as far as possible to virtual machines.

I was not able to get a virtualised firewall (neither pfSense, nor OPNsense) to work properly. It had a GPRS modem on the Internet side with a USB modem pass through to the VM. It nearly worked, but there was a problem with NAT that I never resolved. [Packets from some hosts on the internal network didn't get NAT'd on the way out (and disappeared into the ether) -- other hosts worked fine.]

So the firewall now happily resides (noiselessly) on a Raspberry Pi -- which is fine for as long as my Internet tops out at 800 KB/sec.

I looked at virtualising MythTV at the same time, using both USB and Ethernet DVB-T (Australian standard) TV tuners. I came to a number of conclusions:
  • multimedia is a real-time application, and needs real hardware; and
  • Linux offers the least grief, by far, for many multi-media applications.
Virtualising firewalls can work, but watch out for the chicken and egg, and mutual exclusion problems. e.g.
  • your VM server may want to talk to the Internet to boot;
  • when your VM server is down, so is your firewall, so no Googling on how to fix it; and
  • bridges and virtual machines limit bandwidth, and if your link is fast enough, you'll need a hardware firewall to shoulder the load.
Overall, I came away without virtualising anything. But learned a lot.

Your VM for the hotspot is really creative!
 
Thanks, gpw928.

Concerning pfSense, there is a trick to installing it: you need to use the serial-console-enabled memstick installer (IIRC).
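To illustrate, installing from the serial memstick image in bhyve roughly looks like this, with the guest console attached to the current terminal. This is only a sketch under assumptions: image filename, disk path, VM name and sizes are placeholders, and a management tool (vm-bhyve, iohyve, ...) would normally wrap all of this:

```shell
# Load the FreeBSD-based installer kernel, console on stdio
bhyveload -c stdio -m 2G -d pfSense-CE-memstick-serial-amd64.img pfsense

# Boot the VM: installer image as one disk, target disk image as another,
# com1 wired to the terminal so the serial installer is usable
bhyve -c 2 -m 2G -AHP \
    -s 0,hostbridge -s 31,lpc -l com1,stdio \
    -s 2,virtio-blk,pfSense-CE-memstick-serial-amd64.img \
    -s 3,virtio-blk,/vms/pfsense/disk.img \
    pfsense
```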

The only problem I get at startup is with samba: it complains that it can't send its packets because the pfSense VM isn't active yet. The story of what I tried to avoid this is long. In the end, a bug in the samba rc script prevented me from resolving this annoyance; see PR 257188. To this day that bug has never been fixed, and I don't want to patch every new version of samba. So be it.

Since then, I've upgraded all components, including the Debian VM. Patching the FreeBSD kernel (PR 229852) is no longer necessary since 12.1-RELEASE-p7, so this box has joined the RELEASE branch and is currently up to date.

My only regret: having chosen iohyve to manage the bhyve VMs, because it's not maintained anymore.
 
My private setup has some similarities (only one physical server for everything, using jails and bhyve to virtualize stuff), so I'm commenting here:

A firewall in a VM has one drawback you should be aware of. If there's any successful "jailbreak" attack on some other VM or jail on that same machine, the attacker will be in a position to control the firewall; from the host for VMs and jails, you can do everything. For my home network, I decided to accept that risk.

There's IMHO one thing that's important if you don't want to open up even more attack vectors with a "firewall VM": Make sure this VM has exclusive access to all your network interfaces. In practice, this means use PCI-passthrough for them, so only the VM can access them.

I can't help with "firewall distributions" like pfSense etc. My initial plan was to use OpenBSD for the firewall VM. Unfortunately, it had some problems bringing up my network interfaces, so I gave up on that and I'm now running FreeBSD on the firewall as well. My hardware offers two physical NICs, both of which "belong" to the firewall via PCI passthrough. They are connected to a managed switch and aggregated using a lagg device with the LACP protocol. For network zones (internal, management, guest, chinese IOT stuff), I use VLANs. There's also a dedicated VLAN to connect a DSL modem to. The WAN connection is then established using base ppp(8) (and, for IPv6, a tunnel offered by Hurricane Electric).
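A lagg + VLAN setup like the one described above can be expressed in a few rc.conf lines. A sketch, assuming two igb NICs and VLAN IDs 10/20 (interface names, IDs and addresses are all examples, not my actual config):

```shell
# /etc/rc.conf -- LACP aggregation of two NICs, with VLAN zones on top
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 up"

# VLAN sub-interfaces for the network zones (IDs are examples)
vlans_lagg0="10 20"
ifconfig_lagg0_10="inet 192.168.10.1/24"   # internal zone
ifconfig_lagg0_20="inet 192.168.20.1/24"   # guest zone
```

The switch ports on the other end have to be configured for LACP and tagged with the same VLAN IDs.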

Apart from that, I have two other bhyve VMs running on the same host, one for running the wifi controller for my access points (this software, although written in Java, only works correctly on Linux), and one running some Windows Server, mostly for testing and for work when working from home.

Then, there are a lot of jails running as "light-weight virtual machines", e.g. one is my AD domain controller with samba, one my SMB file server for windows clients, one is my media server (using minidlna), one my internal mail server, and so on...

Everything is wired together using bridges (on both the host and the firewall VM).
 
(and for IPv6, a tunnel offered by hurricane electric)
Just in case anyone wonders about that: Of course base ppp(8) works perfectly fine with native IPv6 connectivity offered by your ISP via DSL. There are two reasons I still prefer a tunnel:
  • HE optionally routes you a /48 prefix, so you can have one /64 prefix for each of your local network zones, while (consumer) DSL ISPs typically only provide one /64 prefix (only enough for a single local subnet, at least if you want to make use of SLAAC)
  • The prefix you get from HE is "static" (as opposed to the dynamic assignment typically offered by your ISP) and you get the option for rDNS delegation, so you can properly assign DNS names to your machines.
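For the curious, a 6in4 tunnel like HE's is configured with a gif interface. A hedged sketch of the rc.conf side (all addresses below are documentation placeholders, not a real tunnel; HE's tunnel details page gives the actual values):

```shell
# /etc/rc.conf -- 6in4 tunnel (e.g. to Hurricane Electric)
# 203.0.113.5 = local WAN IP, 198.51.100.1 = tunnel server (examples)
cloned_interfaces="gif0"
create_args_gif0="tunnel 203.0.113.5 198.51.100.1"
ifconfig_gif0_ipv6="inet6 2001:db8:1f0a::2 2001:db8:1f0a::1 prefixlen 128"
ipv6_defaultrouter="2001:db8:1f0a::1"
ipv6_gateway_enable="YES"   # route IPv6 for the local zones
```

The routed /48 is then split into one /64 per VLAN on the internal interfaces.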
 
I looked at virtualising MythTV at the same time, using both USB and Ethernet DVB-T (Australian standard) TV tuners. I came to a number of conclusions:
  • multimedia is a real-time application, and needs real hardware; and
  • Linux offers the least grief, by far, for many multi-media applications.
I'm using an Xbox One USB tuner (latest webcamd + tvheadend server) on 13-RELEASE. Rock solid over the network to my Kodi box. Works better than the same setup tested on various Linux distros. (Couldn't make it work on 13-STABLE though: tvheadend doesn't see the tuner.)
 
My goal was to set up a firewall / router just after my internet box and also use it as a media reader on my TV.

This is of course possible and has its benefits (VMs are generally a bit easier to manage) but you're creating a single point of failure by integrating router capabilities into your one and only home server. By the way: have you considered using virtual OpenWrt for AP capabilities?
 
you're creating a single point of failure by integrating router capabilities into your one and only home server.
My reason to do exactly that: given I need a server for several things anyway, I want to reduce overall power consumption while still having a firewall (VM) that's reasonably powerful (able to manage several different network zones) and completely under my control.

Of course, you have to be aware of that single point of failure. In order not to lose anything, I use a raid-z for storage and regular incremental backups to an external USB drive stored far away ;) And my emergency plan to get at least internet access, should the hardware ever die, is to quickly hook up the plastic I got from my ISP to a port of my switch assigned to my "internal" network.
 
This is of course possible and has its benefits (VMs are generally a bit easier to manage) but you're creating a single point of failure by integrating router capabilities into your one and only home server. By the way: have you considered using virtual OpenWrt for AP capabilities?

I don't see your point. I did exactly what I wanted to do (except the multimedia part, which is really unusable).

Yes, if pfSense stops, I lose internet connectivity, but the local network keeps working. That said, everything runs on a ZFS mirror (RAID1) and each VM is saved with several snapshots.
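Snapshotting a VM's dataset is a one-liner, which is what makes this so comfortable. A sketch, assuming a dataset layout like `zroot/vms/pfsense` (the dataset and snapshot names are placeholders):

```shell
# Snapshot a VM's dataset, e.g. before an upgrade (names are examples)
zfs snapshot zroot/vms/pfsense@pre-upgrade

# List the snapshots, and roll back if the upgrade goes wrong
zfs list -t snapshot zroot/vms/pfsense
zfs rollback zroot/vms/pfsense@pre-upgrade
```

The VM should be stopped (or at least quiesced) before the snapshot if you want a crash-consistent disk image.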

Even if this box burns, I just have to reconfigure my ISP box to get internet (and DHCP service) back.

I'm not familiar with OpenWrt, and I don't see what it would bring over what I already have. My problem was just to make the QCA6174 wifi device work, and the simplest way was to use a Linux VM.
 
I don't see your point.
I think it's pretty obvious: With one machine doing "everything", you also lose everything at once if it ever breaks.

I also think that might be acceptable, given you are aware of it, actively accepting it and have a plan how to deal with such a situation ;)
 
I think it's pretty obvious: With one machine doing "everything", you also lose everything at once if it ever breaks.

This exactly, if this single server crashes you also (at least temporarily) lose Internet access and wireless network. I just prefer to separate routing, storage and application servers (if one thing fails I only lose access to that one thing) and pay a tiny bit more for electricity.

I also think that might be acceptable, given you are aware of it, actively accepting it and have a plan how to deal with such a situation ;)

This is acceptable if you concentrate entirely on lowest possible power draw and nothing else :)

That said, all is working under zfs RAID1 and each VM is saved with several snapshots.

ZFS is awesome :)

No recent experience, but years ago (probably from 2005/2006 onwards) I vaguely recall using it (probably in a coova context) to provide a captive portal etc. Very respectable software.

It has evolved heavily in the last 5 years or so, but I'd still gladly pay for something just as light (in OS size and RAM usage) based on BSD.
 
I just prefer to separate routing, storage and application servers (if one thing fails I only lose access to that one thing) and pay a tiny bit more for electricity.
My entire home office runs on a load which rarely exceeds 300 watts. That's three light bulbs in the old measure.

For that I get separate storage, media, and virtualisation servers, as well as several switches, firewall, TV tuners, and monitor. They are all on a UPS (which is why I know the current draw).

The ZFS server has six gigabit Ethernet ports, as does the virtualisation server. I have had lagg(4) running at various times, and done a lot of network experiments with varying aggregation protocols and Ethernet packet sizes.

The switches are VLAN capable, and for several years I have harboured a well-crafted and elegant design to implement VLANs.

However, today, my main network is a single broadcast domain, with no VLANs and no link aggregation. There are several private back-to-back gigabit Ethernet connections between servers (which carry most of the serious traffic).

The infrastructure design is largely driven by a desire to isolate functions, and isolate heavy network traffic, but keep things simple.

This approach is specifically for a home office. I appreciate it won't scale. But it doesn't have to.
 
Some news about this project:

1) The box is still running. :)
2) It joined the RELEASE branch at 12.1-RELEASE-p7 (now on 13.2-RELEASE-p2).
3) Upgraded memory to 32 GiB last year.
4) Last year I added two 4 TB SSDs (ZFS mirror) to store multimedia files and more.
5) Dropped sysutils/iohyve today. The box now uses my own simple framework (a set of scripts around bhyve) that turns VMs into services.
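For the curious, a minimal sketch of what "VM as a service" can look like as an rc.d script. Everything here is illustrative, not my actual framework: the VM name, paths and the bhyve command line are placeholders, and a real script would also handle the boot loader step, nmdm consoles and a graceful ACPI shutdown:

```shell
#!/bin/sh
# /usr/local/etc/rc.d/vm_pfsense -- minimal rc.d wrapper for a bhyve VM
# (names, paths and options are example placeholders)

# PROVIDE: vm_pfsense
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="vm_pfsense"
rcvar="vm_pfsense_enable"
start_cmd="vm_pfsense_start"
stop_cmd="vm_pfsense_stop"

vm_pfsense_start()
{
    # Run the guest detached in the background via daemon(8)
    /usr/sbin/daemon -f /usr/sbin/bhyve -c 2 -m 2G -AHP \
        -s 0,hostbridge -s 31,lpc -l com1,/dev/nmdm0A \
        -s 2,virtio-blk,/vms/pfsense/disk.img \
        pfsense
}

vm_pfsense_stop()
{
    # Force the guest off; a friendlier script would request an
    # ACPI shutdown and wait before resorting to this
    /usr/sbin/bhyvectl --vm=pfsense --force-poweroff
}

load_rc_config $name
run_rc_command "$1"
```

Enable it with `sysrc vm_pfsense_enable=YES`, then `service vm_pfsense start`.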
 