Solved Running FreeBSD completely on a RAM disk

Some information before asking the questions:
-> I have 192 GB of RAM (97 GB/s read/write speed).
-> When building ports I am building them completely in RAM.
-> Big ports take around 100 GB to build.

Back then on Linux I had directories with frequent reads/writes mounted on a RAM disk.
An example is Ryujinx (a Nintendo Switch emulator), which frequently compiles shaders.
I had the application, its configuration, the share directory, and the game directory completely on a RAM disk.
The benefit was a very smooth experience with almost no stutters during shader compilation.
My goal for FreeBSD is to run the whole OS on a RAM disk.

To achieve it, I have some ideas, but I do not know how I could make them possible.
What I basically want is:
1) Having the OS itself on a drive that is full-disk encrypted with something equivalent to the Serpent cipher on Linux.
I know that FreeBSD offers GELI and GBDE (both built on GEOM); is one of them the stronger choice?

2) During boot I want to decrypt that device, create a RAM disk, copy the files over to it, and then continue the boot process as normal.
Can I have the boot loader encrypted on a USB flash drive?
Do I need to set some kernel options for that?
Or do I need to set options in /boot/loader.conf?
Or do I need to compile a custom kernel?

3) Locking (re-encrypting) the backup device again after boot.

4) Before shutting down, I want to decrypt the backup device, make an incremental backup that copies only the files which have been modified and leaves the unmodified files as they are, and preferably compress the whole backup.
rsync would probably be an option for that kind of backup, but tar could also help, I guess.
Tar archives can be compressed, but I do not know whether rsync offers compression options too.
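For example, something along these lines might do it (paths here are just placeholders):
Code:
# incremental copy: only changed files are transferred, deletions mirrored
rsync -a --delete /mnt/ramdisk/ /backup/system/
# rsync's -z only compresses data in transit, not the stored copy;
# for a compressed archive, tar with zstd would be one option:
tar --zstd -cf /backup/system.tar.zst -C /mnt/ramdisk .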

It is probably not easy to achieve all of that, but I have always wanted to have an OS completely on a RAM disk.
Other benefits I see are:
-> If the OS ever gets infected due to some mistake of mine, I can easily recover by shutting the computer down and restoring a copy from before the infection.
-> The system will be a lot snappier.
-> No runtime performance penalty from encryption, while all files remain encrypted on the backup device.
 
This is possible, you might want to look for "diskless" operation to get some hints. I believe mfsBSD also uses a similar construct.

But you need to realize the BIOS (or UEFI) needs to be able to read the boot code, it cannot be encrypted because the BIOS/UEFI won't be able to decrypt it.
 
I think the best way to build MFS images is poudriere image. Since it is simply a set of scripts it can be easily modified to suit your needs.
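For example, a minimal invocation might look something like this (the jail name and output directory are placeholders; see the poudriere-image(8) man page for the full option list):
Code:
# build a memory-rooted USB image from an existing poudriere jail
poudriere image -t usb+mfs -j 14amd64 -n ramtest -h ramtest -s 8g -o /tmp/images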



BSDRP has an easy start guide. It is focused on being a NanoBSD replacement, but simply replace the build type (-t) with an mfs type such as usb+mfs.
It shows six steps at minimum, but in fact you don't need to build ports. You can just focus on building images from the base system, then later add in ports or packages.
Try it out first, then build it up. It can take a lot of work with overlays, but it is very flexible.

Hopefully something here helps:
Thread 95840
 
Before shutting down, I want to decrypt the backup device
This sounds possible with an rc.shutdown.local script, but I worry about the failure mode: what happens if this step fails? It seems very complex for a shutdown script.
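As a sketch of what I mean (device name and keyfile are hypothetical, and this assumes the GELI provider was initialized with a keyfile and no passphrase), /etc/rc.shutdown.local could try something like:
Code:
#!/bin/sh
# hypothetical shutdown hook: unlock backup device, sync, lock again
geli attach -p -k /root/backup.key ada1p1   # device/keyfile are examples
mount /dev/ada1p1.eli /backup
rsync -ax --delete / /backup/               # -x: stay on the root filesystem
umount /backup
geli detach ada1p1.eli
If any of these steps fails, the backup is silently stale, which is exactly the failure mode I worry about.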

You are asking a lot here, but I think you could work something out.

Perhaps use type usb+zmfs and make an incremental backup on shutdown.

Maybe isolate your OS and your data: keep the data encrypted and overlay it on the OS.

I know nothing of GELI, so I will pass on that question. It seems weird to me to have a memdisk OS yet want OS-level encryption. I think you are overthinking this. Encrypt the data.
 
It seems weird to me to have a memdisk OS yet want OS-level encryption.
What I meant is: I want full-disk (block-level) encryption with the device staying encrypted (not accessible) the whole time, except for the one moment when the OS files are copied into the RAM disk. Otherwise full-disk encryption is pointless, because a decrypted device behaves just like a normal unencrypted one.
The point is to work only from a local copy of the system, with the main copy staying encrypted the whole time the OS runs.

This is possible, you might want to look for "diskless" operation to get some hints. I believe mfsBSD also uses a similar construct.
That sounds interesting, I will look into that.

But you need to realize the BIOS (or UEFI) needs to be able to read the boot code, it cannot be encrypted because the BIOS/UEFI won't be able to decrypt it.
I see, it is similar to my Linux setup back then.
One question: do you recommend using UEFI mode or BIOS mode?
I heard that Secure Boot recently had several security issues.
My mainboard can handle both modes.

Another question: if I use the USB flash drive for storage only, should I use UFS or ZFS?
ZFS is somewhat heavier than UFS for flash memory devices, right?
So I would probably need two partitions then: one with the boot code for either UEFI or BIOS, and one with the encrypted boot loader + OS files?

I think the best way to build MFS images is poudriere image. Since it is simply a set of scripts it can be easily modified to suit your needs.

mfsBSD in Base: https://reviews.freebsd.org/D41705

BSDRP has an easy start guide. It is focused on being a NanoBSD replacement, but simply replace the build type (-t) with an mfs type such as usb+mfs.
It shows six steps at minimum, but in fact you don't need to build ports. You can just focus on building images from the base system, then later add in ports or packages.
Try it out first, then build it up. It can take a lot of work with overlays, but it is very flexible.

Hopefully something here helps:
Thread 95840
I will look into that, too.
 
:snip

4) Before shutting down, I want to decrypt the backup device, make an incremental backup that copies only the files which have been modified and leaves the unmodified files as they are, and preferably compress the whole backup.
rsync would probably be an option for that kind of backup, but tar could also help, I guess.
Tar archives can be compressed, but I do not know whether rsync offers compression options too.
:snip
For this you might want to look into BorgBackup (https://www.borgbackup.org/). There is no need to encrypt the entire device; Borg repositories can be fully encrypted and still give you incremental backups.
I have been using BorgBackup for some years now, and it is truly a great backup tool with very easy restore options.
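A hedged sketch of the flow (the repository path and the directories to back up are just examples):
Code:
# create an encrypted repository once, then append compressed archives
borg init --encryption=repokey /backup/borgrepo
borg create --compression zstd /backup/borgrepo::system-{now} /etc /home /usr/local
borg prune --keep-daily 7 --keep-weekly 4 /backup/borgrepo
Only chunks that changed since the previous archive are stored, so every run after the first is effectively incremental.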
 
I think the best way to build MFS images is poudriere image. Since it is simply a set of scripts it can be easily modified to suit your needs.
I looked up both NanoBSD and poudriere image, and I think poudriere image is the solution I will go with. It seems fairly straightforward.
 
OK, I am now ready to build a bare-bones poudriere image for testing things out.
I have some questions, though.
According to the poudriere-image man page:
Code:
 -s This specifies the maximum size of the image that is built.
Do I need this parameter, or can poudriere figure out the size needed for the image?
If I do need it, how large should it be?
My USB flash drive has 64 GB, so if I set -s to 64g, will the image be 64 gigabytes, or is that just the maximum size the image can have?
 
You might look into reboot(8)’s -r switch. You can use that to switch to a ramdisk-based root after boot, as described here:
Using FreeBSD's re-root capability
I have looked through the guide.

Note: It is assumed this is run inside a /bin/sh shell as the root user
rm -f /var/crash/*
rm -f /var/tmp/*
find /var/db/freebsd-update/files -type f -print | xargs rm -f
Both rm commands seem clear to me.
Is it safe to delete the freebsd-update files?

Create 2GB ram disk - this needs to be big enough to hold all the data.
md=$(mdconfig -s 2g)
Put a UFS filesystem on it, mount it:
newfs /dev/$md
mount /dev/$md /mnt
OK, so a RAM disk device (md) will be created under /dev.
I am using ZFS, and ZFS can be used on a device without creating a filesystem first.
Could I also put a zpool + datasets on md, or copy my existing zpool + datasets over?

Re-root the system.
kenv vfs.root.mountfrom=ufs:/dev/$md
reboot -r
That makes sense.
In my case it would be zfs.
Once issued, can I power off my system as usual and expect it to mount the RAM disk again without issuing reboot -r a second time?
Provided my boot drive exists.

Code:
-r      The system kills all processes, unmounts all filesystems, mounts
        the new root filesystem, and begins the usual startup sequence.
        After changing vfs.root.mountfrom with kenv(1), reboot -r can be
        used to change the root filesystem while preserving kernel state.
        This requires the tmpfs(5) kernel module to be loaded because
        init(8) needs a place to store itself after the old root is
        unmounted, but before the new root is in place.

The man page for reboot says that the kernel state is preserved.
So does that mean I have to put vfs.root.mountfrom=zfs:/dev/$md into /boot/loader.conf to get the kernel into that state on each cold boot?

And before powering the system off, I need to issue as root:
Code:
# Copy the mfs filesystem back to the ZFS pool.
dump -0f - / | (cd /mnt; restore -rf -)
I think so; and according to the man page, if I use dump -1f ... instead, the backup would be incremental.
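As a sketch, the incremental variant of the guide's command would then presumably be:
Code:
# level 1: dump only what changed since the last level 0
dump -1f - / | (cd /mnt; restore -rf -)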

Your suggestion so far seems to be the closest to what I actually want to achieve.
I am going to try your recommendation tomorrow, and then report back.
 
I didn't read every detail of this thread, but don't overlook FreeBSD's re-root capability. Just make an md disk, rsync a userland and kernel modules over, make sure there is a working init(8), and re-root.

Found this random piece:
Thank you.
I have already read that thread, and I want to apply some of the things described there.
Eventually I will need to write a shell script, given that everything needs to be synced and resynced.
 
Once issued, can I poweroff my system like always and expect the system to mount into the Ram disk without issuing reboot -r a second time ?
No. RAM disks disappear on power off or a normal reboot (as opposed to reboot -r).
So, does that mean that I have to put vfs.root.mountfrom=zfs:/dev/$md into /boot/loader.conf to get the kernel into that state on each cold boot ?
Each cold boot you will have to recreate and repopulate the RAM disk.

And before powering the system off, I need to issue as root:
# Copy the mfs filesystem back to the ZFS pool.
dump -0f - / | (cd /mnt; restore -rf -)
I think so; and according to the man page, if I use dump -1f ... instead, the backup would be incremental.

If you’re using ZFS, zfs send/recv would be the right way to do that.

You’ll need some special scripting to pivot (kenv + reboot -r) from the physical drives to the RAM-disk-backed root on each boot, and then another set of scripts to transfer new changes back onto the persistent store before (real) reboots or shutdowns. You’ll also need to keep the system thin so it doesn’t chew up too much RAM.

The closest common thing to a persistent RAM disk these days is a flash drive.
 
I have created a new pool, zramdisk, and added my RAM disk to it.
The mountpoint is set to /mnt.
Since my OS is on zroot/(dataset1, dataset2, etc...), I want to send everything over to zramdisk.
I have zroot encrypted, but since it is mounted and decrypted, do I still need to specify the -w parameter for zfs send?
1) zroot (sender) -> zramdisk (receiver):
Reading the man page, and given that zramdisk is always empty after a cold boot (due to the nature of RAM), I think I need to copy non-incrementally.
The copying happens on the same host locally.
My command would be issued as root:
Code:
zfs send -R [-w] zroot | zfs receive zramdisk
I am not too familiar with ZFS, but I believe that mountpoints for datasets are variable (since you can set them anywhere), and that the -R parameter makes zfs send transmit zroot and all datasets under it to zramdisk?
Do I need to create the snapshot manually, or will something like a temporary snapshot for the transfer be created once zfs send / zfs receive is issued?

2) zramdisk (sender) -> zroot (receiver):
My command would be issued as root:
Code:
zfs send -R -I -F [-w] zramdisk | zfs receive zroot
Again, zroot is encrypted, but it will be decrypted in order to receive the incremental backup.

I am kind of confused about how I should transmit the data with the ZFS commands, since zfs send works with datasets while zfs receive creates snapshots from the streams it receives.
Is a pool, in the context of zfs send/receive, treated as a dataset too?
 
I am not too familiar with ZFS, but I believe that mountpoints for datasets are variable (since you can set them anywhere), and that the -R parameter makes zfs send transmit zroot and all datasets under it to zramdisk?
Do I need to create the snapshot manually, or will something like a temporary snapshot for the transfer be created once zfs send / zfs receive is issued?
You want to do something like (assuming the empty ramdisk /dev/md0 is created):

Code:
zpool create -m none zram /dev/md0
zfs snapshot -r zroot@SEED
zfs send -LecR zroot@SEED | zfs recv -uF zram
I usually advise avoiding -F on receives, but I think it may be needed here since you are replacing the root of zram, even if it was never used. You definitely want -u to recv unmounted, both for this and for the final update. Whether you want it to be encrypted in RAM is up to you; since you’re doing all this for performance, I would suggest skipping -w. I don’t know if you will need to mask out the encryption property (using -x encryption on recv) to keep it from receiving into a re-encrypted state; you’ll have to test.
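That masking test might look like:
Code:
zfs send -LecR zroot@SEED | zfs recv -uF -x encryption zram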

At this point you should be able to inspect and adjust the vfs.root.mountfrom kenv, and issue reboot -r. (You should only have to change zroot to zram in the string.)
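In shell terms (the dataset name below is only an example; inspect the current value first):
Code:
kenv vfs.root.mountfrom                            # inspect current value
kenv vfs.root.mountfrom="zfs:zram/ROOT/default"    # example dataset name
reboot -r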

Before shutdown, you’ll need to snapshot -r zram@END and send that (-LecRI zram@SEED zram@END) back down to zroot (receive with -u but NOT -F, just to be safe). You will almost certainly need to roll zroot back to the SEED snapshot first for the receive to work. (But this is preferable to -F, IMHO, as it removes foot-shooting potential.) Once this receive finishes, delete both the END and SEED snapshots on zroot so you’re ready for the next time.
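A sketch of that sequence (the rollback loop is my assumption, since zfs rollback works per dataset rather than recursively over children):
Code:
zfs snapshot -r zram@END
# roll every zroot dataset back to SEED so the incremental stream applies
for fs in $(zfs list -rH -o name zroot); do
    zfs rollback -r "$fs@SEED"
done
zfs send -LecRI zram@SEED zram@END | zfs recv -u zroot
zfs destroy -r zroot@SEED
zfs destroy -r zroot@END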

You can use -X on the initial send to exclude portions of zroot that aren’t needed in ramdisk mode (like /usr/src or /usr/ports, if you have those on separate filesystems). If you do this, definitely don’t use -F on the update recv (it would wipe them out).
 
Actually it does not; I have excluded datasets that are not needed, like mail, log, audit, crash, ports, and src.
As long as I do it the same way every time, though.

Now, I had not taken into account that installed pkgs etc. had already eaten 10+ GB. Not that I care much, but the interesting thing is that I see no visible performance gain in the applications I care about.
Although it was fun to try a whole RAM disk OS, it is not really worth it compared to my prior method, where only games, configs, saves, shaders, etc. were inside a RAM disk.
I thought it would change something if the .so files and other stuff were also in a RAM disk, but with both methods I still get a very small hiccup during heavy shader compiling + caching.
 
I thought it would change something if the .so files and other stuff were also in a RAM disk, but with both methods I still get a very small hiccup during heavy shader compiling + caching.

This is the magic of caching. ZFS was already caching (in RAM/ARC) frequently used files where possible, so it’s not too surprising that you don’t notice a big change. Tons of small synchronous writes would be the most likely thing to benefit, but that’s not a typical workload.
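If you want to see how much the ARC is actually holding, the standard sysctls are one way:
Code:
sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes
sysctl vfs.zfs.arc_max                # configured ceiling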
 
This is the magic of caching. ZFS was already caching (in RAM/ARC) frequently used files where possible, so it’s not too surprising that you don’t notice a big change. Tons of small synchronous writes would be the most likely thing to benefit, but that’s not a typical workload.
I noticed something about ARC while looking at my running processes in top, but I never realized it was part of ZFS.

One thing that came to mind today, and that I wanted to ask: I have a raidz pool with 4 HDDs.
If something breaks on my system, or I want to do a full reinstallation, can I somehow save the raidz pool, for example on a UFS USB drive?
So that I can just recover the pool and get access to the data saved on the HDD raid.
I read about exporting pools, but that would mean the pool is no longer usable on my current installation, if I am not wrong.
 