The sad state of Python on FreeBSD

Python on FreeBSD is in a bit of a sorry state.

For work I have been trying various methods to install ComfyUI.

ComfyUI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface.


Vermaden wrote a guide a couple of years ago for installing Stable Diffusion on FreeBSD.


However, the issue is that the linux-miniconda-installer package
hasn't been updated since 2023-10-23 11:27:43
and uses Python 3.9,

which is too old to run a lot of the most recent Python applications like ComfyUI.

The linux-miniconda-installer package uses the Linuxulator bash shell with conda,

so I tried using the Linuxulator bash shell on CentOS to install Miniconda:

Code:
/compat/linux/bin/bash

The previous command doesn't put you inside the Linuxulator chroot,
so you can still see the FreeBSD filesystem.

Create the Miniconda directory on FreeBSD and download the installer:

Code:
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
chmod +x ~/miniconda3/miniconda.sh

In the Linuxulator bash shell, install Miniconda:

Code:
~/miniconda3/miniconda.sh -b -u -p ~/miniconda3

Remove the Miniconda installer script:

Code:
rm ~/miniconda3/miniconda.sh

Activate Miniconda:

Code:
source ~/miniconda3/bin/activate

Create the Python environment:

Code:
conda create -n comfyenv
conda activate comfyenv

Install PyTorch:

Code:
conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia

Clone ComfyUI and change into its directory:

Code:
git clone https://github.com/comfyanonymous/ComfyUI.git

Code:
cd ComfyUI

Install the requirements with pip:

Code:
pip install -r requirements.txt

Run ComfyUI:

Code:
python main.py

This fails due to an old library in the CentOS userland;
I can't remember offhand which library.

There is a Rocky Linux 9 package (linux-rl9) for the Linuxulator.

However, the linux-nvidia-libs package is still built against the CentOS userland
(linux-c7) and not the Rocky Linux linux-rl9 package:

Code:
pkg info -dx linux-nvidia-libs-550.127.05

Code:
linux-nvidia-libs-550.127.05:
        linux-c7-xorg-libs-7.7_11

So I can't uninstall linux-c7 and install linux-rl9,
because that would remove linux-nvidia-libs.

So I set linux_enable to "NO" in my rc.conf
and created a Rocky Linux chroot in /compat

with these steps


However, trying to run the Rocky bash shell to install Miniconda
the same way I did with CentOS failed:

it couldn't find some .so file.

So there seems to be a difference in the Linuxulator chroot's bash shell
depending on whether it was installed with the FreeBSD package or created manually.

And you can't just enter the Linuxulator chroot,
because then you can't "see" the Nvidia driver.

You have to install the exact same version of the Linux driver in the chroot as on the FreeBSD host,
which is possible, but the issue is that you can't install a matching version of CUDA in the chroot.

You get the same issue with jails:

you can use devfs rules to make the Nvidia GPU visible in the jail (something like the ruleset sketched below),
and then you have to install the exact same version of the Nvidia driver in the jail.
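
A rough example of the devfs rules I mean; the ruleset name and number here are placeholders, not the exact ones I used:

Code:
# /etc/devfs.rules on the FreeBSD host -- unhide the Nvidia device nodes
[nvidia_jail=20]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path 'nvidia*' unhide
add path 'dri' unhide
add path 'dri/*' unhide

You then point the jail at that ruleset via devfs_ruleset in jail.conf.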

There is an nvidia-cuda-toolkit package on Ubuntu,
but that doesn't work with Python virtual envs created with either pip or Miniconda.

So you then have to manually install CUDA in the jail,
which has to match the CUDA version on FreeBSD.

So you install CUDA 12.4 and it doesn't work with the virtual env ("no cuda found");
if you install the latest version, CUDA 12.8, that stops the GPU from working.

e.g. running

Code:
nvidia-smi

fails because it can't connect to the Nvidia GPU due to the mismatch between CUDA 12.8 in the jail
and CUDA 12.4 on FreeBSD.

I tried with both Ubuntu 24.04 and Ubuntu 24.10:
same issue.

So jails don't work because of the CUDA mismatch,
the Linuxulator with CentOS fails due to an old library,
and a manual install of Rocky in the Linuxulator fails because the bash shell can't find the libraries.

Using a pip virtual env on FreeBSD fails because there is no torch binary,
and trying to use the FreeBSD torch packages with the virtual env fails.

Also, ComfyUI requires some additional packages on top of torch
which aren't available on FreeBSD, so trying to use the FreeBSD torch packages won't work.

I can't use a bhyve VM because I only have one GPU, so I can't pass it through to the VM.

It may be possible to use the linux-rl9 package
and then use its bash shell to set up Miniconda,

but I can't uninstall linux-c7, because then it would remove the linux-nvidia-libs package.

There is supposed to be a way to use poudriere to build a package and specify using linux-rl9
instead of linux-c7, but I couldn't get that to work either.
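
For anyone who wants to try the same thing, the relevant knob should be the ports DEFAULT_VERSIONS setting in poudriere's make.conf; a sketch of what I attempted (your poudriere.d path may differ):

Code:
# /usr/local/etc/poudriere.d/make.conf
DEFAULT_VERSIONS+= linux=rl9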

So at present I can't see a way to install applications like ComfyUI
or WhisperX on FreeBSD.

I have been through dozens of different permutations trying to create a Python env
that uses the GPU and CUDA:

chroots, jails and natively on FreeBSD.

I also looked at installing Linux on an external drive,
but that won't work because Ubuntu adds an entry to the UEFI bootloader,

which means you always have to have that drive plugged in or FreeBSD will fail to boot,
because it can't find the external drive which is listed in the UEFI bootloader.


So, in short, I have drawn a blank.
 
It's not just FreeBSD. I see some of the same problems on Linux too. And on my personal Mac, the problems don't appear, but only for a reason that will be explained below.

The root of the problem is that there are multiple systems used to install "packages". One is pkg, which is by definition OS-specific (only works on FreeBSD, duh). The other is pip, which is by definition language-specific (only does Python, duh). So even in a simpler world of running ONLY ONE operating system (FreeBSD) and ONLY ONE language (Python), it gets painful:
  1. For some packages, you can only use "pkg install", because they need FreeBSD-specific patches, and the generic pip version does not work.
  2. For some packages, you can only use pip, because nobody has bothered to make a FreeBSD pkg.
  3. And for some, you can choose either package manager, but you don't get the same version, usually because the FreeBSD pkg maintainer (a human volunteer) is running behind.
  4. And for some packages, it works perfectly either way.
Now when it comes time to do upgrades, you need to run "pkg update/upgrade" plus the pip magic required to upgrade everything pip-installed (which is not quite trivial: you first need to list the installed packages; see the sketch below). And every time you do this, some packages may migrate back and forth among the four categories described above.
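
The pip side usually ends up as a one-liner along these lines (a sketch, assuming a plain pip setup with no editable installs):

Code:
# list the installed packages, then upgrade each one in place
pip freeze --local | cut -d= -f1 | xargs -n1 pip install -U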

If you add virtual environments to the list, it gets even harder, because those four categories are now environment-specific.
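
To make that concrete: each venv answers the "which category is this package in?" question separately, and pkg-installed Python modules aren't even visible inside one unless you opt in. A minimal sketch:

Code:
python3 -m venv ~/venvs/demo    # add --system-site-packages to see pkg-installed modules
. ~/venvs/demo/bin/activate
pip list                        # the answer is now per-environment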

What is the fix? Tough question. Try to do updates; if all your applications work, go drink. If they don't, mess around (usually I move from pkg to pip, and that often helps), or go back to installing from source. I have exactly the same issues on Linux (Debian running on an RPi in my case). And there it's even harder than on FreeBSD, because there is no way to do a major version upgrade sanely, so I regularly have to wipe and reinstall, which increases the churn between the four categories above. The only reason this doesn't affect me on the Mac is: there I use no package mechanism other than pip for Python code, because on the Mac there don't seem to be any OS-specific patches needed.

In your case, it's even worse, since you have to drag a Linux install into the mix too.
 
really make you appreciate using pkg to install software
I think it comes down to using the native OS tool to install software: pkg for FreeBSD, apt for Ubuntu, yum or whatever the latest thing is for Red Hat, whatever applies for illumos, or "install one thing like Python and then use Python tools to install new Python packages".

I think using the native OS tool is best, but it means everything you need has to be available in that framework.
 
This problem is not isolated to Python. Node.js with npm is another; what about Rust and Go?
Language-based package managers are very messy, fragile things. And yet so many developers (particularly from the web ecosystems) love them.

One of C and C++'s main strengths is that there is no standard package manager. A standard one would be impossible, because it couldn't deliver the OS-specific patches.

But ultimately, any language that isn't C or C-based needs bindings to access the computing platform. These silly package managers are just a poorly designed symptom of that.
 
really make you appreciate using pkg to install software
pkg(8) is nice, but what I used on the IBM mainframe, SMP/E, makes pkg pale in comparison. YUM/DNF, which is built on top of RPM, implements a feature like the one on the mainframe: being able to list and undo transactions. (RPM can't do this, but YUM/DNF can.) It would be nice if pkg could undo a pkg upgrade or pkg install with something like pkg undo or pkg rollback.
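
For those who haven't seen it, the DNF version looks something like this (the transaction id comes from the history listing):

Code:
# list past package transactions, then roll one back
dnf history list
dnf history undo <transaction-id>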
 
I hear the same thing about all OSes and at least about all dynamically typed languages' "fetch me this library" managers.

The situation seems to be a bit better for statically typed languages such as Rust and Go.

Case in point:
"Is npm Enough? Why Startups Are Coming After This JavaScript Package Registry"
 
cracauer@ a related item is how many people using open source keep their own repos to have better control: better vetting of security items, better control over "not breaking things", better control over "someone deliberately screwing up upstream".

I think it often boils down to how much control you actually need. "Do I follow quarterly or latest?" is a common question here. Security updates are often key for commercial products, but "if I update OpenSSL, what else breaks?"

I think those aspects create the need/desire for the alternative registries. But in theory, isn't this what snap/flatpak/statically-linking-everything solves?
 
cracauer@ a related item is how many people using open source keep their own repos to have better control: better vetting of security items, better control over "not breaking things", better control over "someone deliberately screwing up upstream".

I think it often boils down to how much control you actually need. "Do I follow quarterly or latest?" is a common question here. Security updates are often key for commercial products, but "if I update OpenSSL, what else breaks?"

I think those aspects create the need/desire for the alternative registries. But in theory, isn't this what snap/flatpak/statically-linking-everything solves?

Yeah. There is a fundamental difference between FreeBSD ports and the Python and JavaScript repositories.

The latter usually have the package author also maintain the registry entry, so a single person can change everything all the way to the clients' computers. That is problematic when there is a credentials leak, a sold project or outright malice. Even temporary incompetence leads to big KABOOMs in many places.

In FreeBSD ports you have the original author, the port maintainer and the committer. While two of those might be the same person, only a small part of the ports tree has all three in the hands of a single person. Usually, to get bad code onto client machines, you need the original author to screw up and then the maintainer and committer not to notice before committing. It's not perfect, but a lot better.
 
The often-repeated Python upgrade cycle is an issue on any operating system with Python apps that don't keep pace with the Python releases.

Debian releases cannot, by policy. No wonder my old coworkers complain. At least FreeBSD can update Python whenever the dependency weather is right.
 
This is where containers come in handy. They bundle all dependencies and isolate them from the rest of the system.

Linux distros no longer package cloud service provider CLI packages. There are far too many dependencies to take care of, and the versions are all incompatible.
 
This is where containers come in handy. They bundle all dependencies and isolate them from the rest of the system.

Linux distros no longer package cloud service provider CLI packages. There are far too many dependencies to take care of, and the versions are all incompatible.
Using containers as a solution to that specific problem is basically admitting defeat: poor engineering is taking place!
  • Reusable libraries are great
  • Dependencies are bad

Unfortunately, they are the same thing. It is important for the developer to make a good decision and not drag in, e.g., the whole of NumPy just to interface with some contiguous C data (I see this one a lot).
 
Spot on about containers.

We do have Podman on FreeBSD, which I briefly tried.

However, I did run into issues trying to run some Linux Podman containers on FreeBSD;
it may have been because they used systemd.

On Linux you have the NVIDIA Container Toolkit, which is installed on the host
and can be used to pass through the GPU to the container,

but there isn't a version of the NVIDIA Container Toolkit for FreeBSD.
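
For comparison, the Linux flow is roughly this (a sketch; it assumes Podman plus the toolkit's nvidia-ctk tool with CDI support):

Code:
# generate a CDI spec for the installed driver, then expose the GPU to a container
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
podman run --rm --device nvidia.com/gpu=all docker.io/library/ubuntu nvidia-smi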


I'm not sure if it would be possible to pass through the GPU from FreeBSD to a Podman container
without the NVIDIA Container Toolkit or something similar,

but if it were, it could be used to run a lot of Python applications like ComfyUI or WhisperX on FreeBSD.
You can also run things like DaVinci Resolve in a Podman container.

Having Podman containers we could just pull down and run for these kinds of applications would be extremely useful.
 
Containers are a way to make things work fast in exchange for more maintenance. Instead of having one machine to update, you suddenly have, say, 5.

If you don't update, I suppose that doesn't matter, but that's not how I operate, and it seems I am not alone here.

It also doesn't solve the original problem discussed here, which is the OS providing one dependency chain for Python when what you need is a different one. The base of the container is still an OS distribution. You don't magically get an OS distribution with the right Python chain.
 
Using containers as a solution to that specific problem is basically admitting defeat: poor engineering is taking place!
  • Reusable libraries are great
  • Dependencies are bad

Unfortunately, they are the same thing. It is important for the developer to make a good decision and not drag in, e.g., the whole of NumPy just to interface with some contiguous C data (I see this one a lot).

And sometimes you have to admit defeat... The earlier the better.

Containers solve all the issues with Python dependencies because the base image comes with Python and then you can install the dependencies with pip in an isolated virtual environment. You wouldn't want a rogue dependency stealing your data or cloud credentials.

All Linux distros lack the manpower to package each and every Python dependency, and it's not easy to make them all work for all software.

Some distros gave up on packaging complex packages such as the AWS CLI (same for Azure & GCP) because they have lots of incompatible dependencies. So people opt for downloading a bundled package from upstream, or just use the Docker image, which solves the issue.

And the problem is not just Python, but Python is the worst, as they're deprecating features and renaming parameters all the time. So the only solution here is a bundle in the form of an OCI image that also provides some isolation.
 
Containers are a way to make things work fast in exchange for more maintenance. Instead of having one machine to update, you suddenly have, say, 5.

If you don't update, I suppose that doesn't matter, but that's not how I operate, and it seems I am not alone here.

It also doesn't solve the original problem discussed here, which is the OS providing one dependency chain for Python when what you need is a different one. The base of the container is still an OS distribution. You don't magically get an OS distribution with the right Python chain.
Containers are not supposed to be treated as pets, but cattle. They're supposed to be ephemeral. You can recreate them with `podman build` from the Dockerfile.
 
Containers are not supposed to be treated as pets, but cattle. They're supposed to be ephemeral. You can recreate them with `podman build` from the Dockerfile.

But if you allow software updates by doing so, you could invite breakage. So it is a kind of manual process, which you have to do <n> times for all the containers.
 
But if you allow software updates by doing so, you could invite breakage. So it is a kind of manual process, which you have to do <n> times for all the containers.
In the case of Python containers, if you pin the specific versions with `pip freeze` you're guaranteed to get the same Python dependencies.
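
Roughly like this (a sketch; the pinned requirements.txt is then baked into the image at `podman build` time):

Code:
# in the working environment: record exact versions
pip freeze > requirements.txt
# in the container build: install exactly those versions
pip install --no-cache-dir -r requirements.txt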

There are two types of containers: system containers, which resemble FreeBSD jails, and application containers. Software updates are only needed for system containers. With application containers you can just recreate them from scratch. The build won't take that much time, as the OCI layers are cached. If the base image wasn't updated and there are no updates, then the new image is built instantly.

System containers can be treated as pets. Application containers should be treated as cattle.
 