Ollama set up on FreeBSD to run LLMs, with Emacs and gptel as a front end

Ollama runs large language models on your computer, including deepseek-r1, deepseek-coder, mistral and zephyr.

In this video I install Ollama on FreeBSD 14.2 (quarterly release) on a 2019 Dell XPS 15 with an NVIDIA GeForce GTX 1650 GPU and 16 GB of RAM, using the 550.127.05 NVIDIA driver.
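The install itself comes down to a couple of commands. A minimal sketch, assuming the ollama package is available in the quarterly pkg repository (the package name may differ on your system):

```shell
# Install Ollama from the FreeBSD package repository
# (assumes an ollama package exists in the quarterly repo).
pkg install -y ollama

# Confirm the NVIDIA kernel module is loaded before running models on the GPU.
kldstat | grep nvidia
```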


I also cover running the ollama server, pulling and running models, and setting up Emacs with the gptel package as a front end.
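The server and model workflow looks roughly like this (mistral is just an example model from the list above):

```shell
# Start the Ollama server; by default it listens on 127.0.0.1:11434.
ollama serve &

# Download a model, then chat with it interactively in the terminal.
ollama pull mistral
ollama run mistral

# Show which models are already downloaded.
ollama list
```

Once `ollama serve` is running, gptel in Emacs talks to the same local endpoint.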

As well as taking a look at Google Gemini and how to create an access token you can use with an authinfo file in Emacs.
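A minimal init.el sketch for gptel with both backends; the model names and the authinfo machine name are assumptions, so check the gptel README against your setup:

```elisp
;; ~/.emacs.d/init.el -- minimal gptel sketch (model names are assumptions).
(use-package gptel
  :config
  ;; Local Ollama backend on the default port.
  (setq gptel-model 'mistral:latest
        gptel-backend (gptel-make-ollama "Ollama"
                        :host "localhost:11434"
                        :stream t
                        :models '(mistral:latest)))
  ;; Google Gemini backend; gptel looks the key up via auth-source,
  ;; i.e. a line in ~/.authinfo.
  (gptel-make-gemini "Gemini"
    :key gptel-api-key
    :stream t))
```

The matching ~/.authinfo entry is a single line of the form `machine generativelanguage.googleapis.com login apikey password <your-key>`, with `<your-key>` replaced by the token created in the Google console.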

ollama

ollama github

ollama notes on github

ollama GPUs

gptel notes on github

emacs init.el

ollama-server
 