ollama not working on FreeBSD 14.2

Hello experts,

I've come here to find some support for running ollama on FreeBSD (14.2-RELEASE).

I have an AMD CPU in my PC; everything was fine after installing FreeBSD (14.2-RELEASE).
Then I wanted to run ollama and deepseek, step by step like this:
#pkg install ollama
--this step is fine; the ollama pkg version is 0.3.6_4
#ollama serve
--this works; the output looks like this:
Code:
2025/04/24 22:40:44 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
......
time=2025-04-24T22:40:44.844+08:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-04-24T22:40:44.845+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1540461448/runners
time=2025-04-24T22:40:44.919+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 vulkan]"
time=2025-04-24T22:40:44.920+08:00 level=INFO source=types.go:105 msg="inference compute" id="" library=cpu compute="" driver=0.0 name="" total="23.8 GiB" available="918.2 MiB"
[GIN] 2025/04/24 - 22:43:37 | 200 |     140.924µs |       127.0.0.1 | GET      "/api/version"
#ollama run deepseek-r1:1.5b
--this step fails with an error like this:
Code:
Error: llama runner process has terminated: error loading model: vk::createInstance: ErrorIncompatibleDriver
llama_load_model_from_file: exception loading model
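My guess is that ollama picked the vulkan runner but cannot create a Vulkan instance, i.e. it does not find a compatible Vulkan driver/ICD on this machine. Two things I could try to narrow it down, assuming graphics/vulkan-tools provides vulkaninfo here and that the OLLAMA_LLM_LIBRARY setting shown in the server config above really forces the runner choice:
Code:
#pkg install vulkan-tools
#vulkaninfo
#env OLLAMA_LLM_LIBRARY=cpu_avx2 ollama serve
If the model loads with the CPU-only runner, that would at least suggest the problem is only the missing or incompatible Vulkan driver rather than ollama itself.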

Could you give me some suggestions on how to fix this?
Additional information: the "ollama -v" command output looks like this:
Code:
root@home:~ # ollama -v
ollama version is 0.0.0

I don't know what is causing this.
 
Hm. When I tried to fire up a query in codebooga with an AMD integrated graphics chip, my wifi stack crashed and I lost connectivity. That's on 15-CURRENT.

The query did succeed, though.

Code:
% ollama -v
ollama version is 0.0.0
 