DeepSeek and Qwen are truly amazing!

thank you for sharing the name of the project
I still have it on my TODO list. Recently I bought an OrangePi 5 Max, which has the RK3588 SoC in it. It's far from ideal, though, how Rockchip (and others along the chain) handle the support. It's half-baked, half-glued software with blob bootcode from China. Even u-boot relies on Rockchip's blob SPL. I treat it with a certain level of scepticism. And yes, I know even the RPi is not open in this regard.
It's a powerful little thingy, though.

The starting point for this project in my case: rknn. But basically I'd need to google how to set up a local LLM with this chipset. There are some YT videos for this board where people tested chatbots on it.
 
That's probably the best usage of AI I've read so far; it gives hope. Thank you for sharing the name of the project, because I had never heard of it until today.
Google being Google, let's hope they won't abandon it once the hype around AI fades.

Yeah, there's Google's "oh, squirrel" mentality.

I hope somebody hacks up DeepSeek so that it can do the same thing as NotebookLM, self-hosted. That would give future-proofing.
 
I looked over the 20+ years' worth of PDFs I've collected. I don't want to delete them, but frankly I never look at them. If I need something, I google it.
Same here.

When I want to study something, I learned long ago to search through my sorted directories of these things from the terminal. I find things that are out of date or not what I thought they were, so I delete them. Often I find the solution to what I'm looking for. Using a graphical interface, like a desktop file manager, blurs it all and I don't do anything.

I use find and grep a lot.
 
I use find and grep a lot.
I agree, I do the same. I have my private/notes dir with all the various notes (plain files) I've created over the years. Grepping around that is easy and does the job perfectly. The day I learned of grep's -H, my life got easier too. ;)
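For anyone curious, the find-plus-grep workflow described above can be sketched like this (the directory and note contents are made up for the demo):

```shell
# Demo notes directory with made-up plain-text notes
mkdir -p /tmp/notes-demo
echo "zfs: how to roll back a snapshot" > /tmp/notes-demo/zfs.txt
echo "bhyve: tap interface bridging"    > /tmp/notes-demo/bhyve.txt

# find hands the files to grep; -H forces the filename prefix on
# every match, even when grep is invoked on a single file
find /tmp/notes-demo -type f -name '*.txt' -exec grep -H 'bhyve' {} +
# prints: /tmp/notes-demo/bhyve.txt:bhyve: tap interface bridging
```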

And while you could also run pdftotext and grep the output (which doesn't solve the issue with a PDF full of pics), it's just easier to google/search online. At least that's what I keep doing.
If (to be on the safe side, not saying when) I do play with these models, I will share my experience.
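A minimal sketch of that pdftotext-plus-grep idea, assuming poppler's `pdftotext` is installed and the PDFs live in `~/pdfs` (both assumptions; `--label` is a GNU grep extension that puts a name on stdin matches):

```shell
# Search every PDF in a directory for a keyword by converting each
# one to text on the fly; '-' makes pdftotext write to stdout.
for f in "$HOME"/pdfs/*.pdf; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    pdftotext "$f" - 2>/dev/null | grep -H --label="$f" 'bhyve'
done
```

As noted above, this finds nothing in scanned PDFs that are just pictures of pages; those would need OCR first.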
 
I still have it on my TODO list. Recently I bought an OrangePi 5 Max, which has the RK3588 SoC in it. It's far from ideal, though, how Rockchip (and others along the chain) handle the support. It's half-baked, half-glued software with blob bootcode from China. Even u-boot relies on Rockchip's blob SPL. I treat it with a certain level of scepticism. And yes, I know even the RPi is not open in this regard.
It's a powerful little thingy, though.

The starting point for this project in my case: rknn. But basically I'd need to google how to set up a local LLM with this chipset. There are some YT videos for this board where people tested chatbots on it.

I'm building my own tablet based on FreeBSD, equipped with the RockPro64's RK3399, but I see that your rknn tool does not support it. Why?
 
Yeah, there's Google's "oh, squirrel" mentality.

I hope somebody hacks up DeepSeek so that it can do the same thing as NotebookLM, self-hosted. That would give future-proofing.

Hello Master. I read that DeepSeek requires a very low number of GPUs to train on a dataset. Has it reached the point where it can be done using only a GPU like the RTX 2080 Ti? Or maybe a 4090, or only a couple of them? My dream is always the same: to train on a dataset of the raw commands used by bhyve, to create an AI that can do what we want starting from a well-formatted prompt. I think this is the dream of a lot of virtualization lovers.
 
Hello Master. I read that DeepSeek requires a very low number of GPUs to train on a dataset. Has it reached the point where it can be done using only a GPU like the RTX 2080 Ti? Or maybe a 4090, or only a couple of them? My dream is always the same: to train on a dataset of the raw commands used by bhyve, to create an AI that can do what we want starting from a well-formatted prompt. I think this is the dream of a lot of virtualization lovers.

No. It still uses datacenters full of GPUs for training, just fewer of them. When you install it locally, you download it along with the already-trained model weights.

Apart from the computational load, you would also have to download most of the text on the Internet to train it yourself.

The interesting part of playing with it isn't the training, though. You can use and analyze it just by running inference locally. A beefy NVIDIA GPU is enough for that; you don't need a 4090.
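As a rough back-of-the-envelope check (my own rule of thumb, not a benchmark): inference memory is roughly parameter count times bytes per weight, plus some overhead for the KV cache and activations. The 20% overhead factor below is an assumption; real usage varies with context length.

```python
def vram_estimate_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Very rough VRAM needed to run inference, in GB.

    Weights: 1e9 params * (bits / 8) bytes = n_billion * bits / 8 GB.
    The 1.2 factor is an assumed ~20% overhead for KV cache and
    activations.
    """
    weights_gb = n_params_billion * bits_per_weight / 8
    return round(weights_gb * 1.2, 1)

# A distilled 7B model at 4-bit quantization fits in the 11 GB of a
# 2080 Ti; the full-size 671B DeepSeek weights do not fit in any
# single consumer GPU.
print(vram_estimate_gb(7, 4))    # -> 4.2
print(vram_estimate_gb(70, 4))   # -> 42.0
print(vram_estimate_gb(671, 4))  # -> 402.6
```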
 