Hello, has anyone already tried the VIM3 to run one of the recent lightweight AI models, such as TinyZero or DeepSeek in its smallest variant? Jiayi-Pan/TinyZero: Clean, accessible reproduction of DeepSeek R1-Zero
Hello @xmesaj2
I ran deepseek-r1 on the VIM3 via Ollama. The rate is about 1 token/s.
khadas@Khadas:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux arm64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
[sudo] password for khadas:
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
khadas@Khadas:~$
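As the installer notes, the Ollama API is now listening on 127.0.0.1:11434, so you can also query the model over HTTP instead of the interactive prompt. The `/api/generate` endpoint streams newline-delimited JSON objects, each carrying a `response` text fragment and a final object with `"done": true`. A minimal sketch of reassembling such a stream (the sample lines below are illustrative, not captured from the VIM3):

```python
import json

def assemble_stream(ndjson_lines):
    """Join the "response" fragments from Ollama's streaming NDJSON reply."""
    parts = []
    for line in ndjson_lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)

# Illustrative sample of what /api/generate streams back (made up, not real output):
sample = [
    '{"model":"deepseek-r1:1.5b","response":"Hello","done":false}',
    '{"model":"deepseek-r1:1.5b","response":"!","done":false}',
    '{"model":"deepseek-r1:1.5b","response":"","done":true}',
]
print(assemble_stream(sample))  # -> Hello!
```

In a real session you would POST `{"model": "deepseek-r1:1.5b", "prompt": "..."}` to `http://127.0.0.1:11434/api/generate` and feed the response lines into a function like this.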
khadas@Khadas:~$ ollama run deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 100% ▕████████████████▏ 1.1 GB
pulling 369ca498f347... 100% ▕████████████████▏ 387 B
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕████████████████▏ 148 B
pulling a85fe2a2e58e... 100% ▕████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
>>>
>>> hello, how are you
<think>
</think>
Hello! I'm just a virtual assistant, so I don't have feelings, but I'm
here and ready to help you. How are *you* doing? 😊
>>> what's your name
<think>
</think>
Greetings! My name is DeepSeek-R1-Lite-Preview. I'm an AI assistant
created by DeepSeek. I'm at your service and would be delighted to assist
you with any inquiries or tasks you may have.
>>> Send a message (/? for help)
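The ~1 token/s figure is easy to verify: `ollama run deepseek-r1:1.5b --verbose` prints timing stats after each reply, and the API's final streamed object includes an `eval_count` (generated tokens) and `eval_duration` (in nanoseconds). A small sketch of the arithmetic, with hypothetical numbers roughly matching what I saw on the VIM3:

```python
def tokens_per_second(final_obj):
    """Generation speed from Ollama's final streamed stats.

    eval_duration is reported in nanoseconds, so scale by 1e9.
    """
    return final_obj["eval_count"] / final_obj["eval_duration"] * 1e9

# Hypothetical stats, not captured from the board:
stats = {"eval_count": 120, "eval_duration": 118_000_000_000}
print(round(tokens_per_second(stats), 2))  # -> 1.02
```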