Anyone tried installing LLaMA and/or Dalai?

Which system do you use? Android, Ubuntu, OOWOW or others?

Ubuntu 22.04.2

Which version of system do you use? Khadas official images, self built images, or others?

Original images from OOWOW

Please describe your issue below:

I have been wanting to install the LLaMA and Dalai engines to test this browser-based setup, with no success.

Post a console log of your issue below:

Need to install the following packages:
dalai
Ok to proceed? (y) y
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: 'dalai@0.3.1',
npm WARN EBADENGINE required: { node: '>=18.0.0' },
npm WARN EBADENGINE current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: 'terminal-kit@3.0.0',
npm WARN EBADENGINE required: { node: '>=16.13.0' },
npm WARN EBADENGINE current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: 'webtorrent@1.9.7',
npm WARN EBADENGINE required: { node: '>=14' },
npm WARN EBADENGINE current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: 'seventh@0.8.2',
npm WARN EBADENGINE required: { node: '>=16.13.0' },
npm WARN EBADENGINE current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: 'string-kit@0.17.8',
npm WARN EBADENGINE required: { node: '>=14.15.0' },
npm WARN EBADENGINE current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }


I forgot to update this post: I compared the required module versions against the current ones, updated all of them, and made progress with the install (still five hours to go for the libraries). I would still like to hear from other users who are trying to use the NPU and related processes. Thanks!
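For anyone hitting the same EBADENGINE warnings: they mean the system Node.js (v12.22.9 in the log) is older than what dalai declares (`node: >=18.0.0`). A minimal sketch of checking the installed major version before retrying; the nvm commands in the comments are an assumption, one common way to upgrade, not the only one:

```shell
#!/bin/sh
# The EBADENGINE warnings are non-fatal on their own, but dalai declares
# "node: >=18.0.0", so a v12 runtime will break sooner or later.
# One common fix (assumption: using nvm) before re-running the install:
#   curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
#   nvm install 18 && nvm use 18

# Extract the major version from a "node -v"-style string.
node_major() {
  echo "$1" | sed 's/^v//' | cut -d. -f1
}

ver="v12.22.9"   # sample value taken from the log; use "$(node -v)" live
if [ "$(node_major "$ver")" -ge 18 ]; then
  echo "node $ver satisfies dalai's engine requirement"
else
  echo "node $ver is too old for dalai (needs >=18)"
fi
```

With the sample value above, this prints the "too old" branch, matching what the warnings were saying.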


Just to add that I was able to get it all running; the problem was broken files in the download. I have a good set of instructions for getting it working, but I am still surprised there are not more people here using the NPU for AI modeling and queries. Not enough interest to set up a topic?

I’m very interested in running LLMs locally on the RK3588. Currently I’m trying online models and frameworks (LangChain, Whisper, OpenAI), and I want to play with local models as well.

Cool, I have locally installed and modded my LLM to run off a 128 GB SD card. I followed these instructions with some minor tweaks. Let me know if you get it going!
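Running the models off an SD card usually just comes down to moving the weight files and leaving a symlink at the original path so the engine still finds them. A minimal sketch; the directory names are assumptions for illustration, not the exact setup described above:

```shell
#!/bin/sh
# Relocate model weights to roomier storage and symlink the old path,
# so the engine keeps resolving the directory it expects.
relocate_models() {
  src="$1"    # original model dir, e.g. "$HOME/dalai/llama/models"
  dest="$2"   # SD card target,     e.g. "/mnt/sdcard/models"
  mkdir -p "$dest"
  # Move everything over, drop the now-empty dir, leave a symlink behind.
  mv "$src"/* "$dest"/
  rmdir "$src"
  ln -s "$dest" "$src"
}
```

For example, `relocate_models "$HOME/dalai/llama/models" /mnt/sdcard/models` leaves the original path working through the symlink, so no engine config needs to change.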

and go down to the bottom for the FAQs


So nice! Let me catch up!

Hey, thanks for the guides @TechnoTarzan! I have a couple of Edge2s and was looking at getting them running through Docker containers. Does your setup leverage the NPU? Do you get any performance improvements? Any chance you could share your performance numbers here?

Cheers

Greets @goosems! I haven’t done any benchmarking per se, just looked at different ways to integrate the NPU. I have found some links that helped me a lot; I’m surprised there are not more Edge users taking advantage of this incredible feature. I guess I can establish a baseline for where I am now and tweak it to see if I get faster queries; let me look into it. Here are the links I used to create my setup… Be warned, it’s a lot of reading, but VERY specific, and it filled in a lot of gaps in my knowledge base…

Keep us posted on your progress! You’re one of the only people on this forum actually focusing on this part of the Edge! Cheers!!!

Hey thanks for getting back so quickly @TechnoTarzan!

Also, thanks for the resources here, it really helps. I am surprised as well, since model sizes are coming down fast and I have seen some effort to get things running on Pis. I’ll have a go at setting up the demos and playing around, and I’ll definitely post updates on progress!


Hey @TechnoTarzan, I haven’t had much time to dive too deep and my C++ knowledge is limited, but I found this fork of llama.cpp that someone is experimenting with.

Not sure if it is functional but worth a try: GitHub - marty1885/llama.cpp at rknpu2-backend

Cheers,
Goose

I fixed the link; the correct one is GitHub - marty1885/llama.cpp at rknpu2-backend

Thanks, Goose, for the reply and the link. I’m a bit late on the strategies, but I will take a look. I might have pushed my Edge2 a bit too hard: one day I woke up to find it totally dead, so I sent it back for analysis/repair. That’s why I have been offline in the community; I had no way to validate anything before sharing comments. I’m considering a Mind platform for more GPU power, but as a committed Linux user, Windows seems like letting the fox into the henhouse. Too much baked-in spyware… Any questions, fire away. Thanks again!