I currently need some help setting up distcc for compiling. I've worked through many different documents myself, but the build still isn't compiling with distcc. Can anyone point me to a suitable guide for setting it up? @hyphop @numbqq
So I've been able to set up distcc and it's working fantastically.
I slimmed the cluster down to 4 nodes instead of the old 5, as it's a bit easier to maintain.
With distcc, the cluster's compilation performance is similar to that of a Xeon E5-2630 v4 while consuming much less power.
This shows the cluster is working efficiently, and I'm very happy to be using it.
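For reference, the setup boils down to something like this; a rough sketch only, the IP addresses and job counts below are just placeholders for my network, so adjust them to yours:

```
# on each VIM3 worker node: install distcc and start the daemon,
# allowing connections from the LAN and limiting concurrent jobs
sudo apt install distcc
distccd --daemon --allow 192.168.1.0/24 --jobs 6

# on the node driving the build: list the workers (with job slots)
# and tell make to compile through distcc
export DISTCC_HOSTS="192.168.1.11/6 192.168.1.12/6 192.168.1.13/6 192.168.1.14/6"
make -j24 CC="distcc gcc" CXX="distcc g++"
```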
Dear @Electr1, have you tested your VIM3 cluster for deep learning purposes?
I'm planning to build a neural machine learning cluster for RNN jobs.
I'm very interested in using the VIM3 because it has a Neural Processing Unit and the sale price makes it worthwhile for RNN work.
I haven't found another SoC solution that offers better value than the VIM3.
That's actually on my list to try out.
I'm not very experienced with neural network models and training, but I am aware that with MPICH/mpi4py and the KSNN toolkit it's possible to run the application concurrently and utilise the NPUs across multiple VIM boards, something like the sketch below.
If you have an idea about the prerequisites, please feel free to share; it's a good learning experience as well.
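The rough shape of what I have in mind is this minimal sketch; `run_npu_inference` here is just a hypothetical placeholder for the actual KSNN inference call on each board, which I haven't written yet:

```python
# Sketch: spread inference jobs across the cluster with mpi4py,
# one MPI rank per VIM3 board, each using its local NPU.
from mpi4py import MPI

def run_npu_inference(image_path):
    # hypothetical placeholder: on a real board this would call into the
    # KSNN toolkit to run the converted model on the local NPU
    return (image_path, "result")

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # rank 0 splits the workload into one chunk per board
    images = ["img_%03d.jpg" % i for i in range(32)]
    chunks = [images[i::size] for i in range(size)]
else:
    chunks = None

# each rank receives its own chunk and processes it locally
my_images = comm.scatter(chunks, root=0)
my_results = [run_npu_inference(p) for p in my_images]

# collect everything back on rank 0
all_results = comm.gather(my_results, root=0)
if rank == 0:
    print(sum(len(r) for r in all_results), "images processed")
```

Launched with something like `mpiexec -n 4 -f hosts python3 infer.py`, each board would pick up its own share of the work.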
@Electr1 I'm excited about it too.
Could you please run some benchmarks on your Khadas VIM3 to compare it to Colab and Kaggle? Feel free to record the results in a Colab notebook as well.
It would be good to compare the performance of Google Colab free/pro against the Khadas VIM3.
I'm trying to figure out the most cost-effective way to train RNNs: whether to buy a Khadas VIM3 or Google Colab Pro.
I don't think that benchmark will run directly on the cluster; custom software has to be written to interact with the internal NPUs on the VIM3, which isn't required for the Colab server.
Also, the AI inference in the test you saw running appears to be CUDA-powered, which is definitely more powerful than an individual VIM3.
@Frank I did some research on GPUs, TPUs and NPUs as you suggested. As we expected, the TPU (Google's NPU technology) is, as far as we know, much faster than the SoC NPU in the Khadas VIM3. I'm now convinced it's better to buy Google Colab Pro to run my NN or RNN models on a TPU. Thank you, I appreciate your advice.