Hey Khadas team — I wanted to share an idea that I think could make the Mind ecosystem even more powerful and appealing: add OCuLink support.
Right now, the only real high-speed connectivity on Mind devices is USB and HDMI (and DisplayPort with Mind Graphics), which is fine for general use but limiting for high-performance expansion. OCuLink, which carries PCIe directly over cable, could open up a world of possibilities — GPUs, FPGAs, NVMe arrays, high-speed NICs, capture cards, robotics controllers, you name it — all without relying solely on Khadas-made Mind Link modules.
And about Mind Link: it's billed as open-source, but right now the spec isn't publicly available and requires an approval process to access. If that spec were out in the wild, it would empower third-party developers to build their own Mind Link-to-OCuLink adapters. That would avoid hacky M.2-to-OCuLink workarounds (which cost performance) and instead provide direct, high-bandwidth PCIe connectivity. Even better, you could offer an official Mind Dock that's just an OCuLink breakout, with no GPU baked in, so users can connect whatever PCIe device they want.
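For context on the bandwidth at stake: OCuLink typically carries a PCIe x4 link over the SFF-8612 connector. Here's a rough back-of-the-envelope sketch (my assumption: a PCIe 4.0 x4 link, counting only 128b/130b line-coding overhead and ignoring packet-level overhead) of why that beats USB for expansion:

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int, encoding: float) -> float:
    """Approximate usable bandwidth in GB/s (1 GB = 1e9 bytes).

    gt_per_s: per-lane transfer rate in GT/s
    lanes:    link width
    encoding: line-coding efficiency (e.g. 128/130 for PCIe 3.0/4.0)
    """
    bits_per_s = gt_per_s * 1e9 * lanes * encoding
    return bits_per_s / 8 / 1e9

# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding
gen4_x4 = pcie_bandwidth_gbps(16, 4, 128 / 130)
print(f"PCIe 4.0 x4 (typical OCuLink) ~ {gen4_x4:.2f} GB/s")  # ~7.88 GB/s
```

That's roughly 7.9 GB/s each way, versus the ~5 GB/s ceiling of a 40 Gbps USB4 link, and without the tunneling latency, which is exactly what GPUs, NVMe arrays, and fast NICs care about.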
This would position Mind as a true modular PCIe host instead of just a proprietary modular PC. It would also align with what other mini-PC makers (MinisForum, ASRock, GPD, etc.) are doing, but with Khadas’s own clean, well-engineered take. The market for OCuLink is growing fast among enthusiasts, AI researchers, and industrial users, and it could pull more power users into the Mind ecosystem while enabling rapid adoption in niche areas without you having to build every module yourselves.
In short — open up the spec, add OCuLink, and give Mind owners the freedom to plug in any PCIe hardware. It’s a win for Khadas, a win for users, and a huge boost to the Mind brand.
On a personal note, I'm actively exploring ways to run four RTX 3090 GPUs (paired with NVLink bridges) behind an active PCIe switch, so that all of them sit under the same PCIe Root Complex. The switch is sufficient for my purposes, since most of the traffic is GPU-to-GPU within PyTorch pipelines. I really love my Khadas Mind 2 and want to extend its use into everything else I'm doing. Native OCuLink support in the Mind ecosystem would make high-end setups like this far easier and more efficient to build.