Unlocking the Mind’s Potential: Request for OCuLink Support and Open Mind Link Spec

Hey Khadas team — I wanted to share an idea that I think could make the Mind ecosystem even more powerful and appealing: add OCuLink support.

Right now, the only real high-speed connectivity on Mind devices is USB and HDMI (and DisplayPort with Mind Graphics), which is fine for general use but limiting for high-performance expansion. OCuLink, which carries PCIe directly over cable, could open up a world of possibilities — GPUs, FPGAs, NVMe arrays, high-speed NICs, capture cards, robotics controllers, you name it — all without relying solely on Khadas-made Mind Link modules.
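For a rough sense of the headroom this would add, here's a back-of-the-envelope bandwidth calculation for PCIe-over-cable links like OCuLink. This is a sketch using theoretical per-lane rates after 128b/130b line coding (PCIe 3.0 and later); real-world throughput is lower due to protocol overhead.

```python
# Approximate one-direction bandwidth of a PCIe link, e.g. over OCuLink.
# Per-lane raw rates in giga-transfers/s for PCIe generations 3-5.
PER_LANE_GT_S = {3: 8.0, 4: 16.0, 5: 32.0}

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s after line coding."""
    encoding = 128 / 130  # 128b/130b coding overhead (PCIe 3.0+)
    return PER_LANE_GT_S[gen] * encoding * lanes / 8  # bits -> bytes

# OCuLink cabling is commonly wired as x4 (4i) or x8 (8i):
print(round(pcie_bandwidth_gbps(4, 4), 2))   # PCIe 4.0 x4 -> 7.88 GB/s
print(round(pcie_bandwidth_gbps(4, 8), 2))   # PCIe 4.0 x8 -> 15.75 GB/s
```

Even a single x4 link comfortably outruns USB4's nominal 40 Gb/s (~5 GB/s), which is why OCuLink is popular for eGPU and NVMe expansion.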

And about Mind Link — it’s called open-source, but right now the spec isn’t publicly available without an approval process. If that spec were out in the wild, it would empower third-party developers to build their own Mind Link-to-OCuLink adapters. That would avoid hacky M.2-to-OCuLink workarounds (which sacrifice performance) and instead provide direct, high-bandwidth PCIe connectivity. Even better, you could offer an official Mind Dock that’s just OCuLink — no GPU baked in — so users can connect whatever PCIe device they want.

This would position Mind as a true modular PCIe host instead of just a proprietary modular PC. It would also align with what other mini-PC makers (MinisForum, ASRock, GPD, etc.) are doing, but with Khadas’s own clean, well-engineered take. The market for OCuLink is growing fast among enthusiasts, AI researchers, and industrial users, and it could pull more power users into the Mind ecosystem while enabling rapid adoption in niche areas without you having to build every module yourselves.

In short — open up the spec, add OCuLink, and give Mind owners the freedom to plug in any PCIe hardware. It’s a win for Khadas, a win for users, and a huge boost to the Mind brand.

On a personal note, I’m actively exploring ways to run four RTX 3090 GPUs (with NVLink bridges) behind an active PCIe switch, so that all of them sit under the same PCIe root complex. The switch’s bandwidth is sufficient for my purposes, since most of the traffic is GPU-to-GPU within PyTorch pipelines. I really love the Khadas Mind 2s and want to extend its use into everything else I’m doing. Native OCuLink support in the Mind ecosystem would make high-end setups like this far easier and more efficient to build.

Hi~

So you’re saying we should build a more universal expansion board that lets everyone pick and plug in the modules they need—something like adding a discrete GPU or extra storage via PCIe, right? :wink:


Yes, exactly — especially with an industry-standard PCIe-over-cable solution like OCuLink. Is something like this already on your roadmap, or is it something you’d need more community demand for before exploring?


Sorry for this late reply. I forgot to log in to the forum for a while.

To be honest, we haven’t included OCuLink in our plans yet. We are concerned that OCuLink might conflict with Mind Link, and it also comes with more constraints in terms of performance and application scenarios. For these reasons, we haven’t made up our minds to develop OCuLink — but we haven’t ruled it out either. We look forward to receiving more feedback from you and other community users to help us clarify our thinking.