VIM3 and M.2 SSD - which ones work and which ones don't

Android will use a proper filesystem automatically, but on Linux you can use the filesystem you desire :wink:

It is possible, but as a rule Android asks to format the disk for itself, which is good!

Yes, it makes external storage media easier for people to use; no wonder Android is loved by many for being "easy to use, out of the box".

I had no luck with an SK Hynix PC401M280S NVMe SSD. The OS is Ubuntu with Linux 4.9.224 (the latest stable kernel). The drive is discovered on boot if I add the u-boot option "pci=pcie_bus_safe":

...
[    1.025347] loop: module loaded
[    1.026236] nvme nvme0: pci function 0000:01:00.0
[    1.026307] nvme 0000:01:00.0: enabling device (0000 -> 0002)
[    1.026660] mtdoops: mtd device (mtddev=name/number) must be supplied
...

as the kernel dmesg recommends:

amlogic-pcie-v2 fc000000.pcieA: the device class is not reported correctly from the register
pci 0000:00:00.0: [16c3:abcd] type 01 class 0x060400
pci 0000:00:00.0: reg 0x38: [mem 0x00000000-0x0000ffff pref]
pci 0000:00:00.0: supports D1
pci 0000:00:00.0: PME# supported from D0 D1 D3hot D3cold
pci 0000:01:00.0: [1c5c:1527] type 00 class 0x010802
pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
pci 0000:01:00.0: can't set Max Payload Size to 256; if necessary, use "pci=pcie_bus_safe" and report a bug
pci 0000:00:00.0: BAR 8: assigned [mem 0xfc700000-0xfc7fffff]
pci 0000:00:00.0: BAR 6: assigned [mem 0xfc800000-0xfc80ffff pref]
pci 0000:01:00.0: BAR 0: assigned [mem 0xfc700000-0xfc703fff 64bit]
pci 0000:00:00.0: PCI bridge to [bus 01-ff]
pci 0000:00:00.0:   bridge window [mem 0xfc700000-0xfc7fffff] chip type:0x29
Advanced Linux Sound Architecture Driver Initialized.
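Making the "pci=pcie_bus_safe" option persistent can be sketched like this. This is only a sketch under assumptions: on Khadas Ubuntu images the kernel command line usually lives in a `bootargs=` line in `/boot/env.txt`, but that path and variable name should be verified against your own image first.

```shell
#!/bin/sh
# add_bootarg FILE ARG: idempotently append ARG to the bootargs= line
# of FILE. The /boot/env.txt path and the bootargs= variable name are
# assumptions about the Khadas Ubuntu image -- check your /boot first.
add_bootarg() {
    grep -q "$2" "$1" || sed -i "s|^bootargs=.*\$|& $2|" "$1"
}

# Usage (as root, then reboot):
#   add_bootarg /boot/env.txt pci=pcie_bus_safe
```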

It can even be partitioned with the appropriate tool (parted /dev/nvme0n1), but after that it isn't really usable because of many errors like these:

...
[   40.733252] usb 1-1.2.4: USB disconnect, device number 15
[   40.984693] nvme nvme0: I/O 53 QID 1 timeout, aborting
[   40.987378] nvme nvme0: I/O 55 QID 1 timeout, aborting
[   40.992232] nvme nvme0: Abort status: 0x0
[   40.996378] nvme nvme0: Abort status: 0x0
[   41.000546] nvme nvme0: I/O 56 QID 1 timeout, aborting
[   41.068703] usb 1-1.2.4: new low-speed USB device number 16 using xhci-hcd
...
[   71.709328] usb 1-1.2.4: USB disconnect, device number 26
[   71.832693] nvme nvme0: I/O 53 QID 1 timeout, reset controller
[   71.836336] nvme nvme0: Abort status: 0x0
[   72.044713] usb 1-1.2.4: new low-speed USB device number 27 using xhci-hcd
...
[  131.377865] hid-generic 0003:1C4F:0034.002E: input,hidraw2: USB HID v1.10 Mouse [SIGMACHIP Usb Mouse] on usb-xhci-hcd.0.auto-1.2.4/input0
[  132.228814] nvme nvme0: completing aborted command with status: 0007
[  132.238003] blk_update_request: I/O error, dev nvme0n1, sector 2112
[  132.246947] nvme nvme0: completing aborted command with status: 0007
[  132.255866] blk_update_request: I/O error, dev nvme0n1, sector 2176
[  132.264727] nvme nvme0: completing aborted command with status: fffffffc
[  132.273919] blk_update_request: I/O error, dev nvme0n1, sector 2064
[  133.661274] usb 1-1.2.4: USB disconnect, device number 48
[  133.996706] usb 1-1.2.4: new low-speed USB device number 49 using xhci-hcd
...
[  187.676381] amlogic-pcie-v2 fc000000.pcieA: the device class is not reported correctly from the register
[  196.010451] amlogic-pcie-v2 fc000000.pcieA: the device class is not reported correctly from the register
[  244.534000]  nvme0n1: p1
...
[  301.080743] nvme nvme0: I/O 88 QID 1 timeout, aborting
[  301.080838] nvme nvme0: I/O 89 QID 1 timeout, aborting
[  301.080859] nvme nvme0: I/O 131 QID 1 timeout, aborting
[  301.080867] nvme nvme0: I/O 132 QID 1 timeout, aborting
[  301.080875] nvme nvme0: I/O 133 QID 1 timeout, aborting
[  301.080882] nvme nvme0: I/O 134 QID 1 timeout, aborting
[  301.080890] nvme nvme0: I/O 135 QID 1 timeout, aborting
[  301.080898] nvme nvme0: I/O 136 QID 1 timeout, aborting
[  332.056751] nvme nvme0: I/O 84 QID 1 timeout, reset controller
[  332.056797] nvme nvme0: Abort status: 0x0
[  332.056803] nvme nvme0: Abort status: 0x0
[  332.056807] nvme nvme0: Abort status: 0x0
[  332.056811] nvme nvme0: Abort status: 0x0
...
[  332.056826] nvme nvme0: Abort status: 0x0
[  392.324828] nvme nvme0: completing aborted command with status: 0007
[  392.324837] blk_update_request: I/O error, dev nvme0n1, sector 248109248
[  392.326072] Buffer I/O error on dev nvme0n1p1, logical block 31013400, lost async page write
[  392.334762] Buffer I/O error on dev nvme0n1p1, logical block 31013401, lost async page write
[  392.343335] Buffer I/O error on dev nvme0n1p1, logical block 31013402, lost async page write
[  392.352000] Buffer I/O error on dev nvme0n1p1, logical block 31013403, lost async page write
[  392.360450] Buffer I/O error on dev nvme0n1p1, logical block 31013404, lost async page write
[  392.369044] Buffer I/O error on dev nvme0n1p1, logical block 31013405, lost async page write
[  392.377701] Buffer I/O error on dev nvme0n1p1, logical block 31013406, lost async page write
[  392.386332] Buffer I/O error on dev nvme0n1p1, logical block 31013407, lost async page write
[  392.394736] Buffer I/O error on dev nvme0n1p1, logical block 31013408, lost async page write
[  392.403335] Buffer I/O error on dev nvme0n1p1, logical block 31013409, lost async page write
[  392.411994] nvme nvme0: completing aborted command with status: 0007
[  392.411997] blk_update_request: I/O error, dev nvme0n1, sector 248109504
[  392.418857] nvme nvme0: completing aborted command with status: 0007
[  392.418860] blk_update_request: I/O error, dev nvme0n1, sector 248109760
[  392.425680] nvme nvme0: completing aborted command with status: 0007
[  392.425683] blk_update_request: I/O error, dev nvme0n1, sector 248110016
[  392.432755] nvme nvme0: completing aborted command with status: 0007
[  392.432759] blk_update_request: I/O error, dev nvme0n1, sector 248110272
[  392.439370] nvme nvme0: completing aborted command with status: 0007
[  392.439373] blk_update_request: I/O error, dev nvme0n1, sector 248110784
[  392.446260] nvme nvme0: completing aborted command with status: 0007
[  392.446263] blk_update_request: I/O error, dev nvme0n1, sector 248111040
[  392.453204] nvme nvme0: completing aborted command with status: 0007
[  392.453209] blk_update_request: I/O error, dev nvme0n1, sector 248112064
[  392.459956] nvme nvme0: completing aborted command with status: 0007
[  392.459960] blk_update_request: I/O error, dev nvme0n1, sector 248112320
[  392.466819] nvme nvme0: completing aborted command with status: 0007
[  392.466821] blk_update_request: I/O error, dev nvme0n1, sector 248112576
[  392.473653] nvme nvme0: completing aborted command with status: 0007
[  392.473759] nvme nvme0: completing aborted command with status: 0007
[  392.473875] nvme nvme0: completing aborted command with status: 0007
[  392.473967] nvme nvme0: completing aborted command with status: 0007
[  392.474056] nvme nvme0: completing aborted command with status: 0007
[  392.474147] nvme nvme0: completing aborted command with status: 0007
[  392.474238] nvme nvme0: completing aborted command with status: 0007
[  392.474327] nvme nvme0: completing aborted command with status: 0007
[  392.474424] nvme nvme0: completing aborted command with status: 0007
[  392.474517] nvme nvme0: completing aborted command with status: 0007
[  392.474610] nvme nvme0: completing aborted command with status: 0007
...
[  392.524153] nvme nvme0: completing aborted command with status: 0007
[  392.524289] nvme nvme0: completing aborted command with status: fffffffc
[  422.904788] nvme nvme0: I/O 20 QID 1 timeout, aborting
[  422.904883] nvme nvme0: I/O 21 QID 1 timeout, aborting
[  422.904891] nvme nvme0: I/O 22 QID 1 timeout, aborting
[  422.904899] nvme nvme0: I/O 23 QID 1 timeout, aborting
[  422.904906] nvme nvme0: I/O 24 QID 1 timeout, aborting
[  422.904914] nvme nvme0: I/O 25 QID 1 timeout, aborting
[  422.904921] nvme nvme0: I/O 26 QID 1 timeout, aborting
[  422.904929] nvme nvme0: I/O 27 QID 1 timeout, aborting
[  453.880740] nvme nvme0: I/O 20 QID 1 timeout, reset controller
[  453.880784] nvme nvme0: Abort status: 0x0
[  453.880790] nvme nvme0: Abort status: 0x0
[  453.880794] nvme nvme0: Abort status: 0x0
[  453.880798] nvme nvme0: Abort status: 0x0
[  453.880802] nvme nvme0: Abort status: 0x0
[  453.880805] nvme nvme0: Abort status: 0x0
[  453.880809] nvme nvme0: Abort status: 0x0
[  453.880813] nvme nvme0: Abort status: 0x0
[  514.904719] nvme nvme0: I/O 115 QID 0 timeout, reset controller
[  515.204823] nvme nvme0: completing aborted command with status: 0007
[  515.204828] blk_update_request: 496 callbacks suppressed
[  515.204830] blk_update_request: I/O error, dev nvme0n1, sector 248348352
[  515.204831] nvme nvme0: completing aborted command with status: 0007
[  515.204835] blk_update_request: I/O error, dev nvme0n1, sector 248260032
[  515.204838] buffer_io_error: 16182 callbacks suppressed
[  515.204840] Buffer I/O error on dev nvme0n1p1, logical block 31032248, lost async page write
[  515.204846] Buffer I/O error on dev nvme0n1p1, logical block 31032249, lost async page write
[  515.204849] Buffer I/O error on dev nvme0n1p1, logical block 31032250, lost async page write
[  515.204851] Buffer I/O error on dev nvme0n1p1, logical block 31032251, lost async page write
[  515.204853] Buffer I/O error on dev nvme0n1p1, logical block 31032252, lost async page write
[  515.204856] Buffer I/O error on dev nvme0n1p1, logical block 31032253, lost async page write
[  515.204858] Buffer I/O error on dev nvme0n1p1, logical block 31032254, lost async page write
[  515.204860] Buffer I/O error on dev nvme0n1p1, logical block 31032255, lost async page write
[  515.204862] Buffer I/O error on dev nvme0n1p1, logical block 31032256, lost async page write
[  515.204864] Buffer I/O error on dev nvme0n1p1, logical block 31032257, lost async page write
[  515.204897] nvme nvme0: completing aborted command with status: 0007
[  515.204898] blk_update_request: I/O error, dev nvme0n1, sector 248217024
[  515.204932] nvme nvme0: completing aborted command with status: 0007
[  515.204933] blk_update_request: I/O error, dev nvme0n1, sector 248217280
[  515.204965] nvme nvme0: completing aborted command with status: 0007
[  515.204966] blk_update_request: I/O error, dev nvme0n1, sector 248217536
[  515.204997] nvme nvme0: completing aborted command with status: 0007
[  515.204998] blk_update_request: I/O error, dev nvme0n1, sector 248217792
[  515.205049] nvme nvme0: completing aborted command with status: 0007
[  515.205050] blk_update_request: I/O error, dev nvme0n1, sector 248218048
[  515.205080] nvme nvme0: completing aborted command with status: 0007
[  515.205081] blk_update_request: I/O error, dev nvme0n1, sector 248218304
[  515.205112] nvme nvme0: completing aborted command with status: 0007
[  515.205113] blk_update_request: I/O error, dev nvme0n1, sector 248218560
[  515.205143] nvme nvme0: completing aborted command with status: 0007
[  515.205144] blk_update_request: I/O error, dev nvme0n1, sector 248218816
[  515.205174] nvme nvme0: completing aborted command with status: 0007
[  515.205204] nvme nvme0: completing aborted command with status: 0007
...
[  515.355854] nvme nvme0: completing aborted command with status: 0007
[  515.355884] nvme nvme0: completing aborted command with status: fffffffc
[  517.667573] VFS: Dirty inode writeback failed for block device nvme0n1p1 (err=-5).
[  539.692718] usb 1-1.2.3: new high-speed USB device number 51 using xhci-hcd

And there is a weird screen-trembling effect from time to time during disk writes, such as the ones

mkfs.ext4 -v /dev/nvme0n1p1

produces, for example.
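For anyone trying to reproduce this, a crude way to trigger sustained writes without reformatting is a fsync'd dd burst, then checking dmesg for fresh nvme timeouts. The mount point below is hypothetical; point it anywhere on the mounted SSD.

```shell
#!/bin/sh
# write_burst FILE MIB: write MIB mebibytes of zeros to FILE and force
# them to the device with fsync, so NVMe timeouts (if any) surface now
# rather than later during background writeback.
write_burst() {
    dd if=/dev/zero of="$1" bs=1M count="$2" conv=fsync 2>/dev/null
}

# Usage on the mounted SSD (path is hypothetical), then inspect the log:
#   write_burst /mnt/nvme/testfile 1024
#   dmesg | grep -E 'nvme.*(timeout|I/O error)' | tail
```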

NVMe SSD Spec: HFS256GD9TNG-62A0A BB
Controller Chip Label: 88SS1093-BTB2 ~ Marvell 88SS1093? (NVMe 1.1 spec support)

I'd be glad to get some advice on how to proceed, if any.

Hmm,
SiliconPower P34A80 (512G) : PS5012-E12 (Phison, NVMe 1.3)

  • works well (no errors).

I’ve added it to this list, this is our current record of working SSDs :slightly_smiling_face:

Yeah, thanks!

By the way, it looks like compatibility mostly depends on the solid-state drive's controller.

So if one collects most of the forum links, it is obvious which controllers mostly work:

while the ones that have these chips probably don't:

  • Marvell 88SS1093 (?most Plextor Drives)
  • Micron ‘in-house’ T15SB1 (MTFDHBA512TCK (2200S))
  • PS5012-E12S-32 (SiliconPower 1tb)

The last-mentioned controller usually requires some additional firmware and hardware design from the drive maker, as mentioned in some reviews.
Maybe that's the reason it is not completely compatible with the available Khadas firmware/software…

Please correct me if I'm wrong.
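To grow this list reliably, it helps to report the controller's PCI vendor:device ID rather than only the retail model name; the logs in this thread show the SK Hynix drive as [1c5c:1527] and the Phison E12 as [1987:5012]. A small filter over a saved `lspci -nn` dump can pull the ID out:

```shell
#!/bin/sh
# nvme_pci_id FILE: print the [vendor:device] ID of any NVMe controller
# found in an `lspci -nn` dump stored in FILE. The 4-digit class code
# (e.g. [0108]) does not match the vendor:device pattern below.
nvme_pci_id() {
    grep -i 'non-volatile' "$1" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]'
}

# Usage:
#   lspci -nn > /tmp/lspci.txt
#   nvme_pci_id /tmp/lspci.txt
```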


I think the driver problem is caused by the kernel; newer kernels can run many more devices.

For example, I have a Sierra LTE7455 modem. It does not work on kernel 4.9, but on 5.7-rc7 it works without my installing any drivers. Unfortunately, Khadas works with old software.

@AlexO Thank you for sharing! I will cross-reference it and add it to the list. I wish this list could be added to the official Khadas Docs instead of just being buried under all the new messages… but that is a request for @numbqq and @Frank.

@fkaraokur I agree. I asked a similar set of users from another forum which SSDs work for them and which don't, and the reply I got was that most of what works and what doesn't is common between both of our lists.

Also, some info from their group: the most commonly used SSDs on their side are the Samsung EVO SSDs (960, 970) and the Intel 660p SSD…

More info about their supported SSDs can be found here

Cheers!

Maybe you are right. But my laptop's first kernel was exactly 4.19.0-5-amd64, and it booted and ran just fine with the aforementioned SK Hynix PC401 drive (HFS256GD9TNG-62A0A BB) for some time (until the next distro update).

At the same time, the spec for Marvell's 88SS1093 storage controller says it exposes a PCIe Gen3 x4 interface to the host, while the Khadas spec mentions only PCIe Gen2 (i.e. 2.0) availability.
Maybe that is what results in the poor compatibility…
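One way to see what the link actually negotiated (versus what the drive is capable of) is to compare the LnkCap and LnkSta lines from `sudo lspci -vv`: if LnkSta reports a lower speed or narrower width than LnkCap, the link trained down. A filter sketch, assuming the dump is saved to a file:

```shell
#!/bin/sh
# link_speed FILE: show capability vs. negotiated state of a PCIe link
# from an `lspci -vv` dump stored in FILE.
link_speed() {
    grep -E 'Lnk(Cap|Sta):' "$1"
}

# Usage (bus address 01:00.0 is the NVMe device in this thread's logs):
#   sudo lspci -vv -s 01:00.0 > /tmp/pcie.txt
#   link_speed /tmp/pcie.txt
```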

I see a strange note in this post about PCIe Gen2/3 compatibility:

  • The CPU supports the PCIe 3.0 standards. This is in sharp contrast to PCI Express 2.0
  • The protocol is mostly observed in x86 based Intel & AMD chipsets.

PCIe is just (as the name implies) a high-speed peripheral interface, and like most peripheral interfaces (USB, Thunderbolt, etc.) it is backward compatible.

It's not the hardware that makes a device incompatible, but rather the drivers, which are more readily available on Windows and were introduced into Linux much later.

Until now, those who reported improperly working SSDs never shared a dmesg log, so we never knew the exact cause of the compatibility failures…

PS5012-E12S-32 (SiliconPower 1tb): doesn’t work
{nvme nvme0: Removing after probe failure status: -19}
SM2262ENG (SiliconPower 512gb): detected, works.

@AlexO can you share the dmesg logs as well ?
Thank you

You are right, it's probably worth seeing:

[    0.564057] amlogic-pcie-v2 fc000000.pcieA: Set the RC Bus Master, Memory Space and I/O Space enables.
[    0.564084] amlogic-pcie-v2 fc000000.pcieA: normal gpio
[    0.564120] amlogic-pcie-v2 fc000000.pcieA: GPIO normal: amlogic_pcie_assert_reset
[    0.651075] amlogic-pcie-v2 fc000000.pcieA: Error: Wait linkup timeout.
[    0.741505] amlogic-pcie-v2 fc000000.pcieA: link up
[    0.741649] amlogic-pcie-v2 fc000000.pcieA: PCI host bridge to bus 0000:00
[    0.741673] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.741692] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
[    0.741711] pci_bus 0000:00: root bus resource [mem 0xfc700000-0xfdffffff]
[    0.741745] amlogic-pcie-v2 fc000000.pcieA: the device class is not reported correctly from the register
[    0.741771] pci 0000:00:00.0: [16c3:abcd] type 01 class 0x060400
[    0.741796] pci 0000:00:00.0: reg 0x38: [mem 0x00000000-0x0000ffff pref]
[    0.741851] pci 0000:00:00.0: supports D1
[    0.741857] pci 0000:00:00.0: PME# supported from D0 D1 D3hot D3cold
[    0.742027] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    0.742203] pci 0000:01:00.0: [1987:5012] type 00 class 0x010802
[    0.742305] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[    0.809643] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    0.809669] pci 0000:00:00.0: BAR 8: assigned [mem 0xfc700000-0xfc7fffff]
[    0.809692] pci 0000:00:00.0: BAR 6: assigned [mem 0xfc800000-0xfc80ffff pref]
[    0.809723] pci 0000:01:00.0: BAR 0: assigned [mem 0xfc700000-0xfc703fff 64bit]
[    0.809775] pci 0000:01:00.0: BAR 0: error updating (0xfc700004 != 0x000000)
[    0.809820] pci 0000:00:00.0: PCI bridge to [bus 01]
[    0.809840] pci 0000:00:00.0:   bridge window [mem 0xfc700000-0xfc7fffff]
[    0.809869] pci 0000:00:00.0: Max Payload Size set to  256/ 256 (was  128), Max Read Rq  512
[    0.809971] pci 0000:01:00.0: Max Payload Size set to  256/ 256 (was  128), Max Read Rq  512
[    0.810269] chip type:0x29
....
[    0.813214] clocksource: Switched to clocksource arch_sys_counter
....
[    0.874901] PCI: CLS 0 bytes, default 64
....
[    1.208367] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 242)
[    1.208492] io scheduler noop registered (default)
[    1.208510] io scheduler deadline registered
[    1.208550] io scheduler cfq registered
[    1.208793] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
[    1.208984] amlogic-pcie-v2 fc000000.pcieA: the device class is not reported correctly from the register
[    1.209176] aer 0000:00:00.0:pcie002: service driver aer loaded
[    1.209267] pcieport 0000:00:00.0: Signaling PME through PCIe PME interrupt
[    1.209289] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt
[    1.209311] pcie_pme 0000:00:00.0:pcie001: service driver pcie_pme loaded
[    1.217129] random: fast init done
[    1.217180] random: crng init done
....
[    1.231080] nvme nvme0: pci function 0000:01:00.0
[    1.231140] nvme 0000:01:00.0: enabling device (0000 -> 0002)
[    1.231218] nvme nvme0: Removing after probe failure status: -19

lspci:

01:00.0 Non-Volatile memory controller: Device 1987:5012 (rev 01) (prog-if 02 [NVM Express])
Subsystem: Device 1987:5012
Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Interrupt: pin A routed to IRQ 103
Region 0: Memory at fc700000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [80] Express (v2) Endpoint, MSI 00
	DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
		ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
	DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
		RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
		MaxPayload 256 bytes, MaxReadReq 512 bytes
	DevSta:	CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
	LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 <64us
		ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
	LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk-
		ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
	LnkSta:	Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
	DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported
	DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
	LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
		 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
		 Compliance De-emphasis: -6dB
	LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
		 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
Capabilities: [d0] MSI-X: Enable- Count=9 Masked-
	Vector table: BAR=0 offset=00002000
	PBA: BAR=0 offset=00003000
Capabilities: [e0] MSI: Enable- Count=1/8 Maskable- 64bit+
	Address: 0000000000000000  Data: 0000
Capabilities: [f8] Power Management version 3
	Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
	Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Latency Tolerance Reporting
	Max snoop latency: 0ns
	Max no snoop latency: 0ns
Capabilities: [110 v1] L1 PM Substates
	L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
		  PortCommonModeRestoreTime=10us PortTPowerOnTime=60us
	L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
		   T_CommonMode=0us LTR1.2_Threshold=0ns
	L1SubCtl2: T_PwrOn=10us
Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
	ARICap:	MFVC- ACS-, Next Function: 0
	ARICtl:	MFVC- ACS-, Function Group: 0
Capabilities: [200 v2] Advanced Error Reporting
	UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
	UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
	UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP+ ECRC- UnsupReq- ACSViol-
	CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
	CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
	AERCap:	First Error Pointer: 00, GenCap- CGenEn- ChkCap+ ChkEn-
Capabilities: [300 v1] #19

That's it.
Thank you as well!

Yep, this was probably the reason; we need to increase the BAR memory allocation. Maybe @numbqq could help here?

If the BAR were the issue it wouldn't boot, at least in my experience. I'm more concerned about the probe error; I'm wondering whether the drive is bad or the controller doesn't like Linux, but finding out which controller his NVMe uses has been tougher than usual.
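To test the BAR theory against a boot log, grep for BAR assignment failures; the log above does contain one ("BAR 0: error updating (0xfc700004 != 0x000000)"). A sketch over a saved dmesg dump:

```shell
#!/bin/sh
# bar_errors FILE: list PCI BAR assignment/update failures found in a
# dmesg dump stored in FILE. No output means the BARs were programmed
# cleanly, which would argue against a BAR-sizing problem.
bar_errors() {
    grep -E 'BAR [0-9]+: (error|failed)' "$1"
}

# Usage:
#   dmesg > /tmp/dmesg.txt
#   bar_errors /tmp/dmesg.txt
```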

Please try starting Krescue and check how the NVMe works!


Hi, I’m waiting for my VIM3 to arrive, do you think this SSD would work? A-Data SX6000 Lite: https://www.amazon.com/dp/B07N22YS84/ref=cm_sw_r_cp_awdb_imm_t1_Ent3FbWFTV5VQ

Thanks!

Hi, I have not seen anyone discuss this SSD model on the forum. I know for sure that a Samsung will do, but you understand correctly that not every SSD is suitable.

@manuel-arguelles I have no way of telling whether it's compatible or not, as the SSD controller is not specified.
If it uses a Phison-branded controller it has a high chance of compatibility, but there is no way of knowing that.

If you need a decent-quality SSD at a good price, go with the WD Blue SN550 NVMe SSD: it's reasonably priced and has very low power consumption, the best choice you can go for.
