Hello!
I’m trying to convert the Joint Detection and Embedding (JDE) model for use in the Khadas Android NPU app for the VIM3. I tested demo.py with CUDA and it works. I also tested demo.py on the CPU (after removing all the .cuda() calls) and it works too.
I converted the JDE-576x320 PyTorch model to a TorchScript (jit) model with the cvt2jit.py script found in the JDE C++ implementation.
With the CUDA version, when I try to convert the resulting file (jde_576x320_torch14.pt) with 0_import_model.sh, I get this error:
$ sh 0_import_model_pt.sh
I Start importing pytorch...
WARNING: Token 'NEWLINE' defined, but not used
WARNING: There is 1 unused token
Traceback (most recent call last):
File "convertpytorch.py", line 96, in <module>
File "convertpytorch.py", line 86, in main
File "acuitylib/vsi_nn.py", line 219, in load_pytorch_by_onnx_backend
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 62, in model_import
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 134, in _model_parser
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
[37] Failed to execute script convertpytorch
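My guess is that the traced graph still contains CUDA tensors, since the model was on the GPU when cvt2jit.py ran torch.jit.trace. Re-tracing with both the model and the example input on the CPU might avoid this. Here is a minimal sketch of what I mean (TinyNet is just a hypothetical stand-in for the real JDE network, and the input shape assumes NCHW for a 576x320 frame):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the JDE network; the real model would be
# constructed and its weights loaded the way cvt2jit.py does it.
class TinyNet(nn.Module):
    def forward(self, x):
        return x * 2

model = TinyNet().eval()

# Key step: move the model to the CPU BEFORE tracing, and use a CPU
# example input, so no CUDA tensors are baked into the TorchScript graph.
model = model.cpu()
example = torch.randn(1, 3, 320, 576)  # NCHW for a 576x320 input

traced = torch.jit.trace(model, example)
traced.save("jde_576x320_cpu.pt")
```

If that is right, the saved file should load and run on a machine with no GPU at all, which is presumably what the Acuity importer expects.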
With the CPU version, when I try to convert the resulting file (jde_576x320_torch14.pt) with 0_import_model.sh, I get this error:
$ sh 0_import_model_pt.sh
I Start importing pytorch...
WARNING: Token 'NEWLINE' defined, but not used
WARNING: There is 1 unused token
I Save onnx model: tmp_onnx_model_file.onnx
E Unsupport tensor 2638 and node aten::select_2638 and
schema aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a)) ;
I ----------------Warning(0)----------------
Traceback (most recent call last):
File "convertpytorch.py", line 96, in <module>
File "convertpytorch.py", line 86, in main
File "acuitylib/vsi_nn.py", line 225, in load_pytorch_by_onnx_backend
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_frontend.py", line 241, in model_export
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_lower_to_onnx.py", line 110, in lower_to_onnx
File "acuitylib/onnx_ir/frontend/pytorch_frontend/pytorch_lower_to_onnx.py", line 105, in lower_match
File "acuitylib/acuitylog.py", line 251, in e
ValueError: Unsupport tensor 2638 and node aten::select_2638 and
schema aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a)) ;
[16807] Failed to execute script convertpytorch
I know CUDA isn’t supported by the NPU, but I was wondering whether these errors are expected behavior.
Should I avoid models that contain CUDA code?
Should I try converting one of the JDE models to a format other than PyTorch?
I don’t know how to solve this problem and I hope you can help me.
Thank you very much!