Unable to run docker yanwyb/npu:v1 on VIM4

I’ve tried reducing the iterations to 100 and got this instead:

W:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
W:tensorflow:AutoGraph could not transform <function trace_model_call.<locals>._wrapped_model at 0x7f7248cea5e0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to locate the source code of <function trace_model_call.<locals>._wrapped_model at 0x7f7248cea5e0>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2024-10-29 18:17:46.445676: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
W:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f7117db3dc0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to locate the source code of <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f7117db3dc0>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
2024-10-29 18:18:08.208328: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2024-10-29 18:18:08.208660: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2024-10-29 18:18:08.244626: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.023ms.
  function_optimizer: function_optimizer did nothing. time = 0.002ms.

2024-10-29 18:18:09.532069: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2024-10-29 18:18:09.532136: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2024-10-29 18:18:09.571974: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-10-29 18:18:09.711183: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 5.169 G  ops, equivalently 2.584 G  MACs
I Quantize_info: rep_data_gen shape:[[1, 736, 736, 3]], source_file:yolo_dataset.txt,g_channel_mean_value:[[0.0, 0.0, 0.0, 1.0]]
E Expected bias tensor to be a vector.
I ----------------Warning(0)----------------
Convert in Docker Done!!!

No .adla file is created inside the onnx_output directory.


What should the normalization be for my model? I’ve read the docs; the normalization for converting to an adla file is m1, m2, m3 and scale, right? What do those values mean?

Hi Louis, any updates?

Hello @JietChoo ,

I checked my convert code and found I made a mistake. I also ran into the same problem. Ready-made models downloaded from websites often have issues that cause the conversion to fail.

Recently I have been studying the official PaddleOCR code, and trying to train and convert the model myself.

Normalization is a preprocessing step applied before model inference. Each model has its own normalization. m1, m2, m3 are the means for the image channels, and scale is the variance. They are applied as:

image[:, :, 0] = (image[:, :, 0] - m1) / scale
image[:, :, 1] = (image[:, :, 1] - m2) / scale
image[:, :, 2] = (image[:, :, 2] - m3) / scale
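As a concrete illustration, here is a minimal NumPy sketch of that preprocessing. The mean and scale values below are placeholders only; use the ones your model was actually trained with.

import numpy as np

# Placeholder normalization parameters -- replace with your model's values.
m1, m2, m3 = 127.5, 127.5, 127.5   # per-channel means
scale = 127.5                      # shared scale factor

def normalize(image: np.ndarray) -> np.ndarray:
    """Per-channel mean subtraction and scaling on an HWC image."""
    image = image.astype(np.float32)
    image[:, :, 0] = (image[:, :, 0] - m1) / scale
    image[:, :, 1] = (image[:, :, 1] - m2) / scale
    image[:, :, 2] = (image[:, :, 2] - m3) / scale
    return image

# Example: a dummy 736x736 RGB frame, matching the rep_data_gen shape in the log above.
dummy = np.random.randint(0, 256, size=(736, 736, 3), dtype=np.uint8)
print(normalize(dummy).min(), normalize(dummy).max())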

We are going to release a realtime demo, so the conversion failure for this model will no longer be worked on. You can close this topic, and if there is any update I will notify you here.
Realtime Text Recognition with VIM4 and IMX415 MIPI Camera - VIM4 - Khadas Community


Thanks Louis, I will wait for your update on the realtime text recognition demo in the other topic.

Hello @JietChoo

The issue is resolved. We will follow up about the demo in another topic, and we will close this topic.