I tried reducing the iteration count to 100 and got this output instead:
W:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
W:tensorflow:AutoGraph could not transform <function trace_model_call.<locals>._wrapped_model at 0x7f7248cea5e0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to locate the source code of <function trace_model_call.<locals>._wrapped_model at 0x7f7248cea5e0>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2024-10-29 18:17:46.445676: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
W:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f7117db3dc0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to locate the source code of <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f7117db3dc0>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
2024-10-29 18:18:08.208328: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2024-10-29 18:18:08.208660: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2024-10-29 18:18:08.244626: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: graph_to_optimize
function_optimizer: function_optimizer did nothing. time = 0.023ms.
function_optimizer: function_optimizer did nothing. time = 0.002ms.
2024-10-29 18:18:09.532069: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2024-10-29 18:18:09.532136: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2024-10-29 18:18:09.571974: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-10-29 18:18:09.711183: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 5.169 G ops, equivalently 2.584 G MACs
I Quantize_info: rep_data_gen shape:[[1, 736, 736, 3]], source_file:yolo_dataset.txt,g_channel_mean_value:[[0.0, 0.0, 0.0, 1.0]]
E Expected bias tensor to be a vector.
I ----------------Warning(0)----------------
Convert in Docker Done!!!
No .adla file is created inside the onnx_output directory.
What should the normalization for my model be? I've read the docs; normalization for converting to an adla file is given as m1, m2, m3, and scale, right? What do those values mean?
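For context on what I understand so far: the common convention in these conversion tools (an assumption on my part — please correct me if the tool does it differently) is that m1, m2, m3 are per-channel means and the input is normalized as (pixel − mean) / scale. A minimal sketch of that interpretation, with `normalize` being my own illustrative helper rather than anything from the tool:

```python
import numpy as np

def normalize(pixels, means, scale):
    """Per-channel normalization as I understand the m1/m2/m3/scale
    convention: out = (pixel - mean_c) / scale. This is an assumption
    about the converter's behavior, not confirmed from its source."""
    pixels = np.asarray(pixels, dtype=np.float32)
    means = np.asarray(means, dtype=np.float32)
    return (pixels - means) / scale

# One RGB pixel with channel values 0, 128, 255.
raw = [0.0, 128.0, 255.0]

# With "0 0 0 1" (what the log above shows as g_channel_mean_value),
# inputs pass through unchanged: the model sees raw 0..255 values.
print(normalize(raw, [0, 0, 0], 1.0))

# With "0 0 0 255", inputs are rescaled to the 0..1 range, which is
# what many YOLO-style models expect.
print(normalize(raw, [0, 0, 0], 255.0))
```

If that reading is right, the values to use would depend on how the model was trained (e.g. raw 0–255 input vs. 0–1 input), which is part of what I'm hoping someone can confirm.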