Acuity-toolkit: error when converting a TFLite model

@ahmetkemal What do you mean? I don't understand.

Sorry for my bad English.
When I converted with the script before, everything seemed normal and the neural network ran on the VIM3, but it did not give correct results. When I examined the nn.quantize file produced by the 1_quantize.sh script, I saw an abnormality in the output layers. It may be the same problem.
The Retina.quantize file needs to be examined.

@ahmetkemal Can you show your parameter settings in the conversion script and your conversion log?

Last time, when the model was converted, I had replaced some functions inside TensorFlow, which might have caused the problem that the model output was all 0 after quantization.

This time I did not modify any of the code inside TensorFlow, but quantization always fails with an error:

D Acuity output shape(concat): (11 80 80 64)
D Real output shape: (11, 80, 80, 64)
D Process re_lu/clip_by_value/Minimum_28 ...
D Acuity output shape(relun): (11 80 80 64)
Traceback (most recent call last):
  File "tensorzonex.py", line 425, in <module>
  File "tensorzonex.py", line 362, in main
  File "acuitylib/app/tensorzone/quantization.py", line 144, in run
  File "acuitylib/app/tensorzone/quantization.py", line 91, in _run_quantization
  File "acuitylib/app/tensorzone/workspace.py", line 168, in _setup_graph
  File "acuitylib/app/tensorzone/graph.py", line 59, in generate
  File "acuitylib/acuitynetbuilder.py", line 273, in build
  File "acuitylib/acuitynetbuilder.py", line 300, in build_layer
  File "acuitylib/acuitynetbuilder.py", line 300, in build_layer
  File "acuitylib/acuitynetbuilder.py", line 300, in build_layer
  File "acuitylib/acuitynetbuilder.py", line 300, in build_layer
  File "acuitylib/acuitynetbuilder.py", line 300, in build_layer
  File "acuitylib/acuitynetbuilder.py", line 330, in build_layer
  File "acuitylib/layer/acuitylayer.py", line 280, in compute_tensor
  File "acuitylib/layer/reluN.py", line 34, in compute_out_tensor
ValueError: cannot convert float NaN to integer

command:

NAME=retinaface
ACUITY_PATH=../bin/

convert_caffe=${ACUITY_PATH}convertcaffe
convert_tf=${ACUITY_PATH}convertensorflow
convert_tflite=${ACUITY_PATH}convertflite
convert_darknet=${ACUITY_PATH}convertdarknet
convert_onnx=${ACUITY_PATH}convertonnx


   
$convert_tf \
    --tf-pb data_2/model.pb \
    --inputs input_1 \
    --input-size-list '640,640,3' \
    --outputs 'concatenate_3/concat concatenate_4/concat concatenate_5/concat' \
    --net-output ${NAME}.json \
    --data-output ${NAME}.data 
tensorzone=${ACUITY_PATH}tensorzonex
#asymmetric_quantized-u8 dynamic_fixed_point-8 dynamic_fixed_point-16
$tensorzone \
    --action quantization \
    --source text \
    --source-file data_2/validation.txt \
    --channel-mean-value '127.5 127.5 127.5 127.5' \
    --model-input ${NAME}.json \
    --model-data ${NAME}.data \
    --quantized-dtype dynamic_fixed_point-8 \
	--quantized-rebuild

files link: https://share.weiyun.com/5udzHeI

@jujuede For the "ValueError: cannot convert float NaN to integer" error, maybe you can try this parameter: --quantized-dtype dynamic_fixed_point-16

Using dynamic_fixed_point-16 or asymmetric_quantized-u8 gives the same error.
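In case it helps narrow this down: one thing you could check is whether the frozen graph itself already contains NaN or Inf weights before quantization. A rough sketch, assuming TensorFlow 2.x (using the compat API) and the data_2/model.pb path from the conversion script above:

import numpy as np
import tensorflow as tf

# Load the frozen graph used in the conversion script.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('data_2/model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Scan every constant tensor (weights, biases) for NaN/Inf values.
for node in graph_def.node:
    if node.op == 'Const':
        arr = tf.make_ndarray(node.attr['value'].tensor)
        if np.issubdtype(arr.dtype, np.floating) and not np.isfinite(arr).all():
            print('Non-finite values in constant:', node.name)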

You can avoid this with a mask method. Note first that in Python, NaN is defined as the value that is not equal to itself:

>>> float('nan') == float('nan')
False
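
A minimal sketch of the mask idea with NumPy (the array and the 0 sentinel here are just illustrative):

import numpy as np

x = np.array([1.5, np.nan, 3.0])

# Mask the NaNs and replace them with a sentinel before the integer cast.
mask = np.isnan(x)            # equivalent to x != x
x_int = np.where(mask, 0.0, x).astype(np.int32)
print(x_int)                  # [1 0 3]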

It might be worth avoiding np.NaN altogether. NaN literally means "not a number", and it cannot be converted to an integer. In general, Python prefers raising an exception to returning NaN, so things like sqrt(-1) and log(0.0) raise instead of returning NaN. However, you may still get NaN back from another library. Since pandas v0.24 you can actually handle this: pandas introduced nullable integer data types, which allow integers to coexist with NaN. Note that even in the latest versions of pandas, if the column has object dtype you first have to convert it to float, something like:

df['column_name'].astype(float).astype("Int32")

NB: You have to go through float first and then to the nullable Int32, for some reason.
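
For example (a made-up column, with pandas 0.24 or later as noted above):

import numpy as np
import pandas as pd

# A made-up object-dtype column containing NaN (cannot be cast straight to a plain int dtype).
df = pd.DataFrame({'column_name': [1.0, np.nan, 3.0]}, dtype=object)

# Go through float first, then the nullable Int32; NaN becomes a missing value.
print(df['column_name'].astype(float).astype('Int32'))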