Quantized-dtype

When converting the model, the file `1_quantize_model.sh` needs to be modified:

--quantized-dtype dynamic_fixed_point-8

Does this step convert the model's float values to integers?

If I change it to

--quantized-dtype dynamic_fixed_point-16
  • Will this improve the model's precision?

  • After making this change, the model converts without errors, but at detection time no targets are detected.

  • How can I get higher precision when converting the model — for example, by keeping float instead of using int?
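To see why `dynamic_fixed_point-16` should, in principle, be more precise than `dynamic_fixed_point-8`, here is a toy sketch of what dynamic fixed-point quantization roughly does: each float is stored as an integer with a shared power-of-two scale, so fewer bits means a coarser grid. This is only an illustration (the function names, the per-tensor scale choice, and the fake weight data are my own assumptions), not the toolkit's actual implementation:

```python
import numpy as np

def quantize_dfp(x, bits):
    """Toy dynamic fixed-point quantizer: value ~ q * 2**-fl (illustrative only)."""
    int_max = 2 ** (bits - 1) - 1                           # e.g. 127 for 8 bits
    # Pick the fractional length so the largest magnitude still fits.
    fl = int(np.floor(np.log2(int_max / np.abs(x).max())))
    q = np.clip(np.round(x * 2.0 ** fl), -int_max - 1, int_max)
    return q.astype(np.int32), fl

def dequantize_dfp(q, fl):
    return q.astype(np.float32) * 2.0 ** -fl

# Fake "weights" standing in for a real tensor from the model.
x = np.random.uniform(-1.5, 1.5, 1000).astype(np.float32)
for bits in (8, 16):
    q, fl = quantize_dfp(x, bits)
    err = np.abs(x - dequantize_dfp(q, fl)).max()
    print(f"{bits}-bit: max quantization error {err:.6f}")
```

With 16 bits there are roughly 8 extra fractional bits, so the worst-case rounding error shrinks by a factor of about 256; an unquantized float model has no such error at all, at the cost of NPU speed and memory. Note that a larger dtype alone does not explain "no targets detected" — that usually points to a mismatch between the quantized dtype the model was converted with and what the inference code expects.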

Thanks in advance.


Hey @penggeng, did you solve this issue? If so, could you help me solve it as well?


@CodeLogist I think @penggeng asked this in another post — check it there:
Quantification problem


Oh yeah. Thanks @Electr1


No problem. Just check the recent posts again if you find a question that you think hasn't been addressed — if the problem were that serious, the person would have tried posting it again :slight_smile:


@CodeLogist

Quantification problem

You've probably already seen that post. I'm currently reading the source code — if you have any ideas, we can discuss them together.


Okay thanks @penggeng :slight_smile: