Issue executing a model quantized to FP16 (I have tried different models, but none of them work)

I have quantized a TFLite model to FP16, but PyArmNN's TfLite parser fails to parse it.
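
For context, the FP16 quantization was done with the standard TFLite post-training flow, roughly as in this sketch (the SavedModel directory and output file name are placeholders for my actual paths):

```python
import tensorflow as tf

# Standard TFLite post-training FP16 quantization: weights are
# stored as float16 in the resulting .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_fp16_model = converter.convert()
with open("model_fp16.tflite", "wb") as f:  # placeholder file name
    f.write(tflite_fp16_model)
```

Parsing the converted model then raises: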

```
return _pyarmnn_tfliteparser.ITfLiteParser_CreateNetworkFromBinaryFile(self, graphFile)
RuntimeError: Failed to parse operator #2 within subgraph #0 error: Buffer #0 has 0 bytes. For tensor: [32,3,3,3] expecting: 3456 bytes and 864 elements. at function CreateConstTensorNonPermuted [/home/khadas/armnn-dist/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:4190]
```
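
The loading side is just the usual PyArmNN flow, roughly as below (the file name is a placeholder; this is the call that raises the error above):

```python
import pyarmnn as ann

# Parsing the FP16-quantized flatbuffer is what triggers the RuntimeError.
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("model_fp16.tflite")  # placeholder path
```

For what it's worth, the tensor shape [32,3,3,3] is 864 elements, and the expected 3456 bytes works out to 4 bytes per element, so the parser seems to expect FP32 constant data, while the buffer it actually finds for that tensor is empty (0 bytes).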