Unable to convert caffe based SSD

Hello everyone,

I am currently trying to convert the original Caffe SSD (COCO 300x300) model provided by weiliu89: https://github.com/weiliu89/caffe/tree/ssd

After upgrading the prototxt, the model conversion completed (more or less) successfully, but I can see in the log that the detection output layer gets dropped:

....
D Load blobs of conv8_2_mbox_conf
D Load blobs of conv9_2_mbox_loc
D Load blobs of conv9_2_mbox_conf
D Load blobs of detection_out <==============================
I Load blobs complete.
I Start C2T Switcher... 
D Optimizing network with broadcast_op
D convert conv4_3_norm_52(l2normalizescale) l2n_dim [1] to [3]
D convert mbox_priorbox_97(concat) axis 2 to 1
....
D remove permute conv9_2_mbox_conf_perm_92_acuity_mark_perm_114
D remove permute conv9_2_mbox_conf_perm_92
I End C2T Switcher...
D Remove detection_out_101.<================================
D Optimizing network with force_1d_tensor, swapper, merge_layer, auto_fill_bn, 
auto_fill_l2normalizescale, resize_nearest_transformer, auto_fill_multiply, merge_avgpool_conv1x1, 
auto_fill_zero_bias, proposal_opt_import
I End importing caffe...
I Dump net to ssd.json
I Save net to ssd.data
W ----------------Warning(3)----------------

If I inspect the resulting JSON file, I can see that the layer detection_out_101 is used as an input of output_102, but detection_out_101 itself no longer exists:

    "mbox_conf_flatten_100": {
        "name": "mbox_conf_flatten",
        "op": "flatten",
        "parameters": {
            "axis": 1
        },
        "inputs": [
            "@mbox_conf_softmax_99:out0"
        ],
        "outputs": [
            "out0"
        ]
    },
    "output_102": {
        "name": "output",
        "op": "output",
        "inputs": [
            "@detection_out_101:out0"
        ],
        "outputs": [
            "out0"
        ]
    },
    "detection_out_101_acuity_mark_perm_115": {
        "name": "detection_out_101_acuity_mark_perm",
        "op": "permute",
        "parameters": {
            "perm": "0 3 1 2"
        },
        "inputs": [
            "@mbox_priorbox_97:out0"
        ],
        "outputs": [
            "out0"
        ]
    }
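To confirm the dangling reference, the exported JSON can be scanned for inputs that point at layers missing from the layer table. This is only a sketch: the helper name `find_dangling_inputs` is mine, and the layer-table structure is assumed from the excerpt above (a dict keyed by `<name>_<id>`, with inputs written as `@<producer_id>:out<n>`); adjust it if your SDK version nests the layers differently.

```python
def find_dangling_inputs(layers):
    """Return (consumer_id, missing_producer_id) pairs for every input
    that references a layer id not present in the layer table."""
    dangling = []
    for layer_id, layer in layers.items():
        for inp in layer.get("inputs", []):
            # Inputs look like "@<producer_id>:out<n>"
            producer = inp.lstrip("@").split(":", 1)[0]
            if producer not in layers:
                dangling.append((layer_id, producer))
    return dangling

# Tiny reproduction of the situation in the dump above: output_102
# consumes detection_out_101, which the optimizer removed.
layers = {
    "mbox_conf_softmax_99": {"inputs": []},
    "mbox_conf_flatten_100": {"inputs": ["@mbox_conf_softmax_99:out0"]},
    "output_102": {"inputs": ["@detection_out_101:out0"]},
}
print(find_dangling_inputs(layers))  # -> [('output_102', 'detection_out_101')]
```

In the real file you would load `ssd.json` with `json.load` and pass the layer table to this function.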

This results in an error when I go on to step two and try to load/quantize the network:

....
D Load layer mbox_loc_95 ...
D Load layer mbox_conf_96 ...
D Load layer mbox_priorbox_97 ...
D Load layer mbox_conf_reshape_98 ...
D Load layer mbox_conf_softmax_99 ...
D Load layer mbox_conf_flatten_100 ...
D Load layer output_102 ...
D Load layer detection_out_101_acuity_mark_perm_115 ...
E Unsuport input tensor type "None" of layer "output_102".
W ----------------Warning(1)----------------
Traceback (most recent call last):
  File "tensorzonex.py", line 446, in <module>
  File "tensorzonex.py", line 379, in main
  File "acuitylib/app/tensorzone/workspace.py", line 223, in load_net
  File "acuitylib/app/tensorzone/graph.py", line 26, in load_net
  File "acuitylib/acuitynet.py", line 441, in load
  File "acuitylib/acuitynet.py", line 474, in loads
  File "acuitylib/layer/acuitylayer.py", line 146, in add_input
  File "acuitylib/acuitylog.py", line 251, in e
ValueError: Unsuport input tensor type "None" of layer "output_102".    <=========
      [28306] Failed to execute script tensorzonex

Is this a bug in the Acuity conversion tool, or am I doing something wrong?

OS: Ubuntu 18.04
SDK: 6.4.0.10

best

Gerald

@gkaed Which TF version did you use to train?

This is a Caffe model from weiliu89 (https://github.com/weiliu89/caffe/tree/ssd) that I use as an example.

To be more precise, it’s the Coco 300x300 model: https://drive.google.com/open?id=0BzKzrI_SkD1_NDlVeFJDc2tIU1k

Edit:
I use an anaconda virtual environment with all requirements installed (TF 1.13.2) to convert the model.

Okay,

here is a short update: according to one of the devs, the missing detection layer has to be implemented on the CPU and will not be processed on the AI chip.
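For anyone hitting the same wall: what the dropped DetectionOutput layer does can be reproduced on the CPU from the network's remaining outputs. Below is a minimal NumPy sketch, assuming the standard SSD center-size box encoding with variances (0.1, 0.2); the function names, thresholds, and parameter defaults are illustrative and not taken from the SDK.

```python
import numpy as np

def decode_boxes(loc, priors, variances=(0.1, 0.2)):
    """Decode SSD loc offsets against prior boxes.

    loc:    (N, 4) predicted offsets (dcx, dcy, dw, dh)
    priors: (N, 4) prior boxes in center-size form (cx, cy, w, h)
    Returns (N, 4) corner boxes (xmin, ymin, xmax, ymax)."""
    cxcy = priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:]
    wh = priors[:, 2:] * np.exp(loc[:, 2:] * variances[1])
    return np.hstack([cxcy - wh / 2.0, cxcy + wh / 2.0])

def nms(boxes, scores, iou_thresh=0.45, top_k=200):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1][:top_k]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        # Intersection-over-union of the best box against the rest
        xy1 = np.maximum(boxes[i, :2], rest[:, :2])
        xy2 = np.minimum(boxes[i, 2:], rest[:, 2:])
        inter = np.prod(np.clip(xy2 - xy1, 0.0, None), axis=1)
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        area_r = np.prod(rest[:, 2:] - rest[:, :2], axis=1)
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

# Example: two coincident boxes and one separate box; NMS keeps the
# higher-scoring duplicate and the non-overlapping box.
boxes = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [2.0, 2.0, 3.0, 3.0]])
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # -> [0, 2]
```

In a full pipeline you would run `decode_boxes` on the `mbox_loc` output against the `mbox_priorbox` priors, then apply `nms` per class to the softmaxed confidences, which is roughly what Caffe's DetectionOutput layer does internally.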