How to Install TensorFlow and Keras on Ubuntu 18.04

Preparing Python Development Environment

khadas@Khadas:~$ sudo apt update
khadas@Khadas:~$ sudo apt install -y cmake gcc protobuf-compiler python3-opencv  python3-h5py python3-lmdb 
khadas@Khadas:~$ sudo apt install -y python3-dev python3-pip
khadas@Khadas:~$ sudo pip3 install -U virtualenv

Creating Virtual Development Environment

Create a virtual environment called venv

khadas@Khadas:~$ virtualenv --system-site-packages -p python3 ./venv

Activate the virtual environment using the shell source command

khadas@Khadas:~$ source ./venv/bin/activate

You will see the (venv) prefix on the command line when the environment is successfully activated. Use the deactivate command to exit the virtual environment:

(venv) khadas@Khadas:~$ deactivate
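Scripts sometimes need to confirm they are actually running inside the virtual environment before installing anything. One way to check is to compare the interpreter prefixes; a minimal sketch (the helper name in_virtualenv is my own):

```python
import sys

def in_virtualenv():
    # Inside a venv/virtualenv, sys.prefix points at the environment,
    # while the base prefix still points at the system Python. The old
    # virtualenv tool set sys.real_prefix; modern venv sets base_prefix.
    base = getattr(sys, "real_prefix", None) or sys.base_prefix
    return base != sys.prefix

print(in_virtualenv())
```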

Install relevant Python packages

  1. Upgrade the pip package itself
    (venv) khadas@Khadas:~$ pip install --upgrade pip
  2. Install numpy packages
    (venv) khadas@Khadas:~$ pip install "numpy==1.14.3"

Install TensorFlow && Keras

Download the TensorFlow packages

(venv) khadas@Khadas:~$ wget https://dl.khadas.com/Tools/TensorFlow/scipy-1.2.0-cp36-cp36m-linux_aarch64.whl
(venv) khadas@Khadas:~$ wget https://dl.khadas.com/Tools/TensorFlow/onnx-1.4.1-cp36-cp36m-linux_aarch64.whl
(venv) khadas@Khadas:~$ wget https://dl.khadas.com/Tools/TensorFlow/tensorflow-1.10.1-cp36-cp36m-linux_aarch64.whl
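The wheel filenames encode the interpreter and platform they require (cp36-cp36m-linux_aarch64, i.e. CPython 3.6 on 64-bit ARM). Before installing, you can sanity-check the running interpreter against those tags; a small sketch:

```python
import platform
import sys

# The prebuilt wheels are tagged cp36-cp36m-linux_aarch64, meaning they
# require CPython 3.6 on a 64-bit ARM (aarch64) system.
py_tag = "cp{}{}".format(sys.version_info.major, sys.version_info.minor)
machine = platform.machine()  # e.g. "aarch64" on the VIM3

print(py_tag, machine)  # "cp36 aarch64" expected on a matching board
```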

Install TensorFlow

(venv) khadas@Khadas:~$ pip install scipy-1.2.0-cp36-cp36m-linux_aarch64.whl
(venv) khadas@Khadas:~$ pip install onnx-1.4.1-cp36-cp36m-linux_aarch64.whl
(venv) khadas@Khadas:~$ pip install tensorflow-1.10.1-cp36-cp36m-linux_aarch64.whl
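Once the wheels are installed, you can confirm the versions from Python. A small helper sketch (the name installed_version is my own) that returns None instead of raising when a package is missing:

```python
import importlib

def installed_version(module_name):
    """Return a module's __version__ string, or None if unavailable."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(mod, "__version__", None)

# After installing the wheels above, on the board these should report
# 1.10.1 and 1.2.0 respectively:
print(installed_version("tensorflow"))
print(installed_version("scipy"))
```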

Install Keras

(venv) khadas@Khadas:~$ pip install keras==2.0

Verify

(venv) khadas@Khadas:~$ wget https://dl.khadas.com/Tools/TensorFlow/mlp.py
(venv) khadas@Khadas:~$ python mlp.py 
/usr/lib/python3/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Epoch 1/20
1000/1000 [==============================] - 2s 2ms/step - loss: 2.4113 - acc: 0.0930
Epoch 2/20
1000/1000 [==============================] - 0s 384us/step - loss: 2.3482 - acc: 0.1030
Epoch 3/20
1000/1000 [==============================] - 0s 378us/step - loss: 2.3243 - acc: 0.0990
Epoch 4/20
1000/1000 [==============================] - 0s 366us/step - loss: 2.3138 - acc: 0.1190
Epoch 5/20
1000/1000 [==============================] - 0s 392us/step - loss: 2.3171 - acc: 0.1160
Epoch 6/20
1000/1000 [==============================] - 0s 375us/step - loss: 2.3114 - acc: 0.0960
Epoch 7/20
1000/1000 [==============================] - 0s 399us/step - loss: 2.3067 - acc: 0.1160
Epoch 8/20
1000/1000 [==============================] - 0s 367us/step - loss: 2.3018 - acc: 0.1230
Epoch 9/20
1000/1000 [==============================] - 0s 382us/step - loss: 2.3142 - acc: 0.1180
Epoch 10/20
1000/1000 [==============================] - 0s 371us/step - loss: 2.3052 - acc: 0.1170
Epoch 11/20
1000/1000 [==============================] - 0s 360us/step - loss: 2.3003 - acc: 0.1310
Epoch 12/20
1000/1000 [==============================] - 0s 375us/step - loss: 2.2955 - acc: 0.1160
Epoch 13/20
1000/1000 [==============================] - 0s 397us/step - loss: 2.3033 - acc: 0.1310
Epoch 14/20
1000/1000 [==============================] - 0s 364us/step - loss: 2.3062 - acc: 0.1230
Epoch 15/20
1000/1000 [==============================] - 0s 386us/step - loss: 2.2968 - acc: 0.1200
Epoch 16/20
1000/1000 [==============================] - 0s 399us/step - loss: 2.3011 - acc: 0.1210
Epoch 17/20
1000/1000 [==============================] - 0s 398us/step - loss: 2.3049 - acc: 0.1280
Epoch 18/20
1000/1000 [==============================] - 0s 413us/step - loss: 2.3023 - acc: 0.1200
Epoch 19/20
1000/1000 [==============================] - 0s 399us/step - loss: 2.2977 - acc: 0.1310
Epoch 20/20
1000/1000 [==============================] - 0s 397us/step - loss: 2.2954 - acc: 0.1240
100/100 [==============================] - 0s 4ms/step
(venv) khadas@Khadas:~$ 

Troubleshooting

If you encounter this error:

ERROR: Could not install packages due to an EnvironmentError: [Errno 28] No space left on device

You can solve it with these steps:

(venv) khadas@Khadas:~$ deactivate 
khadas@Khadas:~$ mkdir ~/tmp
khadas@Khadas:~$ export TMPDIR=$HOME/tmp
khadas@Khadas:~$ source ./venv/bin/activate
(venv) khadas@Khadas:~$ pip install tensorflow-1.10.1-cp36-cp36m-linux_aarch64.whl
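The steps above work because pip unpacks wheels in the directory that Python's tempfile module selects, and tempfile honours $TMPDIR. A minimal sketch of the mechanism (the scratch directory here is created just for illustration):

```python
import os
import tempfile

# pip unpacks wheels in the directory Python's tempfile module selects,
# and tempfile honours $TMPDIR, so pointing TMPDIR at a partition with
# enough free space avoids the Errno 28 failure.
scratch = tempfile.mkdtemp(prefix="pip-scratch-")
os.environ["TMPDIR"] = scratch
tempfile.tempdir = None  # drop the cached default so $TMPDIR is re-read

print(tempfile.gettempdir())  # now reports the scratch directory
```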

note

  1. Installing Python packages on the ARM platform requires compiling from source, so the installation process can be lengthy.
  2. If the compilation process errors out, you can add the -v parameter to view the log info.
  3. Currently only the CPU version is supported. The NPU version is not supported yet.
  4. This document is written for users who want to try TensorFlow on a Khadas board. We will launch a version using the NPU in the future, but it will take some time.


I met a couple of problems when installing TensorFlow and trying to run mlp.py. Here are the problems and solutions:

  1. “pip install keras” installs Keras 2.3.0 by default, which is not compatible with TF 1.10.1. Use “pip uninstall keras” to remove Keras 2.3.0, then “pip install keras==2.2” to install the compatible version.
  2. “ImportError: libgfortran.so.5: cannot open shared object file: No such file or directory”. This is caused by the libgfortran5-dbg library not being installed. Use “sudo apt-get install libgfortran5-dbg”.

Then you should be good to go.


Any update on this?

Not yet, but I want to try it out as well. Writing our own code to utilize the power of the NPU seems awesome :smiley:


@AKBAAR The SDK you use is the only way to run a TF model on the NPU.

@Frank is there any plan to incorporate the NPU with TFLite…?

As the NPU uses almost the same architecture as Google Coral… will it be possible…?

@Archangel1235 About TFLite, I'll find time to make one; I've got a lot of plans at the moment.
We've got a Google Coral, but we can't run it on our board yet. Maybe it's a lack of a driver or something.


Download links seem broken, I am getting 404s.

@jdrew We think that installing TensorFlow on the VIM3 is not a good choice; the model should instead be run on the NPU through a conversion tool, so we have removed this part of the content.

Which means I will need a custom implementation for unsupported layers and operations?
I hoped to be able to use functions such as tf.cast, reshape, top_k in post-processing.

If I see it correctly, your YOLO models only execute the feature-extraction network on the NPU and do the post-processing in C++, am I right?

Is it possible to install the tflite_runtime on the khadas board to use an EdgeTPU USB Accelerator?

Yes, you need to do that.

Yes, you are right.

I think it's difficult to achieve, but you can try it. I haven't tried it here.