
I already wrote one article on how to run OpenMV demos on the Sipeed Maix Bit and also did a video of an object detection demo with this board. One of the many questions people have asked is: how can I recognize an object that the neural network was not trained for? In other words, how do you make your own image classifier and run it with hardware acceleration?

This is an understandable question, since for your project you probably don't need to recognize generic objects like cats, dogs and airplanes. You want to recognize something specific, for example, a breed of dog for that automatic pet door, a plant species for sorting, or any other exciting application you can think of!

I got you! In this article I will teach you how to create your own custom image classifier with transfer learning in Keras, convert the trained model to .kmodel format and run it on a Sipeed board (it can be any board: Bit/Dock or Go) using MicroPython or the Arduino IDE. Only your imagination will limit the tasks you can do with this knowledge.

Step 1: CNN and Transfer Learning: Some Theory


Convolutional Neural Networks, or CNNs, are a class of deep neural networks most commonly applied to analyzing visual imagery. There is a lot of literature on the internet on the topic and I'll give some links in the last part of the article. In short, you can think of a CNN as a series of filters applied to the image, each filter looking for a specific feature in the image: on the lower convolutional layers the features are usually lines and simple shapes, and on the higher layers the features can be more specific, e.g. body parts, specific textures, parts of animals or plants, etc. The presence of a certain set of features can give us a clue about what the object in the image might be. Whiskers, two eyes and a black nose? Must be a cat! Green leaves and a tree trunk? Looks like a tree!
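To make the "series of filters" idea concrete, here is a tiny NumPy sketch (not part of the tutorial's code) of a single hand-made convolutional filter that responds to vertical edges. A real CNN learns thousands of such filters from data instead of having them written by hand:

```python
import numpy as np

# 4x4 grayscale "image": dark left half, bright right half
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]], dtype=float)

# A hand-made 2x2 filter that responds to dark-to-bright vertical edges
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

# Valid convolution (no padding, stride 1)
h, w = img.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)

print(out)  # strongest response (18) exactly at the dark-to-bright boundary
```

The output map is near zero everywhere except at the edge between the dark and bright halves, which is exactly the "this feature is present here" signal the article describes.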

I hope you get the idea of the working principle of CNNs now. Normally a deep neural network needs thousands of images and hours of training time (depending on the hardware you are using for training) to "develop" filters that are useful for recognizing the types of objects you want. But there is a shortcut.

A model trained to recognize a lot of different common objects (cats, dogs, house appliances, transport, etc.) already has a lot of those useful filters "developed", so we don't need it to learn to recognize basic shapes and parts of objects again. We can just re-train the last few layers of the network to recognize the specific classes of objects that are important to us. This is called "transfer learning". You need significantly less training data and compute time with transfer learning, since you are only training the last few layers of the network, composed of maybe a few hundred neurons.

Sounds awesome, right? Let's see how to implement it.
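Before diving into the steps, here is a minimal Keras sketch of the transfer-learning recipe just described (this is an illustration, not the tutorial's exact training script): load MobileNet without its classification head, freeze the convolutional base, and attach a small new classifier. The 128x128 input size matches the tutorial; the head sizes are illustrative, and `weights=None` is used here only so the sketch runs offline (pass `weights='imagenet'` for real transfer learning):

```python
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

NUM_CLASSES = 2  # e.g. "santa" vs. "arduino", as in the tutorial's example

# Load MobileNet without its 1000-class ImageNet head.
# Use weights='imagenet' in practice to reuse the pre-trained filters;
# weights=None here only avoids a download in this sketch.
base = MobileNet(input_shape=(128, 128, 3), include_top=False, weights=None)

# Freeze the convolutional base so only the new head is trained
for layer in base.layers:
    layer.trainable = False

# New classification head: just a few hundred neurons, as described above
x = GlobalAveragePooling2D()(base.output)
x = Dense(100, activation='relu')(x)
outputs = Dense(NUM_CLASSES, activation='softmax')(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Training this model with `model.fit` then only updates the small head, which is why transfer learning needs so much less data and time.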

BorisV23 days ago
Hi,
I have got an error while trying to execute mobilenet.py:
raw REPL; CTRL-B to exit

>OK

Traceback (most recent call last):

File "<stdin>", line 53, in <module>

ImportError: no module named '__future__'

>

MicroPython 59c658e-dirty on 2019-04-26; Sipeed_M1 with kendryte-k210

Type "help()" for more information.

Any ideas why?
pasteurjr1 month ago
Hello, Dmitri.

There is something important to do before running the tflite2kmodel.sh script. I don't know why, but you must place some training images inside the image directory under the Maix_Toolbox directory in order for tflite2kmodel to work. Also, first of all, you should run the get_nncase.sh script.
Do you know why the image directory is necessary, and can you place any number of images from the training dataset inside it?
Thanks a lot.
DmitryM8 (author)  pasteurjr1 month ago
Yes, you're right! This is one of these small things that I tend to easily forget about. I'll edit the tutorial accordingly.
Yes, 2-3 images is enough. ncc just uses these images to check the input size of the network, so they should match the input size.
pasteurjr1 month ago
Hello, Dmitri.
Congratulations on your project! I have one question: can the training set images be any size, or must they be 128x128? Must the images be JPEG? Can you share the Santa and Arduino datasets?
Thank you very much!
DmitryM8 (author)  pasteurjr1 month ago
Hello!
The script automatically resizes images to a pre-set size:
img = image.load_img(img_path + file, target_size=(128, 128))
You can change the image size, but obviously you will also need to change the input layer dimensions to match your image size. Other image types, like PNG, work too.
I will share the datasets tomorrow!
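As a self-contained illustration of the resizing step mentioned in the reply above: `load_img` resizes any input image to the network's input size on load. The stand-in image file and its 640x480 size below are made up for the sketch:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Create a stand-in image of a different size, just so this sketch runs
Image.new('RGB', (640, 480), color=(0, 128, 0)).save('sample.jpg')

img = load_img('sample.jpg', target_size=(128, 128))  # resized on load
x = img_to_array(img)           # -> array of shape (128, 128, 3)
x = np.expand_dims(x, axis=0)   # add batch dimension -> (1, 128, 128, 3)
print(x.shape)
```

Whatever the source resolution, the array fed to the network always has the pre-set 128x128 shape.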
Hello, Dmitri.
Thank you for your reply. Will you share your datasets here?
DmitryM8 (author)  pasteurjr1 month ago
Ghee SungN2 months ago
Nice work DmitryM8.

I have created a Colab notebook that performs transfer learning using the flower dataset. After the model is trained, it can convert the model to kmodel to be uploaded to MaixPy.
https://iotdiary.blogspot.com/2019/07/maixpy-go-mobilenet-transfer-learning.html
DmitryM8 (author)  Ghee SungN1 month ago
Excellent job! I should be using Colab notebooks more myself, they're so convenient.
PedroZ141 month ago
Hi, how can I change this model to use the TensorFlow counting API to count cars in real time using the Maix Bit?
DmitryM8 (author)  PedroZ141 month ago
The TensorFlow counting API is not just a simple model but in fact a framework that includes Mobilenet SSD and some OpenCV functions. I have successfully run Mobilenet SSD on MaixPy, so that part is not a problem. But that just detects objects; for counting you also need to keep track of them. You can try using OpenMV, which has functions related to object tracking and is available on the Maix Bit.
Short answer: this is possible, but requires quite a bit of coding :)
For a Mobilenet SSD that can be run on MaixPy, have a look at https://github.com/penny4860/Yolo-digit-detector. Change the feature extractor to MobileNet and train as usual. Then you will be able to convert the model to .kmodel format.
mbnet_keras.py fails with the following error:
Epoch 1/10
Traceback (most recent call last):
File "mbnet_keras.py", line 60, in <module>
model.fit_generator(generator=train_generator,steps_per_epoch=step_size_train,epochs=10)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training_generator.py", line 251, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\callbacks.py", line 79, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\callbacks.py", line 338, in on_epoch_end
self.progbar.update(self.seen, self.log_values)
AttributeError: 'ProgbarLogger' object has no attribute 'log_values'

What should I do?
DmitryM8 (author)  juandavidbarrero12 months ago
https://github.com/keras-team/keras/issues/3657

You can consult this thread; this is a known Keras issue. It might be possible your dataset is too small. How many images do you have? Are they in the right folder?
I have 50 pictures of Santa in your_class1 and 50 pictures of a board in your_class2, and both folders are in the image folder.
I have a bigger data set and this is the new error:
Epoch 1/10
C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\PIL\Image.py:965: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
' expressed in bytes should be converted ' +
C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\PIL\TiffImagePlugin.py:785: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
warnings.warn(str(msg))
Traceback (most recent call last):
File "mbnet_keras.py", line 60, in <module>
model.fit_generator(generator=train_generator,steps_per_epoch=step_size_train,epochs=10)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training_generator.py", line 217, in fit_generator
class_weight=class_weight)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training.py", line 1211, in train_on_batch
class_weight=class_weight)
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "C:\Users\USUARIO\AppData\Roaming\SPB_Data\.conda\envs\ml\lib\site-packages\keras\engine\training_utils.py", line 138, in standardize_input_data
str(data_shape))
ValueError: Error when checking target: expected dense_3 to have shape (4,) but got array with shape (2,)
DmitryM8 (author)  juandavidbarrero12 months ago
Ok, can you change the number of neurons in the last layer to 2, since you only have two classes?
mbnet_keras.py, line 30
Also, I can see that you are using Anaconda on Windows. The error you had before may be related to the quirks of the different file systems (I used Ubuntu 16.04).
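For readers hitting the same shape mismatch: one way to make the fix above automatic is to derive the size of the last Dense layer from the number of class folders, so the head always matches the generator's one-hot targets. The folder names below are a stand-in layout created just so this sketch is self-contained; point it at your real dataset instead:

```python
import os

# Stand-in dataset layout with two class folders (illustrative names)
image_dir = 'image'
for sub in ('your_class1', 'your_class2'):
    os.makedirs(os.path.join(image_dir, sub), exist_ok=True)

# One sub-folder per class -> size of the final Dense layer
num_classes = len([d for d in os.listdir(image_dir)
                   if os.path.isdir(os.path.join(image_dir, d))])
print(num_classes)  # 2 for this layout

# Then the head would be built as Dense(num_classes, activation='softmax'),
# so the "expected dense_3 to have shape (4,)" error cannot occur.
```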
andri_yadi4 months ago
I've been successful in following the steps up until generating the .pb file.
When converting to kmodel, it said "Layer PAD is not supported".
Please advise.
DmitryM8 (author)  andri_yadi4 months ago
Hi!
There's no need to generate a .pb file; the following command takes care of direct conversion of the .h5 model to .tflite:

tflite_convert --output_file=model.tflite --keras_model_file=my_model.h5

Concerning the PAD problem: the previous version of the .kmodel converter didn't support padding layers, but the developers added support about two weeks ago. I'll ask them if they updated the Maix toolbox. Meanwhile, if your project is urgent, e-mail me at dmitrywat@gmail.com and I'll send you the beta with padding support.
Awesome!... Just send you an email.
DmitryM8 (author)  andri_yadi4 months ago
https://github.com/kendryte/nncase/releases

I just checked the latest release of nncase (the converter), RC4, and it supports padding. You can manually download it from GitHub and then unpack it in the ncc folder inside Maix_Toolbox!
Maix team will update their get_nncase.sh script shortly as well.
Watsbear DmitryM83 months ago
Hello,
Even after updating ncc to the newest version (I tested both RC4 and RC5) I still seem to be having the PAD problem. I followed all the steps as given. Do you know what might be wrong, or if there's something that needs to be done that is not explained?
DmitryM8 (author)  Watsbear2 months ago
Another source of issues with PAD is a wrong padding size. I solved this problem by including a modified version of Mobilenet together with the training script. Can you clone my repository again and see if the problem is solved?

Everything works well now.
I published my source code here: https://github.com/andriyadi/Maix-LogoClassifier
And a demo video:

Big thanks for your support! Keep it up.
keithr03 months ago
I've been trying to run your examples.
test.py fails, not being able to find 24.jpg.
mbnet_keras.py fails with the following error:
Epoch 1/10
Traceback (most recent call last):
File "mbnet_keras.py", line 60, in <module>
model.fit_generator(generator=train_generator,steps_per_epoch=step_size_train,epochs=10)
File "/root/miniconda3/envs/ml/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/root/miniconda3/envs/ml/lib/python3.6/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/root/miniconda3/envs/ml/lib/python3.6/site-packages/keras/engine/training_generator.py", line 251, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/root/miniconda3/envs/ml/lib/python3.6/site-packages/keras/callbacks.py", line 79, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/root/miniconda3/envs/ml/lib/python3.6/site-packages/keras/callbacks.py", line 338, in on_epoch_end
self.progbar.update(self.seen, self.log_values)
AttributeError: 'ProgbarLogger' object has no attribute 'log_values'
DmitryM8 (author)  keithr02 months ago
https://github.com/keras-team/keras/issues/3657

You can consult this thread; this is a known Keras issue. It might be possible your dataset is too small. How many images do you have? Are they in the right folder?
DmitryM8 (author)  keithr03 months ago
Hi!
For the first problem: I added the test images back to the repository, so you can download them and try again; it should be working now.
For your second problem: it seems you installed your miniconda with sudo (I see it is in /root/)? It is not recommended to install miniconda in /root; can you try reinstalling it in your user home folder?
jessyratfink4 months ago
Really interesting project - thanks for sharing! :D
DmitryM8 (author)  jessyratfink4 months ago
Thank you! Glad that you liked it.
I added the video about the project and also the full-length demo.