Solution 1:
There is a superb visualization tool called Netron. I used it on your .tflite file, and the input of your model is:
So in your code, at the line where you allocate the ByteBuffer with
1 * d.inputSize * d.inputSize * 3 * numBytesPerChannel
you have to use
1 * 320 * 320 * 3 * 1
The last "1" is for uint8; if you had floats, you would use "4" instead.
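As a sketch, the byte count can be computed with the same formula as in the example app (the 320×320 input size and uint8 type here come from inspecting the .tflite in Netron; the class and method names are just for illustration):

```java
// Sketch: computing the input ByteBuffer size for a TFLite model.
// Assumes a 1 x 320 x 320 x 3 input tensor, as Netron reports for this model.
public class BufferSize {
    public static int inputBytes(int batch, int inputSize, int channels, int bytesPerChannel) {
        return batch * inputSize * inputSize * channels * bytesPerChannel;
    }

    public static void main(String[] args) {
        int uint8Bytes = inputBytes(1, 320, 320, 3, 1); // quantized model: 1 byte per channel
        int floatBytes = inputBytes(1, 320, 320, 3, 4); // float model: 4 bytes per channel
        System.out.println(uint8Bytes); // 307200
        System.out.println(floatBytes); // 1228800
    }
}
```

Note that 307200 is exactly the tensor size the exception below complains about, which is how you can confirm the model is quantized (uint8) with a 320×320 input.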
Solution 2:
After I changed the TensorImage DataType from UINT8 to FLOAT32, it worked.
val tfImageBuffer = TensorImage(DataType.UINT8)
->
val tfImageBuffer = TensorImage(DataType.FLOAT32)
Problem:
I'm trying to run my own custom model for object detection. I created my dataset with Google Cloud Vision (https://console.cloud.google.com/vision/) (I boxed and labeled the images), and it looks like this:
After training the model, I downloaded the TFLite files (labelmap.txt, model.tflite and a json file) from here:
Then I added them to the Android object detection example (https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android).
But when I run the project it crashes:
2020-07-12 18:03:05.160 14845-14883/? E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 14845
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (normalized_input_image_tensor) with 307200 bytes from a Java Buffer with 4320000 bytes.
at org.tensorflow.lite.Tensor.throwIfSrcShapeIsIncompatible(Tensor.java:423)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:189)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:154)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:343)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:197)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:182)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:214)
at android.os.HandlerThread.run(HandlerThread.java:67)
I tried changing the parameter TF_OD_API_IS_QUANTIZED to false and labelOffset to 0, and I also modified this line in TFLiteObjectDetectionAPIModel.java: d.imgData = ByteBuffer.allocateDirect(4 * d.inputSize * d.inputSize * 3 * numBytesPerChannel);
(I replaced the 1 with 4.)
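The two byte counts in the exception can be reproduced with the same size formula. Assuming the example app's 300×300 input with the batch replaced by 4 and numBytesPerChannel = 4 (float, since TF_OD_API_IS_QUANTIZED was set to false), the Java-side buffer comes out far larger than what the model's tensor actually expects (1 × 320 × 320 × 3 uint8, per the Netron inspection in Solution 1):

```java
// Reproducing the byte counts from the IllegalArgumentException.
public class MismatchDemo {
    public static void main(String[] args) {
        // Allocated on the Java side: batch 4, 300x300 input, 3 channels, 4 bytes (float).
        int javaBufferBytes = 4 * 300 * 300 * 3 * 4;
        // Expected by the model tensor: batch 1, 320x320 input, 3 channels, 1 byte (uint8).
        int tensorBytes = 1 * 320 * 320 * 3 * 1;
        System.out.println(javaBufferBytes); // 4320000 -- "a Java Buffer with 4320000 bytes"
        System.out.println(tensorBytes);     // 307200  -- "a TensorFlowLite tensor ... with 307200 bytes"
    }
}
```

So the crash is a size mismatch: the buffer must be allocated as 1 * 320 * 320 * 3 * 1 to match the model.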
I am new to this, and I would really appreciate it if someone could help me understand and resolve the error. Thank you!
Update:
Here are the tflite files: https://drive.google.com/drive/folders/11QT8CgaYF2EseORgGCceh4DT80_pMiFM?usp=sharing (I don't care whether the model recognizes the squares and circles correctly; I just want to check that it runs in the Android app, and then I will improve it.)
Comments
Comment posted by Farmaker
Welcome back! Nice to see that you have made progress. The error points to a problem with the input size. If you have uploaded your project somewhere, I would be happy to take a look.
Comment posted by SolArabehety
haha yes, it's me again. I used exactly the object detection example here: github.com/tensorflow/examples/tree/master/lite/examples/…
Comment posted by Farmaker
When you trained the model on Google Cloud, what dimensions did you use? Here in the example it is 300×300.
Comment posted by SolArabehety
That's the problem: I can't find a way to know what architecture was used to generate the model, and I don't know if it's possible to change it. I configured the model for "object detection", so I believe it has the correct architecture and parameters.
Comment posted by SolArabehety
There! I updated the post with the link with files, just in case someone else needs them. Thanks a lot!