How we enhanced AWS SageMaker Object Detection with “Mandarins”

Object detection is a computer-vision task that locates objects in an image, with applications ranging from detecting human faces and cars to medical examinations such as finding tumors.

In this blog we share a trick to enhance the accuracy of the AWS SageMaker Object Detection algorithm by supplying negative samples, utilizing its built-in multiclass support.

Business Case

Tessa has many rules in place to approve an ad, and one of them is to make sure that there is at least one photo with a visible rego plate. Existing rego-recognition services don't cope well with challenging photo conditions, such as when the rego plate is at a steep angle or the lighting is poor, resulting in many missed detections. We built an AI model on AWS SageMaker to detect rego plates and overcome this issue.

Amazon SageMaker gives every developer and data scientist the ability to build, train, and deploy machine learning models quickly. It is a fully managed service that covers the entire machine learning workflow: labelling and preparing data, choosing an algorithm, training the model, tuning and optimizing it for deployment, and making predictions.

Data Preparation

Images of cars from various angles

Our first step is to remove images of cars that do not show a rego plate. For this we use our in-house AI tech, Cyclops.

Cyclops Tech

Cyclops can classify car images into 27 categories, such as boot, passenger seat, side mirror, dashboard, full rear, and full front, with 97.2% accuracy.

We used Cyclops to categorize the images and remove the 9,500 without a rego plate, leaving us with only 1,500. Next came the unavoidable job of manually labelling those 1,500 images, although the workload had already been dramatically reduced.

We then split the 1,500 images into 1,300 for training and 200 for validation, and uploaded them to our S3 bucket together with JSON annotation files, a reformatted version of our CSV labels, to satisfy the AWS SageMaker input requirements.
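The CSV-to-JSON conversion can be sketched as follows. SageMaker's built-in Object Detection algorithm expects one JSON annotation per image with `file`, `image_size`, `annotations`, and `categories` fields; the CSV column layout below (filename, image width/height, then corner coordinates of the box) is an assumption for illustration, not our actual export format.

```python
import json

def csv_row_to_sagemaker_json(row):
    """Convert one CSV label row into the per-image JSON annotation
    format expected by the SageMaker Object Detection algorithm.
    Assumed CSV columns: filename, width, height, xmin, ymin, xmax, ymax.
    """
    filename, w, h, xmin, ymin, xmax, ymax = row
    return {
        "file": filename,
        "image_size": [{"width": w, "height": h, "depth": 3}],
        "annotations": [{
            "class_id": 0,            # 0 = rego plate (our only class at this stage)
            "left": xmin,
            "top": ymin,
            "width": xmax - xmin,     # SageMaker wants width/height, not xmax/ymax
            "height": ymax - ymin,
        }],
        "categories": [{"class_id": 0, "name": "rego_plate"}],
    }

annotation = csv_row_to_sagemaker_json(["car_001.jpg", 1280, 720, 510, 400, 690, 455])
print(json.dumps(annotation, indent=2))
```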

Building the Model

Jupyter Notebook

In less than 5 minutes we had started our training job, and we continuously monitored the training progress from the CloudWatch logs. Training completed after an hour on an ml.p2.16xlarge instance, and we got a validation accuracy of 93.5%.
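For a rough idea of what such a training job looks like, here is a hedged sketch of the hyperparameters for the built-in Object Detection algorithm. The backbone, batch size, learning rate, and epoch count below are illustrative assumptions, not our production settings; only `num_training_samples` reflects our actual 1,300-image split.

```python
# Illustrative hyperparameters for the SageMaker built-in Object Detection
# algorithm (an SSD model). All tuning values here are assumptions.
hyperparameters = {
    "base_network": "resnet-50",    # SSD backbone network
    "use_pretrained_model": 1,      # transfer-learn from pretrained weights
    "num_classes": 1,               # rego plate only, at this stage
    "num_training_samples": 1300,   # size of our training split
    "mini_batch_size": 32,
    "learning_rate": 0.001,
    "epochs": 30,
    "image_shape": 512,
}

# With the SageMaker Python SDK, launching the job then boils down to
# (assuming `role`, `session`, and the S3 input paths are already set up):
#
#   from sagemaker.estimator import Estimator
#   from sagemaker import image_uris
#   container = image_uris.retrieve("object-detection", region="ap-southeast-2")
#   est = Estimator(container, role, instance_count=1,
#                   instance_type="ml.p2.16xlarge",
#                   sagemaker_session=session)
#   est.set_hyperparameters(**hyperparameters)
#   est.fit({"train": s3_train, "validation": s3_validation})
print(hyperparameters["num_training_samples"])
```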

Monitoring validation and training accuracy via CloudWatch

We created the model endpoint, which is as simple as executing one line of code, and that was it: we had an API endpoint ready to serve inference. It took us around 1.5 weeks to get to this point, with the majority of the time spent on data preparation. This is a real game changer; in our experience, building an end-to-end AI tech like this used to take at least 2 months.
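The endpoint returns detections as a JSON document of the form `{"prediction": [[class_index, confidence, xmin, ymin, xmax, ymax], ...]}` with coordinates normalized to [0, 1]. A small helper like the one below (a sketch; the sample response is hypothetical) filters by confidence and converts the boxes to pixel coordinates.

```python
def parse_detections(response, img_w, img_h, threshold=0.5):
    """Parse a SageMaker Object Detection endpoint response.
    Keeps detections at or above the confidence threshold and converts
    normalized box coordinates to pixels."""
    boxes = []
    for cls, score, xmin, ymin, xmax, ymax in response["prediction"]:
        if score < threshold:
            continue  # drop low-confidence detections
        boxes.append({
            "class_id": int(cls),
            "score": score,
            "box": (xmin * img_w, ymin * img_h, xmax * img_w, ymax * img_h),
        })
    return boxes

# A hypothetical response for a 1280x720 photo: one confident rego plate
# and one low-confidence detection that the threshold filters out.
sample = {"prediction": [[0, 0.97, 0.40, 0.55, 0.55, 0.63],
                         [0, 0.12, 0.10, 0.10, 0.20, 0.15]]}
kept = parse_detections(sample, 1280, 720)
print(kept)
```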

Testing the Model

False Positive error rate at various confidence score thresholds

We also noticed that false positives occurred more frequently on images such as dashboards and GPS infotainment screens, which contain lots of objects that look like rego plates. Validation accuracy during training didn't reveal this, because our validation set did not contain car images without a rego plate. From these facts, we hypothesized that although our model did a great job of locating a rego plate when there was one, it was easily fooled into thinking there was a rego plate when there was none.
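The false positive error rate in the chart above can be measured as the fraction of negative images (no rego plate present) on which the model still reports a detection above a given confidence threshold. A minimal sketch, with illustrative scores rather than our actual measurements:

```python
def false_positive_rate(negative_image_scores, threshold):
    """Given, for each image with no rego plate, the highest confidence
    score the model produced (0.0 when nothing was detected), return the
    fraction of those images with a detection at or above the threshold:
    the false positive error rate at that threshold."""
    fp = sum(1 for s in negative_image_scores if s >= threshold)
    return fp / len(negative_image_scores)

# Illustrative top scores on ten hypothetical negative images
# (dashboards, infotainment screens, odometers, ...).
scores = [0.985, 0.7, 0.62, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.0]
for t in (0.3, 0.5, 0.9):
    print(t, false_positive_rate(scores, t))
```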

An odometer was mistakenly identified as a rego plate with a high confidence score of 0.985

We realized our mistake: our training set did not contain any images of cars without a rego plate, or, to use the proper terminology, negative samples. We should train our model with a balanced mix of positive and negative samples. This way, the AI learns to ignore objects that merely look like a rego plate.

Positive sample (left), Negative sample (right)

Solution

So, what to do? While we were banging our heads in despair, we saw a mandarin 🍊 sitting in the corner of our desk, and it triggered a light-bulb moment. We just needed a way to include images without a rego plate in our training set, right?

The SageMaker Object Detection algorithm allows training with multiple classes, so we decided to train with two classes: Rego Plate and Mandarin. When an image has no rego plate, we digitally paste in a mandarin. Now every image has a bounding box and SageMaker is happy.

A randomly sized mandarin is placed at a random location in images with no rego plate
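The augmentation step can be sketched with Pillow. In the real pipeline we pasted an actual cut-out mandarin photo; here, as a stand-in, a synthetic orange disc is drawn and pasted at a random size and position, and the matching Mandarin-class bounding box is returned.

```python
import random
from PIL import Image, ImageDraw

def paste_mandarin(car_img, seed=None):
    """Paste a synthetic 'mandarin' (an orange disc standing in for a real
    mandarin cut-out) at a random location and size, and return the
    augmented image plus the bounding box for the Mandarin class."""
    rng = random.Random(seed)
    size = rng.randint(40, 120)               # random mandarin size in pixels
    x = rng.randint(0, car_img.width - size)  # random top-left position
    y = rng.randint(0, car_img.height - size)
    mandarin = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    ImageDraw.Draw(mandarin).ellipse([0, 0, size - 1, size - 1],
                                     fill=(255, 140, 0, 255))
    out = car_img.convert("RGBA")
    out.paste(mandarin, (x, y), mandarin)     # alpha-composite the disc
    # SageMaker bounding box format: left, top, width, height
    return out.convert("RGB"), {"class_id": 1, "left": x, "top": y,
                                "width": size, "height": size}

img, box = paste_mandarin(Image.new("RGB", (640, 480), (30, 30, 30)), seed=42)
print(box)
```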

With very high hopes, we restarted the training and tested the new model. Running the test again, the confusion-matrix assessment showed a much better false positive error rate of 20%, as opposed to 80% previously. Furthermore, at a 0.5 confidence score threshold the error rate is merely 0.8%, compared to 4% previously.

False positive error rate comparison between models at various confidence score threshold
Precision and Recall curve comparison between models at various confidence score thresholds

As you can also see from the precision and recall curves above, the intersection between precision and recall for our new model sits at 0.87, which is much better than the old model's 0.8.
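That intersection, the precision-recall break-even point, can be computed from per-threshold measurements by linear interpolation. A small sketch, with illustrative sample points rather than our actual curve data:

```python
def break_even_point(points):
    """Given (precision, recall) pairs measured at increasing confidence
    thresholds, return an approximate break-even point: the value where
    precision == recall, linearly interpolated between the two thresholds
    that straddle the crossing."""
    for (p0, r0), (p1, r1) in zip(points, points[1:]):
        d0, d1 = p0 - r0, p1 - r1
        if d0 <= 0 <= d1:          # the curves cross between these points
            t = d0 / (d0 - d1)     # fraction of the way to the crossing
            return p0 + t * (p1 - p0)
    return None

# Illustrative points: precision rises and recall falls as the threshold grows.
curve = [(0.70, 0.98), (0.80, 0.94), (0.87, 0.87), (0.93, 0.75)]
print(break_even_point(curve))
```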

Summary

Credits

I would also like to credit Eric Yuxuan Lin, an AI Software Engineer on our team who worked on this project.
