CoreML on iOS Apps

Nathaniel Putera
5 min read · May 4, 2021

Disclaimer: This article is based on my understanding and experience of reading about and using CoreML.

Nowadays we can see artificial intelligence implemented in many places. Personally, the branch I find most interesting is machine learning, the branch of AI that tries to teach machines how to think. We can see machine learning at work in weather prediction: based on the last 7 days, a machine can predict what tomorrow's weather will be. Of course these are just predictions, but if their accuracy is very high, we can trust the result, right? Another common implementation is creating a bot, like a chess bot that can face professional chess players, or a game bot that can beat famous gamers. To achieve these results, we need to go through many steps, from writing the algorithm at the beginning all the way to finishing training the model.

What is a machine learning model? Basically, a machine learning model is the output of an algorithm that has been trained on data. To create one, we write an algorithm and then run it on the specific data we want to use as its base knowledge. It may sound simple, but in reality it is a very long process. First we need to collect data, and we will need a large amount of it to get a good model. Then we need to pre-process the data, for example by cleaning and normalizing it. Lastly, we use the pre-processed data for the training, evaluation, and testing steps; this can take days to complete (it took days in my personal experience because I was using a laptop). To keep things simple, we can instead use a model that someone else has already trained, usually called a pre-trained model.

So how can we implement machine learning in our iOS projects? Fortunately, Apple already provides CoreML, a framework for integrating machine learning models into iOS apps.

source: https://developer.apple.com/documentation/coreml

You can find pre-trained models that Apple provides at https://developer.apple.com/machine-learning/models/. If you don't want to use an existing model, you can also create a new one using the Create ML app that comes bundled with Xcode. If you create a model with Create ML, it is already in the CoreML format, so it is ready to use.

Moving on to the CoreML structure

source: https://developer.apple.com/documentation/coreml

CoreML underpins four domain-specific frameworks: Vision to process images and video, Natural Language to process text, Speech for speech recognition, and Sound Analysis to process audio.

Accelerate (with BNNS) runs on the CPU, and Metal Performance Shaders runs on the GPU. Neither is for training new models; both are frameworks for performing inference. Inference is the process of making predictions using a trained model: basically, it applies the knowledge the model gained from the training process. We can't do the training process on the iPhone, but the iPhone is capable of performing inference with a trained model.

From my understanding, CoreML works like this: after the user inputs an image at the app layer, the image is processed by Vision; the model then makes a prediction with help from the low-level primitive frameworks (BNNS and Accelerate, or Metal Performance Shaders), and the result is passed back to the app.

A Simple Implementation

We need to add a machine learning model to the project; for this project I will use MobileNetV2. After successfully downloading the model, we just need to drag and drop it into the project, and it will look like this:

Don't forget to import the Vision framework, because we will need it to detect the image. After importing Vision, you can add the following code:
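Since the original code appears as a screenshot, here is a minimal sketch of what the request might look like. The `detect(image:)` function name, the error handling, and the small `highestConfidence` helper are my own assumptions, not part of the original project:

```swift
import UIKit
import CoreML
import Vision

// Pure helper: pick the label with the highest confidence. Vision already
// returns classification results sorted by confidence, so this just mirrors
// why taking the first result works.
func highestConfidence(_ results: [(label: String, confidence: Float)]) -> String? {
    results.max(by: { $0.confidence < $1.confidence })?.label
}

// Runs MobileNetV2 (the class Xcode auto-generates from the .mlmodel file)
// on a UIImage and prints the top classification.
func detect(image: UIImage) {
    guard let ciImage = CIImage(image: image) else { return }

    // Wrap the Core ML model so Vision can drive it.
    guard let mlModel = try? MobileNetV2(configuration: MLModelConfiguration()).model,
          let model = try? VNCoreMLModel(for: mlModel) else {
        fatalError("Failed to load MobileNetV2 model")
    }

    // The completion handler receives the classification observations.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let first = results.first else { return }
        print("\(first.identifier) (confidence: \(first.confidence))")
    }

    // Perform the request on the input image.
    let handler = VNImageRequestHandler(ciImage: ciImage)
    try? handler.perform([request])
}
```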

VNCoreMLRequest here is an image-analysis request that uses a CoreML model to process images, and the result will be an array of CoreML-based image-analysis observations. Because we just want to know the name, we will use VNClassificationObservation, which contains the classification information produced by the inference the ML model performed.

The reason I use the first result is that the results come back scored: there isn't just one result, but many, each with its own probability. As we can see in the image below, each prediction has a confidence level, and the first result always has the highest confidence. In this case, you can see the confidence level is 0.899…, or almost 90%.

As you can see, the result is an array, so we take the first value with results.first. Then, because the identifier looks like "lion, king of beasts, Panthera leo" and we only want the "lion" part, I add the following code:
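The original snippet is a screenshot, so here is a sketch of the parsing logic as a small helper (the function name `shortLabel(from:)` is my own):

```swift
import Foundation

// MobileNet identifiers can contain several synonyms, e.g.
// "lion, king of beasts, Panthera leo". We only want the first one.
func shortLabel(from identifier: String) -> String {
    if identifier.contains(",") {
        // Take everything before the first comma and trim whitespace.
        return identifier
            .components(separatedBy: ",")
            .first!
            .trimmingCharacters(in: .whitespaces)
    }
    return identifier
}

print(shortLabel(from: "lion, king of beasts, Panthera leo")) // prints "lion"
```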

If the identifier contains a ",", we take only the value before the first ","; if there is no ",", we can just use the identifier as-is. So, the final result of this example will look like this:

For the complete project, you can look at my GitHub: https://github.com/naelp14/CoreMLSimple.git
Please note that you need to change the provisioning profile to test on a device.

That's it for today's article! Please share your thoughts, and if you want to correct me, feel free to write it in the comments. Thanks a lot, and I hope you enjoy the article.
