Machine Learning with Apple’s Core ML 3 is Exciting and Personal

Jun 22, 2019 | #iOS

The scope of machine learning is just beginning to be imagined. The number of applications has grown at a tremendous rate in recent years, and nowadays almost every activity that involves analyzing user data relies on machine learning. Fields like medicine, sports, and even the arts are taking advantage of machine learning apps to improve their current procedures.

For some time, Apple has provided the tools to take advantage of this powerful technology with Create ML, Core ML, and a set of domain APIs. Create ML lets developers create and train custom models. With Core ML, developers can integrate machine learning models into their apps. Core ML has always supported diverse model types, e.g., generalized linear models, tree ensembles, support vector machines, and feedforward, convolutional, and recurrent neural networks. The domain APIs include functionality for image, speech, and sound analysis.
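
As a minimal sketch of what the Core ML side of that workflow looks like, the snippet below classifies an image with a bundled model through the Vision framework. The "MobileNetV2" class is an assumption here; it stands in for whatever class Xcode generates when a .mlmodel file is added to the project.

```swift
import CoreML
import Vision
import UIKit

// Sketch: classify a UIImage with a bundled Core ML model.
// "MobileNetV2" stands in for the class Xcode generates from a .mlmodel file.
func classify(_ image: UIImage) throws {
    let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Label: \(best.identifier), confidence: \(best.confidence)")
    }

    guard let cgImage = image.cgImage else { return }
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```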

Create ML has added new model types, bringing the total to nine (a short training sketch follows this list):

  • Image classifier.
  • Object detector.
  • Sound classifier.
  • Activity classifier.
  • Text classifier.
  • Word tagger.
  • Tabular classifier.
  • Tabular regressor.
  • Recommender.
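
Below is a minimal Create ML sketch, assuming a macOS playground and placeholder directory paths, that trains one of these model types, an image classifier, and exports it as a .mlmodel file.

```swift
import CreateML
import Foundation

// Sketch (macOS playground): train an image classifier with Create ML.
// The paths are placeholders; each subfolder name is used as a class label.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingImages")
let testingDir  = URL(fileURLWithPath: "/path/to/TestingImages")

// Train on folders of labeled images.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Check accuracy on a held-out set before exporting.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Evaluation error: \(evaluation.classificationError)")

// Export a .mlmodel file that can be dropped into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))
```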

This year, Apple introduced new domain APIs that expand the reach of the framework. Sentiment analysis classifies text in real time according to its positive or negative nature. Word embeddings provide the ability to find semantically similar words; for example, “Moon” is semantically close to “Night” and far from “Dog”. There is also a new, fully on-device speech-to-text converter that analyzes not only what is spoken but how it is spoken, making it possible to distinguish a normal voice from one with high jitter.
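
The sentiment and word-embedding features are exposed through the Natural Language framework. The sketch below, assuming iOS 13 or later, shows both: scoring a sentence and querying an English word embedding.

```swift
import NaturalLanguage

// 1. Sentiment analysis: the score ranges from -1.0 (negative) to 1.0 (positive).
let text = "I really enjoyed using this app."
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text
let (sentiment, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
print("Sentiment score: \(sentiment?.rawValue ?? "n/a")")

// 2. Word embeddings: find semantically similar words and measure distances.
if let embedding = NLEmbedding.wordEmbedding(for: .english) {
    print(embedding.neighbors(for: "moon", maximumCount: 5))   // semantically close words
    print(embedding.distance(between: "moon", and: "night"))   // smaller distance = closer
    print(embedding.distance(between: "moon", and: "dog"))     // larger distance = farther
}
```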

Core ML 3 was presented at WWDC19. It is optimized for on-device performance and protects privacy by doing all processing locally, never running on a server. The newest version of Core ML brings greater model flexibility and model personalization. Core ML now supports more than 100 neural network layer types, making it possible to import state-of-the-art models into an app. A new TensorFlow converter is already in place, and an ONNX converter is soon to be released. The model gallery has also been updated. Core ML 3 also enables on-device model personalization: the capability to adjust and fine-tune a model directly on the device, so a single model shipped in an app can be adapted to each user's personal use.
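
Personalization is driven by Core ML 3's MLUpdateTask, which fine-tunes an updatable model with examples collected on the device. The sketch below is a rough outline under stated assumptions: the model URL and the "image"/"label" feature names are hypothetical and must match the training inputs declared by an updatable model.

```swift
import CoreML

// Sketch: fine-tune an updatable Core ML model on the device.
// The feature names "image" and "label" are hypothetical; they must match
// the training inputs declared by the updatable model.
func personalizeModel(at modelURL: URL,
                      with samples: [(image: MLFeatureValue, label: String)],
                      saveTo updatedModelURL: URL) throws {
    // Wrap the user's examples as a batch of feature providers.
    let featureProviders: [MLFeatureProvider] = try samples.map { sample in
        try MLDictionaryFeatureProvider(dictionary: [
            "image": sample.image,
            "label": MLFeatureValue(string: sample.label)
        ])
    }
    let trainingData = MLArrayBatchProvider(array: featureProviders)

    // Run the update and write the personalized model back to disk.
    let updateTask = try MLUpdateTask(forModelAt: modelURL,
                                      trainingData: trainingData,
                                      configuration: nil,
                                      completionHandler: { context in
        try? context.model.write(to: updatedModelURL)
    })
    updateTask.resume()
}
```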

Although Core ML is supported on all Apple devices (iPad, iPhone, and Apple Watch), one important consideration is that it is made exclusively for Apple's operating systems. This is a big limitation compared to Google's ML Kit, which works on both Android and iOS.

Photo by Wahid Khene on Unsplash

About Us: Krasamo is a mobile app development company focused on the Internet-of-Things and Digital Transformation.

Click here to learn more about our mobile development services.
