Machine Learning with Apple’s Core ML 3 is Exciting and Personal
The scope of machine learning is only beginning to be imagined, and the number of applications has grown at a tremendous rate in recent years. Nowadays, almost any activity that involves analyzing user data relies on machine learning. Fields like medicine, sports, and even the arts are taking advantage of apps that use machine learning to improve current procedures.
For some time Apple has provided a set of frameworks to take advantage of this powerful tool: Create ML, Core ML, and several Domain APIs. Create ML lets developers create and train custom models. With Core ML, developers can integrate machine learning models into their apps. Core ML has always supported diverse model types, e.g., generalized linear models, tree ensembles, support vector machines, and feedforward, convolutional, and recurrent neural networks. The Domain APIs include functionality for image, speech, and sound analysis.
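As a minimal sketch of what "integrating a model into an app" looks like, here is the common Vision + Core ML pattern for running an image classifier. `FlowerClassifier` is a hypothetical model: when you drag an `.mlmodel` file into an Xcode project, Xcode generates a Swift class with that name, and `photo` stands in for any `CGImage` your app already has.

```swift
import CoreML
import Vision

// Wrap the Xcode-generated model class (FlowerClassifier is hypothetical)
// so Vision can drive it.
func classify(photo: CGImage) throws {
    let coreMLModel = try FlowerClassifier().model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    // The request runs the model and hands back ranked classifications.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Top label: \(top.identifier), confidence: \(top.confidence)")
    }

    // Vision handles scaling/cropping the image to the model's input size.
    let handler = VNImageRequestHandler(cgImage: photo, options: [:])
    try handler.perform([request])
}
```

Because Vision sits in front of Core ML here, the app does not need to manually resize or convert pixel buffers to match the model's expected input.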
Create ML has added new model types, bringing the total to nine, including:
- Image classifier.
- Object detector.
- Sound classifier.
- Activity classifier.
- Text classifier.
- Word tagger.
- Tabular classifier.
- Tabular regressor.
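To give a feel for how little code these model types require, below is a hedged sketch of training one of them, an image classifier, with Create ML in a macOS playground. The directory paths are assumptions: `labeledDirectories(at:)` expects a folder whose subfolders are named after each class and contain that class's images.

```swift
import CreateML
import Foundation

// Hypothetical folders: Training/ and Testing/ each contain one
// subfolder per class label (e.g. Training/rose, Training/tulip).
let trainingDir = URL(fileURLWithPath: "/path/to/Training")
let testingDir = URL(fileURLWithPath: "/path/to/Testing")

// Train the classifier from the labeled directory structure.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Check accuracy on held-out images before shipping the model.
let evaluation = classifier.evaluation(
    on: .labeledDirectories(at: testingDir))
print(evaluation)

// Export an .mlmodel file that Core ML can load in an app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/FlowerClassifier.mlmodel"))
```

The other model types (sound classifier, text classifier, tabular regressor, and so on) follow the same shape: construct the model from a data source, evaluate, then `write(to:)` an `.mlmodel` file.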
This year Apple has introduced new Domain APIs that expand the reach of the framework. Sentiment analysis classifies text in real time as positive or negative. Word embedding provides the ability to find semantically similar words; for example, "moon" is semantically close to "night" and far from "dog". A new, fully on-device speech-to-text converter analyzes not only what is spoken but how it is spoken, making it possible to distinguish a normal voice from one with high jitter.
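The word-embedding example above maps directly onto the `NLEmbedding` API in the NaturalLanguage framework. A small sketch, assuming the OS-provided English embedding is available on the device:

```swift
import NaturalLanguage

// The OS ships a prebuilt English word embedding (iOS 13 / macOS 10.15+).
if let embedding = NLEmbedding.wordEmbedding(for: .english) {
    // Distance is cosine-based: smaller means more semantically similar.
    let moonToNight = embedding.distance(between: "moon", and: "night")
    let moonToDog = embedding.distance(between: "moon", and: "dog")
    // "moon" should sit closer to "night" than to "dog".
    print(moonToNight < moonToDog)

    // Nearest neighbors of a word in the embedding space.
    let neighbors = embedding.neighbors(for: "moon", maximumCount: 5)
    print(neighbors)
}
```

Because the embedding ships with the OS, lookups like these run entirely on device, consistent with Apple's privacy framing for the rest of the stack.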
At WWDC19 Apple presented Core ML 3, which is optimized for on-device performance and preserves privacy by doing all processing locally rather than on a server. The newest version of Core ML provides model flexibility and model personalization. Core ML now supports more than 100 neural network layer types, making it possible to import state-of-the-art models into an app. A converter from TensorFlow is already available, and an ONNX converter is coming soon. The model gallery has also been updated. Core ML 3 also enables on-device model personalization: the ability to adjust and fine-tune a model on the device itself, so a single model shipped in an app can be adapted to each individual user.
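On-device personalization is exposed through `MLUpdateTask`. The sketch below shows the general shape under two assumptions: the model at `modelURL` was compiled with updatable layers, and `batchProvider` (both names are hypothetical) supplies training examples matching the model's declared training inputs.

```swift
import CoreML

// Retrain an updatable model with the user's own examples, entirely
// on device, then persist the personalized copy.
func personalize(modelURL: URL, batchProvider: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: batchProvider,
        configuration: MLModelConfiguration()) { context in
            // context.model is the updated model; write it back to disk
            // so future predictions use the personalized weights.
            try? context.model.write(to: modelURL)
        }
    task.resume()  // training runs asynchronously
}
```

Since the update runs locally and the result is written straight back to the device's storage, the user's training examples never leave the device.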
Although Core ML is supported on all Apple devices (iPhone, iPad, and Apple Watch), one important consideration is that it runs exclusively on Apple platforms. This is a significant limitation compared with Google's ML Kit, which works on both Android and iOS.
Copyright © 2019 Krasamo Inc. All rights reserved. All Trademarks are the property of their respective owners.