All-New On-Device ML Kit SDK Introduced

The new standalone ML Kit SDK is focused on on-device ML

Google launched ML Kit at I/O 2018, making it easy for mobile developers to add machine learning to their apps. Today, more than 25,000 Android and iOS applications make use of ML Kit's features.
 
Now, with its latest release, Google has introduced changes that make ML Kit even easier to use.
 
 
The original version of ML Kit was tightly integrated with Firebase. Because mobile developers asked for more flexibility, the new standalone ML Kit SDK makes all the on-device APIs available without requiring Firebase integration. You can still use ML Kit and Firebase together to get the best of both products if you choose to.
 
All ML Kit resources can now be found here.
 
With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device ML offers over cloud ML:
  • It's fast, unlocking real-time use cases: since processing happens on the device, there is no network latency. This means you can run inference on a stream of images or video, or multiple times a second on text strings.
  • It works offline: you can rely on the ML Kit APIs even when the network is spotty or your app's end user is in an area without connectivity.
  • Privacy is retained: since all processing is performed locally, there is no need to send sensitive user data over the network to a server.
If you are already using ML Kit, the Migration guide provides step-by-step instructions for updating your app to the new ML Kit SDK.
 
The cloud-based APIs, model deployment, and AutoML Vision Edge remain available through Firebase Machine Learning.
 
More Features of the New Standalone ML Kit SDK
  • Shrink your app footprint with Google Play Services
     
    • You can now ship ML Kit through Google Play Services, resulting in a smaller app footprint, and the model can be reused between apps.
    • Apart from Barcode scanning and Text recognition, Face detection/contour (model size: 20MB) has now been added to the list of APIs that support this functionality.
      // Face detection / Face contour model
      // Delivered via Google Play Services outside your app's APK…
      implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

      // …or bundled with your app's APK
      implementation 'com.google.mlkit:face-detection:16.0.0'
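Once either dependency is in place, using the face detection API might look roughly like the sketch below. The helper function, its parameters, and the contour-mode option chosen here are illustrative assumptions, not part of the announcement:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Hypothetical helper: runs the face detector on a single camera frame.
// Works the same whether the model was delivered via Google Play Services
// or bundled with the app's APK.
fun detectFaces(bitmap: Bitmap, rotationDegrees: Int) {
    // Requesting full face contours here, purely as an example option.
    val options = FaceDetectorOptions.Builder()
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .build()

    val detector = FaceDetection.getClient(options)

    val image = InputImage.fromBitmap(bitmap, rotationDegrees)
    detector.process(image)
        .addOnSuccessListener { faces ->
            // Each detected Face carries a bounding box and contour points.
        }
        .addOnFailureListener { e ->
            // Inference runs on-device, so failures here are not network errors.
        }
}
```

Since processing is entirely on-device, this call involves no network round trip, which is what makes per-frame invocation from a camera pipeline feasible.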
  • Jetpack Lifecycle / CameraX support
     
    • Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user or system. This also makes CameraX integration easier.
      // ML Kit now supports Lifecycle
      val recognizer = TextRecognizer.newInstance()
      lifecycle.addObserver(recognizer)

      // ...

      // Just like CameraX
      val camera = cameraProvider.bindToLifecycle( /* lifecycleOwner= */ this,
          cameraSelector, previewUseCase, analysisUseCase)
  • ML Kit x CameraX Codelab
     
    • To see these features in action, try out the ML Kit x CameraX codelab.