What is Lumina?

Would you like to use a fully-functional camera in an iOS application in seconds? Would you like to do CoreML image recognition in just a few more seconds on the same camera? Lumina is here to help.

Cameras are used frequently in iOS applications, and the addition of CoreML and Vision in iOS 11 has sparked a wave of applications that perform live object recognition, whether on a still image or on a camera feed.

Writing AVFoundation code can be fun, if sometimes tedious. Lumina lets you skip writing AVFoundation code altogether, and gives you the tools you need to do anything you want with a camera that has already been built for you.

Lumina can:

  • capture still images
  • capture videos
  • capture live photos
  • capture depth data for still images from dual camera systems
  • stream video frames to a delegate
  • scan any QR code or barcode and output its metadata
  • detect the presence of a face and its location
  • use any CoreML compatible model to stream object predictions from the camera feed
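As a sketch of how these capabilities fit together: Lumina exposes a drop-in view controller that you present and listen to via a delegate. The type names, property names, and delegate-method signatures below are assumptions based on this description, not verified against the library; check the repository README for the current API. `MobileNet` stands in for any CoreML-compatible model class you have added to your project.

```swift
import UIKit
import Lumina

class CameraHostViewController: UIViewController, LuminaDelegate {
    func presentCamera() {
        // Assumed API shape: a ready-made camera view controller.
        let camera = LuminaViewController()
        camera.delegate = self
        camera.streamFrames = true                  // stream video frames to the delegate
        camera.trackMetadata = true                 // enable QR/barcode scanning
        camera.streamingModelTypes = [MobileNet()]  // hypothetical CoreML model class
        present(camera, animated: true)
    }

    // Called when a still image (with optional live photo and depth data) is captured.
    func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?,
                  from controller: LuminaViewController) {
        controller.dismiss(animated: true)
        // Use stillImage, livePhotoAt, and depthData here.
    }

    // Called with the metadata of any QR code or barcode found in the feed.
    func detected(metadata: [Any], from controller: LuminaViewController) {
        print(metadata)
    }
}
```

The appeal of this design is that the entire AVFoundation session setup (device discovery, session configuration, output wiring) stays inside the framework; your app only handles results.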

Overview

  • Pricing: Free
  • Resource Link: https://github.com/dokun1/Lumina
  • Resource Maker: David Okun
  • Mobile Platform Destination: iOS Apps
  • Mobile Platform Support: Native iOS
  • Programming Languages: Swift
  • iOS Versions Supported: iOS 11.0+
  • CocoaPods: Lumina
  • Carthage: dokun1/Lumina
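Given the CocoaPods and Carthage entries above, installation presumably follows the standard patterns for each dependency manager:

```
# Podfile (CocoaPods)
pod 'Lumina'

# Cartfile (Carthage)
github "dokun1/Lumina"
```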