I'm attempting to use ML Kit on iOS from a shared module.
I added the pods using CocoaPods and am trying to follow the instructions at https://developers.google.com/ml-kit/vision/text-recognition/v2/ios
I get a photo from the common module in the form of a byte array using the "peekaboo" library:
@OptIn(ExperimentalForeignApi::class, BetaInteropApi::class)
actual fun saveImage(byteArrays: ByteArray?) {
    if (byteArrays != null) {
        val string = NSString.create(string = byteArrays.decodeToString())
        val data = string.dataUsingEncoding(NSUTF8StringEncoding)
        val imageFromBytes = data?.let { UIImage(data = it) }
        val image = imageFromBytes?.let { MLKVisionImage(it) }
        if (image != null) {
            image.orientation = imageFromBytes.imageOrientation
            MLKTextRecognizer.textRecognizer().processImage(image)
        }
    }
}
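
Separately, I suspect the decodeToString()/dataUsingEncoding round trip corrupts the bytes, since JPEG/PNG data is generally not valid UTF-8. A minimal sketch of the direct conversion I'm planning to switch to (assuming the usual kotlinx.cinterop pinning APIs and a non-empty array):

import kotlinx.cinterop.ExperimentalForeignApi
import kotlinx.cinterop.addressOf
import kotlinx.cinterop.usePinned
import platform.Foundation.NSData
import platform.Foundation.create

// Hand the raw bytes to Foundation directly; no lossy UTF-8 round trip.
@OptIn(ExperimentalForeignApi::class)
fun ByteArray.toNSData(): NSData = usePinned { pinned ->
    NSData.create(bytes = pinned.addressOf(0), length = size.toULong())
}

With that, the UIImage line would become val imageFromBytes = UIImage(data = byteArrays.toNSData()).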
There is a "process" in the TextRecognizer class, but mostly it comes down to something called "processImage",
which I don't quite understand what to do with, and which seems to ask for a different image type:
'Type mismatch: inferred type is MLKVisionImage but MLKCompatibleImageProtocol was expected'
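
From reading the generated headers, my guess is that the Kotlin bindings simply don't record that MLKVisionImage conforms to the ObjC MLKCompatibleImage protocol, so an explicit cast plus the completion-based call might get past the error. A sketch of what I mean (the completion parameter types and the cinterop package names below are my assumptions; adjust them to whatever your cocoapods block actually generated):

// NOTE: exact package names depend on your cocoapods {} setup; these are guesses.
import cocoapods.MLKitTextRecognition.MLKTextRecognizer
import cocoapods.MLKitVision.MLKCompatibleImageProtocol
import cocoapods.MLKitVision.MLKVisionImage

fun recognizeText(image: MLKVisionImage) {
    // The ObjC header takes id<MLKCompatibleImage>; this cast is checked at
    // runtime against the ObjC protocol, which MLKVisionImage does adopt.
    val compatible = image as MLKCompatibleImageProtocol
    MLKTextRecognizer.textRecognizer().processImage(compatible) { result, error ->
        if (error != null) {
            println("Recognition failed: ${error.localizedDescription}")
        } else {
            println("Recognized: ${result?.text}")
        }
    }
}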