I’m trying to use ML Kit on iOS from a shared module. I added the pods using CocoaPods and am trying to follow the instructions at https://developers.google.com/ml-kit/vision/text-recognition/v2/ios
I get a photo from the common module in the form of a byte array, using the “peekaboo” library:
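For reference, the common side just declares the matching expect function that the iOS actual below implements:

// commonMain - the expect counterpart of the iOS actual below
expect fun saveImage(byteArrays: ByteArray?)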
import kotlinx.cinterop.ExperimentalForeignApi
import kotlinx.cinterop.addressOf
import kotlinx.cinterop.usePinned
import platform.Foundation.NSData
import platform.Foundation.create
import platform.UIKit.UIImage
// ...plus the cinterop imports for the ML Kit pods (the package name depends on the pod {} setup)

@OptIn(ExperimentalForeignApi::class)
actual fun saveImage(byteArrays: ByteArray?) {
    if (byteArrays != null) {
        // Wrap the raw image bytes in NSData directly - decoding them
        // as a UTF-8 string would corrupt binary data
        val data = byteArrays.usePinned { pinned ->
            NSData.create(bytes = pinned.addressOf(0), length = byteArrays.size.toULong())
        }
        val imageFromBytes = UIImage(data = data)
        val image = MLKVisionImage(imageFromBytes)
        image.orientation = imageFromBytes.imageOrientation
        MLKTextRecognizer.textRecognizer().processImage(image) // <- this line does not compile
    }
}
The docs show a “process” method on the text recognizer, but in the Kotlin binding everything goes through something called “processImage”, which I don’t quite understand what to do with, and which seems to expect a different image type:
‘Type mismatch: inferred type is MLKVisionImage but MLKCompatibleImageProtocol was expected’
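As far as I understand, “processImage” is just the Objective-C name of the same method (the selector is processImage:completion:, which Swift renames to process), so I assume the full call is supposed to look something like this - the completion signature is my guess from the Objective-C header:

// My guess at the complete call - the completion types are assumptions on my part
MLKTextRecognizer.textRecognizer().processImage(image) { result, error ->
    if (error != null) {
        println("recognition failed: ${error.localizedDescription}")
    } else {
        println(result?.text) // MLKText.text should hold the full recognized string
    }
}

But that doesn’t change anything about the first argument, which is where the type mismatch is reported.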
This is the first time I’m using the expect/actual mechanism together with cinterop and Swift/Objective-C, so maybe I’ve simply misunderstood how to do it correctly.
Or maybe I can get the result some other way?
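For example, would it be legitimate to force the cast, on the assumption that the real Objective-C class does conform to the protocol at runtime even though the generated Kotlin stub doesn’t declare it? Something like this (untested, purely a guess):

// Untested idea: force the protocol type and rely on the ObjC runtime conformance
val recognizer = MLKTextRecognizer.textRecognizer()
recognizer.processImage(image as MLKCompatibleImageProtocol) { result, error ->
    println(result?.text ?: error?.localizedDescription)
}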