Thursday, March 14, 2024

ios – How to use Vision to recognize words' boundingBox or positions – SwiftUI


I want to use VNRecognizeTextRequest with .accurate in Vision to get the boundingBox of each word instead of the boundingBox of whole sentences. Almost everyone only explains how to get the boundingBox of a sentence, not of individual words. Please don't suggest .fast, because its results are particularly poor.

Here is a simple example that gets sentence boundingBoxes:


import SwiftUI
import Vision

struct OCR: View {
  @State var image: UIImage? = UIImage(named: "test")
  @State var texts: [String] = []
  @State var positions: [CGRect] = []
  
  // Convert a Vision normalized rect (origin at bottom-left, values 0...1)
  // into image coordinates (origin at top-left), flipping the y-axis.
  func VNImageRectForNormalizedRect(rect: CGRect, imageSize: CGSize) -> CGRect {
    let width = imageSize.width
    let height = imageSize.height
    
    let x = rect.minX * width
    let y = (1 - rect.maxY) * height
    let rectWidth = rect.width * width
    let rectHeight = rect.height * height
    
    return CGRect(x: x, y: y, width: rectWidth, height: rectHeight)
  }
  
  var body: some View {
    ZStack {
      if let image = image {
        Image(uiImage: image)
          .resizable()
          .aspectRatio(contentMode: .fit)
          .overlay(Canvas { context, size in
            for position in positions {
              let normalizedRect = VNImageRectForNormalizedRect(rect: position, imageSize: image.size)
              context.stroke(Path(normalizedRect), with: .color(.red), lineWidth: 1)
            }
          })
          .onAppear {
            recognizeText(image: image) { t, p in
              texts = t
              positions = p
            }
          }
      } else {
        Text("There is no image")
      }
    }
  }
}

extension OCR {
  func recognizeText(image: UIImage, completion: @escaping ([String], [CGRect]) -> Void) {
    var texts: [String] = []
    var positions: [CGRect] = []
    
    guard let cgImage = image.cgImage else { return }
    let request = VNRecognizeTextRequest { (request, error) in
      guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
        print("Text recognition error: \(error?.localizedDescription ?? "Unknown error")")
        return
      }
      for observation in observations {
        guard let topCandidate = observation.topCandidates(1).first else { continue }
        texts.append(topCandidate.string)
        positions.append(observation.boundingBox)
      }
      DispatchQueue.main.async {
        print(texts)
        print(positions)
        completion(texts, positions)
      }
    }
    request.recognitionLevel = .accurate
    
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
  }
}

#Preview {
  OCR()
}

I visited many forums but found nothing about word-level bounding boxes.



