In my iOS app built with React Native, I am trying to set up the MediaPipe pose landmarker model. Since there is no direct guide for the iOS platform here: developers.google.com/mediapipe/solutions/vision/pose_landmarker, I extracted the `pose_detector.tflite` and `pose_landmarks_detector.tflite` models from `pose_landmarker_full.task`. Then I used the vision-camera-fast-tflite library to run `pose_landmarks_detector.tflite`.
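For reference, here is roughly how I invoke the model in a frame processor. This is a trimmed sketch: it assumes the hook-style API of react-native-fast-tflite together with vision-camera-resize-plugin for the resize step (your setup may differ), and the 256x256 RGB float32 input shape is what I read out of the `.tflite` file, not something confirmed by MediaPipe docs.

```ts
import { useTensorflowModel } from 'react-native-fast-tflite'
import { useFrameProcessor } from 'react-native-vision-camera'
import { useResizePlugin } from 'vision-camera-resize-plugin'

export function usePoseLandmarksProcessor() {
  // Load the landmark model extracted from pose_landmarker_full.task.
  const plugin = useTensorflowModel(require('./pose_landmarks_detector.tflite'))
  const model = plugin.state === 'loaded' ? plugin.model : undefined
  const { resize } = useResizePlugin()

  return useFrameProcessor(
    (frame) => {
      'worklet'
      if (model == null) return
      // Resize the raw camera frame to the model's assumed
      // 256x256 RGB float32 input.
      const input = resize(frame, {
        scale: { width: 256, height: 256 },
        pixelFormat: 'rgb',
        dataType: 'float32',
      })
      const outputs = model.runSync([input])
      // Output 0: 33 landmarks x 5 values (x, y, z, visibility, presence).
      const landmarks = outputs[0]
      console.log(`first landmark: x=${landmarks[0]}, y=${landmarks[1]}`)
    },
    [model]
  )
}
```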
I managed to get some data. It looks like the model can detect some changes in movement, but the output landmarks are mostly completely wrong. Any ideas?
Detection example
I went through the guides for several other platforms and also tried to examine the MediaPipe framework written in Python to see if some preprocessing or postprocessing is happening. However, I still have no reasonable explanation for why the model output is incorrect.
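The closest lead I have is the ROI handling: from my reading of the sources, `pose_landmarks_detector.tflite` seems to be meant to run on a rotated square crop around the person (computed from `pose_detector.tflite` or the previous frame's landmarks) rather than on the raw frame, with the raw landmark outputs normalized to that crop and projected back into image space afterwards. Here is a sketch of that back-projection as I understand it (the helper and type names are mine, modeled on mediapipe's `LandmarkProjectionCalculator`):

```ts
// Hypothetical helper; the math mirrors my reading of mediapipe's
// LandmarkProjectionCalculator. All names here are mine, not MediaPipe's.
interface Roi {
  centerX: number  // crop center, normalized [0,1] in image space
  centerY: number
  width: number    // crop size, normalized to image size
  height: number
  rotation: number // crop rotation in radians
}

// (x, y) come out of the landmark model normalized to the crop.
function projectToImage(x: number, y: number, roi: Roi) {
  const dx = x - 0.5
  const dy = y - 0.5
  const cos = Math.cos(roi.rotation)
  const sin = Math.sin(roi.rotation)
  return {
    x: roi.centerX + (dx * cos - dy * sin) * roi.width,
    y: roi.centerY + (dx * sin + dy * cos) * roi.height,
  }
}
```

Since I currently feed the full frame straight into the landmark model, skipping this crop/projection step could explain why the landmarks react to movement but land in the wrong places, but I have not been able to confirm this.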