Tuesday, January 30, 2024

Blind Man Develops AI-Based 3D Modeling and Printing Workflow



I’ve always thought of 3D modeling, and by extension 3D printing, as a visual medium. While 3D-printed objects are certainly physical, the entire software chain that leads to them exists purely in the digital world. So my assumption was that, sadly, this hobby isn’t viable for people living with visual impairments. But Redditor Mrblindguardian proved me wrong by developing an AI-based workflow that lets him model and 3D print his own custom designs, such as a one-winged dragon.

In addition to the obvious challenges, this comes with some difficulties that our sighted readers may not be aware of. We have language to describe what we see, but that language doesn’t hold the same meaning for people who have never been able to see.

For example, consider a question posed by William Molyneux in 1688: “Could a blind person, upon suddenly gaining the ability to see, recognize an object by sight that he’d previously known by feel?”

In 2011, researchers at MIT answered that question by testing the premise in the real world with subjects who had received sight-restoring procedures. The results showed that tactile understanding didn’t immediately carry over to the visual world. That should give you some insight into the challenges Mrblindguardian faced.

His solution is ingenious, and it takes advantage of AI tools that only recently became available. Mrblindguardian starts by typing out a description of what he thinks a dragon looks like, with the help of googled descriptions. He then uses Luma AI’s Genie service to generate a 3D model based on that description.

To verify that the model “looks” right without the ability to see it, Mrblindguardian takes screenshots of the generated 3D model and feeds them to ChatGPT to describe. If the AI-generated description matches his expectations, then he knows the model looks right, at least to ChatGPT. If it doesn’t, he can refine his Luma AI Genie prompt and repeat the process until the results are satisfactory.
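Mrblindguardian works through the ChatGPT interface itself, but for readers curious how that verification step might look as a script, here is a minimal Python sketch using the OpenAI SDK. The model name, filename, and prompt wording are my own illustrative choices, not his actual setup:

```python
# Minimal sketch: ask a vision-capable model to describe a screenshot of a
# generated 3D model, so the description can be checked against expectations
# without seeing the image. Assumes the OpenAI Python SDK (`pip install
# openai`) and an OPENAI_API_KEY in the environment.
import base64
from openai import OpenAI

client = OpenAI()

def describe_screenshot(path: str) -> str:
    # Encode the local screenshot as a base64 data URL, the format the
    # chat completions API accepts for image input.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do here
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Describe this 3D model in detail: overall shape, "
                            "number of wings and limbs, pose, and any obvious "
                            "defects or missing parts."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content

# A screen reader can then speak the returned description aloud.
print(describe_screenshot("dragon.png"))
```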

With a suitable STL file in hand, Mrblindguardian can then use slicing software that’s compatible with screen readers. To get a better sense of what’s on screen, he can again have ChatGPT generate descriptions from screenshots. Once he’s happy with the results, Mrblindguardian can ask a sighted friend to verify that the file is ready to print. If it is, he can print it and then process it by feel.
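The article doesn’t name the slicer he uses, but one way to sidestep GUI accessibility issues entirely is to slice from the command line, since slicer output then arrives as plain text that a screen reader handles natively. A sketch of that alternative approach, assuming PrusaSlicer is installed and on the PATH (the file names are placeholders):

```python
# Sketch: slicing an STL from a script instead of a GUI, keeping the whole
# step in plain text where a screen reader is fully at home. Assumes the
# prusa-slicer executable is installed; file names are placeholders, not
# Mrblindguardian's actual setup.
import subprocess

result = subprocess.run(
    [
        "prusa-slicer",
        "--export-gcode",         # slice to G-code instead of opening the GUI
        "--load", "printer.ini",  # a previously exported print/printer profile
        "--output", "dragon.gcode",
        "dragon.stl",
    ],
    capture_output=True,
    text=True,
)

# Warnings and errors come back as plain text for the screen reader to speak.
print(result.stdout or result.stderr)
```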

This is a laborious process, but it works. Mrblindguardian used it to 3D print this custom one-winged dragon, bringing a creature from his imagination into the real world where he can feel it himself.

I can’t help but feel tremendously impressed and inspired by Mrblindguardian’s achievement, and I hope that others are able to take advantage of this workflow to produce their own designs.



Source link
