Last month, we started previewing DALL·E 2 to a limited number of trusted users to learn about the technology's capabilities and limitations.
Since then, we've been working with our users to actively incorporate the lessons we learn. As of today:
- Our users have collectively created over 3 million images with DALL·E.
- We've enhanced our safety system, improving the text filters and tuning the automated detection & response system for content policy violations.
- Less than 0.05% of downloaded or publicly shared images were flagged as potentially violating our content policy. About 30% of those flagged images were confirmed by human reviewers to be policy violations, leading to an account deactivation.
- As we work to understand and address the biases that DALL·E has inherited from its training data, we've asked early users not to share photorealistic generations that include faces, and to flag problematic generations. We believe this has been effective in limiting potential harm, and we plan to continue the practice in the current phase.
Learning from real-world use is an important part of our commitment to developing and deploying AI responsibly, so we're starting to widen access to users who joined our waitlist, slowly but steadily.
We intend to onboard up to 1,000 people every week as we iterate on our safety system and require all users to abide by our content policy. We hope to increase the rate at which we onboard new users as we learn more and gain confidence in our safety system. We're inspired by what our users have created with DALL·E so far, and excited to see what new users will create.
In the meantime, you can get a preview of these creations on our Instagram account: @openaidalle.