SAN FRANCISCO, California – OpenAI, the Microsoft-backed artificial intelligence firm behind the popular image generator DALL-E, on Tuesday announced the launch of a new tool aimed at detecting whether digital images were created by AI.
Authentication has become a major concern amid the rapid development of AI, with governments worried about the proliferation of deepfakes that could disrupt society.
According to the company, OpenAI's image detection classifier, which is currently being tested, can assess the likelihood that a given image originated from one of the company's generative AI models such as DALL-E 3.
OpenAI said that in internal testing on an earlier version, the tool accurately detected around 98 percent of DALL-E 3 images while incorrectly flagging less than 0.5 percent of non-AI images.
However, the company warned that modified DALL-E 3 images were harder to identify, and that the tool currently flags only about 5 to 10 percent of images generated by other AI models.
OpenAI also said it would now add watermarks to AI image metadata as more companies sign on to meet the standards of the Coalition for Content Provenance and Authenticity (C2PA).
The C2PA is a tech industry initiative that sets a technical standard for determining the provenance and authenticity of digital content, in a process known as watermarking.
Facebook parent Meta said last month it would begin labeling AI-generated media in May using the C2PA standard. Google, another AI giant, has also joined the initiative. — Agence France-Presse
Source: www.gmanetwork.com