AI Images From OpenAI To Carry Branded Metadata, Here’s Why
OpenAI has begun embedding metadata in images produced by its Artificial Intelligence (AI) image generator DALL·E 3 and other tools.
OpenAI Driving Transparency With Metadata
Per an OpenAI post on X, the AI firm confirmed that beyond DALL·E 3 images, images generated in ChatGPT will also carry the tag. The metadata is added using the C2PA specification, an open technical standard that lets publishers, companies, and others embed metadata in media.
Integrating metadata makes it possible to verify the origin of an image, and this is one of the benefits OpenAI is pursuing. The company believes the metadata on its images will help individuals, social media platforms, and content distributors easily identify that the media comes from OpenAI. Other related information can also be accessed via an image’s metadata.
At the same time, it is worth noting that the metadata can be stripped from an image either intentionally or accidentally, and in cases where this happens, “its absence doesn’t mean an image is not from ChatGPT or our API,” the company reiterated.
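The two points above, that the embedded tag can be detected and that it can easily be lost, can be illustrated at the file level. The C2PA specification stores its manifest in a dedicated `caBX` chunk for PNG images. The following Python sketch walks a PNG's chunk list to check for that chunk, and shows how re-serializing the file with only critical chunks silently drops the manifest. This is an illustrative assumption-laden sketch: it checks only for the presence of the `caBX` chunk and does not validate the cryptographic manifest itself, which real C2PA tooling would do.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"


def has_c2pa_chunk(data: bytes) -> bool:
    """Return True if a PNG byte stream contains a C2PA manifest chunk.

    Per the C2PA specification, the manifest (a JUMBF box) is embedded
    in PNG files as a chunk of type 'caBX'. This walks the chunk list
    without validating CRCs or the manifest's signatures.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"caBX":
            return True
        # Skip 8-byte header + payload + 4-byte CRC to the next chunk.
        pos += 8 + length + 4
    return False


def strip_ancillary_chunks(data: bytes) -> bytes:
    """Re-serialize a PNG, keeping only critical chunks.

    Critical PNG chunks (IHDR, PLTE, IDAT, IEND) have an uppercase
    first letter in their type. Dropping everything else is roughly
    what a naive re-encode does -- and it discards the C2PA manifest,
    which is why absent metadata proves nothing about origin.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    out = bytearray(PNG_SIGNATURE)
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        end = pos + 8 + length + 4
        if ctype[:1].isupper():  # critical chunk: keep
            out += data[pos:end]
        pos = end
    return bytes(out)
```

In practice, many common operations, such as screenshots, social media re-compression, or format conversion, perform exactly this kind of rewrite, which is why OpenAI cautions that missing metadata is not proof an image did not come from its tools.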
OpenAI noted that adopting this and other future methods for establishing provenance, along with urging users to look out for these signals, are steps geared toward boosting the integrity and trustworthiness of digital information.
The metadata change for mobile users will take effect on February 12, 2024. In the meantime, the integration is limited to images generated with OpenAI’s ChatGPT.
Growing Privacy Concerns in the AI Ecosystem
OpenAI took this step as concerns about AI privacy reached unprecedented levels. Many people are increasingly troubled by AI’s effect on society and the safety risks that large language model (LLM) technology poses to humans. It was recently discovered that certain AI models have been exhibiting deceptive behavior that could be harmful to humans.
Apart from privacy concerns, bad actors have repeatedly used AI to carry out illicit activities. Brad Garlinghouse, Ripple CEO, warned his followers about a scheme in which scammers cloned a video of him falsely urging XRP holders to send in their coins for a promised doubling.
In the same fashion, AI-generated explicit images and videos of pop singer Taylor Swift circulated widely on the internet, with one of them viewed as many as 47 million times. To curb such incidents, OpenAI appears to be investing in additional features and safeguards.
The post AI Images From OpenAI To Carry Branded Metadata, Here’s Why appeared first on CoinGape.
Filed under: News