Generative AI continues to grip the internet. You have probably seen hundreds of social media posts where users of AI platforms like ChatGPT share 'answers' to different 'prompts'. But as with any new technology, the dark side of this tool has also come to light.

A Twitter user recently shared photos of 'Donald Trump getting arrested'. The eerie, hyper-realistic images, or 'deepfakes', quickly went viral, provoking both shock and delight. Misinformation busters were quick to point out the dangers of such AI-generated photos.

While these particular images are easy to identify as fake, there is no denying the dangers they pose, especially in the hands of propagandists. There have already been several cases of abusers creating deepfake videos of victims, with their faces superimposed onto pornography. Still, there are ways to stay cautious in this era of text-to-image content generation.

For starters, AI platforms should not generate results for questionable 'prompts'; case in point, Donald Trump's 'fake arrest'. They should also refuse harmful requests that depict suicide, violence, or pornography.

For now, AI-generated images are far from perfect; experts advise users to look closely at body parts such as fingers and legs, or at facial expressions, which often appear too 'artificial'. However, given the speed at which AI platforms improve, these flaws could be fixed in future updates, making it ever harder to distinguish real images from fake ones.

Craft new rules

Therefore, AI platforms should bring in fact-checkers, legal experts, and policymakers to craft a new set of rules focused on ethical AI usage. Likewise, social media platforms like Facebook, Instagram, and Twitter need to be proactive in taking down harmful AI-generated posts that appear on their sites. Finally, as always, make it a point to check the source of any content you stumble upon on social media.