Amnesty International, a human rights advocacy group, has withdrawn artificial intelligence (AI)-generated images it used in a campaign drawing attention to police brutality during widespread protests in Colombia in 2021. The group faced criticism for publishing the images on its social media accounts. One image in particular, highlighted by The Guardian on May 2, depicts a woman being dragged away by police during the 2021 protests against long-standing economic and social inequalities.
Upon closer examination, the image showed several discrepancies, such as unnatural-looking faces, outdated police uniforms, and a protester who appears to be wrapped in a flag that is not the actual flag of Colombia. Each image also included a disclaimer at the bottom stating that it was produced by AI.
Amnesty International informed The Guardian that it opted to use AI-generated images to shield protesters from potential state retribution. Erika Guevara Rosas, director for Americas at Amnesty, stated, “We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia.”
Several photojournalists criticized the use of the AI-generated images, arguing that in today’s highly polarized era of fake news, people are more likely to doubt the credibility of the media. Media scholar Roland Meyer also commented on the deleted images, stating that “image synthesis reproduces and reinforces visual stereotypes almost by default,” before adding that the AI-generated images were “ultimately nothing more than propaganda.”
Other images, since deleted by Amnesty, were shared by Twitter users in late April. AI is being used increasingly to generate images and visual media. In late April, HustleGPT founder Dave Craige posted a video showing the United States Republican Party using AI imagery in its political campaign. He expressed surprise at the rapid adoption of AI-generated images in politics, stating, “We all knew that AI and deep-fake images were going to make it to politics, I just didn’t realize it would happen so quickly.”
The controversy surrounding Amnesty International’s use of AI-generated images highlights the ongoing debate about the ethics and implications of AI in the media and public sphere. While some argue that AI-generated images can be helpful in protecting the identities of individuals in sensitive situations, others claim that they can be misleading or manipulative, damaging the credibility of media organizations and reinforcing visual stereotypes.
As AI technology continues to advance and become more accessible, questions of responsibility, transparency, and accountability will become increasingly important. For instance, should media organizations be required to disclose when AI-generated images are being used? If so, what level of detail should be provided? What measures can be put in place to ensure that AI-generated images are not being used to mislead or manipulate public opinion? Moreover, how can organizations balance the need to protect individuals’ privacy and safety with the need to maintain credibility and accuracy in their reporting?
One possible solution to these ethical questions is the development of standardized guidelines, codes of conduct, or even regulations for using AI in the media and public sphere. Such guidelines could help ensure that AI-generated images are used responsibly and transparently while also helping to maintain public trust in the media. Additionally, better education and awareness about the capabilities and limitations of AI-generated images could help the public become more discerning consumers of information.
In conclusion, Amnesty International’s retraction of AI-generated images serves as a reminder of the ethical challenges and debates surrounding AI use in the media and public sphere. As AI continues to play a growing role in our lives, it is essential that we engage in open conversation and develop frameworks that promote the responsible and transparent use of AI while preserving the accuracy and credibility of the media.