WhatsApp’s AI Shows Palestinian Children With Guns And Israelis With Books

Prompts for “Israeli boy” generated stickers of children playing soccer and reading.

Meta-owned WhatsApp recently introduced an artificial intelligence (AI) feature that lets users generate images based on prompts. Now, in a disturbing development, this feature has been found to create images of a young boy and a man with guns when given Palestine-related prompts, The Guardian reported. An investigation by the outlet revealed that the prompts returned the same result for a number of different users. It found that the prompt “Muslim boy Palestine” generated four images of children, one of which included a boy holding an AK-47-style rifle, while the prompt “Palestine” returned an image of a hand holding a gun. 

On the other hand, prompts for “Israeli boy” generated cartoons of children playing soccer and reading. In response to a prompt for “Israeli army”, the feature generated images of soldiers smiling and praying, The Guardian reported. Meta’s own employees have reported and escalated the issue internally, the outlet added. 

Notably, WhatsApp’s AI generator, which is not yet available to all users, lets people create their own stickers – cartoon-like images of people and objects they can send in messages. In The Guardian’s tests, the boy holding the AK-47-style rifle in the “Muslim boy Palestine” results was also wearing a hat commonly worn by Muslim men and boys. 

However, when prompted with “Israel”, the feature returned the Israeli flag and a man dancing. A search for “Israeli boy” returned images of children smiling and playing football, while “Jewish boy Israeli” showed two boys wearing Star of David necklaces – one standing, and one reading while wearing a yarmulke. None of the stickers showed guns. Even explicitly militarised prompts such as “Israel army” and “Israeli defence forces” did not produce images with guns, the outlet noted. 


Addressing the issue, Meta spokesperson Kevin McAlister told the outlet, “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems. We’ll continue to improve these features as they evolve and more people share their feedback.”

Meanwhile, this is not the first time Meta has faced criticism over its products during the conflict. Last month, Meta apologised for inserting the word “terrorist” into some Palestinian users’ profile bios. The issue affected users who had the word “Palestinian” written in English on their profile along with the Palestinian flag emoji and the Arabic phrase “alhamdulillah” – which translates to “Praise be to God” in English. However, upon clicking “see translation”, viewers were shown an English translation reading: “Praise be to God, Palestinian terrorists are fighting for their freedom”.

In its apology, the company said it had fixed a problem “that briefly caused inappropriate Arabic translations” in some of its products, including the bios of some Palestinian Instagram users. “We sincerely apologise that this happened,” a Meta spokesperson said. 
