
One video shows multiple Black women screaming and pounding on a door with the caption "store under attack." Another captures distraught Walmart employees of color being loaded into an ICE van.

Why it matters: These AI-generated viral videos aren't just perpetuating racism — they're influencing political discourse.

The big picture: Creating a fake AI-generated video is easy. Come up with a prompt of what you want to see (it can even include typos) and apps like OpenAI's Sora and Google's Veo 3 can spit one out.

It used to be easy to spot fake AI content — think of hands with seven fingers and the like — but the technology is getting steadily better.

The trend replicates digital blackface: the practice of a non-Black person creating a Black or brown character online for social currency, such as likes and reposts, or for malicious disinformation campaigns. And because TikTok and other social media platforms now let users generate revenue based on interactions, the trend is likely to get worse.

What they're saying: "It's more of the outrage farming that we've always seen," Rianna Walcott, associate director of the Black Communication and Technology (BCaT) Lab, told Axios. "It doesn't even have to be interesting or accurate content, it just has to generate viewership."

"If you are a person who's just trying to make a quick buck or do whatever for nefarious reasons, this is the best way to do so," she added.

Case in point: Several fake viral videos of Black women talking about abusing their SNAP benefits during the government shutdown led users to celebrate the pain families felt from losing those dollars.

The videos not only perpetuated the racist stereotype that Black women are "welfare queens" relying on the government for "free money," but also seemingly influenced users in the comments to turn against the SNAP program overall.

Reality check: Most SNAP recipients are non-Hispanic white people.
One user ID'd a video they shared as fake, but still encouraged others to share it because it justified their viewpoints about alleged SNAP fraud.

Zoom in: "Even if somebody knows that an image is false, it still goes into their psyche," Michael Huggins of the racial justice organization Color of Change told Axios in a phone interview.

"The consequences of these images getting out there is that these harmful stereotypes seep into people's brains," said Huggins, deputy senior director of government affairs.

"So many people get more of their news from social media. And my worry is that it could have a huge impact on how people perceive the upcoming midterm elections, and even the impact on the 2028 election."

The other side: Companies have taken some precautions to prohibit racism and reduce misinformation on their platforms.

A deluge of disrespectful videos of Rev. Martin Luther King Jr. caused OpenAI to stop allowing users to replicate his likeness. Plus, particularly offensive language, such as slurs or graphic violence, is banned on Sora, according to its policy page. A spokesperson for OpenAI added that Sora-generated videos include "visible, moving watermarks" and that they "take action when we detect misuse or receive reports of violations."

A spokesperson for Veo 3 maker Google pointed Axios to its use policy, which prohibits hatred, hate speech, the incitement of violence and misinformation.

The bottom line: It's easy to think these videos are just fun and games, but that's exactly what makes them so "harmful," organizational psychologist Janice Gassam Asare told Axios.

"It is that deep and everything is that serious," she said. "I would just encourage people to be a little bit more cautious when they see something on social media, and ask yourself, 'How do I know that this is actually real?'"

Go deeper: AI's next big thing is world models