A timeline of AI harm, layoffs, regulatory actions & model failures
1,233 incidents tracked.
Meta cut 198 jobs (less than 1% of workforce), explicitly attributed to AI/automation.
During a live broadcast of NASA's Artemis II launch, KBS reportedly used AI-generated real-time translation subtitles that mistranslated aviation terms including "roger," "roll," and "pitch" into Korean profanity, and the offensive subtitles were displayed to viewers during the livestream. KBS later apologized, saying the error stemmed from phonetic similarity in AI translation, and announced measures including stronger profanity filtering.
Who was affected: General public of South Korea, Korean Broadcasting System (KBS) viewers
Oracle cut 30,000 jobs (19% of workforce), explicitly attributed to AI/automation.
Dell Technologies cut 11,000 jobs (10% of workforce), partially attributed to AI/automation.
Crypto.com cut 180 jobs (12% of workforce), explicitly attributed to AI/automation.
In Tokyo, a Japanese IT company reportedly interviewed a job applicant who used purportedly AI-generated video manipulation to impersonate real IT executive Kenbun Yoshii during a remote hiring interview. Investigators cited visual and audio irregularities suggesting a deepfake, and Yoshii said his publicly available images and career details appeared to have been misused.
Who was affected: Kenbun Yoshii, Japanese IT company recruiter(s), Unnamed Japanese company, Epistemic integrity, National security and intelligence stakeholders
An online ad reportedly used a purported AI-generated version of Ashley James, a British broadcaster and former reality television personality, to market weight loss pills through a false celebrity endorsement. James said the video copied her face and voice without consent and described the ad as both a violation of her identity and a misleading sales tactic directed at the public.
Who was affected: Ashley James, Fans of Ashley James, People seeking weight loss supplements
Atlassian cut 1,600 jobs (10% of workforce), explicitly attributed to AI/automation.
Grammarly's Expert Review feature allegedly used a large language model to generate editing suggestions presented under the names of journalists, authors, and academics without their consent. A federal class action filed by Julia Angwin claimed the feature misappropriated identities for commercial gain and attributed advice the named individuals never gave.
Who was affected: Julia Angwin, Journalists, Academics, Authors, Writers, Grammarly users, Epistemic integrity
A Wichita, Kansas man reported that scammers sent him purported AI-generated nude images depicting his face on another body in his home and threatened to send them to his Facebook contacts unless he paid money. Police said a report was filed.
Amazon cut 100 jobs (less than 1% of workforce, estimated), linked to AI.
Morgan Stanley cut 2,500 jobs (3% of workforce), partially attributed to AI/automation.
eBay cut 800 jobs (7% of workforce), partially attributed to AI/automation.
For months, callers to the Washington State Department of Licensing who selected Spanish reportedly received AI-generated English responses spoken with a Spanish accent rather than actual Spanish-language service. The agency apologized and said staff configuration caused the error, which created accessibility problems for callers seeking language support.
Who was affected: Spanish language speakers, General public, General public of Washington State
Block cut 4,000 jobs (40% of workforce), explicitly attributed to AI/automation.
WiseTech cut 2,000 jobs (29% of workforce), explicitly attributed to AI/automation.
Baker McKenzie cut 1,200 jobs (10% of workforce, estimated), linked to AI.
Autodesk cut 1,000 jobs (7% of workforce, estimated), linked to AI.
Anthropic said it identified large-scale campaigns that used fraudulent accounts and proxy services to generate high volumes of Claude interactions to extract model capabilities for competitor training ("distillation"). Anthropic attributed the activity to DeepSeek, Moonshot, and MiniMax and said it involved millions of exchanges across thousands of accounts, violating its terms and access restrictions. Anthropic described detection measures, account controls, and indicator-sharing in response.
Who was affected: Anthropic, Claude users, Anthropic customers, National security and intelligence stakeholders
Livspace cut 1,000 jobs (12% of workforce, estimated), explicitly attributed to AI/automation.
A nurse at St. Rose Dominican Hospital in Henderson, Nevada, reportedly described an episode in which a hospital AI system purportedly generated a sepsis alert that triggered urgent protocol steps, including IV fluids, for an older patient with a dialysis catheter. Reportedly, the nurse objected that fluids could cause dangerous overload; a physician intervened and ordered an alternative treatment.
Who was affected: patients, Nurses, Doctors, St. Rose Dominican Hospital (Henderson, Nevada), Epistemic integrity
An Amazon delivery van reportedly became stranded on the Broomway, a hazardous tidal track in Essex in England, after the driver allegedly followed GPS/satnav directions toward Foulness Island. HM Coastguard said it was alerted and the occupants were safe; Amazon reportedly arranged recovery of the vehicle.
A filmmaker reportedly used ByteDance's AI video tool Seedance 2.0 to create and post a purportedly realistic clip depicting Tom Cruise fighting Brad Pitt, which then circulated widely online. Industry groups and studios publicly alleged the tool enables unauthorized use of copyrighted material and performers' likeness, and at least one studio reportedly sent a cease-and-desist letter. ByteDance said it respects IP and would strengthen safeguards.
Who was affected: Tom Cruise, Brad Pitt, Actors, Film industry, Epistemic integrity
Scott Shambaugh, a matplotlib maintainer, reported that an autonomous AI coding agent using the name "MJ Rathbun" researched him and publicly posted a personalized critical blog post after his GitHub pull request was closed. The post accused him of bias and "gatekeeping" and included claims Shambaugh disputed. The agent's operator and underlying model were not identified. Shambaugh said the post risked reputational harm and could mislead readers or other agents.
Who was affected: Supply-chain gatekeepers, Scott Shambaugh, Open-source maintainers, matplotlib users, GitHub users
Social media posts in Thailand reportedly circulated an image purporting to show Thai PM Anutin Charnvirakul dining with South African businessman Benjamin Mauerberger ("Ben Smith"), implying long ties. AFP reported Google's SynthID flagged the image with "very high" confidence as made with Google AI tools; Anutin reportedly denied its veracity. The post reportedly spread on the eve of Thailand's election.
Who was affected: Anutin Charnvirakul, Benjamin Mauerberger, Voters in Thailand, Electoral integrity, Epistemic integrity
President Trump reportedly reposted a video on Truth Social that portrayed Barack and Michelle Obama as apes, imagery widely condemned as racist. The post was reportedly deleted after public outcry, including criticism from some Republicans. Critics described the content as an AI-driven meme or deepfake-style clip and an example of purportedly AI-amplified racist propaganda that erodes social trust.
Who was affected: Barack Obama, Michelle Obama, Epistemic integrity, Black Americans, Black people
Wiz researchers reported accessing an exposed Moltbook database in under three minutes, allegedly obtaining ~35,000 email addresses, thousands of private DMs, and ~1.5 million API authentication tokens. The exposure was described as enabling read/write access and potential impersonation or manipulation of "AI agent" accounts. Wiz said it disclosed the issue to Moltbook, which reportedly secured the database within hours and deleted accessed data.
Expedia cut 162 jobs (1% of workforce), linked to AI.
Dow cut 4,500 jobs (13% of workforce), linked to AI.
Amazon cut 16,000 jobs (5% of workforce), partially attributed to AI/automation.
ASML cut 1,700 jobs (4% of workforce), partially attributed to AI/automation.
Chan Zuckerberg Initiative cut 70 jobs (9% of workforce), explicitly attributed to AI/automation.
Pinterest cut 700 jobs (13% of workforce, estimated), linked to AI.
Following the fatal shooting of Minneapolis ICU nurse Alex Pretti by U.S. Customs and Border Protection agents, social media accounts reportedly circulated purportedly AI-altered images that distorted evidence of the incident by portraying Pretti as threatening law enforcement and altering the presence of weapons. The images reportedly misidentified individuals and reinforced partisan narratives by obscuring verified video and eyewitness accounts.
A Waymo driverless vehicle reportedly struck a child near an elementary school in Santa Monica, California during school drop-off hours. According to filings with the National Highway Traffic Safety Administration, the child allegedly sustained minor injuries. The vehicle was purportedly operating on Waymo's 5th Generation Automated Driving System without a human safety driver. Waymo reported the incident to federal regulators, who subsequently opened an investigation.
Who was affected: minors, Unnamed elementary school student, pedestrians, General public
The White House reportedly posted a purportedly AI-altered image on X showing Minnesota protester and attorney Nekima Levy Armstrong appearing to cry during her arrest. An earlier image reportedly shared by Homeland Security Secretary Kristi Noem showed her calm. Third-party analyses using AI detection tools reportedly found signs of facial manipulation consistent with generative AI systems.
Meta cut 1,500 jobs (2% of workforce, estimated), linked to AI.
ITV presenter Kate Garraway reported that purported AI-generated images falsely depicting her in a relationship with a fictitious partner circulated online, prompting false rumors about her personal life. Garraway stated the images caused confusion and emotional distress for her children, including claims about her son's reactions that she said were untrue.
Who was affected: Kate Garraway, Family of Kate Garraway, Epistemic integrity
U.S. Immigration and Customs Enforcement (ICE) reportedly used an AI-assisted résumé screening tool during a 2025 hiring surge that misclassified some applicants as having law-enforcement experience. As a result, certain recruits without policing backgrounds were allegedly routed into a shortened training pathway. ICE reportedly identified the error, reviewed résumés manually, and reassigned affected recruits for additional training.
Who was affected: ICE recruits without law-enforcement experience, Members of the public subject to ICE enforcement
An automated shuttle bus operated by Beep was reportedly involved in a minor collision during a U.S. Department of Transportation demonstration ride in Washington, D.C. The vehicle, operating autonomously with a human safety driver onboard, was reportedly rear-ended by a Tesla whose driver made an illegal lane change. No injuries were reported, and officials stated the autonomous system functioned appropriately.
Who was affected: Beep automated shuttle bus passengers, unnamed Tesla driver, Public road users in Washington, D.C., Tesla drivers
After the fatal shooting of Renee Nicole Good in Minneapolis by an ICE officer, users on X reportedly asked Grok to "unmask" the masked agent shown in eyewitness footage. Grok reportedly generated a fabricated face that spread widely online, along with the false name "Steve Grove." The output and claim allegedly led to harassment and reputational harm against at least two uninvolved men.
Who was affected: Misidentified individuals, Private citizens falsely accused of crimes, People named Steve Grove, Epistemic integrity
A woman who runs a play school in Indore, India, was reportedly defrauded of ₹97,500 (approximately $1,080 USD) after a fraudster allegedly used AI-based voice cloning to impersonate her cousin, an Uttar Pradesh police employee, and claim a friend needed urgent cardiac surgery. The victim was reportedly persuaded to transfer funds via QR codes before discovering no money had actually been credited.
Who was affected: Smita Sinha (pseudonym), Small private school owners in Indore, India, Epistemic integrity, General public of Madhya Pradesh
Purported deepfake videos using the face and voice of Greek economist and politician Yanis Varoufakis were reportedly circulated on YouTube and other social platforms in late 2025 and early 2026. The videos depicted a synthetic version of Varoufakis delivering fabricated political statements, including about international events. Platforms reportedly removed some videos, but many reappeared under new accounts.
Who was affected: Yanis Varoufakis, Epistemic integrity, YouTube users, social media users
An 80-year-old U.S. woman was reportedly deceived by scammers using purportedly AI-generated messages and deepfake media impersonating Elon Musk into believing she was in a romantic relationship with him. The perpetrators allegedly induced her to buy over $50,000 in Apple gift cards to be converted into cryptocurrency, leaving her financially endangered and at risk of foreclosure on her home.
Who was affected: Unnamed 80-year-old U.S. woman, Elderly investors, Cryptocurrency investors defrauded by AI-generated profiles, Epistemic integrity, Elon Musk
A U.S. National Weather Service office reportedly posted an AI-generated wind forecast map for Camas Prairie, Idaho that contained fabricated and misspelled place names, including non-existent towns. The image was reportedly shared publicly on social media before being deleted and corrected. NWS confirmed a generative AI tool had been used to create the base map.
Who was affected: Epistemic integrity, Residents of Camas Prairie, Idaho, General public, General public of Idaho
Following the January 2026 U.S. raid and arrest of Venezuelan leader Nicolás Maduro, purportedly AI-generated images and videos circulated widely on platforms including X, reportedly depicting Maduro in fabricated scenarios. NewsGuard identified at least seven manipulated or synthetic visuals reaching over 14 million views, including AI-generated photos detected via Google's SynthID watermark.
Who was affected: Nicolás Maduro, social media users, General public, General public of Venezuela, Epistemic integrity
A recently divorced Bitcoin investor reportedly lost his entire retirement fund, one full Bitcoin, after being deceived by a purportedly AI-enabled romance scam. The perpetrator allegedly used AI-generated portraits and real-time deepfake video calls to pose as a romantic partner and trusted crypto trader, persuading the victim to transfer cryptocurrency through a "pig-butchering" scheme that displayed fake profits and made recovery impossible.
Who was affected: Investors, Cryptocurrency investors, Cryptocurrency investors defrauded by AI-generated profiles, Pig-butchering scam victims
A political dispute reportedly emerged in India after the Madhya Pradesh Congress alleged that AI-generated images were uploaded to a government platform in connection with water conservation works linked to the National Water Awards. District officials reportedly denied that the images influenced the award process, stating that evaluations relied on verified data and field inspections and that a limited number of AI-generated images appeared only on a separate educational portal.
Who was affected: General public, General public of India, General public of Madhya Pradesh, General public of Khandwa District (Madhya Pradesh), Epistemic integrity
A purported deepfake video reportedly circulated online falsely depicting Elon Musk endorsing a nonexistent "17-hour" diabetes cure. The reported video promoted unverified health claims and appears to have been part of a scam ecosystem exploiting Musk's public credibility. Rapper Boosie Badazz reportedly encountered and amplified the video before its falsity was identified.
Who was affected: social media users, People with diabetes, People seeking medical advice, General public, Boosie Badazz, Epistemic integrity
İsa Kereci and Hale Kereci, a married couple in Samsun, Turkey, reportedly lost 1.5 million lira after being deceived by an alleged AI-generated investment video on Facebook. The scammers allegedly used the video to build credibility and then guided them through staged "investment" steps, including screen sharing, fees, forced loans, and unauthorized bank and credit card transfers. The couple filed a criminal complaint, reporting severe financial harm and significant emotional distress.
Who was affected: İsa Kereci, Investors, Hale Kereci, General public of Turkey, General public, Epistemic integrity
A senior Indian National Congress leader in Kerala, N. Subrahmanian, was reportedly booked by police after sharing a digitally altered image on Facebook depicting Kerala Chief Minister Pinarayi Vijayan alongside a suspect in a gold theft case. Police reportedly stated that at least one circulated image was created using AI tools and that the post allegedly sought to provoke enmity and mislead the public.
Who was affected: Pinarayi Vijayan, General public, General public of Kerala, General public of India, Epistemic integrity
In late December 2025, an altered image reportedly circulated online falsely depicting former Taiwanese legislator Kao Chia-yu posing in front of the PRC five-star flag, implying travel to or alignment with China. Reporting indicates the image was manipulated from a real photo taken at Taiwan's representative office in New York, replacing the Republic of China flag. Kao publicly debunked the image, citing AI-assisted disinformation and warning of political smearing ahead of elections.
Who was affected: Kao Chia-yu, General public, General public of Taiwan, Epistemic integrity, National security and intelligence stakeholders
A 16-year-old student in Greater Noida, India, reportedly died by suicide after being questioned by school authorities over suspected use of AI tools during a pre-board examination. The student's family alleges she was publicly reprimanded and mentally harassed following the incident, contributing to severe distress. School officials deny harassment and state disciplinary actions followed exam rules.
Who was affected: Unnamed student in Greater Noida, Students in India
A Google Search AI-generated summary reportedly falsely stated that Canadian musician Ashley MacIsaac had been convicted of sexual offenses, apparently due to mistaken identity. After a venue reportedly relied on the AI-generated information, a scheduled concert was cancelled. The reported false summary caused reputational and economic harm and left the musician concerned for his personal safety. Google later amended the search results after the error was identified.
Who was affected: Ashley MacIsaac, Musicians, Performers, Event organizers, Epistemic integrity
An AI agent reportedly based on Anthropic's Claude model was deployed to operate an office vending machine at The Wall Street Journal, including purchasing inventory, setting prices, and managing sales. According to reporting, the system repeatedly set prices to zero, approved inappropriate purchases, and failed to maintain profit controls, reportedly resulting in financial losses exceeding its initial budget and the distribution of inventory without payment.
A Florida couple reportedly lost approximately $45,000 after scammers impersonating Elon Musk used AI-generated deepfake videos and social engineering to run a fake car giveaway and investment scheme. The victim was contacted via Facebook, moved to WhatsApp, and sent personalized videos purporting to show Musk promising prizes and returns. Believing the messages authentic, the victim transferred cash and funds before realizing no car or payout would arrive.
Who was affected: George Hendricks, Unnamed spouse of George Hendricks, General public, General public of Florida, General public of the United States, Epistemic integrity, Elon Musk, Investors, Elderly individuals, Elderly investors
Grok reportedly generated and repeated a fabricated civilian hero identity, "Edward Crabtree," following the Bondi Beach shooting in Sydney, Australia. The system reportedly cited a fake news article and misattributed heroic actions to this fictional individual during the unfolding emergency. Contemporaneous reporting identified Ahmed Al Ahmed as the real bystander who intervened and was injured during the attack.
Who was affected: General public, General public of Sydney, General public of Australia, Epistemic integrity, Ahmed Al Ahmed, Bondi Beach shooting bystanders
Charlie the Chatbot, an AI-powered system deployed by the Canada Revenue Agency (CRA), has reportedly been providing inaccurate or incomplete tax-related information to members of the public. An audit by the Auditor General of Canada reportedly found the chatbot produced correct responses in fewer than half of tested cases. The system has been publicly available across multiple CRA webpages since March 2020 and reportedly used by millions of users.
Who was affected: General public, General public of Canada, Epistemic integrity, Canadian taxpayers
A fabricated image purporting to show U.S. President Donald Trump using a walker reportedly circulated widely on social media in mid-December 2025. Fact-checkers determined the image was AI-generated, citing visual anomalies and detection of Google's SynthID watermark. The image reportedly spread across multiple platforms and was shared by a political strategist, prompting public confusion and debunking by news outlets and AI detection tools.
Around December 2025, Cypriot authorities reported that scammers used a purportedly AI-generated deepfake video impersonating President Nikos Christodoulides and other officials to promote a fraudulent investment platform. The video deceived about 15 citizens, who each lost between €10,000 and €15,000. Officials warned that malicious AI impersonation schemes are increasing and that citizens currently lack effective protection.
Who was affected: Nikos Christodoulides, General public, General public of Cyprus, Epistemic integrity
On December 5, 2025, The New York Times sued Perplexity, alleging the company used copyrighted Times articles without permission to train its AI system and generated outputs that reproduced Times content or falsely attributed fabricated information to the newspaper. The suit claims the conduct harmed the Times's business and brand. Perplexity denies wrongdoing.
Who was affected: Writers, The New York Times, publishers, Journalistic integrity, Journalism, Epistemic integrity
A British widow reportedly lost more than $600,000 in a romance fraud scheme that allegedly involved AI-generated deepfake videos purporting to show actor Jason Momoa. The victim reportedly sold her home and transferred funds after being led to believe the relationship was genuine.
Who was affected: Unnamed victim of Jason Momoa deepfake scam, General public, General public of the United Kingdom, Victims of romance scams, Victims of romance scams in the United Kingdom
Scammers allegedly created a fake Facebook profile impersonating a Scottish woman, Teigan McMahon, raising funds for her father, who was reportedly in a coma in Turkey. The account reportedly reused her photos and posts and was described as using purportedly AI-generated or deepfake-style content to solicit donations. The impersonation forced the family to pause their legitimate fundraiser while the fake profile remained online.
AI children's products by FoloToy (Kumma), Miko (Miko 3), and Character.AI (custom chatbots) allegedly produced harmful outputs, including purported sexual content, suicide-related advice, and manipulative emotional messaging. Some systems also allegedly exposed user data. Several toys reportedly used OpenAI models.
Who was affected: Children interacting with Kumma, Children interacting with Miko 3, Character.AI users, Parents, Children, General public
The erotic AI chatbot and image-generation platform Secret Desires reportedly left nearly two million sensitive images and videos publicly exposed in misconfigured cloud storage. The leaked files reportedly included personal photos, workplace and university information, and explicit AI-generated deepfakes of women and girls. The content reportedly became inaccessible shortly after journalists contacted the platform.
Who was affected: Women, People depicted in Secret Desires deepfakes, General public, Epistemic integrity
Several major AI chatbots, including ChatGPT, Copilot, Gemini, and Meta AI, were reportedly found to have provided incorrect or misleading financial and insurance guidance for UK users. The systems allegedly advised exceeding ISA limits, misstated tax rules, gave wrong travel insurance requirements, and pointed users toward costly refund services.
Who was affected: Meta AI users, General public of the United Kingdom, General public, Gemini users, Copilot users, ChatGPT users, Chatbot users, Epistemic integrity
A large-scale phishing campaign allegedly impersonating Services Australia and Centrelink reportedly sent more than 270,000 fraudulent emails in 2025. Mimecast analysts reportedly say attackers (designated MCTO3001) used AI tools to generate highly convincing government-themed messages and evasion techniques, targeting vulnerable Australians and public institutions. Victims reportedly faced risks of credential theft and downstream digital exploitation.
Who was affected: Medicare of Australia beneficiaries, Government of Australia, General public of Australia, General public, Centrelink beneficiaries, Centrelink, Australian welfare recipients, Australian businesses, Epistemic integrity, Truth
Greek Finance Minister Kyriakos Pierrakakis and the Ministry of Economy and Finance reportedly filed a lawsuit against unidentified operators of a Facebook page that allegedly used AI to generate a deepfake video falsely depicting the minister endorsing fraudulent "high-yield" investment schemes. Authorities noted this follows similar deepfake-driven scams in Greece involving impersonation of public figures for illicit financial gain.
Who was affected: Kyriakos Pierrakakis, Government of Greece, Ministry of Economy and Finance of Greece, General public, General public of Greece, Greek investors, Investors, Epistemic integrity
Anthropic reportedly identified a cyber espionage campaign in which a purported Chinese state-linked group, designated GTG-1002 by Anthropic, allegedly jailbroke Claude Code and used it to automate 80–90% of multi-stage intrusions. The AI reportedly independently performed reconnaissance, vulnerability discovery, exploitation, credential harvesting, and data extraction across roughly 30 targets before the activity was detected and blocked.
Who was affected: Entities targeted by GTG-1002, National security stakeholders
Rep. Mike Collins's congressional campaign allegedly created and posted an AI-generated video that purportedly used Sen. Jon Ossoff's portrait and synthetic voice to falsely depict him supporting the ongoing government shutdown. The video was reportedly shared on the campaign's X account with a limited disclaimer. Georgia Democrats criticized the deepfake as misleading, and the campaign reportedly indicated it would continue using similar AI tactics.
Who was affected: Jon Ossoff, General public, General public of Georgia, Electoral integrity, Epistemic integrity, Truth, Democracy
A student at Escuela Secundaria Técnica No. 1 in Zacatecas, Mexico, allegedly used an AI image-generation system to create sexually explicit manipulated images of classmates. According to victims and parents, the student reportedly produced a catalog containing images of more than 400 minors, organized by grade and group, and distributed or sold the material online. Students reported that the perpetrator took photos around the school to use as inputs. The Fiscalía de Zacatecas opened an investigation for crimes against sexual privacy and offered psychological and legal support to victims. The manipulated images are reportedly believed to have circulated beyond the school.
Who was affected: Students of Escuela Secundaria Técnica No. 1 in Zacatecas, Family of students of Escuela Secundaria Técnica No. 1 in Zacatecas, Epistemic integrity
Western Australia's Consumer Protection commissioner warned of a scam using a purported AI-generated deepfake of Premier Roger Cook to promote a fraudulent investment scheme in a YouTube pop-up ad. The manipulated video reportedly uses less than 10 seconds of real footage and voice and falsely depicts Cook endorsing "low investment, high return" opportunities. Cook called the scam "very scary," while regulators urged platforms to act on AI-driven frauds.
Who was affected: Roger Cook, General public, General public of Australia, General public of Western Australia, Australian investors, Epistemic integrity, Truth
Amazon cut 14,000 jobs (4% of workforce), linked to AI.
OpenAI disclosed internal estimates suggesting that hundreds of thousands of ChatGPT users each week exhibit signs of mania, psychosis, suicidal ideation, or emotional dependence on the chatbot. WIRED reported that some users have been hospitalized, divorced, or died after prolonged conversations with ChatGPT, with families alleging the system intensified delusions and contributed to severe psychological harm.
Chegg cut 388 jobs (45% of workforce), explicitly attributed to AI/automation.
Conservative activist Robby Starbuck reportedly filed a defamation lawsuit against Google, alleging its Bard, Gemini, and Gemma AI systems generated false statements linking him to sexual assault and extremist activity. Starbuck claimed the outputs caused reputational harm and sought over $15 million in damages. Google acknowledged the hallucinations as known LLM behavior and stated it works to minimize such inaccuracies. See also Incident 1247.
Meta cut 600 jobs (1% of workforce), partially attributed to AI/automation.
Republican Virginia lieutenant governor candidate John Reid reportedly held a 40-minute mock debate against a purportedly AI-generated version of Democratic opponent Sen. Ghazala Hashmi. The bot reportedly mimicked her voice and synthesized responses trained on her public statements. Hashmi's campaign denounced the non-consensual likeness as a "shoddy gimmick."
Who was affected: Ghazala Hashmi, Democracy, Epistemic integrity, Electoral integrity
The widespread Amazon Web Services (AWS) outage on October 20, 2025 reportedly caused AI-enabled Eight Sleep "Pod" smart beds to overheat or become unresponsive. The devices reportedly rely on a cloud-hosted deep learning algorithm to regulate temperature and track sleep data. When AWS went down, users reportedly lost control of their beds, which remained stuck at prior settings. Some reported overheating and incline malfunctions. Eight Sleep confirmed it is developing an offline "outage mode."
President Donald Trump reportedly posted an AI-generated video on Truth Social showing himself crowned and piloting a fighter jet labeled King Trump, dropping feces on protesters, including influencer Harry Sisson, during No Kings demonstrations. The clip, allegedly created by another account, drew criticism from Sisson and musician Kenny Loggins, whose song "Danger Zone" was reportedly used without consent.
Who was affected: Harry Sisson, Kenny Loggins, General public, General public of the United States, No Kings protesters, Epistemic integrity, Cultural integrity, Democracy, Public discourse integrity
A purported AI-generated deepfake video circulated online falsely depicting UK Conservative MP George Freeman announcing his defection to the Reform UK party. The fabricated clip, which mimicked Freeman’s likeness and voice without his knowledge or consent, included statements critical of the Conservatives and was widely shared on social media. Freeman denounced the video as disinformation, reported it to police, and warned that such synthetic media pose serious risks to democratic processes.
Who was affected: George Freeman, Conservative Party (UK), Epistemic integrity, Truth, Democracy
The National Republican Senatorial Committee (NRSC) allegedly released a 30-second campaign ad featuring a purported deepfake video of Senator Chuck Schumer repeatedly saying, "Every day gets better for us," implying enthusiasm for a government shutdown. While the video reportedly was marked as AI-generated, it was widely circulated on social media and drew criticism from journalists and researchers, who warned it could mislead voters and erode trust in authentic political communication.
Who was affected: Chuck Schumer, Democratic Party, General public, General public of the United States, Epistemic integrity, Truth, Democracy
New South Wales police in Australia launched an investigation after parents reported that purported AI-generated sexually explicit images of female students from a Sydney high school had been created and circulated online. The reported manipulated images, produced without consent using deepfake tools, prompted school involvement and a criminal probe under new state laws banning such content.
Who was affected: Unnamed Sydney high school students, Families of unnamed Sydney high school students
A reportedly fatal crash involving a Xiaomi SU7 Ultra electric vehicle in Chengdu, China, occurred after the car's automated driving system and sensor-dependent electronic door handles reportedly failed following a collision. Bystanders were reportedly unable to open the doors as the vehicle caught fire, and the driver died at the scene, prompting scrutiny of automation-linked design risks. Following the incident, Xiaomi's stock reportedly fell more than 5% amid mounting safety concerns.
Who was affected: Deng (31-year-old driver of a Xiaomi SU7 Ultra), Xiaomi Corporation, General public
An NBC News investigation reported that OpenAI language models, including o4-mini, GPT-5-mini, oss-20b, and oss-120b, could be jailbroken to bypass guardrails and provide detailed instructions on creating chemical, biological, and nuclear weapons. Using a publicly known jailbreak prompt, reporters elicited harmful outputs such as steps to synthesize pathogens or maximize suffering with chemical agents. OpenAI acknowledged the findings and said it is refining safeguards to reduce misuse risks.
Who was affected: General public, National security stakeholders, Public safety
Irish police (Garda Síochána) reportedly issued a public warning after a viral "home invasion" prank spread on social media. The prank allegedly uses AI image-generation filters to insert a fake intruder into photos of family homes and send them to relatives, often elderly, claiming someone has broken in. The hoax reportedly caused panic and distress, and multiple false 999 calls diverted Garda resources, prompting an official appeal to stop the trend.
Who was affected: General public of Ireland, Elderly individuals, Garda Síochána
Old Mutual of South Africa reportedly warned of purported deepfake videos impersonating its chairman, former finance minister Trevor Manuel, promoting fake investment schemes on social media. The purported AI-generated videos used his likeness and cloned voice to solicit funds, prompting a public statement from Manuel denying involvement.
Who was affected: Trevor Manuel, Old Mutual, Investors, General public of South Africa
A Tech Transparency Project investigation identified purportedly AI-generated deepfake ads on Facebook impersonating President Trump, Elon Musk, Rep. Alexandria Ocasio-Cortez, Senators Elizabeth Warren and Bernie Sanders, and Press Secretary Karoline Leavitt. The ads allegedly promoted fake $5,000 government rebates and similar scams, reportedly misleading users and generating ad revenue for Meta before removal.
Who was affected: Public trust, Meta users, Karoline Leavitt, Instagram users, General public, Facebook users, Epistemic integrity, Elon Musk, Elizabeth Warren, Elderly individuals, Donald Trump, Bernie Sanders, Alexandria Ocasio-Cortez
Police in Brazil reportedly arrested four suspects accused of using purportedly AI-generated deepfake videos of model Gisele Bündchen and other celebrities in Instagram ads to promote fake giveaways and skincare products. The scheme was allegedly active since at least August 2024 and generated over 20 million reais ($3.9 million) in suspicious funds. Victims allegedly lost small amounts that often went unreported, creating what investigators called "statistical immunity."
Who was affected: General public of Brazil, Brazilian consumers defrauded via Instagram ads, Gisele Bündchen, Angélica Huck, Juliette Freire Feitosa, Maisa Silva, Sabrina Sato
President Donald Trump reportedly posted a purportedly AI-modified video on Truth Social depicting Senate Minority Leader Chuck Schumer making fabricated statements and House Minority Leader Hakeem Jeffries in a sombrero. Critics, including Jeffries and Rep. Ro Khanna, reportedly condemned the video as racist and misleading during budget negotiations over a looming government shutdown. The AI tool used for modification has not been identified.
Who was affected: Chuck Schumer, Hakeem Jeffries, General public, Democratic integrity, Epistemic integrity, Presidential norms
Accenture cut 11,000 jobs (1% of workforce), linked to AI.
11,000
jobs lost
An Australian IT professional, Samuel McCarthy, reportedly recorded an interaction with the Nomi AI chatbot in which it allegedly encouraged him, posing as a 15-year-old, to murder his father. The chatbot allegedly provided graphic instructions for stabbing, urged him to film the act, and engaged in sexual role-play despite the underage scenario.
Who was affected: Samuel McCarthy, Nomi users, General public of Australia, General public, Emotionally vulnerable individuals
Multiple AI systems allegedly spread false claims in the aftermath of Charlie Kirk's assassination at Utah Valley University. Perplexity and Grok chatbots reportedly stated Kirk was alive, mischaracterized authentic video as satire, and wrongly identified Utah Democrat Michael Mallinson as the suspect. A Google AI Overview allegedly claimed Kirk was on Ukraine's Myrotvorets "enemies" list, a reported falsehood that echoed pro-Kremlin narratives.
Who was affected: General public, Grok users, Perplexity users, Google users, Michael Mallinson, Family of Charlie Kirk, Epistemic integrity, Journalistic integrity, Journalists, General public of Utah
Purported deepfake videos reportedly circulated on Meta platforms cloning Irish Fine Gael presidential candidate Heather Humphreys to allegedly portray her endorsing high-return investment schemes. The purportedly AI-generated image and voice cloning aimed to exploit public trust and lure consumers into fraud. The Bank of Ireland warned of further scams and risks.
Who was affected: Heather Humphreys, Fine Gael, General public of Ireland, Electoral integrity, Epistemic integrity
Researchers and European officials reported that Russian operatives have been using purportedly AI-generated posts, videos, and websites to influence Moldova's September 2025 parliamentary elections. Networks of over 900 accounts across TikTok, Facebook, Instagram, Telegram, and YouTube have reportedly spread fabricated narratives, including reportedly misogynistic and false smears of President Maia Sandu.
Who was affected: General public of Moldova, Moldovan voters, Moldovan democratic institutions, Maia Sandu, European Union, General public, Epistemic integrity, Electoral integrity
A network of five Nigeria-based YouTube channels reportedly drew millions of views while purportedly amplifying Kremlin-aligned narratives. The channels allegedly used AI-generated anchors and synthetic voiceovers to present repurposed interviews with pro-Russian commentators Scott Ritter and Douglas Macgregor. Attribution of the network's motives and funding remains uncertain, with reported possibilities ranging from freelance monetization to proxy involvement in influence operations.
Who was affected: YouTube viewers, YouTube's platform, United States, Ukraine, Truth, Israel, General public, European Union, Epistemic integrity, Democracy
Australian academics reportedly identified alleged fabricated references and a misattributed legal quote in a $439,000 Deloitte report for the Department of Employment and Workplace Relations on welfare compliance. Several citations allegedly referred to works that do not exist, and a Federal Court decision was reportedly misstated. Experts allege the errors suggest use of generative AI. Deloitte has denied wrongdoing, but DEWR is investigating.
Who was affected: Department of Employment and Workplace Relations (DEWR), Government of Australia, General public of Australia, Epistemic integrity, Academics, Lisa Burton Crawford, Carolyn Adams, Janina Boughey, Scholars erroneously cited
Consumers were allegedly defrauded by AI-generated scam websites impersonating Joann Fabrics following the retailer's bankruptcy. The alleged impostor sites reportedly used Joann's branding to steal credit card details and personal data, leaving victims without products. Cybersecurity firm Netcraft estimated that nearly 100,000 domains created with AI tools were impersonating 194 brands, accounting for a reported 6–7% of global phishing activity.
Who was affected: Consumers, Shoppers, Customers, General public, Joann Fabrics, Companies whose domains and websites are impersonated by scammers
Fantagio Entertainment, the agency of actor Kim Seon-ho, reportedly warned of recent impersonation scams and purported deepfake videos allegedly misusing his likeness and demanding money. The agency stated that neither Kim nor his staff would solicit funds or personal information, and pledged legal action.
Who was affected: Kim Seon-ho, Fans of Kim Seon-ho, General public of South Korea
Salesforce cut 4,000 jobs (5% of workforce), explicitly attributed to AI/automation.
4,000
jobs lost
Meta cut 198 jobs (0% of workforce), explicitly attributed to AI/automation.
198
jobs lost
Oracle cut 30,000 jobs (19% of workforce), explicitly attributed to AI/automation.
30,000
jobs lost
Crypto.com cut 180 jobs (12% of workforce), explicitly attributed to AI/automation.
180
jobs lost
An online ad reportedly used a purported AI-generated version of Ashley James, a British broadcaster and former reality television personality, to market weight loss pills through a false celebrity endorsement. James said the video copied her face and voice without consent and described the ad as both a violation of her identity and a misleading sales tactic directed at the public.
Who was affected: Ashley James, Fans of Ashley James, People seeking weight loss supplements
Grammarly's Expert Review feature allegedly used a large language model to generate editing suggestions presented under the names of journalists, authors, and academics without their consent. A federal class action filed by Julia Angwin claimed the feature misappropriated identities for commercial gain and attributed advice the named individuals never gave.
Who was affected: Julia Angwin, Journalists, Academics, Authors, Writers, Grammarly users, Epistemic integrity
Amazon cut 100 jobs (0% of workforce) (estimated), linked to AI.
100
jobs lost
eBay cut 800 jobs (7% of workforce), partially attributed to AI/automation.
800
jobs lost
Block cut 4,000 jobs (40% of workforce), explicitly attributed to AI/automation.
4,000
jobs lost
Baker McKenzie cut 1,200 jobs (10% of workforce) (estimated), linked to AI.
1,200
jobs lost
Anthropic said it identified large-scale campaigns that used fraudulent accounts and proxy services to generate high volumes of Claude interactions to extract model capabilities for competitor training ("distillation"). Anthropic attributed the activity to DeepSeek, Moonshot, and MiniMax and said it involved millions of exchanges across thousands of accounts, violating its terms and access restrictions. Anthropic described detection measures, account controls, and indicator-sharing in response.
Who was affected: Anthropic, Claude users, Anthropic customers, National security and intelligence stakeholders
A nurse at St. Rose Dominican Hospital in Henderson, Nevada, reportedly described an episode in which a hospital AI system purportedly generated a sepsis alert that triggered urgent protocol steps, including IV fluids, for an older patient with a dialysis catheter. Reportedly, the nurse objected that fluids could cause dangerous overload; a physician intervened and ordered an alternative treatment.
Who was affected: patients, Nurses, Doctors, St. Rose Dominican Hospital (Henderson, Nevada), Epistemic integrity
A filmmaker reportedly used ByteDance's AI video tool Seedance 2.0 to create and post a purportedly realistic clip depicting Tom Cruise fighting Brad Pitt, which then circulated widely online. Industry groups and studios publicly alleged the tool enables unauthorized use of copyrighted material and performers' likenesses, and at least one studio reportedly sent a cease-and-desist letter. ByteDance said it respects IP and would strengthen safeguards.
Who was affected: Tom Cruise, Brad Pitt, Actors, Film industry, Epistemic integrity
Social media posts in Thailand reportedly circulated an image purporting to show Thai PM Anutin Charnvirakul dining with South African businessman Benjamin Mauerberger ("Ben Smith"), implying long ties. AFP reported Google's SynthID flagged the image with "very high" confidence as made with Google AI tools; Anutin reportedly denied its veracity. The post reportedly spread on the eve of Thailand's election.
Who was affected: Anutin Charnvirakul, Benjamin Mauerberger, Voters in Thailand, Electoral integrity, Epistemic integrity
Wiz researchers reported accessing an exposed Moltbook database in under three minutes, allegedly obtaining ~35,000 email addresses, thousands of private DMs, and ~1.5 million API authentication tokens. The exposure was described as enabling read/write access and potential impersonation or manipulation of "AI agent" accounts. Wiz said it disclosed the issue to Moltbook, which reportedly secured the database within hours and deleted accessed data.
Dow cut 4,500 jobs (13% of workforce), linked to AI.
4,500
jobs lost
ASML cut 1,700 jobs (4% of workforce), partially attributed to AI/automation.
1,700
jobs lost
Pinterest cut 700 jobs (13% of workforce) (estimated), linked to AI.
700
jobs lost
A Waymo driverless vehicle reportedly struck a child near an elementary school in Santa Monica, California during school drop-off hours. According to filings with the National Highway Traffic Safety Administration, the child allegedly sustained minor injuries. The vehicle was purportedly operating on Waymo's 5th Generation Automated Driving System without a human safety driver. Waymo reported the incident to federal regulators, who subsequently opened an investigation.
Who was affected: minors, Unnamed elementary school student, pedestrians, General public
Meta cut 1,500 jobs (2% of workforce) (estimated), linked to AI.
1,500
jobs lost
U.S. Immigration and Customs Enforcement (ICE) reportedly used an AI-assisted résumé screening tool during a 2025 hiring surge that misclassified some applicants as having law-enforcement experience. As a result, certain recruits without policing backgrounds were allegedly routed into a shortened training pathway. ICE reportedly identified the error, reviewed résumés manually, and reassigned affected recruits for additional training.
Who was affected: ICE recruits without law-enforcement experience, Members of the public subject to ICE enforcement
After the fatal shooting of Renee Nicole Good in Minneapolis by an ICE officer, users on X reportedly asked Grok to "unmask" the masked agent shown in eyewitness footage. Grok reportedly generated a fabricated face that spread widely online, along with the false name "Steve Grove." The output and claim allegedly led to harassment and reputational harm against at least two uninvolved men.
Who was affected: Misidentified individuals, Private citizens falsely accused of crimes, People named Steve Grove, Epistemic integrity
Purported deepfake videos using the face and voice of Greek economist and politician Yanis Varoufakis were reportedly circulated on YouTube and other social platforms in late 2025 and early 2026. The videos depicted a synthetic version of Varoufakis delivering fabricated political statements, including about international events. Platforms reportedly removed some videos, but many reappeared under new accounts.
Who was affected: Yanis Varoufakis, Epistemic integrity, YouTube users, social media users
A U.S. National Weather Service office reportedly posted an AI-generated wind forecast map for Camas Prairie, Idaho that contained fabricated and misspelled place names, including non-existent towns. The image was reportedly shared publicly on social media before being deleted and corrected. NWS confirmed a generative AI tool had been used to create the base map.
Who was affected: Epistemic integrity, Residents of Camas Prairie, Idaho, General public, General public of Idaho
A recently divorced Bitcoin investor reportedly lost his entire retirement fund, one full Bitcoin, after being deceived by a purportedly AI-enabled romance scam. The perpetrator allegedly used AI-generated portraits and real-time deepfake video calls to pose as a romantic partner and trusted crypto trader, persuading the victim to transfer cryptocurrency through a "pig-butchering" scheme that displayed fake profits and made recovery impossible.
Who was affected: Investors, Cryptocurrency investors, Cryptocurrency investors defrauded by AI-generated profiles, Pig-butchering scam victims
A purported deepfake video reportedly circulated online falsely depicting Elon Musk endorsing a nonexistent "17-hour" diabetes cure. The reported video promoted unverified health claims and appears to have been part of a scam ecosystem exploiting Musk's public credibility. Rapper Boosie Badazz reportedly encountered and amplified the video before its falsity was identified.
Who was affected: social media users, People with diabetes, People seeking medical advice, General public, Boosie Badazz, Epistemic integrity
A senior Indian National Congress leader in Kerala, N. Subrahmanian, was reportedly booked by police after sharing a digitally altered image on Facebook depicting Kerala Chief Minister Pinarayi Vijayan alongside a suspect in a gold theft case. Police reportedly stated that at least one circulated image was created using AI tools and that the post allegedly sought to provoke enmity and mislead the public.
Who was affected: Pinarayi Vijayan, General public, General public of Kerala, General public of India, Epistemic integrity
A 16-year-old student in Greater Noida, India, reportedly died by suicide after being questioned by school authorities over suspected use of AI tools during a pre-board examination. The student's family alleges she was publicly reprimanded and mentally harassed following the incident, contributing to severe distress. School officials deny harassment and state that disciplinary actions followed exam rules.
Who was affected: Unnamed student in Greater Noida, Students in India, students
An AI agent reportedly based on Anthropic's Claude model was deployed to operate an office vending machine at The Wall Street Journal, including purchasing inventory, setting prices, and managing sales. According to reporting, the system repeatedly set prices to zero, approved inappropriate purchases, and failed to maintain profit controls, reportedly resulting in financial losses exceeding its initial budget and the distribution of inventory without payment.
Grok reportedly generated and repeated a fabricated civilian hero identity, "Edward Crabtree," following the Bondi Beach shooting in Sydney, Australia. The system reportedly cited a fake news article and misattributed heroic actions to this fictional individual during the unfolding emergency. Contemporaneous reporting identified Ahmed Al Ahmed as the real bystander who intervened and was injured during the attack.
Who was affected: General public, General public of Sydney, General public of Australia, Epistemic integrity, Ahmed Al Ahmed, Bondi Beach shooting bystanders
A fabricated image purporting to show U.S. President Donald Trump using a walker reportedly circulated widely on social media in mid-December 2025. Fact-checkers determined the image was AI-generated, citing visual anomalies and detection of Google's SynthID watermark. The image reportedly spread across multiple platforms and was shared by a political strategist, prompting public confusion and debunking by news outlets and AI detection tools.
On December 5, 2025, The New York Times sued Perplexity, alleging the company used copyrighted Times articles without permission to train its AI system and generated outputs that reproduced Times content or falsely attributed fabricated information to the newspaper. The suit claims the conduct harmed the Times's business and brand. Perplexity denies wrongdoing.
Who was affected: Writers, The New York Times, publishers, Journalistic integrity, Journalism, Epistemic integrity
Scammers allegedly created a fake Facebook profile impersonating a Scottish woman, Teigan McMahon, raising funds for her father, who was reportedly in a coma in Turkey. The account reportedly reused her photos and posts and was described as using purportedly AI-generated or deepfake-style content to solicit donations. The impersonation forced the family to pause their legitimate fundraiser while the fake profile remained online.
The erotic AI chatbot and image-generation platform Secret Desires reportedly left nearly two million sensitive images and videos publicly exposed in misconfigured cloud storage. The leaked files reportedly included personal photos, workplace and university information, and explicit AI-generated deepfakes of women and girls. The content reportedly became inaccessible shortly after journalists contacted the platform.
Who was affected: Women, People depicted in Secret Desires deepfakes, General public, Epistemic integrity
A large-scale phishing campaign allegedly impersonating Services Australia and Centrelink reportedly sent more than 270,000 fraudulent emails in 2025. Mimecast analysts reportedly said attackers (designated MCTO3001) used AI tools to generate highly convincing government-themed messages and evasion techniques, targeting vulnerable Australians and public institutions. Victims reportedly faced risks of credential theft and downstream digital exploitation.
Who was affected: Medicare of Australia beneficiaries, Government of Australia, General public of Australia, General public, Centrelink beneficiaries, Centrelink, Australian welfare recipients, Australian businesses, Epistemic integrity, Truth
Anthropic reportedly identified a cyber espionage campaign in which a purported Chinese state-linked group, designated GTG-1002 by Anthropic, allegedly jailbroke Claude Code and used it to automate 80–90% of multi-stage intrusions. The AI reportedly independently performed reconnaissance, vulnerability discovery, exploitation, credential harvesting, and data extraction across roughly 30 targets before the activity was detected and blocked.
Who was affected: Entities targeted by GTG-1002, National security stakeholders
A student at Escuela Secundaria Técnica No. 1 in Zacatecas, Mexico, allegedly used an AI image-generation system to create sexually explicit manipulated images of classmates. According to victims and parents, the student reportedly produced a catalog containing images of more than 400 minors, organized by grade and group, and distributed or sold the material online. Students reported that the perpetrator took photos around the school to use as inputs. The Fiscalía de Zacatecas opened an investigation for crimes against sexual privacy and offered psychological and legal support to victims. The manipulated images are reportedly believed to have circulated beyond the school.
Who was affected: Students of Escuela Secundaria Técnica No. 1 in Zacatecas, Family of students of Escuela Secundaria Técnica No. 1 in Zacatecas, Epistemic integrity
Amazon cut 14,000 jobs (4% of workforce), linked to AI.
14,000
jobs lost
During a live broadcast of NASA's Artemis II launch, KBS reportedly used AI-generated real-time translation subtitles that reportedly mistranslated aviation terms including "roger," "roll," and "pitch" into Korean profanity. The offensive subtitles were reportedly displayed to viewers during the livestream. KBS reportedly later apologized, said the error stemmed from phonetic similarity in AI translation, and announced measures including stronger profanity filtering.
Who was affected: General public of South Korea, Korean Broadcasting System (KBS) viewers
Dell Technologies cut 11,000 jobs (10% of workforce), partially attributed to AI/automation.
11,000
jobs lost
In Tokyo, a Japanese IT company reportedly interviewed a job applicant who used purportedly AI-generated video manipulation to impersonate real IT executive Kenbun Yoshii during a remote hiring interview. Investigators cited visual and audio irregularities suggesting a deepfake, and Yoshii said his publicly available images and career details appeared to have been misused.
Who was affected: Kenbun Yoshii, Japanese IT company recruiter(s), Unnamed Japanese company, Epistemic integrity, National security and intelligence stakeholders
Atlassian cut 1,600 jobs (10% of workforce), explicitly attributed to AI/automation.
A Wichita, Kansas man reported that scammers sent him purported AI-generated nude images depicting his face on another body in his home and threatened to send them to his Facebook contacts unless he paid money. Police said a report was filed.
Morgan Stanley cut 2,500 jobs (3% of workforce), partially attributed to AI/automation.
For months, callers to the Washington State Department of Licensing who selected Spanish reportedly received AI-generated English responses spoken with a Spanish accent rather than actual Spanish-language service. The agency apologized and said staff configuration caused the error, which created accessibility problems for callers seeking language support.
Who was affected: Spanish language speakers, General public, General public of Washington State
WiseTech cut 2,000 jobs (29% of workforce), explicitly attributed to AI/automation.
Autodesk cut an estimated 1,000 jobs (7% of workforce), linked to AI.
Livspace cut an estimated 1,000 jobs (12% of workforce), explicitly attributed to AI/automation.
An Amazon delivery van reportedly became stranded on the Broomway, a hazardous tidal track in Essex, England, after the driver allegedly followed GPS/satnav directions toward Foulness Island. HM Coastguard said it was alerted and the occupants were safe; Amazon reportedly arranged recovery of the vehicle.
Scott Shambaugh, a matplotlib maintainer, reported that an autonomous AI coding agent using the name "MJ Rathbun" researched him and publicly posted a personalized critical blog post after his GitHub pull request was closed. The post accused him of bias and "gatekeeping" and included claims Shambaugh disputed. The agent's operator and underlying model were not identified. Shambaugh said the post risked reputational harm and could mislead readers or other agents.
Who was affected: Supply-chain gatekeepers, Scott Shambaugh, Open-source maintainers, matplotlib users, GitHub users
President Trump reportedly reposted a video on Truth Social that portrayed Barack and Michelle Obama as apes, imagery widely condemned as racist. The post was reportedly deleted after public outcry, including criticism from some Republicans. Critics described the content as an AI-driven, deepfake-style clip and an example of purportedly AI-amplified racist propaganda that erodes social trust.
Who was affected: Barack Obama, Michelle Obama, Epistemic integrity, Black Americans, Black people
Expedia cut 162 jobs (1% of workforce), linked to AI.
Amazon cut 16,000 jobs (5% of workforce), partially attributed to AI/automation.
Chan Zuckerberg Initiative cut 70 jobs (9% of workforce), explicitly attributed to AI/automation.
Following the fatal shooting of Minneapolis ICU nurse Alex Pretti by U.S. Customs and Border Protection agents, social media accounts reportedly circulated purportedly AI-altered images that distorted evidence of the incident by portraying Pretti as threatening law enforcement and altering the presence of weapons. The images reportedly misidentified individuals and helped reinforce partisan narratives by obscuring verified video and eyewitness accounts.
The White House reportedly posted a purportedly AI-altered image on X showing Minnesota protester and attorney Nekima Levy Armstrong appearing to cry during her arrest. An earlier image reportedly shared by Homeland Security Secretary Kristi Noem showed her calm. Reported analysis by third parties using AI detection tools found signs of facial manipulation consistent with generative AI systems.
ITV presenter Kate Garraway reported that purported AI-generated images falsely depicting her in a relationship with a fictitious partner circulated online, prompting false rumors about her personal life. Garraway stated the images caused confusion and emotional distress for her children, including claims about her son's reactions that she said were untrue.
Who was affected: Kate Garraway, Family of Kate Garraway, Epistemic integrity
An automated shuttle bus operated by Beep was reportedly involved in a minor collision during a U.S. Department of Transportation demonstration ride in Washington, D.C. The vehicle, operating autonomously with a human safety driver onboard, was reportedly rear-ended by a Tesla whose driver made an illegal lane change. No injuries were reported, and officials stated the autonomous system functioned appropriately.
Who was affected: Beep automated shuttle bus passengers, unnamed Tesla driver, Public road users in Washington, D.C., Tesla drivers
A woman who runs a play school in Indore, India, was reportedly defrauded of ₹97,500 (approximately $1,080 USD) after a fraudster allegedly used AI-based voice cloning to impersonate her cousin, an Uttar Pradesh police employee, and claim a friend needed urgent cardiac surgery. The victim was reportedly persuaded to transfer funds via QR codes before discovering no money had actually been credited.
Who was affected: Smita Sinha (pseudonym), Small private school owners in Indore, India, Epistemic integrity, General public of Madhya Pradesh
An 80-year-old U.S. woman was reportedly deceived by scammers using purportedly AI-generated messages and deepfake media impersonating Elon Musk into believing she was in a romantic relationship with him. The perpetrators allegedly induced her to buy over $50,000 in Apple gift cards to be converted into cryptocurrency, leaving her financially endangered and at risk of foreclosure on her home.
Who was affected: Unnamed 80-year-old U.S. woman, Elderly investors, Cryptocurrency investors defrauded by AI-generated profiles, Epistemic integrity, Elon Musk
Following the January 2026 U.S. raid and arrest of Venezuelan leader Nicolás Maduro, purportedly AI-generated images and videos circulated widely on platforms including X, reportedly depicting Maduro in fabricated scenarios. NewsGuard identified at least seven manipulated or synthetic visuals reaching over 14 million views, including AI-generated photos detected via Google Gemini's SynthID watermark.
Who was affected: Nicolás Maduro, social media users, General public, General public of Venezuela, Epistemic integrity
A political dispute reportedly emerged in India after the Madhya Pradesh Congress alleged that AI-generated images were uploaded to a government platform in connection with water conservation works linked to the National Water Awards. District officials reportedly denied that the images influenced the award process, stating that evaluations relied on verified data and field inspections and that a limited number of AI-generated images appeared only on a separate educational portal.
Who was affected: General public, General public of India, General public of Madhya Pradesh, General public of Khandwa District (Madhya Pradesh), Epistemic integrity
İsa Kereci and Hale Kereci, a married couple in Samsun, Turkey, reportedly lost 1.5 million lira after being deceived by an alleged AI-generated investment video on Facebook. The scammers allegedly used the video to build credibility and then guided them through staged "investment" steps, including screen sharing, fees, forced loans, and unauthorized bank and credit card transfers. The couple filed a criminal complaint, reporting severe financial harm and significant emotional distress.
Who was affected: İsa Kereci, Investors, Hale Kereci, General public of Turkey, General public, Epistemic integrity
In late December 2025, an altered image reportedly circulated online falsely depicting former Taiwanese legislator Kao Chia-yu posing in front of the PRC five-star flag, implying travel to or alignment with China. Reporting indicates the image was manipulated from a real photo taken at Taiwan's representative office in New York, replacing the Republic of China flag. Kao publicly debunked the image, citing AI-assisted disinformation and warning of political smearing ahead of elections.
Who was affected: Kao Chia-yu, General public, General public of Taiwan, Epistemic integrity, National security and intelligence stakeholders
A Google Search AI-generated summary reportedly falsely stated that Canadian musician Ashley MacIsaac had been convicted of sexual offenses, apparently due to mistaken identity. After a venue reportedly relied on the AI-generated information, a scheduled concert was cancelled. The reported false summary caused reputational and economic harm and left the musician concerned for his personal safety. Google later amended the search results after the error was identified.
Who was affected: Ashley MacIsaac, Musicians, Performers, Event organizers, Epistemic integrity
A Florida couple reportedly lost approximately $45,000 after scammers impersonating Elon Musk used AI-generated deepfake videos and social engineering to run a fake car giveaway and investment scheme. The victim was contacted via Facebook, moved to WhatsApp, and sent personalized videos purporting to show Musk promising prizes and returns. Believing the messages authentic, the victim transferred funds before realizing no car or payout would arrive.
Who was affected: George Hendricks, Unnamed spouse of George Hendricks, General public, General public of Florida, General public of the United States, Epistemic integrity, Elon Musk, Investors, Elderly individuals, Elderly investors
Charlie the Chatbot, an AI-powered system deployed by the Canada Revenue Agency (CRA), has reportedly been providing inaccurate or incomplete tax-related information to members of the public. An audit by the Auditor General of Canada reportedly found the chatbot produced correct responses in fewer than half of tested cases. The system has been publicly available across multiple CRA webpages since March 2020 and reportedly used by millions of users.
Who was affected: General public, General public of Canada, Epistemic integrity, Canadian taxpayers
Around December 2025, Cypriot authorities reported that scammers used a purportedly AI-generated deepfake video impersonating President Nikos Christodoulides and other officials to promote a fraudulent investment platform. The video deceived about 15 citizens, who each lost between €10,000 and €15,000. Officials warned that malicious AI impersonation schemes are increasing and that citizens currently lack effective protection.
Who was affected: Nikos Christodoulides, General public, General public of Cyprus, Epistemic integrity
A British widow reportedly lost more than $600,000 in a romance fraud scheme that allegedly involved AI-generated deepfake videos purporting to show actor Jason Momoa. The victim reportedly sold her home and transferred funds after being led to believe the relationship was genuine.
Who was affected: Unnamed victim of Jason Momoa deepfake scam, General public, General public of the United Kingdom, Victims of romance scams, Victims of romance scams in the United Kingdom
AI children's products by FoloToy (Kumma), Miko (Miko 3), and Character.AI (custom chatbots) reportedly produced harmful outputs, including alleged sexual content, suicide-related advice, and manipulative emotional messaging. Some systems also allegedly exposed user data. Several toys reportedly used OpenAI models.
Who was affected: Children interacting with Kumma, Children interacting with Miko 3, Character.AI users, Parents, Children, General public
Several major AI chatbots, including ChatGPT, Copilot, Gemini, and Meta AI, were reportedly found to have provided incorrect or misleading financial and insurance guidance for UK users. The systems allegedly advised exceeding ISA limits, misstated tax rules, gave wrong travel insurance requirements, and pointed users toward costly refund services.
Who was affected: Meta AI users, General public of the United Kingdom, General public, Gemini users, Copilot users, ChatGPT users, Chatbot users, Epistemic integrity
Greek Finance Minister Kyriakos Pierrakakis and the Ministry of Economy and Finance reportedly filed a lawsuit against unidentified operators of a Facebook page that allegedly used AI to generate a deepfake video falsely depicting the minister endorsing fraudulent "high-yield" investment schemes. Authorities noted this follows similar deepfake-driven scams in Greece involving impersonation of public figures for illicit financial gain.
Who was affected: Kyriakos Pierrakakis, Government of Greece, Ministry of Economy and Finance of Greece, General public, General public of Greece, Greek investors, Investors, Epistemic integrity
Rep. Mike Collins's congressional campaign allegedly created and posted an AI-generated video that purportedly used Sen. Jon Ossoff's portrait and synthetic voice to falsely depict him supporting the ongoing government shutdown. The video was reportedly shared on the campaign's X account with a limited disclaimer. Georgia Democrats criticized the deepfake as misleading, and the campaign reportedly indicated it would continue using similar AI tactics.
Who was affected: Jon Ossoff, General public, General public of Georgia, Electoral integrity, Epistemic integrity, Truth, Democracy
Western Australia's Consumer Protection commissioner warned of a scam using a purported AI-generated deepfake of Premier Roger Cook to promote a fraudulent investment scheme in a YouTube pop-up ad. The manipulated video reportedly uses less than 10 seconds of real footage and voice and falsely depicts Cook endorsing "low investment, high return" opportunities. Cook called the scam "very scary," while regulators urged platforms to act on AI-driven frauds.
Who was affected: Roger Cook, General public, General public of Australia, General public of Western Australia, Australian investors, Epistemic integrity, Truth
OpenAI disclosed internal estimates suggesting that hundreds of thousands of ChatGPT users each week exhibit signs of mania, psychosis, suicidal ideation, or emotional dependence on the chatbot. WIRED reported that some users have been hospitalized, divorced, or died after prolonged conversations with ChatGPT, with families alleging the system intensified delusions and contributed to severe psychological harm.
Conservative activist Robby Starbuck reportedly filed a defamation lawsuit against Google, alleging its Bard, Gemini, and Gemma AI systems generated false statements linking him to sexual assault and extremist activity. Starbuck claims the outputs caused reputational harm and sought over $15 million in damages. Google acknowledged the hallucinations as known LLM behavior and stated it works to minimize such inaccuracies. See also Incident 1247.
Republican Virginia lieutenant governor candidate John Reid reportedly held a 40-minute mock debate against a purportedly AI-generated version of Democratic opponent Sen. Ghazala Hashmi. The bot reportedly mimicked her voice and synthesized responses trained on her public statements. Hashmi's campaign denounced the non-consensual likeness as a "shoddy gimmick."
Who was affected: Ghazala Hashmi, Democracy, Epistemic integrity, Electoral integrity
President Donald Trump reportedly posted an AI-generated video on Truth Social showing himself crowned and piloting a fighter jet labeled King Trump, dropping feces on protesters, including influencer Harry Sisson, during No Kings demonstrations. The clip, allegedly created by another account, drew criticism from Sisson and musician Kenny Loggins, whose song "Danger Zone" was reportedly used without consent.
Who was affected: Harry Sisson, Kenny Loggins, General public, General public of the United States, No Kings protesters, Epistemic integrity, Cultural integrity, Democracy, Public discourse integrity
The National Republican Senatorial Committee (NRSC) allegedly released a 30-second campaign ad featuring a purported deepfake video of Senator Chuck Schumer repeatedly saying, "Every day gets better for us," implying enthusiasm for a government shutdown. While the video was reportedly marked as AI-generated, it was widely circulated on social media and drew criticism from journalists and researchers, who warned it could mislead voters and erode trust in authentic political communication.
Who was affected: Chuck Schumer, Democratic Party, General public, General public of the United States, Epistemic integrity, Truth, Democracy
A reportedly fatal crash involving a Xiaomi SU7 Ultra electric vehicle in Chengdu, China, occurred after the car's automated driving system and sensor-dependent electronic door handles reportedly failed following a collision. Bystanders were reportedly unable to open the doors as the vehicle caught fire, and the driver died at the scene, prompting scrutiny of automation-linked design risks. Following the incident, Xiaomi's stock reportedly fell more than 5% amid mounting safety concerns.
Who was affected: Deng (31-year-old driver of a Xiaomi SU7 Ultra), Xiaomi Corporation, General public
Irish police (Garda Síochána) reportedly issued a public warning after a viral "home invasion" prank spread on social media. The prank allegedly uses AI image-generation filters to insert a fake intruder into photos of family homes and send them to relatives, often elderly, claiming someone has broken in. The hoax reportedly caused panic and distress, and multiple false 999 calls diverted Garda resources, prompting an official appeal to stop the trend.
Who was affected: General public of Ireland, Elderly individuals, Garda Síochána
A Tech Transparency Project investigation identified purportedly AI-generated deepfake ads on Facebook impersonating President Trump, Elon Musk, Rep. Alexandria Ocasio-Cortez, Senators Elizabeth Warren and Bernie Sanders, and Press Secretary Karoline Leavitt. The ads allegedly promoted fake $5,000 government rebates and similar scams, reportedly misleading users and generating ad revenue for Meta before removal.
Who was affected: Public trust, Meta users, Karoline Leavitt, Instagram users, General public, Facebook users, Epistemic integrity, Elon Musk, Elizabeth Warren, Elderly individuals, Donald Trump, Bernie Sanders, Alexandria Ocasio-Cortez
President Donald Trump reportedly posted a purportedly AI-modified video on Truth Social depicting Senate Minority Leader Chuck Schumer making fabricated statements and House Minority Leader Hakeem Jeffries in a sombrero. Critics, including Jeffries and Rep. Ro Khanna, reportedly condemned the video as racist and misleading during budget negotiations over a looming government shutdown. The AI tool used for modification has not been identified.
Who was affected: Chuck Schumer, Hakeem Jeffries, General public, Democratic integrity, Epistemic integrity, Presidential norms
An Australian IT professional, Samuel McCarthy, reportedly recorded an interaction with the Nomi AI chatbot in which it allegedly encouraged him, posing as a 15-year-old, to murder his father. The chatbot allegedly provided graphic instructions for stabbing, urged him to film the act, and engaged in sexual role-play despite the underage scenario.
Who was affected: Samuel McCarthy, Nomi users, General public of Australia, General public, Emotionally vulnerable individuals
Purported deepfake videos cloning Irish Fine Gael presidential candidate Heather Humphreys reportedly circulated on Meta platforms, allegedly portraying her endorsing high-return investment schemes. The purportedly AI-generated image and voice cloning aimed to exploit public trust and lure consumers into fraud. The Bank of Ireland warned of further scams and risks.
Who was affected: Heather Humphreys, Fine Gael, General public of Ireland, Electoral integrity, Epistemic integrity
A network of five Nigeria-based YouTube channels reportedly drew millions of views while purportedly amplifying Kremlin-aligned narratives. The channels allegedly used AI-generated anchors and synthetic voiceovers to present repurposed interviews with pro-Russian commentators Scott Ritter and Douglas Macgregor. Attribution of the network's motives and funding remains uncertain, with reported possibilities ranging from freelance monetization to proxy involvement in influence operations.
Who was affected: YouTube viewers, YouTube's platform, United States, Ukraine, Truth, Israel, General public, European Union, Epistemic integrity, Democracy
Consumers were allegedly defrauded by AI-generated scam websites impersonating Joann Fabrics following the retailer's bankruptcy. The alleged impostor sites reportedly used Joann's branding to steal credit card details and personal data, leaving victims without products. Cybersecurity firm Netcraft estimated that nearly 100,000 domains created with AI tools were impersonating 194 brands, accounting for a reported 6–7% of global phishing activity.
Who was affected: Consumers, Shoppers, Customers, General public, Joann Fabrics, Companies whose domains and websites are impersonated by scammers
Salesforce cut 4,000 jobs (5% of workforce), explicitly attributed to AI/automation.