Airbnb Scandal: Lawsuit with Fake AI-Generated Photos

Photo: Airbnb
Airbnb, the global accommodation booking platform, has found itself at the center of a scandal over a false damage claim, reports The Guardian. A London woman accused a New York apartment host of using fake, AI-altered photos to support a damages claim.
In early 2025, the London resident rented a one-bedroom apartment in Manhattan for two and a half months while studying there. After seven weeks she moved out, citing safety concerns about the neighborhood. Soon afterwards, the host filed a claim with Airbnb for over £12,000 ($16,200), alleging that she had cracked a coffee table, stained a mattress with urine, and damaged a robot vacuum, a sofa, a microwave, a TV, and an air conditioner.
The woman denied the allegations, stating she left the property clean and tidy and had hosted only two guests during her stay. Among the host’s evidence were two photos of the table showing different types of damage, which she claimed indicated digital manipulation or AI generation. “These discrepancies are impossible in genuine photos of the same object,” she said. “This should have immediately discredited the claim if the materials had been checked, even superficially. But Airbnb not only failed to detect the forgery, they completely ignored my explanations and evidence.”
She offered to provide a witness who had been present when she moved out. Airbnb nevertheless initially ruled in the host's favor, ordering her to pay £5,314 (over $7,000) after what it called a "thorough review of the photos." On appeal she was refunded £500 ($675), and when she said she would never use Airbnb again, the platform offered her £854 ($1,153) – a fifth of her rental cost – which she declined. Ultimately she received a full refund for her booking of £4,269 ($5,763), and Airbnb apologized and removed the negative review the host had left on her profile.
The guest expressed concern for other renters who might fall victim to similar fraudulent schemes. “Given how easily such images can now be generated by AI and apparently accepted by Airbnb despite investigations, the host should not be allowed to get away with such fabrications,” she stated.
The host, listed on Airbnb as a "Superhost" with a high rating, was issued a warning; the account will be removed if another such report is received. Airbnb also said it plans to revise its dispute review system.
Serpil Hall, Director of Economic Crime at the consultancy Baringa, noted that image and video manipulation is now easier than ever: cheap, widely available software requires minimal skill, so such cases are on the rise. UK insurers, for example, reported a 300% increase in fake photo-related fraud between 2022 and 2023, often in claims for car and property damage. Fraudsters frequently use "shallowfakes" – images altered with basic editing software rather than AI. In one case, scammers took a photo of a van from social media, added a crack to the bumper, and demanded over £1,000 ($1,350) in compensation. Allianz and Zurich UK regard such fakes as a serious threat and link them to a 33% rise in UK car insurance premiums in early 2024.
In the Netherlands in 2025, AI-generated and digitally altered evidence was also detected in insurance claims. National broadcaster NOS reported a man forging medical documents to claim holiday cancellation payouts from seven insurers, with fake claims totaling over €150,000. Another woman submitted a falsified receipt for garden furniture and demanded €2,000 for allegedly stolen property.
Police in Uttar Pradesh, India, uncovered a large-scale Aadhaar-linked insurance fraud scheme. Criminal groups created fake identities, forged medical documents, and filed fraudulent health, life, and motor insurance claims. The scams involved networks of agents, hospitals, and middlemen targeting vulnerable individuals and filing claims in their names. Authorities estimate that 10–15% of all claims in India could be fraudulent.
The Insurance Information Bureau (IIB), India’s central fraud monitoring body, analyzed over 144 million records in five years, identifying 300,000 potentially fake life insurance cases worth ₹1.73 trillion ($19.8 billion). Investigators found fraudsters changing Aadhaar-linked phone numbers and emails to prevent insurers from verifying claims with actual policyholders. Some claims were made in the names of deceased or gravely ill individuals, using fake addresses to bypass “blacklisted” postal codes. Claim amounts often exceeded ₹2 million ($22,820).
Many organizations now conclude that images in disputes can no longer be taken at face value – they require forensic checks and analytical fraud detection models.
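To illustrate what even the cheapest layer of such a forensic check can look like, the sketch below scans an image file's raw bytes for metadata strings that editing tools often embed in a JPEG's EXIF/XMP segments. This is a hypothetical heuristic, not Airbnb's or any insurer's actual pipeline: a clean scan proves nothing, since metadata is trivially stripped, and serious analysis relies on deeper techniques (error-level analysis, sensor-noise fingerprinting, provenance standards such as C2PA). A match, however, is a cheap reason to escalate a claim to real forensic review.

```python
# Minimal heuristic sketch (assumed signatures, not an exhaustive list):
# editors such as Photoshop and GIMP commonly write their names into a
# JPEG's EXIF "Software" tag or XMP block, so a plain substring scan of
# the file bytes can catch careless fabrications.

EDITOR_SIGNATURES = (
    b"Adobe Photoshop",
    b"GIMP",
    b"photoshop:",   # XMP namespace prefix left by Adobe tools
)

def scan_for_editor_traces(image_bytes: bytes) -> list[str]:
    """Return every known editor signature found in the raw file bytes."""
    return [sig.decode("ascii", "replace")
            for sig in EDITOR_SIGNATURES
            if sig in image_bytes]

# Illustrative fabricated file: an XMP block mentioning Photoshop is flagged.
fake_jpeg = b"\xff\xd8\xff\xe1<x:xmpmeta xmlns:photoshop:...>Adobe Photoshop 2024"
print(scan_for_editor_traces(fake_jpeg))  # -> ['Adobe Photoshop', 'photoshop:']
```

Absence of a hit must never be treated as evidence of authenticity; the check only filters out the laziest forgeries before human or model-based review.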
Tags: Airbnb, fraud, AI, artificial intelligence, travel, short-term rental, shallowfake, insurance, image editing, scams