Cyberscammers are cashing in by exploiting cancer patients — we must stop them
By Leeza Garber, opinion contributor, The Hill

Natalie is a 12-year-old superstar. She is outgoing, intelligent, determined — and when she recently lost her hair from treatment for B-cell leukemia, she said it just made her more aerodynamic. Though still on a tough road, Natalie is thriving.  

But in the last week, the foundation Natalie and her parents created to support children affected by leukemia saw shocking updates on its social media accounts. Someone had stolen Natalie's photo, likeness and personal information, falsely reported that she had died, and was soliciting money online to help with funeral expenses.

Unfortunately, September's Childhood Cancer Awareness Month has become a prime hunting ground for cybercriminals. As foundations, children and parents post photos and details about their cancer journeys, scammers repurpose this information to create fake accounts on social media and fundraising platforms. During the darkest of times, when good people are faced with the most trying of circumstances, the cancer awareness movement is working to bring hope, love and light to others. At the same time, fraudsters are stealing, manipulating and lying, leaving a trail of frustration and disappointment in their wake.

The scheme is not new: with easy access to photos, videos and personal information, a cybercriminal can spin up fraudulent accounts and donation campaigns — especially now, in an era of on-demand generative AI, deepfake apps and voice-cloning software. This type of crime is far easier to carry out than creating an entire fake charity (which also happens at scale). But investigating the use of social media and sites like GoFundMe to scam unsuspecting donors has become a bad game of whack-a-mole.

In 2018, an Ohio mother shaved her young daughter's head to scam thousands of dollars from donors for fake cancer treatments. In 2022, a story about scammers tugging at the heartstrings of 14,000 donors went viral: three people had set up a GoFundMe campaign that purported to benefit a homeless man in Philadelphia. The hoax, which led to thousands of dollars in restitution and jail time for the perpetrators, didn't require much planning — in fact, a key piece of evidence in the case was a text from one of the scammers to a friend that read, "I had to make something up to make people feel bad."

Crowdfunding platforms are aware that these scams exist. GoFundMe offers tips for spotting fraudulent campaigns and guarantees that it will reimburse donors if there is a problem. Government agencies, state consumer protection offices and watchdog groups are also in the know: the Better Business Bureau, which recently reported on TikTok cons soliciting charitable donations, offers specific guidance on avoiding crowdfunding fraud. Still, the quick takedown of these schemes on social media depends on content moderation policies and oversight.

For small foundations seeking to help their communities, social media updates, articles and grassroots fundraising campaigns are vital. It follows that platform response times to scam claims and red-flagged posts need to be swift. Unfortunately, that's not always the case. And content moderation itself is losing Big Tech supporters and dollars every day.

Although Facebook, Instagram, X, TikTok and YouTube use AI to help human moderators respond to violations of their content policies, response times vary — as do success rates.

But the victims of this type of identity theft still suffer. How can they protect themselves, while also using their images and information to give others hope in the fight against childhood cancer?  

There are legal channels for protecting photos, such as copyright law and the Digital Millennium Copyright Act's takedown process. But the most accessible way to report fraudulent content and get it removed is through the platforms themselves.

Mark Zuckerberg recently announced that new internal moderation procedures will focus on tackling "illegal and high-severity violations." It remains to be seen where identity theft and impersonation fit into that plan. Meta did release a Community Standards Enforcement Report highlighting that it took action on one billion "fake accounts" in the first few months of 2025. And, importantly, some of the fraudulent accounts impersonating Natalie were locked relatively quickly.

The use of social media for cancer awareness months feels like what the platforms were made for in the first place: helping communities connect and work together for the most noble of purposes.

These public spaces must be protected from cyberscammers targeting childhood cancer organizations and individual children, and Breast Cancer Awareness Month has just begun. As we approach a new age of generative AI capabilities, attentive moderation and deception detection will become significantly more complex and more necessary. Otherwise, the risk of exploiting the most vulnerable will overtake the optimism of connected cyberspace.

Leeza Garber, Esq., is a cybersecurity and privacy attorney and adjunct law professor, teaching at The Wharton School and Drexel University’s Kline School of Law. Her book, “Can. Trust. Will.: Hiring for the Human Element in the New Age of Cybersecurity,” was published by Business Expert Press. 
