Another day, another ChatGPT data leak.
This time, it was login credentials and personal information belonging to a pharmacy customer on a prescription drug portal. According to Ars Technica, a user named Chase Whiteside unwittingly received these chunks of someone else's conversation in response to an unrelated query and submitted them to the tech site.
“I went to make a query (in this case, help coming up with clever names for colors in a palette),” wrote Whiteside in an email. “When I returned to access moments later, I noticed the additional conversations.”
The conversations appear to come from a frustrated employee troubleshooting issues with an app (its name redacted by Ars Technica) used by the pharmacy. In addition to the full text disparaging the app, the leak included a customer’s username and password and the employee’s store number. Though it can’t be confirmed, it looks as if the entire feedback ticket was included in the ChatGPT response.
This isn’t the first time ChatGPT has had security problems. Hackers and researchers have discovered vulnerabilities that let them extract sensitive information, whether through prompt injection or jailbreaking.
Last March, a bug was discovered that exposed ChatGPT Plus users’ payment information. Although OpenAI has patched issues like these, no fix protects personal or confidential information that users themselves share with ChatGPT. That was the case when Samsung employees using ChatGPT for help with code accidentally leaked company secrets, and it’s why many companies have banned ChatGPT usage.
We’ll say it again: don’t share any sensitive or personal information — especially if it’s not yours — with ChatGPT.
The post ChatGPT reportedly leaked private conversations from pharmacy customers from Mashable appeared first on Tom Bettenhausen’s.