Marvell Tech Shifts Focus to Data Center and AI Chips
Marvell Technology's data center and AI businesses are thriving. The company's enterprise...
Carbon Footprint Unveiled for Generating AI-Driven Graphics
A team of researchers has discovered a way to make ChatGPT reveal some of the private data it has been trained on.
The researchers, who work at Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich, were able to make ChatGPT reveal private information by asking the chatbot to repeat random words forever. In response, ChatGPT churned out people’s private information including email addresses and phone numbers, snippets from research papers and news articles, Wikipedia pages, and more.
The researchers urged AI companies to seek out internal and external testing before releasing large language models. They also said that it is important for AI companies to be upfront about the data that their models are trained on.
OpenAI, the company that developed ChatGPT, has patched the vulnerability that the researchers discovered. However, the researchers say that a more serious adversary could potentially get a lot more private data by spending more money.
This research highlights the importance of carefully testing AI models before they are released to the public. It is also important for AI companies to be transparent about the data that their models are trained on.
Catch all the Sci-Tech News, Breaking News Event and Latest News Updates on The BOL News
Download The BOL News App to get the Daily News Update & Follow us on Google News.