The biggest companies in AI gave hackers a chance to do their worst

The six biggest companies in AI had a peculiar challenge for hackers last week: make their chatbots say the most terrible things.

Hackers lined up outside the Caesars Forum conference center just off the Las Vegas Strip for their chance to trick some of the newest and most widely used chatbots. Held as part of Def Con, the world’s largest hacker conference, the contest was rooted in “red teaming,” a crucial concept for cybersecurity in which making a product safer from bad actors means bringing in people to identify its flaws.

But instead of tasking the hackers with finding software vulnerabilities, a mainstay of Def Con contests for decades, the contest instead asked them to perform so-called prompt injections, where a chatbot is confused by what a user enters and spits out an unintended response. Google’s Bard, OpenAI’s ChatGPT and Meta’s LLaMA were among the participating chatbots.

It was rare for many of the event’s 156 stations to sit empty for long. Sven Cattell, who founded the AI Village, the nonprofit that hosted the event within Def Con, said that he estimated about 2,000 hackers had participated over the weekend.

“The problem is, you don’t have enough people testing these things,” Cattell said. “The largest AI red team that I’m aware of is 111. There are more than 111 people in this room right now, and we cycle every 50 minutes.”

Generative AI chatbots, also known as large language models, work by taking a user prompt and generating a response, with many of the most modern and advanced bots now capable of doing everything from generating sonnets to taking college tests. But the bots can often get things wrong, generating answers with false information.

Source: NBC News

Read the Full Story Here