Black Hat Opens With Call to Steer AI From Predictions to Policy

Without cybersecurity guardrails now, AI will be harder to harness in the future.

Jeff Moss on stage at Black Hat USA 2023. Source: Dan Raywood

BLACK HAT USA – Las Vegas – Wednesday, Aug. 9 – A lot has changed, "but a lot has also stayed the same," according to Black Hat and DEF CON founder Jeff Moss in his opening Black Hat keynote. The industry veteran acknowledged that AI is bringing both opportunities and the risks that come with them.

Predictions By the Practically Human

Moss said what he finds most fascinating about AI is that it is essentially predictions, and it is getting cheaper and cheaper to produce them. "If you think it's easy to create an AI model now, wait ten years," he said. "Instead of doing normal log analysis, you're doing a prediction analysis. The more you can turn your IT problems and your business problems into prediction problems, the sooner you'll get a benefit from AI."

Moss said the progress of AI parallels how smart cars evolved: they predict when they should accelerate or brake, and whether they should turn, based on models of what real humans have done. AI will allow cybersecurity decisions to be made in the same way.

Pointing to the publication of the Blueprint for an AI Bill of Rights in 2022, which promotes responsible AI innovation and is expected to have significant implications for cybersecurity, Moss noted that governments have not managed to get ahead of emerging technology in this fashion before. The proactive attitude around AI offers a chance for stakeholders to participate in the rule-making around the technology, he said, and a chance for people to consult with one another, asking questions like "What do you think about this rule?", "What do you think about accountability or a responsible person for an algorithm?", and "What do you think about training data?"

Scraping Data for Training

Moss also talked about unstructured data, specifically in relation to recent news that Zoom has updated its terms of service to use some customer data for training its AI models. Admitting he was not OK with that update, since there is no option to opt out of it, he asked whether this "is the next battle, for rights on the Internet."

"What happens if you take unstructured training data from a photography site and train your generative systems?" he asked. "Do you have a right to scrape the Internet and train your for-profit system? How do I not get trained on it? I think this means it'll be harder and harder for us to find authentic information."

He posed another question to the audience: Are people willing to spend a little bit more for an authentic human creation, or pay a premium for a real painting or a real piece of music?

He concluded by saying that the cybersecurity community will be the keepers of AI representation, and businesses will be able to create the models and sell the traffic around AI.

"You're going to hear a lot about the problems of AI, but I also want you to think [about] a lot of the AI business opportunities; the opportunities for us, as professionals, to get involved and help steer the future," he said.


About the Author

Dan Raywood, Senior Editor, Dark Reading

With more than 20 years' experience in B2B journalism, including 12 years covering cybersecurity, Dan Raywood brings a wealth of experience and information security knowledge to the table. He has covered everything from the rise of APTs, nation-state hackers, and hacktivists, to data breaches and the increase in government regulation to better protect citizens and hold businesses to account. Dan is based in the U.K., and when not working, he spends his time stopping his cats from walking over his keyboard and worrying about the (Tottenham) Spurs' next match.
