- Generative AI presents challenges and opportunities for digital security.
- Attackers can use generative AI to create convincing content for scams like phishing and to modify existing malware so it evades detection.
- Generative AI can aid in big data analysis and automation for defense, but the security community must remain vigilant and study how these systems can be manipulated and exploited.
The RSA security conference in San Francisco has been abuzz with talk about the potential impact of generative AI on digital security and malicious hacking. While chatbots powered by large language models like OpenAI's ChatGPT have made machine-learning development and research more accessible, practical questions remain about how bad actors will manipulate and abuse these tools to develop malware and spread misinformation.
NSA cybersecurity director Rob Joyce warned that generative AI could fuel already effective scams like phishing by quickly producing tailored, convincing communications and materials. Attackers could also use AI chatbots to modify existing malware with small changes that may evade antivirus software and other scanning tools.
Potential for Generative AI to Aid in Big Data Analysis and Automation
While generative AI presents challenges to the security community, it also offers potential benefits. Joyce cited three areas where the technology is “showing real promise” as an “accelerant for defense”: scanning digital logs, finding patterns in vulnerability exploitation, and helping organizations prioritize security issues.
Before defenders, and society more broadly, come to depend on these tools in daily life, the security community must first study how generative AI systems can be manipulated and exploited. Joyce emphasized how unpredictable this moment is for AI and security, and cautioned the community to “buckle up” for what's likely yet to come.