Artificial intelligence is more accessible than ever before. The global AI market was valued at around $189 billion in 2023 and is on track to reach a mammoth $4.8 trillion within the next decade. What began as an experiment has become a multibillion-dollar industry and, for many, a daily habit or even a necessity. With tools such as ChatGPT, Google Gemini, Microsoft Copilot and Amazon Alexa, users can work more efficiently, tackle challenging problems in seconds, automate mundane tasks and reinvest the time saved into other activities.
While AI’s many capabilities offer a wealth of opportunity for advancement, the technology needs to be monitored closely. As AI becomes more efficient, accessible and affordable, individuals grow increasingly susceptible to potentially devastating cyberattacks. Bad actors can use AI for a range of scams, most notably phishing.
Phishing is the fraudulent practice of sending emails or other messages purporting to be from a trusted source in order to obtain sensitive information. This form of social engineering preys on the trust users place in the internet and exploits that trust to gain access to private data. The rapid progress of the AI industry has been accompanied by a parallel surge in these cyberattacks. As AI continues to evolve, phishing attacks grow in both quality and quantity. Bad actors can weaponize generative AI to eliminate the telltale signs, such as poor grammar and awkward phrasing, that once made phishing messages easier to spot.
For example, AI can be used to analyze a target’s behaviors and patterns to craft an attack tailored specifically to that person. Since AI entered the mainstream in 2022 with the widespread adoption of ChatGPT, phishing attacks alone have increased by 1,200%. Artificial intelligence’s inevitable continued growth will pressure the cybersecurity industry to advance just as quickly in order to safeguard users’ data privacy and security.
AI’s role in cybersecurity is a striking paradox: it is becoming our best defense against what may be our greatest threat – AI itself. As artificial intelligence is increasingly used to power cyberattacks, security professionals are turning to the same technology to keep pace. The greatest strength of AI lies in its ability to analyze vast amounts of data in milliseconds. Tasks that might take a team of humans hours or days can now be handled in real time. This speed allows AI-enhanced security systems to identify anomalies and potential breaches as they occur – and contain them before they spiral.
Artificial intelligence can also predict, based on historical data, which areas of an organization are most susceptible to a breach. This allows organizations and individuals to take proactive steps to mitigate those risks before an attack occurs. In addition, AI is increasingly effective at combating phishing and spam. By analyzing email patterns and sender behavior, AI tools can identify and block malicious messages before they ever reach the end user.
These are just a few of the ways AI is being deployed to defend against today’s escalating cyber threats. It also helps reduce human error – one of the most common causes of security breaches. However, for AI to be truly effective in a cybersecurity role, human oversight remains necessary. Artificial intelligence systems are still vulnerable to errors, data manipulation and a lack of contextual judgment. While AI offers unmatched speed and scale, human judgment provides the adaptability and accountability machines cannot replicate. As AI continues to advance, the most resilient cybersecurity strategies will be those that combine automation with human insight.
In the years ahead, organizations, governments and individuals must take a balanced approach: embracing AI’s strengths while acknowledging its limits. With the right mix of machine intelligence and human judgment, AI can become a powerful tool for digital security. The goal isn’t to resist AI’s growth, but to guide it responsibly – so it can help defend against the very threats it enables.
Jacob McLean is a third-year law student at Washington University in St. Louis, and Alexandra McKiethen is a third-year law student at Saint Louis University School of Law. Both are currently summer associates with Heyl, Royster, Voelker & Allen.
This article appears in SBJ August 2025.
