Cyber
How Cybercriminals Are Weaponizing Artificial Intelligence
Cyber insurance and cyber risk management are more important than ever.
July 1, 2025
Artificial intelligence (AI) has surged in popularity among businesses and individuals in recent years. Such technology encompasses machines, computer systems and other devices that can simulate human intelligence processes. In other words, this technology can perform a variety of cognitive functions typically associated with the human mind, such as observing, learning, reasoning, interacting with its surroundings, problem-solving and engaging in creative activities. Applications of AI technology are widespread, but some of the most common include computer vision solutions (e.g., drones), natural language processing systems (e.g., chatbots), and predictive and prescriptive analytics engines (e.g., mobile applications).
While this technology offers benefits in the realm of cybersecurity—streamlining threat detection capabilities, analyzing vast amounts of data and automating incident response protocols—it is also being weaponized by cybercriminals. These bad actors have begun leveraging AI technology to seek out their targets more easily, launch attacks at greater speeds and in larger volumes, and wreak further havoc amid these attacks.
As such, businesses must understand the cyber risks associated with this technology and implement strategies to minimize these concerns. This post outlines ways cybercriminals can use AI technology and provides best practices to help businesses safeguard themselves against such weaponization.
Ways Cybercriminals Can Leverage AI Technology
AI technology can help cybercriminals conduct a range of damaging activities, including the following:
Creating and distributing malware. In the past, only the most sophisticated cybercriminals could write harmful code and deploy malware attacks. However, AI chatbots can now generate illicit code in seconds, permitting cybercriminals with varying levels of technical expertise to launch malware attacks with ease. Although current AI technology tends to produce relatively basic (and often bug-ridden) code, its capabilities will likely continue to advance over time, posing more substantial cyber threats.
In addition to writing harmful code, some AI tools can also generate deceptive YouTube videos claiming to be tutorials on how to download certain versions of popular software (e.g., Adobe and Autodesk products); these videos direct viewers to malicious download links that install malware on their devices. Cybercriminals may create their own YouTube accounts to disperse these malicious videos or hack into other popular accounts to post such content. To convince targets of these videos’ authenticity, cybercriminals may further utilize AI technology to add fake likes and comments.
Cracking credentials. Many cybercriminals rely on brute-force techniques to reveal targets’ passwords and steal their credentials to then utilize their accounts for fraudulent purposes. Yet, these techniques may vary in effectiveness and efficiency. Cybercriminals can bolster their password-cracking success rates by leveraging AI technology, uncovering targets’ credentials at record speeds.
Deploying social engineering scams. Social engineering consists of cybercriminals using fraudulent forms of communication (e.g., emails, texts and phone calls) to trick targets into unknowingly sharing sensitive information or downloading harmful software. It consistently ranks among the most prevalent cyberattack methods. For example, a multinational company in Hong Kong lost tens of millions of dollars after an employee was tricked into making several financial transactions by AI-generated voices and video during a fake conference call.
Unfortunately, AI technology could cause these scams to become increasingly common by giving cybercriminals the ability to formulate persuasive phishing messages with minimal effort. It could also clean up grammar and spelling errors in human-produced copy to make it appear more convincing.
Identifying digital vulnerabilities. Cybercriminals usually look for software vulnerabilities they can exploit when hacking into targets' networks or systems, such as unpatched code or outdated security programs. While various tools can help identify these vulnerabilities, AI technology could permit cybercriminals to detect a wider range of software flaws, providing additional avenues and entry points for launching attacks.
Reviewing stolen data. Upon stealing sensitive information and confidential records from targets, cybercriminals generally have to sift through this data to determine their next steps—whether it’s selling this information on the dark web, posting it publicly or demanding a ransom payment in exchange for its return or non-disclosure. This can be a tedious process, especially with larger databases. With AI technology, cybercriminals can analyze this data much faster, making quick decisions and speeding up the total time it takes to execute their attacks. In turn, targets will have less time to identify and defend against such attacks.
Tips to Protect Against Weaponized AI Technology
Businesses should consider the following measures to mitigate their risk of experiencing cyberattacks and related losses from weaponized AI technology:
Uphold proper cyber hygiene. Such hygiene refers to habitual practices that encourage the safe handling of critical workplace information and connected devices. These practices can help keep networks and data protected from various AI-driven cyber threats. Here are some key components of cyber hygiene for businesses to keep in mind:
- Require employees to use strong passwords (those containing at least 12 characters and a mix of uppercase and lowercase letters, symbols and numbers) and leverage multifactor authentication across workplace accounts
- Back up essential business data in a separate and secure location (e.g., an external hard drive or the cloud) regularly
- Equip workplace networks and systems with firewalls, antivirus programs and other security software
- Provide employees with routine cybersecurity training to educate them on the latest digital exposures, attack prevention measures and response protocols
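As a rough illustration of the password policy described above, the check below sketches how a business might validate passwords against the 12-character, mixed-character-class requirement. The function name and criteria are illustrative assumptions, not part of any specific product or standard.

```python
import re

def is_strong_password(password: str) -> bool:
    """Illustrative check for the policy above: at least 12 characters
    with uppercase and lowercase letters, digits and symbols."""
    if len(password) < 12:
        return False
    checks = [
        re.search(r"[A-Z]", password),        # at least one uppercase letter
        re.search(r"[a-z]", password),        # at least one lowercase letter
        re.search(r"[0-9]", password),        # at least one digit
        re.search(r"[^A-Za-z0-9]", password), # at least one symbol
    ]
    return all(checks)

print(is_strong_password("Summer2025"))        # → False (too short, no symbol)
print(is_strong_password("C0rrect-Horse-9!"))  # → True (meets all criteria)
```

Note that length requirements and character-class rules are a baseline; pairing them with multifactor authentication, as advised above, provides far stronger protection against AI-accelerated credential cracking.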
Engage in network monitoring. Network monitoring involves utilizing automated threat detection technology to continuously scan a business’s digital ecosystem for possible weaknesses or suspicious activities. Such technology typically sends alerts when security issues arise, allowing businesses to detect and respond to incidents as quickly as possible. Since time is of the essence when it comes to handling AI-related threats, network monitoring is a vital practice.
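To make the idea of automated alerting concrete, here is a minimal sketch of one common monitoring rule: flagging a source address that generates an unusual number of failed logins. The log format, field names and threshold are hypothetical; real deployments would stream events from a SIEM or syslog rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical log lines for illustration only.
LOG_LINES = [
    "2025-07-01T09:00:01 FAILED_LOGIN user=alice src=203.0.113.7",
    "2025-07-01T09:00:02 FAILED_LOGIN user=alice src=203.0.113.7",
    "2025-07-01T09:00:03 FAILED_LOGIN user=alice src=203.0.113.7",
    "2025-07-01T09:00:04 FAILED_LOGIN user=alice src=203.0.113.7",
    "2025-07-01T09:05:00 LOGIN_OK user=bob src=198.51.100.2",
]

FAILED_LOGIN_THRESHOLD = 3  # alert when one source exceeds this many failures

def find_suspicious_sources(lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Return source addresses whose failed-login count exceeds the threshold."""
    failures = Counter(
        line.split("src=")[1] for line in lines if "FAILED_LOGIN" in line
    )
    return [src for src, count in failures.items() if count > threshold]

print(find_suspicious_sources(LOG_LINES))  # → ['203.0.113.7']
```

Commercial monitoring tools apply far more sophisticated detection (behavioral baselines, anomaly scoring), but the core pattern is the same: continuously aggregate events and raise an alert when activity crosses a defined threshold.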
Have a plan. Creating cyber incident response plans can help businesses ensure they have necessary protocols in place when cyberattacks occur, thus keeping related damages at a minimum. These plans should be well-documented and practiced regularly and should address multiple cyberattack scenarios (including those stemming from AI technology).
Purchase cyber insurance. Lastly, businesses should secure adequate cyber insurance to financially safeguard themselves against losses that may arise from the weaponization of AI technology. Businesses should consult trusted insurance professionals to discuss specific coverage needs.
Next Steps
Looking forward, AI technology is likely to contribute to rising cyberattack frequency and severity. By staying informed on the latest AI-related developments and taking steps to protect against its weaponization, businesses can maintain secure operations and minimize associated cyber threats. Contact us today for more risk management guidance.
Related Reading: Corporate Board’s Guide to Generative AI Benefits and Risks
The above information does not constitute advice. Always contact your insurance broker or trusted advisor for insurance-related questions.