Cybercriminals leveraged AI throughout 2024 to mount more sophisticated, faster, and larger-scale attacks, making them harder to detect and defend against. AI tools are now routinely used in phishing, fraud, and social engineering campaigns.
Deepfakes, particularly video and audio, are increasingly being used for malicious purposes. Criminals have successfully used them to impersonate executives on video conferences: in one case, attackers deepfaked the CFO of a Hong Kong engineering firm and tricked an employee into wiring millions of dollars.
As generative AI technologies improve, deepfake attacks are expected to grow in both frequency and credibility. Audio deepfakes, which enable highly accurate voice cloning, will also see increased use in cyberattacks.
Cybercriminals are operating more rapidly and on a larger scale, with attackers often outpacing the capabilities of law enforcement. This increase in speed makes it harder to prevent and mitigate cyberthreats.
Regulatory frameworks in many regions, including APAC, are lagging behind in addressing emerging threats like AI-driven fraud and deepfakes. Governments need to adapt to these evolving challenges and create stronger cybersecurity regulations.
In 2025, businesses are expected to adopt integrated security platforms that combine various layers of security — such as network, endpoint, and cloud defenses. This approach aims to simplify cybersecurity management, especially in light of the shortage of skilled professionals.
Deepfake attacks will become more widespread in 2025, particularly in the APAC region. These attacks will be used not only in financial fraud but also as part of larger, multi-layered cyberattacks.
Criminals will use increasingly credible generative AI technology to launch these attacks, with audio deepfakes used for voice cloning and impersonation. Organizations must be aware of this growing threat and take proactive measures to identify and mitigate deepfake-based attacks.
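One commonly recommended proactive measure is out-of-band verification of high-risk requests, such as wire transfers, using a one-time phrase that a voice or video clone cannot know. The sketch below is illustrative only (the word list and function name are hypothetical, not from any specific product):

```python
import secrets

# Illustrative word list; a real deployment would use a much larger vocabulary.
WORDS = ["harbor", "violet", "anchor", "meadow", "copper",
         "falcon", "timber", "lantern", "orchid", "granite"]

def verification_phrase(n_words: int = 3) -> str:
    """Generate a one-time phrase to be confirmed over a separate, trusted channel.

    Before acting on a payment request made by video or voice call, the
    employee reads the phrase back over a known-good channel (e.g. a phone
    number from the company directory). A deepfaked caller who only controls
    the original call cannot reproduce it.
    """
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(verification_phrase())
```

The `secrets` module is used rather than `random` because the phrase must be unpredictable to an attacker; the key control, though, is that confirmation happens on a channel the attacker does not control.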
Quantum defenses
While quantum computing is not yet capable of breaking widely used encryption methods, nation-state-backed actors are expected to focus on “harvest now, decrypt later” strategies. These actors will target highly classified data with the intent to decrypt it when quantum computing technology becomes advanced enough.
Organizations must begin transitioning to quantum-resistant defenses, such as post-quantum cryptography, to protect sensitive data for the long term. As quantum computing progresses, businesses involved in its development may also face corporate espionage attacks.
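A practical first step in that transition is a cryptographic inventory: cataloguing which algorithms in use would fall to a future quantum computer (RSA, ECDSA, and ECDH are broken by Shor's algorithm; 128-bit symmetric keys are weakened by Grover's). A minimal triage helper is sketched below; the table entries and function name are illustrative, not an exhaustive or authoritative classification:

```python
# Rough triage of algorithms by quantum risk. The mapping reflects the
# standard analysis: Shor's algorithm breaks RSA/ECC key exchange and
# signatures; Grover's algorithm halves effective symmetric key strength.
QUANTUM_RISK = {
    "RSA-2048":   "broken by Shor -- migrate (e.g. to ML-KEM / ML-DSA)",
    "ECDSA-P256": "broken by Shor -- migrate",
    "ECDH-P256":  "broken by Shor -- migrate; harvest-now/decrypt-later risk",
    "AES-128":    "weakened by Grover -- prefer AES-256",
    "AES-256":    "considered quantum-safe",
    "SHA-256":    "considered quantum-safe for hashing",
}

def quantum_risk(algorithm: str) -> str:
    """Return a risk note for an algorithm name, defaulting to manual review."""
    return QUANTUM_RISK.get(algorithm, "unknown -- review manually")

# Example: triage the algorithms found in a (hypothetical) TLS config audit.
inventory = ["ECDH-P256", "AES-128", "SHA-256"]
for alg in inventory:
    print(f"{alg}: {quantum_risk(alg)}")
```

Key-establishment algorithms deserve the earliest attention, since data encrypted under them today is precisely what "harvest now, decrypt later" adversaries are collecting.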
Next year, regulators in APAC will continue to emphasize AI ethics, data protection, and transparency. The growing use of AI models will prompt increased scrutiny of their security, data integrity, and decision-making processes. Businesses will need to prioritize transparency about how AI models function, including their data collection methods, training datasets, and decision-making procedures, to build and maintain customer trust.
Organizations will place more emphasis on product integrity and securing supply chains in 2025. Comprehensive risk assessments, accountability for business outages, and more stringent insurance arrangements will become critical as part of overall cybersecurity efforts.