The organisation’s forward-looking threat analysis, combined with frontline incident response insights, highlights how cyber risk, regulation, leadership accountability, and the cyber workforce are set to evolve in 2026.
The AI versus AI cyber arms race
Stefan Tanase, Cyber Intelligence Expert at CSIS Security Group, cautions that 2026 will mark the full operationalisation of agentic AI on both sides of cyber conflict. Attackers are already automating reconnaissance, generating tailored phishing at scale, dynamically modifying malware during live operations, and identifying high-value cloud identities in seconds.
Defenders, meanwhile, are deploying autonomous agents for continuous threat hunting and response. “Security is becoming machine-versus-machine, with humans setting intent and validating outcomes,” Tanase said.
He also expects rapid growth in efforts to secure AI itself. Confidential computing for sensitive workloads, behaviour monitoring, and permission boundaries for AI agents will become essential. At the same time, poisoned datasets, backdoored open-source models, and compromised fine-tuning pipelines are likely to emerge as new high-risk attack vectors.
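To make the idea of permission boundaries for AI agents concrete, the sketch below shows one simple way an agent runtime might gate tool calls against an allowlist before executing them. It is illustrative only; the "finance-assistant" role, tool names, and spending cap are hypothetical and not drawn from CSIS guidance.

```python
# Illustrative only: a minimal allowlist-style permission boundary for an AI agent.
# The role, tool names, and spending cap below are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    role: str
    allowed_tools: set[str] = field(default_factory=set)
    max_payment_eur: float = 0.0  # hypothetical spending cap for this agent

    def authorise(self, tool: str, **kwargs) -> bool:
        """Return True only if the requested tool call stays inside the boundary."""
        if tool not in self.allowed_tools:
            return False
        if tool == "initiate_payment" and kwargs.get("amount_eur", 0) > self.max_payment_eur:
            return False
        return True


policy = AgentPolicy(
    role="finance-assistant",
    allowed_tools={"read_invoice", "draft_email"},  # note: no payment authority granted
    max_payment_eur=0.0,
)

# The agent runtime consults the policy before executing any tool call.
for tool, args in [("read_invoice", {}), ("initiate_payment", {"amount_eur": 5000})]:
    print(tool, "allowed" if policy.authorise(tool, **args) else "blocked")
```

The point of the design is that the agent never decides its own privileges: the boundary lives outside the model, so a manipulated prompt cannot expand what the agent is permitted to do.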
Identity becomes the primary attack surface
Jan Kaastrup, Chief Innovation Officer at CSIS, warns that attackers are moving away from malware-led intrusions and instead abusing stolen credentials, cloud tokens, and SaaS identities. “Attackers no longer need to break in. They simply log in,” he said.
This identity-driven model allows criminals to bypass perimeter defences and compress attack timelines to minutes rather than days, fundamentally changing how organisations must detect and respond.
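As a rough illustration of why detection has to shift toward identity signals rather than malware signatures, the toy heuristic below flags a session token reused from an unfamiliar country shortly after a legitimate sign-in. The field names, thresholds, and example data are hypothetical, not a production detection rule.

```python
# Illustrative only: a toy heuristic for spotting identity abuse, flagging a
# session token reused from a country the identity has never used before,
# within a short window of a legitimate sign-in. Thresholds are hypothetical.
from datetime import datetime, timedelta

# Countries previously seen per identity (hypothetical sample data)
known_locations = {"alice@example.com": {"DK"}}


def is_suspicious(user: str, country: str, last_seen: datetime, now: datetime) -> bool:
    """Flag token use from an unfamiliar country shortly after a legitimate sign-in."""
    new_country = country not in known_locations.get(user, set())
    rapid_reuse = (now - last_seen) < timedelta(minutes=30)
    return new_country and rapid_reuse


t0 = datetime(2026, 1, 15, 9, 0)
print(is_suspicious("alice@example.com", "DK", t0, t0 + timedelta(minutes=5)))  # False
print(is_suspicious("alice@example.com", "RU", t0, t0 + timedelta(minutes=5)))  # True
```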
Ransomware at an industrial scale
Kaastrup predicts that ransomware will operate at full industrial scale in 2026. Intrusion-to-encryption timelines are shrinking as attackers automate every phase of the kill chain.
Modern ransomware groups now resemble mature enterprises, complete with affiliate networks, revenue-sharing schemes, support desks, and negotiation teams. Increasingly, they target shared technologies such as file transfer platforms, identity providers, managed service providers, and widely deployed enterprise software. “One vulnerability today can equal thousands of victims tomorrow,” Kaastrup warned.
Attackers are also deliberately targeting backup systems and even incident response providers to disrupt recovery at the earliest stage.
Deepfakes and synthetic deception
Synthetic media is becoming a standard weapon during live cyber incidents. Kaastrup noted that deepfake audio of executives is already being used to authorise fraudulent payments, while AI-generated video is appearing in real-time social engineering campaigns.
He expects the rise of fully fabricated cyber incidents with forged forensic evidence, planted malware artefacts, manipulated screenshots, and coordinated disinformation campaigns designed to create the appearance of a breach that never occurred. “These fake breaches will be used to damage share prices, disrupt mergers, and destroy trust at the board level,” he said.
Geopolitics and influence operations
Kaastrup also points to the overlap between cybercrime and geopolitics. Russian-linked disruptive operations, Chinese grey-zone activity, and Iranian espionage campaigns are expected to persist. Chinese-speaking cybercrime groups are expanding into global financial crime and ransomware, while hacktivist and extremist campaigns increasingly blur the line between cyberattack, propaganda, and political manipulation.
Attackers continue to abuse open-source ecosystems by inserting malicious code into developer libraries, automation tools, and software updates. Targeting is broadening beyond Windows into macOS, Linux, mobile platforms and cloud-native environments.
Regulation and executive liability
From the regulatory perspective, Dean Cowlishaw, Lead Cyber Threat Analyst at SecAlliance (part of the CSIS Security Group), predicts a sharp escalation in personal accountability. “Regulation is no longer just about organisational compliance. It is increasingly about individual decision-making at the board and executive level,” he said.
Disclosure windows are shrinking, while AI-specific regulation is expanding across jurisdictions. Regulators now demand demonstrable resilience, not just policies. Cowlishaw warned that CISOs and senior executives face rising exposure to fines, sanctions, and even potential criminal liability if cyber risk is mismanaged.
Technologies on the rise and decline
Tanase predicts strong growth in identity-centric architectures, Zero Trust design, extended detection and response platforms, cloud security posture management, and automated incident response. Continuous breach simulation and cyber wargaming will become routine as boards demand proof of readiness.
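To illustrate what routine breach simulation might look like in practice, the bare-bones sketch below runs a set of simulated techniques and records whether each one was detected. The scenario names loosely follow MITRE ATT&CK-style IDs, but the scenarios and the detect() stub are hypothetical placeholders, not a real simulation framework.

```python
# Illustrative only: a bare-bones continuous breach simulation loop that records
# whether each simulated technique was detected. The scenarios and the detect()
# stub are hypothetical; a real setup would query the SIEM/XDR for matching alerts.
import random
from datetime import datetime, timezone

SCENARIOS = ["T1566 phishing lure", "T1078 valid-account login", "T1486 encryption dry run"]


def detect(scenario: str) -> bool:
    """Stand-in for checking whether the detection stack raised an alert for the simulation."""
    return random.random() > 0.3  # placeholder outcome


def run_simulation_round() -> list[dict]:
    results = []
    for scenario in SCENARIOS:
        results.append({
            "scenario": scenario,
            "detected": detect(scenario),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
    return results


# In practice this would run on a schedule and feed board-level readiness reporting.
for record in run_simulation_round():
    print(record)
```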
Tanase added that governments are intensifying pressure for memory-safe software development across critical infrastructure, accelerating the shift toward secure-by-design engineering.
Kaastrup expects a steady decline in perimeter-only models, standalone endpoint tools, signature-based detection, and “checkbox” compliance platforms that fail to reduce risk. Overhyped AI security tools that cannot demonstrate real detection and response value are also likely to fall out of favour.
The cyber workforce redefined
Operationally, Kaastrup predicts that the traditional analyst role will be redefined. Human teams will supervise and steer AI-driven detection and response rather than perform every investigative task manually.
Skills in cross-platform forensics, incident verification, false-flag detection, crisis communications, regulatory reporting, and continuous threat simulation will be in high demand.
Strategically, Cowlishaw sees a continued shift away from degree-only recruitment toward apprenticeships, reskilling programmes, and bootcamps. Organisations are drawing talent from psychology, law, geopolitics, communications, and data science, while globally distributed operations become the norm.
“The organisations that succeed will not be those with the most tools,” Cowlishaw concluded, “but those that align intelligence, automation, leadership, and accountability at machine speed.”