

CISOs Lack Skills and Tools to Secure AI Systems, Report Finds


A majority of security leaders are struggling to defend artificial intelligence systems with tools and skills that are not fit for the challenge, according to a new industry report. The findings point to a widening security gap as AI adoption accelerates across business sectors.

The “AI and Adversarial Testing Benchmark Report 2026,” from security testing firm Pentera, is based on a survey of 300 chief information security officers and senior security leaders in the United States. The report examines how organizations are securing AI infrastructure and highlights critical gaps tied to skills shortages and inadequate tooling.

Key Findings on AI Security Preparedness

The report indicates that most organizations have rapidly integrated AI into core business functions, but their security postures have not evolved at a similar pace. Security teams are reportedly attempting to protect complex AI models and data pipelines using traditional cybersecurity frameworks designed for conventional IT infrastructure.

This mismatch leaves AI systems vulnerable to novel attack vectors, including data poisoning, model theft, and adversarial machine learning attacks that can manipulate AI behavior. The survey data suggests that a significant skills deficit is a primary contributor to this vulnerability, with few security professionals possessing specialized training in AI security principles.
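To illustrate one of these attack vectors, the sketch below (illustrative only, not drawn from the report) shows a data-poisoning attack against a toy nearest-centroid classifier: an attacker who injects mislabeled records into the training set can drag the learned class centroid and shift the decision boundary, degrading the model without touching any conventional IT asset.

```python
# Illustrative sketch (not from the report): data poisoning against a toy
# nearest-centroid classifier, using only the Python standard library.
import random

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    # data: list of ((x, y), label) pairs with labels 0 or 1
    c0 = centroid([p for p, lbl in data if lbl == 0])
    c1 = centroid([p for p, lbl in data if lbl == 1])
    return c0, c1

def predict(model, p):
    c0, c1 = model
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

def accuracy(model, data):
    return sum(predict(model, p) == lbl for p, lbl in data) / len(data)

def cloud(cx, cy, lbl, n):
    # A Gaussian cluster of labeled 2D points around (cx, cy).
    return [((random.gauss(cx, 1), random.gauss(cy, 1)), lbl) for _ in range(n)]

random.seed(0)
clean = cloud(0, 0, 0, 100) + cloud(4, 4, 1, 100)
test_set = cloud(0, 0, 0, 50) + cloud(4, 4, 1, 50)

# Poisoning: the attacker injects records from class 1's region of the
# input space, mislabeled as class 0, pulling centroid 0 toward class 1.
poisoned = clean + [((4.0, 4.0), 0)] * 300

print("clean accuracy:   ", accuracy(train(clean), test_set))
print("poisoned accuracy:", accuracy(train(poisoned), test_set))
```

The poisoned boundary drifts toward the class-1 cluster, so test points from that class start being misclassified. Real attacks are subtler, but the mechanism, corrupting what the model learns rather than the system it runs on, is the same.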

The Tooling and Expertise Gap

According to the report, the tools commonly used for network and endpoint security are often ineffective for monitoring AI-specific risks. For instance, traditional security information and event management systems may not detect anomalies in model training data or identify subtle manipulations of a model’s output.
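To make that gap concrete, here is a minimal sketch (illustrative only, not a real SIEM capability) of the kind of data-level check an AI pipeline needs: flagging training records that drift far from a batch's typical values, using a median-based score that stays robust even when the outliers distort the batch statistics.

```python
# Illustrative sketch (not a real SIEM rule): flag training records whose
# modified z-score (median-based, robust to the outliers themselves)
# exceeds a threshold -- a data-level signal that conventional network
# and endpoint monitoring does not inspect.
import statistics

def flag_outliers(values, threshold=3.5):
    """Return indices of values far from the batch median, measured in
    units of the median absolute deviation (MAD)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

batch = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 42.0]  # one suspicious record
print(flag_outliers(batch))  # index of the anomalous training record
```

The median-based score is used here because a single extreme value inflates the ordinary mean and standard deviation enough to hide itself; a mean-based z-score check on this same batch would miss the anomaly.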

Furthermore, the survey reveals that many security leaders feel their teams lack the necessary expertise to conduct adversarial testing specifically designed for AI. This type of testing involves simulating attacks to find weaknesses in AI models before malicious actors can exploit them. Without this capability, organizations cannot accurately assess the robustness of their AI deployments.
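In its simplest form, that kind of adversarial test can be sketched as a black-box probe (a toy illustration, not Pentera's methodology): search small perturbations of an input for one that flips the model's prediction, which reveals how close a given input sits to the decision boundary.

```python
# Illustrative sketch (not Pentera's methodology): a black-box robustness
# probe that scans small input perturbations for one that flips a model's
# prediction.
import itertools

def simple_model(x, y):
    # Toy stand-in for a deployed model: classifies by a linear boundary.
    return 1 if x + y > 1.0 else 0

def find_adversarial(model, x, y, eps=0.3, steps=7):
    """Scan a grid of perturbations within +/-eps of (x, y) and return
    the first perturbed input the model classifies differently."""
    base = model(x, y)
    deltas = [-eps + 2 * eps * i / (steps - 1) for i in range(steps)]
    for dx, dy in itertools.product(deltas, deltas):
        if model(x + dx, y + dy) != base:
            return (x + dx, y + dy)
    return None  # no flip found within the budget: input looks robust

# A point near the decision boundary flips under a tiny perturbation...
print(find_adversarial(simple_model, 0.4, 0.4))
# ...while a point far from the boundary does not.
print(find_adversarial(simple_model, -2.0, -2.0))
```

Production-grade adversarial testing uses far more sophisticated search strategies, but the goal is the one the report describes: find the weakness before a malicious actor does.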

Broader Implications for Enterprise Security

The gap between AI adoption and AI security has broad implications for data privacy, operational integrity, and regulatory compliance. As AI systems are entrusted with more sensitive decisions involving customer data, financial analysis, and critical operations, securing them becomes a paramount concern.

Industry experts not involved with the report have previously noted that securing AI requires a different approach than traditional software. It involves securing the entire AI lifecycle: the training data, the machine learning models during development and deployment, and the ongoing inputs and outputs during operation.

The Pentera report serves as a benchmark, quantifying a concern that has been discussed within cybersecurity circles for several years. It provides concrete data pointing to a widespread lack of preparedness at the leadership level.

Looking Ahead: Next Steps for Security Leaders

Based on the report’s analysis, the path forward for organizations involves strategic investment in both people and technology. Security teams will need access to specialized training programs focused on AI and machine learning security threats and defenses.

Concurrently, the cybersecurity market is expected to respond with a new generation of tools built to address AI-specific vulnerabilities. These may include platforms for continuous monitoring of model behavior, tools for securing AI supply chains, and standardized frameworks for adversarial robustness testing. Industry groups and regulators are also likely to develop more formal guidelines and standards for AI security in the coming years as the technology’s integration deepens.

Source: Pentera AI and Adversarial Testing Benchmark Report 2026
