ISACA’s State of Cybersecurity 2024 report revealed that artificial intelligence (AI) is predominantly used in cybersecurity for the following top three use cases:
- Automating threat detection/response (28%)
- Endpoint security (27%)
- Automating routine security tasks (24%)
Nevertheless, AI is currently being explored across almost every function within the enterprise. Several factors help explain why these three use cases dominate:
- The rise of sophisticated cyberattacks: The increasing sophistication and frequency of cyberattacks demand advanced and automated solutions for timely detection and response. Traditional security methods often struggle to keep pace with the evolving threat landscape because they rely on predefined signatures to identify threats. AI, on the other hand, can analyze vast amounts of data, identify patterns and detect anomalies that may indicate malicious activity, enabling faster and more effective threat detection and response (a minimal sketch of this idea follows this list). Several companies are already applying AI this way. For instance, CrowdStrike embeds AI in its Falcon platform, which provides endpoint security, threat intelligence and incident response services. The platform's AI engine analyzes billions of security events daily, enabling proactive threat detection and automated response.
- The need for enhanced endpoint protection: The proliferation of endpoints, including laptops, mobile devices and Internet of Things (IoT) devices, has expanded the attack surface for cyber threats. Endpoint security is crucial to safeguarding sensitive data and preventing unauthorized access. AI can play a pivotal role here by continuously monitoring and analyzing endpoint activity, identifying and mitigating potential threats in real time, and providing proactive protection against malware and other attacks. Darktrace, for example, employs AI to detect and respond to cyberattacks in real time. Its Enterprise Immune System learns the “pattern of life” of an organization's IT environment and flags deviations from that pattern that may indicate a threat, allowing it to detect novel and sophisticated attacks that bypass traditional security tools.
- The imperative for efficiency and automation: Cybersecurity teams are often burdened with repetitive, time-consuming tasks, while incident response teams are overburdened with manually triaging potential alerts. AI can automate routine security tasks such as vulnerability scanning, log analysis and incident response, freeing up valuable time for cybersecurity professionals to focus on more strategic initiatives. This automation, driven by machine learning (ML) algorithms, improves efficiency, reduces the risk of human error and enhances the overall security posture. Automatic responses, such as isolating affected systems and applying other remediation actions, can be triggered for certain security incidents without human intervention, and these responses can improve over time as the AI is trained on past incident data (see the second sketch after this list).
- The availability of training data: Within incident handling and response, AI models for threat detection/response and endpoint security can often be trained on existing security logs, malware samples and network traffic data. This abundance of data enables the development of robust and effective AI solutions for these use cases. On the flip side, the lack of sufficient data in other cybersecurity areas may hinder the development of AI applications in those domains, particularly when unauthorized AI systems have been deployed without the cybersecurity team’s knowledge. Without past attack patterns and trends to predict where future attacks may occur or which vulnerabilities could be exploited, cybersecurity teams may be limited to reactive rather than proactive stances.
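To make the anomaly detection idea in the first two items concrete, here is a minimal sketch using an unsupervised isolation forest. The features (bytes transferred, login attempts, distinct ports contacted) and the simulated baseline are illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline telemetry (assumption): columns are bytes transferred,
# login attempts and distinct ports contacted per session.
normal_events = rng.normal(loc=[500.0, 5.0, 3.0], scale=[50.0, 2.0, 1.0],
                           size=(1000, 3))

# Train an unsupervised model on baseline behavior; no attack labels needed.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_events)

# Score new events: a prediction of -1 flags an outlier for analyst review.
new_events = np.array([
    [510.0, 4.0, 3.0],     # close to the learned baseline
    [9000.0, 40.0, 60.0],  # large transfer, many logins and ports
])
print(detector.predict(new_events))  # e.g., [ 1 -1]
```

Because the model learns only what normal looks like, it can flag novel activity that signature-based tools would miss, which is the core idea behind pattern-of-life approaches such as Darktrace's.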
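Building on that, the next sketch combines the last two items: a classifier trained on historical, labeled incident data scores incoming alerts and triggers containment automatically only when confidence is high. The feature layout, the quarantine() helper and the 0.9 threshold are hypothetical placeholders for an organization's own telemetry and endpoint detection and response (EDR) integration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Simulated historical incident data (assumption): feature rows labeled
# benign (0) or malicious (1) by past investigations.
X_history = rng.normal(size=(500, 4))
y_history = (X_history[:, 0] + X_history[:, 3] > 1.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=7)
clf.fit(X_history, y_history)

def quarantine(host: str) -> None:
    """Hypothetical remediation hook, e.g., a call to an EDR isolation API."""
    print(f"Isolating {host} from the network")

# Score an incoming alert; auto-respond only on high-confidence detections.
alert = rng.normal(size=(1, 4))
p_malicious = clf.predict_proba(alert)[0, 1]
if p_malicious > 0.9:
    quarantine("host-1234")
else:
    print(f"Queued for analyst review (score={p_malicious:.2f})")
```

Keeping a human in the loop below the confidence threshold is a common design choice: it captures the efficiency gains of automation while limiting the blast radius of a wrong prediction.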
The Gap in AI Integration: Why Are Security Teams Left Out?
Despite the clear benefits of integrating AI into cybersecurity strategies, there is a concerning trend of cybersecurity teams being excluded from the development, onboarding and implementation of AI solutions. This exclusion raises several questions about the reasons behind this gap and its potential implications for organizational security.
One possible reason for this exclusion is a lack of understanding and awareness among organizational leaders regarding the cybersecurity implications of AI onboarding and implementation, much as cloud technologies such as Software as a Service (SaaS) were once onboarded without security input. Many organizations view AI as a purely technological advancement and fail to recognize its potential impact on cybersecurity. What could possibly go wrong with some generic statements generated by a text-prediction large language model (LLM)? Or an AI-powered customer service agent recommending winter clothes to customers? What about trading systems relying on multiple AI agents to run trading simulations while consuming financial trade data as part of a robo-advisor solution? This lack of awareness has arguably led to a disconnect between AI development teams and cybersecurity teams, resulting in the exclusion of the latter from critical decision-making processes.
Another contributing factor could be the perception that cybersecurity teams lack the skills and expertise to contribute meaningfully to AI development and implementation. This perception may stem from the misconception that AI is solely the domain of data scientists and software engineers. What is most familiar to most cybersecurity professionals is the CIA triad: Confidentiality, Integrity and Availability. This triad is also the foundation referenced by most cybersecurity frameworks, notably ISO/IEC 27001 and the NIST frameworks. New AI frameworks, on the other hand, may not explicitly place the same principles front and center. Instead, the recent ISO/IEC 42001 describes the key benefits of a proper AI management system (AIMS) as Effectiveness, Fairness and Transparency. Yet cybersecurity professionals possess unique insights into the threat landscape and security risks, which are invaluable in developing and implementing secure AI solutions, and which map directly to a core AIMS requirement: performing AI system impact assessments, risk assessments and risk treatments.
At the same time, given the promise of new technologies for increasing employee productivity, organizations have constantly struggled with unsanctioned implementations of these technologies. This can be seen in the risk of shadow IT, which cybersecurity teams still have to manage and which is arguably fueled by the same organizational silos and communication barriers that exclude cybersecurity teams from such deployments. In many organizations, different departments operate independently, with limited communication and collaboration, especially when no specific policy for AI has been implemented. This siloed approach can hinder the integration of cybersecurity considerations into AI development and implementation, leading to security gaps and vulnerabilities.
Bridging the Gap
The exclusion of cybersecurity teams from AI development and implementation poses significant risks to organizational security, including failure to address adversarial attacks on AI, data poisoning and breaches, and model vulnerabilities. To mitigate these risks and ensure the secure and effective integration of AI, it is imperative to raise awareness, bridge the gap and foster collaboration between cybersecurity teams and other departments, such as AI development, product, compliance and even legal teams.
Here are four key recommendations for consideration:
- Involve the cybersecurity team early, with continuous integration: Cybersecurity teams should be involved from the initial stages of AI development, providing input on security requirements, risk assessments and mitigation strategies. This early involvement ensures that security considerations are embedded into the design, integration and development of AI solutions, minimizing the risk of vulnerabilities and security breaches. Organizations should learn from past technologies, such as web application security, which has now matured through OWASP guidance and DevSecOps practices and is typically managed by an enterprise cybersecurity team. Gartner studies anticipate that agentic AI deployments will increase in 2025. Such deployments may rely on readily integrated third-party agents or be developed in-house. Either way, a proper third-party risk management process is required, along with mature development methodologies such as DevSecOps, and both need to be driven by the cybersecurity team.
- A cross-functional collaborative culture: Organizations should foster a culture of collaboration and communication. This can be achieved through regular meetings, joint workshops and shared training programs. Cross-functional collaboration ensures that all teams understand each other's perspectives, share knowledge and work together toward common security goals. Development methodologies such as DevSecOps also need to evolve, with MLOps emerging as the next culture and practice that unifies machine learning application development with systems deployment and operations.
- Upskilling and training: Cybersecurity professionals should be provided with the necessary training and upskilling opportunities to stay abreast of the latest AI developments and security challenges. This will enable them to contribute effectively to AI development and implementation, ensuring that security considerations are addressed throughout the AI lifecycle.
- Proper selection and usage of training data: Organizations should prioritize the proper selection and usage of training data for AI models, including both properly sourced real-world data and the generation of synthetic data (a minimal sketch follows this list). This will ensure the development of robust and effective AI solutions while addressing potential biases and privacy concerns.
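As a simple illustration of the last recommendation, the sketch below augments a small labeled seed set with synthetic samples drawn from a per-feature Gaussian fit. The seed data is simulated, and real programs would lean on richer generators (e.g., SMOTE, generative models or differentially private synthesizers) and validate that the synthetic data preserves utility without leaking sensitive records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small, properly sourced seed set (simulated here): 50 labeled benign rows
# with four numeric features.
seed_benign = rng.normal(loc=0.0, scale=1.0, size=(50, 4))

# Fit a per-feature Gaussian to the seed data, then sample synthetic rows
# from that fit to enlarge the training set.
mu = seed_benign.mean(axis=0)
sigma = seed_benign.std(axis=0)
synthetic_benign = rng.normal(loc=mu, scale=sigma, size=(500, 4))

print(synthetic_benign.shape)  # (500, 4) extra rows to augment training data
```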
By applying the above recommendations, organizations can bridge the gap between AI development and cybersecurity, creating a collaborative culture that ensures AI solutions are developed and implemented securely and effectively. The involvement of cybersecurity teams in AI development is essential for organizations to harness the full potential of AI while mitigating the security risks it poses throughout its lifecycle. In the long run, the goal should remain to maintain the digital trust that organizations have built with their customers.