Cybersecurity Providers Should Plan for Portfolio Expansion to Counter Future Attacks on AI

R. Muru

Summary Bullets:

• Cybersecurity providers will be expected to drive thought leadership in various regulatory bodies to address future cyberattacks on AI.

• Innovation in R&D will entail encryption and validation of AI/ML models through XDR, as well as closer integration of AI/ML models with SIEM and SOAR.

AI will Demonstrate Strong Growth Across Multiple Consumer and Vertical Settings
It’s pleasing to see artificial intelligence (AI) finally overcome many historic challenges around computing power and commercial implementation. Recent advances in algorithms (e.g., Google’s AlphaGo, OpenAI’s GPT-3), combined with increasing computing power, have accelerated AI across a number of potential applications and use cases. Use cases span automotive (e.g., computer vision and conversational platforms), consumer electronics (e.g., virtual assistants and authentication via facial recognition, such as Apple’s Face ID), and ecommerce and retail (e.g., voice-enabled shopping assistants and personalized shopping). Accordingly, based on GlobalData forecasts, the total AI market (including software, hardware, and services) is demonstrating strong growth and will be worth $383.3 billion in 2030, having grown at a compound annual growth rate (CAGR) of 21.4% from $81.3 billion in 2022. As a result, AI use cases will proliferate across a wide range of consumer and business settings.

Current Narrative of AI/ML in Cybersecurity
Discussion of AI within cybersecurity largely centers on how AI can be utilized to increase cyber resiliency, simplify processes, and perform human functions. Together with automation and analytics, AI enables managed security providers to ingest data from multiple feeds, react more quickly to real threats, and apply automation to incident response more broadly. AI is also seen as a long-term answer to the cybersecurity resourcing problem, and in the short term as a stop gap that streamlines human functions across security operations centers (SOCs). Examples include extended detection and response (XDR) components that use AI to detect sophisticated threats, and security orchestration, automation, and response (SOAR) platforms that utilize machine learning (ML) to provide incident-handling guidance based on past actions and historical data.
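To make the triage pattern described above concrete, here is a minimal, hypothetical sketch (not any vendor's implementation) of how a SOAR-style playbook might score an incoming event rate against a statistical baseline and escalate only anomalous activity to a human analyst:

```python
import statistics

def score_event_rate(history, current):
    """Return a z-score for the current event rate vs. the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return 0.0
    return (current - mean) / stdev

def triage(history, current, threshold=3.0):
    """Escalate to a human analyst only when the rate is a statistical outlier."""
    z = score_event_rate(history, current)
    return "escalate" if z > threshold else "auto-close"

# Typical login-failure counts per minute, then a burst suggesting brute force.
baseline = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
print(triage(baseline, 5))   # ordinary rate, handled automatically
print(triage(baseline, 40))  # suspicious spike, routed to an analyst
```

Production systems use far richer models, but the division of labor is the same: the model filters the bulk of events so scarce SOC staff see only the outliers.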

Future Uptake of AI will Increase AI-Related Cyberattacks

On the flip side, the increased use of AI in all applications (including cybersecurity) raises the likelihood of attacks on the AI/ML models themselves across varied systems, devices, and applications. In particular, adversarial attacks could cause models to misinterpret inputs. There are many use cases in which this could occur; one example is Apple’s Face ID, which uses neural networks to recognize faces and could be attacked through the AI models themselves to bypass the security layers. Cybersecurity products that implement AI are also targets, because AI in cybersecurity entails acquiring data sets over time, and those data sets are vulnerable to attack. Other examples include algorithm theft in autonomous vehicles, attacks on predictive maintenance algorithms in sectors like oil & gas and utilities (which could be state-sponsored), identification breaches in video surveillance, and medical misdiagnosis in healthcare.
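To illustrate why adversarial attacks are possible at all, the toy sketch below applies a fast-gradient-sign-style perturbation to a deliberately simplified linear classifier (all weights and inputs are made up for illustration; real attacks target deep networks, but the mechanism, nudging each input feature slightly against the decision boundary, is the same):

```python
def predict(weights, x, bias=0.0):
    """Toy linear classifier: a positive score means class 'authorized'."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def fgsm_perturb(weights, x, epsilon):
    """Fast-gradient-sign-style attack: shift each feature against the
    decision, bounded by epsilon per feature so the change stays small."""
    sign = lambda w: 1.0 if w > 0 else -1.0
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.7]
x = [0.5, 0.2, 0.4]              # legitimate input, scores positive
print(predict(weights, x))
x_adv = fgsm_perturb(weights, x, epsilon=0.4)
print(predict(weights, x_adv))   # bounded perturbation flips the decision
```

A perturbation imperceptible at the feature level flips the classification, which is exactly the failure mode that makes model hardening and input validation part of the defensive portfolio discussed below.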

Recommendations: Cybersecurity Provider Plan of Action
The discussion of countering attacks on AI will gain momentum through 2025 as AI use cases increase. Regulations around AI security, and the frameworks put in place to address cyberattacks on AI, will also drive momentum. At a vertical level, for example, The European Telecommunications Standards Institute (ETSI) Industry Specification Group for Telecoms is focusing both on utilizing AI to enhance security and on securing AI against attacks. The financial sector as a whole is in its infancy in setting and implementing AI regulatory frameworks, though there have been developments in Europe: the European Commission has published a comprehensive set of proposals for The AI Act. However, its security component is limited.

Moving forward, in the next year, GlobalData will be advising security providers on portfolio expansion in addressing AI-related cyberattacks. Below is a summary of areas for security providers to further consider:
– Cyberattacks deploying AI and cyberattacks on AI will gain momentum as AI is implemented globally. A prerequisite for service providers in addressing this will be to drive participation and thought leadership in various regulatory bodies, of which some will be vertically aligned – e.g., The ETSI Industry Specification Group for Telecoms and the UNECE WP.29 cybersecurity regulation in automotive.
– Consider innovation in R&D to develop encryption and validation of AI/ML models through XDR and closer integration of AI/ML models to security information and event management (SIEM) and SOAR.
– Consultancy services targeted at enterprise customers to assess their AI cybersecurity posture (particularly the risk management profile of AI) through audits will come into play. As the market matures, providers should also consider partnerships with innovative players delivering cybersecurity managed services that specifically address AI.
