Report says the promise of AI relies on scaling security as edge AI booms


According to new research from PSA Certified, the growth of AI is driving an increased focus on security.


The research found that two-thirds (68%) of technology decision makers are concerned that rapid advances in AI risk outpacing the industry’s ability to secure products, devices and services. The report concludes that acceleration in AI must therefore be matched by equal acceleration in security investment and best practice to ensure trusted AI deployment.

A major factor driving the need for greater AI security is edge technology. By processing, analysing and storing data at the edge of the network, or on the device itself, edge devices offer efficiency, security and privacy advantages over centralised, cloud-based processing.

As a result, 85% of device manufacturers (OEMs), design manufacturers (ODMs), SIPs, software vendors and other technology decision makers believe that security concerns will push more AI use cases to the edge.

But this push for efficiency makes the security of edge devices even more crucial: according to the report, organisations will need to double down on securing their devices and AI models to meet the demands of deploying AI at scale.

Security matters across the supply chain. The survey of 1,260 global technology decision makers found that security has risen as a priority over the last 12 months for nearly three quarters (73%) of respondents, with 69% now placing greater emphasis on security as a result of AI.

However, although AI is catalysing the importance placed on security, the report found an AI-security lag that needs to be closed if AI’s full potential is to be realised.

Only half (50%) of those surveyed believe they are currently investing enough in security, and a significant proportion are neglecting important security foundations, such as security certification, that underpin best practice.

Just over half (54%) currently use externally validated security certifications to improve the security robustness of their products and services, while 51% use threat analysis or threat modelling and 48% use independent third-party testing and evaluation of products. These easy-to-implement security fundamentals should be standard practice as more organisations seek to build consumer trust in AI-driven services.

Commenting on the findings, David Maidment, Senior Director, Market Strategy at Arm (a PSA Certified co-founder), said: “There is an important interconnect between AI and security: one doesn’t scale without the other. While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors. It’s more imperative than ever that those in the connected device ecosystem don’t skip best practice security in the hunt for AI features. The entire value chain needs to take collective responsibility and ensure that consumer trust in AI-driven services is maintained. The good news is that the industry recognises the need to prepare, and the criticality of prioritising security investment to future-proof systems against new attack methods and rising security threats linked to rapid adoption of edge AI.”

With four in five respondents (80%) saying that security built into products is a driver of the bottom line, there’s a commercial as well as a reputational benefit to continued security investment. The same proportion (80%) agree that compliance with security regulation is now a top priority, up six percentage points from the 74% who listed it as a top-three priority in 2023.

With edge AI booming alongside an exponential increase in AI inference, an unprecedented amount of personal data is being processed on billions of individual endpoint devices, each of which needs to be secured.

To secure edge devices and maintain compliance with emerging cybersecurity regulation, stakeholders in the connected device ecosystem must play their part in creating a secure edge AI life cycle, according to the report. This includes the secure deployment of devices and the secure management of the trusted AI models deployed at the edge.

Despite concerns that rapid advances in AI are outpacing the industry’s ability to secure products, devices and services (68%), organisations broadly feel poised to capitalise on the AI opportunity and are buoyant about security’s ability to keep pace.

In fact, 67% believe their organisation is well equipped to manage the potential security risks associated with an upsurge in AI. More decision makers also place importance on increasing the security of their products and services (46%) than on increasing their AI readiness (39%), recognising the importance of scaling security and AI in step.

But with a majority of respondents (78%) also agreeing that they need to do more to prepare for AI, and with concerns about security risks remaining prevalent, security must remain a central pillar of technology strategy. Improving and scaling security in an era of interoperability and edge AI requires established standards, certification and trusted hardware that all businesses can rely on. By embedding security-by-design, organisations can set a benchmark of best practice that will help protect them against risk both today and in the future.

“Those looking to unleash the full potential of AI must ensure they are taking the right steps to mitigate potential security risks,” concluded Maidment. “As stakeholders in the connected device ecosystem rapidly embrace a new set of AI-enabled use cases, it’s crucial that they do not simply forge ahead with AI regardless of security implications. Instead, security must be implemented from the ground up, and mapped across the complete value chain in order to embed best practice at scale and keep pace with evolving security risks.”