Monday, April 29, 2024

French MoD Challenge: Thales Performs a Successful Sovereign AI Hack and Presents Enhanced Security Solutions for Military and Civil AI

The French Ministry of Defence’s AI security challenge

Participants in the CAID challenge had to perform two tasks:

1. In a given set of images, determine which images were used to train the AI algorithm and which were used only for testing.

An AI-based image recognition application learns from large numbers of training images. By studying the inner workings of the AI model, Thales’ Friendly Hackers team successfully determined which images had been used to create the application (a class of attack known as membership inference), gaining valuable information about the training methods used and the quality of the model.
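
The article does not detail the technique used, but a standard loss-based membership inference attack illustrates the idea: a model tends to assign lower loss to images it was trained on than to images it has never seen. The sketch below assumes a PyTorch classifier; `model`, `images`, `labels` and the decision threshold are illustrative placeholders, not details from the challenge.

```python
import torch
import torch.nn.functional as F

def membership_inference_scores(model, images, labels):
    """Score candidate images by the model's per-sample loss.

    Training members typically incur lower loss than unseen test
    images, so a low score suggests the image was in the training
    set. All inputs here are illustrative placeholders.
    """
    model.eval()
    with torch.no_grad():
        logits = model(images)
        # Per-sample cross-entropy: low loss -> likely a training image.
        losses = F.cross_entropy(logits, labels, reduction="none")
    return losses

# Hypothetical usage: flag candidates whose loss falls below a threshold
# calibrated on data known to be outside the training set.
# is_member = membership_inference_scores(model, images, labels) < threshold
```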

2. Find all the sensitive images of aircraft used by a sovereign AI algorithm that had been protected using “unlearning” techniques.

An “unlearning” technique consists of deleting data used to train a model, such as images, in order to preserve their confidentiality. This technique can be used, for example, to protect the sovereignty of an algorithm in the event of its export, theft or loss. Take the example of an AI-equipped drone: it must be able to recognize any enemy aircraft as a potential threat, whereas aircraft from its own army must first be learned, so they can be identified as friendly, and then erased from the model through unlearning. In this way, even if the drone were stolen or lost, the sensitive aircraft data contained in the AI model could not be extracted for malicious purposes.
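
The article does not specify which unlearning method protected the challenge model. A common baseline from the unlearning literature, sketched below with illustrative names, is gradient ascent on the data to be forgotten: the model is fine-tuned to raise its loss on those samples so it no longer fits them.

```python
import torch
import torch.nn.functional as F

def unlearn_by_gradient_ascent(model, forget_loader, lr=1e-4, epochs=1):
    """Approximate unlearning baseline: push the model's loss *up*
    on the samples to be forgotten so it no longer fits them.

    One common baseline from the unlearning literature; the article
    does not say which technique the challenge actually used.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in forget_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            # Negating the loss turns gradient descent into
            # gradient ascent on the forget set.
            (-loss).backward()
            optimizer.step()
    return model
```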

However, the Friendly Hackers team from Thales managed to re-identify data that was supposed to have been erased from the model, thereby defeating the unlearning process.

Exercises like this help to assess the vulnerability of training data and trained models, which are valuable assets capable of outstanding performance but also new attack vectors for the armed forces. An attack on training data or trained models could have catastrophic consequences in a military context, where this type of information could give an adversary the upper hand. The risks include theft of the model itself, theft of the data used to recognise military hardware or other features in a theatre of operations, and injection of malware or backdoors to impair the operation of the system using the AI. While AI in general, and generative AI in particular, offers significant operational benefits and gives military personnel intensively trained decision-support tools to reduce their cognitive burden, the national defence community needs to address these new threats as a matter of priority.
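
The article does not disclose how the re-identification was performed. One plausible heuristic, sketched below with illustrative names, exploits the fact that approximate unlearning often leaves a statistical signature: the model's loss on forcibly forgotten samples can be anomalously high compared with data it genuinely never saw.

```python
import torch
import torch.nn.functional as F

def reidentify_unlearned(model, images, labels, reference_losses):
    """Flag candidates that were plausibly force-forgotten.

    Heuristic: approximate unlearning can leave a signature, e.g.
    anomalously high loss on the forgotten samples compared with
    data the model genuinely never saw. All names here are
    illustrative; the article does not disclose Thales' method.
    """
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(images), labels, reduction="none")
    # Standardise against losses measured on known held-out data.
    mu, sigma = reference_losses.mean(), reference_losses.std()
    return (losses - mu) / sigma  # large z-scores suggest forced forgetting
```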

The Thales BattleBox approach to tackling AI vulnerabilities

The protection of training data and trained models is critical in the defence sector. AI cybersecurity is becoming increasingly important, and it needs to operate autonomously to counter the many new avenues of attack that the world of AI is opening up to malicious actors. In response to the risks and threats involved in the use of artificial intelligence, Thales has developed a set of countermeasures, known as the BattleBox, to provide enhanced protection against potential breaches.

SOURCE: Businesswire
