The Israeli military reportedly used an artificial intelligence (AI) system called Lavender to identify large numbers of suspected Hamas targets.
A report from The Guardian cites Israeli intelligence sources who described a previously undisclosed AI-powered database used during the military's bombing campaign in Gaza.
The database allegedly identified around 37,000 potential targets with suspected links to Hamas.
Sources also claim that Israeli military authorities permitted significant numbers of Palestinian civilian casualties, particularly in the early stages of the conflict.
The revelation underscores Israel's use of machine learning to identify and engage targets during the conflict, raising legal and ethical questions.
While AI tools can produce results at a speed human analysts cannot match, their use in targeting raises concerns about potential misuse and the broader implications for security and human safety.
One intelligence officer quoted in the report acknowledged that using Lavender saved time.
Recent research has likewise highlighted inherent risks in many AI models, complicating efforts to deploy them safely and responsibly.