In a new report from the Royal Institute of International Affairs, Kathleen McKendrick discusses the potential contributions of artificial intelligence (AI) to counterterrorism operations. In “Artificial Intelligence Prediction and Counterterrorism”, the author examines how artificial intelligence is already used, how it could potentially be used, and the elements of a framework that supports security, privacy, and human rights considerations.
Artificial intelligence can use aggregated data from communications metadata, financial transaction records, travel patterns, and internet browsing and social media activity to generate predictions. While “uses of AI in counterterrorism centre on generating accurate predictions that help direct resources for countering terrorism more effectively”, risks remain in the form of inherent or learned biases and a sheer lack of public oversight, due in part to security concerns. Yet, from a counterterrorism perspective, artificial intelligence could enable analysis of far larger volumes of data, and perhaps reveal previously unknown or unrecognized patterns. “The impact of this is that traditional methods of investigation that work outwards from known suspects may be supplemented by methods that analyse the activity of a broad section of an entire population to identify previously unknown threats.” McKendrick also argues that using AI’s predictive capability to apply preventive measures more selectively would not only minimize the effects of such measures on the population as a whole, but would also improve the allocation of resources toward perceived targets.
However, several moral, ethical, and practical pillars serve as obstacles to the full application of AI within counterterrorism operations. The author cites the following as some of the major factors:
- Lack of well-established norms for the use of AI technology
- Inherent disproportionality
- An expanding, but weakly regulated, private sector role
- Lack of redress
- The ability of AI to achieve adequate predictive value
- Access to data
- Performance in adversarial environments
Despite this substantial list of genuine challenges, McKendrick argues that comparable opportunity exists under the right circumstances. She contends that predictive capability is neither inherently good nor bad. “The infringement of privacy associated with the automated analysis of certain types of public data is not wrong in principle, but the analysis must be conducted within a robust legal and policy framework that places sensible limitations on interventions based on its results.” At present, however, the situation is something of a no man’s land. “The current constructs that regulate the use of predictive AI in countering terrorism seem unlikely to either safeguard against misuse or to enable the most beneficial use of these technologies, both in terms of operational performance and adherence to human rights principles.”
Author: Kendall Scherr