AI and Strategic Decision-Making: Communicating trust and uncertainty in AI-enriched intelligence



#AI #mediatrust #genAI

This CETaS Research Report presents the findings of a project commissioned by the Joint Intelligence Organisation (JIO) and GCHQ on artificial intelligence (AI) and strategic decision-making. The report assesses how AI-enriched intelligence should be communicated to strategic decision-makers in government so that the principles of analytical rigour, transparency, and reliability underpinning intelligence reporting and assessment are upheld. The findings are based on extensive primary research across UK assessment bodies, intelligence agencies, and other government departments.

Intelligence assessment functions face a significant challenge in identifying, processing, and analysing exponentially growing sources and quantities of information. The research found that AI is a valuable analytical tool for all-source intelligence analysts, and that failing to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government. However, the use of AI could both exacerbate known risks in intelligence work, such as bias and uncertainty, and make it more difficult for analysts to evaluate and communicate the limitations of AI-enriched intelligence. A key challenge for the assessment community will be maximising the opportunities and benefits of AI while mitigating these risks.

To embed best practice in communicating AI-enriched intelligence to decision-makers, the report recommends the development of standardised terminology for communicating AI-related uncertainty; new training for intelligence analysts and strategic decision-makers; and an accreditation programme for AI systems used in intelligence analysis and assessment.

Centre for Emerging Technology and Security
Megan Hughes; Richard Carter; Amy Harland; Alexander Babuta