Large Language Models and Intelligence Analysis

Abstract

Keywords: AI, language models, cybersecurity

This article explores recent progress in large language models (LLMs), their main limitations and security risks, and their potential applications within the intelligence community.

While LLMs can now complete many complex text-based tasks rapidly and effectively, they cannot be trusted to be consistently correct. This has important implications for national security applications and for our ability to provide well-considered and trusted insights.

This article assesses these opportunities and risks before providing recommendations on where improvements to LLMs are most needed to make them safe and effective for use within the intelligence community. Assessing LLMs against the three criteria of helpfulness, honesty and harmlessness provides a useful framework for illustrating where closer alignment is required between LLMs and their users.

Type of Publication
Articles and Papers
Theme
Artificial Intelligence
Publisher
Centre for Emerging Technology and Security
Publication Date
2023
Language
English
Status
Open Access
Authors
Adam C; Dr Richard Carter