Large Language Models (LLMs) have the potential to transform public international lawyering. ChatGPT and similar LLMs can do so in at least five ways: (i) helping to identify the contents of international law; (ii) interpreting existing international law; (iii) formulating and drafting proposals for new legal instruments or negotiating positions; (iv) assessing the international legality of specific acts; and (v) collating and distilling large datasets for international courts, tribunals, and treaty bodies.

During this lecture, Professor Duncan Hollis (Laura H. Carnell Professor of Law at Temple Law School and co-faculty director of Temple's Institute for Law, Innovation & Technology (iLIT)) will present his article, which uses two case studies to show how LLMs may work in international legal practice. First, it uses LLMs to identify whether particular behavioral expectations rise to the level of customary international law. In doing so, it tests LLMs' ability to identify persistent objectors and a more egalitarian collection of state practice, as well as their proclivity to produce orthogonal or inaccurate answers. Second, it explores how LLMs perform in producing draft treaty texts, ranging from a U.S.-China extradition treaty to a treaty banning the use of artificial intelligence in nuclear command and control systems. Based on his analysis of the five potential functions and the two more detailed case studies, the article identifies four roles for LLMs in international law: as collaborator, confounder, creator, or corruptor.