As Large Language Models (LLMs) become increasingly sophisticated, emerging use cases threaten professions that have so far escaped automation, including psychotherapy, social services, and legal counsel. Concerns about the impact of LLMs on these professions are compounded by how poorly the benefits and risks of this potential change are understood. This project will examine the legal soundness and social acceptability of embedding LLMs in the workflow of legal professionals. We will do so through a series of experiments that test the trustworthiness of LLM-generated legal advice, aiming to improve understanding of how to make generative AI more responsible.
This project aims to investigate the acceptance and reliability of LLM use in low-stakes legal contexts. Specifically, we will (1) assess the legal soundness of advice generated by an LLM; (2) run a series of experiments on laypeople's and legal experts' trust in, and reliance on, that advice; (3) engage with legal experts to understand their perception of the LLM and the advice it produces; and (4) develop recommendations for legal experts' use of LLMs.