The rise of artificial or machine intelligence, for so long a debate about technology and capability, is now becoming the stuff of mainstream politics — and not before time.
As the race to increase capacity hots up, legitimate concerns about the safety of this technology and the relative lack of investment are being raised in increasingly alarmist tones. Yet artificial intelligence is at once a gift and a burden. A gift, because it will transform many processes that are too slow, expensive and laborious. A burden, because it sets us challenges that require us to think fundamentally about human activity and its qualities.
Nowhere is this more apparent than in the field of justice. Digitalisation and AI offer speed, efficiency and cost-effectiveness at a time when, in the aftermath of the coronavirus pandemic, access to justice is proving difficult. Millions of cases can be resolved with the use of AI technology, as is already happening in China and Brazil.
But important questions need to be asked: can the machine ever truly replicate the often very human thought processes that go into decisions on issues such as the credibility of a witness, the granting of bail or the care of an estranged couple’s children?
In countries with strong traditions of the rule of law and democracy, the integrity of the datasets used to populate justice algorithms needs to be strong and transparent. When it comes to China’s use of AI in cases, that transparency, to say the least, is missing.
Through Britain’s membership of the G7, our globally competitive technology businesses and our reputation as a hub for financial and legal services, this country is well placed to play a leading role in instigating the development of international principles in the use of AI in the administration of justice. In doing so we should be looking not only at the preparation and delivery of judgments but also at the tendering of legal advice.
When I trained as a part-time crown court judge, Lord Judge, then the lord chief justice of England and Wales, reminded us at the end of the course at the Judicial College that in our work and our judgments we should not lose sight of our humanity; in other words, our experiences as human beings, as opposed to our training as lawyers. It was a reminder that, although the law is there to be applied, judicial discretion is shaped not only by our legal training and experience but also by our lives as human beings.
Rather than focusing on the technology, we should be looking at human judgment itself. Broadly speaking, there are two types of judgment: practical and reflective. The former is analytical — it centres on the application of universal concepts and is based on hard facts, to which nothing more needs to be added.
However, reflective judgment requires human experience, or “empirical knowledge” based on the reactions and behaviour of others in different situations. For example, in moral dilemmas such as a conflict of loyalty, we should not want complete convergence or uniformity, but should accept a variety of responses within an ethical framework, where each answer or use of judgment will be rooted in the situation that presents itself.
I am researching these issues in my role as a senior fellow at Harvard University. As international leaders openly discuss AI governance, and with London set to host an international summit on artificial intelligence in the autumn, like-minded governments and legal professions should work to agree an international rules-based system, founded on several principles, governing the state's use of AI in the administration of justice:
• AI can be used for legal research, advice, and the preparation of submissions and judgments, but to ensure full transparency there must be disclosure of the nature of its use and of the underlying foundation model used to create the database.
• AI should not be used ultimately to determine issues that require reflective judgment and where the public interest demands human involvement: for example, criminal liability, custodial sentences and family issues, including the care of children.
• If AI is to be used to determine cases, any consent obtained from the parties needs to be informed, as per the first principle.
• Where AI is used to determine cases, any fact-based outputs must be verified.
• If AI is used to determine a case outcome, a right of appeal to a human decision-maker must be available.
Judges and lawyers in those countries should work together to develop agreements and protocols on the use of AI, with a clear outcome in mind: more cases and problems resolved than ever before, for more and more people, while the essential ethics of justice itself are maintained and enhanced.
The UK government’s recent AI white paper sets out general regulatory principles that are in line with the proposals I have outlined. The time to act on the ethics of AI in justice is now. While I am enthused by the potential of AI for legal services and justice, I want it to operate within well-understood ethical boundaries that serve the interests of wider society while protecting the essential human qualities of justice.
Sir Robert Buckland KC is a former lord chancellor and the current Conservative MP for South Swindon