ROBERT BUCKLAND LECTURE TO SWINDON PHILOSOPHICAL SOCIETY, 21ST APRIL 2023.
(This is an adaptation of a lecture delivered to the University of Worcester Law School in March 2023).
- Those reading the title of my lecture today will recognise the “Star Wars” reference, but I am going to risk disappointing some of you by not venturing further into it. Instead, I adopt the words of the creator of another science-fiction world, Douglas Adams, who once wrote: “I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.”
- Many of us who have been in and around the law, and indeed around life more generally, for a long time will recognise this generational divide. In particular, the practice of law has in many ways been highly resistant to change. This is not merely because of the forces of reaction. There are very good reasons why lawyers should be suspicious of change, particularly the danger of change for change’s sake. The importance of legal memory, precedent and tradition should not be underestimated. On the contrary, it should be celebrated. With continuity comes certainty, and certainty is good for the rule of law. There is something profoundly reassuring about a system and its practitioners that remain constant, solid and unchanging, whilst the world around them changes at an ever-quickening pace.
- But this is not the full picture. Laws, and their practice, that do not change at all will ossify and become irrelevant, illogical or inimical to the society they should be designed to serve. Lawyers, therefore, should not be frightened of change, and should embrace it whenever the interests of justice are served. Nowhere is this change more marked than in the field of Artificial or Machine Intelligence, which is increasingly grabbing the attention of policymakers and practitioners as the pace of change quickens. AI is already a fact of life in many fields of activity, and justice is certainly one of them.
- The use of AI in a court judgment has finally happened, at least publicly, for the first time. A judge has included conversations he had with the AI tool, ChatGPT, in a case judgment that he delivered earlier this year. Juan Manuel Padilla, a judge in Cartagena, Colombia, handed down a ruling in a case involving a dispute about insurance cover for the full costs of medical treatment of an autistic child. In particular, the Judge said that he had asked ChatGPT whether autistic minors are exempt from paying medical fees for their treatments. This obviously raises the question: why didn’t the Judge have access to the primary sources, namely the governing regulations themselves? In justifying his approach, the Judge told the media that ChatGPT performed services previously provided by a secretary, and that its use saved valuable time. He did not see such programs, however, as a replacement for judges, saying “we do not stop being judges, thinking beings”. Amen to that.
- 30 years ago, when I was a Law student, I worked in the Library of Allen & Overy, one of our more well-known London firms. It was an enjoyable analogue existence, making sure that the well of knowledge available to the firm and its members was replenished on a daily basis. Paper was everything, with a bit of microfiche thrown in. Fast forward to 2023, and the same firm has now engaged Harvey, an AI chatbot, to help its teams draft legal documents. Harvey is built on the underlying GPT technology created by OpenAI and has been developed by a start-up company. We are told that it won’t replace any workforce and will not reduce billable hours. Well, not yet, anyway.
- Allen & Overy say that its use will be supervised by licensed legal professionals, who will have to check its work as it still experiences “hallucinations”. Harvey will come up with a basic draft document, to be used as a starting point. Whether this will be particularly useful is questionable, though, as anything it produces will still need to be rigorously checked. For those of us who are admirers of the great Hollywood actor Jimmy Stewart, use of the word “Harvey” conjures up images of the six-foot imaginary white rabbit that constantly accompanied his character, a wealthy drunk called Elwood P Dowd, in the 1946 film “Harvey”. Truth, it would seem, is now stranger than fiction. I wonder whether the namers of this bot had seen the film. I rather hope that they had.
- The dilemma that this case illustrates is one that is going to be an increasingly familiar one to judges and policy makers across the globe. I say “going to be”. The reality is that it is here now, and frankly my worry is that we aren’t doing nearly enough thinking about its implications for the very concept of justice itself.
- I have direct experience of change. From 2019 to 2021, when I was Lord Chancellor and Secretary of State for Justice, I had joint responsibility for the Courts and Tribunals of England and Wales with the Lord Chief Justice. During that time, I had responsibility for the rollout of more and more remote technology in courts to deal with the effects of the Covid pandemic. I oversaw a rapid scaling-up of remote hearings, and the development of new software to improve the overall experience of judges, advocates and court users. I also engaged with the development of legal services, and in particular the creation of a digital services hub based in the private sector, indeed at Allen & Overy, but with some government funding to develop new ways of using AI to gather material and to help in the preparation of cases. Internationally, I engaged with Justice Ministers in many other jurisdictions on issues such as remote technology and AI.
- Internationally, the Covid pandemic meant that the pace of the move to online and digital justice processes quickened dramatically in most countries. Change is not confined to state actors. The use of non-court based online dispute resolution mechanisms, such as the eBay resolution system, is increasing exponentially. China has scaled up its use of AI courts, where judges have been replaced by algorithms in a drive, according to the Chinese authorities, for greater consistency of outcomes. More on that a little later. In other countries, the move to online platforms is being driven not by a specific plan, but by case backlog pressures. Several questions arise. Firstly, what are the actual concerns about digital and AI justice? On the one hand, they should be able to deliver justice at greater speed and with more consistency. On the other, the lack of a human element in decision-making can endanger the wider equity or social considerations that often form part of the judicial process. Secondly, is the lack of a systemic approach to digital justice something we should be concerned about, or is this just another stage in the complex evolution of individual jurisdictions that is best dealt with at national level? Finally, if it is a matter of concern meriting intervention, what should governments and international organisations do to agree parameters and common standards that not only uphold rule of law norms, but support and enhance trade and investment?
- The move to digital, online and AI processes in justice and dispute resolution systems across the world prior to 2020 might have been piecemeal and variable, but it was inexorable. Covid-19 has had the effect of turbo-charging the pace of change. For example, in our jurisdiction of England and Wales, before the pandemic only about 500 court hearings per week were conducted remotely or by phone. As the pandemic took hold, this number dramatically increased to over 20,000 hearings per week. Other jurisdictions adopted similar approaches, as a direct response to the partial or complete closure of court centres and the restriction on in-person hearings.
- The questions that I think are worth asking are these: what have particular jurisdictions learned from the move to online hearings and the increased use of AI? Is it in fact the case that online hearings speed things up, or are cases that could have been settled in person litigated upon fully instead, expending more time and cost? Is the very essence of the justice experience itself restricted or diminished, both for judges and court users, or can we rely on the good sense of judges to use remote processes where appropriate, rather than as the default? Whilst remote processes and AI might increase speed and consistency, is there not a danger that they reinforce and embed injustice if the standards they apply are not the product of an independent legal system that fully adheres to international rule of law standards?
- Some of these issues are logistical and will be cured by improved technology, but how is the human element of solemn court proceedings replicated online? The overriding aim for the court must be the achievement of best evidence, which is a familiar concept to those of us used to dealing with vulnerable or child witnesses, for example.
- When it comes to court processes themselves, jurisdictions such as England and Wales already offer an online money claim facility, and the aim is to develop this further in other parts of civil justice, as well as to refer more litigants to alternative forms of dispute resolution. In criminal law, minor offences involving a financial penalty where there is an admission are now dealt with online, and the direction of travel has been to add more types of case to this process. In recent weeks, there has been great controversy about the automatic processing of magistrates’ courts warrants allowing energy companies to enter the homes of customers who are struggling with bills and to install pre-payment energy devices, a practice that is deeply controversial at a time of rising gas and electricity costs. The furore has resulted in a change of judicial guidance. Is digitalisation providing a fairer set of outcomes?
- The pressures of volume and court capacity were thrown into stark relief when an already-burdened Crown Court lost two months of jury trials due to the Covid lockdown in 2020. The UK government has legislated in the Police, Crime, Sentencing and Courts Act 2022 to allow the future use of remote juries in England and Wales, subject to further work and consultation. In Scotland, however, remote juries have been used in a small number of criminal trials using “cinema style” technology. The results were positive, but the cost was immense. Debates about the difference in quality between evidence given by remote witnesses and evidence given live in court will continue, but it is my belief that with a continued improvement in video technology, these concerns will subside.
- Looking more broadly, the use of online and AI mechanisms for dispute resolution is a well-established part of many people’s lives. As I indicated earlier, eBay and other platforms already offer this type of online service, where the parties resolve the issue themselves or via a mediator, as digital transactions have proliferated. Is it in fact the reality that most disputes will never come to a court process to be resolved, and that Alternative Dispute Resolution will be the dominant justice experience for most of us? I can see huge advantages for all of us in terms of speed, cost and reliability. Key questions as to how the algorithms are populated and the factors that determine outcomes, however, remain. It seems to me that if the consumer can make an informed choice about the type of dispute resolution to be used, then many of these concerns can be assuaged. We should not pretend, however, that this type of resolution will have the same properties or even the same qualities as a human determination process. What I think is much more challenging is the presence and rise of AI in our systems without apparent forethought.
- In particular, there is a real problem: the gap between the amount of research being conducted into increasing the capability of AI and the research into alignment between AI and the human mind. In other words, we are dashing ahead with increasing the capacity and power of AI without taking what I regard as essential steps to increase its safety. The race with China seems to outstrip all other considerations. As Ian Hogarth wrote in the FT last week, when the secret Manhattan Project was under way, it had been established before the Trinity test that a nuclear explosion would not ignite all the oxygen in the atmosphere and extinguish life. As things stand, a theoretical well-intentioned international initiative to deacidify the oceans with the use of AI could, if the algorithm isn’t carefully calibrated to avoid unforeseen consequences, lead to such a catastrophe for the human race. It is just this sort of thinking that underlay last month’s letter, with 1,800 signatories, urging a six-month pause in the development of AI capability.
- Coming back to China, an altogether more decisive course has been set by the government. Since 2017, when AI judges were first used in Hangzhou, there has been a big increase in the number of cases resolved without human decision-making. Online finance disputes, intellectual property issues and product liability cases are resolved by AI. Millions of cases per year are now handled in this way. AI is used to sift through cases and to automatically send to appeal any cases that don’t fit the pattern. The Chinese authorities say that the use of AI ensures much greater consistency of approach and higher legal certainty as the AI process will look at previous cases, thereby strengthening a precedent-based approach. The concern remains, however, that wider social issues or matters of equity that a human judge will readily appreciate are then missing from the process. More fundamentally, if the information being processed via an AI algorithm isn’t the product of an independent judicial and legal system, then it reinforces unfairness and injustice.
- In the spirit of my opening remarks, it is worth asking whether these developments are truly without precedent, or yet another stage in the evolution of justice that is best left to individual jurisdictions to work out. The competitive drive to be the “best” or most attractive jurisdiction to invest in can be seen as a driver of excellence and certainty, but as China increasingly asserts the primacy of its own legal system, rather than being content with the use of English contract law for example, it is increasingly risky to rely on competition and economic growth as drivers of a better quality of justice. Just as we can no longer assume that economic growth and higher living standards are drivers of democracy and political accountability, it can be argued that the same has to be true of standards of justice. Is action now necessary to clarify precisely how AI is to be used in legal processes? AI is a growing factor in the preparation of court cases, as part of the fact management and disclosure processes in complex litigation, and it would be useful to examine how these activities are monitored and policed by the professions themselves and by the court dealing with such a case. I don’t have immediate answers to these questions, because my main point now is that asking the questions is what we need to be doing.
- Further, should international agreement be sought as to the parameters of AI in general, or is a more specific justice-based approach preferable? Bearing in mind the political experiences at UN level with Russia and China, would a regional approach be better and more realistic? The Council of Europe continues to actively discuss AI and justice issues; could the European Convention on Human Rights and its Protocols be a place to include AI, for example, or is something bespoke more appropriate? Could the leading Common Law jurisdictions work together at an intergovernmental level or at a sector-to-sector level in order to agree basic standards and boundaries?
- Where, then, should the focus of our thinking be concentrated? A deeper look at what judgment is, in the legal context, is necessary. There are different types of judicial exercise: the task of passing a sentence is very different from the task of assessing the credibility of a witness, for example. The former involves the application of law and guidance, whereas the latter is very much, shall we say, an exercise in human judgement: a value judgement, or a practical judgement, which will vary according to all sorts of mores, as long as an ethical standard is adhered to. I can’t see how an algorithm can be useful in these situations, but algorithms will be useful for some determinative judgements where the law is clear and comprehensive.
- What about the ability of the system to acknowledge and correct error: a sense of humility as to the use of AI, in other words? If that is not part of the process, then we should be worried. AI can, therefore, have a supporting role, which improves consistency. An example of this is use of the polygraph or lie detector. Although the Government has legislated to allow its use in certain processes within the criminal justice system, such as risk assessments when supervising certain types of offender, it is still not to be used as part of the evidence in a trial process. This restriction is not only a measure of the limitations of technology but also a reflection of a wider public perception that will not accept reliance on machines to determine fundamental issues of criminal guilt. That suggests that greater confidence is inspired by recourse to a judge who has the same emotions, frailties and often life experiences as those who are looking to them, rather than decision-making by a machine that has none of those attributes. The recent miscarriages of justice brought about by the defective Horizon IT system in many Post Office fraud prosecutions bring home the danger of seemingly infallible evidence being undermined by fundamental system errors. Even with an AI system that irons out these problems or “hallucinations”, will the public accept it?
- Are absolute consistency and uniformity desirable when it comes to justice? Does the very presence of indeterminacy in the system help drive further research and improvement? Analysis of the reasons for differing decisions, either on a factual or legal basis, enriches our understanding and development of the law. Does AI risk enshrining in the system the very inflexibility that I said at the outset could be a risk? And rather like the difference between an analogue TV picture, which even with poor reception might be fuzzy but recognisable, and a digital picture that either can or cannot be seen, is the AI algorithm vulnerable to error? The human eye that checks an X-ray for cancer is still more reliable than an AI process, for example. Reverting to China, is what goes into the algorithm in the first instance the product of a fully ethical and objective process that reflects true justice, or is it influenced by socio-economic or even political factors?
- But rather than focussing on the technology, let’s define what it is that we want to keep and why. Can we always explain how judgments are reached, and is it even desirable? Lord Mansfield, the great 18th century Lord Chief Justice, was asked for advice by a soldier who was about to depart for a colony in which he would be dispensing justice. Mansfield replied: “Decide promptly, but never give any reasons. Your decisions may be right but your reasons are sure to be wrong.” If decisions are made within an ethical norm, some divergence, difference, variance seems to me to be part of the essential human quality of judgment and decision-making. In making these assessments, however, we must accept that human judges may be affected by their social, economic or political background. This remains a contested area, but even assuming that the 20th century legal philosopher HLA Hart was right about everything, is this more explicable to the public than the inscrutable workings of an algorithm, which constantly develops?
- To sum up, the presence and use of AI in our justice and dispute resolution systems is real, and in many ways will enhance the speed and indeed the quality of justice, as well as access to justice itself. Without serious thought and action by the international community, however, the indefinable human element that is the hallmark of justice might be changed or even lost without us realising it. This is why research into alignment between AI and the human mind has to be scaled up quickly, in order to catch up with and, I believe, overhaul developments in AI capability. Of this I am sure, but I have also asked a lot of questions in my lecture to which I don’t yet have the answers. If we don’t seek to ask or answer them, I believe that AI will turn from hope to threat.