DEVELOPMENTS IN CONSTITUTIONAL AND ADMINISTRATIVE LAW: A RETURN TO NORMALITY OR A NEW NORMAL?
As we gear up for what will be a divisive and tempestuous US Presidential election season, I wish to summon up a campaign long past, in which a victorious candidate won a landslide on the milquetoast promise of a “return to normalcy”. His full plea was this: “America’s present need is not heroics but healing; not nostrums but normalcy; not revolution but restoration…not surgery but serenity”. As things turned out, Warren Harding is remembered as one of the worst Presidents in US history: his Administration was mired in scandal, his private life was colourful, and his term in office ended in his sudden death in 1923.
Apart from the awful Americanism of the word “normalcy”, the 1920 Republican campaign has been the subject of criticism ever since. And yet, to dismiss the sentiment of the campaign as meaningless is to misunderstand the powerful yearning for years of turbulence to be replaced by predictable calm. Here in the UK, we are undergoing just such a process. After seven years of tumult, caused by Brexit and then Covid and its long-term economic effects, the public are yearning for stability.
Both the PM and the Leader of the Opposition are making a deliberate pitch for that ground, with the administration of corrective anti-inflationary medicine and the promise of long hard roads ahead. All quite, quite different from the boosterism of another recent UK leader whose career in some ways reflected that of Warren Harding.
The yearning for a return to normality often underlies commentaries about the relationship between our Parliament and our Courts. I make no bones whatsoever as to my preferred approach. Not for me the windy rhetoric of seemingly neat solutions to real or imagined issues. The Bill of Rights, introduced by my successor, has now met a quiet and unlamented end at the hands of the new Lord Chancellor. To quote T.S. Eliot from “The Hollow Men”, the Bill has ended, “not with a bang, but a whimper”. It was a hollow proposal, and I am relieved that we will not have to dwell upon it further as we look to return to a more normal way, the way of incremental change that lasts.
Before I finally leave the Bill of Rights, however, I want to clear up some misconceptions about it that have been aired on social media. It was NOT a manifesto commitment, and deliberately so. Whilst for some it represented a potential way of staying within the ECHR whilst setting out a clear position on key issues to help fill the Margin of Appreciation, for others it was never going to be enough. For me, it ran the huge risk of creating more conflicts with the Strasbourg Court and, even worse, giving rise to a new domestic genre of rights-based law that I believe is ill-suited to our common law system.
The actual 2019 Conservative Manifesto commitment was to update the Human Rights Act, in the best traditions of sensible, incremental change. We could and should have achieved that by now, building on the work of Sir Peter Gross’s Independent Review of the Human Rights Act that was published at the end of 2021; instead, much time and political capital has been wasted on a fool’s errand.
I also want to take this opportunity to correct a false narrative being propagated by a minority about the years 2019-21 at the Ministry of Justice. Contrary to what came after them, they were a time of intense activity, reform and achievement. Sentencing law was reformed and codified, no-fault divorces became law, world-leading domestic abuse legislation was passed, and the Probation Service was reorganised, together with the biggest prison building programme since the Victorians and the Covid challenge as well. I am proud to have led a Ministry that worked hard, and which was achieving greater efficiency, higher morale and stronger confidence. My deep regret is that I was not able to complete some of the great tasks that I had started, but why that happened is for others to answer.
Had there been more time, my constitutional law priorities would have been HRA reform and then reform of the 2005 Constitutional Reform Act. I have elsewhere outlined my preferred approach to the role of the Lord Chancellor, but in essence, they should be a lawyer of standing and should be more directly responsible for HMCTS and the administration of justice, leaving the LCJ as Head of The Judiciary with amongst other things the responsibility for Human Resources via an expanded Judicial Office. Finally, I think that we should actively pursue the concept of a statutory referendum lock on all major constitutional reform proposals, from changes to the electoral system to the role and constitution of the House of Lords.
I want to focus on one of my reforms, and as this is ALBA, it is judicial review. It is now nearly two years since I delivered a lecture to the Policy Exchange think tank, in which I outlined the thinking that led to what is now the Judicial Review and Courts Act 2022. An example, you may think, of incremental change. The essence of my argument about the ouster clause contained within the Bill, relating to the Cart jurisdiction in immigration cases, was that it worked by making very clear which types of Upper Tribunal decision were not capable of judicial review, whilst also making clear the kinds of error that could still be a ground of review. This clause was not merely important in itself; it also charts a way forward that avoids the problems of the past, with poorly-drafted or unclear ouster clauses that create ambiguity.
So far, the new Section 11A of the Tribunals, Courts and Enforcement Act 2007, as amended by Section 2 of the Judicial Review and Courts Act 2022, has been challenged in the High Court, where last month Mr Justice Saini ruled in favour of the Government in R (Oceana) v Upper Tribunal [2023] EWHC 791 (Admin). In that case, the court rightly analysed the natural justice exception as requiring a breach that is both procedural and fundamental, and rightly rejected a submission that it had a common law jurisdiction to disregard the new Section 11A, with a clear enunciation at para 52 as follows:
“The starting point is that the courts must always be the authoritative interpreters of all legislation including ouster clauses. That is a fundamental requirement of the rule of law and the courts jealously guard this role. However, the rule of law applies as much to the courts as it does to anyone else. This means that under our constitutional system, effect must be given to Parliament’s will expressed in legislation. In the absence of a written constitution capable of serving as some form of ‘higher’ law, the status of legislation as the ultimate source of law is the foundation of democracy in the United Kingdom. The most fundamental rule of our constitutional law is that the Crown in Parliament is sovereign and that legislation enacted by the Crown with the consent of both Houses of Parliament is supreme. The common law supervisory jurisdiction of the High Court enjoys no immunity from these principles when clear legislative language is used, and Parliament has expressly confronted the issue of exclusion of judicial review, as was the case with Section 11A”.
This is, of course, a judgment at first instance, but it is a welcome early indication that, where Parliament is clear in its language, ouster clauses can and must be given effect. This seems to me to be part of a welcome return to normality, as the courts at all levels increasingly reflect that essential comity which has to exist between Parliament and the Courts. Respect, of course, is mutual, which is why it is equally important that the Government should seek to avoid legislation that sets up a fight. I was glad to see that the end of the unnecessary “Cart” jurisdiction immediately freed up about 20% more judicial resources for our courts.
In very recent days, we have seen the ongoing application by the Government relating to disclosure of material to Baroness Hallett and the Covid-19 Inquiry, and the Court of Appeal decision in the Rwanda judicial review case. Ultimately, both cases are manifestations of no more than the usual tensions that can and indeed should exist between the differing arms of the constitution. It is how the Executive reacts to adverse judgments that is the clearest indicator of the health of our democracy, and in the Prime Minister’s reaction to the Rwanda case I did not detect a departure from what would be acceptable. Normality resumed, then?
I am afraid that I am going to disappoint you. Whilst the constitutional clashes of the Brexit years and the constitutional iconoclasm shown by some in Government since then are giving way to a calmer period, we will never return to perceived “normality”. This has nothing to do with the Government or the Courts of the day, however. Why? Because something even more important is happening. The way that justice itself operates is being increasingly affected by the use of machine or Artificial Intelligence processes in decisions made by Government and its agencies, both here and across the world. A “new normal” is asserting itself rapidly. What does all this mean for justice generally and for administrative law in particular? The topic that I chose last year for my Senior Fellowship at the Mossavar-Rahmani Center for Business & Government at Harvard Kennedy School was the effect of digitisation and AI on the ethics and administration of justice.
For most lawyers, and especially administrative and public lawyers, understanding and analysing the decision-making process itself is central to our considerations. I think that before deciding whether and how to regulate the use of AI in the justice system effectively, there is a basic concept that must be examined: judgement itself. The First Book of Kings in the Old Testament is a revealing place to start.
In a dream, the young King Solomon asks God, not for long life or riches, but for good judgement. In his dream, he is granted not only a wise and understanding heart but riches and long life too. It is after this dream that Solomon is presented with two women, living in one house, who gave birth to two babies. One child died in the night, which gave rise to a dispute between the women as to which of them was the mother of the child that lived. Solomon’s response was to take a sword and propose cutting the living child in two. One of the women begged the King to spare the child and to allow the other woman to have it, whereas the other woman agreed to the child being cut in two. Solomon awarded the child to the first woman.
The true meaning of the Judgment of Solomon, therefore, is not about the King’s proposal to divide an asset down the middle but is about his emotionally intelligent response to the words and deeds of a witness. Solomon rightly decided that only the true mother would consent to her child being given away because the child’s life was precious to her above all things. This was an assessment of credibility based on a shared understanding of basic human emotions. The “law”, insofar as it existed, was based upon Solomon’s experience and qualities as a leader.
There are broadly two types of judgement: practical and reflective. The former is analytical – it centres upon the application of universal concepts. It is based upon hard facts where nothing more needs to be added. Reflective judgement, however, requires something more. It involves something that Solomon possessed: human experience. This type of judgement is anchored in ‘empirical knowledge’, namely observations based upon the reactions and behaviour of others in a range of different situations. For example, in moral dilemmas such as a conflict of loyalty, the aim should not be complete convergence or uniformity, but a variety of different responses within an ethical framework, where each answer or use of judgement will be rooted in the situation that presents itself.
What, then, about AI itself and its place in the administration of justice? Here it may help to loosely differentiate between two types of AI. On the one hand, there is Discriminative AI which learns how to distinguish between different types of data points. It splits data up into distinct groups, without ever having the primary aim of creating something new. On the other hand, Generative AI creates new examples of things based on what it has seen in its training data. The most notable recent advancements in AI, such as OpenAI’s ‘ChatGPT’, mainly belong to this group.
Discriminative AI can already deal well with decisions requiring practical judgement. These systems can learn sets of rules and concepts, and then apply them universally. The more challenging but pressing question is whether Generative AI might be capable of reflective judgement. It appears that for now, it cannot. Reflective judgement is not about merely creating new instances of things based on what has come before it. Rather, it is about applying complex lessons that are learned from lived experiences to a unique situation – and this is not something that AI, as it is presently designed, can reliably do.
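The distinction can be illustrated in a few lines of code. This is a deliberately simplified sketch of my own devising, not a description of any real system: a fixed rule stands in for a learned discriminative boundary, and a trivial bigram sampler stands in for a generative model such as a large language model. All names and thresholds are invented for illustration.

```python
import random

# Discriminative sketch: the system learns only a boundary between
# classes and applies it universally -- akin to "practical judgement".
# A fixed threshold stands in for the learned boundary here.
def discriminative_classify(claim_value, threshold=5000.0):
    """Assign a claim to a court track by a (here, fixed) decision boundary."""
    return "small_claims" if claim_value <= threshold else "fast_track"

# Generative sketch: the system models the data itself and produces
# new instances resembling its training data. A toy bigram table
# stands in for a large language model.
def generative_sample(bigrams, start, length):
    """Generate a new word sequence from observed word-to-word transitions."""
    words = [start]
    for _ in range(length - 1):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

bigrams = {"the": ["court", "claim"], "court": ["held"], "claim": ["failed"]}

print(discriminative_classify(1200.0))        # -> small_claims
print(generative_sample(bigrams, "the", 3))   # e.g. "the court held"
```

The point of the sketch is that neither function involves anything like reflective judgement: the first merely applies a rule, and the second merely recombines what it has already seen.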
This should have important implications for how we regulate AI in the judicial system. In cases that only require practical judgement, we could actively encourage the safe and responsible use of Discriminative AI. After all, these tools are already bringing a whole host of benefits to legal systems around the world. Consider how Brazil, which has the largest judicial system in the world, has used AI to chip away at its backlog of 78 million lawsuits. Half of its 92 courts now use AI systems to perform tasks ranging from grouping appeals that deal with the same issues to automatically examining appeals. China has increasingly resorted to algorithms to replace judges in consumer cases, as a way of getting through backlogs and increasing consistency. I needn’t go into depth about the obvious concerns I and many others have about what data may be being used by China for its legal algorithms, however. In instances that entail reflective judgement, we must insist that a human remains in the loop, because such judgement is beyond the capability of all current AI systems.
The inscrutability of the workings of algorithms, and the need for transparency about the data used within them, pose a huge question for judicial review and for the court’s ability to examine adequately the procedure and thought processes used to reach decisions. One of your later sessions today will look more closely at this issue, but my general thoughts are these. We must ask whether a move to AI inevitably results in a diminution of true accountability. How will the Duty of Candour properly operate in a world where the government machine, quite literally, has made determinations which cannot be explained within current reference points? Without clarity as to the databases used, and without an ability to challenge machine decisions, accountability will be reduced to the detriment of all, and the concept of judicial review rendered meaningless.
Automation of administrative law needn’t be thought of as a sledgehammer to the principles of administrative justice. Neither will it be a panacea for any ills in our existing system. Instead, what it can and must be is a tool, used incisively in service to specific legal and administrative aims.
To properly direct and regulate a tool of such unprecedented power, lawmakers must first determine the bounds of its capacities. This will require a kind of appropriate humility as well as confidence: artificial intelligence will prove much better than humans at some tasks where we might not expect it to be.
The United States Department of Veterans Affairs, grappling with climbing suicide rates nationally and especially among veterans, has turned to AI-assisted predictive algorithms that flag those it believes to be at the highest risk of suicide. Although the analytic AI system is imperfect, it is better than human medical professionals at providing a service that is profoundly “human” in nature.
But this success isn’t uniform. Although AI carries tremendous potential for data acquisition and application, the technology is still immature in many ways. This was all too visible in the disaster that was the Centrelink debt recovery programme, implemented in Australia between 2015 and 2019. In this ill-fated scheme, an Australian social security agency used algorithmic data-matching with tax records to automate the assessment of welfare debts. The immature technology ended up dramatically over-estimating debts and unlawfully raising almost $2 billion. Even though it was equipped with tremendous data acquisition powers, it failed to use these to procure accurate information.
But even as AI hallucinations and other errors are increasingly removed, there remains a risk of unwarranted deference to automated systems, a deference that becomes almost doctrinal as we begin to adopt quantitative solutions with abundant data sets, and maximally efficient practices. The consistency, efficiency, and accuracy offered by increased automation are virtues. However, they must be moderated, not merely maximized; they are virtuous, especially when applied to our administrative and judicial systems, only when the aim is to serve the interests of justice.
In what ways can AI serve administrative and judicial functions, and where must we be careful to regulate it? There are three characteristics of AI’s application to government that I’d like to highlight as being perhaps particularly challenging. These characteristics can be thought of as stages of the AI’s process when dealing with a legal or administrative task.
Firstly, there is the preliminary stage, where, in the case of AI courtroom adjudication or dispute resolution, or in other tasks, data is collected generally and from the parties involved. Reliance on data is not unique to machine-learning systems; it is a feature of digitisation. But the dynamic decision-making made possible by AI makes scrutiny of this data collection even more important. In State v Loomis, a Wisconsin man, Eric Loomis, appealed against a sentence imposed in the Wisconsin Circuit Court that was informed by recidivism-prediction software. Loomis alleged that the use of such software was discriminatory, as it used metrics such as gender and race in its assessment. His appeal was rejected, but his accusation that the shadowy process of automated decision-making concealed an explicit reliance on factors that would not otherwise be tolerated forces us to consider data’s place in adjudication and administration.
By what metrics will we allow AI to make decisions or inform our human reasoning? AI has been accused of extrapolating withheld information, such as race, from other factors, like an individual’s residence and tax bracket. Continued appeals on grounds of discriminatory data usage, however, would impede the very efficiency AI is meant to bring.
Furthermore, will automation imbue our systems with a preference for more data; will more thorough inputs be consistently rewarded by algorithms? Will ubiquitous AI usage require more data collection than currently? The extent to which this will occur will have to inform our standards on digital privacy protections, which vary dramatically between nations.
The second stage is the decision-making process itself. This is an area of particular concern because, in the case of “black box” systems, AI often operates in ways that are entirely inexplicable. If we let AI supplant human decision-makers as the last word on administrative questions or adjudications, there is a lack of transparency. However, if it is subject to final human judgment in cases where discretion is required, it won’t be entirely necessary for the minutiae of the AI’s process to be made clear. Human arbiters and administrators cannot outline every factor that contributes to a final decision, but they must be able to speak to its basis or rationale. As a properly managed tool, AI recommendations do not replace such logic. The focus must instead be on transparency in the administrative process as a whole: which means disclosing the use of AI or algorithms, and the extent to which they contribute to a final decision.
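The routing I have in mind can be sketched very simply. This is an illustrative outline under my own assumptions, not a real system: routine matters may be settled on the AI recommendation alone, while any matter requiring discretion is passed to a human, with the use of AI disclosed in the record of the decision either way.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    ai_assisted: bool   # disclosed as part of the record, for transparency
    decided_by: str     # "system" or "human_arbiter"

def decide(ai_recommendation, requires_discretion, human_review):
    """Route a case: AI may settle routine matters outright, but any case
    requiring reflective judgement goes to a human, with the AI output
    treated only as a recommendation the human may depart from."""
    if requires_discretion:
        outcome = human_review(ai_recommendation)
        return Decision(outcome, ai_assisted=True, decided_by="human_arbiter")
    return Decision(ai_recommendation, ai_assisted=True, decided_by="system")

# Routine case: the AI recommendation stands.
routine = decide("grant", requires_discretion=False, human_review=None)
print(routine.decided_by)   # -> system

# Discretionary case: the human arbiter departs from the recommendation.
discretionary = decide("refuse", True, lambda rec: "grant")
print(discretionary.outcome)   # -> grant
```

Note that the `ai_assisted` flag is always recorded: the disclosure obligation attaches to the process as a whole, not only to the cases the machine decides.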
The third stage is at the end of the process, namely the decision itself. It will be crucial at this stage to ensure that automation does not undermine the human responsiveness of our institutions. Proponents of increased AI use in administrative practice have a vision of responsible, neutral public administration carried out by delegation of authority to a perfectly consistent AI. But without recourse to a human judge or a human representative, citizens will feel managed, rather than served, by their digitized government. Questions of public confidence loom large here, and particularly so in cases involving the withholding of bail, criminal liability, custodial penalties or the care of a child.
However, as I noted a few moments ago, the necessity of recourse to a human representative risks destroying the efficiency of AI. Constant appeals, judicial review and rejection of automated decisions could increase, rather than clear, a backlog of administrative work. Balancing the individual’s right to demand human adjudication or administration against the relentless streamlining of administrative processes is a crucial and urgent question.
The use of a primitive form of AI by researchers at Imperial College London back in 1986 to “translate” the British Nationality Act 1981 into a logic program provides some lessons as to how to incorporate greater machine-learning capacities into our own governance. The researchers understood that translating the law into logic a computer could understand meant more than inputting a series of data points and variables: they had to grapple with the logic and meaning of the law itself.
The researchers pursued a strictly “limited objective” of figuring out how to mechanically apply administrative rules and regulations to varying cases. They understood that fixed matters such as an individual’s time and place of birth stood alongside vague or subjective matters such as “having sufficient knowledge of English” or being “of good character”, so did not attempt to automate this latter type, instead preferring to have their algorithm produce qualified assessments.
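The original work was written as a logic program, but the approach can be sketched in a few lines of Python. The sketch is my own simplified paraphrase, not the researchers’ code: a fixed rule (modelled loosely on section 1(1) of the Act, which confers citizenship on a person born in the UK after commencement to a parent who is a British citizen or settled) is applied mechanically, whilst a vague predicate such as “good character” is never decided by the program, only flagged as a qualified assessment for a human.

```python
from datetime import date

# The 1981 Act came into force on 1 January 1983.
COMMENCEMENT = date(1983, 1, 1)

def acquires_citizenship_s1_1(born_in_uk, birth_date, parent_citizen_or_settled):
    """Fixed matters (place and date of birth, parental status) can be
    applied mechanically, in the spirit of the researchers' approach."""
    if born_in_uk and birth_date >= COMMENCEMENT and parent_citizen_or_settled:
        return "citizen"
    return "not_under_s1_1"

def satisfies_good_character(evidence):
    """'Good character' is vague and subjective, so the program does not
    decide it; it returns a qualified assessment for human judgement."""
    return ("qualified", "requires human assessment of: " + evidence)

print(acquires_citizenship_s1_1(True, date(1984, 5, 2), True))  # -> citizen
print(satisfies_good_character("no convictions declared")[0])   # -> qualified
```

The design choice is the instructive part: the boundary between what is automated and what is merely flagged is drawn by the law’s own distinction between fixed and evaluative matters, not by what the machine happens to be able to compute.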
Although AI capabilities have increased exponentially since those days, this incisive approach to automating administration should serve as an example. We can use this approach as an opportunity to clarify the law when necessary or preserve ambiguity where it is important; we can make use of automation to efficiently make decisions while remembering to qualify such decisions where appropriate.
In summary, therefore, we must resist the temptation to let this technology, however unprecedented, however mysterious and powerful, become more than a tool. The neutrality and consistency of the machine will not always mean justice, just as ambiguity or qualitative reasoning is not always injustice. We cannot turn to machine-learning for answers that we have not fed it ourselves, such as how to govern efficiently while preserving fairness. We must not ask AI to better align our institutions with principles of justice. Only we should do that.
In light of these issues, as discussed in my Times article last Thursday, it seems to me that a set of international principles governing the use of AI in justice systems is needed. The following five principles are a good place to start:
a. AI can be used for legal research, advice, and the preparation of submissions and judgments, but to ensure full transparency there must be disclosure of the nature of the use and of the underlying foundation model used to create the database;
b. AI should not be used ultimately to determine issues requiring reflective judgement, or where the public interest demands human involvement, for example in determining criminal liability, custodial sentences and family issues including the care of children;
c. If AI is used to determine cases, any consent obtained from the parties must be informed, as per the first principle;
d. Where AI is used to determine cases, any fact-based outputs must be verified;
e. If AI is used to determine a case outcome, a right of appeal to a human decision-maker must be available.
As Britain prepares to host an international AI Summit in the autumn, the question of the future of justice itself must be well up the agenda. To answer the question that I posed at the beginning: we are not returning to normality, but to a new dispensation, in which we should be ensuring that justice will be more accessible, that public bodies will remain accountable via the courts where appropriate, and that justice will continue to be seen to be done.
Sir Robert Buckland is Conservative MP for South Swindon. He was Secretary of State for Wales in 2022, Lord Chancellor & Secretary of State for Justice from 2019-21 and Solicitor General from 2014-2019. He is a member of Foundry Chambers, is Senior Counsel at Payne Hicks Beach LLP and is a Senior Fellow at the Mossavar-Rahmani Center for Business & Government at Harvard Kennedy School.