AI’s legal revolution

Yale Engineering and Yale Law School have teamed up to bring legal expertise to your fingertips with AI lawbots.

Yale Engineering’s Ruzica Piskac and Yale Law School’s Scott Shapiro in Yale’s Legal Laboratory, the Lillian Goldman Law Library.

This story originally appeared in Yale Engineering magazine.

The law can be a complicated thing, even for seemingly simple matters. Wondering if the oak tree in your front yard is in violation of local zoning ordinances? Figuring that out could mean wading through a tall pile of regulations, all written up in confounding legalese.

A city zoning code can contain tens of thousands of meticulously detailed rules, regulations, and guidelines. Even if the 60-megabytes-plus size of the documents doesn’t crash your computer, you still have to try to understand it all. This is a daunting task even for legal experts. For laypeople, deciphering such a Byzantine set of rules borders on the impossible.

To that end, professors Ruzica Piskac and Scott Shapiro — from Yale School of Engineering & Applied Science and the Yale Law School, respectively — are putting artificial intelligence (AI) to work on your behalf. With advanced AI-powered tools, they are developing a system — known as a “lawbot” — that can review and parse zoning laws, tax provisions, and other intricate legal codes much faster than human lawyers. They named their related start-up Leibniz AI, after the 17th-century polymath who dreamed of an automated knowledge generator.

To the user, the concept behind the lawbot is fairly simple: ask it a legal question, and it provides you with an understandable and accurate answer.

Piskac and Shapiro’s “lawbot,” depicted here as a chatbot in action, could review and parse zoning laws, tax provisions, and other intricate legal codes much faster than human lawyers.

More than just offering helpful advice, the two professors see their system as helping to democratize the legal system. Getting reliable information that isn’t cost- or time-prohibitive empowers the average person to understand their rights and make more informed decisions.

The system harnesses the power of large language models, which can understand and generate human language — essentially, they streamline legal analysis and allow users to ask questions and get answers in plain language. Crucially, the system also applies automated reasoning, a form of AI that uses logic and formal methods to reliably solve complex problems. Today’s popular chatbots have shown a tendency toward “hallucinating” — that is, asserting false statements as true. Obviously, this isn’t something you’re looking for in a lawyer. But thanks to automated reasoning, the Leibniz AI lawbot offers only clear-headed responses. By systematically verifying and validating each step of the reasoning process, it significantly reduces the potential for errors.
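
To make that idea concrete, here is a minimal sketch of the kind of check an automated-reasoning layer can perform, written in Python with the open-source Z3 solver. The zoning rule, its thresholds, and the variable names are invented purely for illustration; this is not the Leibniz AI system’s actual code.

# A minimal sketch of rule checking with the Z3 SMT solver (pip install z3-solver).
# The zoning rule, its thresholds, and these variable names are invented for
# illustration; this is not the Leibniz AI system's code.
from z3 import Real, Bool, Implies, And, Not, Solver, unsat

tree_height = Real("tree_height_ft")                # facts about the user's property
distance_to_street = Real("distance_to_street_ft")
violation = Bool("zoning_violation")

# Hypothetical rule: a tree taller than 40 ft within 10 ft of the street is a violation.
rule = Implies(And(tree_height > 40, distance_to_street < 10), violation)

# The user's situation: a 50 ft oak standing 6 ft from the street.
facts = And(tree_height == 50, distance_to_street == 6)

# Does a violation follow necessarily? It does exactly when
# (rule AND facts AND NOT violation) is unsatisfiable.
solver = Solver()
solver.add(rule, facts, Not(violation))
print("Violation provable:", solver.check() == unsat)

If the facts changed so that the oak stood 20 feet from the street, the same check would report that no violation follows, because the solver can find a consistent scenario in which the rule is satisfied without one.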

“We want to use those insights that we already learned about reasoning in the legal setting,” said Piskac, associate professor of computer science. “Then we can apply them to real-world settings so that regular users like me or someone else can ask their questions. For instance, if I have an extra room, am I allowed to rent it on Airbnb?”

There are currently AI-based startups focused on providing legal services. Unlike Piskac and Shapiro’s system, though, none of them uses automated reasoning or any other form of formal validation of its results. Instead, they rely mainly on large language models, which can be unreliable.

Shapiro, the Charles F. Southmayd Professor of Law and professor of philosophy in Yale’s Faculty of Arts and Sciences, said developing a lawbot seemed like a great opportunity to show the promise of AI technology. But increasing access to legal information through large language models brings the obligation to ensure that the information is accurate — the stakes are high when it comes to the law.

That’s where the system’s automated reasoning, formal verification, and logic solvers come into play, he said. The result is nuanced legal information delivered quickly and accurately at the user’s fingertips.

A ‘deeply interdisciplinary’ collaboration

Piskac and Shapiro began working together after Samuel Judson, Piskac’s Ph.D. student, proposed applying for a research grant from the National Science Foundation (NSF). The proposal called for developing accountable software systems, a project that required legal expertise. Piskac emailed Shapiro, whom she’d never spoken with before.

“I’m like, ‘Hey, I’m a person who likes logic. Would you like to work with me on a project involving logic and the law?’” Piskac said. “And Scott answered within a couple of minutes: ‘Yes. I like logic, too.’” Soon after, together with Timos Antonopoulos, a research scientist in Piskac’s group, they applied for and were awarded an NSF research grant for their project on accountability.

The work they’ve accomplished wouldn’t have been possible without both researchers participating, Shapiro said.

“One of the things that I really love about this project is how deeply interdisciplinary it is,” he said. “I had to learn about program verification and symbolic execution, and Ruzica and her team had to learn about legal accountability and the nature of intentions. And in this situation, we went from a very high level, philosophical, jurisprudential idea all the way down to developing a tool. And that’s a very rare thing.”

Each field of study comes with its own terminology and ways of thinking. That can make things tricky at first, Piskac said, but having a common interest helped overcome those obstacles.

“Scott would say something, and I would say, ‘No, this is not correct from the computer science perspective.’ Then I would say something and he would say, ‘No, this is not right from the legal perspective,’” she said. “And just this immediate feedback would really help us. When you’re sitting close to each other and comparing and discussing things, you realize that your goals and ideas are the same. You just need to adapt your language.”

Yale Engineering Dean Jeffrey Brock said the collaboration is a great example of how the school can direct the conversation around AI and make impactful contributions to the rapidly evolving field. In addition to AI-related projects with Yale Law School and Yale School of Medicine, he noted that Engineering has been working with the Jackson School of Global Affairs on cybersecurity, and more collaborations are in the works.

“Engineering is lifting Yale by helping other schools and disciplines on campus to thrive,” Brock said. “In the era of generative AI, fields like law and medicine will become inextricably intertwined with technology development and advanced algorithms. For these schools at Yale to maintain their preeminence, they are increasingly engaged with our mission, and we want to help make their work even better. That’s happening now, and we expect it to continue to an even greater degree in the future.”

He also noted that the cross-disciplinary approach is reflected in the school’s curriculum. Piskac and Shapiro, for instance, co-teach “Law, Security and Logic,” a course that explores how computer-automated reasoning can advance cybersecurity and legal reasoning. And “AI for Future Presidents,” a newly offered course taught by Professor Brian Scassellati, is designed for all students and takes a general approach to the technology and its societal impacts.

Putting the car on the stand

Our lives are increasingly entwined with the automated decision-making of AI. Autonomous vehicles use AI to share our roads, health care providers use it to make certain diagnoses and treatment plans, and judges can use it to decide sentencing. But what happens when — even with the best intentions — things go wrong? Who’s accountable, and to what degree? Algorithms can fail — they can cause fatal accidents, or perpetuate race- and gender-based biases in court decisions.

In a project that combines computer science, legal rules, and philosophy, Piskac and Shapiro have developed a tool they call “soid,” which uses formal methods to “put the algorithm on the stand.”

To better understand how to hold an algorithm accountable, Piskac and Shapiro consider a case in which one autonomous car hits another. With human drivers, lawyers can ask direct and indirect questions to get to the matter of who’s at fault, and what the drivers’ intentions were. For example, if a human driver can testify convincingly that the crash was unforeseeable and unintentional, the jury might go easier on them.

Just as human drivers do, automated decision-making systems make unsupervised decisions in complex environments — and in both cases, accidents can happen. As the researchers note, though, automated systems can’t just walk into a courtroom and swear to tell the whole truth. Their programs, however, can be translated into logic and subjected to reasoning.

Piskac, Shapiro, and their team developed a system that uses automated reasoning to rigorously “interrogate” algorithmic behaviors in a way that mirrors the adversarial approach a lawyer might take with a witness in court. The method is provable, they say, guaranteeing accurate and comprehensive answers from the decision algorithm.

“The basic idea is that we developed a tool that can almost mimic a trial, but for an autonomous system,” Piskac said. “We use a car because it’s something that people can easily understand, but you can apply it to any AI-based system.”

In some ways, an automated decision-making system is the ideal witness.

“You can ask a human all of these questions, but a human can lie,” she said. “But this software cannot lie to you. There are logs, so you can actually see — ‘Did you see this car?’ If it’s not registered in the log, they didn’t see the car. Or if it is registered, you have your answer.”

Using soid, built by Judson in Piskac’s lab, an investigator can pose factual and counterfactual queries to better understand the functional intention of the decision algorithm. That can help distinguish accidents caused by honest design failures from those caused by malicious design practices — for instance, was a system designed to facilitate insurance fraud? Factual questions are straightforward (“Did the car veer to the right?”). Counterfactuals are a little more abstract, asking hypothetical questions that explore what an automated system might or would have done in certain situations.

“Then, when you ask all these counterfactual questions, you don’t even need to guess if the AI program is lying or not,” Piskac said. “Because you can just execute the code, and then you will see.”
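
To give a flavor of what such an interrogation can look like, here is a toy counterfactual query written in Python with the Z3 solver. The braking policy, its thresholds, and the variable names are invented for this example; soid itself reasons about the real decision program, using techniques such as symbolic execution, rather than a hand-written model like this one.

# A toy counterfactual query, in the spirit of the interrogation described above.
# The braking policy and its thresholds are invented for this example; soid works
# on the real decision program rather than a hand-written model like this one.
from z3 import Real, And, Not, Solver, sat

gap_ft = Real("gap_ft")          # distance to the vehicle ahead
speed_mph = Real("speed_mph")    # the car's own speed

# Hypothetical policy: brake hard only when the gap is under 30 ft at over 25 mph.
would_brake = And(gap_ft < 30, speed_mph > 25)

# Counterfactual question: is there ANY situation with a gap under 15 ft
# in which this policy would not have braked?
solver = Solver()
solver.add(gap_ft > 0, gap_ft < 15, speed_mph > 0, Not(would_brake))
if solver.check() == sat:
    scenario = solver.model()
    print("Yes, for example:", scenario[gap_ft], "ft at", scenario[speed_mph], "mph")
else:
    print("No: below 15 ft, the policy always brakes")

Because the solver considers every assignment of the variables at once, the answer covers all scenarios that fit the constraints, not just the handful an examiner might think to test.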

Media Contact

Michael Greenwood: michael.greenwood@yale.edu, 203-737-5151