How Autonomous is Your Legal AI Assistant?
As AI for legal work rapidly advances, it has become hard to mark where we stand on the curve of what AI can and can’t do. And with hundreds of legal technology companies advertising AI capabilities, often with marketing hype thrown in, it feels as though the volume is turned up to 11.
To cut through the noise, I’ve developed a framework for understanding the levels of AI autonomy in legal work, focusing on contract review and drafting. Somewhat self-serving, I admit, since AI contract review is what we do at LegalOn, but it’s grounded in years of experience and in an area of legal work that’s relevant to all organizations. I believe this framework is the first of its kind, and I encourage you to comment and share your reaction. If you find it valuable, please build it out for other legal practice areas.
Here is some additional context before jumping in:
- My thinking is inspired by the levels of autonomy defined for understanding the evolution of self-driving cars.
- Accuracy is a separate, critical dimension to these levels. By definition, accuracy must increase with increasing autonomy – otherwise, human intervention is always required (e.g., you can’t trust a self-driving car that makes constant mistakes). But at the lower levels, 1 through 3, there is a wide variance in accuracy, significantly impacting how engaged users must be in providing oversight and intervention.
- The law has many domains within it, so the descriptions and effectiveness of technology will vary quite a lot depending on the domain. At the highest level, autonomy for litigation-related tasks is different from that for transactional tasks, and important divisions still exist at much more granular levels. By analogy, a car that can self-drive in one city may fail terribly in another city or if sunshine turns to snow.
Let’s go with contract review and drafting as our example domain.
Level 1: Assistance
The system can identify contract types and/or clause types. This means that the AI can recognize different types of contracts, such as employment agreements or non-disclosure agreements, and different kinds of clauses within those contracts, such as a clause about the term of the agreement. This level of AI autonomy can help organize and categorize contracts, making it easier for lawyers to find the documents they need. Analytics about sets of contracts become possible if one takes the additional step of mapping contract information to standardized models. This technology is available today with varying levels of accuracy.
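To make Level 1 concrete, here is a deliberately simplified sketch of clause-type identification. Production systems use trained classifiers or large language models rather than keyword matching, and the clause types and keywords below are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of Level 1 "assistance": tagging clauses with a clause type.
# Real systems use trained models or LLMs; keyword matching is only
# illustrative, and the clause types and keywords below are assumptions.

CLAUSE_KEYWORDS = {
    "term": ["term of this agreement", "shall commence", "shall continue"],
    "confidentiality": ["confidential information", "non-disclosure"],
    "termination": ["terminate this agreement", "termination for cause"],
    "governing_law": ["governed by the laws of", "governing law"],
}

def classify_clause(clause_text: str) -> str:
    """Return a best-guess clause type, or 'unknown' if nothing matches."""
    text = clause_text.lower()
    for clause_type, keywords in CLAUSE_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return clause_type
    return "unknown"

print(classify_clause("The term of this Agreement shall commence on the Effective Date."))
# -> "term"
```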
Level 2: Enhanced Assistance
The system can extract information from contracts and/or clauses, such as – using the example from Level 1 – that the term of the agreement is two years. This enhanced assistance can save time and effort by automatically identifying key contract information and making it easily accessible. This technology is available today with varying levels of accuracy.
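To illustrate the jump from Level 1 to Level 2, the sketch below pulls a term length out of a clause that has already been tagged as a term clause. It uses a single regular expression; real extraction systems rely on NLP models and handle far more phrasing variation, so treat the pattern as an assumption made for the example.

```python
import re
from typing import Optional

# Sketch of Level 2 "enhanced assistance": extracting a data point (the term
# length) from a clause already identified as a term clause. The regex covers
# only one common phrasing and is an illustrative assumption.
TERM_PATTERN = re.compile(
    r"term of (?:this|the) agreement[^.]*?\((\d+)\)\s*(year|month)s?",
    re.IGNORECASE,
)

def extract_term(clause_text: str) -> Optional[dict]:
    """Return {'value': int, 'unit': str} if a term length is found."""
    match = TERM_PATTERN.search(clause_text)
    if not match:
        return None
    return {"value": int(match.group(1)), "unit": match.group(2).lower()}

print(extract_term("The term of this Agreement is two (2) years from the Effective Date."))
# -> {'value': 2, 'unit': 'year'}
```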
Level 3: Conditional Automation
At the third level of AI autonomy, the system can review contracts to identify problems and draft solutions, while the user remains actively engaged. The AI can flag potential issues in a contract and suggest possible solutions, but the lawyer must still review and approve those solutions. This technology is available today (e.g., at LegalOn) with varying accuracy.
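The sketch below shows the human-in-the-loop shape of Level 3: the system flags an issue and drafts a proposed fix, but nothing is applied to the contract until a lawyer approves it. The specific check and suggested language are placeholders invented for the example; they are not LegalOn’s rules or output.

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of Level 3 "conditional automation": the system proposes, a human
# disposes. The check and suggested language are illustrative assumptions,
# not any vendor's actual review playbook.

@dataclass
class Suggestion:
    issue: str
    proposed_language: str
    approved: bool = False  # stays False until a lawyer signs off

@dataclass
class ReviewResult:
    contract_text: str
    suggestions: List[Suggestion] = field(default_factory=list)

def review_contract(contract_text: str) -> ReviewResult:
    """Flag potential issues and draft proposed fixes (illustrative check only)."""
    result = ReviewResult(contract_text)
    if "governed by the laws of" not in contract_text.lower():
        result.suggestions.append(
            Suggestion(
                issue="No governing-law clause found.",
                proposed_language="This Agreement shall be governed by the laws of [STATE].",
            )
        )
    return result

def apply_approved(result: ReviewResult) -> str:
    """Only suggestions a lawyer has approved are merged into the draft."""
    text = result.contract_text
    for suggestion in result.suggestions:
        if suggestion.approved:
            text += "\n" + suggestion.proposed_language
    return text
```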
This level also includes the ability to draft some types of contracts, with varying levels of customization. I don’t mean automated templates (which don’t require AI; they take a standard document, populate it with facts, and use if>then rules to swap in different provisions). Instead, generative AI today can create unique, semi-customized contracts of some types, with varying skill and accuracy depending on the system and the contract. Human oversight is generally required, but on a sliding scale depending on complexity.
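To clarify the distinction above, here is a sketch of the kind of if>then template automation that does not require AI: a standard document is populated with facts, and provisions are swapped in based on simple rules. The template text and rules are invented for the example. Generative drafting, by contrast, produces clause language that isn’t pre-written into any template.

```python
from string import Template

# Sketch of non-AI document automation: populate a standard template with
# facts and use if>then rules to choose provisions. All template text and
# rules here are invented for illustration.

NDA_TEMPLATE = Template(
    "This Non-Disclosure Agreement is made between $disclosing_party and "
    "$receiving_party, effective $effective_date.\n$survival_clause"
)

def build_nda(facts: dict) -> str:
    # if>then rule: mutual NDAs get a different survival clause than one-way NDAs.
    if facts.get("mutual"):
        survival = "Each party's confidentiality obligations survive for three (3) years."
    else:
        survival = "The Receiving Party's obligations survive for five (5) years."
    return NDA_TEMPLATE.substitute(
        disclosing_party=facts["disclosing_party"],
        receiving_party=facts["receiving_party"],
        effective_date=facts["effective_date"],
        survival_clause=survival,
    )

print(build_nda({
    "disclosing_party": "Acme Corp.",
    "receiving_party": "Example LLC",
    "effective_date": "January 1, 2025",
    "mutual": False,
}))
```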
Level 4: High Automation
The system can fully review, revise, and draft contract language within limited domains. This means that the AI can fully handle some contracts in some circumstances, and a human is not needed for line-by-line review in those instances. Humans still set review standards, oversee quality control, and may manually review certain contract types and revisions. This technology is not available today.
Level 5: Full Autonomy
At the highest level of AI autonomy, the system can fully handle the review, revision, and drafting of all contract types without direct human intervention. At this level, humans may continue to oversee review policies and quality control, albeit with automated, data-driven technologies. This level of AI represents a massive shift – in technology, practice of law, and regulatory schemes. This technology is not available today.
The five levels of AI autonomy in legal work represent a spectrum of increasing automation. While the benefits of AI in contract review and drafting are clear, we’re only at Level 3 of 5, and humans must remain actively engaged, particularly in situations that involve complexity or nuance. We are still some way off from more advanced levels of automation, which will require further technical advancement, human trust, and regulatory clarity. I suspect we are still years away from Level 4. We are at a very exciting inflection point for AI, and lawyers and legal professionals will continue to play an important but evolving role in contract review.
If you’d like to learn more about LegalOn and how we help legal teams negotiate stronger contracts faster, DM me!