Report: Generative AI Passes the Legal Ethics Exam


Can Generative AI Pass the Legal Ethics Exam?

Our latest research takes on a pressing question: can generative AI models handle the complex ethical dilemmas faced by legal professionals? Earlier this year, GPT-4 passed the notoriously difficult bar exam, demonstrating AI's potential to interpret nuanced legal language. But what about the ethical side of legal practice? To find out, we conducted an experiment testing leading AI models against the Multistate Professional Responsibility Exam (MPRE) – the ethics exam required in nearly every U.S. jurisdiction.

This research brief dives into:

  • How GPT-4 and Claude 2 performed on the MPRE, using 500 sample questions mirroring the exam's style and complexity.
  • The models’ accuracy across key ethical topics like conflicts of interest and client-lawyer relationships.
  • The implications of these findings for AI’s role in legal practice and the importance of human oversight.

Key Insights:

  • GPT-4 achieved a 74% accuracy rate, surpassing the estimated passing score in every jurisdiction and outperforming the average human test-taker by six percentage points.
  • Despite this strong performance, the study underscores that AI is not infallible and requires extensive testing, validation, and ethical supervision by legal professionals.

Generative AI is transforming the legal landscape, but it’s clear that ethical responsibility remains a human domain. Download the research to explore how AI and legal professionals can collaborate for a more efficient and just future.

Experience LegalOn Today

See how LegalOn can save you time, reduce legal risk, and free you from tedious work.


Sign up to request free early access to LegalOn