Contract review is time-consuming, repetitive, and expensive in lawyer hours. It is also one of the first legal tasks that LLMs have assisted in a credible way.

What production tools actually do

The most widely used systems today (Harvey, Luminance, Kira) operate on the same principle: the model reads a contract, identifies clauses according to a predefined taxonomy, and flags deviations from a baseline established by the firm.
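The pipeline described above (classify each clause against a taxonomy, then flag deviations from the firm's baseline) can be sketched in miniature. This is a hypothetical illustration only: the taxonomy labels, keyword matching, and baseline strings below are invented stand-ins for the proprietary models these products actually use.

```python
# Toy sketch of taxonomy-based clause flagging. The TAXONOMY keywords and
# BASELINE wordings are invented for illustration; production systems use
# trained models and firm-specific playbooks, not keyword matching.
TAXONOMY = {
    "liability_cap": ["liability", "limited to"],
    "warranty": ["warrants", "warranty"],
    "governing_law": ["governed by", "governing law"],
}

BASELINE = {
    "liability_cap": "liability is limited to 100% of fees",
    "governing_law": "governed by the laws of england and wales",
}


def classify_clause(text):
    """Assign a clause to a taxonomy label (keyword stand-in for the model)."""
    lowered = text.lower()
    for label, keywords in TAXONOMY.items():
        if any(k in lowered for k in keywords):
            return label
    return None


def flag_deviations(clauses):
    """Return (label, clause) pairs whose wording departs from the baseline."""
    flags = []
    for clause in clauses:
        label = classify_clause(clause)
        if label in BASELINE and BASELINE[label] not in clause.lower():
            flags.append((label, clause))
    return flags
```

For example, a clause reading "Liability is limited to 10% of fees paid" would be classified as `liability_cap` and flagged, while one matching the baseline governing-law wording would pass silently. The review then reduces to a human checking the flagged exceptions rather than reading every clause.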

A London firm that participated in this investigation documented its use of Harvey on M&A contracts. Of 400 contracts analysed over one year, the model correctly identified 91% of non-standard warranty clauses, compared to 78% in a human-only review phase.

The limitations practitioners describe

Performance drops significantly on complex contracts with multi-jurisdictional governance. The models were trained primarily on English and American law. On contracts governed by Swiss or Dutch law, error rates climb noticeably.

A second limitation: the models do not understand commercial context. They can flag an unusually low liability cap, but cannot assess whether that limit is acceptable given the commercial relationship at stake.

The emerging business model

In firms that have adopted it, AI does not replace junior lawyers; it makes them more productive. One partner describes the shift: his associates now handle 40% more files with the same headcount, but every file still requires full human validation.

Key takeaway

Automated contract review is operational today for contracts under Anglo-Saxon law with well-defined clause taxonomies. It remains fragile on complex contracts and on continental legal systems that are less represented in training data.