A hospital physician spends an average of 34% of their time reading and entering data into information systems. That figure is one of the most cited justifications for introducing AI into clinical workflows. But what actually happens when an automatic summarisation tool is deployed?

What summarisation tools do in practice

Deployed systems read the patient record (history, prescriptions, discharge summaries, lab results) and generate a structured summary tailored to the context of the upcoming consultation.

A cardiologist sees a cardiovascular-focused summary. A general practitioner sees an overview. The model does not invent; it selects and reformulates from existing documents.
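The selection step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the deployed system's implementation: real tools use language models over clinical documents, whereas here selection is keyword-based so the example stays self-contained and deterministic. All names (`SPECIALTY_KEYWORDS`, `summarise`, the sample record) are assumptions for illustration.

```python
# Hypothetical sketch of context-tailored record summarisation.
# Core idea from the text: the system selects and reorders existing
# entries for the consulting specialty; it never invents content.

SPECIALTY_KEYWORDS = {
    # Assumed keyword sets; a real system would use a clinical model.
    "cardiology": {"ecg", "troponin", "statin", "hypertension"},
    "general": set(),  # empty set -> no filtering, broad overview
}

def summarise(documents, specialty):
    """Select record entries relevant to the specialty, newest first.

    documents: list of dicts with 'date' and 'text' keys.
    """
    keywords = SPECIALTY_KEYWORDS.get(specialty, set())
    if keywords:
        selected = [d for d in documents
                    if keywords & set(d["text"].lower().split())]
    else:
        selected = list(documents)  # generalist view: keep everything
    return sorted(selected, key=lambda d: d["date"], reverse=True)

record = [
    {"date": "2023-04-01", "text": "Troponin elevated on admission"},
    {"date": "2022-11-15", "text": "Patient reports housing instability"},
]

cardio_view = summarise(record, "cardiology")   # 1 entry kept
gp_view = summarise(record, "general")          # both entries kept
```

Note that the second entry, a free-text social observation, only surfaces in the generalist view; a keyword filter has no handle on it, which foreshadows the standardisation risk discussed below.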

What doctors who use it say

At Nantes University Hospital, a system of this type was deployed in a pilot in the internal medicine department. Of the 45 participating physicians, 38 report a tangible time saving per consultation, estimated at 4 to 8 minutes per patient on complex records (more than 50 documents).

Several physicians note an unexpected positive side effect: the summary sometimes draws their attention to older elements of the record they would not have reviewed spontaneously, for lack of time.

Identified risks

The primary documented risk is overconfidence. Some physicians tend to read the summary without checking the source documents. If the summary contains an error or omission, it can go unnoticed.

A second risk is excessive standardisation. Automated summaries handle structured data well. They miss the nuance in physicians’ free-text notes, which sometimes contain the most important information about a patient’s social or psychological situation.

Key takeaway

Automated medical record summarisation delivers measurable time savings on complex cases. But it demands strict verification protocols to guard against overconfidence, and it is no substitute for training physicians on the tool's limitations.