Should you consent to your doctor using an AI scribe? Here’s what you should know.

Clinical documentation can eat hours out of a doctor’s day. It’s no wonder many have taken to AI-based technology to do the notetaking for them – as the conversation occurs, rather than hours later.
I am cautiously optimistic about the growing use of AI scribes in healthcare. These tools have the potential to benefit both doctors and patients, but it all depends on how they are implemented by individual doctors and health clinics.
Research has shown that AI scribes can help doctors stay more present in the consultation, listening more carefully, maintaining eye contact and focusing on the patient rather than the keyboard. This can lead to a more human, attentive and reassuring experience for patients.
AI scribes can help doctors reclaim time for rest or family, reducing burnout and fatigue. Some can even help generate patient-facing letters or summaries in plain English, making it easier for people to understand their diagnosis, treatment plan, or follow-up instructions. That’s particularly valuable for patients managing complex conditions, navigating new diagnoses, or bringing information home to family.
The downside
That said, the risks are real. These tools can make mistakes, especially with strong accents, noisy rooms, or medical jargon. They are not currently regulated by the Therapeutic Goods Administration, and legal responsibility for the content still lies with the doctor. Privacy is also a major concern.
Our research shows that health data breaches can cause serious harm to patients and expose healthcare organisations to reputational, financial and legal consequences.
Some AI scribes send data to overseas servers, which raises additional privacy and compliance issues. Using AI scribes without proper patient consent could breach privacy or surveillance laws in some states.
If clinicians begin trusting AI-generated notes without properly reviewing them, errors may slip through. There’s also a deeper question here: will outsourcing documentation change how doctors think, reflect and reason about a case? Prior research suggests that skipping the manual process of writing notes may lead to missed clinical insights.
Gaining popularity
In recent months, through webinars, policy roundtables and ongoing collaboration with Australian clinicians, I’ve seen firsthand the momentum building around AI scribes. Many doctors – especially GPs – are actively exploring these tools.
This shift is now reflected in national data. A 2024 survey by the Royal Australian College of General Practitioners found AI scribe use among GPs rose from less than 3 per cent in May to 8.24 per cent by October. Another survey by Avant, one of Australia’s largest medical indemnity insurers, reported that use among its members nearly doubled from 11 per cent in August 2024 to 19 per cent by February 2025.
These tools are now used in thousands of consultations every day and have been described as a “game-changer” – reducing documentation time by up to 40 per cent and helping to lower clinician stress levels.
Patients should know that when an AI scribe is used, parts of their consultation – like spoken words or clinical summaries – may be recorded, transcribed and processed by third-party software.
The questions patients must ask
Before agreeing, patients should feel confident asking: Will the consultation be recorded or transcribed? Who will handle the data, and where will it be stored? Will their information be used to train AI or shared with others? And can they say no and still receive the same care?
Doctors should explain not only how AI scribes work, but also what’s in it for the patient – from improving focus and eye contact during the consult to reducing documentation errors and producing clearer follow-up summaries.
Our recent research shows patients are more willing to accept AI tools when consent processes are clear, privacy is protected and trust is actively built. Unfortunately, we also found that poorly explained consent, opaque data practices and fears of misuse – including use by private insurers – are key reasons patients hesitate to share their information.
It’s critical that patients are not left in the dark. They deserve transparency, choice and the assurance that their data won’t come back to harm them later, whether in the form of privacy breaches or decisions by insurers.
While AI scribes offer real promise, they also introduce new layers of legal, ethical and clinical complexity. Without proper oversight, training, and transparency, the risks could outweigh the benefits.
Dr Saeed Akhlaghpour is an Associate Professor of Information Systems at the Centre for the Business and Economics of Health.
Media contact
UQ Communications
communications@uq.edu.au
+61 429 056 139