this post was submitted on 04 Apr 2026
196 points (95.8% liked)

Ask Lemmy


Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I've been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.

The office has rolled out an app called MYIO. My knee-jerk reaction was to be unhappy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with a lot of Google services and data sharing disabled, so I'm thinking that might be the cause. My phone is also 4-5 years old, so that could also be it.)

Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form requesting that I consent to the use of AI. Hell to the NO. That's a no for me dawg.jpg.

I'm really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.

If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Refuse, since this is a major line in the sand for me, or consent to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?

EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.

This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.

Edit 2: I just wanted to say that I appreciate everyone here that commented. For the most part everyone brought up valid points, and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third party online app having access to recorded sessions for the purposes of transcription. They didn't take issue with it.

Thank you everyone!

top 50 comments
[–] lucg@lemmy.world 2 points 1 day ago

I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third party online app having access to recorded sessions for the purposes of transcription. They didn't take issue with it.

A cynical part of me thinks they'll just have it "locally installed" in the same way that Firefox is locally installed (which doesn't mean the meaningful part runs locally), and that "no third party has access" only because the servers don't show data from other tenants, even though the server operator could theoretically see it all. It's not as if the medical staff would necessarily know better if their vendor answered the concerns in this manner.

One way for laypeople to find out might be to turn off WiFi, or disconnect the network cable, and see if it still works, in case you're in a position where the doc seems willing to do such a 30-second experiment (if they haven't already tried it themselves). That doesn't rule out the recording being uploaded once the internet is reconnected (e.g. for backups), which is much harder to check, but if the vendor really made the processing all local then it's probably okay and not being sold off as training or insurance data.
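If unplugging mid-session isn't an option, another rough check is to look at which remote servers the app's process is talking to while it runs. A sketch in Python, assuming the third-party psutil package; the "myio" process-name match is my guess, not anything I've verified about this vendor:

```python
# Rough sketch: list the remote servers a running app is connected to, using
# the third-party "psutil" package (pip install psutil). Run with admin
# rights so connection owners are visible; "myio" is a hypothetical name.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if not (conn.raddr and conn.pid):       # keep only outbound connections
        continue
    try:
        name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:            # process exited mid-scan
        continue
    if "myio" in name.lower():              # hypothetical process name
        print(f"{name} -> {conn.raddr.ip}:{conn.raddr.port}")
```

Any connection to a cloud host while transcription is running would be a strong hint that the "local" story isn't the whole story.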

Kudos for reading the terms of service and raising your concerns with them! So long as some of us keep doing that, the privacy of people who don't know about this sort of thing is also better-protected. Thank you :)

[–] slazer2au@lemmy.world 85 points 5 days ago (28 children)

I would nope the fuck out and change doctors. A regurgitation machine prone to hallucinations has no place in medical care.

[–] soar160@lemmy.world 63 points 5 days ago (4 children)

Definitely ask how they are using it. I know a number of physicians who are just using it as dictation software to quickly produce a first draft of their paperwork; it helps lighten a big load.

[–] credo@lemmy.world 23 points 5 days ago

This is the answer.

Most docs can’t keep up with the mountain of paperwork or billing codes required by insurance companies these days. The software helps, but requires the doc to review and sign off the notes.

It’s not an LLM coming up with treatment plans, etc. It’s transcription+

[–] ace_garp@lemmy.world 10 points 5 days ago* (last edited 5 days ago) (2 children)

Dictation and summary software could be installed onto the doctor's computer.

There is something else going on here, with pushing an app onto patients.

[–] TwilitSky@lemmy.world 48 points 5 days ago* (last edited 5 days ago) (2 children)

OP, I'm a bit of an unfortunate expert in U.S. healthcare.

The fact that you have a psychiatrist whom you trust, who has you on the right meds, and whom you've been with for 3 years is invaluable. You calling yourself stable is a huge thing; you wouldn't be saying that if you weren't on solid ground.

It would be completely crazy to give up a psychiatrist who is on your insurance over some AI garbage that is just transcribing notes for your doctor.

At a bare minimum, get a new psychiatrist who is on your insurance before switching. That should take about 6 months, if you're very lucky.

Play it through: do you want to lose a quality prescriber and talk therapist? Also, maybe you should just tell them you're extremely concerned and see what they say or do.

The end result can't be worse than you giving up on your mental health. You already know how hard it is to find quality psych care.

[–] Washedupcynic@lemmy.ca 20 points 5 days ago (7 children)

I 100% agree with you. I trust my doctor. I don't trust the app. Prior to this we were using zoom.

[–] GreenBeanMachine@lemmy.world 6 points 4 days ago
  1. If your options are having a doctor that uses AI or having no doctor at all, some doctor is better than none.

  2. I would ask for more information about what AI they are using, where the data is processed (locally or online), where and how the AI-collected data is stored (locally or in the cloud), who can access your data, and whether it could be used for AI training.

[–] leadore@lemmy.world 21 points 5 days ago (3 children)

I feel very strongly about this and I would change doctors. But of course it won't be long before they all do this and we'll have no alternative. The two biggest problems I see are:

  1. I saw a news story where a doctor who uses this said it saves her time, because before seeing the patient she gets an AI summary of their chart, so she doesn't have to "go through several tabs" to read the actual information. Oh great, let the statistical probability text generator hallucinate up some shit about what's in a person's chart, to save 10 seconds of tab-clicking to read the ACTUAL patient records! If they want a summary, there's no reason a traditional report or summary screen couldn't be programmed to pull data from the most important fields and arrange it in the desired format.

  2. THEN the doctor uses her damn phone to record your visit, everything you say, and that gets run through the AI, which generates a visit summary and puts it into your medical records. So, god only knows what third-party private corporate vulture has access to your doctor/patient conversations and what they'll do with them, and again, what hallucinated shit will get put into your medical records!

So your doctor never reads your chart and never writes your chart! [Redacted] me now! Also, what happens after a few iterations of an AI summarizing records that an AI wrote?

[–] sem@piefed.blahaj.zone 8 points 5 days ago (1 children)

If you buy into the story that "someday they'll all be using it" you are doing the AI boosters' job for them. It is not a foregone conclusion, and there is no reason to accept that future.

[–] leadore@lemmy.world 5 points 4 days ago

I hope you're right! The magical thinking and child-like trust in this tech by otherwise intelligent people is scary though.

[–] stringere@sh.itjust.works 24 points 5 days ago (1 children)

No. Absolutely not. I cannot trust any current AI model with HIPAA compliance.

Find another doctor. I just had to fire my therapist because when I went in for this week's appointment they were playing some Jesus worship service and songs. I told her it was our last session because I no longer had trust in their office, and added that I had no faith any progress would ever be made after being triggered while waiting to see my therapist. It could have been the receptionist's choice of music or someone else's in the office, but since they don't advertise as a faith-based therapy group, they should have left that shit at home or should expect more of the same from people like me.

[–] BanMe@lemmy.world 8 points 5 days ago (2 children)

It's worth researching a therapist's credentials; some states allow "pastoral counseling degrees" and the like to be a path to "mental health therapist." You want an LISW, a licensed independent social worker. I'm not saying there aren't weirdos, or that your experience couldn't happen with a social worker... just that many folks don't realize some therapists went to theology classes instead of psychology classes, which is a prime setup for problems.

[–] LodeMike@lemmy.today 24 points 5 days ago

An AI tool does NOT prioritize privacy. It's literally the opposite.

[–] michaelmrose@lemmy.world 5 points 4 days ago

AI summaries often make up details, omit what is important, and get stuff wrong. Every error may follow you forever, complicating diagnosis and treatment, and can ultimately harm or kill you.

[–] scrollo@lemmy.world 23 points 5 days ago (4 children)

Can you ask how AI is used in the app?

[–] Washedupcynic@lemmy.ca 27 points 5 days ago (5 children)

I can, but in truth I don't care. I don't want my data being used to train AI, and I don't want my treatment to be guided by AI.

[–] scrollo@lemmy.world 12 points 5 days ago* (last edited 5 days ago) (3 children)

The "fine print" you added doesn't say the automated transcript will be used for training a model. I'd highly, highly doubt HIPAA protected clinic notes would be use for training an LLM. If they did, the clinic would go bankrupt from lawsuits.

Also, if they only use AI for automated transcription, would you feel the same if, instead of "AI," it were called a dedicated automated transcription tool?

If you abhor all things AI, your feelings about not continuing with this clinic are valid. However, I don't think they are using AI in the ways you think they are.

[–] snooggums@piefed.world 14 points 5 days ago

I’d highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM


[–] TherapyGary@lemmy.dbzer0.com 18 points 5 days ago (2 children)

I'm a therapist and I use SimplePractice for my practice. They recently added an AI note taker that is HIPAA compliant, and the consent form they suggest giving to clients sounds okay, but I read the actual privacy policy and the language used is way too vague for me to trust, so I don't use it.

In your position, I would:

  1. Ask if you have to sign that, or if you can opt out. Your specific provider may be open to just not enabling the AI note taker for your profile, and they may be able to remove that form from the app for you on their end. This may not be in their control, but if they're a good person who cares about you, they'll make an effort to get it done anyway.

  2. If not, ask for a link to the actual privacy policy and see if it sounds acceptable to you. Not the practice's Notice of Privacy Practices, not the patient portal privacy policy, but the actual privacy policy for the AI note taker (whoever you ask might have to do some digging to actually find it).

[–] Vex_Detrause@lemmy.ca 5 points 4 days ago

One of our doctors started using AI transcription and summary, and I find the result lacking in substance once the AI is done summarizing. You can see her thought process when she types her notes herself: thorough but concise. The AI summary is certainly short, but it's not about shortness; it's about whether you can hand your note to another doctor and have that doctor be able to follow through with the plan.

[–] Royy@lemmy.world 8 points 4 days ago

Hello! It is absolutely justified to be worried. Tell your doctor your concerns, and ask your doctor questions about the use of AI. If you want some help putting together questions for your doctor, lmk.

I'm involved with the development / integration of AI. From the specific text of the AI agreement, it looks like these are the AI tools you're consenting to:

  • Transcription tool: This is a speech-to-text tool. It can differentiate between speakers.

  • Transcript -> clinical documentation tool. This takes the text of the transcript, interprets it, and generates clinical documentation based on it. (A rough sketch of this two-stage flow is just below.)
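To make the moving parts concrete, here's a toy sketch of that two-stage flow in Python. Both stages are stand-ins; the function names and canned strings are mine, not anything from the actual product:

```python
# Toy sketch of the two tools above; both stages are stand-ins so the data
# flow is visible. A real product plugs in a speech-to-text model (stage 1)
# and an LLM (stage 2), then deletes the recording and transcript.
def transcribe(audio_path: str) -> str:
    """Stage 1 stand-in: speech-to-text that also labels speakers."""
    return ("DOCTOR: How has the new dose been?\n"
            "PATIENT: More stable. Sleeping better.")

def draft_clinical_note(transcript: str) -> str:
    """Stage 2 stand-in: an LLM drafts the note the provider must review."""
    return "DRAFT NOTE (pending provider review):\n" + transcript

print(draft_clinical_note(transcribe("session.wav")))  # hypothetical file
```

The point of the sketch: everything hinges on where each stage runs and who keeps the intermediate recording and transcript.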

The agreement does not seem to cover taking the clinical documentation and attempting to suggest diagnoses or care steps.

I am actually concerned by the "recording and transcript are automatically deleted" line. If your doctor reviews the generated clinical documentation against the transcript and misses something for whatever reason, then if they are unsure about something in the future, they can't go back and reference the original audio or generated transcript to verify accuracy.

There are also concerns about how they are following HIPAA laws:

What model / service are they using?

Did they do their due diligence in deciding what service to use?

Have they looked at other cases where data companies said they don't persist or sell your data, and then they sold it, or there was a breach of data that shouldn't have persisted in the first place?

Do they anonymize personal information before they send it to whatever service they are using? Note that this is not possible for transcription models, as they cannot know what text to anonymize/censor until the model generates the text. That doesn't mean there are no HIPAA-compliant options: transcription models can even be run locally, possibly on consumer-grade devices, meaning the audio doesn't have to be sent to a third party at all.
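For what it's worth, the local option is real. A minimal sketch, assuming the open-source openai-whisper package (the filename is a made-up example, not anything this vendor uses):

```python
# Minimal fully-local transcription sketch using the open-source
# "openai-whisper" package (pip install openai-whisper; needs ffmpeg).
# After the one-time model download, no audio leaves the machine.
import whisper

model = whisper.load_model("base")              # small model; fine on a laptop
result = model.transcribe("session_audio.wav")  # hypothetical local recording
print(result["text"])
```

Note that off-the-shelf whisper does not label speakers; diarization needs an extra tool, but that can also run locally.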

[–] VampirePenguin@lemmy.world 8 points 4 days ago (7 children)

AI and the people pushing it are not trustworthy. They do not have your data security or your wellbeing at heart, even if your doctor does. LLMs are inherently bad at data security, and there is no way these companies can, in good faith, promise HIPAA compliance. More likely, the AI use will be on the part of the insurance company, to find ways of denying your claims.

[–] Crankley@lemmy.world 4 points 4 days ago

I have all sorts of anxiety surrounding AI. Most of it comes from the misuse, the copyright issues, and the departure from critical and creative thinking. However, one field where I actually think it could be very useful and of great benefit is medicine.

That being said, I'd be a no as well. The way this is worded, and the track record we've seen with privacy, doesn't fill me with much confidence. It feels like another instance of offloading thinking rather than a tool for better diagnosis.

From the process you describe, it sounds like America. The confluence of commercialized healthcare and tools that can make it look like time and attention were spent leads to some bad places. I'd be very sceptical about any advice, medical or otherwise, I received.

The unfortunate truth is that the cost of care will be higher for health companies that don't use these tools, which means bespoke human-led care will be a luxury in America in the near future. I don't think it's a reality you are going to be able to avoid.

I would push back at every opportunity, double-check all of the information you are getting, ask pointed "why this?" questions, and make doctors clearly communicate that they are the ones giving the recommendation. At the end of the day, a good doctor with AI tools is likely to do a better job.

[–] NotMyOldRedditName@lemmy.world 4 points 4 days ago* (last edited 4 days ago)

For note taking only, I'd be fine IF it was all run locally with no ability to be trained on.

I'd want assurances from the Dr that they carefully review the notes immediately after, or that I get to see the notes before leaving, due to the risk of hallucinations that could cause future care problems.

They could have it visible on a screen while you're in the room, to help you be sure it's accurate.

Edit: I'd care less about it being local if it weren't medical/legal in nature.

[–] e0qdk@reddthat.com 12 points 5 days ago (2 children)

My medical provider started doing that when I last had a video conference with them, and I declined to allow the use of AI. They took no issue with that -- didn't even bring it up. It's very unlikely that your provider will care that you declined either. I recommend saving your energy for other problems and dealing with this later in the unlikely event that they do actually make an issue of it.

[–] Deestan@lemmy.world 17 points 5 days ago

The privacy statements are fucking lies.

I will not share my innermost mental issues with some group of 20-something "move fast and break things" sociopaths in Silicon Valley.

[–] gratux@lemmy.blahaj.zone 17 points 5 days ago

AI is an overloaded marketing term. Definitely ask which kind of AI it is, how it is used, and how and which of your data is going to be used.

[–] chicken@lemmy.dbzer0.com 12 points 5 days ago (2 children)

I would only be ok with an AI note taking app if the model is running on hardware the doctor physically has in their office because otherwise any privacy assurances don't mean that much.

[–] AnchoriteMagus@lemmy.world 16 points 5 days ago* (last edited 5 days ago)

It would be an absolute deal breaker for me. There has never yet been a commercially available AI that doesn't hallucinate, and there's no element of my healthcare where I'm comfortable having facts be unreliable.

[–] Tollana1234567@lemmy.today 3 points 4 days ago* (last edited 4 days ago)

I wonder if they hallucinate notes post-appointment. I've noticed complaints against certain providers that the "doctors" did examinations they didn't actually do in person, and it appeared on their records.

[–] Bebopalouie@lemmy.ca 12 points 5 days ago

I would be out of that office faster than the speed of light.

[–] Nibodhika@lemmy.world 8 points 5 days ago (1 children)

I know this might go against the flow here, but realistically, if they're using the tools in the way they say they are (which you should 100% verify with your doctor, while letting them know about possible hallucinations), it's not that bad. Speech-to-text is not prone to hallucination; it can fail and detect the wrong words, but it shouldn't outright hallucinate. After that, LLMs are good at summarizing things. Yes, they are prone to hallucinations, which is why having the doctor review the notes immediately after the session is important (and they said they do), so I don't see this as such a big issue from the usability point of view.

You might still have issues from a privacy point of view, and that's a much more complex discussion with them about what kind of contract they have with the LLM company to ensure no HIPAA violations (from the LLM company's point of view it's just making a summary of a text; it might store it, and then the whole stack is suable). They need to understand that just because they haven't kept a copy around doesn't mean the other party hasn't, and because they shared it without your agreement (you're only agreeing to AI note taking, which can be done locally, so sharing information with third parties is entirely on them), they would be liable. I'm not a lawyer, so you might want to double-check that, but I would be very surprised if that's not the way it works; otherwise doctors could get away with a bunch of HIPAA violations by having you sign something that says they use a computer to store data and then storing things in a shared Google Drive.

[–] cley_faye@lemmy.world 6 points 5 days ago (2 children)

It depends on many things. The hard line for me would be: is this running locally, on a server with the same IT management as my actual data, or on third-party servers? If the doctor either doesn't know this, or can't give adequate proof that it isn't running on some third party's servers, then all the "prioritize your privacy" claims aren't worth shit.

But that's only the point where I give a hard no. The way it is used would also matter a lot. Is it used as a crutch for reference searching, or a full self-driving decision-making process that will write me a prescription in the end? This part is the same whether it's for medical advice or anything else: if the user is skilled enough to evaluate/validate the output of the process faster than it would have taken them to do the work manually, then there might be some value. Some usages fit this; some don't. Summarizing a large document you did not read is not safe, because you'd have to read the document to check the summary. Getting a summary of a drug/sickness/whatever that you know about but need a reminder of could be okay.

tl;dr: it has to run in a privacy-respecting context (no third parties), it has to be used as a crutch (no skipping the work), and the user has to keep their brain and mental activity alive enough to steer the system instead of being dragged along by it. As things stand right now, I doubt there are a lot of doctors that would fit all three points, but in the future, maybe.

[–] GrayBackgroundMusic@lemmy.zip 5 points 4 days ago

Your provider then reviews the content of that note to ensure its accuracy and completeness.

You know they're not gonna do that, in practice.

[–] BlindFrog@lemmy.world 7 points 5 days ago

No, but.

One of my doctors has an assistant nurse (or whatever they're called in the hierarchy) take notes just so the conversation can be more fluid. She always asks whether that's okay with me.

My other doctor types and reads out her notes with me towards the end of my visit to make sure she hasn't missed anything, and she makes me feel heard and involved.

No, I wouldn't consent. Sending my PHI to a third party is unnecessary, and AI data centers are a net negative for the planet. I also wouldn't trust that the AI service provider isn't helping themselves to your data + doctor's feedback for further training anyway. Thank god healthcare providers are required to ask before shunting your info off to some third party.

But, if presented with this, I'd talk to my doctor about the extent to which third-party AI services are already being used in my own healthcare. If I could fully opt out, I'd stay. If I didn't have a real choice to opt out, and if it were easy to find a new doctor that didn't use AI services, ~~I'd fuck off so fast, like bye felicia, I ain't dealing with this palantir-esque bullshit just for getting an Rx refill~~

[–] cerebralhawks@lemmy.dbzer0.com 12 points 5 days ago

I left. Similar thing happened to me, except the doctor said he was going to use an AI tool on his computer (or maybe it was an iPad/Android tablet? It was a couple years ago) but needed my permission. I said no. He said it meant he'd have to write stuff down. I asked if there was any way he could do it without feeding AI my data and training the AI. He said no. I said no to it. They would not schedule me again. I actually still need to change GPs.

[–] apfelwoiSchoppen@lemmy.world 11 points 5 days ago* (last edited 5 days ago) (2 children)

Given how captured our data is, thanks to the lack of regulation even in the medical space in the US, I simply do not want my personal data used for anything but in-house signal-to-noise improvement for diagnosis.

Anything else, which is most of it, is unacceptable and I do not consent.

[–] melsaskca@lemmy.ca 5 points 5 days ago

If my doctor needs AI to treat me then they ain't no doctor, they're middlemen.
