Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I've been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out the use of an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath and vowed to give it a chance. After being sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I'm thinking that might be the cause. My phone is also like 4-5 years old, so that could also be it.)
Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form there requesting I consent to the use of AI. Hell to the NO. That's a no for me dawg.jpg.
I'm really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Refuse and walk away, or cross a major line in the sand and consent, to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?
EDIT: This is the text of the AI agreement:

> As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
>
> This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
Edit 2: I just wanted to say that I appreciate everyone here who commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn't take issue with it.
Thank you everyone!
I feel very strongly about this and I would change doctors. But of course it won't be long before they all do this and we'll have no alternative. The two biggest problems I see are:
I saw a news story where a doctor who uses this said it saves her time because before seeing the patient she gets an AI summary of their chart, so she doesn't have to "go through several tabs" to read the actual information. Oh great, let the statistical probability text generator hallucinate up some shit about what's in a person's chart, to save 10 seconds of tab-clicking to read the ACTUAL patient records! If they want a summary, there's no reason a traditional report or summary screen couldn't be programmed to pull data out of the most important fields and arrange them in the desired format.
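For what it's worth, that deterministic alternative is trivial to build. Here's a minimal sketch in Python, using made-up field names rather than any real EHR schema: it fills a fixed template from structured fields, so nothing can be invented, and anything missing is flagged as missing instead of papered over.

```python
# Minimal sketch of a deterministic chart summary. All field names are
# hypothetical examples, not a real EHR schema. No text generation is
# involved, so nothing can be "hallucinated".

def chart_summary(record: dict) -> str:
    """Build a fixed-format summary from known fields; missing fields
    are shown as missing rather than guessed at."""
    lines = []
    for label, key in [
        ("Name", "name"),
        ("Active diagnoses", "diagnoses"),
        ("Current medications", "medications"),
        ("Allergies", "allergies"),
        ("Last visit", "last_visit"),
    ]:
        value = record.get(key, "(not on file)")
        if isinstance(value, list):
            value = ", ".join(value) if value else "(none recorded)"
        lines.append(f"{label}: {value}")
    return "\n".join(lines)

patient = {
    "name": "Example Patient",
    "diagnoses": ["example diagnosis"],
    "medications": ["example med 200mg"],
    "allergies": [],
}
print(chart_summary(patient))
```

Boring, auditable, and exactly as fast as the AI summary, with zero chance of inventing a diagnosis.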
THEN the doctor uses her damn phone to record your visit, everything you say, and that gets run through the AI, which generates a visit summary and puts it into your medical records. So god only knows what third-party private corporate vulture has access to your doctor/patient conversations and what they'll do with them, and again, what hallucinated shit will get put into your medical records!
So your doctor never reads your chart and never writes your chart! [Redacted] me now! Also, what happens after a few iterations of an AI summarizing records that an AI wrote?
If you buy into the story that "someday they'll all be using it" you are doing the AI boosters' job for them. It is not a foregone conclusion, and there is no reason to accept that future.
I hope you're right! The magical thinking and child-like trust in this tech by otherwise intelligent people is scary though.
AI is really good at concepts, not logic. But even then, the performance is going to depend on the data it was trained on.
You can ask for a specific symptom of pneumonia and it can answer. You can also ask for a summary of pneumonia, since someone has most likely written one already and the AI knows to use it because of conceptual relevance. But if you ask it to summarise a patient's information, it will split that information into blocks it can summarise based on whatever summarisation patterns are in its training data. I can assure you it cannot ever have all the possibilities pretrained.
My fear is that the models merge all kinds of patient record info together in the statistical model, so the 'summaries' will just write the most likely next word in the phrase. Wrong information and incorrect diagnoses will be recorded into a person's record, or important information will be omitted.
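That "most likely next word" failure is easy to demonstrate with a toy model. The Python sketch below trains a bigram counter on a few fabricated one-line notes from *different* patients (none of this is real data, and real models are enormously bigger, but the mechanism is the same in kind), then greedily emits the most likely next word. The result fuses one patient's symptom with another patient's drug into a sentence that appears in none of the training notes.

```python
# Toy demo: a bigram (next-word) model trained on fabricated notes
# from different patients, then greedy "most likely next word" output.
from collections import Counter, defaultdict

notes = [
    "patient reports chest pain and was prescribed aspirin",
    "patient reports chest pain and was sent home",
    "patient reports headache and was prescribed lithium",
    "patient notes headache improved and was prescribed lithium",
]

# Count how often each word follows each other word across ALL notes.
bigrams = defaultdict(Counter)
for note in notes:
    words = note.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

# Greedily emit the single most likely next word, starting from "patient".
word, out = "patient", ["patient"]
for _ in range(7):
    if word not in bigrams:
        break
    word = bigrams[word].most_common(1)[0][0]
    out.append(word)

generated = " ".join(out)
print(generated)
# Prints "patient reports chest pain and was prescribed lithium":
# the chest-pain patient got aspirin, the lithium patients had headaches,
# yet the model confidently stitches them together.
assert all(generated not in note for note in notes)
```

The statistics of the merged notes produce a fluent, plausible, and wrong sentence. Scale that up and you get exactly the contaminated 'summaries' described above.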
I predict that people will be harmed or die because of missing or false information in patient records. But it will be difficult for the public to find out about it, because of privacy issues and the unwillingness of institutions to acknowledge it.
Drugs have to go through multiple stages of testing and trials before they're allowed to be used on patients. But no one is doing any kind of testing on the effects of this at all, let alone controlled trial rollouts with review, before allowing general use.