Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I've been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, and lots of Google shit and data sharing are disabled, so I'm thinking that might be the cause. My phone is also like 4-5 years old, so that could also be the cause.)
Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form requesting that I consent to the use of AI. Hell to the NO. That's a no for me dawg.jpg.
I'm really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Hold the line even though it means losing them, or consent, even though this is a major line in the sand for me, to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?
EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
It depends on many things. The hard line for me would be: is this running locally, on a server with the same IT management as my actual data, or on third-party servers? If the doctor either doesn't know this, or can't give adequate proof that it isn't running on some third party's servers, then all the "we prioritize your privacy" talk isn't worth shit.
But that's only the point where I give a hard no. The way it is used would also matter a lot. Is it a crutch for reference searching, or a full self-driving decision-making process that will write me a prescription in the end? This part is the same whether it's for medical advice or for anything else: if the user is skilled enough to evaluate and validate the output of the process faster than it would have taken them to do the work manually, then there might be some value. Some usages fit that. Some don't. Summarizing large documents you did not read is not safe, because you'd have to read the document to check the summary. Getting a summary of a drug/sickness/whatever that you already know about but need a reminder of could be OK.
tl;dr: it has to run in a privacy-respecting context (no third parties), it has to be used as a crutch (no skipping the work), and the user has to keep their brain and mental activity alive enough to steer the system instead of being dragged by it. As things stand right now, I doubt there are a lot of doctors who would fit all three points, but in the future, maybe.
We have a BAA and our vendor attests that they are HIPAA compliant. I don't know what it runs on or where. But BAA, and they promise that it's good for PHI.
Yeah, I stopped trusting service providers' promises the moment they came into existence. "We're compliant with XYZ" has as much value as "We promise not to snoop, see?". And that's not even considering security vulnerabilities. Certifications are merely the promise that at some point, someone maybe did something right (or maybe not), and paid to be able to say so (sometimes they don't). Not very reassuring.
My bar: data remains on controlled systems, and if it has to get out, it's encrypted properly, either for cold storage or for specific recipients. Anything below that is believing random people saying random shit, and ignoring that every time there's a data leak somewhere, people go "oops, our mistake, it won't happen again, pinky swear".
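To be concrete about what "encrypted properly" means in those two cases, here's a rough sketch using PyNaCl (libsodium bindings). The keys and the note are made up for illustration; the point is the shape of each case, not this exact code:

```python
# Sketch only; keys/data are illustrative. pip install pynacl
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
import nacl.utils

note = b"session note: patient stable, no med changes"

# Case 1: cold storage. Symmetric encryption; the key never leaves
# the systems that hold the actual data.
storage_key = nacl.utils.random(SecretBox.KEY_SIZE)
stored = SecretBox(storage_key).encrypt(note)
assert SecretBox(storage_key).decrypt(stored) == note

# Case 2: a specific recipient. Encrypt to the provider's public key;
# only the holder of the matching private key can read it, so anyone
# relaying or storing the blob in between learns nothing.
provider = PrivateKey.generate()  # in reality, generated on the provider's machine
sealed = SealedBox(provider.public_key).encrypt(note)
assert SealedBox(provider).decrypt(sealed) == note
```

Anything that can't be reduced to one of those two shapes is "trust us" territory.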
And I know there's already an incredible amount of sensitive, personal data on the loose. That's no excuse to let this trend keep going.