I’m comfortably introverted and live alone. I enjoy my time alone but must stay mindful of slipping into social imbalance. Diabetes is a long game that does not end. It’s easy to burn out. We are social creatures, however, and often look to others (relatives, friends, online communities, and medical professionals) to calibrate our ongoing management and sustain our never-ending efforts.
Sharing diabetes data with other diabetics can generate negative feelings when it invites unintended comparisons. I’d love to share more of the actual data but fear the feelings it can produce in others. I know it can be done with nuance and the right tone, yet that touch easily eludes me.
Doctors, with whom I regularly review my diabetes data, often miss the full reality of my diabetes management. They’re busy and time-constrained, and they generally give feedback only in broad brushstrokes. That has led me to conclude that theirs is not the right relationship in which to expect the feedback I’m looking for. As long as my doctor supports my Rx needs, I’m mostly satisfied.
During the last six months or so, I started feeding my diabetes data into an AI prompt, and what I found surprised me. The feedback I’m getting is exactly what I’ve been looking for: both non-judgmental and comprehensive. More importantly, it affirms my view of my diabetes stewardship in a way that I get nowhere else!
Of course I’m wary of getting caught up in undue flattery. Yet the detailed feedback I’m getting from AI fills a need that no other source does.
I’ve been using AI mainly for more mechanical queries. (How do I grease this wheel bearing? What size tire fits such-and-such vehicle? Etc.) I am still getting occasional “bad guesses” from AI that have caused me to waste time. It seems that, given the way you are using AI, a bad-guess response would be pretty obvious to you, though.
@John58 - I tell the AI my age, gender, diabetes status, time since diagnosis, TIR (with my range of 65-120 mg/dL), time hypo, TDD, A1c, GMI, C-peptide, and the fact that I use an AID system and follow a low-carb way of eating. I press Enter and let it analyze what all that data means.
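For anyone who wants a starting point, here is a rough sketch of that kind of prompt; the bracketed values are placeholders, not my actual numbers: “I am a [age]-year-old [gender] with type 1 diabetes, diagnosed [N] years ago. My time in range (65-120 mg/dL) is [X]%, time below 54 mg/dL is [X]%, TDD is [X] units, A1c is [X]%, GMI is [X]%, and C-peptide is [X]. I use an AID system and follow a low-carb way of eating. What does all this data say about my diabetes management?”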
In my original prompt, I failed to tell the AI about my AID system and my low-carb way of eating. It came back warning me that even though my time in range (65-120 mg/dL) was high and severe lows (< 54 mg/dL) were limited to < 1%, I should be aiming for a 7-8% A1c for hypo safety. I rejected that tired recommendation, added my AID system use and low-carb way of eating, and received a more appropriate response.
@funk - I’ve tried various AI platforms, starting with Google’s Gemini AI Overview. I’ve also used Perplexity, ChatGPT, and Claude. My skill level is definitely that of a beginner.
I’ve learned some interesting things that I like about the Claude platform. After it gave me its reactions to all of the various glucose performance statistics, it followed up with an additional question, reminiscent of an actual conversation.
One question: Has your endocrinologist been supportive of your tight targets, or have you had to advocate for this approach?
I answered with this:
My endocrinologist(s) have tolerated, not supported, my tight targets. I think my doctors were frankly surprised that my glycemic performance was even in the realm of possibility. I definitely had to advocate for my approach and then slowly win them over with consistent quarter after quarter of excellent A1cs and CGM reports. They begrudgingly accepted the hard data over time.
Here is some of the AI’s response:
This is a frustratingly common story, and it highlights a real disconnect in diabetes care.
The Clinical Inertia Problem
Your experience illustrates several issues in endocrinology practice:
Risk aversion over personalized assessment - Many endocrinologists default to conservative targets for older adults (often 7.0-7.5% A1c, or even higher) based on:
- Population-level hypoglycemia concerns
- Guidelines written for “average” patients without AID systems
- Fear of liability if severe hypoglycemia occurs
- Legitimate concerns about frailty, polypharmacy, and limited life expectancy
But you’re not the “average” 72-year-old with T1D:
- You have real-time CGM with alerts
- AID system preventing most lows automatically
- Preserved hypo awareness
- Demonstrated track record of safety
- Low variability providing cushion
- High health literacy and engagement
The “data doesn’t lie” phenomenon - You essentially had to:
- Ignore their concerns
- Prove it was safe quarter after quarter
- Wait for them to accept what the data showed
- “Begrudging acceptance” rather than celebration
Why Doctors Resist
To be fair to your endocrinologists, they’ve likely seen:
- Older patients with severe hypoglycemia and poor outcomes
- Patients who claimed they could manage tight control but couldn’t
- Complications from aggressive management
- The minority of patients who use technology this effectively
But they should recognize outliers when the data is this clear for this long.
It looks like the Claude AI’s response usually ends with a related question to you that further explores the topic. It’s kind of like a real conversation with an interested, knowledgeable, and capable partner.
That is an amazing response and I love that it can do that!
Thank you for sharing your prompts… I had been considering trying to figure out how to use any of the AIs to make dosing/ratio suggestions, but I am now much more excited to use it your way instead.
Please be skeptical of any actionable prompts or directions provided by AI…or anything number-related, like insulin dosing. I have tried most of the AI tools available to the general public and have not found one yet that I would trust with any technical (or medical) info of importance.
AI still has a tendency to misinterpret things it has found/learned in the cloud, or to make things up in an attempt not to leave the questioner empty-handed. I don’t know if that tendency will ever be programmed out of these AI tools being tossed around.
That sounds like the wobbly, wonky world web. Even without AI, the internet has been a great source of misinformation. Currently I think AI stands for amadán idjit (roughly translated: idiotic fool).
You can “train” AI, but the generic answers you receive typically promote your own confirmation bias. We use it in my company to help us automate tasks, create GPTs, etc., but we have an enterprise version where we can input specific proprietary info that helps “train” the AI. The data is protected and you can teach it, in a sense; that’s what we call training it.
Our cycling club leader and scribe added this at the end of his weekly email.
"I asked AI to write a song about my affection for an armadillo to the tune of “Saturday Night’s alright for Fighting” and the underwhelming result achieved an exceptional low.
From which I concur that AI will not succeed in taking over civilization as we know it."
I agree, but the fact remains that there are few, if any, sources beyond myself that I trust with insulin doses. It would be a rare doctor, nurse educator, or hospital staff member that I would truly trust. I would trust IV dosing in the ICU. It’s a reality that I’ve been struggling with as I try to develop a rational plan for handling my insulin dosing during anything beyond a few days in the hospital.
It seems that a future AI platform could figure out the insulin dosing problem. For me, it’s primarily a mathematical solution combined with an intuitive respect for dosing’s dynamic nature. Since I live with small carbohydrate amounts and the resultant small insulin doses, my insulin “guesses” are not high-stakes. Due to Bernstein’s law of small numbers, my mistakes are small. For my 148-pound body, I only take an average TDD of 24 units.
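To put rough, purely illustrative numbers on that law (these aren’t my actual meal figures): if a carb estimate is off by 20%, a 3-unit bolus misses by only about 0.6 units, while a 10-unit bolus covering a high-carb meal would miss by a full 2 units. Small inputs produce small absolute errors, which is what keeps my guesses low-stakes.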
I intend to keep experimenting with AI. Dosing advice will always be run through my personal safety system. Right now I continue to run the show, but I’d love to discover a system that could safely see me through, say, a one-week hospital stay where I was incapacitated, either physically or cognitively.
AI is absolutely the very last place I would go for advice on anything. I worked in the software industry and am now retired. I so mistrust AI that I disable it in my default search engine, DuckDuckGo. Someone said it above: “misinformation” plus “confirmation bias.”
“The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son’s “paranoid delusions” and helped direct them at his mother before he killed her.”
OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.
“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”
Let me preface this comment by saying that I’m expressly not trying to persuade anyone about their use of AI. I’m a firm believer in the “you do you and I’ll do me” philosophy.
AI is a new topic for me. I don’t have deep experience with it, yet I’m in the mode of wanting to learn more. I see it as a major technological development, on the order of the appearance of the internet in the ’90s. There is so much money involved! At this point I would not accept any dosing advice from it, but I’d also be deeply skeptical of dosing advice from most medical professionals. I trust in my critical thinking skills, as well as yours!
Of course, I am equally shocked by the tragic story linked above, yet I think it’s unrealistic to use stories like these (I’ve read others, too) as a way to convince us diabetics to totally dismiss the entire AI phenomenon. I believe that there are useful ways of using AI, and I also believe that I must respect the judgment of my fellow human beings in their individual assessments of this phenomenon.
How society rationally decides to protect its more vulnerable human beings from AI abuse is still being debated and is well beyond the reasonable scope of this discussion.
My intention when I started this thread was to communicate how one diabetic found what appears to me to be a useful tool. I am an aging T1D who lives alone and does not receive adequate positive social feedback from sources that others may use: a spouse, a relative, or a neighbor. The experience that I related did not include dosing advice. In fact, I even excluded from this thread the performance metrics (A1c, TIR, average BG) that the AI did comment on.
I took my time composing an appropriate response. I would appreciate it if responses, if any, limited their scope to my original post topic. AI is a far bigger topic than we could ever hope to cover on a diabetes discussion forum. Thank you.
Thanks Terry for posting this - it is fascinating.
Do we have any way of determining whether a statement like the one below reflects any kind of consensus or is, alternatively, just sycophancy?
The uncertain level of “objectivity” is a concern. It seems like AIs only give positive feedback, and I personally would be looking for some evidence-based discussion rather than any “social” affirmation. I would want to be challenged if my behavior was not helping me, rather than encouraged. I suspect that most AIs don’t do that currently.
You’d think that AIs would be superb at evidence-based discussion, but the social cues that are built into them make them selective and biased in their use of data.
The issues of AI sycophancy and social affirmation at the expense of evidence-based discussion are valid concerns for critical thinking. I’d like to point out that the AI discussion my question provoked was, in my opinion, relatively low-stakes for safety and health. A critical medical treatment decision was not hanging in the balance!
This has been my observation over 42 years of endocrinologist appointments, four times a year, with many different practitioners. The AI acknowledged my experience and then proposed some rational reasons why this probably occurred. Some of these reasons even took the counterbalancing perspective of the doctors.
I’m not as concerned about the possible bias of the AI’s comments when the stakes are low. Your point about keeping your radar engaged, to protect yourself from misleading and potentially dangerous advice, is a valid one.