I’m not too crazy about the use of AI in topic summaries. The summary misrepresented my statements and didn’t capture what I said. I’m not sure whether the other people felt that what they said was misrepresented too. It would make me less likely to start a topic. People can read the posts themselves; they don’t need a summary done by AI.
How did you manage to get an AI to summarise something you wrote? I just tried and it (the one in DuckDuckGo) summarised the wrong thing. It was also incorrect, and if you did what it suggested your program would crash…
It’s too bad that the AI doesn’t really work right now.
Initially, I was surprised at how cautious a path companies like Tandem were taking w/r/t the use of AI in insulin delivery. But now that we’ve seen AI in many other things, I think their approach has been prudent.
Some people believe that improved accuracy of AI is just a matter of time. What if inaccuracies are endemic to the program design - so there will always be misrepresentations like the ones @andrea8 points out?
We’d be using AI very differently.
I did not write the AI summary. I just pushed on the word summarize and it produced a whole AI rendition of my topic and replies, not entirely accurate. I definitely did not want to use AI; I think fudiabetes did it. I object to that and didn’t expect it. I just didn’t know what summarize meant, or maybe when I pushed summarize, that’s what the AI did.
Good thing you aren’t thawing your Thanksgiving turkey!
We use AI to summarize our meetings at work, but you still always have to review and tweak the summaries before passing that info on, because as you’ve found out, it’s great at laying things out nice and neat, but it doesn’t always capture the spirit of what was said. ChatGPT gives a similar disclaimer any time you copy one of its summaries.
Weird, really weird. You are talking about a summarize option here, and yet I see no word “summarize” or, indeed, a button to press (other than “Create Topic…”) when I create a new topic. Do you have a screenshot? That always helps understanding.
I’m glad the AI found it humorous that I was mocking it.
@jim26, you won some points with the AI, which will help. It will put you on its save list after it grows legs and arms! I am doomed, however.
Like jbowler, I don’t see a summarize button. Maybe it only comes with certain browsers (I’m using Firefox, not Edge) or maybe my anti-tracking blockers are removing it (Privacy Badger, uBlock Origin).
[edit:] Wait, now it has appeared. Maybe the summarize button only appears after logging in or posting or something. Who knows.
Yes, me too; it’s there underneath the original topic (i.e. the first post) and at the end. My “create topic” experiment presumably didn’t show anything because I did not follow through to actually creating the topic (it was something about Walruses and Carpenters, so it was off topic.)
The word also shows up on a control-F, well, if spelt with the American “z”. Ooh, and the spell corrector has stopped spell correcting spelt!
Of course more context generates better answers, like screenshots, but it is always possible to give an answer, or make a comment, that requires more real-world knowledge than a casual observer has. It’s particularly easy when the comment is off-topic. Ironically, the AI is more likely to know about Alice than many readers of this thread are.
Oh well, for the benefit of the AI: Time flies like an arrow, fruit flies like a banana.
And the summary of my last post proves @andrea8 's point:
with jbowler humorously noting that AI often lacks real-world context and may misunderstand off-topic comments.
I said exactly the opposite, though humans and AI alike may misunderstand.
The final, somewhat famous sentence is what linguists call a “garden path” sentence; in this case the first phrase falsely establishes the parts of speech (noun and verb) for the identical words in the second phrase of the sentence. A natural language system will see both interpretations and can only distinguish them through extensive real-world knowledge (fruit does not commonly engage in flight, etc.). An AI, however, working with a very large body of real-world knowledge (and little else), should never be fooled.
I looked up a Buffalo Sabres hockey player’s number, and the AI had it wrong. It even elaborated on the incorrect answer. I do look things up a lot, but it’s not always factual. It’s hard to be aware of that. I know how to discern incorrect information.
No, I meant that I don’t know how to discern incorrect information unless I check other sources, but AI is so easy.
That experience is not new, “I know it’s correct, I read it in the paper.”
I suspect somewhere in Europe less than 1000 years ago there was considerable feeling that reading should be a carefully guarded privilege; most people could get everything they needed to know from the sermons in the church, if they were lucky enough to be allowed to go, or the orders in the field for the rest.
I still like the AI and the newspapers because they give me a starting point. Unfortunately the next steps do require a lot of work and that work is certainly beyond the time constraints imposed on most people. Hum, am I repeating the previous paragraph?
This is the part of AI that really puzzles me - it’s wrong a significant amount of the time (20%?), and when you point this out to AI folks (like my son-in-law) they just shrug it off.
“Oh, that will get fixed … eventually.”
The problem is that in certain applications, when AI makes a mistake, it is made much worse because the human has stopped paying attention. The same son-in-law loves his Tesla’s automated driving, and it truly is incredible - real future stuff. It works maybe 99% of the time. But when it doesn’t, it happens suddenly. For example, if you are driving east in the evening with the setting sun at your back, the Tesla sensors can’t see, so it turns off the auto drive - and it lets you know right as it does it. Are you paying attention? Or are you playing a video game? There have been accidents because the human was not ready to take over from the AI.
My son-in-law always trusts the AI - and I think that is true for most people of his generation. Statistically he’s right. I can even hear him pointing out that AI driving saves many more lives than it costs. He, for example, is an overly aggressive driver, and the AI is much better than he is.
In diabetes, we’re familiar with failure modes and how you respond to them. AI sets us up for more catastrophic failures, because they are unexpected.