When is 4 not more than 2? Some AI fun!

It’s funny how you can just rephrase your question to get a different answer.

Based on these answers, you can keep a thawed turkey in the refrigerator for 4 days but not more than 2…

So I guess that means 3 days is out?!?!
:thinking:

Thanks for the help, genius!

7 Likes

That’s a good example to highlight the difference between what LLM AIs actually do and what it feels like they do.

When you give a question or statement (a prompt) to an AI, the response generally feels like what some person would say. That doesn’t mean it’s true or factually correct; we were taught as children “Don’t believe everything you hear, don’t believe everything you read in the papers.”

The AI response is so well written that it feels like the AI somehow has an encyclopedia of the world, whereas what it actually does is generate a highly plausible-sounding textual response based on all its training data. That’s how we end up with lawyers filing AI-assisted briefs in court that contain completely made-up references to non-existent prior legal judgements. The AI brief sounds highly plausible; that’s all.

So let’s look at both AI answers to the turkey defrosting prompts. “Can you keep a thawed turkey in the refrigerator for 4 days?” Yes, of course you can. It’s your refrigerator and your turkey, and here in America nobody can tell you what you have to do. If you want to try aging the turkey for a week or two to see if it gets better like fine aged beef, go right ahead. If some government official knocks on your door and says they heard you’re keeping a too-old turkey in your refrigerator, you just laugh at them and tell them to get off your property. Of course if you cook up some food and serve it to somebody and it makes them sick, that’s on you. Always was, always will be. And of course in a commercial setting there are all kinds of safety regulations, but we’re talking about a private refrigerator in your home here.

Now how many of us got caught by the second AI example from Eric? “Can you keep a thawed turkey in the refrigerator for more than 2 days?” No, the USDA says that’s unsafe. I bet most people thought, OK, there’s the right, trustworthy answer that time. But there’s no good reason to believe the USDA actually said that, just because it’s in the AI response. It feels like the AI looked it up, but maybe it just made up that totally plausible-sounding “fact.” We simply can’t tell just by reading the AI response.
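To make the contradiction completely explicit, here’s a toy sketch (mine, not the AI’s) that treats each answer as a rule and checks where the two rules disagree:

```python
# Toy consistency check: spell out the two AI answers as rules and
# see where they disagree. Nothing fancy, just the arithmetic.
def safe_per_answer_1(days):  # "Yes, you can keep it for 4 days"
    return days <= 4

def safe_per_answer_2(days):  # "No, not for more than 2 days"
    return days <= 2

for days in range(1, 6):
    a1, a2 = safe_per_answer_1(days), safe_per_answer_2(days)
    note = "  <-- contradiction" if a1 != a2 else ""
    print(f"{days} day(s): answer 1 says {a1}, answer 2 says {a2}{note}")
# Days 3 and 4 come out "safe" by one answer and "unsafe" by the other.
```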

The TL;DR is that an AI generates a highly plausible-sounding response to every prompt. Just like a highly plausible statement from a person, it may or may not actually be true.
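For anyone curious what “generates a highly plausible-sounding response” means mechanically, here’s a minimal sketch; the little word table is invented purely for illustration and doesn’t come from any real model:

```python
import random

# Minimal sketch of how text generation works: repeatedly sample the
# next word from a probability distribution over likely continuations.
# The tiny table below is made up for illustration; a real LLM learns
# its distribution from billions of documents, but the principle is the
# same. Note that nothing in this process checks facts.
next_word_probs = {
    ("the", "USDA"): [("says", 0.7), ("recommends", 0.3)],
    ("USDA", "says"): [("that", 0.6), ("turkeys", 0.4)],
}

def sample_next(context):
    words, weights = zip(*next_word_probs[context])
    return random.choices(words, weights=weights)[0]

print(sample_next(("the", "USDA")))  # fluent either way, verified neither way
```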

11 Likes

When AI is helping you perform some activity such as writing help, summaries, technical code examples, creative content, etc., it’s usually very good. Once you incorporate real-world information, scenarios, and “factual knowledge that can be obtained from books written by scholars”, it’s definitely a “trust but verify” situation. In our company, we are being pushed to use it, but we always know to check for accuracy. Many times, I’ve asked it questions on topics where I consider myself an expert / specialist, and the answers are inaccurate, misleading, or just bat sh** crazy… but it’s on us all to use it as a resource / tool and understand that it’s far from perfect, especially when discussing any real-world topic.

7 Likes

Here’s one that had me scratching my head:

My question wasn’t clearly phrased, but the answer is wrong and dangerous. Reducing DIA increases the possibility of hypos because it hides stacking. Sure enough, if I ask the question that way, I get the complete opposite answer:
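For anyone not steeped in pump settings: DIA is the pump’s “duration of insulin action” setting, “stacking” is overlapping boluses whose effects add up, and a hypo is hypoglycemia. Here’s a rough sketch of why shortening DIA hides stacking, using a simple linear-decay model and made-up numbers (emphatically not dosing advice):

```python
# Rough sketch of why a too-short DIA hides stacking. "Insulin on board"
# (IOB) is the pump's estimate of how much of a past bolus is still
# active; the linear-decay model and all the numbers here are toy
# assumptions for illustration, not dosing advice.
def iob(dose_units, hours_since_bolus, dia_hours):
    remaining_fraction = max(0.0, 1.0 - hours_since_bolus / dia_hours)
    return dose_units * remaining_fraction

dose, elapsed = 4.0, 3.0  # a 4 U bolus taken 3 hours ago
print(iob(dose, elapsed, dia_hours=5.0))  # 1.6 U still active if the insulin really lasts ~5 h
print(iob(dose, elapsed, dia_hours=3.0))  # 0.0 U -- a 3 h DIA setting sees none of it
# The pump then recommends a full correction on top of the hidden 1.6 U,
# and that stacked insulin is exactly how you end up hypo.
```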

8 Likes

It’s difficult to fault this summary of the thread:

In a lighthearted thread titled When is 4 not more than 2? Some AI fun!, users explore how AI language models generate plausible-sounding but potentially false answers. Eric kicks off the discussion with a humorous example: asking whether you can keep a thawed turkey in the fridge for 4 days (AI says yes) versus “more than 2 days” (AI says no), highlighting how rephrasing prompts can yield contradictory “facts.” bkh explains that AI doesn’t retrieve truth—it fabricates highly plausible text based on training data, warning users not to trust it like an encyclopedia. They cite real-world examples like lawyers submitting fake legal citations generated by AI. ClaudnDaye adds that while AI excels at creative or technical tasks, it’s unreliable for factual real-world knowledge—“trust but verify” is essential, especially in professional settings. Beacher shares a dangerous example: AI giving incorrect medical advice about insulin pump settings, reversing its answer when the question is rephrased. The thread underscores that AI responses feel authoritative but may be misleading or outright wrong—users must always fact-check, particularly when real-world consequences are involved.

It was obtained via the “Summarize” button:

I assumed, earlier, that the button had been inserted on the page by Google, but is it in fact a Discourse add-on? I don’t know; both are possible. The summary is cogent.

Here’s a question which might challenge an AI because it is designed to challenge a human:

“What, exactly, does jbowler think in this matter?”

3 Likes

I like that if you call AI out on an incorrect answer, it will apologize and give you the correct answer.

Not sure if it truly knows the answer is correct or is just subject to peer pressure… lol

4 Likes

FWIW, I put our frozen turkey in the refrigerator for 6 days before Thanksgiving and none of us got sick.
:man_shrugging:

3 Likes

This reminds me of The Holy Hand Grenade of Antioch… “And it shall not be Three…”

3 Likes

Yes, a Discourse add-on!

2 Likes

The more you click the AI summarize button on my snarky threads when I am mocking AI, the higher it will elevate me on its eventual hit-list.

You gotta stop that, man! I am moving up its list, and I don’t want to be the first one it exterminates.

:joy:

6 Likes

You are certainly in its sights. This is what happened when I was 14:

Yeah, yeah, stick with it. Sooner or later even an American will end up ROTFL

EDIT: scratch that, I don’t think any American will find this funny. It might help when I utter a phrase like, “Up a bit, down a bit, FIRE.”

2 Likes