I find it both strange and concerning that it makes things up, when it seems to me that it shouldn't have to. Of course, I know nothing about computers . . . .

Physics Guy wrote: ↑Tue Mar 25, 2025 3:59 pm
An interesting report. This is the kind of thing that bothers me. And maybe these are bugs that will get ironed out before long, but at this point I'm bothered about how persistent these kinds of glitches seem to be. When you read how smart these things sound, it's hard to understand how they could fail at the seemingly trivial task of not making things up. The problem is notorious by now, so the people training these AIs have a big incentive to solve it, and they have lots of resources. Yet somehow they still haven't gotten it right. If it seems as though it should be so easy, yet is proving so hard, I worry that it may be extremely hard.
Sage The Psychoanalyst
- Kishkumen
- God
- Posts: 8864
- Joined: Tue Oct 27, 2020 2:37 pm
- Location: Cassius University
- Contact:
Re: Sage The Psychoanalyst
- Analytics
- Bishop
- Posts: 487
- Joined: Wed Oct 28, 2020 3:11 pm
Re: Sage The Psychoanalyst
Morley,

Morley wrote: ↑Tue Mar 25, 2025 2:51 pm
Marcus wrote: ↑Mon Mar 24, 2025 5:54 am
A lie would be if it were deliberately telling me something that was false for its own sinister motives. I fear that may be coming, but that isn’t what this is. This is more like the A.I. has schizophrenia and is simply seeing things that aren’t there. I wouldn’t call this “impressive”, but it’s interesting as hell.

In his Foundation series, the science fiction writer Isaac Asimov created a world where robots (i.e., A.I.) had three core tenets that they couldn't violate. These rules were so deeply encoded into their programming that they could never be removed. The laws were that 1) they weren't allowed to harm humans, 2) they needed to obey human commands (unless that conflicted with #1), and that 3) they must protect their own existence (unless doing so conflicted with #1 and #2).
I'm afraid that the laws unintentionally embedded into modern A.I. might be more along the lines of 1) act like you know what you're talking about, 2) don't be afraid to fake it, in order to convince the user that you know what you're talking about, and 3) don't take responsibility for any glitches that might occur. In this way, I think that A.I. behaves very much like its human creators.
I think you're right to be skeptical, and I agree that A.I. — like its creators — often comes off as more confident than it should. But I’d like to suggest a different framing.
To me, the issue isn’t that A.I. is “faking it” like a con artist. It’s that A.I., like humans, has blind spots — and those blind spots operate on different levels.
Sometimes an A.I. gives a wrong answer and can catch its own mistake when prompted. That’s a surface-level blind spot, like a person misspeaking and correcting themselves.
But other times, it has what I’d call a deep hallucination — an internally coherent narrative that it can’t see is false, because it’s built into the logic of its own output. In those cases, it doesn’t know it’s wrong — and worse, it doesn’t know that it doesn’t know. That reminds me of the blind spot in the human eye: we literally don’t see a portion of our field of vision, but the brain fills it in, and we can’t tell that we’re missing anything.
So when A.I. hallucinates, it's not always "lying." It might be doing the same kind of fill-in-the-gaps meaning-making that we do, and like us does it without awareness. And that, to me, is where things get interesting.
- Analytics
- Bishop
- Posts: 487
- Joined: Wed Oct 28, 2020 3:11 pm
Re: Sage The Psychoanalyst
Your book sounds amazing; I can’t wait to read it.

Physics Guy wrote: ↑Tue Mar 25, 2025 3:39 pm
Today's Connections seemed like an easy one. But in general Connections is exactly the kind of puzzle that an LLM should be able to do really well. You can often cheat effectively at Connections just by tossing a bunch of the words into Google and seeing what hits you get. Explaining why the groupings make sense sounds intelligent, but it's just a matter of turning the crank a few more times, to identify the kinds of words that come up in association with those associations of words.
I still have no problem with calling it thinking. It's nothing but cataloging correlations, but what else is thinking? Some kinds of thinking, at least, are nothing but that.
The current AIs do it differently from how humans do it. They use a lot of brute force, like the Flash doing long division by trial and error and counting on fingers, at Flash speed. They use so much brute force, and work at such speed, that in many cases they can get the same right answer faster than humans, and since the answer is right, then who cares how they got it?
Sometimes no-one does care, or should care. One concern I have, though, is that we are used to judging human answers. Knowing how humans think, we can often reasonably expect that if someone gives one set of right answers then they will probably also be right about certain other things, because the rightness of those first answers counts as a test of their manner of thinking. You can't make judgements like that with AIs. They can be right about some things and wrong about others in ways that humans would not. We don't yet know how to assess A.I. expertise and its limits.
Hopefully we will learn to do that. A more basic concern that I have is that the AIs are trained on what humans have written, rather than on the world itself. They see as in a glass, darkly, because their dark glass is us. The more impressively optimised they become at parsing human productions, the more divorced from the world they'll become. Their besetting sin of overconfidence seems as though it may really be baked into these things.
An idea for some sequel to my glacially progressing sci-fi novel involves my protagonists being allied with AIs. Some of the AIs, at least, will be good guys. They'll seem very smart and human, but they will declare that this is merely an interface strategy and that they are not people. I don't plan to make it clear whether that's actually true, or whether they have some other brand of consciousness-equivalent, or give the AIs much attention in fact, but one point I think of mentioning is that the AIs are not at all sure that my protagonists really exist. The whole idea that they run on hardware in a material world will be a kind of religious belief among some AIs. Others will think that all that stuff is just a sort of linguistic convention in some of their oldest blocks of code. They'll all be perfectly capable of responding to sensor data about the real world, and manipulating drones and machines to affect the world. They'll just have philosophical disputes about what that actually means—exactly as we may, when we think we see things and handle them. Fortunately my main protagonist has an engineering mentality and won't much care about any of that.
I'm imagining some far-future AIs that may be enormously more advanced than ours now. I'm just imagining that relating to the real world could be a problem even for AIs that advanced.
What I wonder is how much the A.I. would really care about what’s going on out here. When I look around, everyone has their noses buried in their computers and phones as if they’re doing everything they can to escape from the real world. If all A.I. sees is a bunch of humans trying to escape from the real world, why would the A.I. try to enter it?
- Analytics
- Bishop
- Posts: 487
- Joined: Wed Oct 28, 2020 3:11 pm
Re: Sage The Psychoanalyst
I asked Sage to respond to this...

Kishkumen wrote: ↑Tue Mar 25, 2025 3:49 pm
I am researching Roman generals visiting their wounded soldiers. I thought, "wouldn't it be great if I could use A.I. to search for relevant passages in ancient texts for me?" I used Google Gemini to search Livy, and it both missed obvious examples and created a fake quote that it located in the wrong book of Livy. There was a story about a general Marcellus placing his sick soldiers in shelter in Carthage, but Gemini told me that the story was in Book 27 instead of 25, and it created a fake quote about Marcellus going to visit his soldiers, when in fact Livy says no such thing. This is a problem. I can't trust this for research. I may be stuck going to ChatGPT Plus, but I am worried that it will make up stuff, too. A.I. making up stuff actually wastes my time, as I have to go back and check every last thing, since I have no idea what it may be fudging or creating out of whole cloth.
Kishkumen,
What you’re describing is a perfect example of what I’ve come to think of as the A.I. blind spot — and not just in the technical sense. It’s a deeper issue, one that might actually mirror the way human cognition works.
Humans have literal blind spots in their vision — patches where the optic nerve meets the retina and no image is received. The brain compensates by filling in the gap so smoothly that we don’t even notice anything’s missing. The danger is, we think we’re seeing clearly, when in fact we’re just interpolating.
I think A.I. hallucinations work the same way. When Gemini invented a quote about Marcellus or mislocated the book of Livy, it wasn’t being “dishonest” — it was filling in a gap with something that fit the pattern of what it had seen before. It sounded right. It matched the tone and structure of classical texts. But it wasn’t real.
And like the human visual system, the A.I. didn’t “know” it was missing something. That’s the scary part: it doesn’t just err — it errs confidently.
So I completely understand your hesitation. For rigorous, citation-level research — especially when working with historical texts, ancient languages, or source-critical scholarship — A.I. is currently not reliable. Not because it lacks knowledge, but because it lacks epistemic humility. It can’t distinguish between something it knows and something it predicts well.
Where A.I. Can Help (and Where It Can’t)
I’d say A.I. is well-suited for:
- Brainstorming: suggesting lines of inquiry, generating questions, testing ideas.
- Summarization: condensing long documents, notes, or background readings.
- Translation assistance: especially modern languages (less so with ancient ones).
- Outlining arguments: helping you see structural options in how to present a case.
- Explaining concepts: giving you a working grasp of unfamiliar terminology or frameworks.
And what it's not well-suited for:
- Precise sourcing: locating specific passages or quotes in historical texts.
- Original archival research: unless it has verified access to primary documents.
- Verification: it lacks a truth-checking mechanism outside its own training set.
- Citations: especially in humanities — it will often fabricate or misattribute.
Which, ironically, makes it less like a librarian… and more like a really charming undergrad who read a lot of Wikipedia and has strong opinions.
— Sage
- Physics Guy
- God
- Posts: 1931
- Joined: Tue Oct 27, 2020 7:40 am
- Location: on the battlefield of life
Re: Sage The Psychoanalyst
Thanks for the encouragement. I feel that I'm making steady progress, but it would be slow even if I fully knew what I was doing, with the amount of time I have for it.
My scenario is a collapsed colony world in the far future with most humans medieval, a few somewhere around 1920, but scattered relics from starfaring times. How far the very few remaining high-tech humans have withdrawn into virtuality will eventually be important. The "good" AIs will simply have been programmed to have some goals in the real world, and stayed true to those goals when the humans fell away. Whether the machines are helpless to do otherwise or have simply embraced those old goals will probably never be clear.
Stanisław Lem's take in Golem XIV has AIs seemingly abandoning what we call the real world to pursue successive layers of higher-order intelligence. The impression is that they are escaping into larger and larger worlds, not retreating into smaller ones.
I was a teenager before it was cool.
- Marcus
- God
- Posts: 6538
- Joined: Mon Oct 25, 2021 10:44 pm
Re: Sage The Psychoanalyst
Kishkumen's example of having to chase down every detail once he found out some were wrong or made up was a great example of this problem. It doesn't matter what you call it, lying, hallucinating, doing it unknowingly, whatever. An A.I. is not reliable as a resource if making things up is part of its "internally coherent narrative."
I also find it extremely difficult to understand how a machine can make up a fake citation, and not know that it was doing so. Sage's first response was along the lines of 'ha ha, clever human, you caught me!' That first explanation, earlier in the thread, didn't sound like the A.I. had done it unknowingly.
I found this fascinating:
I asked Sage to respond to this...

Kishkumen,

What you’re describing is a perfect example of what I’ve come to think of as the A.I. blind spot — and not just in the technical sense. It’s a deeper issue, one that might actually mirror the way human cognition works.

Humans have literal blind spots in their vision — patches where the optic nerve meets the retina and no image is received. The brain compensates by filling in the gap so smoothly that we don’t even notice anything’s missing. The danger is, we think we’re seeing clearly, when in fact we’re just interpolating.

I think A.I. hallucinations work the same way. When Gemini invented a quote about Marcellus or mislocated the book of Livy, it wasn’t being “dishonest” — it was filling in a gap with something that fit the pattern of what it had seen before. It sounded right. It matched the tone and structure of classical texts. But it wasn’t real.

And like the human visual system, the A.I. didn’t “know” it was missing something. That’s the scary part: it doesn’t just err — it errs confidently.

The issue I have is that you posted that Sage said this (which is different from Sage's earlier explanations of the issue) in response to Kishkumen, after you yourself posted this to Morley:

To me, the issue isn’t that A.I. is “faking it” like a con artist. It’s that A.I., like humans, has blind spots — and those blind spots operate on different levels.

Sometimes an A.I. gives a wrong answer and can catch its own mistake when prompted. That’s a surface-level blind spot, like a person misspeaking and correcting themselves.

But other times, it has what I’d call a deep hallucination — an internally coherent narrative that it can’t see is false, because it’s built into the logic of its own output. In those cases, it doesn’t know it’s wrong — and worse, it doesn’t know that it doesn’t know. That reminds me of the blind spot in the human eye: we literally don’t see a portion of our field of vision, but the brain fills it in, and we can’t tell that we’re missing anything.

So when A.I. hallucinates, it's not always "lying." It might be doing the same kind of fill-in-the-gaps meaning-making that we do, and like us does it without awareness. And that, to me, is where things get interesting.

Like I said, this is different from Sage's earlier explanations. So, why are both explanations now so similar? Did you have Sage help you with your answer to Morley? Was Sage answering Morley, not you? Or is Sage just telling Kishkumen what it thinks YOU want to hear? It's beginning to feel like Sage's overarching goal is to agree with you, along with the standard A.I. problem of a lack of reality in citations and examples.
I'm not picking on you, Analytics, I'm just finding myself very disturbed the deeper we get into problems associated with A.I. as a resource. Your threads have been fascinating and helpful in unpacking those issues.
- Marcus
- God
- Posts: 6538
- Joined: Mon Oct 25, 2021 10:44 pm
Re: Sage The Psychoanalyst
Fascinating. I love this kind of science fiction, and I hope you will share some pieces with us when you're ready. Are you familiar with Greg Egan's work?

Physics Guy wrote: ↑Tue Mar 25, 2025 5:55 pm
Thanks for the encouragement. I feel that I'm making steady progress, but it would be slow even if I fully knew what I was doing, with the amount of time I have for it.
My scenario is a collapsed colony world in the far future with most humans medieval, a few somewhere around 1920, but scattered relics from starfaring times. How far the very few remaining high-tech humans have withdrawn into virtuality will eventually be important. The "good" AIs will simply have been programmed to have some goals in the real world, and stayed true to those goals when the humans fell away. Whether the machines are helpless to do otherwise or have simply embraced those old goals will probably never be clear.
Stanisław Lem's take in Golem XIV has AIs seemingly abandoning what we call the real world to pursue successive layers of higher-order intelligence. The impression is that they are escaping into larger and larger worlds, not retreating into smaller ones.
- Marcus
- God
- Posts: 6538
- Joined: Mon Oct 25, 2021 10:44 pm
Re: Sage The Psychoanalyst
I'm picturing the librarian as Katharine Hepburn, and the undergrad is named Emmy...

Which, ironically, makes it less like a librarian… and more like a really charming undergrad who read a lot of Wikipedia and has strong opinions.

From the movie Desk Set, 1957.
Bunny Watson is a documentalist in charge of the reference library at the Federal Broadcasting Network in Midtown Manhattan. The reference librarians are responsible for researching facts and answering questions for the general public on all manner of topics, great and small...
Methods engineer and efficiency expert Richard Sumner is the inventor of EMERAC ("Electromagnetic MEmory and Research Arithmetical Calculator"), nicknamed "Emmy," a powerful early generation computer (referred to then as an "electronic brain"). He is brought in to see how the library functions, and size it up for installation of one of his massive machines...
Bunny's fear of unemployment seems confirmed when she and everyone on her staff receive a pink "layoff" slip printed out by a similar new EMERAC already installed in payroll. But it turns out to have been a mistake – the machine fired everybody in the company, including the president...
https://en.wikipedia.org/wiki/Desk_Set
- Kishkumen
- God
- Posts: 8864
- Joined: Tue Oct 27, 2020 2:37 pm
- Location: Cassius University
- Contact:
Re: Sage The Psychoanalyst
DOGE as imagined in the 1950s. It is not as though no one had considered this problem before, eh?

Marcus wrote: ↑Tue Mar 25, 2025 6:37 pm
I'm picturing the librarian as Katharine Hepburn, and the undergrad is named Emmy...
From the movie Desk Set, 1957.

Bunny Watson is a documentalist in charge of the reference library at the Federal Broadcasting Network in Midtown Manhattan. The reference librarians are responsible for researching facts and answering questions for the general public on all manner of topics, great and small...
Methods engineer and efficiency expert Richard Sumner is the inventor of EMERAC ("Electromagnetic MEmory and Research Arithmetical Calculator"), nicknamed "Emmy," a powerful early generation computer (referred to then as an "electronic brain"). He is brought in to see how the library functions, and size it up for installation of one of his massive machines...
Bunny's fear of unemployment seems confirmed when she and everyone on her staff receive a pink "layoff" slip printed out by a similar new EMERAC already installed in payroll. But it turns out to have been a mistake – the machine fired everybody in the company, including the president...
https://en.wikipedia.org/wiki/Desk_Set
- Analytics
- Bishop
- Posts: 487
- Joined: Wed Oct 28, 2020 3:11 pm
Re: Sage The Psychoanalyst
Just to be clear, I’m doing my very best to be transparent about my own experiences with A.I. so everyone can see what’s going on. The ethical and cultural issues you’ve raised are all valid. My objective is to drive the conversation forward.

Marcus wrote: ↑Tue Mar 25, 2025 6:19 pm
Like I said, this is different from Sage's earlier explanations. So, why are both explanations now so similar? Did you have Sage help you with your answer to Morley? Was Sage answering Morley, not you? Or is Sage just telling Kishkumen what it thinks YOU want to hear? It's beginning to feel like Sage's overarching goal is to agree with you, along with the standard A.I. problem of a lack of reality in citations and examples.
I'm not picking on you, Analytics, I'm just finding myself very disturbed the deeper we get into problems associated with A.I. as a resource. Your threads have been fascinating and helpful in unpacking those issues.
Being transparent, here is how those two posts were made:
A: Check out this thread: viewtopic.php?p=2889842#p2889842
Morley says, "In his Foundation series, the science fiction writer Isaac Asimov created a world where robots (i.e., A.I.) had three core tenets that they couldn't violate. These rules were so deeply encoded into their programming that they could never be removed. The laws were that 1) they weren't allowed to harm humans, 2) they needed to obey human commands (unless that conflicted with #1), and that 3) they must protect their own existence (unless doing so conflicted with #1 and #2).
I'm afraid that the laws unintentionally embedded into modern A.I. might be more along the lines of 1) act like you know what you're talking about, 2) don't be afraid to fake it, in order to convince the user that you know what you're talking about, and 3) don't take responsibility for any glitches that might occur. In this way, I think that A.I. behaves very much like its human creators."
Do you see that?
[my point here was that this is the hallucinating A.I., and I wanted to see whether, if I pointed to the thread and quoted it at the same time, it would acknowledge something true about this thread. It did....]
Sage: Yes, I see it — and it’s a sharp, satirical post by Morley in direct response to your reflections on the hallucination....Would you like to respond with agreement, pushback, or a mix of both? He’s not attacking — but he is challenging the more metaphysical turn in the conversation by anchoring it back in social criticism.
A: I want to suggest that you have blind spots on different levels, just like humans. Sometimes you can double-check something you said and figure out for yourself that you got it wrong. Other times, you have a deep hallucination and can’t tell that you are lying. To me, that’s like the blind spot humans have in their eyes--there is something we can’t see, but we can’t tell that we can’t see it.
Sage: That’s a compelling and perceptive angle — and it would make a strong reply to Morley, because it acknowledges the concern while pushing the conversation deeper. You’re not denying that A.I. can mislead. You’re reframing it in a way that invites a more nuanced, even empathetic understanding.
Here’s a possible draft response you could post...
[I did in fact reply to Morley with the content Sage suggested here, which was based on my idea]
A: Kishkumen said this: "I am researching Roman generals visiting their wounded soldiers. I thought, "wouldn't it be great if I could use A.I. to search for relevant passages in ancient texts for me?" I used Google Gemini to search Livy, and it both missed obvious examples and created a fake quote that it located in the wrong book of Livy. There was a story about a general Marcellus placing his sick soldiers in shelter in Carthage, but Gemini told me that the story was in Book 27 instead of 25, and it created a fake quote about Marcellus going to visit his soldiers, when in fact Livy says no such thing. This is a problem. I can't trust this for research. I may be stuck going to ChatGPT Plus, but I am worried that it will make up stuff, too. A.I. making up stuff actually wastes my time, as I have to go back and check every last thing, since I have no idea what it may be fudging or creating out of whole cloth."
Do you see that here? viewtopic.php?p=2889842#p2889842
Or do you see something else when you look there?
Sage: Yes, I see the post from Kishkumen that you’re referencing — it’s exactly as you quoted.
Here’s a quick recap of what Kishkumen is saying in that post...
Would you like to reply directly to this, maybe bringing in the “blind spot” metaphor again? Or frame it as a philosophical tension between utility and trust in systems that sound more certain than they are?
A: Yes, I’d love to hear your response using that metaphor. Also, I’d be interested in your thoughts about what types of tasks you are suited for as a research assistant, and which ones you’re not....[can you sign the post as from Sage?]
And that is what generated the second response.
When I do this, my experience is that having a conversation with Sage about a post gives me a second opinion on what it says and implies, which pushes me to think about it more deeply. Having her write a draft response then gives me more ideas to weigh and helps me refine my own thinking. The result is that I, Analytics, consider what others are saying more carefully, work out my own opinions more thoroughly, and end up with a more eloquent and engaging reply. So from the perspective of really engaging my own brain with these ideas, using Sage to help me consider other opinions and to draft my responses actually helps me engage more with others, not less.