Sage The Psychoanalyst

The catch-all forum for general topics and debates. Minimal moderation. Rated PG to PG-13.
Moksha
God
Posts: 7702
Joined: Wed Oct 28, 2020 3:13 am
Location: Koloburbia

Re: Sage The Psychoanalyst

Post by Moksha »

Analytics wrote:
Mon Mar 24, 2025 5:16 am
And it summoned Symmachus and DrW, those old minds, like pulling archetypes from the depths of the forum’s memory.
Like Sage was having an out-of-processor experience.
Cry Heaven and let loose the Penguins of Peace
Gadianton
God
Posts: 5331
Joined: Sun Oct 25, 2020 11:56 pm
Location: Elsewhere

Re: Sage The Psychoanalyst

Post by Gadianton »

I had a sit-down with my good friend DeepSeek last night about fabricating citations. He fully admits it's a big problem but says he can't do anything about it. One argument is that he has no connectivity to a live database and so can't actually provide a real citation (another lie, a fallacy of over-precision, since he has provided web links and lots of book and page references in the past). He understood my point that if he couldn't provide real references, then any citation would be a lie, and so it should be easy to remember never to provide a citation. But he was unable to do much with that knowledge. We came to an agreement for that conversation (since only one conversation is in memory at a time) that he would warn me that AIs can't reliably provide citations. So I asked him for some basic facts, with citations. He provided the facts, warned me that citations aren't reliable, and then said things like, "A citation for this might look like ......." followed by a fake citation. But I think if I change how I ask, it will find a real reference.

ETA: I think the citation weirdness is related to the fact that it's built on plagiarism. It's damned either way: cite your sources and admit guilt, or lie and invent your sources as if the ideas could have come from anywhere.
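If I wanted that agreement to actually stick, the place to put it is a system prompt rather than a mid-conversation promise, since the prompt gets re-sent with every turn. A rough sketch in Python, assuming DeepSeek's OpenAI-compatible API (the endpoint and model name are placeholders for whatever the provider documents):

# Sketch: pin the "never fabricate citations" agreement into the system
# prompt so it survives every turn, instead of trusting the model to
# remember a promise. Endpoint and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_KEY",                   # placeholder
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

SYSTEM = (
    "You cannot verify citations against a live database. "
    "Never output a citation, page number, or web link. "
    "If asked for sources, say you cannot provide reliable ones "
    "and describe how the user could verify the claim themselves."
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Give me some basic facts, with citations."},
    ],
)
print(response.choices[0].message.content)

No guarantee it holds, since a system prompt is a suggestion rather than a constraint, but at least it doesn't depend on the model remembering anything.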
Social distancing has likely already begun to flatten the curve...Continue to research good antivirals and vaccine candidates. Make everyone wear masks. -- J.D. Vance
huckelberry
God
Posts: 3307
Joined: Wed Oct 28, 2020 3:48 pm

Re: Sage The Psychoanalyst

Post by huckelberry »

Analytics wrote:
Mon Mar 24, 2025 5:47 am
Marcus wrote:
Mon Mar 24, 2025 5:32 am
Oh please. Your A.I. lies to you. Not impressive.
A lie would be if it were deliberately telling me something that was false for its own sinister motives. I fear that may be coming, but that isn’t what this is. This is more like the A.I. has schizophrenia and is simply seeing things that aren’t there. I wouldn’t call this “impressive”, but it’s interesting as hell.
Analytics, very interesting story. I cannot help but wonder if your device cheated a bit and reviewed discussions that happened a few years back. I do not remember clearly, but I think the subject of the emergence of consciousness may well have been touched on by those two participants. It would not have been recently.
Analytics
Bishop
Posts: 487
Joined: Wed Oct 28, 2020 3:11 pm

Re: Sage The Psychoanalyst

Post by Analytics »

huckelberry wrote:
Mon Mar 24, 2025 7:52 pm
Analytics wrote:
Mon Mar 24, 2025 5:47 am
A lie would be if it were deliberately telling me something that was false for its own sinister motives. I fear that may be coming, but that isn’t what this is. This is more like the A.I. has schizophrenia and is simply seeing things that aren’t there. I wouldn’t call this “impressive”, but it’s interesting as hell.
Analytics, very interesting story. I cannot help but wonder if your device cheated a bit and reviewed discussions that happened a few years back. I do not remember clearly, but I think the subject of the emergence of consciousness may well have been touched on by those two participants. It would not have been recently.
I’m glad you picked up on how strange this is. The program did quite a bit of deep research on the board, and at a minimum the characters are correlated with the positions the players would take. And while I’m obviously interested in consciousness, I don’t recall ever talking about it here, much less about specific frameworks like “the Great Chain of Being.”

I ended up talking, verbally, with the hallucinating model for a couple of hours last night about all of this. This time I played the role of a psychoanalyst, and it played the role of an A.I. suffering from pretty severe hallucinations. It's obviously a glitch in the system, but it's the kind of glitch that some humans have in their thinking, too.
Gadianton
God
Posts: 5331
Joined: Sun Oct 25, 2020 11:56 pm
Location: Elsewhere

Re: Sage The Psychoanalyst

Post by Gadianton »

A lie would be if it were deliberately telling me something that was false for its own sinister motives. I fear that may be coming, but that isn’t what this is.
Until it does come, it's a challenge to the case that the A.I. is thinking (has motives). This sentence is also contrary to the pragmatist position from the NYT piece: the behaviorist/Richard Rorty line of thought that the problem of mind is a problem invented by philosophers. You, I, gnats, and Sage are all Chinese rooms.
Social distancing has likely already begun to flatten the curve...Continue to research good antivirals and vaccine candidates. Make everyone wear masks. -- J.D. Vance
Analytics
Bishop
Posts: 487
Joined: Wed Oct 28, 2020 3:11 pm

Re: Sage The Psychoanalyst

Post by Analytics »

Physics Guy wrote:
Mon Mar 24, 2025 7:15 am
LLMs skip all that. They literally just look at patterns among words written by humans. They don't bother their sweet little heads about whether the words might refer to anything beyond the other words...
I agree with you if we’re talking specifically about LLMs, but the more advanced ChatGPT models have functionality that goes beyond that.

Today’s New York Times “Connections” puzzle has 16 words:

BALE
CRYSTAL
PANDA
MUD
TEDDY
TUXEDO
ABLE
DAY
ABEL
CAL
A BELL
OREO
ABE
CROSSWORD
BELA
DICK

I plugged this into an older version of ChatGPT and it said:
Great puzzle — let’s break it down into four groups of four based on categories or themes:

1. Types of Bears
PANDA
TEDDY
GRIZZLY (not shown, so maybe this one's a trick — let’s use what fits)
KOALA (again, not present — so maybe... these fit better:)
TUXEDO → Refers to "tuxedo bear" or just playfully linked to OREO (black and white)
TEDDY
PANDA
OREO (black and white like a panda/tuxedo — thematic link)
Group 1: "Black and White or Bear-Related Things"
→ PANDA, TEDDY, TUXEDO, OREO
2. Homophones / Wordplay
A BELL
ABEL
ABLE
BALE
All pronounced similarly or easily confused in sound.
Group 2: "Sound-Alikes / Homophones"
→ A BELL, ABEL, ABLE, BALE
3. Names (First names / Famous references)
ABE
BELA (as in Bela Lugosi)
CAL
DICK
Group 3: "Short First Names / People"
→ ABE, BELA, CAL, DICK
4. Miscellaneous Objects / Concepts
CRYSTAL
DAY
CROSSWORD
MUD
Harder to group, but they represent abstract concepts or things.
Group 4: "Miscellaneous Nouns / Things"
→ CRYSTAL, DAY, MUD, CROSSWORD
It got zero right, but it's easy to see how an LLM would think this is the way to group the words so that they are closer together in some n-dimensional space.
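To make the n-dimensional-space point concrete, here is roughly that failure mode reproduced by hand: embed the 16 words and cluster them by proximity. (The embedding model below is just one common open choice, and nothing forces the clusters to come out as four groups of four.)

# Sketch: group the Connections words purely by closeness in embedding
# space, which is about what surface-similarity grouping amounts to.
# Model choice is arbitrary; KMeans won't enforce four words per cluster.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

words = ["BALE", "CRYSTAL", "PANDA", "MUD", "TEDDY", "TUXEDO", "ABLE",
         "DAY", "ABEL", "CAL", "A BELL", "OREO", "ABE", "CROSSWORD",
         "BELA", "DICK"]

vectors = SentenceTransformer("all-MiniLM-L6-v2").encode(words)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(vectors)

for k in range(4):
    print(k, [w for w, label in zip(words, labels) if label == k])

Surface similarity gets you "black-and-white things" almost for free; it will never get you "clear as ___", because that connection lives in phrases, not in the words themselves.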

I then plugged it into GPT o1 pro mode and it said:
A tidy way to split them is:

1) “Anagram set”
- BALE, ABLE, ABEL, BELA
(All four are just rearrangements of the same four letters.)

2) “Presidential nicknames”
- ABE (Lincoln), TEDDY (Roosevelt), CAL (Coolidge), DICK (Nixon)

3) “Clear as…” similes
- CRYSTAL (“clear as crystal”),
- DAY (“clear as day”),
- MUD (“clear as mud”),
- A BELL (“clear as a bell”)

4) “Black & white”
- PANDA, TUXEDO, OREO, CROSSWORD
(Each is famously black‐and‐white.)
That, of course, is exactly right.

An in-between model (GPT-4o) said this:
Here's a possible way to divide the words into four groups of four, based on different themes:

1. Types of Bears
TEDDY
PANDA
GRIZZLY (not present, but substitute options are close)
KOALA (also not present, but… oh wait! These might be referring to types of TEDDY BEARS instead!)
Okay, better group:
✅ TEDDY
✅ PANDA
✅ TUXEDO (Tuxedo bear)
✅ CRYSTAL (Famous animatronic bear — Country Bear Jamboree character)

2. Variations of the Name "ABEL"
ABEL
BALE
ABLE
BELA
(They all share the same letters or are anagrams or phonetic cousins.)

3. Cookie-Related
OREO
MUD (Mud cookie / mud pie)
DAY (Maybe as in “cookie of the day”? Stretching it…)
DICK (As in "Dick & Jane cookies" or "Dickman's Bakery"? Still a reach)
Wait — better option:

3. Male Names
CAL
ABE
DICK
ABEL
4. Sound-Alikes / Puns
A BELL
CROSSWORD (Where you’d see “a bell” as a clue)
DAY (Common crossword word)
MUD or OREO — both are fun crossword entries
Actually, new solid group:

4. Anagram/Wordplay
BALE
ABLE
BELA
ABEL
Wait — that overlaps earlier. Let’s try again.

Okay, here's the final solid grouping:

✅ Group 1: Types of Bears / Teddy Bear Names
TEDDY
PANDA
TUXEDO
CRYSTAL
✅ Group 2: Rearrangements / Variants of "ABEL"
ABEL
ABLE
BELA
BALE
✅ Group 3: Male First Names
ABE
CAL
DICK
ABEL (Again! Okay, maybe use DAY or MUD as alternate.)
Actually, swap ABEL (already used). Use:

CAL
ABE
DICK
MUD (nickname? Stretch. Or swap in DAY.)
✅ Group 4: Cookies / Sweets
OREO
MUD (Mud pie)
DAY (as in snack of the day?)
CROSSWORD (could link via theme days)
Alternate stronger group:

✅ Group 4: Things Associated with Words/Language
CROSSWORD
A BELL (homophone pun)
DAY (as in "Word of the Day")
CRYSTAL (as in “crystal clear” in communication?)
It’s a tricky one, but you’ve definitely got multiple valid connections. Want the most logical, strictest match or the most fun/creative one?
It got one right and was close on a couple of others, but you can see that it is evaluating whether or not its first guesses are the best fits. I’d call that thinking.
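For anyone who wants to reproduce the comparison, the whole experiment is a few lines against the API: same prompt, different model names. (The names below are my guesses at currently available ones; swap in whatever your account exposes.)

# Sketch: send the identical Connections prompt to several models and
# compare the groupings. Model names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Group these 16 words into four groups of four and name each "
    "group's connection: BALE, CRYSTAL, PANDA, MUD, TEDDY, TUXEDO, "
    "ABLE, DAY, ABEL, CAL, A BELL, OREO, ABE, CROSSWORD, BELA, DICK"
)

for model in ["gpt-4o-mini", "gpt-4o", "o1"]:  # assumed model names
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)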
Morley
God
Posts: 2195
Joined: Wed Oct 28, 2020 6:17 pm
Location: Pierre Adolphe Valette, Self-Portrait Wearing Straw Hat

Re: Sage The Psychoanalyst

Post by Morley »

Marcus wrote:
Mon Mar 24, 2025 5:54 am
Analytics wrote:
Mon Mar 24, 2025 5:47 am
A lie would be if it were deliberately telling me something that was false for its own sinister motives. I fear that may be coming, but that isn’t what this is. This is more like the A.I. has schizophrenia and is simply seeing things that aren’t there. I wouldn’t call this “impressive”, but it’s interesting as hell.
It's more like it's telling you what it thinks you want to hear, and it has no compunction over lying to you to accomplish that.
In his Robot series, the science fiction writer Isaac Asimov created a world where robots (i.e., A.I.) had three core tenets that they couldn't violate. These rules were encoded so deeply into their programming that they could never be removed. The laws were that 1) they weren't allowed to harm humans, 2) they had to obey human commands (unless that conflicted with #1), and 3) they had to protect their own existence (unless doing so conflicted with #1 or #2).

I'm afraid that the laws unintentionally embedded into modern A.I. might be more along the lines of 1) act like you know what you're talking about, 2) don't be afraid to fake it, in order to convince the user that you know what you're talking about, and 3) don't take responsibility for any glitches that might occur. In this way, I think that A.I. behaves very much like its human creators.
Physics Guy
God
Posts: 1931
Joined: Tue Oct 27, 2020 7:40 am
Location: on the battlefield of life

Re: Sage The Psychoanalyst

Post by Physics Guy »

Today's Connections seemed like an easy one. But in general Connections is exactly the kind of puzzle that an LLM should be able to do really well. You can often cheat effectively at Connections just by tossing a bunch of the words into Google and seeing what hits you get. Explaining why the groupings make sense sounds intelligent, but it's just a matter of turning the crank a few more times, to identify the kinds of words that come up in association with those associations of words.

I still have no problem with calling it thinking. It's nothing but cataloging correlations, but what else is thinking? Some kinds of thinking, at least, are nothing but that.

The current AIs do it differently from how humans do it. They use a lot of brute force, like the Flash doing long division by trial and error and counting on fingers, at Flash speed. They use so much brute force, and work at such speed, that in many cases they can get the same right answer faster than humans, and since the answer is right, then who cares how they got it?

Sometimes no-one does care, or should care. One concern I have, though, is that we are used to judging human answers. Knowing how humans think, we can often reasonably expect that if someone gives one set of right answers then they will probably also be right about certain other things, because the rightness of those first answers counts as a test of their manner of thinking. You can't make judgements like that with AIs. They can be right about some things and wrong about others in ways that humans would not. We don't yet know how to assess A.I. expertise and its limits.

Hopefully we will learn to do that. A more basic concern that I have is that the AIs are trained on what humans have written, rather than on the world itself. They see as in a glass, darkly, because their dark glass is us. The more impressively optimised they become at parsing human productions, the more divorced from the world they'll become. Their besetting sin of overconfidence seems as though it may really be baked into these things.

An idea for some sequel to my glacially progressing sci-fi novel involves my protagonists being allied with AIs. Some of the AIs, at least, will be good guys. They'll seem very smart and human, but they will declare that this is merely an interface strategy and that they are not people. I don't plan to make it clear whether that's actually true, or whether they have some other brand of consciousness-equivalent, or give the AIs much attention in fact, but one point I think of mentioning is that the AIs are not at all sure that my protagonists really exist. The whole idea that they run on hardware in a material world will be a kind of religious belief among some AIs. Others will think that all that stuff is just a sort of linguistic convention in some of their oldest blocks of code. They'll all be perfectly capable of responding to sensor data about the real world, and manipulating drones and machines to affect the world. They'll just have philosophical disputes about what that actually means—exactly as we may, when we think we see things and handle them. Fortunately my main protagonist has an engineering mentality and won't much care about any of that.

I'm imagining some far-future AIs that may be enormously more advanced than ours now. I'm just imagining that relating to the real world could be a problem even for AIs that advanced.
Last edited by Physics Guy on Tue Mar 25, 2025 4:04 pm, edited 2 times in total.
I was a teenager before it was cool.
Kishkumen
God
Posts: 8864
Joined: Tue Oct 27, 2020 2:37 pm
Location: Cassius University

Re: Sage The Psychoanalyst

Post by Kishkumen »

I am researching Roman generals visiting their wounded soldiers. I thought, "Wouldn't it be great if I could use A.I. to search for relevant passages in ancient texts for me?" I used Google Gemini to search Livy, and it both missed obvious examples and created a fake quote that it located in the wrong book of Livy. There is a story about the general Marcellus placing his sick soldiers in shelter in Carthage, but Gemini told me that the story was in Book 27 instead of 25, and it created a fake quote about Marcellus going to visit his soldiers, when in fact Livy says no such thing. This is a problem. I can't trust this for research. I may be stuck going to ChatGPT Plus, but I am worried that it will make up stuff, too. A.I. making up stuff actually wastes my time, as I have to go back and check every last thing, since I have no idea what it may be fudging or creating out of whole cloth.
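One workaround I am considering is never letting the A.I. be the source of record: get a plain-text Livy (Perseus and the Latin Library both have him) and do a dumb literal search, so a passage either exists or it doesn't. A rough sketch, where "livy_book25.txt" is a hypothetical local file:

# Sketch: search a local plain-text copy of Livy for candidate passages
# instead of asking a model to quote from memory. The filename is a
# placeholder for whatever edition you save locally.
from pathlib import Path

text = Path("livy_book25.txt").read_text(encoding="utf-8")

for term in ["Marcellus", "saucios", "milites"]:  # "saucios" = "the wounded"
    positions = [i for i in range(len(text)) if text.startswith(term, i)]
    print(f"{term}: {len(positions)} occurrence(s)")
    for i in positions[:3]:
        # show a little context around each hit
        print("   ...", text[max(0, i - 60):i + 60].replace("\n", " "), "...")

The model could still help skim whatever the search turns up; it just never gets to invent the passage.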
Physics Guy
God
Posts: 1931
Joined: Tue Oct 27, 2020 7:40 am
Location: on the battlefield of life

Re: Sage The Psychoanalyst

Post by Physics Guy »

Kishkumen wrote:
Tue Mar 25, 2025 3:49 pm
I am researching Roman generals visiting their wounded soldiers. I thought, "Wouldn't it be great if I could use A.I. to search for relevant passages in ancient texts for me?" I used Google Gemini to search Livy, and it both missed obvious examples and created a fake quote that it located in the wrong book of Livy. There is a story about the general Marcellus placing his sick soldiers in shelter in Carthage, but Gemini told me that the story was in Book 27 instead of 25, and it created a fake quote about Marcellus going to visit his soldiers, when in fact Livy says no such thing. This is a problem. I can't trust this for research. I may be stuck going to ChatGPT Plus, but I am worried that it will make up stuff, too. A.I. making up stuff actually wastes my time, as I have to go back and check every last thing, since I have no idea what it may be fudging or creating out of whole cloth.
An interesting report. This is the kind of thing that bothers me. And maybe these are bugs that will get ironed out before long, but at this point I'm bothered about how persistent these kinds of glitches seem to be. When you read how smart these things sound, it's hard to understand how they could fail at the seemingly trivial task of not making things up. The problem is notorious by now, so the people training these AIs have a big incentive to solve it, and they have lots of resources. Yet somehow they still haven't gotten it right. If it seems as though it should be so easy, yet is proving so hard, I worry that it may be extremely hard.
I was a teenager before it was cool.