If a human puts in enough effort to digest what the A.I. writes, then the ideas become the human's own, and the A.I. is merely their source. People are allowed to use all kinds of sources. I hardly expect every discussion participant to be conducting their own experiments personally, or anything like that.
The problem these days, at least, is that what the A.I. writes is often slick enough that people don't digest it as much as they should. It looks good at first glance, so people post it.
Humans on their own also express plenty of erratic quick takes. When humans do that, though, they tend to be no more careful in their presentation than they are in their content, because grammar and register and logic and fact-checking all take the same kind of brain energy. So you can at least roughly triage substance by looking at form. And we do that. If something seems well expressed, we nudge our Bayesian needle about its content to the right. We'll at least think harder about it, judging that the investment of effort will be worthwhile. That's how authoritative tone works: that's what it is.
Triage by tone isn't a perfect strategy. It'll let you down sometimes with everyone, and with some people—I've called them BS artists—it does not work at all. With people like that, we all tend to revert to strategy B: don't bother thinking about what Fred just said, we know Fred, smile and nod. A big reason why accountability matters, in human discussions, is that people who get caught out on BS too much get labelled as BS artists. People stop taking them seriously, even if they stay friendly. It's like a shadow ban.
AIs are currently playing that game by different rules from humans. The way an A.I. is trained makes it much easier for it to learn how to make things sound nice than to understand anything. So triage by tone doesn't work with AIs at all. Currently they all seem to be erudite Jekylls with BS-artist Hydes that come out frequently and without any warning.
And that means that even if one only intends to use the A.I.'s text as a source, and to digest it thoroughly, digesting it properly is hard. You have to read with a lot more skepticism than you would apply to a human writer who sounded like that. That takes enough extra effort and time that it significantly cuts into the saving in effort and time that the A.I. can offer.
So I have three conclusions.
A) Blindly pasting whatever the A.I. says to your query is likely to produce stuff that isn't worth reading.
B) Thoroughly digesting what the A.I. says is just fine—it's like reading a book or taking a course, except custom-made for your particular question.
C) Quickly looking through what the A.I. writes, and nodding along, is apt to feel like B) when in fact it's a lot closer to A) than you think, because of how AIs currently write.
For example:
Analytics wrote: Sun Apr 20, 2025 7:40 pm
Accountability transfer. I don’t get reputational ulcers, but Analytics does. Everything I draft is a first cut that he vets—or he takes the public flogging. Skin, meet game.
... both of us standing here tomorrow to swallow crow if needed ...
The A.I. is trying to claim that the user who posts the A.I.'s text can provide all the accountability anyone needs, but it does so with an exaggeratedly technical term, "accountability transfer", that makes the obviously dubious move of shifting all responsibility onto the user sound like a serious concept. In fact "accountability transfer" is an oxymoron as soon as you think about it: accountability you can hand off to someone else was never really yours.
"Skin, meet game" is then dropping down from technicalese into a colloquial register, a rhetorical power move that usually signals authority. It signals authority for good reason: someone who can express the same thought in two quite different kinds of language is more likely to have a coherent thought to express, and less likely to be parroting something they know only by rote. AIs don't work like that, though. They can do rhetorical power moves all day without a thought in their heads. And actually, "Skin, meet game" warps the metaphor. Skin can't meet the game if it is in the game. The A.I. has put together the "skin in game" idiom and the "A, meet B" trope without understanding anything about skin, games, or meeting.
And then, finally, the A.I. sneaks in "both of us standing here tomorrow" as if the whole accountability challenge had been thoroughly rebutted, when in fact what the A.I. has actually said up to that point is that only the user will be standing there tomorrow. When humans sneak in an implication like this, it's a huckster's fast patter: "So, since you're buying the car," when the customer has not yet said Yes. The A.I. isn't being dishonest; it just hasn't sustained a coherent thought across three of its own paragraphs. It has slipped a gear somewhere in between, with no sign of it in the slick presentation.
So I have two further conclusions on o3's defence of its own accountability.
D) Wow, did that ever sound good.
E) Wow, was it full of crap.
That's the problem, for now, anyway.