Gadianton wrote: ↑Mon Dec 29, 2025 4:08 pm
Whiskey wrote:I thought I would ask AI why people use AI.
Except you didn't post your prompt, which ends up being one of the main issues with people using AI, in particular, MG's use. All your examples were flattering to the AI user, or at least make the AI user the victim, suggesting you biased the AI toward a noble answer.
my prompt to deepseek wrote:Is it a problem when people on Internet forums post AI responses but don't post the prompts used to generate the AI response? What are the main two reasons why a person would not include their prompt along with the AI's response?
I'll skip the wall of text, unless requested to post it, and provide deepseek's summary:
deepseek wrote:Conclusion
Omitting the prompt fundamentally undermines the collaborative and investigative spirit of most forums. While sometimes done out of mere oversight or a mistaken belief in efficiency, it can also be a deliberate tactic to present AI outputs with more authority or to protect one's prompting strategy. For AI-generated content to be truly useful and trustworthy in community discussions, sharing the prompt should be a standard practice, akin to citing sources or showing one's work in a calculation.
OK, I did the same thing - I asked ChatGPT 5.2 with the exact same prompt. Here is what I got. I found it remarkably honest and straightforward:
Short answer: yes, it can be a problem, but why it’s a problem depends on why the prompt is missing. Here’s a clean, forum-friendly way to think about it.
Is it a problem not to post the prompt? It becomes a problem when the AI output is treated as if it were a direct, unmediated contribution from the poster, especially in discussions about scholarship, theology, history, or interpretation. Without the prompt, readers cannot tell:
- what assumptions were baked in
- what constraints were imposed
- what tone, audience, or framing was requested
- whether counter-arguments were excluded by design
An AI response is not “neutral.” It is prompt-shaped. Hiding the prompt hides the intellectual steering wheel.
That said, not every omission is malicious. Which brings us to the two main reasons.
The two main reasons people do not include their prompts
1. Innocent or practical reasons (non-problematic). Many people simply don’t realize prompts matter, or think of AI as a “smart search” rather than a shaped tool. They are sharing something they found interesting, not making a scholarly claim, and don’t want to clutter a post with what looks like backstage mechanics.
In this case, the omission reflects naïveté, not deception. The person may genuinely believe the output “stands on its own.”
This is common with newer or casual AI users.
2. Rhetorical control or authority-laundering (problematic).
More concerning is when someone omits the prompt to preserve the illusion of objectivity or authority.
This can include:
- framing the AI as an independent "third-party expert"
- concealing leading assumptions or exclusions in the prompt
- avoiding accountability for biased or one-sided framing
- using AI to win an argument rather than explore one
In these cases, the AI output functions like a masked ventriloquist: the user speaks, but the voice sounds external and authoritative.
This is not always conscious, but it does distort fair discussion.
Why the prompt matters (in one sentence):
The prompt is to AI what methodology is to scholarship. Without it, conclusions cannot be properly evaluated.
A balanced forum norm (if you want one)
A healthy middle ground many communities are moving toward:
1. If AI is used for brainstorming or illustration, the prompt is optional.
2. If AI is used to support an argument or factual claim, the prompt should be summarized or linked.
Even a short note like “I asked the AI to summarize X from Y perspective” restores transparency.
The bottom line is:
Not including the prompt usually means one of two things:
The user doesn’t yet understand that prompts shape outcomes. The user benefits rhetorically from others not knowing how the answer was shaped. Only the second is a real problem. But without transparency, readers can’t tell which is which.
If you want, I can help you craft a polite, non-accusatory one-liner for forum use that invites prompt disclosure without starting a fight.

This is me, Philo Soffee, now - I didn't ask it to do this.