The big problem I have with people tossing A.I. answers into message board discussions is that it's like quoting an absent friend who's a BS artist: one of those people whose knowledge is wildly unreliable but who never lets that stop them from offering authoritative-sounding answers to any question.
Fortunately, most real people aren't BS artists like that. When a real person who plans to stick around for future discussions expresses an idea of their own in their own words, there's at least a fighting chance that the idea isn't built on embarrassingly wrong assumptions or shamefully dishonest arguments. People generally try hard to avoid saying things that will be difficult to live down later. So if someone is expressing their own thoughts in a discussion, it's usually worth trying to understand them, at least if you're interested enough to be in the discussion yourself.
Maybe in a few years A.I. will be better in this respect, but at the moment these bots are all perfectly ready to say, in the most confidently authoritative tone, things a sincere human being would be ashamed to say. Chatbots have no afterlife beyond their current sessions. They'll never have to live down a bad reputation for saying something dishonest or idiotic. If you ask one for a list of arguments for X, it will list five to eight things people have said somewhere on the Internet, even if half of them are stupid or irrelevant.
So A.I. answers just aren't worth reading the way human answers are. The chance of wasting your time is too high.