We have this big fear that LLMs will appear human, convincing us they are rational when in fact they are just good at constructing statistically compelling sentences, whether fact-based or not. The problem is that lots of humans do the same thing. And that scares me more.