ChatGPT hallucinates software bugs and ignores real ones
I've seen any number of people defending ChatGPT's use as a tool that can not just write code but also find bugs in existing code. My own position is that it could be useful for this, but only in the hands of someone who is skeptical, detail-oriented, and experienced with the language. This is because ChatGPT doesn't know anything about programming; it only knows what code looks like and what people say about it. That is sometimes good enough to write code, but it very readily departs from reality onto its own hallucinatory journey. In my case, I gave it a single line of code with one bug in it, and instead of identifying that bug, it decided to invent three or four more.
You can find plenty of examples of this online, along with discussion and analysis, so this blog post is just a vehicle for posting my own January 9, 2023 chat transcript in a way I can conveniently link to.
(I would have posted it earlier, but OpenAI's chat history feature was down for several weeks, so I couldn't retrieve the transcript.)