Scientists find ChatGPT is inaccurate when answering computer programming questions
A team of computer scientists at Purdue University has found that the popular LLM ChatGPT is wildly inaccurate when responding to computer programming questions. In their paper, published in the Proceedings of the CHI Conference on Human Factors in Computing Systems, the group describes how they pulled questions from the Stack Overflow website, posed them to ChatGPT, and measured the accuracy of its responses.
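The evaluation the researchers describe, posing real programming questions to a model and scoring the answers, can be sketched roughly as follows. This is an illustrative toy, not the authors' code: the canned `ask_model` stand-in, the keyword-based `is_correct` check, and the sample questions are all assumptions for demonstration (the study relied on human analysis of real ChatGPT output, not keyword matching).

```python
# Illustrative sketch of the evaluation idea (not the authors' code):
# pose programming questions to a model, score each answer, report accuracy.

def ask_model(question: str) -> str:
    # Placeholder for a real ChatGPT API call; returns canned answers here.
    canned = {
        "How do I reverse a list in Python?":
            "Use list.reverse() or slicing: lst[::-1].",
        "How do I read a file line by line in Python?":
            "Call file.readlines_all() on the file object.",  # deliberately wrong
    }
    return canned.get(question, "I don't know.")

def is_correct(answer: str, keywords: list[str]) -> bool:
    # Crude automated check: does the answer mention an expected API?
    # The paper used human annotators instead of anything this naive.
    return any(k in answer for k in keywords)

questions = [
    ("How do I reverse a list in Python?", ["[::-1]", "reverse()"]),
    ("How do I read a file line by line in Python?", ["for line in"]),
]

correct = sum(is_correct(ask_model(q), kws) for q, kws in questions)
accuracy = correct / len(questions)
print(f"accuracy: {accuracy:.0%}")  # prints "accuracy: 50%" for this toy data
```

Swapping `ask_model` for a real API client and `is_correct` for expert review is, in broad strokes, the shape of the study's pipeline.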