Monday, October 2

More than half of ChatGPT's answers to software engineering questions are incorrect

June Wan/ZDNET

ChatGPT is a chatbot that offers conversational answers to any question at any time, making it incredibly convenient. If you're looking for answers to software engineering questions, however, a new study suggests you should treat its responses with caution.

Before the advent of AI chatbots, Stack Overflow, which has a question-and-answer format similar to ChatGPT's, was programmers' primary source of guidance.

Also: How to stop OpenAI's new AI-training web crawler from accessing your data

Unlike Stack Overflow, ChatGPT answers questions instantly, with no waiting for someone to reply.

Consequently, many software engineers and programmers have turned to ChatGPT with their questions. But because there was little empirical evidence of how well it actually answers them, a recent Purdue University study investigated.

The researchers fed 517 Stack Overflow questions to ChatGPT and examined the quality and accuracy of its answers to gauge how effectively it handles software engineering prompts.

Also: How to use ChatGPT to write code

The examination revealed that 52% of ChatGPT's answers to the 517 questions were incorrect, with only 248 (48%) correct, and that 77% of the answers were verbose.
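Those two percentages are just the same count stated both ways, which a quick back-of-the-envelope check confirms. A minimal sketch using the article's figures (the variable names are mine, not the study's):

```python
# Sanity-check the study's reported figures.
# Numbers come from the article; nothing here is from the study's own code.
total_questions = 517    # Stack Overflow questions posed to ChatGPT
correct_answers = 248    # answers the researchers judged correct

correct_rate = correct_answers / total_questions
incorrect_rate = 1 - correct_rate

# 248/517 rounds to 48% correct, leaving 52% incorrect.
print(f"correct: {correct_rate:.0%}, incorrect: {incorrect_rate:.0%}")
```

Running it prints `correct: 48%, incorrect: 52%`, matching the reported split.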

Despite the significant inaccuracies, the findings indicated that the answers were comprehensive 65% of the time, addressing all aspects of the question.

The researchers also asked 12 participants with varying programming backgrounds to evaluate the answers and provide feedback on their quality.

Also: Microsoft's red team has been monitoring AI since 2018. Here are five big insights you should know

Although the participants preferred Stack Overflow's responses over ChatGPT's across different categories, they failed to identify errors in ChatGPT-generated answers 39.34% of the time.


The study found that ChatGPT's answers were so well-written that users overlooked the incorrect information in them.

According to the authors, users overlooked incorrect information in ChatGPT's answers 39.34% of the time because of the answers' comprehensive, well-articulated, and human-like style.

Also: How to use ChatGPT to improve or fix your existing code

Producing plausible but incorrect answers is a significant problem for all chatbots, because it can spread misinformation. And those low accuracy scores alone should be reason enough to think twice before relying on ChatGPT for these kinds of questions.
