Half-baked AI: Mozilla disables AI Explain feature over inaccurate answers
The neural network failed even simple tasks, but Mozilla promises to restore users' trust.
Mozilla has temporarily disabled the AI Explain feature on its documentation site MDN after users discovered inaccuracies in its responses.
Last week, MDN introduced two new AI services – AI Help and AI Explain.
- AI Help allows registered MDN users to ask questions through a chat interface and receive responses from a bot powered by OpenAI's GPT-3.5.
- AI Explain allows users to ask questions about the code examples on a documentation page, and the bot provides more detailed explanations. However, users found inaccuracies in AI Explain's answers and raised concerns about the technical accuracy of the service.
A Mozilla representative said that the company has received a great deal of feedback from users, including praise, constructive criticism, and concerns about the technical accuracy of the responses. Although the majority of users rated the services positively (75.8% for AI Help and 68.9% for AI Explain), Mozilla decided to temporarily disable AI Explain in order to conduct additional research and fix the identified problems.
An example of an incorrect AI Explain answer: the AI states that the code defines a grid with two rows and two columns, when in fact the CSS defines only two columns.
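The article does not reproduce the actual MDN code sample, but a minimal hypothetical CSS snippet illustrates the kind of mistake described: only the column tracks are declared explicitly, so any rows exist only implicitly.

```css
/* Hypothetical example, not the actual MDN snippet:
   only two column tracks are defined explicitly. */
.container {
  display: grid;
  grid-template-columns: 1fr 1fr; /* two explicit columns */
  /* No grid-template-rows here: row tracks are created
     implicitly as items are placed into the grid. */
}
```

An AI summary claiming this code "defines two rows and two columns" would be wrong: the explicit grid contains only the two column tracks, and the number of rows depends on how many items the container holds.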
Mozilla will continue working with generative AI. The MDN team plans to improve its algorithms to more accurately detect bad responses and to provide better ways to report errors. The Mozilla spokesperson also encouraged the community to participate actively in the development process to ensure the responsible and effective use of artificial intelligence.
Mozilla promises to share more details about the launch of AI Help and AI Explain, as well as the decision to temporarily disable part of the service. The organization encourages users to share their views and suggestions, which it considers an important part of this process.
Source: www.securitylab.ru