GPT-4 is afraid to look you in the eyes: the face recognition feature will have to wait
OpenAI faces an ethical dilemma.
GPT-4, announced earlier this year, can not only process and generate text but also analyze images. In collaboration with the startup Be My Eyes, OpenAI has developed an app that helps blind people interact with the world around them by describing pictures and photos. Recently, however, some users noticed that the app had stopped relaying information about faces.
Sandhini Agarwal, a researcher at OpenAI, confirmed that the model can identify public figures who have a Wikipedia page. However, there are concerns that this feature may violate privacy laws in a number of regions where the use of biometrics requires citizens' consent.
OpenAI is also concerned that GPT-4 may misinterpret or misrepresent facial characteristics: there have already been cases where the model incorrectly identified gender or emotions, and in some situations such errors can be critical. The developers intend to work through these safety issues before GPT-4's image analysis feature becomes widely available. "We really want to have a two-way dialogue with the public. If people say that something is not necessary, then we will fully accept such a position," Agarwal said.
While the privacy questions remain unresolved, other companies are testing similar technologies. Microsoft, for example, has built a GPT-4-based visual analysis tool into its Bing chatbot. Notably, Bing recently solved a CAPTCHA (a public Turing test designed to tell humans apart from bots), which may delay the rollout of the update.
Google has added image analysis features to its Bard chatbot, which can also solve text-based CAPTCHAs, albeit with varying degrees of success.
Clearly, the widespread adoption of AI computer vision is inevitable. Still, companies have a number of problems to solve before these technologies become available to a broad audience.