
The false results produced by OpenAI have become a subject of concern


noyb, a European organization that campaigns for the protection of personal data, has filed a complaint against OpenAI.

The complaint cites OpenAI’s inability to correct the inaccurate information that ChatGPT generates. According to the organization, OpenAI’s failure to ensure the accuracy of the personal data processed by the service breaches the European Union’s General Data Protection Regulation (GDPR).

Fabricating false information is problematic in itself. “However, when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb.


It is clear that, when processing data about individuals, companies are currently unable to make chatbots like ChatGPT comply with EU regulations. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology must meet the legal requirements, not the other way around.

The GDPR requires that personal data be accurate, and it grants individuals the right to obtain information about the data processed about them and its sources, as well as the right to have incorrect data rectified. OpenAI, however, has openly admitted that it can neither correct the false information ChatGPT generates nor disclose the sources of the data used to train the model.


OpenAI has argued that “factual accuracy in large language models continues to be an area of active research.”

The advocacy group points to a recent New York Times report finding that chatbots like ChatGPT “invent information at least three percent of the time – and this number can reach as high as twenty-seven percent.” In its complaint against OpenAI, noyb cites an example in which ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests that the information be corrected.

According to noyb, “OpenAI refused his request to rectify or erase the data, arguing that it was not possible to correct data” – even though the date of birth ChatGPT provided was demonstrably wrong.

OpenAI noted that it can filter or block data on particular prompts, such as the complainant’s name, but doing so would prevent ChatGPT from displaying any information about the individual rather than correcting the specific error. The company also failed to respond adequately to the complainant’s access request, which the GDPR requires companies to honor.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of the training data that was used, to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinized ChatGPT’s inaccuracies: in March 2023, the Italian Data Protection Authority imposed a temporary restriction on OpenAI’s data processing, and the European Data Protection Board set up a task force to investigate ChatGPT.

In its complaint, noyb asks the Austrian Data Protection Authority to investigate OpenAI’s data processing and the measures it has taken to ensure the accuracy of the personal data processed by its large language models. The advocacy group also asks the authority to order OpenAI to comply with the complainant’s access request, to bring its processing in line with the GDPR, and to impose a fine to ensure future compliance.
