The US Constitution was written by neural networks: a conspiracy theory or a stupid mistake

Why are generative text detectors still unable to perform this task accurately?

How can you tell whether a text was written by a human or by artificial intelligence? The question is becoming increasingly relevant as text generation technologies grow capable of producing convincing, realistic texts on any topic.

In the USA, for example, overly suspicious teachers have repeatedly and falsely accused students and schoolchildren of using generative neural networks to complete their assignments dishonestly, leading to numerous scandals and to growing distrust and tension in education.

A number of services and tools already claim, according to their authors, to determine quite accurately whether a text was generated by a neural network or written by a person. Examples include GPTZero, ZeroGPT, and OpenAI's Text Classifier. However, as it turns out, these services should not be relied on seriously either.

For example, a study by researchers from the University of Maryland, published in March of this year, empirically demonstrated that AI-generated-text detectors are unreliable in practical scenarios.

Besides producing false positives on text written entirely by a human, these tools are also easy to deceive: simply ask ChatGPT (or a similar model) to reorder the words in a sentence without distorting its meaning, or to make sentences and paragraphs of varying, alternating lengths. Such tricks quickly confuse detectors of every kind.

Around the same time, researchers from Stanford University conducted a similar experiment and also made their results publicly available. They found that the detectors are biased against non-native English speakers, producing a high false-positive rate on their texts.

One of the funniest recent discoveries about AI text detectors is that the original text of the US Constitution, fed unchanged into the ZeroGPT detector, is flagged as 96.21% likely to have been generated by a neural network. And unless James Madison, one of the principal authors of the original American Constitution, was a time traveler, the detectors clearly have certain flaws.

Edward Tian, the author of the GPTZero tool, has tried to explain the Constitution phenomenon. According to the developer, the text of the US Constitution appeared so often in the training data of large language models that, over time, those models came to generate text resembling it. The situation with the Bible, by the way, is similar: ZeroGPT rates it 88.2% AI-generated.

A small Bible passage uploaded to ZeroGPT

In general, generative text detectors rely on two main metrics: "perplexity" and "variability" (often called "burstiness").

Perplexity measures how unexpected a piece of text is to the AI model, given what it learned during training. People write far more chaotically than AI models do, so a neural network's perplexity on human-written text tends to be higher.
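In practice, perplexity is the exponential of the average negative log-probability a language model assigns to each token. The following minimal sketch illustrates the idea with hypothetical, hand-picked token probabilities (a real detector would obtain them from an actual model):

```python
import math

def perplexity(token_probs):
    """Perplexity from a model's per-token probabilities:
    exp of the mean negative log-probability. Lower means the
    text looks more 'expected' to the model."""
    neg_log_likelihood = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(neg_log_likelihood)

# Hypothetical probabilities a language model might assign to each token.
predictable = [0.9, 0.8, 0.85, 0.9]   # every token unsurprising to the model
surprising  = [0.2, 0.05, 0.1, 0.15]  # the model is frequently "surprised"

print(perplexity(predictable))  # low perplexity -> looks machine-like
print(perplexity(surprising))   # high perplexity -> looks human-like
```

This is only a toy illustration of the metric itself, not of any particular detector's scoring pipeline.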

Variability, in turn, measures how much sentence length and structure vary across the text. Human-written text tends to be more chaotic and dynamic, with differing sentence lengths and a heterogeneous structure, than text generated by a neural network.
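A crude proxy for this metric is the spread of sentence lengths. The sketch below (an assumption about one simple way to quantify variability, not any specific detector's formula) uses the standard deviation of sentence lengths in words:

```python
import re
import statistics

def burstiness(text):
    """Variability ('burstiness') as the standard deviation of
    sentence lengths, measured in words. Higher values suggest
    the varied rhythm typical of human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The old lighthouse keeper had not slept in three days. Why?"

print(burstiness(uniform))  # zero: every sentence is the same length
print(burstiness(varied))   # larger: lengths swing between 1 and 10 words
```

Real detectors also look at structural variation, not just raw length, but the intuition is the same.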

However, neither indicator is a reliable signal of AI-generated text. After all, a determined person can write in a highly structured style, yielding a low perplexity score and, as a result, a high estimated probability that AI wrote the text.

Ultimately, there is no magic formula that distinguishes human-written text from machine-written text with 100% accuracy. AI text detectors can make an educated guess, but their margin of error is too large to rely on for an accurate verdict.

Experts suggest using more sophisticated detection methods that also take into account the semantic and contextual meaning of the text, as well as its purpose and audience. Such methods could be more accurate and more resistant to tampering.

Artificial intelligence can be a useful tool for writing on a variety of topics, but it can also be used for manipulation and misinformation. That makes it vital to develop AI-detection methods that can eventually protect us from outright fakes and help us distinguish truth from fiction.




