Artificial intelligence: OpenAI develops its own tool to detect plagiarism with ChatGPT

OpenAI, the developer of ChatGPT, has released a new tool that predicts whether a text was written by an AI, though it warns that the tool is not entirely reliable. The many possible uses of OpenAI’s artificial intelligence have sparked debate about the intent behind some of them, as in the case of plagiarism.

The chatbot developed by OpenAI is revolutionizing technology thanks to its ability to write text that appears to come from a person, making it almost impossible to tell whether a passage is the work of artificial intelligence.

ChatGPT can generate meaningful texts on almost all topics, which led OpenAI to develop a tool that can detect whether or not a text has been written by its artificial intelligence.

The result is now ready: a tool built on a modified version of GPT, the base technology OpenAI uses for its popular bot. The developer has named it the ‘AI Text Classifier’ and clarifies that its purpose is “to predict how likely it is that a text was generated by AI from a variety of sources”.

The new tool analyzes a text and returns a verdict on a five-point scale, ranging from “very unlikely” to “likely generated by artificial intelligence”. The developer has specified that for a correct analysis the text must be at least 1,000 characters long, roughly 150 to 250 words.
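As an illustration of that behavior, here is a minimal sketch in Python of how such a classifier could be wrapped. The `classify` function, the `score_fn` stand-in for the model call, and the exact threshold values are all assumptions made for this example; only the 1,000-character minimum and the five-point scale come from OpenAI’s description of the tool.

```python
# Hypothetical sketch of an AI-text classifier wrapper. The scoring
# function and the numeric thresholds are illustrative assumptions,
# not OpenAI's actual implementation.

MIN_CHARS = 1000  # OpenAI requires at least 1,000 characters (~150-250 words)

# Five-point scale from least to most likely AI-generated; the cutoff
# values below are placeholders chosen for the example.
LABELS = [
    (0.10, "very unlikely AI-generated"),
    (0.45, "unlikely AI-generated"),
    (0.90, "unclear if it is AI-generated"),
    (0.98, "possibly AI-generated"),
    (1.01, "likely AI-generated"),
]

def classify(text: str, score_fn) -> str:
    """Return a five-point verdict for `text`.

    `score_fn` stands in for the model call and must return a
    probability in [0, 1] that the text was generated by AI.
    """
    if len(text) < MIN_CHARS:
        raise ValueError(f"Need at least {MIN_CHARS} characters for a meaningful result")
    score = score_fn(text)
    for threshold, label in LABELS:
        if score < threshold:
            return label
    return LABELS[-1][1]

if __name__ == "__main__":
    sample = "lorem " * 200  # placeholder text over 1,000 characters
    # A constant scorer stands in for a real model call.
    print(classify(sample, lambda t: 0.97))  # -> "possibly AI-generated"
```

A caller would supply its own `score_fn` (for example, a model returning the probability that the document is AI-generated); the point of the sketch is simply how a raw probability maps onto the five buckets and why short inputs are rejected.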

OpenAI warns that the tool “isn’t always accurate; it can mislabel both AI-generated and human-written text”. The company has explained that the text classifier can easily be fooled by adding human-written fragments to an AI-generated text.

The new tool’s algorithm was trained on databases of text written by adults in English, so “the classifier is likely to be wrong on text written by children and on text not in English”, they explain. With the classifier, OpenAI intends “to promote debate on the distinction between human-written content and AI-generated content”.

How the tool was trained

The developer insists that the results can help, “but they shouldn’t be the only test”, since “the model was trained on human-written text from a variety of sources, which may not be representative of all types of human-written text”.

The biggest problems have appeared in the academic field, where many students have used OpenAI’s artificial intelligence to plagiarize texts. As a result, some Australian and American educational institutions have decided to ban its use.

The developer has clarified that the classifier was not trained to detect plagiarism in an academic setting, so it is not effective for that purpose. Even so, OpenAI is aware that one of the main uses people will want to give the tool is precisely to check whether a text was written by a machine or by a person.

However, “we caution that the model has not been thoroughly tested on many of the main intended targets, such as student essays, automated disinformation campaigns or chat transcripts”. Furthermore, they add that “neural network-based classifiers are poorly calibrated outside of their training data”.

With information from La Vanguardia.

Source: Clarin
