Open-Sourcing DetectGPT (github.com)

Thanks for putting this out there!
Open question: does the "written by an ML model" detection feel like catching people wearing a fake Rolex? I can argue it both ways.
So we have tools like Google, calculators, and now GPT. The big "but" here: if students don't learn, they will be affected later, and is that a personal issue?
OR are we missing the point by bandaging old academic systems instead of testing people on the concepts, so that they are not just memorizing stuff?
I think all these models are oriented towards detecting people who are simply copy-pasting text from these large language models into their assignments / homework.
If the text generated by these models is passed through an article spinner, there is no way these detectors can tell that it was generated by an AI model.
At the end of the day, it's just a system to detect lazy people copy-pasting stuff.
The first question I have when I see a tool like this is a different one, namely: what about (inevitable) misclassifications? False accusations by an ML model, in a world where people are increasingly willing to trust such models, could have serious consequences. The problem is that you can't solve this on the engineering side, because higher accuracy will arguably make the issue worse: the more accurate the detector, the more its verdicts are trusted, and the more damaging each remaining false positive becomes.
I'm not saying don't build such tools, and it's definitely great to have them open source, but it's something to be aware of and to caution users about.
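The misclassification worry above can be made concrete with a quick base-rate calculation. This is a generic Bayes'-rule sketch, not anything from the DetectGPT repo, and all the numbers in it are hypothetical: even a detector that is right 95% of the time can mean that half of all flagged essays are false accusations when actual AI-written submissions are rare.

```python
# Illustrative sketch (hypothetical numbers, not DetectGPT's measured rates):
# how often is a "flagged as AI-written" verdict actually correct?

def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """P(actually AI-written | detector flags it), via Bayes' rule."""
    true_pos = sensitivity * base_rate            # AI-written and flagged
    false_pos = (1 - specificity) * (1 - base_rate)  # human-written but flagged
    return true_pos / (true_pos + false_pos)

# Assume 95% sensitivity, 95% specificity, but only 5% of essays are AI-written:
ppv = positive_predictive_value(0.95, 0.95, 0.05)
print(f"{ppv:.0%} of flagged essays are actually AI-written")  # prints "50% ..."
```

So under these assumed numbers, a flagged student is no more likely than a coin flip to actually be cheating, which is exactly why better raw accuracy alone doesn't make the false-accusation problem go away.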
I feel like attacking academic dishonesty is actually trying to solve a symptom rather than a cause. In my observation, people tend to cheat because they aren't invested in the minutiae of what they're learning.