Discussion about this post

Suchismit Ghosh

I tested it out, and it does work for texts generated entirely by GPT models or with partial human intervention. That said, it does not work well with essays written by good writers: it falsely flagged many such essays as AI-written. This makes it a very useful tool for professors on the one hand, and a very dangerous one on the other — trusting it too much would exacerbate the harm from false flags.

To Edward: Please make sure the model has a false-flag rate under 1%–2% on all types of content: articles, very poor essays, good essays, stories, etc. For example, my college essays were falsely flagged multiple times, even though I didn't use ChatGPT or any language model — I used a thesaurus and Grammarly, and that's about it. I urge you to train it on a dataset that accounts for every type of content available. Coming from a high school student, I especially want to emphasize training it on very good essays, because yes, many students will use GPT to elevate their writing, but some are honest in their essays, and the model does not seem to take that into account.
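The false-flag rate the commenter asks about is the false-positive rate: the fraction of genuinely human-written texts that the detector marks as AI-generated. A minimal sketch of measuring it per content category (the detector calls, data, and category names here are hypothetical illustrations, not GPTZero's actual API or results):

```python
def false_flag_rate(flagged, is_ai):
    """False-positive rate: fraction of human-written texts (is_ai=False)
    that the detector flagged as AI-written (flagged=True)."""
    human_flags = [f for f, ai in zip(flagged, is_ai) if not ai]
    if not human_flags:
        return 0.0
    return sum(human_flags) / len(human_flags)

# Hypothetical evaluation: one (detector output, ground truth) pair per essay.
flagged = [True, False, True, False, False]   # detector says "AI-written"
is_ai   = [True, False, False, False, False]  # actual provenance
rate = false_flag_rate(flagged, is_ai)
print(rate)  # 1 of 4 human essays flagged -> 0.25, far above a 1-2% target
```

Computing this rate separately for each category (good essays, poor essays, articles, stories) would expose exactly the kind of bias against strong human writers that the comment describes.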

Anthony

Can I have an API please :-)

23 more comments...
