Dear Edward, we're in awe. We, being the https://tedxmuenster.de/en/about/ team in Germany, would be very honored to have you on stage at this year's main event on 23 June. Please let me know if you're interested in following up on this conversation. Thank you and warm regards, Wiebke.
Email me at wiebke@tedxmuenster.de :)
I tried it on an essay I wrote entirely myself, and about 5% of it, roughly three sentences, was highlighted as being written by AI. The conclusion GPTZero gave me was: "Your text is most likely human written but there are some sentences with low perplexities."
What does this mean? Where can we get an explanation of the factors being used, such as perplexity and burstiness?
What worries me is that in cases like the one I pointed out above, won't an organisation that relies on this app as an absolute measure of truth mistakenly believe I used AI when I didn't, threatening my credibility?
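(For readers wondering what "low perplexity" means here: perplexity is roughly how predictable a piece of text is to a language model, and "burstiness" is usually described as how much that predictability varies from sentence to sentence. The sketch below is only an illustration of that general idea. It uses GPT-2 via the Hugging Face transformers library purely as a stand-in; GPTZero has not published its actual model, scoring, or thresholds, so none of this should be read as the tool's real implementation.)

# Illustrative only: per-sentence perplexity under GPT-2 and a simple
# spread-of-perplexity number as a stand-in for "burstiness".
# This is an assumption about the general technique, not GPTZero's method.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity of one sentence under GPT-2 (lower = more predictable)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels equal to the inputs makes the model return the
        # average cross-entropy loss over the sentence's tokens.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

# Hypothetical example sentences, just to show the shape of the output.
essay = [
    "I wrote this essay entirely by myself over several evenings.",
    "The results of the experiment surprised everyone involved.",
]
scores = [sentence_perplexity(s) for s in essay]

# A crude burstiness proxy: the standard deviation of per-sentence perplexity.
# Human writing tends to mix very predictable and very surprising sentences,
# while model output is often more uniform.
mean = sum(scores) / len(scores)
burstiness = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
print(scores, burstiness)

A sentence of yours flagged with "low perplexity" simply means it was easy for the model to predict, which can happen in perfectly ordinary human writing, hence the concern about false positives above.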
Thanks for the improvement.
Testing the improved version of the tool, I noticed that it still wrongly identifies texts written completely by humans as texts written by AI.
I suggest this be improved.
congratulations @edwardtian & team💥
ChatGPT is amazing. My bet is that it will create a bimodal distribution in student performance: 1) those who use it and 2) those who don't. It will probably compress the distribution of performance for those who use it, making it difficult to differentiate students who are learning from those who are using it as a crutch and phoning it in. The plagiarism/AI detection is a great start.
The goals of education are usually focused on improvement in learning, with a terminal endpoint of mastery, both for cohorts of students and for each student individually within the distribution. Creativity is a separate dimension. Additionally, the teacher needs to provide feedback to guide learning and to understand which concepts were successfully taught. Most institutions have rubrics and criteria for judging performance and mastery. All of these are time dependent. A suggested base use case: Teach -> Assess/Measure -> Provide Feedback -> Adjust/Correct -> Compare Cohort & Individual Performance Across Dimensions to External Criteria.
We have bypassed this:
https://github.com/gonzoknows/AI-Detection-Bypassers
Hi,
Where can I find pricing/usage conditions?
Can you provide a transcript of the video please? I can't hear the audio. Thank you
The slashed out "mute" icon in the corner means that there is no audio. You just watch the video demonstration and infer the usage
Congrats ! :)
Does it work in all languages ?
The newest thing, someone posted, is that students take something written by AI and run it through a text spinner (paraphrasing software), and this makes it impossible to identify as AI. Anyone know if that is true? Can GPTZero do anything about that? Anyone have any suggestions about how to deal with it?
Definitely showing this to my principal to get some brownie points at work
Tried it with two essays generated by ChatGPT. For both it said "Your text is likely to be written entirely by a human." So, it appears to be completely useless.