> There's nothing stopping students from generating an essay and going over it.
This then comes back to my original point. If they learn the content and rewrite the output, is it really plagiarism?
> Takes just a little effort to avoid this.
That depends entirely on the size of the coursework.
> That's never going to happen. Probably because it doesn't make any sense. What's a change in writing style? Who's measuring that? And why is that an indicator of cheating?
This entire article and all the conversations that followed are about using writing styles to spot plagiarism. It’s not a new concept nor a claim I made up.
So if you don’t agree with this premise then it’s a little late in the thread to be raising that disagreement.
> Training is not necessary in any technical sense. A decent sample of your writing in the context is more than good enough. Probably most cheaters wouldn't bother but some certainly would.
I think you’d need a larger corpus than the average cheater would be bothered to assemble. But I will admit I could be waaay off in my estimation of this.
>This then comes back to my original point. If they learn the content and rewrite the output, is it really plagiarism?
Who said anything about rewriting? That's not necessary. You can have GPT write your essay, and all you do is study it afterwards, maybe ask it questions, etc. You've saved hours of time, and yes, that would still count as cheating and plagiarism by most definitions.
>This entire article and all the conversations that followed are about using writing styles to spot plagiarism. It’s not a new concept nor a claim I made up.
>So if you don’t agree with this premise then it’s a little late in the thread to be raising that disagreement.
The article is about piping essays into black box neural networks that you can at best hypothesize are looking for similarities between the presented writing and some nebulous "AI" style. It's not comparing styles between your past works and telling you that you just cheated because of some deviation. That's never going to happen.
>I think you’d need a larger corpus than the average cheater would be bothered to do. But I will admit I could be waaay off in my estimations of this.
An essay or two in the context window is fine. I think you underestimate just what SOTA LLMs are capable of.
You don't even need to bother with any of that if all you want is a consistent style. A style prompt with a few instructions to deviate from GPT's default writing style is sufficient.
My point is that it's not this huge effort to have generated writing that doesn't yo-yo in writing style between essays.
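To make the "style prompt plus a sample of past writing" idea concrete: it amounts to nothing more than prompt assembly. A minimal sketch below builds such a prompt; the instruction wording, function name, and message layout are illustrative assumptions, not any particular product's API, and the result would be sent to whatever chat-completion endpoint you use.

```python
def build_style_prompt(prior_essays, topic):
    """Assemble a chat-style prompt asking an LLM to mimic the writer's
    existing voice, using past essays as in-context examples.
    (Hypothetical helper for illustration only.)"""
    # Join the writer's prior essays into one in-context sample.
    sample = "\n\n---\n\n".join(prior_essays)
    system = (
        "Write in the same voice as the sample essays below. "
        "Avoid generic 'AI' phrasing: vary sentence length, keep the "
        "writer's quirks, and skip stock transition phrases.\n\n"
        f"SAMPLE ESSAYS:\n{sample}"
    )
    user = f"Write a 1000-word essay on: {topic}"
    # Standard chat-message structure accepted by most LLM APIs.
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_style_prompt(
    ["Text of my first past essay...", "Text of my second past essay..."],
    "the causes of the French Revolution",
)
```

An essay or two pasted into `prior_essays` is the entire "effort" being discussed; no fine-tuning or training run is involved.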
> Who said anything about rewriting? That's not necessary. You can have GPT write your essay and all you do is study it afterwards, maybe ask questions etc. You've saved hours of time and yes that would still be cheating and plagiarism by most.
Maybe. But I think we are getting too deep into hypotheticals about stuff that wasn’t even related to my original point.
> The article is about piping essays into black box neural networks that you can at best hypothesize is looking for similarities between the presented writing and some nebulous "AI" style. It's not comparing styles between your past works and telling you just cheated because of some deviation. That's never going to happen.
You cannot postulate your own hypothetical scenarios and deny other people the same privilege. That’s just not an honest way to debate.
> My point is that it's not this huge effort to have generated writing that doesn't yo-yo in writing style between essays.
I get your point. It’s just your point requires a bunch of assumptions and hypotheticals to work.
In theory you’re right. But, at the risk of continually harping on my original point, I think the effort involved in doing it well exceeds what the average person looking to cheat is willing to put in.
And that’s the real crux of it. Not whether something can be done, because hypothetically speaking anything is possible in AI with sufficient time, money and effort. But that doesn’t mean it’s actually going to happen.
But since this entire argument is a hypothetical, it’s probably better we agree to disagree.