No Gemini model has ever made a mistake or distorted information. They are all, by any practical definition of the words, foolproof and incapable of error.
I generally agree with gp. Checkout via your link says "This item is currently on pre-order", btw. Retail mini-PCs are somehow harder to obtain than general-purpose ones.
What really matters is the society we want to live in. It doesn't matter much what kind of technology allows private entities to reproduce creativity. We can assume these brains are not virtual but actually organic and more capable than human ones, or that they are magical black boxes.
Since this tech changes the incentives and mechanics of the creativity market so much, it forces us to reassess current approaches. I can't agree that the mechanism behind it is of any consequence to the approach to IP. We don't make laws, norms and judgements for the sake of our tools.
It's OK to say that we're not ready to arrange things properly yet, without letting everything slide on some arbitrary technicality.
Makes sense. There is very little in common between physical theft and unauthorised copying of information. It may be an act worse than theft in an ethical or other sense, but it's a different thing. The crude theft analogy is good for colouring it a particular way and evoking emotions, but harmful to reasonable discussion.
I don't believe detection is possible at all once any effort is made beyond prompting a chat-like interface to "generate X". Given a hand-crafted corpus of text, even current LLMs can produce a perfect style transfer for a generated continuation. Anyone who believes it's trivially easy to detect has no idea what they're dealing with.
I assume most people will make the least effort and simply prompt a chat interface to produce some text; that kind of text is fairly detectable. I would still like to see some experiments even for this type of detection, though.
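To make the corpus-in-context approach concrete, here's a minimal sketch assuming the OpenAI Python SDK; the model name, file name, and prompt wording are all illustrative placeholders, not a tested recipe:

```python
# Sketch: style transfer from a hand-crafted corpus held in the context
# window. Everything here (model, file name, prompts) is hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A few pages of the target author's own writing (hypothetical file).
with open("my_essays.txt", encoding="utf-8") as f:
    samples = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in for any capable chat model
    messages=[
        {
            "role": "system",
            "content": "Study the user's writing samples and continue in "
                       "exactly that voice: vocabulary, sentence length, "
                       "structure, and quirks.",
        },
        {
            "role": "user",
            "content": f"Samples of my writing:\n\n{samples}\n\n"
                       "Now write a one-page essay on the assigned topic "
                       "in the same style.",
        },
    ],
)
print(response.choices[0].message.content)
```

No fine-tuning is involved; the whole "corpus" is just pasted into the prompt.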
Are you then plagiarising if the LLM is just regurgitating stuff you’d personally written?
The point of these detectors is to spot stuff the students didn’t research and write themselves. But if the corpus is your own written material then you’ve already done the work yourself.
Oh, I agree: passing off LLM-produced text as human-written is at least deceptive and probably plagiarism. It also skips some important work, which is usually why anyone tries to detect it at all, typically in an education context.
Students don't have to perform research or study for the given task; they just need to acquire an example of text suitable for reproducing their style and structure, to create the impression of hand-produced work, so the original task can be avoided. You need at least one corpus of your own work for this, or an adequate substitute. You could still reject works on their content, but we are specifically talking about LLM smell.
I was talking about the task of detecting LLM-generated text, which is incredibly hard once any effort is made, while some people have the impression it's trivially easy. That leads to unfair outcomes while giving e.g. teachers false confidence that LLMs are adequately accounted for.
An LLM is just regurgitating stuff as a matter of principle. You can request someone else's style. People who are easy to detect simply don't do that, but they will learn quickly.
I’ve found LLMs to be relatively poor at writing in someone else’s style beyond superficial / comical styles like “pirate” or “Shakespeare”.
To get an LLM to generate content in your own writing style, there's going to be no substitute for training it on your own corpus. By which point you might as well do the work yourself.
The whole point of cheating is to avoid doing the work. Building your own corpus requires doing that work.
I meant you don't need to feed it your corpus if it's good enough at mimicking styles. Just ask it to mimic someone else. I don't mean a novelty like pirate or Shakespeare. Mimic "a student with average ability". Then ask it to ramp up the authenticity. Or even use some model or service with this built in, so you don't even need to write any prompts. Zero effort.
You're saying it's not good enough at mimicking styles; others are saying it is. I think if it's not good enough today, it'll be good enough tomorrow. Are you betting on it never becoming good enough?
I’m betting on it not becoming good enough at mimicking a specific student's style without having access to their specific work.
Teachers will notice if a student's writing style shifts from one piece to another.
Nobody disputes that you can get LLMs to mimic other people. However, it cannot mimic a specific style it hasn't been trained on. And very few people who are going to cheat will take the time to train an LLM on their writing style, since the entire point of plagiarism is to avoid doing work.
How would the teacher know what a student's style is if she always uses the LLM? Also, do you expect that a student's style is fixed forever, or that teachers are all so invested that they can really tell when a student is trying something new vs. using an LLM that was trained to output writing in the style of an average student?
Imagine the teacher saying "this is not your style, it's too good" to a student who legitimately tried. That would kill any motivation to do anything but cheat for the rest of their life.
> How would the teacher know what a student's style is if she always uses the LLM?
If the student always uses LLMs then it would be pretty obvious from the fact that they're failing at the course in all bar the written assessments (i.e. the stuff they can cheat on).
> Also, do you expect that a student's style is fixed forever
Of course not. But people’s styles don’t change dramatically on one paper and reset back afterwards.
> teachers are all so invested that they can really tell when a student is trying something new vs. using an LLM that was trained to output writing in the style of an average student?
Depends on the size of the classes. When I was at college, I know teachers did check for changes in writing style; one of the kids in my class was questioned about a change in his.
With time, I'm sure anti-cheat software will also check against students' previous work for changes in style.
However this was never my point. My point was that cheaters wouldn’t bother training on their own corpus. You keep pushing the conversation away from that.
> Imagine the teacher saying "this is not your style, it's too good" to a student who legitimately tried. That would kill any motivation to do anything but cheat for the rest of their life.
That’s how literally no good teacher would ever approach the subject. Instead they’d talk about how good the paper was and ask about where the inspiration came from.
>performing badly under pressure is not a thing in your world
No need to be rude.
Pressure presents different characteristics. Plus, lecturers would be working with failing students, so they would understand the difference between pressure and cheating.
> My point was cheaters don't need to train on their corpus. That's why it's zero effort. You keep trying to wave that away
My entire point was that most cheats wouldn't bother training on their own corpus!
With the greatest of respect, have you actually read my comments?
> My entire point was that most cheats wouldn't bother training on their own corpus!
Good, because with most normal teachers they don't need a custom corpus to cheat with LLMs.
And if a teacher reduced your grade, claiming you used an LLM because your style doesn't match, you just report them and say you were trying a new style (the teacher would probably be wrong 50% of the time anyway).
> Good, because with most normal teachers they don't need a custom corpus to cheat with LLMs.
I think you're underestimating the capabilities of normal teachers. And I say this as someone whose family is largely made up of teachers.
Also this topic was about using LLMs to spot LLMs. Not teachers spotting LLMs.
> And if a teacher reduced your grade, claiming you used an LLM because your style doesn't match, you just report them and say you were trying a new style (the teacher would probably be wrong 50% of the time anyway).
You're drifting off topic again. I'm not going to discuss handling false positives because that's going to come down to the policies of each institution.
>If the student always uses LLMs then it would be pretty obvious from the fact that they're failing at the course in all bar the written assessments (i.e. the stuff they can cheat on).
There's nothing stopping students from generating an essay and going over it.
>Of course not. But people’s styles don’t change dramatically on one paper and reset back afterwards.
Takes just a little effort to avoid this.
>With time, I'm sure anti-cheat software will also check against students' previous work for changes in style.
That's never going to happen. Probably because it doesn't make any sense. What's a change in writing style? Who's measuring that? And why is that an indicator of cheating?
>However this was never my point. My point was that cheaters wouldn’t bother training on their own corpus. You keep pushing the conversation away from that.
Training is not necessary in any technical sense. A decent sample of your writing in the context is more than good enough. Probably most cheaters wouldn't bother but some certainly would.
> There's nothing stopping students from generating an essay and going over it.
This then comes back to my original point. If they learn the content and rewrite the output, is it really plagiarism?
> Takes just a little effort to avoid this.
That depends entirely on the size of the coursework.
> That's never going to happen. Probably because it doesn't make any sense. What's a change in writing style? Who's measuring that? And why is that an indicator of cheating?
This entire article and all the conversations that followed are about using writing styles to spot plagiarism. It’s not a new concept nor a claim I made up.
So if you don’t agree with this premise then it’s a little late in the thread to be raising that disagreement.
> Training is not necessary in any technical sense. A decent sample of your writing in the context is more than good enough. Probably most cheaters wouldn't bother but some certainly would.
I think you’d need a larger corpus than the average cheater would be bothered to do. But I will admit I could be waaay off in my estimations of this.
>This then comes back to my original point. If they learn the content and rewrite the output, is it really plagiarism?
Who said anything about rewriting? That's not necessary. You can have GPT write your essay and all you do is study it afterwards, maybe ask questions etc. You've saved hours of time and yes that would still be cheating and plagiarism by most.
>This entire article and all the conversations that followed are about using writing styles to spot plagiarism. It’s not a new concept nor a claim I made up.
>So if you don’t agree with this premise then it’s a little late in the thread to be raising that disagreement.
The article is about piping essays into black-box neural networks that you can at best hypothesize are looking for similarities between the presented writing and some nebulous "AI" style. It's not comparing styles between your past works and telling you you've just cheated because of some deviation. That's never going to happen.
>I think you’d need a larger corpus than the average cheater would be bothered to do. But I will admit I could be waaay off in my estimations of this.
An essay or two in the context window is fine. I think you underestimate just what SOTA LLMs are capable of.
You don't even need to bother with any of that if all you want is a consistent style. A style prompt with a few instructions to deviate from GPT's default writing style is sufficient.
My point is that it's not this huge effort to have generated writing that doesn't yo-yo in writing style between essays.
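To be concrete, the kind of style prompt I mean is tiny. A hypothetical sketch (the wording is illustrative, not a tested recipe); reuse it verbatim as the system message of every request, same call shape as the earlier example:

```python
# Hypothetical "consistent style" system prompt; reusing it verbatim for
# every essay keeps the output from yo-yoing between styles.
STYLE_PROMPT = (
    "Write like an average undergraduate: plain vocabulary, the occasional "
    "slightly awkward sentence, no bullet points, no 'In conclusion' "
    "phrasing, and varied sentence length. Keep this exact voice every time."
)
```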
> Who said anything about rewriting? That's not necessary. You can have GPT write your essay and all you do is study it afterwards, maybe ask questions etc. You've saved hours of time and yes that would still be cheating and plagiarism by most.
Maybe. But I think we are getting too deep into hypotheticals about stuff that wasn’t even related to my original point.
> The article is about piping essays into black-box neural networks that you can at best hypothesize are looking for similarities between the presented writing and some nebulous "AI" style. It's not comparing styles between your past works and telling you you've just cheated because of some deviation. That's never going to happen.
You cannot postulate your own hypothetical scenarios and deny other people the same privilege. That’s just not an honest way to debate.
> My point is that it's not this huge effort to have generated writing that doesn't yo-yo in writing style between essays.
I get your point. It’s just your point requires a bunch of assumptions and hypotheticals to work.
In theory you’re right. But, and at risk of continually harping on about my original point, I think the effort involved in doing it well would be beyond the effort required for the average person looking to cheat.
And that’s the real crux of it. Not whether something can be done, because hypothetically speaking anything is possible in AI with sufficient time, money and effort. But that doesn’t mean it’s actually going to happen.
But since this entire argument is a hypothetical, it’s probably better we agree to disagree.
>An award worth between 15 and 30 percent of the total proceeds that IRS collects could be paid, if the IRS moves ahead based on the information provided
It's about safety and having a tomorrow. There are a lot of places where you won't live long without housing. You could live your whole life without strong connections to other people, even if a miserable one.
Not having a major disruption to your whole life beyond your control is a meaningful thing, and a basic need. It could be a reliable, predictable, stable source of income or savings that allows you to rent without such concerns. It could be owning a place to sleep, to eat, to invite friends to.
Not being able to afford a house doesn't imply that you're at risk of being homeless. OP seems to be using home ownership as the measure of having hope in life, which is ridiculous. Now, when I lived in Beijing, rent was expensive (for any place a westerner would see as livable), the apartments were lousy, finding an apartment was a hassle, landlords were annoying, and I ended up moving every year. Sure, all the foreigners kvetched about finding an apartment. Every one of us had some sort of major disruption due to something out of our control (that's pretty much going to happen in China). None of us lost hope in life because of it.
> Also, the aggressively protect your trademarks or lose them thing is a myth.
It's not. For example, Bayer lost their trademark for aspirin due to genericization.[0]
For more context about copyright, trademark and patent protection by videogame companies and their motivations, I recommend this excellent on-topic video essay by a real lawyer.[1]
I am aware of trademarks becoming generic words and losing protection. But that is a process of a trademark becoming so widespread in usage that courts find it is just a common word now. Courts look at common usage, not a tally of all the times the trademark holder didn't defend the trademark.
A trademark might still be genericized even if the holder attacks every use of it.
If Valve ignores this one, and decides to attack some other project selling something with a Portal trademark on it, the courts won't look at this case and say "but you didn't do anything that time".
The whole premise of "defend it or lose it" is a misunderstanding of how trademarks become generic.
On Windows and Linux this works great. I have two Dell business screens, one with USB-C, a DP in and a DP out. Just connect another screen to the DP out, and you get two screens over one USB-C connection. Together with the charging and USB hub provided by the screen, this replaces a docking station for me.
However, Macs have spotty support for this. I believe it works on the newest MacBooks, but not older generations, and not the Mac mini?
It doesn't work on MacBooks. References to MacBooks supporting MST are about certain early high-res monitors that behaved internally as multiple monitors due to bandwidth limitations in DisplayPort 1.2. That's fixed by the higher bandwidth of newer DP and HDMI revisions, so the MST feature people care about these days is monitor chaining, and that is still not supported even on M2 MacBook Pros.
>2014 cybersecurity audit performed by the French Cybersecurity Agency (ANSSI) at the museum's request