The problem is not that security solutions are a double-edged sword; it's that such solutions stop mass surveillance.
When Ross Ulbricht was arrested, investigators made sure to do it in a way that gave them access to his laptop while he was logged in. I'm sure competent investigators can figure out the login method someone uses daily on their phone if they follow them because they're suspected of a crime, just like they did with Ulbricht. But they can't do that to everyone whenever they feel like it, and that's the problem.
Am I wrong in understanding that you were using the company account with the enterprise subscription while you were working on those side projects? Or were you using a different account?
Sounds like he was using a company account? In that case, the default is always to expect that the company will see everything including personal projects.
Oh, that’s a different scenario. I would never do personal work on someone else’s laptop. But I think what’s being described in this case is that if you use this IDE, even on a personal machine where your license is from another source, then your personal data is somehow exposed to others.
Maybe you’re right; in that case it’s like a plumbing company loaning out plumbers’ tools. You’re not necessarily allowed to use loaned tools for personal work, but that’s usually because of wear and tear, and I’m not sure that applies to digital tools. It’s an interesting question.
I'm not sure what the problem is here: lyrics are public. You can search '$songname lyrics' and get the result on a website (or even right on the search engine results page). What's the issue with an LLM producing those lyrics if you ask?
Long ago, the first site I remember doing this was lyrics.ch, which was eventually shut down by litigation. I'm not endorsing the status quo here, but if the licensing system exists, it is obviously unfair to exempt parties from it simply because they're too big to comply.
I think the major issue with asking LLMs (ChatGPT, etc.) for advice on various subjects is that they are typically 80-90% accurate (YMMV; I'm speaking anecdotally here). That means the chance of them being wrong becomes an afterthought: you know it's there, but skipping verification is an efficiency gain that rarely bites you. And once you stop verifying the answers, incorrect ones go unnoticed, further obscuring the risk of that practice.
It's a hard thing to solve. I wouldn't expect LLM providers to care because that's how our (current) society works, and I wouldn't expect users to know better because that's how most humans operate.
If anyone has a good idea for this, I'm open to suggestions.
[1]: https://en.wikipedia.org/wiki/Daniel_Suarez_(author)
P.S. I've read a lot of his books; great author.