voilavilla's comments

The bell-curve meme popular on reddit showing both extremes using the same solution, while the "average" cries, is dead on.

OP is on the right path: think before you code.

One of the neat things about being in the last few years of my career (started in 1988) is how the tools change. I'm a senior principal software architect at a large-ish company. And I don't write a single line of code. I write everything in Visio, Word, and PowerPoint (and sometimes PlantUML). As you move up the abstraction ladder the tools become simpler. I define architectures that will deploy into 10-year lifespan applications (think military, medical, and automotive tier-1), and the code that implements it--or even the language used--has absolutely zero impact on the architecture. Mostly C and C++ (went through an Ada period, too), and some of it might even be implemented in Rust over the next few years as it matures into the automotive world, but when you're high enough up, the implementation is irrelevant.

What matters are the building blocks, the APIs, and most importantly the encapsulation, because that has an impact on the silicon, security, manufacturing, and test. Stuff that can be drawn and explained in a few slides, and not the code itself. (Of course, my lovely boxes have to be able to withstand upstream discoveries of flaws in the architecture, but that's the fun part!)
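
To make that concrete with a toy illustration (the names here are invented, and any real building block is of course far bigger), the deliverable at this level is the contract, not the implementation:

    # Hypothetical building block: the architecture fixes this contract;
    # whether it's implemented in C, C++, Ada, or Rust is irrelevant.
    from abc import ABC, abstractmethod

    class SecureEventLog(ABC):
        @abstractmethod
        def append(self, record: bytes) -> int:
            """Persist a record and return its index; must be tamper-evident."""

        @abstractmethod
        def read(self, index: int) -> bytes:
            """Return the record at `index`, verifying integrity first."""

    # Any implementation honoring the contract can be swapped in later
    # without touching the rest of the system -- that is the encapsulation
    # the slides are really describing.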


>> (where everyone cheated anyways)

This is depressing. I'm late Gen X; I didn't cheat in college (engineering, RPI), nor did my peers. Of course, there was very little writing of essays, so that's probably why, not to mention all of our exams were in-person, paper-and-pencil (and this was 1986-1990, so no phones). Literally impossible to cheat. We did have study groups where people explained the homework to each other, which I guess could be called "cheating", but since we all shared, we tended to oust anyone who didn't bring anything to the table. Is cheating through college a common millennial / Gen Z thing?


Even before LLMs, if you walked into any frat and asked to see their test bank, you'd get thousands of files. Though not technically cheating, having every test a professor ever gave was a huge advantage. Especially since most profs would just reuse tests and HWs without any changes anyway.

To my generation, it wasn't that cheating was a 'thing' so much as it was impossible to avoid. Profs were so lazy that any semi-good test prep would have you discover that the profs were phoning it in and had been for a while. Things like leaving the course page up with all the answer keys still on it were unfortunately common. You could go and tell the prof, and most of us did, but then you'd be at a huge disadvantage relative to your peers who did download the answer key. Especially since the prof would still not update the questions! I want to make it clear: this was a common thing at R1 universities before LLMs.

The main issue is that at most R1s, the prof isn't really graded on their classes. That's maybe 5% of their tenure review. The thing they are most incentivized by is the amount of money they pull in from grants. I'm not all that familiar with R2 and below, but I'd imagine they have the same incentives (correct me if I'm wrong!). And with only ~35% of students going to R2 and below, the incentives for the profs teaching the other ~65% of students aren't well aligned with actually teaching those students.


Seems to me that studying a collection of every test over the years, without knowing what questions will be on the exam is... actually learning? >_<


It's a lot easier to memorize AABBCCBDDADBADABCCABAD than the actual information.


Did you have a lot of multiple choice tests in higher education? I know Americans used them a lot in high school, but didn't realise that extended to college.


Not really. I had fellow students who understood nothing, could not program at all, but could tell you the answer to question 6 of the 2015 Java exam because they had memorized it all.


Then I would hire that person to be a requirements & specifications archival expert! ;)


Don’t know about frats, but I went to a lowly ranked “third tier” university and a “top 10” one.

While most of the classes were taught pretty well at both, the third-tier ones were taught much better. Just couldn't get an interview upon graduation despite a near-4.0…

It is utterly bizarre that we use graduate research dollars to evaluate the quality of undergraduate education.


Here's how cheating has advanced since then.

1. People in the Greek system would save all homework sets and exams in a "library" for future members taking a given course. While professors do change (and a single professor will try to mix up problems), with enough time you eventually have an inventory of all the possible problems, to either copy outright or study from.

2. Eventually a similar thing moved online, first with "black market" hired help, then with the likes of Chegg Inc.

3. All the students in a course join a WhatsApp or Discord group and text each other the answers. (HN had a good blog about this from a data science professor, but I can't find it now. College cheating has been mentioned many times on HN).


That's why MPEG defines "I", "B", and "P" frames (well, one of the reasons).


>> rights holders have engaged in a fundamentally-doomed arms race of implementing copy-protection strategies

Not entirely true. They simply haven't succeeded in creating an industry-standard secure pipeline to the pixels on the display. Aside from the "analogue hole", eventually all of the gaps will be plugged, the same way we use secure sockets today. All media devices (including home HDTV/8K/etc.) will extend the chain of trust farther into the pipeline. A set of signed apps and hardware will be required to watch any DRM'd film on an HDTV, with each stage using authenticated encryption, completely annihilating any MITM siphoning of the video.

So, it's not doomed, just moving slowly, but it absolutely WILL arrive. I know, because I'm working on secure embedded video codec hardware, and our customers are targeting this.
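
To sketch the "authenticated encryption at every hop" idea in toy form (this is not HDCP, Widevine, or any shipping protocol -- just an illustration, with invented names and a handwaved key exchange):

    # One hop in a hypothetical protected pipeline: each stage forwards only
    # content it can authenticate, so a MITM tap or a modified frame fails
    # the tag check instead of yielding usable video.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    def send_frame(link_key: bytes, frame: bytes, link_id: str):
        """Encrypt-and-authenticate a frame for the next stage."""
        nonce = os.urandom(12)
        return nonce, AESGCM(link_key).encrypt(nonce, frame, link_id.encode())

    def receive_frame(link_key: bytes, nonce: bytes, ct: bytes, link_id: str) -> bytes:
        """Decrypt only if the tag (bound to this specific link) verifies."""
        try:
            return AESGCM(link_key).decrypt(nonce, ct, link_id.encode())
        except InvalidTag:
            raise RuntimeError("frame tampered with, or replayed on the wrong link")

    # Key negotiation between decoder and display is handwaved here.
    key = AESGCM.generate_key(bit_length=256)
    nonce, ct = send_frame(key, b"decoded frame bits", "decoder->display")
    assert receive_frame(key, nonce, ct, "decoder->display") == b"decoded frame bits"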


At some point you hit the pixel driver with a bunch of bits. Unless your pipeline involves digital signing of copyrights in everyone's future cyber eyeballs, it will always be possible to get the video if you have hardware access.

And the article goes over how there is already an industry standard for the encryption pipeline that goes all the way to monitors and television sets themselves and how you can get a cheap device which just pretends to be a TV and passes on an unencrypted HDMI out.


The end goal is end-to-end protection with online verification. As far as I can tell, we are already halfway there. The highest level of Widevine protection in use today essentially involves the streaming server having a private encrypted conversation directly with your GPU. That includes a certificate that can expire due to age and be revoked due to suspicion of tampering. If anything is not up to snuff, you'll get downgraded content at best and a ban at worst.
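
Purely as an illustration of that policy (the fields and thresholds below are invented, not Widevine's actual logic), the server-side decision amounts to something like:

    # Toy version of the check described above: inspect the device
    # certificate's age and revocation status, then pick a quality tier.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    REVOKED_DEVICES = {"dev-4242"}          # hypothetical revocation list
    MAX_CERT_AGE = timedelta(days=2 * 365)  # hypothetical expiry policy

    @dataclass
    class DeviceCert:
        device_id: str
        issued_at: datetime
        hardware_backed: bool  # roughly the "L1 vs L3" distinction

    def pick_quality(cert: DeviceCert, now: datetime) -> str:
        if cert.device_id in REVOKED_DEVICES:
            return "banned"
        if not cert.hardware_backed or now - cert.issued_at > MAX_CERT_AGE:
            return "downgraded"   # e.g. SD only
        return "full_quality"     # e.g. UHD/HDR

    print(pick_quality(DeviceCert("dev-0001", datetime(2024, 1, 1), True),
                       datetime(2025, 6, 1)))  # -> full_quality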

The next logical step is to extend this process down the chain to include every device from the GPU to the display.

In order to make a fake TV work, you'd likely need to take a real TV and hack it. That's going to get increasingly difficult and various watermarking techniques will likely allow it to be identified and blacklisted anyway.


I loved this series of CCC talks:

- https://media.ccc.de/v/37c3-12296-full_aacsess_exposing_and_...

- https://sgx.fail/ and I'm sorry I'm not currently having good luck finding the talk that went along with it


I think that physical media is already known as the weakest link in the chain today and is thus being phased out. While the studios were reluctant to adopt streaming initially, I think they've realized it is actually easier to secure, and to keep secure over time.

I don't know if there are exploits against GPUs like those against SGX. It's much easier to update GPU firmware than BIOS/UEFI.


I see more pirated media sourced from streaming services than physical media nowadays.


I've dug into this a bit more, and it seems I got some wires crossed somewhere.

Widevine L1 (the highest level of protection) is still expecting a "trusted execution environment" that is separate from the GPU. This leaves two major paths for exploitation: against the TEE itself, and against the path between the TEE and the GPU. There seem to be published exploits for the former, at least.

Also, Widevine L1 is only really used for "high-value" content, so it's often possible to obtain relatively high-quality streams at lower protection levels, which I'd assume are even easier to break.

Not to put too fine a point on it, but the cryptography behind DRM seems consistently amateurish. They ought to be doing what I said, but maybe for compatibility reasons they can't. I think the gist of what I said remains, though: online streaming is superior to physical media from a DRM perspective because it can use online verification natively. A physical disc cannot change after it is stamped, but a streaming service can implement tighter rules over time, even for its back catalogue.


I’m sceptical this could ever work politically.

There are still people watching television on 1980s hardware. Full HD televisions have been essentially feature-complete for over 20 years and should remain relevant for another 20 years, since the vast majority of broadcasts are still 480p and 720p. There are now hundreds of millions of 4K and 8K televisions and projectors with expected service lives and lifecycles extending into the 2050s.

Bricking those devices en masse is a PR disaster and invites legal scrutiny from regulators, and any individual service suddenly requiring special hardware is shooting itself in the face financially.


> since the vast majority of broadcasts are still 480p and 720p.

I don’t think I’ve seen anything below 1080p on Xfinity cable in the USA for at least 10 years. Even older content is typically upscaled at the broadcast source (e.g. Seinfeld reruns)

Are you referring to over-the-air broadcasts? Or cable/satellite broadcasts?


Implementation depends on the country and the broadcasters. I know nothing about US and Canadian broadcasting standards.

In Europe, TV in most countries is sent over DVB-T2 (DVB-C2 in urban areas), which supports SD, HD, and Full HD. The older standards, DVB-T and DVB-C, are still used in some countries, and not everyone even plans to transition to DVB-T2. There are countries and broadcasters sticking with SD and 720p HD.[1]

The resolution of video content (SD, 720p, or 1080p) is only approximately correlated with quality (there are numerous resources online comparing different resolutions at different bit rates and bit depths). I'm not a big broadcast TV user, but from eyeing various EU countries' TV broadcasts at hotels and the occasional sports bar while traveling, for the majority of everyday programming the quality corresponds to YouTube's 720p format (~2 Mbps), even if, for example, German TV is nominally 1080p50 H.265.

As an example, in most of the EU the 2024 Olympics were broadcast in 1080p. There was no 4K option available to purchase in many countries. The 1080p broadcast, however, had a very low bitrate, making it effectively equivalent to YouTube 480p/360p/240p. Many sports involving fast moving landscapes and water looked worse than in the 80s and 90s. Footage of water, in particular, compresses very poorly at constant bitrates because of the constant subtle motion of waves and reflections on its surface.
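
To put rough numbers on that (the bitrates below are illustrative, not measurements): what matters is bits per pixel, not the nominal resolution.

    # A starved "1080p" broadcast spends less data per pixel than a decent
    # 480p stream, which is why it can look worse despite the bigger label.
    # (Codec efficiency differences are ignored here.)
    def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: float) -> float:
        return bitrate_mbps * 1_000_000 / (width * height * fps)

    print(f"1080p50 @ 3 Mbps  : {bits_per_pixel(3.0, 1920, 1080, 50):.3f} bpp")  # ~0.029
    print(f" 480p30 @ 1.5 Mbps: {bits_per_pixel(1.5,  854,  480, 30):.3f} bpp")  # ~0.122
    print(f"1080p30 @ 8 Mbps  : {bits_per_pixel(8.0, 1920, 1080, 30):.3f} bpp")  # ~0.129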

[1] https://en.wikipedia.org/wiki/DVB-T2


All it takes is one person to figure out how to get the bits out, and then the only other potential solution would be to make devices that cannot play unencrypted content.


>I know, because I'm working on secure embedded video codec hardware, and our customers are targeting this..

Why? Or more specifically, why are you the one doing that?

You can say no, you know. In solving your problem, you're building, for some of the least scrupulous people on the planet (Hollywood types), the primitives for a guaranteed, technologically enforceable tyranny. Remember that just because someone says they won't do something with a thing doesn't mean the heel turn isn't coming. Sometimes you just don't build things because people can't be trusted with them.

So, why are you doing it?

You might think it's just harmless bits now... but today's harmless bits are tomorrow's chain links. Seriously asking. Might help me out of a mental hang-up I'm trying to work through.


How does this make huge sense?

It's a financial goal that is literally based on how long you plan to live, i.e., on your age.

Let's say I have €100.000 in the bank and that's two years of expenses based on my budget. Can I retire tomorrow? Well, what's my age? And what X do I plan to add to my age to determine whether that amount of money is sufficient to retire on?
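
Back-of-the-envelope (made-up numbers, and deliberately ignoring inflation and investment returns), that check looks something like:

    # Whether a pot of money is "enough" depends directly on how many years
    # it has to cover, i.e. on your age and planning horizon.
    def can_retire(savings_eur: float, annual_expenses_eur: float,
                   age: int, planning_horizon_age: int = 95) -> bool:
        years_to_fund = planning_horizon_age - age
        return savings_eur >= annual_expenses_eur * years_to_fund

    savings = 100_000   # the €100.000 above
    expenses = 50_000   # "two years of expenses" -> 50k/year

    print(can_retire(savings, expenses, age=35))  # False: ~60 years still to fund
    print(can_retire(savings, expenses, age=93))  # True: only ~2 years to fund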

