Out of necessity, academia historically selected for students who could learn from being read to just once without much context, and there was little pressure to fundamentally change the education format. It's very easy to better serve most modern students by actually designing the program for those students for a change.
Debates over this get muddled by zero-sum concerns. E.g. catering to the average student hurts the top performers; putting applications first replaces rigorous education with training.
It's not blackmail in the first place. There were no secrets to reveal since everything was published online, and it's legal to sue and report a possible crime. Speculating about the consequences doesn't make it blackmail.
The vast majority of neurotypical English-speaking adults would have no difficulty categorising it as a veiled threat. Note the strong consensus in this comments section.
The message has two parts: the response that X wants from Y, and the possible negative consequences for Y. The proximity of the two parts makes the implied link clear (to most people, anyway).
I assure you that 99% of judges and prosecutors would recognise this as blackmail (modulo variance in different countries' legal definitions of the crime).
Thank you for the tutorial! I don't mean to be greedy, but would you mind sharing your top MS Word typography tip that you think people should know? Aside from this one.
Turning on full justification and kerning does most of the heavy lifting.
The next biggest thing for most technical documents is maintaining the fidelity of drawings, diagrams, and charts. Many people paste low-resolution JPEGs into their documents, which looks like trash, especially on 4K screens or if printed.
Even older versions could paste in vector-format diagrams from other apps by using "Paste special -> Enhanced metafile".
Metafiles are the "native" vector format for the Windows GDI drawing and printing APIs, and provide the maximum quality and retain all of the original formatting details one-to-one. They're also small and efficient to display.
If none of those are possible, I insert images like company logos by scaling them up at the source to fill a 4K screen, taking a screenshot, saving it as a PNG, and then inserting that at a small size. This provides crisp edges at all but the most ludicrous resolutions.
In engineering practice, we often start using math without first consulting numerical analysts. It takes a long time to identify and fix the inevitable issues, which eventually becomes a lesson we have to teach students and practicing engineers because the field has accumulated so much historical baggage from doing it the wrong way.
As an example, early device models for circuit simulation were not designed to be numerically differentiable, leading to serious numerical artifacts and performance issues. Now we have courses dedicated to designing such models, and numerical analysis is used and emphasized throughout.
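To make the differentiability point concrete, here's a toy Python sketch (the functions below are simplified illustrations I made up, not any real SPICE model): a piecewise-linear diode has a conductance jump at the knee, while a smooth Shockley-style exponential has a continuous derivative everywhere, which is what Newton-based circuit solvers want to see.

    import numpy as np

    def diode_pwl(v, v_knee=0.6, g_on=1.0):
        # Piecewise-linear diode current: zero below the knee, linear above.
        return np.where(v < v_knee, 0.0, g_on * (v - v_knee))

    def diode_pwl_deriv(v, v_knee=0.6, g_on=1.0):
        # Conductance dI/dV jumps from 0 to g_on at v_knee: not differentiable there.
        return np.where(v < v_knee, 0.0, g_on)

    def diode_smooth(v, i_s=1e-12, v_t=0.02585):
        # Shockley-style exponential: both I and dI/dV are continuous in v.
        return i_s * (np.exp(v / v_t) - 1.0)

    def diode_smooth_deriv(v, i_s=1e-12, v_t=0.02585):
        return (i_s / v_t) * np.exp(v / v_t)

    # Conductance the solver sees just below and just above the knee:
    for v in (0.599, 0.601):
        print(v, float(diode_pwl_deriv(v)), float(diode_smooth_deriv(v)))
    # The PWL conductance jumps from 0 to 1 S across 2 mV; the smooth
    # model changes gradually, so Newton iterations converge predictably.

The jump in the first column is exactly the kind of discontinuity that makes a transient solver oscillate around the knee or slash its timestep.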
Is there anything today that you look at and think "yeah, they're gonna need to fix that at some point"?
I think the future of these languages is largely as a target for code generation and transformation. Their legacy, tooling-unfriendly designs are what's slowing this transition.
I don't think it ever will be, either. Ada's "ecosystem" is really the tooling. I've seen plenty of Ada, but there's always been zero (or very nearly zero) exchange of code into or out of the company. There's not much opportunity to grow a natural ecosystem like that. The tooling for C and C++ is catching up, facilitating model-based design code generation, which is the direction Ada's niche is heading. Ada didn't entrench itself as a target language early on, missing another chance to surge in popularity.
All you have to do is read the datasheet and multiply a few numbers to calculate worst-case power. Remembering to do that should be second nature, like remembering to close any parentheses you open. It's a great tool for understanding in greater detail, but it's not the way to avoid breaking stuff.
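For concreteness, here's what that multiplication looks like in a few lines of Python. Every number below is made up for illustration; the real figures come from the part's datasheet.

    # Hypothetical datasheet values -- substitute the real ones for your part.
    v_supply = 5.0      # supply voltage (V)
    i_max = 0.150       # absolute-maximum supply current (A)
    p_worst_case = v_supply * i_max            # worst-case dissipation (W)

    theta_ja = 100.0    # junction-to-ambient thermal resistance (deg C / W)
    t_ambient = 25.0    # ambient temperature (deg C)
    t_junction = t_ambient + theta_ja * p_worst_case   # rough junction temperature

    print(f"Worst case: {p_worst_case:.2f} W, junction ~{t_junction:.0f} C")
    # Compare t_junction against the datasheet's absolute-maximum junction temperature.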
"You won't make mistakes if you remember not to make mistakes" is nice in theory, but just not how people work in real world. Just like trivial syntax errors are very common while you're doing things. It's better to embrace the common human failures and have a second layer of protection.
I explained in my other comment how it's a great tool for other jobs, but the wrong tool for this job. You won't learn how not to break things by ignoring the proper tool.
If I were to give a kid a breadboard and a bag of components, I wouldn't want them to avoid breaking stuff; what I would want is for the child to be able to interpret the event and gain knowledge from the mistake.
This idea can shift the paradigm from "oh, the LED doesn't work" to "oh, the LED doesn't work, and the color around that rectifier area has shifted; why?", and I think that can help to build intuition.
Nothing. It's great. But it's the wrong tool for the job. Like teaching someone to stop a car by using the speedometer to estimate when to let go of the accelerator pedal, instead of introducing them to the brake pedal.
Have you read many datasheets? Parsing a datasheet can be pretty complex, even if it’s well written (and many are not). There are quite a few figures spread across many sections, and a beginner might not even realize they should be looking for the absolute-maximums section. Sometimes datasheets aren’t even that explicit about certain failure modes; you could lock up a logic chip by accidentally leaving a pin floating, for example.
I can certainly see some value in a breadboard with instant feedback for when you’ve fucked up. Ideally you’re using a power supply with an adjustable current limit to begin with but this is a cool way to quickly draw your attention to an issue.
> All you have to do is read the datasheet and multiply a few numbers to calculate worst-case power.
True story: When I was in college, the lab mate I had unknowingly reversed power and ground on a chip on a breadboard. I turned off the circuit, reached for the chip, and got a blister on my finger.
Maximum power is infinite in that case (well, not infinite, but pretty damn high!). I don't recall ever seeing a data sheet give power consumption for when power and ground are reversed...
No offense, but this comes across as "this was the way I was forced to learn it, and dang it, everyone else should do the same." But in my experience, I wouldn't have got a blister on my finger if I'd had that.
Current was flowing through the "body diode" that's intrinsic to the construction of ICs (it's normally reverse biased). The forward voltage was likely around 0.6 V, maybe a little higher with a lot of current flowing through it. So the dissipation is roughly that voltage times however many amps the power supply could push through the resistance of the leads/rails. It doesn't actually take that much power to make a DIP package extremely hot, though, lacking any sort of heat sink.
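A rough back-of-the-envelope for that case, with illustrative numbers (not from any specific datasheet or bench setup):

    # Reversed-supply dissipation estimate: P = Vf * I through the body diode.
    v_forward = 0.7   # body-diode forward drop (V), a bit above 0.6 V at high current
    i_supply = 2.0    # whatever the bench supply can push through the leads/rails (A)

    p_dissipated = v_forward * i_supply   # ~1.4 W concentrated in a small DIP package
    print(f"~{p_dissipated:.1f} W with no heat sink -- easily blister-hot")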
I can't edit the comment now, but I worded it wrong. The quote only applies to exceeding maximum values. The visualization is much more useful if you've calculated worst-case. I use a thermal camera all the time.
It sounds like Facebook is volunteering 3 person-years, which is nothing resembling a 3-year sponsorship. We're talking at least an order of magnitude of difference. 3 person-years sounds like one team at Facebook working on the GIL for 3 months.
It's until the end of 2025, about 2.5 years from now. The comment was by GVR, who is presumably adding up all the pledged time to estimate whether there's enough in total.