Hacker News | Klathmon's favorites

I didn't become a software developer so I could write the same SQL queries, the same plumbing code, the same boilerplate beginnings of programs, the same repetitive error handling, the same string formatting, the same report generation, the same HTML templating, and the same thread cancellation logic. I also didn't become a programmer so I could gratify myself by yak-shaving elegant helpers for those SQL queries, plumbing, boilerplates, error handlers, formatting, reports, templates, and cancellations.

Bloggers have been kidding themselves for decades about how invigorating programming is, how intellectually demanding it is, how high the IQ demands are, like they're Max von Sydow playing chess with Death on the beach every time they write another fucking unit test. Guess what: a lot of the work programmers do, maybe even most of it, is rote. It should be automated. Doing it all by hand is part of why software is so unreliable.

You have a limited amount of electrical charge in your brain for doing interesting work every day. If you spend it on the rote stuff, you're not going to have it to do actually interesting algorithmic work.


If you’ve got an IKEA Bekant sit/stand desk and are interested in this sort of thing then Megadesk[1] and Megadesk Companion[2] will allow it to connect to Home Assistant for automation (and also adds memory states).

(This post emphatically not endorsed by my employer.)

[1] https://github.com/gcormier/megadesk

[2] https://github.com/gcormier/megadesk_companion


Electrical Engineer who can write messy but functional code in C and Python, but mainly deals with analog circuits at work.

Software has a magical property that other engineering disciplines do not: The engineering environment is ideal and fundamentally perfect. It's like building electronic circuits but only in the simulator that uses all idealized parts. And you get the extremely rapid prototyping and scaling that comes with that. (A "single line of code" equivalent error in a production analog circuit can take months to fix and years to distribute)

In a way it feels "unfair" because the fuckery of mother nature and the immovable boundaries of physics are largely (and often completely) cut out of the equation. Imagine having to write code where every variable is a fuzzy range rather than a fixed value; it would put a huge clamp on designs.

However,

The intellectual savings from not having to deal with non-linearities and errant physical effects are largely just shifted into increased overall complexity that exists in a different realm. So maximum logical thinking and intellect are still demanded.

I think the divide really comes from the difference in the space in which the "engineering" is happening. Conventional engineers are dealing directly with nature trying its best to constantly break your design, and I think there is a degree of camaraderie that comes with that. Software guys, freed from those chains and working in a parallel universe of perfect logic, are instead pushed to the absolute limits of complexity.
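The "every variable is a fuzzy range" thought experiment above can be sketched as interval arithmetic. This is an illustrative toy, not real circuit-simulation code; the resistor values and tolerances are made up for the example:

```javascript
// Hedged sketch: interval ("fuzzy range") arithmetic, as if every value
// carried a component tolerance the way analog parts do.
class Interval {
  constructor(lo, hi) { this.lo = lo; this.hi = hi; }
  add(o) { return new Interval(this.lo + o.lo, this.hi + o.hi); }
  mul(o) {
    // The product range is bounded by the extreme corner products.
    const ps = [this.lo * o.lo, this.lo * o.hi, this.hi * o.lo, this.hi * o.hi];
    return new Interval(Math.min(...ps), Math.max(...ps));
  }
  width() { return this.hi - this.lo; }
}

// Two "1 kΩ ±5%" resistors in series: the uncertainty bands add up.
const r1 = new Interval(950, 1050);
const r2 = new Interval(950, 1050);
const series = r1.add(r2);
console.log(series.lo, series.hi); // 1900 2100 — a 200 Ω spread
```

Every operation widens the uncertainty, which is exactly the design pressure software code rarely has to carry.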


HTTP/3 is HTTP/2 over UDP[0]. The advantage of this is that packet loss no longer delays processing of independent streams: if you lose a packet of one image it doesn't delay processing the rest of the CSS.

HTTP/2 is HTTP/1.1+TLS with a custom binary encoding that allows stream multiplexing. This lets browsers download the image and the CSS over the same TCP connection without incurring slow-start penalties on every request. It's the reason why "reduce number of requests" is no longer good web dev practice.

HTTP/1.1 is the usual plaintext protocol people think of when they think "HTTP".

[0] Specifically QUIC, a UDP-based protocol that allows selective, TCP-like delivery guarantees.
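The head-of-line-blocking difference described above can be shown with a toy model. This is an illustrative sketch, not real protocol code: each "packet" carries a global sequence number (standing in for TCP's single byte stream) and a per-stream offset (standing in for QUIC's independent streams):

```javascript
// Toy packets: two for a CSS stream, two for an image stream.
const packets = [
  { stream: "css", seq: 0, global: 0 },
  { stream: "img", seq: 0, global: 1 }, // marked "lost" below
  { stream: "css", seq: 1, global: 2 },
  { stream: "img", seq: 1, global: 3 },
];
const lost = new Set([1]); // global seq 1 (an image packet) is dropped

// TCP model: one in-order stream; the first gap stalls everything after it.
function tcpDelivered(pkts, lostSet) {
  const out = [];
  for (const p of [...pkts].sort((a, b) => a.global - b.global)) {
    if (lostSet.has(p.global)) break; // gap: nothing later is delivered yet
    out.push(p);
  }
  return out;
}

// QUIC model: each stream is ordered independently; a loss only
// stalls its own stream, never a sibling.
function quicDelivered(pkts, lostSet) {
  const out = [];
  const stalled = new Set();
  for (const p of [...pkts].sort((a, b) => a.seq - b.seq)) {
    if (lostSet.has(p.global)) stalled.add(p.stream);
    else if (!stalled.has(p.stream)) out.push(p);
  }
  return out;
}

console.log(tcpDelivered(packets, lost).length);  // 1: only the first CSS packet
console.log(quicDelivered(packets, lost).length); // 2: the full CSS stream
```

Losing one image packet freezes the CSS under the TCP model but not under the per-stream model, which is the whole point of moving HTTP onto QUIC.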


I've been building some projects with Crank.js and Mithril, both of which require an explicit function call to rerender the application after changing state.

In theory, that sounds terrible; I don't want to keep track of when my app needs to rerender, or worry about a stale DOM because I forgot to rerender after some event.

In practice, it's a breath of fresh air compared to my work with React/Vue/Svelte. Everything is just so simple; I can use vanilla JS data structures and code organization patterns. I can architect my frontend state and logic in the way that makes the most sense for my domain, instead of ceding control flow to a heavy, footgun-y, limited-expressivity framework that demands to know everything my app does so it can decide on its own when to rerender.

I'm tired of debugging confusing bugs because React hooks mix asynchronous control flow inside the render loop and my app has to have correct rendering and logic for every intermediate state of a chained sequence of useCallback/setState hooks. Tired of telling my team we have to put off a feature because it'll take a week to do something that feels like a 3-hour task because refactoring around a framework's control structures is harder than refactoring ordinary JS features. Tired of magic reactive compilers that create buggy renders and template languages that are only nice to work with if you use the One True Blessed VS Code Plugin, and tired of frameworks that need CLI scaffolding tools because they're more complicated than anyone wants to initialize with a blank editor.

Crank.js, Mithril, and (at a preliminary glance) Forgo are very compelling to me. They offer the declarative, self-contained nature of components that I love from React/Vue/Svelte and the type safety of TypeScript for view code instead of template strings. My whole project scaffolding is just installing esbuild and generating a tsconfig.json. Maybe toss in browser-sync and watchexec if I'm feeling fancy. I don't need to make major architectural decisions based on the preferred state management tool, because these frameworks couldn't care less about how state is managed. And if I really want automatic rerenders, it's 10 LOC to write a reusable ES6 Proxy that returns an object I can dump state values into that will auto refresh if I reassign a property.
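The "10 LOC" Proxy mentioned above might look something like this sketch. The names are illustrative; `render` stands in for whatever redraw call your framework exposes (e.g. Mithril's `m.redraw`):

```javascript
// Hedged sketch: a tiny auto-refreshing state object via ES6 Proxy.
function autoRender(state, render) {
  return new Proxy(state, {
    set(target, prop, value) {
      target[prop] = value;
      render();       // trigger a rerender on every property reassignment
      return true;    // signal that the assignment succeeded
    },
  });
}

// Usage: reassigning a property through the proxy invokes the callback.
let renders = 0;
const app = autoRender({ count: 0 }, () => { renders++; });
app.count = 1;
app.count = 2;
console.log(app.count, renders); // 2 2
```

In a real app you would likely debounce `render` so a burst of assignments causes one redraw, but the core idea really is this small.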

And these frameworks still feel like a state machine in the same way React does - view code is declaratively rendered from application state. There is synchronization of state to DOM, but only in the sense that you have to explicitly invoke the declarative engine, and not in the sense that you have to manually synchronize like you would using jQuery to find and change DOM nodes.

Keeping up with manual refreshes feels so simple and easy compared to all the hoops I've jumped through with the household name frameworks.


"We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes. This is a truth which still fills me with astonishment"

- Richard Dawkins, The Selfish Gene, 1976

Dawkins talking about the first sentence, about 1980:

"...that was no metaphor. I believe it is the literal truth, provided certain key words are defined in the particular ways favoured by biologists. Of course it is a hard truth to swallow at first gulp. As Dr Christopher Evans has remarked, "This horrendous concept - the total prostitution of all animal life, including Man and all his airs and graces, to the blind purposiveness of these minute virus-like substances - is so desperately at odds with almost every other view that Man has of himself, that Dawkins’ book has received a bleak reception in many quarters. Nevertheless his argument is virtually irrefutable" ...

http://www.politicsforum.org/forum/viewtopic.php?t=58430


I use architecture docs constantly at work as a way to scale myself. I can now get a decent first-cut 4-8 page doc, with problem summary, rationale, implementation (usually API and some schemas), interaction diagram, and alternatives considered, done in 4-6 real-world hours. This is for a 1-3 engineer-month sized project. I can also get a 3-page one, with less detail on schema and API, done in about 1-2 hours.

I find that being able to do this lets me scale myself a ton. I can take a discussion I had with other engineers in a design session, or my own thoughts on an implementation, and write it up quickly. I can then send these docs around and give everyone a concrete place to start discussion.

I find that even if my idea is not what others are thinking, it greatly helps them articulate the delta and why. The alternative is everyone sitting in a meeting arguing for their own point of view while designing around their own idea in parallel, which causes more fighting.

So to me the cost of an arch doc is no longer very high, and the effectiveness is very high. I can send that doc to 30+ people and not have to have the same 1-2 hour conversation over and over. However, it took me quite a while to learn to get my thoughts onto paper that fast.

I also find the 1-3 page writeups help me sell managers on spending the time on the longer doc either for me or their team.

The other thing that has been great for me is that I can write up distracting ideas (I have bad shiny object syndrome) quickly and get them to my peer manager. This means I can go back to my current work without worrying about the idea being forgotten. Usually I get feedback in a day or so from the manager on how good the idea is and where it fits priority-wise. This allows me to innovate and implement at the same time.


In my mind, this is a bit of an "experimentation smell". It means you're experimenting backwards because you are uncertain of the domain in which you are operating. It's something I've learned to notice as a symptom that I'm flailing around in confusion, and that I need to step back and understand the problem better.

Clarify the domain so that you know the types of everything up front, and your work is more than halved.


This is awesome! Are you familiar with the plus/equals/minus learning concept? Unfortunately I can't recall the exact term used when I first read about it and google isn't helping. It basically says the most effective way to master a subject is to have a:

"Plus" - someone who knows more about the subject than you, has more experience than you, can act as a mentor and resource for you and answer questions.

"Equal" - a study buddy at about the same level as you to work through problems together and challenge each other to make sure you both understand the material. This is the person you go to when you initially don't understand something and you try to figure it out together. If you're both stumped after you've spent some time with the issue, that's when it's time to ask the "Plus."

"Minus" - Someone who doesn't have as much experience or subject matter knowledge as you do. You act as a mentor/resource for this person answering questions and explaining concepts. This in turn helps to solidify your understanding of the material.

I've long thought it would be beneficial to incorporate this concept into online learning and your platform looks like it has all of the information to make it happen. In addition to study buddy recommendations would you consider adding the ability to pair people currently taking the class with people who have completed the class who are willing to be a resource? I would guess the "Equal" is the most important component and your system tackles that but there might be significant value to users in adding the Plus/Minus as well.

