It's not about being honest. It's about Joe Bullshit from the Bullshit Department having it easier in his/her/their Bullshit Job. Because you see, Joe decided two decades ago to be an "office worker", to avoid the horrors of working honestly with your hands or mind in a real job, like electrician, plumber or surgeon. So his day consists of preparing PowerPoints, putting together various Excel sheets, attending whatever bullshit meetings, etc. Chances are you've met a lot of Joe Bullshits in your career; you may have even reported to some of them. Now imagine the exhilaration Joe feels when he touches these magic tools. Joe does not really care about his job or about his company. But suddenly Joe can reduce his pain and suffering in a boring-to-death job while keeping those sweet paychecks. Of course Joe doesn't believe his bosses only need him until the magic machine is properly trained, at which point he can be replaced and reduced to an Eloi, living off the UBI. Joe Bullshit is selfish. In the 1930s he blindly followed a maniacal dictator because the dictator gave him a sense of security (if you were in the majority population) and a job. There are unfortunately a lot of Joe Bullshits in this world. Not all of them work with Excel. Some of them became self-made "developers" in the last 10 years. I don't mean the honest folks who were interested in technology but never had the means to go to a university. I mean all those ghouls who switched careers after they learnt there was money to be made in IT, and money was their main motivation. They don't really care about the meaning of it all, the beautiful abstractions your mind wanders through as you create entire universes in code. So they are happy to offload it too, because it's just another bullshit job to Joe Bullshit. And since Joe Bullshit is in the majority, you, my friend, with your noble thoughts, are unfortunately preaching to the wind.
Leaky abstractions. Lots of meta programming frameworks tried to do this over the years (take out as much crud as possible) but it always ends up that there is some edge case your unique program needs that isn’t handled and then it is a mess to try to hack the meta programming aspects to add what you need. Think of all the hundreds of frameworks that try to add an automatic REST API to a database table, but then you need permissions, domain specific logic, special views, etc, etc. and it ends up just easier to write it yourself.
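The pattern is familiar enough to sketch in a few lines (all names here are made up, not from any real framework): the auto-generated handler is free, and then the real requirements immediately start accreting around it.

```javascript
// Sketch of an "automatic REST from a table" helper (hypothetical names):
// the framework hands you generic list/get for free.
function makeAutoHandler(table, rows) {
  return {
    list: () => rows,                        // what the framework generates
    get: (id) => rows.find((r) => r.id === id),
  };
}

// ...but a real app immediately needs permissions, domain logic, special
// views, etc., so you start wrapping and hacking around the generated part:
function withPermissions(handler, canRead) {
  return {
    list: (user) => (canRead(user) ? handler.list() : []),
    get: (user, id) => (canRead(user) ? handler.get(id) : undefined),
  };
}

const invoices = makeAutoHandler("invoices", [{ id: 1, total: 100 }]);
const guarded = withPermissions(invoices, (user) => user.role === "admin");
```

Each wrapper is one more layer between you and the generated code, which is why writing the handful of endpoints yourself often ends up simpler.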
If you can imagine an evolutionary function of no abstraction -> total abstraction oscillating over time, the current batch of frameworks like Django and others are roughly the local maxima that was settled on. Enough to do what you need, but they don't do too much, so it's easy to customize to your use case.
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
The funny thing is Stallman started his fight like half a century ago, and on regular days Hacker News shits on him for eating something off of his foot and for not being polished and diplomatic, while it loves the practical aspects of Corporate Open Source and gratis goodies and doesn't particularly care about Free Software.
On this day suddenly folks come out of the woodwork advocating half-baked measures to achieve what Stallman foresaw, yet they still hardly recognize that this was EXACTLY his concern when he started the Free Software movement.
1. Unrestricted access to an absolutely huge, nearly unlimited library of movies, music and TV shows. Certainly not limited by opaque "licensing deals" between various companies.
2. Highest resolution/bitrate/quality that was available at the time of the work's original release.
3. No arbitrary device/OS limitations.
4. Can watch/listen/download from any location on earth with sufficient bandwidth.
I didn't even mention that it's free or that there are no ads, because that's pretty much the least important attribute to me. If any company came out with a service that offered those four points, I'd probably be willing to pay a lot for it. How much? Who knows, we don't know how much this is worth because nobody is even trying to offer it.
Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.
Functional programming simply says: separate the IO from the computation.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transform inputs -> outputs. And that can be entirely functional.
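A minimal sketch of that shape (illustrative names, not from any particular codebase): the middle is a pure function, and the IO lives at the edges where it can be swapped out.

```javascript
// Functional core: a pure transformation, trivially testable in isolation.
function summarize(orders) {
  const total = orders.reduce((sum, o) => sum + o.amount, 0);
  return { count: orders.length, total };
}

// Imperative shell: all the IO lives here, injected so it stays swappable
// (disk, network, or display - the core doesn't care).
function main(readOrders, writeReport) {
  const orders = readOrders();        // input at the edge
  writeReport(summarize(orders));     // pure computation in the middle
}

// Example run with stubbed IO:
let report;
main(() => [{ amount: 2 }, { amount: 3 }], (r) => { report = r; });
```

Swap the stubs for real file or network calls and the pure middle stays untouched.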
I feel this in my bones. Side projects are so cathartic and have saved my sanity at $DAYJOB. I don't care that I can't implement things the way I want, or how everything is spaghetti, or how much tech debt has piled up; my side projects are a blissful world that I invented. It gives me the "I am Jack's crap codebase" fight club zen at work.
I feel like this keeps coming up again and again and I can't help thinking that they just don't get it. Or maybe I don't.
There are two prevailing sentiments IMO - (1) FSF-like: the greatest freedom is achieved when we empower the users of the software to improve or fix the very software that they are using and (2) BSD-like: the greatest freedom is achieved when we permit licensees to freely use the software how they see fit.
In my opinion, nothing in either scenario risks "appropriation". There's only different takes on "freedom." But if you fear appropriation then clearly licenses like AGPL and GPLv3 are more suitable.
Making money is not incompatible with Open Source nor Free Software. Making lots and lots of money is not incompatible with it either.
If you don't like what might happen when you license your code liberally, you might just prefer proprietary software. And that's okay too. You can even publish the source but add a restrictive license (or explicitly forbid any use). This is a source available proprietary approach. I wouldn't contribute to a project like this, but there's no reason you couldn't make yours like this.
Free (beer/kostenlos, not Freiheit) is magic and does magical things to people's brains. Also, public software work is really valuable for attracting higher-quality results.
If you seek to balance a ledger, then it's not free (in either sense of the word).
I wonder to myself: these articles, are they really written by open source software contributors who regret their contributions? Or someone on the outside looking in, wondering how they can "fix" things?
> I'm also cautiously optimistic though. We'll get there, but it's gonna be a bit shakey for a minute or two.
But I don't understand how all of these AI results (note I haven't used Kagi so I don't know if it's different) don't fundamentally and irretrievably break the economics of the web. The "old deal", if you will, is that many publishers would put stuff out on the web for free, with the hope that they could monetize it (somehow, even just with something like AdSense ads) on the back end. This "deal" was already getting a lot worse over the past few years, as Google did more and more to keep people from ever needing to click through in the first place. Sure, these AI results include citations, but the click-through rates are probably abysmal.
Why would anyone ever publish stuff on the web for free unless it was just a hobby? There are a lot of high quality sites that need some return (quality creators need to eat) to be feasible, and those have to start going away. I mean, personally, for recipes I always start with ChatGPT now (I get just the recipe instead of "the history of the domestication of the tomato" that Google essentially forced on recipe sites for SEO competitive reasons), but why would any site now ever want to publish (or create) new high quality recipes?
Can someone please explain how the open web, at least the part of the web that requires some sort of viable funding model for creators, can survive this?
The most effective presentation style I have ever seen used literally hundreds (if not thousands) of slides, but in a way I had never encountered before. I’ve heard it described as the Lawrence Lessig presentation style, but my introduction to it was a presentation about “Identity 2.0” by Dick Hardt, and his is still the best I’ve seen. It absolutely blew me away, both as a presentation style and a mechanism for conveying information/message (ideally they’re the same thing, but I have sat through some presentations that are entertaining but devoid of content).
Dick Hardt’s presentation is at https://youtube.com/watch?v=RrpajcAgR1E&t=6s - the content has aged a little (though less than I expected before I just rewatched it) but if you’ve never seen it, I encourage you to watch it; it’s not that long, it will feel even shorter than it is because it’s so engaging, and it certainly illustrated to me how important communication skills are and improved my own presentation style.
In my experience, this depends a lot on how you organize possible features/improvements/etc.
I hate the strategy of just taking every idea you hear and throwing it into a ticket. You just end up with this giant icebox of stuff you'll never do. If a big new prospect demands one of the ideas that's in the icebox be implemented immediately in order to close a deal, you're probably still not going to pull it out of the icebox, because you don't remember that it's there. Instead, you'll just create a new ticket for it, and eventually, when going through the icebox, someone will go "hey, I think we built this already" and close it as a dupe.
Instead, I strongly prefer to have tickets that at least have some possibility of getting done in the short to medium term, and store other ideas elsewhere. Engineering keeps a list of tech debt that they'd like to address. PMs keep one list per project of possible improvements. For potential new features/products, they write PRDs but don't immediately turn them into a bunch of tickets.
Ultimately I think the giant backlog of stuff that mostly won't get addressed is a sign of weak PMs who are afraid to say no and like to fall back to the comfortable answer that is, "sounds interesting, I'll write a ticket for it."
I think this is "how to think about coding assistants and your task" but none of this is "tackling" their unreliability.
While coding assistants seem to do well in a range of situations, I continue to believe that for coding specifically, merely training on next-token-prediction is leaving too much on the table. Yes, source code is represented as text, but computer programs are an area where there's available information which is _so much richer_. We can know not only the text of the program but the type of every expression, which variables are in scope at any point, what is the signature of a method we're trying to call, etc. These assistants should be able to make predictions about program _traces_, not just program source text. A step further would be to guess potential loop invariants, pre/post conditions, etc, confirm which are upheld by existing tests, and down-weight recommending changes which introduce violations to those inferred conditions.
ChatGPT and tab-completion assistants have both given me things that are not even valid programs (e.g. will not compile, use a variable that isn't actually in scope, etc). ChatGPT even told me that an example it generated wasn't compiling for me b/c I wasn't using a new enough version of the language, and then referenced a language version which does not yet exist. All of this is possible in part b/c these tools are engaging only at the level of text, and are structurally isolated from the rich information available inside an interpreter or debugger. "Tackling" unreliability should start with reframing tasks in a way which lets tools better see the causes of their failures.
> You may not be aware but thanks to all those donations, we've been able to pay two people $1.5k/month for the past two years to keep shipping. Fisker Cheung and Sosuke Suzuki have done an incredible job!
It's incredible how little money some people get paid to build foundational pieces in a multi-trillion dollar industry.
It's the elimination of window borders. Aside from not being able to differentiate one window from another similarly colored window in the background, it's nearly impossible to click and hold on anything along the edge to resize the window.
It's the overloading of the title bar with so much shit like search boxes and extraneous buttons that a user has almost no place to grip to move the window.
It's the way that tabbing between text boxes either doesn't behave the way you'd expect, or doesn't work at all.
It's all the tooltips that interrupt and litter the interface and, at times, block out the very things you are looking at. And 95% of the time, the information provided in these tooltips is redundant or useless.
It's amazing how much damage these cargo-cult UI/UX morons have done in the past ten years. They threw out several decades of usability pioneered by real HID experts for something that looks pretty but doesn't fucking work for a lot of people.
Applications like Postman, Teams (and pretty much all of MSFT's applications these days), Chrome, and Insomnia should be case studies on how to not design user interfaces. They are about as bad as desktop software gets.
The biggest sin is that this would be a non-issue if these things were configurable at the windowing-system level and could not be overridden by app developers. But the trend has gone in the opposite direction; instead of providing more configurability, Windows and Gnome/GTK are actually taking away options that existed before.
The twelve-factor app is a set of recommendations from 2011 that are based less on engineering principles and more on the capabilities of Heroku and containerized infrastructure in 2011. For example:
> Another approach to config is the use of config files which are not checked into revision control, such as config/database.yml in Rails. This is a huge improvement over using constants which are checked into the code repo, but still has weaknesses: it’s easy to mistakenly check in a config file to the repo; there is a tendency for config files to be scattered about in different places and different formats, making it hard to see and manage all the config in one place. Further, these formats tend to be language- or framework-specific.
> The twelve-factor app stores config in environment variables (often shortened to env vars or env).
Yeah, the reason they argue for this is that the people who wrote it worked on Heroku, and the way Heroku worked is that you populated the environment variables from some fields in a web app. If you want your config history tracked in version control, or you do GitOps, or you have k8s ConfigMaps, or you want to have your configuration files on a mounted volume ... those things are all broadly fine; they keep the configuration state separate from the app deploy state. This document really misses the forest for the trees and recommends things based less on actual engineering principles and more on the product capabilities of the corporation that produced it. It is an actively harmful set of guidelines.
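The underlying principle is narrower than the document makes it sound: config should be injected from outside the build, but where it comes from is a deployment detail. A sketch (illustrative names, no real framework) where env vars are just one possible override of a checked-in or mounted source:

```javascript
// Defaults, standing in for a mounted config file or checked-in config map.
const defaults = { dbHost: "localhost", dbPort: 5432 };

// Environment variables, when present, override the file-based source.
// Either way, the app code only ever sees the merged config object.
function loadConfig(env) {
  return {
    dbHost: env.DB_HOST ?? defaults.dbHost,
    dbPort: env.DB_PORT ? Number(env.DB_PORT) : defaults.dbPort,
  };
}

// Twelve-factor-style deploy: everything comes from the environment...
const fromEnv = loadConfig({ DB_HOST: "db.internal", DB_PORT: "6432" });
// ...while a ConfigMap/mounted-volume deploy passes nothing and uses the file:
const fromFile = loadConfig({});
```

In a real app the first argument would be `process.env`; the point is that both deploy styles keep config out of the code without mandating env vars specifically.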
I don’t know who said it, but an amazing quote I love is: “they call it AI until it starts working, see autocomplete”
I love this because when a company tells me (a software engineer) that they do AI, they are tacitly saying that they have little to no idea of where they want to go or what services they will be offering with that AI.
Go a step further, write some company blog posts outlining how to do it yourself. Do a good job, honestly show how easy it is to host your own alternative, you're making the world a better place by doing so.
You want readers to think "that would be easy, maybe I'll do it". They start to believe it's important they have what you're offering and they think they'll do it themselves.
Well, we know how attention spans are these days; if something takes 30 minutes of work, it will probably never get done. Most people will give up, and a lot of them will buy your hosted service instead, because they've already convinced themselves it's important. If it's important enough to someone that they would spend their time on it, they'll spend money on it too. You want people willing to spend time or money on something to have goodwill towards your company.
I'll tell you my vision, which is kinda long term - like 20-30 years from now. I think the future is that everyone will have their own personalized AI assistant on their phone. The internet as it is will be mostly useless, because only robots will be able to wade through the generated shit ocean, and the next generation will see it the way the current one sees TV - unimportant, old, boring, low-entropy data which is not entertaining anymore.
There will be a paradigm shift where our-customers-are-AI apps appear, and most stuff will need an API which lets an AI connect and use those services effortlessly and without error, because who doesn't want to tell their assistant to "send $5 to Jill for the pizza"? There will be money in base AI models you can choose (subscribe to) and in what they can and cannot do for you. It will still be riddled with ads, except now it's your personal assistant who can push any agenda to you.
Operating systems will become a layer under the assistant AI.
You will still talk on/to your phone.
I guess free software will be more important than ever.
Free AI assistants will be available, and the computing power will be there to run them on your phone, but all the shit we've seen with open vs closed source, Linux vs Windows, walled gardens and whatnot will go another round, this time with free, open, public-training-data assistants vs closed-but-oh-so-less-clumsy ones.
Security problems will be plentiful: how do you hide your AI assistant's custom fingerprint? What do authentication and authorization look like for an AI? How much is someone else's personal assistant worth? How do you steal one, or defend your own?
I call these frankenframeworks. The constant drive to DRY and reach the supposed nirvana of code being a DSL of pure business logic leads to more and more implementation details being shoved under the rug to deeper and deeper layers. But for some reason there’s no foresight that any non-trivial change requires changing more than just the business logic and so you have to resort to bolting on config options, weird hooks, mixins, “concerns”, and global state for no reason other than it’s all you can do to reach down the layers.
I've been struggling with wrapping my head around asynchronous programming with callbacks, promises and async/await in JS, however I think it's finally clicking after watching these YouTube videos and creating a document where I explain these concepts as if I'm teaching them to someone else:
Edit... I've been rewatching these videos, reading the MDN docs, the Eloquent JavaScript book, javascript.info, blogs about the subject, etc. This further proves you shouldn't limit yourself to a single resource; instead, fill up the lagoon with water from different sources, if you will.
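As a sketch of the idea that finally made it click for me (a toy function, illustrative only), the same delayed computation can be written in all three styles:

```javascript
// 1. Callback style: the continuation is passed in explicitly.
function doubleLater(x, cb) {
  setTimeout(() => cb(x * 2), 10);
}

// 2. Promise style: the eventual result becomes a first-class value
//    you can store, pass around, and chain with .then().
function doubleLaterP(x) {
  return new Promise((resolve) => setTimeout(() => resolve(x * 2), 10));
}

// 3. async/await: syntactic sugar over promises that reads synchronously.
async function demo() {
  const a = await doubleLaterP(21);   // pauses here without blocking the thread
  return a;                           // demo() itself returns a promise of 42
}
```

Seeing that async/await is just a nicer notation for the promise version, which in turn wraps the callback version, was the moment the three stopped feeling like separate concepts.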
Thank you for the follow-up. I am not trying to push it, but I'm failing to understand how I can express my opinions and experience about governments blocking apps and websites. What would make a comment that describes what happened in Turkey, and that asks people to reconsider their support for app and website blocking in the name of a claimed greater good, a high-quality comment?
This is the second time I've failed at this. If this is not banned speech or an undesired opinion, do you have any tips to improve the quality of my comments on this issue?