So, how many people will actually switch their setups to multi-cloud as a consequence of this? How many will move over to self-hosting? Or will they just do a post-incident report, wave hands around and do nothing?
Because I think it's very much the same way as it is with Cloudflare - while the large vendors aren't always openly hostile, we can just smile and hope that they don't get too keen on reminding us that they're holding us hostage.
I don't see that changing anytime soon. I've personally also used Hetzner, Contabo, Scaleway, Vultr, DigitalOcean, Time4VPS and some other platforms, but when people couple their setups to CF/AWS/GCP/Azure, typically that coupling is hard to get rid of and doing so is hard to justify.
For most companies, I suspect this will actually re-affirm _not_ switching to multi-cloud.
Lots of businesses will be completely forgotten as having had an outage today, because all of their customers were dealing with their own outages, plus outages in dozens of other providers.
GCP and Azure should be running a 10% sale/discount (Coupon code: RAINYDAY) for new accounts during the week of an AWS outage. The bean counters would take note.
> This had the accidental benefit of forcing me to study and discuss the material more (which led to better understanding) so that I could help coach others.
The few times someone tried to explain things to me live, my brain just kinda blanked because of the time pressure and whatnot, and it wasn't very useful for me.
Instead, if I wanted to learn something properly, I'd have to just dig into the material myself and iterate on it. Consulting others worked better over text, in a group chat or forum or whatever.
I could only discuss a topic once I already had a good grasp of it and felt confident about it. At that point it was more for the benefit of others, outside of finding niche cases that I hadn't run into myself.
Same here. People trying to share information I have no interest in? Impossible to learn. My brain finding some interesting topic? Impossible to avoid ingesting all the knowledge about it.
Makes the first ~18 years of your life kind of difficult, since school is mostly the first part with not much of the second. But once you finish school or drop out to start working, being able to do the second part seems like a godsend compared to your peers.
I guess people who are running their own registries like Nexus and build their own container images from a common base image are feeling at least a bit more secure in their choice right now.
Wonder how many builds or redeployments this will break. Personally, nothing against Docker or Docker Hub of course, I find them to be useful.
It's actually an important practice to have a Docker image cache in the middle. You never know if an upstream image gets randomly purged from Docker Hub, and your K8s node gets replaced, and now it can't pull the base image for your service.
> You never know if an upstream image gets randomly purged from Docker Hub, and your K8s node gets replaced, and now it can't pull the base image for your service.
That doesn't make sense unless you have some oddball setup where k8s is building the images you're running on the fly. There's no such thing as a "base image" for tasks running in k8s - there is just the image itself and its layers, which may come from some other image.
But it's not built by k8s. It's built by whatever is building your images and storing them in your registries. That's where you need your true base image caching.
We are using base images, but unfortunately some GitHub Actions pull docker images in their prepare phase - so while my application would build, I cannot deploy it because the CI/CD depends on Docker Hub, and you cannot change where these images are pulled from (so they cannot go through a pull-through cache)…
My advice: document the issue, and use it to help justify spending time on removing those vestigial dependencies on Docker asap.
It's not just about reducing your exposure to third parties who you (presumably) don't have a contract with, it's also good mitigation against potential supply chain attacks - especially if you go as far as building the base images from scratch.
Yea, we have thought about that - I also want to remove most dependencies on externally imported actions in GitHub CI and probably just go back to simple bash scripts. Our actions are not that complicated, and there is little benefit in using some external action to run ESLint versus just running the command inside the workflow directly. Saves time and reduces dependencies - just need to find the time to do it…
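For what it's worth, a minimal sketch of the "just run the command" approach - workflow/job names, the Node version and `npm ci` are assumptions about the project, the point is plain `run` steps instead of a third-party lint action:

```yaml
# Hedged sketch: lint via plain shell steps, no external ESLint action.
# Assumes a Node project with eslint in its devDependencies.
name: lint
on: [push]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx eslint .
```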
Hmm, yea, with a self-hosted runner this could work. Gotta set the dockerd config in the VM before the runner starts, I assume - unfortunately GitHub itself does not allow changing anything for the prepare stage, and it's been a known issue for at least 2 years...
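Something like this, as a sketch - the mirror URL is a made-up placeholder for whatever pull-through cache you run, and note that `registry-mirrors` only applies to Docker Hub pulls:

```bash
# Sketch: point dockerd at an internal mirror before the runner starts.
# "mirror.internal.example.com" is a hypothetical placeholder.
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://mirror.internal.example.com"]
}
EOF
systemctl restart docker
```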
We run Harbor and mirror every base image using its Proxy Cache feature, it's quite nice.
We've had this setup for years now and while it works fine, Harbor has some rough edges.
I came here to mention that any non-trivial company depending on Docker images should look into a local proxy cache. It's too much infra for a solo developer / tiny organization, but it's a good hedge against Docker Hub, GitHub, etc. downtime and can run faster (less ingress transfer) if located in the same region as the rest of your infra.
Edit to add: This might spur on a few more to start doing that, but people are quick to forget/prioritise other areas. If this keeps happening then it will change.
Seems related to size and/or maturity, if anything. I haven't seen any startup less than five years old doing anything like that, but I also haven't seen any huge enterprise not doing it. YMMV.
Pull-through caches are still useful even when the upstream is down... assuming the image(s) were pulled recently. The HEAD request to upstream [to check freshness] will obviously fail, but the software is happy to serve what it has already pulled.
Depends on the implementation, of course: I'm speaking to 'distribution/distribution', the reference. Harbor or whatever else may behave differently, I have no idea.
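For reference, pull-through mode in distribution/distribution is just the `proxy` section of its config - a minimal sketch, with the stock storage path and port:

```yaml
# Minimal distribution/distribution config with pull-through caching enabled.
# Storage path and port are the stock defaults; adjust to taste.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
```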
I think it was put pretty well by describing things as accidental complexity (of which you want as little as possible) and essential complexity, which is inherent to the problem domain you're working in and which there's no way around.
The same thing can sometimes fall into either category - like going for a microservices architecture when you need to serve about 10'000 clients in total, versus actually needing to serve a lot of concurrent requests at any given moment.
> inherent to the problem domain you're working in and which there's no way around
I'd phrase it as reasonable trade-offs, taken in order to support customers/users and/or to sell products.
> going for a microservices architecture when you need to serve about 10'000 clients
So far, the only microservices use case I'm aware of is shipping/releasing fast at the cost of technical debt (a non-synchronized master).
As I understand it, this is to a large degree due to Git's shortcomings, with no more efficient/scalable replacement in sight. Or can you explain further use cases with a technical necessity?
1. When you have a system where each component has significant complexity, to the point where, as a monolith, you'd have a 1M SLoC codebase that would be slow to work with - slow to compile, slow to deploy, slow to launch, and eating a lot of resources. At that point, chances are the business domain is so large that splitting it into smaller pieces wouldn't be out of the question.
2. Sometimes you have vastly different workloads: a bunch of parts of the system (maybe even the majority) that are just CRUD, and a small part that needs to deal with message queues, or digitally sign documents, or generate reports, or do PDF/Word/Excel/whatever processing. You could do this with a modular monolith, but sometimes it's better to keep your project's dependencies clean, especially with subpar libraries you're forced to use, so you can at least put all of the bullshit in one place. This also applies when the load/stability of that one eccentric part is in question.
3. The tech stack might also differ a whole bunch for some of those use cases. For example, if you need to process all sorts of binary data formats (e.g. satellite data), do specific kinds of number crunching, or interact with LLMs, a lot of the time Python will be a really good solution, while it's not what you might be using throughout the rest of your stack. So you pick the right tool for the job and keep it as a separate service.
4. The good old org chart: sometimes you'll just have different teams with different approaches working on different parts of the business domain - you already said that, but Conway's law is very much a thing that'd be silly to fight against all that much, because then you'd end up with an awkward monorepo, and the tooling to make working with it easy might just not be available to you.
> We have the DevOps knowledge on our team to go to containers, prepackaged dev environments, etc.
This is lovely to strive towards and going all in on containers (albeit not with Kubernetes) has worked out great for where I work; their resistance to the approach sucks, I'm sorry you have to deal with that. Hope it works out in the end.
Never underestimate the impact of convenience. At the same time, I'm so broke that any attackers could just look at my mostly empty wallet and weep (or run automated attacks and extract what little there is in the case of compromise).
> Both Free Pascal and QB64 are maintained and under relatively-active development, with their most recent releases in 2021… but they are mostly ignored because they expose arcane languages that most people have no interest in these days.
Touché. Personally I think Pascal (the FPC/Lazarus variety) was pretty cool, straight up one of the best ways to easily do cross platform GUI apps, something of that old RAD fame: https://www.lazarus-ide.org/index.php
I wish someone would prove me wrong, what are the best modern cross-platform options for native GUI?
At the same time, for everything else in similar circumstances (statically compiled executables, relatively safe to code and use), Go has replaced it for me, in great part due to the ergonomics of the language, but also just how batteries-included the standard library is.
No mentions of EAV/OTLT, so I will use this opportunity to have a crashout about it, and about how in some companies/regions it's for whatever reason overused to the point where you'll see it in most projects, and it's never nice to work with: https://softwareengineering.stackexchange.com/questions/9312...
If I have to work with one more "custom field" or "classifier" implementation, I am going to cry. Your business domain isn't too hard to model; if you need 100 different "entities" as a part of it, then you should have at least 100 different tables, instead of putting everything into an ill-fitting grab bag. Otherwise you can't figure out what is connected to what just by looking at the foreign keys pointing to and from a table, because those simply don't exist.

Developers inevitably end up creating shitty polymorphic links with similarly inevitable data integrity issues, and also end up coupling the schema to the back end, so you don't get "table" and "table_id" but rather "section" and "entity_id", meaning you can't read the schema without also reading the back end code. Before you know it, you're not working with the business domain directly, but it's all "custom fields this" and "custom fields that", and people end up tacking on additional logic, like custom_field_uses, custom_field_use_ids, custom_field_periods, custom_field_sources and god knows what else. If I wanted to work with fields that much, I'd go and work on a farm.

Oh, you're afraid of creating 100 tables? Use codegen; even your LLM of choice has no issues with that. Oh, you're afraid you'll need to make blanket changes across them and will forget something? Surely you're not above a basic ADR, literally putting a Markdown file in a folder in the repo. Oh, you're afraid that something will go wrong in those 100 migrations? How is that any different from building literally most of your app around a small collection of tables and having fewer migrations that affect pretty much everything?

Don't even get me started on what it's like when the data integrity issues and refactorings gone bad start. Worst of all, people love taking that pattern and putting it literally everywhere; it feels like I'm taking crazy pills, and nobody seems to have an issue with most of the logic in the app revolving around CustomFieldService.
Fuck EAV/OTLT, thanks for coming to my rant. When it comes to bad patterns, it's very much up there, alongside using JSON columns in a relational database for data that you can model and predict and put into regular columns, instead of reserving JSON for highly dynamic data.
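To make the rant concrete, here's the shape I mean versus the boring alternative - all table and column names invented for illustration:

```sql
-- The EAV/OTLT grab bag: stringly typed, with links the schema cannot enforce.
CREATE TABLE custom_fields (
    id        BIGINT PRIMARY KEY,
    section   TEXT   NOT NULL, -- which "entity" this row belongs to, as a string
    entity_id BIGINT NOT NULL, -- points somewhere, but no FK can express where
    name      TEXT   NOT NULL,
    value     TEXT             -- everything squeezed into text
);

-- The boring alternative: one table per concept, relationships in the schema.
CREATE TABLE customers (
    id   BIGINT PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE invoices (
    id          BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL REFERENCES customers (id),
    issued_on   DATE   NOT NULL,
    total_cents BIGINT NOT NULL
);
```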
> Excessive View Layer Stacking
> In larger data environments, it’s easy to fall into the trap of layering views on top of views. At first, this seems modular and organized. But over time, as more teams build their own transformations on top of existing views, the dependency chain becomes unmanageable. Performance slows down because the database has to expand multiple layers of logic each time, and debugging turns into an archaeological dig through nested queries. The fix is to flatten transformations periodically and materialize heavy logic into clean, well-defined base views or tables.
I will say that this is nice to strive for, but at the same time I much prefer having a lot of views over SQL dynamically generated by the application (see: MyBatis XML mappers), because otherwise, with complex logic, it's impossible to predict exactly how your application will query the DB, and you'll need to run the app locally with debug logging on to see the actual SQL - but god forbid you have noisy DB querying or an N+1 problem somewhere: log spam for days, so unpleasant to work with. It's even more fun when people start nesting mappers and fucking around with aliases. Just give me MongoDB at this point, it's web scale.
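For what the quoted fix can look like in practice, a hedged sketch in PostgreSQL syntax with invented names - replacing a chain of nested views with one materialized result:

```sql
-- Instead of report_view -> enriched_view -> cleaned_view -> sales,
-- materialize the heavy logic once rather than re-expanding it per query.
CREATE MATERIALIZED VIEW daily_sales AS
SELECT sale_date, region, SUM(amount_cents) AS total_cents
FROM sales
GROUP BY sale_date, region;

-- Re-run the transformation on a schedule, or when the data changes enough.
REFRESH MATERIALIZED VIEW daily_sales;
```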
Oh, that's really cool, the channel seems like it has a lot of other nice videos as well!
I wish there were a similarly nice video about compressors - in Audacity you have all of these settings, and most are also present in OBS and other software.
And I feel like visualizations can really help with understanding them, like https://codepen.io/animalsnacks/full/VRweeb - alongside maybe something that lets you loop an audio sample and hear how different it sounds with each change. There obviously already are some videos and discussions and plenty of material out there, but I love a good visualization!
I have watched 10+ hours of lessons on compressors and I still don't hear it. I understand most concepts but don't use much compression besides side-chaining and the built-in Ableton glue compression.
I know what you mean - took me a while too. I understood what it does, how the parameters affect what it does and the mechanics of it, but struggled to "hear" it.
I even bought a cheap $25 Behringer guitar compressor pedal to see what I was missing, but it didn't help - later I realized that my guitar playing isn't repeatable enough, so that isn't the way to go.
What made it click was an accidental deviation from my normal workflow - I recorded some DAW-less techno jam stuff using GarageBand (normally I just copy the wavs from my little Tascam). While playing it back, I noticed there's a master compressor and started fiddling with it. With repetitive music like techno and house, the difference between no compression and full compression suddenly becomes very apparent (although still somewhat subtle compared to other FX commonly used in music production). It also helped that my recording had no compression on it, comprising just a raw drum machine and a mono-synth.
I work with audio professionally, and it took me years to get a feel for dynamics processing. It wasn't until I sat down with a nice compressor (in my case a Neve 453) and did a lot of experiments, then took what I learned from those experiments to my live gigs.
IME, you can get a feel for what the threshold and ratio are doing pretty quickly, and that's probably enough to be useful - in broad strokes, that's all you need to make them "work". If you have the attack set very fast, you'll start to hear the signal get a bit muddy.
But attack and release (especially on a lot of plugins) are a bit funky, and I still can't tell what the knee is doing unless I move the knob around. I own a couple of clones (76kt or gold comp 2a) and a couple of Distressors, and they sound different, but I still need to play around with them to coax them into what I think they should sound like.
That's normal. For engineers and mixers, compression is one of the more difficult phenomena to hear and build an intuition for.
My advice for learning is to totally overdo the compression on a drum track (snare, kick, hats, etc) and play with the settings. Ideally these drums are uncompressed.
Using a 4:1 ratio, lower your threshold all the way down until you're getting > 10 dB of gain reduction and then start playing with the attack time. What do you hear when the attack time is at 0 ms? What do you hear when you start to slow the attack time? 5 ms? 10 ms? 30 ms? 100 ms?
Then do the same with your release time. Start with it set as fast as it will go and then start to slow it down. What do you hear happening?
Once you have the attack and release times feeling good, raise your threshold so that the compression is less heavy-handed (unless you like it that way). Set the threshold where the level of compression feels good to you.
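If it helps to see the moving parts of that exercise, here's a toy feed-forward compressor gain computer in Python - not any particular plugin's algorithm, just the textbook threshold/ratio/attack/release interaction, with made-up defaults:

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=100.0):
    """Toy compressor. x: numpy array of samples in [-1, 1], sr: sample rate."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # smoothing coeff while level rises
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))  # smoothing coeff while level falls
    env_db = -120.0  # running level estimate, in dB
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level_db = 20.0 * np.log10(max(abs(s), 1e-6))
        coeff = atk if level_db > env_db else rel
        env_db = coeff * env_db + (1.0 - coeff) * level_db
        over_db = env_db - threshold_db
        # Above threshold, a 4:1 ratio removes 3/4 of the overshoot.
        gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

Rendering a drum loop through this with the threshold slammed down and sweeping attack_ms is roughly the same exercise as above, minus the nice ears.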
It is pretty normal not to hear compression on material that is already quite compressed. But you will hear it on very dynamic vocals or a snare drum, I am pretty sure.
> Supposedly the performance of Qwen-coder is comparable to the likes of Sonnet 4. If I invest in a homelab that can host something like Qwen3, I'll recoup my costs in about 20 months without having to rely on Anthropic.
Personally, I'd look up the Cerebras Code subscription. It's cut down my reliance on paying per token by about 80%, since the model is good enough for most development tasks, the rate limits are such that I never hit them in a day, and it's faster than anything else out there.
Lots of folks also just explore new models on OpenRouter as they come out, albeit it doesn't seem to have caching support, so it can get expensive.
Aside from that, self-hosting can be worth it, but you need lots of memory and beefy compute to get good performance without quantizing things too aggressively. There's a really big difference between the 30B and 480B versions of Qwen Coder, and while the smaller models are getting better, it feels like there are diminishing returns there.
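The rough arithmetic for the weights alone (ignoring KV cache and runtime overhead) - parameter counts and quantization widths here are illustrative:

```python
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    # billions of parameters * bytes per parameter ~= gigabytes for the weights
    return params_billions * bits_per_weight / 8

print(weight_gb(30, 4))   # a 30B model at 4-bit: ~15 GB
print(weight_gb(480, 4))  # a 480B model at 4-bit: ~240 GB
```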