I've got this feeling that GitHub's endless feature creep has begun to rot its core features. Until only recently, the PR review tab performed so poorly it was practically useless for large PRs.
Bets on where everything/everyone goes next? Will it be like the transition from SourceForge to GitHub, where the center of gravity moves from one big place to another big place? Or more like Twitter, where factions split off to several smaller places?
Personally I doubt we will see a huge centralized place like GitHub again. Trust in American companies, and in big companies in general, has been eroded. I think it would be for the better if things split up, and hopefully more devs decide to self-host with tools like Forgejo.
> Personally I doubt we will see a huge centralized place like GitHub again.
I can almost guarantee we will. Consumers love simplicity through centralization.
> Trust in American companies, and big companies in general has been eroded.
Where are you seeing that? I've seen general dislike of large corpos forever, and anti-US sentiment is more common abroad, in places like Europe, which has never 'liked' US culture and companies.
I'm all for Forgejo or even a simple forge without any namespaces (I abandoned GitHub when MS acquired them). But the major issue with these alternative platforms is the discoverability of projects on them. GitHub doesn't have any noteworthy feature in this regard either, but it has the first-mover advantage. The users unfortunately ceded that advantage to them.
Many forges are working on a federated development infrastructure. That's great. But I believe that for these platforms to really become popular, we must solve the problem of federated project search and discovery as well. Unfortunately, nobody seems to be paying much attention to this area.
I need an easy way to host a nice UI for Mercurial: something rock-solid stable and zero-maintenance.
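Mercurial does ship a minimal built-in web UI (hgweb) that gets partway there, for what it's worth. A sketch, assuming a repo at ~/repos/project:

    # Serve the repo's built-in web UI (hgweb) on port 8000:
    cd ~/repos/project
    hg serve --port 8000

For something permanent, the same hgweb can run as a WSGI/CGI app behind nginx; "hg help hgweb" has the details. Whether it counts as "nice" is debatable.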
I've been pushing my repos to a random $5 server I have for years now. It's been rock solid. But I have no UI. I can push and pull and it supports exactly 1 user (me) and it's never gone down because I just never touch the server. I did go the extra mile to set up automatic backups but that's it.
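For anyone who wants to replicate that, the whole setup is roughly this (assuming git; the hostname and paths are placeholders, and the hg equivalent is nearly identical):

    # On the server: create a bare repo to push to.
    ssh me@myserver 'git init --bare repos/project.git'

    # Locally: add it as a remote and push over plain SSH.
    git remote add backup me@myserver:repos/project.git
    git push backup main

No daemon, no UI, nothing to maintain; sshd is the whole "server".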
> I've got this feeling that the endless feature creep of Github has begun to cause rot of core essential features.
Tangential, but... I was so excited by their frontend, which was slowly adopting web components, until, after the Microsoft acquisition, they started rewriting it in React.
In essence, GitHub is still pretty much the same. There are products that suffer from feature creep, but I wouldn't say GitHub is one of them.
I can't say that I'm having issues with the performance either. I work with large PRs too (especially when there are vendored dependencies), but I've never run into a show-stopping performance issue that would make it "useless".
> In essence, GitHub is still pretty much the same. There are products that suffer from feature creep, but I wouldn't say GitHub is one of them.
I think we're using two different products. Off the top of my head, I can think of GitHub Projects (the Trello-like feature), GitHub Marketplace, GitHub Discussions, the complete revamp of the file viewer/editor, and all the new AI/LLM-based stuff baked into yet another feature known as Codespaces.
> I can't say that I'm having issues with the performance either. I work with large PRs too
The same in the sense that it doesn't get in the way during my daily work with it. Yes, they've added features, but that doesn't mean existing features got removed or started getting in the way.
I often miss entire files in the review process because the review page collapses them by default and makes them hard to spot. If they're going to be collapsed by default, at least make that very visible. This is critical for security too: you don't want people sneaking in code.
HN sure has changed. A few years ago there would be at least a dozen comments about installing Gitlab, including one major subthread started by someone from Gitlab.
In GitLab, yes (well, two lines: login, then push). In Forgejo, there is no CI/CD token that gives you scoped access to the built-in container registry. You must create a long-lived token and add it as a secret to the repo you want to push from.
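Concretely, the difference in the job step looks something like this (the registry host and secret name below are made up; CI_JOB_TOKEN and CI_REGISTRY are GitLab's predefined CI variables):

    # GitLab CI: ephemeral, scoped, provided per job.
    docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"

    # Forgejo Actions: REGISTRY_TOKEN is a long-lived token you created
    # by hand and stored as a repository secret.
    echo "$REGISTRY_TOKEN" | docker login forgejo.example.com -u me --password-stdin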
I've used self-hosted GitLab a bunch at work, it's pretty good there still. In my opinion GitLab CI is also a solid offering, especially for the folks coming from something like Jenkins, doubly so when combined with Docker executors and mostly working with containers.
I used to run a GitLab instance for my own needs, however keeping up with the updates (especially across major versions) proved to be a bit too much and it was quite resource hungry.
My personal stack right now is Gitea + Drone CI + Nexus, though I might move over to Woodpecker CI in the future and also maybe look for alternatives to Nexus (it's also quite heavyweight and annoying to admin).
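For anyone curious, the whole stack can be stood up with something like this (a sketch from memory, so double-check the env vars against the Drone docs; the OAuth/RPC secrets and hostname are placeholders):

    docker network create forge

    docker run -d --name gitea --network forge \
      -p 3000:3000 -p 2222:22 -v "$PWD/gitea:/data" gitea/gitea:latest

    # Drone talks to Gitea via an OAuth app you create in Gitea's UI.
    docker run -d --name drone --network forge -p 8080:80 \
      -e DRONE_GITEA_SERVER=http://gitea:3000 \
      -e DRONE_GITEA_CLIENT_ID="$OAUTH_ID" \
      -e DRONE_GITEA_CLIENT_SECRET="$OAUTH_SECRET" \
      -e DRONE_RPC_SECRET="$RPC_SECRET" \
      -e DRONE_SERVER_HOST=drone.example.com \
      -e DRONE_SERVER_PROTO=http \
      drone/drone:2

    docker run -d --name nexus -p 8081:8081 -v "$PWD/nexus:/nexus-data" sonatype/nexus3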
Having tried GitLab, I found it a very poor product, almost unmaintainable as a self-hosted option. It reminds me of the Eclipse IDE: crammed with every unnecessary feature/plugin while the basic features are either very slow or buggy.
At this point GitLab is just there because capturing even a small X% of a huge million/billion-dollar market is good enough to sustain a company, even if the product is almost unusable.
> Instead of selling products based on helpful features and letting users decide, executives often deploy scare tactics that essentially warn people they will become obsolete if they don't get on the AI bandwagon. For instance, Julia Liuson, another executive at Microsoft, which owns GitHub, recently warned employees that "using AI is no longer optional."
So many clowns. It's like everyone's reading from the same script/playbook. Nothing says "this tool is useful" quite like forcing people to use it.
> It's like everyone's reading from the same script/playbook.
I'd assume that many CEOs are driven by the same urge to please the board. And depending on your board, there might be people on it who spend many hours per week on LinkedIn, see all the success stories around AI, and maybe experienced something firsthand.
Good news: from my estimate, it's only a phase. Like when blockchain hit, and everyone wanted to be involved. This time - and that worries me - the resources involved are more expensive, though. There might be a stronger incentive for people to "get their money back". I haven't thought about the implications yet.
People say this a lot: "please the board". But why would so many boards be hype-driven while CEOs are rational? It might just as well be the C-suite themselves who are the source of it.
It's not like blockchain. Blockchain legitimately made things slower and less useful for dubious benefits.
AI is more like the early web. There is definite value that people can see, but no one really knows how to monetize beyond the incredibly obvious 'sell people access to it', so everyone is throwing spaghetti at the wall waiting for it to stick. When someone gets it to stick, there will be a giant amount of money coming at them, but until then there will be a ton of people with sauce all over their faces looking like idiots.
Upvoted to save you from the negatives because I too am tired of seeing the comparison to blockchain. I'm not sure where it even comes from other than just being another recent hype train people remember, but blockchain settled into a relatively tiny niche. The most basic deployment of LLMs / AI by comparison is instantly, obviously more useful than that.
As soon as it starts consistently returning factual, confirmable answers, I'll use it. I just had to fix something a co-worker fucked up by asking AI how to do it. The responses are so confidently wrong it's like watching Kash Patel tell me that Jeffrey Epstein killed himself.
I agree. Overconfidence and sycophancy are the real problem. This should be the focus of development energy. The models are already capable; now they need to be reliable.
People are biased toward using tools they are familiar with. The idea that people would use a tool simply because it's useful is false. In order to avoid being disrupted, extra effort needs to be made to get people to learn new tools.
A few people will use said new tool. If they start writing software that is sustainably better for half the cost, eventually others will take notice. Early adopter sort of thing. Switching takes energy, yes, so many will be resistant. But when you find yourself the last person doing things the old way and it's taking more time and effort... It might be time to spend the effort and get with the times.
Not necessarily saying this AI is worth switching to yet. It could fizzle out, we'll see. But I'm saying if it's truly worth its salt, it'll take off because it's good, rather than die despite being good.
Things this isn't true for are things that are only marginally better: if A is 5% better but B is 95% more popular, A might yet die because it's not worth switching to. AI is claiming a lot more than 5% gains, though.
From the CEO's article referenced in that post [1]:
> the rise of AI in software development signals the need for computer science education to be reinvented as well.
> Teaching in a way that evaluates rote syntax or memorization of APIs is becoming obsolete
He thinks computer science is about memorizing syntax and APIs. No wonder he's telling developers to embrace AI or quit their careers if he believes the entire field is that shallow. Not the best person to take advice from.
It's also hilarious how he downplays fundamental flaws of LLMs as something AI zealots, the truly smart people, can overcome by producing so much AI slop that they turn from skeptics into ...drumroll... AI strategists. lol
If you deliberately decide to use a system that introduces a single point of failure into a decentralised system, you have to live with the consequences.
From their point of view, unless they start losing paying users over this, they have no incentive to improve. I assume customers are happy with the SLA; otherwise, why use GitHub?
That's because people can't handle speed. With a natural delay, they could cool down or at least become more detached. Society needs natural points where people are forced to detach from what they do. That's one reason why AI and high-speed communications are so dangerous: they accelerate what we do too quickly to remain balanced. (And I am speaking in general here, of course there will be a minority who can handle it.)
We have post-its with file names on a wall in the office. You take one down if you edit the file, and put it back up when you're done. Easy.
Though I wish I were entirely kidding. ~12 years ago or so, we did that whenever one of two parallel development teams had to modify a message of the network protocol, to avoid incompatibilities and merge problems.
Mind you, these were SVN merges. I can't verbalize my feelings about SVN merges except by a mixture of laughing and groaning in pain, as if you'd stubbed your toe in a painful but entirely funny way.
What is this eternal meme about merges in SVN being harder than in other tools? Git used literally the same merge algorithm, even if that has changed a bit since then, and merge conflicts are not something a tool can just magically make disappear. If you want concurrent edits (the C in CVS), conflicts come in the same package. Various algorithms can supply their own dose of magic, but they're more similar than different (minus a few special cases such as rerere in git).
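(rerere being git's "reuse recorded resolution" cache, for anyone who hasn't run into it:)

    # Once enabled, git records how you resolve each conflict and
    # silently replays that resolution the next time the same
    # conflict shows up (e.g. on repeated rebases of a long branch).
    git config rerere.enabled true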
My interpretation within that company: you know this new idea of "if it's painful, do it more"? People in that company didn't do that in the SVN days or earlier, because merges were painful. Thus, merges filled a sprint if they had to be done. This made sense if you came from CVS or nothing, tbh.
Git in turn made branches easier, which made merges more prevalent, and developers overall learned to merge more, and merge more often.
That doesn't make any sense to me. Why would you merge more often if it takes less time to create a new branch?
What types of merges are we talking about? Surely it must be where you merge in changes from a main branch to your local branch, which in the case of long-lived branches will be the more common merge. Creating new branches isn't even part of that workflow.
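i.e. the routine keep-my-branch-current merge, something like (branch name is made up):

    # Fold main's latest changes into a long-lived feature branch:
    git fetch origin
    git switch my-feature
    git merge origin/main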
Keeping a manual process around for when the tech can't keep up is genuinely a good hack in any kind of engineering. Physical lockout/tagout on industrial machines, for instance. Or passing paper notes/wooden blocks in air traffic control towers to see who's responsible for what even if the computers go down.
You joke, but when I was doing my start-up we made good money on the side from monitoring websites to detect when the designers had pushed regressions to the live site. We would keep track of change requests that were filed and resolved, then scripts would monitor the sites to see if any earlier changes had been backed out. (Getting the designers to use version control was considered to be in the "too hard" bucket. This was back in the mid 2000s.)
That's a single point of failure. If you email code changes around and use an email client that copies everything offline, then the history of your code base is distributed across all of your developers' laptops.
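That mail-based flow is still first-class in git itself, for what it's worth (the list address is a placeholder, and send-email needs SMTP config first):

    # Sender: turn the latest commit into an emailable patch...
    git format-patch -1 HEAD
    git send-email --to=dev-list@example.com 0001-*.patch

    # Receiver: apply it with authorship and history intact.
    git am 0001-*.patch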
I found this hilariously confusing when I first heard about DVCSs.
I'm like, ok... So they're "distributed"... how do I share the code with other people? Oh... I push to a central repository? So it's almost exactly like SVN? Cool cool.
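To be fair, the central repo is a convention, not a requirement; you can pull straight from a colleague's clone over SSH (host/path made up):

    # No server involved; her laptop is the "remote".
    git remote add alice alice@her-laptop:work/project
    git fetch alice
    git merge alice/main

In practice everyone converges on one blessed repo anyway, because it's simpler.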
Status page says "Incident with Pull Requests". Pull requests status is listed as "Normal". Status text says issue with degraded performance for Webhooks and Issues, does not mention Pull Requests.
As someone who is partially responsible for supporting GitHub at a very large organization: no, it isn't. At least not until the incident is at least 30 minutes old, if ever.
I worked there for 3 years, and yes, GitHub development happens on github.com. Of course there are ways to deploy and roll back changes while the site is down, but that's very unusual. The typical flow happens on github.com and uses the regular primitives everybody uses: PRs, CI checks, etc.
The pipeline for deploying the monolith doesn't happen in GitHub Actions, though, but in a service based on Jenkins.
Fun fact: playbooks for incidents used to be hosted in GitHub too, but we moved them after an incident that made it impossible to access them while it lasted.
I don't remember clearly where we moved them. It was probably to something owned by Google (because GitHub uses Google Workspaces) or Microsoft (for obvious reasons).
If GitHub Enterprise Server is anything to go by, they build (almost) everything for containers, and the entire site is hosted in containers managed by Nomad. So there are probably lots of older images around that they can fall back on if the latest image of any container causes problems.
How they would deploy the older container, I don't know.
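If they do run Nomad, rolling back can be close to a one-liner, something like this (the job name is a guess on my part):

    nomad job history github-web    # list previous job versions
    nomad job revert github-web 41  # redeploy a known-good version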
A lot of this is guesswork, I don't work for them or anything. And I know that GHES, in the way my employer manages it, is very unlike the way GitHub hosts github.com, so everything I've assumed could be wrong.
I estimate that on some days an outage like this could ultimately save some businesses money.
There's a lot of cowboy development going on out there. Why not take this opportunity to talk to your customers for a bit? Make sure you're still building the right things.
I've never worked somewhere I couldn't email a customer as long as the team was CC'd. This is a bit of a circular problem because if you don't get exposed to the customer in some capacity you'll never get good at working with them.
If the business is afraid to let you email the customer, you might need to work on your communications skills and go through some intentional demonstration efforts. For example, "Good morning <boss>, here's a draft of what I think we should send <CTO's name @ customer> regarding their feedback on the last build.".
That's literally all it takes to get into the game. Don't ask for permission to write the draft because then your managers will think it's gonna be this big ordeal and they'll definitely say no.
At a B2C, I would not email a customer directly without sign-off. We have marketing teams, research teams, comms, customer support, etc. I would be stepping on so many toes, and risking brand reputation, if I were to interact with our customers.
Radicle.xyz fixes this with COBs (Collaborative Objects). They're stored inside your git repo as normal objects, and benefit from its p2p mechanism as well. It's the true sovereign forge.
Don't wanna be spreading fake news, but I wonder if this is related to a Cloudflare issue? I've been unable to log in to Cloudflare for the past ~30 minutes. And: https://www.cloudflarestatus.com/