> ... then we can make C safe without any technical changes just by adding some language to the standard saying that C programmers are obliged to ensure that their code maintains a certain list of invariants.
In Rust you can use #![forbid(unsafe_code)] to totally forbid unsafe code in your codebase. Rust also checks memory safety at compile time; these are strong guarantees that ensure that if the code compiles, it is memory safe.
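A minimal sketch of what that looks like in practice (the attribute is the real one; the rest is filler code). `forbid` is stricter than `deny` in that it can't be re-allowed further down in the crate:

```rust
// Crate-level attribute: any `unsafe` block or `unsafe fn` anywhere in this
// crate is now a hard compile error, and unlike `deny` it cannot be overridden
// later with #[allow(unsafe_code)].
#![forbid(unsafe_code)]

fn main() {
    let xs = vec![1, 2, 3];
    // Ordinary safe code compiles and runs as usual.
    println!("sum = {}", xs.iter().sum::<i32>());

    // Uncommenting the next line would make the whole crate fail to compile:
    // unsafe { println!("{}", *xs.as_ptr()) };
}
```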
I'm aware of that, but I'm responding to the original claim that "Rust makes the same guarantees regardless of the unsafe keyword" (see https://news.ycombinator.com/item?id=46262774)
Ah. I agree with you. When unsafe is used, the borrow checker cannot check for memory safety; the programmer has to provide the guarantees by making sure their code does not violate memory safety, similar to programming in C.
But unsafe Rust is still far better than C because the unsafe keyword is visible: one can grep for it and audit the unsafe parts. Idiomatic Rust also expects the programmer to provide a comment explaining why that unsafe code is sound.
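A small sketch of that convention (the function and its safety argument are made up for illustration; Clippy's undocumented_unsafe_blocks lint can even enforce such comments):

```rust
// The `unsafe` keyword is greppable, and the SAFETY comment records the
// argument for why this particular use is sound.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `bytes` is non-empty, so index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```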
I think making things more explicit with "unsafe" is an advantage of Rust, but I think "far better" is a bit of an exaggeration. In C you need to audit pointer arithmetic, malloc/free, casts and unions. If you limit pointer arithmetic to a few safe accessor functions and have documented lifetime rules, this is also relatively simple to do (more difficult than "grep" but not much). Vice versa, if you use a lot of "unsafe" in Rust, or in complicated ways, it can also easily become impossible to guarantee safety. In contrast to what people seem to believe, the bug does not need to be inside an unsafe block (a logic error outside can cause the UB inside unsafe, or a violation of one of Rust's invariants inside unsafe can allow UB outside of unsafe) and can even result from the interaction of unsafe blocks.
The practical memory safety we see in Rust is much more the result of trying hard to avoid memory safety issues, and requiring comments for unsafe blocks is part of that culture.
> the bug does not need to be inside an unsafe block
The argument is that while you wouldn't in fact fix the bug by modifying the unsafe code block, the unsafe code block was wrong until you fixed the other code.
For example, imagine if a hypothetical typo existed inside RawVec (the implementation details of Vec) causing the growable array to initially believe it has 1 element inside it, not 0, even though no space has been allocated and nothing was stored. That's safe code, and of course the correct fix would be to change it from 1 to 0, easy. But the type is arguably broken because the unsafe code would dereference a pointer that isn't valid, trying to reach that non-existent value. It would be insane, perhaps even impossible, to modify that code to somehow handle the "We wrote 1 instead of 0" mistake, when you could instead fix the bug - but that is where the theoretical fault lies.
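A toy version of that scenario (not the real RawVec, just a sketch to make the point concrete): the typo is in safe code, the undefined behaviour happens in the unsafe block, and the sensible fix is still the one-character change in the safe code.

```rust
// Deliberately broken, for illustration: running this is undefined behaviour.
struct TinyVec {
    ptr: *const i32,
    len: usize,
}

impl TinyVec {
    fn new() -> Self {
        // The "typo": a correct implementation would start with len: 0.
        TinyVec { ptr: std::ptr::null(), len: 1 }
    }

    fn get(&self, i: usize) -> Option<i32> {
        if i >= self.len {
            return None;
        }
        // SAFETY (claimed): every index below `len` points at an initialized
        // element. That invariant was broken by the safe code in `new()`, so
        // this read is UB even though the fix belongs up there, not here.
        Some(unsafe { *self.ptr.add(i) })
    }
}

fn main() {
    let v = TinyVec::new();
    // A perfectly innocent-looking call from safe code.
    println!("{:?}", v.get(0));
}
```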
If the security flaws are in the training data, AI will be able to detect them; stuff like OWASP is definitely in the training data. So in a way this is like more intelligent fuzzing, which is a fantastic tool to have in your toolbox. But I doubt AI will be able to detect novel security flaws that are not included in its training data.
Patents, trademarks, copyright, deeds and other similar concepts are part of what makes capitalism what it is. Without them capitalism will not work, because they are the mechanisms that enforce private property.
Good luck with that. When 3/4 of the world laughs at your patent, what is the point of patents? IP only works when everyone agrees to it. When they don't, it's just a handicap on the ones who do, one that benefits nobody.
Part of the problem is that Codeberg/Gitea's API endpoints are well documented and there are bots that scrape for Gitea instances. It's similar to running SSH on port 22 or hosting popular PHP forum software: there are always automated attacks by different entities simply because they recognize the API.
I don't agree with this at all. I think the reason Github is so prominent is the social network aspects it has built around Git, which created strong network effects that most developers are unwilling to part with. Maintainers don't want to lose their stars and the users don't want to lose the collective "audit" by the Github users.
Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering. Developers are more likely to use a repo that has more stars than its alternatives.
I know that the code should speak for itself and one should audit their dependencies and not depend on Github stars, but in practice this is not what happens; we rely on the community.
These are the only reasons I use GitHub. The familiarity to students and non-developers is also a plus.
I have no idea what the parent comment is talking about with a "well-formed CI system." GitHub Actions is easily the worst CI tool I've ever used. There are no core features of GitHub that haven't been replicated by GitLab at this point, and in my estimation GitLab did all of it better. But, if I put something on GitLab, nobody sees it.
From what I gather it's that GH Actions is good for easy scenarios: single line building, unit tests, etc. When your CI pipeline starts getting complicated or has a bunch of moving parts, not only do you need to rearchitect parts of it, but you lose a lot of stability.
And this is the core problem with the modern platform internet. One victor (or a handful) takes the lead in a given niche, and it becomes impossible to get away from them without great personal cost, whether literal, moral, or in labor, and usually a combo of all three. And then that company has absolutely no motivation at all to prioritize the quality of the product, merely to extract as much value from the user-base as possible.
Facebook has been on that path for well over a decade, and it shows. The service itself is absolute garbage. Users stay because everyone they know is already there and the groups they love are there, and they just tolerate being force-fed AI slop and being monitored. But Facebook is not GROWING as a result; it's slowly dying, much like its aging userbase. But Facebook doesn't care because no one in charge of any company these days can see further than next quarter's earnings call.
This is a socio-economic problem; it can happen with non-internet platforms too. It's why people end up living in cities, for example. Any system that has addresses, accounts or any form of identity has the potential for strong network effects.
Github became successful long before those 'social media features' were added, simply because it provided free hosting for open source projects (and free hosting services were still a rare thing back in the noughties).
The previous popular free code hoster was Sourceforge, which eventually entered what's now called its "enshittification" phase. Github was simply in the right place at the right time to replace Sourceforge, and the rest is history.
There are definitely a few phases of Github, feature- and popularity-wise.
1. Free hosting with decent UX
2. Social features
3. Lifecycle automation features
In this vein, it doing new stuff with AI isn't out of keeping with its development path, but I do think they need to pick a lane and decide if they want to boost professional developer productivity or be a platform for vibe coding.
And probably, if the latter, fork that off into a different platform with a new name. (Microsoft loves naming things! Call it 'Codespaces 365 Live!')
Technically so was BitBucket, but it chose Mercurial over Git initially. If you are old enough you will remember articles comparing the two, with Mercurial getting slightly more favorable reviews.
And for those who don’t remember SourceForge, it had two major problems in DevEx: first you couldn’t just get your open source project published. It had to be approved. And once it did, you had an ugly URL. GitHub had pretty URLs.
I remember putting up my very first open source project back before GitHub and going through this huge checklist of what a good open source project must have. Then seeing that people just tossed code onto GitHub as is: no man pages, little or no documentation, build instructions that resulted in errors, no curated changelog, and realizing that things were changing.
Github was faster than BitBucket and it worked well whether or not JavaScript was enabled. I have tried a variety of alternatives; they have all been slower, but Github does seem to be regressing as of late.
And GitHub got free hosting and support from Engine Yard when they were starting out. I remember it being a big deal when we had to move them from shared hosting to something like 3 dedicated supermicro servers.
> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality, and like it or not are now part of modern software engineering.
I hate that this is perceived as generally true. Stars can be farmed and gamed, and the value of a star does not decay over time. Issues can be automatically closed, or answered with a non-response and closed. Follower count is a networking/platform thing (flag your significance by following people with significant follower numbers).
> Developers are more likely to use a repo that has more stars than its alternatives.
If anything, star numbers reflect first mover advantage rather than code quality. People choosing which one of a number of competing packages to use in their product should consider a lot more than just the star number. Sadly, time pressures on decision makers (and their assumptions) means that detailed consideration rarely happens and star count remains the major factor in choosing whether to include a repo in a project.
So number of daily/weekly downloads on PyPI/npm/etc?
All these things are a proxy for popularity, and that is a valuable metric. I have seen projects with amazing code quality, but if they are not maintained they eventually stop working due to updates to dependencies, external APIs, the runtime environment, etc. And I have seen projects with meh code quality but so popular that every quirk and weird issue had a known workaround. Take ffmpeg for example: its code is... arcane. But would you choose a random video transcoder written in JavaScript, last updated in 2012, just because of its beautiful code?
It is fine if a dependency hasn't been updated in years, if the number of dependent projects hasn't gone down. Especially if no issues are getting created. Particularly with cargo or npm type package managers, where a dependency may do one small thing that never needs to change. Time since last update can be a good thing; it doesn't always mean abandoned.
> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality
They're NOT! Lots of trashy AI projects have 50k+ stars.
> Things like number of stars on a repository, number of forks, number of issues answered, number of followers for an account. All these things are powerful indicators of quality
I guess if I viewed software engineering merely as a placing of bets, I would not, but that's the center of the disagreement here. I'm not trying to be a dick (okay, maybe a little, sue me), but the grandparent comment mentioned "software engineering."
I can refer you to some github repositories with a low number of stars that are of extraordinarily high quality, and similarly, some shitty software with lots of stars. But I'm sure you get the point.
You are placing a bet that the project will continue to be maintained; you do not know what the future holds. If the project is of any complexity, and you presumably have other responsibilities, you can't do everything yourself; you need the community.
There are projects, or repositories, with a very narrow target audience; sometimes you can count the users on one hand. Important repositories for those few who need them, and there aren't any alternatives. Things like decoders for obscure and undocumented backup formats and the like.
I agree with the title, but for me the evolution is more high-level and based on data: no AI without search engines and social networks, no search engines without the WWW, no WWW without TCP/IP, and no TCP/IP without the computer.
A lot of people have a social media account rather than a website, and a lot of people use Gmail rather than host their own mail. Decentralized means do it yourself, but most people just want something with batteries included that works well and don't really care about centralization.
Not necessarily. Just one famous example: BitTorrent is decentralized, but for most people it's just "run this app, download files". "Decentralized" just means "doesn't rely on a centralized service to accomplish a goal". As long as the application isn't too complex to install and use, most folks won't care one way or the other whether it's decentralized or not, as long as it accomplishes the goal they're looking to accomplish.
There has to be a payoff though. BitTorrent is actually pretty hard to get working correctly (you have to track down the torrent files and so on); people do it because it's the only way to get some content, and a way to get content you'd otherwise have to pay for. With social media there's not much reward, and most people's friends already post for free on other networks. Not saying it's not worthwhile, but it's hard to extract this lesson from BitTorrent.
But it can also be specialized forums like https://startrek.website/ which is hosted using Lemmy but where you can use your federated login. It can help bring back indie forums and websites that aren’t controlled by Reddit or Meta.
Yeah, for sure. Anything trying to be a social network in a properly peer-to-peer fashion would have to be as simple to use as (or simpler than) existing social networks, and / or offer some genuinely unique and desirable feature(s) in order to attract any serious critical mass of users.
Interestingly the original Napster was a pretty good social network! I really liked being able to browse through all of a user's shared files. We should bring something like that back.
"Anything trying to be a social network in a properly peer-to-peer fashion would have to be as simple to use..."
In practice this issue arises something like this: a decentralized service is launched that is so decentralized the user has to store their own private keys. Later a centralized solution is launched where the user does not have to go through the trouble of storing the private keys, everything is managed for them... and everyone joins the centralized service.
> ... "under this definition, bluesky and friends, dsspite all their talk, really does fit in the centralized camp."
In my mind, I put them somewhere in-between, leaning a tad more toward "centralized" because they still rely on an individual to host the service no matter how "federated" they are. Until they're truly peer-to-peer, there's still that aspect of centralization involved. We need something kinda like BitTorrent but for messaging / social connections.
Maybe Bluesky is analogous to Github, if the AT protocol truly does allow for migration away to an alternative?
Although Git repositories are portable, PRs, issues, actions and such aren't — so even if the migration away from Bluesky is lossy the comparison seems apt.
The issue is only developers know the benefits of those features. Most people just want to view content or post and get their likes. That is why they use social media rather than post on their own website.
I don't think this is a technology problem; it's more of a socioeconomic problem. People tend to choose the centralized option, and projects that start out decentralized tend to end up centralized: WWW → social media, email → Gmail, Git → Github, Bitcoin → Coinbase, etc.
I think that used to be true, but I believe influencers and such would value some of the freedom of moving to other platforms and keeping both their content and followers.
Also, I think many users would now appreciate more control over the moderation policies they want applied, and also be able to choose between different feed algorithms to find one that promotes things that they prefer.
Would most people still probably use the one big "instance"? For sure, but I think you'd still have a good 20-30% that would use alternatives.
Assuming it all just worked. Which I think is what this article is trying to say: that the AT protocol can provide these features and ease of use. I don't know if that's true, but it seems to be the claim.
This is where tech family and friends need to play a role. Host these services for them!
My family just thinks Jellyfin and Navidrome are another Netflix or Spotify they have access to. And most of them prefer Jellyfin as content doesn’t disappear and is much more curated.
Decentralised here means keeping companies honest by avoiding lock-in. It's fine to have the centralisation if it's easy to switch. BlackSky users don't need to care about the details, but if they don't like the community they can move their data elsewhere. Try doing that with Instagram.
Didn’t they just adopt DNS? I mean I guess you have a DID people can follow (tho afaik there’s no other identity server for resolving DIDs besides the bsky app), but the way to tell that someone is who you think they are is their handle being connected to their domain.
did:web (DNS) is just one option for identity. did:plc is what you want, it's not reliant on ICANN or BlueSky.
Any PDS should be able to resolve a did:web or did:plc.
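For concreteness, a rough sketch of where each kind of DID document actually lives. The example identifiers are made up, the did:web case ignores path-style DIDs, and https://plc.directory is the directory Bluesky's PDSes use by default (which is also why the follow-up below calls it a central server):

```rust
// Sketch only: builds the URL a resolver would fetch the DID document from.
fn did_document_url(did: &str) -> Option<String> {
    if did.starts_with("did:plc:") {
        // did:plc documents are served by a PLC directory server.
        Some(format!("https://plc.directory/{did}"))
    } else if let Some(domain) = did.strip_prefix("did:web:") {
        // did:web resolves to a well-known document on the identity's own
        // domain (simplified: extra `:`-separated path segments are ignored).
        Some(format!("https://{domain}/.well-known/did.json"))
    } else {
        None
    }
}

fn main() {
    // Hypothetical identifiers, just to show the two shapes.
    println!("{:?}", did_document_url("did:plc:abc123example"));
    println!("{:?}", did_document_url("did:web:example.com"));
}
```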
Apologies, I was mistaken. I'd confused the self-certifying bit with decentralization. did:plc relies on trusting a central server to accept all valid events and not allow users to rewrite their history.
I think Facebook is pretty useless and just not using the site is a great way to transfer away from it. But I feel like to engage with the idea of switching away constructively, I’d have to find some value in the content I had on the site.
Until your kid's school uses it for organizing information for parents, or that’s the only place a niche group you like is.
Getting banned from Facebook means losing access to all of that. Kinda like how getting banned from YouTube could mean losing access to email, groups, Drive and a bunch of other services. Hell, I’ve heard of company contractors getting banned from Google Play’s developer program and everyone in the company then getting banned from all Google services!
If I get banned from a Lemmy community that doesn’t ban me from other communities or other servers and I can always run my own if I need to.
Naw, decentralized means not having everyone on one platform. ActivityPub-enabled sites (Mastodon, PeerTube, Lemmy, etc.) can be run by just about anyone, and can serve multiple users.
So, if you have the technical skills and the willingness to host an ActivityPub-enabled instance, you can serve it for others who either don't have the skills or ability to manage it themselves. If you keep it limited just to the folks in your own communities - people you know, friends of friends, etc. - then you limit a lot of the issues that arise from running huge instances - moderation, privacy issues, etc.
We took something natively decentralized - TCP/IP internet - and handed it off to a handful of companies to run, thus centralizing it. That was a mistake, especially as they use the power they acquired to push back against folks, for example, trying to build independent community ISPs.
We need to decentralize as much as feasible - it's not all self-hosting, but "just let the money perverts run things" has not worked out so well for us. The solution lies somewhere in the middle, where cooperative groups serve the needs of the communities that matter to them in exchange for fair compensation.
> We took something natively decentralized - TCP/IP internet - and handed it off to a handful of companies to run, thus centralizing it. That was a mistake, especially as they use the power they acquired to push back against folks, for example, trying to build independent community ISPs.
This is not and was not ever true. IP was explicitly designed from the start to be difficult to operate without centralisation because the telecoms operators wanted to maintain their "monopoly" on communications infrastructure.
That is why IP insisted on not separating the interface address from device/service identity despite knowing ahead of time this would make multihoming a nightmare (as it did with ARPANET) and despite this problem already having been solved by CYCLADES (it being basically the one feature they explicitly avoided adopting from CYCLADES).
That among other things.
This is in large part why BGP is and always has been such a clusterfuck. There were known issues ahead of time but they were willfully ignored as they made relying on the heavily centralised telecoms operators essentially always the path of least resistance.
Why would decentralized technology be easy to use?
Limewire was installed on over one-third of computers worldwide in 2007 [1]. That's because even grandma could press next->next->next on a Windows setup file and it just worked. There is no technical reason hosting your email isn't as easy as that.
Look at rooftop solar panels. Literally hundreds of millions of households have rooftop solar to generate decentralized power. The fundamental complexity of email hosting is a hundred times less, but the software engineering community chose not to make it possible.
The distinction blurs with AT protocol. My data lives on Bluesky's PDS for now, but I can log in to that PDS from anything that supports AT. Like leaflet.pub
This post is stored in Leaflet's own lexicon in its own collection right next to all my Bluesky data. I could move this to a different PDS if I wanted. I could come up with a script to turn the collection into static pages or convert them to another platform's import format.
Nobody cares about decentralization until they do[0] and AT seems to have the best answer for that eventuality.
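For anyone wondering what such an export script would look like: roughly, you page through com.atproto.repo.listRecords on whichever PDS holds the repo. A sketch of the request shape (the host, DID and collection name below are placeholders, not Leaflet's real lexicon):

```rust
// Sketch only: build the paginated listRecords request for one collection.
// Each response carries a `cursor` to pass back until none is returned.
fn list_records_url(pds: &str, repo_did: &str, collection: &str, cursor: Option<&str>) -> String {
    let mut url = format!(
        "{pds}/xrpc/com.atproto.repo.listRecords?repo={repo_did}&collection={collection}&limit=100"
    );
    if let Some(c) = cursor {
        url.push_str("&cursor=");
        url.push_str(c);
    }
    url
}

fn main() {
    // First page of a hypothetical collection; fetch with any HTTP client,
    // then convert each record's `value` to static HTML or an import format.
    println!(
        "{}",
        list_records_url(
            "https://bsky.social",       // whichever PDS hosts the repo
            "did:plc:abc123example",     // hypothetical account DID
            "pub.leaflet.document",      // hypothetical collection NSID
            None,
        )
    );
}
```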
You misunderstood what I said. DNS is certainly a decentralized protocol and obviously not at all necessarily DIY. That’s all I was speaking to. Decentralized can be that simple.
What you originally said could be interpreted as either DNS-the-system or DNS-the-protocol. I assumed the former, since that seemed more likely.
Sure, the protocol could be used without the resolver hierarchy, but I would argue that's not a useful way to think about it, since it won't happen in practice.