Haha, it reminds me of some old movie called Andromeda, I think... A human space probe crash-landed on Earth with some greenish patches of stuff on it. It was a space-dwelling organism that directly used energy-to-matter conversion for growth. It was a pretty decent movie actually :)
Yeah, kids like to waste time trying to make C safer or bring in C++ features.
If you need them, use C++ or a different language. Those examples make the code look ugly, and you are right about the corner cases.
If you need to clean up stuff on early return paths, use goto. There's nothing wrong with it: jump to the end where you do all the cleanup and return.
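Something like this is the usual shape (made-up example, every error path funnels through the same cleanup):

/* needs <stdio.h> and <stdlib.h>; names are made up */
int load_file(const char *path)
{
    int rc = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (!f)
        goto out;
    buf = malloc(4096);
    if (!buf)
        goto out_close;
    if (fread(buf, 1, 4096, f) == 0)
        goto out_free;          /* early exit, cleanup still runs */
    /* ... use buf here ... */
    rc = 0;                     /* success */
out_free:
    free(buf);
out_close:
    fclose(f);
out:
    return rc;
}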
Temporary buffers? If they aren't big, don't be afraid to use static char buf[64];
No need to waste time on malloc() and free(). They are big? Preallocate early and reallocate, or work in chunk sizes. Simple and effective.
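For the big case, something like this rough sketch (names made up) is what I mean by preallocating and growing in chunks:

/* needs <stdlib.h>; grow-in-chunks sketch */
static char  *big;
static size_t cap;

char *ensure_capacity(size_t need)
{
    if (need > cap) {
        size_t newcap = cap ? cap : 4096;
        while (newcap < need)
            newcap *= 2;            /* grow in chunk-sized steps */
        char *p = realloc(big, newcap);
        if (!p)
            return NULL;            /* old buffer is still valid */
        big = p;
        cap = newcap;
    }
    return big;
}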
No, because I did NOT do a serious analysis of this. Nor do I care, ask the commenter above. C has some corner cases and undefined behaviours, and this stuff will make it worse IMO.
My thoughts as well. The only thing I would be willing to use is the macro definition for __attribute__, but that is trivial. I use C because I want manual memory handling; if I didn't want that, I would use another language. And right now I don't make copies when I want read access to some things, that is simply not a problem. You simply pass non-owning pointers around.
In a function? That makes the function not thread-safe and the function itself stateful. There are places where you want this, but I would refrain from doing that in the general case.
It also has different behaviour in a single thread. This can be what you want, but I would prefer to pass that context as a parameter instead of having it in a hidden static variable.
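I.e. something along these lines (made-up names), so the state is visible at the call site:

/* needs <stdio.h>; state is passed explicitly instead of hiding in a static */
struct scratch { char buf[64]; };

char *render(struct scratch *s, int value)
{
    snprintf(s->buf, sizeof s->buf, "val=%d", value);
    return s->buf;   /* lifetime is tied to the caller's scratch object */
}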
What different behaviour do you mean? static in a function means that it is just preallocated somewhere in the data segment, not on the stack nor the heap. That's it. Yes, it's not thread-safe, so it should never be used in libraries for any buffering.
But in a program, it's not bad. If I ever need multiple calls to it in the same thread:
static char buf[8][32];  /* 8 rotating 32-byte buffers */
static int z;
char *p=buf[z];          /* hand out the next slot */
z=(z+1)&7;               /* wrap after 8 calls */
Static foremost means that the value is preserved from the last function invocation. This is very different behaviour than an automatically allocated variable. So calling a function with a static variable isn't idempotent, even when all global variables are the same.
> If I ever need multiple calls to it in same thread:
What is this code supposed to do? It hands out a different pointer the first 8 times, then starts from the beginning again? I don't see what this is useful for!
If you want to return some precalculated stuff without using malloc()/free(). So you just have 8 preallocated buffers and you rotate them between calls. Of course you need to be aware that the results have a short lifetime.
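Wrapped up, it looks roughly like this (made-up helper, needs <stdio.h>):

const char *fmt_point(int x, int y)
{
    static char buf[8][32];   /* 8 rotating result slots */
    static int z;
    char *p = buf[z];
    z = (z + 1) & 7;          /* wrap after 8 calls */
    snprintf(p, sizeof buf[0], "(%d,%d)", x, y);
    return p;                 /* valid until 8 more calls happen */
}

That's why there is more than one slot: printf("%s %s\n", fmt_point(1,2), fmt_point(3,4)); needs two results alive at the same time.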
That sounds like a maintenance nightmare to me. If you insist on static, I would at least use only one buffer to make it predictable, but personally I would just let the caller pass a pointer where I can put the data.
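I.e. the plain version I would write instead (made-up names, needs <stdio.h>):

/* caller owns the storage, no hidden state anywhere */
void fmt_point(char *out, size_t outsz, int x, int y)
{
    snprintf(out, outsz, "(%d,%d)", x, y);
}

/* at the call site */
char tmp[32];
fmt_point(tmp, sizeof tmp, 1, 2);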
What application is that for? Embedded, GUI program, server, ...?
It is very predictable. Every call returns a buffer; after 8 calls it wraps.
I use such stuff in many places: GUI programs, daemons. Most stuff is single-threaded. If threads are used, they are really decoupled from each other.
Yes, you should never ever use it in a library. But for small utility functions it should be ok-ish :)
This is an example from my Ruby graph library. GetMouseEvent can be called a lot, but I need at most 2 results. It's Ruby, so I can either dynamically allocate objects and let the GC pick them up later, or just use static stuff here with no GC overhead. It can be called hundreds of times per second, so it's worth it.
static GrMouseEvent evs[8];  /* 8 rotating event slots */
static int z=0;
GrMouseEvent *ev=&evs[z];    /* hand out the next slot */
z=(z+1)&7;                   /* wrap after 8 calls */
God forbid we should make it easier to maintain the existing enormous C code base we’re saddled with, or give devs new optional ways to avoid specific footguns.
Goofy platform-specific cleanup and smart pointer macros published in a brand new library would almost certainly not fly in almost any "existing enormous C code base". Also, the industry has had "new optional ways to avoid specific footguns" for decades: it's called using a memory-safe language with a C FFI.
I meant the collective bulk of legacy C code running the world that we can’t just rewrite in Rust in a finite and reasonable amount of time (however much I’d be all on board with that if we could).
There are a million internal C apps that have to be tended and maintained, and I’m glad to see people giving those devs options. Yeah, I wish we (collectively) could just switch to something else. Until then, yay for easier upgrade alternatives!
I was also, in fact, referring to the bulk of legacy code bases that can't just be fully rewritten. Almost all good engineering is done incrementally, including the adoption of something like safe_c.h (I can hardly fathom the insanity of trying to migrate a million LOC+ of C to that library in a single go). I'm arguing that engineering effort would be better spent refactoring and rewriting the application in a fully safe language one small piece at a time.
I’m not sure I agree with that, especially if there were easy wins that could make the world less fragile with a much smaller intermediate effort, eg with something like FilC.
I wholeheartedly agree that a future of not-C is a much better long term goal than one of improved-C.
A simple pointer ownership model can achieve temporal memory safety, but I think to be convenient to use we may need lifetimes. I see no reason this could not be added to C.
Would be awesome if someone did a study to see if it's actually achievable... Cyclone's approach was certainly not enough, and I think some sort of generics or a Hindley-Milner type system might be required to get it to work, otherwise lifetimes would become completely unusable.
C does have the concept of lifetimes. There is just no syntax to specify it, so it is generally described along with all the other semantic details of the API. And no, it is not the same as in Rust, which causes clashes with the Rust people.
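E.g. the typical way C headers spell it out today, purely as documentation (illustrative, made-up API):

struct table;

/* Returned pointer is owned by the table and stays valid only until
   the next table_put() or table_free(). Do not free() it. */
const char *table_get(const struct table *t, const char *key);

/* Returned string is owned by the caller and must be free()d. */
char *table_dump(const struct table *t);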
I think there was a discussion in the Linux kernel between a kernel maintainer and the Rust people, which started with the Rust people demanding formal semantics so that they could encode them in Rust, and the subsystem maintainer being unwilling to do that.
One of them was a maintainer of that particular subsystem, but that doesn't mean that the other folks aren't also maintainers of other parts of the kernel.
It seems that people do NOT understand it's already game over. Lost. When stuff was small and we had abusive actors, nobody cared: oh, just a few bad actors, nothing to worry about, they will get bored and go away. No, they won't, they will grow and grow, and now even most of the good guys have turned bad because there is no punishment for it. So as I said, game over.
It's time to start building our own walled gardens, overlay VPN networks for humans. Put services there. If someone misbehaves? BAN their IP. Came back? BAN again. Came back? wtf? BAN the VPN provider. Just clean up the mess. Different networks can peer and exchange. Look, the Internet is just a network of networks, it's not that hard.
Good idea. Another solution is to move our things to p2p. These corporations need expensive servers to run huge models on or just to collect data. Sometimes the winning move is not to play the game: true server-less.
If there are indeed a lot of people using VPNs, then why not form a darknet already? Ask an interested site to peer with you. Peer with others from the overlay network, where you and the interested parties will be in control. It's the only way imo. We need to build a new net from scratch, using the current Internet as transport. VPNs are so easy to use these days that even non-technical people can use them. All that needs to be done is for more technical people to provide the service.
I read somewhere that Hacker News should have been named Startup News, and sometimes interactions like the one upthread remind me of that. I'm not saying it's wrong - if you're good at something, don't do it for free and all that - but it's kinda sad that in-depth discussions on public forums are getting harder and harder to find these days.
Normal conversations by topic enthusiasts usually have fun stuff hidden in their profiles and at times lead to fun rabbit holes where you endlessly learn and somehow forget that you were initially browsing HN.
Agree about the public discussion part, one of the reasons why I'm here lately.
Also, why can't someone create Startup News: Where every article reply is an opportunity to be sold a service, SN would take a cut of transactions. /s
There are people already trying to divert the discussion off-site for their benefit. Very few would honestly report any resulting transaction for the cut to be taken from.
[yeah, I did see the sarcasm tag, just clarifying to put off would-be entrepreneurs so we aren't inundated by Show HN posts from people vibe-coding the idea over the next few days!]
That's a fair point. Nylon is like a packaged version of that setup, all in a single application, protocol and interface. You perhaps lose a little bit of control and performance in exchange for ease of use and a bit more portability.
I'm not sure about the specifics for your network, but if you want to set up a similar network using WireGuard as the tunnel, you'd have to set up each peering arrangement manually. (Similar to: https://blog.bella.network/internal-bgp-with-wireguard/) This means adding a new node to your network will require you to create new key pairs, add new interfaces to existing nodes (that you want to peer with), and configure your routing daemon.
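For comparison, each manually configured peering with plain WireGuard means a per-node config roughly like this (wg-quick style; keys and addresses are placeholders):

# node A, wg0.conf
[Interface]
PrivateKey = <node-A-private-key>
Address    = 10.90.0.1/32
ListenPort = 51820

[Peer]   # node B
PublicKey  = <node-B-public-key>
Endpoint   = nodeb.example.net:51820
AllowedIPs = 10.90.0.2/32
# ...plus the mirror-image [Peer] on node B and the routing daemon setup.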
This may in fact be desirable to many, as it gives them more control over what happens in their network. I'm sure there might be tools to automate that process, but nylon takes a different approach.
Nylon implements Babel at the level of WireGuard, offering:
Simplicity.
- Nylon removes the need for a new WireGuard interface on each end of a peering pair. (Peering arrangements are defined as WireGuard endpoints on a graph, instead of interfaces.) This also means there will only be a single nylon interface, and all of the routing logic is hidden away from the user.
- Adding a new node on nylon is pretty trivial. You would set up the node with a private key, put the public key in the central config, and declare the peering on that config. Then, you can use the built-in config distribution mechanism to push it to all of your nodes.
- Both the control packets (for routing) and data packets (IP) are also sent encrypted in the same WireGuard tunnel, so you would only have to expose the bare minimum to the public.
Usability.
- Nylon is more portable, as it does not depend on your system's routing table, routing daemon or special kernel features such as network namespaces. Therefore, we can support Linux, macOS and Windows (pretty much any platform that wireguard-go supports).
- As it's built as an extension into the WireGuard protocol, it remains backwards compatible. There is even special handling, which allows "vanilla" wg devices to roam freely between configured nylon nodes. (Nylon will re-advertise the new "gateway" node and expire routes accordingly)
Okay, fair point: easier use for less network-oriented people, and maybe portability. Although I never want my Windows endpoint to do any complicated forwarding :)
XenClient. I would really love to have some minimal hypervisor OS running, and then you slap multiple OSes on top of that with easy full GUI switching via hotkeys like Ctrl+Shift+F1. Additionally, special drivers to virtualize the graphics and sound devices so every VM has full desktop capabilities and low latency.
Unfortunately, it died because it's very niche and also they couldn't keep up with the development of drivers for desktops. This is even worse today...