
Based on those meeting notes, the conflict of interest that arises when attempting to add features that compete with paid ones is real. So it's that ideology that is actually needed for a government user/contributor.

To this day, anything of worth that's been added to Gitea is released under MIT. Their business model is: you pay us to develop the features you need, we release them for everybody, which is how their collaboration with Blender has been working thus far. If it's good enough for Blender, who decided to stay with Gitea, it's good enough for me.

The given example is from GitLab - thanks for pointing out that Gitea follows a different OSS strategy.

Not sure: the government could just buy a Gitea Enterprise license, right? They would thereby not really be running true 'open source' software, but it would support the main development behind Gitea.

There's a fair bit of dialog that indicates an interest in 'digital sovereignty', so it sounds like they are less interested in being an explicit customer of a given company.

You can do that by self-hosting the code.

My point was that you don't need to compete with paid features; just give the developers money to develop the software further (and fix bugs/issues), e.g. by buying an 'enterprise license', even if you don't need it in terms of features.


While there's the vscode console, I think that bare Xterm.js would be a nice addition to the list.

Agreed. Proprietary tools could then rely on those coreutils without any license fears.


Nice - thanks! I assume the non-naive implementations skip the sorting and instead hash the input lines?


Yeah, that's right - there are trade-offs in doing so, as it can require much more memory. So, like everything, it's an application-specific decision.
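
To make the trade-off concrete, here's a minimal Python sketch (file path and function names are purely illustrative, not from any particular implementation): the sort-based variant mirrors sort | uniq, while the hash-based one skips the sort but has to keep every distinct line in memory.

    def unique_via_sort(path):
        # Sort, then drop adjacent duplicates (what `sort | uniq` does).
        # Done in memory here for brevity; real tools can use an external
        # merge sort to keep memory bounded on huge inputs.
        with open(path, encoding="utf-8") as f:
            lines = sorted(f)
        return [ln for i, ln in enumerate(lines) if i == 0 or ln != lines[i - 1]]

    def unique_via_hash(path):
        # Single pass, preserves input order, but the set must hold
        # every distinct line at once - hence the memory trade-off.
        seen, out = set(), []
        with open(path, encoding="utf-8") as f:
            for ln in f:
                if ln not in seen:
                    seen.add(ln)
                    out.append(ln)
        return out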


The NHGRI updated these plots for years. It's sad to see that there has been no update since 2022, presumably due to a lack of funding.

Sub-$100 genomes could be within reach in the next 5 years, from what I have seen.


I like puzzles, so I'll take a shot at guessing what this might be: Lovable for apps.


I can relate. I recently used ChatGPT/DALL-E to create several images for birthday coupons for my daughter, i.e. a girl doing different activities. She likes manga, so that was the intended styling. Three quarters of the time was spent working around various content policies.


    Using larger-than-default window sizes has the drawback of requiring that the same --long=xx argument be passed during decompression reducing compatibility somewhat.
Interesting. Any idea why this can't be stored in the metadata of the compressed file?


It is stored in the metadata [1], but anything larger than 8 MiB is not guaranteed to be supported. So there has to be an out-of-band agreement between compressor and decompressor.

[1] https://datatracker.ietf.org/doc/html/rfc8878#name-window-de...


Thanks! So --long essentially signals to the decoder: "I am willing to accept the potentially large memory requirements implied by the given window size."
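
Roughly, the round trip looks like this (a sketch assuming the zstd CLI is installed; file names are made up). Compression picks the large window, and decompression has to opt in to it again:

    import subprocess

    # Compress with a 2^31-byte (2 GiB) long-distance-matching window.
    subprocess.run(["zstd", "--long=31", "-o", "data.bin.zst", "data.bin"], check=True)

    # Decompress: the same (or a larger) --long value must be passed again,
    # otherwise the decoder refuses the frame's oversized window.
    subprocess.run(["zstd", "-d", "--long=31", "-o", "data.out", "data.bin.zst"], check=True)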


Seems useful for games marketplaces like Steam and Xbox. You control the CDN and client, so you can use tricky but effective compression settings all day long.


For internal use like that, you can also use the library feature. The downside of using --long=31 is increased memory usage, which might not be desirable for customer-facing applications like Steam.
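
If you go the library route, the decompression side can cap the window it will accept up front. Here's a rough Python sketch using the python-zstandard bindings (the max_window_size parameter name and the file names are my assumptions about that API, not something from this thread):

    import zstandard as zstd

    # Cap the window the decoder will allocate (1 << 27 = 128 MiB here);
    # frames demanding more are rejected instead of ballooning memory use.
    dctx = zstd.ZstdDecompressor(max_window_size=1 << 27)

    with open("data.bin.zst", "rb") as src, open("data.bin", "wb") as dst:
        dctx.copy_stream(src, dst)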


It uses more memory (up to an extra 2 GB) during decompression as well -> a potential DoS.


Sending a .zip filled with all zeroes, so that it compresses extremely well, is historically a well-known DoS (a zip bomb, making the server run out of space while trying to read the archive).

You always need resource limits when dealing with untrusted data. RAM is one of the obvious ones. They could introduce a memory limit parameter; require passing --long with a value equal to or greater than what the stream requires to successfully decompress; require seeking support for the input stream so they can look back that way (TMTO); fall back to using temp files; or interactively prompt the user if there's a terminal attached. Lots of options, each with pros and cons of course, that would all allow a scenario where the required information for the decoder is stored in the compressed data file.


The decompressed output needn't be in memory (or even on disk; it could be directly streamed to analysis, as sketched below) all at the same time, at which point resource limits aren't a problem at all. And I believe --long already is a "greater than or equal to" value, and should also effectively be a memory limit (or pretty close to one at least).

Seeking back in the input might theoretically work, but I feel like that could easily get very bad (aka exponential runtime); never mind needing actual seeking.
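
On the streaming point above: the decompressed bytes can be fed straight to whatever does the analysis, chunk by chunk, so the full output never needs to exist in memory or on disk. A rough python-zstandard sketch (API usage and file names are my assumptions; note the decoder still allocates up to the frame's window size internally):

    import hashlib
    import zstandard as zstd

    # Stand-in "analysis": hash the decompressed stream chunk by chunk.
    digest = hashlib.sha256()
    dctx = zstd.ZstdDecompressor()
    with open("data.bin.zst", "rb") as src, dctx.stream_reader(src) as reader:
        while chunk := reader.read(1 << 20):  # 1 MiB at a time
            digest.update(chunk)
    print(digest.hexdigest())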


Not a chemist, but the outcome - a cellulose film - seems to be about the same, even though the production process is different.


I assume most protests are driven by a Not In My Backyard attitude.


It's worse than that; it's culture-war-driven in a lot of cases. It doesn't matter if it's anywhere close to their backyard.


Yeah, but those people still have roads, large buildings and factories in their backyard.


You can't have civilization without roads, so people naturally accept them. Large buildings and factories are not common in the countryside, and people do protest when you try building them in their backyards. No different than wind farms.

