Based on those meeting notes, the conflict of interest that arises when attempting to add features that compete with paid ones is real. So it's exactly that ideology that a government user/contributor actually needs.
To this day, anything of worth that's been added to Gitea has been released under MIT. Their business model is: you pay us to develop the features you need, and we release them for everybody. That's how their collaboration with Blender has been working thus far. If it's good enough for Blender, who decided to stay with Gitea, it's good enough for me.
Not sure: the government could just buy a Gitea Enterprise license, right? It wouldn't really be running true 'open source' software then, but it would support the main development behind Gitea.
There's a fair amount of dialog that indicates an interest in 'digital sovereignty', so it sounds like they're less interested in being an explicit customer of a given company.
My point was that you don't need to compete with paid features; just give the developers money to develop the software further (and fix bugs/issues) - e.g. buy an 'enterprise license' even if you don't need it in terms of features.
I can relate. I recently used ChatGPT/DALL-E to create several images for birthday coupons for my daughter - i.e. a girl doing different activities. She likes manga, so that was the intended styling. Three quarters of the time was spent working around diverse content policies.
Using larger-than-default window sizes has the drawback of requiring that the same --long=xx argument be passed during decompression, reducing compatibility somewhat.
Interesting. Any idea why this can't be stored in the metadata of the compressed file?
It is stored in the metadata [1], but anything larger than 8 MiB is not guaranteed to be supported. So there has to be an out-of-band agreement between compressor and decompressor.
Thanks! So --long essentially signals to the decoder: "I am willing to accept the potentially large memory requirements implied by the given window size."
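A quick round trip shows the flag pairing (a sketch, assuming the zstd CLI is installed; filenames are illustrative):

```shell
# Sample input; in practice --long pays off when matches repeat over spans
# larger than the default 128 MiB window.
head -c 1048576 /dev/urandom > data.bin

# Compress with a 2^30 (1 GiB) long-distance matching window:
zstd -q --long=30 -o data.zst data.bin

# The decoder must opt in to the same (or a larger) window:
zstd -q -d --long=30 -o data.out data.zst
cmp data.bin data.out
```

For frames whose header declares a window above zstd's default decompression limit, a plain `zstd -d` refuses and suggests the needed --long value (exact wording varies by version). For a tiny sample like this one, zstd caps the frame's window at the known input size, so plain `zstd -d` would also happen to work.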
Seems useful for game marketplaces like Steam and Xbox. You control the CDN and client, so you can use tricky but effective compression settings all day long.
For internal use like that you can also use the library feature. The downside of using --long=31 is increased memory usage, which might not be desirable for customer-facing applications like Steam.
Sending a .zip filled with all zeroes, which compresses extremely well, is historically a well-known DoS (a zip bomb: the server runs out of space trying to read the archive).
You always need resource limits when dealing with untrusted data. RAM is one of the obvious ones. They could introduce a memory limit parameter; require passing --long with a value equal to or greater than what the stream requires to successfully decompress; require seeking support for the input stream so they can look back that way (TMTO); fall back to using temp files; or interactively prompt the user if there's a terminal attached. Lots of options, each with pros and cons of course, that would all allow a scenario where the required information for the decoder is stored in the compressed data file.
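The cheapest such limit can be sketched with stock shell tools (using gzip here rather than zstd; the principle is the same): cap how many decompressed bytes you are willing to accept from untrusted input.

```shell
# Build a small "bomb": 100 MiB of zeros compresses down to roughly 100 KiB.
dd if=/dev/zero bs=1024 count=102400 2>/dev/null | gzip -c > bomb.gz

# Naively inflating it costs the full 100 MiB. Streaming through a byte cap
# bounds the damage: stop after 1 MiB, no matter what the frame promises.
gzip -dc bomb.gz | head -c 1048576 | wc -c
```

The cap here is on output bytes rather than decoder memory, but it illustrates the shape of every option above: decide your budget before touching the data, not after.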
The decompressed output needn't be in-memory (or even on-disk; it could be directly streamed to analysis) all at the same time, at which point resource limits aren't a problem at all. And I believe --long already is a "greater than or equal to" value, and should also be effectively a memory limit (or pretty close to one at least).
Seeking back in the input might theoretically work, but I feel like that could easily get very bad (think exponential runtime); never mind needing actual seeking support.
You can't have civilization without roads, so people naturally accept them. Large buildings and factories are not common in the countryside, and people do protest when you try building them in their backyards. No different than wind farms.