If you have a massive banger and die immediately that is a pretty good way to go.
However, many people suffer from heart failure which, despite the name, means partial heart failure. The permanent breathlessness gives them a terrible quality of life. They can live with this for decades sometimes but it's not much fun.
There are security issues in the C implementation they currently use. They could remove it without breaking anything by incorporating the JS XSLT polyfill into the browser. But they won't, because money.
Sorry, my bad. Looks like the size is very sensitive to the compression method and software as well (regardless of PNG or WebP). I found another PNG picture here https://www.freecodecamp.org/news/where-do-all-the-bytes-com... and it is 64KiB. Stretching the image is also likely to add kilobytes. I guess I need to update the image.
But anyway, I think it's still very telling when an entire game is smaller than a picture of it. Also consider that even your tiny PNG example (3.37KiB) still cannot fit into the RAM / VRAM of a NES console, which shows the contrast between these eras in terms of memory.
That image has a similar problem to yours. It has been scaled up using some kind of interpolation which introduces a load of extra colours to smooth the edges. This is not a great fit for PNG, which is why it is 64KB.
The article claims that it is only 5.9KB. I guess it was that small originally and it's been mangled by the publishing process.
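To see why scaling up hurts, here's a tiny Python sketch (toy numbers, not real resampling code) of how linear interpolation between two existing colours manufactures brand-new intermediate ones that were never in the original palette:

```python
# Interpolating between two palette colours creates new colours.
def lerp(a, b, t):
    """Linear interpolation between two RGB triples."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

black, white = (0, 0, 0), (255, 255, 255)

# Stretching 2 pixels into 5 with bilinear-style interpolation:
row = [lerp(black, white, t / 4) for t in range(5)]
print(row)  # intermediate greys appear between the two originals
```

Each of those greys is a new palette entry, which is exactly what pushes an image out of cheap indexed-colour territory.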
I think a more realistic comparison is a BMP image (16 or 256 colors), because the game didn't compress the image the way PNG or WebP does. When I converted that tiny ~3KiB image to a 16-color BMP, the size was 28.1KiB; at 256 colors it was 57KiB. But some colors were lost.
Anyway, I don't think we can have a 100% apples-to-apples comparison, because the game used different compression and rendering techniques. Also consider the actual amount of RAM / VRAM the images occupy: in RAM / VRAM they are probably in decompressed form, which is much more memory.
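For a rough sense of scale, here is some back-of-envelope arithmetic for a hypothetical 256x240 screen (which happens to match the NES resolution; this is not how the NES actually stores graphics, it renders from tiles):

```python
# Decompressed size of a full screen at different bit depths.
w, h = 256, 240
pixels = w * h                 # 61,440 pixels

rgba_bytes = pixels * 4        # 32-bit RGBA framebuffer
indexed_bytes = pixels // 2    # 4-bit indexed (16 colours)

print(rgba_bytes)    # 245760 bytes, ~240 KiB
print(indexed_bytes) # 30720 bytes, 30 KiB
```

Either way it dwarfs the 2KiB of work RAM and 2KiB of VRAM a NES has to play with, which is why tile-based rendering was necessary in the first place.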
I agree that there's no perfect comparison. I just think that whatever image format you use for the comparison, you should be storing an original picture, not one that has been stretched or otherwise modified.
FWIW, if you convert the 3KB image to 16 colours in GIMP (Image | Mode | Indexed... and choose "Generate Optimum Palette") it looks exactly the same. I'm pretty sure there are only 16 colours in the image. The resulting PNG is 1,991 bytes.
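The size win from indexed mode is easy to see with back-of-envelope arithmetic (hypothetical 48x48 image, ignoring PNG headers and the deflate stage):

```python
# Raw truecolour vs palette-indexed storage for a 16-colour image.
width, height = 48, 48
num_colours = 16

raw_bytes = width * height * 3             # 24-bit RGB, one triple per pixel
palette_bytes = num_colours * 3            # one RGB triple per palette entry
index_bits = 4                             # 16 colours fit in 4 bits per pixel
indexed_bytes = palette_bytes + (width * height * index_bits) // 8

print(raw_bytes, indexed_bytes)  # 6912 vs 1200, before any compression
```

Deflate then compresses the 4-bit indices further, which is how you end up under 2KB.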
I don't use Google Cloud, but the seven-step OIDC configuration process is the kind of thing that can be scripted quite easily in Azure, e.g. using the az CLI tool. When a step creates a new object, the command can return its ID so you can save it in a variable and use it in a subsequent step.
If Google has something similar this seems preferable to the alternatives.
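Roughly this shape (a sketch from memory, not a tested script; the app name and JSON file are placeholders):

```shell
# Each az command can emit the new object's ID via --query,
# which then feeds the next step.
APP_ID=$(az ad app create --display-name my-oidc-app --query appId -o tsv)

# Create the service principal for the app registration.
az ad sp create --id "$APP_ID"

# credential.json holds the issuer/subject/audience of the
# external OIDC provider you're federating with.
az ad app federated-credential create --id "$APP_ID" --parameters credential.json
```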
The security argument isn't that great. Google has been grumbling about XSLT for more than a decade. If security were really their concern, they could have replaced the compiled C library with an asm.js version ten years ago, much as they did for PDF rendering. They could use Wasm now. They don't need to deprecate it.
Andrew is ungallant so it is reasonable to strip him of his knighthoods. But it is rather silly to strip him of the title Prince for behaving like a prince.
You may be joking, but in case you are not: it is grossly unacceptable to condone child rape and the vast conspiracy that enabled this kind of behaviour and kept it going for so long.
And also it is the privilege of royalty that he could expect to get away with it. The fact that he is experiencing any consequences at all is frankly shocking (in a good way).
> buttering the cost of switches [over the whole execution time]
The switches get cheaper but the rest of the code gets slower (because it has less flexibility in register allocation) so the cost of the switches is "buttered" (i.e. smeared) over the rest of the execution time.
But I don't think this argument holds water. The surrounding code can use whatever registers it wants. In the worst case it saves and restores all of them, which is what a standard context switch does anyway. In other words, this can be better and is never worse.
The .NET runtime generates code that relies on bools being either 0 or 1. Using interop, it's quite easy to inadvertently set a bool to something else, and this leads to very odd bugs where simple boolean expressions appear to give the wrong result.
(Tangentially, VB traditionally used -1 for true. VB.NET uses the same 0 or 1 internal representation for a bool as C# but if you convert a bool to a number in VB.NET it comes out as -1.)
-1 isn't even a bad choice, since that's basically using 0xFF for true and 0x00 for false. The weirdness is the fact that you're converting a binary value to a signed integer.
This goes all the way back to early BASIC, and it's signed because the language didn't have any unsigned numbers to begin with.
The main reason for this particular arrangement is that, so long as you can rely on truth being represented as -1 (i.e. all bits set), bitwise operators double as logical ones. Thus BASIC would have NOT, AND, OR, XOR, IMP, EQV all operating bitwise but mostly used for Booleans in practice (it misses short-circuiting, but languages of that era rarely defaulted to it).
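The all-bits-set trick is easy to demonstrate with two's-complement integers (Python here, standing in for BASIC):

```python
# BASIC convention: true is -1 (all bits set), false is 0 (all bits clear).
TRUE, FALSE = -1, 0

# With these values, the bitwise operators behave as logical ones:
print(~TRUE)          # bitwise NOT of -1 is 0  -> logical NOT works
print(TRUE & FALSE)   # -1 & 0 is 0             -> logical AND works
print(FALSE | TRUE)   # 0 | -1 is -1            -> logical OR works
```

With 0/1 truth values this breaks down: ~1 is -2, not 0, so a separate logical NOT would be needed.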
A lot of the time variables will be in registers. And registers are basically ints. That would be my guess for why many things are ints that could fit in less space.