We already have good compression algorithms that model which parts of an image the human brain can see, as well as the "portion" of an image the human brain can't easily tell apart.
In particular: JPEG compression. Pictures that look similar compress down to nearly the same data. JPEG deletes huge chunks of the image's information, under the assumption that the human eye / brain can't tell the difference.
Why not do this in reverse: turn a 128-bit hash into a 128-bit JPEG image through "some manner". Would that work? Maybe a bit of work needs to be done (ensure that colorblind users see the same result... so fully random JPEG images aren't useful).
Maybe monochrome JPEG images? Surely a 32x32 monochrome image (1024 pixels * 8-bit monochrome == 8192 bits of data) will be sufficient space for 128 bits or 256 bits to "crawl" around?
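The capacity arithmetic above, as a quick sanity check (just the numbers from the text, nothing more):

```python
# How much room does a 32x32 8-bit monochrome image leave for a hash
# to "crawl around" in?
pixels = 32 * 32           # 1024 pixels
bits_per_pixel = 8         # 8-bit monochrome
capacity = pixels * bits_per_pixel   # 8192 bits of raw image data

for hash_bits in (128, 256, 512):
    # redundancy factor: image bits available per hash bit
    print(hash_bits, "->", capacity // hash_bits, "image bits per hash bit")
```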
-----------
Take a very-low-quality JPEG compression, but take inspiration from the JPEG quantization matrix for the discrete cosine transform when deciding how many "bits" to spend.
JPEG's coded unit (with chroma subsampling) is a 16x16 macroblock (256 pixels); the DCT itself runs on 8x8 blocks. 4 macroblocks would make a 32x32 image. If we're working with a huge hash (ex: 512-bit SHA3), that's 128 bits of data we'll need to differentiate per 16x16 macroblock.
That seems usable?? Converting the JPEG into ASCII art would be the hardest part though! But I feel like that part has been solved already by weird Linux programs (play Doom on a Linux console and whatnot).
EDIT: https://github.com/hzeller/timg suggests modern terminals can display very crude low-resolution images. Something like a 32x32 picture is probably usable there.
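A minimal sketch of the terminal-rendering step (far cruder than timg: it just maps each grayscale pixel to one of five Unicode shade characters, assuming a UTF-8 monospace terminal; all names here are my own):

```python
# Render a flat list of 0-255 grayscale values as rows of shade
# characters: space (black) through full block (white).
SHADES = " ░▒▓█"

def render(pixels, width=32):
    rows = []
    for start in range(0, len(pixels), width):
        row = pixels[start:start + width]
        # bucket 256 gray levels into the 5 available shades
        rows.append("".join(
            SHADES[min(p * len(SHADES) // 256, len(SHADES) - 1)]
            for p in row))
    return "\n".join(rows)

# demo: a simple horizontal gradient as a stand-in for a hash image
demo = [x * 8 for x in range(32)] * 32
print(render(demo))
```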
-------
JPEGs have that whole "discrete cosine transform" thing that grossly complicates the algorithm though. PNGs are simpler: filtering plus DEFLATE (zlib) compression, which is largely run-length / dictionary coding.
The idea of a block of one "color" being repeated for 0 to 15 pixels (4 bits given to the run-length encoder, 4 bits selecting the "color") might be an easier "artification" step, with more obvious differences in the display.
512 bits / 8 bits per run (4-bit color + 4-bit run length) == 64 runs. A 32x32 square has 1024 pixels, so your 64 runs of average length 7.5 will cover about 480 pixels, well within the square.
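The run-length idea above, as a sketch: consume a 512-bit hash 8 bits at a time, high nibble picking one of 16 gray levels and low nibble picking a run of 0..15 pixels, painted left-to-right into a 32x32 canvas. The function name, the SHA-512 choice of hash, and the black background are my own assumptions.

```python
import hashlib

def hash_to_runs(data: bytes, size=32):
    digest = hashlib.sha512(data).digest()   # 64 bytes == 512 bits
    canvas = [0] * (size * size)             # start all-black
    pos = 0
    for byte in digest:                      # 64 runs, 8 bits each
        color = (byte >> 4) * 17             # high nibble -> 0..255 gray
        run = byte & 0x0F                    # low nibble  -> 0..15 pixels
        for _ in range(run):
            if pos >= len(canvas):
                return canvas                # ran off the square; stop
            canvas[pos] = color
            pos += 1
    return canvas   # average fill: 64 runs * 7.5 px = 480 of 1024 px

img = hash_to_runs(b"example input")
```

Note the worst case (64 runs of 15 pixels) is 960 pixels, so the runs can never overflow the 1024-pixel square; the guard is just belt-and-suspenders.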
I think the JPEGs could be useful without converting to ASCII art. Could you use JPEG decoding to convert hashes into small and visually-distinct images?
Those 64 images are the basis functions of the 8x8 Discrete Cosine Transform, and the magic of the DCT is that any 8x8 block can be represented by a weighted sum of those 64 images.
The images toward the bottom right are "harder" for the eye to recognize, while the images toward the top left are "easier" for the eye to recognize. The JPEG standard spends more bits on the top-left images and fewer bits on the bottom-right images.
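The 64 basis images can be generated directly from the DCT definition (a sketch, not the JPEG spec's code; the function name is mine):

```python
import math

def dct_basis(u, v, x, y, n=8):
    """Value of the (u, v)-th 8x8 DCT basis image at pixel (x, y)."""
    return (math.cos((2 * x + 1) * u * math.pi / (2 * n)) *
            math.cos((2 * y + 1) * v * math.pi / (2 * n)))

# (0, 0) is the flat "DC" image the eye sees most easily...
dc = [dct_basis(0, 0, x, y) for y in range(8) for x in range(8)]
# ...while (7, 7) is the fine checkerboard it can barely distinguish.
checker = [dct_basis(7, 7, x, y) for y in range(8) for x in range(8)]
```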
----------
Our controls are as follows: we can multiply each basis image by a value (ex: 4 bits gives us 0*Coefficient(0,0) through 15*Coefficient(0,0)). We should give more bits to the coefficients our eyes can see (ex: more bits to Coefficient(0,0)), and fewer bits to the bottom-right coefficients / images. The bottom-right checkerboard pattern looks very similar to the checkerboards around it, so we probably want to spend 0 bits on that pattern, never using it at all, so that our eyes can better distinguish the patterns we do use.
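Putting the bit-allocation idea together as a sketch: feed hash bits into the low-frequency coefficients (4 bits for DC, tapering to 0 bits toward the bottom-right corner), then inverse-DCT into an 8x8 gray block. The bit budget and the scaling factors below are my own invention, not the JPEG quantization table, and aren't tuned to hit exactly 128 bits per macroblock.

```python
import hashlib
import math

# Bits per coefficient: 4 for DC, one fewer per diagonal, 0 in the
# high-frequency corner (the checkerboards we never want to spend on).
BUDGET = [[max(0, 4 - (u + v)) for u in range(8)] for v in range(8)]

def hash_to_block(data: bytes):
    bits = "".join(f"{b:08b}" for b in hashlib.sha512(data).digest())
    coeffs = [[0.0] * 8 for _ in range(8)]
    i = 0
    for v in range(8):
        for u in range(8):
            n = BUDGET[v][u]
            if n:
                val = int(bits[i:i + n], 2)
                i += n
                # center around zero; weight DC more heavily (arbitrary)
                coeffs[v][u] = (val - (1 << (n - 1))) * (16 if u == v == 0 else 8)
    # textbook 2D inverse DCT, brute force (fine for a single 8x8 block)
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    block = [[0] * 8 for _ in range(8)]
    for y in range(8):
        for x in range(8):
            s = 0.0
            for v in range(8):
                for u in range(8):
                    s += (c(u) * c(v) * coeffs[v][u] *
                          math.cos((2 * x + 1) * u * math.pi / 16) *
                          math.cos((2 * y + 1) * v * math.pi / 16))
            # shift into 0..255 display range and clamp
            block[y][x] = max(0, min(255, round(s / 4 + 128)))
    return block
```

This budget only consumes 20 hash bits per 8x8 block; a real design would widen the per-coefficient budgets (and use all four blocks of the 32x32 image) to absorb the full hash.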