
Working with a single fixed bit depth is imho different from being bit-depth agnostic. The same argument could be made about color spaces too.

What is your complaint here exactly?

Title is fine. The key is the plural, smartwatches. From the actual article:

> No geodetic station is involved in this study, as one of the smartwatches serves as the base station.


Oh, that's interesting, I have to read it... Did they set up the smartwatch in a fixed place?

Ah yes, somehow I missed this :-S. Would edit my comment if I could.

Well, they also claim to be able to somehow cache build steps in a build-system-independent way.

> As the build runs, any step that exactly matches a prior record is skipped and the results are automatically reused

> SourceFS delivers the performance gains of modern build systems like Bazel or Buck2 – while also accelerating checkouts – all without requiring any migration.

Which sounds way too good to be true.


Yeah, I agree. This part is hand-waved away without any technical description of how they manage to pull it off, since knowing what even constitutes a build step, and what its dependencies and outputs are, is only possible at the process level (to disambiguate multi-threaded builds). And then there are build steps with side effects, which come up a lot with CMake+ninja.

A FUSE filesystem can get information about the thread performing the file access: https://man.openbsd.org/fuse_get_context.3

So they could in principle get a full list of dependencies of each build step. Though I'm not sure how they would skip those steps without having an interposer in the build system to shortcut it.
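
Something like this minimal fusepy sketch, for instance (purely illustrative; the pass-through filesystem and paths are made up, but fuse_get_context really does expose the uid/gid/pid of the caller):

    # Sketch: attribute file accesses to the calling process via the FUSE
    # context. Assumes the fusepy bindings; all paths are hypothetical.
    import os
    from fuse import FUSE, Operations, fuse_get_context

    class TracingFS(Operations):
        """Pass-through filesystem that logs which PID opened which file."""

        def __init__(self, root):
            self.root = root

        def _full(self, path):
            return os.path.join(self.root, path.lstrip("/"))

        def getattr(self, path, fh=None):
            st = os.lstat(self._full(path))
            return {k: getattr(st, k) for k in (
                "st_mode", "st_nlink", "st_size", "st_uid", "st_gid",
                "st_atime", "st_mtime", "st_ctime")}

        def open(self, path, flags):
            uid, gid, pid = fuse_get_context()  # who is doing this access
            print(f"pid {pid} (uid {uid}) opened {path}")
            return os.open(self._full(path), flags)

        def read(self, path, size, offset, fh):
            os.lseek(fh, offset, os.SEEK_SET)
            return os.read(fh, size)

    if __name__ == "__main__":
        FUSE(TracingFS("/src-mirror"), "/mnt/traced", foreground=True)

Grouping the logged accesses by pid (plus some parent-process tracking) would in principle give you a per-step dependency list.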


Didn't tup do something like that? https://gittup.org/tup/index.html Haven't looked at it in a while, no idea if it got adoption.

Initially the article sounded like it was describing a mix of tup and Microsoft's Git VFS (https://github.com/microsoft/VFSForGit) mushed together. But doing that by itself is probably a pile of work already.


Yes, you are correct - SourceFS also caches and replays build steps in a generic way. It works surprisingly well, to the point where it’s hard to believe until you actually see it in action (here is a short demo video, but it probably isn't the best way to showcase it: https://youtu.be/NwBGY9ZhuWc?t=76 ).

We intentionally kept the blog post light on implementation details - partly to make it accessible to a broader audience, and partly because we will be gradually posting more details. Sounds like build caching/replay is high on the desired blogpost list - ack :-).

The build-system integration used here was a one-line change in the Android build tree. That said, you’re right - deeper integration with the build system could push the numbers even further, and that’s something we’re actively exploring.


Yeah, that’s what I meant. I bet the build must be invoked through a wrapper script that interposes all executables launched within the product tree. Complicated, but I think it could work. Skipping steps correctly is the hard part, but maybe you do that by somehow knowing ahead of time which files a process will access, then skipping the launch and materializing the output instead (they also mention they have to run the build once in a sandbox to detect the dependencies). But still, side effects in build systems seem difficult to account for correctly; I bet that’s why it’s a “contact us” kind of product - there’s work needed to make sure it actually works on your project.

Seems viable if you can wrap each build step with a start/stop signal.

At the start, snapshot the filesystem. Record all files read & written during the step.

Then when this step runs again with the same inputs you can apply the diff from last time.

Some magic to hook into processes and do all this automatically seems possible.
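
A rough sketch of that loop, with the filesystem snapshot approximated by content-hashing the recorded inputs (all names here are hypothetical):

    # Sketch of per-step record/replay: fingerprint the command plus its
    # recorded inputs; if an identical run was seen before, re-materialize
    # the saved outputs instead of executing.
    import hashlib
    import json
    import subprocess
    from pathlib import Path

    CACHE = Path(".step-cache")

    def step_key(cmd, inputs):
        h = hashlib.sha256(" ".join(cmd).encode())
        for p in sorted(inputs):
            h.update(Path(p).read_bytes())
        return h.hexdigest()

    def run_step(cmd, inputs, outputs):
        entry = CACHE / step_key(cmd, inputs)
        if entry.exists():
            # "Apply the diff from last time": restore the saved outputs.
            for path, blob in json.loads(entry.read_text()).items():
                Path(path).write_bytes(bytes.fromhex(blob))
            return
        subprocess.run(cmd, check=True)
        CACHE.mkdir(exist_ok=True)
        entry.write_text(json.dumps(
            {p: Path(p).read_bytes().hex() for p in outputs}))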


I think I got the magic part. You can store all build-system binaries in the VFS itself. When any binary gets executed, the VFS can return a small sham binary instead that just checks the command-line arguments; if they match, checks the inputs; and if those match too, applies the previous output. On any mismatch it executes the original binary as usual and records the new output. Easy, and no process hacking necessary.
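
Roughly, the sham binary could look like this (just a sketch; REAL_BINARY and the cache layout are made up, and real output files would need the same treatment as stdout here):

    #!/usr/bin/env python3
    # Sketch of the "sham binary": served by the VFS in place of a real
    # tool. It fingerprints argv + input contents and either replays the
    # cached result or falls through to the original binary.
    import hashlib
    import os
    import sys
    from pathlib import Path

    REAL_BINARY = "/sourcefs/.real/cc"   # where the VFS keeps the original
    CACHE = Path("/sourcefs/.cache")

    def fingerprint(argv, inputs):
        h = hashlib.sha256("\0".join(argv).encode())
        for p in inputs:
            h.update(Path(p).read_bytes())
        return h.hexdigest()

    inputs = [a for a in sys.argv[1:] if Path(a).is_file()]  # crude guess
    hit = CACHE / fingerprint(sys.argv, inputs)
    if hit.exists():
        sys.stdout.write(hit.read_text())   # replay previous output
    else:
        # Mismatch: hand over to the real binary as usual.
        os.execv(REAL_BINARY, [REAL_BINARY] + sys.argv[1:])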

I used to use a Python program called ‘fabricate’ which did this. If you track every file a compiler opens, then if the same compiler is run with the same flags and no input changed, you can just drop a cached copy of the outputs in place.

I’m actually disappointed this type of thing never caught on; it’s fairly easy on Linux to track every file a program accesses, so why do I need to write dependency lists?
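
On Linux the tracking can be as crude as running the tool under strace, which is roughly what fabricate did (a sketch; the parsing is deliberately naive):

    # Sketch of fabricate-style dependency discovery: run the compiler
    # under strace and collect every path it open(at)ed.
    import re
    import subprocess
    import tempfile

    def files_opened_by(cmd):
        with tempfile.NamedTemporaryFile(mode="r", suffix=".trace") as log:
            subprocess.run(
                ["strace", "-f", "-e", "trace=open,openat", "-o", log.name]
                + cmd,
                check=True)
            # Lines look like: openat(AT_FDCWD, "main.c", O_RDONLY) = 3
            return set(re.findall(r'"([^"]+)"', log.read()))

    # e.g. files_opened_by(["cc", "-c", "main.c", "-o", "main.o"])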


You could manage this with a deterministic VM, cf. Antithesis.

There are at least half a dozen open-source terminal emulators with decently performant text rendering, if you want to look into how it is really done. It is not simple, but at this point I feel it is a largely solved problem.

Kingbright recently released 01005-sized (0.45mm x 0.25mm x 0.2mm) LEDs, which afaik are among the smallest easily available. One neat idea would be to pack those onto a DIP14-sized PCB, making a neat tiny character display. I guess something like a 5x7 or 6x8 matrix could be doable, with a small MCU to drive them.

For those 1mm addressable RGB LEDs, I've been thinking you could do cool cyberpunk looks by stringing them on some hair-thin magnet wire and sticking them to your body/face/hair/etc. Blend them in with some latex or something if needed. You'd just need to hide the controller/battery somewhere.


Those look like normal 2.5-3mm LEDs, which is a big difference from 1mm² LEDs. The circular disc from OP's link has 2.5x higher LED density, and they could probably be packed even more densely in a grid.

Looks like they're WS2812B-2020, like on this PCB:

https://www.wemos.cc/en/latest/d1_mini_shield/8x8_rgb.html


BTW: The 2020 are super bright, but the 1010 not so much.

Elastic II is very neat in how it preserves drainage basins. But Antarctica looks really distorted, in a way that doesn't seem necessary? Could that be fixed somehow?

For blending, Oklab almost always works better than sRGB (linear or gamma).

It's possible I wasn't specific enough when I said "graphics". Typically I blend in CIELAB when interpolating between colors for visualizations (eg data science).

But I'm unaware of rendering engines that do alpha blending in something other than linear RGB or sRGB. Photoshop, for instance, blends in sRGB by default, while renderers that simulate light physically will blend in linear RGB (to the best of my knowledge).

It depends on the GPU and the implementation, but I personally would not want to spend the compute on per-pixel CIELAB conversions for blending.
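
To make the difference concrete, here's a minimal sketch (plain Python, no engine) of the same 50% red/green blend done naively in gamma-encoded sRGB versus in linear light:

    # Naive blend in gamma-encoded sRGB vs. a physically plausible blend
    # in linear light. Standard sRGB transfer functions, components 0..1.
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    def blend_gamma(a, b, t):
        return [(1 - t) * x + t * y for x, y in zip(a, b)]

    def blend_linear(a, b, t):
        lin = [(1 - t) * srgb_to_linear(x) + t * srgb_to_linear(y)
               for x, y in zip(a, b)]
        return [linear_to_srgb(c) for c in lin]

    red, green = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
    print(blend_gamma(red, green, 0.5))   # [0.5, 0.5, 0.0] -- dark, muddy midpoint
    print(blend_linear(red, green, 0.5))  # ~[0.735, 0.735, 0.0] -- brighter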


> But I'm unaware of rendering engines that do alpha blending in something other than linear or SRGB.

Well, spectral rendering is a thing; it kinda bypasses the problem of color blending for rendering in some cases.


I'd assume it is 64-bit, which would explain why it is limited to the Pi 3 upwards.
