potato-peeler's comments | Hacker News

How is the uniqueness of devices even calculated?

I am curious - there are many x11 only apps, will they stop working OOTB?

No, XWayland isn't going away. Just X.org for running them.

> Just X.org for running them

What does that mean?


X11 applications will still "work" on a Plasma Wayland session by utilizing XWayland, a tool that (afaik) runs a compact X11 session for each application.

What is being removed is the ability to run an X11-native Plasma session; starting with Plasma 6.8, only Wayland Plasma sessions will be available.
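As an illustrative aside (a sketch, not part of the original explanation): an application can usually tell which kind of session it ended up in by inspecting standard environment variables; under XWayland, an X11 app still sees a DISPLAY while the session itself reports Wayland.

    // Sketch (Node/TypeScript): guess the session type from standard env vars.
    // XDG_SESSION_TYPE is set by the session; DISPLAY by X11/XWayland.
    const sessionType = process.env.XDG_SESSION_TYPE ?? "unknown";
    const hasX11Display = Boolean(process.env.DISPLAY);

    if (sessionType === "wayland" && hasX11Display) {
      console.log("Wayland session; X11 apps go through XWayland");
    } else if (sessionType === "x11") {
      console.log("Native X11 session");
    } else {
      console.log(`Session type: ${sessionType}`);
    }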


You don’t need an extension to do this. Simply add a “before:” filter to your search query, e.g. https://www.google.com/search?q=Happiness+before%3A2022
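If you want to build such a query programmatically, here is a minimal sketch (the before: operator is Google's; everything else is just illustration):

    // Sketch: build a Google search URL with a "before:" date filter.
    const query = "Happiness before:2022";
    const url = `https://www.google.com/search?${new URLSearchParams({ q: query })}`;
    console.log(url); // https://www.google.com/search?q=Happiness+before%3A2022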

I hope you will continue maintaining a mirror on GH. Some tools like DeepWiki are excellent resources for learning about a codebase when there is not much documentation going around. But these tools only support pulling from GH.

I have the exact opposite experience where I had to block multiple such "excellent resources" from my search results.

How is pulling dependent on GitHub?

Git pulling isn't unique to GitHub, and it works over HTTP or SSH?


A neat thing about GitHub is that every file on it can be accessed from URLs like https://raw.githubusercontent.com/simonw/llm-prices/refs/hea... which are served through a CDN with open CORS headers - which means any JavaScript application running anywhere can access them.

Demo: https://tools.simonwillison.net/cors-fetch?url=https%3A%2F%2...
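For example, a minimal sketch of such a cross-origin fetch (the owner/repo/path here are placeholders, not the file from the comment; run as an ES module on Node 18+ or in a browser):

    // Sketch: read a raw file from GitHub's CDN from any origin.
    // OWNER/REPO/README.md are placeholders for illustration.
    const url =
      "https://raw.githubusercontent.com/OWNER/REPO/refs/heads/main/README.md";

    const res = await fetch(url);
    console.log(res.headers.get("access-control-allow-origin")); // "*" on this CDN
    console.log(await res.text());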


That feature seems common to other git hosts / forges. For example, here's one of Dillo's files, from a few commits ago, from their cgit-based host

https://git.dillo-browser.org/dillo/plain/src/ui.cc?id=29a46...


That doesn't have open CORS headers: https://tools.simonwillison.net/cors-fetch?url=https%3A%2F%2...

It's also not being served via a caching CDN, which means I don't feel comfortable running anything automated against it as that might add load to the server that they aren't ready for.
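One way to probe for this outside the browser, as a sketch (the URL is a placeholder; any HTTP client that shows response headers would do, since CORS is only enforced by browsers):

    // Sketch: check whether a URL sends open CORS headers.
    // A missing Access-Control-Allow-Origin header means browser JS
    // on another origin cannot read the response.
    async function checkCors(url: string): Promise<void> {
      const res = await fetch(url);
      const allow = res.headers.get("access-control-allow-origin");
      console.log(allow ?? "no Access-Control-Allow-Origin header");
    }

    await checkCors("https://example.org/some/file");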


It's less about pulling and more about tools like DeepWiki assuming that their inputs live on GitHub, so repository URLs are expected to be GH URLs as opposed to URLs to git repositories hosted anywhere.

That being said, there's no reason for tools like it to have those constraints other than pushing users into an ecosystem they prefer (i.e. GitHub instead of other forges).


Firefox already provides this feature in about:config via privacy.resistFingerprinting - https://support.mozilla.org/en-US/kb/resist-fingerprinting
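For reference, the pref can also be persisted in a profile's user.js file (the pref name is real; setting it this way is just one option, equivalent to flipping the switch in about:config):

    // Firefox prefs format (user.js in the profile directory):
    user_pref("privacy.resistFingerprinting", true);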

Are the values of these biomarkers applicable across all age ranges? And if someone has already had a heart attack, what should the normal values be post-incident?


AFAIU, for LDL and ApoB, the real danger lies in the area under the curve: lifetime exposure. That's not to say that lifestyle improvements can't help in other ways, but the damage caused by LDL is very difficult (arguably impossible) to reverse.

So, if you hit the point where you already had a heart attack, you really want to prevent any further damage, but the "accumulated" risk is still there.

I think that's part of what makes LDL so tragic. You should care about it your whole life, but when you are young, you just don't.

Worse, high LDL is becoming a thing in children as well; that's an extra decade of accumulation which has historically not happened.

I don't think people should panic about these things, but I think it highlights the importance of developing good habits early, and the role parents and society have in making those habits easy for young people to adopt.
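To make the "area under the curve" idea concrete, a toy calculation (illustrative numbers only, not medical guidance):

    // Sketch: cumulative LDL exposure approximated as a sum of (level x years).
    // All numbers below are made up for illustration.
    type Span = { ldlMgDl: number; years: number };

    const life: Span[] = [
      { ldlMgDl: 100, years: 20 }, // youth
      { ldlMgDl: 140, years: 25 }, // midlife, untreated
      { ldlMgDl: 80, years: 15 },  // after treatment
    ];

    const exposure = life.reduce((sum, s) => sum + s.ldlMgDl * s.years, 0);
    console.log(`${exposure} mg/dL-years`); // 6700

Lowering LDL later in life reduces the rate of accumulation, but the area already under the curve stays.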


What does using native views mean? Do they invoke native UI controls instead of drawing their own? Seems similar to Boden - https://www.boden.io/


Hi, NetSurf provides its HTML, CSS and DOM parsers as separate, independent libraries [1]. Does Dillo provide the same?

[1] - https://www.netsurf-browser.org/projects/hubbub/


Realistically, are these enough to replicate the chips?


To capture the individual transistors on a modern CPU, you'd need an image tens of terabytes in size, and it'd have to be captured by an electron microscope, not optically. And even that wouldn't let you see all the layers. For some of the very old CPUs, I'm not sure what resolution would be required.
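Rough back-of-the-envelope arithmetic behind that size estimate (all numbers are assumptions for illustration):

    // Sketch: order-of-magnitude image size to resolve individual transistors.
    const transistors = 1e10;        // ~10 billion on a modern CPU
    const pixelsPerTransistor = 100; // enough pixels to resolve each feature
    const bytesPerPixel = 1;         // 8-bit grayscale
    const bytes = transistors * pixelsPerTransistor * bytesPerPixel;
    console.log(`${bytes / 1e12} TB per layer`); // ~1 TB, times many layers plus overhead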


Absolutely not. It's like opening the hood of your car, taking a picture of what you see, and then trying to build a replica of the engine based on that.


Mostly no. You do not see the lower layers, and for anything sub-1µm or so the resolution is too poor anyway.


A few early CPUs (famously including the 6502) were fully reverse engineered through "ordinary" die photos, and they even have gate-level simulators now where you can basically see the individual transistors switching: http://www.visual6502.org

I don't know if that's still feasible for an 8088 or 8086. Anything past that, almost certainly not. Anything modern, absolutely not.


Are there any shots of audio amplifiers?


