
The naming scheme used to be "Claude [number] [size]", but now it is "Claude [size] [number]". The new models should have been named Claude 4 Opus and Claude 4 Sonnet, but they changed it, and even retconned Claude 3.7 Sonnet into Claude Sonnet 3.7.

Annoying.


Thank you, Simon. I believe that cases like Elastic and Redis returning to an open source license are like writing on the rock: "open source won", at least in the system software space. Companies get created, prosper and fail over time, but this message is here to stay with us for a long time, and it shapes the society of tomorrow. It's a win for the software community itself.

It does a lot of things, though many are somewhat subtle, like screen locking timeouts, various networking details, a bunch of utility programs and so on. I like to start with Xfce on Debian and plaster i3wm over it; it's the best 'power user' setup I've come across.

I wouldn't hesitate to put a 'regular' computer user in front of Xfce; it strikes a nice balance between simple and discoverable, with very few annoyances. It's also where I go when I want to use some many-windowed application that doesn't fit into tiling.


All Android devices support running the Android Open Source Project via Treble and we could quickly add support for non-Pixel devices doing things in a reasonable way too. Those devices don't meet our hardware security requirements (https://grapheneos.org/faq#future-devices) which is why we don't support them. It wouldn't be that hard to add a Sony or Motorola device but they're missing the expected security features and proper updates. It wouldn't be possible to provide our standard security protections on them which is the real blocking issue, not difficulty. Android has made device support simple, but the rest of the Android device ecosystem is not at all competitive in security with Pixels and iPhones.

We automate a huge portion of the work via https://github.com/GrapheneOS/adevtool. We do a GrapheneOS build with it and it outputs state you can see in https://github.com/GrapheneOS/vendor_state which is then used to automatically figure out all of the missing overlays, SELinux policy, firmware files, etc. When we have our own devices in partnership with an OEM we won't need to use adevtool. We can borrow a lot from Pixels for those devices though.

Pixels are very similar to each other, which does make things simpler. The entire kernel source tree is identical for 6th, 7th, 8th and 9th generation Pixels. They all use the Linux 6.1 long term support branch on bare metal and the Linux 6.6 branch for on-device virtual machines. They'll likely advance to new Linux kernel branches together rather than ending up very split across different ones as they were in the past. That makes things easier for us.

Pixels also share most of the same drivers for the SoC and lots of other drivers. Those drivers support the different generations of hardware with the same codebase for the most part. There are still 6 different Wi-Fi/Bluetooth drivers across them but 5 of those are variations of a very similar Broadcom Wi-Fi/Bluetooth driver and only 1 is a separate Qualcomm Atheros driver (Pixel 7a).

We have various hardware-based hardening features, such as our hardware-based disabling of the USB-C port, with variations across different hardware generations (https://grapheneos.org/features#usb-c-port-and-pogo-pins-con...) and similar features. Our exploit protection features also uncover lots of memory corruption bugs across devices in their drivers. We do have a lot of device-specific work fixing uncovered bugs. Hardware memory tagging in particular finds nearly every heap memory corruption bug occurring during regular use, including out-of-bounds reads, so that finds a lot of bugs we need to handle. Many of the bugs we find with hardware memory tagging and other memory corruption exploit protections are in drivers or the portable Bluetooth software stack, which is thankfully one of the components Android is currently gradually rewriting in Rust, along with the media stack.

If we supported a device with very different drivers, there wouldn't be much work to deal with that directly, but enabling features like our hardware memory tagging support would require fixing a bunch of memory corruption bugs occurring during regular use on it. Supporting other Android devices with the Android Open Source Project is easy. Supporting them with GrapheneOS is significantly harder, due to various hardening features needing integration at a low level, along with others uncovering a lot of latent bugs which were occurring but not being noticed most of the time. The ones which get noticed, often by breaking things, get fixed, but many latent memory corruption bugs remain unless the OEM heavily tests with HWASan or MTE themselves, which is unlikely. Pixels are tested with HWASan and MTE by Google, yet we still have to fix a lot ourselves, largely because testing in a device farm is different from users actually using the devices with assorted Bluetooth accessories, etc.


The Tailwind-esque ergonomics are appealing, especially considering how painful the current meta (Chart.js spaghetti) can be to maintain.

Text wrapping is actually a difficult optimization problem. That's part of the reason LaTeX has such good text wrapping -- it can spend serious CPU cycles on the problem because it doesn't do it in real time.
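
To make that concrete, here is a minimal dynamic-programming line breaker in Python -- a sketch of the core idea behind LaTeX's approach (pick break points that minimize total squared slack over the whole paragraph, rather than greedily filling each line), not its actual algorithm:

    # Minimize the summed squared leftover space across all lines.
    def wrap(words, width):
        n = len(words)
        best = [0.0] + [float('inf')] * n  # best[i]: min cost for words[:i]
        breaks = [0] * (n + 1)
        for i in range(n):
            line = -1  # no space before the first word on a line
            for j in range(i, n):
                line += len(words[j]) + 1
                if line > width:
                    break
                slack = 0 if j == n - 1 else (width - line) ** 2
                if best[i] + slack < best[j + 1]:
                    best[j + 1] = best[i] + slack
                    breaks[j + 1] = i
        lines, j = [], n
        while j > 0:
            lines.append(' '.join(words[breaks[j]:j]))
            j = breaks[j]
        return lines[::-1]

    print('\n'.join(wrap('the quick brown fox jumps over the lazy dog'.split(), 12)))

A greedy wrapper decides each line in isolation; the dynamic program above considers how an early break affects every later line, which is exactly the part that costs CPU cycles.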

I agree completely, but I always struggle with how to characterize this. Software engineers are generally pretty privileged, and even relatively mediocre ones can pretty easily break $100k per year. But work in this field is incredibly unsatisfying and frustrating. For sure, none of us would drop what we're doing to go work in retail. It's not as if we're suffering in any strict sense: no one is really allowed to abuse us, and our jobs aren't ruining our bodies the way something like construction does. But many of us kind of hate it and only stay because of the money, and would work anywhere else if the other jobs paid well enough.

Now before you get out your tiny violin, I'm not saying other people don't have it worse, or that anyone should direct their sympathy towards us at all. I guess it feels more like a golden handcuffs situation.


GL.iNet is a popular brand, though I can't find a Wikipedia page for it.

https://www.gl-inet.com/about-us/ says:

> GL Tech (HK) Ltd: #601, 5W, Hong Kong Science Park, N.T. Hong Kong

> GL Intelligence, Inc.: 10400 Eaton Place, Suite 215, Fairfax, VA 22030

I'm a little curious about this. One of the reasons some people run OpenWrt is for improved security. In the general security space, a Shenzhen company isn't the most usual choice of vendor for Western countries. Also, the company having its US subsidiary/office/unit based in Virginia, with "intelligence" in the name, hits a somewhat odd note.


I don't love this.

> Unfortunately, while ID Tokens do include identity claims like name, organization, and email address, they do not include the user’s public key. This prevents them from being used to directly secure protocols like SSH

This seems like a dubious statement. SSH authentication does not need to be key based.

I understand the practicality of their approach, but I would have preferred this to be a proper first-class authentication method instead of smuggling it through the publickey auth method. The SSH protocol is explicitly designed to support many different auth methods, so this does feel like a missed opportunity. I don't know the OpenSSH internals, but could this have been implemented through GSSAPI? That's the traditional route for SSH SSO. If not GSSAPI, then something similar to it.

https://datatracker.ietf.org/doc/html/rfc4462


Another option:

SSH certificates have been around for a while now: you can create an in-house SSH CA and issue short-lived certificates (compared to long-lived on-laptop keys), so that you have to authenticate to get a fresh one.
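
The manual flow is just a couple of ssh-keygen invocations; here is a sketch wrapped in Python (the CA path, identity, principal and validity are made-up examples):

    # Mint a short-lived certificate for a user key using an in-house SSH CA.
    import subprocess

    subprocess.run([
        "ssh-keygen",
        "-s", "ssh_ca",             # sign with the CA private key
        "-I", "alice@example.com",  # certificate identity (appears in server logs)
        "-n", "alice",              # principals the certificate is valid for
        "-V", "+8h",                # short lifetime: expires in 8 hours
        "id_ed25519.pub",           # the user public key to certify
    ], check=True)                  # writes id_ed25519-cert.pub next to the key

Servers then only need to trust the CA's public key (TrustedUserCAKeys in sshd_config) instead of distributing per-user authorized_keys.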

To automate getting SSH certs there are a number of options, including the step-ca project, which can talk to OAuth/OIDC systems (Google, Okta, Microsoft Entra ID, Keycloak):

* https://smallstep.com/docs/step-ca/provisioners/#oauthoidc-s...

as well as cloud providers:

* https://smallstep.com/docs/step-ca/provisioners/#cloud-provi...

There are commercial offerings as well:

* https://www.google.com/search?q=centrally+managed+ssh+certif...


In the US, salaries for the top dogs at nonprofits are reported.

Some get around this with "consulting fees" paid to companies controlled by the top dogs, but it's at least something.


Quora was fantastic back around 2011. It did provide value, but that value has gradually decreased as the site declined.

No, you don't have to pay on Quora to get answers; that's incorrect. Having said that, these days most questions languish without good answers, or often any at all. The only ones that get traffic from humans are in what Quora calls Spaces, i.e. groups for Q&A around a certain topic and/or a certain point of view.


I can't help but think that those lazy mathematicians might benefit from a congressional order to clean up that twin prime problem too.

If memory safety was "just the right regulations" easy, it would have already been solved. Every competent developer loves getting things right.

I can imagine the result of any "progress" with that approach being a lot more "compliance" than success.

The basic problem is challenging, but what makes it hard-hard is the addition of a mountain of incidental complexity. Memory safety as a retrofit on languages, tools and code bases is a much bigger challenge than starting with something simple and memory safe, and then working back up to something with all the bells and whistles that mature tool ecosystems provide for eking out that last bit of efficiency. Programs get judged 100% on efficiency (how fast can you get this working? how fast does it run? how much is our energy/hardware/cloud bill?), and only 99% or so on safety.

If the world decided it could get by on a big drop in software/computer performance for a few years while we restarted with safer/simpler tools, change would be quick. But the economics would favor every defector so much that ... that approach is completely unrealistic.

It is going to get solved. The payoff is too high, and the pain is too great, for it not to. But not based on a concept of a plan or regulation.


If you'd like, feel free to reach out to me via email with your requirements and we can get a conversation going. I've built a few voice agent systems in both Python and JavaScript and would love to hear about what issues you're running into. Might be able to build what you need.

Wow! I wouldn't be surprised if I make more than 100 searches per day.

You can also intercept the XHR response: generation still stops, but the UI won't update to hide anything, revealing the thoughts that led to the content filter:

    // Drop any stream lines that carry the content_filter marker.
    // Non-string responses (e.g. responseType 'json') pass through untouched.
    const filter = t => typeof t === 'string'
      ? t.split('\n').filter(l => !l.includes('content_filter')).join('\n')
      : t;

    // Wrap the native response/responseText getters on the XHR prototype.
    ['response', 'responseText'].forEach(prop => {
      const orig = Object.getOwnPropertyDescriptor(XMLHttpRequest.prototype, prop);
      Object.defineProperty(XMLHttpRequest.prototype, prop, {
        get: function() { return filter(orig.get.call(this)); }
      });
    });
Paste the above in the browser console ^

Why would it cast any doubt? If you can use o1's output to build a better R1, then use R1's output to build a better X1... then a better X2... XN, that just shows a method to create better systems for a fraction of the cost from where we stand. If it was that obvious, OpenAI would have done it themselves. But the disruptors did it. In hindsight it might sound obvious, but that is true for all innovations. It is all good stuff.

Units might seem simple but they have a ton of edge cases. Do you want to be able to add inches and feet? Be careful about potential precision/rounding issues. What is the unit for a temperature delta? You can’t simply keep the original unit (eg C or F) because conversion from F to C is a different rule than ΔF to ΔC. Etc.
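
To see the delta problem concretely, here's a quick Python sketch (hypothetical helper names, just to show the two different rules):

    # Absolute temperature uses scale and offset: F = C * 9/5 + 32.
    def c_to_f(c):
        return c * 9 / 5 + 32

    # A temperature *delta* only scales -- no offset.
    def dc_to_df(dc):
        return dc * 9 / 5

    print(c_to_f(20))    # 68.0: 20 C is 68 F
    print(dc_to_df(20))  # 36.0: a 20 C rise is a 36 F rise, not a 68 F one

A units library that stores a delta as a plain temperature will silently apply the wrong conversion.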

Units do prevent bugs in programs, so they have an important role to play. But they also need to be designed very carefully.

Java adopted units via JSR 385 (https://belief-driven-design.com/java-measurement-jsr-385-21...)


It’s always good to take a look: many things are decided on the client side, and developer tools are part of the browser anyway.

The other day I wanted to make reservations for a service to send my luggage from the airport to my house in Japan, and the form was giving me errors.

Searching around for the error string, I realized there was a timeout set on the client side, so I increased it and could slowly but smoothly fill in all the information that required a server check.

I guess they never bothered to debug their system when accessing it from the other side of the world. All it needed was a few extra milliseconds for the requests to arrive in time.


The cost for both training and inference is vaguely quadratic in context length while, for the vast majority of users, the marginal utility of additional context is sharply diminishing. For 99% of ChatGPT users, something like 8192 tokens, or about 20 pages of context, would be plenty. Companies have to balance the cost of training and serving models. Google did train an uber-long-context version of Gemini, but since Gemini itself fundamentally was not better than GPT-4 or Claude, this didn't really matter much; so few people actually benefited from such a niche advantage that it didn't shift the playing field in their favor.
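
Back-of-the-envelope, assuming attention cost scales with the square of context length (ignoring the linear terms):

    # Relative attention cost vs. an 8k window, assuming ~n^2 scaling.
    base = 8_192
    for n in (8_192, 32_768, 131_072, 1_048_576):
        print(f"{n:>9} tokens -> ~{(n / base) ** 2:,.0f}x")

So a million-token window costs on the order of 16,000x the attention compute of an 8k window, for a feature most users never touch.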

One even better approach, IMHO:

Just keep a .gitconfig in your HOME with aliases for your identities. Then, just after initializing/cloning a repo, run git config-company or git config-personal:

    er453r@r7:~$ cat ~/.gitconfig
    [user]
        # refuse to commit until an identity is set per repo
        useConfigOnly = true
    [alias]
        config-personal = !echo CONFIG-PERSONAL && \
            git config --local user.email 'personal@email.com' && \
            git config --local user.name 'personal' && \
            git config --local core.sshCommand 'ssh -i ~/.ssh/id_rsa_personal'
        config-company = !echo CONFIG-COMPANY && \
            git config --local user.email 'official@company.io' && \
            git config --local user.name 'Name Surname' && \
            git config --local core.sshCommand 'ssh -i ~/.ssh/id_rsa_company'
