That post fails to address the main issue: it's not that we don't have time to vet dependencies, it's that Node.js's security and default package model is absurd, and how we use it is even worse. Even most Deno posts I see use "allow all" out of laziness, which I assume will be copy-pasted by everyone, because getting to the right minimal permissions is a major UX pain. The only programming model I am aware of that makes it painful enough to use a dependency, encourages hard pinning and vetted dependency distribution, and forces an explicit, minimal, capability-based permission setup is Cloudflare's workerd. You can even set it up so workers (without changing their code) run fully isolated from the network and communicate only via a policy evaluator for ingress and egress. It is Apache-licensed, so it is beyond me why this is not the default for the use cases it fits.
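
To make the Deno point concrete, here is a minimal sketch of what least-privilege looks like there (the host and path are placeholder examples, not from any real project):

    // Instead of `deno run -A app.ts`, grant only what the program needs:
    //   deno run --allow-net=api.example.com --allow-read=./config app.ts
    // The script can then verify at runtime what was actually granted:
    const net = await Deno.permissions.query({
      name: "net",
      host: "api.example.com",
    });
    if (net.state !== "granted") {
      console.error("missing net permission for api.example.com");
      Deno.exit(1);
    }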


Another main issue is how large (deep and wide) this "supply chain" is in some communities. JavaScript and Python are notable for their giant reliance on libraries.

If I compare a typical Rust project with a comparable JavaScript one, the JavaScript project often has an order of magnitude more direct dependencies (a wide supply chain). The Rust tool will have three or four, the JavaScript one over ten, sometimes ten alone just to build the TypeScript in dev. This is worsened by the JavaScript dependencies' own deps (and theirs, and theirs, all the way down to is_array or left_pad), easily getting into the hundreds. In Rust, that graph will list maybe ten more, or, with some complex libraries, a total of several tens.
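
A rough way to see this for yourself (both commands ship with the respective toolchains; line counts only approximate graph size):

    # Rust: print the full dependency graph of the current crate
    cargo tree | wc -l
    # JavaScript: print the full dependency tree of the current package
    npm ls --all | wc -l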

This attitude difference is also clear in the Python community, where the knee-jerk reaction is to add an import, rather than to think it through, maybe copy-paste a file, and in any case be very conservative. Do we really need colors in the terminal output? We do? Can we not just create a file with some constants that hold the four ANSI escape codes instead?
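
For example, a sketch of the constants-file approach (four ANSI SGR codes; which colors you pick is arbitrary):

    // Four constants instead of a color library:
    const RED = "\x1b[31m";
    const GREEN = "\x1b[32m";
    const YELLOW = "\x1b[33m";
    const RESET = "\x1b[0m";
    console.log(`${GREEN}ok${RESET} all checks passed`);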

I'm trying to argue that there's also an important cultural problem to be considered with supply chain attacks.


> [...] Python are notable for their giant reliance on libraries.

I object. You can get a full-blown web app rolling with Django alone. Here's its list of external dependencies, including transitive: asgiref, sqlparse, tzdata. (I guess you can also count jQuery, if you're using the _builtin_ admin interface.)

The standard library is slowly swallowing the most important libraries & tools in the ecosystem, such as json or venv. What was once a giant yield-hack to get green threads / async is now a part of the language. The language itself is conservative in what new features it accepts; 20-year-old Python code still reads like Python.

Sure, I've worked on a Django codebase with 130 transitive dependencies. But it's seven years old and powers an entire business. A "hello world" app in Express has 150; for Vue it's 550.


> If I compare a typical Rust project with a comparable JavaScript one, the JavaScript project often has an order of magnitude more direct dependencies (a wide supply chain).

This has more to do with the popularity of a language than anything else, I think. Though the fact that Python and JS are used as "entry level" languages probably encourages some of these "lazy" libraries cough cough left-pad cough cough.


To be fair, the advantage of Deno here is really the standard library, which includes way more functionality than Node's.

But in the end, we should all rely on fewer dependencies. It's certainly the philosophy I'm trying to follow with https://mastrojs.github.io – see e.g. https://jsr.io/@mastrojs/mastro/dependencies


Looks uncanny at that screen size; my only hope for a mini replacement is probably a reality where glasses make screen size irrelevant.


Interesting that they almost exclusively talk about security but do not introduce a real security framework such as Google's CaMeL, which would solve the issues not fully, but more fundamentally. They only talk about mitigations and classical agent hardening, which will clearly not be enough for a browser. 11% and 0% for selected cases is just not gonna cut it.


It's the most frighteningly naive reply I could imagine: if you can ask for it, it can hallucinate you asking for it, or it can get prompt-injected into you asking for it. For voice-only agents without a UI approval process, the only way is to have a separate clean-room permission agent that only gets absolutely safe context, not even aggregated email titles. Also, for email it is impossible to design a safe agent that performs any sort of write action after reading anything in a mailbox, because the mailbox is by definition tainted third-party data and personally sensitive at the same time. Even moving a mail to a folder can be used for attacks, e.g. by hiding password reset notification mails.


I learned this back in the 90s; I can read it without issues and can still write it if I concentrate. But I just realised that I haven't used a pen in years, and the mere act of writing on paper feels truly weird now.


It's pretty clear to a growing number of devs what a review tool should look like. It is more a matter of what needs to happen for this to become a usable and sustainable reality, and what shape of organisation/players can make this happen in the right way.

- git itself won't go much further than the change-id, which is already a huge win (thanks to jj, GitButler, Gerrit and other teams)

- Graphite and GitHub clearly showed they are not interested in solving this for anyone but their userslaves, and they have obviously opposing incentives

- there are dozens of semi-abandoned CLI tools trying this without any traction; a CLI can be part of a solution, but only a small part

What we need:

- usable fully locally

- core team support for VS Code, not just a broken afterthought by someone from the broader community

- a web UI for use cases where VS Code does not fit (possibly via VS Code Web or other ways to reuse as much as possible of the interface work that went into the VS Code integration)

- the core needs to be usable from a CLI or library with clear boundaries, so other editor teams can build integrations as great as the reference one, but fitting their native UI concepts

- it needs to work for commits, branches, stacked commits and any snapshot an agent creates, as well as for reviewing a dev's own work before pushing

- it needs to incorporate CI/CD signals natively; Meta did great UI work on this, and it's crucial not to ignore all that progress but to build on top of it

- it needs to be as fine-grained as the situation requires, with editability at every step. Why can I accept a single line in Cursor, but there is nothing like that when reviewing a human's code? Why can I fix a typo without any effort when reviewing in Cursor, but have to go through at least 5 clicks to do the same when fixing a human's typo?

- it needs to be fully incremental: when a PR is fixed, there needs to be a simple way to review just the fix, not re-review the whole PR or the full file


OrbStack is just a VM provider for Docker on Mac. Colima offers the same features without a UI and is a great open replacement, but as neither supports Podman, both are not really relevant to the Podman discussion.


The UI of OrbStack is probably one of its biggest features, so a replacement without the UI doesn't make a ton of sense for most people who like OrbStack.


Podman has this built-in, and there is an optional UI called Podman Desktop.


> OrbStack is just a VM provider for Docker on Mac

"Just" is a big statement here. The performance of Colima and OrbStack is from different planets.

Apple just released its own container runtime, so that is also worth inspecting.


I haven't used OrbStack in a while, but would you say Colima or OrbStack is faster? At least on an Intel Mac, Colima is way better for me than Docker. It's also better than Podman in terms of compatibility, although I had to switch back to Docker Desktop since I need full compatibility.


You know someone has NOT used OrbStack when they think all it has to offer is the UI. In fact, I barely use the UI; I just see the icon in the menu bar. From then on I just love the performance; it feels almost like being back on Linux.


Can you back that claim up? I see a huge difference between OrbStack and Docker Desktop, but Colima and OrbStack use, AFAIK, the same technology, and the performance was near identical in my tests. (Though you need to change the Colima settings to vz and virtiofs.)
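
For reference, that switch looks something like this (flag names as of recent Colima versions, so double-check with colima start --help):

    colima start --vm-type vz --mount-type virtiofs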


> (Though you need to change the Colima settings to vz and virtiofs.)

I think I have just used the defaults. The difference was huge in regular use, e.g. a simple test of upgrading OS packages and timing it.


> but as neither supports Podman, both are not really relevant to the Podman discussion

FWIW Lima (upon which Colima was built) ships with a "boot me up a Podman" template: <https://github.com/lima-vm/lima/blob/v1.2.1/templates/podman...> and <https://github.com/lima-vm/lima/blob/v1.2.1/templates/podman...>

I can't think of any stellar reason why Colima couldn't also support it, since they even go out of their way to support Incus as a runtime, but I don't currently have the emotional energy to prosecute such a PR.
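
In the meantime, if you just want a Podman VM, something like this should work (template name as in the Lima repo linked above):

    limactl start template://podman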


It's more general than that, closer to WSL. I usually use Podman Desktop for container stuff, but I like OrbStack for managing Linux VMs. It has some really slick integrations and it performs very, very well.


If complex CI becomes indistinguishable from build systems, simple CI becomes indistinguishable from workflow engines. In an ideal world you would not need a CI product at all. The problem is that there is neither a great build system nor a great workflow engine.


Using "Beautifully and Elegant" on the website when everything is just obviously the most basic vibe coded sonnet 4 design is quite a statement.


And just as important: making it impossible, or very hard and annoying, to export and own your data.


Yes, I am happy I can export my data from Google, but boy is it annoying to do.


Those pricks throttled the download to 30 kbps. When I tried to download with aria, after a few failed attempts (not straightforward, of course), I got a message saying I can only download it 6 times and that I should send a new request.

This is evil.


I have downloaded my data with Google Takeout dozens of times without a single issue. Speed was very high (the maximum possible for my connection) and I never had a download error. I'm talking about multi-gigabyte exports of my email and my Drive.


Different experience for me: ~500 GB, so about 10 chunks of 50 GB (the largest chunk size) that had to be downloaded by hand because of their auth. When the download got interrupted I had maybe 4 more tries, might have been more, but after trying too many times the entire Takeout expired. Automating the process and using smaller chunks didn't work at the time because of their opaque API and its auth.

I feel like this has been made a shitty experience intentionally.


I have 900 GB in my account, and on my 500 Mbps connection it took forever to download, not because of my speed but because of theirs, and it just said 'connection failed' at 80% many, many times and asked me to log in again. It should be illegal. Not supporting plain wget -c (you can use it with a lot of trouble/hacks, and it's not reliable, which defeats the point) is just clearly done to annoy you into not doing it.


I had a similar issue downloading a large file (125 GB?) from Google Drive over the web.

I had to install the Drive client on a Windows laptop and download it through that.


Yes, I am sure this is a mustache-twirling power move by Google, and not a bug in your obscure 20-year-old HTTP utility.


Considering Google is evil, yes, I would expect this to be Google's fault.


I tried for a number of years after they added it, and my download always expired before I was able to complete it, since it didn't support resuming. Eventually I got locked out of my account, so I just lost all the data.

These days I think of every account as ephemeral; anything I don't have in git on my local machine will disappear one day.


Some companies somehow blatantly get away with not allowing any export at all.

For example, Amazon eero, the overpriced WiFi router that doesn't even work without phoning home and having an app installed on your phone. They had an outage about a year ago, and during said outage all your existing ad blocking stopped working too, even if you never rebooted, and even though said blocking is supposed to be performed locally. I think you can't even get the ad blocking unless you or your ISP pays for the special subscription. (I imagine the thing could have removed all local ad blocking settings and lists during the time it couldn't confirm you're still a paying customer, because their cloud was down?)

Does anyone know how exactly Amazon gets away with not providing data export for its eero product? I haven't seen Blink or Ring exports, either. The main Amazon.com does have an export, which contains some extensive data you may not think they collect, but it doesn't cover eero, Blink, or Ring.


> Does anyone know how exactly Amazon gets away with not providing data export for its eero product?

I checked eero.com. It seems info about the product, other than “it’s a secure WiFi router that doesn’t require users to manage it”, is in the videos, if it is on that site at all, but I couldn’t get the videos to play, so I may be wrong. But why would a WiFi router have personal data on the device?

It will have the username and password for your internet provider, but what else does it store?


It collects WiFi Radio Analytics (2.4GHz / 5GHz-Low / 5GHz-High frequency utilisation), Activity History (data usage by device, as well as "scan" and ad blocks by device).

For ad blocking and network control, it also has "Block & Allow Sites" with the blacklisted and whitelisted domain names, which you may have to use to block ads and also unblock some domains that stop working as a result of bogus entries in the ad block.

All of this information is stored in the cloud, but I found no way to export it in any way. I've actually contacted eero, asking for the export, and they've basically admitted that it's not supported.


If you share data locally, that's almost certainly over plain HTTP. Also, DNS is usually unencrypted.

So that's all the websites you visit, plus any data transmitted from your phone to your computer or Google TV or whatever the fuck.


I’m guessing Amazon could have info on their side about your eero. Without knowing more about the router’s cloud functionality it’s hard to say what exactly they would have.


We are in the upcoming golden age of browser automation.

This will stop being a problem.


This is what I HOPE, but it could play out in many different ways.


We didn't set out to hide our GDPR requests; we put them behind our Support/Legal button. But we got sued anyway, and we lost.

Now we have to offer "delete my data" and "request my data" as part of our main settings list. Result: we're flooded with requests. People are clicking the buttons just because they are there. For me it's not a big deal; I automate all the requests. But I still feel like this went too far.


> People are clicking the buttons just because they are there.

I think this isn't a very charitable opinion of why people click buttons.

> But, I still feel like this went too far.

Why?


Yeah, as long as there's e.g. a confirmation to prevent misclicks ("Are you sure you want to delete?"), I don't really see what the problem is.


I don’t know what business you work for, but what makes you sure users aren’t clicking the buttons because it’s what they want AND it’s convenient?


It's our human right to have real-time, machine-readable copies of the data on everything we do; it's no company's business to question or interfere. Unless it crashes your servers because trolls are trying to DoS you, it is really hard not to be angry at a statement like "this is going too far".


> People are clicking the buttons just because they are there.

The reasons why they click the buttons are utterly irrelevant to anyone except them.

Let them click the buttons. It's their right.

> But, I still feel like this went too far.

Not far enough. I think data should be a massive liability. It should actively cost you lots of money to know any fact at all about any person anywhere on the planet.

In other words, in an ideal world you would be scrambling to press that button on their behalf the second your business with them was concluded. "Can we please forget everything we know about you please?" and only their explicit affirmative consent would allow you to not delete their data.


At the moment, holding data about someone is not a significant recurring cost, but it is a liability in the form of a risk that could get you in serious trouble if you get something wrong. However, that particular business risk doesn't tend to be recognised by many, many organisations. It should be.


If they can afford to be ignorant of the risks, it's because the liability is not high enough. Gotta raise the liability until they start doing what we want them to do by default. Private information should be an existential risk for them. They should be deleting every last bit without even asking, not sucking up endless amounts of it without consent.


Users having basic, bare-bones functionality that all applications should support is "too far"?

If the user can create an account, they should be able to delete one. One is not harder or further than the other.

We just don't view it that way because we're all parasites who feed off the current status quo.


> Users having basic, bare-bones functionality that all applications should support is "too far"?

They were objecting to the idea that putting it behind the "support" button is a violation. If true, that's excessive in terms of mandating accessibility.


I would never file a support ticket to open an account. If you did that, your business would be under by the end of the week.

No, requiring actual application functionality isn't too far. For God's sake, just make normal software like a normal person. This should all be very intuitive.

Stop trying to game things, stop trying to maximize conversions and other bullshit metrics, stop trying to implement every dark pattern under the sun and just... Be normal. I promise you will comply without even trying.

And, bonus points, your software will be less shit. I know it doesn't feel that way right now, because most software is shit. You shouldn't aspire to be another turd floating around in the cesspool that is the modern web.


> I would never file a support ticket to

Well now we're deep into the realm of assumptions.

They said "behind our Support/Legal button", which to me sounds like it probably loads another normal page.

Though a GDPR request basically is a ticket.


Can we get the full story? I don't believe that's what happened, because the GDPR does not prescribe any specific avenue for requesting data. You're not required to have a button on your website at all; it's completely valid to accept and respond to requests by mail, but it's obviously much cheaper to offer automated data export.

