
Not everyone has a Costco membership or a Costco near them. There also doesn't seem to be any indication of a pre-arranged pricing agreement for Amazon like the one Costco has.

Amazon is simply letting Hyundai dealerships list car inventory on Amazon, Amazon isn't selling the cars.


The cost of a Costco membership is negligible compared to the cost of a car or even just the potential savings.


While I agree with what you are saying in principle, I do want to point out that there is a massive difference between anecdotal evidence and _research_. In a purely academic sense, it is not an unreasonable statement to say that there is no evidence. In contrast, in the opioid case there existed scientific evidence of the highly addictive nature of the drugs that was not just suppressed, it was outright _lied_ about by the pharmaceutical company behind the drug.

> Purdue Pharma created false advertising documents to provide doctors and patients illustrating that time-released OxyContin was less addictive than other immediate release alternatives. Furthermore, they sought out doctors who were more likely to prescribe opioids and encouraged them to prescribe OxyContin because it was safer. They did this because OxyContin quickly became a cash cow for the company. (https://oversight.house.gov/release/comer-purdue-pharma-and-...)

A degree of malfeasance in the same realm as Big Tobacco's denials of the risks and addictiveness of smoking:

https://www.cbsnews.com/news/big-tobacco-kept-cancer-risk-in...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2879177/

Although it could perhaps be considered worse, since it occurred more recently in a theoretically more highly regulated market than mid-1900s tobacco.


Both http2_max_concurrent_streams and keepalive_requests (the configuration parameters discussed in this article) are configuration parameters available in open source nginx:

http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_...

http://nginx.org/en/docs/http/ngx_http_core_module.html#keep...

So are limit_conn and limit_req:

https://nginx.org/en/docs/http/ngx_http_limit_conn_module.ht...

https://nginx.org/en/docs/http/ngx_http_limit_req_module.htm...

So it pertains to both NGINX and NGINX+.
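To make the point concrete, here is a minimal sketch of how those four directives might appear together in an open source nginx config. The values and zone names are mine, purely illustrative, not tuned recommendations:

```nginx
http {
    # Cap concurrent HTTP/2 streams per connection (nginx default: 128)
    http2_max_concurrent_streams 128;

    # Close a keepalive connection after this many requests (default: 1000)
    keepalive_requests 1000;

    # Shared-memory zones keyed on client address (names are illustrative)
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=reqs:10m rate=10r/s;

    server {
        listen 443 ssl http2;
        limit_conn perip 32;             # at most 32 connections per client IP
        limit_req  zone=reqs burst=20;   # throttle request rate per client IP
    }
}
```

All of these work in open source nginx; nothing here requires NGINX Plus.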


There is a difference between an application being innately vulnerable and a user configuration exposing a vulnerability.

Interestingly enough, HAProxy seems to have the same mitigation:

> Until HAProxy dips below the configured stream limit again, new stream creation remains pending—regular timeouts eventually apply and the stream is cut if the situation does not resolve itself. This can occur during an attack.

https://www.haproxy.com/blog/haproxy-is-not-affected-by-the-...

That is, if I read it correctly, the default configuration is safe and you can configure stream limits to ensure you are not vulnerable, but they are saying HAProxy is not vulnerable...at least in the title. Later on they soften the language:

> HAProxy remains resistant to HTTP/2 Rapid Reset


I think the important distinction is 'a user may plausibly have this non-default config' vs 'this config is so obscure almost nobody will be running it this way'.


I am not sure I understand how stream limit configuration differs meaningfully between two L4/L7 load balancers. In my mind, either the configuration of stream limits is a vulnerability for all L4/L7 load balancers that offer that configuration or it's not for any of them.

If one doesn't _offer_ configuration of stream limits and therefore is not susceptible to user misconfiguration, then I would get the distinction. But as I understand it both HAProxy and NGINX have the same configuration options which _could_ be vulnerable if configured poorly by the user. One is just putting a lot more positive spin on it.


Nginx and HAProxy work around the issue in different ways.

Nginx by default simply kills the entire connection after 1000 requests. With this attack, that's two packets. This severely limits its amplification and basically makes the bypass of the concurrent stream limit a moot point - unless you manually increased the requests-until-killed count.

HAProxy avoids the issue by deviating from the specification. You are supposed to only count active requests towards the concurrent stream limit and ignore cancelled ones, but HAProxy does count cancelled requests and only removes them from the stream count once their resources are fully released. In practice this means the attack isn't any worse than regular http/2 requests.

The protocol-level bug still exists, but in both cases it just can't be used to launch an attack anymore.
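The two accounting strategies above can be sketched in a toy model. Note this is my own illustration of the described behavior, not code from either project (neither server is written in Python, and the class names, methods, and limits are all hypothetical):

```python
class NginxStyleConnection:
    """Mitigation 1: kill the whole connection after a fixed number of
    requests (keepalive_requests), cancelled or not. The attacker must
    then pay for a fresh TCP + TLS handshake, limiting amplification."""

    def __init__(self, max_requests=1000):
        self.max_requests = max_requests
        self.requests_seen = 0
        self.closed = False

    def open_stream(self):
        if self.closed:
            return False
        self.requests_seen += 1
        if self.requests_seen >= self.max_requests:
            self.closed = True  # connection torn down after this request
        return True


class HAProxyStyleConnection:
    """Mitigation 2: count cancelled streams toward the concurrent-stream
    limit until their resources are fully released, so RST_STREAM does not
    immediately free a slot for the attacker."""

    def __init__(self, max_concurrent=100):
        self.max_concurrent = max_concurrent
        self.in_flight = 0  # includes cancelled-but-not-yet-released streams

    def open_stream(self):
        if self.in_flight >= self.max_concurrent:
            return False  # new stream creation stays pending
        self.in_flight += 1
        return True

    def cancel_stream(self):
        pass  # cancelling does NOT decrement in_flight

    def release_stream(self):
        self.in_flight -= 1  # only real cleanup frees a slot
```

In the HAProxy-style model, an attacker who opens and instantly cancels streams still hits the concurrent-stream ceiling, which is why the attack degrades into ordinary request load.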


Thanks for taking the time to explain the nuanced implementation difference.


I'm not saying there isn't a difference.

OP mentioned they didn't find Nginx listed on the CVE, and the reply said

>If you read the article, you'll see that the default configuration is not affected.

Which, in the context of OP's comment, implies that the CVE wouldn't be associated because the default config is not affected.

Hence my reply that CVEs don't care whether it's the default config or not. If there is a CVE associated with the program, there is a CVE associated with the program, rare config or not.


If you followed the source that is cited/linked in the quote you copied you would have found:

> The Messi Effect is real! Subscribers to #MLSSeasonPass on @AppleTV have more than doubled since Messi joined @InterMiamiCF. Also, Spanish language viewership on #MLSSeasonPass on @AppleTV has surpassed over 50% for Messi matches and continues to rise. How exciting for a truly global fan base!

https://twitter.com/Jorge__Mas/status/1689758782828556288


I mean, it wasn't too long ago that we had autopilot software from a major aerospace engineering company that would literally pitch planes into nosedives when only one of two redundant angle-of-attack sensors misbehaved or failed.

Also there seem to be quite a few train derailments as of late.

Oh and what about that apartment building that just...collapsed in Ohio?

Seems like there have been quite a few cases of safety regulatory failure in recent memory.


I think this comment is missing a key bit of context in that regionals/smaller banks were exempted from some of the regulations of Dodd-Frank.

https://www.npr.org/sections/thetwo-way/2018/05/22/613390275...

So it's not that we assume bigger banks can better manage risk; it's simply that we currently regulate them to manage risk better.


Isn't the answer then to require small banks to conform to the same rules as big banks? Instead of allowing them to fail and be acquired by big banks?


Well that's precisely what we were doing until 2018, when lobbyists and the Trump Administration carved out exceptions for banks with less than (or equal to) $250 Billion in assets (see link above). So a more pertinent question from my perspective is: "Whose brilliant idea was this anyway? /s"

But if "the answer" you are referring to is to prevent consolidation in banking, then I think it's not quite as cut-and-dry. Regulation, especially risk-management regulation, can act as a barrier to entry and create an environment that favors larger/established institutions, leading to market consolidation. As an oversimplification by a layman, it could look something like:

    - Higher regulation constrains profitability and raises fixed costs of running a bank (compliance burden).
    - This means that the amount of assets under control by a bank has to be higher in order for the bank to be profitable.
    - This higher floor reduces the likelihood new banks start (since the asset requirements to be profitable are higher).
    - Existing banks, in an effort to increase profits, acquire and merge with other banks.
These final two trends (fewer banks starting, more M&As) are what could drive market consolidation. In a lot of ways airlines are an example of this in a different industry.

I am definitely _not_ suggesting small banks should be exempted from Dodd-Frank like they were. I actually found the 2018 deregulation disturbing and disappointing. I am only trying to illustrate that even if 2018 hadn't happened, I don't know for sure if there would be more competition in the banking industry. (Only that the banks we do have would be less risky)


That would still result in too-big-to-fail banks, since smaller banks will find it harder to comply with onerous regulations. I'm talking about community banks and the like, which don't really do the fancy investment banking stuff.


I wouldn't set random_page_cost lower than seq_page_cost. It can cause the query planner to do wacky things (I learned the hard way). The documentation mentions it, but not as strongly as I think is warranted given how erratically my PostgreSQL cluster started behaving after I made that configuration change.

> Although the system will let you set random_page_cost to less than seq_page_cost, it is not physically sensible to do so. However, setting them equal makes sense if the database is entirely cached in RAM, since in that case there is no penalty for touching pages out of sequence. Also, in a heavily-cached database you should lower both values relative to the CPU parameters, since the cost of fetching a page already in RAM is much smaller than it would normally be.

https://www.postgresql.org/docs/current/runtime-config-query...

Lowering both values, though, is something I haven't done before but am now curious to try.
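For a database that is mostly cached in RAM, applying the documented advice might look something like this (the values are illustrative and should be benchmarked against your own workload, not copied):

```sql
-- Keep the two page costs equal for a heavily cached database,
-- and lower both relative to the CPU cost parameters.
-- Never set random_page_cost below seq_page_cost.
ALTER SYSTEM SET seq_page_cost = 0.5;
ALTER SYSTEM SET random_page_cost = 0.5;
SELECT pg_reload_conf();  -- both are reloadable without a restart
```

The planner's default seq_page_cost is 1.0 and random_page_cost is 4.0, so the key move for a cached database is shrinking that 4x gap rather than inverting it.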


I think I would rephrase this question as: why would I stop using emacs for VS Code and IntelliJ? Seriously, there is no feature they have that my emacs config doesn't (AFAICT after using IntelliJ and VS Code each for 4+ years). But I have the ability to work with emacs without ever touching my mouse. You may consider that only being "slightly" faster, but every time I use VS Code, I _feel_ slow because I know how fast I would have been in emacs.

Combine this with my tiling window manager and I pretty much only have to use my mouse to interact with web browsers. That's the killer feature that makes VS Code and Intellij _feel_ inferior to me.

Now I wouldn't go so far as to say that VS Code and IntelliJ are _actually_ inferior to Emacs. I think both are great in their own right, but after years of familiarity I have a fluency with Emacs (because of the keyboard-oriented interface) that I could never have with them.

To be fair, for someone without emacs fluency I would probably steer them towards VS Code and IntelliJ, as they are definitely more approachable. In fact, I have done exactly that for several of my friends and colleagues.


The no mouse thing is one of the best arguments I've heard. I'm not convinced it makes it worthwhile, but I'm willing to grant it's possible.


So interestingly enough, the Pangolin is the first laptop model I am aware of from System76 that isn't an outright Clevo chassis (via Sager). I don't know all the facts but I did find this interesting reddit comment:

https://www.reddit.com/r/System76/comments/10a3lqj/comment/j...

Which suggests there are pretty major changes to how they have previously done laptops:

1) The manufacturer is Emdoor(?) instead of Clevo. (I have never personally heard of Emdoor)

2) Final assembly is done in-house @ System76 instead of via Sager. That is, assembly for the Pangolin is done in Denver at the same place the Thelio desktops and Launch keyboards are made.

While not necessarily all in-house yet, both of these changes are a pretty massive difference so as a former owner of several System76 laptops, I am curious how much better the Pangolin is.

Edit: So after doing some googling, it seems like Emdoor is a Chinese manufacturer out of Shenzhen (https://www.emdoordigi.com/about.html?category_id=0). Personally, not sure moving from a Taiwanese manufacturer to a Chinese one is actually a good thing...but still curious nonetheless.


> Personally, not sure moving from a Taiwanese manufacturer to a Chinese one is actually a good thing...but still curious nonetheless.

As somebody working in a related space, _any_ diversification is a good thing simply because it forces System76 to redesign its processes to become less Clevo-specific.

Once you start, you're going to have a hard time finding an ODM that's willing to teach you the ropes _and_ accept that you're not a single-ODM shop _and_ deal with whatever fancies S76 brings up, so that limits options. (Any of these mean that S76 will be a high-maintenance customer for Emdoor, and these three more than compound in that way.)

I can't overemphasize how big of a step this is.


For what it's worth I used a Clevo (branded as Medion) laptop as my daily driver for about 6 years, while backpacking around the world. The thing was a tank and still works as my backup at about 9 years old now. It's been through rainforests and deserts and up mountains and definitely got damp in a few rainstorms. Some keys on the keyboard no longer work and the bearings on the fan are very worn out. But that thing is a tank.


It depends on how much impact they can have with the design. If they can have more impact on the Emdoor design, it's a good idea.

