
I'd love to see the results of mandating a random order dict impl at an actual company/org (but hate to be forced to participate). Hopefully you hired developers who really like to write sorting algos.

Swift (heavily used by Apple) has randomly ordered dictionaries for security:

> In particular, random seeding enables better protection against (accidental or deliberate) hash-flooding attacks

https://forums.swift.org/t/psa-the-stdlib-now-uses-randomly-...
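
For a quick illustration in Python rather than Swift: CPython has randomized str hashing for the same hash-flooding reason (on by default since 3.3), which you can see by running this twice:

    import os

    # Unless PYTHONHASHSEED is pinned, str hashes are seeded randomly per process.
    print("PYTHONHASHSEED =", os.environ.get("PYTHONHASHSEED", "<unset: random>"))
    print(hash("hello"), hash("world"))

    # Set iteration order depends on those hashes, so it can change between runs.
    # (Dicts, by contrast, have preserved insertion order since 3.7.)
    print(list({"alpha", "beta", "gamma", "delta"}))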


Perhaps not unrelated to why Python is the #1 most popular language while Swift is #22 https://www.tiobe.com/tiobe-index/

Swift isn’t popular because its Dictionary type uses randomly ordered keys?

It certainly could be a reason among many. Just look at the thread GP shared, containing multiple years' worth of users voicing frustration at the introduction of this behavior.

Probably the inference is YAGNI.

Well, that is how hash tables in Go work, so you wouldn't have to look that far.

Perl since 5.8.something has had the option of perturbing the hash function, so it is different from run to run. You can also set the seed to a given value in order to lock in the sequence.

In any case, it is not ordered. If you want that, you have to explicitly sort the keys of the hash.
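
Same idea sketched in Python rather than Perl: if iteration order matters, sort explicitly instead of relying on the container.

    scores = {"carol": 3, "alice": 1, "bob": 2}

    # Deterministic order comes from sorting, not from the hash table.
    for name in sorted(scores):
        print(name, scores[name])

    # Or sort by value instead of key:
    for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
        print(name, score)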


Great. Maybe GP will go a step farther and also mandate arrays that return elements in random order too. Relying on insertion order for any reason is for weaklings.

And then you're sunk the moment anyone else needs to run your code, or even if you just need to run your own code on another machine.

Never happened.

I salute you for never needing a new computer, ever.

I get a new machine most years from the business.

I suggest a systems administration course if you're having so much trouble with Python libs. It can help to know your way around the filesystem, how to use PATH, and so on.


Hey, good for you that you like doing things the hard, fragile way. Personally I'll stick with the natively supported python solution that was made part of the standard library precisely because the overwhelming majority of Python programmers find your approach unsatisfactory.

https://peps.python.org/pep-0405/
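
For anyone unfamiliar, this is roughly all it takes (a minimal sketch using only the stdlib; the ".venv" directory name is just a convention):

    # PEP 405 environments can be created from the CLI (python -m venv .venv)
    # or programmatically via the stdlib venv module:
    import venv

    venv.create(".venv", with_pip=True)
    # Activate it (e.g. "source .venv/bin/activate" on POSIX) and subsequent
    # pip installs go into .venv instead of the system Python.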


It's not hard at all, just a pip install away. Perhaps a rare uninstall later.

It sounds like you haven't read the full thread. It's common for younger developers to be slaves to "best practice" even in exceptional cases where it doesn't apply.

Appeal to authority is not a compelling argument either.


So you don't have the necessary credentials, and you still wouldn't be qualified to comment even if you did have them unless you had access to the internal data. But no worries, I'm sure you'd be OK getting surgery from a surgeon's son who never went to medical school nor read your chart.


I would take the advice of a surgeon's son, who is also somewhat active in the field, that something sounds fishy about an operation, and look further into it. That is very different from letting him perform the surgery.

There is incentive to play down accidents. I have no idea what happened here; I actually rather think it received publicity because falling into a nuclear reactor pool sounds way more dramatic than it is, but ... not my area. Still, I was happy to get arthurcolle's input.


There is also incentive for people to inflate their sense of importance by weighing in on topics they're not qualified on, especially if it's motivated by a sense of familial pride. You can see elsewhere on this thread that arthurcolle self-admits a lack of familiarity with basic interpretation of CPM.

Misinformation, whether ill-intentioned or not, does real and tangible harm to our society. Misinformation about the supposed dangers of nuclear power, as arthurcolle is spreading, is especially harmful because it forms the foundation of the biggest obstacle to safe, clean, cheap, and abundant energy that could radically improve our lives at the systemic level.


"Misinformation about the supposed dangers of nuclear power, as arthurcolle is spreading "

I maybe did not read all of it, but which misinformation is he spreading exactly?

(Follow-up: why are you in a position to judge that?)

As for misinformation in general, I happened to be born after Chernobyl, when the authorities in East Germany said all was fine. But the people got western television, which said: no, not fine, children may not go outside while the radioactive rain cloud is still there. So my immediate experience is rather of people downplaying the dangers.


https://www.forbes.com/sites/rogerpielke/2020/03/10/every-da...

> Every Day 10,000 People Die Due To Air Pollution From Fossil Fuels

> The NBER study found that “the switch from nuclear power to fossil fuel-fired production resulted in substantial increases in global and local air pollution emissions.” A key reason for the increased air pollution was that “lost nuclear production was replaced by electricity production from coal- and gas-fired sources in Germany as well as electricity imports from surrounding countries.”

> The study concluded that “the phase-out resulted in more than 1,100 additional deaths per year” due to excess mortality from the consequences of increased air pollution. Since 2011 that totals more than 10,000 deaths, far more than all deaths attributable to nuclear power in history.


Are you arguing about nuclear safety compared to fossil fuels with me? I was aware of those numbers, thank you.

But I asked for cases where arthurcolle was spreading misinformation, which is what you claimed and which is what I perceived as an unnecessary attack.


Perhaps you should finish reading the threads to discover his numerous self-admissions of limited knowledge and incorrect statements when confronted with people who cite sources

But I am not doing surgery. I am expressing skepticism at the "oh no it's all fine" from literally everyone reporting on this story


You're being unnecessarily attacked on what is largely a casual forum, where people make casual comments and speculation all of the time.

Further, your reasoning is biased towards safety (rather than risk), which seems completely sane.


Agreed. These violent reactions aren’t unusual for HN but they are unnecessary and acutely disappointing.


Skepticism and sarcasm are not violence.


Thanks for giving me the chance to clarify. You're right, of course. I was using it in the spirit of the phrase "violent disagreement" which is meant figuratively.

Off topic, but the idea that the “violence” of ideas, where the only thing in play is your point of view, is somehow equatable to physical violence, where physical integrity is at risk, is one of the least endearing features of the 21st century so far.

I cannot overstate how dangerous to human prosperity this false equivalence is. It is a first-tier ideological scourge that we entertain at great peril both to critical thought and the notion of objective truth itself.

On the other hand, it’s an excellent proxy to clarify that an idea, position, or sometimes even an entire ideology and its sycophants exist for entertainment purposes only and must not, on their own merits, be taken seriously.

Are we really so isolated from the brutality of nature to think that the inconvenient beating of a butterfly’s wings is the same category of experience as being disemboweled and eaten alive by a hungry beast?

Or is it that the whole ideological sham of the violence of ideas is merely a cowardice, a poverty of ingenuity, a plea for clemency by virtue of infantilism?

The pen, or the thought given flight, is mightier than the sword.

That does not make an idea a sword. It is in character, spirit, reach, and endurance a very different type of thing. A sword can be forged from an idea, but an idea will never spring forth from a blade.

Hell in a hand basket, get off my lawn, and uphill both ways to school. Lol.


Agree completely. I should’ve used a different term.


Yep you know better than the people who have the credentials you don't and the access to internal data you don't. I don't see what's holding you back from doing surgery, qualifications and context are no barrier to the application of your self-imagined expertise.


I don't claim to know better. But restarting a $1.5B plant after 2 years of inactivity, and having a worker fall into a vat of radioactive water and still read 300 CPM after a decontamination procedure, is not normal.


Phrases such as "massive red flag" and "bureaucratic nonsense" were claims that you knew better.

Who claimed the event was normal? A worker falling into non-contaminated water would not be normal. Many things are not normal and not emergencies. False dichotomy and straw man are logical fallacies.

Were the plant cost and status meant to support your claim that 300 counts per minute was a red flag? They appeared irrelevant.


That'd be a very interesting statement if you were qualified to make it


What makes one qualified to make a statement?

Are you qualified to make the statement I'm replying to?


In technical fields: Accredited formal education, professional certification(s), and/or recognition from other experts in the field who have the same.


How do I know you're qualified to make that statement?


If I'm not, then we're not grounded in the same consensus-driven objective reality, making this conversation meaningless, and therefore not worth your time to reply further.


[flagged]


I care enough that I would trust the assessment of their health and condition only to qualified professionals with access to the relevant information, just as I would for anyone else I care about.


Do the people who have control of the information have an incentive to lie?


Do people on the Internet have an incentive to baselessly speculate in order to indulge their own Dunning-Krugerized delusions of grandeur?


> I don't claim to know better.

You very much do, if you're calling into question the statements in the article that it's fine.


I appreciate the conceptual analogy, but that's not really HATEOAS. HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.


The Web is not only true HATEOAS; it is in fact the motivating example for HATEOAS. Roy Fielding's paper that introduced the concept is exactly about the web: REST and HATEOAS are the architecture patterns that he introduces primarily to guide the design of HTTP for the WWW.

The concept of a HATEOAS API is also very simple: the API is defined by a communication protocol, 1 endpoint, and a series of well-defined media types. For a website, the protocol is HTTP, that 1 endpoint is /index.html, and the media types are text/html, application/javascript, image/jpeg, application/json and all of the others.

The purpose of this system is to allow the creation of clients and servers completely independently of each other, and to allow the protocols to evolve independently in subsets of clients and servers without losing interoperability. This is perfectly achieved on the web, to an almost incredible degree. There has never been, at least not in the last decades, a bug where, say, Firefox can't correctly display pages served by Microsoft IIS: every browser really works with every web server, and no browser or server dev even feels a great need to explicitly test against the others.
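
To make the "1 endpoint plus media types" idea concrete, here is a toy sketch of such a generic client in Python (the entry point URL and the "_links" shape are invented, and it only handles JSON for brevity):

    import json
    import urllib.request

    ENTRY_POINT = "https://api.example.com/"  # the one URL the client knows a priori

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            ctype = resp.headers.get_content_type()
            body = resp.read()
        # Dispatch on the advertised media type, not on prior knowledge of the server.
        if ctype == "application/json":
            return json.loads(body)
        raise ValueError("no handler for media type " + ctype)

    root = fetch(ENTRY_POINT)
    # Everything else is discovered from the representation itself,
    # e.g. a HAL-style "_links" section.
    for rel, link in root.get("_links", {}).items():
        print(rel, "->", link["href"])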


It's a broader definition of HATEOAS. A stricter interpretation with practical, real-world benefits is a RESTful API definition that is fully self-contained, which the client can get in a single request from the server and from which it can construct the presentation layer in whole, with no further information except server responses in the same format. Or, slightly less strictly, a system where the server procedurally generates the presentation layer from the same API definition, rather than requiring separate frontend code for the client.


It is the original definition from Roy Fielding's paper. Arguably, you are talking about a more specific notion than the full breadth of what the HATEOAS concept was meant to inform.

The point of HATEOAS is to inform the architecture of any system that requires numerous clients and servers to interoperate with little ability for direct cooperation; and where you also need the ability to evolve this interaction in the longer term with the same constraint of no direct cooperation. As the dissertation explains, HATEOAS was used to guide specific fixes to correct mistakes in the HTTP/1.0 standard that limited the ability to achieve this goal for the WWW.


> HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.

Browsers can alter a webpage with your chosen CSS, interactively read webpages out loud to you, or, as is the case with all the new AI browsers, provide LLM powered "answers" about a page's contents. These are all recontextualizations made possible by the universal HATEOAS interface of HTML.


Altering the presentation layer is not the same thing as deriving it from a semantic API definition.


Altering the presentation layer is possible precisely because HTML is a semantic API definition: one broad enough to enable self-description across a variety of domains, but specific enough that those applications can still be re-contextualized according to the user's needs and preferences.


Your point would be much stronger if all web forms were served in pure HTML and not 95% created by JS SPAs.


I think the web itself would be stronger if it was served in pure HTML and not 95% created by JS SPAs.


That's a little picky, maybe it's HATEOAS + a little extra presentation sauce (the hottest HATEOAS extension!)


It's not. The whole point of HATEOAS is that the presentation can be entirely derived from the API definition, full stop.


That is just wrong.

https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...

The server MUST be stateless, the client MAY be stateful. You can't get ETags and stuff like that without a stateful client.


Deriving a presentation layer from an API definition has no bearing on whether the client has to be stateful or not. The key difference for 'true' HATEOAS is that the API schema is sufficiently descriptive that the client does not need to request any presentation layer; arguably not even HTML, but definitely not CSS or JavaScript.

https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...

> any concept that might be the target of an author's hypertext reference must fit within the definition of a resource


Dude, he literally mentions Java Applets as an example (they were popular back then; if it were written today it would have been JavaScript). It's all there. Section 5.1.7.

It's an optional constraint. It's valid for CSS, JavaScript and any kind of media type that is negotiable.

> resource: the intended conceptual target of a hypertext reference

> representation: HTML document, JPEG image

A resource is abstract. You always negotiate it, and receive a representation with a specific type. It's like an interface.

Therefore, `/style.css` is a resource. You can negotiate with clients if that resource is acceptable (using the Accept header).

"Presentation layer" is not even a concept for REST. You're trying to map framework-related ideas to REST, bumping into an impedance mismatch, and not realizing that the issue is in that mismatch, not REST itself.

REST is not responsible for people trying to make anemic APIs. They do it out of some sense of purity, but the demands do not come from HATEOAS. They come from other choices the designer made.


I will concede the thrust of my argument probably does not fully align with Fielding's academic definition, so thank you for pointing me to that and explaining it a bit.

I'm realizing/remembering now that our internal working group's concept of HATEOAS was, apparently, much stricter to the point of being arguably divergent from Fielding's. For us "HATEOAS" became a flag in the ground for defining RESTful(ish) API schemas from which a user interface could be unambiguously derived and presented, in full with 100% functionality, with no HTML/CSS/JS, or at least only completely generic components and none specific to the particular schema.
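
Roughly the kind of thing we meant, for the curious (a toy Python sketch; the field names and structure are invented, and any real system would be richer):

    # A self-describing resource as the server might return it...
    resource = {
        "title": "Order 42",
        "properties": {"status": "shipped", "total": "19.99 USD"},
        "actions": [
            {"name": "cancel", "method": "POST", "href": "/orders/42/cancel"},
            {"name": "refund", "method": "POST", "href": "/orders/42/refund"},
        ],
    }

    # ...and a completely generic renderer that knows nothing about orders.
    def render(res):
        print(res["title"])
        for key, value in res["properties"].items():
            print("  %s: %s" % (key, value))
        for action in res["actions"]:
            print("  [button] %s -> %s %s"
                  % (action["name"], action["method"], action["href"]))

    render(resource)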


It happens.

"Schema" is also foreign to REST. That is also a requirement coming from somewhere else.

You're probably coming from a post-GraphQL generation. They introduced this idea of sharing a schema, and influenced a lot of people. That is not, however, a requirement for REST.

State is the important thing. It's in the name, right? Hypermedia as the engine of application state. Not application schema.

It's much simpler than it seems. I can give a common example of a mistake:

GET /account/12345/balance <- Stateless, good (an ID represents the resource, unambiguous URI for that thing)

GET /my/balance <- Stateful, bad (depends on application knowing who's logged in)

In the second example, the concept of resource is being corrupted. It means something to some users, and something else to others, depending on state.

In the first example, the hypermedia drives the state. It's in the link (but it can be in form data, or negotiation, for example, as long as it is stateless).

There is a little bit more to it, and it goes beyond URI design, but that's the gist of it.
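
If it helps, the same contrast as toy Python handlers (a framework-free sketch; names and data are made up):

    BALANCES = {"12345": 100.0}

    def get_account_balance(account_id):
        # GET /account/{account_id}/balance -- everything needed to resolve
        # the resource is in the request itself.
        return BALANCES[account_id]

    def get_my_balance(session):
        # GET /my/balance -- the same URI means different things depending on
        # server-side session state, which is the part statelessness forbids.
        return BALANCES[session["logged_in_account"]]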

It's really simple and not as academic as it seems.

Fielding's work is more a historical formalisation where he derives this notion from first principles. He kind of proves that this is a great style for networking architectures. If you read it, you understand how it can be performant, scalable, fast, etc, by principle. Most of the dissertation is just that.


Yes, which is exactly true of the Web. There is no aspect of a web page that is not derived from the HTML+JS+CSS files served by a server.


...which are a presentation layer and not a semantic, RESTful API definition.


No, they are a semantic layer for the browser-server communication. They encapsulate human-readable content in a machine interpretable definition.


From what I read on the wiki, I'm not sure what to think anymore - it does at least sound in line with the opinion that current websites are actually HATEOAS.

I guess someone interested would have to read the original work by Roy (who seems to have come up with the term) to find out which opinion is true


I worked on frontend projects and API designs directly related to trying to achieve HATEOAS, in a general, practical sense, for years. Browsing the modern web is not it.


I think you are confusing the browser with the web page. You probably think that the Javascript code executed by your browser is part of the "client" in the REST architecture - which is simply not what we're talking about. When analyzing the WWW, the REST API interface is the interface between the web browser and the web server, i.e. the interface between, say, Safari and Apache. The web browser accesses a single endpoint on the server with no prior knowledge of what that endpoint represents, downloads a file from the server, analyzes the Content-Type, and can show the user what the server intends to show based on that Content-Type. The fact that one of these content types is a language for running server-controlled code doesn't influence this one bit.

The only thing that would make the web not conform to HATEOAS would be if browsers had to have code that's specific to, say, google.com, or maybe to Apache servers. The only example of anything like this on the modern web is the special log-in integrations that Microsoft and Google added for their own web properties - that is indeed a break of the HATEOAS paradigm.


I'm not confusing it. I was heavily motivated by business goals to find a general solution for HATEOAS-ifying API definitions. And yes, a web page, implemented in HTML/CSS/JS, is a facsimile for it in a certain sense, but it's not a self-contained RESTful API definition.


Again, you're talking about a particular web page, when I'm talking about the entire World Wide Web. The API of the WWW is indeed a RESTful API, driven entirely by hyperlinks. You can consider the WWW as a single service in this sense, where there is a single API, and your browser is a client of that service. The API of this service is described in the HTTP RFCs and the WHATWG living standard for HTML, and the ECMAScript standard.

Say I as a user want to read the latest news stories of the day in the NYT. I tell my browser to access the NYT website root address, and then it contacts the server and discovers all necessary information for achieving this task on its own. It may choose to present this information as a graphical web page, or as a stream of sound, all without knowing anything about the NYT web site a priori.


The solution is to use the appropriate tool for the job. If you're locked in to highly crusty legacy software, it's inevitably going to require workarounds. There are good technical reasons why arbitrary-size single-part file uploads are now considered an anti-pattern. If you must support them, then don't be shocked if you wind up needing EC2 or another lower-level service as a point of ingress into your otherwise-serverless ecosystem.

If we want to treat the architectural peculiarities of GP's stack as an indictment of serverless in general, then we could just as well point to the limitations of running LAMP on a single machine as an indictment of servers in general (which obviously would be silly, since LAMP is still useful for some applications, as are bare metal servers).


I'm sure, then, that you must have some alternate means of keeping abreast of current events so that your public discourse is grounded in a shared understanding of objective reality. Care to share what those are?


I don't. I just use the clickbait whores because that's all I've got.


https://apnews.com/ is a good place to start. In the meantime, perhaps you should refrain from opinionated commenting on current events if your opinion is self-admittedly derived from "clickbait whores".


3 * 0 = 0.

Checkmate, aitheists.


You probably don't need Redis until you have thousands of requests per minute, nevermind per day.


I'd go further and even say per second! Actually PG can still handle it; the main problem is that it has a more complex runtime that can spike. Backups? Background jobs doing heavy writes? Replication? Vacuum? These can cause multisecond slowdowns, which may be undesirable depending on your SLA. But otherwise it would be fine.
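
If anyone wants the flavor of "just use Postgres" here, a rough sketch (assumes psycopg2 and a local database; the table name and DSN are made up) - an unlogged table is one common stand-in for a Redis-style cache:

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        # UNLOGGED skips WAL, trading durability for write speed - fine for a cache.
        cur.execute("""
            CREATE UNLOGGED TABLE IF NOT EXISTS kv_cache (
                key        text PRIMARY KEY,
                value      jsonb,
                expires_at timestamptz
            )
        """)
        cur.execute(
            "INSERT INTO kv_cache (key, value, expires_at) "
            "VALUES (%s, %s, now() + interval '5 minutes') "
            "ON CONFLICT (key) DO UPDATE "
            "SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at",
            ("user:42", '{"name": "example"}'),
        )
        cur.execute(
            "SELECT value FROM kv_cache WHERE key = %s AND expires_at > now()",
            ("user:42",),
        )
        print(cur.fetchone())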


Agreed. People talk up Claude but every time I try it I wind up coming back to Gemini fairly quickly. And it's good enough at coding to be acceptably close to Claude as well IMO.


> Leaving aside the pendatic "you can't be a multiple smaller than another object"

Feel free to not leave this out, it's a pet peeve of mine. Thank you for the moment of catharsis.


Can you explain this to me? Trying to understand but can’t haha.


Grandparent comment should have said "1/5th the size" instead of 5x smaller.


Oddly we all knew what he meant. Huh.


How small are you? How small are you multiplied by 5?

