Hacker News | kuang_eleven's comments

"correct" is certainly not the right way to put that. Inheritance and composition here are both fully valid methods for modeling the relationship, and the decision to use either should be dependent on how the models are being used and the expectation for future extension.
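To make the distinction concrete, here's a minimal Python sketch (a hypothetical Stack example, not anything from the original discussion): inheritance pulls in the entire parent API, while composition exposes only what you choose.

```python
# Inheritance: Stack *is a* list. It inherits the full list API,
# including methods (insert, __setitem__, sort) that violate the
# stack contract -- fine if you want that extensibility, risky if not.
class InheritedStack(list):
    def push(self, item):
        self.append(item)

# Composition: Stack *has a* list. Only the intended operations are
# exposed, at the cost of writing the delegation by hand.
class ComposedStack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = ComposedStack()
s.push(1)
s.push(2)
assert s.pop() == 2
```

Which one is "correct" depends on exactly what the comment says: how the model is used and how you expect it to grow.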


I've switched almost entirely to ebooks for fiction, but I will only buy books that fit the following criteria:

1. DRM-free

2. No more expensive than the cheapest suggested retail price

3. Available in a generic format (EPUB, preferably)

So far, I've had excellent luck with Baen books, decent luck with Tor books, and miserable luck with just about every other publisher. Still, the joke's on them: I will just pirate if I can't find a version that fits my criteria.


Baen used to give away huge portions of their catalog in the CDs included with their hardbacks. Buying the latest in the series meant a good chance the rest of the series was on the CD. Even now they're the kings of available formats, even if they're not quite as high quality as other ebook providers. And their prices are so cheap (most books are $10 new, less as they age) I have to wonder how they and the authors make money.


With new books, selling an ebook means saving all of the production costs of a physical book. If you're buying from Baen directly, they also save all the merchandising costs, too.

With old books, they're competing with used books, which are most often very inexpensive, so they may as well give you an ebook for free or at low cost; if you like it, hopefully you'll buy the author's future books when they're new.


The joke is of course also on the authors who will not be compensated by you for reading their work.


Then they shouldn't use a DRM publisher.


You think that because of your beliefs you should be able both to enjoy somebody’s work and not pay them? Boycott them if you want - that’s absolutely fair. But enjoying their work anyway, and just not paying, is unprincipled and cheap. And arguing that it’s their fault that you’re stiffing them is awfully convenient.


By this logic, buying paperback books second hand is also unprincipled and cheap, no?

Which is a nice segue into a related topic, that by instituting DRM, publishers have removed the possibility for consumers to buy a book, read it once, and, knowing they will not want to read it again, decide to resell it for some of the value so they can buy more books.

None of this makes piracy ethical. But the whole system is rife with unethical practices; participating in almost any way is a quandary. The choice of simply not consuming literature and technical knowledge with the benefits of modern technology is not a perfect solution either, for the individual or for society.


Yeah that’s an interesting angle. I don’t think buying secondhand books is cheap, because you are paying _someone_. Good question as to the morality of it re: author incomes. Paying no one and still getting whatever you want is what’s cheap.


At least somebody bought that book first. And before you say something like, "Wull, somebody had to buy a DRM book and strip the DRM out," think about the economies of scale we're talking about. The last thing is, buying used books is legal, which does put a dent in the "unprincipled" part of your argument.


Do you really mean to imply absolute adherence to the law, even unenforced laws, is a requirement to be a principled person? Does that mean all laws in all countries or are the laws of your country the most principled ones?

In some US cities it is literally illegal to feed the homeless. In every US city it is illegal to go one mile over the speed limit. In some countries it is illegal to not turn in your neighbors for being gay.

Be careful with confusing legality with ethics.


Nope I'm not implying that, but you certainly inferred it. Legality is one marker you can use when deciding ethics, especially in a democratic country. Also we're talking about the permissive side of the law (reselling used books), not the restrictive side (not being allowed to feed homeless people).

Maybe: "In some US cities it is literally legal to apply eminent domain to kick people out of their houses to enlarge a freeway" would be a better slippery slope, but that's still a totally different power dynamic.

So yeah I dunno. Try to come up with a better analogy, but definitely keep to your slippery slope. It'll convince somebody.


The onus is on you to argue why, in the case of piracy, breaking the law is moral. With feeding the homeless etc the argument is clear.


Easy. DRM restricts freedom and privacy in ways I find intolerable and will not financially support. Borrowing, piracy, and buying first or secondhand paper books does not have these problems.


Can you argue this position using a consequentialist utilitarianism system?


Sure, because only the most trivial forms of utilitarianism are act-by-act. A rule utilitarianism could easily require us to pay one another for things, even things we could just take, to maximise overall utility. If you think it needs to be act-by-act, you prove it. But start with why consequentialism is a better ethical system than any other.


Yeah those filthy rich authors with all their choices just going straight for the DRM so they can control what you read! Stick it to 'em, greedy bastards.


I feel like this article is dramatically underselling the Lend-Lease program.

But, more to the point, it never actually addresses why the French changed their minds!


I think many underestimate the Allies' bombing campaign, which destroyed Germany's industrial complex, oil industry, and air force (75% of aircraft were destroyed by the Allies, many on their airbases).


This is true, and not widely cited because the Allied bombing of civilian areas was horrific, dwarfing the suffering inflicted by the two atom bombs.

WWII was awful.


It also dramatically undersells the North African campaign, which tied up Axis logistics and what little strategic lift capacity they had; at some points 80-90% of their long-range aviation and, most importantly, fuel was dedicated to trying to win Africa.

Both Germany and Italy expended significant resources in Africa and lost a significant amount of personnel and, more importantly, materiel over the three years it lasted.


I can only speculate that it's a combination of several factors: 1. older French people told stories of how Americans freed their cities in France (because the US was on Germany's western front); 2. the USSR had become the big bad evil with the Cold War; 3. US and French movies mostly focus on the fighting the US/France did.


Yeah, this article's awful. I was expecting, you know, coverage of the aforementioned campaign. Instead it just notes that French attitudes changed, then attempts to refute that changed attitude. The "why", which the headline implies will be the focus, isn't just not the focus, it's absent.


> it never actually addresses why the French changed their minds!

I'll give it a try, as a French person.

* Our exiled government was located in the UK, which has strong ties to the US

* France never saw any Russian troops, and a big event of the war was the Normandy landings

* We have the same societal model. Even though state participation in the economy is among the highest in the world (which would make us more communist than Russia or China), our system is deeply rooted in property and capitalism. We do have the "American Dream" of an individual making it to the top, rather than China's "Harmonious Society"[1] model. We love democracy.

* France is in NATO, and Russia was seen as menacing. When France decided to get the bomb, here is what De Gaulle had to say: "Within ten years, we shall have the means to kill 80 million Russians. I truly believe that one does not light-heartedly attack people who are able to kill 80 million Russians, even if one can kill 800 million French, that is if there were 800 million French"

* France is culturally closer to the US

* France does a lot more business with the US, and benefited from the Marshall Plan.

* The Hollywood machine has won. Movie-making and distribution are expensive, and this might be a sector where the moats are deep and the winner takes all. On top of that, the US Army has an extensive movie sponsorship program, where they won't give money but they'll happily grant access to an aircraft carrier. Also, the Marshall Plan mandated that at least 30% of movie screenings had to come from the US.

[1] https://en.wikipedia.org/wiki/Harmonious_Society


As a French person, I upvote :) Very good summary.

One thing though about the cultural closeness. It really depends on the generation.

For young people, Russia is an unknown land they do not hear about (well, until the war). They are closer to US culture, but not that specifically - I think it is difficult to speak of a culture in their case; it is very international and oriented toward the internet.

Older folks were actually quite close to Russian culture, at least the idealized one. Not that much to the US one.


At least for Netrunner, it was only 3x for the base set, and 2x would almost certainly get you enough. All the additional sets after the base set came with a full playset of 3x copies of each card.

Sadly, Netrunner is dead and the license lost, so not a lot of hope for new, official cards any time soon.


Yes, I glossed over the core set 2x problem

The community did pick up Netrunner: https://nullsignal.games/


In a team-based isometric RPG like BG3 you kinda have to be; the characters are dime-sized and anything less flashy just wouldn't read. It's like stage acting: you have to overexaggerate so that the audience can actually see what's going on from a distance.


That's an interesting perspective; I hadn't thought of it that way. Even so, the fact that everything is so overwhelming also adds to that problem in a way. Can't see the wood for the trees, so to speak.


Because it honestly doesn't matter most of the time.

In the majority of use cases, your runtime is dominated by I/O, and for the remaining use cases, you either have low-level functions written in other languages wrapped in Python (numpy, etc.) or genuinely have a case Python is a terrible fit for (e.g. low-level graphics programming or embedded).

Why bother making a new variant language with limitations and no real benefit?
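The "low-level functions wrapped in Python" point can be shown with just the standard library: the builtin sum() loop runs in C, sidestepping the bytecode dispatcher entirely. A rough micro-benchmark sketch (absolute numbers will vary by machine):

```python
import timeit

n = 100_000
setup = f"data = list(range({n}))"

# Interpreted loop: every iteration goes through the bytecode dispatcher.
py_loop = timeit.timeit("total = 0\nfor x in data:\n    total += x",
                        setup=setup, number=50)

# Builtin sum() is implemented in C; the loop runs outside the interpreter.
c_sum = timeit.timeit("sum(data)", setup=setup, number=50)

print(f"python loop: {py_loop:.3f}s  builtin sum: {c_sum:.3f}s")
```

The same pattern, scaled up, is why numpy-style wrappers make interpreter overhead mostly irrelevant for numeric workloads.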


This perspective is a common one, but it lacks credibility.

https://twitter.com/id_aa_carmack/status/1503844580474687493...


Look, Carmack is a genius in his own corner, but he is taking that quote vastly out of context. The actual linked article[1] is quite fascinating, and does point to overhead costs as being a potential bottleneck, but that specific quote is more to do with GPUs being faster than CPUs rather than anything about Python in particular.

More specifically, overhead (Python + PyTorch in this case) is often a bottleneck when tensors are comparatively small. It also claims that overhead largely doesn't scale with problem size, so the overhead would only matter when running very small tensor operations in PyTorch under a very tight latency requirement. This is rare in practice, but if it does happen to occur, then sure, that's a good reason not to use Python as-is!

1. https://horace.io/brrr_intro.html


You’ve posted this multiple times in this thread, and not once has it been relevant to the point being made. You are sticking your fingers in your ears and deferring to a contextless tweet by a celebrity.


If you want somebody to engage in a serious discussion, insulting them is not the way to go.

The post is highly relevant. Next time, if you don't understand why, just ask.


His code has had 16.6ms to execute since before a lot of people here had been born. Of course Python is hopeless in his domain. Its creator and development team will be the first to admit this.


John Carmack is hardly an unbiased source.

In any case, if your program is waiting on network or file I/O, who cares whether the CPU could have executed one FLOP's worth of bytecode or 9.75 million FLOPs worth of native instructions in the meantime?


It's trivial to prove that this is true for most software. Luckily, modern OSes are able to measure and report various performance stats for processes.

You can open some software such as htop right now, and it will show how much CPU time each process on your system has actually used. On my system the vast majority of processes spend the majority of their time doing nothing.

Is it true for all software? Of course not! Something like my compositor for example spends a lot of time doing software compositing which is fairly expensive, and it shows quite clearly in the stats that this is true.
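The same measurement is easy to reproduce in a few lines without htop, simulating the I/O wait with time.sleep in place of a real socket or disk read:

```python
import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

# Simulate an I/O-bound workload: the process sleeps (as it would
# while blocked on a socket or disk) instead of burning CPU.
time.sleep(0.5)

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# Wall-clock time is ~0.5s, but CPU time is near zero -- exactly the
# idle pattern htop shows for most background processes.
print(f"wall: {wall:.2f}s  cpu: {cpu:.4f}s")
```

An I/O-bound program looks just like this at scale: mostly wall time, barely any CPU time.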


The "vast majority of software" is now defined as "processes that happen to run in the background on chlorion's machine?" That reasoning is not sound.


I would add high-volume parsing / text processing to the list of bad fits for Python, although I'm not sure if there are native extensions for the different use cases?


Quite possibly; do you specifically mean NLP work? I'll admit it's not something I work in myself. spaCy seems to be the go-to high-performance NLP library, and does appear to use C under the hood, but I couldn't say how it performs compared to other languages.


I had SAX-style parsing of XML and XSL transformation as concrete use cases in mind, because that happened to be what I worked with. I believe I went with Node.js at the time, which had a library that was much easier to work with than what was common for Python. Although, I mostly used Microsoft's or Saxon's XSLT processors overall in that job.

Another use case was parsing proprietary, text-based file formats with ancient encodings. I believe I did use Python for that as there wasn't that much data to convert anyway and it just worked.
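For reference, a minimal sketch of SAX-style parsing with Python's standard library, which streams through the document rather than building a tree (toy document, not the actual formats mentioned above):

```python
import xml.sax
from io import StringIO

class TitleCounter(xml.sax.ContentHandler):
    """Count <title> elements as events stream past,
    without ever holding a full DOM in memory."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        if name == "title":
            self.count += 1

doc = StringIO("<books><title>A</title><title>B</title></books>")
handler = TitleCounter()
xml.sax.parse(doc, handler)
print(handler.count)  # 2
```

Whether this keeps up with Node.js or Saxon on large inputs is a separate question, but the stdlib support has been there for a long time.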


I can't say I see the point of this, unless you have a criminally unresponsive backend dev team.

Any half-decent backend API will offer parameters to limit the response of an endpoint, from basic pagination to filtering what info is returned per unit. What's the use of the extra complexity of a "BFF" if those calls can be crafted on the fly?

And to be clear, I am not suggesting that a custom endpoint be crafted for every call that gets made; that just seems like a strawman the article is positing. Rather, calls should specify what info they need.


The article doesn't do a great job at explaining that this isn't always just filtering, sometimes it's aggregation too.

A mobile client may need data points to display a single page that require calling 20 different APIs. Even if every single backend offered options for filtering as efficiently as possible, you may still need an aggregation service to bundle those 20 calls up into a single (or small set) of service calls to save on round-trip time.
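A toy sketch of that aggregation layer, with asyncio stand-ins for the backend calls (service names and latencies are made up for illustration):

```python
import asyncio

# Hypothetical backend calls -- stand-ins for the ~20 per-page APIs
# (profile, notifications, feed, ...), each with its own latency.
async def fetch(service: str, latency: float) -> dict:
    await asyncio.sleep(latency)  # simulated in-datacenter round-trip
    return {service: f"data from {service}"}

async def page_endpoint() -> dict:
    # The BFF fans out to all backends concurrently and returns one
    # merged payload, so the mobile client pays a single round-trip.
    results = await asyncio.gather(
        fetch("profile", 0.05),
        fetch("notifications", 0.08),
        fetch("feed", 0.10),
    )
    merged = {}
    for r in results:
        merged.update(r)
    return merged

payload = asyncio.run(page_endpoint())
print(sorted(payload))  # ['feed', 'notifications', 'profile']
```

Because the fan-out is concurrent, the whole aggregation takes roughly as long as the slowest backend call, not the sum of all of them.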


You still have to aggregate somewhere. You can do it on the client or the frontend backend, it still has to get done. In the case of the latter we’re adding one extra hop before the client gets their data.

This pattern is advocating for reduced technical performance to accommodate organizational complexity, which I think the parent finds odd. You either have the client call 20 service/data?client_type=ios or you have the frontend backend call 20 different service/data?client_type=ios (after the client called)


> In the case of [backend for frontend] we’re adding one extra hop before the client gets their data.

> You either have the client call 20 service/data?client_type=ios or you have the frontend backend call 20 different service/data?client_type=ios

The article touches on this point, and it mirrors what I've seen as well. The time from client -> backend can be significant. For reasons completely outside of your control.

By using this pattern, you have 1 slow hop that's outside of your control followed by 20 hops that are in your control. You could decide to implement caching a certain way, batch API calls efficiently, etc.

You could do that on the frontend as well, but I've found it more complex in practice.

Also a note: I'm not really a BFF advocate or anything, just pointing out the network hops aren't equal. I did a spike on a BFF server implemented with GraphQL and it looked really promising.


You won't necessarily have to have ?client_type=xyz params on your endpoints if the BFF can do the filtering, so it saves having to build out all sorts of complexity in each backend service to write custom filtering logic. Of course, you'll pay the price in serialization time and data volume to transmit to the BFF, but that's negligible compared to the RTT of a mobile client.

I'd much rather issue 20 requests across a data center with sub-millisecond latency and pooled connections than try to make 20 requests from a spotty mobile network that's prone to all sorts of transmission loss and delays, even with multiplexing.
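Back-of-the-envelope arithmetic makes the asymmetry obvious (the latency figures are assumed round numbers, not measurements):

```python
# Assumed latencies: ~100 ms per mobile round-trip, ~1 ms inside the DC.
MOBILE_RTT_MS = 100
DC_RTT_MS = 1
N_CALLS = 20

# Client aggregates: every one of the 20 calls pays mobile-network latency
# (worst case, sequential).
client_side = N_CALLS * MOBILE_RTT_MS

# BFF aggregates: one mobile round-trip, then 20 in-datacenter hops --
# even done sequentially they barely register.
bff_side = MOBILE_RTT_MS + N_CALLS * DC_RTT_MS

print(client_side, bff_side)  # 2000 120
```

Client-side multiplexing narrows the gap, but the mobile leg stays the dominant term either way.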


> You still have to aggregate somewhere.

Tbh, I'm not entirely sold on this - although I see this (server-side aggregation across data sources) as the main idea behind GraphQL. So it seems like it belongs in your GraphQL proxy (which can proxy GraphQL, REST, SOAP endpoints - maybe even databases).

But for the "somewhere" part - consider that your servers might be on a 10 Gbps interconnect (and on a 1 to 10 Gbps interconnect to external servers) - while your client might be on a 10 Mbps link - over bigger distances (higher latency).

Aggregating on client could be much slower because of the round-trip being much slower.

In addition, you might be able to do some server-side caching of queries that are popular across clients.


I agree with your assessment here, but one additional benefit is the capability to iterate faster on the backend. You have control over _where_ the aggregated data is coming from without waiting months for users to update their mobile app so that it sends requests to a new service, for example.


The article implicitly assumes that you have multiple backend teams and you need to combine the results from different services that belong to different teams. The services having interdependencies would lead to a giant ball of mud, so you need a service in front doing that for you. Now if you also have multiple frontends with different requirements, who is going to take responsibility in the central service?

The architecture really only makes sense if you have a lot of people who would step on each other's toes if you didn't assign them their areas, and who would come to a halt if you didn't give them enough autonomy.


The article puts forward a slightly different proposition, but IMO BFFs in smaller organizations are often managed by backend teams (which makes sense, seeing that "backend" is still in the name...). Same goes for teams with full-stack devs; they'll be touching whatever layer they want.

Having multiple layers of backend services can have benefits, and one of these layers would just happen to be dedicated to the public-facing frontend. To me, one of the main advantages is that it makes it easier to manage security settings and apply different assumptions across the whole API. It is single purpose, so it helps a lot with customization and management (or even choosing a different stack altogether, having different scaling strategies, etc.)

It becomes something managed really differently when the "real" backend is opaque and off limits (as described in the article), but then I'm not sure we should call that "BFF"; it's just a regular backend for that team, as they have no other backend in charge (i.e. if I had a single backend API for a mobile app that I used to interface with Stripe, I wouldn't call it "BFF", that would make no sense)


I've got some vendors. One of them is still in the Stone Age and has difficulty with anything more than http basic auth. The other can handle a more modern OAuth setup.

Each of these vendors needs a list of accounts in an area. Deep in the stack, the list of accounts and managing individuals tied to those accounts is one service. The vendor doesn't need access to the rest of the service, just the list of accounts.

Each vendor also needs to be able to submit orders and issue refunds. That's a different backend service.

One of the vendors has been known to be... shall we say "inconsiderate" in the rate of requests. It is important that the inconsiderate vendor doesn't impact the other one's performance.

We could add in basic auth into those back end services and add in some more roles for each one of the vendors. This would complicate the security libraries on the back end services needing to accept internal OAuth provider, external OAuth provider, and basic auth - and making sure that someone internal isn't using basic auth because they shouldn't be. And trying to handle per account/application rate limiting on those back end services that really don't need to or want to care about the assorted vendors that are out there.

So, we have a pair of BFFs. They're basically the same, though there's a different profile setting in Spring Boot to switch which set of auth is configured - if it's the vendor that uses the external OAuth provider, then the external OAuth configuration is used. If the profile is for the other vendor, then basic auth is used.

Each BFF calls the appropriate internal services and aggregates them - so that the internal services don't need to be concerned with the vendors. There's a BFF that has two instances and has access to these endpoints - that's good enough for the internal services. Likewise, each BFF has its own rate limit so that the inconsiderate vendor won't overload the backend or accidentally rate limit the other vendor out of being able to make requests.

The BFF handles the concern of the vendor. Aggregating the internal requests, rate limiting that vendor, providing isolation between the two vendors, and each handling the authentication and authorization for the vendor. By putting those in the BFF, those aspects of making requests to the vendor are kept isolated.

Additionally, the API contract for the vendor can be held constant while internal teams can change the internal API calls without worrying about if that data is leaking out to the vendor accidentally. Services internal can be updated with new endpoints and the BFF can be changed to point to them as long as the external vendor API contract remains the same - without needing to involve the vendor with a migration from a V1 endpoint to a V2 endpoint.


I am mystified at how many comments here are shocked at the price; the 4090 is exactly in line with recent NVIDIA pricing. The 4080s do seem a little strangely overpriced, but not by huge margins.

For comparison, the 3090 launched at $1500.


We’re not shocked that the prices are in line with their recent pricing strategy. We are shocked that they haven’t decided to change course to make up for the past lack of availability, crazy power consumption, and card prices soaring through the roof.


Land value taxes are not functionally different than just regular property taxes in the places where it matters. In all of the areas that are in the worst of the housing crisis, property value is already dominated by land value; LVT doesn't really change much.

The problem, just like the source says, is that the tax is too low, not to mention the absolute disaster that is Prop 13 in California.
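A toy calculation of the "dominated by land value" point (the numbers are assumed for illustration, not actual assessments): if land is 80% of assessed value, a revenue-neutral switch to LVT barely moves the effective burden.

```python
# Assumed figures for a high-cost metro property.
property_value = 1_000_000
land_value = 800_000               # land share: 80% of assessed value

property_tax_rate = 0.01           # 1% tax on total property value
revenue = property_value * property_tax_rate   # 10,000

# An LVT raising the same revenue needs revenue / land_value = 1.25%,
# applied to a base that is already 80% of the old one -- so the
# incentives for this parcel hardly change.
equivalent_lvt_rate = revenue / land_value
print(equivalent_lvt_rate)  # 0.0125
```

The gap between the two taxes only matters where land is a small fraction of property value, which is not where the housing crisis is.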


Interesting. Even with the examples given, I found Python considerably easier to follow, with the possible exception of the inheritance example.

Just goes to show how subjective it all is!


Indeed. I had been working with Ruby for at least a year before I started working with Python. To this day I still can't do anything useful in Ruby, yet I'm proficient in Python.

Python is far more readable and comprehensible than Ruby.

Haven't even bothered to read TFA because it's just weird flamewar bait.


I agree; Ruby's @ and @@ will probably make it a bit harder to read for anyone who is new to both languages, plus it seems more verbose to me. Python wins.


As an aside, it's bad practice to use @@ variables (IMO); they're easily clobbered. Class instance variables are much better[1].

I might also add: if you create getters for an instance variable, then you don't need to use the @ except in the getter itself (and you don't even need to do that, as there is the `attr_reader` helper for it).

[1] https://maximomussini.com/posts/ruby-class-variables


Yes, up until the mention of multiple inheritance, I thought the post was mediocre satire.


Same. The amount of boilerplate and "unnecessary" symbols in Ruby make it considerably less readable to me. The Python examples aren't just readable, they're glanceable.

