
So you're using Ryanair's own-issued payment card, to avoid the mandatory fees it charges for every other payment option?

You forgot to mention picking the "No I don't need travel insurance" option shoved in the middle of the list of travel insurance prices, which defaults to you buying travel insurance from Ryanair.

Do you already have their spyware app installed and tracking you on your phone, to avoid being charged £50 for a plain boarding pass which you print yourself?

You're describing some other airline's website, surely. If you'd used Ryanair's site you would not be unaware of its fuckery.


You're a few years out of date. You don't get charged extra for using any credit card.

And clicking "I don't need insurance" is easy.


It was there the last time I used Ryanair (which is one time too often IMHO)

They didn't choose to remove those fees - they were legally compelled to: https://www.dw.com/en/german-court-forbids-ryanair-from-char...

Dark patterns are still sketchy and unconscionable, regardless of how easy you find them to get past. They're put there by unscrupulous businesses to catch some people -- can you say no Ryanair customer has ever accidentally purchased Ryanair insurance they didn't need?

Similarly, their latest wheeze, that you skipped over, is to compel people to use their "app". The trading standards regulators need to smack Ryanair about the head with a cricket bat and again force them not to apply such bollocks.

https://www.independent.co.uk/travel/news-and-advice/ryanair...

> Indeed, when I checked in for my 12 November flight to Germany a day ahead, I was told: “Make sure to print and bring your boarding passes to the airport or access them through the Ryanair app” and even “boarding passes must be printed for use”.

> But Ryanair says those are no longer acceptable. Oddly, though, you can use a paper boarding pass that is printed out at the airport by ground staff working for Ryanair – at no charge.

Such utter bollocks. They are totally capable of accepting paper boarding passes (or screenshots or PDFs of boarding passes shown on a phone -- better airlines let you download a PDF from their website once checked in, and you can put it on your phone or print it out; no proprietary app needed), they just want to compel you to install their app and get tracked and dinged and marketed at and upsold up the wazoo with zero benefit to you. It is not necessary at all, and I will continue to never travel with them.


On the other hand, almost every merchant and waiter in Spain told me, when handing me the card terminal, to select "local currency" (decline the first swindle attempt) then "don't convert" (decline the second swindle attempt). There's obviously some required workflow where they must pass the terminal to the customer, but they are wise to the payment gateway's trick to extract additional value from the transaction. They don't want their customer bilked, or to take the reputational damage when the customer leaves an angry review.

So your Mexican merchants "don't know" what their terminal says? Either you were their first foreigner, or they're useful idiots, or they know.


Veritasium did a great video on Midgley: https://www.youtube.com/watch?v=IV3dnLzthDA

And therein they give the reason why ethanol was passed over: a lot of it is required to be effective (~10% of the fuel mixture), seriously dampening the profit margin of fuel sales! It works, but tetraethyl lead is so much cheaper


It's not even that. Alcohol destroys engines.

Sure, in retrospect lead is a bad idea, but for the sake of argument, if we ignore all the subtlety of the real-world choices and the research and development required, the argument would probably be:

We have this great additive that will let us make more powerful, efficient engines, and that is also stable and lubricating; or we could put something in the gas that degrades quickly and eats all the rubber seals out of our customers' engines.

In short, even ignoring price, alcohol was a non-starter then. Even today, with many years of developing rubbers that handle alcohol better, E blends are a lot harder on engines than non-E blends.

And a fun science experiment: "How do you tell how much alcohol is in the gas?" Fill a glass mason jar about a third full of gas and mark a line on the jar where the gas level is. Put in another third of water, coloured with food coloring, put the lid on and shake well, then let it separate and settle out. Mark a new line on the glass where the gas is, and figure out the percentage. The alcohol is water-soluble and will have formed a solution with the water; the food coloring will only color the water and will let you see the boundary layer more easily.
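
A quick back-of-the-envelope sketch of that last step, the percentage calculation (the numbers here are made up purely for illustration):

  # Measure the thickness of the gas layer (in ml or jar graduations) before
  # adding water and again after shaking and settling; the shrinkage is the
  # ethanol that dissolved into the water.
  gas_before = 300.0   # hypothetical: gas layer before the water goes in
  gas_after = 270.0    # hypothetical: gas layer after shaking and settling
  
  ethanol_pct = 100.0 * (gas_before - gas_after) / gas_before
  print(f"Approx. ethanol content: {ethanol_pct:.1f}%")  # 10.0% for these numbers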


That used to be true.

For a while now, any petrol car can run on high ethanol mixes without any damage.


I’m perpetually having to take apart and clean the carbs of a 2003 motorbike because of the added Ethanol in fuel nowadays.

True, but it was a real consideration for a surprisingly long time. And you still find a lot of lawnmowers that tell you not to use E mixes in them; I am not sure why (my guess is they are either being super cheap on the rubber, or just acknowledging the fact that lawnmowers tend to sit, and E mixes that sit tend to corrode things and go bad).

Then why use gasoline at all? This might sound sacrilegious, but I honestly wish LPG had more pull than it did in Canada.

It makes a lot of sense to me.

The key problem is that GO SUB and GO TO are not instantaneous like a CALL or JP instruction would be in Z80. The BASIC interpreter does a linear scan through the entire source code until it reaches the specified line number, every time you want to jump or call.

That's why this article is called "Efficient Basic Coding"... all the "tricks" are about moving the most frequently called code to the lowest line numbers, so the interpreter spends as little time as possible scanning the source code for destination line numbers.

The second article in the series is on a similar theme: variables aren't indexed either, and the interpreter scans through each variable in turn until it finds the one with the name referenced in the source code... again, you want to define them in order of most frequently accessed...
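
A toy sketch of that behaviour (Python, purely illustrative, not Sinclair BASIC's actual code): every GO TO / GO SUB pays for a scan from the top of the listing, so the further down the target line lives, the slower every jump becomes.

  # Toy model of a line-number lookup in a scanning BASIC interpreter.
  program = [(10, "hot subroutine"), (20, "RETURN"),
             (9000, "main loop"), (9010, "GO SUB 10")]
  
  def steps_to_find(target):
      # Linear scan from the first line, just like the interpreter does on
      # every GO TO / GO SUB, with no index of line numbers to consult.
      for steps, (number, _stmt) in enumerate(program, start=1):
          if number == target:
              return steps
      raise ValueError("no such line")
  
  print(steps_to_find(10))    # 1 comparison: hot code at a low line number
  print(steps_to_find(9000))  # 3 comparisons: every jump to it scans past the rest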


That's also what gopsutil does, IIRC: it tries to look up process information with kernel APIs but can fall back to invoking /usr/bin/ps (which is setuid root on most systems) at the cost of being much less performant.
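
The shape of that fallback, sketched in Python rather than gopsutil's actual Go code (the procfs path and the ps invocation here are just illustrative):

  import subprocess
  
  def process_name(pid):
      # Fast path: ask the kernel directly via procfs (Linux).
      try:
          with open(f"/proc/{pid}/comm") as f:
              return f.read().strip()
      except OSError:
          pass
      # Slow path: shell out to ps, which may run with privileges we lack,
      # at the cost of forking a process for every lookup.
      result = subprocess.run(["ps", "-p", str(pid), "-o", "comm="],
                              capture_output=True, text=True, check=True)
      return result.stdout.strip()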

Per the State Department in 2023:

https://x.com/John_Hudson/status/1615486871571935232

> fonts like Times New Roman have serifs ("wings" and "feet") or decorative, angular features that can introduce accessibility issues for individuals with disabilities who use Optical Character Recognition technology or screen readers. It can also cause visual recognition issues for individuals with learning disabilities.

> On January 4, 2023, in support of the Department's iCount Campaign on disability inclusion (reftels), Secretary Blinken directed the Department to use a more accessible font. Calibri has no wings and feet and is the default font in Microsoft products and was recommended as an accessibility best practice by the Secretary's Office of Diversity and Inclusion in collaboration with the Executive Secretariat and the Bureau of Global Talent Management's Office of Accessibility and Accommodations.

In 2023, the US State Department signalled how virtuous it was by moving from the previously-default MS Office font to the then-currently-default MS Office font. The current MS Office default font is Aptos; place your bets on what the State Department is going to switch the font to in 3 years' time.

As far as I know, font choice has zero effect on screen readers, which ask compatible software what words are on screen and read them out. There is evidence that serifs cause visual recognition issues for some individuals, but there's also evidence they aid recognition for different individuals.

It probably helped everyone to choose 14pt Calibri over 12pt Times New Roman, as the font is more legible on LCD screens.

The virtue being signalled by the current administration is that everything their predecessors did was wrong and they're literally going to reverse everything out of sheer pettiness. If anything, they should acknowledge the president's long friendship with Epstein and pick Gill Sans as the default. That would be the ultimate "anti-woke" move I think.


There is. You can easily find it by googling "hector martin" "asahi lina"; you will soon find a pile of obsessively archived evidence.

My view, now that Hector has resigned from the LKML both as himself and as Lina, is that there is no problem any more. If Hector wants to project a persona, or decide his identity is the persona, or maybe multiple personas, that is fine on social media. People do that. So long as he's not sockpuppeting Linux kernel maintenance, it's fine.


Instead of prompting me to "do my own research" (spoiler: the 'evidence' is lacking and is mostly tedious speculation), you could provide something more compelling.

Previously on HN:

https://news.ycombinator.com/item?id=35238601

https://news.ycombinator.com/item?id=35252948

https://news.ycombinator.com/item?id=42978356

https://news.ycombinator.com/item?id=42979681

Hector does active work to try and prevent being associated with his Lina persona, mainly by continuing his vendetta against former associate and co-creator of the Lina character, Luna the Foxgirl.


It all depends on the value each and every person brings. Otherwise, I'd rather choose to work with a normal colleague than with someone who can't keep his obsessive schizophrenia out of the work process. I guess this should be the baseline.

Chiefly because "supporting it" requires a full JavaScript interpreter, and subscribing to changes in "system settings" during the lifetime of your program. Easier just to support http_proxy/https_proxy/no_proxy (and what standard for no_proxy? Does it support CIDR ranges?) or even less flexibility than that.

If only http_proxy/https_proxy/no_proxy at startup time were more widely supported then. In my case I had to deploy software into a kubernetes cluster managed by a different company that required these configurations.

Does this mean that all architectures that Linux supports but Rust doesn't are straight in the bin?

No, Rust is in the kernel for driver subsystems. Core Linux parts can't be written in Rust yet, because of the problem you mention, but new drivers *can* be written in Rust.

Which ones would that be?

In order to figure this out I took the list of platforms supported by Rust from https://doc.rust-lang.org/nightly/rustc/platform-support.htm... and those supported by Linux from https://docs.kernel.org/arch/index.html, cleaned them up so they can be compared like for like and then put them into this python script:

  linux = { "alpha", "arc", "arm", "aarch64", "csky", "hexagon",
    "loongarch", "m68k", "microblaze", "mips", "mips64", "nios2",
    "openrisc", "parisc", "powerpc", "powerpc64", "riscv",
    "s390", "s390x", "sh", "sparc", "sparc64", "x86", "x86_64",
    "xtensa" }
  
  rust = { "arm", "aarch64", "amdcgn", "avr", "bpfeb", "bpfel",
    "csky", "hexagon", "x86", "x86_64", "loongarch", "m68k",
    "mips", "mips64", "msp430", "nvptx", "powerpc", "powerpc64",
    "riscv", "s390x", "sparc", "sparc64", "wasm32", "wasm64",
    "xtensa" }
  
  print(f"Both: {linux.intersection(rust)}")
  print(f"Linux, but not Rust: {linux.difference(rust)}")
  print(f"Rust, but not Linux: {rust.difference(linux)}")
Which yields:

Both: {'aarch64', 'xtensa', 'sparc', 'm68k', 'mips64', 'sparc64', 'csky', 'riscv', 'powerpc64', 's390x', 'x86', 'powerpc', 'loongarch', 'mips', 'hexagon', 'arm', 'x86_64'}

Linux, but not Rust: {'nios2', 'microblaze', 'arc', 'openrisc', 'parisc', 's390', 'alpha', 'sh'}

Rust, but not Linux: {'avr', 'bpfel', 'amdgcn', 'wasm32', 'msp430', 'bpfeb', 'nvptx', 'wasm64'}

Personally, I've never used a computer from the "Linux, but not Rust" list, although I have gotten close to a DEC Alpha that was on display somewhere, and I know somebody who had a Sega Dreamcast (`sh`) at some point.


Well, GCC 15 already ended support for the nios2 soft-core. The successor to it is Nios V, which runs RISC-V. If users want us to update the kernel, they'll also need to update their FPGA.

Microblaze is also a soft-core based on RISC-V; presumably it could support actual RISC-V if anyone cared.

All the others haven't received new hardware within the last 10 years; everybody using these will already be running an LTS kernel on them.

It looks like there really are no reasons not to require Rust for new versions of the kernel from now on, then!


Is this the same NIOS that runs on FPGA? We wrote some code for it during digital design at university, and even an a.out was terribly slow; I can't imagine running a full kernel. Though that could have been the fault of the hardware or IP we were using.

You fell into the trap I predicted a few years ago when the renaming happened. Microblaze refers to the old microblaze soft core, not the new one that they named the same thing and that is based on RISCV.

It’s an interesting list from the perspective of what kind of project Linux is. Things like PA-RISC and Alpha were dead even in the 90s (thanks to the successful Itanium marketing push convincing executives not to invest in their own architectures), and SuperH was only relevant in the 90s due to the Dreamcast pushing volume. That creates an interesting dynamic where Linux as a hobbyist OS has people who want to support those architectures, but Linux as the dominant server and mobile OS doesn’t want to hold back 99.999999+% of the running kernels in the world.

There was a time when, as far as 64-bit support went, Alpha really was the only game in town where you could buy a server without adding a sixth zero to the bill. It was AMD, not Itanium, that killed Alpha.

I remember that time being before 64-bit became a requirement for most people (I worked with some computational scientists who did buy Alphas for that reason since they needed the flat memory space without remapping hacks). The Alpha was indeed great but Intel did enough of a job convincing most manufacturers that the only future was Itanium that only a few niche workstation manufacturers picked up the Alpha, and they had 64-bit competition from SPARC, MIPS, and POWER pretty quickly.

I do agree that it was AMD which really made 64-bit mainstream. There's an interesting what-if game about how the 90s might've gone if Intel's marketing had been less successful, or if Rick Belluzzo hadn't bought into it, since he killed PA-RISC and HPUX before moving to SGI, where he killed both MIPS and Irix and gave the high-end graphics business to nVidia.


HP already owned Compaq (which had previously bought DEC) when the Alpha was killed. HP chose Itanium for their servers. They settled some patent issues with Intel partly by killing Alpha instead of a more traditional cross-licensing agreement.

AMD killed Itanium. HP was pretty far along killing Alpha all on their own.


AMD, and the addition of PAE to the Pentium Pro, which allowed 32-bit systems to reasonably have huge (for that time) amounts of memory.

There's Rust for Dreamcast (https://dreamcast.rs) via Rust's GCC backend.

You probably shouldn't include Rust's Tier 3 in this list. If you have to, at least make it separate.

some of the platforms on that list are not supported well enough to say they actually have usable rust, e.g. m68k

I've certainly used parisc and alpha, though not for some years now.

https://lwn.net/Articles/1045363/

> Rust, which has been cited as a cause for concern around ensuring continuing support for old architectures, supports 14 of the kernel's 20-ish architectures, the exceptions being Alpha, Nios II, OpenRISC, PARISC, and SuperH.


> supports 14 of the kernel's 20-ish architectures

That's a lot better than I expected, to be honest; I was thinking maybe Rust supported 6-7 architectures in total, but it seems Rust already has pretty wide support. If you start considering all tiers, the scope of support seems enormous: https://doc.rust-lang.org/nightly/rustc/platform-support.htm...


They probably get a few for "free" from LLVM supporting them

LLVM platform support is neither sufficient (rustc needs to be taught about the platform) nor technically necessary (you could write a rustc backend that targets a platform LLVM doesn't, e.g. by modifying cranelift, or via the gcc backend once it reaches maturity).

Wasn't that a whole tent pole of LLVM?

It's strange to me that Linux dropped Itanium two years ago but they decided to keep supporting Alpha and PA-RISC.

Itanium was mainly dropped because it was impeding work in the EFI subsystem. EFI was originally developed for Itanium before being ported to other platforms, so the EFI maintainers had to build and test their changes to shared code on Itanium. They eventually decided that this wasn't worth the effort when there was basically nobody running modern Linux on Itanium besides a few hobbyists. I imagine that alpha and hppa will get dropped in the future if they ever also create an undue maintenance burden. There's more context here if you're interested: https://lwn.net/Articles/950466/

I do wonder how many human beings are running the latest Linux kernel on Alpha.

And more specifically, which ones would anyone actually use a new kernel on?

Some people talk about 68k not being supported as being a problem.

m68k Linux is supported by Rust, even in the LLVM backend.

Rust also has an experimental GCC-based codegen backend (based on libgccjit (which isn't used as a JIT)).

So platforms that have neither LLVM nor recent GCC are screwed.


how on earth is linux being compiled for platforms without a GCC?

additionally, I believe the GCC backend is incomplete. the `core` library is able to compile, but rust's `std` cannot be.


>nor recent GCC are screwed.

Not having a recent GCC and not having GCC are different things. There may be architectures that only have older GCC versions, which don't support more current C specs like C11, C23, etc.


I don't believe Rust for Linux uses std. I'm not sure how much of Rust for Linux the GCC/Rust effort(s) are able to compile, but if it was "all of it" I'm sure we'd have heard about it.

Yes. Cooked.

Perl's binary brings with it the ability to run every release of the language, from 5.8 onwards. You can mix and match Perl 5.30 code with 5.8 code with 5.20 code, whatever, just say "use v5.20.0;" at the start of each module or script.

By comparison, Python can barely go one version without both introducing new things and removing old things from the language, so anything written in Python is only safe for a fragile, narrow window of versions, and anything written for it needs to keep being updated just to stay where it is.

Python interpreter: if you can tell "print" is being used as a keyword rather than a function call, in order to scold the programmer for doing that, you can equally just perform the function call.


> By comparison, Python can barely go one version without both introducing new things and removing old things from the language

Overwhelmingly, what gets removed is from the standard library, and it's extremely old stuff. As recently as 3.11 you could use `distutils` (the predecessor to Setuptools). And in 3.12 you could still use `pipes` (a predecessor to `subprocess` that nobody ever talked about even when `subprocess` was new; `subprocess` was viewed as directly replacing DIY with `os.system` and the `os.exec` family). And `sunau`. And `telnetlib`.

Can you show me a real-world package that was held back because the code needed a feature or semantics from the interpreter* of a 3.x Python version that was going EOL?

> Python interpreter: if you can tell "print" is being used as a keyword rather than a function call, in order to scold the programmer for doing that, you can equally just perform the function call.

No, that doesn't work because the statement form has radically different semantics. You'd need to keep the entire grammar for it (and decide what to do if someone tries to embed a "print statement" in a larger expression). Plus the function calls can usually be parsed as the statement form with entirely permissible parentheses, so you have to decide whether a file that uses the statement should switch everything over to the legacy parsing. Plus the function call affords syntax that doesn't work with the original statement form, so you have to decide whether to accept those as well, or else how to report the error. Plus in 2.7, surrounding parentheses are not redundant, and change the meaning:

  $ py2.7 
  Python 2.7.18 (default, Feb 20 2025, 09:47:11) 
  [GCC 13.3.0] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> print('foo', 'bar')
  ('foo', 'bar')
  >>> print 'foo', 'bar'
  foo bar
The incompatible bytes/string handling is also a fundamental shift. You would at least need a pragma.


> Can you show me a real-world package that was held back because the code needed a feature or semantics from the interpreter

That is not what I was getting at. What I was saying is that, if you write code for perl 5.20 and mark it "use 5.20.0;", then that's it, you're done, code never needs to change again. You can bring in newer perl interpreters, you can upgrade, it's almost certainly not going to break.

You can even write new code down the line in Perl 5.32 which wouldn't be possible in 5.20, and the 5.20 code wouldn't be valid in 5.32, but as they're both declaring which version of the language they're written in, they just seamlessly work together in the same interpreter.

Compared to Python's deliberate policy, which is that they won't guarantee your code will still run after two minor releases, and they have a habit of actively removing things, and there's only one version the interpreter implements and all code in the same interpreter has to be compatible with that version... it means a continual stream of having to update code just so it still runs. And you don't know what they're going to deprecate or remove until they do it, so it's not possible to write anything futureproof.

> in 2.7, surrounding parentheses are not redundant,

That is interesting; I wasn't aware of that. And indeed that would be a thorny problem, more so than keeping a print statement in the grammar.

Fun fact: the parentheses for all function calls are redundant in perl. It also flattens plain arrays and does not have some mad tuple-list distinction. These are all the same call to the foo subroutine:

    foo "bar", "baz"
    foo ("bar", "baz")
    foo (("bar", "baz"))
    foo (("bar"), "baz")
    foo (((("bar")), "baz"))


> Compared to Python's deliberate policy, which is they won't guarantee your code will still run after two minor releases

They don't guarantee that the entire standard library will be available to you two minor releases hence. Your code will still run if you just vendor those pieces (and thanks to how `sys.path` works, and the fact that the standard library was never namespaced, shadowing the standard library is trivial). And they tell you up front what will be removed. It is not because of a runtime change that anything breaks here.
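
A minimal sketch of that vendoring/shadowing idea (the file layout and the choice of `pipes` as the module are hypothetical):

  # main.py -- put a private copy of a removed module (say, vendored/pipes.py,
  # copied from an older CPython) ahead of the standard library on sys.path.
  import os
  import sys
  
  here = os.path.dirname(os.path.abspath(__file__))
  sys.path.insert(0, os.path.join(here, "vendored"))
  
  import pipes  # resolves to vendored/pipes.py even on 3.13+, where it was removed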

Python 3 has essentially prevented any risk of semantic changes or syntax errors in older but 3.x-compatible code. That's what the `__future__` system is about. The only future feature that has become mandatory is `generator_stop` since 3.7 (see https://peps.python.org/pep-0479/), which is very much a corner case anyway. In particular, the 3.7-onward annotations system will not become mandatory, because it's being replaced by the 3.14-onward system (https://peps.python.org/pep-0649/). And aside from that again the only issue I'm aware of (or at least can think of at the moment) is the async-keyword one.
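
For instance, the generator_stop change only bites rather unusual code (a minimal sketch, run on Python 3.7+ where it is the default):

  def gen():
      yield 1
      raise StopIteration  # pre-PEP 479 this silently ended iteration
  
  try:
      list(gen())
  except RuntimeError as err:
      # Since 3.7, a StopIteration escaping a generator body is converted
      # to RuntimeError instead of quietly truncating the sequence.
      print("PEP 479 converted it:", err)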

> And you don't know what they're going to deprecate or remove until they do it

This is simply untrue. Deprecation plans are discussed in public and now that they've been burned a few times, removal is scheduled up front (although it can happen that someone gives a compelling reason to undo the deprecation).

It's true that you can't make your own code, using the standard library (which is practically impossible to avoid), forwards-compatible to future standard libraries indefinitely. But that's just a matter of what other code you're pulling in, when you didn't write it in the first place. Vendoring is always an option. So are compatibility "forward-ports" like https://github.com/youknowone/python-deadlib. And in practice your users are expecting you to put out updates anyway.

And most of them are expecting to update their local Python installations eventually, because the core Python team won't support those forever, either. If you want to use old FOSS you'll have to accept that support resources are limited. (Not to mention all the other bitrot issues.)


asyncio.get_event_loop ?


I seem to have messed up my italics. The emphasis was supposed to be on "from the interpreter". asyncio.get_event_loop is a standard library function.


Well, isn't that nice. The boxes I care most about are 32-bit. The perl I use is 5.0, circa 2008. Might you, amiga386, or anyone else (thank you in advance), be able to tell me what I need in order to upgrade to perl 5.8? Is it only perl 5.8 and whatever is the contemporaneous gcc? Will the rest of my Suse 11.1, circa 2008, crunch? May I have two gcc's on the same box/distro version, and give the path to the later one when I need it? The reason I am still with Suse 11.1 is that later releases broke other, earlier things I care about, and I could not fix them.


suse 11.1 includes perl 5.10: https://ftp5.gwdg.de/pub/opensuse/discontinued/distribution/...

perl 5.8 was released in 2002. perl 5.000 (there is no perl 5.0) was released in 1994. I have difficulty accepting you have perl "5.0" installed.


I assumed 5.0 was referring to using any of the 5.00x series.


The Python approach seems better for avoiding subtle bugs. TIMTOWTDI vs "there should be one obvious way to do it" again.

