andrelaszlo's comments | Hacker News

This.

Sure, there's persistence but it always seemed like an afterthought. It's also unavailable in most hosted Redis services or very expensive when it's available.

There's also HA and clustering, which make data loss less likely, but that might not be good enough.

For the people wondering who would ever use Redis this way, check out Sidekiq! https://sidekiq.org/ "Ephemeral" jobs can be a big trade-off that many Rails teams aren't really aware of until it's too late. The Sidekiq docs don't mention this (last time I checked), so I can't really blame people for going with the "standard"/"best" job system and then being surprised when it gets super expensive to host.


The Swedish part of AMPRNet [0] has some ambitions to be a fallback in case of a crisis [1]. It seems cheaper and easier (a bit of an understatement) to deploy and repair if it gets attacked.

0: https://en.m.wikipedia.org/wiki/AMPRNet

1: https://amprnet.se/images/Kriskommunikation-2014-01-27.pdf


Fascinating: the task was supposed to be straightforward, a way to judge code quality and so on. Yet when the candidate solved it in a simple way, they were told they had a bug that they didn't have. That's okay, of course; coding is hard. It could have been an interesting opportunity for discussion, I guess.

What's interesting to me is the conclusion that it's somehow Python's fault. I wonder if that attitude would work if it came from the candidate.

I think we should be more careful with tests like this. They need to be done with more humility, so it's great that the post was written!


I think that's a good point. Say it's your company and you need to decide whether to build the product in C# or Python. It's going to affect productivity, but it's very difficult to say how. If you pick Brainf*, of course, most people can tell you productivity will suffer, but in the C#/Python example you might start building in C# since that's what you're familiar with, then have problems recruiting developers in your area a few years down the line because most people are now doing Python (say).

Technically, your original choice might be "correct" in some ways, but who knows?

Perhaps the discussion should revolve around how we make better decisions in complex, rapidly changing environments with limited information? It feels like we're clinging to the need for things to be predictable and measurable even when they're not, and we end up in more or less delusional discussions about which of two almost identical programming languages is better, or even tabs vs spaces :D


Perhaps this specific comparison is flawed.

It's the same as asking "Do we build it in Rust or TypeScript?" (not in terms of hiring pool but in terms of the specific language used: picking Python or C# will result in products with massively different grades of performance and quality).


Other things that make it difficult, some mentioned in the article:

- It's a strange activity. The output is non-linear, non-repeatable, and basically chaotic in some ways.

- You almost never build the same thing twice. Even if you do, productivity is affected (positively or negatively) just by the fact that you already did it before.

- The value of what you produce is unknown or fluctuates wildly. You might be a unicorn one day and bankrupt (even personally liable!) the next. Less extreme examples are interesting too: maybe your software is a plugin for a product that stops supporting third-party plugins.

I think, like the article says, that pretending to be able to measure output just because we have something that might look superficially like actual output in other activities (e.g. number of units produced in a factory) is fundamentally misguided. It's a cop-out since what's really missing a lot of the time is (good) leadership.


"Became" doesn't add much -> "Gothic architecture spooky"

"Gothic architecture" and "spooky" is basically synonymous -> "Spooky!"

Why use word when emoji do trick? -> U+1F47B


It reminded me of this conversation where DALL·E 3 refused to generate a picture of just water.

https://mastodon.social/@sibilant/113340784251650338

It's funny, their current landing page reads: "DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images."

I'm still impressed most of the time, don't get me wrong.


Almost halfway there ;)


There are 32 pieces on the board at the start of the game.


Cool! I've been wondering for a while whether it would be possible to use Lichess games at various ratings to reproduce typical mistakes.

I'm also curious whether it would be possible to mimic certain playing styles. Two beginners can have the same rating, but one might lose because of a weak opening and the other because they mess up the endgame, for example.

Random mistakes don't mimic human play very well.


Exactly. My eventual goal is to be able to emulate any single player with a public game history. Maybe even flag moves that don't look human but happen to be top Stockfish moves as possible cheating.

My current chess engine already hangs its queen sometimes and walks into forks. I'm still experimenting with how to improve personalization.


A lot of the power of Expect seems to come from the fact that it's (normally) configured/scripted in Tcl:

https://linux.die.net/man/1/expect

I really like that, as the article mentions, it looks like plain config for basic scripts but also scales to whatever you need it to do.
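
For anyone who hasn't seen it, here's a minimal sketch of what I mean (it drives a python3 session; the example and all names in it are just my own illustration, nothing from the article):

    #!/usr/bin/expect -f
    # A tiny Expect script: at this size it reads almost like a config file
    # of spawn/expect/send lines.
    set timeout 10
    spawn python3
    expect ">>> "

    # ...but it's plain Tcl underneath, so procs and loops are right there
    # when the script needs to grow.
    proc run {expr} {
        send "$expr\r"
        expect ">>> "
    }
    foreach expr {1+1 6*7 2**10} {
        run $expr
    }

    send "exit()\r"
    expect eof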


In our application server, written in Tcl/C, the configuration files were a Tcl DSL: the server would search for files with specific extensions and source them, done.
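
Roughly like this (a generic sketch with made-up names, not the actual code): the "DSL" is a handful of procs, and loading the config is nothing more than glob + source.

    # loader.tcl -- hypothetical. A config file such as conf.d/app.tcl then
    # reads as plain declarations:
    #
    #   listen_port 8080
    #   log_level   warn
    #
    proc listen_port {port}  { set ::config(port)      $port  }
    proc log_level   {level} { set ::config(log_level) $level }

    # search for files with the expected extension and source them, done
    foreach f [glob -nocomplain conf.d/*.tcl] {
        source $f
    }
    puts "loaded config: [array get ::config]"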


Indeed, and the ports of Expect to other languages (Perl's Expect.pm, Python's pexpect) feel awkward, as the constructs don't map quite as well to those languages.

