
Do you have some concrete or specific examples of intentional compensation or purposeful scaffolding in mind (outside the topic of the article)?

Not scaffolding in the same way, but two examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations" are

- the fashion for unpainted marble statues and architecture

- the aesthetic of running film slightly too fast in the projector (or slightly too slow in the camera) for an old-timey effect


Isn’t the frame rate of film something like that?

The industry decided on 24 FPS as something of an average of the multiple existing company standards and it was fast enough to provide smooth motion, avoid flicker, and not use too much film ($$$).

Over time it became “the film look”. One hundred-ish years later, we still shoot the TV shows and movies we want to look “good” (as opposed to “fake”, like a soap opera) in it.

And it’s all happenstance. The movie industry could’ve moved to something higher at any point; nothing stopped it other than inertia. With TV being 60i, it would have made plenty of sense for film to go to 30p to allow it to be shown on TV better once that became a thing.

But by then it was enshrined.


Another example: pixel art in games.

Now, don't get me wrong, I'm a fan of pixel art and retro games.

But this reminds me of when people complained that the latest Monkey Island didn't use pixel art, and Ron Gilbert had to explain the original "The Curse of Monkey Island" wasn't "a pixel art game" either, it was a "state of the art game (for that time)", and it was never his intention to make retro games.

Many classic games had pixel art by accident; it was the most feasible technology at the time.


I don't think anyone would have complained if the art had been more detailed but in the same style as the original or even using real digitized actors.

Monkey Island II's art was slightly more comic-like than say The Last Crusade but still with realistic proportions and movements so that was the expectation before CoMI.

The art style changing to silly-comic is what got people riled up.


Hard disagree.

(Also a correction: by original I meant "Secret of" but mistyped "Curse of").

I meant Return to Monkey Island (2022), which was no more abrupt a change than say, "The Curse of Monkey Island" (1997).

Monkey Island was always "silly comic", it's its sine qua non.

People whined because they wanted a retro game, they wanted "the same style" (pixels) as the original "Secret", but Ron Gilbert was pretty explicit about this: "Secret" looked the way it did due to limitations of the time; he wasn't "going for that style", it was just the style they could manage with pixel art. Monkey Island was a state-of-the-art game for its time.

So my example is fully within the terms of the concept we're describing: people growing attached to technical limitations, or in the original words:

> [...] examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations"


Motion blur. 24fps. Grain. Practically everything we call cinematic.

I wouldn't call it "fetishizing" though; not all of them anyway.

Motion blur happens with real vision, so anything without blur would look odd. There's cinematic exaggeration, of course.

24 FPS is indeed entirely artificial, but I wouldn't call it a fetish: if you've grown up with 24 FPS movies, a higher frame rate will paradoxically look artificial! It's not a snobby thing, maybe it's an "uncanny valley" thing? To me higher frame rates (as in how The Hobbit was released) make the actors look fake, almost like automatons or puppets. I know it makes no objective sense, but at the same time it's not fetishization. I also cannot get used to it, and it doesn't go away as I get immersed in the movie (it doesn't help that The Hobbit is trash, of course, but that's a tangent).

Grain, I'd argue, is the true fetish. There's no grain in real life (unless you have a visual impairment). You forget fast about the lack of grain if you're immersed in the movie. I like grain, but it's 100% an esthetic preference, i.e. a fetish.


>Motion blur happens with real vision, so anything without blur would look odd.

You watch the video with your eyes so it's not possible to get "odd"-looking lack of blur. There's no need to add extra motion blur on top of the naturally occurring blur.


On the contrary, an object moving across your field of vision will produce a level of motion blur in your eyes. The same object recorded at 24fps and then projected or displayed in front of your eyes will produce a different level of motion blur, because the object is no longer moving continuously across your vision but instead moving in discrete steps. The exact character of this motion blur can be influenced by controlling what fraction of that 1/24th of a second the image is exposed for (vs. having the screen black).

The most natural level of motion blur for a moving picture to exhibit is not that traditionally exhibited by 24fps film, but it is equally not none (unless your motion picture is recorded at such a high frame rate that it substantially exceeds the reaction time of your eyes, which is rather infeasible).


In principle, I agree.

In practice, I think the kind of blur that happens when you're looking at a physical object vs an object projected on a crisp, lit screen, with postprocessing/color grading/light meant for the screen, is different. I'm also not sure whatever is captured by a camera looks the same in motion as what you see with your eyes; in effect even the best camera is always introducing a distortion, so it has to be corrected somehow. The camera is "faking" movement, it's just that it's more convincing than a simple cartoon as a sequence of static drawings. (Note I'm speaking from intuition, I'm not making a formal claim!).

That's why (IMO) you don't need "motion blur" effects for live theater, but you do for cinema and TV shows: real physical objects and people vs whatever exists on a flat surface that emits light.


You're forgetting about the shutter angle. A large shutter angle will have a lot of motion blur and feel fluid even at a low frame rate, while a small shutter angle will make movement feel stilted but every frame will be fully legible, which is very useful for chaotic scenes. Saving Private Ryan, for example, used a small shutter angle. And until digital, you were restricted to a maximum shutter angle of 180 degrees, which meant that very fast moving elements would still jump from frame to frame in between exposures.

I suspect 24fps is popular because it forces the videographer to be more intentional with motion. Too blurry, and it becomes incomprehensible. That, and everything staying sharp at 60fps makes it look like TikTok slop.

24fps looks a little different on a real film projector than on nearly all home screens, too. There's a little time between each frame when a full-frame black is projected (the light is blocked, that is) as the film advances (else you'd get a horrid and probably nausea-inducing smear as the film moved). This (oddly enough!) has the effect of apparently smoothing motion. The "motion smoothing" settings on e.g. modern TVs don't match that effect, unfortunately, but look like something else entirely (which one may or may not find intolerably awful).

Some of your fancier, brighter (because you lose some apparent brightness by cutting the light for fractions of a second) home digital projectors can convincingly mimic the effect, but otherwise, you'll never quite get things like 24fps panning judder down to imperceptible levels, like a real film projector can.


Reminds me of how pixel-perfect emulation of pixel art on a modern screen is often ugly, compared to the game played on a CRT.

> (which one may or may not find intolerably awful).

"Motion smoothing" on TVs is the first thing I disable, I really hate it.


Me at every AirBnB: turn on TV "OH MY GOD WTF MY EYES ARE BLEEDING where is the settings button?" go turn off noise reduction, upscaling, motion smoothing.

I think I've seen like one out of a couple dozen where the motion smoothing was already off.


I think the "real" problem is not matching shutter speed to frame rate. With 24fps you have to make a strong choice: either the shutter speed is 1/24s or 1/48s, or any panning movement is going to look like absolute garbage. But with 60+fps, even if your shutter speed is incredibly fast, motion will still look decent, because there are enough frames being shown that the motion isn't jerky. It looks unnatural, just harder to put your finger on why (whereas 24fps at 1/1000s looks unnatural for obvious reasons: the entire picture jerks when you're panning).

The solution is 60fps at 1/60s. Panning looks pretty natural again, as does most other motion, and you get clarity for fast-moving objects. You can play around with different framerates, but imo anything faster than 1/120s (a 180 degree shutter in film speak) will start severely degrading the watch experience.

I've been doing a good bit of filming of cars at autocross and road course circuits the past two years, and I've received a number of compliments on the smoothness and clarity of the footage - "how does that video out of your dslr [note: it's a Lumix G9 mirrorless] look so good" is a common one. The answer is 60fps, 1/60s shutter, and lots of in-body and in-lens stabilization so my by-hand tracking shots aren't wildly swinging around. At 24/25/30fps everything either degrades into a blurry mess, or is too choppy to be enjoyable, but at 60fps and 1/500s or 1/1000s, it looks like a (crappy) video game.
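If you want to play with the numbers yourself, the shutter-angle arithmetic above is just this (a quick sketch, nothing camera-specific):

    # Shutter angle = 360 degrees * frame rate * exposure time.
    def shutter_angle(fps, exposure_s):
        return 360.0 * fps * exposure_s

    print(shutter_angle(24, 1/48))    # 180.0 -> the classic "film look"
    print(shutter_angle(60, 1/60))    # 360.0 -> the fully open setting described above
    print(shutter_angle(60, 1/1000))  # 21.6  -> very crisp, "video game" feel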


Is getting something like this wrong why e.g. The Hobbit looked so damn weird? I didn't have a strong opinion on higher FPS films, and was even kinda excited about it, until I watched that in theaters. Not only did it have (to me, just a tiny bit of) the oft-complained-about "soap opera" effect due to the association of higher frame rates with cheap shot-on-video content—the main problem was that any time a character was moving it felt wrong, like a manually-cranked silent film playing back at inconsistent speeds. Often it looked like characters were moving at speed-walking rates when their affect and gait were calm and casual. Totally bizarre and ruined any amount of enjoyment I may have gotten out of it (other quality issues aside). That's not something I've noticed in other higher FPS content (the "soap opera" effect, yes; things looking subtly sped-up or slowed-down, no).

[EDIT] I mean, IIRC that was 48fps, not 60, so you'd think they'd get the shutter timing right, but man, something was wrong with it.


Great examples. My mind jumps straight to audio:

- the pops and hiss of analog vinyl records, deliberately added by digital hip-hop artists

- electric guitar distortion pedals designed to mimic the sound of overheated tube amps or speaker cones torn from being blown out


- Audio compression was/is necessary to get good SNR on mag tape.

true - but are you implying audio engineers are now leaning into heavy compression for artistic reasons?

Not necessarily heavy (except sometimes as an effect), but some compression almost all the time for artistic reasons, yes.

Most people would barely notice it as it's waaaay more subtle than your distorted guitar example. But it's there.

Part of the likeable sound of albums made on tape is the particular combination of old-time compressors used to make sure enough level gets to the tape, plus the way tape compresses the signal again on recording by its nature.
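If it helps to make "subtle" concrete, here's a toy static compressor curve (illustrative only; real compressors add attack/release behaviour, and tape saturation is its own nonlinear beast):

    # Toy static compressor: gentle 2:1 ratio above a -18 dB threshold.
    def compress_db(level_db, threshold_db=-18.0, ratio=2.0):
        if level_db <= threshold_db:
            return level_db          # below threshold: untouched
        return threshold_db + (level_db - threshold_db) / ratio

    for peak in (-30, -18, -6, 0):
        print(peak, "->", compress_db(peak))  # a 0 dB peak comes out at -9 dB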


I work in vfx, and we had a lecture from one of the art designers that worked with some formula 1 teams on the color design for cars. It was really interesting on how much work goes into making the car look "iconic" but also highlight sponsors, etc.

But for your point, back during the pal/ntsc analog days, the physical color of the cars was set so when viewed on analog broadcast, the color would be correct (very similar to film scanning).

He worked for a different team but brought in a small piece of ferrari bodywork and it was more of a day-glo red-orange than the delicious red we all think of with ferrari.


Yes. The LEON series of microprocessors is quite common in the space industry. It is based on SPARC v8, and SPARC is big-endian. And also, yes, SPARC v8 is a 33-year-old 32-bit architecture; in space we tend to stick to the trailing edge of technology.

Hard to replace last year's model super-resolutionator with this year's model when you have to go out of your way to install it. ;)

Also remember: Even though many of these articles/books/papers/etc. are good, even great, some of them are starting to get a bit old. When reading them, check what modern commentators are saying about them.

E.g.: What every programmer should know about memory (18 years old) [1]

How much of ‘What Every Programmer Should Know About Memory’ is still valid? (13 years old) [2]

[1]: https://lwn.net/Articles/250967/

[2]: https://stackoverflow.com/questions/8126311/how-much-of-what...


While I cannot comment on the specifics you listed, I don't think the fundamentals have changed much concerning memory. Always good to have something more digestible though.


I've always wondered how well these RPi based cubesats really work in space. Really hard to find out. Also, people (naturally) aren't always eager to talk about failed projects. Maybe some people here on HN have experiences to share?


In my experience, having provided advice to a lot of academic CubeSats: the issues usually aren't related to the parts, the problems are usually lack of testing and general inexperience.

Yes, a Raspberry Pi isn't radiation hardened, but in LEO (say around 400-500 km) the radiation environment isn't that severe. Total ionizing dose is not a problem. High energy particles causing single event effects are an issue, but these can be addressed with design mitigations: a window watchdog timer to reset the Pi, multiple copies of flight software on different flash ICs to switch between if one copy is corrupted, latchup detection circuits, etc. None of these mitigations require expensive space qualified hardware to reasonably address.
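To give a flavor of what those mitigations look like in software, here is a rough sketch (hypothetical file paths and helper names, not any particular flight stack):

    import zlib

    # Hypothetical flash locations holding redundant copies of the flight software.
    IMAGE_SLOTS = ["/flash0/app.bin", "/flash1/app.bin", "/flash2/app.bin"]

    def select_good_image(expected_crc):
        """Boot from the first redundant copy whose checksum matches."""
        for slot in IMAGE_SLOTS:
            try:
                with open(slot, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # this flash IC may itself be unreadable; try the next one
            if zlib.crc32(data) == expected_crc:
                return data
        raise RuntimeError("all copies corrupted: stay in safe mode, wait for uplink")

    def kick_watchdog():
        # Placeholder: on real hardware this strobes an external window watchdog
        # that power-cycles the Pi if the strobe stops (or falls outside the window).
        pass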

The usual issues I see in academic CubeSats are mostly programmatic. These things are usually built by students, and generally speaking a CubeSat project is just a bit too long (3-4 years design and build + 1-2 years operations) to have good continuity of personnel; by the end, you usually have nobody left who has been there since the beginning except the principal investigator and maybe a couple PhD students.

And since everyone is very green (for many students, this is their first serious multidisciplinary development effort) people are bound to make mistakes. Now, that's a good thing, the whole point is learning. The problem is that extensive testing is usually neglected on academic CubeSats, either because of time pressure to meet a launch date or the team simply doesn't know how to test effectively. So, they'll launch it, and it'll be DOA on orbit since nobody did a fully integrated test campaign.


As someone who has successfully flown an RPi CM4 based payload on a CubeSat, I fully agree with this. There's not enough funding in my research group to hire a dedicated test engineer, so I had to both design and test my payload. It was a long, lonely road.

It does work in the end, but shortly after we got our first data from space, I decided to quit the space industry and become a test engineer at a terrestrial embedded company instead.


It's a bit like balloon projects that have a transmitter. I think by now the 20th group has found out that standard GPS receivers stop reporting data above a specific altitude because of the COCOM limit implementation (they 'or' speed and altitude). Well, there are quite a few modules around that 'and' this rule and so work perfectly fine at great altitudes.
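Concretely, the difference is just one boolean operator in the receiver's limit check; a toy illustration (not any particular module's firmware, using the usual ~18 km / ~515 m/s figures):

    COCOM_ALT_M = 18_000      # ~18 km / 60,000 ft
    COCOM_SPEED_MS = 515.0    # ~1000 knots

    def blocked_or(alt_m, speed_ms):
        # Strict interpretation: cut the fix if EITHER limit is exceeded.
        # A balloon at 30 km but nearly stationary goes silent.
        return alt_m > COCOM_ALT_M or speed_ms > COCOM_SPEED_MS

    def blocked_and(alt_m, speed_ms):
        # Friendlier interpretation: cut the fix only if BOTH are exceeded,
        # so a slow, high-altitude balloon keeps reporting.
        return alt_m > COCOM_ALT_M and speed_ms > COCOM_SPEED_MS

    print(blocked_or(30_000, 5.0))   # True  -> no position data
    print(blocked_and(30_000, 5.0))  # False -> still works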

It's all about the learning experience and evolution of these projects. Mistakes must happen, but learning from them should take place too.


That's kind of how I was thinking about it. Why does each CubeSat project have to start over from scratch? Why isn't there a basic set of projects that a team can build on top of to add their own custom sensors for their purpose? The basic operational stuff, like the suggested multiple storage types with redundant code, shouldn't need to be recreated each time. Just continue using what worked, and tweak what didn't. No need to constantly reinvent the wheel just because it's students learning.


Yep, but students love reinventing the wheel ;).

I agree though, my dream for years has been an open source CubeSat bus design that covers say 80% of academic CubeSat use cases and can be modified by the user for the other 20%. Unfortunately I have very little free time these days with family commitments.


Well, the point of a student's project is to reinvent the wheel.

One should limit the number of wheels being reinvented each time, though, which would also reduce the time-to-space of those projects. The design should cover 100% of the CubeSat, so the students can redesign any part they want.


>Yep, but students love reinventing the wheel ;).

And ... professors love making students reinvent the wheel


I thought professors loved making students buy the latest version of the book they wrote discussing how the wheel was invented.


And that


> I agree though, my dream for years has been an open source CubeSat bus design that covers say 80% of academic CubeSat use cases and can be modified by the user for the other 20%

Surely this, or something like it, exists?


Not really. There are a couple of open source projects (LibreCube being the biggest example) but they aren't flight-ready.


Seems like we have similar thoughts as we wrote more or less the same comment 10 minutes apart :) Would love to chat about this, maybe we figure out a way to get there? Email is on my profile.


Email sent. I am generally very busy with family commitments but happy to stay in touch.


have you seen https://github.com/the-aerospace-corporation/satcat5 ? sadly I don't have the FPGA skills to play with it, but the features are very cool


Not just students TBH...


And it would be much cheaper too.

Imagine a group building and managing a robust power supply design for CubeSats that can be immediately ordered from JLCPCB, with a well-maintained BOM.


My dream is to build an open source CubeSat kit (hardware, software, mission control software) with an experience similar to Arduino. Download GUI, load up some examples, and you're directly writing space applications. Ideally it should be capable of high end functions like attitude control and propulsion. The problem is that designing and testing such a thing is a rather expensive endeavour. So far I haven't found a way to get funds to dedicate time to this kind of "abstract"/generic project; most funding organizations want a specific mission proposal that ends up generating useful data from space.


Sounds like you have yourself a YCombinator startup proposal in the making


I wondered about the radiation hardening aspect.

At what altitude does that make a difference?


The annoying answer is "it depends." The main drivers are reliability (ie: how much risk of failure are you willing to accept) and mission life (ionizing dose is cumulative, so a 2 year vs. 10 year mission will have different requirements).

I would say you certainly need to start seriously considering at least some radiation hardening at around 600 km, but missions that can accept a large amount of risk to keep costs down still operate at that altitude with non-hardened parts. Likewise, missions with critical reliability requirements like the International Space Station use radiation hardening even down at 400 km.

The "hard" limit is probably around 1000 km, which is where the inner Van Allen Belt starts. At this altitude, hardware that isn't specifically radiation hardened will fail quickly.

The inner Van Allen Belt also has a bulge that goes down as low as 200 km (the South Atlantic Anomaly), so missions in low inclined orbits that spend a lot of time there or missions that need good reliability when flying through the SAA may also need radiation hardening at comparatively low altitudes.


Always wondered if you could mitigate this somewhat by basically putting your sat in a bag of water and leaving the antenna and solar panels sticking out.


Not really. Radiation shielding has diminishing returns with thickness as the relationship is logarithmic. A few millimeters of aluminum cuts down most of your ionizing dose by orders of magnitude over unshielded, but doing appreciably better requires impractically thick shields.
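As a toy illustration of those diminishing returns (a simple exponential attenuation model with made-up numbers, not a real dose-depth curve):

    import math

    D0 = 100.0              # arbitrary unshielded dose (made-up units)
    ATTEN_LENGTH_MM = 1.0   # made-up attenuation length, for illustration only

    def dose_behind(thickness_mm):
        # Each extra attenuation length cuts the dose by the same factor,
        # so the absolute benefit of every added millimeter shrinks fast.
        return D0 * math.exp(-thickness_mm / ATTEN_LENGTH_MM)

    for t in (0, 1, 2, 4, 8):
        print(f"{t} mm -> {dose_behind(t):.3f}")
    # Going from 0 to 2 mm removes ~86% of the dose; going from 4 to 8 mm
    # removes under 2% of the original dose, for twice the added mass.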

And that only helps with ionizing dose, which is already not really a problem in LEO. The issue is more high energy particles like cosmic rays, which cause single event effects (SEEs) - things like random bit flips in RAM or CPU registers, or transistor latchup that can cause destructive shorts to ground if not mitigated. These are impractical to shield against, unless you want to fly a few feet of lead. So instead we mitigate them (ECC memory, watchdog timers, latchup supervisor circuits that can quickly power cycle a system to clear a latchup before it can cause damage, etc).

If you want to get an idea of how much shielding is effective in a particular orbit, you can use ESA's SPENVIS software (online, free): https://www.spenvis.oma.be/. Despite being free, it's the tool of choice for initial radiation studies for many space missions worldwide.


There are many Raspberry Pis on the International Space Station (AstroPis). They're subject to a similar amount of space radiation as CubeSats in LEO, and they work just fine. There's also an increasing trend of building CubeSat On-Board Computers (OBCs) as some form of Linux System-on-Module (these would traditionally be microcontrollers). I think Raspberry Pis (especially the Compute Modules) are quite suitable for Payload Data Handling (PDH) systems, although I've personally not had a chance to launch a RPi chip yet.


But even in LEO, there must be quite a few SEUs and resets?


I personally haven’t seen confirmed SEUs in the satellites I’ve designed/operated (as in, an ionized particle affecting a transistor/MOSFET in a way that creates a short circuit and can only be cleared with a power cycle). But it’s good practice to design space systems to have current monitoring and automatically power off in case of such events.

Resets etc. are common, most likely caused by software bugs. This is more or less assumed as a fact of life; software for space applications is often as stateless as possible, and when it’s required you’d implement frequent state checkpoints, redundant data storage, etc. These are all common practices that you’d do anyway, it doesn’t make a huge difference if the software is running on a rad-hard microcontroller or off the shelf Linux processor - although (IMO) there are many benefits to the latter (and some downsides as well.) Assuming a base level of reliability, of course - you don’t want your OBC/PDH to overheat or reboot every 5 minutes.


About 50% of cubesats fail, at least partially. I've worked with a dozen or so of them, supporting different people and companies trying to use them. Only one failed to work at all. But many of the others had serious problems of one kind or another that limited their usefulness.


We’ve been using Raspberry Pis in CubeSats for a while, for LEO they are good enough for a year or two. It’s the common consumer grade SD cards that are the weakest point. There are more robust industrial grade SD cards and there are RPis with flash (the compute modules) that can work great.


I've participated in the design or manufacture or launch of dozens of cubesats. The ones with RPis as their flight computers either accept that they'll get messed up by radiation with some regularity throughout their mission (and design other components accordingly, such as timeout watchdog resets), or accept that they'll have a quite limited mission lifetime.


Great project! Been using it for years together with VTS [1] to visualize real-time and propagated satellite positions and attitudes, and also star tracker and payload "beams".

[1] https://timeloop.fr/vts/


Swish is great, but it is sensitive infrastructure. It has already been down multiple times due to DDoS attacks (together with BankId). Don't let Swish completely replace a stash of cash at home.


I've got bad news for you on that front: https://www.explorsweden.com/sweden-destinations/sweden-a-ca...

(tldr Sweden is pretty cashless and a lot of shops don't accept cash)


Speaking from a Swedish perspective, I haven’t even seen Swedish currency in about eight years.

I’m not even joking.


There are lots of places where most customers still pay in cash. Mostly ethnic minorities in stores that cater for ethnic minorities. I find it interesting that ethnicity or religion would end up being a signal informing if one is using cash or digital money.


That's your personal choice. Doesn't mean stores don't accept cash.


No, it's the nature of the country.

And, most companies don’t take cash, especially smaller ones.

Large companies (ICA, the supermarket) take cash, but it's maybe 1 in 200 customers who use it, and the cashier is often visibly flustered when cash is presented.


Please stop mansplaining (and wrongly so at that!).

I happen to be in Sweden at this very moment. And no, cashiers are not flustered and alarmed when they see cash; old people pay with cash every day.

Yes, there are some restaurants that will not take cash.

Last April I was in a place where they did not take cards :)

There's more to Sweden than what you personally experience.


Curious whereabouts you are in Sweden? (i.e. how far north/south).

I'm not Swedish but a lot of my family are, so I visit fairly often. In Gothenburg at least, it's pretty standard in my experience that small grocery stores won't accept cash. I won't pretend that generalises anywhere else.


What does gender have to do with this conversation?


Unfortunately there is no term for "stop explaining my own country to me", but there are similarities with that other situation so… a rhetorical figure? Heard of those?


There was a term: condescending, which is ironic because you were being quite condescending in your reply to me about my observed experience being somehow a personal choice.

I could write a huge diatribe of statistics and behaviours that back me up. It's quite public that even in 2022, across the entire country, only 8% of transactions were made in cash, and the share is even lower in the cities. https://www.riksbank.se/globalassets/media/rapporter/betalni...

And it's also quite well known that many independent businesses do not accept cash (my Coffee Shop, the restaurants I frequent (Quan in Malmo, Marvin in Malmo)).

And yes I've visibly seen cashiers recoil after putting a transaction through to the payment terminal; only to have the person tell them that they'd rather pay in cash (leading to the cashier becoming flustered).

Yes, it's more common that old people use cash (from my observed experience) but increasingly they're using debit cards (not mobile payment methods like younger folks), but no: the country is pretty much cashless; and coming from the UK (where not accepting cash is definitely a more controversial decision outside of London): here it's seen as pretty normal to say "no cash" or "cash free".

Speaking for: Stockholm, Malmo, Gothenburg, Lund, Sundsvall, Oskarshamn and Umeå, and after being in the country for 11 years. I'm not sure what other representation I should be seeing.

Talking about my personal observed experience doesn't invalidate yours, but it feels like I can speak for the overwhelming majority of the population here.

And incidentally I'm also in Sweden right now (https://mrkoll.se/person/Jan-Martin-Harris-Harasym-Kattsunds...); if you'd like me to document a day trying to use only cash I'll let you know how it goes. But I won't be able to get to work (Malmo Busses do not take cash) and I won't be able to eat at any of the restaurants in Malmo (Saluhallen and the others I mentioned above are entirely cashless) so I'll have to use COOP, Willis or ICA exclusively.


Well I've seen a lot of incompetent cashiers. Especially in summer when the real ones are on vacation. I'm not sure what that proves besides that being a cashier isn't as easy as you might think.

I assure you that pressbyrån accepts cash and you can buy tickets there to get to work. Also having a long subscription for public transport on the phone is a bad idea, because if you drop your phone you'll also lose your subscription. And depending on how many days you had left and what phone you had, it could cost you more than the phone.

And handpicking restaurants that don't accept cash is no more a proof than if I were to do the same but handpicking restaurants that don't accept cards.


Happy to hear of any restaurants that only take cash in Sweden tbh


Wasn't your claim that they won't take cash at all? Why the sudden shift?

Anyway, it was a place I went to in April; I'll have to look it up, I don't really remember.


patronizing


Many many restaurants and stores never accept cash here. This would be a huge problem if cash suddenly becomes the only way of paying for everyone at the same time.


Stores accept cash. Some restaurants don't, but restaurants aren't really something necessary for survival.


Yeah, I don't know if I've even seen the new money.


It's another knife in the drawer.


The same can be said about prints.


Yes, but to a lesser extent.


Most books are, sadly, quite worthless nowadays (monetary value). But the Tove Jansson-illustrated Swedish edition of Bilbo is still a sought-after book that usually goes for hundreds of dollars.

Here is an ongoing auction on Tradera (the swedish ebay), currently at SEK 3050 (~$320):

https://www.tradera.com/item/341571/686383148/j-r-r-tolkien-...


>Most books are, sadly, quite worthless nowadays (monetary value).

I am not sure I understand. Aren't books "worthless" because they are readily available? Books are only expensive if they are rare (out of print, special limited edition, hand made or labor intensive, author signed, etc.). I don't think I would want "most" books to be rare and difficult to obtain.


It is becoming increasingly difficult to sell, or even give away, books. In Stockholm, Sweden, where I am most familiar with the situation, most charity second-hand stores no longer accept hardcover books at all. The monetary value of most second-hand books is so low that many end up being thrown away instead of recirculated.

Of course, there are rare antiquarian books that always find a buyer, but they are quite few. And perhaps nobody will mourn the vast number of cheap crime novels thrown away every day, but there is so much more: good, beautiful, high-quality books that happen to be out of fashion for the moment. These, too, are being thrown away.

It has been a long time since public libraries aimed to maintain a somewhat curated (or complete-ish) collection. Nowadays it is all about statistics. If books are not borrowed often enough, they are removed from the shelves and disappear.

Perhaps I am overly pessimistic, but I fear that many, many books will, for all intents and purposes, be lost. There are so many books that aren't scanned/digitized.


There are plenty of books which are scarce but not sought after. Not necessarily because they lack intrinsic value but simply because they are forgotten. Beautifully crafted antique books which can be bought for almost nothing nowadays since the collector’s value isn’t there.


Not my experience - the Victorian books I bought cheap as a teenager I wouldn't even attempt to replace these days. Maybe the books I'm interested in held their value for some reason. (Just picked one at random, the exact binding I have isn't in abe, but the two closest, less decorative examples are 135GBP and 195GBP).


If you find someone who has cataloged and listed what they have, especially of “pre-ISBN” books you’re going to have a certain price floor. And if you want a book, likely others do, too.

But you can also find them at garage and thrift stores, languishing unsold.

Wonder Books has a concept of how to save them: https://booksbythefoot.com/about/

Perhaps people buying books because of how they look on the shelf is bad, but is it worse than the giant recycling grinder machine turning them into pulp to fuel Amazon's Mordor-esque delivery furnaces?


This is only partly true. The point the OP is making is that books aren't sought after. Many books that were bought for 100 dollars in 1980 are worth only a few dollars nowadays, even if they are still relevant. Not many people look for used books.


Or there is _The Old English Exodus_ which is four figures (and annoyingly, the son of the author gainsaid my request for permission to reprint).


Wow, printed in 1994


Yeah, look at that! Fourteenth edition. Then it stayed in print for more than 30 years. I guess earlier editions are even more expensive.

A bit strange though. So many editions should mean there are quite a few in circulation and that they aren't that rare, or pricey.


She wrote a "treatise" on electronic music called An Individual Note of Music, Sound and Electronics. From the back cover:

"[...] a fascinating glimpse into the creative mind behind the Oramics machine. In this engaging account of the possibilities of electronic sound, Oram touches on acoustics, mathematics, cybernetics and esoteric thought, but always returns to the human, urging us to 'see whether we can break open watertight compartments and glance anew' at the world around us."

http://www.anomie-publishing.com/coming-soon-daphne-oram-an-...


After years of maintaining and using an application suite that relies on multicast for internal communication, I would hesitate to use "reliable" and "multicast" in the same sentence. Multicast is great in theory, but comes with so many pitfalls and grievances in practice. Mostly due to unreliable handling of switches, routers, network adapters and TCP/IP stacks in operating systems.

Just to mention a few headaches I've been dealing with over the years: multicast sockets that join the wrong network adapter interface (due to adapter priorities), losing multicast membership after resume from sleep/hibernate, switches/routers just dropping multicast membership after a while (especially when running in VMs and "enterprise" systems like SUSE Linux and Windows Server), all kinds of socket reuse problems, etc.
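For what it's worth, the "wrong adapter" class of problems usually has to be worked around by pinning the group join (and the egress interface) to an explicit interface address instead of letting the OS pick one from adapter priorities. A minimal sketch with plain sockets and made-up addresses:

    import socket
    import struct

    GROUP = "239.1.2.3"         # made-up multicast group
    PORT = 5000
    IFACE_IP = "192.168.10.20"  # the adapter we actually want to use

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group on an explicit interface rather than whatever the OS
    # picks from adapter priorities (the classic "wrong adapter" failure).
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(IFACE_IP))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Pin outgoing traffic to the same interface.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(IFACE_IP))

(Re-issuing the join after resume from sleep is the crude fix for the membership-loss cases.)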

I don't even dare to think about how many hours I have wasted on issues listed above. I would never rely on multicast again when developing a new system.

But that said, the application suite, a mission control system for satellites, works great most of the time (typically on small, controlled subnets, using physical installations instead of VMs) and has served us well.


I recently finished eight years at a place where everyone used multicast every day. It consistently worked very well (except for the time when the networks team just decided one of my groups was against policy and firewalled it without warning).

But this was because the IT people put effort into making it work well. They knew we needed multicast, so they made sure multicast worked. I have no idea what that involved, but presumably it means buying switches that can handle multicast reliably, and then configuring them properly, and then doing whatever host-level hardware selection and configuration is required.

In a previous job, we tried to use multicast having not done any groundwork. Just opened sockets and started sending. It did not go so well: fine at first, but then packets started to go missing, and we spent days debugging and finding the obscure errors in our firewall config. In the end, we did get it working, but I wouldn't have done it again. Multicast is a commitment, and we weren't ready to make it.


Yep. The main issue is that multicast is so sparsely utilized that you can go through most of a career in networking with minimal exposure to it except on a particular peer link. Once you scale support to multi-hop, the institutional knowledge is critical because the individual knowledge is so spotty.


Aeron is very popular in large financial trading systems. Maybe since multicast is already commonplace (that's how most exchanges distribute market data).


"reliable" means that if one of the recipients observes a gap, it can ask for a replay of the missing packets.


Sure. But if you don't have reliability at the network layer you don't have any chance to have a reliable transport layer.



Printers seem to be a solved problem and they mostly use zeroconf which uses mDNS (multicast DNS). I have done a bit of work in the area and I didn't run into the problems you mentioned.

However, I only had semi-strict control of my network, and used plenty of random routers for testing.


Link-local multicast like mDNS can be a bit simpler to wrangle than routed multicast. For the link-local case, a lot of the interop failure cases with network equipment just devolve into "and it turned into a broadcast" instead of "and it wasn't forwarded". You can still run into some multiple-interface issues, though.

