Silent films were inconsistent on frame rate, especially early on. The cameras were hand cranked, so the frame rate was determined by the operator. It wasn't unusual to speed up and slow down the frame rate within a scene to achieve a desired effect. The person cranking the projector, in the early days, had to make a similar judgment. As hand-cranked projectors gave way to electrically powered ones, handling abnormal speeds became difficult, which led to standardization. Early silent films almost always run too fast when projected on modern projectors. As a result, variable speed has always been a desirable feature for projectors used for historic films. Film preservationists would often modify projectors to provide it.
It can similarly be very difficult to find aperture masks to fit the unusual aspect ratios of older films, so a projectionist might have to fabricate one. Fortunately that's pretty easy with a file.
I wrote an article about this almost a year and a half ago, and I was wondering how it's held up. I think Scientific American is remiss in not digging a bit deeper, because to me one of the main parts of this story is a commercial phenomenon: street lighting in the US is pretty monopolized; almost all municipal street lights and a good portion of area lights are made by the same company (Acuity). That makes it a lot less surprising that they would exhibit a similar failure around the same time.
Acuity has acknowledged a phosphor defect in their lights and launched a major warranty repair campaign, but I'm not sure how well that's gone, given that new failures are still occurring. At least a year ago, they were struggling with the scale of the problem: it just takes a long time to schedule replacement of failed fixtures when there are so many of them.
I agree that the article text could be clearer. As I understand it, it was originally incorporated as Multiscreen Corporation (probably to work on some kind of multi-projector format which were in vogue at the time) and then renamed to IMAX Corporation after the success of the IMAX system at the 1970 Expo.
This is definitely an area for debate. I've seen the physical resolution of a 15/70 film frame estimated at 70MP, which is obviously a lot more than the ~8MP of 4K. The MP comparisons between film and digital are a little iffy, though, and digital ought to be sharper than film within the limits of that resolution. Ultimately it comes down to marketing, but, not having had a direct comparison, I would still expect 70mm to look better than a digital projection system.
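For scale, the pixel counts work out like this (the ~70MP film figure is the rough scan-resolution estimate mentioned above, not a measured value):

```python
# Quick pixel-count check for the film-vs-digital comparison.
# The 70MP number for a 15/70 frame is a commonly cited estimate,
# not a spec.
uhd_4k = 3840 * 2160      # consumer "4K" UHD
dci_4k = 4096 * 2160      # DCI 4K cinema projection
imax_film_est = 70e6      # rough estimate for a 15/70 film frame

print(uhd_4k)                  # 8294400, i.e. ~8.3MP
print(imax_film_est / uhd_4k)  # the film estimate is ~8x the pixels
```

So even taking the film estimate with a grain of salt, there's close to an order of magnitude between the formats on raw pixel count alone.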
I think that digital LED domes might beat film because of the excellent light output and color reproduction, but I guess I'll have to shell out for the Sphere to find out as there are very few of that size.
I did some scanning for Universal. Depending on how the image is framed, you can usually just squeeze 4K from 35mm. From the 70mm I had, I easily pulled 8K, and I'm pretty sure I could have gone to 10K.
Thanks! email to me@computer.rip should work, sorry if it has given you trouble. Theater organs are one of my weird little interests, so maybe it's a leap but when I saw a tangential mention that Preston Fleet had been a theater organist some of the dramatic design features of many Omnimax theaters (like the glass-walled projection rooms and displaying the speakers in the preshow) made more sense to me. They're similar to the way many theater organs were installed, especially as they started to become such a niche instrument.
I think you're right, I mixed up some different locations. Here's the cool thing: while I was checking that against newspaper archives I happened to run across an older version of an illustration I saw used in the '90s, but the older version has a more complete caption! It confirms that the Science Museum of Minnesota installation was at least planned to have a Spitz STS like the Fleet. I'll see if I can tell if it was ever installed or not. I've been unsure of whether or not the Fleet was the only example of a combined Omnimax/planetarium.
The same illustration appeared with announcements of some other Omnimax theaters, but I suspect it had just been copied from the Minnesota design without paying much attention. The captions never mention the STS.
However, the side control booth located about halfway up the house, which is present in all of the Omnimax theaters where I've been able to check, is labeled as the "Planetarium console." This could explain the curiosity of the '90s Omnimax theaters having two different control booths. It seems odd to keep that feature without the planetarium projector.
Modern speed control technology has expanded the incline range for steel-wheeled trains quite a bit. Inclines that would have historically pointed towards rubber-tired or non-traction systems are usually within the range of steel wheels with solid-state motor control. Basically the control of torque is much finer than in old resistance-box parallel/series speed controllers, so you can avoid slippage much more easily.
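To put rough numbers on why this matters: tractive effort at the railhead is capped at about mu times the weight on powered axles, and finer torque control lets you operate closer to that limit without breaking into a slip. A toy calculation (illustrative mu values, ignoring rolling resistance, curve drag, and wet rail):

```python
# Toy adhesion-limit calculation. Tractive effort is capped at
# roughly mu * (weight on powered axles), so at the limit the
# steepest sustainable grade is about mu * powered-weight-fraction.
# Values are illustrative, not from any particular vehicle spec.
def adhesion_limited_grade(mu, powered_fraction=1.0):
    """Rough maximum grade, in percent."""
    return mu * powered_fraction * 100

# An EMU/metro with every axle powered vs. a locomotive-hauled
# train with ~20% of its weight on driven wheels, same mu:
print(adhesion_limited_grade(0.30, 1.0))
print(adhesion_limited_grade(0.30, 0.2))
```

The crude resistance-controller era effectively forced a lower usable mu (you had to leave a big margin against slip), which is part of why steep inclines used to push designers toward rubber tires or cable traction.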
It should be understood that the largest impact of the Starfish Prime test, knocking out streetlights, was the result of a very specific design detail of the street lights that is now quite antiquated (they were high-voltage, constant-current loops with carbon disc arc-over cutouts, and the EMP seems to have caused some combination of direct induced voltage and dysregulation of the constant-current power supply that bridged the carbon discs). The required repair was replacement of the carbon discs, which is a routine maintenance item for that type of system, but of course one that had to be done on an unusually large scale that morning. The same problem would not occur today, as constant-current lighting circuits have all but disappeared.
In the case of the burglar alarms, it is hard to prove definitively, but a likely cause of the problem was analog motion detectors (mostly ultrasonic and RF in use at the time) which were already notorious for false alarms due to input voltage instability. Once again, modern equipment is probably less vulnerable.
Many of the detailed experiments in EMP safety are not published due to the strategic sensitivity, but the general gist seems to be along these lines: during the early Cold War, e.g. the 1950s, EMP was generally not taken seriously as a military concern. Starfish Prime was one of a few events that changed the prevailing attitude towards EMP (although the link between the disruptions in Honolulu and the Starfish Prime test was considered somewhat speculative at the time and only well understood decades later). This led to the construction of numerous EMP generators and test facilities by the military, which led to improvements in hardening techniques, some of which have "flowed down" to consumer electronics because they also improve reliability against hazards like lightning. The main conclusion of these tests was that the biggest EMP concern is communications equipment, because it tends to combine sensitive electronics (e.g. amplifiers) with connections to antennas or long leads that will pick up a lot of induced voltage.
The effects of EMP on large-scale infrastructure are very difficult to study, since small-scale tests cannot recreate the whole system. The testing that was performed (mostly taking advantage of atmospheric nuclear testing in Nevada during the 1960s) usually did not find evidence of significant danger. For example, testing with telephone lines found that the existing lightning protection measures were mostly sufficient. But, there has been a long-lingering concern that there are systemic issues (e.g. with the complex systems behavior of electrical grid regulation) that these experiments did not reproduce. Further, solid-state electronics are likely more vulnerable to damage than the higher-voltage equipment of the '60s. Computer modeling has helped to fill this in, but at least in the public sphere, much of the hard research on EMP risks still adds up to a "maybe," with a huge range of possible outcomes.
LEDs use constant current drivers, though. And even if you disagree, LEDs need to be current limited, so something will break with a large pulse of current, the driver or the LEDs themselves.
The constant-current drivers in LED lighting are a very different concept from constant-current lighting circuits, which are a ~1920s technology rarely seen today. Constant-current lighting circuits can be miles long, operate at up to 1kV or so, and require some type of cut-out/bypass feature at each individual light so that a failure of a single bulb does not take the entire circuit out. The problems that constant-current lighting circuits address (maximizing the life of incandescent bulbs) are all solved in different, more robust ways in modern lighting systems. Most significantly, the carbon-disc cutouts that were the direct cause of the street lighting failures are no longer used (even in legacy constant-current lighting systems, where they have been replaced with more modern devices).
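To make the failure mode concrete, here's a toy model of such a loop: the regulator forces a fixed current through every lamp in series, so an open lamp with no working bypass takes down the whole circuit, while a working cutout just bridges the failed lamp:

```python
# Toy model of a ~1920s series constant-current lighting loop.
# All lamps are wired in series, so a burned-out filament opens the
# whole loop unless a cutout device (historically a carbon disc that
# arcs over) bridges the failed lamp.
def lamps_lit(lamps, cutouts):
    """lamps: True where the filament is intact.
    cutouts: True where that lamp's bypass cutout works."""
    if any(not ok and not cut for ok, cut in zip(lamps, cutouts)):
        return 0          # open loop: every lamp on the circuit goes dark
    return sum(lamps)     # failed lamps are bridged; the rest stay lit

print(lamps_lit([True, False, True], [True, True, True]))   # 2
print(lamps_lit([True, False, True], [True, False, True]))  # 0
```

The Starfish Prime failure was roughly the inverse case: induced voltage bridged the carbon discs on lamps that hadn't failed, which is why the fix was disc replacement rather than rewiring.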
There are a few different reasons things are more complex, but one interesting wrench to throw in is that audio tones may not be the best way to send DTMF. Many digital telephone networks support out-of-band DTMF, where the digits are sent as digital keypress events instead of actual tones (tones are usually still played to the user for comfort). There are a few potential benefits, but mostly it improves reliability over iffy connections, reducing instances of one press being detected as two due to a dropout in the middle, for example. I believe 3GPP has supported out-of-band DTMF for some time, but it may not have been common; VoLTE encourages it much more. The other end will always support traditional in-band DTMF, since it doesn't know what type of connection the caller has, so it's not a fatal problem, but it's less than ideal to use in-band DTMF when out-of-band DTMF is supported. This type of consideration is one of the reasons that the telephony part of phones is more complicated than you might think.
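For reference, in-band DTMF means the sender actually synthesizes two sine tones per keypress from the standard keypad frequency matrix (ITU-T Q.23), while out-of-band signaling just carries the key identity as an event. A minimal sketch of the in-band side, illustrative only:

```python
import math

# Standard DTMF keypad matrix (ITU-T Q.23): each key sounds one
# low-group (row) and one high-group (column) frequency at once.
LOW = (697, 770, 852, 941)        # Hz, rows
HIGH = (1209, 1336, 1477, 1633)   # Hz, columns
KEYS = ("123A", "456B", "789C", "*0#D")

def dtmf_freqs(key):
    """Return the (low, high) Hz pair for a keypad key."""
    for r, row in enumerate(KEYS):
        c = row.find(key)
        if c != -1:
            return LOW[r], HIGH[c]
    raise ValueError(f"not a DTMF key: {key!r}")

def dtmf_samples(key, duration=0.1, rate=8000):
    """Synthesize the in-band dual-tone waveform for one keypress."""
    f1, f2 = dtmf_freqs(key)
    return [0.5 * (math.sin(2 * math.pi * f1 * n / rate)
                   + math.sin(2 * math.pi * f2 * n / rate))
            for n in range(int(duration * rate))]

print(dtmf_freqs("5"))   # (770, 1336)
```

The out-of-band equivalent (e.g. RTP telephone-events per RFC 4733) carries just the key and its duration, which is why it survives lossy or transcoded audio paths so much better than a tone detector listening for the waveform above.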
CDMA networks broadcast the time (from GPS) in the clear as part of the base station advertisements, so you used to see CDMA used as a precision time source without any kind of subscription required. CDMA equipment required accurate time for TDM coordination. Unfortunately, GSM uses a different architecture for synchronization and does not require accurate time at all, so you have to be a subscriber to request time information, and even then it is not all that reliable.