When I was young I started my career working in manufacturing. Specifically machine shops with presses, CNC machines, EDM machines, etc.
You would be amazed at the level of hazard people are willing to accept. For example, I recall running one machine, a 300 ton press with an 84"x54" bed and 24" of stroke. It was 25 feet tall and we nicknamed this one Optimus Prime. When Optimus was warmed up he would spit warm hydraulic oil all over the place. A nice fine mist along with a slurry of hydraulic rain drops would cover the area. The solution was to wear Weimao hats made out of disposable cardboard.
Another machine was a 50+ year old roll form machine. How I did not lose my life on this machine is beyond me. Modern machines feed the material automatically and have clutches and brakes with optical sensors so they can stop on a dime. This one literally used inertia and a massive flywheel to function. You got the rollers spinning and fed the material into the first roller, then as it came out you had to guide the material into the next roller. Manually. In between spinning rollers. With your hands. And the machine had a 1,000lb flywheel that gave the whole thing inertia. You only needed to give it throttle once, and the whole machine would spin for 30+ seconds whether it was forming material, or your arm, or whatever. Chances are it would have sucked an entire human into the rollers on one blip of the throttle. And the coup de grâce was that the throttle was a 50lb lever on a swing pivot. If you dropped this lever to turn the machine OFF, it would bounce with gravity and flip itself back on. This is not a third world country. These machines are located in Newburyport, Massachusetts.
I was a lot younger back then, but to this day that is how helicopter engines are made. Those antiquated tools are more important to the major aerospace companies than any operator they've ever had.
You might listen to the podcasts. They are good and they are well researched. Listen: I met Kissinger a few times and spent a few decades of my life working with foreign policy wonks. He was a monster beyond compare.
And I'll just add this in. When I was 24 I got a job at the New York Times working on the tech team that would launch nytimes.com. The "web editor" was one Bernard Gwertzman. Look him up. He was the foreign desk editor of the paper of record for decades. He made his name reporting on the Vietnam war. Would you like to know who his best friend was in 1996 when I met him? Henry Kissinger. He had lunch with him every Wednesday at the Harvard Club. Having read Manufacturing Consent more than once I was flabbergasted. If Chomsky had known this... Anyway, he and I were the first ones to show up for a meeting one time and I asked him how he and Henry K had met. He leaned over and said (with a literal wink) "while I was reporting on Vietnam, but don't tell anyone!"... said the man who, among many other things, 1. reported that we were not bombing Cambodia, 2. supported Pinochet, and 3. didn't report on the East Timor genocide. All policies that were 100% Kissinger.
This is just pure excess cleverness. It's not even obvious how many times this loop runs. There's a comment 1,000 lines away that says this:
#define NUM_INSTANCES 5 // or 3, but change char relative[] = "-2" to "-1"
This is just insanely unmaintainable code. If a string is warranted I would have just done the "stupid" thing with a `sprintf(buf, "%+d", relative);` to make it obviously correct, even if it seems "slower".
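A minimal sketch of what I mean, assuming (as the comment's values imply) that the "-2"/"-1" string is just -(NUM_INSTANCES/2) formatted with its sign; the names and the derivation are my guesses, not the original code:

#include <stdio.h>

#define NUM_INSTANCES 5

int main(void) {
    /* Derive the offset from NUM_INSTANCES instead of keeping a second
       constant in sync by hand; note -(5/2) == -2 and -(3/2) == -1. */
    int relative = -(NUM_INSTANCES / 2);
    char buf[16];
    snprintf(buf, sizeof buf, "%+d", relative);  /* "%+d" always prints the sign */
    printf("relative = %s\n", buf);              /* prints "relative = -2" */
    return 0;
}

One computation, one source of truth, and the comment 1,000 lines away can be deleted.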
These days, all Java runtimes are created with jlink, including the one that's bundled in the JDK, so it's worth it to take a few minutes to learn how to do it yourself for a custom image. The result is not only drastically smaller, but more secure, as the potential attack surface area is much smaller.
If your application is modularised, it can become a part of the image, but, as the article shows, creating a custom runtime is easy and recommended even if your application is not modularised.
BTW, the JDK contains not just a Java runtime but also development tools, as well as an additional copy of the entire class library (this copy, in jmod files, is stored in a format that jlink uses as its input; i.e. it's there only to allow generating new runtime images). Using the entire JDK as a runtime is a real waste of space, since, among other things, it contains all libraries twice.
One minor comment, though. jlink produces Java runtime images or, in short, Java runtimes -- not a JRE. The name JRE refers to a particular kind of Java runtime from a bygone era when there was a global Java runtime environment, used by applets (and Web Start applications). When applets and Web Start were removed, the JRE, and the very concept of one, went away along with them. Some people still use the anachronistic term JRE to refer to a Java runtime (and some companies distribute pre-linked Java runtimes and call them JREs), but the real JRE is gone.
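For reference, a minimal sketch of such an invocation (the module list and directory names are made-up examples; jdeps can compute the modules your app actually needs):

jlink --add-modules java.base,java.sql \
      --strip-debug --no-header-files --no-man-pages \
      --output myruntime

myruntime/bin/java -jar myapp.jar

The stripped, trimmed image is typically a small fraction of the full JDK's size.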
Although most organ music played for liturgical or academic reasons is terrifyingly boring, there is a decent contingent of symphonic and "theatre" (yes, -re) organ people who want something that moves us and maybe erodes some of the concrete work elsewhere in the building.
TLDR: the folks that want to hear organs sound good go find other people that want to hear organs sound good and we have conventions and listen to organs sounding good.
In the mid-seventies at Swarthmore College, we were mired in punched card Fortran programming on a single IBM 1130. The horror, a machine less powerful than the first Apple II. My job six hours a week was to reboot after each crash. People waited hours for their turn to crash the machine. I let a line form once people had their printouts. I'd find the single pair of brackets in a ten line listing, and I'd explain how their index was out of bounds. They thought I was a genius. Late one Saturday night, I made a misguided visit to the computer center while high, smelled sweat and fear, and spun to leave. Too late, a woman's voice: "Dave! I told Professor Pryor he needed you!" We didn't know that Fred Pryor was the economics graduate student freed in the 1962 "Bridge of Spies" prisoner exchange. Later he'd learn that I was his beagle's favorite human, and I'd dog-sit to find steaks for me I couldn't afford, but for now I feared him. So busted! Then I heard this voice: "See these square brackets? See where you initialize this index?" He was spectacularly grateful.
One cannot overstate the rend in the universe that an APL terminal presented, catapulting me decades into the future. I quickly dreamed in APL. For $3 an hour for ten hours (a massive overcharge) I took a professor's 300 line APL program translated literally from BASIC, and wrote a ten line APL program that was much faster. One line was classic APL, swapping + and * in an iterated matrix product for max and min. The other nine lines were input and output. The professor took years to realize I wasn't also calling his code, then published my program.
Summer of 1977 I worked as a commercial APL programmer. Normally one never hires college students for the summer and expects them to be productive. The New York-based vice president was taking the train every day to Philadelphia because the Philly office was so far underwater, and desperate to try anything to save himself the commute. He knew Swarthmore had a terminal, and heard about me. At my interview I made a home-run derby of the questions from the Philly boss. The VP kept trying to intervene so he could put me in my place before hiring me. The tough questions were “dead key problems”. How do you write the following program, if the following keys are broken?
Our client was a mineral mining company, our task a reporting system. The reports were 2-dimensional projections of a 9-dimensional database. The accountants wanted all totals to be consistent across reports, and to be exactly the sums of their rounded components. I broke the news to our team that we needed to start over, rounding the 9-dimensional database once and for all, before generating each report. This took a few weeks; I wrote plenty of report generation helper routines. My coworkers overheard me say on a phone call that I was being paid $5 an hour, and at the time I didn't understand which way they were shocked. I didn't have much to do the rest of the summer.
The mining company VP found me one morning, to ask for a different report, a few pages. He sketched it for me. He found me a few hours later to update his spec. He loved the printout he saw, imagining it was a prototype. “It’s done. I can make your changes in half an hour.”
At a later meeting he explained his own background in computing, how it had been key to his corporate rise. Their Fortran shop would take a month to even begin a project like I had knocked off in a morning, then weeks to finish it. He pleaded with me to pass on Harvard grad school and become his protege.
Some Lisp programmers had similar experiences, back in the day. Today, APL just sounds like another exotic language. In its heyday it was radical.
Thanks for sharing, I love these types of stories. Really makes me pine for the "old" days, and wonder if there's a parallel universe where technology took a very different route such that languages like APL, Lisp, and Smalltalk are used instead of JavaScript, Java, and C#, and what that world looks like.
> Some Lisp programmers had similar experiences, back in the day.
About 20 years ago (so not quite so far back) I was an engineering intern in an industry (nuclear energy) with two main tools: heavy number crunching software in Fortran, and Excel for everything else. The plant I was working at had just gotten some software for managing and tracking fuel movement (reactor cores are composed of several hundred fuel bundles, which are re-arranged every 1-2 years), and my task was to set it up and migrate historical data from Excel spreadsheets, either by entering data manually with the GUI (which wasn't really that good) or using the primitive built-in import/export functions (CSV-based, probably). Good intern task, right?
At some point I noticed this odd window running in the background whenever I started the program: "Gold Hill Common Lisp". Hm, what's this, it seems to have some kind of command line... and so I dived down the rabbit hole of the CL REPL and live image manipulation. I discovered the apropos command (or maybe GHCL had an actual UI for it?), which let me learn about all the internal data structures and methods, which I was able to use to quickly configure the plant specifics and import data.
"Oh, you're done already? OK next we need to get these custom reports out, talk to the vendor about implementing them. And see if you can now export data into our old report generator" (another spreadsheet of course). So I dutifully started the requisition process to get custom reports added, but while that was working through the system, I was stretching my new-found Lisp knowledge to not just dump out report data, but add the functionality to the UI. Coming from a background in C and Fortran I was fully ingrained with "write, compile, run" being how things worked. Image how much it blew my mind when I found out I could call a function in the REPL and actually add a menu to the running program!
One feature of the software was it could be used for "online" fuel movement tracking, which was traditionally done on paper, in duplicate. It's probably still done that way for good reasons, but still nice to have that electronic tracking. I was so proud when we were demonstrating it to the reactor operators, and they asked if we could add some little functionality (the details escape me), and I was able to say "yep, no problem!" No requisitions, no back-and-forth with the vendor, no waiting maybe a year for the feature. Really wish all software was so powerful (although admittedly my hijinks were a bit of a QA nightmare, but the software wasn't considered safety-related since there were many checks and paper records).
Fast-forward a couple years, after much coursework in Fortran and Matlab, I'm about to graduate and am now interviewing with the vendor. Question comes up "so what would you change about our software?" "Well, the interface is a bit clunky, I'd probably want to re-write it in a modern language like C++" :facepalm:.
Only years later, re-discovering CL, along with Racket and Clojure, did it occur to me how much was wrong with that response, and how sad that the key lesson of that semester internship had gone over my head.
Luca is a cash grab, and maybe more sinister. Ever since the CWA - the official contact tracing app - switched to not collecting a central database that could be misused, conservative politicians have been firing at it. "We give too much importance to data privacy", of course without being able to mention a single feature that the privacy-protecting app is missing. Now Luca has arrived, with some semi-prominent advocates, and you see conservative politicians shoving millions into that abomination of proprietary, data-collecting and now evidently copyright-infringing garbage.
It is as if a certain political class had this dream scenario of a registry of every move of the population. They did not get that via the CWA app, and they have attacked it ever since. But Luca could create it.
Don't forget: This is Germany. Very low corruption at the lower levels of society (you will never see a bribe in everyday life) and the basic organisation of the country seems competent. Incredibly high amounts of corruption and incompetence in the higher spheres - Wirecard, Cum-Ex, Kohl's slush funds ("schwarze Kassen"), the governing party's (CDU) current scandal about members gaining millions via corruption when organising FFP2 masks - and the country completely failed to contain Covid after the first more or less successful lockdown. Luca fits right in.
When out walking one night a few years back, I was lucky enough to find a baby jackdaw on the ground that must have fallen from its nest. It was completely cold and not moving. I picked it up and was about to discard it again, thinking it was dead, when I noticed its claw twitch slightly. So I cupped it in my hands to warm it and took it home with me and gradually nursed it back to health.
It stayed with me for about a year or more and even came on a couple of camping holidays round Ireland and Wales with me. I became quite a tourist attraction, as I'd be walking along a beach or in some countryside and suddenly this bird would appear as if from nowhere and land on my shoulder. Passers-by would be gobsmacked and ask to take a photo of the crazy bird man.
As the jackdaw got older, it got more independent and would go flying off for longer and longer periods until, one day, it flew off to join another group of jackdaws in a park and never came back.
One of my coolest memories ever was one night when me and the missus had gone out to dinner. The jackdaw had flown off somewhere earlier in the day and not come back. When we returned home around midnight we found it fast asleep in its cage in the living room. It had come home, found us not there, let itself in through the cat-flap and gone to bed, all by itself. Even though it had always been 'free' in the sense it never had a door on its cage and was free to fly off wherever and whenever it wanted, somehow that incident really touched me as it was like the bird was telling me that it considered our house its home too.
Absolutely amazing creatures and so intelligent. I read once that corvids have similar amounts of neurons in their brains to small monkeys but the folding of the corvid brain is more complex, so those neurons fit into a smaller brain volume. After having the privilege of spending that amount of time with one 'close up' I can certainly believe it.
The advice to start a negotiation with a favourable number as a conceptual anchor makes the mistake of assuming a number should be at the beginning of a negotiation at all. The whole point of a negotiation is not to haggle down to a price, but to discover a "true" price by seeking out principles. The point of negotiation isn't to simply raise the number like in a haggle; it's to influence their number to be the effect of your principles, so that the result is everyone feeling they've got a good deal.
With practice, you can de-anchor discussions simply by re-framing their anchors using new principles.
e.g. Anchor: "Given your current salary is probably around $50k, we think you're way underpaid and we will offer you $52,500, which is a 5% raise just for switching jobs! You can thank me for getting you this incredible deal by signing right now."
Re-frame: "I really appreciate your initial effort on this. We can't disclose my current salary here because my employer treats it as competitive information and I'm still a member of this team so I can't really comment on that. Let's move the numbers discussion out a bit, and get a sense of the value I can provide in the role. However, looking at the glassdoor and city cost of living salary data for your company, the range you suggested is just below the average salary for other people in the role. I can solve one of your major problems with my unique experience out of the gate, which would take at least a quarter to six months in learning curve for your current team."
This simple re-framing: destabilize the premise (their guess of your salary), add two new objective principles - a) competitive information, and b) the Glassdoor/city data source - then provide them with relief from the instability stress you created using a soft sweetener (an offer of hidden value) without even coming back with a number. This is a simplified case, but you get the idea. So yes, anchoring, but now that you see the reframing to your principles (a new anchor), it's much less of an obstacle.
The company I worked for ~8 years ago (a bank) was using Appian for a bunch of its non-core workflows and actually was in the process of using it even more.
Appian is a strange beast: It's a BPMS written as a Java web application that has an in-memory kdb (the db of the K programming language, which is well known on HN) for storing all process data.
The greatest advantage of Appian as compared to other workflow systems I've used (Activiti, jBPM etc) is that it offers a really complete environment for creating a more or less complete workflow without the need to write code. So you can actually teach non-technical people to do it. I remember we had a couple of Business Analysts that were creating very complex workflows back then. They didn't have any technical knowledge; their background was mostly in economics. Of course, for integration with legacy systems or relational databases or doing some tricky UI you'd still need a developer. But most of the work could be done by a not so technical guy, which is the holy grail of such systems. Also, it was a really complete system where you could rather easily implement all your workflow needs (workflow design, user tasks, notifications, exceptions, integrations, reporting, authorizations, business rules, subprocessing, parallel execution etc). After some initial configuration you'd rarely need to touch code unless you needed some custom bpm nodes.
Appian was also claiming that because it was using kdb as a database backend it was very fast. I don't have an opinion on this; it wasn't slow but wasn't blazing fast either. And also when the kdb size grew too much (we're talking some tens of GBs) it was taking a really long time to start (half an hour or something) and needed the same amount of memory from the server (IIRC we had 64GB back then) because it needed to load the kdbs into memory. Also I remember that we had a constant fear that the kdbs would be corrupted somehow (for example if the server rebooted abnormally) and we'd lose data. Or maybe I had that fear; I was never able to "trust" it as I could trust the good old IBM DB2 database the bank had. Concerning data loss, we had a bunch of incidents that were related to having configured Appian as a cluster; after we switched to a single server it was better. The good thing (or maybe bad, because I was never able to learn K) is that it had a complete API in Java so we didn't actually need to touch the kdbs; I remember with awe, however, a support request where an Appian engineer used K to query the kdb directly and see the status of our server.
In any case, the main drawback of Appian is how expensive it was. I don't remember how much, but I remember that the bank had a special agreement to get a low price for Appian (I don't exactly know the details); buying it at full price was too expensive even for the bank (!), especially if it was to be used by all employees, since it had a per-user fee.
Beyond all these, I believe that Appian is a solid product and deserves its success in the enterprise world.
The US has a lot of weird norms in the corporate office culture.
It's a result of a protracted period of tucking more and more of the emotion of an interaction into the background context to avoid offense, conflict, or resentment. A lot of this is early Corporate Culture engineering.
A good beginner's example is the phrase, "per my last email." That's the one you'll see most, but variants such as "per the last meeting" and "per our previous conversation" are included.
This phrase seems innocuous enough, but it's an expression of deep frustration with a tinge of insult at its target. It's essentially saying, "You haven't been paying attention or you're an idiot. Either way, you are wasting my time."
Imperative requests framed as optional questions are kinda a part of that. A constant stream of imperatives from your manager begins to feel like you're being "ordered around" and you're not "appreciated." So a manager will often ask you, giving you the appearance of choice even though you both know you have none, because the context of their authority makes it so.
For a very long time, this worked. It was definitely manipulative, if not borderline brainwashing. Baby Boomers' misguided advice to younger generations about loyalty to employers is a result of this.
For Gen X and on, it's mostly an empty husk of norms that are either meaningless or just the accepted way to insult someone without losing your job. "You're an idiot" will cost you your job, but "Per my last email" carries the same message and doesn't cost you your job. Asking me politely to do a task is just how you assign tasks now; no one thinks it means you actually respect or care about the assignee.
I can definitely see how this layer would cause issues with people on the spectrum. It causes enough problems for neurotypicals.
There is a setting you can change to disable it and make the provider treat all traffic as if it is non-tethered.
adb shell settings put global tether_dun_required 0
Considering how knowledgeable the HN crowd is on all things networking, it surprises me to see so much uncertainty on something so easy to check in the code!
You're completely missing how the NT I/O subsystem works, and how to use it optimally.
> * Asynchronous disk I/O is in practice often not actually asynchronous. Some of these cases are documented (https://support.microsoft.com/en-us/kb/156932), but asychronous I/O also actually blocks in cases that are not listed in that article (unless the disk cache is disabled). This is the reason that node.js always uses threads for file i/o.
The key to NT asynchronous I/O is understanding that the cache manager, memory manager and file system drivers all work in harmony to allow a ReadFile() request either to immediately return the data if it is available in the cache or, if not, to indicate to the caller that an overlapped operation has been started.
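A minimal sketch of that calling pattern in C (error handling trimmed; the file name is illustrative, and a real server would tie the handle to an I/O completion port rather than waiting inline):

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* FILE_FLAG_OVERLAPPED opts this handle into asynchronous I/O */
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    char buf[4096];
    OVERLAPPED ov = {0};   /* read at offset 0 */
    DWORD n = 0;

    if (ReadFile(h, buf, sizeof buf, &n, &ov)) {
        /* Cache hit: the request completed synchronously */
        printf("sync completion: %lu bytes\n", (unsigned long)n);
    } else if (GetLastError() == ERROR_IO_PENDING) {
        /* Cache miss: an overlapped operation was started */
        GetOverlappedResult(h, &ov, &n, TRUE);   /* wait for completion */
        printf("async completion: %lu bytes\n", (unsigned long)n);
    }
    CloseHandle(h);
    return 0;
}

The point is the caller gets cached data with zero extra latency, and only pays the asynchronous machinery when the data actually has to come off the disk.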
Things like extending a file or opening a file, that's not typically hot-path stuff. If you're doing a network oriented socket server, you would submit such a blocking operation to a separate thread pool (I set up separate thread pools for wait events, separate from the normal I/O completion thread pools), and then that I/O thread moves on to the next completion packet in its queue.
> * For sockets, the downside of the 'completion' model that windows is that the user must pre-allocate a buffer for every socket that it wants to receive data on. Open 10k sockets and allocate a 64k receive buffer for all of them - that adds up quickly. The unix epoll/kqueue/select model is much more memory-efficient.
Well that's just flat out wrong. You can set your socket buffer size as large or as small as you want. For PyParallel I don't even use an outgoing send buffer.
Also, the new registered I/O (RIO) model in Windows 8+ is a much better way to handle socket buffers without the constant memcpy'ing between kernel and user space.
> IMO the Windows designers got the general idea to support asynchronous I/O right, but they completely messed up all the details.
I disagree. Write a kernel driver on Linux and NT and you'll see how much superior the NT I/O subsystem is.
I think this reads too much into it and into the specific, and very different, views of people like Alan Kay and Richard Gabriel. Alan was a visionary who actually thought a lot about making programming and computing accessible - even to children. In the Lisp/AI community SOME were working on similar things (Minsky influenced LOGO, for example). Richard Gabriel was running a Lisp vendor, which addressed the UNIX market with a higher-end Lisp development and delivery tool: Lucid Common Lisp. You'd shell out $20k and often much more for a machine and an LCL license. Customers were wealthy companies and government - the usual Lisp/AI customers, who also wanted to deploy stuff efficiently.
> See the Lisp community practiced the Right Thing software philosophy which was also know as "The MIT Approach" and they were also known as "LISP Hackers".
A typical mistake is to believe that there is a single homogeneous Lisp community, a single approach or a single philosophy. In fact the Lisp community was and is extremely diverse. If you look at the LISP hackers, their approach wasn't actually to do the 'right thing' (whatever that is), but to tackle interesting problems and have fun solving them. The Lisp hackers at MIT (and other places like Stanford) were working for government labs swimming in money, and people like Marvin Minsky provided a fantastic playground for them - which then clashed with the 'real world' when DARPA wanted to commercialize the research results it funded, to move the technology into the field of military usage (also doing technology transfer into other application areas like civilian logistics). If you've ever looked at the Lisp Machine code, you see that it is full of interesting ideas, but the actual implementation is extremely diverse and grown over time. Often complex, under-documented, sketchy - not surprising, since much of that was research prototypes and only some was developed for actual commercial markets. The 'MIT Approach' was creating baroque designs. Is it the 'right thing' to have a language standard telling how to print integers as Roman numerals?
Thus 'the right thing' might not be what you think - I think it is more 'image' than reality.
> The destiny of computers is to become interactive intellectual amplifiers for everyone in the world pervasively networked worldwide
That was Alan Kay's vision, not the vision of the Lisp community. Kay's vision was personal computing - and not the crippled version of Apple, IBM and others. Much of the Lisp community was working on AI and related fields, which had much different visions, and Lisp for them was a tool - one they loved or hated. Lisp/AI developers think of it as 'AI assembler' - a low-level language implementing much higher-level languages ( https://en.wikipedia.org/wiki/Fifth-generation_programming_l... ). When no effective systems were available to be used as powerful development environments for small and medium-sized research groups, they invented their own networked / interactive development systems using the technology they knew best: Lisp. They hacked up development environments and even operating systems. But it was not necessary to keep them, once similar platforms were available from the market. With Lucid CL one could develop and deploy a complex Lisp application on a UNIX workstation and not be bound to a Lisp Machine - which was still more expensive, used special hardware/software and was less general as a computing platform. Lucid CL was quite successful in its niche for a while - but Gabriel then tried to make that technology slightly more mainstream by addressing C++ developers with a sophisticated development environment - sophisticated, and expensive. This tool ended up sinking the company. But, anyway, much of the commercial AI development moved to C++ - for example most of the image/signal processing stuff.
Parts of the Lisp community shared different parts of the Kay vision: OOP as basic computing paradigm (Flavors, Object Lisp, CLOS, ...) , accessible programming (LOGO as an educational Lisp dialect), intellectual amplifiers (AI assistants) - but where Kay developed an integrated vision (Smalltalk and especially Smalltalk 80), the Lisp community was walking in all directions at the same time and this created literally hundreds of different implementations. Simple languages like Scheme were implemented a zillion times - but only sometimes with an environment like that of Smalltalk 80.
The Lisp community addressed both medium and high-end needs. Something like Interlisp-D (developed right next to Alan Kay - but as a development tool for Lisp/AI software, not addressing programming for children and the like) was a very unique programming tool, but its user base wasn't large and skewed towards the higher end of development - most of it still in AI. There was no attempt to make that thing popular in a wider way by, for example, selling it to a larger audience. It was eventually commercialized, but Xerox quit the market with the first signs of the AI winter in the 80s. Its actual and practical impact was and is also very limited, since only very few living developers have really seen it and almost no one has an idea what to do with it or even how to use it. It's basically lost art. I saw them in the late 80s when they were on their way out.
I doubt that, out of a hundred authors of advocacy articles, even one has used something like Interlisp-D or Lucid CL to develop or ship software. I know only very few people who have actually started it, much less seen it on a Xerox Lisp Machine. So much of it is based on some old people telling about it, and very few have ever checked how much of what they hear is actually true and how useful that stuff actually is. One reason for it: it's no longer available.
The 'worse is better' paper was slightly misaddressed to Lisp users - since they were not after the operating system and base development tool market (like UNIX and C were) and were not married to a particular system or environment. They used Lisp on mainframes in the 60s/70s, on minicomputers in the 70s/80s, on personal workstations (even developing their own) in the late 70s / 80s, and on personal computers from the 80s onwards - unfortunately Lisp never really arrived on mobile systems, though it participated in an early attempt, transferring a lot of technology to the early Apple Newton projects (or whatever it was called before it was brought to market). The main Lisp influence on what we see as web technology was the early influence on Javascript.
Yes, I have been doing software development in Japan for the past decade.
Of course you are right that no sane person would contemplate this calendar for any purpose other than user-facing display, and even then only where it is absolutely required.
But the insane WTF thing is that it does still seem to be widely required on any kind of financial document. Expense reports, salary statements, that kind of thing. All my banks use this format (and also SJIS text encoding when I download my data (T_T)...)
There are also lots of internal processes^W^W Excel spreadsheets in active use that expect these values so I've seen more than one program that converts 2013 to "H.25"... and it's literally impossible to represent a future date, since we don't know when the emperor might die, even if we have a projected abdication date.
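The arithmetic itself is trivial - Heisei 1 is 1989, so H.N is year 1988 + N - which makes the future-date problem easy to see. A hypothetical sketch (function name and fallback text are mine):

#include <stdio.h>

/* Convert a Gregorian year to Heisei notation (Heisei 1 = 1989).
   A real converter needs the full date (eras change mid-year) and a
   table of all eras - and still can't name an era that doesn't exist yet. */
void print_era(int year) {
    if (year >= 1989)
        printf("%d -> H.%d\n", year, year - 1988);
    else
        printf("%d -> (need a table of earlier eras)\n", year);
}

int main(void) {
    print_era(2013);  /* prints 2013 -> H.25 */
    return 0;
}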
I can't be too smug about it, since I'm American (miles and feet, anybody?) but it is... objectively non-optimal.
Customization is the key issue, and you CAN'T customize. I worked on an SAP implementation at a Fortune 500 company: complexity like crazy, sub-companies of sub-companies of sub-companies, federal, state and local compliance for what the company did, operations in 50 states, an international supply chain - and our integrator was nothing special.
We held firm to the NO CUSTOMIZATION rule; we re-engineered our processes to fit SAP. Other than a few hiccups when integrating all aspects of a company that has 100 sub-companies and is under federal, state and local regulation, the project basically went off without a hitch.
They are still taking upgrades 6 years later, without issue.
Most companies think they are unique and special, and feel justified in needing to customize. All those companies are wrong.
I was on a production DB once, and ran SHOW FULL PROCESSLIST, and saw "delete from events" had been running for 4 seconds. I killed the query, and set up that processlist command to run every 2 seconds. Sure enough, the delete kept reappearing shortly after I killed it. I wasn't on a laptop, but I knew the culprit was somewhere on my floor of the building, so I grabbed our HR woman who was walking by, told her to watch the query window, and showed her how to kill the process if she saw the delete reappear. Then I ran out and searched office to office until I found the culprit -
Our CTO thought he was on his local dev box, and was frustrated that "something" was keeping him from clearing out his testing DB.
Did I get a medal for that? No. Nobody wanted to talk about it ever again.
Hey everyone: What we can call Wolfram Derangement Syndrome—the vast indignation provoked in internet commenters by his vast self-reference—is off topic. Nothing so predictable can be interesting, and predictable rage reflexes are toxic.
The first few times this came up, years ago, it was worth noting. I laughed at the same parodies everyone else did. But by now, Wolfram's odd tic has long been commoditized, and it's our problem if we choose to dwell on it.
Wolfram has other things to say as well, and many of them—recently about Ada Lovelace, George Boole, and now Minsky—are interesting. Those are the things HN should be discussing.
It's a test for this community: can we stay focused on what's interesting? Or must we lose our shit every time the catnip is wiggled?
There are gems in this article that would stimulate a good HN discussion under normal circumstances. Let us put on our anti-troll suits and give that a try.
> It turns out there is very little code that would break if Emacs strings became immutable
Which means that very little Clojure code would break if they just went ahead and used Emacs strings.
> Common Lisp fans claim that lisp-2s have first-class functions, but the way they are kept separate from other first-class values in their own namespace ghetto brings to mind claims of "separate but equal"—at best it is Jim Crow functional programming.
Jim Crow functional programming?? The virtue signalling is strong in this one. He's wrong on the facts, too.
A Lisp-2 doesn't keep the functions inside the ghetto; it keeps everyone else out. Functions can be everywhere, but a non-function can't be the value of a function binding. Functions aren't second class in this system; they're royal class.
Back in the "bad old days" of the simplex NCP protocol [1], before the full duplex TCP/IP protocol legalized same-sex network connections, connect and listen sockets had gender defined by their parity, and all connections were required to use sockets with different parity gender (one even and the other odd -- I can't remember which was which, or if it even mattered -- they just had to be different).
The act of trying to connect an even socket to another even socket, or an odd socket to another odd socket, was considered a "peculiar error" called "homosocketuality", which was strictly forbidden by internet protocols, and mandatory "heterosocketuality" was called the "Anita Bryant feature" [2].
When the error code is zero, the next 8 bit byte is the Stanford peculiar error code, followed by 72 bits of the ailing command returned. Here are the Stanford error codes. [...]
IGN 3 Illegal Gender (Anita Bryant feature--sockets must be heterosocketual, ie. odd to even and even to odd) [...]
Illegal gender in RFC, host hhh/iii, link 0
The host is trying to engage us in homosocketuality. Since this is against the laws of God and ARPA, we naturally refuse to consent to it.
; Try to initiate connection
loginj:
init log,17
sixbit /IMP/
0
jrst noinit
setzm conecb
setom conecb+lsloc
move ac3,hostno
movem ac3,conecb+hloc
setom conecb+wfloc
movei ac3,40
movem ac3,conecb+bsloc
move ac3,consck
trnn ac3,1
jrst gayskt ; only heterosocketuals can win!
movem ac3,conecb+fsloc
mtape log,[
=15
byte (6) 2,24,0,7,7
] ; Time out CLS, RFNM, RFC, and INPut
[...]
gayskt: outstr [asciz/Homosocketuality is prohibited (the Anita Bryant feature)
/]
ife rsexec,<jrst rstart;>exit 1,
(The PDP-10 code above adds the connect and listen socket numbers together, which results in bit 0 being 0 if they are the same gender; then TRNN, "test bits right, no change, skip if non-zero", skips the next instruction (jrst gayskt) if they are different genders.)
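The same parity check, as a hypothetical C translation (names are mine, not from the original code):

#include <stdio.h>

/* Two socket numbers have different parity ("genders") exactly when the
   low bit of their sum is 1: even+odd is the only odd combination. */
static int heterosocketual(unsigned conn, unsigned lstn) {
    return (conn + lstn) & 1;
}

int main(void) {
    printf("2 and 3: %s\n", heterosocketual(2, 3) ? "connect" : "refuse");
    printf("2 and 4: %s\n", heterosocketual(2, 4) ? "connect" : "refuse");
    return 0;
}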
If I had to use one word to describe systemd's integration and adoption into the Linux ecosystem it would have to be "hostile" - the label has unfortunately been applicable in both directions.
Most of the feathers flew around 2012 when the major Linux distributions adopted systemd as their default init system, irreversibly pulling in all of systemd's system management policies as well, many of which were poorly designed.
Several big names in the Linux community (Linus Torvalds and Greg Kroah-Hartman, to name two) have had heated discussions with Lennart Poettering and other people behind systemd about major bugs, design flaws and policy integration issues, with the systemd response consistently being "the way we're doing it is the right way, no patches will be accepted, go away" even when shown multiple times that something contravenes design best practices or tradition (aka principle of least surprise).
For this reason I dislike systemd's highly bureaucratic "manglement" style, and am very sad that all major distributions have adopted it so widely. systemd uses a very dictatorial approach which makes it very very hard to use any other init system without nontrivial and obscure system reconfiguration.
I understand Lennart also built PulseAudio and got it integrated into pretty much all Linux distributions. PA works well now, but if it's having a bad day and I really need sound working in a pinch, I can just kill it and use ALSA/OSS directly.
systemd categorically isn't like that because it's (ostensibly) an init system. However, it comes with so many extra "side features" (which an increasing number of things are depending on) that temporarily shoving it out of the way became impossible very quickly, and before any real documentation was established. I think it's understandable that a large amount of the Linux community has growled and snarled when presented with this set of circumstances.
Nowadays systemd is pretty much part of the woodwork, but the communication and social issues continue.
It's like someone wanted to write the ultimate PHP OS-management framework.
PHP is installed virtually everywhere, unbelievably fast (really!), achieves most tasks with a reasonable minimum of boilerplate (okay, okay, within reason) while remaining verbose enough to be easy to learn, and makes it possible for anyone to get started.
I finally figured it out: this is exactly the same mindset behind systemd! D:
The OP article says that
> systemd should be written in a memory safe language. The obvious picks are Rust or Go.
My immediate response was "no no no no no NO! You need to replace the developers!"
I totally get the idea behind using a kitchen-sink-safe language for Important Critical Stuff™, I really do. And I know that C is slowly going out of vogue for system-level tools of the same order that systemd is in.
What I don't like is that the developers just blindly bumble along thinking everything's fine, denying the importance of issues like these... and get all confused when they get death threats (https://plus.google.com/+LennartPoetteringTheOneAndOnly/post...).
It would be much nicer if the people with the power could just go "ooh, whoa, thanks. Uhh, I don't feel like working on this, but PRs are welcome, and I'll double-check if it's been fixed in two weeks." That acknowledgement is really all people want...