Alan Kay, my favorite curmudgeon, spent decades trying to remind us we keep reinventing concepts that were worked out in the late 70s and he’s disappointed we’ve been running in circles ever since. He’s still disappointed because very few programmers are ever introduced to the history of computer science in the way that artists study the history of art or philosophers the history of philosophy.
When I was at RIT (2006ish?) there was an elective History of Computing course that started with the abacus and worked up to mainframes and networking. I think the professor retired years ago, but the course notes are still online.
To strengthen the GP's point a bit: there are courses on conceptual art (1966-72) or minimal art alone. One "History of Computing" course, while appreciated, doesn't do the field's history enough justice.
To be fair, the history of computing is only ~200 years old even if you go back to Babbage and Lovelace. The history of art is literally as old as recorded history.
Hello fellow RIT alum! I don't think I knew about this class when I went there, though I started as a Computer Engineering student (eventually switched to Computing Security).
The effective history of computing spans a lifetime or three.
There's no sense comparing the two. In the year 2500 it might make sense to be disappointed that people don't compare current computational practices with things done in 2100 or even 1970, but right now, to call what we have "history" does a disservice to the broad meaning of that term.
Another issue: art and philosophy have very limited or zero dependence on a material substrate. Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: cpu speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution, input device characteristics, sensory modalities accessibly via digital to analog conversion, ...). To assert that the way problems were solved in 1970 obviously has dramatic lessons for how to solve them in 2025 seems to me to completely miss what we're actually doing with computers.
If Alan Kay doesn't respond directly to this comment, what is Hacker News even for? :)
You're not wrong about history, but that only strengthens Kay's case. E.g., our gazillion-times better physical substrate should have led an array of hotshot devs to write web apps that run circles around GraIL[1] by 2025. (Note the modeless GUI interaction.) Well, guess what? Such a thing definitely doesn't exist. And that can only point to programmers having a general lack of knowledge about their incredibly short history.
(In reality my hope is some slice of devs have achieved this and I've summoned links to their projects by claiming the opposite on the internet.)
Edit: just so I get the right incantation for summoning links-- I'm talking about the whole enchilada of a visual language that runs and rebuilds the user's flowchart program as the user modelessly edits it.
My gut is your main complaint is largely the modern web ecosystem? Games can run circles around that application, as obvious inspiration. But high end architectural tools are probably more of what you have in mind.
The easy example I used to use to really blow people's minds on what was possible was Mathematica.
That is to say, it isn't so much lack of knowledge of history. It is lack of knowledge of the present. And a seeming unwillingness to want to pay for some things from a lot of folks.
> E.g., our gazillion-times better physical substrate should have led an array of hotshot devs to write web apps that run circles around GraIL[1] by 2025
Why? What problem did it solve that we're suffering from in 2025?
This is just "old person yelling at cloud" territory, though? People often don't know the actors, singers, authors, inventors, whatever from the last few generations. They know the current generation and maybe some originals.
But the rhyme and reason for who is known is not at all obvious. Outside of "who is getting marketed."
The man on the street may not know this history, but serious actors, singers, authors, and inventors themselves certainly know what came before them. If not, they are presumably not actually that interested in their own vocation (which is also normal, by the way).
Do you know this for fact? My gut is that most performers will know of the performers they watched for inspiration. Just like athletes. But few will know the history of their field.
I will agree that the "greats" seem to tend to know all of this. Such that I think I'm agreeing with your parenthetical there. But most practitioners?
I don't know it for fact, no. BUT...I would be very surprised if the average working film director hasn't heard of Ernst Lubitsch or Ringo Lam (here I'm deliberately picking names that aren't commonly known by the public at large, like Steven Spielberg). Obviously we could do this for lots of vocations, but really my statement above was about serious practitioners, people who are deliberately trying to improve their art, rather than just hammer a check (which, again, is normal and fine!).
I'll confirm (and then nerdily complicate) your thesis for the art-form I practiced professionally for the first half of my adult life: yes, every serious actor I've been privileged to work with knows of previous performers, and studies texts they leave behind.
I owned at one time a wonderful two-volume anthology called Actors on Acting, which collected analysis and memoir and advice going back... gosh, to Roman theatre, at least. (The Greeks were more quasi-religious, and therefore mysterious - or maybe the texts just haven't survived. I can't remember reading anything first-hand, but there has been a good deal of experimental "original practice" work done exploring "how would this have worked?"). My graduate scholarship delved into Commedia dell'Arte, and classical Indian theatre, as well as 20th century performers and directors like Grotowski, and Michael Chekhov, and Joan Littlewood. Others, of course, have divergent interests, but anyone I've met who cares can geek out for hours about this stuff.
However, acting (or, really, any performance discipline), is ephemeral. It invokes a live experience, and even if you (and mostly you don't, even for the 20th c) have a filmed version of a seminal performance it's barely anything like actually being there. Nor, until very recently, did anyone really write anything about rehearsal and training practice, which is where the real work gets done.
Even for film, which coincidentally covers kinda the same time-period as "tech" as you mean it, styles of performance - and the camera technology which enables different filming techniques - have changed so much, that what's demanded in one generation isn't much like what's wanted in the next. (I think your invocation of film directors is more apt: there are more "universal" principles in composition and framing than there are in acting styles.)
Acting is a personal, experiential craft, which can't be learned from academic study. You've got to put in hours of failure in the studio, the rehearsal room, and the stage or screen to figure out how to do it well.
Now, here's where I'll pull this back to tech: I think programming is like that, too. Code is ephemeral, and writing it can only be learned by doing. Architecture is ephemeral. Tooling is ephemeral. So, yes: there's a lot to be learned (and should be remembered) from the lessons left by previous generations, but everything about the craft pulls its practitioners in the opposite direction. So, like, I could struggle through a chapter of Knuth, or I could dive into a project of my own, and bump up against those obstacles and solve them for myself. Will it be as efficient? No, but it'll be more immediately satisfying.
Here's another thing I think arts and tech have in common: being a serious practitioner is seldom what gets the prize (if by that you mean $$$). Knuth's not a billionaire, nor are any of my favorite actors Stars. Most people in both disciplines who put in the work for the work's sake get out-shined by folks lucky enough to be in the right place at the right time, or who optimize for hustle or politics or fame. (I've got no problem with the first category, to be clear: god bless their good fortune, and more power to them; the others make me sad about human nature, or capitalism, or something.) In tech, at least, pursuing one's interest is likely to lead to a livable wage - but let's see where our AI masters leave us all in a decade, eh?
Anyway, I've gone on much too much, but you provoked an interesting discussion, and what's the internet for if not for that?
You do know people have imagination, and folks back in 1970 had already imagined pretty much everything we use now, and even posed problems that our computing power still isn't going to solve.
Dude, watch the original Star Trek from the 1960s; you will be surprised.
You might also be surprised that all the AI stuff that's so hyped nowadays was already invented in the 1960s; they just didn't have our hardware to run large models. Read up on neural networks.
> To assert that the way problems were solved in 1970 obviously has dramatic lessons for how to solve them in 2025 seems to me to completely miss what we're actually doing with computers.
True they might not all be "dramatic lessons" for us, but to ignore them and assume that they hold no lessons for us is also a tragic waste of resources and hard-won knowledge.
It's because CS is not cared about as a true science for the most part. Nearly all of the field is focused on consolidating power and money dynamics. No one cares to make a comprehensive history since it might give your competitors an edge.
I have thought that's the common definition and doesn't need much thought...
My dictionary absolutely implies that; it even claims that all the sciences were split off from Philosophy and that a common modern topic of Philosophy is the theory of science. The point of Philosophy is to define truth in all aspects; how is that not science? It's even in the name: "friend of wisdom". Philosophy is even more fundamental and formal than mathematics. Mathematics asks what sound systems are, what properties they have, and how they can be generalized. Philosophy asks what something truly is, what it means to know, what it means to have a system, and whether it's real. The common trope of going ever more fundamental/abstract goes: "biology -> chemistry -> physics -> mathematics -> philosophy"
You're confusing computer science with economics. The ahistorical nature of classical and neoclassical economics basically declares that history is irrelevant. Economists do not really concern themselves with economic history, like at all.
Your first sentences already suggest one comparison between the histories of computing and philosophy: the history of computing ought to be much easier. Most of it is still in living memory. Yet somehow the philosophy people manage it, while we computing people rarely bother.
I always think there is great value in having a whole range of history-of-X courses.
I once thought about a series of PHYS classes that focus on historical ideas and experiments. Students are supposed to replicate the experiments. They have to read book chapters and papers.
History of physics is another history where we have been extremely dependent on the "substrate". Better instruments and capacity to analyze results, obviously, but also advances in mathematics.
Just because a period of history is short doesn't make it _not history_.
Studying history is not just, or even often, a way to rediscover old ways of doing things.
Learning about the people, places, decisions, discussions, and other related context is of intrinsic value.
Also, what does "material substrate" have to do with history? It sounds here like you're using it literally, in which case you're thinking like an engineer and not like a historian. If you're using it metaphorically, well, art and philosophy are absolutely built on layers of what came before.
> Art and philosophy have very limited or zero dependence on a material substrate
Not true for either. For centuries it was very expensive to paint with blue due to the cost of blue pigments (which were essentially crushed gemstones).
Philosophy has advanced considerably since the time of Plato and much of what it studies today is dependent on science and technology. Good luck studying philosophy of quantum mechanics back in the Greek city state era!
The rate of change in computer technology has been orders of magnitudes faster than most other technologies.
Consider transport. Millennia ago, before the domestication of the horse, the fastest a human could travel was by running. That's a peak of about 45 km/h, but around 20 km/h sustained over a long distance for the fastest modern humans; it was probably a bit less then. Now that's about 900 km/h for commercial airplanes (45x faster) or 3500 km/h for the fastest military aircraft ever put in service (178x faster). Space travel is faster still, but so rarely used for practical transport I think we can ignore it here.
My current laptop, made in 2022 is thousands of times faster than my first laptop, made in 1992. It has about 8000 times as much memory. Its network bandwidth is over 4000 times as much. There are few fields where the magnitude of human technology has shifted by such large amounts in any amount of time, much less a fraction of a human lifespan.
That gives even more reason to study the history of CS. Even artists study contemporary art from the last few decades.
Given the pace of CS (like you mentioned) 50 years might as well be centuries and so early computing devices and solutions are worth studying to understand how the technology has evolved and what lessons we can learn and what we can discard.
> Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: cpu speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution
This was clearly true in 01970, but it's mostly false today.
It's still true today for LLMs and, say, photorealistic VR. But what I'm doing right now is typing ASCII text into an HTML form that I will then submit, adding my comment to a persistent database where you and others can read it later. The main differences between this and a guestbook CGI 30 years ago or maybe even a dialup BBS 40 years ago have very little to do with the performance of the physical substrate. It has more in common with the People's Computer Company's Community Memory 55 years ago (?) using teletypes and an SDS 940 than with LLMs and GPU raytracing.
Sometime around 01990 the crucial limiting factor in computer usefulness went from being the performance of the physical substrate to being the programmer's imagination. This happened earlier for some applications than for others; livestreaming videogames probably requires a computer from 02010 or later, or special-purpose hardware to handle the video data.
Screensavers and demoscene prods used to be attempts to push the limits of what that physical substrate could do. When I saw Future Crew's "Unreal", on a 50MHz(?) 80486, around 01993, I had never seen a computer display anything like that before. I couldn't believe it was even possible. XScreensaver contains a museum of screensavers from this period, which displayed things normally beyond the computer's ability. But, in 01998, my office computer was a dual-processor 200MHz Pentium Pro, and it had a screensaver that displayed fullscreen high-resolution clips from a Star Trek movie.
From then on, a computer screen could display literally anything the human eye could see, as long as it was prerendered. The dependence on the physical substrate had been severed. As Zombocom says, the only limit was your imagination. The demoscene retreated into retrocomputing and sizecoding compos, replaced by Shockwave, Flash, and HTML, which freed nontechnical users to materialize their imaginings.
The same thing had happened with still 2-D monochrome graphics in the 01980s; that was the desktop publishing revolution. Before that, you had to learn to program to make graphics on a computer, and the graphics were strongly constrained by the physical substrate. But once the physical substrate was good enough, further improvements didn't open up any new possible expressions. You can print the same things on a LaserWriter from 01985 that you can print on the latest black-and-white laser printer. The dependence on the physical substrate has been severed.
For things you can do with ASCII text without an LLM, the cut happened even earlier. That's why we still format our mail with RFC-822, our equations with TeX, and in some cases our code with Emacs, all of whose original physical substrate was a PDP-10.
Most things people do with computers today, and in particular the most important things, are things people have been doing with computers in nearly the same way for 30 years (if in far smaller numbers back then), when the physical substrate was very different: 300 times slower, 300 times smaller, a much smaller network.
Except, maybe, mass emotional manipulation, doomscrolling, LLMs, mass surveillance, and streaming video.
A different reason to study the history of computing, though, is the sense in which your claim is true.
Perceptrons were investigated in the 01950s and largely abandoned after Minsky & Papert's book, and experienced some revival as "neural networks" in the 80s. In the 90s the US Postal Service deployed them to recognize handwritten addresses on snailmail envelopes. (A friend of mine who worked on the project told me that they discovered by serendipity that decreasing the learning rate over time was critical.) Dr. Dobb's hosted a programming contest for handwriting recognition; one entry used a neural network, but was disqualified for running too slowly, though it did best on the test data they had the patience to run it on. But in the early 21st century connectionist theories of AI were far outside the mainstream; they were only a matter of the history of computation. Although a friend of mine in 02005 or so explained to me how ConvNets worked and that they were the state-of-the-art OCR algorithm at the time.
Then ImageNet changed everything, and now we're writing production code with agentic LLMs.
Many things that people have tried before that didn't work at the time, limited by the physical substrate, might work now.
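For anyone who has never seen how little code that 01950s idea actually takes: here is a toy perceptron sketch of my own, with the decreasing-learning-rate trick my friend mentioned. This is purely illustrative and nothing like the USPS system; every name and number in it is made up.

    # Toy perceptron with a decaying learning rate -- illustrative only.
    import random

    def train_perceptron(samples, epochs=50, lr0=1.0, decay=0.05):
        # samples: list of (features, label) pairs with label in {-1, +1}
        n = len(samples[0][0])
        w = [0.0] * n
        b = 0.0
        for epoch in range(epochs):
            lr = lr0 / (1.0 + decay * epoch)   # decrease the learning rate over time
            for x, y in samples:
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:        # misclassified: nudge the boundary
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    # Toy, linearly separable data: label is +1 iff x0 + x1 > 1.
    pts = [(random.random(), random.random()) for _ in range(200)]
    samples = [((x0, x1), 1 if x0 + x1 > 1 else -1) for x0, x1 in pts]
    print(train_perceptron(samples))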
Usually it's because of an initiative by the Long Now Foundation that is supposed, among other things, to raise awareness about their 10,000-year clock and what it stands for.
> Another issue: art and philosophy have very limited or zero dependence on a material substrate. Computation has overwhelming dependence on the performance of its physical substrate
That's absolutely false. Do you know why MCM furniture is characterized by bent plywood? It's because we developed the glues that enabled this during World War II. In fashion you had a lot more colors beginning in the mid 1800s because of the development of synthetic dyes. It's no accident that oil paints were perfected around Holland (a major place for flax and thus linseed oil), which is what the Dutch masters _did_. Architectural McMansions began because of the development of pre-fab roof trusses in the 70s and 80s.
How about philosophy? Well, the Industrial Revolution and its consequences have been a disaster for the human race. I could go on.
The issue is that engineers think they're smart and can design things from first principles. The problem is that they're really not that smart, and they design things from first principles anyway.
I recall seeing a project on github with a comment:
Q: "Sooo... what does this do that Ansible doesn't?"
A: "I've never heard of Ansible until now."
Lots of people think they are the first to come across some concept or need. Like every generation when they listen to songs with references to drugs and sex.
I think software engineering has so many social problems, at a level that other fields just don't. Dogmatism, superstition, toxicity ... you name it.
I reflect on university, and one of the most interesting projects I did was an 'essay on the history of <operating system of your choice>' as part of an OS course. I chose OS X (Snow Leopard), and digging into the history gave me fantastic insights into software development, Unix, and software commercialisation. I echo your Mr Kay's sentiments entirely.
Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix. Maybe LLMs will help with this, but they seem to reinforce convergence to the mean in many cases, as those to be educated are not in a position to ask the deeper questions.
> Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix.
In my observation the problem rather is that many of the people who want to "learn" computer science actually just want to get a certification to get a cushy job at some MAGNA company, and then they complain about the "academic ivory tower" stuff that they learned at the university.
So, the big problem is not the lack of competent educators, but practitioners actively sabotaging the teaching of topics that they don't consider to be relevant for the job at a MAGNA company. The same holds for the bigwigs at such companies.
I sometimes even entertain the conspiracy theory that if a lot of graduates saw that what their work at these MAGNA companies involves comes from the history of computer science, is often decades old, and has been repeated multiple times over the decades, this might demotivate employees who are supposed to believe that they work on the "most important, soon to be world changing" thing.
Your experience with bad teachers seems more like an argument in favor of better education than against it. It's possible to develop better teaching material and methods if there is focus on a subject, even if it's time consuming.
Not really, you only need one really good teacher who can put their knowledge into written or video form so it's easily shared with others. It actually only takes one great mind.
At least for history of economics, I think it's harder to really grasp modern economic thinking without considering the layers it's built upon, the context ideas were developed within etc...
That's probably true for macro-economics. Alas that's also the part where people disagree about whether it made objective progress.
Micro-economics is much more approachable with experiments etc.
Btw, I didn't suggest completely disregarding history. Physics and civil engineering don't completely disregard their histories, either. But they also don't engage in the constant navel gazing and re-hashing that a good chunk of the philosophers do.
I can't concur enough. We don't teach "how to design computers and better methods to interface with them"; we keep rehashing the same stuff over and over again. It gets worse over time, and the effect is that what Engelbart called "intelligence augmenters" become "super televisions that cause you political and social angst."
How far we have fallen, but so great the reward if we could "lift ourselves up again." I have hope in people like Bret Victor and Brenda Laurel.
I spent my teens explaining to my mum that main memory (which used to be 'core', she interjected) was now RAM, a record was now a row, a thin client was now a browser, PF keys were now just function keys. And then from this basis I watched Windows Forms and .NET and all the iterations of the JDK and the early churn of non-standardized JavaScript all float by, and thought, 'hmm.'
Maybe he managed to work them out and understand them in the '70s, if you believe him. But he has certainly never managed to convey that understanding to anyone else. Frankly I think if you fail to explain your discovery to even a fraction of the wider community, you haven't actually discovered it.
> He’s still disappointed because very few programmers are ever introduced to the history of computer science in the way that artists study the history of art or philosophers the history of philosophy.
Software development/programming is a field where the importance of planning and design lies somewhere between ignored and outright despised. The role of software architect is both ridiculed and vilified, whereas the role of the brave solo developer is elevated to the status of hero.
What you get from that cultural mix is a community that values ad-hoc solutions made up on the spot by inexperienced programmers who managed to get something up and running, and at the same time is hostile towards those who take the time to learn from history and evaluate tradeoffs.
See for example the cliche of clueless developers attacking even the most basic aspects of software architecture such as the existence of design patterns.
With that sort of community, how does anyone expect to build respect for prior work?
Maybe history teaches us that planning and design do not work very well....
I think one of the problems is that if someone uses a word, one still does not know what it means. A person can say 'design patterns' and what he is actually doing is a very good use of them that really helps to clarify the code. Another person can say 'design patterns' and is busy creating an overengineered mess that is not applicable to the actual situation where the program is supposed to work.
Simulate increasingly unethical product requests and deadlines. The only way to pass is to refuse and justify your refusal with professional standards.
Watch as peers increasingly overpromise and accept unethical product requests and deadlines and leave you high, mighty, and picking up the scraps as they move on to the next thing.
Conventional university degrees already contain practical examples of the principles of both these courses in the form of cheating and the social dynamics around it.
Hmm I seem to remember considering the ethics of contracting for the military, for example, but not in a mock sprint planning, if that's what you mean.
I've never seen this. Is this some weird right wing talking point?
I have seen a disconnect between what is covered in ethics classes and the types of scenarios students will encounter in the working world. My (one) ethics class was useless. But not political even with the redrawn ethical map of the Trump era.
Building systems that don't bake bias into code or worrying about privacy in a dating app is probably the kind of politics the parent is talking about.
I'm not really sure how you could totally separate politics (forming laws) from ethics anyway.
I would add debugging as a course. Maybe they already teach this, but learning how to dive deep to find the root cause of defects, and the various tools for doing so, would have been enormously helpful for me. Perhaps this already exists.
Great idea. I had a chemistry lab in college where I was given a vial of a white powder on the first day of class and the course was complete when I identified what it was.
A similar course in CS would give each student a legacy codebase with a few dozen bugs and performance / scaling problems. When the code passes all unit and integration tests, the course is complete.
I'm not negating that. But in most cases, using a debugger is the vastly superior way to debug issues, yet many developers have never bothered to pick up that knowledge. Like a carpenter who hasn't ever learned to work with power tools and insists on using manual screwdrivers for everything.
Interactive debugging tends to be unhelpful on anything realtime. By the time you look at what's going on, all the timing constraints are shot and everything is broken. You may be able to see the current state of that thread at that moment, but you can't move forward from there. (Unless you freeze all the threads - but then, all the threads involved might not be in one process, or even on one machine.)
CSCI 0001: Functional programming and type theory (taught in English [0])
For decades, the academia mafia, through impenetrable jargon and intimidating equations, have successfully prevented the masses from adopting this beautiful paradigm of computation. That changes now. Join us to learn why monads really are monoids in the category of endofunctors (oh my! sorry about that).
"CSCI 4020: Writing Fast Code in Slow Languages" does exist, at least in the book form. Teach algorithmic complexity theory in slowest possible language like VB or Ruby. Then demonstrate how O(N) in Ruby trumps O(N^2) in C++.
One of my childhood books compared bubble sort implemented in FORTRAN and running on a Cray-1 and quicksort implemented in BASIC and running on TRS-80.
The BASIC implementation started to outrun the supercomputer at some surprisingly pedestrian array sizes. I was properly impressed.
To be fair, the standard bubble sort algorithm isn't vectorized, and so can only use about 5% of the power of a Cray-1. Which is good for another factor of about 5 in the array size.
Yes, as I understand it, its 80MHz clock gave it a 12.5ns memory access time, and I think it normally accessed memory four times per instruction, enabling it to do 20 MIPS (of 64-bit ALU ops). But the vector units could deliver 160 megaflops, and usually did. I think a TRS-80 could technically run about half a million instructions per second (depending on what they were) but only about 0.05 Dhrystone MIPS—see the Cromemco Z2 on https://netlib.org/performance/html/dhrystone.data.col0.html for a comparable machine.
So we can estimate the Cray's scalar performance at 400× the TRS-80's. On that assumption, Quicksort on the TRS-80 beats the Cray somewhere between 10000 items and 100_000 items. This probably falsifies the claim—10000 items only fits in the TRS-80's 48KiB maximum memory if the items are 4 bytes or less, and although external sorting is certainly a thing, Quicksort in particular is not well-suited to it.
But wait, BASIC on the TRS-80 was specified. I haven't benchmarked it, but I think that's about another factor of 40 performance loss. In that case the crossover isn't until between 100_000 and 1_000_000 items.
So the claim is probably wrong, but close to correct. It would be correct if you replaced the TRS-80 with a slightly faster microcomputer with more RAM, like the Apple IIGS, the Commodore 128, or the IBM PC-AT.
We had this as a lab in a learning systems course: converting Python loops into NumPy vector manipulation (map/reduce), then into TensorFlow operations, and measuring the speed.
It gave a good idea of how Python is even remotely useful for AI.
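Something like this minimal sketch, if memory serves (the array size and the sum-of-squares task are made up, not the actual lab code):

    # A Python loop vs. the NumPy-vectorized version of the same reduction.
    import time
    import numpy as np

    xs = np.random.rand(1_000_000)

    t0 = time.perf_counter()
    total = 0.0
    for x in xs:                        # interpreted loop: bytecode dispatch per element
        total += x * x
    t1 = time.perf_counter()

    vec_total = float(np.dot(xs, xs))   # one call into compiled C/BLAS
    t2 = time.perf_counter()

    print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.5f}s  "
          f"same result: {np.isclose(total, vec_total)}")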
We are rebuilding a core infrastructure system from unmaintained Python (it's from before our company was bought and everyone left) to Java. It's nothing interesting, standard ML infrastructure fare. A straightforward, uncareful, basically-a-weekend implementation in Java was over ten times faster.
The reason is very simple: Python takes longer for a few function calls than Java takes to do everything. There's nothing I can do to fix that.
I wrote a portion of code that just takes a list of 170ish simple functions and runs them, and they are such that it should be parallelizable, but I was rushing and just slapped the boring serialized version into place to get things working. I'll fix it when we need to be faster, I thought.
The entire thing runs in a couple nanoseconds.
So much of our industry is writing godawful interpreted code and then having to do crazy engineering to get stupid interpreted languages to go a little faster.
Oh, and this was before I fixed it so the code didn't rebuild a constant regex pattern 100k times per task.
But our computers are so stupidly fast. It's so refreshing to be able to just write code and it runs as fast as computers run. The naive, trivial to read and understand code just works. I don't need a PhD to write it, understand it, or come up with it.
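(On the regex thing, a hypothetical Python rendering of the fix, since I can't share the Java. Note that Python's re module caches compiled patterns internally, so the penalty there is milder than Java's Pattern.compile in a loop, but the hoist-it-out-of-the-hot-path fix is the same. The data and pattern below are made up.)

    import re

    LINES = ["task-%d finished in 42ms" % i for i in range(100_000)]

    def durations_rebuilding():
        # Looks up/builds the pattern on every single iteration (the original sin).
        return [re.search(r"in (\d+)ms", line).group(1) for line in LINES]

    # Hoist the constant pattern out of the loop and compile it exactly once.
    IN_MS = re.compile(r"in (\d+)ms")

    def durations_hoisted():
        return [IN_MS.search(line).group(1) for line in LINES]

    assert durations_rebuilding() == durations_hoisted()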
Big O notation drops the coefficient; sometimes that coefficient is massive enough that O(N) only beats O(N^2) at billions of iterations.
Premature optimisation is a massive issue; spending days finding a better algorithm is often not worth the time spent, since the worse algorithm was plenty good enough.
The real world beats algorithmic complexity many, many times: you spend ages building a complex data structure with heap allocations scattered all over the heap to get O(N), while it's significantly faster to just do the stupid thing over linear memory.
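A toy sketch of what I mean (illustrative only, and yes, in CPython everything is boxed, so "linear memory" is generous here, but the setup-cost point stands): for tiny inputs the dumb flat scan wins, because building the "smarter" structure costs more than it saves.

    # Constant factors in action: for small N, dumb and flat often wins.
    import timeit

    needles = list(range(8))            # a tiny collection, as in many real hot paths

    def scan_list(x):                   # O(N) scan over a plain list
        return x in needles

    def build_set_then_lookup(x):       # "smarter" O(1) lookup, but pays to build the set
        return x in set(needles)

    print("list scan:      ", timeit.timeit(lambda: scan_list(3), number=1_000_000))
    print("build set+lookup:", timeit.timeit(lambda: build_set_then_lookup(3), number=1_000_000))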
Python has come a long way. It's never gonna win for something like high-frequency trading, but it will be super competitive in areas you wouldn't expect.
The Python interpreter and core library are mostly C code, right? Even a Python library can be coded in C. If you want to sort an array, for example, it will cost more in Python because it's sorting Python objects, but the sort itself is coded in C.
I imagine this is a class specifically about slow languages. Writing code that doesn't get garbage collected, using vectorized operations (NumPy), exploiting the JIT to achieve performance greater than normal C, etc.
After reading this article, I suddenly remembered an elective I took in college called “Software Archaeology.” The professor asked us to reimplement compiler exercises from the 1970s. At the time it felt useless, but later I realized that course taught me more about system design than any modern framework ever did.
Software archaeology in its most literal form would be a fantastic course addition for anyone going into a company with a medium-large codebase > 5 years old. Especially if you end up at a FAANG or something akin to it.
Being able to navigate not just a codebase but bugs/tickets attached to it, discussions in documents, old wiki pages that half work, extracting context clues from versioning history, tracing people by the team they worked on at the time...digital detective work is a serious part of the job sometimes.
Many computer science programs today have basically turned into coding trade schools.
Students can use frameworks, but they don’t understand why languages are designed the way they are, or how systems evolved over time.
It’s important to remember that computing is also a field of ideas and thought, not just implementation.
My large state university still has the same core required classes as it did 25 years ago. I don't think CS programs can veer too far away from teaching core computer science without losing accreditation.
Not only encoding/decoding, but searching and sorting are also different. We might also cover font rendering, Unicode modifiers, and emoji. They are so common and fundamental, yet very few understand them.
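A tiny Python taste of the kind of exercise I mean (just a sketch):

    # "Length" and "equality" are slipperier than they look.
    import unicodedata

    cafe_nfc = "caf\u00e9"           # 'é' as one precomposed code point
    cafe_nfd = "cafe\u0301"          # 'e' + combining acute accent

    print(cafe_nfc == cafe_nfd)                                  # False
    print(len(cafe_nfc), len(cafe_nfd))                          # 4 5
    print(unicodedata.normalize("NFC", cafe_nfd) == cafe_nfc)    # True

    # Woman + zero-width joiner + laptop: one "emoji" on screen, three code points.
    technologist = "\U0001F469\u200D\U0001F4BB"
    print(len(technologist))                                     # 3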
Where’s the course on managing your reaction when the client starts moving the goal posts on a project that you didn’t specify well enough (or at all), because you’re a young eager developer without any scars yet?
Back in the 90s, this was actually a sneaky part of Dartmouth's CS23 Software Engineering course. At least half your grade came from a 5-person group software project which took half a semester. The groups were chosen for you, of course.
The professors had a habit of sending out an email one week before the due date (right before finals week) which contained several updates to the spec.
It was a surprisingly effective course.
(Dartmouth also followed this up with a theory course that often required writing about 10 pages of proofs per week. I guess they wanted a balance of practice and theory, which isn't the worst way to teach CS.)
In uni we had a semester long group project with the stronger coders as project leaders. Group leaders controlled pass/fail for the group members and vice versa. After the leaders put together project plans for their teams and everyone was supposed to start working WHOOPSIE reorg and all the leaders got put on random teams and new people got "promoted" into leader (of groups they didn't create the plan for). If the project didn't work at the end the entire group failed.
I've never been taught anything more clearly than the lessons from that class.
Shit, I was thinking about exactly the same thing: the professor deliberately changes requirements in the last week to mess with the students and give them a bit of a taste of real work.
My uni kind of had that course! They just didn't tell us what it was going to be ahead of time and it was horrendous. We all absolutely hated the professor but it was required to graduate so we all came up with various coping strategies and at the very end he said "congratulations this is what the real world is like!"
(I didn't believe him at the time, but in some ways he really didn't go far enough...)
People in the tech industry seem to have no idea how the systems in the wild work. Enterprise Java runs the backbone of operations for all large business organisations, such as banks. It is just as grounded as MS Office is. It is object-oriented software that is running the bulk of the production environments of the world. Who is going to maintain these systems for the next few decades?
And in reality, there is nothing wrong with Java or object orientation. It has the best battle-tested and rich ecosystem to build enterprise systems. It mirrors business entities and a natural hierarchy and evolution of things. It has a vast pool of skilled resources and is easy to maintain. Python is still a baby when it comes to operational readiness and integrations. You might get excited about Jupyter cells and the REPL, but that is all dev-play, not production.
Unlearning OOP does not necessarily involve forgetting abstraction and the concept of an object. "Unlearning OOP" involves freeing yourself from the notion that all programming should be designed as an object hierarchy.
There is/was a tendency in object-oriented programming to consider that it is the only way Real Software™ is made. You tend to focus more on the architecture of your system than its actual features.
Notice the prerequisite to unlearning something is learning it first. I don't think anyone proposes that the concept of an object is useless.
I don’t necessarily agree with a somewhat childish “unlearn OOP” idea, but… a lot of that enterprise software is of bad quality. Whether it’s OOP or something else’s fault, simply stating that a lot of backbone is written in Java does not prove Java is a good choice, nor does it prove that there is nothing wrong with Java.
Maybe it should be "really learn about object-oriented programming (at a low level)".
Methods are functions, with an implicit argument usually called "self". Unless they are static, in which case they are just regular functions. Classes are data structures, abstract methods are function pointers, and inheritance is adding data at the end of an existing data structure. In fact, inheritance is like a special case of composition.
Those who oppose object-oriented programming the most are typically the functional programming guys. But what is a function variable if not an object with a single abstract method? Add attributes and you have a closure.
It will all end up as machine code in the end, and understanding how all these fancy features end up on the bare metal help understanding how seemingly different concepts relate.
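A quick Python sketch of that closure/object equivalence (the names are made up, purely illustrative):

    # A single-method object and a closure are two spellings of the same idea.

    class Counter:                      # "object" spelling: state lives in attributes
        def __init__(self):
            self.n = 0
        def __call__(self):
            self.n += 1
            return self.n

    def make_counter():                 # "closure" spelling: state lives in captured variables
        n = 0
        def bump():
            nonlocal n
            n += 1
            return n
        return bump

    c1, c2 = Counter(), make_counter()
    print(c1(), c1(), c2(), c2())       # 1 2 1 2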
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
For those who don't know, Guy L. Steele (Anton's interlocutor here) is the original designer of the Scheme programming language, the author of the "Lambda: The Ultimate..." series of papers, and the second biggest contributor to the design of Java.
> It mirrors the business entities and a natural hierarchy and evolution of things
I've seen the entities you're describing from the inside and they resemble nothing natural besides perhaps a tumor. Hopefully we can just dispense with them rather than shackle the next generation with the burden of maintaining them.
If you think programmers should defer to bank managers on what the best way to design software is, do you also think that bank managers should defer to plumbers on what the best way to manage liquidity is?
It's not bank managers but IT managers who largely choose the tech stack; the rest is inertia and things working. Bank managers don't give a fuck, in the same way they couldn't care less what plumbing material is used in their buildings. They care about long term stability, as they should. The same is true for almost any business outside SV.
Parent is correct; I've been doing this my entire (profitable to the absolute limit) career and will most probably retire doing the same. You clearly seem to lack any expertise in the field discussed.
"CSCI 2100: Unlearning Object-Oriented Programming" immediately caused me to disagree this one.
When I code in C, in the end, I usually miss the syntax for defining "objects/classes" (structs with functions and access controls), the syntax/notation that encapsulates/binds/groups the related state and its functions/API to define some specific concept/model == custom data type.
Of course OOP can be taken to extreme complexity and then lose its usefulness.
That happened not by choice, but by chance and due to "OOP consultants" running rampant in the 2000s. Source: I have to maintain Java slop in a bank, and used to maintain Java slop in manufacturing.
> Enterprise Java runs the backbone of operations for all of large business organisations such as banks.
This is rather an anti-recommendation. At this point I expect from a bank only to reliably log in, preview my balance and transaction history, and receive and send bank transfers... and they oftentimes fail at even this basic feature set. I don't even need credit or interest rates from them.
Banks as an example of "getting things done" is laughable. Real industry gets things done: manufacturing, construction, healthcare etc. We could do without the massive leech that is the finance sector.
"Real industry" also has quite a hard time getting things done these days. If you look around at the software landscape, you'll notice that "getting things done" is much easier for companies whose software interfaces less with the real world. Banking, government, defense, healthcare etc. are all places where real-life regulation has a trickle-down effect on the actual speed of producing software. The rise of big tech companies as the dominant economic powerhouses of our time is only further evidence that it's easier to just do a lot of things over the internet and even preferred, because the market rewards it. We would do well to figure out how to get stuff done in the real world again.
And that needs skyscrapers and entire financial districts to achieve? This is a tiny fraction of the "work" done by the financial sector. Most of what they do is pointless trading between themselves and sapping the output of the real economy.
The banks' real "product" is trust. You will work an entire month for a "bank transfer" (something you can't even hold, let alone eat or burn) because you believe your landlord will similarly accept a "bank transfer" in exchange for you rent (or, if you have a mortgage, you work an entire month because you believe this means you will be able to continue living in your house unchallenged). This has absolutely nothing to do with what programming languages or paradigms they have in place.
There are so many issues and costs that people buy stablecoins to exchange money with smart contracts that have no substantial guarantee other than the issuer's right to block funds...
> PSYC 4410: Obsessions of the Programmer Mind
Identify and understand tangential topics that software developers frequently fixate on: code formatting, taxonomy, type systems, splitting projects into too many files. Includes detailed study of knee-jerk criticism when exposed to unfamiliar systems.
Each class would "just" study a hackernews thread in depth.
- telling clients that the proof-of-concept is non-conclusive so it's either bag it or try something different
- spending innovation tokens in something else than a new frontend framework and/or backend language
- understanding that project management methods are tools (not rites) and if your daily standup is 45min then there's a problem
Self-taught Python, and PHP before that. The course I always wanted might be named, “Systems: static and flexible abstractions”
Which is an expression of the frustration I had trying to learn frameworks (I forgot I attempted to learn Drupal until I read this question), where I found it nearly impossible to ask questions on SO (a lot of ‘stay in your lane’ remarks).
And the frustrations I feel today when I try to reconcile my understanding of Python used in the ‘systems’ I created for CLI scripts, and the baffling brain fog I experience when trying to understand how someone else’s code works.
I am happy to sign up for all these classes. Tbh this is what coursera or whatever should be. Not yet another machine learning set of lectures with notebooks you click run on.
Yes! I was a CS major. But when I got my first job out of school in 2005, I had never encountered version control before. It was very hard learning that from zero, on top of everything else that everyone learns in their first job.
All of which is to say: yes, this should totally be a course!
Let's be real for a moment. I just looked it up and found a playlist for the full Handmade Hero series which contains 696 videos. Adding up the total duration of the playlist gave me a time of 4726711 seconds, or 1313 hours.
Even for the most passionate of developers, you're gonna have a hard time getting someone to commit even a fraction of that time towards educational content.
If you think there are any specific videos from the Handmade Hero series that are really worth watching, you should recommend them directly. But pointing someone to 1300 hours of content is an absurd suggestion.
You don't watch the full playlist from start to finish. You start and keep watching until "you get it". Most of the foundational stuff is of course at the beginning. Feel free to pick the topics you find interesting, like allocations, etc.
There's a sizable community around Handmade Hero which can point you to more specific topics.
PSYC 2230: Measuring Things - Gathering evidence, overcoming bias, and comparative analysis.
Most developers cannot measure things at any level, in any regard.
CSCI 5540: Transmission Control - Comparative analysis of existing data transfer protocols, to include practical application, as well as authoring new and original protocols
Most developers don’t know how any transmission protocols work internally, except possibly HTTP
CSCI 3204: Tree Traversal - Data structure analysis applied to data model hierarchies and taxonomies
I have heard from so many developers who say they spend most of their education on data structure analysis but in the real world cannot apply it against tree models in practical application on any level. The people in library sciences figure this out in the real world but not educated software developers.
I was definitely guilty of this in my last role. Some of my refactorings were good and needed, but also a distraction from saying the codebase was "good enough" and focusing on the broader people/team/process problems around me.
I wish more comp sci curricula would sprinkle in more general courses in logic and especially 20th century analytic philosophy. Analytic philosophy is insanely relevant to many computer science topics especially AI.
Systems Engineering 101/201/301/401: How to design a computer system to be reliable
Security Engineering 101/201/301/401: How security flaws happen and how to prevent them
Conway's Law 101/201: Why the quality of the software you write is less important than your org chart
The Real DevOps 101/201/301: Why and how to simultaneously deliver software faster, with higher quality, and fewer bugs
Old And Busted 101/201: The antiquated patterns developers still use, why they're crap, what to use instead
Thinking Outside the Box 101: Stupid modern designs and why older ones are better
New Technology 101: The new designs that are actually superior and why
Project Management 101/201/301: History of project management trends, and how to manage any kind of work
Managing for Engineers 101/201/301: Why and how to stop trying to do everything, empowering your staff, data-driven continuous improvement
Quality Control 101/201: Improving and maintaining quality
Legal Bullshit 101/201: When you are legally responsible and how not to step in it
In addition,
Team Dynamics 301: A course in Blame Management
Handling the traditional "innocent punished, guilty escape/promoted" issue. With explanation of the meme "Success has 100 fathers/mothers while failure is an orphan."
If anything, we need the opposite of this class. Learn OO well and create tight apps with a small runtime footprint, well isolated code boundaries, and clean interfaces.
Jm2c but people tend to conflate two overlapping but different fields.
CS is the study of computation, SE is the study of building computer programs.
Those overlap in the same way physics and chemistry do. Of course the two overlap, and chemists are also exposed to classical and quantum physics and know about the Dirac equation or the Born-Oppenheimer approximation. But the bulk and core of a chemistry curriculum will involve few of these courses, and with a focus on what's relevant to the chemist, e.g. understanding how quantum physics makes water appear transparent in a glass but blue in a lake or deep pool.
The same goes for CS and SE. Of course they are related, but CS is much more focused on the theoretical and mathematical parts of computing, not the practical side of building systems.
One wants to know what can be computed and how and with what properties. The other wants to know how to build computer programs, but does not need to understand and be intimate with the mathematics of type inference or Hoare logic.
"Software engineering" is the political ideology that the study of management practices can enable you to deliver a successful software project without learning how to program. It is very popular in fields where a software project can be considered successful without ever delivering usable software, such as cost-plus defense contracting, management consulting, and enterprise software.
If you want to know how to build computer programs, then learn the type system of your chosen language, and learn how to reason about the behavior of sequences, loops, and conditionals—even if you do it informally or with small-step operational semantics instead of Hoare logic, and even if your language doesn't have type inference. Don't listen to the comforting lies of "Software Engineering" promising easy shortcuts. There is no royal road to Geometry, and there is no royal road to Google. Git gud.
But it is also true that there is a great deal that you could learn about computer science that you do not need to write working software, fast. Sequential search is often fast enough. Simple hash tables are usually better than fancy balanced trees. You will probably never use a computer that uses one's complement or the network stack the OSI model describes. If you have an array to sort, you should probably use a sorting function from the system library and definitely not implement bubble sort from scratch. Or even Quicksort. You can program in Erlang or Java for decades without having to understand how the garbage collector works.
> "Software engineering" is the political ideology that the study of management practices can enable you to deliver a successful software project without learning how to program.
Software engineering is not an ideology, but the application of engineering practices to building computer programs, the same way civil engineering is the application of engineering practices to building bridges.
Your statement is odd: software engineering curricula do include theoretical and computational courses, but ultimately those are a limited part and not the focus of the curriculum.
In the same way, CS curricula do include a few engineering and application-focused exams, but again, they are not the focus.
It's absolutely fine for the two curricula to be different and they are indeed different in most of Europe.
E.g. at the university of Pisa the CS curriculum (obviously speaking about masters, arguing about bachelors is partially irrelevant, you just can't get in enough depth of any topic) has exams like parallel computing, category theory, models of computation, compilers and interpreters.
But the software engineering curriculum has: mobile and physical systems, machine learning, distributed computing, business process modeling, IT risk assessment, IT infrastructures, peer to peer systems, etc.
Of course many exams are shared (albeit they have slightly different focuses) such as: randomized algorithms, competitive programming, advanced programming and you can likely choose one of the other courses as your optionals.
But the focus is ultimately different. One focuses on the theory behind computation, one focuses on the practical aspect.
Or look at who's actually executing successfully on the practical aspect of building software. It isn't people who got a master's degree in IT risk assessment and business process modeling.
There should be a course on Linux. Not your typical operating systems course where you write a toy OS and teach a bunch of theory, but rather a deep dive into various Linux subsystems, syscalls, tooling, etc.
Actually, I think it should be the inverse of that. A CS student should come into CS101 after hacking for years as a teenager and know how to use something like Linux from a practical standpoint, and then the college course should be all about the theory and ideas.
I remember when I was 10 or 12 or so, hacking with my IBM 8086 and using BASIC, I accidentally "invented" the bubble sort. In fact, mine was extra slow and inefficient because both my outer and my inner loop went from 1 to N and there was no early exit if no swaps were made. A true O(N^2) algorithm. I didn't know what O(N^2) meant, but I had some understanding that things quickly got slower.
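Roughly this, rendered in Python instead of 8086 BASIC (a sketch from memory, not the original):

    # Roughly what my 12-year-old self wrote: always N passes, no early exit.
    def kid_bubble_sort(a):
        n = len(a)
        for _ in range(n):              # outer loop always runs N times
            for j in range(n - 1):      # inner loop too -- no "already sorted" check
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    print(kid_bubble_sort([5, 3, 8, 1, 2]))   # [1, 2, 3, 5, 8]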
Then later in CS101 I learned about big-O and all the theories around sorting and it immediately clicked because I had a deep understanding of something that I experienced and then could tie it to real theory. The other way around - learning the theory before the experience - wouldn't have worked as well.
To tie it to your comment, you should have a deep experience with your OS of choice and then when you go to school, you learn why things are the way they were.
When I say this I often get accused of gate keeping, but I don't view it that way. I look at it as other types of majors that have existed longer than CS. I often make an analogy to music majors. I can't enroll as a freshman and say I'm going to be a music major without ever having played an instrument. People get accepted to a music department after they demonstrate the ability (usually through the equivalent of hacking while they were kids), and in their music classes they learn theory and how to play different instruments (just like learning different OSes or languages).
I kind of feel that CS should be the same way, you should show up to CS101 knowing how to do things from deep experience. You may not know any of the whys or theory, that's fine, but you should have experience in doing.
To tie it back to the parent: you should come to CS knowing how to run Linux, maybe because you copied configurations or scripts from the dark corners of the internet. And then the CS classes should be around why it's all that way. E.g., you know that to schedule something you use cron; and CS would be a discussion around how generic OSes need a way to schedule tasks.
> CSCI 3300: Classical Software Studies
Discuss and dissect historically significant products, including VisiCalc, AppleWorks, Robot Odyssey, Zork, and MacPaint. Emphases are on user interface and creativity fostered by hardware limitations.
Definitely would love that. Reading source code is pretty hard for newbies like me. Some guidance is appreciated.
I think it would be a good idea, especially CSCI 3300. (Learning these things in a course is not the only way to learn about computers and other stuff, but it is, and should be, one way to do it.)
(However, CSCI 2100 shouldn't be necessary if students learn stuff other than OOP the first time around, even if they also learn OOP.)
I really don't understand the modern hate towards OOP. From my experience over the last few decades working with large C and C++ codebases, the former turns into a big ball of mud first.
Most hate of OOP comes from the definition that OOP = inheritance. Meanwhile, among people who consider themselves OO programmers, there is often the same revulsion towards inheritance and a preference for encapsulation, while still calling that OOP. Because each language is subtly different, these discussions tend to turn into flame wars.
Which of course people do and why of course you have:
I think that OOP can be good for some things, but that does not mean that all or most programs should use OOP for all or most things. I would say that for most things it is not helpful, even though sometimes it is helpful.
Functional programming exists in any reputable computer science curriculum. The standard is Haskell. For a true "unlearning" it might need to be a third- or fourth-year subject.
The Classical Software Studies would be quite useful. Go write a game in 64kb of RAM in BASIC. It would really stretch your creativity and coding skills.
Agreed, it would be very interesting to see some of the care taken for resource management that is lost now because every machine has “enough” RAM and cycles…
I think working on that kind of system would be actively harmful for most programmers. It would give them a totally unbalanced intuition for what the appropriate tradeoff between memory consumption and other attributes (maintainability, defect rate, ...) is. If anything, programmers should learn on the kind of machine that will be typical for most of their career - which probably means starting with a giant supercomputing cluster to match what's going to be in everyone's pocket in 20 years' time.
Ha. You call it "history". I call it "childhood". I did that years before getting to Uni :)
Although, to be fair, while it was a helpful practice at coding, I'm not a game designer, so it was a game too terrible to play.
First year Uni though I spent too many hours in the lab, competing with friends, to recreate arcade games on the PC. Skipping the game design part was helpful. To be fair by then we had a glorious 640k of ram. Some Assembly required.
I've long thought the Game Boy Advance would make a great educational platform. Literally every aspect of the hardware is memory mapped. Just stuff values into structs at hard-coded addresses and stuff happens. No need for any OS or any API at all.
It is interesting that no software engineering or computer science course I’ve seen has ever spent any time on CI/CD.
Jenkins, Docker, Kubernetes, none of these sorts of things - and I don’t even mean these specific technologies, but moreover nothing even in their ballpark.
> It is interesting that no software engineering or computer science course I’ve seen has ever spent any time on CI/CD.
It's hard to fit everything a student needs to know into the curriculum. Someone else posted here that they had 10 pages of proofs per week, for one course. I would have been fired for assigning so much homework!
I was a CS professor at a local college. My solution was to ignore CS1 and CS2 curriculum (we were not ABET accredited, so that's okay) in the second course of Java programming. Instead, I taught students Maven/Gradle, Git and GitHub, workflows, CI/CD, regular expressions, basic networking, basic design patterns, Spring Boot, and in general everything I thought new programmers ought to know. I even found a book that covered much of this stuff, but in the end I wrote my own learning materials and didn't use a book.
The course was a victim of its success. The school mandated the course for non-Java programmers too, resulting in a lot of push-back from the non-Java students.
If anyone is interested, I have the syllabus online still (I've since retired) at <https://wpollock.com/>. Look for COP2800 and COP2805C. I can also send the Java teaching materials as a PDF to anyone interested (book length, but sadly not publishable quality).
>Someone else posted here they had 10 pages of proofs per week, for one course.
Huh. As a professor, I would not be able to grade this kind of volume in any serious capacity. Especially since proofs need to be scrutinized carefully for completeness and soundness. I wonder how their instructor manages.
I'm doing this in the software engineering¹ course I teach.
However:
a) Practical CI/CD requires understanding (and some practical experience) of many other concepts like Linux shell scripting, version control, build automation, containers, server administration, etc. As few students (in our degree programme) have sufficient experience in these concepts, I spend about half of the semester teaching such devops basics instead of actual software engineering.
b) Effectively, teaching CI/CD means teaching how GitHub or GitLab do CI/CD. I feel a little bit uncomfortable teaching people how to use tech stacks owned by a single company.
¹) Actually it's more a course on basic software craftsmanship for media informatics students because no such course exists in our curriculum and I find it more important that students learn this than that they understand the V model.
It would be easy for me to agree with you. I hold a graduate degree in computer science and I’m named for my contributions proofreading/correcting a graduate text about algorithms in computability theory.
I love abstraction and algorithms and pure theory, but this whole “computer science is a branch of mathematics, nothing more” idea has always struck me as ridiculous. Are you willing to throw out all of the study of operating systems, networking, embedded systems, security (hardware and software), a good chunk of AI, programming languages, UI/UX/human computer interaction, graphics, just to draw a line around algorithms and Turing Machines and say this is all there is to computer science?
Cryptography is all math; networking is largely math and algorithms (IMO yes, this should really be replaced with information theory — just understanding Shannon's paper would have been more valuable than learning how routers work); AI is mostly statistics (and AI as a whole, I'd argue, is the essence of computer science); graphics is largely math and algorithms.
Yes, I very much think a computer science degree should stay as close to the theoretical foundations as possible. And learning Jenkins and Kubernetes, or even a general course on how to effectively push code, is still far from the things you listed.
There's so much computer science that isn't even covered that I'd include it before courses on CI/CD.
Yeah, cryptography is mostly (but certainly not all) math, but it accounts for a negligible (pun intended) portion of interesting security work.
AI is a lot of math, especially if you hang out with the quasiconvex optimization crowd, but the vast majority of work in that field can't properly be called "theory".
I think it’s clear in practice that computer science has officially strayed beyond whatever narrow bounds people originally wished to confine it to.
A lot of it is a push for practicality and catering to students' interests. IMO it's a result of a really archaic education system. Universities were originally small and meant for theoretical study, not as a de facto path for everyone to enroll in to get a job.
If it were up to me I'd get rid of statically defined 4-year programs, and/or fixed required courses for degrees, or just degrees in general. Just offer courses and let people come learn what they want.
One of my favorite classes was a Python class that focused on building some simple games with tkinter, making a chat client, and hosting a server, because it was the first time I understood how actual software worked. I'm really glad I took that class.
On the other hand, I'd love to have learned information theory, lambda calculus, all the early AI, cognitive science, theory of programming languages, and the philosophy behind all of it that got us here.
Your point is well taken and to some extent I agree, but I think you have to recognize it's not just student interest, career preparation, and practicality.
The research done by professional academic computer scientists also reflects the broad scope I’m advocating for.
I have never felt that way, and I’ve worked on a variety of projects at a variety of companies.
Everyone has a bespoke mishmash of nonsense pipelines, build tools, side cars, load balancers, Terragrunt, Terraform, Tofu, Serverless, Helm charts, etc.
There are enough interesting things here that you wouldn’t even need to make a tool heavy project style software engineering course - you could legitimately make a real life computer science course that studies the algorithms and patterns and things used.
I think OOP became popular because it feels profound when you first grasp it. There is that euphoric moment when all the abstractions suddenly interlock, when inheritance, polymorphism, and encapsulation seem to dance together in perfect logic. It feels like you have entered a secret order of thinkers who understand something hidden. Each design pattern becomes a small enlightenment, a moment of realization that the system is clever in ways that ordinary code is not.
But if you step back far enough, the brilliance starts to look like ornament. Many of these patterns exist only to patch over the cracks in the paradigm itself. OOP is not a natural way of thinking, but a habit of thinking that bends reality into classes and hierarchies whether or not they belong there. It is not that OOP is wrong, but that it makes you mistake complexity for depth.
Then you encounter functional programming, and the same transformation begins again. It feels mind expanding at first, with the purity of immutable data, the beauty of composability, and the comfort of mathematical certainty. You trade one set of rituals for another: monads instead of patterns, recursion instead of loops, composition instead of inheritance. You feel that familiar rush of clarity, the sense that you have seen through the surface and reached the essence.
But this time the shift cuts deeper. The difference between the two paradigms is not just structural but philosophical. OOP organizes the world by binding behavior to state. A method belongs to an object, and that object carries with it an evolving identity. Once a method mutates state, it becomes tied to that state and to everything else that mutates it. The entire program becomes a web of hidden dependencies where touching one corner ripples through the whole. Over time you code yourself into a wall. Refactoring stops being a creative act and turns into damage control.
Functional programming severs that chain. It refuses to bind behavior to mutable state. Statelessness is its quiet revolution. It means that a function’s meaning depends only on its inputs and outputs. Nothing else. Such a function is predictable, transparent, and portable. It can be lifted out of one context and placed into another without consequence. The function becomes the fundamental atom of computation, the smallest truly modular unit in existence.
That changes everything. In functional programming, you stop thinking in terms of objects with responsibilities and start thinking in terms of transformations that can be freely composed. The program stops feeling like a fortress of interlocking rooms and begins to feel like a box of Lego bricks. Each function is a block, self-contained, perfectly shaped, designed to fit with others in infinitely many ways. You do not construct monoliths; you compose arrangements. When you need to change something, you do not tear down the wall. You simply reassemble the bricks into new forms.
This is the heart of functional nirvana: the dream of a codebase that can be reorganized endlessly without decay. Where every part is both independent and harmonious, where change feels like play instead of repair. Most programmers spend their careers trying to reach that state, that perfect organization where everything fits together, but OOP leads them into walls that cannot move. Functional programming leads them into open space, where everything can move.
Reality will always be mutable, but the beauty of functional programming is that it isolates that mutability at the edges. The pure core remains untouched, composed of functions that never lie and never change. Inside that core, every function is both a truth and a tool, as interchangeable as Lego bricks and as stable as mathematics.
So when we ask which paradigm handles complexity better, the answer becomes clear. OOP hides complexity behind walls. Functional programming dissolves it into parts so small and transparent that complexity itself becomes optional. The goal is not purity for its own sake, but freedom; the freedom to recompose, reorganize, and rethink without fear of collapse. That is the real enlightenment: when your code stops feeling like a structure you maintain and starts feeling like a universe you can endlessly reshape.
The great achievement of OOP is that it inspires such passion.
In essence OOP is just, "hey, if you have a struct and a bunch of operations that operate on that struct, let's put the name of the struct and a dot in front of the names of those operations and you don't need to pass the struct itself as an argument"
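In Python terms (a toy illustration, nothing to do with any particular codebase), the dot really is just that:

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def norm(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)
print(p.norm())       # 5.0 -- "struct name, dot, operation"
print(Point.norm(p))  # 5.0 -- same function, struct passed explicitly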
It beats me how either the high priests or its detractors get so worked up about it, even with the add-ons like inheritance, polymorphism, or patterns. (Which of course also exist in a more mathematically clean way in functional languages.)
These patterns have seen real use in the wild (not saying it was optimal use).
Of course we know today that composition is better than inheritance, plain data structs are enough for most cases, and "parse, don't validate". But did people know that in the 1990s?
You’re missing the depth of the difference. It’s not just syntax sugar for calling object.method() instead of func(object). The key distinction is what happens when the method mutates the object.
When state is mutable, every method that touches it becomes coupled to every other method that touches it. The object stops being a collection of independent behaviors and turns into a shared ecosystem of side effects. Once you mutate state, all the code that relies on that state is now bound together. The object becomes a single, indivisible unit. You cannot take one method and move it elsewhere without dragging the rest of its world along with it.
Functional programming avoids that trap. Functions are isolated. They take input and return output. They don’t secretly reach into a shared pile of state that everything else depends on. That separation is not aesthetic, it is structural. It’s what makes functions genuinely modular. You can pull them out, test them, recombine them, and nothing else breaks.
# OOP version
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self, n):
        self.value += n

    def double(self):
        self.value *= 2

c = Counter()
c.increment(5)
c.double()
print(c.value)
Here, every method is bound to self.value. Change how one works and you risk breaking the others. They share a hidden dependency on mutable state.
Now compare that to the functional version:
def increment(value, n):
    return value + n

def double(value):
    return value * 2

increment_and_double = lambda x: double(increment(x, 5))
print(increment_and_double(0))
In this version, increment and double are completely independent. You can test them, reuse them, and combine them however you like. They have no shared state, no implicit dependency, no hidden linkage.
People often think OOP and FP are complementary styles. They are not. They are oppositional at the core. OOP is built on mutation and shared context. FP is built on immutability and isolation. One binds everything together, the other separates everything cleanly.
Mutation is what breaks modularity. Every time you let a method change shared state, you weave a thread that ties the system tighter. Over time those threads form knots, and those knots are what make change painful. OOP is built around getters and setters, around mutating values inside hidden containers. That’s not structure. It’s coupling disguised as design.
Functional programming escapes that. It separates state from behavior and turns change into a controlled flow. It makes logic transparent and free. It’s not just another way to code. It’s the only way to make code truly modular.
You can write these same methods in an OOP language like Java; you don't have to use classes for everything.
But a lot of the time, yes, it makes sense to group a set of related methods and state.
You say this is not a natural way of thinking, but I strongly disagree: it lines up perfectly with how I think. You are you, the car dealership is a dealership. You buy a car from the car dealership; the dealership gets money, you lose money, the dealership loses a car and you gain a car. I want these states reflected in the objects they belong to, not passed around globally and tracked by hand.
Or if I am writing an API library, yes I very much want to 1. group all my calls together in a class, and 2. keep track of some state, like auth tokens, expirations, configuration for the http client, etc. So you can just do api.login, api.likeX, etc
Moreover, most methods you'd write in a large project are so limited in scope to one type and purpose that this idea of some great modularity is nonsense. It's not as if you can have a single delete function that works on deleting users, deleting images from your S3, etc. You'd end up writing a bunch of functions like deleteUser(user), createUser(user), deleteImage(image), and wow, wouldn't it be great if we could just group these functions together and do user.delete, user.create? We could even define an interface like Crudable and implement it differently based on what we're deleting. Wow.
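Concretely, I mean something like this (all the names and endpoints here are invented, just to show the shape):

class ApiClient:
    """Groups related calls and the state they share (token, base URL)."""

    def __init__(self, base_url):
        self.base_url = base_url
        self.token = None

    def login(self, user, password):
        # pretend this hits POST {base_url}/login and stores the returned token
        self.token = f"fake-token-for-{user}"

    def like(self, post_id):
        # every call can reuse self.token without threading it through as an argument
        return f"POST {self.base_url}/posts/{post_id}/like (auth: {self.token})"

api = ApiClient("https://example.test")
api.login("alice", "hunter2")
print(api.like(42))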
There's a video from Gary Bernhardt called Functional Core, Imperative Shell which made sense to me. These paradigms can work well together. OOP can be great too, the Ruby language is deeply OO and the core library is rich and OO.
You can also only use hashes and arrays and type them with Typescript, using functions, types and namespaces and duck-typing, instead of classes and methods.
ps: closures are worth thinking about, we use them without even thinking about it
I want to believe you, but some of this verbiage sounds very strange, as if you're Don Draper trying to sell me functional programming. There's a lot in those paragraphs that smells off. I have no proof that you did, but something about it feels off.
There’s no need to respond then. No point in this accusation either. You make an accusation, I deny it, life goes on and nothing is different except a little wasted effort. Save the effort.
- a basic computer science course, teaching how to be at home in a FLOSS desktop system
- an intermediate course teaching how to properly automate this environment, from scripting to classic development
- a basic course in networking and system management to reach the level of being able to be a dummy sysadmin at home
all of these must be preparatory to CS because without them it's like studying literature before knowing the basics of the language in which it's written. So far, it's assumed that students do it themselves, but the facts prove that this is not the case.
I think the misconception here is that studying computer science prepares you for a career as a software engineer. It helps but there are a lot of blind spots in the curriculum that boil down to the notion that academics don't really tend to have engineering backgrounds. The fix isn’t stuffing the curriculum with pet topics, because the list of useful ones is practically endless. And it would get outdated soon anyway.
I actually did a PhD on the topic of software engineering, and I had to learn a lot of stuff after I started practicing what I preached. I realized this was the case while I was working on my thesis, and it was a big reason for me to get some hands-on experience.
Basically, academics tend to have a lot of somewhat naive notions about software engineering that usually manifest in them waffling about things like waterfall style development or emphasizing things like formal methods, which in 30 years of practice, I've rarely encountered in the wild.
That doesn't mean teaching that is a waste of time. But it does mean that there's more to software engineering than is taught in universities. You can't really learn most of that from someone that hasn't been exposed to real life software engineering. Most academics never leave university so they are not necessarily that up to speed with modern practices.
Teaching in most engineering disciplines boils down to a lot of theory followed by apprenticeships. The theory doesn't evolve nearly as fast as practice and tools. That's why it's fine using somewhat outdated or academic languages and tools in university. I don't think that's unique to computer science either.
Studying computer science gives you a theoretical basis and the ability to learn. Which is nice and relevant. But I actually know a lot of software engineers that studied completely different topics (theoretical physics, philosophy, mathematics, geology, etc.) that do fine without it. A few years of working can compensate for that. Having an academic background prepares people to wrap their heads around complex new stuff. Getting a degree basically means "you have a working brain". The most important skill you learn in university is using your brain.
I don't care what language people use in university. But I'd prefer people to have been exposed to more than just one language, and to know that there are multiple ways to do the same thing. I did logic, functional, and imperative programming in my first year. OO was kind of hot and newish, but that was a second-year topic. I later studied aspect-oriented programming as well (there are several flavors of that) and a few offshoots of object-oriented (prototype-based, role-based). Many JavaScript programmers may have never heard of the language Self. But that's one of the languages that inspired Brendan Eich; it's a prototype-based OO language (no classes, just prototype objects). That's the difference between a good engineer and one with a decent computer science background. You don't need to know that to use JavaScript. But it helps.
>CSCI 2100: Unlearning Object-Oriented Programming
Discover how to create and use variables that aren't inside of an object hierarchy. Learn about "functions," which are like methods but more generally useful. Prerequisite: Any course that used the term "abstract base class."
This is just a common meme that often comes from ignorance, or a strawman of what OOP is.
>CSCI 4020: Writing Fast Code in Slow Languages
Analyze performance at a high level, writing interpreted Python that matches or beats typical C++ code while being less fragile and more fun to work with.
I like this one, but see?
Python is heavily OOP; everything is an object in Python, for example.
I'm wondering if OP took a basic OOP course or would otherwise be interested in taking one? You can learn about a thing you are against, or even form your opinion after actually learning about it.
>I strongly disagree. How is everything being called an object in any way "heavily OOP"?
Do I need to spell it out? The O in OOP stands for object. Everything is an object, therefore it is object-oriented.
It's not much more complex than that, man.
And I don't just mean that it supports users writing OOP code; I mean that the language, interpreter, and standard library are themselves written with OOP. Inheritance? Check. Classes? Check. Objects? Check. Even classes are objects: they're instances of the metaclass type.
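You can check this from a REPL:

# everything in Python is an object, including functions and classes
print(isinstance(5, object))    # True
print(isinstance(len, object))  # True
print(isinstance(int, object))  # True
print(type(int))                # <class 'type'> -- classes are instances of the metaclass `type`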
Zork? I prefer advent and Spider and Web. Just kidding, I'd put Rogue/Nethack and Slashem too. Thanks to Rogue we have Curses and NCurses.
Also, Eliza and Megahal plus Markov Chains.
I had forgotten about prog21, and I'm impressed how he wrapped up his blog:
> I don't think of myself as a programmer. I write code, and I often enjoy it when I do, but that term programmer is both limiting and distracting. I don't want to program for its own sake, not being interested in the overall experience of what I'm creating. If I start thinking too much about programming as a distinct entity then I lose sight of that.
Programming is a useful skill, even in the age of large language models, but it should always be used to achieve some greater goal than just writing programs.
No, it's about creating something that does something useful and is easy to maintain. Plumbers have great ideas and approaches, but you just want plumbing that works and can be fixed.
It's time developers realised they are plumbers not **** artists.
[HN reduced my expletive to just four asterisks which seems a bit reductionist]
People don’t get this up in arms when there is a math course about using math tools or math history or how math is used to solve actual problems, but for some reason they do about computer science.
They just label such people as Applied Mathematicians, or worse: Physicists and Engineers; and then get back to sensible business such as algebraic geometry, complex analysis and group theory.
If universities offered a major in which students learned to operate telescopes of various kinds and sizes, and then called it astrophysics, people would be mad too.