There were a few languages designed specifically for parallel computing, spurred by DARPA's High Productivity Computing Systems project. Fortress is dead, but Chapel is still being developed.
Those languages were not effective in practice. The kind of loop parallelism most people focus on is the least interesting and least effective kind outside of niche domains, so the value was low.
Hardware architectures like the Tera MTA were much more capable, but almost no one could write effective code for them even though the language was vanilla C++ with a couple of extra features. Later we learned how to build similar software architectures on standard CPUs, and the same problem remained: people were bad at it.
The common thread in all of this is people. Humans as a group are terrible at reasoning about non-trivial parallelism; the tools almost don't matter. Reasoning effectively about parallelism means manipulating a state space that is quite evidently beyond most humans' cognitive abilities.
Parallelism was never about the language. Most people can't build the necessary mental model in any language.
This was, I think, the greatest strength of MapReduce. If you could write a basic program, you could understand the map, combine, shuffle, and reduce operations. MR, Hadoop, and the like would take care of recovering from operational failures like disk or network outages by keeping the work idempotent behind the scenes, so programmers could focus on how data was being transformed, joined, serialized, etc.
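To make that concrete, here is a rough single-machine Python sketch of the word-count shape of the paradigm. The function names are mine for illustration, not any real framework's API; the real systems run these stages across many machines and handle retries for you.

    from collections import defaultdict

    def map_phase(doc):
        # Emit (key, value) pairs; here, one count per word.
        for word in doc.split():
            yield (word, 1)

    def shuffle(pairs):
        # Group values by key, as the framework does across the network.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # Fold all values for one key into a final result.
        return key, sum(values)

    docs = ["the cat sat", "the dog sat"]
    pairs = (pair for doc in docs for pair in map_phase(doc))
    counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
    # counts == {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}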
To your point, we also didn't need a new language to adopt this paradigm. A library and a running system were enough (though, semantically, it did offer unique language-like characteristics).
Sure, it's a bit antiquated now that we have more sophisticated successors in the subdomains where it was most commonly used, but it hit a sweet spot between the parallelism it delivered and the knowledge and reasoning it required of its users.
That's why programming languages are important for solving this problem.
The syntax and semantics should constrain the kinds of programs that are easy to write in the language to ones that the compiler can figure out how to run in parallel correctly and efficiently.
That's how you end up with something like Erlang or Elixir.
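For readers unfamiliar with that model, here is a loose Python sketch of the share-nothing, message-passing style Erlang and Elixir push you toward; the worker and its little protocol are made up for illustration, and a real BEAM system adds supervision, distribution, and far cheaper processes.

    from multiprocessing import Process, Queue

    def counter(inbox, outbox):
        # An isolated worker in the Erlang spirit: it owns its state
        # and communicates only through messages.
        total = 0
        while True:
            msg = inbox.get()
            if msg == "stop":
                outbox.put(total)
                return
            total += msg

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        worker = Process(target=counter, args=(inbox, outbox))
        worker.start()
        for n in range(10):
            inbox.put(n)          # asynchronous message sends
        inbox.put("stop")
        print(outbox.get())       # 45
        worker.join()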
Maybe we can find better abstractions. Software transactional memory seems like a promising candidate, for example. Sawzall/Dremel and SQL seem to also be capable of expressing some interesting things. And, as RoboToaster mentions, in VHDL and Verilog, people have successfully described parallel computations containing billions of concurrent processes, and even gotten them to work properly.
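As a rough illustration of the STM idea (a toy optimistic implementation sketched for this comment, not any library's API): transactions run against private read/write sets and only commit if nothing they read has changed, otherwise they retry, so the programmer composes atomic blocks instead of juggling locks.

    import threading

    class TVar:
        # A transactional variable: a value plus a version number.
        def __init__(self, value):
            self.value = value
            self.version = 0

    _commit_lock = threading.Lock()

    def atomically(txn):
        # Toy optimistic STM: retry the whole transaction on conflict.
        while True:
            reads, writes = {}, {}

            def read(tv):
                if tv in writes:
                    return writes[tv]
                reads.setdefault(tv, tv.version)
                return tv.value

            def write(tv, value):
                writes[tv] = value

            result = txn(read, write)
            with _commit_lock:
                if all(tv.version == v for tv, v in reads.items()):
                    for tv, value in writes.items():
                        tv.value = value
                        tv.version += 1
                    return result
            # Something we read was modified concurrently; retry.

    # Example: a transfer between two accounts that can never be half-applied.
    a, b = TVar(100), TVar(0)

    def transfer(read, write):
        write(a, read(a) - 10)
        write(b, read(b) + 10)

    atomically(transfer)
    print(a.value, b.value)   # 90 10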
The distinction matters less and less. Inside the GPU there is already plenty of locality to exploit (caches, schedulers, warps). NVLink is a switched memory-access network, so that already gets you some fairly large machines with multiple kinds of locality.
Throwing InfiniBand or IP on top is, structurally, more of the same.
You’re right, and this is also a bit of a pet peeve of mine. “Lisp” hasn’t described a single language for more than forty years, but people still talk about it as if it were one.
Emacs Lisp and Clojure are about as similar as Java and Rust. The shared heritage is apparent, but the experience of actually using them is wildly different.
Btw, if someone wants to try a lisp that is quite functional in the modern sense (though not pure), Clojure is a great choice.
The difference is time, effort, and scalability. There are many things humans can do that society doesn't strictly regulate, because, as human activities, they are done in limited volumes. When it becomes possible to automate some of those activities at scale, different sorts of risks and consequences may become part of the activity.
And much higher-level interfaces for interaction and AI manipulation, like directly recording episodes of training data so that the arm can use a VLA instead of simple IK.
The idea is that you define a number of pre- and post-condition predicates for a function that you want proved (in what's effectively the header file of your Ada program). As with tests, the checks that show the output is correct are often shorter than the function body itself, as in this sorting example.
Then you implement your function body, and the prover attempts to verify that your post-conditions hold given the pre-conditions. Along the way it also tries to check things like overflows, and whether the pre- and post-conditions of the routines called inside are satisfied. So you can use the prover to try to ensure at compile time that the properties you care about in your program hold, properties you might otherwise only catch at run time via assertions.
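For a rough feel of the contract shape in languages without a prover, here is a purely illustrative Python analogue (not the linked sorting example): the pre- and post-conditions become run-time assertions, whereas SPARK's prover establishes them for all inputs at compile time.

    from collections import Counter

    def my_sort(xs):
        # Pre-condition: this sketch assumes a list of mutually comparable items.
        assert isinstance(xs, list)

        result = sorted(xs)   # the "function body" being checked

        # Post-conditions in the spirit of a SPARK contract: the output is
        # ordered and is a permutation of the input. Assertions only catch
        # violations for the inputs you actually run, not for all inputs.
        assert all(result[i] <= result[i + 1] for i in range(len(result) - 1))
        assert Counter(result) == Counter(xs)
        return result

    print(my_sort([3, 1, 2]))   # [1, 2, 3]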