My productivity is not significantly limited by my ability to generate code, so I see little value in tools which offer to accelerate the process. (I don't use autocomplete, either; I type quickly, and prefer my editor to stay out of the way as much as possible.) I spend far more time reading, discussing, testing, and thinking than I do writing.
The people who rave about AI tools generally laud their facility with the tedious boilerplate involved in typical web-based business applications, but I have spent years steering my career away from such work, and most of what I do is not the sort of thing you can look up on StackOverflow. Perhaps there are things an AI tool could do to help me, and perhaps someday I will be curious enough to try; but for now they seem to solve problems I don't really have, while introducing difficulties I would find annoying.
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)
And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is and how it will be in the future, and therefore never adjust their positions in light of that awareness.
For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.
Lisp is a language family, not one specific language. Do you have a particular one in mind? There are many languages that can be called Lisp which are different from each other, and some have multiple implementations.
Mainstream Lisp dialects have had objects other than lists for many, many decades. The LISP I Programmer's Manual from 1960, documenting the original language which started it all, already describes zero-based arrays.
In some Lisp-like languages, such as Janet, the syntax processing itself is based on arrays. The parenthesized notation turns into a nested array, not a nested linked list.
In Lisps where the syntax is based on lists, that doesn't imply that your program has to work with lists at run-time. The code transformations (macros) which happen at compile time will be working with linked lists; the compiled program itself need not.
Budding computer scientists and engineers like to write toy Lisp dialects (sometimes in one weekend). Often, those languages only work with linked lists, and are interpreted, meaning that the linked lists representing the code structure are traversed to execute the program, and repeatedly traversed in the case of loops.
(If you're making remarks about an important historic language family based on familiarity with someone's toy Lisp project on github, or even some dialect with an immature implementation, that is a gross intellectual mistake. You wouldn't do that, would you?)
Linked lists may "kind of suck" on cached hardware with prefetch, but that doesn't prevent them from being widely used in kernels, system libraries, utilities, language run-times (internally, even in the run-times of languages not known for exposing linked lists to the programmer), ... C programmers use linked lists like they are going out of style.
s2n is great, clean, actively maintained, and even has experimental support for post-quantum key exchange (compatible with BoringSSL and the Zig standard library).
Oh, it can be. Typically, if I need to do some text transformation or extraction, I start by getting sample data into a file with a .txr suffix. Then I just generalize that data into a TXR pattern that matches it and extracts what is needed.
As an example, I was doing some kernel work and needed patches to conform to the kernel's "checkpatch.pl" script. Unfortunately, this thing outputs diagnostics in a way that Vim's quickfix doesn't understand; I wanted to be able to navigate among the numerous sources of errors in the editor.
First I looked at the checkpatch.pl script hoping that of course they would have the diagnostic output in one place, right? Nope: formatting of messages is scattered throughout the script by cut-and-paste coding.
TXR to the rescue:
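A minimal sketch of the sort of pattern that does this job (not the original script, which isn't shown here):

  @(collect)
  @kind: @message
  #@patchnum: FILE: @file:@line:
  @(end)
  @(output)
  @(repeat)
  @file:@line:@kind (#@patchnum):@message
  @(end)
  @(end)

The @(collect) block matches each two-line diagnostic (skipping anything in between), and @(output)/@(repeat) emits one quickfix-style line per match.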
Sample output:
WARNING: line over 80 characters
#279: FILE: arch/arm/common/knllog.c:1519:
+static void knllog_dump_backtrace_entry(unsigned long where, unsigned long from
WARNING: line over 80 characters
#321: FILE: arch/arm/include/asm/unwind.h:50:
+extern void unwind_backtrace_callback(struct pt_regs *regs, struct task_struct
WARNING: line over 80 characters
#322: FILE: arch/arm/include/asm/unwind.h:51:
+ void dump_backtrace_entry_fn(unsigned long where,
WARNING: line over 80 characters
#323: FILE: arch/arm/include/asm/unwind.h:52:
+ unsigned long from,
Result (redirected into errors.err, loads with vim -q):
arch/arm/common/knllog.c:1519:WARNING (#279):line over 80 characters
arch/arm/include/asm/unwind.h:50:WARNING (#321):line over 80 characters
arch/arm/include/asm/unwind.h:51:WARNING (#322):line over 80 characters
arch/arm/include/asm/unwind.h:52:WARNING (#323):line over 80 characters
arch/arm/include/asm/unwind.h:53:WARNING (#324):line over 80 characters
arch/arm/kernel/unwind.c:352:ERROR (#337):inline keyword should sit between storage class and type
The nice thing is that we know what the above does when we revisit it six months later.
I tried to follow the course using the backcountry map, but due to some Slack in the GPS, I ended up Discoursed. I fell, badly twisting my ankle, and was not able to Github and walk. Thus stranded in the middle of nowhere, I peered into my backpack. I had only one Carrot left to eat, and to my utter dismay, I had left the Flarum gun at the Basecamp. Luckily, my cellphone's data connection somehow worked well enough that my E-mail client was able to clear the SOS message out of the outbox after some 23 retries.
Government is a beast whose appetite is never satiated. Such is the consequence of Pournelle's Iron Law of Bureaucracy:
First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.
Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.
The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.
Julia's SIMD programming model is still very much a work in progress; I think we have a way to go in providing the kind of flexibility and control that languages such as ISPC, Halide, and TVM provide.
That being said, packages such as SIMD.jl [0], and LoopVectorization.jl [1] are making fantastic progress, to the point that LoopVectorization forms the basis of a legitimate BLAS contender, in pure Julia [2]. It's not totally there yet, but it's close enough that real work is being done in LV at OpenBLAS-like speeds.
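To give a flavor, here's a sketch of the LoopVectorization.jl style (assuming the package is installed; dot_turbo is just an illustrative name):

  # Annotating the inner loop with @turbo lets LoopVectorization
  # rewrite it using SIMD operations:
  using LoopVectorization

  function dot_turbo(a::Vector{Float64}, b::Vector{Float64})
      s = 0.0
      @turbo for i in eachindex(a, b)
          s += a[i] * b[i]
      end
      return s
  end

  dot_turbo(rand(1000), rand(1000))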
As an aside, I find it incredible that these kinds of extensions can be built in packages thanks to the fact that Julia's compiler is extensible enough to allow for direct manipulation of the LLVM intrinsics being emitted by user code.
No, many dynamically typed languages do not perform implicit conversions: Lisp and Python are languages that do it better. They still have some subtleties (Lisp has the EQ/EQL distinction, not to mention a huge number of other equality operators in Common Lisp), but I see implicit conversions as a singularly bad idea in a dynamically typed language.
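A sketch of what that distinction looks like at a Common Lisp REPL:

  (eq 'a 'a)                     ; => T   (same interned symbol)
  (eq (list 1 2) (list 1 2))     ; => NIL (two distinct cons cells)
  (equal (list 1 2) (list 1 2))  ; => T   (structural comparison)
  (eql 1.0 1.0)                  ; => T   (same type and value)
  ;; (eq 1.0 1.0) is unspecified: floats need not be identical objects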
It's not operator precedence: both (True == False) is False and True == (False is False) are true.
As explained in the link, it's because of chained comparisons: it expands to True == False and False is False. Chaining is more useful for expressions like 1 < x < 3.
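A quick REPL sketch:

  >>> True == False is False        # chained: (True == False) and (False is False)
  False
  >>> (True == False) is False      # explicit grouping on the left
  True
  >>> True == (False is False)      # explicit grouping on the right
  True
  >>> x = 2
  >>> 1 < x < 3                     # the case chaining is designed for
  True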
Yep, Python is probably one of the most human-readable and writable languages out there.
There are some dark corners, like the site above illustrates, but it is pretty easy to avoid them.
(It is still not a good fit for SICP, but that's a different conversation.)
As someone who started with C and then made a really hard effort to become an expert C programmer, when later in life I was introduced to Smalltalk and Lisp, I was angry that C had warped my brain and I could only think of programming in the context of a physical machine. I feel it would have been much better to go in the opposite direction.
The main problem beginning Japanese learners often face is that they are taught polite form before plain form. Polite form is a natural extension of plain form, but if you start with that, it's actually quite mind bending to back track to plain form. The secret is to abandon polite form entirely until you are relatively fluent with plain form and then add polite form back in.
For example, "tanoshii" is present/future tense. "tanoshikatta" is past tense. If you want to make it polite, then you just add "desu". Super easy.
While it is grammatically incorrect, it is completely acceptable in normal conversation to do the same with the negation. "tanoshikunai" is the negation. Past tense negation is "tanoshikunakatta" (ye gods, I can't read romaji...). You can do exactly the same thing to make it polite -- just jam "desu" on the end. That's what every child will do. The wrong bit is that "tanoshikunai desu" should really be "tanoshiku arimasen".
For "na" adjectives, it works differently. "suki" is present tense. To make it polite: "suki desu". Past tense is "suki datta". To make it polite "suki deshita". Negation is "suki de wa nai" (seriously, romaji makes me cringe...). Polite negation is "suki de wa arimasen" (though you can very much get away with the mistake of saying "suki de wa nai desu" -- again, every single child speaks this way).
Past tense negation is "suki de wa nakatta". Polite is "suki de wa arimasen deshita" (but again, the easy way is "suki de wa nakatta desu").
So, why is it like this? The reason is that "i" adjectives were originally verbs that had a different set of inflections/conjugations. A very obscure piece of trivia (that most Japanese people don't even know) is that "ohayou gozaimasu" is actually one of those conjugations -- it's actually "(honourific) o hayai de gozaru" in polite form. The "i" ending mixes with "de" to produce the "ou" ending. Anyway, the point is that you have to inflect it because it is literally a verb that is modifying a noun.
"na" adjectives on the other hand are actually adjectives. They are called "na" adjectives because you have to add "na" when modifying the noun. For example, "suki na hito". The "na" is actually a contraction of "ni aru" -- because in Japanese you can only modify nouns with verb phrases.
So this is why there is a difference between the negation of "i" adjectives and "na" adjectives. "ku" is the verb combining form of the old style "i" verbs (like "te" is on modern verbs). So "tanoshikunai" is really "tanoshiku nai" -- you are combining the "tanoshi" verb with the "nai" verb. On the other hand "suki" is actually an adjective, not a verb, so you have to say "suki de wa nai" -- you can't combine them.
Past tense is exactly the same. In "tanoshikunakatta", it's really combining 2 verbs and conjugating the last one (as per the rules: "tanoshiku nakatta"). If you want to make it polite, the polite past tense of "nai" is "arimasen deshita" (but you can get away with "nakatta desu" in virtually every situation).
With "na" adjectives -- "suki de wa nakatta", we've conjugated the only verb. Again to make it polite you can say "suki de wa arimasen deshita" (or "suki de wa nakatta desu" if you want to sound like an uneducated bumpkin like me).
Hope this helps! Avoid polite form until you can handle plain form and it's almost all completely logical ;-)
Edit: Fix past tense in the examples of incorrect, but acceptable polite forms.
This is awesome! I've always wanted to try this. The only real complaint I have is that "da" is not actually a copula in the strictest sense. It's a contraction of "de aru". Similarly "na" is not a modifier. It's a contraction of "ni aru". "aru" is the verb which is the closest you get to a copula in Japanese - it means "it exists" for non-animate noun-phrases.
So if you say "sakana da", it does mean "It is fish", but so does just "sakana". The copula is implied. The "da" is completely optional and is actually only added for emphasis -- the literal translation is kind of like "That it is fish exists". In literary Japanese you would say "sakana de aru", "de" being the particle that links a verb to the means by which the verb is executed. For example, "basu de iku" means "will go by bus" -- the bus is the means by which we will go. In "sakana de aru" or "sakana da" we are basically saying that "fish" is the means of its existence.
The "na" modifier is also interesting. It is really "ni aru" where "ni" is essentially the "direction" in which something exists. "Something like a fish" would be "sakana no you". If you want to say "It is a fragrance something like a fish" you could say "sakana no you na kaori". Although I'm not aware of any modern Japanese that would express it like this, this is equivalent to "sakana no you ni aru kaori" -- "It is a fragrance that exists in the direction like a fish". Hopefully you can understand.
The interesting part of this is that adding "ni aru" to the end of a noun phrase just turns it into a verb phrase. And the even more interesting bit is that the only thing that can modify a noun phrase is a verb phrase.
But, you may have heard of "i-adjectives" -- these are adjectives that end in i. In actuality, these are not adjectives! They are verb phrases! So the word "cute" is "kawaii". However, the actual word is "kawai" and the inflection is "i". That's why when you want to say "not cute" it becomes "kawaiku nai" -- the "i" turns into "ku" because you are inflecting a verb.
This in turn is why you modify nouns directly with "i-adjectives": "kawaii sakana", or "cute fish". Other adjectives are actually noun phrases in Japanese: "yumei na sakana", or "famous fish". This is, again, exactly the same as "yumei ni aru sakana" -- "The fish exists in the direction of fame".
So the rules are even simpler than presented in this blog post.
By the way, for anyone trying to learn Japanese and who wants to go beyond phrase-book level: learn plain form first and polite form later (if ever). Japanese makes absolutely no sense if you learn polite form first. It's incredibly logical (even the polite form extensions) if you start with plain form.
It's usually from the same people that are super active and obnoxious on FB. Posting every single hour of their lives. And suddenly realizing they're addicted to it. And they still need to post about it.
Hy feels all wrong to me, though I really wanted to like it, and I put some time in when I tried it.
Part of it is that it compiles to Python bytecode, and took the simplest route of adopting Python's not-at-all-lisplike scoping. With `let`, you have a visual clue of variable scope; Hy had a broken `let`, and then got rid of it completely. Now it looks like there's a new `let` that might make more sense. I haven't used it in a couple of years.
But that's not the only problem. It has the same feeling I get from Clojure and the various lisps that compile to javascript; it just doesn't feel divorced enough from its parent language to be satisfying in itself. I'd rather write lisp than Python, sure, but except for a few libraries I'd like to be able to use, I don't see much reason to prefer Hy over Racket or some other similarly-mature Scheme, which would have somewhat fewer libraries, but would feel like a consistent and self-contained language.
Do you have your laptop? Good. Write a program that does this while I watch you code and criticize/ask noob questions.
(checks watch)
We don't use that old version here. Sorry, we don't have any company laptops available with it installed. No, install the latest version of X/Y/Z on yours. Let me get the wifi password for you.
(checks watch, leaves room for a really long time, comes back with password, checks watch again)
(while it's installing) So, how long have you been writing in X/Y/Z? Do you have any questions for me?
(checks watch, showing 20 minutes elapsed)
Looks like my 30 minutes are up. I'll go check with #RECEPTIONIST and see if there's anyone else to interview you.
(later, at the wrap-up meeting) I didn't think they had the experience necessary to do the job. They couldn't write a program in X/Y/Z.
I will answer your question through its dual: the people who love LISP and functional programming are, in my experience, people who love maths - as in algebra, etc. You can easily recognize them, because they say weird things such as "this proof is so elegant!"
Functional programming of course maps (heh) very cleanly to this line of thought.
But most people hate maths and this way of thinking. In contrast, you get first-year students "re-discovering" OOP every year - for instance, a common trick to get them to learn design patterns is just to put a problem that calls for one in front of them, and three times out of four, in my experience, they will even come up with a pattern name close to the original one.
I've been trying to introduce my boy (12) to programming through Scratch, but with no big success so far. However, we recently solved two math homework exercises with Prolog (logic) and MetaPost (geometry), which made an impression on him, so I'm thinking of going deeper into those languages, or maybe switching to Logo as an intermediate Lisp-like solution.
Clojure variants are incompatible in various basic ways, since they use the host language for various things. The underlying platform also restricts what the implementation provides. No TCO in the JVM -> no TCO in Clojure. Numbers are internally floats in Javascript -> numbers are internally floats in ClojureScript and use Javascript semantics, not Clojure semantics.
Clojure:
Clojure 1.8.0
user=> (/ 3 4)
3/4
ClojureScript:
cljs.user=> (/ 3 4)
0.75
Looks like these are different languages...
Common Lisp implementations OTOH implement most of the standard. There is also more choice in implementations:
* native implementations AND hosted implementations
* actual Lisp interpreters or a mix of interpreters and compilers, interactive compilers, batch compilers
* compilers written in Lisp itself with good error messages, forms of compile-time type checking/inference, and advanced error handling (like SBCL, CMUCL)
* compilation to C, full embedding into C programs (ECL and others)
* full embedding in C++ (CLASP)
* compilation to shared libraries, embeddable into other applications
* whole program compilers for delivery of compact applications (like mocl)
The main Clojure compiler is written in Java and it shows...
If one targets a certain host environment (JVM, Javascript, ...), then this range of options and choice might not matter, or might even hinder; besides, one gets a poorer version of interactivity.
If we see a Lisp as a language on its own, then it matters a lot.
> Emacs-Lisp is dynamically scoped. That's precisely the kind of mistake (yes, mistake), that hinders large scale anything.
Yup, default-dynamic scope is a mistake, although in the particular case of emacs it has made certain code very clean (and led to plenty of bugs, as well). Emacs has added lexical scoping, which is a start.
Common Lisp's lexical default and optional dynamic scoping provides the best of both worlds.
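A sketch of the contrast:

  ;; Lexical by default: the closure captures X.
  (let ((x 1))
    (defun get-x () x))
  (get-x)              ; => 1

  ;; Dynamic scope is opt-in, via special variables:
  (defvar *depth* 0)   ; DEFVAR proclaims *DEPTH* special

  (defun report-depth () (print *depth*))

  (let ((*depth* 42))  ; dynamically rebound for the extent of this LET
    (report-depth))    ; prints 42
  (report-depth)       ; prints 0 again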
> And who cares Scheme is not Lisp? It's close, and it's reputedly cleaner.
Among other things, its continuations are semantically broken, preventing correct implementation of UNWIND-PROTECT.
It's a good language for its intended purpose (teaching & research), but the language as standardised is not well-suited for building large projects (individual implementations, of course, can be quite good — but then one enters the world of implementation-dependence).
Common Lisp is superior for large projects, in part because it's much more fully-specified, in part because that specification includes more features, in part culturally (because Lisp systems have more often tended to be large and long-lasting).
That entirely depends on the cut of meat - a piece of tenderloin will cut much differently than a well-marbled hunk of ribeye. Part of that is the direction of the grain, part is marbling and other intramuscular tissues, part is simply the density and composition of the muscle itself.
Unlike a regex string, using incorrect syntax with a parser combinator generally results in a syntax error. Regex parsers generally treat "unknown" characters as characters to match, hiding bugs.
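For example, in Python's re module an invalid quantifier is quietly treated as literal text (a sketch):

  import re

  # "{x}" is not a valid quantifier, so re silently treats the braces
  # as literal characters instead of reporting an error:
  print(re.fullmatch(r"ab{2}", "abb"))    # intended repetition: matches
  print(re.fullmatch(r"ab{x}", "abb"))    # typo: None -- it now wants a literal "ab{x}"
  print(re.fullmatch(r"ab{x}", "ab{x}"))  # matches -- the bug hides until runtime

With a combinator library, the grammar is ordinary code, so the same kind of slip is a NameError or TypeError the moment the parser is constructed.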