Elixir – Why the dot when calling anonymous functions? (dashbit.co)
222 points by weatherlight on Aug 14, 2023 | 151 comments


For those of you who are interested in Elixir but find the lack of static typing an issue, here are some things to be aware of:

1. Static Typing is planned and currently the top priority of the team

https://elixir-lang.org/blog/2022/10/05/my-future-with-elixi...

2. There is a type checking tool

https://github.com/jeremyjh/dialyxir

3. You can go a long way with pattern matching and guards in the meantime, and get a lot more guarantees than in a typical dynamically typed language.
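For example, a minimal sketch (module and function names made up) of the kind of guarantee meant here: a clause with a guard simply refuses mismatched input at runtime:

    defmodule Price do
      # Only matches a non-negative number; any other argument
      # raises FunctionClauseError at runtime.
      def with_tax(amount) when is_number(amount) and amount >= 0 do
        amount * 1.2
      end
    end

    Price.with_tax(100)    #=> 120.0
    Price.with_tax("100")  #=> ** (FunctionClauseError)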


Interesting, will check it out. I am still evaluating Gleam, a language that targets the BEAM but whose compiler is written in Rust. It has actual strong typing, and I rather like its Rust-like syntax.


I was too, until they strayed from the more Standard ML-like syntax to a Rust-like syntax, all to avoid alienating the Algol crowd and Rust aficionados. For popularity. I like APL, BQN, and J, so I'm not swayed by it.


I really like Gleam, and I am a huge fan of more being built on the EVM.

Personally I want the EVM to (eventually) get the same level of adoption and mindshare as the JVM.

And currently wider Elixir use/education seems like the path to that.


EVM is the Ethereum Virtual Machine. You mean BEAM, right?


EVM stood for "Erlang Virtual Machine" (of which BEAM is one implementation) before Ethereum even existed.


Except the majority of people have used the BEAM name for the last 13 years. Erlang VM or Erlang Virtual Machine is OK too. The EVM name is just super unpopular, idk why.


Erlang's creators use EVM, which is where I learned it from. Not sure why it didn't catch on, either; I guess for a long-enough while BEAM was the only EVM implementation in common use, so it became the Xerox™ or Velcro™ of Erlang virtual machines lol


I do like me some static typing, but it took me a while to realize that guards and pattern matching provide runtime guarantees, not just compile-time guarantees.


In theory, compile-time guarantees render runtime guarantees useless, as long as you don't interact with untyped code.


What's currently the best statically typed language for web development?


ReScript is excellent. Same type system as OCaml, which is IMO the sweet spot between power and straightforwardness. You lose the "you can represent ANYTHING" capability of TS, which is both a pro and a con. In exchange it's much simpler to work with and has excellent type inference for day-to-day work.

The externals binding system for JS libs works well for integrating libraries. And react bindings are included in the standard lib, they also work great. The compiler is fast and produces generally quite reasonable JS.

You could argue that it's not "better" than Elm in a theoretical sense. But for practical work it's much closer to the mainstream of web dev in mindset and syntax. Easier to learn, and much easier to integrate with other frameworks or into existing teams and codebases. And it's less opinionated about rendering: the tooling generally assumes you're using React, but you don't have to, and it can emit anything if you do some extra work wiring it up. I have used it with Node and even, irresponsibly, Deno Fresh.

Been using it whenever I can for frontend stuff for a year now and haven't enjoyed a language this much since... well... elixir.


PureScript has by far the strongest type system of any mature compile-to-JS language, good FFI, and a footprint comparable to React.

Elm is a heavily stripped-down PureScript, plus an ambiguously benevolent dictator-for-life. Also truly terrible interop. It's a good learning environment, but I would strongly recommend against using it for anything big.

ReScript is basically OCaml with half-JS syntax.

Typescript is bizarre. On a scale from 0 (C) to 10 (Haskell), it's a 12, a 6, and a 2 duct taped together. It's got some incredibly powerful features (template literals, conditional types), but it's missing most of the standard "advanced"-but-production-ready type system features (e.g. no HKTs, no type-directed emit) and the foundation is absolutely riddled with soundness holes.


Also worth mentioning Idris, although its tooling and ecosystem are still pretty much nonexistent.


Yeah, that's what I meant by "mature". Agda also has a js backend, but neither one can really stand alone. Haskell is closer, but GHCJS is still very much a second class citizen of the ecosystem.


Clean also has WASM as the compile target via its IR, and its tooling is great. Unfortunately though I couldn't find any sort of a community when I was looking into it, but other than this the language looks quite mature and very industry-oriented. Then there's Lean which also can compile to WASM via some sort of a bridge, it has decent tooling but the ecosystem is still nonexistent.


Typescript, by a large and deserved margin.


It's not sound :(. Type systems should be bomb proof or get out of the way.


Typescript's unsoundness isn't a good thing, but it is way down the list of type-unsafe things inherent in the Boschian nightmare that is frontend development.

Compile-to-JS languages add a big impedance mismatch when you inevitably need to use JS libs. Typescript's JS + types philosophy helps with this issue (it still sucks though).


Should have added other than Typescript :)


Then I would vote Go (best stdlib in the business), then Java.


F# is also part of the OCaml family, has a great to-JS transpiler (https://fable.io/) and F# code can also be used in .NET projects.


Few people on the web side know about it but it's secretly Scala.


The only popular choices: Typescript, Java, Go (although I don't think it is that popular as a webapp backend language) or C# (I think).


Rescript, 100%


Rescript


I'm glad we're in the age of static typing. The age of dynamic typing, as a response to incorrect static typing, was a necessary evil. We had to take a break from broken languages in order to realize what needed to be fixed in them. I'm convinced that JavaScript will eventually be one of the best, fastest, and safest languages, as long as we keep up this cycle of abandoning languages when necessary and returning to them with very thorough fixes, once their problems are much better understood and detailed.


I'm convinced that until JavaScript makes it easy to do the right thing by default (instead of its current state of doing many wrong things by default), the JavaScript ecosystem will continue to have a lot of low-quality library code intermixed with a few high-quality pieces. My neutral regard for JS degraded rapidly when I had to actually start writing code in it and understanding code others have written.


In what ways does it do the wrong thing by default? I can't think of a single one. You might say, well, == is broken and you have to use ===, but that's not "the wrong thing by default" unless you assume that == is what ought to be the default operator and === ought to be secondary. But that assumption has to come from somewhere, and probably comes from the fact that other languages do it that way. There's nothing inherently incorrect about === being the default operator and == being the rare secondary one. I think the same principle applies to any example you might give me: only if you assume that another language is objectively right can you say that JavaScript is objectively wrong in comparison to that language which apparently descended from the heavens.


I would argue that implicit conversions in a dynamically typed language are a mistake in general. They're the "wrong thing by default" in the sense that they make it too easy to write broken code by accident. And I think that's what is usually meant by "wrong thing by default".

I think if you really want implicit conversions, then you want to do what Perl does (never thought I'd say that): different operators for different types.
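Elixir, incidentally, takes the same approach: arithmetic operators accept only numbers, and string concatenation has its own operator, so mixed types fail loudly instead of coercing:

    1 + 2       #=> 3
    "a" <> "b"  #=> "ab"
    1 + "2"     #=> ** (ArithmeticError) bad argument in arithmetic expression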


Well, === was added to make up for deficiencies in ==, so it seems reasonable to say that == is the default that does things incorrectly.


You're assuming a certain sort of perspectivalism: that the only reason X could be better than Y is that, from someone's point of view, some language has X.

This is kinda insane. People can give reasons X is better than Y on grounds of various theoretical virtues of X -- that has nothing to do with a preference for any language.

Here's an example virtue:

Behaviour should be consistent unless specialised. When specialised it should be obvious which specialisation is chosen.

(Violation: basically all of JS's operators.)


It's pretty easy to footgun when you're mixing event/callback based flow with promises/async.

It's really easy to forget a closure in an event driven function and to produce difficult to debug, error prone code.

A classic example I just screwed up:

    import { readFileSync } from 'fs';
    import ssh2 from 'ssh2';

    export function getSFTPConnection(options) {
        let key = readFileSync(options.keyfile_path, 'utf8');
        let ssh = new ssh2.Client();
        return new Promise(
            (resolve, reject) => {
                ssh.on('error', reject);
                ssh.on('ready', () => {
                    ssh.sftp((err, sftp) => {
                        resolve(sftp);
                    });
                });
                ssh.on('close', (err) => {
                    reject(new Error('SSH connection closed: ' + err));
                })
                ssh.connect(
                    {
                        host: options.host,
                        username: options.username,
                        privateKey: key,
                    }
                );
            }
        )
    }
The bug here isn't obvious, at least to me. It caused me much heartache and pain because I was moving too fast and not thinking, but it's a good example of unexpected behavior by default.

The bug is that there's not a closure around the ssh instance, so if you call this function multiple times it'll actually return the same connection instance.

The extra fun part of this bug is that the code worked - but because I was using the same instance, when I'd download multiple files in parallel, it would interleave the data, corrupting the files.

And the thing is, I know better. I've been doing js for decades.

I love js, but stuff like this is pretty frustrating, and can be a nightmare for new devs.


> The bug is that there's not a closure around the ssh instance, so if you call this function multiple times it'll actually return the same connection instance.

I’m not entirely sure what you mean by this, but it doesn’t sound like a correct diagnosis. Each call to `getSFTPConnection` creates a new `ssh2.Client` instance (unless `ssh2.Client`’s constructor does something really weird), and the promise can only resolve with the value `ssh.sftp` passes.

(The error handling does look broken, though – I would expect the 'error' event to be able to fire at any time, and the `ssh.sftp` callback is missing a check.)


Yes, you're 100% correct, I mixed some stuff up when I was posting the code for simplicity.

Bad example!


> The bug is that there's not a closure around the ssh instance, so if you call this function multiple times it'll actually return the same connection instance.

You've either adjusted the code for posting or you misunderstood what the problem was, because no, that's not the case. The ssh variable is local to the getSFTPConnection function and is not reused between multiple calls to the function, and throwing a pointless extra closure in there wouldn't do anything.



It's a huge stretch to claim that === isn't secondary to ==.


== was already a silly operator, === is just ridiculous.


Maybe it's just me but there's something fundamentally unappealing about bolt-on static type-systems on top of pre-existing dynamic languages. Sure, what's come about with e.g. TypeScript is pragmatic and very useful, but if imagining a future utopia then TS/JS++ and similar would not feature in the dream.


It's a nightmare, not a dream, if the language is mutable. Mutability is the worst and least fixable issue with almost every popular language today.


I don't really like TypeScript; I think I would rather just code in JavaScript. TypeScript adds a lot of structure and tries to shoehorn in more of a Java-like programming style. A lot of the type checks seem unnecessary. If I am going to use types, I would rather use Fable or ReasonML, something that really just has types from the start.


Why not? I can't think of anything off the top of my head that I think is fundamentally broken about TypeScript, or even that I strongly dislike about it.


- Too many statements instead of expressions.

- Lacks structural pattern-matching.

- Lacks first-class sum-types (although they can be clumsily encoded as objects with "kind" properties).

- Static type-checking not directly (or at all) contributing to the execution performance of the code. This is particularly egregious with e.g. CPython.

The presence of the first three is the sort of thing people appreciate in Good static languages (as opposed to the status-quo static languages you alluded to, which Python/Ruby/JS/etc. were rebelling against in the 2000s).

The last is the dead giveaway of not a utopia but rather a bizarrely compromised situation.


Depending on who you ask, the fact that the type system is unsound could be considered a form of brokenness.


A small sample:

The existence of `any`; worse yet, the use of `any` in the standard library.

Mutable arrays are treated as covariant: the classic `cats : Cat[] ; animals: Animal[] = cats; animals.push(dog);` problem.

Methods are both co- and contravariant in their arguments by default, which is comically wrong. Member variables which are functions, meanwhile, are handled correctly.

`readonly` is a lie:

    const test: {readonly a: number } = {a: 0}
    const test2: {a: number} = test
    test2.a = 5
`Record<string, string>` is actually `Record<string, string | undefined>`. There's no way for an interface to specify that it really does return a value at any string index (e.g. for a map with a default value).

`{...object1, ...object2}` is typed as the intersection `typeof object1 & typeof object2`, which is not correct.

    const a: {a: 5} = {a: 5}
    const b: {a: 4} = {a: 4}

    const spread = <L, R>(l: L, r: R): L & R => ({...l, ...r})
    const impossible: never = spread(a, b)
You have to resort to bizarre conditional type hackery to control when unions distribute and when they don't.

You can constrain generics as `<T extends string>foo(t: T) => ...` but not `<string extends T>foo(t: T) => ...`, which makes many functions (e.g. Array.includes) much too strict about what they accept.


Its type system is apparently Turing-complete. That's a pretty major flaw, IMO, meaning that static analysis isn't always possible.


Turing complete type systems are extremely common. C#, C++, Java - it's hard to avoid if you have subtyping and generics. In practice it almost never comes up.


Everybody knows elixir is ass, just use Erlang.


Edit: this was meant to be a reply to https://news.ycombinator.com/item?id=37122798 but I messed it up.

I don't see a good reason Elixir wouldn't allow it, since Erlang (yes I know it's not Elixir) often compiles anonymous functions ("Funs") into top-level definitions...

Here's some IR showing how a top-level fun from this Erlang module...

    -module(foo).
    -export([f/1]).

    f(X) ->
      G = fun (Y) -> X + Y end,
      G(X * 4) * G(X * 2).

...gets converted into a top-level function called '-f/1-fun-0-':

    {module, foo}.  %% version = 0
    
    {exports, [{f,1},{module_info,0},{module_info,1}]}.
    
    {attributes, []}.

    {labels, 9}.
    
    
    {function, f, 1, 2}.
        [SNIPPED FOR HN]
        {allocate,1,2}.
        {move,{x,0},{y,0}}.
        {swap,{x,0},{x,1}}.
        {call,2,{f,8}}. % '-f/1-fun-0-'/2
        {'%',{var_info,{x,0},[{type,number}]}}.
        {gc_bif,'*',{f,0},1,[{tr,{y,0},number},{integer,2}],{x,1}}.
        {move,{y,0},{x,2}}.
        {move,{x,0},{y,0}}.
        {move,{x,1},{x,0}}.
        {move,{x,2},{x,1}}.
        {call,2,{f,8}}. % '-f/1-fun-0-'/2
        {'%',{var_info,{x,0},[{type,number}]}}.
        {gc_bif,'*',{f,0},1,[{tr,{y,0},number},{tr,{x,0},number}],{x,0}}.
        {deallocate,1}.
        return.
    
    
    {function, '-f/1-fun-0-', 2, 8}.
      {label,7}.
        {line,[{location,"foo.erl",5}]}.
        {func_info,{atom,foo},{atom,'-f/1-fun-0-'},2}.
      {label,8}.
        {gc_bif,'+',{f,0},2,[{x,1},{x,0}],{x,0}}.
        return.


If you stripped the arity, how would you know how many registers need to be moved into the new call routine's register list, so that reordering, clobbering, etc. at the call site are optimized?


I am a bit rusty on the specifics of the BEAM's registers (the BEAM being the VM that Erlang and Elixir run on), but IIRC the short version is that these are registers in a VM and the VM takes care not to let them clobber.

Also, in the BEAM, intra-module and inter-module calls operate differently.

The Erlang/OTP team did a blog post on this a little while ago which I think is great: https://www.erlang.org/blog/a-brief-beam-primer/

Here is some useful info:

> BEAM is a register machine, where all instructions operate on named registers. Each register can contain any Erlang term such as an integer or a tuple, and it helps to think of them as simple variables. The two most important kinds of registers are:

> * X: these are used for temporary data and passing data between functions. They don’t require a stack frame and can be freely used in any function, but there are certain limitations which we’ll expand on later.

> * Y: these are local to each stack frame and have no special limitations beyond needing a stack frame.


> VM takes care not to let them clobber.

The Erlang VM absolutely clobbers registers within functions all the time, since there are only 64? of them and they are a 'limited' resource. My point is that the logic to efficiently move data around is more complicated when you don't know the arity ahead of time.

You can't, for example, keep a register file as a linear slice. You have to allocate a whole slate of 64 registers on each call with the last (n) registers blanked.


Yes, you’re right.


Why not allow function literals that define multiple arity implementations? Something like...

    def sum(list) do
      plus = { 
        fn -> 0 end ; 
        fn x, y -> x + y end  }

      Enum.reduce(list, plus(), fn x, y -> plus(x, y) end)
    end
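For contrast, the single-arity version that works in today's Elixir passes the initial accumulator as a separate argument, since an fn literal carries exactly one arity:

    def sum(list) do
      Enum.reduce(list, 0, fn x, acc -> x + acc end)
    end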
Also throwback to when I decided to rant about Elixir for some reason 6 years ago, with the final comment:

    Converting a Module Function to a First class function (&Math.square/1): 
      Why do you make me lose the ability to use Elixir's admittedly powerful Pattern matching on arity feature?" 
https://gist.github.com/JacksonKearl/57b617de38b1c647ec41404...


> Why not allow function literals that define multiple arity implementations? Something like...

Perhaps I was unable to get my point across, but that's what the blog post is meant to answer. The TL;DR is twofold:

1. In order for the feature to be worthwhile, we should remove the distinction between name-arity pairs altogether from the language (so module functions and variables effectively exist in a single namespace)

2. However, this double namespace is a core feature of the Erlang VM, so it would require radical changes to it (or you would need a statically typed language with FFI bindings to Erlang in order to "work-around" this efficiently)

Overall Elixir is a Lisp-2 language, with two distinct namespaces, and it requires conversion between functions of those namespaces. It does not have currying, it does not support point-free style, but in practice guards and pipelines help alleviate those concerns.
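A quick sketch of crossing between the two namespaces (borrowing `Math.square/1` from the gist below; assume it's defined):

    # named function (name/arity namespace) -> variable namespace via capture
    square = &Math.square/1
    square.(4)              #=> 16

    # anonymous functions live directly in the variable namespace
    double = fn x -> x * 2 end
    double.(4)              #=> 8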

Regarding your gist, I am honestly not sure if 15 minutes is enough to evaluate a programming language, but in case you want to dig deeper, many questions are answered in the official guides. Here are some quick links:

* On maps vs keywords: https://elixir-lang.org/getting-started/keywords-and-maps.ht...

* On do-blocks and syntax: https://elixir-lang.org/getting-started/optional-syntax.html


Right, I wasn't putting it forward as a sign of my deep investment in the topic. More just a laugh at dumb things I got worked up over in college. Thanks for the links.

That all said, I don't see why you'd need to remove the double-namespace feature in order to have function literals that define multiple arity implementations. The value namespace still has just the single value-land binding to the literal; only the `call` procedure (the .( operator, so to speak) needs to be modified to dispatch to the appropriate function-land name, as determined by arity and the literal the value was bound to.


I see. :D I am lucky my time in college was just before the internet "became permanent". Double lucky that all pictures and recordings from my cover band disappeared with it!

Anyway, regarding the dot, we could make `fun.(...)` dispatch to the correct arity, but that would make every function dispatch slower. However, even if we assume that's an ok price to pay, it wouldn't take long for people to request function capture without an arity, such as `&Foo.bar` (otherwise it would feel incomplete). And this feature would add further penalties as we further postpone the call.

Both would also reduce the amount of compile-time checks we can emit and hurt integration with the overall Erlang ecosystem. On the large scale of trade-offs, I don't think it is worth it. :)


I think the real question we all want answered is why lambdas don't have "do"


Because `do` binds the farthest away. In these examples:

    case some_fun(1) do
    end

    case some_fun() do
    end

    case some_fun do
    end
We all know `do` binds to `case`. Therefore, if `fn` used `do`, these two snippets:

    # this code
    Enum.map list, fn do
    end

    # or this code
    map list, fn do
    end
would both bind `do` to `map`.

Of course we could special-case `fn do`, but having `do` bind to different places based on `fn` would be extremely confusing.


Does anybody actually call Enum.map without parentheses? That's actively discouraged currently.

    Enum.map(list, fn do _ -> :ok end)
is unambiguous.

Honestly I do appreciate not having to type two characters, but I'm currently onboarding a bunch of n00bs and they're very puzzled by this one inconsistency


Well, I didn't have much luck last time I responded to you, but I'll try again since you haven't gotten a response yet!

Regardless of whether or not it's actively discouraged, it still must be supported since Elixir is so heavily bootstrapped. Since `defmodule`, `def`, etc are all just macros written in Elixir, there are no special rules around which functions/macros are allowed to be called without parens. There was some discussion about that on the forums a while ago (like requiring parens for functions and not for macros) and the answer is that that will never happen.


Well, I mean, it wouldn't have been a big deal: if the do block binds to the outermost call, then in most cases you'd just get a compiler error, since usually a private fn/0 doesn't exist. Still, in the early days Elixir more strongly preferred no-parens, so avoiding that is understandable.


True!


Lambda definitions don't take arguments; they jump straight to argument matching.

Take:

    case value do
      true -> "true"
      false -> "false"
      _ -> "uh..."
    end
vs

    fn
      true -> "true"
      false -> "false"
      _ -> "uh..."
    end
`do` is simply syntactic sugar for a keyword list argument containing a `:do` key:

    if(true, [{:do, "hi there"}])
So it wouldn't make sense for `fn` to take a `do`.


    fn do
      true -> "true"
      false -> "false"
      _ -> "uh..."
    end


As much as I despise people responding in pure code, you're right; it's technically `fn/1` [0]. Even though the docs say `fn(clauses)`, `cond/1`, which requires the `do`, also says `cond(clauses)` [1], so it looks like I'm wrong.

[0] https://hexdocs.pm/elixir/1.14/Kernel.SpecialForms.html#fn/1

[1] https://hexdocs.pm/elixir/1.14/Kernel.SpecialForms.html#cond...


>Why do you make me lose the ability to use Elixir's admittedly powerful Pattern matching on arity feature?

If every first-class function had to check for arity at runtime, wouldn't there be a performance cost? It would also drastically reduce the amount of errors caught at compile time. Maybe a different mechanism for dispatching like that (a macro?) could be done; how useful would that be?


There already must be an arity check at some point, no? If the type contained the set of arities the lambda accepts, rather than just the single one, all the same type checking could be done where it's already done.


There's no type checking on function dispatch unless guards are explicitly added. The arity dispatch for functions captured into first-class values is done at compile time and requires explicitness. That said, I think you could create your own dispatch macro that's probably as efficient as possible for doing what you proposed.


As far as I, as an Erlang developer, can say, I think there's a misunderstanding about arity in Elixir (and probably even more misunderstandings about everything else that's borrowed from Erlang). There is no "pattern matching on arity" feature. Normal functions are defined by a name and arity: `foo/1` and `foo/2` are two different functions, not two clauses of the same `foo` function. Erlang makes this clear when it forces you to export both instead of just exporting `foo`; maybe Elixir just tries to hide this limitation. Funs/lambdas are limited by this too; as another comment already showed, funs are sometimes (often?) compiled to normal functions.
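Concretely, in Elixir terms (a minimal sketch; the module and names are made up):

    defmodule M do
      def foo(x), do: x          # foo/1
      def foo(x, y), do: {x, y}  # foo/2 is a separate function, not another clause
    end

    f1 = &M.foo/1
    f2 = &M.foo/2
    Function.info(f1, :arity)    #=> {:arity, 1}
    Function.info(f2, :arity)    #=> {:arity, 2}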


What if you wanted your local function to shadow some of the arities of the outer function but not others?


There's no value/function shadowing in Elixir, so it isn't possible regardless. But one could always just call the outer function from the lambda's innards.


If you look at how anonymous functions are compiled at a low level you'll understand why that's not possible: a function is literally "the module it's in" + the bytecode "line number" + arity (so that it knows how many register items to instantiate). For inlined anonymous functions, a secret private function gets created.


I consider this to be pretty much an implementation detail, though, and not necessarily set in stone. The Function data structure is the public interface, and it does define an arity.


Okay, so make the lambda's binding refer to several of those "structs" and have .( dispatch to the correct one based on the arity at the call site.


That would introduce unnecessary overhead in the basic case. At some level you do want access to the "low level" function (when interfacing with Erlang, for example). If you really want that sort of dispatch, you can write your own polyfunction data type.
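A minimal sketch of such a polyfunction as plain data, dispatching on arity with one map lookup (the names here are hypothetical, not a library API):

    plus = %{
      0 => fn -> 0 end,
      2 => fn x, y -> x + y end
    }

    # Pick the fun whose arity matches the argument list, then apply it.
    call_poly = fn poly, args -> apply(Map.fetch!(poly, length(args)), args) end

    call_poly.(plus, [])      #=> 0
    call_poly.(plus, [1, 2])  #=> 3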


So the answer to "why does Elixir require a dot?" is "because of dubious design choices for the innards", rather than any sort of reason that makes sense based on its actual syntax or features.


As other languages that run on existing environments most likely have done: C#, F#, Clojure, TypeScript, Swift, and many others probably had to make similar considerations. Then at a lower level, if you want to maintain ABI compatibility, that may impose restrictions as well.

Much of the software we write needs to deal with the constraints of its environment. It would be unfortunate if we decided to give all of it the uncharitable description of being defined by "dubious design choices for the innards", even when known, well-defined patterns surface from designing within constraints.


It would be unfortunate if we were unwilling to call design choices dubious and learn from past mistakes, instead of rationalizing why they're actually fine. I don't know Elixir, but you're telling me that in 2023 I'm supposed to learn that a language calls anonymous functions with a different syntax than regular ones, and not be annoyed? Like... that just sucks. If you have to do it that way for unfixable reasons, at least deliver the bad news with an apology instead of a white lie about why it makes perfect sense.


I am completely fine with calling out past dubious design choices. This isn't one of them. That was your labeling, provided with no evidence or reasoning, not mine, and that's what I was criticizing.

The Lisp-1 vs. Lisp-2 discussion is several decades old, with many arguments for the benefits of one over the other. The article highlights some of those trade-offs, which should be clear to anyone willing to get past superficial syntax notes.

However, given how intent you are on putting words in my mouth and on distorting contrasting opinions into white lies, let's call it a day.


This reminds me of when Paul Graham asked a question about Python and someone said they weren't going to do their homework for them.


Ah, I didn't mean to say your take was a white lie. The white lie I referred to was the implicit claim in the article that it had to be this way, rather than an admission that it's working around something unfortunate.


As a developer who came to Elixir from Clojure I had the same reaction to fun.(), “this sucks”. Then someone explained why. So, okay, there is a good reason but “bleh.” I think that lasted maybe a day, by which time I had accepted it as part of the language. It’s just not a big deal when the language offers so much in return. And I liked Clojure.


That's not a dubious design choice. It's actually amazing. How would you design a lambda that can be pickled and rerun across time (run on a different invocation of the vm), or space (sent across a network and executed)?


Well like... the same as it is except allowing multiple arities in the definition.


Calling a function with a dynamic number of arguments would be slower if you pass it as a higher-order function, because the VM would have to figure out at runtime whether it needs to execute plus/0 or plus/2, depending on the number of passed arguments.

I.e. a slowdown without any major benefit (passing a function with a known number of arguments is a far more common use case than an "unknown number of arguments").


The slowdown would be imperceptible with modern branch prediction. Lay out the arity implementations in memory as a linked list, instruct users to make the first one the common case, and you're golden. An index would also work, though it's not really necessary.


Yea, maybe. The repo is otp/erlang. PRs are welcome ;)


An unsolicited PR that adds new syntax and changes the bytecode layout would 100% not be welcome. There are stages to redesigning languages, and that is the last one. But instead of having a discussion about it (stage one), people like to pretend the current implementation is God's gift to mankind and anyone who disagrees should DIY or shut up.


Why? It would be pretty welcome as a proof of concept to start the discussion. Nobody said it would be merged immediately. At the least it could be used to run load tests. You can always start with an EEP if you want to discuss or propose a new feature: https://www.erlang.org/eep


Did you actually read the article?


Did you read the HN guidelines for commenting?


Hopefully when the entire article is an answer to exactly the question you're asking in the comments there is a little leeway.


See the comments in other threads where an actual topic of contention was presented and, accordingly, follow-up discussions could be had. The reason for the HN guidelines is to foster good conversation. "Did you read the article" doesn't.


The followup discussion of the author just summarizing the article for you? Ok.


That's all you've been able to comprehend from reading this thread? Surprising.

I'd tell you to get off your high horse, but it seems to be your identity.

To summarize: the only reason presented not to do what I said is dubious claims about perf, which were not mentioned in the article whatsoever.


The language is OSS, go fix it. Apparently you have the attitude of someone who knows how.


Dang was right; these comments do attract the absolute lowest tier of discussion.


I've barely used the dot syntax since Kernel.then was added. It's good that it exists even if you never use it.


Then is really nice.

I always used to write "clean" pipelines in my GenServers and such, ending in an anonymous function that returned the needed tuple for the respective handlers. Inevitably we'd get a squabble in code review, where some wanted everything put in a variable and then the explicit tuple at the end, while others agreed that the dot syntax, while a bit odd and line-noisy, was ultimately better.

Then solved that. The line-noise complaints went away, code readability went up, and everyone was happy.


Yes, the syntax for piping a value into an anonymous function as the second or later argument of another function was awkward and hard to remember.

Like:

"hacker news" |> (&Map.put(:site, "ycombinator", &1)).()


The real question is why they do Module.function() and then function.() instead of .function()?

It makes no sense to have the dot at the end when calling an anonymous function bound to a name.


`function()` is stored inside a `Module`, so we call `Module.function()`.

An anonymous function "`()`" is stored inside a `variable`, so we call `variable.()`.

It does make some good sense. It's even kinda elegant!
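Side by side:

    sum = fn x, y -> x + y end
    sum.(1, 2)        #=> 3, anonymous fun stored in a variable
    Enum.sum([1, 2])  #=> 3, named function stored in a module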


I've sometimes wondered if I am the only person who actually likes the syntax :D There's a reason for it, but additionally, I like the fact that it's explicit — I can look at `some_call.()` and specifically know it's an anonymous function.


I'm a barewords fan, but even I don't understand the hate towards the dot.

Honestly, it feels like low-hanging fruit, ripe for baiting engagement.

Sure, it would be cool if it weren't there, but does it really materially change anything?


I like barewords in Ruby but they don't hold the same value for me in Elixir. In Ruby I was trying to write code in a way where it didn't matter all that much where stuff was coming from. In Elixir I want to know exactly where stuff is coming from and what it is (like how we generally don't `import Enum` or the like).

Not to say you shouldn't like barewords! It is nice that Elixir enables that possibility for those who want it!


I feel the same way now, although it took a few years of it growing on me.


I'm repeating myself from another thread but ah well! It has that nice parallel to Erlang too where the calls look distinct: `f()` for function call, `F()` for anonymous function call! All that to say that I agree and I also like the dot syntax :)


Oops, it says this in the article. Embarrassing for me :grimace-emoji


When it's explained that way, it makes a lot of sense.


You can do the same thing in Ruby, although I think it's cursed:

    fn = -> x { puts "hello #{x}" }
    fn.('world')
In Ruby, .() is an alias for #call.


And because in Ruby it is a shortcut, you can invoke it on any object:

    irb(main):008:0> class Integer
    irb(main):009:1> def call(b); self.+(b) end
    irb(main):010:1> end
    => :call
    irb(main):011:0> 1.(2)
    => 3
:D


It seems like different issues are being conflated here. Should function arity overloading be permitted, and when? Should we require a specific syntax for calling locally defined "anonymous" functions, for whatever reason? I don't see what these two things have to do with each other.

(Haskell doesn't support function arity overloading, despite what the article suggests. Haskell also doesn't use a special syntax for calling local functions)

Other questions to ask: If we do permit overloading for toplevel functions why not do it also for local functions? If we think a special syntax is needed to use local variables of function type, why not use the same syntax for local variables of all types? E.g. $localvar, $localfunc() vs globalvar, globalfunc().

It doesn't feel fully thought-out.


My goal was to say that Haskell has a single namespace, not that it supports arity overloading, but your interpretation is valid as the text is currently phrased. I will address that (edit: now fixed). However, I don't believe it says Haskell has a special syntax for local calls.

The arity overloading is related (albeit not required!) because it adds to the expressive power of Lisp 1. As the examples show, you need a single function for defining the initial accumulator and the reduction operation, instead of passing two arguments or a composite type (see this example from Clojure [1]). By allowing a single function to encode more information via multiple arities, you also only need to override a single name (which must adhere to all arities). It is also an important characteristic of Elixir, so the article would be lacking if it was not included in the discussion.

[1]: https://clojure.org/reference/reducers#_using_reducers

> If we think a special syntax is needed to use local variables of function type, why not use the same syntax for local variables of all types? E.g. $localvar, $localfunc() vs globalvar, globalfunc().

Elixir doesn't have global variables, so trying to unify from that angle would not necessarily help (it would only make local variables of all types more verbose).


Dijkstra used this notation for his formal work later on[1]. He didn't like invisible infix operators, so he introduced "." as the function application operator and made it left associative. So f.x.y is the same as (f.x).y. I find it very elegant now that I've gotten over the usual knee jerk aversion to new syntax.

[1] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/E...


    Elixir still runs on the Erlang VM, which is
    dynamically typed, so we should not expect any
    meaningful performance gain from typing Elixir code.
This doesn't follow at all. Lots of code runs on the x64 machine which is mostly untyped and still gets performance gains from type information in the source language.

The whole point of a compiler is to have different behaviour between source and target language. If the source language has a static type system that the compiler uses to control code transforms, you get performance/errors regardless of whether the target has a type system.


The key difference between x64 and the BEAM is that the BEAM does dynamic type checking on the fly no matter how thoroughly you type check your code ahead of time. x64 just sees bytes, so if you want to leave off type checking you can, hence the performance gains.

EDIT: Also, when I go to look for this quote I can't find it. Did you somehow end up on last year's announcement of an upcoming type system for Elixir [0]?

[0] https://elixir-lang.org/blog/2022/10/05/my-future-with-elixi...


People did not like that example choice.

The type checking the BEAM does can coincide with the checking the source language does, but that's not fundamental. Say you compile SML to the BEAM. Would you conclude that the compiler isn't able, or allowed, to do anything with the types in the source language?


I'm not sure where you're going with that. I'm not saying that a compilation has to be 1-to-1, I'm saying that Erlang's VM will always spend cycles type checking at runtime regardless of how thorough your compiler is. And I believe that's all José is saying in the article you're quoting from.


The code the VM spends time checking is different if the source language compiler emits different code, such as if it uses type information instead of discarding it, and different code has different runtime performance.


It's in a sibling post about type systems, not in the OP, and I have an off-by-one in where I replied.


Hi Jon, wrong thread but I got it. :D The Erlang VM bytecode contains the type operations in there, such as get_map_key or get_tuple_element or get_list_head. So it would still be checking the types unless we rely on a mechanism for annotating the bytecode with additional information (which they are already using for the JIT). It may be possible in the future but it is not an immediate concern at the moment.


> Erlang VM, which is dynamically typed

> x64 machine which is mostly untyped

Dynamically typed and untyped are not the same, no?


Also: x86 is very much statically typed. Your types are 8-, 16-, 32- and 64-bit words. Each machine instruction knows the exact types of its operands, so there's no overhead on determining the types of values at runtime (like you'd need to do in case of dynamic typing).


It's a tangent but a good one.

Machine code, at least x64/aarch64/amdgpu and the like, cannot be considered statically typed.

1/ there is no static checker

2/ if there was, the interpreter would run the failing code anyway

3/ memory can be integers, floats, machine instructions all at the same time

4/ registers overlap, e.g. writing to an 'i32' probably zeros the adjacent 32 bits in the same register

5/ instructions on the same register sometimes refer to floats and sometimes to integers

It also isn't dynamically typed, though a kernel might kill the program or hardware crash on some operations.

I think it's most usefully considered untyped in that there's no ahead of time verification and there's no runtime checking either.


What is "untyped" exactly? Google tells me it's the same as "dynamically typed".

Remember the context. The thread started with the claim that the Erlang VM makes it impossible to make code faster, due to the need to accommodate Erlang's dynamic typing features and do runtime checks for them. The top comment said that dynamic typing per se can't be the obstacle to programs being faster, and gave x86 as an example of a presumably dynamic machine that does not limit the speed of programs.

So "dynamically typed" in the article meant that there are some runtime checks you can't unsubscribe from, and because of them there's a cap on how much faster you can make your code. That applies to some extent to x86 as well: there are checks there too, like segfaults, and maybe you could save some silicon if you removed them and required the machine code to be verified to never cause segfaults. But I argue that this is too much of a stretch. The runtime checks that x86 performs are negligibly simple, and x86 is not a valid counterexample to the article's point. In my view, the article's point is valid.

> 1/ there is no static checker

"return 0" is the static checker for x86 machine code.

But seriously: this is not a necessary condition for something to be statically typed, it can't be.

> 2/ if there was, the interpreter would run the failing code anyway

Failing code does not exist. All code is valid.

Haskell is undoubtedly statically typed. But division by zero will still cause a runtime error.

> 3/ memory can be integers, floats, machine instructions all at the same time

Ok. Disregard what I said about machine words. The only datatype in x86 is the byte. Some operations use 2 bytes, some use 4. They may interpret them differently (some may see an int, some a float), but it changes nothing, since both int and float operations are operations on bytes.


Spent so long trying to get quote formatting working on mobile that I didn't notice I'd replied to the OP, not the subthread. Bad times.

Whether x64 is typed or untyped feels like the start of an argument about whether xmm registers have the same type as stack memory, which is kind of interesting but orthogonal to Elixir.

I would agree that static, dynamic, and untyped are distinct things. You can compile from a source language with any type system to a target language with any type system, I think; there may be edge cases around excessively constrained languages.

As an extreme example, the target machine says nothing about whether you can constant fold 1+2 into 3 in the source language, but the source language can definitely block or enable that minimal optimisation.


Welcome to Ruby-esque syntax, which nobody likes. It's really what the BEAM needed /s.

Odd that a language that takes its syntax from a language about 50 years old is 10x more syntactically clean than Elixir.


I’ve missed Dashbit posts!! Good to see another one!


People have argued that Elixir could replace Python, but tbh, Elixir’s syntax is too ugly to replace something as elegant and simple as Python.


I mean, have you looked at Python's syntax for closures? You call that elegant?

Any high-level language that is afraid of map/reduce and other functional constructs is a waste of time to me.

Python has a lot of mindshare because it looks simple, but by God if it isn't an absolute kitchen sink of a language full of weirdnesses and gotchas. And still, completely afraid of functional constructs.

Nice bait though.


> I mean, have you looked at Python's syntax for closures? You call that elegant?

You mean anonymous functions surely? The syntax for closures is basically nothing, you just make a function.


> Any high-level language that is afraid of map/reduce and other functional constructs is a waste of time to me

Elixir is afraid of most "functional concepts".


I'll bite: Impossible to write pure functional style code in python, hence it loses any elegance contest right out of the gate. ;)

I work in a Python shop now, and the code I've written over the last few years must be the ugliest of my career. If it's too complicated to be solved with a list comprehension, it will be so much uglier than lambdas in a language with a form of piping.


> Impossible to write pure functional style code in python

What does that even mean?


When I first started writing Elixir, I had no idea how much I would come to love immutable data as a language feature. "Functional" can mean so many things, but for me immutable data as part of the runtime (i.e. not bolted on later) is one of the most important. Never having to ask: did I pass by reference or by value? Never having to worry about data changing out from underneath you. It's pretty amazing.

Sure, there are times when a need for performance calls for mutable data. But for me those times are pretty rare, and when they happen it's usually easy enough to quarantine the mutation.
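A tiny illustration of that guarantee:

    list = [1, 2, 3]
    Enum.map(list, &(&1 * 2))  #=> [2, 4, 6]
    list                       #=> [1, 2, 3], the original is untouched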


They probably mean you need to import some modules to do things like partial functions. But it's not as if functional programming were impossible in Python. I'd argue that Elixir's syntax is more of a hindrance to FP than Haskell's elegant syntax.


No, if importing something were enough it would be fine. It's more that how a lambda can only contain a single expression means you need to split and name things everywhere, and how you can't pipe/chain things means you need intermediate variables or an ungodly amount of nesting, etc. Yeah, technically feasible, but it's not "pythonic", so you solve it some other (to me) less elegant way.
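The kind of pipeline being missed, in Elixir for reference:

    "  hacker   news  "
    |> String.trim()
    |> String.split()
    |> Enum.map(&String.capitalize/1)
    |> Enum.join(" ")
    #=> "Hacker News"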


> It's more that how a lambda can only contain a single expression

That's exactly how lambdas (and all functions) work in actually functional languages like Haskell. And it's a good thing; lambdas shouldn't allow statements.

> how you can't pipe/chain things

Ironically, after switching from Elixir to a language where I can have any operator I wish, I found myself using the equivalent of Elixir's |> only on very rare occasions. IMO they're vastly overrated.


Hmm, you can surely use multiple lets in a lambda in Haskell, as in every other function?

And it doesn't have to be the pipe operator, just something similar. Like list.map().filter().reduce() instead of reduce(filter(map(list))), which quickly gets unwieldy.


> need to split and name things everywhere

It can sometimes be annoying, yes, but when you come back to the code 6 months later it's mostly a good thing.


But in other languages it's clear what the code is doing. If you have a chain of maps, filters, and other operations, they are all named and easy to grok. In a Python list comprehension a lot of things will be going on, without named parts. So then you need to name stuff to keep track: you get foo, then foo_filtered, then foo_sorted, then foo_grouped. And that doesn't really convey anything more than a language where you would write foo.filter().sort().group().


I'd have to disagree. Having used Python professionally since the mid 90s and Elixir since 2015, I think Elixir's syntax is cleaner and easier to read. I also think language features like pattern-matching, guard clauses, immutable data, green processes, and a complete lack of OOP make Elixir a great language to work with.


"i'm not falling for that hot take. that's clearly someone with a fetish for getting yelled at. i refuse to participate in that kind of perversion"


I prefer Erlang to Elixir but I think I'm in the minority. Rubified Erlang has its place I suppose.


IMO that's not Python's great strength, although I like the syntax. Python is great because of the C interop. FFI in Java is horrible, and I assume it's not great in Erlang/Elixir either. In Python? It's pretty damn great. Which is why Python's library ecosystem is so enormous.


Erlang has a well defined and clear layer for integrating with C code, including functionality to help you ensure the code you plug into will be thread-safe and work with the VM's underlying concurrency models: https://www.erlang.org/doc/man/erl_nif.html

Erlang does not have low-ceremony ways of calling C from Erlang, as I believe Python has. It could be done but that would be frowned upon anyway, because it would defeat the preemptive and fault-tolerant guarantees of the runtime.

So I guess it depends on what you mean by great. FWIW, Elixir has no problem integrating the native code from XLA, LibTorch, and Polars that also powers the equivalent libraries in Python.


Elixir C interop is actually pretty pleasant. After Java I was expecting it to be a nightmare, but it was shockingly easy and well thought out.

The big thing in the Elixir community right now is to write the native/performance-critical code in Rust [1] and interop with Elixir.

[1]: https://github.com/rusterlium/rustler


There are equivalent libraries for Zig and Nim too.


Simple? Sure.

Elegant though...



