I do agree with this, but I also don't really understand a lot of the tradeoffs; at least to me, they look like false tradeoffs.
Her first example is excellent. In Haskell we have global type inference, but we've found leaning on it everywhere to be impractical. By far the best practice is not to rely on it: at the very least, every top-level binding should have an explicit type signature.
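For concreteness, here's roughly what that convention looks like (a minimal sketch; the module and names are made up, but `-Wmissing-signatures` is a real GHC warning, and `-Wall` enables it):

```haskell
{-# OPTIONS_GHC -Wmissing-signatures #-}
module Inference where

-- GHC infers a perfectly good type here, but the reader (and the tooling)
-- has to reconstruct it, and -Wall / -Wmissing-signatures will complain:
--   Top-level binding with no type signature: addPair :: ...
addPair (x, y) = x + y

-- The conventional style: annotate every top-level binding, and let
-- inference do its thing for the local bindings underneath.
mean :: [Double] -> Double
mean xs = total / count
  where
    total = sum xs                    -- inferred :: Double
    count = fromIntegral (length xs)  -- inferred :: Double
```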
The second one, structural typing: have your language support both structural and nominal types, then? This is roughly analogous to how Haskell tackled a related problem by adding type roles: nominal types can't be converted to one another, whereas representationally/structurally equivalent ones can. Not that Haskell is the paragon of well-designed languages, but still. There might be some other part of this I'm missing, but given how obvious this solution seems, it's striking that I haven't seen it mentioned.
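To be clear, roles aren't structural typing per se, but here's a minimal sketch of the mechanism I mean (the `Meters`/`Tagged` types are invented examples; `RoleAnnotations` and `Data.Coerce` are the real GHC pieces):

```haskell
{-# LANGUAGE RoleAnnotations #-}
module Roles where

import Data.Coerce (coerce)

-- A newtype is representationally identical to Int, so `coerce` converts
-- for free; this is the "structural" direction, and it lifts through
-- containers like lists.
newtype Meters = Meters Int

toMeters :: [Int] -> [Meters]
toMeters = coerce

-- Forcing the parameter's role to nominal makes the tag part of the
-- type's identity, even though it's erased at runtime.
data Tagged t = Tagged Int
type role Tagged nominal

-- retag :: Tagged Meters -> Tagged Int
-- retag = coerce    -- rejected: Tagged's parameter has role nominal
```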
On dynamic dispatch: let the user customize the dispatch strategy; this is already done today in plenty of languages. Problem solved. Plus, with a whole-program optimizing compiler, if you can live with a large executable, you can have your cake and eat it too.
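What I mean, in Haskell terms (just a sketch with invented `Shape` types): the user picks between a constraint, which the compiler is free to specialise away, and an explicit existential wrapper, which opts into runtime dispatch.

```haskell
{-# LANGUAGE ExistentialQuantification #-}
module Dispatch where

-- "Static" flavour: a class constraint. GHC can specialise and inline the
-- dictionary away at call sites it can see, trading code size for speed.
class Shape a where
  area :: a -> Double

data Circle = Circle Double
data Square = Square Double

instance Shape Circle where area (Circle r) = pi * r * r
instance Shape Square where area (Square s) = s * s

totalAreaStatic :: Shape a => [a] -> Double
totalAreaStatic = sum . map area

-- "Dynamic" flavour, opted into explicitly: the existential wrapper keeps
-- the dictionary around at runtime, so the list can mix shapes.
data AnyShape = forall a. Shape a => AnyShape a

totalAreaDynamic :: [AnyShape] -> Double
totalAreaDynamic = sum . map (\(AnyShape s) -> area s)

-- >>> totalAreaDynamic [AnyShape (Circle 1), AnyShape (Square 2)]
-- 7.141592653589793
```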
On JIT: yes, JIT compilation takes time; it isn't free. But a JIT can make sense even for languages that are AOT compiled, since in general it optimizes code based on observed usage patterns. If AOT loop unrolling makes sense in C, then runtime re-optimization of fully AOT-compiled code ought to be advantageous too. And today you can just about always spare a core for this kind of work: we have so many of them and so few tools that easily saturate them. Even if you can saturate N cores today, you probably won't on the next generation, when you have N+M. Sure, there has to be some overhead when swapping out the code, but I really don't think that's where the overhead she mentions comes from.
Metaprogramming systems are another great example: yes, if we keep them the way they are today, then at the _very least_ we're saying we need some kind of LSP support to make them reasonable for tooling to interact with. But guess what: any language with a community of reasonable size needs an LSP implementation nowadays anyway. Beyond that, there are lots of other ways to think about metaprogramming besides the macros we commonly have today.
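For example (a minimal sketch; `mkConst` and `User` are invented, but Template Haskell and GHC.Generics are the real mechanisms): the macro-style splice is opaque to tooling until it's expanded, while the generics-style approach is ordinary typeclass code the whole way down.

```haskell
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE DeriveGeneric   #-}
module Meta where

import GHC.Generics (Generic)
import Language.Haskell.TH

-- Macro-style metaprogramming: a compile-time splice that manufactures a
-- top-level binding. Until it's expanded, tooling has no idea what names
-- it introduces; this is the LSP problem in miniature.
mkConst :: String -> Integer -> Q [Dec]
mkConst name value =
  pure [ValD (VarP (mkName name)) (NormalB (LitE (IntegerL value))) []]

-- In a downstream module:  $(mkConst "answer" 42)   -- defines answer = 42

-- One of the "other ways": datatype-generic programming. The deriving
-- clause exposes a structural view of User, and generic functions over it
-- are ordinary typeclass code, with nothing opaque for an IDE to expand.
data User = User { userName :: String, userAge :: Int }
  deriving (Show, Generic)
```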
I get her feeling; balancing all of this is hard. One thing you can't really get away from is that all of this increases language, compiler, and runtime complexity, which makes everything much harder to do.
But I think that's the real tradeoff here: implementation complexity. The more you address these tradeoffs, the more complexity you add to your system, and the harder the whole thing is to think about and work on. The more constructs you add to the semantics of your language, the more difficult it is to prove the things you want about its semantics.
But, that's the whole job, I guess? I think we're way beyond the point where a tiny compiler can pick a new set of these tradeoffs and make a splash in the ecosystem.
Would love to have someone tell me how I'm wrong here.