All of this functionality could be accomplished with simple functions. They may not be as cool, but they are obvious in the calling code and they are easy to search for. Magic stuff like this conflicts with community conventions. New people are going to be perplexed as to what's going on (or worse, they learn from your code and are then perplexed when x[-1] doesn't work on a normal array). Also, you risk confusing various tools in subtle ways.
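For context, this is a minimal sketch of the kind of Proxy trick under discussion — a hypothetical helper (not taken verbatim from the article) that makes `x[-1]` work via a `get` trap:

```javascript
// Sketch of the "magic" being criticized: Python-style negative indexing
// bolted onto an array through a Proxy. Helper name is invented here.
function withNegativeIndexing(arr) {
  return new Proxy(arr, {
    get(target, prop, receiver) {
      // Property keys arrive as strings; detect negative integer indices.
      if (typeof prop === "string" && /^-\d+$/.test(prop)) {
        return target[target.length + Number(prop)];
      }
      return Reflect.get(target, prop, receiver);
    },
  });
}

const x = withNegativeIndexing([1, 2, 3]);
console.log(x[-1]);        // 3
console.log([1, 2, 3][-1]); // undefined -- the trap only exists on the proxy
```

Which is exactly the point above: the behavior is invisible at the call site, and code learned from the proxied version silently breaks on plain arrays.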
I agree. While you can accomplish some really interesting things with Proxy, I think this article presents some poor examples that are actively harmful. I don't expect `delete` to be O(N) (and it's actually worse if you consider the `ownKeys` implementation). I don't expect setting a property (sort) to invoke an expensive sorting operation. The list goes on.
While I appreciate what the author is trying to accomplish, I think this is an example of what not to do as far as code is concerned.
Most of my practical usage of Proxies to date has been limited to some debugging tools that I wrote for AngularJS to help detect property changes that occur outside of a digest cycle. Don't think I've had any use cases that I've shipped to production code yet, though I believe Vue is considering using proxies for their reactivity model in the future to avoid the need for `Vue.get` and `Vue.set` for previously unknown properties or deletions.
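Not the actual AngularJS tooling described above, but a sketch of the general shape such a debugging proxy can take — wrap an object and report writes and deletions that go through the wrapper:

```javascript
// Hedged sketch: a watcher that reports property writes going through the
// proxy. Real change-detection tooling (e.g. around digest cycles) would
// hook into framework internals; this only shows the Proxy mechanics.
function watchWrites(obj, onWrite = console.warn) {
  return new Proxy(obj, {
    set(target, prop, value, receiver) {
      onWrite(`property "${String(prop)}" set to ${JSON.stringify(value)}`);
      return Reflect.set(target, prop, value, receiver);
    },
    deleteProperty(target, prop) {
      onWrite(`property "${String(prop)}" deleted`);
      return Reflect.deleteProperty(target, prop);
    },
  });
}

const state = watchWrites({ count: 0 }, msg => console.log("[watch]", msg));
state.count = 1;    // reported through onWrite
delete state.count; // also reported
```

This is the debugging/inspection niche where Proxies shine: the instrumented object is handed out deliberately, so nobody is surprised by the interception.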
I could see it being extremely useful for inspecting and debugging, but I shudder to think of working on a large codebase with liberal use of proxies for production behavior. You wouldn’t be able to take anything at face value, not even the most innocuous looking expression. Getters and setters are tricky enough as it is.
Metaprogramming is incredibly powerful, and has its place, but it generally comes down to 'defining your own language'. It cannot merely be Javascript any more: one must learn the idioms of your own personal language which Just Happens to run as Javascript.
As a junior dev I used to make maximal use of every language and platform capability I could find. These days, I'd rather just stick to the idioms and conventions of the parent language, because I want neither to maintain a compiler nor teach our junior devs how to use My Custom Language. It's enough to know that those capabilities are there without playing with them constantly.
Boring is simple and maintainable. Best to confine the interesting to the 'what' (which it'll infest anyway) rather than the 'how' (which can at least be kept under control).
Of course it does. The point was to demonstrate the technique and give a trivial enough example to grasp what's going on. You wouldn't use it for this for the reasons you give.
The availability and accessibility of metaprogramming in Ruby is probably the closest one can get to an objective argument that it's a better language than other dynamic languages.
The superpowers metaprogramming gives you are abstraction power tools. If a particular abstraction doesn't really work for a given application, you can use metaprogramming to interface around the abstraction. You usually don't need it, but when you do, the alternative is a painful refactoring mess.
They allow you to quickly sketch out prototype code to get a trivial implementation working, and when you're done you can stuff all the code into a method on a module and call it a day. Tomorrow, when you need to instrument it somehow, pull out variables to be passed in, give them default values, or extend it to work with new formats, you don't have to worry about the language fighting you, and if it does, you can whip out metaprogramming to obliterate the offending semantics, all the way down to BasicObject and eval if need be.
Metaprogramming allows you to mold abstractions like they were clay. In Javascript you just have to suffer if the language fights you. If you see code you can't understand, there's not much that can be done other than to beat your head against it until it works.
You don't have to understand how Ruby code works in order to work easily and effectively with it. All you have to know is that it works; then, to extend it, you can simply build an abstraction around it, even if that comes down to concatenating Ruby strings and then running eval on them. You can solve the problem quickly and dirtily, and then, when time isn't a pressure, you can return to the code and clean up your mess.
Things you simply cannot do in other dynamic programming languages, to say nothing of statically-typed ones. The sheer speed at which you can work can be scary.
This might seem like fanboy worship but I seriously think that the only language that exists today that we'll still be using in a thousand years is a form of Ruby.
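The comment above describes Ruby, but the same quick-and-dirty move — build source code as a string, then eval it — exists in JavaScript too. A sketch of the technique (illustrative only; this generated-code-via-eval style is exactly the kind of "magic" the rest of the thread warns about):

```javascript
// Quick-and-dirty code generation: concatenate source strings, then eval.
// Function name and field-based getters are invented for this sketch;
// fields are assumed to be valid identifiers.
function makeGetters(fields) {
  // Build an object-literal source string with one getter per field name.
  const body = fields
    .map(f => `get ${f}() { return this._data[${JSON.stringify(f)}]; }`)
    .join(",\n");
  return eval(`({ _data: {}, ${body} })`);
}

const record = makeGetters(["name", "age"]);
record._data.name = "Ada";
console.log(record.name); // "Ada"
```

It really is fast to write — and just as opaque to the next reader as the skeptics in this thread predict.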
Lisp has these advantages too, to an infinite degree. It's a wonderful teaching language for this reason: you can literally express any concept any way, introspect upon it, and execute it however.
The downside is that you end up building a custom language in the process... It can make one developer incredibly productive, to an almost superhuman degree, and maybe you can scale to a small team, but inducting others into the priesthood can take a very long time. Code needs a limited number of conventions, patterns and cliches so newcomers can rapidly train their mental dictionaries with the common stuff, so they can see through it to what it actually does.
> You don't have to understand how Ruby code works in order to work easily and effectively with it. All you have to know is that it works; then, to extend it, you can simply build an abstraction around it, even if that comes down to concatenating Ruby strings and then running eval on them. You can solve the problem quickly and dirtily, and then, when time isn't a pressure, you can return to the code and clean up your mess.
Building an abstraction around something requires that you understand it, or at least how it interacts with the wider world, if that abstraction is not to become the next serious problem itself. (Most docs, even the 'thorough' ones, cover inputs and outputs only: environmental and causal dependencies are rarely properly documented and can really hurt.)
Otherwise it's called 'wrapping it in a bin-bag' and should be labelled with 'don't use this anywhere else unless you're prepared to open it and breathe deeply.'
It's generally been my experience that if you don't know how it works then it doesn't work, and rarely does anyone sufficiently time-constrained to do it 'dirtily' the first time ever have the time to come back and do it properly later: if it has to be quick and dirty, it should at least be transparent enough that someone else can deal with it.
[edit] Also, in a thousand years, Lisp will still exist, because absolutely everything else including Ruby is a mere subset.
No one will use it though.
> The downside is that you end up building a custom language in the process
You'll do it in many complex programs in many programming languages. Creating the necessary infrastructure in a programming language is called 'Greenspunning'.
If, for example, we need to work with 'state machines', I would implement the machinery for it in Lisp and give it a nicer source-code user interface via some macro. The effect could be that the particular state machine definition sees a reduction in code size by, say, a factor of ten. Since productivity partly depends on the size of the code -> productivity goes up. Readability and maintainability of the code go up, too.
Now, if we program the same thing in Java or C++ — would we want to hand-code the state machine without any linguistic abstraction, when we would like shorter source code? Probably not. There are several ways to work around this. One would be to have an XML schema describing state machines and a translator to Java code. Another is defining a custom external DSL for it. This approach is quite popular. Now you need to learn Java and your custom configuration language, plus its implementation.
Lisp just has a different solution for it: developers tend to implement a kind of internal, macro-based DSL for these problems. The solution may look different from what Java or C++ uses, but those languages also need to abstract the code. If the code does not get abstracted, then you have the reason why Lisp teams are smaller -> they are more productive. If a C++ program gets large enough, team size may go up ten times... I once heard an example of that from a Lucent manager who worked with a team of a hundred people using Lisp.
Good point, but creating infrastructure doesn't necessarily mean creating idioms on the compiler level: neither DSLs nor macros. I would reject both without solid evidence that a good method and object API in the host language couldn't solve the problem first. My approach in C++, Java, C#, JS, etc would be to provide classes and methods which assist in building the state machine definition. If their existing compiler capabilities can be used to optimise it, so much the better, but I'd stick to the host language unless I had other reasons to delve into building my own language.
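As a sketch of that approach — no DSL, no macros, no codegen, just a small builder class in the host language (class and method names here are invented for illustration):

```javascript
// Plain host-language state machine: the definition is ordinary method
// calls, searchable and debuggable like any other code.
class StateMachine {
  constructor(initial) {
    this.state = initial;
    this.transitions = new Map(); // "state:event" -> next state
  }
  on(state, event, next) {
    this.transitions.set(`${state}:${event}`, next);
    return this; // chainable, so definitions stay compact
  }
  fire(event) {
    const next = this.transitions.get(`${this.state}:${event}`);
    if (next === undefined) {
      throw new Error(`no transition from "${this.state}" on "${event}"`);
    }
    this.state = next;
    return this.state;
  }
}

const door = new StateMachine("closed")
  .on("closed", "open", "opened")
  .on("opened", "close", "closed");

door.fire("open");
console.log(door.state); // "opened"
```

Slightly more verbose than a macro-generated definition, but every line is the host language, with its existing tooling, typing, and debugging intact.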
As a concrete example: some time ago we tried creating a domain-specific language to define some fairly hefty structures in our application. The DSL was more concise and was intended to provide a 'pit of success' for adding new such structures, by adding some extra verification and getting rid of boilerplate which might get miswritten or forgotten.
Some years later we finally found the time to throw out the DSL and replace it with straightforward (albeit slightly more verbose) compiled code with helper methods. Which, funnily enough, was a direct translation of what the DSL interpreter/compiler was doing anyway under the hood.
We threw out an extra build/start-up phase, a few thousand lines of pointless glue, and a massive debugging headache. We gained the strong typing which our platform provides by default.
The mistake we made was that the DSL was merely an abbreviation of code which was otherwise running in the exact same context as everything else. It was a syntax hack. It was something which could be done better by simply making proper use of the idioms and capabilities of our host platform, and ignoring the syntax entirely.
If you don't need to mess with syntax, you probably don't need metaprogramming. And syntax is not really the hard part of programming (if it's consistent); it should be just a minor, necessary hurdle before you get to the important things — so you probably don't need to mess with syntax either. Boilerplate and bloat can be solved with good API design, but only if the pointless repetition really is pointless repetition. If it's not easy to fix with a few helper functions, odds are it's not actually as common and repetitive as you'd like to believe.
Creating an entirely new language just to slim things down a bit is rarely appropriate for a single project.
The caveat here of course is that, with Lisp, there's no difference between macros and API (the attempt to differentiate is itself meaningless). And that's fine, but it's not the case for other platforms which lack Lisp-style macros, and it doesn't help with accessibility. Layering things is an important part of how we think, and sticking to common concepts in a given layer is a useful tradeoff.
(It was rather fun to abuse C# `this[]` getters to force assignment of a function result via the compiler, at one stage. While useful as a temporary refactoring tool, I'm glad that hack has finally left our codebase.)
(The dark side of 'no DSL' is the fluent interface. These are really hard to design well, and most are terrible. The examples always read nicely of course, because they're basically the design documents.)
(I'm also aware that there are valid use cases for building DSLs. I've just never encountered such a use case myself...)
> provide classes and methods which assist in building the state machine definition
Now you shift the complexity to a language which is potentially bloated and less suitable for expressing domain-level concepts concisely. We've seen a lot of OO architectures which try to recover flexibility by providing complex meta-level mechanisms. For a state machine, one would implement a kind of interpreter over an OO data model — or a code generator from that OO model -> greenspunning.
> It was something which could be done better by simply making proper use of the idioms and capabilities of our host platform, and ignoring the syntax entirely
For Lisp this would be natural. One can easily hide an implementation behind a domain-level descriptive representation of the problem. The distance between both is very small. It's actually what I would try to approach: working on a descriptive level with domain concepts and hiding the implementation.
> Creating an entirely new language just to slim things down a bit is rarely appropriate for a single project.
That depends on the 'single project' and its size. Larger applications usually contain a multitude of such tailored notations and machineries to implement them.
> The caveat here of course is that, with Lisp, there's no difference between macros and API
The API consists of exported and documented macros.
> Layering things is an important part of how we think and sticking to common concepts in a given layer are a useful tradeoff.
Sure. If you look at the Common Lisp Object System, it was originally an extension to Common Lisp to implement an object system with classes, functions, methods, inheritance, etc.
The original implementation was in several layers:
The lowest level was a layer of objects: CLOS implemented in terms of itself, like the definition of a class for classes. It's a bit unusual, since it is an implementation in terms of itself.
The next layer was a layer of extensible functions and classes. Like creating classes by calling functions.
The developer layer is a level of macros where, for example, classes are specified via macros. The macros assemble the necessary language constructs - like how to descriptively define classes in a convenient notation.
The layers are documented and the lower layers are collections of protocols over classes and functions.
This layered language approach in Lisp is described in the book 'The Art of the Meta-Object Protocol', short AMOP.