A transducer transforms a reducing function. Its signature is rfn->rfn. The resulting rfn can then be used to reduce/fold from any collection/stream type into any other collection/stream type.
I don't see what your functions have to do with that.
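For concreteness, here is what the rfn->rfn shape looks like in Clojure: a hand-rolled sketch of a map transducer (`mapping` is a made-up name for illustration; `clojure.core/map`'s one-argument arity already provides this).

```clojure
(defn mapping
  "Returns a transducer: takes a reducing function rf and
   returns a new reducing function that applies f to each input."
  [f]
  (fn [rf]
    (fn
      ([] (rf))                                  ; init
      ([result] (rf result))                     ; completion
      ([result input] (rf result (f input))))))  ; step

;; The same transducer works in any reduction context:
(transduce (mapping inc) + 0 [1 2 3]) ; => 9
(into [] (mapping inc) [1 2 3])       ; => [2 3 4]
```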
(HN is going to collapse this comment because the code makes it too long).
My functions are exactly equivalent to transducers.
My link in the original comment goes over it at a more theoretical level.
But if you want runnable code, I've included Clojure below that translates between the two representations. (There's some annoying multi-arity stuff I haven't handled very rigorously, but that's mainly an artifact of the complection in the traditional Clojure representation of transducers, and it goes away when you think of them as just `a -> List<b>`.)
There's a line in the original article where the author doesn't go far enough: "Everything is a fold." Yes, and more than that: folding is not just a function over a list; a fold is a list, and vice versa (this holds for any algebraic datatype). Transducers are just one example of this, where Clojure has decided to turn a concrete data structure into a higher-order function (wrongly, I believe, although I haven't gone to the effort of truly specializing all my functions to verify my suspicion that you can actually get even better performance with the concrete data representation, as long as you use specialized data containers with specialized behavior for the zero- and one-element cases).
(defn tmap
  "Map transducer"
  [f]
  (fn [x] [(f x)]))

(defn tfilter
  "Filter transducer"
  [f]
  (fn [x] (if (f x) [x] [])))

(defn ttake
  "Take n elements"
  [n]
  (let [n-state (volatile! n)]
    (fn [x]
      (let [current-n @n-state]
        (if (pos? current-n)
          (do (vswap! n-state dec) [x])
          [])))))
(defn simple-transducer->core-transducer
  [simple-transducer]
  (fn [rf]
    (fn
      ([] (rf))
      ([result] (rf result))
      ([result input]
       (reduce rf result (simple-transducer input))))))
(defn core-transducer->simple-transducer
  [core-transducer]
  ;; conj preserves output order (cons-ing onto the accumulator
  ;; would reverse multi-element outputs) and yields a vector
  (fn [x]
    ((core-transducer conj) [] x)))
(defn catcomp
  ([f g]
   (fn [x] (mapcat g (f x))))
  ([f g & fs]
   (reduce catcomp (catcomp f g) fs)))
(def example-simple-transducer
  (catcomp
    (tmap inc)
    (tfilter even?)
    (tmap inc)
    (ttake 2)))
(defn example-simple-transducer-manual
  [x]
  (->> ((tmap inc) x)
       (mapcat (tfilter even?))
       (mapcat (tmap inc))
       ;; Stateful transducers are hard to work with manually:
       ;; you have to define them outside of the function to maintain the state.
       ;; This is true for traditional transducers as well.
       ;; (mapcat (ttake 2))
       ))
(def example-core-transducer
  (comp
    (map inc)
    (filter even?)
    (map inc)
    (take 2)))
;; Yields [3 5]
(into [] (simple-transducer->core-transducer example-simple-transducer) [1 2 3 4 5])

;; Also yields [3 5]
(into [] example-core-transducer [1 2 3 4 5])

;; Yields [3] -- built fresh here, because the (ttake 2) state inside
;; example-simple-transducer was already exhausted by the reduction above
((catcomp (tmap inc) (tfilter even?) (tmap inc) (ttake 2)) 1)

;; Also yields [3]
((core-transducer->simple-transducer example-core-transducer) 1)
Maybe not "elegant", but it's quite a readable compiler implementation compared to what I have seen. And which (real-world) compiler has an "elegant" implementation anyway?
It's quite different: the way you model things is entirely different, even though both are able to deliver highly concurrent computation.
A process in Elixir has an id and state, and messages are sent to its address. The messages are queued inside the process and handled one at a time.
One process can spawn another, and so on.
It's more like white collar workers sending emails to each other.
In core.async, a process is anonymous: it doesn't have an id or an address, and you cannot send messages to it.
Instead a process is more like a worker on an assembly line with conveyor belts.
What matters are the conveyor belts and what's on them: where things go from one belt to the next, and what happens to the things as they flow through. You can have multiple workers working a belt, and if something jams, dependent belts stop. The belts are called Channels.
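The belt picture can be sketched with core.async directly (a minimal sketch, assuming core.async is on the classpath; the channel and the doubling worker are made up for illustration):

```clojure
(require '[clojure.core.async :as a])

;; Two conveyor belts (channels) with small buffers
(def belt-in  (a/chan 10))
(def belt-out (a/chan 10))

;; An anonymous worker: no id, no address. It just takes items
;; off one belt and puts results on the next until the belt closes.
(a/go-loop []
  (when-some [item (a/<! belt-in)]
    (a/>! belt-out (* 2 item))
    (recur)))

(a/>!! belt-in 21)
(a/<!! belt-out) ; => 42
```

Multiple such go-loops can read from the same channel (multiple workers on one belt), and a full buffer blocks upstream puts (a jam stopping dependent belts).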
Sorry, these claims are just not true. AI generations in these categories are impressive on release, but blatantly generic, recognizable, predictable and boring after I have seen about 100 or so. Also, if you want to put them to use to replace "real work" outside of the ordinary/pretrained, you hit limitations quickly.
The scaling laws of the Internet were clear from the start.
Your example is a better search engine. The AI hype, however, is the promise that it will be smarter (not just more knowledgeable) than humans and replace all jobs.
And it isn't on the way there. Just today, a leading state-of-the-art model, one that supposedly passed all the most difficult math entry exams and whatever else they "benchmark", reasoned with the assumption of "60 days in January". It would simply assume that and draw conclusions, as if that were normal. It also wasn't able to correctly fill out all possible scores in a two-player game with four moves and three rules that I made up. It would get them wrong over and over.
It's not a better search engine; it's qualitatively different from search. An LLM composes its answers based on what you ask it. Search returns pre-existing texts to you.
Such interview processes are big red flags. The company can't afford to take a risk with you, and at the same time it tests how desperate you are by making you work for free. They are likely short on cash and short on experience. Expect crunch and bad management. Run.
As I understand, they're less stressed about having things at precise points in time and are fine with waiting. I guess they would say something like "13 zero zero" when necessary...