It's a backend development platform that can handle all the data ingestion, processing, indexing, and querying needs of an application, at any scale. Rather than construct your backend from a hodgepodge of databases, processing systems, queues, and schedulers, you can do everything within Rama on a single platform.
Rama runs as a cluster, and any number of applications (called "modules") are deployed onto that cluster. Deep and detailed telemetry is also built-in.
The programming model of Rama is event sourcing plus materialized views. When building a Rama application, you materialize as many indexes as you need as whatever shapes you need (different combinations of durable data structures). Indexes are materialized using a distributed dataflow API.
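To make "event sourcing plus materialized views" concrete, here is a tiny single-process sketch of the idea in plain Java — this is not Rama's API and has none of its distribution, durability, or dataflow machinery; the class and record names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy event-sourcing sketch (illustrative only, not Rama code):
// an append-only event log is the source of truth, and an index
// ("materialized view") is derived from it incrementally.
public class EventSourcingSketch {
    // An event: money deposited for a user.
    record Deposit(String userId, long amount) {}

    // The append-only log of all events ever received.
    static final List<Deposit> log = new ArrayList<>();

    // A materialized view shaped for queries: current balance per user.
    static final Map<String, Long> balances = new HashMap<>();

    static void append(Deposit d) {
        log.add(d);                                        // 1. record the event
        balances.merge(d.userId(), d.amount(), Long::sum); // 2. update the view
    }

    public static void main(String[] args) {
        append(new Deposit("alice", 100));
        append(new Deposit("alice", 50));
        System.out.println(balances.get("alice")); // prints 150
    }
}
```

The point of the pattern: because the log is the source of truth, you can materialize as many differently shaped views from it as your queries need, and rebuild or add views later by replaying the log.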
Since Rama is so different from anything that's existed before, that's about as good a high-level explanation as I can give. The best resource for learning the basics is rama-demo-gallery, which contains short, end-to-end, thoroughly commented examples of applying Rama to very different use cases (all completely scalable and fault-tolerant): https://github.com/redplanetlabs/rama-demo-gallery
What do you mean by "platform"? Is this open source? Can I run everything locally?
Is this basically an RDBMS and Kafka in one? Can I use SQL?
I understand the handwaving around programming semantics, but I'd like clearer explanations of what it actually is and how it works. Is this a big old Java app? Do you have ACID transactions? How do you handle fault tolerance?
It may be early, but I believe folks will be curious about benchmarks. And maybe, someday, Jepsen testing.
Can you please elaborate more on the open source aspect of this?
Will it be an industry-revolutionizing open-source project like containerd (Docker) that every small developer and garage dev can build upon, or will it benefit only the big tech corporations that control the power and money to pay for it?
Especially since you chose the name Rama, I am wondering whether this will be for the benefit of all, or only for the benefit of the few who already control more than their fair share of power (finances)?
I like this description. Most on-point one I've seen in the thread and your docs. So it's not really a tool to use, but more of a framework to follow. Wouldn't be the first framework to provide tools / setup, processes, and workflows at a better-than-ever tradeoff of features/complexity/skill floor/etc.
But yeah, quite a lot of hype and red flags. My favorite from the website: "Rama is programmed entirely with a Java API – no custom languages or DSLs."
And when you look at the example BankTransferModule.java:
> .ifTrue("isSuccess", Block.localTransform("$$funds", Path.key("toUserId").nullToVal(0).term(Ops.PLUS, "*amt")))
Yeah, it's probably fair to call that a DSL, even if it's entirely Java.
Anyway, hope to get the chance to work with event based systems one day and who knows, maybe it will be Rama.
I consider a DSL something that has its own lexer/parser, like SQL. Since Rama's dataflow API is just Java (there's also a Clojure API, btw), you never leave the realm of a general-purpose programming language. So you can do higher-order things like generate dataflow code dynamically, factor reusable code into normal Java functions, and so on. And all of this is done without the complexity and risks of generating strings for a separate DSL, like you get when generating SQL (e.g. injection attacks).
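The string-DSL risk described above can be sketched in a few lines of plain Java (this is a generic illustration, not Rama code; the names are invented). When logic is assembled by splicing user input into program text, the input can rewrite the program; when logic is composed as ordinary language values, the input stays plain data:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch: composing logic as Java values vs. building
// a query by string concatenation into a separate DSL (SQL).
public class ComposeVsStrings {
    // String-based DSL: user input is spliced into the query text itself,
    // so crafted input can change the query's meaning (injection).
    static String sqlFor(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    // General-purpose-language composition: the "query" fragment is an
    // ordinary value (a predicate) that can be generated and combined
    // dynamically, and user input never becomes program text.
    static Predicate<String> nameEquals(String userInput) {
        return name -> name.equals(userInput);
    }

    public static void main(String[] args) {
        String evil = "x' OR '1'='1";
        // The input has rewritten the WHERE clause:
        System.out.println(sqlFor(evil));
        // The predicate just compares against the literal string:
        List<String> users = List.of("alice", "bob");
        long matches = users.stream().filter(nameEquals(evil)).count();
        System.out.println(matches); // prints 0: input stayed data
    }
}
```

This is the sense in which staying inside a general-purpose language avoids a whole class of code-generation risks: there is no second parser for hostile input to confuse.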
Not quite, though it's not like anything which is widely known.
I worked (a small bit) on very similar (proprietary, non-public, internal) systems ~5 years ago, and at the time read blog posts about the experiences some people had with similar (also proprietary, internal) systems which at that point were multiple years old ...
I guess what is new is that it's something you can "just use" ;=)
Yes and no; to some degree it's the round trip back to the "let's put a ton of application logic into our databases, and then you mainly only need the database" times.
Just with a lot of modern technology around scaling, logging, etc., which hopefully (I haven't used it yet) eliminates all the (many, many) issues this approach had in the past.