<div class="heading_h4" style="grid-column: span 2; margin:15px 0">Introduction</div>
Nathan Marz
Applying the term “drawback” to Clojure is, like, weird because Clojure is such a flexible language. And until you get to that point, you should be living in pain, right? Our implementation ended up being ten thousand lines of code at the end at scale, which is literally 100X less code than Twitter wrote to build an equivalent, just the consumer product.
Artem Barmin
It's hard to explain that some libraries are actually perfect. They're not bad, there's not much you can do to improve them. For such an amount of data, shouldn't we call it eventually consistent rather than instant?
Nathan Marz
If the job involved that, like you were in some sort of programming circus where you're performing live for an audience, then that interview would be appropriate. I would actually choose JavaScript. I think I would choose JavaScript.
Artem Barmin
Haven’t you thought about, you know, creating an alternative core library for Clojure?<br/><br/>Hey guys, welcome to the seventh episode of our podcast, “Clojure in Product: Would you do it again?”
Vadym Kostiuk
Today we’re meeting with Nathan Marz, the founder of Red Planet Labs. Besides that, Nathan is also the creator of Apache Storm, the creator of an open-source library called Specter, and the author of the book “Big Data: Principles and best practices of scalable realtime data systems”.
Artem Barmin
And also, Nathan and his team created Rama, a platform for building backends at any scale, with a programming model that generalizes event sourcing and materialized views.
Vadym Kostiuk
Rama provides composable building blocks that can be combined in different ways to handle any or all of the computation and storage needs of a backend, greatly reducing the amount of code that is needed.
Artem Barmin
And we also have a question passed along from our previous guest, Yehonatan Sharvit, software architect at CyCognito. The question is: “What part of Clojure, besides macros, do you consistently find challenging to explain to teammates?” <br/><br/>So it will be interesting to hear the answer. Let's begin: map over the questions and macroexpand them into real-life Clojure experience.
<div class="heading_h4" style="grid-column: span 2; margin:15px 0">Full Version</div>
Vadym Kostiuk
Hello, Nathan, and welcome. Thank you for joining us on our seventh episode of “Clojure in Product: Would you do it again?”
Nathan Marz
Good to be here with you guys. I'm looking forward to some hard-hitting questions.
Vadym Kostiuk
Can you please share a bit about your background, how you first encountered Clojure, and what actually first attracted you to the language?
Nathan Marz
I've been using Clojure full time now for 15 years. I started with it in 2009. My journey with Clojure actually starts before that. I would say it started my sophomore year of college. This is 2004, so way before Clojure came out. But Paul Graham had published a post called “Beating the Averages”, where he talked about how they used Lisp. It's a very famous post. <br/><br/>They talked about how they used Lisp at their startup and how that was their secret weapon, how Lisp enabled them to do things that you couldn't do in any other language, and that enabled them to do as a business what other businesses could not do. He talked about a direct connection between those things. And in particular, he was talking about macros, right? Lisp macros as being this very unique thing. So that really compelled me, especially that connection. So I became very interested in Lisp at that point. <br/><br/>Shortly after that, I actually got to meet John McCarthy, which was an amazing experience. The older I get, the more surreal it is that I sat with him for two hours. <br/><br/>But anyway, so basically I had Lisp in my head. So I was primed for it. And then fast forward, and I had been doing stuff with Hadoop, big data, MapReduce stuff. And it struck me that Datalog would be really good on top of MapReduce.<br/><br/>And so I did a project where I made a Datalog compiler that would execute as MapReduce. And I very quickly saw the problems with doing it as a custom language. You want to be able to easily have custom functions in it, and so I had to make this registration system, and it was very complex. But in the back of my mind I had found out about Clojure, this Lisp that compiles to the JVM. And I thought that could be really good. It seemed like with macros I should be able to make a very, very beautiful, fine-tuned API. <br/><br/>That eventually became Cascalog. And that's what I did. So Cascalog was actually my second project with Clojure. And that was the reason I started using Clojure. First, I did a smaller project just to get to know the basics of the language. But then after that, I did Cascalog. And that was also my first big open source project. Then I was kind of off to the races from there. Yeah, that's how I started.
Vadym Kostiuk
That's interesting. And back in 2013, you founded Red Planet Labs with a mission to reduce the cost of building software applications. Can you please tell us a bit about those early days of the company, what you've been working on, and why Clojure, and how it was with Clojure back then?
Nathan Marz
Yeah. Well, just some broad background on Red Planet Labs. Before I started Red Planet Labs, I had done this big open source project called Storm, right? Storm was implemented in Clojure, but it had a Java API and it was primarily used by Java people, just because there are obviously many more of them. <br/><br/>And I kind of had a decision back then in 2013. I very easily could have started a Storm company. I had tons of users all around the world begging me for support and consulting services. And that was a very tried and true model: big successful open source project, raise money, and then build premium features or whatever to monetize it. <br/><br/>But that wasn't so interesting to me, because that would have been mostly about monetization. At the same time, I was working on my book, which was about a broader subject: the theory of building large scale systems end to end.<br/><br/>That's where I coined the term Lambda Architecture. And it's really about how you can think about building large scale systems in terms of pure functions. And so I had some insights there, which I realized formed the outline of a truly next generation platform for building software backends. So that's what I decided to focus on. I didn't really know the details of how it would work, so I knew it would be a research project. <br/><br/>So that's what I started doing in 2013. We announced it for the first time last year, so a little bit over a year ago, and it's called Rama. Rama is a big platform with huge applicability. But to summarize it very, very concisely, Rama generalizes and integrates the ideas of event sourcing and materialized views, such that you can build entire backends with very diverse computation and storage needs on a single platform in much less code than it would take otherwise. <br/><br/>And the reduction in code compared to using traditional technologies, like a relational database for example, increases the higher scale you get. <br/><br/>So as an example of this, last year we built a clone of Mastodon, which is basically the same as the Twitter consumer product. And we built it at scale, actually being able to handle Twitter scale, handling all the difficulties of that product, such as the fact that the social graph is very unbalanced, which creates all sorts of tricky things in implementation. And our implementation ended up being 10,000 lines of code end to end at scale, which is literally 100x less code than Twitter wrote to build the equivalent, just the consumer product. So that was a big example, a true demonstration of what it could do. I'm sure we'll get more into it, but that's the very broad overview of what Rama is.
Artem Barmin
And I'm curious, for your company Red Planet Labs, was the main goal to create Rama, or do you also do some other kind of consulting work or anything else?
Nathan Marz
No, just Rama. That's the goal. Yeah. Now, we do some consulting for the purpose of helping drive Rama adoption. The thing about Rama is it has these huge benefits, but it does have a high learning curve, because it is a paradigm shift. It's way different from anything that's ever existed in the software industry. And so we need to consult to help our users get over the learning curve.
Artem Barmin
So it took 10 years to build Rama, counting from the year your company was founded.
Nathan Marz
Yes. So the first six years was just me doing essentially R&D. It was a brutal process to develop it, because I had to discover the abstractions. And, you know, it was always two steps forward, one step back. And then in 2019, I had figured out what the abstractions were. From there, it was just a matter of engineering it so it could operate as a robust, production-worthy, large scale system. So that's when I raised money and built out the team. And then it took, you know, four years of pretty hardcore engineering to make it a reality from there.
Artem Barmin
Cool, that's really impressive. Can you tell us a bit more about the connection between the Rama project, Red Planet Labs, and Clojure? Did you choose this language in 2013, or maybe a bit later?
Nathan Marz
Yeah, yeah. So I don't think Rama really would have been practical to do in any other language. Of course, it would have been possible to build it in Java, let's say, but it would have taken a lot more than 10 years. There are basically two reasons for this.<br/><br/>One is just the principles of Clojure, especially immutability, which helps a lot on a large project in controlling complexity and making it possible to reason about. The second thing is that in Clojure you have a ton of flexibility for defining abstractions. And Rama, probably more than any other Clojure project ever, really relies on that.<br/><br/>So at the foundation of Rama is actually a new programming language, but it is defined within Clojure, right? It uses Clojure macros to define the language, but it's still Clojure, right? Because it's macros, even though the language has, not totally different, but very different semantics from Clojure. <br/><br/>This new language is a new programming paradigm. It's a dataflow language, and there's a ton to talk about there. We're actually going to publish another blog post specifically about the language in a few weeks. <br/><br/>It's incredible that this can even be defined within Clojure. So it's based on what we call a fragment, which is a generalization of a function. Now, the function has been the core primitive in programming forever, since the beginning, right? You have other things, logic programming languages are a little bit different, but the function is the core, right? <br/><br/>What a function does is: you call it with some input arguments, it does its work, and then the last thing it does is return a value back to the caller. Rama generalizes that. I call that call and response. You call it and then it responds, and that's the last thing it does. <br/><br/>A fragment is instead based on calls and emits. And basically you can think of Rama dataflow, this new language, as reifying continuation-passing style into a language and making it performant.<br/><br/>So basically, with a Rama operation, you don't return back to the caller, you emit to your continuation, which is given to you by the caller. And when you're doing it this way, you can emit many times, or you can emit zero times, or you can emit asynchronously, which is really powerful. And emitting asynchronously is the core primitive that basically makes distributed, parallel code no different than regular code. Everything composes beautifully.<br/><br/>So that's the general idea of Rama dataflow. We expose that as the Clojure API to Rama. You use it to program these large-scale, fault-tolerant, ACID-compliant applications, but we also use it as a general purpose language to implement Rama itself. And so it's this really powerful paradigm. <br/><br/>It's a really powerful language, but we're able to define it in Clojure. And it's all seamless, right? We can use Clojure from Rama dataflow, we can use Rama dataflow from Clojure, and we do that all the time. And this has been essential. <br/><br/>Another example of this, in terms of mixing paradigms, is Specter. So Specter is another library that I open sourced a long time ago. Specter does not have the scope that Rama does, but Specter makes it really easy, very concise, very elegant, and very performant to query and manipulate data structures, regardless of their complexity.
Basically, the origins of Specter came because I was working on the initial Rama compiler and I was getting overwhelmed with how difficult it was to manage information, because the Rama compiler, it's a language, right? So you have different nodes representing different kinds of operations. The nodes have different kinds of fields in them, whether it's declaring a constant or invoking an operation. But then a node could be declaring an anonymous operation, so it has another graph nested within it. And that could be nested, and nested, and nested. <br/><br/>And to build a compiler, a lot of the stuff you have to do is transformations across that abstract graph, right? Which could be deeply nested. And that was nightmarish to do with just vanilla Clojure: so much boilerplate and overhead to handle the iteration, et cetera, et cetera. And so Specter came out of that as a very elegant way to deal with nested data structures. And then it grew to be more than that.<br/><br/>Specter is not only critical for implementing stuff like the Rama compiler, but it's also part of the Rama API now. It turns out to be an amazing API for dealing with durable indexes. Durable indexes in Rama are just nested data structures of any size. <br/><br/>A cool thing about Specter is that it outperforms Clojure itself on equivalent operations. So compare calling get-in, right, one of the core Clojure library functions, with the equivalent code in Specter: it's the exact same thing, just a different name, right? The code is essentially the same. Specter is 30% faster than Clojure. But we're able to do this within Clojure. That's the magic of Lisp: you can mold the language. You can mold the language itself, right?<br/><br/>You can build abstractions which are as good or better than what the core language provides. And no other language besides the Lisp family is that capable. And so that is something we leverage a lot.
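To make the call-and-emit idea above concrete, here is a minimal plain-Clojure sketch. This is not Rama's actual dataflow API, and the function names are invented for illustration; the point is only that an operation receives an emit continuation from its caller and may invoke it many times, zero times, or asynchronously.

```clojure
(require '[clojure.string :as str])

;; A minimal sketch of "call and emit" in plain Clojure -- not Rama's API.
;; A fragment receives its inputs plus an `emit` continuation from the caller.

(defn split-words
  "A function can return only once; a fragment can emit once per word."
  [sentence emit]
  (doseq [w (str/split sentence #"\s+")]
    (emit w)))

(defn lookup-user
  "Emits asynchronously when the (stand-in) remote call completes, instead of returning."
  [user-id emit]
  (future
    (emit {:id user-id :name "demo"})))

;; Composing fragments is just nesting continuations:
(split-words "hello distributed world"
             (fn [word] (println "got word:" word)))

(lookup-user 42
             (fn [user] (println "got user:" user)))
```

And for the Specter comparison he makes, a small example using Specter's published API (com.rpl.specter); the data here is made up for illustration.

```clojure
(require '[com.rpl.specter :as sp])

(def data {:accounts {"alice" {:balance 100}
                      "bob"   {:balance 40}}})

;; The get-in equivalent, expressed as a navigation path:
(sp/select-one [:accounts (sp/keypath "alice") :balance] data)
;; => 100

;; Transforming every balance in the nested structure, no manual iteration:
(sp/transform [:accounts sp/MAP-VALS :balance] inc data)
;; => {:accounts {"alice" {:balance 101}, "bob" {:balance 41}}}

;; The vanilla-Clojure version of the same transform needs explicit plumbing:
(update data :accounts
        (fn [accts]
          (reduce-kv (fn [m k v] (assoc m k (update v :balance inc))) {} accts)))
```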
Artem Barmin
And what is the correct term to describe Rama? Is it a framework? Is it a library? Is it a language? What is it?
Nathan Marz
I would call it a platform, but you program it as a library, right? So there are no custom languages or anything; you can either use the Java API to program it or the Clojure API. And it's cool.
Artem Barmin
But you mentioned that you had an idea to rewrite Rama in Rama itself? I see. That's very unusual, because we usually have people that build systems...
Nathan
No, no. Rama is already written largely in Rama the language. Yeah.
Artem Barmin
You know, that's very unusual. We usually have people that have built some systems... And you actually have a lot of contributions to open source and to this platform. Do you already have some production cases of Rama usage in real life?
Nathan Marz
Yeah. Well, we're currently in private beta, so we haven't publicized anything yet. We're using the private beta as an opportunity to, first of all, get some users into production, which we have, many of them, and also to use their feedback to improve Rama. So a lot of the work we've been doing has been based on that feedback. Some of it is little improvements, API improvements, and other things are big new features. <br/><br/>Yeah, it's been very successful. The most interesting ones are the ones that rewrote their whole platform, the whole application, on top of Rama. So we have one that did that. And I think this is something we're consistently seeing: that even at small scale, the Rama version of the code ends up being about 50% less code, which is interesting. This was a Clojure company that did this. <br/><br/>I think that is interesting, but I think even more impactful is just the reduction in complexity. Essentially, what Rama lets you do is, rather than having to twist your application to fit your backend tooling, to fit the data model of your database, which is fixed and which you can't do anything about, Rama lets you mold your infrastructure to perfectly match your application. So you can get rid of all of those impedance mismatches which traditionally exist.<br/><br/>For example, something like an ORM is pure complexity. It's only a way to deal with an impedance mismatch, and it doesn't really deal with it, because it leaks and creates a lot of problems, as people have discovered over the years. So Rama completely avoids all those problems. Just to give one aspect of that: I talked about molding your infrastructure to fit your application, right? I mentioned databases. Databases have a data model, which is fixed. Document, graph, relational, whatever, right? <br/><br/>Now, a data model is really just data structures, right? It's just a particular combination of data structures. Document is a map of maps. Column-oriented is a map of sorted maps.<br/><br/>Graph is maybe two: one for nodes, a map of maps, and one for edges, right? Whatever, right? And so what Rama does is it lets you define all of your indexes, your materialized views, literally as data structures. But they're durable data structures that can be of arbitrary size. Even your nested data structures can be larger than memory; that's something called subindexing in Rama. And so you're able to define your indexes in exactly the shape that's optimal for your application, right? And you define as many of them as you want. <br/><br/>So maybe for this one use case, you want a map of sets. Maybe for this other use case, you just want a single number, right, where one exists on every partition. Maybe for this other use case, you want a map of sorted maps, of lists of maps, or whatever, right? It completely depends on the use case. And so that's a really powerful thing. That's what I mean: you're able to mold your infrastructure to fit your application rather than the other way around.
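A plain-Clojure sketch of the point that a data model is just a data structure shape, and that a view is a fold over an event log. This is only an illustration of the idea, not Rama's API; in Rama these views would be durable, partitioned PStates rather than in-memory maps, and the event types here are invented.

```clojure
;; Plain-Clojure sketch, not Rama's API: views shaped per use case,
;; materialized by folding over an immutable event log.

(defn apply-event [views {:keys [type user-id] :as event}]
  (case type
    ;; "document" shape: a map of maps
    :profile-edited (assoc-in views [:profiles user-id] (:profile event))
    ;; "column-oriented" shape: a map of sorted maps (per-user time series)
    :activity       (update-in views [:activity user-id]
                               (fnil assoc (sorted-map))
                               (:timestamp event) (:action event))
    ;; single-number view: just a counter
    :signup         (update views :signup-count (fnil inc 0))
    views))

(def event-log
  [{:type :signup         :user-id 1}
   {:type :profile-edited :user-id 1 :profile {:name "alice"}}
   {:type :activity       :user-id 1 :timestamp 100 :action :login}])

(reduce apply-event {} event-log)
;; => {:signup-count 1
;;     :profiles {1 {:name "alice"}}
;;     :activity {1 {100 :login}}}
```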
Artem Barmin
And I'm curious about the largest installation of Rama that you have in production. Maybe you can name some numbers: how many terabytes of data, events, views? Because if we talk about production usage, I can imagine that these materialized views have to evolve a lot. And sometimes we need, for example, to reconstruct the data from the very beginning of the history. I'm curious how that really works in production.
Nathan Marz
So our users range from less than one event per second to about a million events per second. The one that's at a million isn't quite in production yet, but Rama is able to very easily handle that throughput because it is a scalable system. Yeah, you brought up an interesting point: being able to recompute views from scratch. That is something our users are doing, and it's very powerful. It's something that's enabled by event sourcing, right? That's the Rama programming model.<br/><br/>So event sourcing means that rather than directly modifying your indexes like you do with databases, where you modify the current state of the world and then that's permanent, you actually keep a log of all the events that happened, and that log is immutable, and then you materialize views on top of it. And with Rama, you materialize that incrementally by programming these things called topologies. This is what you use Rama's dataflow API to do. There are a few different kinds of topologies, but basically they incrementally materialize the views, right? And you define how you go from depots to these views. The views are called PStates, which means partitioned state. <br/><br/>And so when you have your whole history of everything that happened, you have the ability to recompute your view if you need to. And that's a really powerful thing. That's a big thing you don't have when you use a database, like a relational database. <br/><br/>And it's interesting to us that we see our users doing that on a pretty regular basis. Sometimes it's because they deployed a bug, right? They've essentially corrupted their PState, but they can fix the bug and recompute it from scratch. Other times it's just that they have a new feature they want to do and they'd like it to be historical, but they already have all the data on their log, called a depot, so they can just recompute from scratch. So that's been a really cool thing to see, just to validate that it's such a useful thing that even our early users are doing it so frequently. <br/><br/>Yeah, we're actually doing a new release today. I'll do it shortly after we finish. We made a migrations feature. So recomputing a new view from scratch is great, but it can take a while, right? What migrations let you do is, just like with database migrations, modify your existing view in place. What's really cool about our migrations feature is that it's instant. So even if your view is 10 terabytes, the migration is instant. You just click a button and boom, it's immediately, instantly migrated. <br/><br/>And the way it works: some migrations you might do would be like, let me just change the type of this value within this view, right? Change it from an integer to a string or something. That would be a very basic migration. And the way it works is that you provide an arbitrary transformation function for how you want to modify the targeted values. And what we do is we apply your transformation function on read, which is why it's instant, while we slowly and durably migrate on disk in the background. So it's really cool that it's instant, because this is something that just hasn't existed before.<br/><br/>You know, if you've ever tried to do an ALTER TABLE on a really big relational database, you have to take downtime.
That's unacceptable. You might not even be able to do the migration, and so you might have to do something much more complex, like have your application able to handle multiple versions of the values within it. It gets very crazy. I've had to do that before, and the complexity gets insane.<br/><br/>And so this instant migrations thing is essentially a big leap forward, because now you can deploy these huge migrations and not have to worry about it.
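The instant-migration idea (the transform applied on read while the on-disk rewrite happens in the background) can be sketched conceptually in plain Clojure. This is only an illustration of the approach described above, not how Rama implements it.

```clojure
;; Conceptual sketch only -- not Rama's implementation.
;; Reads apply the migration immediately; a background pass rewrites entries
;; and marks them done so they are not transformed twice.

(def migration (atom nil))   ; {:xform f :done #{keys already rewritten}}

(defn read-value [store k]
  (let [v (get @store k)
        {:keys [xform done]} @migration]
    (if (and xform (not (contains? done k)))
      (xform v)   ; not yet rewritten "on disk": migrate on the way out
      v)))

(defn start-migration! [store xform]
  (reset! migration {:xform xform :done #{}})
  (future
    (doseq [k (keys @store)]
      ;; a real system would make these two steps atomic per entry
      (swap! store update k xform)
      (swap! migration update :done conj k))
    (reset! migration nil)))

;; Change an integer field to a string, "instantly":
(def store (atom {:a 1 :b 2}))
(start-migration! store str)
(read-value store :a)   ; => "1" immediately, whether or not the background pass has reached :a
```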
Artem Barmin
But for that kind of, for that amount of data, shouldn't we call it eventually consistent rather than instant?
Nathan Marz
No, it's instant. It's not eventually consistent, it's immediate. Anyone reading from that view is gonna see the migrated data. And the fact that it's not migrated on disk yet does not change anything about semantics.
Artem Barmin
I've heard a lot in the Clojure world about working with data in an event sourcing style, handling this infinite persistent log of events and building views on top of it. And maybe this question is stupid, but can you compare Rama with Datomic and its approach? Because it sounds kind of similar to me.
Nathan Marz
Well, you don't really control how stuff is indexed in Datomic. Datomic indexes it the way it does, and then you're able to query ranges of that data onto your peers. So that's a huge difference. <br/><br/>Datomic has a data model, and you have to conform to that data model. You have to put data in like that, and that's all it can do. Datomic is a great system. It's very flexible, but it doesn't have the capabilities of Rama. <br/><br/>There are smaller things too, like Datomic not being scalable on writes because it has a single writer thread. Rama is fully distributed; writes happen in parallel and distributed. So they're quite different. There's another product called XTDB, which is, I guess, similar to Datomic. Actually, the user I mentioned that rewrote their application from scratch to be on Rama, they were rewriting it from an XTDB-based application. <br/><br/>We could go through feature sets, right? There are huge differences in reactivity and stuff like that as well. But in terms of the big picture, Rama very explicitly makes a distinction between event sourcing, which is logs, stored one way, and materialized views, which are stored another way. They're both durable and they're separate, but they're connected by your topologies.
Artem Barmin
I see, very interesting. Returning to the choice of Clojure for building such a system, can you name some drawbacks or problems that you faced working with Clojure? You already mentioned one with Specter, that vanilla Clojure makes it painful to work with complex, nested data structures. Maybe something else that you found along the way?
Nathan Marz
Yeah, well, applying the term drawback to Clojure is, like, weird, because it is such a flexible language that you can essentially change the language at the user level, right? So they're not really drawbacks in that respect. I do think Clojure has some weaknesses, and the more experience you get with Clojure, the more you understand those weaknesses, and then you can deal with them. <br/><br/>So Clojure is interesting, especially with performance. And this is an area where we've had to do stuff to handle, I guess, what we consider to be deficiencies in Clojure. And this is not a criticism of Clojure's design, because no language is going to be perfect, right? So for example, lazy sequences: I do not think those should be part of the core API of Clojure. There should be a namespace, clojure.lazy, and it should all be in there. But the default functions for map and filter and whatever should be eager.<br/><br/>The problem with lazy sequences is they add a lot of overhead, even if you are fully realizing them. So they're slow. And even worse than that, if you're not fully realizing them at the call site and there's an exception, and I'm sure you've experienced this, it moves the exception somewhere else, and it's really confusing to debug. <br/><br/>We actually do use lazy sequences legitimately for laziness. I think it's in two spots in our entire code base. The code base is about 200,000 lines of Clojure, just the source code, right? We have another 250,000 lines of tests. So in that whole code base, we use laziness twice, and we do those operations, mapping and filtering, all the time, right? So almost every single call, 99.99%, is eager, and we want them to be eager. We absolutely do not want them to be lazy. <br/><br/>We use clj-kondo for linting, a great tool. And we lint to just disallow them: the build fails if you use any lazy functions, because there's just no reason to. Doing the eager version, using mapv instead of map, is no more work, but it's better, right? It'll be faster, less overhead, and you won't have this issue with stack traces. So that's one issue. But again, not really a drawback, because you can deal with it. <br/><br/>And then there's something I find odd about the language: how it is sometimes very focused on performance, but other times seems to just not care at all. Protocols, transients, the whole implementation of persistent data structures: these are fantastic, high performance features. <br/><br/>But then you have other stuff, like the function satisfies?. That's another thing we lint for: just don't use that function. Instead of using satisfies?, we call extends? on the class. It turns out the cases where we use it were directly extending the protocol anyway, such that it's equivalent for those cases. satisfies? is ridiculously slow, and it could be faster. It could use inline caching or whatever, just like protocols do. But it is super slow. It's frustrating to be looking at a performance profile and satisfies? is using up all the CPU. <br/><br/>I won't even mention last. I strongly disagree with the design of last, because it's always linear time, always O(n). I've read Rich Hickey's rationale for it, and I disagree with it. I think it should be as fast as possible on the data structure it's running on. But fine, he has a rationale for that.
<br/><br/>But then you look at first. first is O(1), but it's a slow O(1), because it goes through the lazy sequence API, so there's a ton of overhead. So in a lot of places in our code base we have to not use first and use a faster version. If the Clojure core library made all the functions as fast as possible, we wouldn't have to do stuff like that. So that's a little bit frustrating sometimes, but you learn and can deal with it. These are not fundamental deficiencies; these are small deficiencies that you can deal with, but it is part of the learning curve for building high performance systems in Clojure. <br/><br/>This stuff doesn't matter if what you're doing is not performance intensive, right, but it matters a lot for us.<br/><br/>Oh, another interesting one I just thought of, and again, we were able to work around this: one of the most common sources of bugs we had in our code base was just a typo when accessing a field of a record, right? Your field is named value and you misspell it for some reason, and Clojure just returns nil, because records are open. You can access whatever you want. <br/><br/>But it turns out that with our records we never use open fields. We only ever want to use the fields that are declared in the record. So we made something called defrecord+, which will actually throw an exception if you try to access a field that isn't there. And that has totally eliminated that class of bugs from our development, which has been really nice. <br/><br/>Again, it's an example of something we find: Clojure's design for records is not great when you're actually trying to use it in a complex code base. But defrecord+ looks exactly like defrecord and it fixes that problem for us. So again, we're able to mold the language to be more optimal for how we're using it.
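For the lazy-sequence point: the eager variants are a drop-in change at the call site, and the build-fails-on-lazy-functions rule can be expressed with clj-kondo's :discouraged-var linter. The config below is one possible way to encode it, not Red Planet Labs' actual configuration.

```clojure
;; Eager by default: mapv/filterv realize results immediately, so any exception
;; surfaces at the call site instead of wherever the lazy seq is finally consumed.
(defn paid-amounts [orders]
  (->> orders
       (filterv :paid?)
       (mapv :amount)))

;; The lazy version defers the work -- and any exception -- to the consumer:
(defn paid-amounts-lazy [orders]
  (->> orders
       (filter :paid?)
       (map :amount)))
```

And a possible .clj-kondo/config.edn entry for it, assuming the :discouraged-var linter and a CI setup that treats warnings as errors:

```clojure
{:linters
 {:discouraged-var
  {clojure.core/map    {:message "Use mapv (eager) instead of map"}
   clojure.core/filter {:message "Use filterv (eager) instead of filter"}}}}
```

The satisfies? workaround and the field-access check can both be approximated in a few lines of plain Clojure. These are illustrative stand-ins, not the macros Red Planet Labs actually uses:

```clojure
(defprotocol Renderable
  (render [this]))

(defrecord Circle [radius]
  Renderable
  (render [_] (str "circle " radius)))

;; satisfies? is flexible but slow; when the type extends the protocol directly,
;; extends? on the class is an equivalent and much cheaper check:
(satisfies? Renderable (->Circle 1))            ; slow path
(extends?   Renderable (class (->Circle 1)))    ; fast equivalent for direct extension

;; A tiny stand-in for the defrecord+ idea: typo'd fields throw instead of returning nil.
(defrecord Payment [amount currency])

(def payment-fields (set (map keyword (Payment/getBasis))))

(defn payment-field [^Payment p k]
  (if (payment-fields k)
    (get p k)
    (throw (ex-info "Unknown Payment field" {:field k :allowed payment-fields}))))

(payment-field (->Payment 10 "USD") :ammount)   ; typo => exception, not nil
```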
Artem Barmin
Haven't you thought about creating an alternative core library for Clojure? Because some languages have different implementations of a standard library.
Nathan Marz
What do you mean, like actually making a clojure.core 2 and just re-implementing it?
Artem Barmin
Yeah, kind of, I don't know. clojure.core.performance? I don't know.
Nathan Marz
Yeah, I mean, I think people have done stuff like that; those are like utility libraries. I think most of the core library is fine, right? It's just a few functions which are bad. A lot of this learning about doing things as efficiently as possible is actually baked into Specter, right? So Specter, as I mentioned, is very, very high performance. Though for something as basic as just calling first, Specter does add a little overhead, so it's never going to be as fast as directly calling a faster version of first.<br/><br/>I don't know about releasing another core library. The core library, that's the core of the language, right? You're not going to replace that.
Artem Barmin
For C, sometimes we can use different implementations of the standard library, and sometimes it works.
Nathan Marz
Yeah. But you can't actually replace the core library in Clojure. It's all just baked into the jar.
Vadym Kostiuk
As you mentioned, you're continuing to develop and improve Rama itself as a technology. Can you think of any lessons you've learned about building larger-scale systems with Clojure that you think are critical for other developers to consider and understand?
Nathan Marz
Yeah, we actually did a post about this a few years ago. It's called “Tour of our 250K line Clojure codebase”. It's now more like 450K lines, but a lot of the stuff we wrote about in that post, I think, still applies: things we learned about managing a code base of that size, all in Clojure. <br/><br/>So I think one thing that's really important is just how you manage types and schemas. I started working on Rama way before spec was released, so very early on we started using this library called Schema, from Prismatic, and we still use it. I think clojure.spec is good, and I also think Schema is good. So we just continue using Schema because we had no reason to switch. But I do think tightly defining what your types are and what the fields within them are is important. <br/><br/>And because everything's dynamically typed, you can do things beyond just a type. You can do stronger things, like predicates, right? This value is not just a number, it's between zero and 10, or stuff like that. And we do that all over the place. And the little things are important: it's important, when you're doing this, to actually validate the types that you're creating. So, this defrecord+ thing that I mentioned: besides the fact that it disallows access to undeclared fields, it also has schema definition built into it. So when you define your field set, just like with Schema, you define what the types are. And it generates a bunch of constructors for actually constructing the type. <br/><br/>So if we create a defrecord+ called Foo, we don't construct it by using the record constructor. We have something called “valid Foo”, and that will actually do type checking. On construction it'll throw an error if there's any sort of schema mismatch. And we integrated it with assertions, so it only does those checks if assertions are on. That way we catch these things during development, but we can turn assertions off for our production release, so you don't have the overhead of those assertions in production. Those assertions are just catching development-time problems. So that's really important, right? <br/><br/>And then you need to think about, in your code base, where do you actually check schemas? It turns out in Rama, besides construction of types, there are a few central places in the compiler where it makes sense to do some schema checking, just to make sure everything's fine. <br/><br/>So that's a big thing, right? I think for programmers that come from a statically typed language, whether it's Java or something else, dynamic typing scares them, right? They think they're losing something by not having static typing, when that's absolutely not true at all. Once you learn how to do it, what you're losing is complexity, the complexity of static typing, and you're gaining a much more powerful type system, because you can do many more things dynamically than you can do statically. But you have to learn to do it. And I think it's a very, very critical part, especially as a code base gets large.
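A sketch of that pattern with plumatic Schema (schema.core): a predicate-strengthened schema and a validating constructor gated on JVM assertions. The names here (UserRec, valid->user) are hypothetical, not what Rama's defrecord+ actually generates.

```clojure
(require '[schema.core :as s])

;; Schemas can be stronger than types: this is "an integer between 0 and 10".
(s/defschema Priority (s/constrained s/Int #(<= 0 % 10) 'priority-in-range))

(s/defschema User
  {:name     s/Str
   :priority Priority})

(defrecord UserRec [name priority])

(defn valid->user
  "Validating constructor (illustrative name): schema-checks at creation time.
   `assert` compiles away when *assert* is false, so a production build that
   disables assertions skips the check entirely."
  [name priority]
  (let [u (->UserRec name priority)]
    (assert (s/validate User (into {} u)))
    u))

(valid->user "alice" 3)   ; => a UserRec
(valid->user "bob" 42)    ; throws a schema error at the construction site (assertions on)
```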
Artem Barmin
I have a question regarding schema usage, because we had a question from one of our previous guests about this. Usually schemas are used on the boundaries of the system: we get some request from an external API and we need to validate it against a schema. Can you tell us a bit more about how deep inside the code base you are using these types, or are you using them only on the boundaries? You mentioned the core of the compiler. It's also interesting where exactly you are validating these types.
Nathan Marz
Yeah, I find it useful to basically default to: when you declare a new type, you give it a schema. So it's basically everywhere. And you can think of it as, within a big system like Rama, there are lots of internal boundaries, right? You have workers and task threads, you have replication, you have depots and PStates and whatever, right? And within the compiler, you have many different kinds of operations, which would be nodes.<br/><br/>And the problem with not putting a schema on things is that when you mess something up (I use the word when, not if, because you are going to mess things up), if you're not catching it at the time you created the type, it is so much harder to debug and track down. Always, in all situations in programming, you want the error to happen as close to the source of the error as possible. The further away it gets, the more difficult it is to figure out.
Artem Barmin
So the answer is everywhere.
Nathan Marz
Pretty much, yeah. I mean, I don't think we use it at literally 100%. There are probably very localized places where it's purely internal to one function, but pretty much everywhere. Because it doesn't add overhead; it doesn't make it more difficult to develop. Okay, you've got to spend a second annotating the types: all right, this is a string, this is a long, I'm done. Right? And then instead of using the normal record constructor, you use the validating constructor, which is just a few more characters. That's it.
Artem Barmin
And as I understand it, these checks are always on, even in production builds?
Nathan Marz
For the Rama releases, we now turn assertions off, and so those checks will not be on. But we're talking about things that are internal, right? So it's important that we catch these problems during development. It does use CPU to do these checks, so we turn them off for the production release.
Artem Barmin
I see. And as I understand it, for these checks to properly work, you need pretty extensive test coverage of the code base. You mentioned that your tests are a pretty similar size to the main code base. Am I right? Do you track test coverage of the code base?<br/><br/>And maybe coverage by schemas, that's also interesting. But I think that could be done with clj-kondo or some other analysis tool.
Nathan Marz
Yeah, we don't track code coverage metrics. You mean actually measure how many lines of code are exercised in the tests?
Artem Barmin
How many lines, maybe, yeah, at least. Not branches.
Nathan Marz
Yeah, we don't specifically check that. It's not a bad idea for us to do something like that. I was also looking at something even more interesting than code coverage, called mutation testing. I don't know if you've heard of it. It might not be practical, but mutation testing is a great idea. The idea is: if you randomly modify your source code, what's the probability that your tests fail, right? That's the real metric of coverage. Just because you're exercising the code doesn't mean you're testing all the different things it needs to handle. <br/><br/>Now, mutation testing, especially with something like Rama, is gonna be tricky, given that we'd kind of have to build that ourselves, because existing tooling is not gonna understand Rama code at all, right? Like, what's a valid mutation that it can do and whatnot. And there are questions about what kinds of mutations it can even do.<br/><br/>So anyway, I don't think that's actually practical, certainly not in the short term for us, but it's an interesting idea. But yeah, testing is super important, and that's not any different regardless of what tool I'm using; testing is important, right? <br/><br/>That's another thing: a lot of people from statically typed languages think that static typing reduces the amount of tests you need to write, and that's completely false. Like 100%, absolutely false.<br/><br/>So nothing changes there. But yeah, with something like Rama, as it's gotten bigger, the rate at which we write test code versus source code has increased. So right now it's, I think, 250,000 lines of test code versus 200,000 lines of source. I'm expecting that discrepancy to only grow. <br/><br/>Rama is a large, complex system, and it needs to provide very strong guarantees. It needs to be able to operate under very bad conditions. So a lot of our tests are doing stuff like chaos tests, where you do things like kill workers randomly, partition workers, and so on and so on. And the system needs to keep on working. There shouldn't be any errors; it should maintain all its guarantees around durability and atomicity and so on. So most of our time is spent doing stuff like that. <br/><br/>And that's actually, talking about strengths of Clojure: Clojure is very strong when it comes to testing, because of the way you can use the language. So as an example of this, there's one technique we use where we use with-redefs a lot in tests. Sometimes it's to do something that's essentially dependency injection, right? Mock out this internal function and have it do something different. And that's fine, and it's very easy to do that in Clojure. But the more interesting usage of it, and we wrote about this a couple of years ago, is that we will put no-op functions into our source code. We have a naming convention to always prefix them with “hook:”, right? So it'd be like hook:finished-replicating-entry or whatever, and the definition is just empty: defn that, and that's it, right? <br/><br/>What that essentially provides is an a la carte event log of everything happening in the system. So we can use with-redefs to actually capture those events, and then we can assert on them. So it's a really powerful technique, which is a really unique way to use Clojure's facilities. Also, in a way, it's almost no code. It's very powerful and very simple. And that's been a very useful technique for us in testing this thing.
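The hook technique in plain Clojure: a no-op seam in the source code that tests turn into an event log with with-redefs. The function and test names below are illustrative, following the hook: naming convention described above.

```clojure
(require '[clojure.test :refer [deftest is]])

;; --- source code: the hook is deliberately a no-op, a seam for tests to observe ---
(defn hook:finished-replicating-entry [& _args] nil)

(defn replicate-entry! [entry]
  ;; ... real replication work would happen here ...
  (hook:finished-replicating-entry entry))

;; --- test code: redef the hook to capture an a la carte event log ---
(deftest replication-emits-expected-events
  (let [events (atom [])]
    (with-redefs [hook:finished-replicating-entry
                  (fn [& args] (swap! events conj args))]
      (replicate-entry! {:id 1})
      (is (= [[{:id 1}]] @events)))))
```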
Artem Barmin
Very interesting technique. I will apply it to my code base, because I use hooks, but in a bit of a different way; these definitions are pretty good.
Nathan Marz
And with Rama, we can do even cooler stuff with hooks. One thing we do sometimes is a no-op fragment, right? It's essentially a fragment that takes in no arguments and just emits one time, right? So when you call it, it does nothing; it immediately comes back. But we can actually redef that to become asynchronous, so we'll introduce essentially a yield, or a pause, into some code.<br/><br/>And that lets us test certain race conditions that could happen, by being able to inject these yields in a place which normally would not yield. That's beyond Clojure, but it's something very interesting that we do pretty frequently.
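A plain-Clojure analogue of that idea: redefining a no-op seam into a pause so a test can control the interleaving. The Rama version uses a no-op fragment redefined to emit asynchronously; this sketch only illustrates the same interleaving-control trick with an ordinary function and a promise, and the names are invented.

```clojure
;; Plain-Clojure analogue of "redef a no-op into a pause" to force an interleaving.

(defn hook:maybe-yield
  "No-op in the source; exists so a test can inject a pause here."
  []
  nil)

(defn process! [state k v]
  (hook:maybe-yield)          ; a test can make this block here
  (swap! state assoc k v))

;; --- in a test: hold the worker at the injected yield, race something else, release ---
(let [gate  (promise)
      state (atom {})]
  (with-redefs [hook:maybe-yield (fn [] (deref gate 1000 nil))]
    (let [worker (future (process! state :a 1))]
      ;; perform a conflicting operation while the worker is paused:
      (swap! state assoc :a :conflicting)
      (deliver gate :go)
      @worker
      @state)))   ; observe how the race resolved
```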
Artem Barmin
Are you using core.async in Rama?
Nathan Marz
We do not use core.async in Rama. Essentially, Rama itself is able to do the kinds of things you would do with core.async, but it's more powerful.
Artem Barmin
I'm curious about, it's not only about technical stuff, but also the cultural side of a Clojure engineering team. Sometimes I've seen cases of overengineering, because Clojure is very flexible and allows you to do a lot of things. You can build a DSL, you can use macros, you can build an embedded DSL using macros, and you can create your own language inside Clojure. Have you faced some cases of overengineering that you need to manage even now, or have you already refactored them?
Nathan Marz
Well, I wouldn't say over-engineering, because I think we're a strong team; we know what we're doing. I mean, the philosophy that I follow with development, which helps a ton in avoiding over-engineering, is basically: don't build abstractions unless you have a really good reason to. So when I say something like “we built our own new language and a brand new, novel programming paradigm from scratch”, I did it because I had to. It wasn't because it sounded cool. <br/><br/>I did it because I had no other choice. I had no intention of building a new language when I started working on Rama, nor did I think I needed to, right? It just turned out that I was forced to. You should always feel forced to build an abstraction. And until you get to that point, you should be living in pain, right? Because you don't have that perfect, beautiful abstraction yet. If you follow that approach, you are going to avoid a lot of the over-engineering that a lot of people do, right? Now, of course we have technical debt. Anyone working on any sort of system over a long period of time is gonna develop technical debt. And I don't think this is unique to Clojure. <br/><br/>Now, you do bring up a good point, though. Because Clojure is a much more flexible language, it gives you more ability to shoot yourself in the foot. And that is true, right? So I do think you need to be a more skilled programmer to work in Clojure. I would say it has a lower floor, but a higher ceiling, right? If you're a good programmer, if you're skilled, then you're able to hit that higher ceiling without going to that lower floor. <br/><br/>And something we do in terms of technical debt at Red Planet Labs is we dedicate one day a month - we call it sweep-o-rama - where all we do on that day is address technical debt, right? There are actually some nice things about doing it this way. Basically, in our Jira we have a label called sweep-o-rama, and when you encounter something that could be better - this could be refactored, this naming could be better because it's confusing how it's named currently - we're able to just punt it: just open a sweep-o-rama ticket for it. And then we all get to it as a team when we get there.<br/><br/>And that's the kind of stuff we do, right? We clean up code, we rename things, we factor stuff out. And even having worked on Rama as long as we have, by just doing this once a month and being diligent about it, we're able to keep the code base clean and elegant. But I think this is true regardless of what language you're using. You have to explicitly think about tech debt and how you're going to combat it.
Vadym Kostiuk
I have a question. You mentioned it quite a few times yourself that Rama is a complex tool and that you have to have a really strong engineering team to build something like Rama. So I'd like to ask you about the team behind Rama. How big is the team? What is your role within the team? How involved are you personally in the day-to-day coding?
Nathan
Well, the team is five right now, so we're a small team. I mean, most of my time I spend on development, right? So we're all developers. We don't have anyone besides developers on the team. So I'm leading the engineering as well as handling all the other stuff that goes along with running a startup.
Vadym Kostiuk
Interesting. And one more question regarding this. Due to its nature, Clojure normally attracts really experienced engineers with varied experiences and approaches. And when you have a bunch of experienced engineers sitting in one room, it's both challenging and rewarding. So how do you approach the technical decision process within the team to ensure smooth cooperation and collaboration?
Nathan Marz
Yeah, well, we're not all in one room, because we're a fully distributed team. But yeah, metaphorically we're all in one room. Basically, the way we approach development is that whatever your project is, you own it and you own the decisions on that project. And you should use the rest of the team, including me, as necessary to make good decisions there. <br/><br/>So one thing we do there, in terms of development process, is that whenever you're about to embark on a bigger project where there's design flexibility or there are choices to make in the design, we use something called a premortem, right? You just write up, briefly: what's the project, what are the risks, and what are your initial ideas on how to approach it? And then at the next team meeting, we'll all look at that and talk about it together and give feedback. But otherwise, the decisions are yours to make. And that has worked for us. Now, mistakes have been made where, after the fact, we see that that wasn't the right way to approach the design, but then we just have to go and change it. And that's worked for us.<br/><br/>Especially if you're on a strong development team, if you're good, you're gonna listen to the feedback that you're getting. And if you're not listening to other people's feedback, well, you're not gonna be an engineer at Red Planet Labs, right? I've worked with people in the past at other companies who were like that, and I just wouldn't hire people like that.<br/><br/>Yeah, I guess that's how I'd answer the question. It's not particularly innovative: you just hire good people, and we have very simple processes in place to provide feedback.
Vadym Kostiuk
And I'm wondering, how do you approach the process of finding the right people? You mentioned it yourself, it's not an easy task, given that you will basically be working with this person day to day on your own product, on your own technology that you'd like to develop. So it's important to qualify the person for the position. How do you do this?
Nathan Marz
Hiring always has been and always will be the most difficult part of running a company. Clojure is interesting, because it's a much smaller hiring pool, but the average Clojure programmer is also much stronger than the average programmer, so that compensates for it. And Clojure programmers generally really care a lot about working in Clojure, and especially working on something like Rama, which is a really interesting project; the fact that you can also do that in Clojure is a very rare opportunity. <br/><br/>So sourcing candidates has never been a problem for us. Still, finding a good candidate, a good person to hire, has always been difficult and frustrating. We have a pretty tough interview process. It involves three take-home projects and a number of interviews. And honestly, even with all that, it's not perfect. I wish there was a better way. <br/><br/>The only way you really know how good someone's gonna be is when you work with them. And so what I care about with the people we hire is that you have a mentality of getting things done, which, unfortunately, I would say most people do not have, even if they think they do. You do whatever it takes, you're resourceful, you just get things done.<br/><br/>Working on Rama: I talk about how using Rama has a learning curve, but I'd say it takes one or two weeks of using Rama before you get over that learning curve, right? Working on Rama has a much, much bigger learning curve. We have very advanced systems we've developed, especially for how we test Rama. We didn't talk about simulation at all, but simulation is this incredibly powerful testing framework that we've built to test Rama.<br/><br/>And learning how to use all this stuff, and learning how the Rama code base is structured, all the different components and all the different ways you can manipulate it, especially in a testing context, is a very, very high learning curve. <br/><br/>Basically, my answer is that I don't have the answer for how you hire great people consistently. I think the take-home projects we do are a hell of a lot better than doing a live coding interview. Because if you're doing that, you're basically just hiring at random. And if you're putting a lot of weight on the school someone went to, it's even worse. <br/><br/>So with our take-home projects, we're at least having them do something which is related to the job they're gonna do.<br/><br/>A live coding interview has nothing to do with the job at hand, right? It's a toy problem with the time pressure of having to perform for someone live. If the job involved that, like you were in some sort of programming circus where you're performing live for an audience, then that interview would be appropriate for that. Otherwise, it makes no sense whatsoever. <br/><br/>Even take-home projects, because a take-home project can't be that big in scope: it's a better measurement, but it's still not a perfect measurement; it has noise. So basically, I wish I knew a better way to hire, but I don't. And I don't really know what other tools are even potentially at my disposal. The only other thing you can do is a work trial, but that's only very rarely possible, because usually when someone's interviewing, they're currently working somewhere else, so they literally can't do a work trial.
That's certainly a better way to hire, but it's a very rare thing that you can do. So, yeah.
Vadym Kostiuk
Yeah, that's actually an interesting thought. Basically, I agree with you. I've participated in numerous interviews that involved live coding. And I've seen really experienced guys in Clojure, and not only Clojure, also JavaScript, Python, who were really good engineers. But because of the time pressure and the necessity to explain yourself to another person while you're doing something, they were just unable to actually perform the task. If you gave them the same task on a normal day, you know, just as a task on a Trello board or a Jira board, they would complete it in 20-30 minutes. But because of these other factors, it just gets harder for them.
Nathan Marz
I've seen the same thing. There are very strong engineers who look like the weakest, like they've never programmed before, in a live coding interview. And it just shows how bad that process is, because they would be a very good hire, but your measurement has so much noise associated with it that you would very wrongly filter them out.
Vadym Kostiuk
The approach that I really kind of like is not when you're trying to ask some deep questions about, I don't know, Clojure or JavaScript or anything else, you know, academic-level questions, but when you give a theoretical task to an engineer, not a coding one, just a theoretical one, and you have a conversation with the engineer and try to hear how they would approach the task, what their thoughts are on this or that, and basically have more of a conversational interview. And this way you kind of test out how your real work with this person will play out in the end, because that's a part of your work with such an engineer.
Nathan Marz
I do that a little bit with the take-home projects. I'll talk with them afterwards about how they approached it. It's still very fuzzy, though. I do think it ultimately comes down to just being empirical about it, right? Whatever your hiring process is. It's amazing that so many larger companies don't do this, but actually track some stats about how people did in their interviews and then how they are as employees later. That will tell you something about your hiring process.<br/><br/>Most people don't do that. And I think for any sort of idea you have for another way to measure how good someone is, you should also try to measure your process and be data-driven about it. And if a company is doing that, it won't take long for them to realize that live coding interviews are completely useless.
Vadym Kostiuk
More importantly, have your own engineers try the live coding tests you're putting candidates through. We actually had such a case with our partners. They were trying to hire engineers, and all the candidates were failing because they couldn't complete the task. Then our partners had their own in-house employees take the test, and they failed too.
Nathan Marz
Yeah, I've heard of that. I believe Google has done that multiple times, and other large companies too. Actually, I don't know, does Google still do those crazy live coding interviews?
Vadym Kostiuk
I know there are about six or eight different interviews you have to go through. It's a really long cycle, but I don't know the specifics.
Nathan Marz
Yeah, I'm just wondering, because they found that exact thing: their own engineers were failing their own interviews. And I don't know if they actually adjusted their hiring process, though.
Vadym Kostiuk
That's interesting. But yeah, I totally agree with you. The best way is to actually have an engineer work with you on a real project, and after some time you will understand whether they're the right person or not.
Artem Barmin
Yeah, it's pretty interesting to hear your thoughts about team building; that's always a question, especially in Clojure. But I want to go back a bit to technical questions and ask you: how do you assess the current state of the Clojure ecosystem? The 2024 plans, conferences, libraries, everything else. Do you think Clojure is growing, stagnating, or declining?
Nathan Marz
I'm not really that in touch with how much it's growing or stagnating, because it doesn't really matter for me personally. For me, Clojure is by far the best language to use to build what we're building. I think the ecosystem is great, though. A big piece of tooling that exists now and did not exist 10 years ago is clj-kondo. That's a great tool, and linting is a great thing to add to your development process to improve the code you're writing. We lint for all sorts of stuff to keep our code clean and consistent, like I mentioned, things like disallowing certain Clojure core functions because they're so slow. We do other stuff too: we ensure that the same namespace is always aliased the same way across the whole code base and that we never use the same alias for multiple namespaces. That helps a lot with code readability; you see the alias, you know exactly what it's going to refer to.<br/><br/>And then there's tons of stuff like that we do just to keep the code consistent. There's a lot of interesting stuff happening in Clojure. Besides Rama, another interesting tool is Electric. It's a really interesting front-end tool; they're trying to push the boundaries of what's possible with front-end tooling, similar to how Rama is pushing the boundaries of what's possible with backend tooling. Some of our users are using both Rama and Electric, so they are really on the cutting edge. I think it's really cool, and it's really good for Clojure that there's such innovative stuff still happening within, or using, the language. <br/><br/>I'm very happy with Clojure, and the nice thing about technology like this is that it doesn't really matter who else is using it. If it's good, it's good and you can use it. It's not like a social network, where who else is using it is the entire value of the system. If the technology is good, you can use it.
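For readers who haven't used clj-kondo, here is a minimal sketch of what lint rules like the ones Nathan describes could look like in a project's .clj-kondo/config.edn. The specific discouraged vars and aliases are hypothetical examples, not Red Planet Labs' actual configuration.

```clojure
;; .clj-kondo/config.edn: hypothetical sketch of lint rules like those
;; described above; the specific vars and aliases are illustrative only.
{:linters
 {;; Flag uses of core functions a team has decided to avoid
  ;; (for example, for performance reasons).
  :discouraged-var
  {clojure.core/merge   {:message "Prefer the team's faster merge helper."}
   clojure.core/memoize {:message "Prefer an explicit, bounded cache."}}

  ;; Require that a namespace is always aliased the same way everywhere,
  ;; so an alias can only ever refer to one namespace.
  :consistent-alias
  {:aliases {clojure.string str
             clojure.set    set}}}}
```

With a config like this, running clj-kondo --lint src (or the editor integration) reports any use of the discouraged vars and any require whose alias deviates from the configured one.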
Artem Barmin
And sometimes there's this objection from people outside the Clojure ecosystem that a lot of libraries are not updated very often, that a lot of libraries are kind of abandoned. It's hard to explain that some libraries are actually perfect. They're not abandoned; there's just not much to do to improve them.
Nathan Marz
Yeah, like the open-source Specter library. I think it had been years since there was an update; I just hadn't had to. Actually, I did merge a pull request two weeks ago, but that was the first one in years. It just didn't need any updates, right? It was fine. So that's not really a big deal.
Artem Barmin
And the last question of our podcast is actually: would you do it again? Would you choose Clojure if you were starting your project in 2024?
Nathan Marz
Yeah. It would be really crazy if, after everything I just said, I said, “No, I would actually choose JavaScript. I think I would choose JavaScript.” Of course I would choose Clojure again. There's no doubt about that.
Artem Barmin
We need this part just to make a teaser, because not everybody will listen all the way to this point of the interview. Okay. Actually, we have a question passed from a previous guest, Yehonatan. He's a software architect at CyCognito, and he asked, “What's the part of Clojure, aside from macros, that you consistently find challenging to explain to teammates?”
Nathan Marz
Aside from macros. That's interesting. Yeah, I'm thinking about what, for new employees who start here, the most challenging things in Clojure are. The thing is, the people we hire are generally already pretty experienced in Clojure by the time we hire them.<br/><br/>There is more advanced stuff that's still macro-related, like whenever you have to use the special &env and &form variables; that's something most people have never encountered, and we've had to do some stuff with that. <br/><br/>I have a pretty esoteric one, though. A big part of how Rama works is that we serialize functions. We'll serialize an anonymous function and actually send it over the wire. What we do is send the class name of the function and then the serialization of the closure. That part of the code base relies very much on internal knowledge of how Clojure works to be able to do that and then reconstruct the function on the other side. <br/><br/>So I'd say for new employees, if they ever have to work with that code, how it works requires some explanation. I don't think any other company has ever had to delve that deep or actually utilize that kind of information about the specifics of how Clojure generates classes, such that we can serialize a function over the wire.
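For readers unfamiliar with them: &env and &form are special variables available inside defmacro, giving access to the local bindings at the call site and to the unexpanded call form itself. A tiny hypothetical illustration, not code from Rama:

```clojure
;; Small illustration of the &env and &form special variables inside defmacro.
(defmacro locals-map
  "Expands to a map of every local binding visible at the call site,
   keyed by symbol. &env's keys are the local symbols."
  []
  (into {} (map (fn [sym] [(list 'quote sym) sym]) (keys &env))))

(defmacro call-site-line
  "Expands to the line number the macro was called from, read from
   the metadata of &form (the literal call form)."
  []
  (:line (meta &form)))

(comment
  (let [x 1, y 2]
    (locals-map))   ;; => {x 1, y 2}
  (call-site-line)) ;; => the line number of this call
```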
Artem Barmin
So you basically capture the context of the function and the code of the function, and you send both of them.
Nathan Marz
It relies on both sides, the sender and the receiver, having the same compilation so that they have the same class names, because Clojure, especially for anonymous functions, gives each one a generated, effectively random class name. So this relies on both sides having the same class name, and then, when you actually look at the class itself and look at the fields, understanding how that's going to translate to the constructor for the anonymous class.
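To make that concrete for readers, here is a minimal, hypothetical sketch of the general technique Nathan describes: serialize an anonymous function as its generated class name plus the values of its closed-over fields, and rebuild it by calling that class's constructor on the other side. This is not Rama's code; it leans on undocumented Clojure compiler internals (field and constructor ordering) and assumes both JVMs have loaded identically compiled code.

```clojure
(ns fn-serialize-sketch
  "Hypothetical sketch: serialize an anonymous fn as its class name plus
   closed-over field values. Relies on Clojure compiler internals and on
   both sides having loaded identically compiled code."
  (:import [java.lang.reflect Field Modifier]))

(defn serialize-fn
  "Returns the generated class name and the values of the instance fields
   the fn closed over (static fields hold Vars/constants, so skip them)."
  [f]
  (let [klass  (class f)
        fields (->> (.getDeclaredFields klass)
                    (remove #(Modifier/isStatic (.getModifiers ^Field %))))]
    {:class-name (.getName klass)
     :field-vals (mapv (fn [^Field fld]
                         (.setAccessible fld true)
                         (.get fld f))
                       fields)}))

(defn deserialize-fn
  "Rebuilds the fn on the receiving side: load the same generated class and
   call the constructor that takes the closed-over values.
   NOTE: assumes field order matches constructor-argument order, which holds
   for classes the Clojure compiler generates but is not a JVM guarantee."
  [{:keys [class-name field-vals]}]
  (let [klass (Class/forName class-name)
        ctor  (->> (.getConstructors klass)
                   (filter #(= (count field-vals)
                               (count (.getParameterTypes %))))
                   first)]
    (.newInstance ctor (object-array field-vals))))

(comment
  ;; Round-trips within one JVM; over the wire the field values themselves
  ;; would also need to be serializable.
  (let [x 40
        f  (fn [y] (+ x y))
        f' (deserialize-fn (serialize-fn f))]
    (f' 2)))  ;; => 42
```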
Artem Barmin
You wrote this code or one of your engineers?
Nathan Marz
One of my engineers wrote it many years ago. We first did this, I think, probably three or four years ago. Yeah.
Artem Barmin
Thank you for the response. So the most complex part of Clojure is serializing functions with Clojure. Yeah, okay, we'll put that on record. Everything else is simple and easy to understand.
Nathan Marz
Yeah, well, it's a difficult question to ask someone who's very experienced in Clojure, who went through the learning curve so long ago and who already hires experienced people. We don't get tripped up by transducers or transients or all that other stuff that might trip up a new programmer.
Artem Barmin
Yeah, okay. And you can ask a question for the next guest.
Nathan Marz
Oh yeah. All right. So for the next guest I'll ask: when was the last time you spent all night working on a Clojure project?
Artem Barmin
Nice. I did this on Friday, I think.
Nathan Marz
Nice, I'm glad to hear that.
Artem Barmin
Till 4 a.m. That's not the whole night, but a big part of it.
Nathan Marz
Yeah. Yeah. My last time was a couple of weeks ago, debugging something.
Vadym Kostiuk
Thank you very much for joining us, Nathan. It's been a true pleasure. Thank you for sharing your insights and your path with Rama, with building the scalable platform, or rather, the platform that helps build scalable solutions. It's a really interesting story of Clojure in product, and I'm sure our listeners will be interested to hear about your path. Thank you very much for that.
Nathan Marz
Yeah, nice talking to you guys.
Artem Barmin
Yeah, that was really interesting. Thank you, Nathan.
Vadym Kostiuk
To our listeners: if you're curious to learn more about Rama and Red Planet Labs, we'll be including links to the company website and social media, so please be sure to check them out.
Artem Barmin
Thank you, Nathan. And to the audience, see you next time. Bye-bye.
Nathan Marz
All right, take care guys.
Vadym Kostiuk
Thank you. Until next time.
In the 7th episode, we speak with Nathan Marz, founder of Red Planet Labs and creator of Apache Storm, about his 15-year journey with Clojure and the challenges of building scalable systems. Nathan shares his experiences with Clojure and his work on Rama, a platform that greatly simplifies backend development.
Our conversation covers testing techniques like with-redefs for debugging, the challenges of hiring Clojure developers, and Rama's approach to event sourcing and materialized views. Nathan explains why Clojure was the ideal choice for Rama and highlights the learning curve his team faces and the need for strong engineering expertise.
Tune in for valuable insights into the complexities of building scalable applications with Clojure and its evolving ecosystem. Subscribe to our podcast for more!