The Clean Architecture (2012) (8thlight.com)
82 points by Flenser on April 28, 2017 | 80 comments


Prudent to remember that Robert Martin is a consultant, and it's lucrative to sell enterprises a story where their domain logic is a billion-dollar asset and following his very particular design pattern will enable them to seamlessly evolve that asset with new technology.

Here's where the story breaks:

- Most apps' domain objects and rules are small. You could describe them on a few sheets of notebook paper. The technology implementation (e.g. database or HTTP logic) is a much larger portion of the codebase.

- In most teams, especially startups, the domain rapidly evolves. The technology stacks (databases, SQL, etc.) are quite stable and have already been heavily abstracted for reuse over the years. Using these proven technologies and excellent existing interfaces is how you go fast.

- Technology choice isn't just about code interfaces. Each technology comes with its own assumptions about system behavior and its own theory of operation. Plugging a different implementation into your storage adapter interface is the smallest piece of work you need to think about, unless it's nearly a 1:1 swap like postgres -> mysql.

So following a strict clean architecture approach will probably pour concrete around something that changes all the time, and create friction and extra work when using excellent available technologies.

Instead I'd rather follow pragmatic design guidelines:

- The concept of each component should be clear

- Sensible responsibilities for each component

- Abstractions should serve the composability of your existing design; introduce adapter layers if you truly have more than one implementation

- Keep it simple; optimize for reading the code end-to-end and easier refactoring

These aren't inconsistent with Clean Architecture, but they're probably more productive than adding religious rules.


The design pattern in question is not all that special in the global sense. It is special in the J2EE/.NET/Enterprise world because it should allow you to automatically generate most of the UI, in the same way that e.g. Flask-Admin does, while also taking all the business rules into account. One of the things I'd like to do is implement a subset of Apache Isis (which is an implementation of essentially this pattern for J2EE) in Python on top of Flask.


Automatically generating a UI from a domain may please a programmer but isn't likely to please a user. That's how you get UIs like the one Salesforce has.

That could be the right choice in some situations -- it certainly worked out well for Salesforce and the extensibility of their model. However, if you want to build an app with consumer usability you'll be putting a lot of unique development effort into your view layer.


Reasonable UX for a consumer-facing application is something completely different from what you want for the back office. And these architectures target applications where the back-office part is the majority of the UI.


> Most apps' domain objects and rules are small. You could describe them on a few sheets of notebook paper. The technology implementation (e.g. database or HTTP logic) is a much larger portion of the codebase.

That's not true. Poorly understood business logic is where most of your code is. Once you create and depend on a business object, it is really difficult to change. Because you cannot change it, you end up writing more code...

Your code will largely reflect organisation and process. The only way to have "Clean Architecture" is when business owners are fully committed to the project and willing to adapt/change the organisation. But because it is easier to change code than people, we end up with multi-million-LOC projects in COBOL.

This blog post is for enterprise managers who believe in bullshit graphs and layers. Technology is important because it will guide you towards a solution. People selling ideas like "You can swap out Oracle or SQL Server for Mongo" to my corporate masters should be hanged. Next year I will have a project to change databases, CP to AP, thanks to people like the author.


The domain does indeed rapidly evolve, but the technology stacks that you list ("databases, SQL, etc.") are the areas where the least reversible decisions tend to get baked in, so it seems like it would be wise to keep the rapidly evolving domain separate from those areas.


Yes, you are right. I didn't mean to advocate mixing high level and low level logic, different kinds of concerns together in one component. I just take issue with the Clean Architecture bullseye diagram and its preoccupation with layering, classifying those layers, and strict rules about who can refer to who without creating some additional abstraction to pretend the thing you are talking to isn't the thing you are talking to.


Has anyone ever done this in practice before without it mattering:

> 4. Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.

MongoDB vs Oracle.. Ok architecture astronaut, you're gonna have a real bad time when you swap those out for each other.


I don't think the main advantage of this is switching databases. It's switching mindset.

Lots of developers think database first, then build on top of that. That's good for simple CRUD apps. Creating/updating/deleting stuff is what databases are good at.

For complicated apps you should think domain or business problem first. Model your domain, make it good for reasoning about your business issues, and then afterwards think about persisting it to a database.

Your domain is your application core; database persistence is a technical issue outside of that core. You write an adapter to persist that domain to whatever database you want. It just so happens this means you can write many adapters, one per database you want to persist your domain to, utilizing whatever advanced features that database has to speed things up inside its adapter.
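
A rough sketch of that split in Python, for illustration (the Order/OrderRepository names and the psycopg-style connection are made up, not from the article):

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Order:                     # domain object: no persistence concerns
        id: int
        total_cents: int

        def apply_discount(self, percent: int) -> None:
            self.total_cents -= self.total_cents * percent // 100

    class OrderRepository(Protocol): # port the use cases depend on
        def get(self, order_id: int) -> Order: ...
        def save(self, order: Order) -> None: ...

    class PostgresOrderRepository:   # adapter: free to use any DB features
        def __init__(self, conn):    # e.g. a psycopg 3 connection
            self.conn = conn

        def get(self, order_id: int) -> Order:
            row = self.conn.execute(
                "SELECT id, total_cents FROM orders WHERE id = %s", (order_id,)
            ).fetchone()
            return Order(*row)

        def save(self, order: Order) -> None:
            self.conn.execute(
                "UPDATE orders SET total_cents = %s WHERE id = %s",
                (order.total_cents, order.id),
            )

The use cases only ever see Order and OrderRepository; an in-memory or Mongo-backed adapter could be swapped in without touching them.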


> For complicated apps you should think domain or business problem first. Model your domain, make it good for reasoning about your business issues, and then afterwards think about persisting it to a database.

Yes, you should design your work first, and figure out what type of data you need to store, and how it should be stored.

Just about any application that uses Oracle cannot be "adapted" to use mongodb. It's an entirely different scope, different persistence model, different features... If we were comparing mysql vs postgres, there would be novel-sized comments about how you can't just switch between them.

mysql : postgres :: bicycle : recumbent bicycle

mongodb : oracle :: tricycle : underwater nuclear submarine

They're just solutions to entirely different business problem domains. This kind of "if the architecture is clean enough it can run on your microwave" attitude is a disease.


Well, if you model your domain and find that only a certain database can persist that domain well, then write a specific adapter for that database and stick with it. If you're using DDD + OO, most of the adapter will be converting to and from your persistence model and your domain model.

However, there are plenty of applications where there is no technical reason they can't work on both.


Nope, data abstractions always leak.

Writing an application for Oracle, I know that I have ACID and transactions, and writing my data access layer I will take advantage of this. I will also consider locking issues and add some kind of cache like Redis to boost read performance.

Writing for MongoDB, I have schemaless data and different consistency guarantees. Because my data objects do not have a consistent schema, my access layer needs to be flexible enough to deal with objects of different shapes. I would probably use a dynamic language like JS/Ruby.

Database choice will guide the design and implementation of the data access layer. It is non-trivial to change this. Even MySQL to PostgreSQL can take a lot of effort when you have a lot of live data.


That's the thing: your persistence adapter is a separate thing from your domain.

You can implement it however you want.


Your domain representation is your data. How you store and interact with your data will depend on the storage. You cannot decouple this unless you create an adaptor that supports only the lowest common denominator.

How will you create an adaptor that supports transactions for both Mongo and Oracle? Answer: you don't, because Mongo does not support transactions.

This is like a fridge and a bookshelf. Both provide storage but cannot be used interchangeably. What would an adaptor that lets a bookshelf store meat look like?


Well, it isn't usually much of a problem in DDD, because eventual consistency is used between aggregates. Only within a single aggregate is all the work for an action expected to complete fully, which Mongo is fully capable of.


Your business is the database. Models matter.

Anyone who wants "database neutrality" (least useful common denominator), vs leveraging the awesomeness of pgsql, mssql, oracle, should just use flat files. Or mysql.


> Your business is the database. Models matter.

If you tie your model to your database, you might be out of business sooner than you think. Just ask any of the Oracle customers who paid $$$ to migrate away from that license-fee sinkhole.

> Anyone who wants "database neutrality" (least useful common denominator), vs leveraging the awesomeness of pgsql, mssql, oracle, should just use flat files. Or mysql.

Alternatively, simply invest a little bit more time into proper design and implementation. In one of my products I am "leveraging the awesomeness of pgsql", but switching to another storage engine is still a matter of one or two hours. After all, it's only about 200 lines of code: the connection itself and the adapters for the query engine (and no, it's not loosely coupled - it's strongly typed, and my compiler reflects on the model and the constraints imposed on it by the storage backend).


You can leverage whatever database feature you want, just as long as you do it in the database adapter/gateway.

I normally write mine in raw SQL.
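
For example (a hypothetical gateway; the table and method names are made up), the gateway can lean on a Postgres-specific upsert while callers only ever see save_preference:

    class PreferenceGateway:
        # Database gateway: the only place that knows we're on Postgres.

        def __init__(self, conn):
            self.conn = conn  # e.g. a psycopg 3 connection

        def save_preference(self, user_id: int, key: str, value: str) -> None:
            # Postgres-specific ON CONFLICT upsert; callers neither know nor care.
            self.conn.execute(
                """
                INSERT INTO preferences (user_id, key, value)
                VALUES (%s, %s, %s)
                ON CONFLICT (user_id, key) DO UPDATE SET value = EXCLUDED.value
                """,
                (user_id, key, value),
            )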


Versus in situ (eg stored procedures)?

I currently agree.

There was a time when the database engine was the "app server". Before client/server & ODBC.

I miss those days. That strategy is overdue for a comeback.


My raw sql can call out to stored procedure if that's the best tool for the job.

A database is a perfectly good application core if your application is primarily about creating/updating/deleting records.

If your application is about business workflows and complicated business logic, then this is something to consider.


Good to see I'm not crazy! I think this can be easier to maintain, and with tools like DataGrip now it seems to make a lot of sense. Coming from the data science side, I observed many talented people spending time on projects like https://github.com/cloudera/ibis to abstract SQL into something that I often thought was more verbose, less literal, and less portable in reality.


s/mysql/sqlite

Almost foolproof.


Turgid.

Most applications copy a string from here and paste it over there. Input, processing, output. What we used to call data processing.

Add some defensive programming for sanity. Validation rules, "schemas", type systems.

Favor composition over inheritance, a useful programming language over "dynamic typing" (aka type hostile).

Extra credit for logging, monitoring, auditing, alerts, rolling deployments.

Lifetime victory achievement bonus award for setting breakpoints (debuggable) and easy reproduction steps.

---

Instafail if you use mappings (eg ORM), observer/listener, factories, singletons.


> Instafail if you use mappings (eg ORM), observer/listener, factories, singletons.

Wait, why?


What I've seen many times with ORM tools is blind interaction with the in-memory object-graph abstraction over the data store, which results in many unneeded queries. It's as if the option of making one DB query that returns a flat result set isn't even on the radar.
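
A self-contained illustration of the difference, using sqlite3 so it runs as-is (the users/posts schema is invented for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
        INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
        INSERT INTO posts VALUES (1, 1, 'hello'), (2, 1, 'again'), (3, 2, 'hi');
    """)

    # What lazy ORM navigation tends to do under the hood: N+1 round trips.
    for user_id, name in conn.execute("SELECT id, name FROM users").fetchall():
        posts = conn.execute(
            "SELECT title FROM posts WHERE user_id = ?", (user_id,)
        ).fetchall()                      # one extra query per user
        for (title,) in posts:
            print(name, title)

    # The flat result set: one query, one round trip.
    for name, title in conn.execute(
        "SELECT u.name, p.title FROM users u JOIN posts p ON p.user_id = u.id"
    ):
        print(name, title)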

As far as I can tell Factories have been replaced with DI frameworks or just hand rolled DI.

Singletons aren't so bad if you're working with an object oriented language and the singleton is just an instance of a service which holds no state and has methods that operate on a limited number of data types. At that point it's essentially just a namespace for a functional library.

Observer/Listeners ... Not sure here unless the commenter is advocating message queues or eventing.


One thing I would note is that it's not always necessary to abstract away your database layer. It is if you are developing software for installation into an existing infrastructure at a customer's datacenter. It's not necessary if you're building your own application on your own infrastructure.

In my experience I've never seen an enterprise company change databases overnight, Uber perhaps being the exception to the rule. If you're choosing the target platform that your software runs on, you should exploit that platform for all it's worth to get the best benefit from it. Yet I've seen plenty of software shops that insist on writing/running heaps of code to abstract away the database server against some imagined future point where someone decides they're going to run it on Mongo now instead of MySQL.


I don't think the main advantage of this is switching databases. It's switching mindset.

Lots of developers think database first, then build on top of that. That's good for simple CRUD apps.

For complicated apps you should think domain or business problem first. Model your domain, make it good for reasoning about your business issues, and then afterwards think about persisting it to a database. Your domain is your application core; database persistence is a technical issue outside of that core.


Believe me, I do think in terms of business domain problems first. One of my customers' primary concerns is ensuring that their data is consistent. Always. They make very expensive decisions based on the consistency of that data. If they have to shut down a plant because a value was off they lose millions of dollars.

I see our customers' data as our core and any code we write as a liability. We're constantly looking for ways to reduce our liability and ensure our customers' data will always be consistent. We push all of our business logic down to the database layer where the RDBMS server is responsible for ensuring its consistency and integrity.

While we're not perfect, we do have some business logic in our web processes. However, for the most part the job of that component is to parse HTTP queries into queries on our public schema. If we wrote business rules into our software it would be very difficult to verify them, search for them, and keep track of how they've changed over time. It's also too easy for a programmer to make an error which could cause our customers' data to enter an inconsistent state, or worse. We avoid that as much as possible.


Well we have gone past the point where running on a simple master/replica database is an option. We are distributed.


Everyone is writing distributed systems whether they acknowledge it or not these days.


Can you give an example of an application core where the database is just a technical issue?


Anything where there are calculations, multi-step business processes, regulations, or complicated business language that takes time to understand.

Your first step in implementing a feature is understanding the business domain with the help of people who know it well, then modeling it in the core of the application.

I'm currently working in the retail and warehouse distribution domain.


It is surprising how often all these things get reduced to a bunch of relational tables with slightly non-trivial integrity constraints, because sooner or later there will be something outside the normal business processes that requires exceptions to all these rules.

Essentially all big ERP packages follow the model of a bunch of relational tables directly exposed to the user, with business rules and processes as an afterthought. One can say that this stems from historical reasons, but unforeseen use cases that have to be handled right now are also a significant reason.


Having worked on ERP systems, I find the DB often becomes a major bottleneck for this reason.

You end up creating a really beefy single database, with tons of RAM, infinite IO, etc.

We have a bunch of different DBs for specific purposes. Some have to be super fast for reading, some just store streams of events, others need to maintain consistency.


I'm in a similar niche. Anything you could recommend (books?) to learn more about the organization/design of large amounts of data (i.e. choosing the right DB for an inventory management system, designing complex schemas, etc.)?


Abstracting the database has another benefit you are neglecting. It is easy to mock the database and test your application use cases if the database layer is abstracted. This is nearly impossible if database access is baked into the app.
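
For example (hypothetical names throughout), a use case that depends only on a small repository interface can be exercised with a trivial in-memory fake:

    from typing import Protocol

    class UserRepository(Protocol):
        def email_for(self, user_id: int) -> str: ...

    # Use case: depends only on the port, not on any database library.
    def send_welcome(user_id: int, repo: UserRepository, outbox: list) -> None:
        outbox.append(f"Welcome, {repo.email_for(user_id)}!")

    # Test double: no database, no container, no I/O.
    class FakeUserRepository:
        def __init__(self, emails):
            self.emails = emails

        def email_for(self, user_id: int) -> str:
            return self.emails[user_id]

    def test_send_welcome():
        outbox = []
        send_welcome(7, FakeUserRepository({7: "x@example.com"}), outbox)
        assert outbox == ["Welcome, x@example.com!"]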


But do you really need to mock the database when you can just deploy a containerized one for testing?


We used a real database for testing. In the short term it was less work than architecting the application in a way that allowed the DB to be mocked.

Long term, the result was a 45-minute test suite which spent most of its time setting up records in the DB, and which could not be parallelized except by adding more instances of the real DB (otherwise the tests would interfere with one-another's assertions about the DB state).


but now with containers you can have a snapshot that's already set up in the right state. A separate build.


This is a powerful technique, though I'd say it's more useful for the build pipeline than for developers running UTs while coding (since the schema can change more frequently while doing so).

To provide a counterpoint though, an advantage touted by advocates of complete decoupling from the DB is that your UT suite can run in O(one minute), rather than O(ten minutes). E.g. see https://www.youtube.com/watch?v=tg5RFeSfBM4.

I'm not 100% sold on this approach yet (separating your domain objects entirely from the ORM wrapper is an uphill struggle), but it's interesting.


We run our tests on databases and it's on the order of one minute. Maybe it's a hardware issue? We all have beefy computers with fast SSDs.

Also since we strive to make our databases upgradable, it's important that the actual schema update scripts themselves are tested and used directly.


Design from the data first. You can build testing into your change management at the database layer. In my experience building applications this way reduces the chance for the database to enter into an invalid state.

When I build web applications on top of this they have less to do. They literally parse HTTP and shuttle data. No big MVC framework needed. When the data model changes we change the data model. In the database. And we use the abstraction facilities in our server to keep the public schema clean.

Premature abstraction is just as dangerous as premature optimization. Maybe even worse in my experience.


Another benefit is getting database stuff out of your domain code; it makes solving the business problem a lot cleaner.

In fact I just write a domain model, solve the problem, then write a database adapter to persist the domain model. I always think about modeling the domain first; persistence is an afterthought. The persistence adapter can be done however you want: raw SQL, ORM, NoSQL. So it's not really "abstracted away", it's just not the focus of the application.

Everything just plugs into the domain model.


Your data is your domain. If you don't understand your data then you don't understand the problem. If there's a business rule for how that data is handled then it's most consistent to define it in the server that is purpose-built to manage your data, especially when it has built-in facilities to constrain possible states.

I've seen too many programmer bugs to trust putting business logic outside of the DB. Separation of concerns here too just on different lines of concern.


> If you don't understand your data then you don't understand the problem.

That's a rather sweeping statement, that many in the industry do not agree with.

https://en.wikipedia.org/wiki/Domain-driven_design https://martinfowler.com/

> If there's a business rule for how that data is handled then it's most consistent to define it in the [db]

To be clear, you're proposing that all business rule validation should be implemented as stored procedures in the DB? One of my domain model aggregates consists of thousands of lines of code just enforcing business rules and constraints. Am I to put that all in the DB?


> That's a rather sweeping statement, that many in the industry do not agree with.

And some who do

https://dataorientedprogramming.wordpress.com/tag/mike-acton...

I'm not going to appeal to authority here. I'm speaking from experience. We all know how source code gets over time with hundreds of programmers working on it. A clear, consistent specification of your data model, rules, invariants, and transformations is far more valuable than the abstract-soup of trying to model your business domain in source code. I think we can all agree that the less code there is to understand then the easier it is to verify it is correct.

Verifying the requirement that "when record A is written to the database then B is appended with the delta change if such and such is True" is guaranteed at the database level along with all of the other constraints on those relations. If it's nested in one of these rings behind an abstract factory somewhere it's harder to verify.

> Am I to put that all in the DB?

That's where I would start. But I'm not you and I don't understand the problem you're trying to solve.

My original point was that abstracting out the platform if you're not really concerned about switching platforms is a form of premature pessimization and a source of errors. If you control the platform, target the platform and don't bother with the abstractions.


A domain can be more about behaviour, integration, and business process than about pure data.

I find most SQL languages are not particularly great for general-purpose programming, especially the tooling for testing.


I find relational algebra most useful for anything involving querying, validating, aggregating, and transforming data. SQL99 is by far one of the more useful implementations of it that I know about. And yes it does have limitations.

That's why most mature RDBMS servers ship with at least one procedural language. Though it'd be nice if there was an option to use OCaml or Haskell in PostgreSQL.

I'm not suggesting you throw out all your code and build your entire application in SQL. I'm just saying that if you control the database, use it, exploit it, and don't abstract it out unless you absolutely have to (because you need to ship your application on-premises to clients who may run MySQL servers and others who run Oracle).


> if you control the database

That is a big if in the enterprise. It is getting better, but for most enterprise apps I have worked on, there is a db team that owns the db and you have to go through them for all changes.

Perhaps db abstraction can be thought of as an instance of Conway's Law.


It depends on your domain. However, most domains are just as much about behaviour as about data, not just data.

Bugs are just as likely to occur in the database programming language as they are in application logic.

In fact I think the tooling around unit testing is far more mature in general-purpose languages.


Agreed. When's the last time you saw a debugger for a stored procedure? There aren't any. If you want to debug such a thing, you've got to write a bunch of print statements like it was 1982.


I've seen unbelievable messes developed as sprawling stored procedures. And, no unit tests, since no one ever wants to test database logic, they just assume it's gold.


The same is true for the sprawling mess of Java interfaces and Factory patterns, except now you have two sources of truth. Having worked with regulated clients and having to audit the entire stack, it's much easier to do with less code than more.

I've seen databases like that and similar teams that didn't take care with their change management.

Either way it's never pleasant to work with such systems.


I think a more common occurrence than "let's move everything from MySQL to Mongo", is "we need these user preferences really fast, so let's cache them in Redis". If you need to do something like that, you can handle all the implementation details in that domain/service layer without changing the API to the rest of your application(s).
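
Something like this sketch (the repository and the redis-py-style client are assumptions, not from the article): the caching adapter wraps the existing one and keeps the same interface, so callers never change.

    import json

    class CachedPreferenceRepository:
        # Same interface as the SQL-backed repository; Redis is an internal detail.

        def __init__(self, db_repo, redis_client, ttl_seconds=300):
            self.db_repo = db_repo      # existing SQL-backed repository
            self.redis = redis_client   # e.g. a redis-py client
            self.ttl = ttl_seconds

        def preferences_for(self, user_id: int) -> dict:
            key = f"prefs:{user_id}"
            cached = self.redis.get(key)
            if cached is not None:
                return json.loads(cached)
            prefs = self.db_repo.preferences_for(user_id)   # slow path
            self.redis.setex(key, self.ttl, json.dumps(prefs))
            return prefs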


The thing to follow is that you shouldn't build a tower of babel abstraction stack but a limited number of clearly defined abstractions. It's ok if you go deeper than 4 in some cases but these things should then already be pretty low level.

Too deep a stack will lead to code "scavenger hunts" when you want to figure out what something does.


This is a very similar idea to Gary Bernhardt's Functional Core, Imperative Shell: https://www.destroyallsoftware.com/screencasts/catalog/funct....

I prefer Gary's explanation of it because he jumps right into the meat of the problem and shows code that models this architecture. The video doesn't require you to know Ruby to understand what he is saying; as long as you know some basic testing terms (mocking, stubbing, etc.), you should be able to follow along.


I like Bernhardt's rendition of this concept too. His talk "Boundaries" is my favourite summary:

https://www.destroyallsoftware.com/talks/boundaries

Note that the Functional Core architecture makes an additional restriction on top of the Clean/Hexagonal architecture, namely that the core should be functional; the OP doesn't make such prescriptions on how you implement the Entities and business logic (though it doesn't discourage you from doing so either of course).


You are right that the above article doesn't specifically mention the core being functional. I merely interpreted that from one of the stated properties of his and the other architectures he showcases:

"2. Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element."

If that doesn't scream functional, then I don't know what does. :)


It also kinda reminds me of these two articles, previously discussed on HN:

http://degoes.net/articles/modern-fp http://degoes.net/articles/modern-fp-part-2


For the Python programmers, there is also a better explanation of the same concept by Brandon Rhodes: The Clean Architecture in Python.

https://www.youtube.com/watch?v=DJtef410XaM


These things are always nice in theory but I would really appreciate a small sample app following the architecture.


Martin is working on a book about Clean Architecture:

https://www.amazon.com/Clean-Architecture-Craftsmans-Softwar...

I'm sure the book will have many real world code examples, as is fairly typical of his previous works.


Android devs "(re)discovered" clean architecture around 1-2 years ago so there are plenty of sample apps there you can understand with very little android experience.


And, as always happens, it ends up misapplied to mobile apps that only paint the JSON received from the server onto the screen.

Not everything is black or white (no architecture vs clean architecture). You have to think about what you are doing (having fucked up on previous projects help) and don't follow anything you've read blindly.

Silver bullets, yadda yadda


Clean architecture in this case is just the name of the architecture. There are many others.


Agreed. Also, there is no such thing as clean architecture. Every architecture has holes and limitations


Can you provide a link to one or two of them you think are good examples?


The article doesn't mention it, but this architecture is good for functional programming as well. If you model those three inner circles as pure functions operating on input and returning results + effects, your app becomes easy to test and easily maintained by a good type system.
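
A tiny sketch of what that can look like in Python (the registration flow and all names are invented for the example): the core is a pure function that returns a status plus effect descriptions, and only the shell performs I/O.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SendEmail:                 # an effect described as data, not performed
        to: str
        body: str

    # Pure core: easy to test, no database, no network.
    def register_user(email: str, existing_emails: frozenset):
        if email in existing_emails:
            return "rejected", []
        return "registered", [SendEmail(to=email, body="Welcome!")]

    # Imperative shell: the only place that touches the outside world.
    def handle_registration(email, load_emails, save_user, send_email):
        status, effects = register_user(email, load_emails())
        if status == "registered":
            save_user(email)
        for effect in effects:
            send_email(effect.to, effect.body)
        return status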


http://retromocha.com/obvious/ is an implementation of these ideas.


Thought this was an article about https://en.wikipedia.org/wiki/Clean_(programming_language)

Disappointing.


The title should mention it's a 2012 article.


Shouldn't the DB be part of Entity - right in the middle? Why is it on the edge of the circle, outside the gateway?


A lot of these architectures assume DDD. (I'm not sure if clean architecture does.)

Domain-driven design: you model your business problem using plain objects and methods. The domain should do nothing other than modeling the domain and solving the business problem your app is designed to solve. It should be persistence-ignorant.

Then everything else simply has adapters for interfacing with the domain. Including a persistence adapter.

This is basically ports & adapters, and a lot of these architectures are basically variations on that.


Thanks. I think I see this reference in the article - http://jeffreypalermo.com/blog/the-onion-architecture-part-1... which explains why the DB should be outside. It makes sense in the services and NoSQL world, but I would not put it at the edge. Feels like we are doing something wrong and insecure. It could be my mindset issue :)


I don't see why it would be insecure...


I mean from a block-diagram perspective, we are used to keeping data in the center, secured by all the other layers. But that pattern is changing in the days of AWS and Google Cloud, where data can be anywhere. If data is a service, it doesn't matter where it is located. But if data is a layer within the application, we need to surround it with protective layers such as app, gateway, etc.


Well those layers are really protecting the domain.


> The overriding rule that makes this architecture work is The Dependency Rule. This rule says that source code dependencies can only point inwards. Nothing in an inner circle can know anything at all about something in an outer circle.

You can get pretty far just worrying about this part.

I tend not to find these acronym names and the diagrams very helpful in terms of actually creating code. Separating concerns, as we all already know, is crucial, as is being careful about where knowledge is located. But I've found that if you buy in to these architecture patterns, you quickly become confused about which part of the pattern a given class or module or area of responsibility falls under. Like, is my `BazFrobber` a Gateway, or a Presenter, or a Controller... ?


This kind of thinking bothers me, but I'm not sure how to put my finger on what exactly it is that bothers me, or how to put it into words.

The thing with these abstract architecture principles is they are not always practical if you try to be puristic about enforcing them.

I get where these ideals come from. I've seen novice programmers take a stab at writing mildly complicated apps, and the code is a nightmare to read because everything is jumbled up together. This is bad and we can all agree on that.

But I've also seen projects where everything is split into tiny little functions or objects that don't seem to be doing anything meaningful. Presumably these projects are following the principles of "single responsibility" and "loose coupling", but it's so loose that it's hard to put the pieces together in your mind and nearly impossible to follow the flow of the program.

I consider generic rules of the form: "<X> related objects should not be doing <Y> related stuff" to be harmful. (An example of such "bad rules" can be found in the article under the heading "Use Cases")

Instead I prefer practical rules that make the code easy to read, understand, and update, without making it any more complicated than it needs to be.

* If a piece of logic can be contained in a function, let it be contained in one function without splitting it across 10 different objects and factories and coordinators. Even if that function is slightly long, there's no need to split it apart just because it's over the arbitrary threshold of say, 25 lines.

* Create abstractions around a set of vocabulary that you can use to describe the problem domain and the process of doing things within the application. Make sure to document your vocabulary well and try to keep it as small as possible (but not smaller)

A good example of this is git's vocabulary for commits, trees, and blobs.

Every operation doesn't need to be a class (or worse, a series of classes and factories). Operations can be functions that operate on the structures (objects) you've defined.

* Keep related files and functions together.

If you have a server-side module implementing a JSON API, an HTML page designed to display the API, a JavaScript module designed to drive the UI on that HTML page, and a SQL file describing the data you're displaying, then let's put all those files together in the same directory, instead of spreading them thin across separate folders:

    app/controllers/api/X/X.py
    app/templates/X/X.html
    app/js/views/X/X.js
    app/css/X/X.css
Why not put them all under:

    app/X/X.py
    app/X/X.html
    app/X/X.js
    app/X/X.css
Related material:

Object-Oriented Programming is Garbage: 3800 SLOC example

https://www.youtube.com/watch?v=V6VP-2aIcSc


I think you're conflating clean architecture with a bunch of other things. All it is is a set of rules about how different layers of your app can depend on one another. It doesn't say too much, if anything, about code organization or design patterns like SRP or loose coupling or whatever.

I say this because we follow it pretty religiously at my work and it never feels impractical nor does it really feel like it adds too much extra work to anything we do, nor does it feel like related code is far apart nor are our functions too small.

In practice, following the clean architecture means, for example, that my business logic can't explicitly depend on database-related code. If we have business logic that needs to do stuff in the database, it defines an interface of methods that it expects the database repository to implement and that it can use. It then has an implementation that satisfies that interface passed to it using dependency inversion (as pointed out in the article).

So it really means that if some business logic related to, say, tweets needs to retrieve some from the database, it calls something like "tweetRepository.findAllTweets()", which returns a bunch of "Tweet" domain objects. It never sees database rows, it knows nothing about which database we use, etc. Our business logic is focused solely on the use case, and has nothing to do with the nitty-gritty of how we interact with the database or how we eventually return that data to a user that needs it.

Which is great, because it means if we ever change how our database layer works, we only need to change our database repository code. We don't need to worry about changing business code if we can satisfy the same interface as before.
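
A rough Python rendering of that arrangement (Tweet/tweetRepository follow the comment; everything else is assumed):

    from dataclasses import dataclass
    from typing import Optional, Protocol

    @dataclass
    class Tweet:                      # domain object, not a database row
        author: str
        text: str

    class TweetRepository(Protocol):  # interface declared by the business layer
        def find_all_tweets(self) -> list: ...

    # Use case: depends only inward, on the domain and the interface above.
    def longest_tweet(repo: TweetRepository) -> Optional[Tweet]:
        tweets = repo.find_all_tweets()
        return max(tweets, key=lambda t: len(t.text), default=None)

    # Outer layer: one possible implementation, injected at the boundary.
    class SqlTweetRepository:
        def __init__(self, conn):
            self.conn = conn

        def find_all_tweets(self) -> list:
            rows = self.conn.execute("SELECT author, text FROM tweets").fetchall()
            return [Tweet(author=a, text=t) for a, t in rows]

Swapping SqlTweetRepository for a different implementation leaves longest_tweet and Tweet untouched, which is the point the parent comment is making.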



