Seems just a bit too black and white. Surely there can be good reasons for splitting up a monolith. Some domains might require very strict boundaries around shared memory and concurrent software on a given system. CE-certified Class C medical software comes to mind.
Isolating something into a small deployment exposed through an RPC API might make it far easier and more straightforward to validate and pass those requirements.
Microservices can be used and misused. Good engineering rarely follows these cultural trends. If it makes sense, it makes sense.
Or data-bearing enums, which bring some of the best of all of those worlds, with exhaustiveness (of value declaration, not usage) enforced at compile time:
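A rough Python approximation of that idea, assuming mypy (or a similar type checker) plays the role of the compiler: enum members carry NamedTuple payloads, and typing.assert_never (Python 3.11+) makes a match statement fail type checking when a member is left unhandled — that's usage-side checking, so treat it as an approximation of what's described above. The Status and HttpInfo names are made up for illustration.

    from enum import Enum
    from typing import NamedTuple, assert_never  # assert_never: Python 3.11+

    class HttpInfo(NamedTuple):
        code: int
        reason: str

    class Status(Enum):
        OK = HttpInfo(200, "OK")
        NOT_FOUND = HttpInfo(404, "Not Found")

    def describe(status: Status) -> str:
        match status:
            case Status.OK:
                return f"{status.value.code} all good"
            case Status.NOT_FOUND:
                return f"{status.value.code} {status.value.reason}"
            case _:
                # mypy reports an error here if a Status member is unhandled
                assert_never(status)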
They do different things, but in the wild they're used in the same way. Data classes are not immutable, so they would not be ideal for what I used named tuples for.
Data classes can be made immutable by passing ‘frozen=True’. It sucks that it’s not the default, though.
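For reference, a minimal sketch (the Point class is just an illustration):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    p = Point(1.0, 2.0)
    p.x = 3.0  # raises dataclasses.FrozenInstanceError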
Can you give an example of how you use enums and named tuples in the same way? That just doesn’t seem to make a lot of sense to me, but maybe I’m missing something.
I don't use them on prototypes because I will throw away that code. But IMHO TDD is at its best when your design is highly likely to change. The only protection you have against a moving target is your test suite. The requirements, or our understanding of the requirements, change, so we change the code. In large code bases we can only be certain that a code change doesn't break anything if we have good tests. TDD is the only realistic approach to achieving 100% test coverage. You can certainly write good tests to cover your code at 100%, but I've never seen anyone do it consistently without TDD.
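A minimal sketch of the red-green loop that argument relies on, using pytest; the slugify function is a hypothetical example, not from the thread:

    # Step 1 (red): write the test first and watch it fail, because
    # slugify doesn't exist yet.
    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    # Step 2 (green): write the minimal code that makes the test pass.
    def slugify(text: str) -> str:
        return text.strip().lower().replace(" ", "-")

    # Step 3 (refactor): improve the code with the test as a safety net,
    # rerunning pytest after every change. Because every line exists to
    # satisfy a test, coverage stays near 100% by construction.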
I've always relied on unit tests with mocks, with the rule that I only mock modules that are themselves tested, and I mock or stub all DB and network calls. I also like integration and end-to-end tests, but not many of them. I rely on unit tests for regression and should be able to run the whole suite in less than an hour. I set my pre-commit hook to run unit tests with coverage plus static analysis.
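A minimal sketch of the "stub all network calls" rule, using pytest and unittest.mock; fetch_user and the example URL are hypothetical:

    from unittest.mock import patch

    import requests

    def fetch_user(user_id: int) -> dict:
        # Real network call: always stubbed out in unit tests.
        resp = requests.get(f"https://api.example.com/users/{user_id}")
        resp.raise_for_status()
        return resp.json()

    def test_fetch_user_parses_json():
        with patch("requests.get") as mock_get:
            mock_get.return_value.json.return_value = {"id": 42, "name": "Ada"}
            assert fetch_user(42) == {"id": 42, "name": "Ada"}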
So there is a tipping point where the technical debt you accrue from not testing catches up with you and cripples the project. I think this tipping point comes much faster than people realize. Most people have no idea how bad their tech debt is until it's too late. Then you see one failure after another while management tries to fix the train wreck. At that point any good engineer is going to bail in this market, and you're stuck with the scrubs.
The crippling debt is caused by not refactoring, not by a lack of testing. Admittedly, testing makes the refactoring easier, but that's not the root of the problem.
I think the only unethical thing is holding back the results. It's great to automate tasks, and the company doesn't need to know about it unless that's explicitly stated. But your obligation is to deliver high-quality results as fast as you can.