I posted this comment in response to the previous submission [1] of this same "paper":
Also note that the prior work they repeatedly cite (no less than 5 times in as many paragraphs, and indirectly referenced several times in relation to the 20+ "sociotechnical factors"), i.e. reference number (9), is based on "semi-structured interviews with 21 developers". While a lot of the observations and recommendations are correct (and are obvious to any seasoned engineering manager), this seems like an academic exercise in picking three somewhat disparate aspects of development and trying to fit them into a geometric shape (an equilateral triangle) and then calling it a framework.
This is by the same team who brought you the DORA metrics and the SPACE framework. I suspect it is really a vehicle to sell books and consulting, despite the article largely making sense.
Thanks for pointing that out. Although I do think DORA metrics make some sense, it seems like an unnecessary framework considering what the folks in queueing theory have done over the years.
Anyone claiming we need to invent our own metrics needs to read Reinertsen.
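To make that concrete: Little's Law alone already relates the delivery quantities these frameworks keep renaming. A minimal sketch with hypothetical numbers, just to show the relationship:

```python
# Little's Law (queueing theory): avg items in system = arrival rate * avg time in system.
# Rearranged for a delivery pipeline: average lead time = average WIP / average throughput.
# All numbers below are hypothetical, purely to illustrate the relationship.

avg_wip = 30.0        # work items in progress at any given time
throughput = 10.0     # work items completed per week

avg_lead_time = avg_wip / throughput
print(f"Average lead time: {avg_lead_time:.1f} weeks")   # -> 3.0 weeks

# Halve the WIP (throughput unchanged) and the expected lead time halves too:
print(f"With WIP of {avg_wip / 2:.0f}: {(avg_wip / 2) / throughput:.1f} weeks")  # -> 1.5 weeks
```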
Well, yes, taking established findings and twisting them into a memorable or easily usable shape is what one does when trying to popularise concepts. (E.g. how the Shewhart/Deming cycle is related to the scientific method, or how the cost/speed/quality triangle restates "everything is a trade-off".)
That says very little substantive about their results, though, so I'm curious how you think that matters to the discussion of developer productivity!
> Although DevEx is complex and nuanced, teams and organizations can take steps toward improvement by focusing on these three key areas.
I feel like they are aware that not everything fits into the framework perfectly, but if your organization was trying to improve DevEx, this framework is a place to start.
> Engineering leaders have long sought to improve the productivity of their developers ...
A goal every organization can support, no doubt.
What the rest of the article fails to discuss is how upstream processes determine developer productivity far more than the three dimensions identified. To wit, the clarity and focus of the "what and why" of an effort determine the "how and when." That makes the answer to "knowing how to measure or even define developer productivity" no longer elusive, but quantifiable.
Few, if any, qualified developers I have worked with have a poor DevEx when the work to be done is well-defined (what) and can be explained such that a solution is identifiable (why). Solutions almost always flow from these (how), making it possible to communicate the work effort (when).
Skip the up-front investment by stakeholders in "what and why", and "how" will remain nebulous while "when" will be just a guess.
I agree the types of data they consider could be widened.
How would you quantify how much “what and why” has been defined? Some kind of diff between the planned architecture or requirements on day 0 and the final codebase?
> How would you quantify how much “what and why” has been defined?
Has been defined or needs to be? I'll assume the latter for sake of discussion.
When skeletal feature specifications can be coded against requirements such that stakeholders and engineers can have a meaningful discussion of the functional expectations (as captured by the specs), along with the engineers having enough understanding to perform impact analysis on the existing system (if any). This is usually an iterative process engaged before any significant implementation is undertaken.
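A minimal sketch of what "skeletal feature specifications coded against requirements" could look like, assuming pytest and a made-up bulk-discount requirement; every name and number here is illustrative, not from the article:

```python
# Hypothetical skeletal spec for a "bulk discount" requirement, written before any
# significant implementation so stakeholders and engineers can review expectations.
import pytest

def apply_bulk_discount(order_total: float, item_count: int) -> float:
    """Deliberately unimplemented: the spec pins down the 'what', not the 'how'."""
    raise NotImplementedError

@pytest.mark.parametrize("total, count, expected", [
    (100.0, 5, 100.0),    # below the 10-item threshold: no discount
    (100.0, 10, 90.0),    # at the threshold: 10% off
    (200.0, 25, 170.0),   # 25 items or more: 15% off
])
def test_bulk_discount_expectations(total, count, expected):
    # Each row is a functional expectation a stakeholder can read and confirm.
    assert apply_bulk_discount(total, count) == expected
```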
Essentially, the upstream processes I reference concern establishing a shared understanding. As such, most delivery metrics are not relevant at this point. However, an important benefit is that unneeded effort is often identified and avoided.
> Some kind of diff between the planned architecture or requirements on day 0 and the final codebase?
In general, system architecture and code bases exist in the "how", not in the "what and why." At a macro (business) level, sometimes architecture decisions such as whether or not to use a cloud provider or data centers do influence the "what."
It's kind of like branching in a programming language: typically, the earlier an execution flow is determined, the fewer branches are needed later. So, too, the sooner stakeholders clearly define "what and why", the less effort is needed to identify "how."
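A toy illustration of that analogy (nothing to do with the article's framework): deciding the "mode" once up front removes the repeated checks downstream.

```python
# Late decision: the same question is re-asked at every step of the flow.
def format_value_late(value: int, mode: str) -> str:
    header = "csv" if mode == "csv" else "json"
    body = str(value) if mode == "csv" else f'{{"value": {value}}}'
    footer = "" if mode == "csv" else "\n"
    return f"{header}\n{body}{footer}"

# Early decision: branch once, then each path is straight-line code.
def format_value_early(value: int, mode: str) -> str:
    if mode == "csv":
        return f"csv\n{value}"
    return f'json\n{{"value": {value}}}\n'

assert format_value_late(7, "csv") == format_value_early(7, "csv")
assert format_value_late(7, "json") == format_value_early(7, "json")
```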
As opposed to a statistic like lines of code written. As opposed to ‘experience’ in the prescriptive sense like ‘The developer experience here uses Docker and Webpack and therefore is modern and therefore is good’. As opposed to ‘experience’ like years-of-experience in a certain role/technology.
I’m not the author, but I found that to be a highly important distinction for them to call out. It is about the way that each individual developer feels during each individual day, in a way that is hard to capture with statistics and summaries and lists of technologies and descriptions of processes.
I think people mostly use it as a pretentious way to say “self reported experience” or “data from living people.” Things that are based on personal experience (anecdata) and perception rather than anything tangible. Which is important data to consider but often not very useful since perceptions are coloured by beliefs. Using case studies of personal experiences can also be biased towards more extreme experiences or perceptions that are outside the norm.
I have heard of someone describing their “lived experience” of discrimination. They perceived discrimination in many situations where they had no evidence that the other party was actually biased. But often the implication is that “lived experience” represents some form of truth that cannot be questioned.
Lived experience is when you encounter something and you live through that experience.
Learned experience is when you read about an encounter someone else lived through and they explain condensed points of emphasis.
The social media plugin and reporting system for the SaaS I'm working on is an idiotic, over-engineered monstrosity. But, after trial and error, I've mastered it from a what-goes-where standpoint.
If I were to document the entire thing from my vantage point, it'd take me a week, and even if I did it, it'd take another dev about a week just to wrap their head around it to implement a new feature AFTER reading what I wrote. Or, I could do it between standup and lunch provided I had no distractions.
Lived experience stands in contrast to observational experience.
I as an outsider may observe friction in a developer–customer meeting. Ask the developer, and they may reveal that they found the arguments during the meeting highly valuable and productive. A lot of case studies and experiments in productivity improvement are observational. You do something and then you look at the results, rather than how the lives of the people involved changed.
When an outsider looks at a situation, they often see something very different to what the people in it did. That's the difference between lived ("insider") and observed ("outsider").
(Of course, strictly speaking, you don't get lived experience from interviewing people. People being interviewed tell you, to some extent, what they think you want to hear at that time. To really get closer to lived experience the researcher has to embed with the group being studied and work with them for some time.)
lol @ ebay and DevEx. (one of the examples in the article) When I was there (up to 2017) the team that made the internal web framework, Raptor, refused to let anyone see the source code. I mean people who worked at the company: they wouldn't let people sitting 15 feet from them see the source code to the web framework that they used all day to work. Of course you could easily see it in any Java IDE, but they didn't seem to know that.

Another team was merging branches by copying the entire source tree and going through the files one by one and manually merging changes in. eBay had been 100% on git/github for over 5 years at that point, nobody on the team had ever learned how branches work, and they had managed to never have anyone tell them either.

There had been about 17 large initiatives to fix the MyEbay pages, since every tab was implemented on a different generation of web framework, going all the way back to one that worked entirely via XSLT and merging XML documents, which had been retired for well over a decade at that point. All of the efforts failed, and I just checked and My eBay is still using multiple generations of web framework spanning decades. This wouldn't be such a big deal except they don't have the competence left in the company to update the styling, so as you click through tabs the page changes from reactive to fixed width to %-width depending on what you are looking at. (try it, go to My eBay and click eBay Bucks, or click Messages)

They can't even get the color scheme aligned. Again, if they said 'nobody cares' that's just prioritization, but I know that this has been attempted at least 5 times with teams of a dozen people spending up to a year on it. There isn't some secret complexity that thwarts them, it's just an organization that is politically driven and leadership that is more interested in harassing journalists than the business of the company. eBay's DevEx was bad, but their DevCompetence was shocking.
> They can't even get the color scheme aligned. Again, if they said 'nobody cares' that's just prioritization, but I know that this has been attempted at least 5 times with teams of a dozen people spending up to a year on it.
Not being able to get color scheme misalignment fixed with multiple team-years’ of effort is shocking and appalling.
No, it's quite normal. Normal levels of pathology you can see in every small, medium, and large business. Being able to make sweeping changes at the expected marginal cost - that is rare, almost unique.
Most human organisations are crippled by pathologies: mistrust and misaligned incentives.
Fixing those almost always involves ending the org and building a new one.
If you were on the web in the late 90s you remember a time when the default web page background was grey. The background on some pages of the My eBay section of ebay.com is still grey for that reason!
I only skimmed the article, but I couldn’t see any tangible measures? Even the KPIs seem to be based on perceptions.
Would it be possible to also include data on the value of the software created? Maybe use historical sales data to map development effort to revenue? Maybe there are situations where developer experience is optimal (lots of freedom, no meetings, personal project budgets) but limited commercially useful work gets done.
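One hedged sketch of what that mapping could look like, with entirely made-up per-feature numbers; the point is only that revenue-per-effort can be computed alongside perception surveys:

```python
# Hypothetical per-feature data mapping development effort to attributed revenue.
# None of these figures come from the article; they only illustrate the idea.
features = [
    {"name": "checkout-v2", "dev_weeks": 12, "revenue_12mo": 480_000},
    {"name": "dark-mode",   "dev_weeks": 6,  "revenue_12mo": 15_000},
    {"name": "bulk-import", "dev_weeks": 9,  "revenue_12mo": 210_000},
]

for f in features:
    f["revenue_per_dev_week"] = f["revenue_12mo"] / f["dev_weeks"]

# A team with a great developer experience but little commercially useful output
# would show up at the bottom of this ranking.
for f in sorted(features, key=lambda f: f["revenue_per_dev_week"], reverse=True):
    print(f"{f['name']}: {f['revenue_per_dev_week']:,.0f} per dev-week")
```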
> The Three Dimensions of DevEx
> Our framework distills developer experience to its three core dimensions: feedback loops, cognitive load, and flow state. These dimensions emerged from real-world application of our prior research, which identified 25 sociotechnical factors affecting DevEx.
Very solid groundwork for analyzing your development.
For everyone there is a salary / guarantee-of-future-income point where they can just "get on with it": go hell for leather at work and be happy their family's future is secure anyway.
Welfare states reduce that level, thus subsidising businesses (!), but at some point you gotta pay if you want more than the basics. It's not extrinsic motivation - it's much more a hygiene factor that allows intrinsic motivation to come into play.
I can tldr this as "give devs better tools and get out of the way", but boy, this 'paper' is a tedious read. If they would dump all the pretentious language and just say "hey, we asked some devs what they would like and here's the gist of what they said", then shorten this blog post 10x, it would be better. Sometimes you wish authors would use GPT-4.
I think so too. Seems to be a common characteristic among compsci publications. It's like "here's a simple thought, now watch me describe it in two paragraphs using the most convoluted lingo I can come up with..."
1. https://news.ycombinator.com/item?id=36006561