tl;dr - Productivity over time can be improved by applying principles from linear systems theory to the decision-making process.
Most of the talk around software engineering these days revolves around agile-related methodologies, and a central part of it is the idea that team cooperation is inherent to the work and leads to good results.
While I strongly agree with this concept, I would like to raise an issue that I think causes a lot of problems in the software engineering industry and leads to a lot of frustration for both engineers and product personnel.
When several team members cooperate in order to solve a bug, or try to think about the best way to build a needed feature, there is almost always a background goal: the team tries to find the minimal solution in terms of time, effort, changes, service interruption, etc. Some would say that teams are expected to find the optimal solution and not the minimal one, but from my experience it is often only a local optimum, still aimed at solving a minimization problem.
In teams composed of good+ engineers, this usually leads to very good and satisfying local optima, which all members and managers would probably agree on and consider a good decision. So in theory, we've had a synergetic process - the value produced by the team was larger than the sum of each member's individual contribution.
However, I can imagine that almost every software engineer knows the feeling of looking at a piece of code and saying (to themselves, and sometimes to anyone who will listen) "Who is the incompetent engineer that wrote that piece of !@#$?". Sometimes I hear myself thinking like that, only to discover that I am that incompetent engineer... Yet in many cases, if that engineer had participated in all the decision points related to the evolution of this piece of code, they would probably have agreed with most, if not all, of the decisions that were made.
So what happens? How come a series of good judgement calls leads to a crappy outcome? I call it The Efficiency Fallacy:
"A series of local-optima solutions can lead to a sub-optimal outcome, both local and global"
or
"A series of small synergies does not necessarily lead to a larger one"
Since software engineering is mostly non-linear, this happens very often, except in very simple cases.
So why does it happen?
I believe that a major part of the problem is that software is less constrained in some respects than other forms of engineering, meaning that entanglement is easier in software than in electronics or building architecture, for example. Due to the virtual nature of software, it is relatively easy to make the mistake of adding behavior B to behavior A by changing A a bit, instead of treating them as two separate concepts and finding a way to interconnect them so they operate together. This is harder when physical constraints are in place, as in some other forms of engineering, since the higher costs of making a mistake are more obvious.
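To make this concrete, here is a minimal, hypothetical Python sketch of that kind of shortcut (the function names and the conversion rate are invented for illustration, not taken from any real system):

```python
# Behavior A: format a report line for a user.
def report_line(user: str, amount: float) -> str:
    return f"{user}: {amount:.2f}"

# The "efficient" shortcut: behavior B (currency conversion) is squeezed into A
# by "changing A a bit". A and B are now entangled behind a flag.
def report_line_entangled(user: str, amount: float, to_eur: bool = False) -> str:
    if to_eur:                      # B now lives inside A
        amount = amount * 0.92      # hidden business rule
    return f"{user}: {amount:.2f}"

# The more linear alternative: keep A and B as separate concepts and compose them.
def to_eur(amount: float) -> float:
    return amount * 0.92

print(report_line_entangled("alice", 100.0, to_eur=True))  # entangled call site
print(report_line("alice", to_eur(100.0)))                 # composed call site
```

The entangled version is slightly cheaper at the moment of the change, but the flag and the hidden conversion rule now travel with A to every place it is used.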
This kind of mistake increases the non-linearity of the system, sowing the seeds of bigger problems in the future - for other engineers, and many times even for your future self. Down the line, this kind of behavior has effects in several aspects:
- Managing Complexity
The total complexity of any subsystem or layer grows beyond any person's capabilities, leading to additional processes and methodologies in order to keep things in line, and slowing down overall progress.
- Side Effects
The side effects of every change become larger and larger. This stems from the fact that entanglement acts as an abstraction leak. When A, B and C are entangled together, changing any of them without impacting the others becomes more difficult. In a way, every change becomes an interface change rather than an implementation change, leaking into other subsystems or layers (a small sketch of this follows the list below). This often slows development down, since people become too afraid to make large advancements (and sometimes not afraid enough...).
- Effort Estimation
Accurate effort estimation becomes much more difficult, because the entanglements discussed above are often implicit and cannot be known in advance, before actually developing the feature. Untangling becomes a necessary part of feature development, forcing engineers down a rabbit hole in terms of effort - many times, untangling one part of the logic means you have to untangle other related (and unrelated) parts, depending on how much "efficiency" was invested in these areas in the past. In time, this can break estimations so badly that it can even affect the trust relationship between dev groups and product groups.
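As a hypothetical illustration of the "every change becomes an interface change" point above (the order-processing example and its names are invented, not taken from any real codebase):

```python
# Entangled: validation, pricing and notification share one function,
# steered by flags. Every caller depends on the full flag surface.
def process_order(order: dict, validate: bool = True, notify: bool = True) -> float:
    if validate and order["quantity"] <= 0:
        raise ValueError("invalid quantity")
    total = order["quantity"] * order["unit_price"]
    if notify:
        print(f"order processed, total={total}")
    return total

# Changing how notification works (say, it now needs a channel) is no longer an
# implementation detail - the signature, and therefore every caller, changes:
#   process_order(order, validate=True, notify=True, channel="email")

# Untangled: each behavior keeps its own interface, so a change to one of them
# stays local to that function and its direct users.
def validate_order(order: dict) -> None:
    if order["quantity"] <= 0:
        raise ValueError("invalid quantity")

def price_order(order: dict) -> float:
    return order["quantity"] * order["unit_price"]

def notify_total(total: float) -> None:
    print(f"order processed, total={total}")

order = {"quantity": 2, "unit_price": 10.0}
validate_order(order)
notify_total(price_order(order))
```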
Prevention
So I believe that a good way to prevent this kind of issue is to try to preserve the linear properties of the system as much as possible. Specifically, this means striving for systems that adhere to the superposition principle, and taking this aspect into account at every point of the decision-making process.
Superposition is a term taken from systems theory and is widely known in physics and engineering. A system adhering to the superposition principle needs to have two properties: Homogeneity and Additivity.
- Homogeneity - Being able to multiply a behavior many times. In order to encourage homogeneity, try to treat everything as a building block, regardless of the current use-case (see the first sketch after this list). Ask yourself the following questions:
- "How can I make it easy to have more Xs?"
- "Can I perform the same X over and over without any changes/side-effects?"
- Additivity - Being able to combine two or more concepts/behaviors. In order to encourage additivity, try to treat subsystems and layers more as physical entities, with a relatively high cost of modifications and changes, especially once they are already in service (see the second sketch after this list). Ask yourself the following questions:
- "Where does X belong?"
- "Does X belongs with Y or is it related to something else?"
- "What would be hurt if I put X inside Y?"
- "Would it be better if we pay the cost of putting X inside Y now, to prevent higher costs in the future?"
It is important to emphasize that I do not believe in purist solutions - our industry is a business, and as such it needs to constantly balance multiple aspects, including quality (especially code quality). However, many businesses, even in the "build fast and sell" startup world, are becoming more and more concerned with "lasting value" and longevity, and in order to make that transformation, the decision process needs to be adjusted accordingly.
The ideas presented in this article can be further expanded to linear properties of the development process itself and even to the behavior of the engineering organization, but this might be a topic for another post.
Harel Ben-Attia
Side note 1 - What is "the system"?
Each system has multiple subsystems and layers. In my view, the ideas proposed in this article are relevant for all layers and subsystems, whether a class or a script, a distributed mechanism spanning multiple machines, or a concept such as scheduling or data-retention.
Side note 2 - Why I think software engineering is mostly non-linear
- Homogeneity is limited - In most cases, you can't just multiply logic without side effects. For example, you can't easily take logic that works for one user and apply it to a million users without any changes.
- Additivity doesn't come for free - Two concepts, each defined and working well, cannot be combined easily without some form of entanglement or an additional concept. For example, you can't take the concept of a scheduler and the concept of a persistent store, and just say "work together, you two...".
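As a rough, hypothetical sketch of the kind of glue that combination actually requires (using Python's standard sched module and a toy file-backed store; the details are invented for illustration):

```python
import json
import sched
import time

# Two well-defined concepts: a scheduler (from the standard library) and a
# trivial file-backed store. Neither knows anything about the other.
class FileStore:
    def __init__(self, path: str):
        self.path = path

    def save(self, obj) -> None:
        with open(self.path, "w") as f:
            json.dump(obj, f)

    def load(self):
        try:
            with open(self.path) as f:
                return json.load(f)
        except FileNotFoundError:
            return []

# "Work together, you two..." doesn't happen by itself. A third concept - an
# adapter that knows how to describe jobs as data - has to be introduced, and
# that is where the real cost of the combination lives.
class PersistentScheduler:
    def __init__(self, store: FileStore):
        self.store = store
        self.scheduler = sched.scheduler(time.time, time.sleep)

    def schedule(self, delay_seconds: float, job_name: str) -> None:
        # Only jobs that can be described as plain data survive a restart;
        # arbitrary callables cannot be persisted - a brand-new constraint
        # that neither concept had on its own.
        jobs = self.store.load()
        jobs.append({"run_at": time.time() + delay_seconds, "name": job_name})
        self.store.save(jobs)
        self.scheduler.enter(delay_seconds, 1, print,
                             argument=(f"running {job_name}",))
```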