I often ask myself, “What is software architecture?” I have asked this same question of all of my co-workers. I struggled to define what software architecture is, and what it is that a software architect actually delivers.
A typical answer to that question is that a software architect designs the interfaces of a software system and makes the important decisions required to complete it. They understand the how and the why.
This definition makes perfect sense during a conversation, and there is some truth to it, but I find it has less impact when written down.
Is a software architect just a badass developer who is good at making sketches on a whiteboard?
I have walked away from many meetings that featured architectural sketches thinking that, although they were a step in the right direction, they were useless as software architecture deliverables. Quite often, that informal sketch was the only software architecture guidance.
In this post, I’ll do a deep dive into software architecture in order to answer “What is software architecture?”
The answer is multifaceted, and my next sentence is very dense.
Software architecture creates an abstraction of the structures within a software system that can realize the primary functionality of that system. This abstraction, as a deliverable, documents the design decisions and contains abstract diagrams that represent them. After all, a diagram is an abstraction, isn’t it?
I was searching for a way of being a software architect that was more concrete, repeatable, and iterative than the above definition suggests.
In my research, I discovered that the Software Engineering Institute at Carnegie Mellon University had already worked on this problem. They created an iterative process called Attribute-Driven Design (ADD) version 3.0. It goes as follows.
Pre-Software Architecture Steps
Collect the Use Cases or Stories for the given software system.
Software Architecture Attribute Driven Design Steps
- Determine the Design Objectives.
Explain why you are designing. Remember, not all of the concepts within a system require abstraction, because many will be obvious.
- Identify the Primary Functional Requirements.
- Synthesize and catalog Quality Attribute Scenarios.
Each scenario will identify a quality attribute, source, stimulus, artifact, environment, response, and response measure. A standard list of quality attributes is: Availability; Interoperability; Modifiability; Performance; Security; Testability; Usability. For example, given the quality attribute “performance”, a quality attribute scenario might state, “When an exchange performs a bid request to the bidding engine under normal operating conditions, the bidding engine should respond with a bid response within 100 milliseconds.” A sketch of this structure appears after this list.
- Identify which Use Cases or Stories are related to each Quality Attribute Scenario.
- Prioritize each quality attribute scenario by stakeholders according to its importance to the success of the system (H, M, L) and by the architect according to its technical risk (H, M, L).
- Identify the constraints or restrictions.
For example, these can be technical, organizational, or provided by the customer.
- Identify the concerns.
Concerns represent design decisions that should be made whether or not they are stated explicitly as part of the goals or the requirements.
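To make the structure of a Quality Attribute Scenario concrete, here is a minimal sketch in Python. The class, its field names, and the bidding-engine values are hypothetical illustrations of the seven scenario parts and the two priority ratings described above; they are not prescribed by ADD itself.

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """One entry in the catalog of Quality Attribute Scenarios."""
    quality_attribute: str     # e.g. Performance, Availability, Security
    source: str                # who or what generates the stimulus
    stimulus: str              # the event that arrives at the system
    artifact: str              # the part of the system that is stimulated
    environment: str           # the conditions under which the stimulus occurs
    response: str              # the activity the system performs
    response_measure: str      # how the response is measured
    importance: str = "M"      # stakeholder priority: H, M, or L
    technical_risk: str = "M"  # architect's risk rating: H, M, or L

# The performance scenario from the example above, with illustrative ratings.
bid_latency = QualityAttributeScenario(
    quality_attribute="Performance",
    source="Exchange",
    stimulus="Bid request",
    artifact="Bidding engine",
    environment="Normal operating conditions",
    response="Return a bid response",
    response_measure="Within 100 milliseconds",
    importance="H",
    technical_risk="H",
)
```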
At this point, the following artifacts exist: the design objectives, the primary functional requirements, the prioritized quality attribute scenarios, the constraints, and the concerns. These are the inputs to the design iterations.
- Review the inputs.
Ensure that there is clarity on the overall design problem that needs to be solved.
- Establish iteration goal and select inputs to be considered in the iteration.
Divide the design problem into several sub-problems. An iteration starts by deciding which sub-problem to address.
- Choose one or more elements of the system to decompose.
- Choose one or more design concepts that satisfy the inputs considered in the iteration.
Identify a tactic, architectural pattern, reference architecture, deployment pattern, design pattern, algorithm, framework, or existing software component that satisfies the inputs considered in the iteration.
- Instantiate architectural elements, allocate responsibilities, and define interfaces.
Catalog your design decisions. Provide a table that lists each Design Decision and Location alongside its Rationale. The Rationale should also include a table that lists each Alternative and the Reason for Discarding it. A sketch of this catalog appears after this list.
- Sketch views and record design decisions.
Catalog the elements in your design. Provide a table that lists each Element and its Responsibility, then provide a diagram with these elements. Sketches may include Module Views, Component and Connector Views, Allocation Views, and other views that address Quality Attributes.
- Perform analysis of current design and review iteration goal and design objectives.
Create a table with the columns Not Addressed, Partially Addressed, Completely Addressed, and Design Decisions Made During the Iteration. The Not Addressed, Partially Addressed, and Completely Addressed columns list the Quality Attribute Scenarios, Constraints, or Concerns in each state. The Design Decisions Made During the Iteration column provides a brief description of each design decision.
- Iterate over the next set of inputs if necessary.
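To give a rough feel for the bookkeeping these steps call for, here is a minimal sketch, in the same spirit as the scenario sketch above, of a design-decision catalog and an iteration analysis. The class names and the example entries are hypothetical, not prescribed by ADD 3.0.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alternative:
    name: str                   # option that was considered
    reason_for_discarding: str  # why it was rejected

@dataclass
class DesignDecision:
    decision_and_location: str  # what was decided, and where in the design
    rationale: str              # why this option was chosen
    alternatives: List[Alternative] = field(default_factory=list)

@dataclass
class IterationAnalysis:
    """Summary produced at the end of an iteration."""
    not_addressed: List[str] = field(default_factory=list)
    partially_addressed: List[str] = field(default_factory=list)
    completely_addressed: List[str] = field(default_factory=list)
    decisions_made: List[DesignDecision] = field(default_factory=list)

# Hypothetical entries, continuing the bid-latency scenario from earlier.
decision = DesignDecision(
    decision_and_location="Cache campaign data inside the bidding engine process",
    rationale="Keeps bid-response latency within the 100 ms response measure",
    alternatives=[
        Alternative(
            name="Query the campaign database on every bid request",
            reason_for_discarding="The round trip alone would exceed the latency budget",
        ),
    ],
)

analysis = IterationAnalysis(
    partially_addressed=["Performance: bid response within 100 milliseconds"],
    decisions_made=[decision],
)
```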
After following ADD 3.0 in a personal project of my own, I can attest that it provides a repeatable process that leads to high-quality deliverables. However, after speaking with several colleagues, I realized there is a need to break the process down into an agile methodology.
Although I would deeply appreciate the chance to use this process in delivering an architecture, I think most organizations will find it too formal for their culture. Unfortunately, this means they might end up with no architecture.