This view concludes the story by mapping the modules onto physical systems (called nodes) and defining the node name, platform, resilience model and scaling model. Note that the earlier in the project process the logical / functional architecture is produced (a good thing), the more of this part will be guesswork. Nevertheless, it is extremely useful guesswork, even if it has to be revised later on.
Figure 10 - Deployment View
By Platform, we mean:
The physical hardware device – including Vendor, Model, CPU class and count, memory, disk, NICs, etc.
Standard Vendor Software – for example, database software, ETL software, application server, middleware, etc.
By Resilience model, we mean how a device failure is handled so that a single device failure does not result in a service failure. An initial list of models is:
Hardware LB: A hardware load balancing appliance is used to distribute the load amongst multiple devices.
Software LB: A software algorithm is used to distribute the load amongst multiple devices.
Clustered: Multiple physical devices appear as a single device to consumers of their functionality. Clusters often use mechanisms such as virtual IP addresses and a heartbeat to detect whether a node in the cluster is healthy.
Manual: A manual operational procedure is required to fail over from one set of devices to another. Examples include changing configuration files or remapping DNS entries.
None: there is no resilience model for the device – if it fails, there is a service outage.
By Scalability model, we mean how we adapt to increased load. It is good practice to specify the number of instances we intend to deploy as our initial scale, and what the unit of scale is. An initial set of values for the Scalability model is:
Horizontal: To add capacity, we add more devices of the same type.
Vertical: To add capacity, we increase the physical resources of the device. For example: adding CPUs, increasing disk space or I/O capacity, increasing network I/O capacity, etc.
Often devices do not act independently for resilience or scaling. Instead they operate as a Unit: a group of devices fails over as one, and to scale you add another Unit, consisting of multiple devices.
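The node attributes described above (platform, resilience model, scaling model, initial scale and unit of scale) can be captured as a simple structured record. The following is a minimal sketch; the class name, field names and the example node are all hypothetical, not part of any prescribed notation:

```python
from dataclasses import dataclass

# Allowed values, taken from the initial lists above
RESILIENCE_MODELS = {"Hardware LB", "Software LB", "Clustered", "Manual", "None"}
SCALING_MODELS = {"Horizontal", "Vertical"}

@dataclass
class Node:
    """One deployment-view node: name, platform, resilience and scaling model."""
    name: str
    platform: str               # hardware plus standard vendor software
    resilience: str             # one of RESILIENCE_MODELS
    scaling: str                # one of SCALING_MODELS
    initial_instances: int = 1  # intended initial scale
    unit_of_scale: int = 1      # devices added per scaling step (a "Unit")

    def __post_init__(self):
        # Catch typos in the model names early
        assert self.resilience in RESILIENCE_MODELS, self.resilience
        assert self.scaling in SCALING_MODELS, self.scaling

# Hypothetical example: a web server farm behind a hardware load balancer
web_farm = Node(
    name="web-server",
    platform="2x 8-core x86, 32 GB RAM; vendor application server",
    resilience="Hardware LB",
    scaling="Horizontal",
    initial_instances=4,
)
```

Keeping the attributes in a structured form like this makes it easy to insert placeholders for unknown values and to validate the deployment view as it firms up.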
It is interesting to note that the Resilience model and the scalability model are often highly correlated. For example, a web server farm may use a load balancer to share transaction load across multiple identical instances. In this case losing a single web server is handled by the load balancer reallocating transactions across the remaining devices. Handling additional load is achieved by adding another web server and having the load balancer allocate a portion of the transaction load to it.
But even though they may be solved in the same way in certain circumstances, resilience and scalability are different problems and it is important to not confuse the two.
Some of the deployment information may not be known at the time the logical architecture is produced; placeholders should be inserted for such cases. All of the information should be known by the time the physical architecture is produced (duh!).
For the sake of clarity, following is a list of information we specifically DO NOT intend to capture in the logical / functional architecture:
Low-level information regarding network segments, subnets, etc.
Low-level information regarding the number of interfaces on a machine
Management software, anti-virus software, etc.
Detailed Business Continuity Strategies
Monitoring and Backup strategies
Detailed Security Analysis
These are reserved for views of the Physical Architecture.
Through the Deployment View diagram we begin to understand how modules are distributed across physical machines, and how the logical message flow between modules crosses physical device boundaries and physical zone boundaries.
By placing the device nodes within network zones, we begin to infer network security requirements and the communication paths that must be facilitated between the nodes that communicate with each other. In this manner, we can easily identify physical anti-patterns and security issues before they become problems.
We can also begin to infer initial device sizing and software licensing and so can approximate cost. Doing this early in the project cycle and doing it well is one of the major value propositions a good architecture process brings to an organization. Delivering stuff that actually works is another.
1.12 Character “Bios” - The Glossary
The glossary provides a single, general definition of the overall system (service, solution, etc.) being depicted. This definition serves as a “mission statement” to which all modules, data flows, and elements of the contained diagrams should subscribe.
Figure 11 - Glossary
Additional terms and descriptions are provided for every module referenced in a system view. The description should be short, but descriptive enough to convey what the module is for.
In keeping with our metaphor of comparing production of an architecture diagram to writing a story, the Glossary represents the one paragraph “biography” for each character in the story (based on the correlation of a module to a character in the story).
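A glossary of this kind is essentially a mission statement plus one short “biography” per module. A minimal sketch, where the system and module names are purely hypothetical examples:

```python
# One mission statement for the overall system, plus a short description
# ("biography") for each module referenced in a system view.
glossary = {
    "_system": "Order Management Service: accepts, validates and routes "
               "customer orders to fulfilment.",
    "Order Intake": "Receives orders from all channels and normalizes them "
                    "into the canonical order format.",
    "Order Router": "Applies routing rules to dispatch each validated order "
                    "to the appropriate fulfilment system.",
}

def describe(module: str) -> str:
    """Return the short description for a module named in a system view."""
    return glossary[module]
```

Keeping the glossary as data (rather than prose scattered across diagrams) makes it trivial to check that every module in a view actually has an entry.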
1.13 The Afterword – Deviations from the Ideal
(Or what actually happened vs. what we wanted to do)
Few implementations (probably none) ever precisely match the architecture, at least not in the real world. So, to keep the architecture relevant, we find ourselves with two additional challenges:
How to preserve the architecture as it was defined and not lose sight of the right thing to do, noting that the best architecture is one that actually gets built. If you never compromise, that does not happen.
How to actually represent the reality of what got built and its relationship to the architecture, since we need the architecture to be real and not “ivory tower” for it to be useful and relevant.
To accomplish these seemingly contradictory goals, we define the concept of a tactical deviation. In other words, we try to keep the architecture as pure as possible, but note where the implementation has deviated from the architecture.
If the diagrams become so littered with deviation symbols that the underlying architecture cannot be communicated, then it is time to change the underlying architecture and compromise. Clarity is crucial in these cases.
What we are trying to avoid is the situation where those who come after us look at what we have produced and misinterpret some ugliness we were forced into under release duress as something we intended to do. Just telling those who come after us that these things ‘that make no sense’ can and should be changed is a huge help to them. We have therefore invented special symbols that essentially mean: “We did it, but we did not like it. Not only can you change it, but you should.”
There are two classes of these symbols (which are further described in Section 1.18):
Temporary Deviation – Something you intend to fix, maybe in the next version.
Permanent Deviation – Something you have no intention of fixing but which is still architecturally wrong.
Done correctly, the architect will not have to significantly change the architecture documentation as the implementation incrementally evolves toward the architecture “vision”. Only the deviations need be removed.
Some examples of deviations:
The system was architected to use Web Services, but due to incompatibilities with client software the legacy APIs need to be maintained for a period of time.
The strategic authorization system was not ready in time for implementation, so a tactical module based on an operationally maintained file was put in place instead.
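Deviations like the two examples above can also be tracked as data alongside the diagrams, so the architecture stays pure while the record of compromises remains explicit. A minimal sketch; the class names, fields and module names are all hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class DeviationClass(Enum):
    TEMPORARY = "Temporary"   # something you intend to fix
    PERMANENT = "Permanent"   # no intention of fixing, but still architecturally wrong

@dataclass
class Deviation:
    module: str                      # module the deviation symbol is attached to
    deviation_class: DeviationClass
    description: str                 # what was done under duress, and why

# The two example deviations from the text, recorded explicitly
deviations = [
    Deviation("API Gateway", DeviationClass.TEMPORARY,
              "Legacy APIs retained until client software supports Web Services."),
    Deviation("Authorization", DeviationClass.TEMPORARY,
              "File-based tactical module until the strategic system is ready."),
]

# Diagrams stay pure; tooling can list the deviations still open per release
open_temporary = [d for d in deviations
                  if d.deviation_class is DeviationClass.TEMPORARY]
```

As the implementation evolves toward the architecture “vision”, entries are simply removed from this list; the architecture documentation itself does not need to change.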