Blockchain – The Potential

The Beginning

On Halloween 2008, someone using the name “Satoshi Nakamoto” published a whitepaper titled “Bitcoin: A Peer-to-Peer Electronic Cash System” (to this day, no one knows who Satoshi Nakamoto is: an individual, an organization or a country). It proposed a then-radical idea of peer-to-peer electronic cash transfer that involves no “trusted agency” in between. It went against the well-established principle that a financial institution (a trusted agency) is essential as a third party for any transaction between two parties. The paper quickly generated interest among developer communities, and within the next ten years the idea went through the three fundamental evolution phases of any groundbreaking idea: Bang, Bust and Build.

What is it at its core?

The foundation of blockchain is “trust without intermediaries”. The moment one realizes this, one can see its potential to supply the missing trust coefficient in almost every domain. After all, how do you know that the third parties you trust with your data are treating it the way they claim? The idea that a transaction, once recorded, cannot be manipulated is in itself very powerful.
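The tamper-evidence behind this idea can be sketched in a few lines: each block stores the hash of its predecessor, so editing any historical record breaks every link after it. This is a minimal illustration only, not a real blockchain (no consensus, no signatures, no peers):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Link a new block to the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Recompute every link; any edited block invalidates the chain after it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
assert verify(chain)

chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
assert not verify(chain)                 # the chain immediately exposes it
```

Real networks add digital signatures and distributed consensus on top of exactly this hash-linking structure, which is what removes the need for a trusted intermediary.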

The Traction

The original idea of blockchain was peer-to-peer money transfer, but people are now realizing its far bigger potential. It has generated so much interest among organizations that the very financial institutions being challenged by blockchain (it started as a digital currency that removed the need for these institutions and governance bodies) are among the major holders of blockchain patents.

Missing Trust

In almost every type of business, people put faith in institutions to keep their data away from prying eyes, but this is just an assumption and there is no way to verify it. Trust in digital data is among the biggest challenges of this century. In the recent past there have been several cases where this trust was breached, either deliberately or for lack of tools to prevent it. Clearly, the foundation of trust is flawed, and digital businesses are constantly looking for ways to overcome issues including, but not limited to, counterfeiting, digital surveillance and identity theft. Blockchain has the potential to significantly reduce these leakages and reinstate that trust.

Emerging Technologies

Emerging technologies like IoT, AI and the cloud have further widened the trust gap. We do not know what data Siri or Alexa transmits to servers, or what AI then does with it. It is merely our assumption that Amazon never snoops into AWS data, that Google's AI does not use personal data for manipulation, or that one's Tesla is not eavesdropping. There are regulations and compliance requirements, but can they really be verified? Even auditing falls short: it validates processes and the current state, and is helpless to determine what happened behind the scenes. Blockchain can accelerate adoption of these technologies by providing this missing element of trust.

Blockchain and Artificial Intelligence

AI is shaping the world, and we are still at the early stage of it (weak AI). No wonder every business wants a piece of it; however, it brings its own challenges. The first challenge posed by AI is its integrity towards the use of an individual's private data. In 2016, at an art exhibition, people gasped on finding a painting by the Dutch painter Rembrandt. Rembrandt was a great Dutch painter of the seventeenth century, and none of the audience present had ever seen this painting; nevertheless, they were sure it was a Rembrandt. It was a painting Rembrandt never painted, yet three hundred years later technology (AI) produced it flawlessly by examining all of Rembrandt's paintings. This raises the question of what AI could do with your personal data if it just decided to breach the boundaries: it could predict your next password or generate a biological identity. A blockchain ledger can be used to license an individual's data to AI providers under agreed terms.

Trusted AI models are another challenge. How do we know that AI training models are not manipulated to produce a desired output, or which variables were considered during training? With blockchain we can track all of this and prove that the training was not influenced in any manner for wrongdoing.

Similarly, another challenge with AI is the explanation of decisions. AI makes decisions by working with massive, multidimensional data, yet those decisions should still be verifiable by humans. If blockchain is adopted in this process, every data point and decision step can be stored as blocks in the chain, making the outcome verifiable. These are a few among the many issues within AI that blockchain can help solve.
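The training-provenance idea can be sketched as follows: before training, the exact data and hyperparameters are fingerprinted and the fingerprint is recorded on a ledger; anyone can later recompute it and confirm the inputs were not swapped. The `ledger` here is just a Python list standing in for a real append-only blockchain, and all names are hypothetical:

```python
import hashlib
import json

ledger = []  # stand-in for an append-only blockchain ledger

def fingerprint(training_data: list, hyperparams: dict) -> str:
    """Hash the exact inputs that went into a training run."""
    payload = json.dumps({"data": training_data, "params": hyperparams},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def record_training_run(model_id: str, training_data: list, hyperparams: dict) -> None:
    """Commit the fingerprint to the ledger at training time."""
    ledger.append({"model": model_id,
                   "fingerprint": fingerprint(training_data, hyperparams)})

def audit(model_id: str, claimed_data: list, claimed_params: dict) -> bool:
    """Verify that the claimed inputs match what was recorded at training time."""
    entry = next(e for e in ledger if e["model"] == model_id)
    return entry["fingerprint"] == fingerprint(claimed_data, claimed_params)

data = [[0.1, 0.2], [0.3, 0.4]]
record_training_run("credit-model-v1", data, {"lr": 0.01, "epochs": 10})
assert audit("credit-model-v1", data, {"lr": 0.01, "epochs": 10})
assert not audit("credit-model-v1", data, {"lr": 0.5, "epochs": 10})  # tampered params
```

On a real chain the fingerprint, not the data itself, goes on the ledger, so provenance is provable without exposing private training data.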

Blockchain and Internet of Things

We are living in a connected world that is smart. This smartness comes from the fact that numerous devices around us feed data to servers where it is processed. But the basic question remains the same: how do we ascertain the authenticity of the source? How do we know that a truck sending its movement data is actually moving on its defined path and that the sensors are not being tricked? Blockchain can help build this trust. An IoT device can be integrated with a blockchain at manufacturing time, ensuring that everything it reports can be verified at any time. Smart contracts are the business logic of blockchain networks; if they are embedded natively in IoT devices, any proposed transaction results in their execution. This makes the devices appealing, as they can now be trusted.
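The manufacture-time provisioning described above can be sketched as a key baked into the device and mirrored in a registry: every reading is signed by the device, and anyone holding the registry can verify the source. This sketch uses a shared HMAC secret for brevity; real deployments would use asymmetric keys with only the public half on the ledger, and all identifiers here are hypothetical:

```python
import hmac
import hashlib

# Hypothetical registry: at manufacturing time, each device's credential
# (here a shared secret; in practice a public key) is recorded on the ledger.
device_registry = {"truck-042": b"factory-provisioned-secret"}

def sign_reading(device_id: str, reading: str, secret: bytes) -> str:
    """The device signs each reading with its provisioned key."""
    msg = f"{device_id}:{reading}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_reading(device_id: str, reading: str, signature: str) -> bool:
    """Anyone with the registry can check a reading came from the real device."""
    secret = device_registry.get(device_id)
    if secret is None:
        return False
    msg = f"{device_id}:{reading}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

sig = sign_reading("truck-042", "lat=28.6,lon=77.2", device_registry["truck-042"])
assert verify_reading("truck-042", "lat=28.6,lon=77.2", sig)
assert not verify_reading("truck-042", "lat=0.0,lon=0.0", sig)  # spoofed reading
```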

Conclusion

Blockchain is still emerging, but it has already shown its potential beyond cryptocurrency. In fact, it is already being adopted, and researchers are working hard to overcome its limitations and make it more useful in everyday business. Clearly, there is a long way to go, but what we have so far is already very exciting.

References:

Bitcoin: A Peer-to-Peer Electronic Cash System (whitepaper)

Titans of Technology: Blockchain

The Next Rembrandt

Is Monolith Dead?

Monolith architecture has been very successful in shaping the software world as we see it today. However, the last few years have seen a sharp decline in its adoption, especially with the advent of microservices. The popularity of microservices was driven by the need for scalability and changeability, which in turn stems from the penetration of IT into almost every entity, animate or inanimate. Modern applications see no boundary when it comes to scale, and they are fond of change; this is where the monolith doesn't fit at all.

Microservices, at least in theory, are “the silver bullet” that will solve all problems and serve humankind for eternity; in practice, it doesn't work out that way. Microservices bring lots of challenges that were nonexistent earlier. Still, what they do, they do beautifully and efficiently, and most importantly they serve the purpose.

The popular idea is to have very fine-grained services, where each service is responsible for a single task. Practitioners give a contemptuous look to coarse-grained services, as they are deemed to be against the philosophy of microservices, and this is where the monolith is left for a slow death. Is the monolith really that bad, and if so, how was it one of the most successful architectures for years?

Fine-grained microservices have their own challenges, e.g. transactions or latency. To make matters worse, the management overhead is overwhelming, and agreeing on the right granularity is no easy job. Fine-grained microservices are preferred because there is no single point of failure, each service can scale independently, changes can be deployed often, and the list goes on. However, if you look at these benefits carefully, all of them can be achieved only through complex design patterns and development discipline. On the flip side, there are challenges like unavoidable network latency, which results in degraded performance (unless additional complex systems such as caching are used), huge management overhead, complex transactions and many more.

A monolith, by definition, is a single deployable that contains every part of a system, but for ages organizations have been building monoliths that at least run separate processes for the UI and the backend, with these parts integrating via interfaces. It is this segregated model that is referred to here.

Monolithic systems have the edge when it comes to simplicity. If the development process can avoid turning the system into a big ball of mud, if a monolith (as defined above) can be broken into sub-systems such that each sub-system is a complete unit in itself, and if these sub-systems are developed in a microservices style, we can get the best of both worlds. Such a sub-system is nothing but a “coarse-grained service”: a self-contained unit of the system.

A coarse-grained service can be a single point of failure. By definition it comprises significant sub-parts of a system, so its failure is highly undesirable. If a part of this coarse-grained service fails (a part which otherwise would have been a fine-grained service itself), the service should take the necessary steps to mask the failure, recover from it and report it. The trouble begins when the coarse-grained service fails as a whole. Still, this is not a deal breaker: with the right high-availability mechanisms in place (containerized, multi-zone, multi-region, stateless), the chances of it are slim. On the flip side, it takes away the complexity of failure management between sub-parts, such as the need to employ circuit breakers. There is a trade-off, but it is worth evaluating.
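To make the trade-off concrete, here is the kind of failure-management machinery that fine-grained services typically need between every pair of services, sketched as a minimal circuit breaker (hypothetical names; real systems would use a library such as Resilience4j or Hystrix):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive errors,
    calls fail fast for `reset_after` seconds instead of hitting the network."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, remote_operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = remote_operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)

def flaky():
    raise ConnectionError("downstream service is down")

for _ in range(2):  # two real failures trip the breaker
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:  # subsequent calls fail fast, shielding the caller from the network
    breaker.call(flaky)
except RuntimeError as exc:
    assert "circuit open" in str(exc)
```

Inside a coarse-grained service, sub-parts call each other in-process, so none of this machinery is needed between them.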

Scaling a coarse-grained service is not very different from scaling a fine-grained one. If its boundaries are defined carefully and it is developed to be stateless, scaling is trivial. Such services can run inside containers and can even be deployed in a serverless manner on the cloud (e.g. AWS Fargate).

One challenge a coarse-grained service faces is reacting to change. In the monolith era, the journey from code to production was manual and tedious; with modern methodologies it can be automated easily. A coarse-grained service is not very fine, but it is not supposed to be at the scale of a monolith either, so reacting to change is not as challenging as it seems. There can still be some trade-offs, but they are worth considering.

A coarse-grained service is often a complete unit in itself, so it can take advantage of running in a single process: network calls can be replaced with method calls, which not only improves performance but also simplifies the management of components.
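This swap of network calls for method calls can be illustrated with a small interface: the caller depends on an abstraction, and whether the implementation crosses the network or stays in-process becomes a deployment decision. All names here are hypothetical:

```python
from abc import ABC, abstractmethod

class InventoryService(ABC):
    """The caller programs against this contract either way."""
    @abstractmethod
    def stock_level(self, sku: str) -> int: ...

class RemoteInventoryService(InventoryService):
    """Fine-grained deployment: every lookup is an HTTP round trip."""
    def stock_level(self, sku: str) -> int:
        # e.g. return requests.get(f"http://inventory/stock/{sku}").json()["qty"]
        raise NotImplementedError("network call omitted in this sketch")

class InProcessInventoryService(InventoryService):
    """Coarse-grained deployment: same contract, plain method call."""
    def __init__(self, stock: dict):
        self._stock = stock

    def stock_level(self, sku: str) -> int:
        return self._stock.get(sku, 0)

def can_fulfil(order_qty: int, sku: str, inventory: InventoryService) -> bool:
    """Business logic is oblivious to which implementation it is handed."""
    return inventory.stock_level(sku) >= order_qty

inventory = InProcessInventoryService({"SKU-1": 7})
assert can_fulfil(5, "SKU-1", inventory)
assert not can_fulfil(9, "SKU-1", inventory)
```

The in-process variant removes serialization, network latency and partial-failure handling from the hot path without changing a line of the calling code.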

Quite evidently, there is a need for an amalgam of microservices and the monolith. In fact, microservices are not really about building very small services; they are an architectural style for building software as a set of services that can be built independently and that interact via interfaces. All of this can be woven into a monolith, or a coarse-grained service, allowing us to reap the benefits of both architectural styles.

And so, the monolith is not dead; it has just been reincarnated into a different form to serve for times to come.

References:

1. https://microservices.io/patterns/monolithic.html

2. https://12factor.net/

Architectural Decision Making: Core Architecture Principles

Decision making is one of the fundamental characteristics of architectural practice, and it often puts an architect in two minds. Computer science has come a long way from the abacus and Turing machines, and it is very mature and sophisticated in almost every aspect. It offers multiple solutions for the same problem in the same context. However, this causes a bigger trouble: the problem of plenty.

Selecting one option among many is a daily challenge for architects and system designers. It often even affects the pace of work, as a lot of time is spent reaching an agreement. It is common to find architects arguing towards a consensus, but in vain. Even if an agreement is reached, convincing other stakeholders remains an uphill task, as they have their own experiences and biases.

This choose-one-among-many problem is not unique to computer science; one often faces it in daily life. For example, when one walks into an electronics supermarket to buy a laptop, one can easily be perplexed by the diverse range of laptops on sale. Yet this little venture usually ends with one buying a laptop without much confusion. Why doesn't the problem of plenty have much impact on one's decision making while shopping?

There is something that helps people with decision making in their routine life, and if we can apply this something to architectural decision making, it will take the pain out of an architect's life. In the example above, before going to buy a laptop, one might already have had a few things sorted:

  • Ecosystem – Apple, Windows or Google
  • Preferred brand (if not in the Apple ecosystem) – Dell, Lenovo, HP or something else
  • Must-have feature – e.g. the capability to play graphics-heavy games

It is like having some ground rules before stepping out to buy a laptop, and if something similar can be practiced while architecting and designing, it will help the process considerably. These ground rules are Core Architecture Principles.

Core Architecture Principles act as a guide to all the stakeholders and make the whole decision-making process much simpler. There are two simple rules for crafting these principles:

  • Define them at a very early stage
  • Define them at a very abstract level

Consider an example where an enterprise is looking to modernize its legacy system. Leadership specifically wants a digital platform that enables it to offer new capabilities on a regular basis without any disruption to the existing ones. The focus of the new system should be better time to market, to gain an edge in a highly competitive environment. Cost is another factor and needs to be optimized. The organization has a bias towards public cloud.

Backed by the above vision statement, along with some tailor-made interviews with stakeholders, the following rules can be crafted:

  • Cloud First
  • A particular cloud provider first (AWS First, Azure First or GCP First)
  • Serverless First
  • PaaS First
  • SaaS First

The above rules are very abstract, but in the real world they act as guiding principles for all the stakeholders and ultimately help in decision making. These rules might result in the following architecture.

The above baseline architecture can actually be defined in just a one-day huddle. It might be just a jumpstart, and it is bound to see variations in due course. However, for every deviation from the outlined principles, the team will have a reason. For example, assume that after a few weeks the team finds that the AWS API Gateway is not a good fit. By then, the team has better clarity about which capabilities it wants from an API gateway and, among those, which ones are deal breakers. This information, along with the core principles, helps in reaching a common consensus and makes decision making far easier.

Core Architecture Principles are very abstract by design: they should act only as guiding principles for the architecture and should never dictate it. Often, a few of these principles are defined not at the system level but at the enterprise level. Moreover, there is no standard process for defining them; one has to carefully get to the essence of the work, along with all the constraints and biases. Last but not least, these principles are dynamic in nature and are supposed to change with time, though the rate of change is very slow.
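One hedged way to make such principles operational is to record them as data and score candidate options against them, so that every deviation is explicit rather than accidental. Everything in this sketch (the principle names, the option traits, the priority-based weighting) is hypothetical:

```python
# Core principles in priority order; earlier entries weigh more.
principles = ["cloud", "aws", "serverless", "paas", "saas"]

# Hypothetical candidate options, tagged with the principles they satisfy.
options = {
    "AWS Lambda": {"cloud", "aws", "serverless", "paas"},
    "Self-hosted Kubernetes": {"cloud"},
    "On-prem VM": set(),
}

def alignment(option_traits: set) -> int:
    """Score an option: each satisfied principle earns its priority weight
    (highest-priority principle = len(principles) points, lowest = 1)."""
    return sum(len(principles) - i
               for i, p in enumerate(principles) if p in option_traits)

ranked = sorted(options, key=lambda name: alignment(options[name]), reverse=True)
assert ranked[0] == "AWS Lambda"  # most aligned with the stated principles
```

The point is not the arithmetic but the habit: when a lower-ranked option is chosen anyway, the gap against the principles forces the team to articulate the reason.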

Infrastructure | A First Class Citizen

Modern applications are generally distributed in nature; for example, in the last 4-5 years, all the solutions I architected were microservices based. Distributed applications have a different philosophy: many architectural characteristics or concerns that were afterthoughts in traditional applications are now mainstream. Infrastructure is one such concern, and it should concern an architect, a developer or a product owner from the very initial phase. Infrastructure is actually a first-class citizen.

Infrastructure needs in traditional software systems are usually static in nature. I seldom saw anyone pay much attention to them; they were mostly left to IT engineers. Architects and developers were only concerned with defining infrastructure specifications, e.g. RAM or CPU, and that too at a later stage. These infrastructure requirements had almost no effect on how the application was crafted or developed. The most obvious consideration during development was limited to cases where the system was stateful behind a load balancer, or needed to keep a flag (e.g. in the case of a scheduler).

Things change dramatically in distributed applications, especially microservices. These systems are dynamic; for example, one of the basic ingredients of microservices is elasticity, which means the network addresses of resources change frequently. Traditional methods of defining a static list of IPs no longer work, and new patterns are needed (service discovery). Similarly, observability is a concern that is not so obvious in these systems: there is no single log file, so sophisticated infrastructure is needed for log aggregation, and tracing is no longer straightforward but distributed in nature, requiring yet another framework.
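The service-discovery pattern mentioned above can be sketched as a tiny in-memory registry: instances register on startup and deregister on shutdown, and callers look up live addresses by service name instead of hardcoding IPs. Real systems (Consul, Eureka, Kubernetes DNS) add health checks and replication; all names and addresses here are hypothetical:

```python
import random

class ServiceRegistry:
    """Minimal service discovery: a dynamic name -> addresses mapping."""

    def __init__(self):
        self._services = {}  # service name -> set of "host:port" strings

    def register(self, name: str, address: str) -> None:
        """An instance announces itself on startup."""
        self._services.setdefault(name, set()).add(address)

    def deregister(self, name: str, address: str) -> None:
        """An instance removes itself on shutdown or scale-down."""
        self._services.get(name, set()).discard(address)

    def resolve(self, name: str) -> str:
        """Pick one live instance (randomly here; real systems load-balance)."""
        addresses = self._services.get(name)
        if not addresses:
            raise LookupError(f"no live instances of {name!r}")
        return random.choice(sorted(addresses))

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
assert registry.resolve("orders") in {"10.0.0.5:8080", "10.0.0.6:8080"}

registry.deregister("orders", "10.0.0.5:8080")  # instance scaled down
assert registry.resolve("orders") == "10.0.0.6:8080"
```

Because addresses flow through `resolve` at call time, elasticity stops being the caller's problem, which is exactly why this piece of infrastructure must be designed in from the start.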

All of these concerns need to be woven into application development itself, and in fact need attention at the solution phase. Infrastructure platforms, tools and frameworks influence development greatly. Clearly, infrastructure is a primary factor in architecture and development.