
Saturday, March 29, 2014

Banking. A raison d'être. Efficient Capital flows



Having defined the scope, the Partitioning and Primitivization techniques can now be used to start building the retail banking system.  In this blog the term "core banking" is used for all of retail banking; the two terms are used interchangeably.

Partitioning is not really required, as retail banking is not such a complex system.  Partitioning would have been necessary if either the system were complex or it were perceived to be complex because of limited domain knowledge.

Primitivization will be the primary technique used for designing the core banking system.  To recap, Primitivization requires that we reduce the system to its core behaviors.  Identification of core behaviors requires an exploration of the primary need serviced by the bank, i.e. the reason for which banks exist.

The starting point for building a resilient core banking system, then, is to explore the raison d'être of a bank.  History, or rather the origins of the system (in this case banking), provides a useful starting point for understanding what fundamental need a system fulfils.  This is because at the very beginning of its evolution, the functions that a system performs cater only to the most essential needs.  It is only with the passage of time that subsequent layers of sophistication, which obscure the essential purpose of the system, get added.

If one looks at the origins of banking, the needs that a bank satisfies are of a simple kind. These are needs that arise early in a community that engages in commerce or industrial activity.

The essence of commerce is the exchange of value.  Often commerce started not with money but with barter as its fundamental tool.  Barter required a system to equate goods of a different nature, and this practice introduced the fundamental concept of "value".  It required, for instance, two bushels of wheat to be equal in value to, say, one bundle of silk.  Such a system of value-pairs could work as long as the number of commodities traded was limited.  As the number of commodities multiplied, the number of value-pairs to be maintained grew combinatorially (with n commodities, roughly n(n-1)/2 exchange ratios have to be kept), and the system became too complex to sustain itself.

Burgeoning commerce led to the emergence of a standard measure of value, which in the early stages was often grain or silk.  All commodities were equated in value to a certain amount of grain or silk.  This laid the foundation for the emergence of money as a store of value.  In due course, different kings introduced their own coins that acted as a value store.  It was often the responsibility of the kingdom's treasury to issue the coins.

This function of issuing money is today performed by central banks; it is their essential function.  I will build upon this in a later blog when I discuss how to design a resilient central banking system.  The current blog is dedicated to building a resilient core banking system, and hence I will return to the topic of commerce.

The creation of coins, i.e. currency, removed significant friction from trade and allowed people to trade more easily.  As trade flourished, traders desired to expand their operations.  Since the capacity to trade was limited by the goods that one had, there was a natural desire to increase this capacity by borrowing.  The advent of money allowed people to borrow from each other more easily and to deploy the borrowed capacity in trade.

This fundamental need for borrowing is at the root of banking.  As trade flourished further, the process of borrowing capital from individuals became the constraint.  In response to this constraint emerged large lenders who had surplus money with them.

Commerce resulted in surplus money with merchants from time to time.  So while on the one hand commerce created pockets of demand for money, on the other it created pockets of surplus money.  This surplus money sought both safety from theft and avenues for being gainfully deployed.  The process of connecting the pockets of demand with the pockets of surplus was not efficient when individual merchants acted as moneylenders.

Banks emerged in response to the need for efficiently connecting the “surplus pockets” to the “demand pockets”.

This – the need to efficiently connect surplus pockets of capital with demand pockets – is the fundamental raison d’être for which a bank exists.

In the process of acting as a connector, a bank serves two powerful needs of the economic being (a minimal sketch follows the list below):
(1) The need to borrow capital to embark upon enterprises beyond their means
(2) The need to seek a return on surplus capital
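
To make the connector role concrete, here is a minimal sketch of a bank pooling surplus capital and lending it out; the class and method names (Bank, deposit, lend) are hypothetical and chosen purely for illustration, not taken from any actual system:

```java
// A minimal sketch of a bank as a connector of surplus and demand pockets.
// All names here are illustrative assumptions, not a real design.
public class Bank {
    private long pooledSurplus = 0; // capital gathered from "surplus pockets"

    // A holder of surplus capital places it with the bank, seeking safety and a return.
    public void deposit(long amount) {
        pooledSurplus += amount;
    }

    // A borrower draws on the pooled surplus to fund an enterprise beyond his own means.
    public boolean lend(long amount) {
        if (amount > pooledSurplus) {
            return false; // demand exceeds the surplus currently pooled
        }
        pooledSurplus -= amount;
        return true;
    }
}
```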

The Scope of the Blog



The scope of discussion for this blog will be retail banking.

Wednesday, March 26, 2014

Core banking. Segmenting/Partitioning the Banking Space




Having understood that the main enemy of Resilience is complexity and having investigated the techniques of addressing complexity, it is time to apply these techniques towards our main purpose – the development of a resilient core banking system.

The first question in system design is always the scope – “What is the scope of the system being covered?”  This is a question that every system designer needs to confront right at the outset.

This question of scope best lends itself to analysis in a top-down approach.

The scope of a system is always defined in terms of the needs that it will cater to and the customers it will serve.

The first question for this blog, then, is what the scope of the core banking system should be.

The primary objective of this blog is to design a resilient core banking system.

A useful way to segment the banking services/needs map is given below.  It identifies two kinds of customers (Individual or Corporate) and two kinds of needs (Transactions or Capital).

[Banking Matrix figure: customer type (Individual, Corporate) against need type (Transactions, Capital)]

The above Banking Matrix framework divides the banking space into four segments (a sketch follows the list):
(1) Retail banking
(2) Private banking
(3) Corporate banking
(4) Investment banking
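
As an illustration, the matrix can be expressed as a simple mapping in code.  Note that the assignment of each cell to a segment (e.g. Individual plus Transactions mapping to Retail) follows the conventional reading of such a matrix and is an assumption, since the figure itself is not reproduced here:

```java
// A sketch of the Banking Matrix: customer type x need type -> banking segment.
// The cell assignments are assumed (conventional), not taken verbatim from the figure.
enum CustomerType { INDIVIDUAL, CORPORATE }
enum NeedType { TRANSACTIONS, CAPITAL }
enum Segment { RETAIL, PRIVATE_BANKING, CORPORATE_BANKING, INVESTMENT_BANKING }

final class BankingMatrix {
    static Segment segmentFor(CustomerType customer, NeedType need) {
        if (customer == CustomerType.INDIVIDUAL) {
            return need == NeedType.TRANSACTIONS ? Segment.RETAIL : Segment.PRIVATE_BANKING;
        }
        return need == NeedType.TRANSACTIONS ? Segment.CORPORATE_BANKING : Segment.INVESTMENT_BANKING;
    }
}
```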

Sunday, March 23, 2014

Primitivization and Inheritance Hierarchy. The technique for reducing Extrinsic Complexity



The second way to reduce complexity is through a process that can best be named Primitivization.

Primitivization is a process that allows us to cull out the common elements of all services.  By culling out common elements it reduces the number of programs where changes have to be applied when external conditions change.

Confusion often arises between the two techniques, Primitivization and Partitioning.

The Merriam-Webster dictionary defines primitivization as the process of becoming primitive.

Wiktionary describes it as reducing something to a primitive state.

The word Primitive itself is defined as

“relating to, denoting, or preserving the character of an early stage in the evolutionary or historical development of something.”

In the context of complexity reduction of IT systems, primitivization means the technique of reducing the system to its most primitive state.  A system can be said to be in its most primitive state when it exhibits just enough behavior to be recognized as the progenitor of its more refined forms.

A primitive system will exhibit only the core behaviors.  A behavior-set can be deemed "core" if all other behaviors exhibited by the more refined versions of the system can be derived from it.  For example, while a primitive version of a system may exhibit the behavior of emitting a sound when subjected to a stimulus, the more refined versions of it may emit different variations of sound, such as a whisper or a scream, when subjected to different stimuli.  Each of the latter behaviors retains the property of "sound" but refines it through two modifiers: intensity and pitch.

The core behaviors are likely to be common across services.  So the process of identifying core behaviors yields elements that are common across services, and it is this separation of common elements that leads to complexity reduction.

Primitivization is beneficial in another way.  Primitive systems are not complex and lend themselves well to design.  There is one problem, though: our real-life systems are not primitive.  Therefore a technique is required to build more complex systems from the primitive system.  A technique of layering, called Inheritance hierarchy, allows us to do just that.

Inheritance hierarchy is a technique that layers systems on top of one another, with each layer building upon the previous one.  It allows us to start with a primitive system and then build a hierarchy of increasingly complex systems, each layer building upon the constructs developed in the layer below it.  Each layer inherits the behavior of the layer below and then introduces new functionality.  In each layer, only as much functionality is introduced as is common to the layers above it.  This hierarchical layering helps us reduce extrinsic complexity by identifying common service elements.

Evolution provides a useful model for understanding the techniques of Primitivization and inheritance hierarchy.  Each step in the hierarchy can be thought of as an evolutionary stage that inherits from the previous stage and builds upon it to yield a higher-order system.  In the evolution of animals, the primitive stage would be a bacterium and its most refined form, human beings.

To summarize, Primitivization is the technique of building a primitive system first; it works by identifying common core behaviors.  Inheritance is the process of progressively building more complex behaviors on top of the primitive system, by progressively introducing elements that are common to the layers above.

Inheritance hierarchy extends the same approach with each subsequent layer introducing elements that are common to layers above it. In this system each child layer inherits the behavior and mechanisms of its parent layer.

To be precise, then, it is not just the technique of Primitivization but the technique of Inheritance hierarchy coupled with Primitivization that leads to complexity reduction.
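
The sound example above can be sketched in an object-oriented language as follows; the class names, the stimuli, and the exact refinements are invented for illustration (a minimal sketch, assuming the refined behaviors differ only in the intensity and pitch modifiers mentioned earlier):

```java
// Primitivization: the primitive system exhibits only the core behavior - emitting a sound.
class PrimitiveSoundEmitter {
    // Core behavior: respond to a stimulus with a sound.
    String emitSound(String stimulus) {
        return "sound";
    }
}

// Inheritance hierarchy: a refined layer inherits the core behavior
// and refines it through the two modifiers, intensity and pitch.
class RefinedSoundEmitter extends PrimitiveSoundEmitter {
    @Override
    String emitSound(String stimulus) {
        String core = super.emitSound(stimulus);   // reuse the inherited core behavior
        if ("threat".equals(stimulus)) {
            return "scream: loud, high-pitched " + core;
        }
        if ("secret".equals(stimulus)) {
            return "whisper: soft, low-pitched " + core;
        }
        return core;                               // fall back to the primitive behavior
    }
}
```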

Partitioning. The technique for reducing Intrinsic Complexity





Partitioning, as the name suggests, is the technique of dividing the system into smaller parts.  It is a fair assumption that each partition of the system is required to exhibit a smaller number of behaviors than the overall composite system.  It is also a fair assumption that the smaller number of behaviors required of each sub-system will result in a less complex design for each sub-system.

It is in this indirect way that Partitioning leads to a lower complexity.

It must be noted that partitioning imposes an integration cost on the system.  The partitioned systems need to be integrated with each other for the overall system to work.  In effect, a two-step hierarchy of systems is created: at the lower step are the individual partitioned systems, each made up of its own set of interacting elements; at the higher level is the system of these interacting partitioned systems.

It should be noted, though, that partitioning, if improperly done, can increase complexity.  This can happen if each sub-system provides similar services, as a result of which similar objects (programs) need to be created in each partition.

This technique works because each partitioned system, being simpler, lends itself to better comprehension and therefore better design.
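
As an illustration, a partition-and-integrate structure might look like the following sketch; the partition names (a hypothetical deposits partition and payments partition) and the integrating class are assumptions made purely for illustration:

```java
// Each partition is a smaller system exhibiting fewer behaviors than the composite system.
interface DepositsPartition {
    void openDeposit(String customerId, long amount);
}

interface PaymentsPartition {
    void transfer(String fromCustomerId, String toCustomerId, long amount);
}

// The integration cost: a higher-level system is needed to make the partitions work together.
class IntegratedBankingSystem {
    private final DepositsPartition deposits;
    private final PaymentsPartition payments;

    IntegratedBankingSystem(DepositsPartition deposits, PaymentsPartition payments) {
        this.deposits = deposits;
        this.payments = payments;
    }

    // A composite behavior realized by coordinating the two partitions.
    void onboardAndFund(String customerId, String funderId, long amount) {
        deposits.openDeposit(customerId, 0);
        payments.transfer(funderId, customerId, amount);
    }
}
```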

Saturday, March 22, 2014

Two Techniques for Complexity Reduction






Two techniques that help reduce this complexity are

(1) The Partitioning technique, which helps reduce intrinsic complexity
(2) The Primitivization and inheritance technique, which helps reduce extrinsic complexity

The principle used by both these techniques is divide and rule, i.e. divide the system under consideration into smaller subsystems.

Complexity - If you can measure it, you can control it




Before an attempt can be made to reduce or increase a quantity there needs to be agreement on some metric that allows measurement of the quantity. In the absence of such a metric it will not be possible to assess whether the quantity has increased or decreased. Therefore if complexity has to be reduced we would need a metric that will allow us to measure complexity. 

Once a measure is available it will at least become theoretically feasible to address the issue of how to drive down complexity.

The definition of an IT system is a good place to search for our complexity measure.  An IT system is a collection of programs that model the behavior of real world elements under different conditions. An IT system simulates the real world system by changing the observable values of IT elements (behaviors) as the inputs (conditions) are modified.

Merriam-Webster defines behavior as

“Anything that an organism does involving action and response to stimulation”

An intuitive, observable measure of the intrinsic complexity of a system is the number of possible behaviors that the system can exhibit.

Extrinsic complexity is more difficult to measure.  While there may not exist a good measure of its absolute value, there does exist a mechanism that, given two systems, can tell which has the greater extrinsic complexity.

It is intuitively clear that when, in response to an external change, a programmer needs to make changes to multiple programs, he will spend much more time than if he were required to make the change in just one place.  The system that requires many changes to be made has greater extrinsic complexity than the system that requires fewer changes.
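
The comparison can be pictured as counting, for the same external change, how many programs each system forces to be touched; the change and the counts in this sketch are made up solely to illustrate the comparison:

```java
// A toy comparison of extrinsic complexity: for the same external change,
// the system that forces modifications in more programs is the more complex one.
// The program counts below are invented purely for illustration.
public class ExtrinsicComplexityComparison {

    static String moreComplex(int programsTouchedInA, int programsTouchedInB) {
        if (programsTouchedInA == programsTouchedInB) {
            return "Neither system (equal extrinsic complexity)";
        }
        return programsTouchedInA > programsTouchedInB ? "System A" : "System B";
    }

    public static void main(String[] args) {
        int systemA = 14; // a new rule ripples through 14 programs in system A
        int systemB = 2;  // the same rule is localized to 2 programs in system B
        System.out.println(moreComplex(systemA, systemB) + " has greater extrinsic complexity");
    }
}
```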

Techniques are available for reducing both intrinsic and extrinsic complexity.  

Tuesday, March 11, 2014

COMPLEXITY - The Primary reason for loss of Resilience




With both Resilience and IT systems defined, let us now combine the two and study the issue of resilience of IT systems.

To recapitulate, a resilient system was defined as one that exhibits the following characteristics:

1.    The relationship between the time and effort spent (and therefore cost) to make a change and the number of changes is more or less linear.  A mathematical representation of this is Y (effort) = a * X (number of changes) + b, and Z (time) = c * X (number of changes) + d.
2.    The values of "a" and "c" are small.  That is, there is only a small increase in effort or time with every additional change introduced in the system.

On the other hand an IT system was defined as a collection of programs that model the behavior of real world elements under different conditions. An IT system simulates the real world system by changing the observable values of IT elements (behaviors) as the inputs (conditions) are modified.

Real world systems are subjected to continuous change.

Since IT systems are a representation of the real world systems they also need to change as the real world systems change.  Changes to the IT systems require changes to the programs.  It is after all the programs that simulate the real world systems.  

A resilient IT system then is one where changes can be made to the programs without much expenditure of effort and where the amount of effort required to make changes does not increase with number of changes.

To build a resilient IT system, it is important to understand what factors lead to programmers spending more time making changes.

The greater the complexity of a system, the greater the time required to make a change.  Complexity is the single biggest cause of loss of resilience.  It requires programmers to expend a significant amount of time and effort to understand the full impact of the changes they are contemplating.

Complexity has two sources: (1) intrinsic complexity, the complexity inherent in the system being digitized, and (2) extrinsic complexity, the complexity introduced by the way the system has been digitized.

If complexity is the single biggest cause of loss of resilience then complexity-reduction has to be the solution to building resilient systems.

Monday, March 10, 2014

What is an IT System





The purpose of this blog, as stated earlier, is to “Design a Resilient Core banking IT system”.  A secondary objective is to extract the principles of IT system resiliency.  Having defined Resilience and understood its importance, let us now understand IT systems.  Understanding what makes an IT system will help us address our main question: what set of factors, taken together, helps build a resilient IT system?

Let us start from the fundamentals.

Let us first address the question, "What is a system?"

A system is a set of interacting elements that exhibit a specific behavior under a specific set of conditions when subjected to specific stimuli.

Extending the above definition, an IT system consists of a collection of interacting programs that exhibit a specific behavior (output) when faced with a given input (i.e. when subjected to a specific stimulus).

To get a better understanding of IT systems let us delve deeper into understanding the function of a program.

Programs are instructions to machines (computers) to manipulate values.

For programs to achieve something useful the values must have some meaning to the users i.e. they must represent something that happens in the real world.  

Because these values are "representative" of some "thing" in the real world, the rules by which they are manipulated must also be the same as the rules that apply to the real-world "thing".

Of course you can always write programs that manipulate values in ways and means that have no linkages to real world phenomena.  But if you do so then the question to ask is, what purpose does such a program serve?

For computer programs to be useful these programs need to “model” something useful in real life.  Successful modeling requires  (1) that we identify the elements  (or “things”) whose behavior we want to study,  (2) that we identify the behaviors they exhibit under different conditions and (3) that we identify the different conditions under which we want to study the behavior of these elements.

We can now advance our definition of IT systems: not merely a collection of programs, but a collection of programs that model real-world phenomena.  Each program can be thought of as modeling a real-world element.

Elaborating Further: An IT system is a collection of programs that model the behavior of real world elements under different conditions. An IT system simulates the real world system by changing the observable values of IT elements (behaviors) as the inputs (conditions) are modified.

Each program in the IT system can then be thought of as representing an entity in the real world; in other words, each IT program can be thought of as representing a real-world object.  There is a technique called object-oriented design that provides the necessary nomenclature and rules to represent real-world objects as IT programs.  There are also languages, called object-oriented languages, that allow us to easily write programs such that each program represents an object.
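
As a concrete sketch, assuming a hypothetical savings-account example (the class, its fields and its rules are invented for illustration), a single program can model one real-world element, its behaviors, and the conditions under which those behaviors change:

```java
// A sketch of one program modeling one real-world element: a savings account.
// The element's observable value (balance) changes as inputs (deposits, withdrawals)
// are applied, under rules that mirror the real-world rules for such an account.
public class SavingsAccount {
    private long balanceInCents = 0;

    // Behavior exhibited when the "deposit" stimulus is applied.
    public void deposit(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("Deposit must be positive");
        }
        balanceInCents += amountInCents;
    }

    // Behavior exhibited when the "withdraw" stimulus is applied; the real-world
    // condition "you cannot withdraw more than you hold" is mirrored by the rule below.
    public void withdraw(long amountInCents) {
        if (amountInCents > balanceInCents) {
            throw new IllegalStateException("Insufficient funds");
        }
        balanceInCents -= amountInCents;
    }

    // Observable value of the modeled element.
    public long balanceInCents() {
        return balanceInCents;
    }
}
```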

This understanding of IT systems is sufficient for us to study the problem of resilience in IT systems.

Understanding Resilience




Let us start by defining the meaning of Resilience.  The Stockholm Resilience Centre of Stockholm University defines resilience as:

“The long term capacity of a system to deal with change and (to) continue to develop (in face of this change). For an ecosystem such as a forest, this can involve dealing with storms, fires and pollution, while for a society it involves an ability to deal with political uncertainty or natural disasters in a way that is sustainable in the long-term.”

For an IT system, the above definition suffers from the shortcoming that the phrase “capacity of a system” is not well defined.  Let us define this capacity as the time required to make a change as well as the effort (and therefore cost) required to make a change to an IT system.  We can therefore modify the definition of a resilient system as follows: a system's ability to deal with change in a short time and without much expenditure of effort.

A resilient system, then, is one that exhibits the following characteristics:

1.    The relationship between the time and effort spent (and therefore cost) to make a change and the number of changes is more or less linear.  A mathematical representation of this is Y (effort) = a * X (number of changes) + b, and Z (time) = c * X (number of changes) + d.
2.    The values of "a" and "c" are small.  That is, there is only a small increase in effort or time with every additional change introduced in the system.

Experienced practitioners know that there are many systems where the time and cost of making a change is a non-linear function of the number of changes that the system is subjected to.  Systems that show this non-linearity cannot be deemed resilient.
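
The contrast can be pictured with a small sketch of the linear effort model Y = aX + b against a non-linear one; the coefficients and the quadratic shape chosen for the non-resilient curve are illustrative assumptions, not measurements:

```java
// A toy illustration of the linear effort model Y = aX + b versus a non-linear one.
// The coefficients below are invented purely to show the contrast.
public class ResilienceCostModel {

    // Resilient system: effort grows linearly with the number of changes, with a small slope 'a'.
    static double resilientEffort(int numberOfChanges) {
        double a = 0.5, b = 2.0;
        return a * numberOfChanges + b;
    }

    // Non-resilient system: effort grows non-linearly (here, quadratically) with the number of changes.
    static double nonResilientEffort(int numberOfChanges) {
        return 0.5 * numberOfChanges * numberOfChanges + 2.0;
    }

    public static void main(String[] args) {
        for (int changes : new int[] {1, 10, 100}) {
            System.out.printf("changes=%d resilient=%.1f non-resilient=%.1f%n",
                    changes, resilientEffort(changes), nonResilientEffort(changes));
        }
    }
}
```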

The cost non-linearity exhibited by non-resilient systems has a significant impact on the competitiveness of companies.  The unnecessary money that a company spends maintaining these systems is money it could have used to source new infrastructure that would have made it more competitive.  In many real-life companies, so much money is spent maintaining old systems that little is left to buy software for other areas of the business.

Often, in large corporations, software systems run mission-critical operations.  The perceived risk in changing these systems is very high.  Since the operations being supported are mission-critical, everyone remains fearful of replacing old systems with new ones lest something stop functioning.  As a result, software systems in large corporations tend not to be replaced for decades.  To remain contemporary, these systems are regularly modified.  As they are modified, it is important that they remain resilient; if they do not, then, as discussed earlier, the corporations will be forced to spend increasing amounts of money just to maintain the old systems.

This topic of Resilience is not just a matter of concern for large corporations; it is of equal concern to software product companies.  The case for building resilient systems is clearly compelling for enterprises, and it is equally compelling for software product companies.

For software product companies, building resilient products is a matter of survival.  Their products have to serve a diverse set of markets, and each market has its own requirements.  Each market also has a certain rate of change, driven by factors such as regulatory requirements, evolving customer needs and competitive dynamics.  Products are continuously subjected to these changes emanating from market realities.  The greater the number of markets a product is installed in, the greater the quantum of changes the product is subjected to.  For a product to remain competitive, it must be resilient enough to allow for quick and effortless incorporation of these changes.