Microservice architecture is not a software development approach

Microservices are often presented in contrast to monolithic applications, which are frequently associated with a Big Ball of Mud. If your application is a BBoM, don't expect microservices to help. They are not a turnkey solution to good design and modularity.

I think it is really important to understand that microservices are just an architectural style. It's nothing more than a Service Oriented Architecture with a couple of opinionated patterns, principles and practices. These tools give guidelines for creating distributed systems. The main goal of this kind of architecture is to enable different teams to work autonomously on different parts of a system and thus improve overall productivity. When well applied, microservices are an investment that will allow you to evolve quickly and make choices independently from other teams.

Microservices won't help you improve the quality of your code. This architectural style is not a solution for better understanding what your application does and how. It doesn't tell you how to organise the code to make it less messy or less buggy.

If you ended up with a monolithic BBoM application and you haven't learned from your mistakes, you will probably still end up with a set of micro-BBoMs. Moreover, I think the sum of all those micro-BBoMs is probably a bigger mess than the monolithic BBoM.
A solution relying on badly designed and badly implemented microservices is worse than a bad monolithic application.

Building, maintaining and monitoring distributed systems brings a lot of challenges. The additional complexity of distributed system architectures, added to the complexity of your application's domain, could make the overall system a bigger mess.

To get the advantages of the microservice architecture in your projects, I think you must have reached a certain level of maturity.

Here are some examples of the additional complexity introduced by distributed architectures.

Domain knowledge and design

Sam Newman's definition of a microservice is:

« Small autonomous services that work together around business domain » – Sam Newman

Domain-Driven Design is a good approach for organising microservices according to the domain.

At the beginning the domain model will probably evolve a lot as you learn the domain. You will probably want to refactor the model toward deeper insight.

The risk of falling too quickly into microservice envy is to end up with the domain model distributed across different systems. This situation can make it harder and/or slower to try new domain models or to improve the existing ones. For instance:

  • you may want to split some parts, but your microservice is a dependency of another; if you change something you will break the API and thus compromise all the systems that depend on yours;
  • you discover that some concepts distributed across several microservices are tightly related and it would be better to group them. Unfortunately these microservices are owned by another team, so you have to synchronise with them.

The risk of applying the microservice architecture too early in the application's life is to make experimentation harder. You may end up with a couple of microservices that don't really do the right job, or don't do the job right.

It is better to start applying microservices when you already have a good knowledge of your domain and your models are mature enough.

Development practices and skills

Release

A major goal of the microservice architecture style is the ability of each small service to evolve autonomously. Each service has its own release planning and versioning.

An application only exists to solve its end users' problems. When it is composed of distributed systems, an application's version is the set of the versions of all the microservices involved.

If your microservice passes the acceptance tests with some versions of its dependencies, will it still be OK in the production environment, where the versions of those dependencies are not the same? The key question is how, in a microservice environment, you deal with service dependencies, their versioning and their release.

The worst way to solve this problem is to create a monolithic-distributed application in which all of your microservices are released together and pushed together into production. It is totally against the microservice philosophy.

Practices like agile acceptance testing, consumer-driven contracts and continuous delivery could probably solve (at least partially) this problem.

However, in order to apply these practices you need a culture of automation.
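To make the consumer-driven contract idea concrete, here is a hedged sketch (no specific tool is assumed, although tools like Pact automate exactly this; the endpoint, fields and class names are hypothetical): a test owned by the consumer, run against the provider, that asserts only the fields this consumer actually relies on.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class OrderServiceContractCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical provider endpoint, e.g. in a shared test environment.
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://orders.test.local/orders/42"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        JsonNode order = new ObjectMapper().readTree(response.body());

        // This consumer only depends on these fields; anything else may change
        // on the provider side without breaking the contract.
        check(response.statusCode() == 200, "order endpoint must answer 200");
        check(order.hasNonNull("id"), "the contract requires an 'id' field");
        check(order.hasNonNull("status"), "the contract requires a 'status' field");
    }

    private static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }
}

Running such checks for every known consumer in the provider's delivery pipeline turns "we think nobody uses that field any more" into something that can be verified automatically.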

Evolving APIs

Microservices communicate with each other through APIs that hide implementation details. The problem arises when you have to evolve an API and introduce changes that cannot co-exist with the existing contract: you have to create a new version of the API that is not backward compatible with the previous one.

In a microservice environment you don't control the consumers of your API. You can't force all of your clients to use the new API, so you must keep supporting the old one until all the services that depend on you have updated their code.

This is where API versioning comes into play. But once you have a new version you have to deprecate the old one, communicate with the other teams and do whatever is needed so that, one day, the old endpoints can be removed. This also adds complexity and requires rigorous practices.
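As a small illustration of how incompatible versions can live side by side, here is a hedged sketch (Spring MVC is used purely as an example; the resource and field names are invented): the /v1 contract stays online for existing consumers while /v2 carries the breaking change.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
class CustomerEndpoints {

    // Old contract: the full name in a single field. Deprecated, but still
    // served until every known consumer has migrated.
    @GetMapping("/v1/customers/{id}")
    public CustomerV1 getV1(@PathVariable String id) {
        return new CustomerV1(id, "Jane Doe");
    }

    // New contract: a breaking change (the name is split), exposed under a new version.
    @GetMapping("/v2/customers/{id}")
    public CustomerV2 getV2(@PathVariable String id) {
        return new CustomerV2(id, "Jane", "Doe");
    }
}

class CustomerV1 {
    public final String id;
    public final String fullName;
    CustomerV1(String id, String fullName) { this.id = id; this.fullName = fullName; }
}

class CustomerV2 {
    public final String id;
    public final String firstName;
    public final String lastName;
    CustomerV2(String id, String firstName, String lastName) {
        this.id = id; this.firstName = firstName; this.lastName = lastName;
    }
}

The cost is exactly the one described above: the old endpoint has to be maintained, monitored and, one day, decommissioned.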

Documentation

A service only exists in order to be called. To be called, you have to provide your consumers with high-quality documentation, and you must keep it up to date as your API evolves.

The risk of bad documentation is misuse of your API, which can cause buggy client code and compromise the overall system.

Monitoring and Debugging

In a monolithic application there is only one thing to monitor: the application itself, and debugging step by step is easy. In a distributed environment it can be a lot trickier.

Failure

How do you avoid the propagation of a failure all over your system?

Conclusion

Working with microservices is really hard because of their complexity. The set of potential problems related to distributed systems contains far more points than the small list I mentioned in this article.

Maybe microservices are a solution for you, but maybe not. Falling too early into microservice envy could move you away from the primary purpose of your application: solving business problems.

The microservice architecture has a lot of advantages in terms of deployment and release management. But this is only true if you apply it correctly, and applying it correctly requires skills and good practices.

Dealing with the problems raised by distributed systems can be harder than dealing with the problems of a monolithic application, and it might not be worth it.

My advice is to first design a monolithic application with a good, modular design, and then move step by step to a microservice architecture… only if needed. If you have designed a modular monolithic application, it should be easy to evolve it into a distributed application.

Approaches like Domain-Driven Design really help you deal with the complexity of your application and create a modular design. Principles like Clean Code give you practices to improve your code quality. These tools can really give you the keys to create better software. Not the microservice architecture.

History log of my week 52/2015

Here is my technical diary of the last week of 2015.

Domain Driven Design, Application Layers and Validation

I read a post by Mathias Verraes about the different layers of an application and what kind of validation resides in each one (http://verraes.net/2015/02/form-command-model-validation). His ideas echo what I previously wrote about javax.validation (https://erichonorez.wordpress.com/2015/12/20/history-log-of-my-week-512015/).

It is sometimes tempting to use a unified validation model across all architecture layers in order to ensure state consistency. This temptation can come from wanting to avoid some kind of duplication in your validation, or simply because the form filled in by the user will impact a specific entity. So the entities are used as the model to validate user input in a form (presentation layer), to validate commands (application layer) and to enforce business rules (domain layer).

However, with that kind of unified model you may break the Single Responsibility Principle. Your domain model is polluted by responsibilities of the layers above. Validations become hard to maintain and evolve because of the resulting complexity and fragility, even more so if it is a multi-tenant application and tenant-specific rules also live in this unified model.

Validation in these different layers has different goals. Of course, if a form field is required because it will fill a property of an entity, you will have some duplication, since this field will be required in several layers. But the models used in a form are there to validate the format of the user input and help the user correct his mistakes.

Validations in the domain model ensure that the model never reaches an invalid state. It is a serious problem if some invariants of your domain layer are not respected. When these severe situations happen, exceptions should be thrown; the goal is not to help the user correct what he did.

Some rules in the UI can also be more restrictive. To illustrate the fact that rules applied in the several layers have different goals, imagine an accounting application. In your domain model a transaction can have an optional communication field, but for a specific client this field should be required. Instead of modifying your domain model you could simply enforce this requirement at the application or UI layer. If some clients have specific requirements on their business model, it should not impact your domain model.
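A minimal sketch of that accounting example (all names are hypothetical, and this is only one way to place the rule): the domain keeps the communication optional, while the tenant-specific restriction is enforced at the application layer before the command ever reaches the domain model.

import java.util.Optional;

// Domain layer: the invariant is only what holds for every tenant.
class Transaction {
    private final long amountInCents;
    private final Optional<String> communication;   // optional for the domain

    Transaction(long amountInCents, Optional<String> communication) {
        if (amountInCents == 0) {
            throw new IllegalArgumentException("a transaction must have a non-zero amount");
        }
        this.amountInCents = amountInCents;
        this.communication = communication;
    }
}

// Application layer: the tenant-specific restriction, kept out of the domain model.
class RegisterTransactionHandler {

    void handle(String tenantId, long amountInCents, String communication) {
        if ("tenant-that-requires-communication".equals(tenantId)
                && (communication == null || communication.trim().isEmpty())) {
            throw new IllegalArgumentException("communication is required for this tenant");
        }
        Transaction transaction = new Transaction(amountInCents, Optional.ofNullable(communication));
        // ... hand the transaction over to a repository
    }
}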

In a multi-tenant application it is really important to remember that your domain exists to help tenants in their own domains, so your domain is different from your clients' domains. If some clients have specific requirements, these should be pushed as far as possible toward the boundaries of your system.

History log of my week 51/2015

Here is my technical diary for the 51st week of 2015.

Javax.validation

At work I started to write a document explaining how we can simplify our complex validation rules by using the javax.validation API properly. Today we already rely on this API to validate objects in our multi-tenant application, but because we use it at the wrong layer of the architecture and/or don't use the full power of the API, our validation is really hard to maintain and evolve.

What I learned.

We should avoid conditional logic in validations and promote self-documenting code. In a multi-tenant application, validation rules on an object can depend on the tenant of the user performing the action or on the tenant owning the object being operated on. The worst scenario is when these differences are hidden in the code. As Eric Evans describes in his book on Domain-Driven Design, code becomes self-describing when implicit rules are made explicit.

One application of that idea is to represent each variant by a specific type. Instead of having a single object and a lot of logic inside the validation class trying to figure out which case applies to the given object, the goal is to have one class per variant.

To avoid repeating yourself by writing several almost identical objects where only one part varies, you can use inheritance and interfaces. Constraint annotations on overridden methods in a subclass are applied cumulatively, and constraints declared on an interface can be used as traits. A sketch follows below.
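Here is a minimal, hypothetical sketch of these two mechanisms (it assumes a Bean Validation implementation such as Hibernate Validator on the classpath; the command names are invented): the constraint declared on the interface acts as a trait, and the subclass adds a constraint cumulatively on the overridden getter.

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

// Validation rules shared by every command carrying a communication, used as a trait.
interface HasCommunication {
    @Size(max = 140)
    String getCommunication();
}

class CreateTransaction implements HasCommunication {
    private final String account;
    private final String communication;

    CreateTransaction(String account, String communication) {
        this.account = account;
        this.communication = communication;
    }

    @NotNull
    public String getAccount() { return account; }

    @Override
    public String getCommunication() { return communication; }
}

// Variant for a tenant that additionally requires the communication.
class CreateTransactionForTenantX extends CreateTransaction {
    CreateTransactionForTenantX(String account, String communication) {
        super(account, communication);
    }

    @Override
    @NotNull // added on top of the @Size(max = 140) inherited from the interface
    public String getCommunication() { return super.getCommunication(); }
}

public class CumulativeConstraintsDemo {
    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        // One violation: the communication is required for this tenant-specific variant.
        System.out.println(validator.validate(new CreateTransactionForTenantX("BE01", null)));
    }
}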

To deal with the case where a single rule has different implementations depending on the type of the object it applies to, it is possible to specify a list of validators in the constraint declaration (@Constraint(validatedBy = ...)).

If a rule has a single validation process that can be parametrised with some values, the parameters can simply be carried by the annotation itself.
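A hedged sketch of both points (the constraint, its parameter and the two validators are all invented): the annotation carries a max parameter, and @Constraint(validatedBy = ...) lists one validator per supported type.

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Documented
@Target({ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = {AmountWithinLimitValidator.class, StringAmountWithinLimitValidator.class})
@interface WithinLimit {
    String message() default "amount exceeds the allowed limit";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
    long max();                                     // parameter carried by the annotation
}

class AmountWithinLimitValidator implements ConstraintValidator<WithinLimit, Long> {
    private long max;
    public void initialize(WithinLimit constraint) { this.max = constraint.max(); }
    public boolean isValid(Long value, ConstraintValidatorContext ctx) {
        return value == null || value <= max;       // null is handled by @NotNull if needed
    }
}

class StringAmountWithinLimitValidator implements ConstraintValidator<WithinLimit, String> {
    private long max;
    public void initialize(WithinLimit constraint) { this.max = constraint.max(); }
    public boolean isValid(String value, ConstraintValidatorContext ctx) {
        if (value == null) {
            return true;
        }
        try {
            return Long.parseLong(value) <= max;
        } catch (NumberFormatException e) {
            return false;                           // not a number, so certainly not within the limit
        }
    }
}

The engine picks the validator whose supported type matches the annotated element, so the same @WithinLimit(max = 1000) can sit on a Long field in one command and on a String field in another.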

Another thought I had is that we should avoid, as much as possible, putting annotations on private fields. The validation rules that apply to an object are part of its API; by declaring them on private members of a class we hide them.

Moreover, I think that javax.validation should be used in the application layer only. Using it in the domain layer is a mistake, for several reasons:

  1. The more you rely on infrastructure tools in the domain layer, the harder it is to evolve. These tools evolve and things become deprecated, and sometimes there are variations between two implementations of an API. Using this kind of tool introduces risk and fragility in the domain layer.
  2. Relying on annotations to validate an entity means that it can be, for some time, in an invalid state, which IMHO should never be possible. If for some reason the validation does not work as expected, you could end up persisting invalid entities.

So my rule of thumb is to use javax.validation only in the application layer.

In the future I'm planning to write articles with examples to go deeper into these topics.

Ideas for examples:

  • Inheritance
  • Interfaces as trait
  • Parameters in annotations
    • Primitives
    • Objects

UUID

I'm working on a web application in my free time. This web application exposes URLs containing the unique identifiers of accessible resources. Until now the entities have been persisted in a MySQL database and the IDs are auto-incremented integers.
What I want is to avoid exposing these IDs, for security reasons. Even if the app does not deal with sensitive data and users are anonymous, I don't find it very nice that a user can access the entire data set just by incrementing a value in the URL.

So I looked for other ways to generate unique identifiers. One option I found is to use UUIDs (Universally Unique IDentifiers), which are 128-bit (16-byte) values.

Generated identifiers like UUIDs can be produced in a decentralised way: you no longer rely on a database to generate IDs, so you can scale better. Moreover, you remove a single point of failure for these IDs: your database. http://fr.slideshare.net/davegardnerisme/unique-id-generation-in-distributed-systems
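Generating one in Java is a one-liner; here is a minimal sketch (the class name is only for illustration):

import java.util.UUID;

// A random (version 4) UUID is generated locally, without any coordination
// with the database or with other nodes.
public class IdGenerationDemo {
    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        System.out.println(id);                     // e.g. 0b44d2b6-..., in the 36-character canonical form
        System.out.println(id.toString().length()); // 36
    }
}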

With generated identifiers there is always a risk of collision, i.e. the risk of generating the same identifier twice. For a service like Twitter, with its number of tweets per second, this risk has to be taken seriously; in my case it is practically non-existent.

Using UUIDs as primary keys has a performance impact on MySQL, as described in the following posts: https://www.percona.com/blog/2014/12/19/store-uuid-optimized-way/
http://kccoder.com/mysql/uuid-vs-int-insert-performance/

Another advantage of UUIDs is that it is easier to merge data from different data sources, simply because even across different systems there is a low risk of collisions between ids.

There are other projects that generate unique identifiers using decentralised, k-ordered generation; these projects may further reduce the risk of collisions.

Another idea I had but haven't explored yet: instead of exposing the ids directly, why not just obfuscate them? In the database we would keep auto-incremented integers, but the endpoints would encrypt them every time they send one and decrypt them when they receive one.

If you don't need decentralised id generation (which is my case) it is a possibility.

I don't remember the exact problem I faced with id generation, but it is a solution since you no longer rely on the database to get an id.
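Here is a hedged sketch of that obfuscation idea (the class name, key handling and cipher choice are all illustrative, not a recommendation): the numeric id is encrypted before being put in a URL and decrypted when it comes back.

import java.nio.ByteBuffer;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class IdObfuscator {

    private final SecretKeySpec key;

    public IdObfuscator(byte[] sixteenByteSecret) {
        this.key = new SecretKeySpec(sixteenByteSecret, "AES");
    }

    // Turns an internal auto-incremented id into an opaque, URL-safe token.
    public String encode(long id) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(ByteBuffer.allocate(Long.BYTES).putLong(id).array());
        return Base64.getUrlEncoder().withoutPadding().encodeToString(encrypted);
    }

    // Turns the token received from a URL back into the internal id.
    public long decode(String token) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] decrypted = cipher.doFinal(Base64.getUrlDecoder().decode(token));
        return ByteBuffer.wrap(decrypted).getLong();
    }
}

The same id always maps to the same token, so URLs stay stable across requests, and the database keeps its plain auto-incremented integers.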

Vaadin

At work we use Vaadin to build the UIs for our business tools. What I learned is that it is possible to embed a Vaadin application in an HTML page, as described here: https://vaadin.com/book/-/page/advanced.embedding.html

A possible use case would be to facilitate a migration from Vaadin to a modern JS framework.

How to dynamically inject and execute JavaScript from Vaadin: https://vaadin.com/book/vaadin6/-/page/advanced.printing.html

Docker

Getting a MySQL server running is super easy, but the IP address to use is 192.168.99.100 (the Docker Machine VM's address, not localhost).

Flyway

A database migration tool that is super easy to get started with because it uses plain .sql files as the source for migrations.
Java migrations can be used too when something is too complex to be done in an SQL script (see the sketch below).
It is very well integrated with the Spring framework: just add the flyway-core dependency, create db/migration in your resources folder and the migrations run when the application starts.
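For the Java-migration case, here is a hedged sketch (the table and column are invented, and it assumes the Flyway 3/4-era JdbcMigration interface; more recent versions use BaseJavaMigration instead):

package db.migration;

import java.sql.Connection;
import java.sql.PreparedStatement;

import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

// Picked up from the db/migration location and ordered by its version prefix (V2).
public class V2__Backfill_task_status implements JdbcMigration {

    @Override
    public void migrate(Connection connection) throws Exception {
        // Anything too complex for a plain SQL script can be done here with JDBC.
        try (PreparedStatement statement = connection.prepareStatement(
                "UPDATE task SET status = ? WHERE status IS NULL")) {
            statement.setString(1, "not completed");
            statement.executeUpdate();
        }
    }
}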

Hibernate

By default, Hibernate will load an import.sql file from the root of the classpath when schema generation is enabled, and Spring Boot will additionally load schema.sql and data.sql.

Vault

A way to create applications without hard-coded passwords?
Interesting if we don't want to share production passwords with the whole team?

Overview of the Vert.x event-driven architecture

According to the official documentation, Vert.x is a framework that helps you build reactive applications. « Reactive » is defined more or less precisely in the Reactive Manifesto, which describes how a system should behave in order to meet today's challenges: applications have to handle more and more concurrent connections and access growing data sources, while meeting users' expectations, which are more and more demanding in terms of response time and availability.

Nowadays we expect a website to push the information that may interest us without us requesting it. Yesterday, on a forum application, we had to refresh the page to see if something new had been posted; today we expect the page to display new data in real time without any user action. Moreover, we expect these applications to be accessible in different ways on different platforms (web browsers on desktop and mobile, mobile applications). We must be able to stay connected anytime, anywhere.

With these requirements the traditional « thread per request » approach reaches its limits in terms of scalability and efficiency. Long polling connections for real-time notifications would require one thread per user, which is not acceptable.

Like Node.js, Vert.x has an event-driven architecture and implements the reactor pattern to handle concurrent requests. Incoming requests are turned into events and put on a queue. An event loop dequeues these events and dispatches them sequentially to handlers. The processing of events by the event loop is synchronous, one by one, while the handlers are asynchronous. Something very important to understand is that the event loop and the handlers are executed on the same thread, so the golden rule is that a handler must never block. If it blocks for some reason (access to a database, a file or an external service), it blocks the processing of all the other requests and events.

[Figure: Event-driven architecture of Vert.x]

A Vert.x application involves a small number of threads, so it consumes less memory and loses less CPU time in context switching. Its event-driven architecture will help you create more scalable applications than a framework based on the traditional « thread per request » model.

Unlike Node.js, Vert.x creates several event loops, based by default on the number of available CPU cores.

A hello-vertx example

Let's take the example from the Vert.x website homepage:

[Screenshot: the hello-world snippet from the Vert.x homepage]
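For readability, here is a hedged reconstruction of that snippet as it would look inside a verticle (Vert.x 3 Java API):

import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
            .requestHandler(req ->
                // Runs on the event loop for every incoming request.
                req.response()
                    .putHeader("content-type", "text/plain")
                    .end("Hello from Vert.x!"))
            .listen(8080);
    }
}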

What we are doing here is creating an HTTP server listening on port 8080. Vert.x transforms every incoming HTTP request into an event and puts it on a queue. The event loop then dequeues these events one by one and calls the registered handler. In this example a Java 8 lambda is used to create a handler that simply builds a response with an HTTP header and "Hello from Vert.x!" as the body. Ending the response itself produces an event, which is queued and processed by the event loop before being dispatched to the handler that sends it to the client. Everything is an event in Vert.x.

In real web applications handlers will probably access a data source or call other services. Dequeuing incoming events and calling the registered handlers happen on the same thread; that is why handlers are asynchronous and must never block! A sketch of how to offload blocking work is shown below.
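Here is a hedged sketch of that rule in practice (Vert.x 3 API; the slow call is hypothetical): the blocking work is handed to a worker thread through executeBlocking, and the event loop only ever sees the asynchronous result.

import io.vertx.core.AbstractVerticle;

public class BlockingAwareVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
            .requestHandler(req ->
                vertx.<String>executeBlocking(future -> {
                    // Runs on a worker thread, so it is allowed to block.
                    future.complete(slowDatabaseCall());
                }, res -> {
                    // Back on the event loop with the result.
                    if (res.succeeded()) {
                        req.response().end(res.result());
                    } else {
                        req.response().setStatusCode(500).end();
                    }
                }))
            .listen(8080);
    }

    // Hypothetical blocking operation standing in for a database or file access.
    private String slowDatabaseCall() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}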

Resources

How to create a REST API with Node.js and Express

This post will show you how to quickly build a simple REST API with JavaScript, Node.js and Express. Why these technologies instead of other widely used ones like WCF or ASP.NET Web API? My goal wasn't to build a production-ready API; it was just to mock a real REST service, and the reason I chose these tools is the simplicity and speed with which I could get something working.

Node.js and Express installation

  1. The first thing you have to do is install Node.js if it isn't already installed. Just download it from the website (see here), click, click, next and it's done!
  2. Open the Node.js command prompt, go to the directory in which you want to create the server and install Express (the middleware we use to create the web server):
    $> npm install express
    

That’s it. It’s time to create our API.

The Todo List API

The example I took is a basic Todo List. A Task is described by a name, a description, a due date, a status and a unique identifier. The API uses JSON for data transmission and here is its functional description:

  • GET /tasks/ : return the tasks list
  • GET /tasks/:id : return the task identified by the given :id
  • POST /tasks/ : create a new task corresponding to the JSON object given in the request body
  • PUT /tasks/:id : update the task with the values of the JSON object given in the request body
  • DELETE /tasks/:id : delete the task with the given id

These methods should return:

  • an HTTP 200 status code when the operation is successful (for the POST, PUT and DELETE verbs, which do not return a resource);
  • an HTTP 404 error code if the task we try to access doesn’t exist.

Implementation

Data

The purpose of this post is to focus on how to create a web API, so I deliberately simplified the data persistence layer as much as possible. In this example, tasks are stored in memory, in a JavaScript array. I just created a simple 'TaskRepository' class that provides a thin abstraction over that array. Here is its public interface:

function TaskRepository() {}
/**
 * Find a task by id
 * Param: id of the task to find
 * Returns: the task corresponding to the specified id
 */
TaskRepository.prototype.find = function (id) {}
/**
 * Find the index of a task
 * Param: id of the task to find
 * Returns: the index of the task identified by id
 */
TaskRepository.prototype.findIndex = function (id) {}
/**
 * Retrieve all tasks
 * Returns: array of tasks
 */
TaskRepository.prototype.findAll = function () {
    return this.tasks;
}
/**
 * Save a task (create or update)
 * Param: task the task to save
 */
TaskRepository.prototype.save = function (task) {}
/**
 * Remove a task
 * Param: id the id of the task to remove
 */
TaskRepository.prototype.remove = function (id) {}

Create an Express Server

To create an instance of an Express server, configure it to parse JSON objects contained in the request body, and instantiate the in-memory repository used by the routes, just write the following snippet:

var express = require('express');
var app = express();
app.configure(function() {
    app.use(express.bodyParser()); // used to parse JSON objects given in the request body
});
var taskRepository = new TaskRepository(); // in-memory repository used by the route handlers below

Get the task list

Let's configure the router to return all the tasks when a request is made with an HTTP GET on the '/tasks' URL:

/**
 * HTTP GET /tasks
 * Returns: the list of tasks in JSON format
 */
app.get('/tasks', function (request, response) {
    response.json({tasks: taskRepository.findAll()});
});

You can find further information about application routing here

Get a task

You can retrieve a specific task by making an HTTP GET request on the URL '/tasks/:id', with :id equal to the id of the task you want to retrieve. If no task is found, an HTTP 404 status code is returned.

/**
 * HTTP GET /tasks/:id
 * Param: :id is the unique identifier of the task you want to retrieve
 * Returns: the task with the specified :id in JSON format
 * Error: 404 HTTP code if the task doesn't exist
 */
app.get('/tasks/:id', function (request, response) {
    var taskId = request.params.id;
    try {
        response.json(taskRepository.find(taskId));
    } catch (exception) {
        response.send(404);
    }
});

Create a task

To create a task you have to execute an HTTP POST on '/tasks' with a serialized JSON object corresponding to the task you want to create in the request body.

/**
 * HTTP POST /tasks/
 * Body Param: the JSON task you want to create
 * Returns: 200 HTTP code
 */
app.post('/tasks', function (request, response) {
    var task = request.body;
    taskRepository.save({
        title: task.title || 'Default title',
        description: task.description || 'Default description',
        dueDate: task.dueDate,
        status: task.status || 'not completed'
    });
    response.send(200);
});

Update a task

/**
 * HTTP PUT /tasks/:id
 * Param: :id the unique identifier of the task you want to update
 * Body Param: the JSON task with the values you want to update
 * Returns: 200 HTTP code
 * Error: 404 HTTP code if the task doesn't exist
 */
app.put('/tasks/:id', function (request, response) {
    var task = request.body;
    var taskId = request.params.id;
    try {
        var persistedTask = taskRepository.find(taskId);
        taskRepository.save({
            taskId: persistedTask.taskId,
            title: task.title || persistedTask.title,
            description: task.description || persistedTask.description,
            dueDate: task.dueDate || persistedTask.dueDate,
            status: task.status || persistedTask.status
        });
        response.send(200);
    } catch (exception) {
        response.send(404);
    }
});

Delete a task

/**
 * HTTP DELETE /tasks/:id
 * Param: :id the unique identifier of the task you want to delete
 * Returns: 200 HTTP code
 * Error: 404 HTTP code if the task doesn't exist
 */
app.delete('/tasks/:id', function (request, response) {
    try {
        taskRepository.remove(request.params.id);
        response.send(200);
    } catch (exception) {
        response.send(404);
    }
});

Start the Express server

app.listen(8080); // the port on which the express server listens

The result

Here is the final result: https://gist.github.com/ixzo/4750663

Tests

Now it’s time to test our API. In your Node.js command prompt launch the server:

$> node rest_api.js

At the beginning the task list should be empty:

$ curl -i http://localhost:8080/tasks/
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 17
Date: Sun, 10 Feb 2013 12:37:45 GMT
Connection: keep-alive

{
  "tasks": []
}

Create a task

Let’s insert a new one with default values:

$ curl -i -X POST http://localhost:8080/tasks --data '{}' -H "Content-Type: application/json"
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/plain
Content-Length: 2
Date: Sun, 10 Feb 2013 12:39:13 GMT
Connection: keep-alive

OK

Now the task list should contain one task with an id equal to 1:

$ curl -i http://localhost:8080/tasks/
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 146
Date: Sun, 10 Feb 2013 12:40:38 GMT
Connection: keep-alive

{
  "tasks": [
    {
      "taskId": 1,
      "title": "Default title",
      "description": "Default Description",
      "status": "not completed"
    }
  ]
}

$ curl -i http://localhost:8080/tasks/1
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 101
Date: Sun, 10 Feb 2013 12:40:15 GMT
Connection: keep-alive

{
  "taskId": 1,
  "title": "Default title",
  "description": "Default Description",
  "status": "not completed"
}

Update a task

Let’s update the description of the task by setting its value to « blabla » instead of « Default Description »:

$ curl -i -X PUT http://localhost:8080/tasks/1 --data '{"description":"blabla"}' -H "Content-Type: application/json"
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/plain
Content-Length: 2
Date: Sun, 10 Feb 2013 12:42:39 GMT
Connection: keep-alive

OK

Let’s retrieve the task to see if the description is correctly updated:

$ curl -i http://localhost:8080/tasks/1
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 101
Date: Sun, 10 Feb 2013 12:40:15 GMT
Connection: keep-alive

{
  "taskId": 1,
  "title": "Default title",
  "description": "blabla",
  "status": "not completed"
}

Perfect!

Delete a task

Now it's time to test the deletion of tasks:

$ curl -i -X DELETE http://localhost:8080/tasks/1
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/plain
Content-Length: 2
Date: Sun, 10 Feb 2013 12:43:31 GMT
Connection: keep-alive

OK

‘/tasks/1’ should return a 404 not found:

$ curl -i http://localhost:8080/tasks/1
HTTP/1.1 404 Not Found
X-Powered-By: Express
Content-Type: text/plain
Content-Length: 9
Date: Sun, 10 Feb 2013 12:44:18 GMT
Connection: keep-alive

Not Found

The task list should be empty now:

$ curl -i http://localhost:8080/tasks/
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 17
Date: Sun, 10 Feb 2013 12:44:33 GMT
Connection: keep-alive

{
  "tasks": []
}

Conclusion

In this post I have tried to show you how you can quickly and easily create a simple REST API. It's absolutely possible to extend this example to handle authentication or to support real persistence technologies such as MongoDB (NoSQL) or a traditional SQL database. You can take a look at the resources for other posts related to the same topic.

Resources

A basic web server with Node.JS and Express

Here is a memo on how to create a simple web server delivering static content with Node.js and Express. The content can be HTML files, JS, CSS or any other kind of file that doesn't require server-side processing (as PHP, C# or Java pages do).

But why Node.js instead of Apache or IIS? As you are going to see in the few lines below, creating a web server with Node.js takes only a few lines of code and can be up and running in 30 seconds (max!). When your purpose is only to test things, the configuration of a « traditional » web server can be overkill.

A couple of days ago I played with the WebSocket API (HTML5 rocks!) by creating a whiteboard that can be used by several users at the same time (see it on Github). With Node.js and Express I could quickly set up a web server that delivers the HTML, CSS and JavaScript files to my test users over the network. And everyone who wants to pull the source code and test the project can easily run the embedded web server directly.

Setup

  1. The first thing you have to do is install Node.js if it isn't already installed. Just download it from the website (see here), click, click, next and it's done!
  2. Open the Node.js command prompt, go to the directory in which you want to create the server and install Express (the middleware that we use to create the web server):
    $> npm install express
    
  3. Create the web server:
    var express = require('express');
    var app = express();
    app.configure(function () {
        app.use(
            "/", //the URL throught which you want to access to you static content
            express.static(__dirname) //where your static content is located in your filesystem
        );
    });
    app.listen(3000); //the port you want to use
    
  4. Execute your Web Server:
    $> node server.js
    
  5. Your web server is reachable through http://localhost:3000/

Resources

Memo: Design Principles in Object Oriented Programming

What are Design Principles?

Design principles are guidelines for producing software that is easier to test, maintain and extend.

SOLID

SOLID is a mnemonic acronym that stands for:

Single Responsibility Principle

Says that a component should focus on a single responsibility: the one for which it exists. A component can be different things, such as a use case, a package, a module or a class. Of course, the lower you go through the layers, the more your classes will focus on specific things.

Examples

  1. "Model-View-Controller" is an example of this principle. The pattern promotes the separation of concerns through multiple layers, each one responsible for a specific concern.
  2. Another simple example of this principle is to avoid instantiating new objects inside your classes, because each time you use new you create a static dependency and add to your class the responsibility of building its own dependencies.

Open/Closed Principle

Says that a component should be closed for modification but open for extension. This means that if you have to change the internal workings of a class, you should be able to create a new one that inherits from the one you want to modify and apply your changes there. You should never change the code and the internal workings of your existing components.

If you apply the SOLID principles you will be able to replace the old implementation with the new one without any impact on the solution.

Example

[Figure: class diagram of the example application]

Here is the class diagram for a simple application that reads a file in one format and writes what it reads into another format; concretely, the application reads XML files and produces JSON files. If you want to customize the internal workings of "XmlReader" you should absolutely avoid changing its code. Instead, create a new class ("XmlReaderNg") that inherits from "XmlReader", change the internal workings there, and then tell "FileTransformer" to use your new class instead of the old one. A minimal sketch of this idea follows.
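Here is a hedged sketch of that diagram (written in Java here, with invented method names; the shape is what matters, not the parsing itself):

class XmlReader {
    public String read(String path) {
        // original parsing logic, left untouched
        return "<content of " + path + ">";
    }
}

// The customisation lives in an extension, not in a modification of XmlReader.
class XmlReaderNg extends XmlReader {
    @Override
    public String read(String path) {
        return super.read(path) + " (read with the new behaviour)";
    }
}

class FileTransformer {
    private final XmlReader reader;

    // The reader to use is chosen by the caller, so switching to XmlReaderNg
    // requires no change inside FileTransformer.
    FileTransformer(XmlReader reader) {
        this.reader = reader;
    }

    String transformToJson(String path) {
        return "{ \"content\": \"" + reader.read(path) + "\" }";
    }
}

public class OpenClosedDemo {
    public static void main(String[] args) {
        System.out.println(new FileTransformer(new XmlReader()).transformToJson("in.xml"));
        System.out.println(new FileTransformer(new XmlReaderNg()).transformToJson("in.xml"));
    }
}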

In conclusion: Always extend, never change.

Liskov Substitution Principle

Says that a base class must be replaceable by any of its child classes without any impact on the solution.

Example

If you take the Open/Closed Principle example and replace "XmlReader" with the new implementation "XmlReaderNg", the replacement shouldn't have any impact on the way "FileTransformer" works.

Interface Segregation Principle

Promotes the use of multiple simple interfaces instead of one big and complex interface.

Examples:

In C#: IDisposable, ISerializable.

Dependency Inversion Principle

Says that:
– high-level modules should never depend on low-level modules; both should depend on abstractions;
– abstractions should not depend upon details; details should depend upon abstractions.

I have already written an article about it: https://erichonorez.wordpress.com/2012/11/24/dependency-inversion/

Inversion Of Control

Inversion of Control is a design principle that helps to create loosely coupled applications. In fact, this principle acts as the glue for the SOLID principles.

The aim of Inversion of Control is to provide the dependencies of a component at runtime instead of at compile time. For that purpose a "dependency provider" is used. There are two different implementations of this principle, and the main difference between the two is the way you use them.

Service Locator

The component is responsible for retrieving its dependencies from the service locator.

Example

Here is a very simple implementation of a repository class that retrieves tasks from the persistence layer.

public class TasksRepository : ITaskRepository
{
    private ILogger _logger;

    public TasksRepository(IServiceLocator serviceLocator)
    {
        // The component asks the locator for its own dependencies.
        _logger = serviceLocator.GetService<ILogger>();
    }
}

This class uses an ILogger object to log what it does, and it retrieves the ILogger instance by requesting it from the IServiceLocator.

Cons: by using the Service Locator flavour of IoC you create a dependency between your components and the container and its API.

Dependency Injection

The dependency injection container provides (injects) the dependencies to the component. This injection can be done in different ways:
– constructor parameters;
– setting some properties;
– calling methods.

Example

By using a Dependency Injection container your TaskRepository will look more like this:

public class TaskRepository : ITaskRepository
{
    private ILogger _logger;
    public TaskRepository(ILogger logger)
    {
        this._logger = logger;
    }
}

The caller asks the container for an instance of ITaskRepository, and the ILogger is injected automatically.

Resources