Here is my technical diary for week 51 of 2015.
At work I started to write a document explaining how we can simplify our complex validation rules by using the javax.validation API properly. Today we already rely on this API to validate objects in our multi-tenant application, but because we use it at the wrong layer of the architecture and/or do not use the full power of the API, our validation is really hard to maintain and evolve.
What I learned.
We should avoid conditional logic in validations and promote self-documenting code. In a multi-tenant application, validation rules on an object can depend on the tenant of the user performing the action, or on the tenant the object itself belongs to. The worst scenario is when these differences are hidden in the code. In his book on Domain-Driven Design, Eric Evans promotes self-describing code by making implicit rules explicit.
One application of that idea is to represent each variant by a specific type. Instead of having a single object and a lot of logic inside the validation class to figure out which case applies to the given object, the goal is to have one class per variant.
To avoid repeating yourself by writing several near-identical objects where only one part varies, we can use inheritance and interfaces. In that case, annotations on overridden methods in the subclass are applied cumulatively. Validations declared on interfaces can be used as traits.
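The interface half of this can be sketched without pulling in the Bean Validation jar. The `@NonBlank` annotation, the `Named` interface and the `Customer` class below are all hypothetical stand-ins; the point is that a constraint declared on an interface getter is visible by reflection from any implementing class, which is the mechanism that lets a validation engine treat interfaces as traits.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical stand-in for a Bean Validation constraint annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface NonBlank {}

// The interface acts as a validation "trait": any type implementing it
// carries the constraint declared on getName().
interface Named {
    @NonBlank
    String getName();
}

class Customer implements Named {
    private final String name;
    Customer(String name) { this.name = name; }
    @Override public String getName() { return name; }
}

class TraitDemo {
    // Walks the implemented interfaces, mimicking how a validation engine
    // collects constraints declared at the interface level.
    static boolean isValid(Object target) {
        try {
            for (Class<?> iface : target.getClass().getInterfaces()) {
                for (Method m : iface.getMethods()) {
                    if (m.isAnnotationPresent(NonBlank.class)) {
                        Object value = target.getClass()
                                .getMethod(m.getName()).invoke(target);
                        if (value == null || value.toString().trim().isEmpty()) {
                            return false;
                        }
                    }
                }
            }
            return true;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(TraitDemo.isValid(new Customer("Alice"))); // true
        System.out.println(TraitDemo.isValid(new Customer("   ")));   // false
    }
}
```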
To deal with the case where a single rule can have different implementations depending on the type of the object it applies to, it is possible to pass a list of validators to the constraint annotation via @Constraint(validatedBy = { … }).
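Here is a dependency-free sketch of that dispatch. The `FrenchAddress`/`UkAddress` types, the validators and the tiny engine are all hypothetical; they only mirror what Bean Validation does when @Constraint(validatedBy = …) lists several validator classes, each declaring a different supported type: one logical rule, and the implementation picked by the runtime type of the validated object.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// One logical rule ("valid postal code") with one implementation per type.
interface PostalCodeValidator<T> {
    boolean isValid(T value);
}

class FrenchAddress { final String zip; FrenchAddress(String zip) { this.zip = zip; } }
class UkAddress { final String postcode; UkAddress(String postcode) { this.postcode = postcode; } }

class FrenchAddressValidator implements PostalCodeValidator<FrenchAddress> {
    public boolean isValid(FrenchAddress a) { return a.zip.matches("\\d{5}"); }
}
class UkAddressValidator implements PostalCodeValidator<UkAddress> {
    public boolean isValid(UkAddress a) {
        return a.postcode.matches("[A-Z]{1,2}\\d[A-Z\\d]? ?\\d[A-Z]{2}");
    }
}

class RuleEngine {
    // The engine selects the validator registered for the runtime type,
    // as Bean Validation does when several validators back one constraint.
    private static final Map<Class<?>, PostalCodeValidator<?>> VALIDATORS = new LinkedHashMap<>();
    static {
        VALIDATORS.put(FrenchAddress.class, new FrenchAddressValidator());
        VALIDATORS.put(UkAddress.class, new UkAddressValidator());
    }

    @SuppressWarnings("unchecked")
    static <T> boolean validate(T value) {
        PostalCodeValidator<T> v = (PostalCodeValidator<T>) VALIDATORS.get(value.getClass());
        if (v == null) throw new IllegalArgumentException("No validator for " + value.getClass());
        return v.isValid(value);
    }

    public static void main(String[] args) {
        System.out.println(RuleEngine.validate(new FrenchAddress("75001")));  // true
        System.out.println(RuleEngine.validate(new UkAddress("SW1A 1AA")));   // true
    }
}
```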
If a rule has a single validation process that can be parametrised with some values, those values can be passed as annotation parameters.
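This is how javax.validation's own @Size(min = …, max = …) works. A minimal sketch of the mechanism, with a hypothetical @Range constraint read by reflection:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical parametrised constraint: one validation process,
// configured per use site through annotation attributes.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Range {
    int min();
    int max();
}

class Booking {
    @Range(min = 1, max = 8)
    int seats;
    Booking(int seats) { this.seats = seats; }
}

class RangeChecker {
    // A single validation routine, parametrised by the annotation's values.
    static boolean isValid(Object target) {
        try {
            for (Field f : target.getClass().getDeclaredFields()) {
                Range r = f.getAnnotation(Range.class);
                if (r != null) {
                    f.setAccessible(true);
                    int value = f.getInt(target);
                    if (value < r.min() || value > r.max()) return false;
                }
            }
            return true;
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(RangeChecker.isValid(new Booking(4)));  // true
        System.out.println(RangeChecker.isValid(new Booking(12))); // false
    }
}
```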
Another reflection I had is that we should avoid, as much as possible, annotations on private fields. The validation rules that apply to an object are part of its API; by specifying them on private members of a class we hide them. Declaring the constraints on the public getters instead keeps them visible.
Moreover, I think that javax.validation should only be used in the application layer. It is a mistake to use it in the domain layer, for several reasons:
- The more you rely on infrastructure tools in the domain layer, the harder it is to evolve. These tools evolve and things become deprecated; sometimes there are variations between two implementations of an API. Using this kind of tool introduces risk and fragility into the domain layer.
- Relying on annotations to validate an entity means that it can sit in an invalid state for some time, which IMHO should never be possible. If for some reason the validation does not run as expected, you could persist invalid entities.
So my rule of thumb is to only use javax.validation in the application layer.
In the future I’m planning to write articles with examples to go deeper into these topics.
Ideas for examples:
- Interfaces as traits
- Parameters in annotations
I’m working on a web application in my free time. This web application exposes URLs containing the unique identifiers of accessible resources. Until now the entities are persisted in a MySQL database and the IDs are auto-incremented integers.
What I want is to avoid exposing these IDs, for security reasons. Even if the app does not deal with sensitive data and users are anonymous, I don’t find it very nice that a user can access the entire data set just by incrementing a value in the URL.
So I looked for other ways to generate unique identifiers. One I found is the UUID (Universally Unique IDentifier), a value encoded on 16 bytes (128 bits).
Identifiers like UUIDs can be generated in a decentralised way: you no longer rely on the database to produce IDs, so you can scale better, and you remove a single point of failure for ID generation: your database. http://fr.slideshare.net/davegardnerisme/unique-id-generation-in-distributed-systems
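In Java this needs no library at all: java.util.UUID.randomUUID() produces a version 4 (random) UUID locally, with no coordination between nodes.

```java
import java.util.UUID;

class UuidDemo {
    public static void main(String[] args) {
        // Version 4 (random) UUID: generated locally, no database round trip.
        UUID id = UUID.randomUUID();
        System.out.println(id);            // e.g. 3f2504e0-4f89-41d3-9a0c-0305e82c3301
        System.out.println(id.version());  // 4
        // 128 bits = 16 bytes, exposed by the API as two longs.
        System.out.println(Long.toHexString(id.getMostSignificantBits()));
    }
}
```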
With generated identifiers there is always a theoretical risk of collision, which means the risk of generating the same identifier twice. Even for a service like Twitter, with its number of tweets per second, the probability of drawing the same random 128-bit value twice is in fact astronomically small (see the birthday bound). For my case this risk is practically nonexistent.
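A quick back-of-the-envelope check of that claim, using the birthday approximation (the tweet volume below is my rough assumption, not a measured figure):

```java
class BirthdayBound {
    public static void main(String[] args) {
        // Birthday approximation: probability of at least one collision
        // among n random v-bit values is about n^2 / 2^(v+1).
        double n = 2e11;    // rough assumption: ~ a year of tweets at ~6k/s
        double bits = 122;  // random bits in a version 4 UUID
        double p = (n * n) / Math.pow(2, bits + 1);
        System.out.println(p); // on the order of 1e-15
    }
}
```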
Using these UUIDs as primary keys has a performance impact on MySQL, as described in the following post: https://www.percona.com/blog/2014/12/19/store-uuid-optimized-way/
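One part of the fix is simply storing the UUID as BINARY(16) rather than a 36-character string. A minimal packing sketch in Java (the Percona post goes further and also reorders the timestamp bytes of version 1 UUIDs for index locality, which is not shown here):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

class UuidBytes {
    // Pack a UUID into 16 raw bytes, suitable for a BINARY(16) column:
    // less storage and cheaper comparisons than CHAR(36).
    static byte[] toBytes(UUID id) {
        return ByteBuffer.allocate(16)
                .putLong(id.getMostSignificantBits())
                .putLong(id.getLeastSignificantBits())
                .array();
    }

    static UUID fromBytes(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw);
        return new UUID(buf.getLong(), buf.getLong());
    }

    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        System.out.println(toBytes(id).length);            // 16
        System.out.println(fromBytes(toBytes(id)));        // same UUID back
    }
}
```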
Another pro of UUIDs is that they make it easier to merge data from different sources, simply because even across different systems the risk of ID collisions is low.
There are other projects that generate unique identifiers using decentralised, k-ordered generation. These projects may reduce the risk of collisions:
Another idea I had, but didn’t explore: instead of exposing IDs directly, why not just obfuscate them? In the database we keep auto-incremented integers, but my endpoint services encrypt each ID before sending it and decrypt it when receiving one.
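A minimal sketch of that idea using only the JDK's javax.crypto: the 64-bit ID is padded into a single AES block and encrypted, giving an opaque URL-safe token. This is an illustration, not a vetted design; the 16-byte secret here is a placeholder that would live in server-side configuration, and single-block ECB is only acceptable because each ID occupies exactly one unique block.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

class IdObfuscator {
    private final SecretKeySpec key;

    // Assumption: a 16-byte secret kept server-side (never shipped to clients).
    IdObfuscator(byte[] secret16) {
        this.key = new SecretKeySpec(secret16, "AES");
    }

    // Encrypts the id as one AES block; AES is a permutation, so distinct
    // ids always map to distinct tokens and decoding is exact.
    String encode(long id) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key);
            byte[] block = ByteBuffer.allocate(16).putLong(8, id).array();
            return Base64.getUrlEncoder().withoutPadding().encodeToString(c.doFinal(block));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    long decode(String token) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key);
            byte[] block = c.doFinal(Base64.getUrlDecoder().decode(token));
            return ByteBuffer.wrap(block).getLong(8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        IdObfuscator o = new IdObfuscator("0123456789abcdef".getBytes(StandardCharsets.UTF_8));
        String token = o.encode(42L);
        System.out.println(token);           // opaque URL-safe token
        System.out.println(o.decode(token)); // 42
    }
}
```

The database keeps its small, index-friendly auto-incremented integers; only the representation in URLs changes.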
If you don’t need decentralised id generation – which is my case – it is a possibility.
I don’t remember the exact problem I faced with ID generation, but it is a solution, since you no longer rely on the DB to get an ID.
At work we use Vaadin to build the UIs for our business tools. What I learned is that it is possible to embed a Vaadin application in an HTML page, as described here: https://vaadin.com/book/-/page/advanced.embedding.html
A possible use case would be to facilitate a migration from Vaadin to a modern JS framework.
Super easy to get a MySQL server, but the IP address to use is
In order to create applications without passwords?
Interesting if we don’t want to share production passwords with the team?