Here are some of the questions we should consider.
- Should you choose an ORM implementation, or simply let a tool craft your data services to fit your actual and immediate needs?
- Do you need both options in your projects?
- Can an ORM implementation satisfy your requirements for performance, ease of use, and ease of change?
- Can you really avoid SQL at no cost?
- Does an object query language make your life easier or harder?
- Does any ORM implementation address the Object-Relational Impedance Mismatch?
Choosing a technology for your database access layer seems to be a challenging decision for many projects today.
There are many advocates of different ORM implementations and of object query languages of one sort or another. Some claim that one object query language is simpler than another; some ORM implementations support all the major object query languages; and some developers write their own abstraction layer over the object query language of the specific ORM implementation they have chosen. Truly benchmarking an ORM implementation is extremely challenging, since every schema carries its own complexities and application requirements. When it comes to tuning queries, the choices with ORM implementations appear quite limited and require "expert knowledge" to get right, if it can be done at all. Often the compromise of issuing multiple fetches to assemble in-memory domain structures from multiple domain objects degrades performance, but at least it gets the data.
In reality there is no single, easy way to judge performance. We can compare an ORM's performance against the equivalent hand-written SQL, but such a test can turn out to be misleading. Some important performance optimizations (for example, the transaction-level cache) can actually reduce performance in the kind of very trivial benchmarks people typically write. This is more a problem of the triviality of the benchmarks than anything else; the problems that affect ORM performance in nontrivial use cases may not even show up in them.
The biggest problems really come from what we call the "problem of graph navigation": the pattern of data access used by an object-oriented application is a fundamentally inefficient way to access relational data. Often the data required do not map one to one from domain objects to tables; a single model object may be composed of columns from several different tables. In that case an ORM implementation is of relatively little use.
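To make the point concrete, here is a minimal sketch in Java/JDBC (standing in for the ADO.NET code discussed later; the idea is the same). The schema, class, and method names are all hypothetical. A single SQL join populates a model object spanning three tables in one round trip, something a strict one-class-per-table ORM mapping would typically assemble through multiple fetches.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical model object composed of columns from three different tables. */
record OrderSummary(long orderId, String customerName, String productTitle, double lineTotal) {}

/** Hypothetical hand-crafted data service: one round trip, one join, no per-entity fetches. */
class OrderSummaryService {
    private final Connection connection;

    OrderSummaryService(Connection connection) {
        this.connection = connection;
    }

    List<OrderSummary> findSummariesForCustomer(long customerId) throws SQLException {
        String sql = """
            SELECT o.order_id, c.name, p.title, oi.quantity * oi.unit_price AS line_total
            FROM orders o
            JOIN customers c    ON c.customer_id = o.customer_id
            JOIN order_items oi ON oi.order_id   = o.order_id
            JOIN products p     ON p.product_id  = oi.product_id
            WHERE c.customer_id = ?
            """;
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                List<OrderSummary> result = new ArrayList<>();
                while (rs.next()) {
                    result.add(new OrderSummary(
                            rs.getLong("order_id"),
                            rs.getString("name"),
                            rs.getString("title"),
                            rs.getDouble("line_total")));
                }
                return result;
            }
        }
    }
}
```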
What if we had an integrated development studio for building our own data services layer, one that let us edit, modify, and benchmark the entire database access layer in one place, with no ORM engine code underneath, only clean, vendor-independent code and a choice of different drivers? If a hand-crafted yet auto-generated data services solution is available, it is worth considering (a sketch of what that vendor independence might look like follows this paragraph). Sometimes the problem is partly cultural: some developers turn to ORMs because they are uncomfortable with SQL and relational databases. ORM architects themselves state that this is the wrong reason to use an ORM; being comfortable with relational databases and SQL is a prerequisite for using one.
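As a rough illustration of the "choice of drivers" point, the sketch below (again Java/JDBC, all names hypothetical, reusing the OrderSummaryService from the previous sketch) shows that when the data services depend only on the standard database interfaces, switching vendors means changing a driver and a connection URL, not the service code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

/** Hypothetical bootstrap for the data tier. The data services depend only on the
    standard JDBC interfaces, so switching database vendors is a matter of changing
    the driver on the classpath and the connection URL, never the service code. */
class DataTierBootstrap {
    static Connection open(String jdbcUrl, String user, String password) throws SQLException {
        return DriverManager.getConnection(jdbcUrl, user, password);
    }

    public static void main(String[] args) throws SQLException {
        // Example URLs only; the same OrderSummaryService works against either database.
        // open("jdbc:postgresql://localhost/sales", "app", "secret")
        // open("jdbc:sqlserver://localhost;databaseName=sales", "app", "secret")
        try (Connection c = open(args[0], args[1], args[2])) {
            System.out.println(new OrderSummaryService(c).findSummariesForCustomer(42L).size());
        }
    }
}
```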
One of the major reasons that databases succeeded is the pure power of SQL. What if we could harness the power of SQL, ADO.NET, and object-oriented concepts, and bring them together in a powerful visual mapping studio that automatically generates data services code from the visual mappings between database columns and objects? Is that a compromise or an improvement?
Some argue that object-to-query syntax is overly complicated and prefer to deal directly with SQL. It is also true that database administrators can participate in optimizing the SQL if they have easy access to it. The concept of a Mapping is a single compilation unit composed of two sub-units: the SQL statement and the model object, which together translate into a method on the data service class. If each of these units is edited in its own visual editor, tuning should be a breeze.
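Here is roughly how such a Mapping might look once translated into code, as a hedged Java/JDBC sketch with hypothetical names: the SQL statement and the model object are kept as separate, independently editable units, and together they become one method on the data service class.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;

/** Sub-unit 1 of the mapping: the SQL statement, kept where a DBA can read and tune it. */
final class CustomerBalanceSql {
    static final String SELECT_BALANCE = """
        SELECT c.customer_id, c.name, COALESCE(SUM(i.amount), 0) AS balance
        FROM customers c
        LEFT JOIN invoices i ON i.customer_id = c.customer_id
        WHERE c.customer_id = ?
        GROUP BY c.customer_id, c.name
        """;
}

/** Sub-unit 2 of the mapping: the model object the selected columns map onto. */
record CustomerBalance(long customerId, String name, double balance) {}

/** The mapping compiles into a single method on a data service class. */
class CustomerDataService {
    private final Connection connection;

    CustomerDataService(Connection connection) {
        this.connection = connection;
    }

    Optional<CustomerBalance> getCustomerBalance(long customerId) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(CustomerBalanceSql.SELECT_BALANCE)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return Optional.empty();
                }
                return Optional.of(new CustomerBalance(
                        rs.getLong("customer_id"),
                        rs.getString("name"),
                        rs.getDouble("balance")));
            }
        }
    }
}
```

Because the SQL lives in its own unit, a DBA can tune the statement without touching the model object or the generated method's signature.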
Mappings are part of a project, and the project compiles to a data access library. The data tier thus grows with the application's requirements, and the data services themselves can be distributed across different projects.
Application services link only to the data tier libraries they need: simply reference the libraries and access your data. If we can custom-create and fully control the data services, in terms of maintenance and growth, modularity and abstraction, benchmarking and round-trip engineering, in an IDE that focuses solely on these data tier requirements, shouldn't we consider it? Even if we have chosen an ORM rather than a custom layer, could we use such a tool alongside it? Orasis Software introduces a custom data services crafting IDE, the Orasis Mapping Studio 2009, which offers the choice of the best of both worlds.
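A sketch of what that consumption might look like, reusing the hypothetical CustomerDataService from the earlier mapping example: the application service simply takes a dependency on the data tier library and calls it.

```java
/** Hypothetical application service. It references the data tier library and consumes
    its data services; it knows nothing about SQL, drivers, or any ORM underneath. */
class BillingApplicationService {
    private final CustomerDataService customers; // provided by the data tier library

    BillingApplicationService(CustomerDataService customers) {
        this.customers = customers;
    }

    String describeBalance(long customerId) throws java.sql.SQLException {
        return customers.getCustomerBalance(customerId)
                .map(b -> b.name() + " owes " + b.balance())
                .orElse("unknown customer");
    }
}
```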
Relational databases and class hierarchies should not relate in whatever way an ORM implementation dictates; they should relate according to the business requirements. True mapping of objects to SQL should address object graphs at any depth. If an object graph contains members typed as abstract classes or interfaces, the mapping process should allow the implementation classes of those instances to be replaced. Even contained members that are instantiable classes should be replaceable by derived ones.
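The sketch below illustrates the idea of replaceable implementations in an object graph mapping; it is plain Java with hypothetical names, not the studio's actual API. The Customer graph contains a member typed as an interface, and the mapper is parameterized with the implementation class to instantiate for it.

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.function.Function;

/** The contained member of the object graph is declared against an interface... */
interface Address {
    String formatted();
}

/** ...with one possible implementation; a derived or alternative class could be substituted. */
record PostalAddress(String street, String city) implements Address {
    public String formatted() {
        return street + ", " + city;
    }
}

record Customer(long id, String name, Address address) {}

/** Hypothetical graph mapper: the caller decides which implementation class is
    instantiated for the Address member when the graph is built from a result set. */
class CustomerGraphMapper {
    private final Function<ResultSet, Address> addressFactory;

    CustomerGraphMapper(Function<ResultSet, Address> addressFactory) {
        this.addressFactory = addressFactory;
    }

    Customer map(ResultSet rs) throws SQLException {
        return new Customer(rs.getLong("customer_id"), rs.getString("name"), addressFactory.apply(rs));
    }

    /** Default wiring with PostalAddress; swap the lambda to substitute another implementation. */
    static CustomerGraphMapper withPostalAddress() {
        return new CustomerGraphMapper(rs -> {
            try {
                return new PostalAddress(rs.getString("street"), rs.getString("city"));
            } catch (SQLException e) {
                throw new IllegalStateException(e);
            }
        });
    }
}
```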
Data architects and developers can now orchestrate the development of the data tier and guarantee performance. The IDE's benchmarking features, applied to both the data services code and the SQL, provide the ultimate test. The ability to drill into any mapping through a visual editor to modify and enhance it is a powerful feature that deserves serious consideration. No system should go into production before it is tuned. Can a traditional ORM that generates model objects mapped one to one to a database schema satisfy all of a system's data access needs? Can even a more advanced approach such as Entity Framework, which still shares most of the features of a traditional ORM, satisfy all the needs of a truly isolated database tier that targets enterprise performance? The flexibility to easily map database fields to object properties in object graphs, auto-generate custom data services layer code with no introspection code, and validate its performance without writing tedious testing code is now an option.
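For readers who want to picture what such benchmarking amounts to, here is a minimal, purely illustrative harness around the hypothetical CustomerDataService from the earlier sketch; the studio is described as producing this kind of measurement for you, without any hand-written test code.

```java
import java.sql.Connection;
import java.sql.SQLException;

/** Illustrative micro-benchmark of a data service method, shown only to convey the idea
    of measuring the generated code; it is not part of the studio itself. */
class DataServiceBenchmark {
    static void run(Connection connection, long customerId, int iterations) throws SQLException {
        CustomerDataService service = new CustomerDataService(connection);

        // Warm-up pass so driver and statement caches do not skew the numbers.
        for (int i = 0; i < 100; i++) {
            service.getCustomerBalance(customerId);
        }

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            service.getCustomerBalance(customerId);
        }
        long elapsedMicros = (System.nanoTime() - start) / 1_000;
        System.out.printf("%d calls in %d us (%.1f us per call)%n",
                iterations, elapsedMicros, (double) elapsedMicros / iterations);
    }
}
```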