Deleting by ID (or primary key) with Fluent NHibernate

I wasn't a big fan of so-called Fluent Interfaces, but my fondness is growing as I use Fluent NHibernate on an ASP.NET MVC project. When I started on my project I checked out the repository pattern example from Google Code to see how this was being implemented. The supplied repository interface looked like this:

public interface IRepository&lt;T&gt;
{
    T Get(object id);
    void Save(T value);
    void Update(T value);
    void Delete(T value);
    IList&lt;T&gt; GetAll();
}

Looks great, apart from the fact that you need to pass an entity to the Delete() method. If I received an integer ID from an ASP.NET MVC controller, I'd have to retrieve the object first just to pass it to Delete() — an extra SELECT for every delete, which adds up quickly when deleting many objects. This is not very efficient.
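For illustration, here is a sketch of the roundabout pattern the original interface forces; the repository variable and entity names are hypothetical:

```csharp
// With only Delete(T), deleting by primary key means hydrating
// the entity first -- one SELECT plus one DELETE per object.
var product = productRepository.Get(id);  // issues a SELECT
productRepository.Delete(product);        // then issues the DELETE
```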

To remedy this problem I added a Delete() overload to the interface that takes an object ID as its only parameter:

public void Delete(object id)
{
    using (var session = sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        // HQL DML-style delete: no entities are loaded into the session.
        var queryString = string.Format("delete {0} where id = :id", typeof(TEntity));
        session.CreateQuery(queryString)
               .SetParameter("id", id)
               .ExecuteUpdate();
        transaction.Commit();
    }
}


This is a very simple method that generates more efficient SQL under the covers: a single DELETE statement, rather than a SELECT to load the entity followed by a DELETE.
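With the overload in place, a controller action can delete directly by ID. This is a hedged sketch — the controller action and repository field names are assumptions, not from the original post:

```csharp
// Hypothetical ASP.NET MVC controller action using the id-based
// overload: one DELETE statement, no entity load.
public ActionResult Delete(int id)
{
    productRepository.Delete(id);
    return RedirectToAction("Index");
}
```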

Tagged with databases, fluent-nhibernate, nhibernate and orm.

Data Access Pain

One of the things that I find most frustrating about working on .NET projects is dealing with relational data sources. My experience with DataSets in the 1.x days was far from positive. They proved too inefficient and difficult to debug. This has changed in 2.0 with the many improvements to the API and the introduction of visualizers in the integrated debugger. I'm still not sold on this solution, but at least things are improving ;)

My preference has been to develop a layer of custom objects which get called from the upper layers of the application. This is very flexible and easy to debug. In addition, you can create these objects without having any back end developed, so prototyping is simpler. To be fair, this can be a bit time consuming, and I have tried to augment it with code generation using CodeSmith. Working this way lets me deal with objects in a fashion native to the .NET platform, take advantage of IntelliSense and simplify unit testing.

I'm looking at two other solutions - LLBLGen Pro and NHibernate. LLBLGen seems better suited to my needs at present since it has a better user experience. Both of these tools map generated objects to the tables in the database, so you can avoid switching back and forth between programming models. Complex queries are expressed using custom syntax, and this is where the story sours for NHibernate and, to a lesser extent, LLBLGen. LLBLGen makes it simple to wrap existing stored procedures, so this is potentially useful when the SQL gets complex. Ideally I'd like to rid myself of the relational model and SQL altogether, but I guess we're going to have to live with it forever.

On this topic it's worth reading a paper by Ted Neward on the object-relational divide and the various technologies that have been developed to bridge it. The paper was written for MSDN, so it covers the LINQ technology that will likely be part of C# 3.0.

Tagged with databases, llblgen and nhibernate.