C# FluentValidation – why we’re using it

A bit of background

I’ve been working in the C# world for a few months now.  While the code is very similar to Java, the culture around open source could not be more different.  Where open source is business as usual in the Java & JavaScript worlds, it’s very much exceptional circumstances only in the .NET one.  It took a bit of an extra push from me to convince a client they should be looking at Fluent Validation for their validation needs, but I’m fairly certain (and more importantly they’re fairly certain) it was the right idea.

Fluent Validation

Fluent Validation is an Apache 2 licensed library that’s just moved from CodePlex to GitHub.  It enables developers to code validation rules in a fluent manner and apply them to models.  In contrast to many modern validation approaches, where the rules are declaratively mixed up with the models themselves using attributes (or annotations, in Java parlance), Fluent Validation very firmly separates the models from the rules and has you define Validator classes instead.  Like this:
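A minimal sketch of what that looks like – the Customer model is invented for illustration, but AbstractValidator, RuleFor and friends are the real FluentValidation API:

```csharp
using FluentValidation;

// A plain model - no validation attributes in sight.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
    public int Age { get; set; }
}

// The rules live in a separate validator class.
public class CustomerValidator : AbstractValidator<Customer>
{
    public CustomerValidator()
    {
        RuleFor(c => c.Name).NotEmpty().WithMessage("Name is required");
        RuleFor(c => c.Email).EmailAddress();
        RuleFor(c => c.Age).InclusiveBetween(18, 120);
    }
}

// Usage:
// var result = new CustomerValidator().Validate(someCustomer);
// if (!result.IsValid) { /* inspect result.Errors */ }
```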


The scenario

Their requirements are not exactly simple, but not particularly odd either.  They’ve got a large legacy code base that encapsulated a bunch of business logic, validation and data access.  They’re gradually migrating away from it because it’s slow and difficult to maintain, which is a pretty common scenario.  Due to the way the code base has evolved and the varying requirements they’re attempting to fulfill there are now four different ways into the database. To be clear, that’s four different implementations of their domain model and bindings to the underlying SQL schema.   Oh, and of course there are also custom validation rules coded by individual clients in various places and ways, not to mention that many of the rules for one entity depend on data from another. Currently, the default ‘always-apply’ rules are only in one place – the slow and inflexible legacy components.

The solution to all this is to extract the validation rules out of the legacy components and create a modern, fast and re-usable validation library that is data layer agnostic and can support configurable rule-sets.

Why Fluent Validation?

The main reason was flexibility.  We needed something that could attack such a wide variety of use cases that it had to be able to validate anything in any context.  Fluent Validation fitted the bill.

Other reasons included:

  • Speed of development – it’s really easy to work with.
  • Decoupled rules and models – meaning we could write multiple (and possibly conflicting, don’t ask) rule-sets against the same models.  In fact, enabling customers to write their own validators was a pretty trivial exercise (see a future post on this)
  • Speed of execution – the rules are immutable objects meaning you can create them once and cache them.  When the rules are part of the model you’re needlessly setting them up every time.  This is not a problem when working with one simple entity, but quickly becomes a performance problem when you’re working with multiple complex validators over 50,000 rows. Benchmarking showed that we saved considerable time doing this – one import went from 12 minutes to 1 minute when we started caching validators.
  • Ability to work with odd relationships and dependent entities – many of the entities we work with have dependencies on other things in the system, even though they’re not ‘formally’ related.  With fluent validation we could write validators that handle this easily.
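The last two points can be sketched roughly like so.  Every type name here (ValidatorCache, Order, ICustomerRepository) is made up for illustration; only the FluentValidation calls themselves are real:

```csharp
using System;
using System.Collections.Concurrent;
using FluentValidation;

public class Order { public int CustomerId { get; set; } }

// A stand-in for however you get at customer data.
public interface ICustomerRepository { bool Exists(int customerId); }

// Dependent-entity rule: an Order is only valid if its customer exists,
// even though the two aren't 'formally' related in the model.
public class OrderValidator : AbstractValidator<Order>
{
    public OrderValidator(ICustomerRepository customers)
    {
        RuleFor(o => o.CustomerId)
            .Must(id => customers.Exists(id))
            .WithMessage("Order refers to a customer that doesn't exist");
    }
}

// Because validators are immutable once built, construct each one once
// and hand out the cached instance ever after.
public static class ValidatorCache
{
    private static readonly ConcurrentDictionary<Type, IValidator> Cache =
        new ConcurrentDictionary<Type, IValidator>();

    public static IValidator Get(Type modelType, Func<IValidator> build)
    {
        // build() runs at most once per model type.
        return Cache.GetOrAdd(modelType, _ => build());
    }
}
```

Usage would be along the lines of `ValidatorCache.Get(typeof(Order), () => new OrderValidator(repo))`, so the 50,000th row pays nothing for validator construction.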

ORCiD Java Client now supports schema version 1.2!

Thanks to the hard work of Stephan Windmüller (@Stovocor) the ORCiD client library now supports version 1.2 of the ORCiD schema.  He’s also updated the companion ORCiD profile updater web app to use the new library.

Users have probably noticed that GitHub has silently dropped support for the way we were hosting the Maven repository, so we’re now looking to move it to Maven Central.  Not only is that great news for anyone out there who was thinking of upgrading, it’s great news for the library’s future.

Goodbye to the British Library, hello corporate life.

I’ve moved on.  I had a great couple of years working at the library and met a ton of really enthusiastic folk.  The ODIN project came to an end and there was little left for me to do, so I’ve found myself a new workplace more local to home.

I’m a delivery engineer, apparently.  I’ve been here a couple of days and I’m still not sure what I’ll be delivering, but otherwise first impressions are very good.  I’m currently sat in a room full of graduates writing some sort of game and they seem to be enjoying themselves.  There’s a Waitrose round the corner, meaning I get a couple of free coffees a day, and soon I’ll be able to cycle to work.  Working from home has piled on the pounds, so burning it off with cycle power would be fantastic.  It’s a corporate code shop, but seems agile and modern in its approach.  Time will tell, but all my kit is top notch and the support staff and HR have been great.

I’ll continue posting technical blog posts on whatever I’m working on as and when I encounter new stuff myself, which I think is going to be rather a lot in the coming months 🙂

eta: I’m now a Tech lead!

A different view of the British Library – photos

Once you get inside it, the British Library is a beautiful building.  I’ve taken to photographing it and its contents during my lunch break.  Here they are, click on them for the bigger versions.

Check out my flickr stream for more.

The age of geocities – Bubba says HOWDY!!!

There’s a fantastic project out there that’s taking screenshots of random Geocities pages as they would have appeared when they were live.

It’s strangely compelling viewing.

bubba says howdy!!!

Sites like these showcase an important aspect of our cultural heritage.  Back when the internet was called the “information superhighway” and people were still talking about the “digital frontier”, Geocities was where you could stake your claim.

I did it myself once.  I set up my own little homestead that hosted a Java Applet I’d written – a Java version of the classic game Elite.  I couldn’t get the flight engine quite right, but you could trade from Lave to Diso, view things on the radar, travel between systems and view all the original craft in their rotating vector glory.  The site is sadly lost but the memories remain.

Grab yourself a bit of nostalgic indulgence here: http://oneterabyteofkilobyteage.tumblr.com/

More background of the project can be found here: http://rhizome.org/editorial/2014/feb/10/authenticity-access-digital-preservation-geocities/




I didn’t go to university to get myself a job

Chris Bourg has written a great piece about the insidiousness of neo-liberalism and education-as-an-investment over at her blog, check it out here: The Neoliberal Library: Resistance is not futile

I am one of those hopeless idealists who still believes that education is – or should be – a social and public good rather than a private one, and that the goal of higher education should be to promote a healthy democracy and an informed citizenry. And I believe libraries play a critical role in contributing to that public good of an informed citizenry.

In the neoliberal university, students are individual customers, looking to acquire marketable skills. Universities (and teachers and libraries) are evaluated on clearly defined outcomes, and on how efficiently they achieve those outcomes.  Sound familiar?

I’ve managed to make this blog of mine really dull tech stuff and zero politics for a while now, probably out of a desire to keep myself sane.  That said, the almost inevitable (and widely ignored in the press) move in the UK towards a for-profit education system should strike fear into the hearts of anyone who stops to think about it.

There are two sides to this, education-as-an-investment and the for-profit education system, and they go hand in hand.  Ever since the introduction of tuition fees in the UK, the ideology of “investing in your education” has gained a lot of traction here.  The next stop will almost certainly be the ramping up of the for-profit private education market, starting with the lifting of tuition fee caps and ending with a two-tier education system that pumps out workers and perpetuates inherited privilege.

Chris also talks a lot of sense about the ridiculous focus on the personal within politics, the focus on individualism at the expense of the wider movement.  Check out her blog, it’s a refreshing blast and a welcome change from the celebrity twitterati politics of ME that seems to pass for political discourse nowadays.  Sure, it’s important to understand that other people have different experiences than you.  Essential in fact.  But just understanding gets us nowhere and changes nothing.  It’s Acting, Doing.  That’s what we need.

For more info on these topics, and to actually help do something, take a look at campaign group Public University https://twitter.com/public_uni and the UCU http://www.ucu.org.uk/index.cfm

What I do all day – the digital electoral register

This one is for the non-programmers out there.

I’m writing a program that takes electoral registers from around the country and sticks them in the same place.  The reason for this is that it’s a Good Thing To Do.

Hopefully I’ve not lost you yet.

Local authorities are obliged by law to send their unabridged registers to the library in “electronic format”.  What the morons who made this legislation failed to realise (or realised but didn’t care about) is that “electronic format” is a virtually meaningless term.  So we get eleventybillion different formats that just about have this in common: they contain a shed load of names and a shed load of addresses.  We get umpteen types of spreadsheets, masses of Word documents and piles of PDFs.

Names are fun.  Mine can be written like this: Tom Demeranville.  It can also be written as Thomas N. Demeranville.  Or Demeranville, T.  For a human, that’s easy to recognise as the same name and easy to parse.  For a computer it’s a bloody nightmare.
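By way of illustration, here’s a hypothetical sketch of the kind of normalisation involved, collapsing those three variants down to one comparable key.  It’s deliberately naive (real names are far messier than this):

```csharp
using System;

// Hypothetical sketch: reduce common name variants to a "SURNAME, Initial" key.
public static class NameKey
{
    public static string From(string raw)
    {
        raw = raw.Trim();
        string surname, forename;
        if (raw.Contains(","))
        {
            // "Demeranville, T." style: surname first.
            var parts = raw.Split(new[] { ',' }, 2);
            surname = parts[0].Trim();
            forename = parts[1].Trim();
        }
        else
        {
            // "Tom Demeranville" or "Thomas N. Demeranville" style: surname last.
            var parts = raw.Split(' ');
            surname = parts[parts.Length - 1];
            forename = parts[0];
        }
        var initial = char.ToUpperInvariant(forename[0]);
        return surname.ToUpperInvariant() + ", " + initial;
    }
}

// NameKey.From("Tom Demeranville")       -> "DEMERANVILLE, T"
// NameKey.From("Thomas N. Demeranville") -> "DEMERANVILLE, T"
// NameKey.From("Demeranville, T.")       -> "DEMERANVILLE, T"
```

Matching on an initial is lossy by design; that’s the trade-off for Tom and Thomas coming out the same.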

Addresses are fun too.  What exactly does Address1 mean?  Or Address7?  Why did they stick the postcode in Address5 and leave Address1 empty?  Those and other questions I’ve given up on answering but the code works nonetheless.
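A hypothetical sketch of the “where did the postcode go” problem: scan whatever AddressN columns arrive and find the one that looks postcode-shaped (the regex is a simplified take on the UK format, not the full official rules):

```csharp
using System.Text.RegularExpressions;

// Hypothetical sketch: locate the postcode among arbitrary address columns.
public static class AddressSniffer
{
    // Simplified UK postcode shape, e.g. "PE21 8QR" or "W1A 1AA".
    private static readonly Regex Postcode = new Regex(
        @"^[A-Z]{1,2}[0-9][A-Z0-9]?\s*[0-9][A-Z]{2}$", RegexOptions.IgnoreCase);

    // Returns the zero-based index of the postcode column, or -1 if none found.
    public static int PostcodeColumn(string[] addressFields)
    {
        for (var i = 0; i < addressFields.Length; i++)
        {
            var field = (addressFields[i] ?? "").Trim();
            if (Postcode.IsMatch(field)) return i;
        }
        return -1;
    }
}

// Address1 empty, postcode hiding in Address5 (index 4):
// AddressSniffer.PostcodeColumn(new[] { "", "1 High St", "Townsville", "", "PE21 8QR" }) -> 4
```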

So what I do all day is write code that can parse this data into a well-defined format, stick it in a database of some kind and make it available for search.  There’s some, er, oversimplification in that description but that’s about the size of it.

Hopefully this will make the lives of those that currently curate the data and those who want to search it much, much easier.