Demand Driven Architecture or REST or Linked Data?

I recently listened to David Nolen’s talk from QCon London back in July, Demand Driven Architecture. Before continuing, you should have a listen.

Ready?

I really like a lot of what Mr. Nolen has done and enjoy most of his talks and posts, but I was less enthused with this one. My main hang-up was his misrepresentation of REST and resources: I get the feeling he equates resources with data stores. If you watch the video and then skim that Wikipedia page, you will quickly see that the notion of “joining” two resources is nonsensical. I think Mr. Nolen is really referring to the “pragmatic” definition of REST, meaning POX plus HTTP methods, which would correlate well with data stores.


F# on the Web

I recently presented to the Houston Functional Programmers meetup on using F# for web development. This is an update to my earlier talks of a similar title, based on my experience building a real application at Tachyus. I cover data access with the FSharp.Data.SqlClient type provider and building web APIs with ASP.NET Web API and Frank. You can find the video on YouTube.

Web API and Dynamic Data Access

In .NET Rocks episode 855, Jeff Fritz commented on ASP.NET Web API being somewhat confusing in terms of its intended use. I don’t tend to agree, but I thought I would address one point he made in particular: that Web API is perhaps just another form of repository.

Web API is much more than a repository. And yes, it is indeed a protocol mapping layer. As Uncle Bob once noted, a web or API front end is just a mapping layer and is not really your application.

One could argue, however, that web-API-as-repository is a fairly solid use case; OData is a great example. But I was thinking of another argument I’ve heard for dynamic languages: when you are just going from the web to the database and back, you are not really working with types.

In that spirit, I set out to write a simple Web API using SQL and JSON with no explicit class definitions. You can see the results in this gist:

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlServerCe;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using Dapper;
using Newtonsoft.Json.Linq;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Nito.AsyncEx.AsyncContext.Run(() => MainAsync(args));
        }

        static async Task MainAsync(string[] args)
        {
            var config = new HttpConfiguration();
            WebApiConfig.Register(config);

            // Host the API in memory and point an HttpClient directly at it.
            using (var server = new HttpServer(config))
            using (var client = new HttpClient(server))
            {
                client.BaseAddress = new Uri("http://localhost/");
                var cts = new CancellationTokenSource();

                // POST a new task as raw JSON; "o" gives a round-trippable date format.
                var json = @"{""title"":""Task"",""description"":""The task"",""createdDate"":""" + DateTime.UtcNow.ToString("o") + "\"}";
                var postRequest = new HttpRequestMessage(HttpMethod.Post, "/api/tasks")
                {
                    Content = new StringContent(json, Encoding.UTF8, "application/json")
                };
                var postResponse = await client.SendAsync(postRequest, cts.Token);
                Trace.Assert(postResponse.StatusCode == HttpStatusCode.Created);

                // Follow the Location header to retrieve the task we just created.
                var location = postResponse.Headers.Location.AbsoluteUri;
                var getResponse = await client.GetAsync(location);
                Trace.Assert(getResponse.StatusCode == HttpStatusCode.OK);
                var getBody = await getResponse.Content.ReadAsAsync<JObject>();
                dynamic data = getBody;
                Trace.Assert((string)data.title == "Task");
            }

            Console.WriteLine("Press any key to quit.");
            Console.ReadLine();
        }
    }

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }

    public class TasksController : ApiController
    {
        static string _connString = ConfigurationManager.ConnectionStrings["Database1"].ConnectionString;

        public async Task<IEnumerable<dynamic>> GetAll()
        {
            using (var connection = new SqlCeConnection(_connString))
            {
                await connection.OpenAsync();
                IEnumerable<dynamic> tasks = await connection.QueryAsync<dynamic>(
                    "select Id as id, Title as title, Description as description, CreatedDate as createdDate from Tasks;");
                return tasks;
            }
        }

        public async Task<dynamic> Get(int id)
        {
            using (var connection = new SqlCeConnection(_connString))
            {
                await connection.OpenAsync();
                IEnumerable<dynamic> tasks = await connection.QueryAsync<dynamic>(
                    "select Id as id, Title as title, Description as description, CreatedDate as createdDate from Tasks where Id = @id;",
                    new { id = id });
                if (!tasks.Any())
                    throw new HttpResponseException(Request.CreateErrorResponse(HttpStatusCode.NotFound, "Task not found"));
                return tasks.First();
            }
        }

        public async Task<HttpResponseMessage> Post(JObject value)
        {
            // Model bind to a JObject and treat it as dynamic from here on.
            dynamic data = value;
            IEnumerable<int> result;
            using (var connection = new SqlCeConnection(_connString))
            {
                await connection.OpenAsync();
                connection.Execute(
                    "insert into Tasks (Title, Description, CreatedDate) values (@title, @description, @createdDate);",
                    new
                    {
                        // Cast the dynamic fields so Dapper can infer parameter types.
                        title = (string)data.title,
                        description = (string)data.description,
                        createdDate = DateTime.Parse((string)data.createdDate)
                    }
                );
                // Quick (race-prone) way to get the new row's Id; fine for a demo.
                result = await connection.QueryAsync<int>("select max(Id) as id from Tasks;");
            }
            int id = result.First();
            data.id = id;
            var response = Request.CreateResponse(HttpStatusCode.Created, (JObject)data);
            response.Headers.Location = new Uri(Url.Link("DefaultApi", new { controller = "Tasks", id = id }));
            return response;
        }
    }
}

I used Dapper to simplify the data access, though I could just as well have used Massive, PetaPoco, or Simple.Data. Mostly I wanted to write SQL, so I went with Dapper.

I also model-bind the request body to a JObject, which I immediately cast to dynamic. I use an anonymous object to supply the values for the parameters in the SQL statements, casting the fields from the dynamic object so Dapper can infer their types.

All in all, I kinda like this. Everything is tiny, and I can work directly with SQL, which doesn’t bother me one bit. I have a single class to manage both my data access and my API translation, but the ultimate goal of each method is still small: retrieve data and present it over HTTP. That violates SRP, but I don’t mind in this case. The code above is not very testable, but with an API like this I’d be more inclined to do top-level testing anyway. It’s just not deep enough to require a lot of very specific, low-level testing, IMHO.

Also, note again that this is just retrieving data and pushing it up through an API. This is not rocket science. An F# type provider over SQL would give a good enough sanity check. Why bother generating a bunch of types?

Which brings up another point for another post: what would I do if I needed to add some logic to process or transform the data I retrieved?

As a future exercise, I want to see what it would take to cap this with the Web API OData extensions. That could be fun.
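
For what it’s worth, here is a rough sketch of what that might look like, assuming the Microsoft.AspNet.WebApi.OData package and its [Queryable] attribute. OData query composition needs a static element type, so the dynamic rows would likely have to give way to a small record class (TaskRecord below is my invention):

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlServerCe;
using System.Linq;
using System.Web.Http;
using Dapper;

// A static shape for OData to reflect over; this type is my invention,
// not part of the gist above.
public class TaskRecord
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime CreatedDate { get; set; }
}

public class QueryableTasksController : ApiController
{
    static string _connString =
        ConfigurationManager.ConnectionStrings["Database1"].ConnectionString;

    // [Queryable] (from the Web API OData package) applies $filter, $orderby,
    // $top, and $skip from the query string to the returned IQueryable.
    // Depending on the package version, config.EnableQuerySupport() may also
    // be required in WebApiConfig.Register.
    [Queryable]
    public IQueryable<TaskRecord> Get()
    {
        using (var connection = new SqlCeConnection(_connString))
        {
            // Dapper buffers results by default, so disposing the connection
            // before the queryable is consumed is safe. Note the OData query
            // composes over objects in memory here, not in SQL.
            IEnumerable<TaskRecord> tasks = connection.Query<TaskRecord>(
                "select Id, Title, Description, CreatedDate from Tasks;");
            return tasks.AsQueryable();
        }
    }
}

Of course, needing a static TaskRecord type somewhat undercuts the no-explicit-classes experiment above, which is part of what would make the exercise interesting.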

LINQ to SQL and Entity Framework as Internal Object Databases

I’ve used and/or tried LINQ to SQL, Entity Framework, and Fluent NHibernate, and I can now say I understand the differences as expressed in the ADO.NET Entity Framework Vote of No Confidence. Yet I still appreciate what the former two products from Microsoft offer. Well, I like LINQ to SQL, anyway. After spending four hours today trying to create a simple example with Entity Framework and getting nowhere with many-to-many mappings despite several blogs’ assistance, I finally gave up. I think the problem with LINQ to SQL and Entity Framework is not their usefulness but Microsoft’s approach of marketing them as object-relational mapping technologies.

An object-relational mapping technology generally takes an object and maps it to a database, not the other way around. At least, I think that’s how it started. The abundance of MVC frameworks using the Active Record pattern seems to have changed that recently, with generators creating models from database tables, though many of these also create the tables in the database from the object definition. Nevertheless, I’d disagree with Microsoft that LINQ to SQL and Entity Framework are best described as ORM technologies, though they can perform that role as well.

LINQ to SQL and Entity Framework provide a language-integrated object database against which to create applications. This is huge! Let that sink in a bit. Now, I find nothing wrong with that approach. In fact, it’s quite nice! The trouble is that developers think, “Wow, I’ve got all these great objects ready to use!” Not so fast. You have entities that represent rows in tables, not business objects. Yes, LINQ to SQL and Entity Framework provide means of modifying those classes to mimic more class-like behavior (and that can indeed be a great benefit to easing domain model or active record development), but those entities really should not be used for anything other than database records.

Treating these entities as database records greatly simplifies any data mapping you have to do between your domain objects and your database, and you can write everything in your OO language of choice. If you want something more automatic, you might try an object-to-object mapper (e.g. NBear, though I haven’t tried it myself, and I can’t imagine object-to-object mapping would be that difficult).
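
To make the mapping concrete, here is a minimal sketch of what I mean by hand; the TaskEntity, TaskItem, and TaskMapper types are hypothetical, not anything generated by either tool:

using System;

// Hypothetical generated entity: one row in a Tasks table, nothing more.
public class TaskEntity
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime CreatedDate { get; set; }
}

// Hand-rolled domain object with real invariants and behavior.
public class TaskItem
{
    public TaskItem(string title, DateTime createdDate)
    {
        if (string.IsNullOrEmpty(title))
            throw new ArgumentException("A task requires a title.", "title");
        Title = title;
        CreatedDate = createdDate;
    }

    public string Title { get; private set; }
    public DateTime CreatedDate { get; private set; }

    public bool IsStale(DateTime now)
    {
        return (now - CreatedDate).TotalDays > 30;
    }
}

// The mapping layer: translate database records to domain objects and back.
public static class TaskMapper
{
    public static TaskItem ToDomain(TaskEntity entity)
    {
        return new TaskItem(entity.Title, entity.CreatedDate);
    }

    public static TaskEntity ToEntity(TaskItem item)
    {
        return new TaskEntity { Title = item.Title, CreatedDate = item.CreatedDate };
    }
}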

As a final analysis, in case you care, I really like LINQ to SQL’s defaults. It’s super simple to get started and use, though it’s only for SQL Server. Entity Framework… I am just not a fan. If I can stay away, I will. Maybe someone will show me how to configure it so that it works for me, but so far it’s a FAIL. Also, keep tabs on DbLinq, an open source project that mimics LINQ to SQL for SQL Server and a number of other database technologies and should work on Mono. Of course, NHibernate is great for those who would rather connect to a real database, and I found Fluent NHibernate to be a great tool. I love its fluent interface for mapping to the database and its AutoMapping functionality. However, having to make every member virtual (so NHibernate can generate its lazy-loading proxies) annoyed me, and it makes me question how so many people can prefer NHibernate for persistence ignorance when it so obviously requires that persistence detail. (I get that it’s a small sacrifice, but I wouldn’t code that way normally, so I am constantly reminded that I’m connecting to a database through NHibernate.)
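
For example, even the plainest mapped class ends up looking like this (a hypothetical illustration, not code from my project):

// NHibernate builds runtime proxy subclasses to support lazy loading,
// so every mapped member must be declared virtual, even on a "plain" class.
public class Task
{
    public virtual int Id { get; set; }
    public virtual string Title { get; set; }
    public virtual string Description { get; set; }
}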

SQL Server Iterations with the Tally Table

SQL Server Central recently posted an article by Jeff Moden on using a tally table to iterate through database rows rather than a WHILE loop or a CURSOR. The article defines tally tables and explains how to create and use them. Jeff even shows performance comparisons against a WHILE loop.

For those unfamiliar, a tally table is simply a table containing a single column of sequential integers. It is most often used to parse comma-delimited data, but it can drive other iterative work as well. Because a tally-table “iteration” is really a set-based operation, it consistently keeps CPU usage and row counts low.
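
As a rough sketch of the technique in the same C#-plus-Dapper style as above (this is not Jeff’s code; the dbo.Tally name, the row count, and the connection string are assumptions, and the inline declare requires SQL Server 2008 or later), here is the classic set-based string split:

using System;
using System.Data.SqlClient;
using Dapper;

class TallySplitExample
{
    static void Main()
    {
        const string connString = @"Data Source=.;Initial Catalog=Scratch;Integrated Security=True";
        using (var connection = new SqlConnection(connString))
        {
            connection.Open();

            // One common way to build a tally table of sequential integers.
            connection.Execute(@"
                if object_id('dbo.Tally') is null
                    select top 11000 identity(int, 1, 1) as N
                    into dbo.Tally
                    from sys.all_columns ac1 cross join sys.all_columns ac2;");

            // The classic set-based split: every comma in the padded string
            // marks the start of the next element. No WHILE loop, no CURSOR.
            var items = connection.Query<string>(@"
                declare @padded varchar(8000) = ',' + @csv + ',';
                select substring(@padded, t.N + 1, charindex(',', @padded, t.N + 1) - t.N - 1)
                from dbo.Tally t
                where t.N < len(@padded)
                  and substring(@padded, t.N, 1) = ',';",
                new { csv = "10,20,30" });

            Console.WriteLine(string.Join(" | ", items)); // 10 | 20 | 30
        }
    }
}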

The next time you find yourself needing to iterate through table rows, try using a tally table.