Published on 10/30/2019
We’re completing the blog post listing in this part. Previously we built down to the domain layer. Now we need to go all the way down the stack to the database and get what we need.
This time bottom up
In the spirit of demonstration we’ll do this part from the bottom up. Previously we defined the Blogs table, so now we need a way to read all the titles from it. Working bottom-up is sometimes hard if you don’t understand what you’re working towards, so it is really important that your domain be defined and designed first. Fortunately, we already know we need the title and the key of a blog in order to build a URL representation of it, and now we just need some representation of that list. We can easily write a LINQ query and stick the result into some representative entity. Because this entity doesn’t necessarily represent a table entry exactly, I will put it under a different namespace, one specific to queries.
```csharp
namespace helloserve.com.Database.Queries
{
    public class BlogListing
    {
        public string Key { get; set; }
        public string Title { get; set; }
    }
}
```
In our repository implementation, we now need to construct this query. But of course we start with a unit test.
```csharp
[TestMethod]
public async Task ReadListing_Verify()
{
    //arrange
    string key = "key";
    string title = "title";
    var blogs = new List<Database.Entities.Blog>()
    {
        new Database.Entities.Blog() { Key = "key1", Title = "title1" },
        new Database.Entities.Blog() { Key = key, Title = title },
        new Database.Entities.Blog() { Key = "key2", Title = "title2" }
    };
    _contextMock.SetupGet(x => x.Blogs)
        .Returns(blogs.AsDbSetMock().Object);

    //act
    IEnumerable<BlogListing> listings = await Repository.ReadListings();

    //assert
    Assert.IsTrue(listings.Count() == 3);
    Assert.IsTrue(listings.Any(x => x.Key == key));
    Assert.IsTrue(listings.Any(x => x.Title == title));
}
```
This looks very similar to our other repository test, but with a few changes. Also, is that the same BlogListing query object? It gets returned by the ReadListings method on the repository, which is also new. BlogListing here is the domain representation of the query object we created. Remember that our unit test operates only on IBlogDatabaseAdaptor, so it can only know about the abstractions and types of the domain layer. What happens inside the implementation of that adaptor (our repository) is not known to the unit test. Remember also that the primary purpose of the adaptor is to map to and from the domain models. This time it’s easy: our domain model is essentially equivalent to our query result object. But it is important to understand the distinction between where they are defined and what they are for. This is why the bottom-up approach is more difficult to grasp and apply, both for the developer and the code reviewer. Also, to prevent code duplication in my domain models, I created the listing model as a base type of the blog model.
```csharp
public class BlogListing
{
    public string Title { get; set; }
    public string Key { get; set; }
}

public class Blog : BlogListing
{
    public DateTime? PublishDate { get; set; }
    public bool IsPublished { get; set; }
    public string Content { get; set; }
}
```
Red
Naturally our test fails, because we only have the default implementation of the ReadListings method that code completion created for us. As we fix that, we see that we need a few things: we now need to reference System.Linq, and we also need a new mapper, because the repository implementation doesn’t yet know how to map from the new query object to the domain model. Let’s consider the mapper first.
```csharp
CreateMap<Database.Queries.BlogListing, Domain.Models.BlogListing>();
```
The mapping conventions help us out here and we don’t need to specify anything explicitly, so we don’t need to write another test; we already have one that does the general validation checking for us, which is sufficient. Let’s move on to the extension method that gives us access to this new mapper.
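For reference, that general validation test typically looks something like the sketch below. The `Config.MapperConfiguration` wrapper name is an assumption for illustration; `AssertConfigurationIsValid` is AutoMapper's built-in check that every destination member of every registered map can be resolved.

```csharp
// Hypothetical sketch of the general mapper validation test mentioned above.
// It fails if any registered map has an unmapped destination member.
[TestMethod]
public void MapperConfiguration_Verify()
{
    Config.MapperConfiguration.AssertConfigurationIsValid();
}
```

Because this one test validates every map in the configuration, adding a new conventional map like the one above needs no dedicated test of its own.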
```csharp
public static IEnumerable<Domain.Models.BlogListing> Map(this IEnumerable<Database.Queries.BlogListing> collection)
{
    return Config.Mapper.Map<IEnumerable<Domain.Models.BlogListing>>(collection);
}
```
The AutoMapper library is clever enough to understand that it can apply that mapping to a collection of the source type and give us back a collection of the target type. Our existing repository unit test will cover this method, so let’s finish the repository implementation and run our test again.
```csharp
public async Task<IEnumerable<BlogListing>> ReadListings()
{
    return (await _context.Blogs
        .Select(x => new Database.Queries.BlogListing()
        {
            Key = x.Key,
            Title = x.Title
        })
        .ToListAsync())
        .Map();
}
```
Now it gets fairly technical, and I’m only including this for those who also read through the commits of this project on my GitHub. Feel free to skip to the next section.
At this point I expected my tests to pass, but there seems to be a problem with the mocking code for DbSet that I used before. Previously we operated on the DbSet directly, executing SingleOrDefaultAsync. This time we’re first doing a Select and then a ToListAsync, and it appears that my mocking code doesn’t support the underlying .NET Core 3.0 preview code anymore (I have used this mocking code with great success on 2.2 and earlier). The test now fails with an error mentioning a “source IQueryable”.
I’m not sure which “source IQueryable” the error is referring to. The Blogs property is a DbSet, which is also an IQueryable, and we successfully use it in another unit test. But Select also produces an IQueryable, and I suspect that this is where the problem lies. I don’t control that result, and I cannot mock Select because it’s an extension method.
It’s apparent that some code changed on the IAsyncQueryProvider interface in the 3.0 preview. This interface lives in an internal namespace, so there was never any guarantee that it would remain stable anyway, and it is not the first time I’ve run into this problem. So I have to move on from this and start testing using the InMemory provider after all.
Now with InMemory!
The changes required weren’t substantial. In fact, I could delete some code, and that is always a good thing. I no longer need the interface of my context, since I’m not mocking it with the extension methods anymore at all. However, I am still interested in injecting the context through the constructor of the various adaptor implementations, so that bit looks a little different in the unit test class. And then some unit test changes were also required. In particular:
```csharp
private DbContextOptions<helloserveContext> _options;

[TestInitialize]
public void Initialize()
{
    _services.AddTransient(sp => new helloserveContext(_options)); //sets up for injection
    _services.AddRepositories();
    _serviceProvider = _services.BuildServiceProvider();
}

private void ArrangeDatabase(string name)
{
    _options = new DbContextOptionsBuilder<helloserveContext>()
        .UseInMemoryDatabase(name)
        .Options;
}

[TestMethod]
public async Task ReadListing_Verify()
{
    //arrange
    ArrangeDatabase("ReadListing_Verify");
    string key = "key";
    string title = "title";
    var blogs = new List<Database.Entities.Blog>()
    {
        new Database.Entities.Blog() { Key = "key1", Title = "title1" },
        new Database.Entities.Blog() { Key = key, Title = title },
        new Database.Entities.Blog() { Key = "key2", Title = "title2" }
    };
    using (var context = new helloserveContext(_options))
    {
        await context.Blogs.AddRangeAsync(blogs);
        await context.SaveChangesAsync();
    }

    //act
    IEnumerable<BlogListing> listings = await Repository.ReadListings();

    //assert
    Assert.IsTrue(listings.Count() == 3);
    Assert.IsTrue(listings.Any(x => x.Key == key));
    Assert.IsTrue(listings.Any(x => x.Title == title));
}
```
Each test still has the ability to configure its own version of the database context, which is only resolved at runtime by the Func that provides an instance of the context to the service provider. The only real difference in the unit tests, of course, is that I have to set up a context instance manually and put whatever data I want into it first, as opposed to setting up the mock from before to return that set.
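To make the injection side concrete, the adaptor implementation itself keeps the same shape; only the registration changed. A minimal sketch (the class name and members here are assumptions for illustration):

```csharp
// Hypothetical sketch: the repository still receives the concrete context
// through its constructor. At resolve time, the AddTransient factory in the
// test class news up a helloserveContext with whatever _options the test
// configured via ArrangeDatabase.
public class BlogRepository : IBlogDatabaseAdaptor
{
    private readonly helloserveContext _context;

    public BlogRepository(helloserveContext context)
    {
        _context = context;
    }

    // ...ReadListings and the other members, unchanged...
}
```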
Bridging the gap
With this we’ve completed our build from the bottom up to the middle tier, or domain layer. Previously we mocked up a listing view, which was already hooked up to the domain layer. All we need to do now is close that gap by building the domain layer bit that will serve the blog listing data to the requesting front-end. So let’s write a unit test to verify that.
```csharp
[TestMethod]
public async Task ReadAll_Verify()
{
    //arrange
    List<BlogListing> listing = new List<BlogListing>()
    {
        new BlogListing() { Key = "key1", Title = "title1" },
        new BlogListing() { Key = "key2", Title = "title2" },
        new BlogListing() { Key = "key3", Title = "title3" },
    };
    _dbAdaptorMock.Setup(x => x.ReadListings())
        .ReturnsAsync(listing);

    //act
    IEnumerable<Blog> result = await Service.ReadAll();

    //assert
    Assert.AreEqual(result, listing);
    Assert.AreEqual(3, result.Count());
}
```
This is pretty straightforward. No processing or transformation is done on the data coming from the adaptor; it is served straight up by the domain layer. The implementation is thus a basic one-liner.
```csharp
public async Task<IEnumerable<BlogListing>> ReadAll()
{
    return await _dbAdaptor.ReadListings();
}
```
Except that previously I typed the return type of this method as Blog, and we introduced BlogListing in this part as a base class. We’ll have to change the declaration and work our way up to make sure it all still works. This required quite a few changes:
- Retype the interface’s method declaration, and all subsequent references.
- Because BlogViewItem on the Blazor page has a PublishDate property, I had to include PublishDate on the query object and the domain model.
- Adjust the unit tests to verify null publish date values from the query: users should only see published blogs, while admins should see all blogs.
- Add a null check to the Blazor page for the PublishDate property.
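The null-check step could be handled with a small helper like the sketch below (the method name and the fallback text are assumptions for illustration; the actual page markup may inline this instead):

```csharp
// Hypothetical helper for the Blazor page: format the publish date,
// falling back to a placeholder when PublishDate is null (unpublished).
string FormatPublishDate(DateTime? publishDate) =>
    publishDate?.ToShortDateString() ?? "Not published";
```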
Of course none of this is final, and I will need to do a bit of work to separate the listing of blogs for users from that for admins. But this is an illustration of what can happen when a miscommunication or misunderstanding occurs between those working top-down and those working bottom-up.
And then you end up in this sort of situation.
Why does this happen?
On the front-end side, the mapper simply took Blog from the domain and mapped it to BlogViewItem. This target type (or view model) only has the properties needed to view the item as part of a list view. The mapper is intelligent enough not to bother mapping properties that don’t exist on the target type. So the front-end team simply did what they thought was best, perhaps assuming that they would be getting a list of all the blogs and extracting what they needed from that list.
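As a hedged illustration of that front-end mapping (the exact shape of BlogViewItem is an assumption here):

```csharp
// Hypothetical view model: only what the list view displays.
public class BlogViewItem
{
    public string Key { get; set; }
    public string Title { get; set; }
    public DateTime? PublishDate { get; set; }
}

// In the mapper profile: AutoMapper matches properties by name and silently
// skips source members (Content, IsPublished) with no counterpart on the target.
CreateMap<Domain.Models.Blog, BlogViewItem>();
```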
Meanwhile the back-end team figured, correctly, that it’s inefficient to read data from the database that will not be used. It adds additional overhead to the database engine to stream all the data over to the repository when all we want in this case is the key and title. So they did the work to separate out the BlogListing data requirement as a model from the full Blog requirement. And then they had to go back for rework to include the PublishDate property too, because they were misaligned.
SOLID: S for Single Responsibility
It is entirely possible for the back-end team to load only the requisite data into instances of Blog and leave the “unused” properties null or at their default values. However, that sort of behavior is utterly ambiguous, and it will make the code extremely hard to maintain at a later stage, when you can no longer be sure when the state of some specific property is valid and when it isn’t.
Clean code has us do one thing with one piece of code, and the same goes for models. Long ago, in parts 2 and 3, we covered why we don’t add code to the Blog model: it’s a simple representation, and doing so would mix immutable properties with deterministic functionality. The same applies to hydrating the model. Hydrate the entire model for the purpose it was meant for, or create a separate model with a different name that has a different purpose. Specifically, we will use the Blog model to represent an entire blog entry, not just some part of the blog entry.
Simplicity over performance
The final part to discuss here is the aspect of keeping your code simple (as opposed to clean), and when to sacrifice performance for it. As I mentioned before, it’s often not a good idea to read and load data that you won’t use. But as always, it depends. My personal blog site has maybe a hundred posts in total and serves around five users a month. These numbers are tiny, and the load might be acceptable in the short to medium term, so it would have made sense to simply keep reading out entire blog posts and only map the data I need for the list view. In other situations, blog posts can become pretty big, and loading a long list of them for many users could take a while and become a bottleneck. Then you should absolutely load (and cache!) only what you need to build the display or API endpoint. Another example: when loading a customer list, omit details like addresses, which cannot effectively be displayed in a list view or report.
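To make the trade-off concrete, here is a sketch of the two approaches side by side, assuming the same context and models used earlier in this part:

```csharp
// Simple: load full entities, including large columns like Content,
// then map down in memory to what the list view needs.
var fullBlogs = await _context.Blogs.ToListAsync();
var listings = fullBlogs
    .Select(b => new BlogListing { Key = b.Key, Title = b.Title });

// Lean: project inside the query, so the database only streams Key and
// Title over the wire. More types to maintain, less data movement.
var projected = await _context.Blogs
    .Select(b => new BlogListing { Key = b.Key, Title = b.Title })
    .ToListAsync();
```

At five users a month the first version is perfectly fine; under real load, the second one earns its extra model.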
So often, keeping it simple and only sticking to your entities and basic domain models is not that simple! As your system evolves, you’ll find yourself introducing various query entities and other specialized model objects to streamline your maintenance overhead and keep performance in check.
Conclusion
In this part we finalized the listing page by implementing the adaptor to the database to fulfill the read request and linking it through to the domain layer, and on to the presentation layer. We saw how top-down and bottom-up came together and resulted in additional work because of miscommunication or misunderstanding (or simply not reading the specification). In reality, I completed this part about three weeks after the previous one, and I had simply not paid enough attention to looking back at what I had done before. That worked out well for the article, though, since it took this part in a very insightful direction. And truthfully, this happened to me on a simple side project because I had not pinned down my understanding of the requirement through proper abstractions and definitions inside the domain. I started with the front-end, with the listing screen, because it’s fun and I was experimenting with Blazor. This caused the dependent implementations (presentation, persistence) to misalign, precisely because of this lack of definition in the domain. Essentially, I had my dessert before my proper food. Understanding the requirements, and defining and sharing that understanding through our definitions and abstractions around the domain, is crucial.