In this part, we start to look at the final piece of the Publish verb: pushing content to different social media outlets. This will be the hardest task we’ve done so far, and it requires some upfront research. Understanding the problem that you need to solve is much more important than trying to design a solution (this is also true when you’re all excited about your startup idea). That makes logical sense of course, but I’ve often seen developers barely finish reading the first line of a user story before they are furiously typing up code or drawing database and architecture diagrams, believing they ‘see the big picture’. And sometimes that might be true. Perhaps they’ve solved a similar problem before, or they have lots of experience in a domain industry like fintech. But neglecting the details and the exceptions (edge cases, for example) will always result in rework.

Often that rework takes the form of what I like to call “programming through permutations”. This is when the code almost works, but the developer is not sure why it breaks on a particular use case or edge case. What you’ll often see is that they make a small change and test, make another small change and test again, rinse and repeat, until it works. This can take up an entire afternoon, simply moving bits of code around or changing greater-than comparisons to greater-than-or-equal ones. And when it finally works, they still have no idea why it broke or how their final change fixed it.

Plowing through the API documentation

So in order to understand this problem, we need to see what we will face at the various social media outlets. That is, what are the requirements for each social media syndication, and how do they overlap? Our work here will not yet involve dealing with the respective social media APIs directly (we have an abstraction for that), but we do need to understand what these APIs will accept, and therefore what our abstraction IBlogSyndication will have to support as a minimum. For now, I’m going to concentrate on RSS, Twitter, and Facebook. I don’t have an Instagram account (I’m old, hey).


RSS doesn’t have an API of course; it’s just a standard (or a convention, if you will). It has an official specification, and they provide some sample documents, just the thing I need. There’s the channel with some details on it, and a bunch of items inside the channel. Think of these channels literally as channels on TV. If you have the DStv app on your phone, it probably uses some extension of RSS (or something very similar) to convey that data to the app. I might only have one channel, or I might make a channel per project. Each item has only a few values: title, link, description, publish date and a guid. So we already have a small set of properties that our candidate blog syndication model must support, and I can already guess that the guid will have to be persisted.

But RSS works differently from social media services. RSS is a reactive thing, that is, it reacts to someone that wants to read from it, as opposed to me posting something somewhere into the cloud. Typically I will have a link on my site, and this will serve an XML file like that sample page with all the details in it. Whoever (or whatever) reads that XML can then decide how to display it or make sense of it according to the standard. For example, you can add my RSS link to your Feedly account, and it will read the channels and items and decide how to display them to you within their app experience. In order to do that I have two options: prepare an XML file beforehand and just load it up whenever someone visits that link, or generate the XML file on demand from prepared data and serve it. We’ll talk about this again later, but here’s a spoiler: the first option is really the same as the second option with some tweaks.
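For reference, here is roughly what such a feed document looks like. This is a minimal sketch based on the shape the RSS 2.0 specification describes; the channel details, URLs and values are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>helloserve Productions</title>
    <link>https://example.com/blog</link>
    <description>Blog posts from helloserve</description>
    <item>
      <title>My Blog Post</title>
      <link>https://example.com/blog/my_blog_post</link>
      <description>The first paragraph or cut of the post.</description>
      <pubDate>Mon, 01 Jul 2019 09:00:00 GMT</pubDate>
      <guid>9a8b7c6d-1234-4abc-9def-000000000000</guid>
    </item>
  </channel>
</rss>
```

Note how the item carries exactly the few values listed above: title, link, description, publish date and guid.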


I came upon the POST statuses/update API reference within a few minutes of searching. Without worrying about stuff like authentication and so on, the list of parameters show quite a few items that go into the POST. The important one appears to be status. Yep, there is only one. We can include an image perhaps, but the link to our blog post will have to be inside the status (or the tweet) itself. So it seems that we will have to construct the final value of status from different parts of the blog. This will be proactive syndication, meaning I will have to call their API and deliver the content to them. It’s easier in the sense that I don’t have to persist anything, but it’s harder in that I will have to deal with HTTP failures and other problems.
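To make “constructing the final value of status” concrete, here’s a small sketch. The class and method names are my own, and for simplicity I’m ignoring that Twitter counts links differently (it shortens them via t.co):

```csharp
public static class TweetComposer
{
    private const int MaxLength = 280;

    // Builds the single 'status' value from the blog's text and link.
    // If the text is too long, it is truncated so the link always fits.
    public static string Compose(string text, string link)
    {
        string suffix = " " + link;
        if (text.Length + suffix.Length <= MaxLength)
            return text + suffix;

        int available = MaxLength - suffix.Length - 1; // 1 for the ellipsis
        return text.Substring(0, available) + "…" + suffix;
    }
}
```

The point is simply that the tweet is assembled from parts of the blog at publish time; nothing tweet-specific needs to live in the database.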


And finally, the daddy. Unfortunately, I think we’re going to have some trouble here. Facebook has plenty of official SDKs, but not one for C#. That’s expected, actually (C# is still considered a “for corporate” language by most). Microsoft maintains a UWP app SDK however, so that might work. But here’s the thing: on Facebook I have a page for helloserve Productions, and I want to syndicate to this page instead of to my own profile (I typically share what I post to the page). The Business SDK, however, has a page that deals with publishing to (or as) a page. It appears that there are only two values to set as query parameters: message and link. So this is very similar to the Twitter API.

Where is the overlap?

Now that we have an idea of the problem, we can design a solution for it. I really only need two things for the social media posts: the text, and the link. For RSS on the other hand, there is a title and a separate description. So where is the intersection between these syndications? Twitter has a strict limit on length, and any sort of general description is probably going to be too long. I think this explains why you typically see standard messages like “I posted a video to youtube” with the related link. Facebook, on the other hand, will accept a large piece of text and is only really limited by the maximum length of a URL. But here is another consideration: when you manually make a post on Facebook and you paste in a URL, Facebook automatically shows you a card for that link, sometimes with an image, sometimes with a “blurb”. How does that work? Some googling shows me that it relies on metadata within the HTML page of the source. So now we have title and description again, albeit within the actual blog post’s HTML page.
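That metadata is typically the Open Graph tags in the page’s <head>. Something like this, with illustrative values:

```html
<head>
  <meta property="og:title" content="My Blog Post" />
  <meta property="og:description" content="The first paragraph or cut of the post." />
  <meta property="og:image" content="https://example.com/images/lead-in.png" />
  <meta property="og:url" content="https://example.com/blog/my_blog_post" />
</head>
```

Facebook’s crawler reads these tags from the linked page to build the card, so the title and description live in the blog post’s own HTML rather than in the API call.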

Base it in reality

In order to decide how I’m going to overlap these three syndications, I’m going to look at some of my old content compared to others’ content. In my old blog posts, I typically had a “cut” as a first paragraph. This was also what I displayed in my blog ‘cards’ on my site, together with the title. It was actually a completely separate column in the News table, represented as a separate paragraph in the HTML page. At the time Facebook typically looked for the <h1> tag and the first <p> tag when you pasted a link as a post, and that’s what I relied on then. So I actually already have this concept of a title and a description in my current blog data. Now let’s look at a syndicated tweet:

The article that this tweet links to does not contain the text of the tweet at all. In fact, this tweet appears to have been manually authored. But the amount of metadata in the HTML page of that article is phenomenal. That’s a good lesson. Another example:

Compare this to the same article’s Facebook post:

They use the same text, save for the hashtags, but it doesn’t appear in the meta tags of the actual article page.

So I’m thinking instead of trying to “overlap” these different pieces of information, perhaps I should actually separate them and just help myself out with good UI. Meaning, I should provide myself space in the authoring of the blog post to put in the text I want to use as a tweet, and as a Facebook post. The UI can pre-populate it as soon as I fill in the description field. Then I can separately edit the field for Twitter to shorten it or include hashtags for example.

SOLID: S for Single Responsibility

But this gives me another problem. I don’t want to litter my blog model with disparate fields about social media posts. And I certainly don’t want to carry along data in my database that would live there and never be used again. On a higher level, the entire concept of the blog should have a single responsibility. And the syndication abstraction should inform the rest. I need to devise another way to capture and process these bits of text that will accompany the various social media posts without having to persist it in the database or domain models.

So here’s my idea: the front-end receives data from configuration that informs the page about the various options for syndication, and generates the required UI for each one. The ones I fill in are then sent back to the API and arrive at the Publish method separately from the blog model. They are then transformed into different instances of the IBlogSyndication abstraction through a syndication factory, and the captured fields are passed to the related instances for processing in the background service.
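As a rough sketch of that idea (every name here except IBlogSyndication is hypothetical at this stage):

```csharp
// The per-outlet text captured in the UI, sent to the API
// alongside (but separate from) the blog model.
public class SyndicationRequest
{
    public string Outlet { get; set; }   // e.g. "twitter", "facebook", "rss"
    public string Text { get; set; }     // the text I filled in for this outlet
}

// Resolves the IBlogSyndication instance for a configured outlet.
public interface IBlogSyndicationFactory
{
    IBlogSyndication Create(string outlet);
}
```

The Publish flow would then loop over the requests it received, resolve an instance for each, and hand the pair off to the background service — without persisting any of it.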

Plan it

I think now we have a final list of items that we need to support, so let’s plan our work for this. We have enough to complete the Blog entry, and we can add most of these properties on it. But how do we test it? Do we even test it? Simply adding a property doesn’t warrant a unit test. There’s no code there, except the declaration. And you only unit test code that you write. Next, I think we can fill in our abstraction IBlogSyndication a bit too, and then we can test that we make use of it, somehow. But now comes the tricky part - in order to make use of it, we need an out-of-process way to call three different instances of the same adaptor. You’ll recall from our flow diagram in part 5 that we branched off into the “process” column, where we had a loop that pushed to the various adaptors. This was surrounded by a box labeled IHostedService.

Concept: Asynchronous vs Background

Let’s talk about asynchronous flow first. Simply put, when the incoming web request completes (you responded to it), the .NET Core runtime releases the thread that handled that request back to the pool, making it available for the next request. It also signals down the stack, using a CancellationToken, that it’s finished and wrapping up. If you launched any Tasks without awaiting them, completion of those tasks is not guaranteed: they might exit based on the token, or they might not (often null is passed as the token). But we also don’t want to hang around waiting for a list of syndication processes to finish while the user who published a blog post waits for the screen to load. We want to process these syndications completely out of band, while responding to the user as quickly as possible.
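To make the hazard concrete, this is the kind of fire-and-forget code where completion is not guaranteed (a contrived controller action; _blogService and _syndicator are hypothetical):

```csharp
public async Task<IActionResult> Publish(string title)
{
    await _blogService.Publish(title);

    // Fire-and-forget: this Task is never awaited. Once this action returns
    // and the request completes, nothing guarantees it runs to completion.
    _ = Task.Run(() => _syndicator.SyndicateAll(title));

    return Ok();
}
```

The fix is not to await it here either (that would make the user wait); it is to hand the work to something that outlives the request.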

We can achieve this by using the System.Threading API in conjunction with a singleton service, but I’ve seen posts about the .NET Core IHostedService API and I want to see what this is all about. It turns out that even the WebHost that we use when we build our API (see the Program.cs file) is based on this API. That page details three different implementations, and the one I’m interested in is the queued background tasks one. Three things are required:

  • A singleton queue manager, IBlogSyndicationQueue. This is behind an abstraction because it is constructor-injected into the BlogBackgroundService together with a logger.
  • A background processor, BlogBackgroundService, that extends BackgroundService, which in turn implements IHostedService.
  • A scope set up so that the syndication can make use of dependency injection to get other services, like HttpClient.
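Putting those three requirements together, here’s a minimal sketch modeled on the queued background tasks example in the .NET Core documentation. The exact member signatures are my own assumptions:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public interface IBlogSyndicationQueue
{
    // A work item receives the scoped service provider so it can resolve services.
    void Enqueue(Func<IServiceProvider, CancellationToken, Task> workItem);
    Task<Func<IServiceProvider, CancellationToken, Task>> DequeueAsync(CancellationToken token);
}

public class BlogBackgroundService : BackgroundService
{
    private readonly IBlogSyndicationQueue _queue;
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly ILogger<BlogBackgroundService> _logger;

    public BlogBackgroundService(IBlogSyndicationQueue queue,
        IServiceScopeFactory scopeFactory,
        ILogger<BlogBackgroundService> logger)
    {
        _queue = queue;
        _scopeFactory = scopeFactory;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var workItem = await _queue.DequeueAsync(stoppingToken);

            // Create a scope so the work item can use dependency injection
            // to get other services, like HttpClient.
            using (var scope = _scopeFactory.CreateScope())
            {
                try
                {
                    await workItem(scope.ServiceProvider, stoppingToken);
                }
                catch (Exception ex)
                {
                    _logger.LogError(ex, "Syndication work item failed");
                }
            }
        }
    }
}
```

Note that the loop honors the host’s CancellationToken, so a graceful shutdown stops the processing cleanly instead of abandoning it mid-flight.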


The singleton queue manager concrete has to do two things: queue and dequeue. The background service has to poll the queue, take items off and process them. And then the blog service, when publishing, has to invoke the queue using the scope to put items onto it. Let’s list these.

  • Test queue operation on BlogSyndicationQueue
    • BlogSyndicationQueue_Enqueue
    • BlogSyndicationQueue_Dequeue
  • Test BlogBackgroundService process
    • Dequeue and process queued items
    • BlogBackgroundService_Process
  • Test BlogSyndicationFactory
    • Return correct instance based on input from configuration
  • Test that BlogService enqueues as many syndications as configured
    • Should be generic Action that invokes functionality on IBlogSyndication abstraction
    • Define and read config
    • BlogService_Publish_EnqueueSyndications

All of this can be depicted as follows:

This flow diagram looks slightly different from the previous one, but it still has all the same parts (I left out the repo due to space). This evolution of the one from part 5 was informed by my deeper understanding of the problem. In part 5 I had already included in the initial plan some tests related to the syndication configuration, hosted service and so on. Those remain valid of course, but again have been evolved due to my additional understanding of the problem in this part.

Concept: Extensibility

Maybe you’re wondering why the queue service is abstracted by an interface, since it is entirely feasible to just register a class as a singleton and stick that into the constructor. And for a narrow implementation like this blog that will be fine. But consider this: in a broader system where you have more mission-critical things happening, you don’t want that queue to die when something happens to your API instance. So while this example implements a concrete around the ConcurrentQueue collection, in real life this type of implementation will probably be built around something like a RabbitMQ instance running as a separate service. In this way your queue is persisted outside of the API service, which is a much more robust design. Here is a separate, very detailed overview of queues and implementations of them. Then there are the following principles to consider.

SOLID: D for Dependency Inversion

Since we’ve practiced this a few times now you should be familiar with the idea of providing an abstraction for the queue service that decouples the blog domain from the queue implementation. The BlogService has no dependency on the specific queue implementation details.

SOLID: L for Liskov Substitution

Additionally, we see how we can provide different implementations by using an abstraction, but have the same results. In your development environment you can start the API service with an in-process implementation of the queue and expect a specific behavior. Then during user testing and later in production, you can start the API with a more resilient implementation around a separate, persisted service, and still expect the exact same behavior. Different concrete implementations that adhere to the contract defined by the abstraction.
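In .NET Core terms, that substitution is just a different registration at startup. A sketch, where the two concrete class names are hypothetical and _env is the hosting environment injected into Startup:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    if (_env.IsDevelopment())
    {
        // In-process queue around ConcurrentQueue, fine for local development.
        services.AddSingleton<IBlogSyndicationQueue, InMemorySyndicationQueue>();
    }
    else
    {
        // Durable queue backed by an external broker, for user testing and production.
        services.AddSingleton<IBlogSyndicationQueue, BrokerSyndicationQueue>();
    }

    // The consumer is registered the same way in both cases.
    services.AddHostedService<BlogBackgroundService>();
}
```

Nothing else in the code base changes; everything that depends on IBlogSyndicationQueue behaves identically against either concrete.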


This part has taken much longer to complete than I anticipated, so we’re not going to write any code now. But it was a most interesting part to research. We’ve seen how it becomes necessary to dive deep into the domain that you’re working in, and how you have to understand and follow through on the concepts and processes within the scope of your problem. Figuring out the solution is a natural progression once you understand the problem, and so we’ve evolved our process flow (your client will also appreciate your demonstrated understanding of their problems). We’ve also discussed and contemplated different background workers and different queue implementations. Both these concepts are crucial elements in any moderately complex IT system, and they come in many different shapes and forms.

Following the plan

Looking at the plan from part 5 gives us a nice path to follow to completion. We’ve already detailed exactly where we need to start (the tests!). So, without further ado...


public async Task Publish_IsPublished_MarkedTrue()
{
    Blog blog = new Blog();

    await Service.Publish();

    Assert.IsTrue(blog.IsPublished);
}


So we followed the first item of our plan: we have a call to Publish() detailed in the test, and we assert for the property value. If we follow through and create the code, what do we put into the Publish() method? Where does Publish() get the blog whose property it sets to true? The requirements that we planned from say: “I want to select a previously created blog post and mark it as published”. This implies that we’re not passing in the entire blog like when we create it. The only other place we can get the blog from, then, is the database. But to do that we either have to call the Read() function, or talk directly to the adaptor. Either way, we will need the title of the blog.

public async Task Publish_IsPublished_MarkedTrue()
{
    string title = "hello_test";
    Blog blog = new Blog();

    await Service.Publish(title);

    Assert.IsTrue(blog.IsPublished);
}


Then we code-complete and fill in the Publish() method.

public async Task Publish(string title)
{
    var blog = await _dbAdaptor.Read(title);

    blog.IsPublished = true;
}

Our test doesn’t pass yet; we have to make one modification so that we can actually use the blog instance that we arrange here for ourselves. We set up the mock to return it.

public async Task Publish_IsPublished_MarkedTrue()
{
    string title = "hello_test";
    Blog blog = new Blog();
    _dbAdaptorMock.Setup(x => x.Read(title)).ReturnsAsync(blog);

    await Service.Publish(title);

    Assert.IsTrue(blog.IsPublished);
}



Our test passes. I don’t see anything to refactor at this point. We move on to the next item on the plan, the publish date. There are two tests we need to write.


public async Task Publish_NoDate_IsSet()
{
    string title = "hello_test";
    DateTime expectedDate = DateTime.Today;
    Blog blog = new Blog();
    _dbAdaptorMock.Setup(x => x.Read(title)).ReturnsAsync(blog);

    await Service.Publish(title);

    Assert.AreEqual(expectedDate, blog.PublishDate);
}

public async Task Publish_DateSet_IsNotSet()
{
    string title = "hello_test";
    DateTime expectedDate = DateTime.Today.AddDays(-2);
    Blog blog = new Blog() { PublishDate = expectedDate };
    _dbAdaptorMock.Setup(x => x.Read(title)).ReturnsAsync(blog);

    await Service.Publish(title);

    Assert.AreEqual(expectedDate, blog.PublishDate);
}

Only one test fails: the date is not set when it’s null. We can update the code of the Publish() method to fix it.

public async Task Publish(string title)
{
    var blog = await _dbAdaptor.Read(title);

    blog.IsPublished = true;
    blog.PublishDate = blog.PublishDate ?? DateTime.Today;
}


Our tests now pass. Is there anything we can refactor? That line of code we just added, we’ve seen before. When we were working on the Create() method, we had the same line of code. At the end of that red/green/blue flow, we moved it into a validation method.


Perhaps we can just call that method here too?

public async Task Publish(string title)
{
    var blog = await _dbAdaptor.Read(title);

    blog.IsPublished = true;
    Validate(blog);
}

Is that the right thing to do? It makes sense to validate the blog one final time before it is marked as published, right? Let’s test it. This is the power of unit testing. Suddenly we have three failed tests. Three! Wow. Clearly, this has a big impact. And they are all the new publish test methods. So, why are they failing?

System.ArgumentNullException: Value cannot be null.
Parameter name: Title

Well, of course! The first thing that is validated is that we should have a title, so that it can be turned into a key for storing it in the database. So how does that play out for our Publish() method? Well, it actually loads the blog by title from the database adaptor, so I think here we can assume that the blog will already have a title filled in. After all, it must have been saved previously with a title. It’s just that we didn’t fill it in when we supplied it to our mock. And if it doesn’t have a title, if something is wrong with the data, it will let us know immediately by failing fast and throwing that ArgumentNullException, just like the unit tests do. It feels like a real win to validate the blog entry again now. Let’s add the title in all these tests, and run them again.

Blog blog = new Blog() { Title = "Hello Test!", PublishDate = expectedDate };

And suddenly all our tests pass!

SOLID: S for Single Responsibility

We created the Validate() method previously, and now we were able to reuse it. Similarly, we created the Read() method before, and we could have reused that too. This was only possible because those two methods have one single purpose each, and they do only what it says on the box. If these methods had a dual purpose, or if their outcomes were determined by conditions hidden inside them, or if they had a dependency on something, it would have been much harder to reuse them in a different context or with a different dependency. When you write simple, deterministic methods with few or no dependencies, you will always be able to trust them and reuse them. They will also be really easy to test with unit tests.

Why not call Read() also?

So I did not call the Read() method of the service, and instead read directly from the adaptor. It would not have been wrong to do so, but my decision is based on the following reason: I don't want to mix business rules or create dependencies between different business rules. If the Publish() method now relies on the Read() method, and the definition of the Read() method has to change at some point in the future, my Publish() method will be affected. For example, perhaps we need to add some additional transform to the Read() method to fulfill a change in business rules. Such a change will then have side effects on my other business rule.

Of course, it is also entirely plausible that we might need to make a change in how the adaptor presents the data as part of its own Read() method. Which would affect both the service Read() and Publish() methods. But this will also require a change in the abstraction (perhaps a different parameter, or a change to the Blog model?). So this decision is all about where you want to put your risk. There is no wrong or right answer here - I'm opting to put my risk all in one basket (the adaptor's Read() method) instead of spreading my risk around via dependencies. And typically lessening dependencies is always more beneficial, as we've already seen (and will again).


In this part we worked partway through our plan and built some of the unit tests on the list. We saw how the unit tests helped us execute our actual code early and often, through each step of our development process, continually reaffirming our understanding of both the problem and the solution. And even though we did not apply one of the principles in this part, we saw how applying it before helped us, because we were able to reuse code. You might remember that in part 5 we already saw, as part of our plan, that we’re repeating the unit tests for the publish date that we did in the Create() method work. Our principles and our plan align! This leaves us feeling at ease about the future. And if the requirements around the publish date change, we only have one place to go and make that change. And we have a bunch of unit tests that will guard us when we make those changes.

In this part we’re looking at the last verb of the blog entity, Publish. In part 2 I said that this was a special verb, and that in the real world the specification should explain these kinds of verbs.

I showed you this example.

A blog post will remain invisible to the public until published. When a blog post is published it should post to selected social media outlets, including the RSS feed.

Another way that this might be specified is through a user story.

As an administrator, I want to select a previously created blog post and mark it as published so that anonymous users can read the blog post.

Functional requirements:
  • The blog post must be marked as published.
  • The blog post’s publish date must be set.
  • The blog post should be shared on social media outlets
Non-functional requirements:
  • Publishing of the blog post must happen in less than 2 seconds

I don’t have a real requirement document for my own website that we’re building here, I’m making this up as I go along. I had forgotten about the piece in part 2 when I wrote down the user story for this verb here in part 5. And as it often happens in the real world, the two pieces of text reflect essentially the same requirements, but the first one is just very poorly worded and leaves a lot of information implied instead of being specific. During your career you deal with different people, some of whom will be more competent than others. And so it is not uncommon to have to do real work based on very poor requirement documents. You’ll have to learn to read between the lines and across different parts of the specification to form a complete picture. And you’ll have to learn to ask questions. There is never anything wrong with asking questions.

A plan to design

So far we’ve taken the requirements directly to the unit tests, and let a lot of the implementation details emerge to us as we tested and coded. This is fine for small projects and one- or two-man development teams. However I want to convey a more professional approach from which you will get a lot more mileage in bigger teams and projects. And that approach is to plan before you start testing and coding.


You’ve heard the term ‘agile’ before. Perhaps you’ve also heard about extreme programming. And Scrum. These all revolve around two basic concepts: you commit to some work upfront, and that work is completed before any further work (or changes to the work) is considered. To be able to do this, for you to be able to commit to some work, you have to understand what that work will entail. Typically you have to size the amount of work or effort involved to complete it. And typically you have to break the work down into smaller pieces that fit into the “heartbeat” of your agile workflow. Sometimes that’s a two-week period, sometimes three weeks. Regardless, you will have to plan your work in advance in order to successfully deliver on your commitment (or risk being under- or over-committed). In the workplace you do this during what the scrum masters call "sprint planning".

Break it down

Planning for a piece of work obviously requires that you understand all the bits that you have to do in order to complete it. We will need a list of things to understand. And as before, we start with the list of things we need to test for. Always start at the tests.

  • Test that a blog is marked as published.
  • Test that a blog’s published date has been set.
  • Test that we push to each social media outlet.

Well, that was simply repeating the functional requirements, wasn’t it? Perhaps we can do a bit better than that.

  • Test that a boolean property is set to true on a blog entity to mark it as published.
    • Property: IsPublished
    • Unit Test: BlogServiceTests.Publish_IsPublished_MarkedTrue
  • Test that the publish date is set to today’s date when the date wasn’t set by the user.
    • Property: PublishDate
    • Unit Test: BlogServiceTests.Publish_NoDate_IsSet
    • Unit Test: BlogServiceTests.Publish_DateSet_IsNotSet
  • Test that we push to each social media outlet when publishing a blog
    • Iterate through a list of `IBlogSyndication` adaptors and call into each one
    • This process might take longer than 2 seconds though
      • Will need an out-of-process long-running task
      • Perhaps look at IHostedService
      • Special processing abstraction that we can mock.
      • ISyndicationProcessor, async Task Syndicate(string url, DateTime date, string description)
    • Find out what the common elements between all the different social media platforms are. RSS has at least url, date, description and title.
      • IBlogSyndication.Push(string url, string date, string title, string description)
    • Unit Test: BlogServiceTests.Publish_Verify_LaunchHostedServiceProcessing
    • Unit Test: SyndicationProcessorTests.Syndicate_LoadSyndicationConfig
    • Unit Test: SyndicationProcessorTests.Syndicate_Verify_SyndicationAdaptors
  • BlogService method: async Task Publish(string title, DateTime? publishDate)
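The two abstractions sketched in the plan, written out as C# (the signatures come straight from the list above; the Task return type on Push is my assumption):

```csharp
public interface ISyndicationProcessor
{
    Task Syndicate(string url, DateTime date, string description);
}

public interface IBlogSyndication
{
    Task Push(string url, string date, string title, string description);
}
```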

That looks a lot more detailed, and there is a lot to cover. Where did all this information come from? It came from thinking about and discussing the user story with your fellow team members. It came from actively designing the processes upfront so that it will fulfill the requirements. It came from discussing the test cases with the QA people during the planning session.

On a bigger project the design of the solution is always a team effort. This is because an individual’s logical thinking and common sense is not infallible and they will miss something. And also because everyone in the team needs to understand the requirement and the implementation of the solution.

I’ve revised that list above probably five times now and spent at least three hours on it. There isn’t a real team involved here; it’s just me, so it remains to be seen how much foresight I could really apply. Planning is really hard, but it is also really crucial. Don’t skimp on it. To paraphrase from an old Top Gear episode: if you fail to plan, then you should plan to fail (who can tell me which episode that quote is from?).


The items on this list would now typically go onto some board. Perhaps on Trello, TFS or Jira. Maybe even a real, physical whiteboard with Post-It notes. The list constitutes tasks that need to be completed, and those tasks might be assigned to different people.

Another output from planning is additional documentation artifacts that assist in understanding, and these can sometimes be considered deliverables to the client as well. One such example is sequence or flow diagrams (typically one per feature or story). In parts 1 and 2 we saw the high-level domain diagram and the first-level solution diagram. The sequence diagram helps us understand, on a very detailed level, the steps we need to follow in order to complete the task.

This diagram shows us a few things. It shows us the orchestration that needs to happen when a blog is published. It shows the data that is being sent and will be returned. It shows us our integration points with our abstractions. It even shows us the API endpoint and the HTTP result code. It shows us that we will do a full read of the blog entry from the database and then update that entry again, and then before we return we will asynchronously call into the IHostedService to process some stuff. You can go into a lot more detail here if you want, like showing one or two error sequences also.

Changes to previous designs

Those paying attention will have noticed that we’ve included in our plan some overlap with previous parts. We have the same PublishDate unit tests here as what we built in part 4 for the Create verb. This happens all the time: you uncover functionality that overlaps, or new requirements that change what you did before. This is normal and should not raise any alarms or cause concern. Change to a software system is absolutely normal, even while it is still being built.


In this part we discussed how to plan and how important it is. This part has taken me tremendous effort to write. I have a small whiteboard here next to my computer that I use, and I drew this stuff out over and over again until I got something I was finally happy with, before duplicating it digitally for this blog. At my day job we typically dedicate an entire day to planning the work for the next sprint. It is hard and it is tiring. But if you do it well, with enough detail and documentation, as a team, you will reap the rewards over and over again. This plan will probably keep me busy over the next few parts, so you’ll have to bear with me through the coming unit testing and red/green/blue rinse and repeat.

We’re now getting into the swing of developing this service, so we carry on with the next verb, Create.

Before we start let’s refresh ourselves with the requirements that we saw in part 3.

Functional requirements

The blog title should be in the url, with all escaped characters like punctuation removed.
The blog title will be displayed as a heading
The blog publish date will be displayed under the heading
A blog article image or lead-in image will be displayed if it exists
The blog content will be displayed

Non-functional requirements

The blog should load in less than 2 seconds
The blog should scale correctly on all popular screen sizes

Red/Green/Blue continued

We start with a unit test as part of the existing service test class.

[TestMethod]
public async Task Create_Verify()
{
    Blog blog = new Blog() { Title = "Hello Test!" };

    await Service.Create(blog);

    _dbAdaptorMock.Verify(x => x.Create(blog));
}

This is a simple test to verify that the service calls the database adaptor with the supplied object. Like before we use the code completion features to create all the methods. Fortunately, because we did that refactor in the previous part we can simply use the Service property which is already correctly initialized. Now we need to pass our test with some code changes.

public async Task Create(Blog blog)
{
    await _dbAdaptor.Create(blog);
}

That one was easy, but we have a bit more to do here. We’ve touched on the title in the URL in part 3. If we are to be able to look up a blog entry from its title, then we have to persist that title in the database as a key (either as a primary key or on some index). As we’ve seen before, given the simple title from the URL, we have no way to reconstitute the original title. This means that we’ll have to transform the rich title into a key when we save the blog. Let’s add that to the unit test also.

[TestMethod]
public async Task Create_Verify()
{
    Blog blog = new Blog() { Title = "Hello Test!" };

    await Service.Create(blog);

    _dbAdaptorMock.Verify(x => x.Create(blog));
    Assert.AreEqual("hello_test", blog.Key);
}

Obviously, the Key property doesn’t exist, so we code-complete that. And then we have to write code to assign it after we strip out all punctuation and encoded characters so that our test can pass.

public async Task Create(Blog blog)
{
    blog.Key = blog.Title;
    char[] punctuation = new char[] { '`', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '.', ',', '?', '\'', '"', '~', '+', '-', ':', ' ' };
    for (int i = 0; i < punctuation.Length; i++)
        blog.Key = blog.Key.Replace(punctuation[i], Char.MinValue);

    await _dbAdaptor.Create(blog);
}

But our unit test doesn’t pass. We see from the failed assert that Message: Assert.AreEqual failed. Expected:<hello_test>. Actual:<Hello\0Test\0>.

There are three things wrong here. Punctuation is actually replaced by \0 and not simply removed (I learnt that Char.MinValue is not the same as empty or no value while writing this blog). The space was replaced as part of the punctuation, and we still have capital letters. We can address the char issue by switching to strings, and the missing space issue by handling it specifically instead of generally. The framework can do the heavy lifting for us on the casing issue.

public async Task Create(Blog blog)
{
    blog.Key = blog.Title;
    string[] punctuation = new string[] { "`", "!", "@", "#", "$", "%", "^", "&", "*", "(", ")", ".", ",", "?", "'", "\"", "~", "+", "-", ":" };
    for (int i = 0; i < punctuation.Length; i++)
        blog.Key = blog.Key.Replace(punctuation[i], string.Empty);
    blog.Key = blog.Key.Replace(" ", "_");
    blog.Key = blog.Key.ToLower();

    await _dbAdaptor.Create(blog);
}

Our unit test now passes, so let’s move to blue to refactor.

Look at that Create method again. Its purpose is to tell the database to persist a blog entry. Its secondary purpose is to transform the blog post into something that is meaningful to us for persistence. However, it should not also be encumbered with the details of each transformation. The for-loop in that method adds to the conditional complexity of the entire method, and it’s a massive preamble that a reviewer or co-contributor has to work through before they get to the main purpose. To give a specific term to that purpose: it should orchestrate the persisting of the blog entry. Everything else is secondary and probably best detailed elsewhere. So, let’s refactor that bit out.

SOLID: S for Single Responsibility

public async Task Create(Blog blog)
{
    blog.Key = AsUrlTitle(blog.Title);
    await _dbAdaptor.Create(blog);
}

private string AsUrlTitle(string title)
{
    string[] punctuation = new string[] { "`", "!", "@", "#", "$", "%", "^", "&", "*", "(", ")", ".", ",", "?", "'", "\"", "~", "+", "-", ":" };
    for (int i = 0; i < punctuation.Length; i++)
        title = title.Replace(punctuation[i], string.Empty);
    title = title.Replace(" ", "_");
    title = title.ToLower();
    return title;
}

You’ll notice that this method works slightly differently from the original code. It doesn’t even know about the type Blog. It is only concerned with strings. Previously I first had to assign the Title value to the Key property, and then transform it in place. Now the private method works on its own parameter, which it can manipulate and return as a result without affecting where the value came from (reassigning the parameter variable inside the method does not change the caller’s string). Why did I not take in the type Blog here?

Having to transform a rich title to a simple URL title seems like a useful thing to be able to do. It is actually an integral part of this requirement. So I want this piece of code, this method, decoupled from the type Blog. This means that later I can move it out of the Blog namespace completely and just have it as an extension method perhaps. But whatever emerges or doesn’t emerge in the future of this development, this method is now responsible for only one thing and it has zero dependencies, whereas the Create method now just strings together a set of calls in the correct order. Its one job is to orchestrate. "But hold up" I hear you say. "What about that array of punctuation strings? That should be a constant declaration!" you say.
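As a sketch of where that could end up later (the extension class and its placement are my assumption, not something the series has built yet), the same transformation could become a string extension method:

```csharp
// Hypothetical future home for the transformation; the class name is illustrative.
public static class StringExtensions
{
    public static string AsUrlTitle(this string title)
    {
        // Same steps as the private method: strip punctuation,
        // replace spaces with underscores, lower-case the result.
        string[] punctuation = new string[] { "`", "!", "@", "#", "$", "%", "^", "&", "*", "(", ")", ".", ",", "?", "'", "\"", "~", "+", "-", ":" };
        foreach (string p in punctuation)
            title = title.Replace(p, string.Empty);
        return title.Replace(" ", "_").ToLower();
    }
}
```

The Create method could then read blog.Key = blog.Title.AsUrlTitle(); without the service owning any string logic at all.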

Clean code over working code

Well, if this was a string manipulation library or component, then sure. It would make sense to have something like that available to the different functionalities inside such a component. But here? I don’t think that littering the top of the BlogService class with the details of known punctuation, where it will be completely out of context, serves any purpose. Yes, there is a very small performance impact because it’s an additional memory allocation in the method (and we can make that better through ArrayPool also), but it’s marginal at best and totally insignificant when you consider the numbers I have to serve here. And while the effort to move it to a constant is also minimal, at this point it’s cleaner to keep the responsibility and dependencies of the method enclosed within the method itself.

To put it another way, I don’t think that the array declaration in this method is where my focus for performance improvements should be now, or even later on. It’s a well-known trap in software development to tune performance too early, when most of a program’s execution time is typically spent in a small fraction of its code, so effort is best spent where measurement shows it matters. And while it might be an obvious improvement, it will also muddy the BlogService class and make turning this method into an extension later on more complicated. All for a very small gain that I will barely be able to measure.

But what about some LINQ?

There is another refactor that you can consider here. The for-loop is more than sufficient, but old-school. You can rewrite that using inline (or fluent) declaration and LINQ.

private string AsUrlTitle(string title)
{
    new List<string>() { "`", "!", "@", "#", "$", "%", "^", "&", "*", "(", ")", ".", ",", "?", "'", "\"", "~", "+", "-", ":" }
        .ForEach(x => title = title.Replace(x, string.Empty));

    title = title.Replace(" ", "_");
    title = title.ToLower();
    return title;
}

I know a lot of people who prefer this. Personally I’m of a split opinion. Again, the code is more concise and terse, but more difficult to understand and read (I do recognize that it would read much easier if the array was a constant declaration outside of the method, though). Also, the implementation detail is abstracted away from you. Not everyone realizes that there is still a for-loop happening here; it’s just not in your code. (Strictly speaking, ForEach is a List&lt;T&gt; method rather than a LINQ operator, but the reading style is the same.) Most projects now use LINQ to Objects almost everywhere, and I’d say use this code if you and your team are comfortable with it, can effectively read and maintain it, and understand the consequences of using LINQ extensions.
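For comparison, here is a variation of my own (not from the original) that uses actual LINQ operators: filter the characters with Where and reassemble the string, instead of replacing punctuation strings one by one:

```csharp
// Character-filtering variant of AsUrlTitle; requires `using System.Linq;`.
private string AsUrlTitle(string title)
{
    // Every punctuation character as a single searchable string.
    const string punctuation = "`!@#$%^&*().,?'\"~+-:";
    // Keep only characters that are not punctuation, then normalize.
    string stripped = string.Concat(title.Where(c => !punctuation.Contains(c)));
    return stripped.Replace(" ", "_").ToLower();
}
```

It behaves the same for these inputs, and the deferred enumeration inside Where carries the same "hidden loop" caveat discussed above.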

Deeper into the requirements

So we’ve done some refactoring and confirmed that our unit test still passes despite the code changes. This is one of the biggest benefits of unit testing. It is then time to look deeper into the requirements and see how we can fulfill them. The blog has a publish date. The user will probably be able to set this, but what if they don’t? It’s always a good idea to show the date of an article. How many times have you read a technical piece or a tutorial and you only realized it was out of date halfway through? This will require two unit tests.

[TestMethod]
public async Task Create_NullDate_IsSet()
{
    Blog blog = new Blog() { Title = "Hello Test!", PublishDate = null };

    await Service.Create(blog);

    Assert.AreEqual(DateTime.Today, blog.PublishDate);
}

[TestMethod]
public async Task Create_HasDate_NotSet()
{
    DateTime expectedPublishDate = DateTime.Today.AddDays(-4);
    Blog blog = new Blog() { Title = "Hello Test!", PublishDate = expectedPublishDate };

    await Service.Create(blog);

    Assert.AreEqual(expectedPublishDate, blog.PublishDate);
}

We need two because we have two distinct cases to handle. One where the date is null, and one where it isn’t. To pass our tests we need to add some code to the service method.

public async Task Create(Blog blog)
{
    blog.Key = AsUrlTitle(blog.Title);

    if (blog.PublishDate == null)
        blog.PublishDate = DateTime.Today;

    await _dbAdaptor.Create(blog);
}

Looking at this code, do you see the two different branches? One is if the date is null, and we give it a value. But the other?

Branching Code

The other branch is implied, and it is to do nothing when the date is already set. Whenever there is no explicit else section, there is an implied one, and we have to ensure our unit tests cover it too. We do this to guard against a change that would introduce an else if or explicit else section in the future.

Now that the tests pass again, what can we refactor? An easy one would be to reduce the multi-line if-statement to a single line null-coalescing operator. This is a feature in many languages that makes your code more terse and compact (it is similar to the ternary operator). If you feel that it is not obvious enough what the code does, or that your team isn’t ready to deal with this style of code, then do not use it. Being more explicit or verbose is always better if you’re unsure. For me and for my project, I’m making this change.

public async Task Create(Blog blog)
{
    blog.Key = AsUrlTitle(blog.Title);

    blog.PublishDate = blog.PublishDate ?? DateTime.Today;

    await _dbAdaptor.Create(blog);
}
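
As an aside I'll add here: since C# 8 there is also the null-coalescing assignment operator, which compresses this further with the same behaviour, just one less mention of the property:

```csharp
// Equivalent to: blog.PublishDate = blog.PublishDate ?? DateTime.Today;
blog.PublishDate ??= DateTime.Today;
```

The same advice applies: only use it if your team reads it comfortably.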

Now consider, does the service method still have a single responsibility? Does it still only orchestrate persistence? It’s entirely reasonable to argue that this new line of code (or the if-statement) should be in its own method. Perhaps a Validate method, which will either correct or fill in what’s missing from the blog if it can, or throw an exception when it can’t. And this is exactly what I would do if there were other validations to do, or if I had decided to keep that multi-line if-statement. So before I decide on this refactor, and because I changed to a null-coalescing operator, let’s consider if there are any other validations to do here that would justify a new method. What is the only thing we need to exist in order to fulfill the requirement to save a blog? The title must have a value! Even if there is no content, no publish date or no image, without a title the blog post cannot exist in the database. The title is our key, and without it any URL lookup will result in a 404 Not Found error. Do we need more unit tests for this? Yes, because we have to cover the null case and the not-null case. Let’s do the null case first.

[TestMethod]
public async Task Create_TitleIsNull_Throws()
{
    Blog blog = new Blog();

    await Assert.ThrowsExceptionAsync<ArgumentNullException>(() => Service.Create(blog));
}

You’ll see that here the act and assert sections are combined, because we have to act in order to use the exception assert method. Now we update the Create method to handle this case for us in order to pass our test.

public async Task Create(Blog blog)
{
    blog.Key = AsUrlTitle(blog.Title);

    blog.PublishDate = blog.PublishDate ?? DateTime.Today;

    if (string.IsNullOrEmpty(blog.Title))
        throw new ArgumentNullException(nameof(blog.Title));

    await _dbAdaptor.Create(blog);
}

Our unit test still fails, but with an unexpected error: Message: Assert.ThrowsException failed. Threw exception NullReferenceException, but exception ArgumentNullException was expected.


What does this mean? NullReferenceException is encountered when a reference (to a class, interface, etc.) is null and we try to use it. When we follow the stack trace we see that the exception originates in the method where we transform the rich title to a URL title. Specifically, it is trying to call the Replace method on a null instance. This means we need to change the order of our orchestration to first do the validation, and then the transformation.

public async Task Create(Blog blog)
{
    if (string.IsNullOrEmpty(blog.Title))
        throw new ArgumentNullException(nameof(blog.Title));

    blog.PublishDate = blog.PublishDate ?? DateTime.Today;

    blog.Key = AsUrlTitle(blog.Title);

    await _dbAdaptor.Create(blog);
}

I also moved the date null-coalescing statement because previously we decided that this was also validation.

Concept: Error handling strategy

But why throw an ArgumentNullException? Let’s quickly consider different methods of reporting an error back to the caller, and why.

  1. You can return some sort of object that contains a string detailing the error. Perhaps it also has an error code property.
  2. You can return null (or -1, or "Undefined", depending).
  3. You don’t do anything and rely at runtime on the NullReferenceException we’re seeing now.
  4. You can throw your own custom exception type, or throw one of the framework provided exceptions.

Certainly all these options are valid, but where all the options are equal, some are more equal than others. You want to provide the caller of your code with all the information they might need in order to help themselves when they encounter an error, so put yourself in the shoes of whoever is going to call your method.

Solution 1: return error model

Under the first solution, the caller has to understand your custom error model, and probably do some error-code lookup if there is one. And where would the return data be when there is no error? On a property that is sometimes null and other times not? Perhaps there will have to be a switch statement in your code to determine the error code that can be returned from a larger set? Is this set exhaustive? Is it null when there is no error? Do you have documentation to provide them about this error model? Do you have unit tests that cover every possible permutation of the values of this error model?

Solution 2: return null (-1, “Undefined”)

Also known as error handling by convention. In this second solution all the calls to your code will have to be followed by an if-statement: if (result == null) or if (result == -1). And what happens when, some time in the future, null or -1 suddenly becomes a possible valid result when no error occurred? It’s even worse when the return type is bool: true, false and… grey area? Additionally, all those if-statements come with overhead for the CPU branch prediction engine, and branch-heavy code is harder for the JIT to optimize into fast compiled code. Your caller will always have to incur these costs, because they cannot call your code without doing if-statements afterwards (this is also actually the case for the first solution: if the error code has value x).

Both these solutions introduce accidental and cognitive complexity and incur a lot of technical debt (more on these concepts in a later post). They also require a lot of documentation and edge-case and branching unit testing. And not just for you, but also for whoever is going to call your code.


Now consider the last two options. There is overhead with exception handling, no one denies that. But that overhead is only paid when an error actually occurs. On the happy path through the try block it is negligible. This is in stark contrast to the first two solutions, where no matter whether an error occurred or not, the if-statement following the call will always have to execute.

But here’s the rub. An exception is just a specific type of error model, like the first solution. The base System.Exception class has a Message property and a StackTrace property, and the type of the exception is essentially the error code. The difference here is that the .NET designers have already thought about and answered all the questions put to the first solution. The result is that the C# language has a first-class-citizen approach to exception handling, and you would be well advised to make use of it. Learn about the different exception types that the framework offers, and throw them in appropriate places. Create your own exception types by extending System.Exception and throw those in appropriate places. And then provide documentation about your exception types.
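As an illustration of extending System.Exception (the type below is hypothetical; nothing in this series defines it), a domain-specific exception can carry exactly the context a caller needs:

```csharp
using System;

// Hypothetical domain exception; the name and UrlTitle property are illustrative only.
public class BlogNotFoundException : Exception
{
    // The key that failed to resolve, so the caller can log or display it.
    public string UrlTitle { get; }

    public BlogNotFoundException(string urlTitle)
        : base($"No blog entry exists for URL title '{urlTitle}'.")
    {
        UrlTitle = urlTitle;
    }
}
```

A caller can now catch this specific type and, for example, map it to a 404 response, while an unexpected NullReferenceException still surfaces as a 500.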

More generally, make an effort to learn the best-practice approach for the language that you use, understand it in depth, and make use of it instead of trying to reinvent the framework.

Finally, you can omit a lot of defensive code and rely on the runtime exceptions (like this NullReferenceException from the unit test). Just know that it will make debugging your code and providing support to callers of your code much harder. NullReferenceException and ArgumentNullException have two very different meanings, and the difference can spill over into your API responding with a 500 Internal Server Error as opposed to a 400 Bad Request.


Now our test passes. We don’t need to write another test for the not-null condition this time. Why? Because I copied the code from the Create_Verify test to create all the other tests. By doing that, all the other tests had a title set in the arrange section. Had I written the date tests from scratch I would not have set the title at all, because it wasn’t being tested in those tests, and they would have failed on the same NullReferenceException. But because those tests do set the title, we’re already covering the not-null branch multiple times, i.e. no exception is being thrown.

There is value in not copying and pasting code. Even your own code. When you copy code you almost always include stuff from the source that you don’t need at the target. This is even more true when you copy code from sources like StackOverflow. Had I written the date tests from scratch, I would have encountered the NullReferenceException early and it would have become apparent that I was missing additional validation. Instead I had to sit and think about my design and my tests to realize that I was missing the validation for the title. In fact, if I was not doing this write-up of my work, it might never have occurred to me at all. Beware the copypasta!


Now that we’ve finalized the last of the validation and all our tests pass, let’s reconsider that Validate method mentioned before. The Create method now has quite a few lines of code, and it starts with that if-statement. It’s not complicated to understand, but it looks messy. We can restructure this to be much more explicit about its orchestrating responsibility by creating that separate method for some of these lines.

public async Task Create(Blog blog)
{
    Validate(blog);

    blog.Key = AsUrlTitle(blog.Title);

    await _dbAdaptor.Create(blog);
}

private void Validate(Blog blog)
{
    if (string.IsNullOrEmpty(blog.Title))
        throw new ArgumentNullException(nameof(blog.Title));

    blog.PublishDate = blog.PublishDate ?? DateTime.Today;
}


In this part we fulfilled two requirements when persisting a blog entry, and discussed single responsibility. We built on the infrastructure that we put in place in part 3. We wrote our first algorithm of sorts to clean up a piece of text, and we could execute that algorithm without having a console or web app, simply by using unit testing. We saw how unit testing helps us to immediately test different conditions or branches in our code, and that a branch is not always obvious to us. We saw how copying code can lead to missed implementation or even errors. We saw how we managed to convince ourselves to extract another method as part of refactoring in order to keep our code clean. We also discussed method implementation details like keeping coupling and dependencies to a minimum, not wasting effort on premature optimization, and the benefits of supporting exception handling by throwing.

Design never stops. From the highest overview to the lowest line of code, our design today is always important. Every single decision that we made in this part will play a role in the next part, and thereafter, for future-us. But what is also important to understand is that your design today cannot be perfect. The future will always bring changes that will challenge your design, and the only way to ensure that you can change effectively and efficiently is with clean code.

Since we’ve completed our solution architecture’s design, we can now start putting an actual solution together in Visual Studio. But what will our starting point be?

Bottom up, top down, or?

Either approach is correct of course - we're not limited by physics. It all depends on what you have designed in enough detail at this point that will allow you to start on the code. In this series so far we have deferred all the decisions by using abstractions, so the only part that we can code now is the core of the blog domain. Let's put a solution together for that:


There are two projects here, and one of them is the cross-cutting, or common, or shared bits that everything is dependent on. This is suffixed with ".Abstractions" and separated out from the main domain project. I do this because, again, it isolates those elements that are dependencies for other things. It reinforces the single-responsibility principle, and it tells me (and other developers that might look at this code) that here be no implementation details, only representations. The code for all of these abstractions so far is as follows:

    public interface IBlogDatabaseAdaptor { }

    public interface IBlogOwner { }

    public interface IBlogSyndication { }

    public class Blog { }

What do you notice here? There is literally nothing in the code.

What else do you notice here? There is a subtle difference between the project name and the different namespaces inside it: the namespaces omit the word ".Abstractions". The reason for this is simple: your consumer is interested in your domain, and from their perspective anything public is available to them, regardless of whether it’s an abstraction or not. They will call into BlogService to get an instance of type Blog, and for them it’s annoying to have two using clauses at the top, one with and another without the ".Abstractions" suffix.

This is also a standard that Microsoft now follows throughout their NuGet packages. For example, if you are writing a class library and simply want to provide a method to register your services, all you need is access to IServiceCollection, which is in the Microsoft.Extensions.DependencyInjection.Abstractions package. However, when you need to actually call on the service collection implementation and build an IServiceProvider yourself, like in a console application startup, you need the whole nine yards, which is the Microsoft.Extensions.DependencyInjection package. Those are two separate NuGet packages, one lightweight and the other with all the default implementations, but you only ever write using Microsoft.Extensions.DependencyInjection regardless of which package you’re including.

This sort of naming convention discussion is part of your API’s design, and it is part of what makes a good API. The example is specific to .NET, but the concept of a good naming convention for your API is universal. By following the standards, we align ourselves with the tools (like how Visual Studio IntelliSense will make our code discoverable) and with how the rest of the community (and those in your team and company) will perceive your code.
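To make that class-library scenario concrete, here is a sketch of such a registration method (the name AddBlogServices and the scoped lifetime are my assumptions, not from the series):

```csharp
// Only IServiceCollection is referenced here, which ships in the
// lightweight Microsoft.Extensions.DependencyInjection.Abstractions package.
using Microsoft.Extensions.DependencyInjection;

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddBlogServices(this IServiceCollection services)
    {
        // Register the domain service; callers supply their own adaptor implementations.
        services.AddScoped<BlogService>();
        return services;
    }
}
```

A console application that actually builds the provider with services.BuildServiceProvider() would need the full Microsoft.Extensions.DependencyInjection package, but this library does not.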

Filling in the blanks

Now that we’ve got a feel for the core domain solution structure, we can start to fill in the blanks. Again, let’s continue with the abstractions. We revisit the requirements again, but this time we look at the verbs. However, don’t start typing away at your interfaces willy-nilly. This is where we now have to start considering the specific use cases or user stories, and that comes with some process.

By test driven development

We take the first user story from the spec:

As an anonymous user I want to browse to a specific blog URL and see the blog content.

Functional requirements:
  • The blog title should be in the url, with all encoded characters like punctuation removed.
  • The blog title will be displayed as a heading
  • The blog publish date will be displayed under the heading
  • A blog article image or lead-in image will be displayed if it exists
  • The blog content will be displayed
Non-functional requirements:
  • The blog should load in less than 2 seconds
  • The blog should scale correctly on all popular screen sizes

To satisfy this user story we need to get the blog content, title, all of it. Which parts of this user story are applicable to the BlogService? Which parts are about presentation? Which parts are about persistence? This is the most difficult bit of understanding your requirements. A requirement or story typically spans across your abstractions and touches on all your layers. If we were to create tasks for this one story we would have tasks for the UI, the service and the database. Until now we’ve only really designed the service, so let’s just focus on that for now: the service should be able to supply a blog post representation given the title of the blog post. Let’s implement that. We start with a unit test project and a unit test class.

[TestClass]
public class BlogServiceTests
{
    [TestMethod]
    public async Task Read_HasModel()
    {
    }
}

What we have here is a test class with a test method (I’m using MS Test here) as denoted by the attributes. The test method is called Read_HasModel since the verb that this story is related to is the read verb, and we want to assert that we get an instance of a blog back.


Now, there are three parts to a unit test: arrange, act and assert, typically referred to as AAA. However, I never start with arrange or act, because I want to first think about what the result should be and how I will test it, before thinking about what I need to build. I also follow the Red/Green/Blue method, which means that we have to fail (red) first, so we have to get our asserts in first. In this case we want a blog entity. Let’s put that in.

[TestMethod]
public async Task Read_HasModel()
{
    Assert.IsNotNull(result);
}

But where does result come from? That should be what the service gives me back, right? Let's put that in.

[TestMethod]
public async Task Read_HasModel()
{
    Blog result = await service.Read(title);

    Assert.IsNotNull(result);
}


So now we have something that takes a title and gives us a result, but what is that something? It is the BlogService of course, so let’s put that in.

[TestMethod]
public async Task Read_HasModel()
{
    var service = new BlogService();
    string title = "";

    Blog result = await service.Read(title);

    Assert.IsNotNull(result);
}


Is our unit test completed now? Pretty much, but we still have to do a few things. It doesn't compile yet, for a start. We add a reference to the domain project that it is supposed to test. Once the BlogService reference is resolved, we use the IDE tools to create the Read method on that service for us. Notice how we didn’t have to write any code for that.

On a side note: notice how I didn’t use var result but was instead explicit about the type Blog of the result? This is on purpose. It’s not clear from the method call service.Read(title) what the type of the result is for someone reading the code, and being explicit also lets the code completion accurately type the result of the generated method. Had I used var result instead, it would simply have typed the generated method’s result as object, and that’s not something you want.

Now it compiles, so let's run the test. It fails with a NotImplementedException error detailed in the test result pane. This is because the code generation tool doesn’t know what to put in the method, so instead it makes it obvious and fails fast. That’s pretty cool, because failing fast is also really good API (we'll see more about that later).

The unit test requires that we get a blog post entity. So let’s give it one. We update that Read method accordingly.

public async Task<Blog> Read(string title)
{
    return await Task.FromResult(new Blog());
}


Now our unit test passes. Finally we have moved from red to green, and crucially as per the process, we’ve only written enough code to pass the test. Let’s move to blue, and consider if there is any refactoring to do. At this point it doesn’t look like it - there is only one line of code!


Now we need to go back to red by creating our second unit test as we work towards exhausting all the test cases. But what will we test? Let’s consider the first item of the requirements, about the blog title being in the URL. This is something a lot of blog sites do, in fact. Imagine you get a link from a friend to read something, but the link contains only an opaque ID: you can’t see from that URL what the article is about. A link that contains the title instead is pretty clear (and also search engine friendly). What does that mean for our service though? If we use this kind of URL scheme the service has no way to locate the blog’s content using something like a blog ID; it only has the title. The title is the only input by which the content should be fetched from the database. So we need to verify that the service correctly passes it along to the database queries. We start with the asserts again so that we fail our unit test. We want to make sure that given a specific input, we get back a specific output. We also copy the rest from the previous unit test.

[TestMethod]
public async Task Read_Verify()
{
    var service = new BlogService();
    string title = "Hello Test!";

    Blog result = await service.Read(title);

    Assert.AreEqual("Hello Test!", result.Title);
}

It doesn’t build. So we again use the IDE’s code completion tools to generate us the Title property. We run our unit test again and it fails. How do we pass it? Let’s look at the Read method again, and change it to give the title a value.

public async Task<Blog> Read(string title)
{
    return await Task.FromResult(new Blog() { Title = title });
}


The unit test now passes, but there is one problem here. We’re passing in a value “Hello Test!” but according to the requirements the URL value will not have any punctuation or escaped characters. Our test value isn’t very realistic. Let’s fix that.


string title = "hello_test";

Our unit test now fails again. The error reads Message: Assert.AreEqual failed. Expected:<Hello Test!>. Actual:<hello_test>. Well, we kind of expected this, because our Read method simply puts the title parameter back into the Blog instance it returns.

This change to the unit test is important however. Until now we’ve been using the unit tests mostly to write our code for us by using code completion features by way of the Red/Green/Blue flow. But now it is actually testing the first functional requirement, where the blog title from the URL contains no punctuation or other escaped characters, while the blog title that is returned is the real title to be displayed as the heading. We want to proceed to fix or write code to get it green again, but we have a problem. Given the URL title, we have no way to know how to reconstruct the real title. Of course not, because the URL title is supposed to be a stripped version which is simply input into a database query and not used for display purposes. This means that we will need to start looking at that database adaptor abstraction. Let’s give ourselves an instance of that, and use it to get a blog entry with a title instead of trying to create one with a title that we cannot reconstruct.

public class BlogService
{
    readonly IBlogDatabaseAdaptor _dbAdaptor;

    public BlogService(IBlogDatabaseAdaptor dbAdaptor)
    {
        _dbAdaptor = dbAdaptor;
    }

    public async Task<Blog> Read(string title)
    {
        return await _dbAdaptor.Read(title);
    }
}

The database adaptor abstraction doesn’t contain a definition of that method we’re now calling, so we use code completion again to create it. Doing this will also require that we update the unit test class since the service now has a constructor parameter that we need to pass. But we can’t instantiate IBlogDatabaseAdaptor.

SOLID: L for Liskov Substitution

Here’s what we can do though. We can provide an implementation of that interface to be a substitute for the real thing during unit testing. This is referred to as a mock instance, or a fake instance. This principle says that any implementation (or subtype) of a type should be able to replace any other implementation of that same type, and the calling code should be none the wiser, i.e. the contract defined by the type (or interface in this case) is honored by either implementation. It is your duty to ensure that the substituted implementation conforms to strong behavioral subtyping. Let’s construct such a mock inside the test project that we will substitute for the real thing.

public class BlogTestDatabaseAdaptor : IBlogDatabaseAdaptor
{
    public Task<Blog> Read(string title)
    {
        throw new System.NotImplementedException();
    }
}

We can now pass an instance of this class to the service.

[TestMethod]
public async Task Read_Verify()
{
    var service = new BlogService(new BlogTestDatabaseAdaptor());
    string title = "hello_test";

    Blog result = await service.Read(title);

    Assert.AreEqual("Hello Test!", result.Title);
}

Now both tests fail! Oh no, what have we done? We can see that both tests report a NotImplementedException in the test result pane. This comes from the stub that code completion put in our mock adaptor. See how failing fast is good API?! We have to fix this mock’s method to support our specific test scenarios. The first test expects an instance. Let’s do that.

public class BlogTestDatabaseAdaptor : IBlogDatabaseAdaptor
{
    public async Task<Blog> Read(string title)
    {
        return await Task.FromResult(new Blog());
    }
}

Notice how this is the exact same code as we had in our service’s Read method before? It has now emerged from our unit testing that this code we put in our service was supposed to go into the database adaptor. In other words, our database adaptor implementation is where new instances of Blog originate from; we shouldn't be "newing up" Blog instances in the domain layer. This passes the first test, so let’s look at the second test. It needs the title to be set.

public class BlogTestDatabaseAdaptor : IBlogDatabaseAdaptor
{
    public async Task<Blog> Read(string title)
    {
        return await Task.FromResult(new Blog() { Title = "Hello Test!" });
    }
}

Finally, both tests pass. But does it actually verify our required functionality? Can we confirm with confidence that the service's code calls the adaptor with the correct value? Let’s change the second test to a data test method.

Actually testing something

[DataTestMethod]
[DataRow("hello_test", "Hello Test!")]
[DataRow("title_two", "Title #2")]
public async Task Read_Verify(string paramTitle, string expectedTitle)
{
    var service = new BlogService(new BlogTestDatabaseAdaptor());

    Blog result = await service.Read(paramTitle);

    Assert.AreEqual(expectedTitle, result.Title);
}

What this does is pair a specific input, paramTitle, with an expected output, expectedTitle. Then it compares the actual output with the expected output. It passes for the first case, but not for the second case.

This is because our mock doesn’t help us to actually test that the parameter is correctly passed to the adaptor. The mock always returns the same output regardless of the input. So we weren’t actually testing anything before. We can fix this by changing the mock.

public class BlogTestDatabaseAdaptor : IBlogDatabaseAdaptor
{
    public async Task<Blog> Read(string title)
    {
        switch (title)
        {
            case "hello_test":
                return await Task.FromResult(new Blog() { Title = "Hello Test!" });
            case "title_two":
                return await Task.FromResult(new Blog() { Title = "Title #2" });
            default:
                return await Task.FromResult(new Blog());
        }
    }
}


Our unit tests again pass. Now we’re getting into the realm of test cases and setting up data to test specific scenarios. So let’s go to Blue again and consider our implementation. There’s a bit of shared code between the two tests, specifically the creation of the service. We can probably move that into a class property and just reference the property instead.

public BlogService Service => new BlogService(new BlogTestDatabaseAdaptor());


We’ve also got a nice and easy-to-understand mock adaptor that caters for two specific scenarios. This is a really good start, but it doesn’t scale. As the BlogService class becomes more complex you will find yourself constantly having to update the mock to support the new functionality, while also not breaking the data already being used by other unit tests. Also, what’s the point of writing a mock with this many lines of code when you could rather spend your time writing the actual implementation? This mock is similar to unit testing with an in-memory database. Entity Framework has great support for that, and if you simply built the real implementation instead you could use it. But it suffers from the same problems as this kind of mock implementation. The only realistic way to unit test using a central mock or database is to set up specific scenarios for each unit test. In this context it means that no two unit tests will be able to test using the same blog title. In more complex systems, where you have a userId, a departmentId, a couple of relationships between them and perhaps a few statuses, things get arduous to set up very quickly.

In order to solve this in a better way, let’s consider some of the available mocking frameworks. There are a few libraries that help us in this regard. I prefer to use Moq, but there are also NSubstitute and FakeItEasy that I know of. Let’s add it to our unit test class.

readonly Mock<IBlogDatabaseAdaptor> _dbAdaptorMock = new Mock<IBlogDatabaseAdaptor>();

public BlogService Service => new BlogService(_dbAdaptorMock.Object);


If we test, we see that both our unit tests now fail again. This is ok, because we substituted our old mock for _dbAdaptorMock.Object in the constructor call to the service. We’re now using the new mock that we set up with Moq, but it doesn’t yet know what it is supposed to do. We can easily configure it for the first unit test in the arrange section.

[TestMethod]
public async Task Read_HasModel()
{
    string title = "";
    _dbAdaptorMock.Setup(x => x.Read(title)).ReturnsAsync(new Blog());

    Blog result = await Service.Read(title);

    Assert.IsNotNull(result);
}


Here we've replaced our custom mock Read method with a quick arrangement when we set up the mock object. I prefer to use mocks in this way because the arrangement contains the input and output within the test itself. For me this is a great way to illustrate the complexity and usage of a specific feature at a glance, as opposed to when the data setup lives elsewhere in a script or a hard-coded class file (or in an in-memory database).

We’re still red on the other test, but it can now be changed completely. The Moq framework has a built-in way to verify a call to itself.

[TestMethod]
public async Task Read_Verify()
{
    var title = "hello_test";

    await Service.Read(title);

    _dbAdaptorMock.Verify(x => x.Read(title));
}


In this way we can directly verify that the adaptor was called specifically with the input that we expect. It doesn’t require any setup. But what about the title being returned I hear you ask? It doesn’t really come into play anymore. We wanted to confirm that the adaptor was called correctly, so previously we leveraged off of the returned titles as a way to ensure that our "use case" was satisfied. But we don’t need it anymore, so we’ll put it aside for now.


For the final Blue phase we notice that in fact, the tests are practically the same. Because of how this mocking framework works, the first test is also testing that the adaptor is called with the specific input provided (the blank title string). In this framework you have the option to do the setup using "any of type" parameters:

_dbAdaptorMock.Setup(x => x.Read(It.IsAny<string>())).ReturnsAsync(new Blog());

This would make any invocation of the Read call return a new empty Blog object. However the way we’ve written it, using the title variable explicitly, means that only the invocation with that specific value will give us that instance, all other invocations will return null. It is effectively testing that the adaptor is called correctly, just like the second test. So, which one do you prefer? I like the second one, but it doesn’t include checking the return type of the method. A developer is free to change the return type of the service’s Read method without breaking the second test. So let’s keep only the first one.
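To make the outcome concrete, here is a sketch of what the surviving test could look like once the two are consolidated. The minimal Blog, IBlogDatabaseAdaptor and BlogService definitions are repeated so the snippet compiles on its own; treat this as illustrative rather than the final listing.

```csharp
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Minimal versions of the types built up in this post.
public class Blog
{
    public string Title { get; set; }
}

public interface IBlogDatabaseAdaptor
{
    Task<Blog> Read(string title);
}

public class BlogService
{
    readonly IBlogDatabaseAdaptor _dbAdaptor;
    public BlogService(IBlogDatabaseAdaptor dbAdaptor) => _dbAdaptor = dbAdaptor;
    public async Task<Blog> Read(string title) => await _dbAdaptor.Read(title);
}

[TestClass]
public class BlogServiceTests
{
    readonly Mock<IBlogDatabaseAdaptor> _dbAdaptorMock = new Mock<IBlogDatabaseAdaptor>();

    public BlogService Service => new BlogService(_dbAdaptorMock.Object);

    [TestMethod]
    public async Task Read_HasModel()
    {
        string title = "hello_test";
        // Setting up with the explicit title means only a call with this
        // exact value returns the instance - so asserting on the result
        // also verifies that the adaptor was called correctly.
        _dbAdaptorMock.Setup(x => x.Read(title)).ReturnsAsync(new Blog());

        Blog result = await Service.Read(title);

        Assert.IsNotNull(result);
    }
}
```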


In this part we wrote our first unit tests, and I can hear you thinking that this was a very long blog post and a lot of effort for what ended up being a one line service method and only one unit test method. This is indeed a simple case, but it serves to illustrate a few things:

  • How I leverage off of code completion features for a lot of the code that ended up being "real".
  • We have executed the "real" code long before we have a working web API that can start up, or a database in place.
  • Our understanding and design of the dependency model in our architecture was confirmed by using the abstractions and substituting a mock.

As your domain grows in complexity you will find that this is a much faster way of working. You can early on confirm with confidence that your code is working by defining all the required constraints of a scenario using the abstractions. Your unit tests will also always remain the vanguard of your code, protecting it from being changed in ways that break previously implemented features and functionality.

In this part we're still designing, but moving one level down from the high-level overview in part 1.

The Solution Architecture - Blog

The next logical level down from the domain level is the Solution Architecture level. This is the first "technical" level. We’ll focus on the blog “continent” of the domain. We already have two concepts here, the Blog entity and the IBlogOwner interface. As explained in part 1, it makes sense that the main entity will have a property that points to its owner, some instance of IBlogOwner. Now let’s look at the type of interactions the blog post has:

  • Read
  • Create
  • Publish

We’re creating blog posts, obviously. And we want to read the blog post back to the user. Obviously the blog post has to be persisted somewhere, like a database. I don’t care for now which database though. Secondly, we need to publish. What does that mean? Pretend we delve into the detailed requirements document and we come up with the following text:

A blog post will remain invisible to the public until published. When a blog post is published it should post to selected social media outlets, including the RSS feed.


This alludes to quite a few actors that we need to keep in mind. "Actors" is a collective term for all the things that you will have to integrate with, whether it is humans, 3rd-party APIs, databases or something on the other side of an abstraction. We already abstracted one actor away - the blog owner. We can probably do the same for all the others. Let’s list all of them, and then see where we are:

  • User Interface - the user can read it
  • Database - we have to persist
  • Owner - the instance of IBlogOwner
  • A social media outlet
  • Another social media outlet
  • Another social media outlet
  • … you get the idea

So which are the low-hanging fruit here? Some of these look suspiciously the same, and we can probably lump them all under one abstraction and forget about them for now. Let’s call that one IBlogSyndication. What do we know about the user interface? Is it web? Is it an app?

User Interface

To be honest I don’t want to think about the type of UI until the end. There is some new tech (Blazor) that is still maturing that I want to consider first. So we want an abstraction here. But there doesn’t have to be an abstraction on the blog domain for it, meaning it doesn’t have to point to something. The UI calls "down" the stack and we just need to deliver the blog data (the content) to it. In that regard there is no dependency to define or invert. We can easily build and test our domain without knowing about the UI. But we have to define a service to build. The UI has to call something, and that something we refer to as a service since (for now) it serves the needs of the UI. Let’s name it BlogService. So this will have some API that the UI can call, and it will return the blog content, probably the Blog model itself. And something that can be called and returns something is also a method or a property. And these belong to classes (at least in C#). So here we have a special type of entity. It’s called a concrete class. It provides an implementation of the business rules and serves the data. In contrast, the Blog model is a model class (it can also be a struct) - it is a simple representation of a blog post and its associated data, typically referred to as a POCO.
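The distinction between the two kinds of classes might be sketched like this. The member names are illustrative placeholders, not the final design:

```csharp
using System.Threading.Tasks;

// Blog is a model class (a POCO): it only carries state, no behavior.
public class Blog
{
    public string Title { get; set; }
    public string Content { get; set; }
}

// BlogService is a concrete class: it implements the business rules
// and serves the data. The UI calls "down" into this API.
public class BlogService
{
    public Task<Blog> Read(string title)
    {
        // Placeholder body - the point here is only the shape of the
        // API that the UI will eventually call.
        return Task.FromResult(new Blog() { Title = title });
    }
}
```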

Database and the Repository Pattern

The next item on the list is the database - we have to persist. Now, there is some best-practice process here that we first have to discuss. A lot of Microsoft articles talk about the repository pattern. There is some underlying culture that says "repository" is a synonym for database code - that it is a special pattern for that specific use-case related to persistence. And while 90% of your work will be to persist in a database that you control, it might not always be the case. As such, I don’t regard it as special.

Break the mould

Any place that you send data to is persisting data. It doesn’t matter if it ends up in your database, or via the Facebook API in their database. The point is that it is persisted. You will never see the database on Facebook’s side, but it is still persisted. This is why I prefer the term Adaptor pattern. It’s simply a catch-all term for a specific kind of implementation that abstracts away the details of a specific method of persistence. It is the general form of the repository pattern.

So back to the Database

But don’t confuse this with an ORM. An (O)bject-(R)elational (M)apping library (like Entity Framework) does not serve the same purpose or role as an adaptor service. Specifically, the database tables don’t necessarily map directly to your domain’s model entities. For example, I can guarantee you that the Facebook API POST models look nothing like our Blog model will. So the adaptor that we need to provide is responsible for mapping between our domain model entities and the data models of whatever underlying technology we persist to. We can refer to this adaptor as IBlogDatabaseAdaptor, but we’re not going to provide an implementation for it yet.
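As a sketch, the abstraction itself could be as small as this. The Read signature matches the one used in the unit-testing part of this series; anything else is deferred until the requirements demand it:

```csharp
using System.Threading.Tasks;

// The domain model that the adaptor maps to and from.
public class Blog
{
    public string Title { get; set; }
}

// The adaptor abstraction: it knows how to reach the underlying
// store and how to convert its data models back into our Blog.
// The implementation is deliberately deferred.
public interface IBlogDatabaseAdaptor
{
    Task<Blog> Read(string title);
}
```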

SOLID: S for Single Responsibility

But what will use this adaptor? The Blog model entity itself, or the BlogService concrete? Consider the following: The UI needs to show some content. That content is represented by the Blog model, so it has to be given an instance of Blog that contains (or represents) that content. Where will it go to get this instance? Perhaps we can add methods onto this Blog model itself? That would violate the first SOLID principle: stick to one purpose. If we bastardize the Blog model like this it becomes cluttered and mixes immutable properties that define a state with stateless method calls. For example, you’ll likely have a Title property (the title of the blog) and also something like a GetByTitle(string title) method. You’ll be fielding all sorts of questions from those who make use of your domain library, like "Does GetByTitle return this instance or a different instance?" and "What happens if the title parameter doesn’t match the Title property of this instance?" It becomes clear we should separate that out, and that having the UI rather use BlogService is a better proposition. Now we can contain the orchestration of blog entries in the service and keep it separate from the blogs themselves. But the service isn’t concerned with how to look in the database at all, only with what in the database should be looked at (e.g. it orchestrates to the correct function call). It can refer to the adaptor abstraction which knows how to look into the database and how to convert that from database models (tables) to domain models (POCOs). So with the information that we have now, let’s make BlogService depend on IBlogDatabaseAdaptor. We’ve just deferred another decision!

Justification for Abstractions

You might be thinking "we’re creating more abstractions than we have entities in the domain". And that would be exactly what we need to do in order to isolate ourselves from dependencies and implementation details. But why the isolation?

  • Design in isolated chunks: We’ve already seen how the abstractions have limited our scope and enabled us to break our design sessions into different pieces. This even allows different teams to focus on each of the different pieces (sounds like microservices?).
  • Strongly-typed interactions: By using language features to define the abstractions (typically an interface or abstract class), we define the methods and actions of interactions between different entities in the strongest possible terms - an unambiguous source code file that you can compile your code against (this mostly only applies to typed languages).
  • Keep your domain fit for purpose: When you start dealing with interactions between your domain and other things (database, 3rd party API etc), you invariably come across mismatches between the models in your domain and theirs. It could be something simple like Name becomes Title, or something more complicated like an API Token that you have to carry along for authentication and sometimes renew when it expires. Using abstractions keeps your domain clean, and it’s the job of the implementations of those abstractions at the edges to deal with those mismatches. You can also view it as applying Single Responsibility on a domain level - that is to say your domain should only be concerned with representing your domain and should not include bits and pieces from someone else’s domain.
  • Test Driven Development: This is a method of working that requires you to first know how to test a feature before you even build the feature. After all, how are you going to prove that your code is working? Abstractions are the only way to achieve unit testing in isolation. Can you always rely on that Facebook API to be available when you want to test? Or perhaps you need to test for a very specific bug under very specific circumstances. What would need to be set up and configured before you can perform your test? When you have your dependencies as abstractions, you can easily provide mocks for all of them. This means that you can construct a very specific narrative in order to test that one really tough bug. It also means that you can test posting a blog without requiring a valid Facebook account and API key to post against, even when you manually test your code. We’ll cover this in more detail in future posts (part 7 and 8).


Let's put up the solution architecture diagram with what we've covered so far:

Cross-cutting concerns

In the diagram above there are a lot of dependency arrows. The most referenced item is the Blog model. But there are other items that have more than one arrow pointing to them also. Those are the two adaptor interfaces. Both BlogService and any implementations of those adaptors depend on them. And they in turn depend on Blog itself. Here is where we start to see some common dependencies, and we can try to extract these into their own "group". Often you’ll see a namespace in a project like helloserve.CrossCutting or helloserve.Domain.Common or helloserve.Domain.Shared. Microsoft has opted to establish a standard within the .NET code bases now using the actual term: Abstractions. Clearly this "group" that we are talking about here is just a collection of abstractions and the commonly used types (e.g. models) that they rely on. So, we’ll refer to this group by its eventual namespace helloserve.Domain.Abstractions. Let's redraw:

A lot of the arrows are omitted now, pointing to the collective abstractions block instead. This is because, in a more involved design, having arrows all over the place gets really messy (it's easier to include them in a nicely structured drawing using a tool like Visio). That is why the names of these items are so important. It’s clear that the universal glyph for a database should be associated with the IBlogDatabaseAdaptor abstraction because of our wonderfully descriptive name, for instance.
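In code, the grouping from the diagram might look something like the sketch below. The member lists are placeholders (IBlogSyndication's Publish method in particular is an assumption at this stage); only the namespace structure is the point:

```csharp
using System.Threading.Tasks;

// Sketch: the abstractions and the common models they rely on
// live together under one namespace.
namespace helloserve.Domain.Abstractions
{
    public class Blog
    {
        public string Title { get; set; }
    }

    public interface IBlogDatabaseAdaptor
    {
        Task<Blog> Read(string title);
    }

    public interface IBlogSyndication
    {
        // Hypothetical member - the real shape of this interface is
        // only worked out once we study the social media APIs.
        Task Publish(Blog blog);
    }
}
```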

But there is another way to visualize this, and that’s through the onion layer diagram:

Here the dependencies are easily visualized, with the most dependent in the middle. However, there is still some indirect association (database and its adaptor for example), plus this diagram doesn’t illustrate the different tiers so clearly.

Concept: Common 3-tier architecture

Our architecture diagram clearly illustrates the well-known 3-tier architecture model (numbered in green in the diagrams). Sometimes you will have more, but rarely will you have less. Why is this? Because by following clean code and SOLID principles you will always separate the concerns of presentation (view models, API models, JSON structure, etc), persistence (databases, 3rd-parties, file systems) and your cross-cutting dependencies and domain models from each other.


We have seen in part 2 how we derived the solution architecture by following logical thought processes, and how it emerged that there are even more dependencies and how those dependencies should ideally be structured. We continued to follow SOLID by inverting dependencies and containing (or deferring) functionality to separate entities to put us in a better position to design in isolation.


It is important to realize that we never know everything, and you’ll learn through experience that you cannot design for all possible futures, outcomes and exceptions. You can only design for what you know at this moment, and then you add some common sense to that. It is a continuously emergent process and this is why it is important that we can adjust our design quickly and without fear of breaking everything that has come before. This is where clean code and SOLID principles help us.

For a long time now I’ve been looking for, and not finding, good free training material that takes clean code (or clean architecture) and combines it with the SOLID principles into a good, practical set of reference steps. And for a while now I’ve been wanting to compile such a set of reference steps, perhaps not as blog posts, but rather as a structured approach that follows the actual day-to-day work, distilled into the crucial bits, that can be used as course material for self-study. Then, this quick thread happened:

And then this quick thread:

So I was inspired. But what will this series cover?


At the time of writing I need to revamp my personal website, and I have a few ideas that I want to spin into software as services as part of the new build to perhaps generate some alternate income from. This material will cover that build-process, focusing on the following pillars:

  • Clean Code
  • SOLID principles
  • Test Driven Development
  • Domain Driven Design

Throughout these posts I will introduce concepts that apply to the process in general. The first of these concepts are the different levels.

Concept: Levels

In all aspects of software development, there are different levels. Architecture isn’t just one document, and design doesn’t happen in just one place. There are high-level and system level overviews, infrastructure architecture, tech-stack and solution level implementation considerations as well as low-level integrations and exchanges. All of these things combine, so you have to design on all of these levels. To illustrate in a more physical medium, there’s no point in an architect drawing out only the roof of a new building, without considering pillars and other load-bearing walls.

As you move up and down the hierarchy of levels, you have to identify all the required entities and constructs to sufficiently complete that level. You have to make decisions on each level on the dependencies within the level, and you will certainly at some point separate different parts of the level, and focus on each of these independently.

And the most important thing to realise is that this applies just as much to the junior developer facing a single user story as it does to the solutions architect facing an enterprise requirement. The only way to build good API is to design good API. On every level.

Throughout these steps, I will always start at the highest level and get the essentials correct. Then I move down a level (perhaps singling out a specific area or breaking it up into smaller pieces), continuing until I reach a sufficient level of detail to see how all of it will bubble back up to the top to support the high-level overviews, assumptions and requirements.

Concept: Abstractions

Perhaps you’ve heard of “analysis paralysis”? Diving down a rabbit hole of design iterations is dangerous if you don’t have an exit strategy. That strategy is typically different levels of abstractions. You have to learn to identify those levels of abstractions, and places or areas to rely on abstractions. In clean code literature, this is referred to as “deferred decisions”.

When facing even a single user story (let alone an entire requirements document), you might already have at least two places of abstraction to exit on: presentation and persistence. You can split your design at these abstractions and focus on the different parts at different times. The SOLID principles make identifying these places of abstraction pretty easy, and so you can time-box design sessions in a meaningful way and show progress quicker and more reliably.

But what is an abstraction? In its simplest form it’s a definition that promises or guarantees specific behavior. We’ll shortly see many examples of this.

The Domain

At the highest level of my domain for my own website and services, I have the following requirements:

  • Display project
  • Author project
  • Display blog posts (optionally related to a project)
  • Author blog posts (optionally related to a project)
  • Publish blog posts
  • Take in an order (a part, assembly or URL)
  • Process an order
  • Update an order
  • Notify about an order
  • Complete an order
  • Show an assembly
  • Drill down into assembly (show sub-assembly)
  • Drill down to individual part (show part)
  • Add new assemblies and parts
  • Link parts as substitutions (similar)
  • Maintain assembly and part details

Don’t worry about what these different concepts are (although blog and order are pretty straightforward). These are just the basic, high-level requirements.

Main Parts

Let’s define the main parts of the domain, and establish a hierarchy of domain entities and concepts.

  • Project
    • Read
    • Create
  • Blog
    • Read
    • Create
    • Publish
  • Order
    • Create
    • Process
    • Update
    • Notify
    • Complete
  • Assembly
    • Read
    • Drill down
    • Create
  • Part
    • Create
    • Link
    • Update
    • Delete

Do you see the differences here? I’ve identified the main entities (or nouns) that appeared in all the texts, and then extracted all the interactions (or verbs) per entity. Another difference?


I changed some of the language. Author, Take and Add became Create; Maintain became Update and Delete; and Display became Read. But why did I choose to change the language? Often, a user has a different name for something than what we as programmers typically call it. They don’t update anything, they edit. And they add instead of create. This duality in language is always a problem, and it is a skill in itself to fluently converse with a client in their terminology, and convey that same message to your team in typical or common technical terminology. Practise it when you design.

So, we have clear CRUD (Create, Read, Update and Delete) functionality on all the entities. The names now match the acronym. But there are also some interactions that didn’t change: Notify, Publish, Complete and Drill Down. These might mean something special (we don’t know yet), and it will be the job of the requirements gatherer (a business analyst or systems analyst) to define these terms using clear, unambiguous language for the designers and developers.

Plate Tectonics - a simile

On this highest level we already have well-defined separations. These are the main continents of your domain, and their interactions and abstractions now need to be defined. As we delve deeper into this design, some of these splits might move a bit, become clearer, or result in smaller earthquakes that show us cracks in our design.


A good communication tool is a diagram. Things like architectures are communicated this way. Let’s draw a domain diagram that you would typically draw in a whiteboard session:


What are those arrows though? If we glance back at the original high-level requirements, there is one bit of language there that we haven't considered yet. It appears that a blog post may be related to a project. This shows us there is a dependency here, but which way? The language defining dependencies is often ambiguous and unclear. A clue here is that the reference to a relation appears in the text about the blog, and not on the project. Thus it seems that the blog will be dependent on the project.

Similarly, an order can be for a part or an assembly (and a URL, but it is not a central concept here). The reference to the relation, again, appears on the order which gives us a clue. These relationships are included in the diagram already.

Dependencies as abstractions

Now that we have identified the dependencies, how do we go about catering for them in the main system architecture? Before we get into that detail, consider the following: do we always want to drag along references to Project whenever we talk, code and test against the blog entity? Will it help us in any way to always have to include using helloserve.Domain.Project in all our classes and code files for a blog? To do this is Accidental Complexity. It’s not wrong by any means, but we can probably do better. And that would involve using abstractions. Let’s consider an interface called IBlogOwner. A blog post carries around a reference (or property that points) to this interface. The Project entity can then implement the interface (or fulfill the guarantee of the abstraction). So what’s happened here?

SOLID: D for Dependency Inversion

We have inverted the perceived dependency that a blog post has on a project. This is apparent in more than one way:

  • A blog post now simply deals with its owner through a well-defined API that is this interface. It is now only dependent on an abstraction within its own namespace.
  • We can now test all the blog post code in isolation, and provide stub or mock implementations of an owner where required.
  • This is our “analysis paralysis” exit strategy around designing the blog post entity. We don’t have to consider anything beyond this abstraction.
  • Any entity (not just Project) that wants to be an owner of a blog can now define itself as an owner, i.e. it implements that interface. This means these entities are strongly typed as owners, and it is clear to anyone reading the code that they are owners; tools like “Find all references” will show all the possible types of owners.
  • The Blog entity doesn’t have to be maintained when a new entity is added that can be an owner. This means that changes to requirements later on carry less of an impact on the code base.

So how would we deal with the dependencies between a part or assembly and the order? You can bike-shed for hours over whether Assembly should inherit from Part (perhaps it’s just a collection of Parts, right?), which would make an Assembly both a Part and a collection of Parts. Or you can exit before all that by introducing an abstraction called IOrderableItem. The Order will have a list of these interfaces as its items, and that’s all we care about. We have now deferred the decision on how to deal with Assembly and Part by inverting the dependencies here too.
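As a sketch, under the assumption of a simple Description member (purely illustrative, not from the requirements):

```csharp
using System.Collections.Generic;

//the order only cares that something can appear on it as a line item
public interface IOrderableItem
{
	//hypothetical member for illustration
	string Description { get; }
}

public class Order
{
	//a list of abstractions; the Part vs Assembly decision is deferred entirely
	public List<IOrderableItem> Items { get; } = new List<IOrderableItem>();
}

public class Part : IOrderableItem
{
	public string Description { get; set; }
}

public class Assembly : IOrderableItem
{
	public string Description { get; set; }

	//an assembly can still contain parts without the order knowing or caring
	public List<Part> Parts { get; } = new List<Part>();
}
```

The Order class never has to change, no matter how the Part/Assembly debate is eventually settled.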

Concept: Naming

But what if we instead created an interface called IProject? Would that be the same? You could certainly employ it in the same way as explained above, but later, if I wanted to write a blog post about a specific part, the Part entity would have to implement IProject. That is pretty poor, since it will lead to very confusing code (a part is also a project?). It will make the code difficult to maintain and hard for new team members to understand. Naming in your code has to be aligned with your design. It is absolutely critical to be as explicit and literal as you can be, and to apply good foresight and common sense when naming things.

Update the diagram

Now that we’ve got our relationships defined, let’s update the diagram:


For the moment, we’ll stop here on this level. We have covered a lot of ground already, and unpacking Part and Assembly will probably result in too much redundant reading at the moment.

The domain overview level already has a lot of detail now, but never make the mistake of designing in too much detail upfront. By using dependency inversion we avoided an overly complicated discussion and created well-defined sub-domains within our overall domain. The broad-strokes design at first feels superfluous, but it is crucial for clear understanding and for keeping complexity in check.

More winder troubles

Posted on 2019-02-10 in The Blue Car

Trim tools? What’s that?

That was the response from the sales staff at Midas when I asked where the trim tools are. I was actually surprised that they didn’t know what I was talking about, but I probably shouldn’t have been. Anyway, I needed to pull my door card off, but I didn’t want to use my hands again. You see, the NA MX-5’s door cards are, essentially, cardboard. A funny mix between cardboard and plywood, actually. And well, in 30 years those door cards have been on and off a few times, and the slots where the clips fit are starting to show it.

The last time I had both of my door cards off was when I took it to the paint shop. When I refitted them I replaced some of the old OEM clips with replacement “close-enough” clips that I had bought in bulk. These are slightly bigger however, and so they fit quite a bit tighter than the OEM stuff. I found this out when I pulled the passenger door card off to investigate the winder motor that had stopped working not too long ago. The replacement pins didn’t pull out of the door that time; they pulled out of the door card.

Some shots of the back of my passenger door card. The white clips are the new ones, and I've had to already start patching up the frail cardboard/plywood with tape.

And much to my disgust, a few days after I fixed the winder switch the window itself got stuck. No amount of silicone spray or Q-20 (WD-40) would do the trick. It did move up and down if you helped it with your hands, but being on the passenger side, that was not really practical. Cue me trying to find trim tools. Eventually though my package delivery arrived and I could get to work. Using the tools was a lot better and less error-prone than using my hands.

Part of the set of trim tools. Strangely enough, TakeALot doesn't list this set anymore.

Removing the window is really easy. First remove the two stoppers at the bottom of the window on either side of the rail. These block the window from going up too high. Then you simply undo 3 screws that hold the window to the winder harness. Once these are out (it doesn’t fall into the door, so you don’t really have to worry about holding it) you can simply slide the window out the top, tilting it slightly to get it past the rubber seal ends without tearing them.

You can probably guess that the culprit was old grease. There are two rails inside the door - one for the winder harness that has two rollers, and one that acts as a guide to the rear end of the window which also has a roller bolted onto it. And then of course the front side of the window slides up and down the rubber seal. There was a tremendous amount of black, tangy grease that came out of these two rails. I suspect that previously someone had just lubed it up without cleaning it out first. Probably more than once. And of course my rally saga didn't help.

The roller on the window before I cleaned it, and grease on an earbud.

Cleaning and reapplication of the grease took 4 rags and many earbuds. I also cleaned off the rollers on the window and the winder harness as best I could. The one on the window was so heavily caked up that it wasn’t able to rotate anymore. Putting it back together again is of course very easy. It’s somewhat improved from before it got stuck, but don’t expect miracles. As I’ve said before, the MX-5 window winders work in geological time scales.

The Cooler

Posted on 2019-01-30 in The Blue Car

My son, who’s 4 years old, refers to the air conditioning as “the cooler”. It was broken, and he was complaining.

A long time ago I got the air conditioning system refurbished with new (modern) fittings to be able to fill it with legal refrigerant, as opposed to the old CFC-laden stuff from before. This was 3 years ago - I did it when I put the car back together again after it was at the paint shop. But this December it stopped delivering cool air, and the rev counter dropped less and less every time I engaged the compressor. There was no more pressure in the lines.

So I took it back to CoolCo, and John and his team had a look and delivered the bad news: their test showed dye all over the pulley. For a moment I considered whether I could just take the compressor off myself and let them refurbish it (saving on labour costs), but ultimately I booked the car in for the repairs. A decision that I was happy with a few days later, when they called to ask if they could keep the car for an extra day because there was a problem.

In the shop!

Here’s how I understand it. The compressor has a mechanical seal around the pulley shaft, and this seal was of course leaking. And since it was a system from before the new type of refrigerant, they had to fit the old-style seal. This didn’t last the night, and when they tested the next morning for leaks, it was already mostly empty again. I’m not sure why a replacement seal didn’t work, but John did mention that these mechanical seals are really hard to get off without damaging the compressor itself. In this case, it appears the damage was to the housing of the compressor, and the housing is what the stationary part of the seal presses its o-ring against. Perhaps it was scored with a pry-tool.

This is an example of the old type mechanical seal. The left side is locked to the shaft, and rotates with it. It has an o-ring on the inside, against the shaft. The right side is a highly polished part that remains stationary, against which the rotating part, um.. presses and rotates. This has an o-ring on the outside, which seals with the housing.

Fortunately they were able to source a more modern replacement housing that accommodated a modern version of the mechanical seal. This did the trick and I (thankfully) got the car back before the sweltering heat of the weekend, and the park-off on Sunday.

But here’s the funny thing - you might recall how I struggled with the one bolt that holds the compressor bracket to the block. I told them about this and they investigated it too. When they were done they gave me a bolt back (one they had removed and replaced) with its thread completely stripped.

The bolt they gave me back (top) compared to the bolt from the scrapyard that I fitted (bottom)

I’m not sure yet what went wrong when I was working down there. This bolt is clearly not the same bolt that I found at a scrapyard and had to put in there using Loctite. Anyhow, I’m not going to lose sleep over it (Jason’s words), and I’m again very impressed by their work, turnaround time and final cost of repairs.

Christmas Gremlins!

Posted on 2018-12-18 in The Blue Car

So the passenger window stops working, when you press the up button.

Only the up button? Yep, and so the window was stuck inside the door. I pulled the door card and tried to inspect the winder mechanism. I saw that the part where the cable is exposed is a bit slack, and I immediately think the winder spools have come undone. The problem is, I can’t get anything out with the window stuck in the door, and to get the window unstuck I have to lift it all up and out. About an hour later I have both the rails and the rubber track undone and removed. I pull out the winder motor and track, but it all appears intact.

Bench testing the window motor revealed it to be in complete working order. So I test the plug in the door, and I measure +12V and -12V as I toggle the switch. Seems legit, and after bending the plug contacts a bit to make sure of proper connections, I still have no result. So what gives?

Well, sometimes a fancy multi-meter is not the right tool for the job. If you follow the CAR WIZARD, you’ll know that he uses a test bulb, specifically an incandescent 12V type. This is so that there is actual load at the points or connectors where you want to test, because current leaks and volts don't. In layman’s terms, the potential for it working is there (volts), but in reality it might not (current).

I pulled one of my old headlights from storage and hooked that up to the plug in the door. Sure enough, pressing down lit it up, pressing up did not. But now, the volts read +12V and -0.5V. Because of the load (resistance) closing the circuit, some current was flowing through, but only enough to warrant 0.5V. So what does this mean? Basically two options:

Firstly, there is a leak to ground, and most of the current is flowing to the body. This typically happens when your loom (or a wire) has worn through from rubbing and shorts out against some metal surface. This is the harder problem of the two to solve, and might require rewiring a significant piece of your circuit or loom. Hopefully it wasn’t that.

The second option is that, according to the math (V = IR), the rest of the “work” is being done elsewhere, meaning there is resistance somewhere else in the circuit that is dropping most of the voltage. The only place this could be is in the switch or relay (or at the fuse, but that would most likely be an open circuit). Here you have a choice of either trying to open and repair the switch, or replacing it if it is finicky or a sealed unit (like a relay). With stuff like headlights, winder motors and wiper motors, there is significant current draw because of the heavy loads, and the switches and relays tend to burn carbon onto the connections as they spark. This puts a limit on their lifespans, and it also introduces resistance over time.
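To put rough numbers to that second option (the 21 W bulb rating is my assumption; any incandescent test bulb gives the same qualitative result):

```csharp
using System;

//assumed: a 21 W incandescent test bulb at its rated 12 V
//hot filament resistance from P = V^2 / R:
double bulbResistance = 12.0 * 12.0 / 21.0;   //about 6.9 ohm

//only 0.5 V was measured across the bulb under load,
//so the current actually flowing (I = V / R):
double current = 0.5 / bulbResistance;        //about 0.07 A

//the other 11.5 V is being dropped somewhere else in the circuit:
double faultResistance = 11.5 / current;      //about 158 ohm

Console.WriteLine($"Fault resistance ~ {faultResistance:F0} ohm");
```

Well over a hundred ohms of stray resistance in a circuit that should be near zero - exactly the kind of figure carbon-caked switch contacts produce.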

Fortunately for the NA Miata, the window switch unit is an old, robust design from yesteryear: expensive to replace but easy to fix.

Carbon-covered connections

In the end, a quick and light sanding of the connectors did the job.


Posted on 2018-11-27 in The Blue Car

Entropy is a universal constant that will always catch up with you. It erodes, it rusts, it wears out, it becomes brittle. Oh wait, I’m confusing my physics lessons with my car again.

The good news is that it’s been a long and fun-filled time with the car, but since September it’s been in and out of the garage all the time. Old age, a poor choice of an aftermarket part and some, well, bad luck have reared their ugly heads in the shape of seepage and leakage of oil and brake fluid, and have messed up my steering.

So after my rally (or dirt oval) excursion everything was covered in dust. Even the underside of the hood. It’s always fun cleaning up after yourself! But, it also became very visibly obvious that I had a problem - the valve cover gasket was leaking.

Fine Malmesbury dust

This is a very simple rubber seal that fits all around the cover and along the middle around the spark plug holes. This cover has to come off when the timing belt needs to be changed and so on. And, for 8 years, every time I had to take it off I put it back with the same seal. Never had problems. Then last year during the annual service I saw that the no. 4 plug was a bit oily. Heh, so probably time to replace that gasket then. So I did, along with the cam angle sensor seal. Except that I bought an aftermarket gasket. It fitted fine enough, but almost exactly a year later it started seeping oil along the entire length of the valve cover, on both sides. I guess that explains the (very big) difference in price from the OEM gasket. So of course I waited a few weeks for the Mazda one to arrive.

But to replace this, and to remove the valve cover, the PCV first has to be unplugged. This is a rubber hose that attaches to the valve cover using a special pressure-release valve, much like a pressure cooker valve. Except mine wouldn’t budge. I tried with all my might, and finally it cracked in two. Bugger. All the years of heat cycles had taken their toll. So I ordered a new one, and also the rubber grommet that seals it against the valve cover. This was a good decision, because when I tried to take out the old grommet it also simply tore up into pieces, which I had to fish out of the valve cover chamber. What I didn’t expect to find, though, was paper. Someone had stuffed a bunch of paper into the hole of the PCV at some point (probably to block it off, for some reason?). It was hardened and crusty from the old oil, but it came out easily enough. It could also have been old excess gasket-maker. Never use that on your valve cover. Never.

What was left of the PCV grommet.

Then suddenly my power steering started acting up and stopped working. The belt was slipping at first and then it came off. I put it back but it just came off again. Turned out, the air conditioning pump was completely loose and any strain on the belt would just lift it up, which slackened the belt. The reason was that a bolt was missing. There is a second bolt at the rear of the mounting bracket which was half-way undone, and this was all that was holding it in place. At first I thought this was a bolt similar to the swivel bolt that the power steering pump uses to adjust its position in order to set the belt tension correctly. It was difficult to confirm though, as I could not find any reference in the EPCs to the air conditioning pump mountings. It wasn’t the same. In fact, the missing bolt was not specific to the air conditioning at all. It was an oil pump bolt that also happens to hold and secure the air-con pump bracket in place, if air-con is fitted. And it’s an 8x1.25mm thread bolt, so I could not find any local supplier that could assist. A scrapyard in the UK and a week’s wait later, and I had the correct bolt. But it didn’t tighten enough. Clearly I (or someone before me) had stripped this hole’s thread in the past. So I just put on some Loctite and will hope for the best.

What was funny, however, was that I found two stray bolts between the air-con pump and its mounting bracket. It proved a real nuisance getting them out of there. Perhaps the air-con pump had been removed at one point, and a careless mechanic put the bolts in the bracket for safe-keeping, and then put the pump back right on top of them. Still, I now have two very large bolts extra.

I've encircled the tip of the one bolt where it was visible between the pump and the bracket. Oh, and all the dust too.

If you recall from a few posts back when I refurbished my rear calipers, what a big success that was. Well, unfortunately I wasn’t so lucky this time. My parking brake light started coming on intermittently again as a low fluid warning (this lesson I had learnt, right?). After a few top-ups and cleanups and wipe-downs, it was clear that my brake master cylinder was leaking. But additionally, the rubber grommets sealing the fluid reservoir were also leaking?!? The shelf against the firewall was pretty much covered in fluid by now. Taking it all apart is not trivial. The hard-line nuts are tight and you struggle for space with the spanner. The booster, however, is a particular royal pain in the ass to remove. The four nuts on the inside of the pedal box have so little access that only my small ¼’’ socket wrench could fit, always at an angle on the nuts (so it’s very easy to strip them), and even then it could only turn about 15 degrees at a time. Don’t even attempt a spanner (ratchet or otherwise). And you’re upside down on your back brewing a headache. I read multiple forum posts about how guys found it easier to just remove their dashboards completely instead (especially on the LHD cars). I had removed my steering wheel to get in there properly. And once those nuts are undone, you still can’t get it out, because the fuse box and the thickest part of the loom are in the way, and the throttle cable and its bracket, and the hard-lines and the pressure regulator block… Sometimes I think neurosurgeons have an easier time.

The booster, upside down.

The booster looks worse than it is. There’s no moisture on its rubbers, and the boot around the actuator that attaches to the pedal is in very good condition still. I did clean it up and put a rattle can onto it.

So fast forward a few weeks and I’m prepping for spraying that shelf and part of the firewall. It was completely rusted out and there was a lot of cleaning up to do. The rust treatment (I used N1S1) worked very well, and the color coats and clear coat came out all right for my first time ever using a spray gun. It’s not an area that’s immediately visible, but I wanted to treat it properly regardless.

Before and after cleanup.
Wrapping, prepping and painting.

I had the master cylinder refurbished, but it still leaked after that, from below the reservoir. The brake and clutch people reckoned my original reservoir could be compromised, so they dug out a spare reservoir from their parts cache; they believed the grommets were good.

But it didn’t work. I fitted it all back, the booster took a few tries because it is really hard to get it back in again, and because I forgot the dust gasket at first. It’s also easier to fit the regulator block first and secure its hard-lines and then to fit the master cylinder - something I’ll remember when I have to disassemble in the future. But after bleeding there was a puddle of brake fluid on the shelf again, ruining the paint job almost completely.

At this point I was pretty fed up and dejected, but my one friend got hold of those grommets for me and we replaced them. Do you see the irony here? I replaced the grommet of the PCV right off the bat, but failed to do the same with the master cylinder reservoir. Hey ho.

Pulling out the reservoir from the master cylinder is a really hard thing to do. My forearms still ache three days later. But with the new grommets there is no more leaking. We also had to swap out the fluid level switch from my original reservoir into the new one. At the moment there’s quite a bit of travel on the brake pedal, but the car stops good. I guess I’m just used to the CX-5 pedal now again, but we’ll do another round of bleeding sometime.

Fancy new eyewear

Posted on 2017-11-07 in The Blue Car

What do a 25-year-old Mazda and a new Jeep Renegade or Wrangler have in common? Let’s just say, it’s remarkable how engineering standards and conventions have held up over time, cultures and continents.

A while ago I saw on YouTube how a fellow enthusiast fitted the round 7'' LED headlights from new Jeep models to his Miata. There are a few different designs to choose from, and the aftermarket has even more options. I particularly liked the round halo design, so I took the plunge and ordered a set for my car. That video explains the process very well, along with the few pitfalls to look out for when fitting it.

I ordered my kit from eBay, and yes, I had to Dremel some of the cover away, and tie back all of the heavy wiring quite tightly too (twice!). But it is plug-and-play for the most part, unless you also want the indicator lights to function. And that’s the thing here. Two and a bit decades and an entire ocean apart, and the three notches on the new American LED lights fit exactly into the offset, non-symmetrical pattern of the Japanese Miata’s fixtures. The three-way low and high-beam plug is the same too, and literally just plugs in. The only real work I did here was to splice into my indicator wires to get a line for the halo’s amber circuit.

The only negative side effect of this install is that these units are rather heavy and bulky with solid aluminium housings. It adds a fair amount of weight over the standard bulbs, and it rattles the headlights on the hinges quite a bit on bumpy roads when they're up. Whether this will cause any long-term damage remains to be seen.

And as I understand it, it also fits the old Mark 1 Golf (Rabbit in the US) and Citi Golfs here in SA too, since it’s the same size and presumably follows the same standard. In fact, the Germans probably set the standard with the Beetle?

The right-hand headlight, tucked in and all tied down. Count the cable ties - 5 of them! The LED control unit is rather bulky, and there's a lot of additional wire to keep out of reach of the guide-arm. You can also see where I took parts of the housing off for the fitment.
The final result. Mind the really dirty car. We have a truly critical water shortage in Cape Town.
The comments I got? It now looks like an Anime character, crying.

Some typical car maintenance

Posted on 2017-08-21 in The Blue Car

We always like to talk and blog about upgrading the brakes or suspension or engine. Sometimes however you have to deal with the boring stuff too.

The NA MX-5 electric windows are a resilient but slow design. They raise or lower in geological time scales. If you first pull up at an access controlled gate and then only wind it down to put your arm out, the queue will be hooting and shouting behind you before it’s even halfway.

To make things a bit easier I regularly spray Q-20 (WD-40) down the weatherstrip that also guides the window along the quarter pane. This mostly works but doesn’t last very long. After 25 years however, the passenger side weatherstrip decided it'd had enough and tore open around the corner of the quarter pane frame. The result was the glass itself pulling the strip out with it every time you wound the window up. So I ordered new ones, through the local dealer this time. And I have it on good authority that this new pair of weatherstrips was the last pair at the factory in Hiroshima.

After removal of the old rubbers, I cleaned up all the old dust and grime that had built up on the surface first.

The removal and fitment is really easy; both took me about 20 minutes per side. The glass comes with some stoppers which you have to remove while it’s still in the doors; give it the reach-around for that. The weatherstrips are basically fastened with a bunch of plastic clips, with the exception of one screw on each side on top of the quarter pane frame. I really struggled, and ultimately failed, to get these screws to take thread to tighten back down again. Their holes are heavily rusted and I probably stripped them on the first try. The clips are destroyed in the process of removal. Fortunately the weatherstrips came with clips attached, but I had ordered additional ones in any case. When fitting the new strips, I started by feeding the window guide in first on the driver's side, and on the passenger side I first clipped in the main part of the strip and worked around the quarter pane last. There wasn't much difference between the two approaches. Both had the same trouble getting the rubber into place up the quarter pane, and getting the corner seated around the very sharp metal edges at its apex. I used a credit card and a sharpened toothbrush to help me pry it in without damaging it (never use metal tools on weatherstrips, or rubber parts in general).

Feeding the rubber into the window guide, using a credit card to slip the rubber in place over the quarter pane, and finally pulling the corner into place very carefully over the sharp edge.

While fitting that, I noticed that the rubber-surround of the front windshield had started lifting around both top corners. There’s nothing worse than A-pillar rust (apart from rear quarter panel rust), so prevention is key. I got some more Sika and proceeded to fill it in. It came out ace.

All taped up around the corner of the window frame. The gap under the rubber seal is clearly visible. Copious amounts of Sika pasted and pressed into the gap. I made sure that most of it goes into the gap. Cleaned it up a bit without squeezing the sealant out of the gap again, and then I removed the tape while the Sika was still wet and pliable.
All done! After it has settled and dried I will trim it slightly with a small razor blade, but it's hardly noticeable at all.

...and I mean, NO brakes!

Posted on 2017-03-15 in The Blue Car

My handbrake light started staying on. I figured the switch had gone bad and thought nothing further of it. But then on my way to work in late Feb I hit the brake pedal. Nothing. Straight to the floor. It was the most out of control I have ever felt, and that includes moments like my pirouette, when I ripped the power steering belt trying to powerslide a 96 kW car, and my time on the skid-pan at Killarney during an advanced driving course. That day I learnt: if the handbrake light stays on, check your fluid levels!

There was a leak at one rear caliper. I figured it was the cover screw for the piston adjuster that I hadn’t tightened properly when I adjusted the handbrake. Of course, later I would also learn that if brake fluid even reaches that far back, you have a much more serious problem. At any rate, I figured a good exterior cleaning would do the trick. It didn’t. After fitting new lines (why not?) and bleeding all round, it was time for a test drive. The handbrake locked up on the one caliper and smoke was pouring out of the wheel well by the time I got it back into my garage. The leak on the other one was still there too, although now I could clearly see where it was leaking: the lever arm that gets actuated by the handbrake cable.

The state of one of the original brake lines and its banjo-bolt washers.
The brake pads after the handbrake lockup.

I read up, and yep, I had a much more serious problem. The O-rings around the piston adjuster spindles were most likely shot. So I took it all off again and started the total disassembly process. It’s not difficult, apart from the circlips that hold in the spindles. They have to come out in order to remove the spindles so that you can remove the lever arm. If you don’t have the proper pliers you might still get them out with two pins. And even with the correct pliers, I had to file down the tips to get them small enough to fit properly into the little holes and squeeze effectively.

With everything removed the result was startling. Clearly a lot of dust and grime had worked its way into the calipers’ mechanisms, wearing away at the O-rings (that old African heritage of my car again). But then the brake fluid started coming out and created a grimy slimy mix that somehow held together for a long time, until I started messing with the piston adjuster last year in an effort to adjust the handbrake.

The mechanism of the one caliper. You can see the adjuster itself, the adjuster spindle with the O-ring and actuator-pin still on it, and the lever arm. The second photo is where the lever-arm inserts, with the seal removed (in the first photo).

The cleaning process isn’t hard, but there are quite a few things to look at, plus I have to get new seals, boots and O-rings which are on order from the UK. Here's a good step-by-step guide if you're interested.

When your base class serves as a common implementation (as opposed to a common data model) you might sometimes want to enforce a behavior, but also allow that behavior to be extended without it being modified or omitted completely. What do I mean by this?

Consider a User domain or view model. Some users are players, and others are spectators. They share a common set of validations and CRUD operations, but also have unique elements that set them apart. So, you write a base class:

public class User
{
	public string Email { get; set; }

	public virtual void Validate()
	{
		if (string.IsNullOrEmpty(Email))
			ModelState.AddError("Email", "Email cannot be null");
	}
}
Anything that extends this will have the Validate() method available, but nothing stops the implementer of the Player class from omitting the base implementation:

public class Player : User
{
	public int Health { get; set; }

	public override void Validate()
	{
		//base.Validate() is omitted, and now Email is not validated anymore

		if (Health < 0)
			Health = 0;
	}
}

This violates the "O" of SOLID: open for extension, but closed for modification. The base implementation is effectively modified by the consumer of your base class. In most cases this is an oversight, and it is one of the leading causes of errors in the code. Static code analysis, and of course unit tests, will help in finding and pointing out such mistakes.

A better design is to not include required functionality in the virtual method. To do this we introduce a protected internal validation method which we make the virtual implementation instead:

public class User
{
	public string Email { get; set; }

	public void Validate()
	{
		if (string.IsNullOrEmpty(Email))
			ModelState.AddError("Email", "Email cannot be null");

		//the required validation is done; now hand over to the extension point
		Validate_Internal();
	}

	protected virtual void Validate_Internal() { }
}

Now, the user of your base class can provide any custom validation by overriding Validate_Internal(), yet the required base validation is unaffected and unmodified:

public class Player : User
{
	public int Health { get; set; }

	protected override void Validate_Internal()
	{
		if (Health < 0)
			Health = 0;
	}
}

And of course, the accessibility (or public signature) of the User class remains unaffected; there is still only the Validate() method available.

This pattern is known as the Template Method pattern (I've also heard it called the nanny pattern). I typically just refer to it as the internal implementation of Validate - not to be confused with the `internal` keyword. That comes from the fact that I suffix _Internal to the method name, but you can choose any name you want here. For example, you could follow the old Microsoft standard for the Windows API and call it ValidateEx, or the event-naming standard used in the WinForms classes and name it OnValidated (specifically past tense, because the method is called at the end) or OnValidating (if the method were called first). It doesn’t really matter, as long as you isolate and protect your required base implementation from the user of your class.

Rotational recovery

Posted on 2016-08-24 in The Blue Car

After the long months of stripping, painting and reassembling the car, I had high hopes for getting around in it for a bit. Alas, it was not to be.

I had previously replaced the rear wheel bearings as part of the initial restoration, and I guess I should have expected that the front bearings would retire at some point too. For a long time before the teardown I had lived with a slight rotational sound audible only at low speed, but figured it was of no concern since it wasn't getting worse. As it turns out, while the car stood in my garage for a few months, the bearings had deteriorated further. The car was now making grinding noises under braking, and the front wheels were tram-tracking like crazy on bumpy surfaces. The former probably only became apparent after I had replaced both the front rotors and pads during the reassembly; the latter I had ascribed to the Konis and poly-bushes, but it was worse now and pretty disconcerting. So, after acquiring a 2 meter piece of pipe, I pulled off the front brake and hub assemblies and found grease all over one dust cap, with both bearings having 2 to 3 millimeters of sideways play, which would explain the instability of the front wheels.

The two old hubs, spindle-nuts and dust caps. The nuts were properly destroyed too. Got those from the dealer easy and cheap.

So then I had to get hold of new bearings. This front hub and bearing assembly is considered a unit, and Mazda doesn't have a separate part number for only the bearing; the EPC only details the entire hub as a single part. So of course the dealer price per hub was around R4500.00, whereas I could get aftermarket parts from the UK and Dubai for less than R1000.00 and R500.00 respectively. So I ordered and waited.

While waiting for the parts I got another set of stands and lifted the rear of the car off the ground so that I could also adjust the handbrake. This has never worked properly before, but it’s a simple process involving a small adjustment screw on either side. With that done, and the hubs finally here, I “piped” everything back together again in no time, and the results are...dramatic. The front-end feels way more solid and composed now, with the turn-in response better than it has ever been. I’m really happy now, and bearing life is something I will always consider and inspect whenever I buy another older car.

A very long piece of pipe over my torque wrench, to get to the 200 Nm required for the spindle nuts. I used the same setup over my breaker bar to get them loose.

To Default or not to Default

Posted on 2016-08-17

Here’s a question for you: how quickly do you think it is best to fail in your code? And at what level do you think it should be handled? My answers to those two questions are immediately, and at no level. Let me explain.

In C# we have the wonderful composable LINQ extension methods, the sort of thing that has become almost a staple in every managed language now. The framework contains two methods, Single and First, which take the only and the first item from a collection respectively. However, both methods fail if there are no elements in the collection, and Single also fails if there is more than one. The framework also provides alternative extensions, namely SingleOrDefault and FirstOrDefault. These return null (or the default value, for value types) when the collection is empty; however, SingleOrDefault will still fail if there is more than one item in the collection.
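The behavior of the four methods can be summarized in a few lines (a quick sketch using plain arrays):

```csharp
using System;
using System.Linq;

public static class Demo
{
    public static void Main()
    {
        var empty = new int[0];
        var one   = new[] { 42 };
        var many  = new[] { 1, 2 };

        Console.WriteLine(one.Single());           // 42: exactly one element
        Console.WriteLine(many.First());           // 1: takes the first of many
        Console.WriteLine(empty.FirstOrDefault()); // 0: default(int) for an empty collection

        try { empty.Single(); }          // fails: no elements
        catch (InvalidOperationException) { Console.WriteLine("Single failed on empty"); }

        try { many.SingleOrDefault(); }  // still fails: more than one element
        catch (InvalidOperationException) { Console.WriteLine("SingleOrDefault failed on many"); }
    }
}
```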

To get to the questions, let’s look at a use case:

public class User : UnitOfWork<User>
{
    public User Get(int id)
    {
        return base.Set().<extension method here>(x => x.Id == id);
    }
}

This example makes you decide on your failure and error handling strategy by choosing the extension method that plugs into the provided space. For example, if you use FirstOrDefault you'll never see an error from this code, but then you are faced with handling inconsistent state elsewhere in your domain logic and will probably have to do null checks. Similarly, if you use SingleOrDefault, null checks are in your future, and probably exceptional or failure program flow too for those times when there is more than one entry in the collection. But is that necessary?

Imagine how a user would get your code to call into this method: the UI is requesting a particular user entry, most likely to show a profile screen or an account edit screen. Or your controller is building an API data model that contains a User instance to send back as a RESTful response. In any case, this is not a method that performs a search or applies a filter; instead, a particular instance is requested using a particular id. So what happens if the provided id parameter doesn't match anything? Or the id somehow matches more than one? Is that actually your UnitOfWork's problem? Well, consider if this is in a web application. Some URL /user/1234 has routed into a controller call which ended up in your UnitOfWork. A savvy user could easily have manipulated that URL to try out different user ids (think Ekurhuleni municipality account hacks in 2013). Or your controller code made a wrong assignment (are you unit testing?) and your repository gets the wrong id to pass on. Or the database was changed by someone's script and id is no longer the primary key. None of these cases is something that you want to suppress or ignore. You want to get that noob-hacker-user out of your IIS thread pool as quickly as humanly possible; an error page is all he should get. And you want your testers to see an error page the moment something goes wrong. The bottom line: if your UnitOfWork gets an id that doesn't match anything, or matches multiple entries, you want to know about it immediately for a method like this.

So how do you do that? You fail fast and hard, and using the correct extension method, e.g. Single, is the quickest way. Your UnitOfWork will generate an immediate exception when a bogus id is passed. Your service layer, expecting an instance of User, won't waste any more time doing validity checks since an exception is bubbling up, and ultimately this results in redirecting to your custom error page. Similarly, for WebApi or RESTful services, a 400 or 404 is all I should expect when I pass a bogus id to your API. You might want to do some custom handling to decide which HTTP status code to return, and maybe a custom header message, but the failure should be obvious to you, your testers and the end-user. Note that handling the exception purely for logging purposes doesn't constitute an error-handling strategy in my book, and in such a case the exception should be rethrown anyway.
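As a sketch of what this looks like at the API boundary (the method names and the in-memory "repository" are hypothetical, just to illustrate the flow):

```csharp
using System;
using System.Linq;

public static class Demo
{
    // Fail fast with Single(), then translate the exception into an
    // HTTP-style status code at the boundary instead of null-checking.
    public static int GetUserStatus(int[] userIds, int id)
    {
        try
        {
            userIds.Single(x => x == id); // throws on zero or multiple matches
            return 200;
        }
        catch (InvalidOperationException)
        {
            return 404; // a bogus id gets an error response, immediately
        }
    }

    public static void Main()
    {
        var ids = new[] { 1, 2, 3 };
        Console.WriteLine(GetUserStatus(ids, 2));  // 200
        Console.WriteLine(GetUserStatus(ids, 99)); // 404
    }
}
```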

You often see articles or quotes stating that you should not be afraid of failing. The thing is, the exact same thing applies to code. If you hide all your errors you’ll never find them, and you’ll never improve. Errors are meant to preserve data integrity and to communicate state, and ignoring them opens you up to all sorts of other uncontrollable scenarios with unknown states. You should never take it personally either, for example when an error is reported by QA. Similarly, managers and project leads should accept and encourage risks and thus failures from their developers. It’s a spark for good design discussions and self-improvement.

A final anecdote: a while ago I talked to a start-up that was looking to hire an API and back-office support person who would be available 24 hours a day during the extended launch period. That is absolutely insane. Apart from the fact that no person should be doing such a job, that start-up has made fundamental design flaws if they don't have the confidence to launch and know every error's cause within moments of it happening. And we're not even considering the case of scaling up. A big part of "lean" is failing, and a huge part of succeeding is how you deal with errors. Similarly, how your code deals with errors is a fundamental component of its design.


A year on with the Mazda CX-5

Posted on 2016-07-20

It's been a year and a bit since I took delivery of my CX-5. If you're wondering how it has been living with this car, read on. Spoiler alert: It's been pretty fantastic.

I'm a car guy. I don't mean a general car or motorsport enthusiast, I mean I like to drive cars, as opposed to bakkies*, vans, crossovers and SUVs. This puts me squarely on the opposite side of the line from the general South African car-buying public. The car as a car is being whittled away by peer-pressure buying decisions. Everyone wants a Toyota Fortuner or a Range Rover Evoque now. Presumably these types of vehicles are more practical, and off-road capability has somehow become an essential requirement these days.

There was absolutely nothing wrong with the practicality offered by the traditional sports-wagon or estate car just a few years ago. The Audi wagons also all came with quattro as standard, which meant that even off-road performance was on the check-list. I thus deduce that most of the car-buying public these days live down hard-to-access roads requiring good ground clearance. Either that, or they want to assert their dominance towards their next-door neighbor with a 4x4. Guess which one is more likely.

No, I prefer a car. So when I went shopping last year for a 'family plus dogs' practical manual vehicle, society had denied me the Mazda 6 Touring, which is only available in the European and Japanese markets. After looking around a bit I settled on the freshly face-lifted 2015MY Mazda CX-5. It had its first service last month, almost exactly 15 000km and 12 months to the day later. So, how has it been?


Let's start with the interior. This is where you spend most of your time, and it is also where cheaper cars fall down against the premium brands, both on finish and longevity. It's a very important area that contributes enormously to the practicality of a car. With the CX-5 it is arguably its strongest point. It has withstood a year's worth of direct assault from a (now 2 year old) toddler, constantly swapping the baby seat in and out, road trips, two pit bulls and all sorts of general weekend activities like stacking the rear chock-full of rooikrans. And despite all of that there isn't a single squeak, rattle, grind or other interior noise.

I can't fault it on ergonomics either. The seats and steering column are adjustable in all manners (not electrically). So even though my wife and I differ by a good 20 centimeters in height, it's not hard to quickly get yourself comfortable again, even while on the move. Infotainment-wise, I'm glad I took (and paid for) the 2015MY with the center-console controls. I could have paid a lot less for the pre-face-lift model that only had the touch screen, but this is disabled when the car is moving and is also much harder to operate. The knob down by the shifter, with its shortcut buttons and smaller volume control, became second nature within a few weeks, and I certainly don't need to look down to operate it anymore. And oh, the sound system is really, really good.

Even though part of my choosing this model was that the seats split 40:20:40, I had never imagined it would play such a major role almost immediately out of the box, and I'm really glad for it. That 20% middle part is the perfect fit for our camping cot without losing seating space (admittedly 2 + 2) or having the contents of the boot roll around an otherwise folded-down 60% of the rear cabin area. We could easily fit 3 adults, the baby seat and all the luggage in there for a weekend away with friends. It's also very well designed in that you don't need to reach in through a rear passenger door to unlatch the middle bit. This means I can emerge from the hardware shop with a 2 meter steel pipe (for loosening hub nuts, you see) and stick it into the car by only accessing the boot.

The only drawback of the interior worth mentioning is that there aren't any shopping bag hooks anywhere, so the milk bottles tend to topple over when meandering over the endless speed bumps at the malls.

Shifting and power

The engine is typically Mazda. The Active model has the 2.0l version of the naturally aspirated SkyActiv range. It has fairly good pull if you commit; it's much stronger higher in the rev range, and runs so smoothly that you want to rev it out. The thing about this engine is what I don't have to mention. It doesn't have a turbocharger, so there are no worries after 100 000km, yet I get the same (or even better) fuel consumption benefit. Mazda runs a factory catch-can and has sorted out the intake oil deposit buildup which other direct-injection engines like the EcoBoost suffer from. This is unmentionable stuff, but it contributes significantly to a much longer engine life with fewer problems. Sometimes, after shifting, there seems to be a dead spot (or a delay) right after I release the clutch, with either up- or downshifts. I get the impression that it's interference from the traction control, which can't decide on the attitude characteristics, probably directly related to my driving style: switching instantaneously from granny shifting to very aggressive downshifting. You will probably never experience this issue, and I have also not yet driven with the TC turned off to test it.

This brings me to the best thing about this car for me: the gearbox. I chose the Active model because it has the option of a manual gearbox, another thing slowly being sidelined by society at large. Sadly, this is the only model in the range that has this option, and it is really very good. And don't bother asking any journalist about it; they never get the base models on test. So what about it? Well, I shift this thing faster than I do my '91 MX-5. It's a very accurate shifter; not short by any means, but not very long either. And together with the pedal placement I've upped my heel-toe game significantly in this car, because race car. The clutch is light, so traffic is a doddle when the start-stop system isn't active. You have the ability to really beat on it and yet achieve 7.0l/100km consumption rates yourself. There is a shift indicator to help in this regard. And yes, I get far better consumption than you with your automatic and its different driving modes. The dealer has told me that his customers with auto-boxes complain they don't get below 8.6l/100km, while reviewers have said it's a slow and sluggish shifting experience. I have none of those problems.

Typical consumption figure for my morning commute along the N1 between Bellville and Century City, about 20km. The bottom graph shows the long-term averages, which I reset at every second fill-up.

Driving it

Let's talk about the windshield. Shit finds this car's windshield like a child finds Pokémon. I don't understand why this is the case, what contributes to it, or how to prevent it. I'm on my second windshield already, thanks to some roadworks on the West Coast. And when you do replace it, watch out for the washer feeder tube. It can easily fall into the side with little access; I had a significant amount of trouble connecting it up again. Anyway, I suppose it's simply because the windshield is so large, and it provides excellent forward visibility. But it's also steeply raked and comes with a very thick A-pillar, which I find regularly blocks my view at junctions or crossings. I have to physically look around it from time to time. Maybe it's not a problem for people of a different height, but it's an annoyance to me.

That aside, driving it is excellent. Really, the only time I know I'm in a high car is when I get in it or climb out. Around the corners it has better dynamics than any small hatch, including some premium ones, and is way more comfortable too. I cane it on gravel roads and it returns a very rewarding experience; eerily, the faster you go, the more stable this car becomes. I have no doubts about the turn-in or mid-corner stability. This is one area where the Mazda lineage and philosophy really become apparent, and I've even considered taking it to the track. It really is that engaging to drive. It's also not a big car. I've left less-confident Corolla drivers behind after squeezing past into the left-turning lane on several occasions, and I didn't cheat by "4x4ing" over sidewalks or gutters.

The bottom line

So it's an SUV. I'm not a fan of the fact that I own one, let alone drive one. But as cars go, this is one of the best I've had the pleasure of owning. I can easily forget that it's not actually a car when on the road. I still wish it was that Touring model, and someday I'll probably import one anyway and let this go. But until then, I'll recommend this car above all else in its market segment.

*For the non South African readers, this is a bakkie. You probably know it as a mini-truck, or you know, the F-puny class.

A better IoC pattern

Posted on 2016-07-13

A while ago I wrote about interfaces and how everybody just uses them for IoC these days. This time I want to discuss established and advertised IoC patterns and the problems I have with them.

All the C# projects at Global Kinetic make use of various libraries from nuget, because "tried and tested" I suppose. One of these is Autofac. This library, along with the likes of log4net, has become almost a staple of the environment here, and no-one questions their use or implementation any more. And while it's an accomplished library that seems to do a good job with generics and so on, it is in my opinion too complicated for what it sets out to do, and its container pattern is flawed. Now, I don't want to focus on bad-mouthing Autofac, but rather to illustrate how using 3rd party libraries unwittingly drives us away from maintaining good principles, when what we should be doing is designing something useful, simple and unique.

When you talk about IoC, you also inadvertently talk about SOLID principles; the "I" (interface segregation) is what makes it practical. So why do we need it? For unit testing, mostly. A typical plug-in architecture is also based on it, for example, but for everyday run-of-the-mill IoC, TDD is where it's at. But let's also look at the "D" part, that is, the "dependency inversion" principle. This is a much harder principle to realize, since it actually takes some design to achieve.

I consider there to be two levels of "D". First, there's the deep level where we keep different concrete implementations abstract from one another, mostly between different layers or modules. The "I" principle, together with treating interfaces as contracts, plays a very large role in achieving this. But then there's the second level, which has to do with project or module references, which are also dependencies. Some consider this superfluous, but I prefer to keep my inter-layer dependencies to an absolute minimum. I consider it good housekeeping, but there are operational reasons for doing so too*. These two levels together are what "D" means to me.

Let's move back to Autofac and consider the example right there on their front page:

var builder = new ContainerBuilder();

//some registrations

var container = builder.Build();

You have to store the container instance yourself because to start off you have to first resolve something manually. Typically in or around Main() or Application_Start(). You can't have more than one instance, and so the builder has to know about everything. Even if you use the Modules base class and have your bindings on the respective layers (where the concrete implementations are defined), the project where builder is created has to know about all those assemblies where those module classes are. This means that you'll have to include references to every project in your stack, most likely from your top layer. But you'll say "Hey you can scan for assemblies instead, and you can load them from disk dynamically". And this is where I just think this is way too complicated for its purpose. Scanning for assemblies solves a problem that shouldn't have existed in the first place. There are much simpler ways of achieving good, self-managing IoC.

Let's also quickly stop at constructor parameters for a moment. Autofac resolves these for you when you include them in your concrete implementation and provide the appropriate bindings. The thing is, this is an absolute misuse of constructor parameters. Code consists of two different constructs: stateless classes (for example controllers, managers and repositories), which serve simply to group related functions and methods together; and stateful classes (domain models, ORM entities etc). Constructor parameters' only purpose is to give an initial state to a class. You shouldn't be using them for stateless classes, save perhaps for configuration settings which might control logic flow and that you only want to read once (though you should rather store those statically). Consider the following: by using constructor parameters as Autofac suggests, you pass resolved instances of lower-order dependencies 'down' from above. Doesn't that just reek of not conforming to the "D" principle? I've also seen examples where developers have used Autofac to instantiate stateful domain model instances. Why?! An IoC container is not a factory, so there should be no need to ever implement any kind of constructor parameter discovery mechanism at all.

A better alternative would be for the assembly containing the lower-order dependency to provide you with an instance, passing it 'up' instead, and only when you ask for it. This not only solves the spaghetti references hanging around like jungle vines all over the place, but also the other problem of carrying a bunch of resolved instances around in private scope which you might only use in one or two methods, yet which require a list of constructor parameters to set up. I've seen line lengths in excess of 500 characters! All in all it's a dirty and unmanageable scenario which permeates every project here, because a 3rd party library's documentation says so.

The better IoC implementation that I use is based on an adapter pattern. Every layer has an adapter which is responsible for statically setting up the bindings it alone is responsible for. It does not need outside assistance, references, constructor parameters or otherwise. For layers which don't get replaced by proxies (typically the core or domain layer), that can be achieved through a single static class inside that assembly. It knows about the shared interface and its own concrete implementations. The layer above then has the only reference to that assembly, and uses the static class as the resolver. If the same concrete implementation is required in more than one assembly, or for layers that do get replaced by a proxy, I set up a separate adapter assembly which references all the concrete implementations of the shared interface. This acts as a bridge-connector between the referencing layer and the lower assemblies. These adapters are self-managing and set up the default bindings in the static constructor. No manual resolving is needed if you provide a public getter to use. There is also a generic setter which takes a lambda that constructs a different instance for the typed interface, which you can use before calling the getter.

A basic, non-generalized example is the following:

    public static class IoC
    {
        private static Func<IFooRepository> _iFooRepositoryFunction;

        public static void SetFooRepository(Func<IFooRepository> iFooRepositoryFunc)
        {
            _iFooRepositoryFunction = iFooRepositoryFunc;
        }

        public static IFooRepository FooRepo { get { return _iFooRepositoryFunction(); } }

        static IoC()
        {
            SetFooRepository(() => new FooRepository());
        }
    }

This is the absolute minimum you need for IoC, and you just don't need anything else for basic projects. It's superbly clean, it's clearly and strictly referenced and isolated, and there's no managing of container instances. It is also trivial to extend it to self-manage singleton instances, or to incorporate configuration elements that switch bindings at runtime. If you really want to use Autofac, you can still implement the adapter pattern, but maintain a container instance statically per adapter while managing access through getters (to make sure it uses the correct instance). Regardless, you should never compromise your implementation principles to fit the library you are considering.
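For completeness, here is a self-contained sketch of how a unit test (or a proxy build) would swap the binding; the repository types are hypothetical:

```csharp
using System;

public interface IFooRepository { string Name { get; } }
public class FooRepository : IFooRepository { public string Name { get { return "real"; } } }
public class FakeFooRepository : IFooRepository { public string Name { get { return "fake"; } } }

public static class IoC
{
    private static Func<IFooRepository> _iFooRepositoryFunction;

    static IoC()
    {
        // Default binding, set up with no outside assistance.
        SetFooRepository(() => new FooRepository());
    }

    public static void SetFooRepository(Func<IFooRepository> iFooRepositoryFunc)
    {
        _iFooRepositoryFunction = iFooRepositoryFunc;
    }

    public static IFooRepository FooRepo { get { return _iFooRepositoryFunction(); } }
}

public static class Demo
{
    public static void Main()
    {
        Console.WriteLine(IoC.FooRepo.Name); // "real": the default binding

        // A test swaps the binding before exercising the code under test.
        IoC.SetFooRepository(() => new FakeFooRepository());
        Console.WriteLine(IoC.FooRepo.Name); // "fake": no container instance needed
    }
}
```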

*The first reason is that you should not have to make changes in your top layer (maintaining bindings) when you make a change in your repo layer. That's an operational dependency which should not be there. Using the Autofac modules prevents this to a large extent, though. Secondly, software development typically means you are subject to someone's existing endpoint/service/API over which you have no influence. At best you can implement the REST client yourself; at worst you are given a DLL that is being developed in parallel. It has been the case in every one of my projects that we'll be working on functionality that the client's endpoint/service/API is not ready to serve yet, or hasn't even implemented yet. How do you deal with that? You break the dependency and you build versions that run on proxies that mock out 100% of the endpoint/service/API. In these cases you cannot rely on an assembly being available to reference for your container instance, because half the time your QA cycle will be done on a build with the mock assembly. I guess if you're an amateur you can physically switch the references as you need, but basic IoC should take care of that for you, without the need to reference (i.e. be dependent on) actual bottom-level repository assemblies. In the Autofac world I've seen code get around this problem by manually instantiating the mocks and feeding them into the constructor as parameters. What's the point of IoC then? It's an atrocious hack, and a complete disregard for code quality (maintainability in this case) to fit the library implementation, which is now just a formality.

Polymorphic generic poofters

Posted on 2016-06-10

Would you expect that a member in a sub-class always takes precedence over a member with the same name in the base class when invoked? I did too.

A colleague recently showed me an unexpected result in a Java test he did. By being clever in the declarations it appeared to be possible to circumvent the declared scoping (private, public etc) completely. And although that isn't actually the case, the problem does highlight some intricacies with regards to polymorphic design. Let's look at the test example (ported verbatim to C#):

    public class Deer
    {
        private bool HasHorns() { return false; }

        static void Main(string[] args)
        {
            Deer deer = new ReinDeer();
            Console.WriteLine(deer.HasHorns());
        }
    }

    public class ReinDeer : Deer
    {
        public bool HasHorns() { return true; }
    }

So we have Deer and we extend ReinDeer from it. ReinDeer instances have horns, while Deer instances don't. The output from this code yields "False". Did you expect that? Well, I didn't at first, but it does make sense that you would get that result. First of all, it is important to note that the Main method is inside the base class and has access to the privately scoped method, which is why it even compiles in the first place. But more importantly, we've strictly typed the variable to the Deer class, even though we assign a sub-class instance to it. This is really important, and also where I think the surprise or expectation comes from: the method in the sub-class doesn't actually override the one in the base class, it merely hides it. So it is perfectly acceptable for the compiler to adhere to the strictly typed variable and call its implementation of the method.

At this point I'd also like to note an often overlooked weakness of implicit typing with var. If we change this line in the Main method to be

            var deer = new ReinDeer();

what do you think we get? We get "True" of course, because the variable assumes the type from the assignment, and thus the sub-class' implementation of the method executes. Great. But what if it wasn't a straight call to a constructor, but instead something like this?

            var deer = World.GetClosestDeer();

How can you be sure which type your variable will assume throughout the life-cycle of your program? And how can you be sure which result you can expect? Using var is nice and all, but it presents an awful lot of pitfalls and idiosyncrasies when you have to deal with objects over which you have no control (think a 3rd party class library).

Following on from this, I decided to check at which point proper, predictable polymorphic behavior manifests itself within the scope of this design. I extended the classes a bit (and moved the Main method out of the Deer class, since that doesn't really represent a real-world scenario):

    public class Deer
    {
        public void Dump()
        {
            Console.WriteLine(HasHorns());
            Console.WriteLine(NoseColor());
            Console.WriteLine(HoofPattern());
        }

        private bool HasHorns() { return false; }

        public string NoseColor() { return "Black"; }

        public virtual string HoofPattern() { return "Cleft"; }
    }

    public class ReinDeer : Deer
    {
        public bool HasHorns() { return true; }

        public new string NoseColor() { return "Red"; }

        public override string HoofPattern() { return "Single"; }
    }

    public class Program
    {
        static void Main(string[] args)
        {
            Deer deer = new ReinDeer();
            deer.Dump();
        }
    }


There are three scenarios tested here: the example already discussed, an example where a base-class method is hidden by the sub-class' implementation, and an example of a straight method override in the sub-class. Executing this code yields the following result:

False
Black
Single
I must be honest: I was surprised by this result. Obviously the "False" we've already covered, and of course anyone with an understanding of object orientation will expect the "Single". But the fact that the base-class' implementation of NoseColor was executed, in spite of it being explicitly hidden in the sub-class, was unexpected. You can argue that the variable is still strictly typed, but in this case that doesn't hold water in my opinion. For one, both implementations are publicly scoped, and thus the signature is exactly the same. And so, because I assign a sub-class instance to the variable, I expect it to adhere to the new keyword present in the sub-class declaration.

I agree that using member hiding is questionable practice (even more so than using var), but the MSDN reference is a bit lacking in clarity about when it is and isn't applied. Specifically, it states that:

A method introduced in a class or struct hides properties, fields, and types that share that name in the base class. It also hides all base class methods that have the same signature.

The actual language specification goes into a bit more depth about this:

Hiding an inherited name is specifically not an error, since that would preclude separate evolution of base classes. For example, the above situation might have come about because a later version of Base introduced an F method that wasn’t present in an earlier version of the class. Had the above situation been an error, then any change made to a base class in a separately versioned class library could potentially cause derived classes to become invalid.

That's great, but it actually follows logically that even in a strictly typed scenario it should not call the base-class implementation (it's similarly scoped), since it is declared to be hidden, and possibly wasn't even there before. I believe that in this specific case the behavior is wrong and can result in unexpected program behavior when base classes are updated. Definitely something to watch out for!
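To make the dispatch rules concrete: a hidden (new) member is bound by the compile-time type of the expression, while an overridden (virtual) member is bound by the runtime type of the instance. A minimal sketch:

```csharp
using System;

public class Deer
{
    public string NoseColor() { return "Black"; }
    public virtual string HoofPattern() { return "Cleft"; }
}

public class ReinDeer : Deer
{
    public new string NoseColor() { return "Red"; }
    public override string HoofPattern() { return "Single"; }
}

public static class Demo
{
    public static void Main()
    {
        Deer deer = new ReinDeer();
        Console.WriteLine(deer.NoseColor());             // Black: hiding binds to the static type
        Console.WriteLine(((ReinDeer)deer).NoseColor()); // Red: the cast changes the static type
        Console.WriteLine(deer.HoofPattern());           // Single: overriding binds to the runtime type
    }
}
```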

Interfaces are contracts too

Posted on 2016-05-24

Lately I’ve noticed that while everyone’s talking about SOLID principles, it also seems that most people think that the benefit of “I” (Interface segregation) is solely related to dependency injection. While that is indeed one benefit, the true (and original) intention of an interface is often overlooked; that is, an interface is first and foremost a contract.

An interface is not just a collection of methods or functions. The declaration creates an expectation with the consumer of the interface, whether it is an API, a DLL, another layer in your project or a factory pattern. Firstly, there is an expectation of implementation, and secondly an expectation of terms.

Let’s talk about implementation. In Visual Studio for example, when you use the IDE function to implement an interface, you usually end up with generated code that looks like this:

    public class Foo : IComparable
    {
        public int CompareTo(object obj)
        {
            throw new NotImplementedException();
        }
    }

This is, in my opinion, a very dangerous template result. It leaves the code in a state that builds and requires no further action to be taken. It’s obviously not really an issue with a simple interface like IComparable since there is only one method to implement, and you probably used IComparable for that method in the first place. But what if it’s an interface that defines plenty of methods?

    public class Foo<T1, T2> : IDictionary<T1, T2>
    {
        public T2 this[T1 key]
        {
            get { throw new NotImplementedException(); }
            set { throw new NotImplementedException(); }
        }

        public int Count
        {
            get { throw new NotImplementedException(); }
        }

        public bool IsReadOnly
        {
            get { throw new NotImplementedException(); }
        }

        public ICollection<T1> Keys
        {
            get { throw new NotImplementedException(); }
        }

        // ...and many more members
    }

I’ve seen implementations of interfaces that only implement the immediately required methods “to get it working” and leave the rest as “not implemented”. This is particularly bad behavior, because anyone else who looks at Foo through the Object Browser or IntelliSense will immediately assume that Foo implements a dictionary and can therefore be used as such. But it can’t. Introducing a half-implemented interface is tremendously dangerous and will almost certainly introduce huge project stalls further down the road. You as a developer are not honoring the contractual nature of an interface if you do this. There is only one exception to this rule, and that is when you create a fake for unit testing and only need a specific method. Even then, if that method is extendable (virtual or abstract), rather inherit from a properly implemented fake and override that one method, keeping the contract in place.
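To sketch that exception (the interface and class names here are hypothetical), implement the fake fully once, then derive a test-specific fake that overrides only the member under test:

```csharp
using System.Collections.Generic;

public interface IUserStore
{
    bool Exists(string username);
    void Add(string username);
}

// A fake that honors the entire contract.
public class FakeUserStore : IUserStore
{
    private readonly HashSet<string> users = new HashSet<string>();
    public virtual bool Exists(string username) { return users.Contains(username); }
    public virtual void Add(string username) { users.Add(username); }
}

// A test-specific fake that overrides only the one member the test needs,
// while the rest of the contract remains properly implemented.
public class AlwaysExistsUserStore : FakeUserStore
{
    public override bool Exists(string username) { return true; }
}
```

Any code handed an AlwaysExistsUserStore still gets a working IUserStore; only the single behavior under test has been altered.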

The second aspect of this contractual nature is the terms of the contract. By this I refer to the declared methods and, importantly, also the method signatures; this is something a lot of people overlook. Altering an interface introduces immediate breaking changes for everyone downstream of you, which is essentially breaking the contract (and which should be punishable by death). This is why most web-based APIs introduce a version routing identifier, most commonly in the form of /v1/. The reason they do this is to preserve the contract of version 1 while introducing significant new or altered contract terms for version 2. But I often come across an oversight regarding the signatures: the parameter and output objects. Anything that forms part of the interface declaration is part of the contract. This includes classes used either as the result or as parameters of methods, and it includes all the properties and classes used inside those. You can see how quickly this rabbit hole gets deep, which is exactly why you have to make use of properly structured domain models, or better yet, interface models declared specifically for use in the contract. You are steering toward disaster if you use your ORM entities directly in your interfaces.

And while that still seems pretty obvious, let’s talk about the persistence of these classes used in the signatures. For example, let’s declare the following interface and class:

    public interface IFoo
    {
        FooPrinciple Login(string username, string password);
    }

    public class FooPrinciple
    {
        public string Username { get; set; }
        public string Domain { get; set; }
        public DateTime Expiry { get; set; }
    }

Without API documentation it’s unclear to the user of an instance of Foo : IFoo whether Expiry is local time or UTC. And changing that behavior in your code, as the implementer, could have big repercussions for everyone relying on this contract. Clearly a more explicit name for that property (say, ExpiryUtc) should be considered, but it’s also important that the meaning of Expiry stays the same throughout the life of the interface.

Looking at a more complicated example, let’s change IFoo to this:

    public interface IFoo
    {
        FooResponse<FooPrinciple> Login(string username, string password);
    }

    public class FooResponse<T>
    {
        public string Message { get; }
        public int Code { get; }
        public T Data { get; }
    }

    public class FooPrinciple
    {
        public string Username { get; set; }
        public string Domain { get; set; }
        public DateTime Expiry { get; set; }
    }

Typically a user of an instance of Foo : IFoo will expect that once the Code property indicates success, the Data property will be set. No null checking should be necessary because clearly it’s part of the interface, and thus the contract. In the past I’ve received responses like “Just look at the Code property, the Data property is not set anymore”. This is awful, and completely breaks the contract established by the interface. If in some cases the Data property is set and in others not, then the pattern is a bad design and should be abstracted out of the interface. If Data was being set previously and then suddenly isn’t anymore, the authors should be shot at dawn.
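One way to make that expectation structurally enforceable (the private constructor and factory methods here are my own sketch, not part of the original contract):

```csharp
using System;

public class FooResponse<T>
{
    public string Message { get; }
    public int Code { get; }
    public T Data { get; }

    private FooResponse(int code, string message, T data)
    {
        Code = code;
        Message = message;
        Data = data;
    }

    // A success response always carries data; the factory enforces the contract.
    public static FooResponse<T> Success(T data)
    {
        if (data == null) throw new ArgumentNullException(nameof(data));
        return new FooResponse<T>(0, "OK", data);
    }

    // A failure response never pretends to carry data.
    public static FooResponse<T> Failure(int code, string message)
    {
        return new FooResponse<T>(code, message, default(T));
    }
}
```

With only these two factories available, "success with no Data" simply cannot be constructed, so consumers can rely on the contract without null checks.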

Contracts are a basic necessity of integration projects or phases, and breaking contracts costs everyone huge amounts of time and money, not to mention the tension that it creates between the different parties. Of course, depending on the project or even development phase, you don’t have to introduce a new interface every time there are some changes. By using interfaces correctly, breaking changes will become apparent to the integrator through breaking builds, and as long as the changed interface matches the expectations it sets, the need for detailed release notes is mostly negated.

Payment gateway security

Posted on 2016-05-10

As part of our continued Counsel Connect development, I started implementing the online payment service “Pay Now” provided by Sage Pay. Unfortunately, things went bad a few days into the initial prototype.

After the extensive sign-up process I was presented with the technical documentation for implementing the online payment gateway and their secure SOAP endpoint calls and functions for debit order processing and other things. At this point in time I’m only interested in online card transactions, and so I immediately started looking through the SOAP endpoint for the relevant functions. It wasn’t there.

I read back and found that the only way to process card payments is through their proprietary web-form flow which you have to initiate with a POST call to a web endpoint. Their example looks as follows:

<form name="form" id="form" method="POST" action="">
  <input type="hidden" name="m1" value="xxxxxxx-4ad4-7c2a-yyyy-eee9f23651097"> // Pay Now Service Key 
  <input type="hidden" name="m2" value="24ade73c-98cf-47b3-99be-cc7b867b3080"> // Software Vendor Key 
  <input type="hidden" name="p2" value="ID:123"> // Unique ID for this transaction 
  <input type="hidden" name="p3"  value="Test / Demo goods"> // Description of goods being purchased 
  <input type="hidden" name="p4" value="5.00"> // Amount to be settled to the credit card
  <input type="hidden" name="Budget" value="Y"> // Budget facility being offered? 
  <input type="hidden" name="m4" value="Extra 1"> // This is an extra field 
  <input type="hidden" name="m5" value="Extra 2"> // This is an extra field 
  <input type="hidden" name="m6" value="Extra 3"> // This is an extra field 
  <input type="hidden" name="m9" value="[email protected]"> // Card holders email address 
  <input type="hidden" name="m10" value="Demo attempt for testing"> // M10 data 
  <input name="submit" type="submit" value="PROCESS R5,00 TEST PAYMENT"> // Submit button 
</form>

If it’s not immediately obvious, the first hidden input (m1) is supposed to hold my service key. This is the ONLY piece of data in this entire thing that identifies me, as Counsel Connect, to Sage Pay. And it’s right there in the open for anyone to see if they care to press F12 to view the source. Hell, you don’t even need to know the shortcut key; “view source” has been on the right-click menu for Windows users since Netscape Navigator.

What’s the next very obvious vulnerability? Yes, you can edit the amount (the p4 input) to your liking, and it is completely out of my control that you can do that, or to what you can set it. All this basically means that you, as a user of Counsel Connect, can pretty much initiate the payment transaction in any state that you fancy.

I raised this with my technical contact at Sage*:

I have a question regarding security: The m1 parameter is to store my service key, but this is a big security risk. Keeping it in the open for anyone to simply grab using developer tools in any decent browser is pretty amateur. My current implementation is only slightly better: I have some javascript that first does a GET to an API endpoint to retrieve the key, add it into the m1 input, and then call form.submit(). This only serves to hide the key value in a JS variable, and of course still allows anyone to place a breakpoint in the correct location to get it. Another problem with both these methods is that the price is totally open for manipulation by anyone even remotely capable. I want to perform a POST from my server rather, where I am in complete control of the scope of the data, but getting back an HTML result and presenting that to the user neglects to pull style sheets and running scripts on documentReady, since I'm only injecting HTML into a page already loaded. Do you have any tips on how to securely initiate payment?

His response was simply:

I would suggest when posting the data the URL should have the HTTPS at the start: action=""

This set off a cacophony of alarm bells for me, since it was obvious that this person had no understanding of the issue raised, or any appreciation that they have vulnerabilities. Also, so far this only covers the initiation of the process. The hand-off at the end is also a concern. On their configuration pages I can set up URLs that serve as callbacks, which presumably they will call with all the relevant information in the open (i.e. as URL parameters). This is another failure, since it’s entirely feasible that a user can fake (or manipulate) a successful request (given access to the initiating POST), which will force my system into a processed-payment flow if I don’t do any additional checking against the SOAP endpoint (which is, thankfully, possible).

I was called by their technical director, and while he conceded that I had indeed raised valid vulnerabilities, I was instead offered an explanation about the limited use of my service key, and the mitigating function calls available in the SOAP interface to verify the transaction status. That’s not good enough. An analogy here is my bank telling me I don’t have to worry about keeping my card PIN secret, as long as I keep my daily limit low and call them when I suspect fraud. It doesn’t fly. It leaves me, the customer, open to being unknowingly complicit in fraud which can only be mitigated by retrospective checks. This completely breaks the nature of the 100% automated straight-through processing towards which I’m working. Any human intervention is too much, and every check that needs to be done is another possible point of failure. Additionally, having access to the actual values that constitute the contents of an HTTPS call gives a competent attacker a massive advantage towards identifying correct decrypted bytes while guessing keys, and from there it’s a simple leap to decrypting the entire process and launching a man-in-the-middle attack. This is not a hard problem to crack any more, and OpenSSL has been the subject of much controversy and bugs in the last year or so, with the last two really big vulnerabilities only recently patched. Furthermore, that this comes from a representative of one of the biggest payment providers in South Africa, and a player even in international markets, speaks volumes about the proliferation of internet crime and calls into question this provider’s attitude towards combating and preventing it.

So, what is a better architecture for this? There are two major options here. Firstly, they can extend their SOAP endpoint so that all the communication is initiated and handled on my server side, away from the user, and I simply capture the required details (card number, CVC etc.) from a form that I build and pass them on to the SOAP endpoint for processing. One problem with this method is that I then need to become PCI-compliant, since I’m handling card numbers. The second problem is that my own web-request handlers become subject to waiting for outgoing web requests to complete; also something to be advised against.

The second option is to only initiate the payment flow via the SOAP endpoint. I would basically pass all the relevant details (service key, amount, transaction reference) from my server side, away from my users, to a SOAP function. This will leave the initial request handler dependent on another web request, but you have to start somewhere. Their SOAP endpoint then returns a single-use, timed transaction key with which I can seed a simple web form with a single hidden input field (the value of which means nothing to any user or criminal). When that form is then POSTed, that transaction key identifies me and my associated transaction details on their side, and the process carries on as it currently does.
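A rough sketch of that suggested flow (all interfaces, types and names here are hypothetical; Sage Pay's actual SOAP interface looks nothing like this):

```csharp
// Hypothetical abstraction over the gateway's SOAP endpoint.
public interface IPaymentGateway
{
    // Returns a single-use, timed transaction key.
    string CreateTransaction(string serviceKey, string transactionRef, decimal amount);
}

// Runs server-side only: the service key and amount never reach the browser.
public class PaymentInitiator
{
    private readonly IPaymentGateway gateway;
    private readonly string serviceKey;

    public PaymentInitiator(IPaymentGateway gateway, string serviceKey)
    {
        this.gateway = gateway;
        this.serviceKey = serviceKey;
    }

    // The returned token is the only value seeded into the browser form.
    public string Initiate(string transactionRef, decimal amount)
    {
        return gateway.CreateTransaction(serviceKey, transactionRef, amount);
    }
}
```

The browser form then contains a single hidden input holding that token; tampering with it buys an attacker nothing, because the amount and my identity live server-side with the gateway.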

The second option is clearly superior. It leaves Sage to handle all the sensitive stuff inside their form flow, but the transaction initiation is secure and out of reach of fiddly hands. I’m not sure who came up with their current architecture’s initiation flow, but it certainly does not stand up to even a small amount of scrutiny, and is actually laughable**.

As for the hand-off; a similar strategy should apply. A hand-off token should be the only thing sent back in the callback (and there shouldn’t be separate callbacks for success and failure), my system should then retrieve the transaction state using said token and my service key. This token is never exposed to any users, and so they cannot manipulate the hand-over. Basically, you never push sensitive data. This is a basic security concept.

At this point you might note that I can still do the POST call server side as suggested in my initial email, accept the Sage HTML page in the response manually, and then return that directly as my controller result instead of injecting it via JS. Yes, that would allow me control of the initiation, but it doesn’t address the poor service architecture design, and it doesn’t address the additional processing and web requests required at hand-off at the end. It also doesn’t address the vulnerabilities that are so obvious in their actual documentation example, which junior programmers would follow without questioning it.

After inquiries I found that PayU implements the above suggested solution almost verbatim. PayGate still hasn't responded to my request for documentation at the time of publishing, but PayFast seems to have the same implementation problem as Sage. So before I take the plunge and sign up with another provider, I need to seriously evaluate the technical implementations of the providers considered. If you need to implement an online payment solution I suggest you do the same. This is really important since South Africa has recently been crowned the cyber-crime capital of Africa (not surprising since we’re also the biggest economy on the continent), but we’ve been in the top ten in the world since at least 2012, and third since 2013. Security is no joke.

*you’ll note that I mention a JS-based work-around. This could be implemented for the p4 input too, but it’s not a sustainable solution anyway.

**as proved when reciting this story to my friends.

I’ve been privileged to recently be part of a Unity project at Global Kinetic. Game development and the intricacies specific to that discipline are something I really enjoy, but it also reminded me of my philosophy of first principles and native code development.

Cross-platform these days is almost a bandwagon, again. Five or so years ago, when Xbox Live Indie pushed the indie-game development scene pretty hard, Unity was already the market leader, with the ability to compile C#, JS and other code files to Java, Objective-C and other native forms for many different target platforms. Everyone and their sibling was churning out iPhone and Android games faster than the journalists could punch out reviews. Lately there is a surge in Xamarin with Microsoft’s acquisition and license-free inclusion of it in Visual Studio. Unity has also recently announced their partnership with Microsoft, and we’re all hoping that this will serve to catch them up with the latest version and features of C# through IL2CPP and unlink the ball and chain that is the outdated Mono. So are these two cross-platform suites now the ultimate development space for a second time? Is something like PhoneGap even still relevant?

First though, let’s consider why cross-platform. What are the benefits? For the most part, business owners and sponsors are immediately sold on the idea of develop once, launch everywhere. It’s billed as cheaper and more cost-effective. This is essentially the only reason cross-platform is ever considered.

As for problems, probably the biggest technical one is the notion of least common denominator. And even though this is also a problem with implementations of technologies like OpenGL, I’m not referring to platform-independent tech here, since my second point is not applicable to platform independence in general.

Let’s be clear: when you go cross-platform you are making yourself dependent on something over which you have no control - the black box that is the cross-compiler. Why is this a problem? Firstly, the developers of said black box are also looking at their own profitability and efficiency, and they will support only that which they can support on all their target platforms. Straight off the bat you miss out on exclusive platform features. Secondly, OS and platform updates introduce and change features, and your cross-compiler will never be immediately ready with support for these changes. Unless you’re writing a very basic application, you’ll need to leverage some of the latest platform enhancements or OS/SDK updates. This is when your cross-compiler of choice starts coming up short, having not yet implemented certain (or the latest) features of this SDK, or entry points in that API. At best you will need to plug in native platform code to make use of the specific or latest SDK features to solve your problem. At worst you have to wait for an update to the cross-platform compiler before the issue can be resolved.

There are other differences too, like JIT vs AOT for instance, but let’s move on to one of the most important aspects of software development - the people.

If you’re a small company or startup you’ll have a small, dynamic team that’s amped for success at all costs. That’s fine, and your app or game will do OK in its first year or two coming off a Xamarin or Unity base while you’re still in the “launch early, fail often” phase. But what if you’re an established development house with teams of platform developers? Are you going to force them into C# development? Or reduce them to simply producing XML layout files, only letting them out to code native every now and then when it suits you? That’s both demoralizing for the devs and disingenuous with regard to the sustainable-skills rhetoric at every company meeting; certainly every one of those platform devs will eventually resign. And then when the next bandwagon comes by you’ll have no one to turn to when you need platform developers again.

I feel that selling your whole organisation out to cross-platform is incredibly short-sighted. Though it might be cheaper and more efficient early on, the longer-term costs of rehiring, reskilling, basic troubleshooting and losing an effective grooming and mentoring culture are far higher, but too easily discounted in the excitement of jumping onto this latest bandwagon.

This post was co-authored with Slyfox.


Can you still read a wiring diagram from school? I tried. Also had the car out on a back road for the first time in almost 6 months.

After all the recent calamities I was keen to get the car properly on the road. I drove it around the neighborhood and to work for the first time on Tuesday and then again Thursday. Thursday night it was dark by the time I went to put the car away, and I found that none of the back lighting/illumination in the dashboard was working. Not a catastrophe, but we’re approaching winter and you never know when you will find yourself in the twilight. My initial thought was that the bulbs all broke when the cluster dropped. So out it had to come to test it, but the bulbs were fine. So I broke out the wiring diagram and started measuring all the points.

Some time later I had figured out that half of the circuit (red/black) was intact, measuring 12V against ground on ignition. There was definitely power. But I measured nothing on the other half of the circuit (red). After some head scratching I contacted an electronic engineering friend (who helped me in that first week after getting the car) and also started searching through forums. My friend was first over the line with his conclusion that the problem was the dimmer switch. But I don’t have a dimmer switch. And right as I replied to him I happened onto a post on the UK owners club forum mentioning that some later Eunos models had blank-offs that complete the circuit and need to be plugged in.

There are three additional switch locations to the outside of the driver’s position. In later models and in US-spec models this is where things like cruise control and the dash dimmer were located. In my car this is where the fog light switch is, along with two blank-offs, one of which has an electrical connection at the back.

The wiring diagram, and the end result - the little blue plug that fits into the big white connector.

When I was putting the dash back (all three times) I couldn’t find the home for two plugs. The white one didn’t fit in the only white empty connector there was, and there was no blue connector under the dash whatsoever. It turns out that, one, it was the small blue plug that fit the empty, large, white connector. And two, this plug didn’t need to be unplugged to remove the dash at all. The lesson here? Take notes when you disassemble, so that you know how to put it back together again.

So with that sorted I took it for its first proper shakedown over Malan’s Hoogte. It’s a small, narrow equivalent of a B-road from Durbanville to the entrance of the Fair Cape farm. It’s rutted in places, but it’s quiet, so if there were any problems I wouldn’t find myself in a dangerous situation. There weren’t, except for the new EBC brake pads that smelled terrible! They’ll settle in soon enough though. It was a fun drive, but I must say that after 6 months I have to really get used to the car again. I’m so spoiled now with the CX-5’s level of comfort and ride quality that this old MX-5 now feels like a right old go-kart. It’s not a bad thing though; it’s still a visceral experience. But the old 1.6 is sluggish in comparison and I really struggle to heel-toe this car. It’s far easier in the CX-5. Still, I wouldn't swap it for anything in the world, but today and tomorrow my wife has it because I'm on doggy duty :(

It's dirty from the months in the garage putting it back together again. Once I get the roof replaced it'll get a wash.

A few cock-ups

Posted on 2016-03-30 in The Blue Car

The project suffered some setbacks, one which is pretty big and commands even more unforeseen expenses to correct.

After I sorted the carpet and dashboard I decided to test the instrument cluster and head unit. The cluster checked out fine, but when I started connecting the head unit I realized that I had not pulled out the antenna cable. It was stuck somewhere underneath the carpet next to the tunnel, under the dash. What a cock-up. So, I started to unplug just enough to get sufficient space for my hand to get in underneath. This involved removing the instrument cluster again since the speedo-cable is the one item in this whole thing that has the least amount of play. I struggled a bit, and when it finally came off the cluster slipped out of my hand from the force and bounced onto the floor of the car. This caused the rev-counter needle to slip past the stop-pin, and just sort of dangled there... another cock-up!

The cluster with the clear cover removed and the rev counter still in limp mode.

I got the antenna cable out and cut my hand pretty bad in the process, and then set about fixing the rev-counter. This was pretty easy actually, and had the thing back and tested in no-time. The head unit connections also checked out fine.

After all that I started preparing to put the roof back. The roof of the MX-5 is actually pretty complicated, and was also the only convertible roof of its time that tucked in under the body. This means it comes with a gutter tasked with leading any water away into drain holes located on either side of the car, exiting just in front of the rear wheels. It also means there is a lot of fitting and lining up to do when putting it back. It can be done by one person easily enough, but of course it's easier with two people. That said, I attempted it alone and promptly cocked it up.

I had to rotate the roof to get it in the correct position for fitment, and did this in the air while holding it. During this manoeuvre I didn't keep a firm enough grasp on all the struts, and it slipped out of my fingers and opened up. Except, it didn't open with the puny force of my arm pulling it, but with the mighty force of gravity, popping it open like a parachute. This literally pulled the roof lining out of its fixture at the driver side along the window line. A third cock-up! So now I have to get a replacement, at considerable cost, fitted. To state it eloquently, I was rather disappointed.

You can see where it tore out of the frame fixture. It's not an abysmal tear, but it will leak and I'm concerned that moderately fast driving will simply rip it clean out.

With that job well in hand, I moved back to the interior to start putting back the small things like the glove-box. As I peered in under the passenger side of the dashboard I noticed that one of the clasps around the heater box wasn't fastened. I couldn't remember if I had even loosened it, but it didn't matter. The dash had to come out again! I had lost count of the cock-ups.

By now I was a bloody expert at removing the dashboard of the MX-5, and had it out, the clasp (and the heater box bolts) tightened, and the dashboard back in, in under 30 minutes. Job done. Then I slowly set about putting back the steering wheel, the cluster cover and the tunnel cover. The latter got new genuine leather shifter and handbrake gaiters, which go a long way towards lifting the tired old plastics out of their gloomy existence and go perfectly with my old leather gearknob.

The final exterior bit was the chrome fittings where a hardtop would fasten. These have rubber gaskets underneath them which were absolutely knackered. I had to cut new ones before replacing the fixtures. I'm now ready to take the car to the motor-trimmer to get the roof done, and then it's back to the paint shop to finish the stone chips and final touch-ups and polish.

The one hardtop fixture with the old gasket at the top, torn and dusty. The new gasket is twice as thick (it's all I could get from the rubber place), so the fixtures are now slightly elevated from previous fitment.

The last month or so saw some serious work towards completing the car's interior, including sealing and sound proofing. Finally I'm on the home stretch to complete this restoration!

During the final steps of assembly on the front and rear ends, I was contemplating putting back the rubber seals on the windscreen frame and rear deck. It was pretty clear that I wouldn't be able to do this alone; trying to pry open the lip while also pushing the sealant in with a caulking gun at the same time was impossible. But first I had to clean them out. The old sealant had to come out, and along with it years of dirt and grime. This stuff isn't like silicone, which sort of hardens and dries. This sealant is more tar than anything else, and remains in a perpetual sticky state. It's a real mess to work with.

The windshield frame rubber now fits much better than before. Loads of sealant went in here.

After some help from a friend and my wife the two rubbers were prepped, filled and pulled back into place over their mountings and plugs. The windscreen frame rubber was especially hard to work with since it's really floppy and pretty long too. During fitment I tore it a bit around one of the plug holes. To compensate I simply pushed more sealant in. The rear deck seal was much easier. This one is metal backed and doesn't flop around and touch and smear sealant on everything in a 5 meter radius. It was also much easier to fix into place. It's worth mentioning that getting the sealant in wasn't easy. It has an astronomically high viscosity, and pressing it out with the caulking gun is especially difficult and really hurts your hands after a while. Not to mention all the flexing you have to do is really tiring. I noticed on an MCM video that they were using an electric caulking gun. I don't know what those cost, but I would imagine it's much easier to work with. So with that done, I started doing the sound proofing.

The rear deck rubber was easy.

When I took the carpet out there was a lot of insulation material that came off, either stuck to the car's floor or simply falling out. Of course this stuff is there for a reason, and unlike some motoring journalists I don't want to turn my car into a stripped-out race car. So after contemplating taking it all to a motor-trimmer, some further research led me to Dynamat. It's not expensive, and really easy to fit. The biggest part of the work was cleaning out the interior properly - the one-on-one session with the methylated spirits was really interesting. So, hopefully this stuff is as good as they claim, and it also helps to shield against permeating heat from the drive-train.

Wear gloves when applying this. The aluminium lining cuts like a Cape gangster.

Next I put the carpet back, a process during which I cut one finger to shreds on the mounting stud over which the carpet had to be fed, going in underneath the heater box. Overall it was a much easier fit compared to taking it out. I made sure that the speaker wires for the seats were pulled out, but I neglected to pull out the antenna lead. This would prove to be a problem later on.

The dashboard was of course a bit more trouble, since it's rather unwieldy, and putting it back means checking and placing a lot of plugs, looms and other bits like the speedo cable in the correct positions while maneuvering the dashboard. Fortunately this time around I had removed the steering wheel beforehand, which made the process a lot easier. After it was in place I started plugging things back - which almost immediately resulted in that particularly concerning burning electronics smell. Eeeek!

Being the amateur that I am, I had neglected to disconnect the battery beforehand. This wouldn't have been a problem if not for the alarm. Since it's on an always-live connection it immediately started polling its sensors and switches, even though I had it bridged to not immobilize the car. However, with not all the plugs and sensors connected yet, some sort of potential built up which literally turned one of the sensor wires into a heating element and promptly started melting off its own insulation. Luckily it was isolated to that one sensor, and didn't appear to have damaged any other looms, plugs or whatnot. A bit of a "phew" moment for me there.

I have now started preparing the reassembly of the roof, which I'll detail in the next post. And then there is testing all the systems on the car too, electrical and otherwise. Lots to do still, but I'm so excited to be almost finished with this project!

The internet of SMMEs

Posted on 2016-02-19

In this modern age it’s an assumption that any and every business has a website, but that’s hardly the case.

The younger generations look for businesses and services almost exclusively online, and we often end up on directory websites that list outdated contact information and addresses; sometimes even businesses that have closed down. For the most part, these directories do not offer a service which keeps your information up to date, and your day-to-day operation makes it difficult to remember or find the time to go to each of them and update it yourself, if they even have such facilities. And then to add insult to injury, do you remember what your username or password was? Is the directory's login page even secure?

The internet is a difficult place to grasp, and usually a small business has to rely on third parties to assist them in the process. In some cases, internet access for your business via one of the ISPs comes bundled with hosting, web design and domain registration services thrown in. The domain registration expires after a year, and suddenly you have to start paying to renew it. Most businesses don’t remember or decline to do this, and so their website (and associated email addresses) all stop working. Suddenly all your business cards are out of date, none of your former customers can get hold of you unless they have your phone number, and that only lasts until the day you move to different premises.

So unless you have representatives that go around and meet clients (usually other businesses), your business has suddenly disappeared off the face of the earth to the online public. Google will in due time remove your address and associated links from its search results, and once you’re off Google, you are off the internet. It’s as simple as that.

So what should a small business do? Here are a few tips:

Don’t use your cousin’s son who is in IT

This is probably the most important item on this list. Using an individual with only an oral agreement is a recipe for a disappearing act. This individual will forget the login details to the registrar’s website, or could move overseas (or worse) and become unavailable, and your only line to your online assets is severed. Use the professionals. Use the services of the ISPs if they provide it.

Keep credentials and other records safe

Always store credentials to the various websites and portals that you use safely. Use different passwords for different websites, and two-factor authentication where possible. The best practice for credentials is to use a password manager. There are both online and offline versions of this. To keep records safe, create a DropBox, Google Drive or Microsoft OneDrive account and store your important documents online. These can all synchronize to your local computer or laptop so you don’t need to manually upload it every time, and you can make backup copies of it on DVDs or otherwise.

Renew your domain name

Unless you renew your domain name it will expire and your website and emails will stop working completely. To register or renew a domain costs R100 per annum (in 2016) if you do it yourself at a registrar. This is a very small amount of money to pay towards the upkeep of your online infrastructure. There is nothing less professional than an advertisement stating you have 50 000 happy customers, next to a free webmail email address.

Secure your website

Internet security is no joke, and there are many misrepresentations in the press about it. In 2014 South Africa was rated sixth in the world for cyber crime. If you have an online shop or a user section, it is paramount that you provide a secure portal through which customers can log in. But more than that, these days it is important to secure your website regardless*. There are a few reasons for this:

  • Legislation (POPI) provides for harsh penalties (10 years in prison) should your users’ data be compromised and you did not take appropriate measures to secure that data.
  • If it is not secure, you cannot ensure that what your users see in their browsers when they attempt to look at your website is actually your website.
  • If your site (typically based on WordPress) has an administrative login and is not secure, your own admin password can be stolen and your website can be defaced and taken over.

All of these problems arise because of “man-in-the-middle” attacks, where someone impersonates your website. This is typically how phishing attacks happen. It means that the bad guys will use your business’s good name as a front in order to upload malicious software to your customers’ phones and computers and steal their passwords and email addresses or other identifying information. You can get an issued SSL certificate from about R1000 per year, depending on the authority (there is also a new free authority, Let’s Encrypt, which is in beta). Again, this is a small amount to pay for peace of mind and surety to your customers.

Email Backups

If you make use of your ISP’s email facilities, make sure that they keep an online backup of your mailbox (much like using email on the web at Gmail or Apple). Never rely on your computer alone, because computers crash and hard drives break, and you will lose your entire inbox and email history. Alternatively, register for Google Apps or Microsoft Office and make use of their cloud-based solutions. The same goes for documents.

Social Media and other advertising sites

Get your Facebook page set up and use it. Let people check in when they arrive at your business. It’s free, it spreads organically and it’s relatively cheap to promote. Similarly, online advertising sites are very useful for certain types of services, not just physically selling something.

Like anything, keeping an online presence active and engaging will cost both time and money, but rest assured that as new generations arrive, things like physical directories are in less and less circulation, quickly being replaced by social media and online searches. Compared to other forms of advertising, an online presence is so cheap it’s a no-brainer. A radio advertisement is about R350 for 30 seconds for one airing, excluding the cost of actually recording it. That alone is the cost of three years of your domain name. In South Africa we are slightly behind the trend of the bigger West, but know that soon it will be a case of: if you are not online, you are not in business.

*This website will soon be secure too, once I get my certificate from letsencrypt. Until now, securing a personal website has been prohibitive due to the diligence checks.


The first ever BMW M2 was launched last week, and they coaxed a lot of journalists into Laguna Seca for the event. On the surface this car sounds like a driver’s dream car. But in the South African context I’m not so sure.

The headlines around this car are that it doesn’t share the whole engine with the M4 and it doesn’t share the twin turbos, but it does almost match the M4’s 0-100 time at 4.3 seconds with the DCT gearbox, and is 0.2 seconds slower with the manual. For those really interested, the engine shares pistons and bearing shells with the M4, and has a single twin-scroll turbocharger instead. But that’s not really important. The really important bit for me is the manual gearbox. What this does is cement the car’s purpose as a driver’s car.

So, it’s my kinda car for sure. Unfortunately, here in South Africa I doubt it will be the case. Of course there hasn’t been any official announcement or launch yet, but I will bet you any money the complete M2 consignment has already been sold out, and that not a single one will have a manual gearbox. How can I be sure?

You don’t walk into a BMW dealership in South Africa and offer cash for a specific model with your specific requirements, unless it’s a low 1, 3 or 5 Series model. Because our small market always gets limited numbers, the good cars have all first been offered to the BMW-lifers exchanging their old 135i models, who couldn’t care less about driving a manual car. Consequently they are always sold out, or there is a waiting list of more than a year well before launch, and none of them will have a manual gearbox. I can also guarantee that not one of the press demonstrators will have one either, because buttons and DCT programs and launch control!

It’s a shame, and I decry the demise of the manual gearbox. My recent purchase of a family bus was guided by who offers a manual, which meant I had to settle for the base model. Subsequently I hear from the dealer that other customers are complaining they don’t get good enough mileage, whereas I get very close to the claimed figures and get to rev-match and heel-toe while doing it. For those looking for a simple and clear driver’s car, the options are getting limited in modern times. You can still get the WRX, the GT-86 and the MX-5 (also a year’s waiting list now) in manual, although only the Subaru offers real saloon practicality. Then you jump to the Porsche Boxster Spyder, which you might still be able to get. There’s really nothing in between and absolutely nothing beyond.

So, as good as this car sounds, you can forget about it.

The R60 000 t-shirt

Posted on 2016-02-12

You might remember I tweeted in January about having to pay R58 500 for a t-shirt waiting for me at the Post Office. Well, I finally got it.

I ordered the NAlware t-shirt from blipshift in November. This is a single-print-run business where users send in art suggestions; there is no guarantee of a second or third print run. Shipment was scheduled for early December, and it arrived at the Johannesburg International Mail Center on the 19th of December.

Fast forward about three weeks and my wife went to the local Post Office to pick it up, where they pointed out that they needed to collect R58 500.60 on behalf of Customs for import duty and VAT. That’s about $4 300 for a $15 t-shirt. So naturally I started the process of contacting Customs to try and sort out this pretty hilarious mistake.

First I called the SARS call center. I explained the situation to an agent, who promptly burst out laughing and then asked me if I had actually paid R60 000 for a t-shirt. I replied “Of course not. I’m not Jacob Zuma.”, at which point she laughed even louder. Anyway, I got a phone number for the JIMC, who instructed me to just contact the post-master and get the parcel returned.

The post-master stated that they need instruction from Customs to return the parcel, and that I should contact the local Customs office in Goodwood. He gave me the direct line of a Customs agent there. And so the run-around began.

I tried for a week to get hold of anyone on the phone. The direct line would simply redirect to the switchboard after 5 minutes of ringing. On the third day, the lady at reception tried four different numbers, and eventually I ended up with a domestic parcel agent, who tried two more numbers, and then finally just read me a number back that I should try later. It was the exact same direct number I had started with. So now I had six new numbers to try, and try them I did. Every morning while sitting in traffic I scrolled through the call history on my phone and tried every single one again and again. Finally I reached a lady at the Customs office who said I just need to print my invoice and proof of payment, take it to the Post Office and instruct them to send it back with the papers attached.

This I did, and two weeks later, today, I picked up my t-shirt for a total customs fee of R163. To be honest I had expected the ordeal to be much worse, and apart from the “we don’t hear telephones ring” mentality at the Goodwood Customs office, everyone was really friendly and helpful.


Work continues amid some setbacks on the A-pillar and boot-lid. I’ve got the two ends of the car almost completed, but there's so much building...

The rear of the car is now almost reassembled, including the rear lights, reflectors/side indicators and the license plate cover panel. I had some trouble with the rear mudguards. These were never completely fastened, and I wanted to reattach them properly. It took some bending and cutting of clips to get it all to fit and tighten nicely. The only other problem area is the boot-lid brake light. When I took it off, the water seal around it tore off completely, and of course it’s not listed as a part in any Mazda catalog anywhere, although it does appear on their PDFs with a part number. I can special-order from the UK, or try and make something locally. Of course, I was never going to reuse the old one anyway, so this would have been a problem all along.

Then, as I was preparing to put the windshield frame strip back, I noticed that there was a missing plug in it. When I removed it, I thought I had done so with all four plastic plugs in place, but alas, I didn’t check correctly and one had remained in the A-pillar. This isn’t a problem in itself, but as it turns out it had torn out of the rubber long ago. I removed the plug from the A-pillar and found a load of rust underneath it, in an area where I really don’t want rust. Water has been leaking through the tear for a long time. I had to sand and treat this area too, since it’s critical to prevent further deterioration of the A-pillar. The rust seems to be mainly around the plug hole; tapping with a knuckle around the hole and down the pillar doesn’t deliver dull or soft sounds, which would probably indicate very thin or eroded areas. It was quite a mission to clear this off, and I used a Dremel bit extensively rather than hand-sanding.

The rust at the top of the passenger A-pillar around the plug's hole.

Moving towards the front of the car, I had some surprises. The shop had not attached the front bumper, to prevent damage during transportation (the tie-downs inside the mouth made them nervous, I guess), but they had also removed the plastic support for the spot lights, bumper stays and under-tray. This made it easy to take off the tie-downs to clean up, one of which would be replaced by a license plate bracket. While waiting for the paint to dry, I reattached the bumper completely, and set about the front parking and indicator lights. The new clear side indicators look ace. Unfortunately, due to budget constraints I didn’t get clear side reflectors too. I will definitely get the stock orange ones replaced as soon as possible.

The light clusters though, that was something else. You might remember I had gotten the black smoked clusters to replace the stock clear-and-orange clusters with. In the past these didn’t support a separate indicator light, but recently they started offering these clusters with the indicator hole drilled out and a special rubber fitting for a separate indicator bulb. What I didn’t expect was that these clusters come with bulb harnesses and wiring already fitted, with the caveat that the main bulb is wired as the indicator (due to the old configuration) and the separate indicator bulb is not connected. Fortunately, because of the state of my car’s original bulb harnesses and wiring I had ordered replacements, expecting to use them with the new clusters. So now I had two sets of harnesses and wired plugs for each cluster. The stock indicator holders don’t fit into the drilled hole in the new clusters though, so after some cutting and soldering I built a mix-and-match set for each cluster from the two sets of harnesses, and it fits rather nicely. The smoked clusters look absolutely super, probably the best buy I’ve made for this car to date.

One of the new light clusters with the main bulb wired as the indicator, and the state of my original harnesses.

After that I started measuring out the passenger-side tie-down and designed the plate holder to put in its place. This plate holder will go all the way out of the mouth of the bumper and present a suitable area to attach an import-sized plate onto. This was a very interesting part which I really enjoyed doing, partly because I used to do 3D modelling as a hobby, and I’m pretty anal when it comes to things like rounded corners and bevels and so on. Of course, this morning it turns out that using Blender to model it and the export plugin to save to the .DXF format is pretty useless, and the guys doing the laser cutting of the stainless steel can’t do anything with the data set. I guess I’ll try something like SketchUp next, otherwise I’ll just get a trial version of AutoCAD.

Measuring out the tie-down as the base of the bracket, and a render of how the bracket will look when completed.
The front of the car with the red tie-down, the new light clusters and a cardboard template for the bracket.

This is slowly turning into more of a build than I anticipated, but I really shouldn’t be surprised about that actually. This car went from being an “occasional maintenance” project to a full-on “project car”, so I guess it’s just a continuation on that theme. More next time!

I got the car delivered back home in the third week of January. It looks so good, and I couldn’t be happier with the outcome.

Now the rebuilding process is underway, and there is so much to do. Pulling it all apart is easy, really. I didn’t care so much as long as everything was stacked and sorted appropriately, and nothing broke. But when you put it all back again, you focus on the detail, on the alignment, on the fit, on not stripping old rusted bolts.

First off, I have to deal with what the paint shop didn’t deal with, and what they left me with. After the clean-up I found a strip of rust on the passenger floor which wasn’t obvious before. I sanded that down and sprayed it. Secondly, there were the remains of the rear lights’ gaskets. I had hoped that the paint shop would clean and treat these areas for me, but I wasn’t explicit enough about it, and they sprayed around them. On the driver side it was OK, and only required light treatment. On the passenger side however I had to chisel it off, then scrape the remaining bits off, and then sand it down. After this I sprayed both sides too so that the new seals have a clean smooth surface to sit on. The rear bumper was only partially attached, and I had to get innovative with some extra clips the detailer gave me when I pointed out that the rear mudguards weren’t attached. And of course, I have to attach those too.

The passenger rear light area work: the old gasket partly removed, sanded and resprayed.

The paint shop didn’t attach the front bumper on delivery (I took it home separately along with the mudguards) for paint-protection reasons, and this turned out to be a blessing in disguise. It made it very easy to take off the two front tie-down hooks. These need to be cleaned and painted too. There’s also a myriad of little consumables that I want to replace; clips and plugs and grommets and so on.

The devil’s in the details as they say, but on the whole I’m very excited to get this done now, before the turn of the season still. Stay tuned!

2016 brings reassembly

Posted on 2015-12-30 in The Blue Car

So the car stayed in the paint shop over the festive season. They couldn’t finish the paint job and heat-treatment in time before they closed up. I’m not in a hurry to rush this at all though, and I had a few small things to prepare anyway.

The first item was a cleanup of the fog lights. These are the original JDM-spec lights that were apparently OEM-fitted only from 1991 to 1992. They work, but the outer plastic covers look really tired. I got a kit that did a neat job of clearing up the housings. They are still quite gritted and have a lot of pockmarks, but hey - they saw 25 years of gravel road after all.

The effectiveness of the cleaning kit is obvious in this photo.

Secondly, the galvanised plates fitted inside under the carpet covering the fuel tank and hoses, trunking and rear shock tower mountings, and also the plate in the boot covering the fuel hose, had to be cleaned. Three of them succumbed to moisture, probably from a leaky roof, and had started to rust in a few spots and also had a white mouldy chalk buildup on them. Vigorous sanding was applied and then I spray-painted them with high-heat black. The main lid comes with a lining and some sound-dampening material stapled to it which I don’t want to remove, so that’s why these weren't re-galvanised. They look so good now that it’s actually a pity no-one will see them!

The one cover plate before and after sanding (I also got into the rust dips with a Dremel), and later the spray job well in hand.

So that’s what I did while the car was in the shop. After I get it back, one of the first things I want to do is manufacture a plate bracket. Here in SA it’s required to have a front number plate. Previously I didn’t care much and simply stuck it to the nosecone with double-sided tape. This is of course pretty amateurish, and really spoils the look of the car quite a bit. Fortunately my car still has both its tie-down hooks, or baby teeth. These I want to remove, clean and then spray-paint as well. Then, I want to make a bracket that attaches to one of these hooks and provides a mounting place for my plate off to the side, next to the fog lights. I saw a plate mounted in that position on a VW today. It worked to some extent for the City Golf, but it’ll look really cool on the MX-5.

After that there is the air vent system under the dash. The heater box has sponge lined around the edges that the vent pipes press into when the dashboard is fixed in place, making a seal. These sponges are pretty ragged, resembling something akin to dried sea anemone. I need to remove it all, clean it and line it with new sponge. I'm not sure if there is a specific type of sponge to use for this yet - I got a tip about a rubber and what-ever place nearby from which I can buy something suitable.

Even though the plate bracket doesn’t prevent reassembly of the interior from proceeding, it’ll still take me a few weeks to work through these two items. First though, a New Year's party, so here’s to 2016 and all my giddiness to start reassembly.

Talking about it

Posted on 2015-12-17

I've added the DISQUS platform to my site.

Comments are something I'm pretty wary of on the internet. For example, when News24 disabled commenting on their site the experience was hugely improved. But it's not as simple as that. EWN still has commenting, but instead of their own system like MyNews24, they leverage the DISQUS platform. This means that everyone's comments are linked to a public profile that is visible across the entire internet, not just on News24. There's that extra layer of identity which somehow seems to deter the hate- and one-time-post accounts.

So I've decided to incorporate the same here. Obviously, the DISQUS embed can only show from the single blog post page, and I don't want to clutter my front-page with a stack of comments in the middle. So you'll have to click the links to see and use it.

I'm looking forward to hearing what people think about my projects, or my github, or whatever. Teach me something new, please!

Coding Challenges

Posted on 2015-12-14

It’s been a while since I just coded something for the fun of it. And it is the first time I’m taking someone else’s challenge.

A colleague recently linked to Advent of Code, created by Eric Wastl. I don’t know how recent it is; I’m probably way behind the rest of the population on this one, but it took my fancy. The challenges are interesting, well structured and in some cases pretty hard. You can find my solutions on github.

I’ve learnt a few things about C# during the bits that I have completed, like dealing with unsigned integers for a start - the sort of thing that never really crops up in business-oriented software. It’s also an ideal opportunity to practice TDD and red/green/refactor outside of a pressure environment.
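To give a concrete taste of the unsigned-integer quirks I mean, here is a generic sketch I put together for this post (it’s not lifted from any of my actual puzzle solutions):

```csharp
using System;

// Generic sketch of unsigned-integer behaviour in C# - the kind of thing
// that rarely surfaces in line-of-business code but crops up in puzzles.
class UnsignedDemo
{
    static void Main()
    {
        // Arithmetic is unchecked by default, so overflow wraps silently.
        uint max = uint.MaxValue;          // 4294967295
        uint wrapped = unchecked(max + 1); // wraps around to 0, no exception
        Console.WriteLine(wrapped);        // 0

        // Right-shift is arithmetic on int (sign-preserving) but
        // logical (zero-filling) on uint, even for the same bit pattern.
        int signedValue = -8;
        uint sameBits = unchecked((uint)signedValue); // 4294967288
        Console.WriteLine(signedValue >> 1); // -4
        Console.WriteLine(sameBits >> 1);    // 2147483644
    }
}
```

None of this matters when you’re summing invoices, but when a puzzle hands you a 32-bit hash or a bit-twiddling step, the signed/unsigned distinction suddenly becomes the whole problem.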

Similarly, I’ve also started building my own implementation of Ken Perlin’s noise algorithm. The link to his original source code from 1983 is here, but I haven’t read or used C or C++ since 1999. Nonetheless, so far I’ve completed the method for one dimension. It’s a great challenge to take on and make your own. Also coming soon on my github.
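For anyone curious what the one-dimensional method boils down to, here is a minimal sketch of the idea in C#. To be clear, this is my own simplified take on gradient noise, not Ken Perlin’s original code: the shuffled permutation table, the random scalar gradients and the newer 6t⁵-15t⁴+10t³ fade curve are all my assumptions.

```csharp
using System;

// A minimal sketch of 1D gradient ("Perlin-style") noise.
public class Noise1D
{
    private readonly int[] perm = new int[512];      // doubled so index+1 never wraps mid-lookup
    private readonly double[] grads = new double[256];

    public Noise1D(int seed = 0)
    {
        var rng = new Random(seed);
        var p = new int[256];
        for (int i = 0; i < 256; i++) p[i] = i;
        for (int i = 255; i > 0; i--)                // Fisher-Yates shuffle
        {
            int j = rng.Next(i + 1);
            (p[i], p[j]) = (p[j], p[i]);
        }
        for (int i = 0; i < 512; i++) perm[i] = p[i & 255];
        for (int i = 0; i < 256; i++) grads[i] = rng.NextDouble() * 2 - 1; // gradients in [-1, 1]
    }

    // Smooth interpolation weight: zero first and second derivatives
    // at both endpoints, so adjacent cells join without visible creases.
    private static double Fade(double t) => t * t * t * (t * (t * 6 - 15) + 10);

    public double Sample(double x)
    {
        int x0 = (int)Math.Floor(x);
        double d = x - x0;                           // position within the cell, [0, 1)
        double g0 = grads[perm[x0 & 255]];           // gradient at the left lattice point
        double g1 = grads[perm[(x0 + 1) & 255]];     // gradient at the right lattice point
        double n0 = g0 * d;                          // each end's contribution
        double n1 = g1 * (d - 1);
        double t = Fade(d);
        return n0 + t * (n1 - n0);                   // smooth blend between them
    }
}
```

A handy sanity check when porting this to higher dimensions: at integer lattice points `d` is zero, so the sample comes out exactly 0 every time.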

The polarizing 2016 Mazda MX-5

Posted on 2015-12-03

Since the launch of the new ND this year, there’s been a lot of heated discussion about the car. It’s had such a polarizing effect, even among staunch supporters of the previous models on the forums, that the thread had to be aggressively moderated for a while.

Even today, in almost every press article’s comments section you’ll find the debate still raging about the low power and the lack of a turbo. You’ll find fan-boys defending it, saying the car has never been about power. You’ll find the hot-hatch crowd saying every modern 2.0L now makes upwards of 150 kW (200 bhp) and that 116 kW (155 bhp) is pathetic. And then there’s the muscle crowd and the tuners. The aftermarket also hasn’t rested on its laurels; already there are all sorts of options for every opinion out there. And as with everything that is subjective - and motoring is absolutely subjective - no-one will convince anyone else of their view.

When I first read about it after the launch, my first thoughts were that this new car is essentially a rebuild of the original car on a modern platform. It has a 1.5L engine for a start, a simple soft-top and it is in the same weight class as the original, compared to the porker the NC was. And this, in a nutshell, is exactly what I wanted out of the new car. I can stop there, since I think that already defines my subjective position on the matter. But of course, markets are different. That 1.5L is not coming to SA, instead we’re only getting the 2.0L full house model. Our market is small, and we have to live with limited options. So now the car I read about is not the car I can buy. This got me thinking a bit about what I actually do with my car, compared to what I expect from the car. And those two things are very different.

Let’s start with the expectation. The general consensus is that the MX-5 is an excellent-handling sports car with a revvy engine. It’s also the most tracked and auto-crossed car in the US by a big margin*. It’s even got its own spec racing series over there. So, your expectation is that it will be a fast car, something to carve up the road with. You expect that you’ll be able to drift it, take it on track days and work some racing mods into it over time. This then is what the 1.5L and 2.0L Sport and Club models in the US are all about. Yes, it doesn’t have major power, but in every YouTube test available it beats the BRZ/FRS/GT86 around the track or drag strip by a couple of car lengths.

In the real world though? What is it that I actually do with my car, my old NA? I drive it to work and back every other day of the week. I go to the shops on a Saturday and stick a case of beer and enough gardening material into the car that it rides on its bump stops on the way back home. I strap our son in his baby seat into the passenger seat and take it over a mountain pass during his nap time on a Sunday. This is so far removed from my expectation, and I absolutely agree with Chris Harris that it’s not a sports car. Yes, we all want to think we are young and want to track our car every now and then, but it doesn’t happen. In the real world with real responsibilities, it’s only a handful of youths and toffs that prefer to spend their money on brake pads and tires. And this new car sits on 17’’ rims, so tires will be bloody expensive. Already there’s a lot of discussion on which brands’ 14s or 15s will fit over the calipers so that you can buy smaller, less expensive tires to track with. See Flyin’ Miata for some import options on wheels and brake kits.

So my ponderings about the new model led me here: I’m all for the ownership experience, and in that guise the 2.0L full house model we have in SA is actually perfect. It’s got all the same kit in it that I use every day in my CX-5. And when I do find myself on a mountain pass, alone on an empty road, I’ll be able to pull 90% of my expectation out of the hat and give it a go with the roof down.

Now the only question is, is it something to replace my NA with?

*Based on anecdotal forum posts

The skeletons in the closet

Posted on 2015-12-01 in The Blue Car

I didn’t think that after six years of ownership the history of this car would still throw up a few surprises.

One of the things that I both love and hate about this car (mine specifically) is that at full chat the bonnet, or hood, rattles and vibrates like crazy. This has always been the case since I got the car. I’m not sure why it does it. I’ve even secured the bonnet with rubber at its mounting points to stop it from destroying the paint. That didn’t help. It lends the car a very unsettling TVR-esque feeling when driving it very hard.

This is one of the experiences unique to my car that is probably a direct result of its history in Malawi and its four previous owners. I’ve learned to love it by now; it gives hard acceleration a character that is dramatic and assaults the senses.

So I was quite surprised when I learnt of a completely new aspect of the car as a direct result of that history: the missing wheel-arch liners. As I now learnt, these came as standard fitment (on NA models at least) between the front wings and sub-frame, helping to keep the dirt and mud build-up out of the wing fixture point at the sill. My car has never had these while in my ownership. In fact, I didn't even know these existed. So I’m fitting new ones, together with aftermarket front mudguards, in an effort to keep the build-up to a minimum and the wing mounting points clean.

This is the amount of dirt from the right side wing's sill mounting. And the rust effect of that prolonged build-up and moisture trap - a snapped bolt and rusted sill.

So what happened to it? Well, I suspect that they were simply destroyed from years of gravel and dirt roads, much like my engine undercover-tray. This had one mounting point left at the back, so the whole thing hung low, almost dragging along the road. That is, until my pirouette the other day, at which point it was practically ripped off by the tall grass.

And then, as a small cherry-taste reminder on top, I found this underneath the fuel hoses cover plate:

Maybe a soft toy? How did it get under the carpet and the cover plate to end up here?

Cleaning Up and Blowing Out

Posted on 2015-11-20 in The Blue Car

After I got the dashboard out I could inspect the interior thoroughly. I needn’t have worried. But there is a lot of cleaning up to do.

In the end it wasn't as difficult to remove the dashboard. A previous owner had an aftermarket alarm fitted, which is probably the worst part of the electronics that still remains. It will be a challenge to put it back again since I had to cut three of its wires. They’re marked for resoldering later. The alarm box itself is also haphazardly hanging loose, so I’ll make an effort to secure it to the inside of the dash.

The exposed firewall and fittings. The dust has permeated everything, and the ECU cover needs to be treated and galvanized again.

Some of the trim does look a little worse for wear after 25 years. My wife is keen to have some of it replaced, but that sum comes to more than everything else I’ve had to order already, and what I still need to order. I don’t think it’s something I’m going to attend to in this round, since it’s mostly cosmetic and easy to replace. Besides, I really want to order new wheels for the car too.

The car is now completely stripped (as far as I can strip it), but the end of the year is fast approaching and I still need to put in an order for the cracked cover panel, among other things. By the time all that arrives the paint shop will probably be closed for the festive season, so I'm going to delay the body work until next year. Unfortunately I damaged the cover that fits over the mounting point of the rear-view mirror. Another item I need to order :/

Pretty pissed off about this mishap.

In the meantime, I'm taking the car to an air conditioning shop to rebuild the AC system. It still operates on the old CFC-laden refrigerant, and it has a leak too. Since the dashboard is completely removed, it is an opportune time to get that sorted out. Hopefully it won’t be too intense; the biggest problem is getting the car to start and the AC to switch on with the dashboard removed. This is required for the shop to test the system for leaks or properly drain and refill it. So, I’m probably going to hot-wire my own car!

Stuck on the interior

Posted on 2015-11-05 in The Blue Car

The toughest part by far to strip is the interior. Especially if you intend to put it back together again.

Since the last post, after I had removed the roof, belt-mould and rear cover plate, I moved on to the exterior. The rear lights, boot lid light, reflectors, side lights and headlights are really easy. However, the water seals around the rear lights are shot and I'll have to replace them. I cleared the boot and found some plastic bowls that got stuck there and forgotten from our Namibia trip. Then I moved to the interior. The door cards are easy too, if you're careful not to tear the clips through the cardboard. This was all the low-hanging fruit: easy to get to, easy to undo and easy to remove.

A reduced back-end and a cleaned-out boot.

While I was removing the roof, I removed the seats to get space to crouch and work in, and found that cracked and rusted mounting point. So I decided that I need to remove the entire interior carpet to ensure that the floor of the car is solid and can get treated if required. The tunnel cover isn’t too tricky, I’ve removed that before to service the shifter boots. Then it was the center console. This comes with all sorts of wiring and connections - the radio, the hazard and pop-up buttons, and the vent-controls. Most of it is undone, but I’m still struggling with the vent controls.

The radio is obviously no longer the original (after five owners that would be a leap), but whoever fitted this one didn't do a good job of soldering the connections. The speakers also need to be unsoldered to remove them - those I'll connect up with clips instead.

The mess behind the radio. Also, note the fire-hazard wool. Not sure if that's OEM or not.

I've had one other casualty so far. My son dropped a piece of his feeding chair (disassembled at the time) through the passenger-side hole in the wing where the side repeater is fitted. It was crucial that I retrieve it. No problem, I just needed to undo the bottom of the wing by the door. Needless to say, a LOT of dirt came out - dirt which didn't dislodge when I sprayed it out with the hosepipe earlier, dirt that had been there for a very long time. This is a major cause of rust, and so of course the one bolt that holds the wing in place snapped straight off. I haven't attempted to remove the wing completely yet, and I'll probably have to drill that fucker right out. Not looking forward to that, or to seeing the full state of that sill.

Some good news though; I also received my first parts order. Super happy with the more modern side light clusters!

Detailing project started

Posted on 2015-10-19 in The Blue Car

The detailing project I mentioned previously has been planned, budgeted and costed.

This car is now 24 years old (if it's a '91 - I'm not 100% sure) and it's really started to show it over the last few years. If you compare photos of when I got it in 2009 with now, it's clear how much the paint has deteriorated. Of course, things like the trip to Namibia haven't done it any favours, but that's part of owning the car in my book.

This detailing project isn't just about looks however. There is just as much an emphasis on proofing the car against another 20 years of use and ownership. Seals, fixtures, trim and then some two or three nice-to-haves as well. So in that spirit I've started working out my budget to see where I can get things done and what should/could wait until later.

The main items to be addressed are:

  • Total body strip, treatment of dents and dings, prep and finally respray
  • Complete chassis prime, rustproof and respray
  • Windshield replacement
  • Various fixture replacements and additions, including front sidelights/indicators and front mudguards
  • Wiring and bulb holder replacements for front sidelights/indicators

Items currently on the longer term plan are:

  • Front lip
  • New wheels
  • Clear reflectors
  • Replacement rear light clusters
  • New exhaust with a 4-2-1 branch

Items on the very long term list are:

  • Flyin’ Miata’s little big brake kit
  • Air Conditioning rebuild/refit to new standards
  • A new idle control valve

I've made some progress on the strip of the body. The first item was to get the roof removed. It's only about 4 years old now and doesn't need replacing yet, so I only need to remove the frame. For easy access I removed the two seats as well, and found the first problem area of rust - one of the mounting points of the passenger seat. Apart from that, the exposed interior of the body just needs to be cleaned properly for the most part.

The seat mount had cracked and rusted. Probably only surface rust, by the looks of it. The roof and gutter belt-line removed. The gutters are super clean and solid - no structural rust issues so far.

Unfortunately, the plastic rear cover plate between the rear lights that holds the license plate cracked during the removal process. It was extremely brittle, which I thought was due to sun and age, but as it turns out this is simply the quality of plastic the OEM part comes in. This adds cost to proceedings, and also extra time to import since this is an Eunos rear cover and not UK/EU spec. Hopefully nothing else breaks.

Once all this is done, I need to remove all the light fixtures, wings, bumpers and door trim.

After the last update in June, I made the decision to change the project from the WPF implementation to an MVC web app instead. This brought about a lot of necessary changes.

One of the implementation strategies I used in the WPF app was to use the entity classes directly in the viewmodels. They come with all the necessary metadata for Entity Framework to know what to do with them, while the viewmodels simply read from the entity's properties without exposing the entity directly to the views. The upshot is that it's a simple step from updating an object to saving those changes back to the database. There are no mappers required and no separate loads required to check for existence in the database. You're always in context and always in-memory. Of course, this app came with a local SQL Compact database, as opposed to running against a hosted API somewhere.

This doesn't work all that well on the web, however. In fact, it doesn't work at all, since you lose context completely whenever a GET or POST request completes. So I've had to update the code significantly to deal with the difference in platform. I've had to introduce mappers (via AutoMapper) to move between entities and viewmodels. I've had to introduce a manager layer to better marshal calls into the repository, and of course the repository layer around EF has had to change significantly to first find an instance of the entity in the context before adding it if required during saves. All this now also comes with a custom IoC implementation (that doesn't hinge off a bootstrapper pattern).
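The find-an-instance-before-adding save logic is really just an upsert against the context. Here's a minimal, language-agnostic sketch in Python (the `Repository` class and its dict-backed "context" are invented for illustration; they are not the project's actual EF code):

```python
# Minimal sketch of the "find in context before adding" save pattern.
# Names and the dict-backed context are illustrative, not the real project.

class Repository:
    def __init__(self):
        self._context = {}  # stands in for the EF context/database

    def save(self, entity):
        # Look for an existing instance first; only add if it's new.
        existing = self._context.get(entity["id"])
        if existing is None:
            self._context[entity["id"]] = dict(entity)  # INSERT
        else:
            existing.update(entity)                     # UPDATE in place
        return self._context[entity["id"]]

repo = Repository()
repo.save({"id": 1, "name": "Groceries", "amount": 100})
saved = repo.save({"id": 1, "name": "Groceries", "amount": 120})
print(saved["amount"])  # 120: the second save updates rather than duplicates
```

The point is that on the web each request starts cold, so the repository has to re-establish whether an entity already exists before deciding between insert and update - the WPF app never needed that step because the object was always already tracked in memory.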

Why am I doing this? Well, a WPF app locks me into the Windows front-end. It's possible to build that into an RT app or whatever for a phone or Surface, but most people don't want to install stuff anyway. If it's on the web, it's accessible, it scales and I have access to Google charts (although I was very well advanced in producing my own WPF charting library). If it's on the web, it's also easier to show it to people, so I figured it's better to combine the website and the app into a single entity.

I've now exceeded the feature set of the old WPF app, and I completed a first round of CSS and front-end work just last night, but there's still a long way to go. I'm not sure if I'm even going to try to monetize this, turn it out for public use or just keep it among friends. We'll see how that pans out. First and foremost, this is something I like to work on (which is a primary factor), and I don't want overhead or management that will detract from that joy. Secondly, I'm building this mostly for ourselves to use, instead of manually updating spreadsheets on a continuous basis. This being a web deployment now though, it has to have a certain level of finesse before I send the link to someone, because they can send the link to everyone!

A pirouette

Posted on 2015-09-25 in The Blue Car
A lovely rainy day, rear wheel drive and no traction control conspired, together with a patch of oil or brake fluid, to turn a morning commute into an adrenaline-filled few seconds.

It was a Tuesday like any other, and I was on my way to work. On my route there is quite a sharp bend with two lanes. I moved into the right-hand lane on the turn's entry to overtake, and moved back to the left-hand lane before the exit. Suddenly I was at 90 degrees to the road, heading for the end of the barrier. I really did think I was going to launch over the embankment down into the valley. Of course I applied full opposite lock as quickly as I could, but with no result, and slid into the barrier, scraping the nose cone along it until it ended. Here the right front wheel came upon some really tall grass, into which it dug rather deeply, kicked up a tonne of mud and let out a bunch of air, but served to pin the nose immediately. This spun me right round; I corrected with the steering and caught the car straightening out, facing down the road again. I simply shifted to second and continued driving on, straight to Supa Quick for an alignment. I'm still left with flat-spotted rear tires though, which is utterly annoying at highway speeds.

The big piece of tupperware took the brunt of the damage. It scraped a pretty deep gouge into the bumper, but there's no damage to my lights.
The right front wheel, pretty flat and caked with mud. The bubble in the fender lip is from a stone or something, previously.

Stuff like this is exhilarating, even at 40 kph. But it makes you appreciate modern safety features like traction control. The fact that it was in no way on purpose is my only way of saving face for not being able to apply sufficient counter-steer quickly enough.

Now, I don't advocate drifting on public roads, but I have tried it exactly three times on purpose. Once in the wet, which ended with me facing the wrong way in the oncoming lane. Once in the dry, which worked out well, and a third time in the dry which didn't work out well, spinning out over three lanes and snapping the auxiliary belt in the process. Why? Because using power to try and slide a Miata (à la Chris Harris) is actually really difficult. The limit is on the very top shelf of the car, in a manner of speaking, and you have to be truly quick to catch it because there's no transition. There's no inertia. Just suddenly, zero grip. Fitting skinny 185 tires will probably make things a lot easier, and I'll do that one day when I get a set of steelies for the track. For the most part though, you're much better off carrying momentum through the corner and hoping for a little hip-wiggle at the exit; this is the correct way to drive a Miata.

One effect this episode has had, however, is to break my procrastination over some detailing I've been putting off. With the fender gouged, it's as good a time as any to get to it. I've started my project plan and budget already, which is always the first step. There are a lot of small dings, surface rust and damage from the boot-rack that broke on the Namibia trip, and bits of old paint are starting to peel around the rubbers. The car also needs a complete respray, since the paint is at least 10 years old already. And I'm having the chassis completely rustproofed from scratch. In addition to all that, I need to replace the front brake disks and pads, and I want to install additional exterior trim, specifically a front lip and front mudguards. All this will require me to strip the car comprehensively, so I'll start doing that soon I guess.

The petrol filter replaced

Posted on 2015-08-01 in The Blue Car
So I gave up waiting for the imported filter, and found a close enough match locally.

I usually order from MX5Parts, and I've never before had an issue with any of their parts, service or delivery. So naturally I had no qualms ordering the fuel filter from them, especially since locally my only option was getting an OEM one made for almost four times the price at the old Ford/Mazda factory.

However, this was the first time I'd imported since the South African postal service strikes last year, which lasted almost six months. Why the postal service? I don't have a choice: if the parcel is small and light, MX5Parts sends it via Royal Mail, and hence it arrives via our postal service. And to be honest I doubt I'll ever receive it - it's probably stolen, stuck or even lost at customs somewhere.

So after a month of waiting I jacked the car up, removed the old filter and took it to the local Midas parts store. They pulled out their catalogues and we started searching for a filter that would fit. There are no after-market fuel or air filters for the MX-5 available locally, only the oil filter. The catalogues' section for the MX-5 was literally only one line. What was interesting though was that, apart from the MX-6, all the other older Mazdas (323, 626, Astina, Etude) use the same fuel filter. It's not the same as mine, but not too different either, and the Astina and Etude have pretty much the same engine, mounted sideways for the FWD drivetrain.

Replacing the fuel filter on a Miata is super easy. Just pull the fuse so that the pump is disabled and then idle the car until the fuel line is dry. Undo the flap and pull off the fuel lines. Fitting this other Mazda filter didn't pose a problem, but because the filter's lines don't match exactly and because the rubber fuel hoses are very short, some bending was necessary. So it's on, and it works, and it doesn't leak, but the hose on the engine side bends a bit too much for my liking, and fitting it in the bracket would simply tear the fuel lines off. I'll try to get some silicone hoses at some point to replace these old hoses.

The GUD after-market E58 filter. Doesn't fit the bracket, and the fuel line on the right (nearest) makes a nasty twist and will rub against the chassis. Cable-tie replaces the bolt that should hold the bracket.

Or, here's hoping I still get the real filter delivered.

UPDATE So I did actually receive the ordered filter. It arrived a month and two weeks after shipping from the UK. The packaging was absolutely mangled, but fortunately the filter itself was intact. It's sitting on a shelf now for when I need it again in 30 000 km.

Open sauce

Posted on 2015-07-28
It's now on GitHub.

So, a site update. You'll notice the SVG icons in the footer instead of the previous simple letter links. With this update I've also added a link to my GitHub.

I've started to push some of my previous work related to these personal projects to GitHub. If you're interested, check it out. It's mostly isolated components and small libraries. I might even try to get the more useful ones onto nuget, but don't bet on it.

Currently there are three repositories. One is a Windows service based host for Minecraft that also manages a mapper through a headless Chunky instance and includes a website hosting a custom tiled Google Maps implementation. Then there's a C# tree library; only the basic binary tree is implemented, but it is a complete ICollection that supports removing nodes through tree reorganizing (it doesn't have any LINQ extensions yet). And finally a client for integrating with

I'm also going to put up my WPF regular expression tester. It's probably the most self-used little application I've ever written.

Got a family bus

Posted on 2015-07-13
So while I'm waiting for a petrol filter for the little blue car, I took delivery of our new family car, the Mazda CX-5.

Apart from the fact that Mazda just updated it for the 2015 year model with a new grill and interior layout, this isn't a new car by any stretch of the imagination. But then again, I'm not a motoring journalist either. After the launch of Mazda SA last year I was really disappointed that we don't have the Mazda 6 Touring available in South Africa. This year, after shopping around for a while, I was back in the Mazda showroom signing the order form. This is the first new car I've had since 2003 so you can imagine my excitement, even at a school bus like this one. A school bus though is very far removed from what this car is, dynamically speaking. In fact, it is a deeply impressive vehicle.

Considering the only wagons still available here are Volvos and Audis (meh on styling and price), I was basically looking at getting a cross-over or an SUV, and as the market is these days, I had a really big list to choose from. There's the new Qashqai, the new X-Trail, the ix35, the Sportage, the Forester... So what are the reasons I bought this car specifically? Essentially, because of the manual gearbox (on the base model), because the rear seats split 40:20:40, and because the SKYACTIV platform is still naturally aspirated. Those were my practical elements in the decision. The new Qashqai is probably the closest rival, so let's quickly dwell on the differences. Price-wise, the Qashqai matches the CX-5 with a measly 1.2L turbo. Output-wise, it matches the CX-5 at a R50 000 premium with a 1.6L turbo. There is also a whole other thing about turbos, for another time. Since both cars offer a manual gearbox (the X-Trail and Forester don't at all) that's one all, but then practically the CX-5 has the Qashqai beat. Although the new Qashqai is truly massive compared to the outgoing model, and the boot is significantly bigger than the CX-5's, the rear seats only split 60:40. This means I can't fit two baby seats in the car and still have the capability to load something long or oddly shaped, like our super-massive off-road pram. This single difference held the biggest sway for me. And lastly, the CX-5 just looks sublime, inside and out. I don't even list this as a difference because, quite frankly, nothing out there comes close to matching the lines of anything in Mazda's range now. And the Nissan's interior is simply inferior.

So what about the engine? At first it sounds clunky, especially when cold, but it quietens down very quickly as it warms up to operating temperature. Without changing gear, it's got that typical Mazda characteristic where there's a small tug as you bury your foot, and then it surges forward, building up ever quicker until suddenly you realise you have to back off. Drop it down to third, however, and it almost snorts as it pulls forward; it really makes a good noise too. It's also a very quick-revving 4-pot, and the delivery of the power belies the size and weight of this vehicle. Spirited driving still nets me a consumption rate of less than 8L/100km. On a side-note, the new MX-5 in the US comes standard with this 2.0L unit (or something very similar). If this tune is anything to go by, then I'm properly chuffed that the new roadster will, frankly, be epic!

As for handling, you'd be surprised how little you roll about, or back and forth over speed bumps. I've been in hatches that are less comfortable or stable over the back-roads or through the Century City boulevards than this. The big wheels help a lot, but the suspension is the most impressive. It's not hard or sporty by any means, and I'm not going to pretend it's communicative, but it handles the Cape crosswinds exceptionally well for something this tall, hardly moves about during lane or camber changes, and when pushing 80 in a corner, it doesn't lean at the limit half as much as you'd expect. Stopping suddenly from low speed does rock the boat a bit though, that is, suddenly in the strict sense.

So what is there that I don't like? Well, it is very high, and my dogs struggle to get into the tailgate on a slope. The reverse gear selection is a bit clunky when you are in a hurry; I'd have preferred a pull-ring or something instead of the knob push-down mechanism. The seat adjusters (on the base model at least) are very low-rent and the worst part of the interior. In fact, they're so far apart from the rest of the excellent trim that it's a bit hard to fathom... And lastly, the infotainment system comes with the "Auto-download contacts" setting switched on by default, and when you pair your phone it downloads every person you've ever sent an email to, and none of the contacts that actually have phone numbers. This was utterly annoying, and after manually downloading the contacts that I wanted to the system, I'm now in the process of deleting the rest. One by one. After two weeks, I'm at N in the alphabet.

Still, I'm enjoying this car so much; it really is fun to drive. That Mazda chassis heritage is very prevalent in this car, and coupled with that smooth power delivery, it makes for an excellent and comfortable daily driver and cruiser. Defying convention might be the best thing this marque ever did!

A new format

Posted on 2015-06-17 in BadaChing
That thing about PDFs?

I started looking at some new import options for BadaChing. Until now you could import the CSV format available to download on most internet banking sites, and also the old OFX format popularized by Microsoft Money about 15 years ago and which is still offered by the local banks here. The problem with this is that those statement downloads are only available for the last 3 months, or 150 transactions or so, depending on your bank. So if you miss a month or two, your data is incomplete.

The solution is to read the PDF statements we all get sent by email. I have a label in Gmail with every bank statement ever sent to me, so I should be able to import any of these. All I had to do was read the PDF, which is easier said than done. Whatever the library (free, paid or otherwise), what you get out is unstructured text; it is impossible to know all the different combinations that transactions are printed with, and some people will receive theirs in different languages! But the POC worked well enough, at least for FNB statements. I hope to make this available soon with two additional features: a complete statement overview before committing the import (because it's unstructured and not guaranteed), and a prompt about duplicate transactions.
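To make the "unstructured text" problem concrete, here's a tiny Python sketch of pulling transaction lines out of text extracted from a PDF. The line format in the regex and the sample data are entirely invented for illustration; real statements differ per bank, layout and language, which is exactly why a confirmation overview before committing the import is needed:

```python
import re

# Hypothetical statement-line format: "DD Mon  DESCRIPTION  AMOUNT".
# Real extracted text varies wildly; this is only an illustration.
LINE = re.compile(r"^(\d{2} \w{3}) +(.+?) +(-?\d+\.\d{2})$")

def parse_statement(text):
    transactions = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:  # silently skip headers, footers and anything unrecognised
            date, description, amount = m.groups()
            transactions.append((date, description, float(amount)))
    return transactions

sample = """
03 Jun  COFFEE SHOP        -35.50
05 Jun  SALARY            12000.00
"""
print(parse_statement(sample))
# [('03 Jun', 'COFFEE SHOP', -35.5), ('05 Jun', 'SALARY', 12000.0)]
```

Because anything that doesn't match the expected pattern is skipped silently, the parsed result has to be shown back to the user as a whole before it is committed.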

Currently only the OFX format supplies unique transaction identifiers, so if the user imports a CSV or PDF, all the program has to go on is the description, date and amount. I once had a situation where the same amount at the same vendor on the same date was transacted twice, and BadaChing failed to correctly import that statement. This check is necessary because the CSV and OFX formats can be downloaded at any time, meaning they could contain the same transactions. The PDF statements are of course based on monthly periods, so using these exclusively would make the check superfluous.
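Without a unique identifier, the best the import can do is flag candidates and ask the user, rather than silently skipping them. A minimal Python sketch of that idea (function and field names are hypothetical, not BadaChing's actual code): each stored transaction with the same (date, description, amount) key "absorbs" at most one incoming match, so a genuine second purchase at the same vendor on the same day still comes through as new.

```python
from collections import Counter

# Sketch of duplicate detection without unique transaction IDs.
# Names are illustrative. Matches are flagged for user confirmation,
# not dropped: the same purchase really can legitimately happen twice.
def find_duplicate_candidates(existing, imported):
    existing_keys = Counter((t["date"], t["desc"], t["amount"]) for t in existing)
    candidates = []
    for t in imported:
        key = (t["date"], t["desc"], t["amount"])
        if existing_keys[key] > 0:
            candidates.append(t)     # ask the user about this one
            existing_keys[key] -= 1  # each stored copy matches at most once
    return candidates

existing = [{"date": "2015-06-01", "desc": "CAFE", "amount": -35.5}]
imported = [
    {"date": "2015-06-01", "desc": "CAFE", "amount": -35.5},  # likely duplicate
    {"date": "2015-06-01", "desc": "CAFE", "amount": -35.5},  # possibly genuine
]
dups = find_duplicate_candidates(existing, imported)
print(len(dups))  # 1: only as many are flagged as already exist in storage
```

This is exactly the failure case described above: two identical-looking transactions, where one is a duplicate from an overlapping download and the other is real, can only be resolved by asking the user.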

Check back for an update in, oh.. December?

It is also a family car

Posted on 2015-06-01 in The Blue Car
Albeit one family member at a time.

So we got a new baby seat, and I tried to fit it into the car - and it fit! Naturally, this called for a drive, so I piled my son in and we set off. Of course, the previous baby seat was a lot smaller, as was my son, so that was never a problem. This new seat though is much bigger, and supports the rearward-facing position up to 18Kg. This is required since the boy's already 13Kg at 10 months.

His first time in the car, at about 6 months, in the small seat.

Now that he's older I can take longer drives with him, so I went for gold: Franschhoek pass. It was a superbly cold and misty day, and drizzle every 5 minutes to keep the road conditions just so. I set off out the back from Durbanville, avoiding the N1 and instead opting for a more interesting and slightly bumpy road, the R312. It's winding, but not tight, and not loaded with traffic. The view over to Paarl mountain is good enough on a clear day. From there I crossed the N1 past Butterflyworld, and turned off for Simondium at Klapmuts, past the Anura estate. This is an even more bumpy road, but low traffic makes it the better option. A great place to visit on this road is Le Bonheur. From here it's pretty much straight on the R45 to Franschhoek, although you might want to stop off at the Franschhoek Motor Museum; it's well worth it. It has some great banked bends and a really smooth surface, but it is pretty much single-carriage, and the run into town is slow. A lot of other cars join from the Stellenbosch/Pniel road and are sight-seeing and stopping and turning at different estates. It makes this road rather hazardous, and the wet conditions and low visibility on the day also didn't help.

Then you are into Franschhoek, and boy, this town is something else. On a rainy Sunday traffic in general is usually light, since everyone's in a shopping mall somewhere. Franschhoek however is the exception. This town is a tourist trap, even for locals, and there are too many restaurants and never any parking. I never stop here. It's too expensive, too crowded and too pretentious. It is rather good if you are out supercar-spotting though. On this occasion I saw a black Lamborghini Gallardo, which didn't disappoint. But despite the parked-up high street you are through it in no time, and at the start of the pass going up the mountain. A GP-plated Kia was kind enough to slow down for me to pass, and we were off.

Now, before you complain about irresponsible parenting, note that I do this safely and while focused (unlike you, probably texting on your mobile*), and advanced driver training helps to deal with any conditions, especially wet roads, and to maintain a margin within the car's capabilities for any unforeseen circumstances. Having my son on board doesn't change how I drive, or my approach to driving this car or any other car. As a user of a public road, you always have to drive safely, not just in certain circumstances. So with that out of the way: the pass was clear of traffic, but it was misty and it was wet, and I mean standing-water wet. In spite of this, the MX-5 is simply just fun. This is a car you drive, and a car that responds to your driving it.

In contrast to that Lamborghini I saw, in the Miata you go quickly by not going slow. Going uphill I never use the brakes. I simply shift down if required and let gravity slow me enough, stick it in the bend and gently apply throttle to balance any under-steer there might be. Then, as the road straightens out again you push down on the throttle harder, and in the wet especially, you can feel how the back-end tightens up and starts pushing, first the one wheel, then the other. The open-differential is a bonus in this regard. Even in the wet this car's grip and lack of body roll translates to a flow between the corners. Swap a cog, get off the gas with a burble, turn it in and listen for the swoosh as the excess water washes out from under the additional pressure on the tires, let it settle and then step on it again and feel that prop grab the two rear wheels by the scruff of the neck. It really is automotive poetry.

On the first part of this pass there isn't really enough space between bends to even reach the 7k red-line on the engine, so there's no real need to change up. You might push 80 or so on this 1.6l in third, and on these wet roads that's already plenty. The other thing about this engine: it's rather restrictive. If you take your foot off the throttle in a low-gear the car slows right down. This is handy both in traffic and on the downhill. The car will easily under-steer if you under-brake for a corner or if you don't shift to a low-enough gear. The most defining aspect of this car's driving experience to me is managing the gears, and to enjoy this car the driver has to get to know the ratios and engine speeds. It's not just the chassis that makes you feel one with the car.

Soon though I spotted a Land Rover a few turns ahead and decided to turn around. It was nearing lunch time, and the boy was bound to wake up grumpy from hunger. I quickly stopped at the viewpoint to take a picture, but alas, the weather drew the curtains.

At the top, misted over and fast asleep

The drive back was, as usual, pale in comparison, and I simply lumbered down the N1 to get home in time for lunch. That pass in the wet though.. that's something else.

*Based on anecdotal data

So there was the whole thing between Zapper and Global Kinetic, and now, three months after the announcement and after the dust has settled, we're all busy at GK's official offices, the Lighthouse on Esplanade.

To be honest, it was with much trepidation that I started on the first of May, but as it turns out, for all the wrong reasons. Comfort zones are a real thing, and most people are averse to change. I don't count myself as being among them, but sometimes it's not your decision. So I was totally unprepared for how stale I had become while being stuck on a project that was, when I started on it, already out of date with current .NET and ASP tech.

Unfortunately, a company uses you where it needs you, and despite efforts and requests for other projects, I was the team lead for ZapZap's API for almost two years. I didn't realize what a blessing the executive whims would be: being axed from the project through the contract termination. So finally there's some new work, new technologies and new platforms to look forward to. But it hasn't been easy.

Microsoft has moved MVC and ASP into completely different spaces recently, with Owin self-hosting for example, and the WebAPI stuff. I had not had the chance to look at these given the limited time I have at home, so ramping up on a new project, on new tech, within two weeks (the ever-present two-week sprint targets) was a real challenge. And also very unfair, but trial by fire is the only trial I know, and it's the only trial I've ever had. I surely can't claim to have ever had a relaxing day at work.

So now we're into the third week, and things are settled. The project template, established by the .NET tech lead, is becoming more familiar, and since troubleshooting some issues with Owin and self-hosting for integration tests, the conceptual picture has become more detailed. So what about all this then? Well, there are a couple of things I encountered here. One is the "turn-key" solution attitude, which I think is ill-advised and overcomplicates matters significantly. Each project is different, and each project's solution should be too. I've had to remove more from the template than I added to complete the requirements. The other is statements like "Oh this is code-first, like everything should be". That's a pretty reckless statement in my opinion. In fact, for the project I'm currently on I have no control over the database; rather, it's an entity that I simply integrate with. Running EF code-first in this scenario is actually rather silly.

But there have been some good things too. Since this is a new project, we don't have a unit-test backlog, so we can keep our code coverage up. We can employ SOLID principles from the start, and won't have to perform extensive surgery later to correct issues in the code base. In fact, this is the first time I'm starting on a brand-new project. It's a totally alien concept for me, since I've always had to deal with refactoring on every single task, which is probably one of my strongest skills. I hope it doesn't now become stale as a result!

Update

Posted on 2015-04-28
Simpler, quicker, better layout and easier navigation. Those were the four challenges I wanted to address and take on with this site update and redesign.

In fact, it was a complete rewrite, not just a restyle. The new platform is super efficient, simple and much easier to extend. After the Global Kinetic/Zapper calamities, I was left with some low-pressure time, in which I redesigned this site and its code base, give or take about two weeks in total.

The biggest challenge for me was the UI layout and navigation. Building a site based on a design, as opposed to building a design based on what you know how to do, was a bit of a paradigm shift for me, specifically with regard to web and mobile compatibility. So, I set out with an empty project and started building my own CSS from scratch. And now I hear all of you shouting "Why not use stuff like Bootstrap?" Well, I looked at it, and I didn't like it. I suspect that initially Bootstrap was pretty cool, but by now it's fallen into the same "popular trap" as any of the other usual "solutions" out there: it's simply become too complicated and bloated for its own good. This site uses 30 CSS classes and two @media overrides. It's simple, well named and easy to consume in one take. Bootstrap is none of this.

The second reason is that I need to learn CSS. I still don't know it after a small project like this, obviously, but using Bootstrap instead of building from first principles is not conducive to learning it. This is the same argument I had when I did Stingray and was asked "Why don't you use Unity?!" Because writing your own 3D and game engine is fun, an intellectual challenge, and I also learnt a lot of very interesting and useful stuff. Loading assets and dragging relationships in Unity is... first base, boring, and doesn't further you as a programmer at all. If you are one of those that asked either of those questions, I urge you to read this blog post about the "Never invent here" mentality. Programming is a science, and if the science of it doesn't excite you, I don't consider you a programmer, but rather an operator. That is, you are simply operating your IDE (probably through extensions like ReSharper) and you don't understand half of what is going on inside LINQ, Roslyn or whatever tech is specific to the platform you're using.

So now that I've got you completely riled up, let's move on. The other thing I wanted to do was to list projects that are not necessarily IT related. So far there's only one, The Blue Car, and it's related to my other interest, motoring. This project for me is a very special one, and of course very subjective, as is the way with motoring enthusiasts. Now, I'm not the sort of guy that runs the numbers. I don't remember or care to know all the stats of the latest Ferrari; I don't even know which one was released last, or which one is fastest around the Nordschleife. I do, however, care about owning the car, what it's like to live with, to maintain, to experience and to road-trip with. Your car and seeing your country go hand-in-hand, unless you do a fly-drive holiday, which isn't a holiday at all (as advocated by the late Top Gear). So, as I've owned the car since 2009, I've retrospectively posted blog entries, because a lot has happened that forms the backbone and context of this project. It's a lot to read, but might be well worth it.

My son turned 9 months old last week and I'm currently playing either Elite: Dangerous or Skyrim, so I don't expect to be updating projects and writing a lot of new blogs, but there will be more activity here now that the site is no longer specifically game-dev focused.

Ceres trip

Posted on 2015-04-24 in The Blue Car
Not so much a road trip this time, but we did do a lot of driving.

The Ceres valley is rather unique among the different South African climates. Nestled between the Matroosberg range and the Koue Bokkeveld plateau, it's an agriculturally rich area with specialized farming, and it's only accessible via mountain passes. We stayed at the Klondyke cherry farm, which is also at the top of a tremendous little mountain pass. In short, the Ceres valley is a bit of a haven for the driving enthusiast.

The main road into Ceres is via Mitchell's pass. This is a wide road that carries a lot of cargo as the farmers truck their loads in and out (Ceres has several cold storage facilities too). As a result, the pass is heavily congested at peak times, but off-peak it's a super pass to drive. It's smooth, it's got excellent bends and great camber. Driving this road is easy and relaxing. You have plenty of time to prepare for the bends and to get your gear changes sorted out to maintain pace through the wide lanes. And there are ample acceleration stretches between the bends too.

The Tolhuis bistro on Mitchell's pass is pretty good, although they get regular visits from baboons and other wildlife.

Remember that Ceres is considered a rural town. At the fuel station everyone stopped to ask questions about the car. I am always surprised at the response this little 1.6L Mazda invokes from folk when we travel. And it's always funny to see their faces when they realise it's only a 1.6L, and not some fire-breathing V8 that makes a ton of power. The perception and reality around this little car are very far apart. When we reached the cherry farm, the environment was so tranquil it was actually surreal. It was also bitterly cold. The cherry farm is on top of the Matroosberg range, right next to the reserve, so you're easily more than 2 km up. And of course, we went in June, hoping to be there when the snow comes. The cottage and the bigger guest house are old, but the fireplace and the constant supply of firewood made this a truly romantic experience. There are also very nice hiking trails on the farm.

Parked up next to the cottage. The clear air came at a very low temperature. No boot-rack this time.

The areas around Ceres are also well worth a visit, and of course that requires driving out via any one of the passes. The R43 between Worcester and Wolseley sports an excellent old train bridge, and an Anglo-Boer War blockhouse built by the British. Tulbagh is picture-pretty, and the Paddagang estate sells some excellent wine (when they are open). A bit further out is Riebeeck Kasteel, where you'll find the excellent Grumpy Grouse Ale House, which (used to) double as a classic car dealer, with the stock on display. Towards the north there are two excellent hiking trails, and the Gydo pass, which is well known for the annual King of the Mountain event, sadly cancelled after two fatal accidents. The pass itself, though, is rutted, rough and requires quite some skill to navigate quickly. There are very sharp turns, heavy off-camber sections and uneven tarmac. At the top, there's a bit of a geological marvel, where persistent individuals will be able to uncover intact sea shells from the mountainside, and apparently also a restaurant, which we couldn't find in the dark.

You have to trespass to get close to either.

After a week we packed up and headed home. We drove down the short pass from the farm, which we had traversed at least twice a day for the whole week, and picked up a bolt in a tyre at the bottom. This was unbelievable. I had really thought we would have an incident-free trip, but alas, it was not to be. So I set about changing the wheel for the minispare.

I had to unpack the car to get to the spare. And it looks really stupid when on the car. See all the wine we had bought.

And then, of course, what do you do with the wheel? I had to flag down someone that could take it into town for me. This car makes friends! After I had put the minispare on, however, I noticed that it was rather flat. I had to drive extremely slowly into town, which was about 15 km away. We finally met up with our friendly wheel transporter at a local tyre shop, where it was confirmed that the tyre wasn't repairable. And to make matters worse, the 205/50R15 size is not very common, and the actual Bridgestone stock that they could source immediately was nowhere near the correct size for my rim. Finally I settled on a Michelin (195/55R15 if I remember correctly), which meant I now had odd-sized tyres at the rear, which would cause problems for the drive shaft, axles and hubs in the long run. After the swap, and with the spare pumped up and back in the boot, we set off home at a gingerly pace. And so, of course, I had to buy another new set of rear Potenzas when we got back.

Post-trip maintenance

Posted on 2015-04-23 in The Blue Car
After the beating the car took in Namibia, I was totally expecting multiple failures on multiple components. There were some.

Replacing the two rear tyres naturally included a spot of alignment, and the guy showed me some play in the steering rack. It wasn't a catastrophe, or urgent, by any means, but indeed something that needed to be looked at. Later I noticed some oil on the garage floor, at the rear of the car. Closer inspection showed that the differential was leaking, and I could see a small stone (of the sort typically used in resurfacing roads) lodged between the diff and the one axle, obviously damaging the seal. I suspect this was picked up during the road works on the Piekenierskloof pass, which we encountered on the way up to Namibia. It had withstood 2000 km worth of travel, but I wasn't going to delay getting this looked at. At best, the diff was simply low on oil; at worst, the gears would have been ground smooth due to heat build-up. I got new seals from the local dealer and took the car to a workshop. Why did I take it to a shop? Well, there are two things I didn't really feel I could handle: gearboxes and differentials. After this effort though, and having observed the mechanic, I'll take on the differential by myself now without hesitation.

A little later, while doing a tyre rotation, the steering rack issue came up again. By now I'd pretty much committed to keeping the car forever, and solutions to problems needed to be long-term and lasting. So, I ordered a completely new steering rack. It took some doing to remove the old one, and even more to fit the new one, but with some help I got it aligned perfectly.

The old and new racks next to each other.

By now I had quit the owners' club, after a year as its chairman, to spend every waking hour on Stingray Incursion instead. I was still driving the car every day, right up until it popped the new relay I had fitted after the Namibia trip. I got another one, which lasted about 3 months, but by now it wasn't popping the relays any more; it was just losing connectivity as a whole. It was very frustrating, and it meant I couldn't really drive the car reliably, since it would just cut out anywhere, at any time. The wiring underneath the main fuse box in the engine bay was shot, corroded and burnt. It had to be addressed.

So, I roped in my good electronic engineer friend who, at the time, happened to be working at an auto-electrician workshop. He helped to rewire the main loom and split the main relay out to a new mount we made on the firewall, next to the master brake cylinder. Good as new. But, because of the logistical problems, we used some of our savings to buy a cheap small car as a temporary measure. Now the blue car was practically garaged for weekends only. It also needed a service.

To Windhoek and back

Posted on 2015-04-23 in The Blue Car
So now we get to our first proper road trip in this car. To date we've mainly meandered around the greater Cape Peninsula, and out into the Overberg and surrounds once or twice. This trip was different. It was going to be more than two thousand kilometres in total.

We were going up for a wedding, and had decided to make a two-week holiday out of it. There were several friends going as well, all on different routes and times, and we would meet up in Windhoek. But how do you pack two people's two weeks' worth of luggage into an MX-5? You don't; you use a boot-rack instead. I borrowed one from a fellow club member. But clothing and toiletries were just the start. There is one thing about the NA MX-5 that no engineer can get around: the size of the wheels. They don't fit in the boot. The car comes with a minispare (or as we call it, a Marie biscuit), which is fine. But what do you do with the proper wheel if you have a passenger? So I opted for a bunch of rescue gear instead, and threw out the spare completely. This included a heavy-duty off-road-type compressor (the sort that hooks up directly to the battery instead of plugging into the cigarette lighter), lots of tyre plugs and also tyre goop. I was fairly determined not to be stranded in the middle of nowhere.

The boot rack fitted and loaded. The boot itself was chock-a-block full of stuff.

I decided to take the straight and boring route: the N7. Now, the problem with any of the national routes in South Africa is the cargo haulage. Since the demise of the railways, these routes have become the main arteries of the cargo business. The N7 trails along the west coast, where Port Jackson bushes and wheat farmland are exchanged for fruit trees and mountainous greenery as you pass over the Piekenierskloof pass. This is a narrow and terrible pass. There have been road works on it for as long as I can remember, and there are almost no overtaking areas to get past the slow, thundering, black-exhaust-spewing lorries as they try to get over this mountainous area. Four years later and I have yet to cross it again; hopefully it's better now.

I must state that this car is simply epic on the long road. Sure, its engine isn't a creamy V6 and the gearbox isn't a smooth and hassle-free auto, but the car is solid on the road and you get tremendous control through the quick steering rack. All of this sounds counter-intuitive, but compare it to something like a simple Ford Figo (a much more modern car), which rolls around in a crosswind, pitches and dips over any sort of bump, and has vague steering that requires constant flailing to keep straight on any sort of road camber. It's truly tiresome to drive something like the Figo for an extended amount of time. The MX-5, though, not so.

Soon the valley of fruit and honey is given up as you pass Vanrhynsdorp. The end of the Karoo plateau is also on your right now, and the arid climate starts to take over here. Small grey shrubs dot the landscape, and it's hot. The little car was simply stellar, the engine singing and the wind rushing past. We had the roof up for the most part, against the sun, but the air conditioning in our car had long ago puffed out the last of its gas. The one thing that made up for that is the seats. They are supreme, and my wife falls asleep in them in five minutes flat. And so we pulled up in Garies for a loo break. We had driven for most of the day, almost 530 kilometres.

Parked at a fuel station and restaurant in Garies. The sun was low by now.

I wanted to reach Springbok before nightfall, another 120 kilometres away. So we got back into the car and... nothing. It was dead. It wouldn't turn over, although all the other electrics were working. I poked around a bit, but there wasn't really anything I could do without tools. We decided to spend the night at the B&B right next to the fuel station. It was a solemn evening, and my wife tried to console me. The beer helped a bit too. In the morning, a cat had urinated on the soft top, and I got hold of the owner of the fuel station (and the B&B). He had a workshop at the back. As it turned out, he was a former employee at a BMW service centre somewhere in the city, and had retired here, plying his trade in the old-fashioned way. Meaning he actually fixed stuff instead of simply replacing parts. And of course, I wasn't his only customer that morning. In a small town, on the edge of the greatest plain in South Africa, this guy was having a Thursday morning to beat any other while we were having breakfast in his restaurant.

He had the fuel pump out to see if that had given up, but ultimately he found that the main relay had popped. This was an 'a-ha' moment for me. I had completely forgotten about that first breakdown, and had driven the car constantly for almost two years since, so this did come as a surprise. To get under way, he helped me make up a by-pass for the relay using a 10 Amp fuse. Ultimately I found it rather humorous, this make-shift fix for what had turned out to be this make-shift car.

The by-pass plugged into the relay box. This saw us through the rest of the holiday and all the way back home.

At Springbok I tried to shop for a new relay, but to no avail, and we just set off for the border with Namibia, where we arrived late afternoon. Here, things almost went awry. I suspect that the South African police officer saw the history of the car on his computer: that it had been imported from Malawi. This, he figured, gave him an ideal opportunity, since he immediately stated that the car had been reported stolen in Malawi. Of course, I'm not a regular border-crosser, so this came as a huge surprise to me (I only realised later he was surely looking for a bribe). I was on the phone to my colleague, from whose in-laws I had bought the car, but fortunately I had all the paperwork and clearances for the new engine and everything with me, at which point this warrant officer William surrendered his claim and simply said "It's fine". We were stamped and through the Namibian side in less than 15 minutes. We spent the night at the Orange River Lodge on the Namibian side.

The next morning we set off for... Ai-Ais. My wife insisted that we should go there, so I turned onto the C-grade gravel road and set off into the reserve. The washboard roads were terrible, and they shook us to pieces as we went down into the Fish River valley. After a while I even stopped caring about the car, as we burst out laughing at the insane situation we had put ourselves in. Here we were, in probably one of the most unsuitable cars for this trip, on a gravel road in the middle of a reserve, getting our teeth rattled from their sockets. So when we reached the resort, we weren't at all surprised to have to park between Land Rovers, Cruisers, Fortuners and all manner of off-road trailers attached to each one.

Waiting at the entrance to Ai-Ais.

We got a plethora of comments from the other 'hard-core' resort visitors in their massive pick-ups, mostly of the "What the fuck!?" sort. Still, we enjoyed the resort as day visitors. It was rather tranquil; my wife went for a swim in the various pools and hot springs, and we had some drinks. Then it was off again. She wanted to see the Fish River canyon, we needed to reach Keetmanshoop before nightfall, and I actually had no idea how long or far we were from either. From here the roads became much worse. There had been a massive storm earlier in the week, and it was supposedly still raining in the northern parts of the country. The gravel roads showed the extent of the flash floods and rivers. In general, C-grade roads in Namibia are the equivalent of 80 to 120 km/h tar roads, but not on this occasion. Huge swaths of veld had been taken up by the rains and washed over the road, causing muddy or sandy pits, or massive rock-hard settled sand lumps, spanning the entire width of the road. Since the car is low, and rear-wheel drive, I was scared of getting stuck, and no amount of rescue gear would have let me get us out again on my own. So, when I was pretty sure it wasn't the rock-hard sand lumps, I simply floored it, relying on momentum to carry us through the longer stretches of mud or sand. It was truly hair-raising, and at one point I basically sand-boarded the car across one of these pits, flat on its engine-cover belly and chassis. Sand went everywhere, and later I wiped some off the top of the engine, from between the cams.

A supposedly smooth C-Grade road where rocks had been deposited by the flash floods.

The C-grade road became so bad that it felt like we were driving on a bed of rocks. But soon we had to turn off onto a D-grade road to get to the Fish River canyon lookout point. This road forced me down to 20 or 30 km/h. We didn't time it, but I reckon it took us almost 2 hours to do the in and out legs of this 15 kilometre stretch. I was in a pretty foul mood by now. The car was suffering badly: those fancy new Koni sport dampers were getting hammered, the engine and gearbox mountings were getting hammered, and it was hot. It was really hot and dusty. Our visit to the lookout point was, in a word, disappointing. The lookout point itself was unstaffed and there were no refreshments available, in a tuck shop or otherwise. The view, though... that was spectacular. The depth and width of the canyon are on a scale that neither I nor a photograph can convey. This is definitely a place that I would want to visit again, and would want to hike as well.

After we made our way out from the lookout, we turned north again and hoped to stumble across some civilisation. By now I had done 250 km since the border, which on a smooth tar road would be almost half a tank. On these roads, however, with all the braking and slowing down, it wasn't, and I had no idea how far we still had to go. To make it worse, we encountered several junctions (which I figured we needed to take) that were closed because of the rains and the damage to the roads. So we ploughed on, and while stopping for a loo break, I noticed that the boot rack had given up.

The boot rack with a split strut. Here the roads were better again, in the expected condition. You can see the damage that the suction cups had caused to the paint because of the dust. I twisted the strut around to better support the weight. The strut had also gouged into the paint where it had collapsed.

After this calamity, I really couldn't care any more. There was nothing I wanted more now than an ice-cold beer and a shower. Somehow, our moods had improved though; such is the charm of this little car, even on these roads. It was late afternoon, we needed to find petrol soon, and we still had to find a B&B in Keetmanshoop. Then we were forced, due to road closures, back onto a D-grade road again, and suddenly we came upon a dam wall in the middle of nowhere. To me it seemed... magical, something man-made, something major. It was unbelievable. In reality, it was the Naute reservoir, and we were very close to the tar road linking Keetmanshoop and Luderitz. We had made it!

Notice the water-line bridge we had to cross. Fortunately it wasn't a full stream of water.

That evening we spent in Keetmanshoop and slept well, after a lot of beer. You can buy beer in any shop in Namibia (what a blessing!). In the morning we set off for Windhoek, sticking to the B1 national road. This is a busy cargo road again, and after all those rains, the potholes could have swallowed any of those trucks whole. It wasn't easy to maintain pace; I would absolutely have lost a wheel had I struck one of them. It was also getting cooler, as we were catching up to the storm that had been raging in the south a few days prior, and in Windhoek itself we encountered a hail storm. My wife parked the car under a tree for the duration, and after arriving at the self-catering, it was covered in leaves and branches.

From this point on, after we had met up with everyone, we rented a big Toyota truck to travel together, so we parked the little MX-5 at our friends' place. Two weeks later we were on our way back, hauling along the tar roads and making lots of progress. I had stuffed most of our luggage in with friends, so we were now travelling without the load on the boot. It was also discovered, at a roadblock, that my licence had expired, so my wife was doing all the driving. The problem with the main road seemed to be that fuel consumption was actually worse, what with overtaking trucks and other cars. With the small fuel tank, we literally had to stop at every town to refuel, even if there was still half a tank left; there was no guarantee that half a tank would get us to the next town. But it was plain sailing, and within one day we were back down at the lodge at the border, and the following day back home. The car had performed admirably, the undercarriage had stood the test of grade C and D roads superbly, and my only losses were Steve's boot rack, which I had to replace, and the damage to the paint. And then I drove to the shop that night. It had rained just after we arrived back home, so the roads were wet. I turned left on an arrow at the traffic light and promptly went into full opposite lock to get it straight. It must have looked superb from outside, but I immediately knew what had happened.

My two rear tyres were almost slicks.

I'm not sure if it was the extra weight over the boot, the softer Bridgestone Potenza compound, or a combination of both, but there was literally nothing left of my two rear tyres. I couldn't believe that we had completed a 2000 km trip on tyres that looked like this in the end. But we had, and now I had to replace them. It was the most expensive part of the trip, by far.

Driving a (modern) classic

Posted on 2015-04-22 in The Blue Car
Official bodies (racing, clubs, secretariats) won't classify the NA Miata as a classic yet, even though it's now well past 20 years old. I've heard it referred to as a "modern classic" instead, though. I don't really care either way; I'm just here to drive and own this car.

Using this car on a daily basis is simply one of the best continual memories I have of it. Being this old, however, comes with a set of problems that you constantly have to watch out for. One of these is cooling.

When I bought the car, the radiator was almost brand new. I didn't think I'd ever have a problem with cooling. But the South African climate can sometimes roll you a nasty one. It was one of those heat-wave periods, and while sitting at the lights on my way home, I suddenly noticed that the temperature gauge was right off the scale. Something was wrong, but it was the last stop on my way back and I had less than a kilometre to go. So I pushed through. There was still plenty of water left in the radiator, as I found out when I made the rookie mistake of immediately opening the radiator cap to check, which nearly left me with third-degree burns all over my face.

The problem was the thermostat. It had given up and locked off the circulation so that the water around the engine couldn't get back to the radiator. Driving around like this is a problem, not least because you're putting the water pump under tremendous pressure.

So of course I had to order a new thermostat. I opted for a cheap Chinese knock-off, which is still going to this day. Naturally it would take a while to arrive, and I had to get to work the next day, so I simply took out the old thermostat. This is no problem; the only effect is that the engine takes much longer to get up to working temperature, so I had to nurse it and curb my enthusiasm for much longer. This added yet another flavour to driving this car which I remember fondly. A few days later though, still in this heat wave, I noticed a hissing sound while waiting for a friend, and popped the hood right there in the parking lot. I found that one of the hoses had sprung a leak, obviously because of the pressure build-up on that last stretch home. It was a tiny hole, but it could spell disaster at any time.

Fortunately it was the hose feeding the heater under the dashboard. This meant that it was easy to by-pass it and stick it into the back of the engine to complete the flow. A friend helped to get a specially made hose for this, and I drove around with this by-pass until my order of a complete set of silicone hoses arrived.

The by-pass hose visible at the back of the engine, and the two ports on the firewall that weren't connected any more.

The silicone hoses took about a morning to fit, though there were two small pipes that I just couldn't get to, and thus never replaced. But I think the yellow pipes, offset against the engine and blue bay, look absolutely brilliant. One problem I had was the hose clips. These things are super finicky (perhaps because of their age), even with hose-clip pliers. In the end I simply replaced all of them with proper plumbing fasteners.

Heater hoses being fitted.

By now, the wheel bearing noise was much worse, and so I had to get cracking on the hubs. I had to buy a special tool to pull the hubs off the axles (and almost wrecked one centre bolt completely!). I couldn't pull the bearings from the hubs myself, or fit the new ones, so my father-in-law took it all to a guy with a press who was kind enough to assist.

The one rear hub pulled off the axle and lying on the ground.

So at this point the undercarriage was almost completely refurbished or replaced, apart from the front brake disks. I had total confidence in the car's long-range capability now. But before we move on: the fabric top ripped during a highway blast. Instead of ordering a Robins or OEM top, I took it to a local upholsterer, together with a new rain-rail, who did a fairly decent job for a third of the price (including fitting). It's still looking neat and is weathering really well almost 3 years later.

Getting it roadworthy

Posted on 2015-04-16 in The Blue Car
When I bought the car, there was almost a full year left on the plates, and I simply transferred it to my name. But the end of this period was fast approaching, and I made an appointment with the AA for a full inspection.

The condition of the car had deteriorated with daily use. Knocking noises started appearing in the suspension under cornering, and from the back there were some very faint bearing noises. I sort of expected that the car wasn't actually roadworthy, and the test at the AA proved my suspicion. Because the car had changed ownership, it required a new roadworthy certificate. Now at least I had a list of immediate issues to look at.

After the calamities with the engine and its subsequent replacement, a few of the items on that list were knocked off, one of which was oil seeping out of the crank seal at the front. Presumably the same had happened at the back, since the old clutch was slipping quite often. Mostly though, the problems were all due to the car having sat in a garage for a long time. When this happens, the rubbers start to lose their suppleness and become brittle. So when the car actually gets driven again, the rubbers immediately start tearing and breaking apart under the stresses.

My shopping list was basically a refit of the entire undercarriage: new polyurethane bushes and anti-roll-bar mounts, new ball joints and new dampers. The old dampers didn't leak oil, but they were 20 years old by now and had seen a significant amount of gravel road.

Replacing all the bushes was outside the scope of my own abilities and the tools in my garage. The poly bushes are very difficult to get into the wishbone mounts, so I delivered the car to a shop where they could use a press. They didn't do a very good job though. The bushes were fitted well enough, but the rear anti-roll-bar lost one of its bolts, and I found another bolt on one wishbone that hadn't been fastened properly either. I subsequently heard, quite by coincidence, that this shop had refurbished the brakes on another car, which failed mere days afterwards and resulted in a terrible accident. So of course I double-checked everything on the undercarriage. Real sloppy work.

So after the bushes were sorted out, I turned my attention to the bearing noise that permeated every drive. Initially I wasn't too sure that it was actually a wheel bearing, and opted to first replace the rear brake discs and pads. They could have been warped at some point, and anyway, the pads were done, so it needed doing.

The green stuff fitted to the rear.

As it turned out, it wasn't the brakes, but the hubs are rather expensive and weren't necessary for the road worthy certificate, so I let it be for now.

The lower ball joints on the front are sold separately, so I got those new. But the upper ones are only sold attached to the upper wishbone. This makes them rather expensive (and also rather silly, because the wishbone itself doesn't really deteriorate). These I had refurbished instead. For the dampers and springs I borrowed a compressing tool and set about assembling the new damper and spring units. It is rather difficult to manage when you're working on the floor. I had ordered Koni sport dampers, which are adjustable in height and stiffness. The poly bushes had made the ride so stiff all on their own that I didn't really mess with the damper settings at all. Lowering the car would have made the ride way too hard. Remember that this car weighs less than 1 ton, and the poly bushes prop it up so well it doesn't lean into the corners nearly as much as with stock rubber. So the dampers were really just an extra, and for peace of mind. Also, I'm not into stancing or any of that stuff. This car was meant to be a daily.

Working on the dampers and ball joints

One of the problems with old cars like this is rusted nuts and fasteners. The one front damper unit wouldn't loosen at the top cover. It had rusted so badly that I couldn't dismantle the damper unit at all. I moved on to the rear units, and by the time I had completed and fitted those it was already mid Saturday afternoon. I needed the car ready on Monday morning for work. At this point I desperately called in some friends to come and help. The solution finally turned out to be simply splitting the rusted nut with a chisel. This of course woke up all the neighbours on the Sunday morning, and the chisel was totally wrecked. And I had lost a nut, which is actually a bigger problem than you might think. You don't just go to the hardware store and buy nuts to use. These nuts' and bolts' threads are not compatible with general hardware store types, and I don't have a huge stash of spare nuts and bolts like many workshops have. I can't remember where we got another nut that fit, but fortunately it was all sorted out by lunch time on the Sunday.

After this the car passed roadworthy, but there were a few other things that needed attention, and naturally some more surprises were in store for me.

A growing familiarity

Posted on 2015-04-16 in The Blue Car
As I was getting more comfortable with the car, and using its available power more successfully and more often, the pleasure I derived from driving it was simply staggering.

By now I was reading up on the forums and talking to other club members about what other people were doing with their cars. Everyone had stories and advice, and I took some leads from these as to where to direct my focus. I started with the easy stuff. The gear shifter has its own oil, seals and maintenance schedule. Mine was in a catastrophic state. It's a quick job to fix, but it required more part imports (the shifter boots and nylon cup).

I had to fish parts of the smaller shifter boot out with long-nose pliers. The nylon cup was also split in two.

The shifter update made a good difference to the feel of the car. The stiffer rubber of the new boot made it feel more accurate. This was the first fix that made me feel like the car had turned into a project car. It was also the first fix where I got told off for my dirty clothes by my wife. Since then, that particular t-shirt has become my "mechanic shirt".

Then one day at the office, a colleague reversed into me. I was in the car at the time, fortunately, but it didn't help. Her husband (who was uninsured, as he was a car trader and held nothing for more than two weeks) didn't pay me a cent towards the repairs.

The wing was bent at the start of the arch, and the paint crumbled off of the plastic nose-cone where it took the impact.

Since the plastic of the bumper was already 20 years old by this time, I insisted on a new one. The clamps on the old one showed the tell-tale white strain where it had bent. But he would have none of it. In the end, I got a new wing and a new nose, but I paid for it myself. So naturally, I fitted it myself to save costs.

The fixtures of the nose-cone were rather...extended.

Prior to this, a vibration had developed along the drive-train, and by now it was getting more severe every day. It got better for a while after some hard driving, but would gradually become worse again. This cycle repeated, and shortened as time went on. The workshop manual talked about the two different crank nose designs, and it became clear that I had the early sort, and was suffering from the well documented short-nose crank problem. I concluded that under hard driving, since the engine's rotation is against the thread of the crank's centre bolt, it tightened itself sufficiently to alleviate the problem. But at idle and under normal driving, the strain of the belts was enough to gradually loosen it again. This gave the car somewhat of a mood, and it started to develop a sort of personality. There were a few solutions to this problem.

I opted to try the "lock-tight" fix. This was the first time I had to strip the engine all the way down to the crank pulley. It was a tremendous experience, until I came upon that tensioner's stripped bolt. Anyway, the workshop manual really helped a lot here, and soon I was putting my broken baby back together again. So, a bit of background: this problem with the crank erodes the key that fits the pulley to the crank nose itself. So, to perform this fix, you have to fit a new key that holds the pulley onto the crank. Naturally I ordered a new one.

The engine stripped down to the crank nose; a similar effort is required to replace the timing belt

This is when I learnt that one of the previous owners had already suffered from this problem and had his bush-mechanic attempt a fix of a completely different nature. I presume they could only find a key that came from a tractor or a pickup truck, because the key I took out was very different in size from the one I had ordered. So, to make theirs fit, they had manually extended the slot in the crank that the key fits into, and manually filed away at the crank pulley so that it would go over the bigger key. Nothing I could do with the properly sized key would work. So I had a new key custom made to fit the crank slot, and set about with the lock-tight. This fix lasted for about 12 000 km. And when it started acting up again, it was three times as bad as before. The custom key had completely broken, ruined the pulley and almost taken out one wall of the crank slot completely. This crank was done with life.

The old (bigger) key compared to the real (ordered) thing. You can see where the old key was eroded

I had the option to replace the crank, which would involve a lot of labour and refitment; it's a complete engine-out job. Or I could just get a new engine. In Japan they chop cars up after 80 000 km, and some companies get hold of these cars' engines and gearboxes for export. I got a second-hand engine (and gearbox, since all the vibrations had ruined mine) from one of these chops and had it fitted. I also supplied a new clutch. This new engine, since it was a later model, sported the updated long-nose crank, so this problem will never recur. I had my baby back, and it was better than ever with that new clutch. It dawned on me then that what I had actually done by buying this car was take in a rescue dog, and I was busy nursing it back to health.

From bush-mechanic to a real beauty.

When I first started looking at MX-5s, I wanted an up-to-date one. Affordability meant I was looking at the NB model (from about 1998 to 2006), since the NC model (2006 to 2015) was still too expensive. Then one day a colleague arrived at work with this lovely blue NA model (1989 to 1998). It belonged to his girlfriend's dad, who was mostly out of the country, and he drove it occasionally to basically keep the battery charged.

So I drove the car, and made an appointment with the owner on his next visit to the country, and I bought it from him without hesitation. My wife wasn't prepared for this sudden gut-punch purchase, but of course, she loved it after the first drive as well.

I couldn't believe I had it in my garage!

Then the real issues started cropping up. First though, some history. I am the fifth owner of this car, as far as I know. Back in 1991, a Malawian gentleman imported five examples directly from Japan. Since then, this car was owned by three people and driven within Malawi. The last of these owners moved the car to South Africa (along with two others) for safe-keeping at his family home in Tableview. This is where I bought the car. Since it was a direct import from Japan, this car is branded as a Eunos (Mazda's experimental luxury brand at the time, akin to Lexus from Toyota). It also comes with all the stickers, warning labels and everything else printed in Japanese. And of course, it is right-hand drive.

At this time, my wife had already bought me the workshop manual, and I was pretty familiar with the theory of maintaining it. I thought that I would have to do the odd fix now and then. Boy, was I in for a surprise. It didn't have a service history prior to arriving in SA. Presumably it was serviced and worked on by non-Mazda workshops throughout its lifetime in Malawi. The radiator was brand new, but apart from that, everything else was pretty old, very dirty and in working condition. For about a week.

We were on our way to watch a show when it just died as we entered the parking lot. It got towed home by a friend (very carefully, on the tie-down hooks!!) where we tried to find the problem. By all indications it was an electrical one, but we couldn't find it. What we did find, however, was by-passed fuses, bridged fuses and a horrendous after-market alarm installation. It quickly became apparent that every single part of this car (including the interior) had been worked on by, presumably, people that didn't have the foggiest idea of how to disassemble anything. So, it had its first trip on a flat-bed pickup to the dealer. They found that the main engine relay had burnt out, and replaced it with one that didn't look like the original, but worked. This was an omen that I didn't know to interpret correctly.

Soon, it was time to service the car. I instructed the dealer to do a full service, including the timing belt. Beforehand I shopped around for brake pads, but couldn't find any. So I had to order from the UK, my first of many part imports. The dealer didn't fit the brake pads correctly. The mechanic either broke, lost or took the custom pad-clips, and the car was returned with the pads rattling within the calipers and generally not performing very well. I took it back, had a few words, and have never taken the car to another workshop for a service. I realised then that I would have to learn to do all of it myself. The dealer mechanics are only trained on the new models, and I later found out they had also partially stripped the thread of the timing belt tensioner's bolt, in the aluminium block, during that service. So I imported a brake fitment kit, and set out with my first socket set ever.

The difference in the clips was quite apparent.

Most of the rest of the year was quite uneventful. We did regular trips over weekends, and later joined the local branch of the South African MX-5 Owner's club. I performed small tasks, like refitting the radio completely and hooking up the seat speakers correctly.

Presenting Presentation

Posted on 2013-10-23
I'm messing around with WPF and plumbing the XAML depths these days.

Creating Windows Forms interfaces used to be something I enjoyed immensely back in the day. Now with WPF it's a different ball game, but the challenge remains the same. Recently I've been struggling with a hierarchically-bound tree view, and dragging and dropping within the same tree view as well as from outside.

Here's a list of the most tricky bits:

  • Once you initiate the action with DragDrop.DoDragDrop(), it doesn't exit until you drop.
  • Once you have dropped, finding on what you dropped is particularly difficult.
The result of the first point is that no mouse events are forthcoming during the drag operation, so you cannot track the mouse over your tree view to determine the location with InputHitTest(). And even if you could, finding the correct destination is tricky. Using a combination of FrameworkElement, UIElement and the VisualTreeHelper gave inconsistent results: sometimes it's a Border, other times a TextBlock. And the value of the attached DataContext is also pretty shaky.

My solution at the end of the day was to hook the mouse, drag and drop events directly onto the static resources of the tree view item's HierarchicalDataTemplate. This way, the source and destination of my drag operation were always related directly to the sender parameter.

By keeping the event handlers on the tree view itself too, and setting the e.Handled property correctly, the context of the drop action is exceptionally well defined. If the user drops on one of the tree items, the template events will handle it where possible; otherwise it will bubble back up to the tree view itself, where the context of dropping changes, and the item is added or linked differently. At the same time, the case of dropping onto the blank area of the tree is also handled. This works pretty well, and the remaining challenge is managing the data integrity in the background for consistent storage.
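A minimal sketch of that two-level arrangement (the handler names and the NodeViewModel type are my own illustration, not the project's actual code): the item template's handler claims drops it understands and sets e.Handled, and anything else bubbles up to the TreeView's own handler.

```csharp
// Sketch only: assumes items are bound to a hypothetical NodeViewModel
// via a HierarchicalDataTemplate, with Drop wired on the template's root
// element so 'sender' is always the visual for the exact item hit.
private void Item_Drop(object sender, DragEventArgs e)
{
    var element = (FrameworkElement)sender;
    if (element.DataContext is NodeViewModel target &&
        e.Data.GetDataPresent(typeof(NodeViewModel)))
    {
        var dragged = (NodeViewModel)e.Data.GetData(typeof(NodeViewModel));
        target.Children.Add(dragged);
        e.Handled = true; // claimed: stop the event before it reaches the TreeView
    }
    // Leaving e.Handled == false lets the routed event bubble up
    // to the TreeView's Drop handler below.
}

// Handler on the TreeView itself: only reached when no item claimed the
// drop, e.g. dropping onto the blank area below the items.
private void Tree_Drop(object sender, DragEventArgs e)
{
    if (e.Data.GetDataPresent(typeof(NodeViewModel)))
    {
        var dragged = (NodeViewModel)e.Data.GetData(typeof(NodeViewModel));
        rootNodes.Add(dragged); // different context: add at the root level
        e.Handled = true;
    }
}
```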

So until I've got my project page set up, I'll post a bit about my challenges on the news feed for now. It's not like there's any actual newsworthy events happening here at helloserve Productions anyway :)

XNA? What XNA?

Posted on 2013-02-03 in Stingray Incursion
Microsoft announced that "... there are no plans for future versions of the XNA product."

Apparently there's a date of April 2014 around this. But, as they said: "XNA Game Studio remains a supported toolset for developing games for Xbox 360, Windows and Windows Phone," said the representative. "Many developers have found financial success creating Xbox LIVE Indie Games using XNA. However, there are no plans for future versions of the XNA product." Here the article on

As these things go, I'm now flogging a dead horse. If support is not guaranteed, there's absolutely no point in carrying on with this framework. So, I have a rather big decision to make. Port, or abandon.

Seeing as the whole world has seemingly adopted Unity, that does look like the best option at the moment. Although there's a lot out there, including Crytek's offering. But all that effort? Starting from scratch now, while I've got two other (non game related) projects already going, seems a bit too much.

I can't even begin to hint at a direction in this blog post. It's all a bit depressing, really.

Simulate all the things, part II

Posted on 2013-01-30
A while ago I posted about Woodcutter Simulator. Well, they're back...

I just discovered they have another title called "Road Construction Simulator". Honestly, I don't know when they'll stop capitalizing on their 3D engine and releasing games that are no fun to play whatsoever.

Anyway, here's the youtube...

Strike Suit Zero

Posted on 2013-01-28
Excuse me for punting someone else's game this morning, but I want to share this with you. Found it on GOG, out of the blue.

Not so much as a Facebook ad (which is a damn good thing) alerted me to this game's existence. But I love it. It's a bold attempt at the space sim genre again, and they've taken their inspiration from all the good places, places that I hold in high regard. Like Robotech. Anyone remember that? I've found over the years that almost no-one can bring Macross Island and the protoculture to mind.

I haven't spent too much time with it yet, but what's there seems pretty solid. Even the simple mouse/keyboard controls (the same trick I'm trying to pull with Stingray) are easy to master and give the game a wonderfully low barrier to entry.

And then there's the visuals - yet another punt for the proprietary engine.

Get it on GOG or on Steam


Posted on 2013-01-21
Happiness! The test site has been migrated over to the live site. But is the design complete?

It's been a few months in the making, including a complete re-design and re-write, but I've finally put my new site over to my new hosting at Arvixe.

Functionally, the changes are mostly for my own benefit. The new site has a light-weight content management system running, which is managed through a built-in Admin section. This allows me to manage and create content from inside the site, even from my mobile.

But there's also been a visual re-design for your benefit. I wanted to bring the site in-line with the current phase of the web, something easy on the eye. My designer friends tell me it looks crap, but I like it, and I hope you do too. Accessibility has also been re-worked a lot. The site scales much better on the smaller mobile screens for instance.

Interactivity has been boosted by a custom built forum for discussions and what-not, but I'm holding back on a comment feature for blog and news posts until I feel it will add value to the site.

I'm pretty sure you'll find all sorts of small errors and omissions, but I do hope you like your stay here!

Blog Entries

Posted on 2013-01-17
You might have noticed that all of Stingray's old blog entries have been transferred to the test site now. While doing it, however, a funny thing happened.

As I moved them, I looked at each entry to make sure the images showed correctly, and also reread them all. After a while I realized what an awesome experience working on Stingray was. And what an achievement it actually is.

It also brought back the many hours of work I put into the preparation for IndieCade, and the subsequent feedback of "This is not really an IndieCade type of game". It dawned on me yesterday how much of a disappointment it really was, how big the psychological blow was. And how it resulted in me practically dropping the project almost altogether for a time, to take a break.

At the moment I'm not spending any real time on it. Just here and there, and mostly on content, which is tedious. I'm not an artist by any means. For me the whole thing so far has been about the challenge. And now, content-wise at least, the challenge has evaporated. So I'm really struggling to keep my motivation up.

Maybe there's another means of getting there...

Design Guru

Posted on 2013-01-14
Titled for irony; I am not a designer and really don't know what I'm doing. But I think the way the site looks now communicates better and is easier on the eye.

Basically, I got rid of all the rounded rectangles. That 'phase' of the web seems to be over.

I've added another feature page for my HTML 5 experiments. There's only one up at the moment, but it's a good one ;) Apart from that, have a look through and let me know if you find problems, either on the forums or elsewhere.

Oh, and a happy 2013!

Simulate all the things

Posted on 2012-12-19
So for a while there, everyone was making fun of Farming Simulator. What else can they do with that engine?

It was's best selling PC game for a while too. That maybe says more about South Africa than we'd like to admit. But here's what is coming up next:

Woodcutter Simulator 2013

I think this must be a joke... made using the Farming Simulator actually :)

Test Site Update

Posted on 2012-12-18
The recent update brings performance, activity logs and a SQL Server back-end...

I started building the logging module bespoke for Alacrity since I found that Log4Net doesn't serve my purposes all that well. So I needed a test implementation, and where better than my own site :) That, together with moving to SQL Server from a SQL Compact database required some scripting to JSON to preserve the database contents during the migration. But all's well. Features, News and Forum posts are all intact.

The only problem is that with migrating the user accounts across, the password information is necessarily lost. If you're having problems logging in, request a password reset. That should set you right.

Another thing that's been added is support for image tags in forum posts. So, go wild. I do not support hosting your images though...

Testers, Destroy!

Posted on 2012-10-02
So the new site is in "Consumer Preview" state, whatever that means.
This means that if you experience any problems, let me know so that I can fix them. There's a forum category specifically for that. Post it anywhere else, and you suck.

Michael Bay-ish

Posted on 2012-09-20 in Stingray Incursion
As craptastic a title as that might be, stick with me because I think you might like what you're about to see.

Rogue Trooper is a game that's still at the top of my personal charts. It's a basic shooter. Fun, and stays fairly close to its source material. And quite cheap on Steam now. Those are not the reasons I like it though.

I like it because of the set-pieces. This is the same reason a lot of people liked Spec Ops: The Line as well. The whole game is one long Michael Bay movie. Every level is Optimus Prime vs Megatron, and this is what I think gives that overall feeling of excellent narrative. Taken in isolation it's not really a complicated or in-depth narrative at all, but because of the environment, because of the level of involvement in the set-piece, it feels like it has massive narrative. Rogue Trooper didn't have the tech to do this very well, but it is still there. Spec Ops though, that does it all. The vistas, the vertigo, the armoured enemy, the banter and the swearing. Well worth the popcorn. It's also worth noting that the Uncharted series is built on the same ideas, but I've never owned a PlayStation so I don't have personal experience with it.

So what makes for a good set-piece? Well, I'm not an award winning director or anything, but the first thing I've identified is size. The bigger it is, the better. This is why in the first Far Cry, they plopped down an aircraft carrier instead of a simple abandoned airbase.

The aircraft carrier is not just long and wide, but from a first-person perspective on the ground, it's also damn high. That's a set-piece right there.

The second thing I've identified is effects and animation. A dead area is dead. That's not a meme, it's a truth. If there are radars in the area, move them about. If there's a stack, make smoke come out of it. Old doors? Hinge them back and forth a bit. It's these little nuances that really make it special. For an analogy: look at the inside of the Pagani Zonda. They could have just stuck some normal air-vents in that cabin, but they didn't. They made it look like steam-punk plumbing, chromed and polished.

So, since my submission to IndieCade in May, I've been working on the ten identified set-pieces for Stingray. This isn't something I can post about easily though, since it would essentially be spoiler posts. But I wanted to share what's currently in the foundry: the process of constructing the environment of one of these set-pieces.


The first is the wireframe of the in-game model, and next to it is the same model as a solid, but colored based on which bits are mapped together. Next is the sculpted seriously high-detail model of the same area (12 million faces). This is used to generate the normal maps, which when applied together with other (still WIP textures), forms the last image, which is again the low-poly game model as it would look in the game.

As I've said before, most of the time in development is spent making content. And these sorts of set-pieces take the longest of them all. The content itself, the maps, the models. But also the animations and effects, the testing, the scale. Does the detail level sit on par with the rest of the game? What will be the memory requirements for this bit? There's a whole lot of planning that has to happen, but the pay-off is worth it. If I pull this off, it would hopefully transform Stingray from a simple game into an epic spectacle with pantomime and theatre. Something memorable. Something that, hopefully, you will like and talk about.

Tweaks and Highlights

Posted on 2012-04-25 in Stingray Incursion
My wife made a comment about how she struggles to distinguish the HUD from the background. And she had made a good point.

So I Googled a bit for images of real-world HUDs, and found that almost all of them are lit up like a Christmas tree. The high contrast against the background is what makes all the difference. Here's an example:

Looking at that, it was obvious that I couldn't just get away with adjusting the color from lime to something more white. I could maybe have gotten away with adding some gradients and clever masking to the PNG files I use for the HUD elements, but that wouldn't have solved my problem for the text. So I started writing a pixel shader. A derivation of the Poisson blur and a few pow() and clamp() calls later, and the result seems pretty OK.

HUDEffectDay_U.png HUDEffectNight_U.png

Obviously this will get tweaked as time goes by. The filter settings are pretty sensitive to the render target size (where a 0.001 shift in UV can be a dramatic shift), and there's some glaring/white-out where elements overlap, like on the radar. But all in all I'm pretty happy, in a preliminary sort of way.

Scale Models

Posted on 2012-04-24 in Stingray Incursion
More modelled content, but not everything's to scale...

I continued to model the ordnance in the game and recently added the flak rounds and the bullets. The references I initially used were the WW2 flak cannons. The most widely used AA round was 22mm calibre, and so I modelled the rounds based on this information. As it turns out, that's so small that it wasn't visible in the game at all. This is in contrast with the rocket and missile models, which are to actual scale. Because those are launched from a position close to the player, they are visible when it matters. However, as the player dodges and manoeuvres around, the flak rounds were almost never close enough to the player to see. My solution was to make them bigger. They ended up being almost 7 meters long in real-world scale, and close to 0.5 meters in diameter. This is massive. But to give the player the ability to see them, it was necessary. Similarly, the bullet rounds are the same, almost 7 meters in length.

Even after all this, I found that it looked crap. Previously I had simply drawn lines between the previous and the next positions. This had looked a lot better, and my conclusion was that it created the illusion of motion blur. I then adjusted my textures to blend some alpha towards the rear of the AA and bullet models, which did the trick! Additionally, with the use of models I'm able to make and apply illumination maps for the tracer rounds as well.


The bullet rounds are visible on their way to the player here, along with some being fired from the chopper. With the models it looks much more realistic than the orange lines did. The design decision behind all this is to facilitate the shoot-em-up/dodge-em-up aspect of the game play. Instead of the bullets hitting almost instantaneously, it's very obvious to the player that there's a burst of rounds coming his/her way, and the speed is balanced such that the player can react to it. Making the visual aspect work towards that goal is very important. Of course, there's no such thing as tracer flak rounds as far as I know, so those will not be visible at night, which makes night-time engagements a bit more tricky.


Posted on 2012-04-14 in Stingray Incursion
So with the submission deadline for IndieCade coming closer, I've spent some time polishing up the game. And fixed a lot of bugs. And worked some more on the effects, specifically for night time.

Simple stuff like a main menu on the title screen and an in-game pause menu seemed easy enough, but as it turns out there were some real problems with my disposing and switching between the different sections of the game. Also, I had a bug in my sound code that caused some sounds to continue playing irrespective of the game being paused. This same bug is also what caused some strange sound corruption issues.

The effects I worked on were mostly explosions and other combat effects, like the missiles and rockets. Fighting at night now proves to be way more epic than during the day, simply because of the updated effects. Tracer rounds now fill the sky, and the explosions at night are pretty awesome to behold, especially on moving vehicles. I've added a small integrated tutorial as well, the end of which is visible in the shot below.

Night Battle_U.png

I also added actual geometry for the rockets and missiles. Both have an illumination map, making their tail-pipes visible during the night. In the day time they're not really that easy to spot, due to size and speed. The missile is visible below the chopper in the following shot.

Night Battle 2_U.png

The HUD-integrated pause menu is also visible in these shots. Additionally, when landed, there is another in-game menu in the same fashion, where options to save the game and select upgrades will later appear. Soon the build for the IndieCade submission will be made available to the public, so watch this space.

I remodelled the cliffs again. They were too blocky and well, looked crap. Messed around with some normal mapping and then, well, I got sculpting.

It was my first time. It took some getting used to, but ultimately I must say I'm super happy with the results of that, and the resulting normal maps that I could make.


All that's left to do now is rework the diffuse maps a bit and then get around to sculpting all the rest of the cliffs. The challenge at the moment is the tiling. With the normal maps in the mix it's difficult to control, particularly along the lines of the cliffs themselves. If it turns out to be very obvious, I'll have to think of something. But other than that it's all lovely.

Update Here's an updated screenshot of the cliffs with a new diffuse texture. You'll notice that the ground and cliff colours are all a bit more de-saturated and lightened. This was thanks to my wife's suggestion since she didn't think the colours matched.


More Visual updates

Posted on 2012-03-14 in Stingray Incursion
In wrapping up preparation for my submission to IndieCade, I've spent some more time on visuals. My focus last night was on getting animated textures working, specifically to do good normal mapping for the constantly changing water.

It might seem superfluous, but these updates just serve to flesh out the experience on the test level. Instead of just having a small piece of land that simply ends in space, the level now extends all the way to the edges of the map, which gives a well-rounded impression. It will also serve to fill up the space in the final game.

I opted for the 'sprite sheet' solution: one 2048px texture with 16 frames of 512px each tiled onto it. The animated texture class finds the correct texture for the current frame, and also the correct texture for the next frame. The duration of a frame is pre-calculated, and the two frames are then transitioned using alpha-blending depending on how far into the current frame we are. This means I can easily use only 16 frames for a 5 second loop. Naturally, a screen-shot doesn't show this off very well :)
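As a rough sketch of the frame selection and blend described above (the numbers come from the post; the variable names are mine, and the elapsed time would come from something like GameTime in XNA):

```csharp
// Sketch: pick the current and next frame of a 16-frame, 5-second loop,
// and compute the alpha used to cross-fade between them.
const int FrameCount = 16;
const float LoopSeconds = 5f;
const float FrameDuration = LoopSeconds / FrameCount; // 0.3125s per frame

float t = totalSeconds % LoopSeconds;               // position within the loop
int current = (int)(t / FrameDuration);             // index of the current frame
int next = (current + 1) % FrameCount;              // wraps back to frame 0
float blend = (t % FrameDuration) / FrameDuration;  // 0..1 progress into the frame

// Draw 'current' at full opacity and 'next' at 'blend' alpha (or lerp the
// two samples in the shader) for a smooth transition from only 16 frames.
```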


Update: I've spent some more time on the water shader and on the assets for the water and also the coastline. The animation is now much smoother and all the coastline assets have the shader applied. To prevent specularity on the sand and non-water bits I had to use a specular value map. This is basically a map with 8 bits per pixel detailing how much specular should be applied in the pixel shader. Instead of using a whole texture for this I've embedded it into the alpha channels of the existing normal maps. It's a bit of an extra process, but saves a hell of a lot of memory. There are some artefacts along the edges of the tiles, but this is something I can get right with a bit of shader manipulation, so not so much of a worry for me now. On to other things!


Quick Visual Update

Posted on 2012-03-12 in Stingray Incursion
Not much to say in this update. I put the gun in the game proper, fixed a projection bug, and started mucking around with normal maps for the environment.

Even though the gun is hardly going to be visible, it had to go in. I really just had to texture it and so on, and then I started looking at a bug where the gun wouldn't lift higher than a certain point. Quite quickly I noticed that it was similar to another bug on the sight. It turned out to be the dot product: it always gives you the smallest angle between two vectors, so you have to work out separately whether that angle is positive or negative. So, solved that.
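The usual fix for the smallest-angle problem is to get the magnitude from the dot product and the sign from a cross product against a reference axis. A Python sketch of the idea (names are mine, not the game's code):

```python
import math

def signed_angle(a, b, up=(0.0, 1.0, 0.0)):
    """Angle from vector a to vector b. acos(dot) alone only gives the
    unsigned smallest angle; the cross product vs. 'up' recovers the sign."""
    ax, ay, az = a
    bx, by, bz = b
    dot = ax * bx + ay * by + az * bz
    # cross product a x b
    cx, cy, cz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    la = math.sqrt(ax * ax + ay * ay + az * az)
    lb = math.sqrt(bx * bx + by * by + bz * bz)
    angle = math.acos(max(-1.0, min(1.0, dot / (la * lb))))
    sign = 1.0 if cx * up[0] + cy * up[1] + cz * up[2] >= 0.0 else -1.0
    return sign * angle
```

For the gun's elevation you'd use the mount's sideways axis as `up` instead of world Y, but the principle is the same.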


So Saturday I was kinda lethargic and just messed around in Photoshop trying out some ideas for the normal textures for the ground and sand. This quickly turned into quite a bit of work that carried on through Sunday, and I've only done like 12 tiles' worth :) But it at least helps to make things look awesome. Here's the ground and the sand, both obviously most visible in low-light conditions.

EnvironmentNormalMap_01.png EnvironmentNormalMap_02.png


Posted on 2012-03-09 in Stingray Incursion
Animations. I've come to realize that this is a very important part of game development. Of course there are various different types of animation, and in this blog post I'm going to talk a bit about hierarchical rigid body animation.

So what exactly does that constitute? Well firstly, I'm not even sure that it's the correct term - it's just what I think it's called. But basically it's animating objects that don't have skeletons. Skeletons, and thus character animation through skin modifiers, is a whole different sort of animation, and also on a whole different level of complexity. I dabbled a bit with that in the MiniLD submission "Mommy there's monsters". But for Stingray, here, today, I'm not doing that.

What I'm talking about is mostly mechanical objects that have different parts, where each part can rotate/move to a certain extent, but is also subject to its parent's rotation/movement. A good general example is the robot arms used for vehicle manufacture and production-line stuff. The upper arm is connected to a base, which can rotate around the vertical, turning it either left or right. The upper arm itself is constrained by its mounting point, and can only lift and lower. The lower arm has the same constraints, but is subject to the upper arm. And so on. This situation is prevalent in Stingray, specifically with the mounted mini-gun on the front of the chopper. It's mounted to the body, where it can turn around. The gun itself can only be lifted or lowered. Together, the movements of the two parts allow the gun to be aimed almost anywhere.

Now, I realise that most middleware solutions (UDK, Unity etc.) have implementations of this which are (probably) easy to use and set up. Since I'm not using any of those, I had to build a small system to handle it and to configure it quickly. So far it's not a very robust system, and I'm sure there are many situations it won't be able to handle yet. But I'll expand it as I need to. For the moment, however, it's a simple implementation with a small config that does what I need it to do.


In the image you can see the two different pivot points. The mounting bit, which turns left or right, and then the gun which is linked up off-center from the mounting point. The gun's position is dependent on the left or right rotation of the mounting.

Each object that needs to be included in the hierarchy is listed in the XML, together with its type of animation (rotation or translation) and its own constraints in terms of the three axes. In this example, the mounting can only rotate around the Y (green) axis, and the gun can only rotate around the X (red) axis. Optionally, each object can then be linked to a previously configured item in the list, and the constraints of this link - the inherited movement - are also specified per axis. So the rotation of the mounting around the Y axis causes the gun to also rotate around the Y axis. The tricky part is that the gun's pivot should be rotated relative to the mount's position, not relative to the world origin, and you have to keep in mind which transformations are applied before or after the model transformations in the .X file. Like I said, it's pretty basic, but it's a solution that works for now and has a very small footprint.
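The "relative to the mount, not the world origin" part boils down to transform composition: rotate the gun's local offset by the mount's rotation, then translate by the mount's position. A minimal Python sketch of that composition (names hypothetical; the game does this with XNA matrices):

```python
import math

def rot_y(angle):
    """3x3 rotation matrix about the Y (vertical) axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[r][k] * v[k] for k in range(3)) for r in range(3))

def gun_world_position(mount_pos, mount_yaw, gun_offset):
    """Rotate the gun's off-center offset by the mount's yaw, then translate
    by the mount's position - i.e. the pivot orbits the mount, not the origin."""
    ox, oy, oz = apply(rot_y(mount_yaw), gun_offset)
    return (mount_pos[0] + ox, mount_pos[1] + oy, mount_pos[2] + oz)
```

The gun's own X-axis rotation would be applied in its local space before this step; getting that ordering wrong is exactly the .X-file transform-order trap mentioned above.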

On the Up

Posted on 2012-02-22 in Stingray Incursion
I'm in a better mood regarding the development this week. It's been a great weekend.

Earlier in the month I visited The Bakery Studio in Claremont, who agreed to assist in making sound effects for Stingray. That process was completed this week and I picked up the sound effects from them. To put it simply, it was worth the very reasonable cost. The engine sounds and stuff were pretty much spot on and went into the game without a hitch. Actually, that's the case for 90% of the sounds. I was quite amazed at the difference it made. Of course there were some technical implementation issues as usual, but some intelligent queueing and what-not seems to have solved most of the issues.

And so with that in place, a gameplay video is on the cards, but there are a few things I need to do before I record one. Chief of which is the controls. The prototype control system was a bit too jerky and had a whole host of issues. I've come to know how to get around those through all the play testing, but keen viewer eyes will certainly pick up on the issues when watching the movie. So I finally pulled out the old (1998) physics text books from varsity and started overhauling. The effect was quite dramatic. It actually works very, very well, and it's not super high-grade flight mechanics either. Well, play-testing will reveal that, but so far it's smooth and weighty. It has a nice feel. However, to alleviate some complexity I've also had to build a feedback system that basically tries to keep the chopper from ascending and descending when there's no player input. All that's left now is the tail-rotor physics. When all this is done I hope to tune it such that it will be possible to do some very cool evasive and strafe-attack manoeuvres.
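I don't know the actual implementation, but a feedback system of the kind described - holding altitude when there's no collective input - is often a simple proportional-derivative loop. A hedged sketch with made-up gains:

```python
def altitude_hold(error, vertical_speed, kp=0.8, kd=0.3, player_input=0.0):
    """Collective output for the chopper. 'error' is held altitude minus
    current altitude. If the player gives input, pass it straight through;
    otherwise push back toward the held altitude, damped by vertical speed.
    kp and kd are hypothetical tuning values."""
    if player_input != 0.0:
        return player_input
    return kp * error - kd * vertical_speed
```

The derivative term (`kd * vertical_speed`) is what stops the chopper bouncing around the target altitude instead of settling.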

I've also identified the 10 main story points. This is a great leap forward for me, since it will make determining the content and AI scope going forward a lot less daunting and random. In fact, I have already drawn up a list of all the various static defence systems that will be in the game. It will include beam weapons :) So go dodge that :P

Mind Games

Posted on 2012-02-16 in Stingray Incursion
The last couple of weeks have been a real mind fuck.

Probably the most difficult part of this project to date has been defining the various game systems and AI states. Yes, it is a bit late in the day for this sort of thing. But at least I'm there and I'm doing it. To say it's tiring is an understatement. Much like concept art, this process starts on paper, and then I transitioned it into a mind map. But conceptually it's way more taxing than anything I've done so far. It's just hard.


Initially it seemed like it would take two ticks to do, hence the up-beat nature of the previous blog entry. The autonomous AI for the vehicles was quick to define and implement. And also easy to test. But when you start fleshing out the 'greater force' that the player will be up against, things start spiraling out of control very easily. And like most software projects, when you start eyeballing the details there's nothing keeping an eye on the bigger picture. I've come to refer to this as the distilling process.

Piling on states and variables is no good thing, and it's hard to take all that and boil it away until the essentials remain. So what is essential then? That rather depends, but one non-negotiable requisite is to have a very clear idea of what your game is going to be about. And I'm not talking about your idea's premise. I'm talking about your game systems.

How are you going to achieve meaningful player decisions? Variations in player engagement? Sure, some of the story elements or the underlying premise drive towards that, but that doesn't go deep enough.

In Stingray, the premise is corporate warfare. Part of the story is infrastructure. So a tactic that slowly disables installations is on the cards, because if you're on the full-stealth side of the tech tree you can't bull-rush an installation outright. Take out the power supply then; that'll shut it down. Oh wait, there's more than one. But now they know where you are. Was that a good decision? Because here they come to investigate. Will they repair it while I'm dodging?

And so the spiral continues...

Guns firing, missiles flying and some sort of engagement on the books.

So what have I been busy with? Well, mostly player engagement. Previously I had the chopper's gun firing, and shortly after that the rockets started flying as well. It wasn't long until missiles were added to the mix, using locked-on targets and following an up-and-over flight path towards the target.

After that I messed around with the controls a bit more. Some people suggested a Battlefield-style control system, which seems to work a lot better for aiming the rockets, but to me it's not such an intuitive third-person system. It's still subject to change. What I also did was put in the forward outposts. These are massive roaming landing platforms to refuel, rearm and repair at. They are commandable from within the chopper's Nav mode and they have some defensive capabilities, but they are very slow to move around and require time to deploy and redeploy.


Then I moved on to the NPCs. After some discussions with Sven (@FuzzYspo0N) regarding systems in the game, I sat down and started designing the gameplay systems with regards to the AI and player engagement. The first order of business was individual AI. Each NPC needs to react to the environment in some sort of autonomous way. Questions like "Do I attack?" and "Do I flee?" are not always subject to external commands. The answers are also different depending on the type of NPC. A simple recon jeep will react differently to a tank when a chopper appears over the horizon.

So, coming off that, I first had to restructure yet again. My asset structures were incompatible with different settings for different unit types and the class hierarchy did not lend itself too well to applying these different settings. The restructuring took about a day, but it left me with a clean implementation structure for not just custom AI and individual configurations, but also for unit specific animations and other sorts of class-differentiating NPC attributes.

So, what's the outcome of that, you ask? Well, the recon jeeps now react when you come into visual range. If you're still far enough away and appear not to be heading in their direction, they will continue on their patrol route. If you come closer they will start to put distance between themselves and you. If you're really close, they start to open fire in defense. And they are also reporting your position to the central AI every 60 seconds. Of course, this is just the start of the autonomous part of it all that I've now completed. Going further than that, the AI should be able to form groups of recon jeeps who will then openly attack you instead of simply defending, for example. But those sorts of things will all be built as the model becomes clearer and the design improves. Also, they are actually shooting back and dealing damage to you - that already works too :)
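The jeep's escalating reactions described above can be sketched as a simple threshold-based decision function. All the range values here are entirely made up for illustration, not the game's tuning:

```python
def jeep_reaction(distance, heading_toward_us,
                  visual_range=400.0, evade_range=200.0, fire_range=80.0):
    """Pick a recon jeep's behaviour toward a spotted chopper.
    Ranges are hypothetical; a tank would use different thresholds
    (e.g. attack rather than evade at medium range)."""
    if distance > visual_range:
        return "patrol"              # hasn't seen you yet
    if distance <= fire_range:
        return "defend"              # really close: open fire in self-defence
    if distance <= evade_range or heading_toward_us:
        return "evade"               # put distance between itself and you
    return "patrol"                  # seen, but you don't look like a threat
```

A separate timer would fire the position report to the central AI every 60 seconds regardless of which state the jeep is in.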


Team Rocket

Posted on 2012-01-14 in Stingray Incursion
The ordnance is flying all over the place. And it's just beautiful.

This morning I tweeted a picture of the first successful rocket smoke trails. That was a major milestone. By that time the controls were also about halfway sorted, and I did a lot of play testing throughout the day. But there was still more to do. A rocket is pointless if it can't hit anything, and the next step was to start blowing stuff up.


Without too much effort on content (I only made a new explosion particle map) I managed to get the explosions going, and soon after that the accurate hit testing followed. Fortunately, this far into the project, there's already a lot of fully functioning methods, and I really didn't have to write anything majorly new to achieve it.


All that remained was further play testing, and of course bugs. Some were small and some were pretty hard to figure out. For example, the rockets travel fast. Their speed is configured at 40 meters per second, and after failing to destroy targets that I was sure I'd hit, I found that the hit-testing condition was too simple. It tested how close the rocket was to the target to ascertain if it's explod'n time. But on one frame it was just 0.045 units too far away, and on the next frame it was already past it, e.g. at -3.5. This meant I had to build in a prediction of where the rocket will be on the next frame, which is then used in conjunction with the simple test.
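The post describes a next-frame prediction combined with the proximity test; a common generalisation of the same idea is to test the closest approach of the whole frame's travel segment to the target, so a fast rocket can't tunnel past. A Python sketch (`hit_radius` is a made-up tuning value, not the game's):

```python
def rocket_hit(pos, velocity, target, dt, hit_radius=1.0):
    """True if the rocket passes within hit_radius of the target during this
    frame, even when both the current and the predicted next position are
    individually outside the radius."""
    seg = [velocity[i] * dt for i in range(3)]            # travel this frame
    to_target = [target[i] - pos[i] for i in range(3)]
    seg_len2 = sum(s * s for s in seg)
    # parameter of the closest point on the travel segment, clamped to [0, 1]
    t = 0.0
    if seg_len2 > 0.0:
        t = max(0.0, min(1.0,
                sum(seg[i] * to_target[i] for i in range(3)) / seg_len2))
    closest = [pos[i] + seg[i] * t for i in range(3)]
    dist2 = sum((closest[i] - target[i]) ** 2 for i in range(3))
    return dist2 <= hit_radius ** 2
```

At 40 m/s and 30 fps a rocket covers about 1.3 m per frame, which is exactly why a pure point-proximity test can skip straight over a small target.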

Fortunately I managed to solve all the problems that I could uncover during all my testing, and boy, it's now really starting to look like war. On my test level the objects are pretty closely placed, so everything is within reach of your gun and rockets. At some point I had about 20 smoke plumes going up, and that's a big smile on my face...

So after some effort I've got the rocket's firing. But now I've come upon a control problem.

The thing about rockets is that they fire straight from their mounted position, much like the mounted machine guns on old WW2 airplanes. Of course the mountings are slightly angled inwards so that eventually the two lines of fire cross each other, which determines the optimal distance to target. But there I've now come across an issue, and it's to do with the controls. The chopper pitches and rolls, and that affects where the rockets are aimed. But with keyboard control it's really difficult to keep the reticle pointed at a specific target. Partly because my targets are now moving as well (if you aim at a vehicle), but mostly because with keyboard controls there's no in-between. The chopper is either fully pitched flying forward, or hovering flat, or tilted up going backwards, in which case you're firing into the air.

Now, the controls as implemented at the moment are shaky at best, because I've never needed them to be good enough for gameplay. So that's obviously what I need to do next, before I even bother with the smoke effects and all the other neat stuff that will make firing rockets a pleasure.

There was also a more minor problem with the HUD regarding the rockets. Since they have a static reticle, and since the player can swivel the view around the chopper, seeing where the rockets were going to hit didn't work out that well. Of course it's easy to say "Yes, but the player should look down the sight when using rockets", but that's not a very good argument, because it then renders the gun, which is tied to the camera, impossible to use.

Fortunately, with the reticle for the nav/gun sight, I've already solved the problem of projecting the reticle onto the HUD at the correct position relative to the spot being aimed at, in conjunction with the camera position. So it wasn't too much of a hassle to have the rocket reticle always show where the rockets will hit, irrespective of where the camera is. In this way the player can now fight two battles at once - attacking one target with the gun, and another with the rockets, all at the same time.
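Projecting a world-space aim point onto the HUD is, at its core, a perspective projection into screen pixels (XNA provides this via `Viewport.Project`). A minimal Python sketch of the maths, assuming the point has already been transformed into camera space (x right, y up, negative z forward); this is an illustration, not the game's code:

```python
import math

def project_to_hud(point_cam, fov_y, aspect, width, height):
    """Return the (sx, sy) pixel position for a camera-space point,
    or None if the point is behind the camera."""
    x, y, z = point_cam
    if z >= 0.0:
        return None                        # behind (or at) the camera plane
    f = 1.0 / math.tan(fov_y / 2.0)        # focal length from vertical FOV
    ndc_x = (f / aspect) * x / -z          # normalised device coords, -1..1
    ndc_y = f * y / -z
    sx = (ndc_x * 0.5 + 0.5) * width
    sy = (1.0 - (ndc_y * 0.5 + 0.5)) * height   # screen y grows downward
    return sx, sy
```

With this, the rocket reticle can be drawn at the projected impact point every frame, and it stays correct no matter where the orbiting camera is.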

Hopefully I'll have some screenshots towards the end of next week with smoke effects, bullet tracers etc, but first I've got to solve the control issue, otherwise there's no point to all this.

I decided that after the quantum leap of the last week or so, I needed to refine what I had done. Both in artistic appeal and technical implementation.

The graphical side of it is not as important for me at this time. If it's functional and it's visible, it's fine. But with the particle systems, the work involved in making them look so much better than the first-draft implementation was minuscule, so I spent an hour on it.

More importantly however, technically I had to consolidate a bit. It's so easy for the code to run away from you if you're not careful. Sloppy implementation up-front will make for big headaches later on in life. That's kinda common knowledge. The problem with prototyping, though, is that you tend to rush through things with your mind on the gameplay design and not so much on the implementation design. Because the purpose is primarily to test the gameplay, this isn't a problem per se. However, once the gameplay is confirmed and the idea is not binned, you have to go back and refactor and revise that implementation.


So after some structural changes, some new additions to the XML pipeline and three reworks of the smoke and new dust textures, this is the result. The dust trail especially adds a lot of grounding for me. It makes the vehicles have substance. But anyway, I hope you like!

Patrolling continued

Posted on 2011-12-08 in Stingray Incursion
So when you're going up a hill, you're going up a hill, right? Not so easy when you're coding it.

It's all fine and dandy moving vehicles around the level, heading in the correct direction and at the correct height based on the tile underneath. But when a vehicle encounters a hill/dune/bump it shouldn't just float up and float down; its orientation should change, with its nose pointing towards the sky as it mounts the slope.

It was easy enough reading the normal from the currently occupied face. But I had some trouble getting the measurements to work correctly according to the vehicle's local axes, as opposed to the world axes. This means that if the vehicle is heading north-east, its local axes are turned 45 degrees (or Pi over 4 :P) from the world axes. But in essence I ended up with a value that I could use to stick into the world matrix for the vehicle.


The green (tangent) vector here is the key. You cannot work with the normal alone, because it has no relation to the direction. I initially used the red (normal) vector to calculate an angle with the unit Y vector, because irrespective of direction, unit Y is still applicable to the vehicle's local axes. But in fact that was wrong. The direction is important, and calculating the tangent to use in conjunction with the vehicle's own direction vector is the key. In the image below the magenta lines are the face normals, and the green lines are the tangent normals.
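One way to compute that tangent-based tilt: project the flat heading onto the face's plane (removing its component along the face normal), then take the signed angle between the flat heading and the resulting slope tangent. A Python sketch of the idea, not the game's code:

```python
import math

def pitch_for_slope(heading, face_normal):
    """heading: unit vector in the horizontal plane (y component 0).
    Returns the signed pitch angle so the nose follows the slope:
    positive going uphill, negative going downhill."""
    d, n = heading, face_normal
    dn = sum(d[i] * n[i] for i in range(3))
    # tangent: the heading projected onto the face's plane
    t = [d[i] - dn * n[i] for i in range(3)]
    tl = math.sqrt(sum(c * c for c in t))
    t = [c / tl for c in t]
    # unsigned angle between flat heading and slope tangent, signed by t's y
    cos_a = max(-1.0, min(1.0, sum(d[i] * t[i] for i in range(3))))
    angle = math.acos(cos_a)
    return angle if t[1] > 0.0 else -angle
```

This is why the raw normal-vs-unit-Y angle was wrong: it gives the steepness of the face, but not how steep the face is *in the direction the vehicle is actually travelling* (driving along a contour line should give zero pitch).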


So that solved the tilt angle, but as you can see, I still need to do the roll-angle. Hopefully this won't take me as long.


Posted on 2011-12-06 in Stingray Incursion
I'm on a roll, man! The few vehicles (actors) on my test level are now roving around, following their randomly generated patrol routes.

Started this last night, and they're all moving around already. A simple waypoint manager class attached to each actor object. At this point I just randomly generate a circular route for each object.

As far as problems go, correct orientation was a slight issue. At some point all of them moved around in reverse gear. But once that got sorted it wasn't too difficult to make them actually turn around to go to the next waypoint. Just like a car would, i.e. not turning on the spot.

Another one was to stick to the ground. The tiles are not flat. There's bumps and stuff on them. So I hijacked the code used for the chopper's altitude calculation to determine the correct height of each object at their location.

And then the speed. At the moment they're all travelling at max speed. It's fine, but it looks strange when they first set off - immediately barrelling along at 25 m/s. There needs to be a run-up. So I've been mucking about in Maxima to determine a good formula.


At first I was trying for a formula that I could simply use at any point between two waypoints. But the multiplier, which should be the maximum speed, doesn't actually result in the maximum speed. So I've abandoned that one.


Then I decided I'd work on a percentage from the 'from' waypoint and a reverse percentage from the destination waypoint separately, and apply the above function to each.


But it looks like the ramp-up to 10 m/s is still too quick, so adjusting the exponent solves that with the third function.
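The approach described above - a percentage from the departed waypoint, a reverse percentage to the next, and an adjustable exponent - can be sketched as a two-sided ramp. The tuning values here are hypothetical (the actual Maxima formulas aren't reproduced in the post):

```python
def patrol_speed(travelled, total, max_speed=25.0, ramp_dist=20.0, exponent=2.0):
    """Ease in after leaving a waypoint and ease out approaching the next.
    travelled/total are distances along the leg; ramp_dist and exponent are
    made-up tuning values. Raising the exponent softens the early ramp-up."""
    p_in = min(1.0, travelled / ramp_dist)              # percentage from start
    p_out = min(1.0, (total - travelled) / ramp_dist)   # reverse percentage
    return max_speed * min(p_in, p_out) ** exponent
```

Taking the minimum of the two percentages means whichever waypoint is nearer governs the speed, so the vehicle both accelerates away and brakes into the turn.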

Now to implement! Oh yeah - also all of them always turn left. I need to look at the dot product to get a sign to apply to the direction adjuster.


Posted on 2011-12-05 in Stingray Incursion
Yes... there are finally some gameplay elements.

If you expected to see some gameplay footage though, sorry to disappoint. But I have sort of completed the shooting. This followed from my chat with Danny from QCF Design after Wednesday's game dev session. Basically it went something like: "You say you're writing a game, but there's no gameplay..?" "Um..yeah.."

I was looking for a way to determine an exact hit. Until now I targeted the objects using the bounding boxes. That's fine for knowing what you're aiming at in the general area, but not good enough to determine what exactly you are hitting. So I got an idea last night as I lay in bed, got up, and tried it out. As it turned out, it only worked in my head. But then I found another solution, and that worked like a charm.

The chorus goes something like this:

  • Find the ray that is the gun sight
  • Use that ray and object bounding boxes to find potential targets
  • Test every object found if there is an exact hit based on the point in triangle method
  • Now I know exactly which object is hit, and I know precisely where that object is being hit based on the UV coordinates

Two things to note here: Firstly, I already have a FOR loop that determines viewable objects in a pre-process method. I implemented these steps into that same FOR loop, otherwise you're doubling up on CPU time going through the same list of things. Secondly, when using the point-in-triangle method, I use my lowest-detail models. They closely mimic the shape and size of the high-detail versions, and there's no point in hit-testing detail. You're only interested in the bottom line - is it hitting exactly, and where is it hitting? If I was writing Battlefield 4, then yes, I'd use the higher-detail versions to determine if an overhead power cable is being hit, so it can break apart and spew electricity on everyone :)
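The 'exact hit' step in the list above is commonly done with a ray/triangle intersection such as the Möller-Trumbore algorithm, whose barycentric coordinates also give you the hit position for UV lookup. A Python sketch (the post doesn't say which point-in-triangle routine the game actually uses):

```python
def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle test: returns the distance t along the
    ray to the hit point, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                     # ray parallel to the triangle
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det       # barycentric coordinate u
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det  # barycentric coordinate v
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None       # hit must be in front of the origin
```

Running this only on the low-detail mesh of objects that survived the bounding-box broad phase keeps the cost per frame small.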

All that was left to do was to pass a couple of event handlers around to effect the damage dealt by the gun. This actually happens per round fired. I have a single particle system, attached to the chopper, that generates one particle for each round being fired. Each particle is assigned its target at creation time, based on what the chopper was targeting at that specific moment. When it expires it triggers its event, which is hooked up to the linked object, and some method calls sort out the rest.

So, I finally have some real gameplay. Go figure :)

Smoke signals

Posted on 2011-11-23 in Stingray Incursion
Finally, after a bit of a rewrite, the particle system is taking shape.

I initially just set off doing my own thing, using the existing architecture of the game. Things worked... but it looked utter crap. I was initially using an instanced quad, and was about to get into how to billboard it. But it wouldn't have worked. Calculating rotation angles for 600+ particles per smoke plume was always going to be too expensive. Of course, you can argue that you only need to calculate it once, because each particle needs to face the same direction. But that only holds if you're viewing from a horizontal position. Instead I decided to look at the AppHub 3D Particle sample. And boy, did that one look pretty damn good.


So I set about rewriting my particle system a bit to make use of the clever ideas implemented in that sample. The circular array instead of a list (much faster). The dynamic vertex buffer instead of an instanced quad. And the vertex shader which does almost all the calculations on the GPU instead of on the CPU. Clever stuff.
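The circular-array idea is roughly this: a fixed-size buffer where emitting appends at the write end and expiring just advances the read end, with no per-particle allocation or list shuffling. A simplified Python sketch of only the bookkeeping (particle data reduced to a spawn time; the real sample stores vertices):

```python
class ParticleBuffer:
    """Fixed-capacity circular buffer. Particles are emitted in time order,
    so only the oldest one can ever be the next to expire."""
    def __init__(self, capacity):
        self.ages = [0.0] * capacity
        self.capacity = capacity
        self.start = 0          # index of the oldest live particle
        self.count = 0          # number of live particles

    def emit(self, now):
        if self.count == self.capacity:
            return False        # buffer full; drop the new particle
        self.ages[(self.start + self.count) % self.capacity] = now
        self.count += 1
        return True

    def retire(self, now, lifetime):
        # free expired particles by simply advancing the start index
        while self.count and now - self.ages[self.start] >= lifetime:
            self.start = (self.start + 1) % self.capacity
            self.count -= 1
```

In the GPU version the same trick pays off twice: the live range maps directly onto a contiguous (or two-piece) region of the dynamic vertex buffer.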


There's still one bit of this implementation that I don't quite understand: the way it calculates the actual screen space coordinates of each particle based on the viewport scale (??) and a single position vector. It's something I'd never have thought of in the first place, but using this method evidently requires very little overhead to do billboarding. So I'm quite embarrassed to say that at least that bit of code is a straight copy and paste from the sample.


In the end though it's at least looking proper. That said, the first implementation is something I want to keep, because there will be a use for a system that uses instanced geometry. I'm thinking debris particles when something blows up. That would just look 100 times better using actual models rather than billboards, and you can do some excellent random rotation per particle.


And lastly, untweaked and not done yet, is the new HUD showing in these screenshots. It's fully integrated with the chopper, and doesn't require the player to move focus from the action in order to see the details. Granted, at some angles it does become less readable, but so far that doesn't pose a problem for me.

The pepper grinder

Posted on 2011-11-16 in Stingray Incursion
Still busy with the random level generation. But the sprinkles of success are landing everywhere.

There are still a few missing tiles that need to be made in order to complete the puzzle. These are the obscure tiles that will appear maybe five times in the level, in very suspicious nooks and crannies. But those nooks and crannies are there because the randomness decides for them to be there. That is the problem with random level generation. If everything was sand, for instance, the problem would not be present at all. But with the transition areas from sand to ground, and with the cliff tiles in the mix, the current seed I'm testing with throws me a few curve balls with regards to the layout.

MissingTile01.jpg MissingTiles02.jpg

Apart from that though, other level bits like the trees, the shacks and the prefab houses are now also being 'strewn' across the level. It's kinda like discriminatingly shaking a pepper grinder over the basic layout. Palm trees should stick to the sand tiles. The shacks and prefabs can land anywhere. And nothing should land on the dune tiles. Of course, since this is random, the specific locations are not exactly controlled. Hence sometimes (again on the transition areas) the trees do land on the ground. But that's ok.

And 'land' is the operative word here. I know the level elevation will never go higher than 240. So I start by placing the object on an altitude of 1000. From there I cast a ray straight down and get the intersection on the bounds, and then get the intersection on the exact face. It's the same routine I use for the chopper's altitude calculation. From that I can determine with a high degree of certainty exactly what Y value the object must have to be on the ground. Basically firing the objects from space.
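'Firing the objects from space' can be sketched as a straight-down ray test: for a vertical ray, the intersection reduces to a 2D point-in-triangle test in the XZ plane plus a barycentric interpolation of the face's Y. A Python illustration (the game first narrows candidates with bounding volumes; this sketch brute-forces all faces):

```python
def _ray_down_hit(x, z, v0, v1, v2):
    """Y of the face at (x, z), or None if (x, z) is outside the triangle."""
    d = (v1[2] - v2[2]) * (v0[0] - v2[0]) + (v2[0] - v1[0]) * (v0[2] - v2[2])
    if abs(d) < 1e-12:
        return None                      # degenerate/vertical face in XZ
    a = ((v1[2] - v2[2]) * (x - v2[0]) + (v2[0] - v1[0]) * (z - v2[2])) / d
    b = ((v2[2] - v0[2]) * (x - v2[0]) + (v0[0] - v2[0]) * (z - v2[2])) / d
    c = 1.0 - a - b
    if a < 0.0 or b < 0.0 or c < 0.0:
        return None                      # outside the triangle
    return a * v0[1] + b * v1[1] + c * v2[1]   # interpolate the height

def drop_to_ground(x, z, triangles, start_y=1000.0):
    """Cast straight down from (x, start_y, z); return the highest ground Y
    below the start, or None if the ray hits nothing."""
    best = None
    for v0, v1, v2 in triangles:
        y = _ray_down_hit(x, z, v0, v1, v2)
        if y is not None and y < start_y and (best is None or y > best):
            best = y
    return best
```

Since the level never exceeds an elevation of 240, starting the ray at 1000 guarantees every object begins above the terrain.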

The other bit of work I had to do was to restructure some bits of the entity model so that I could create an object preset. These presets are used to define objects that will be used in the level. For example, the guard tower is made up of the main building model, plus the windows/screens that have an alpha component. The alpha components are drawn last for obvious reasons, so it has to be two separate models in the content set. The preset combines these, and from the preset various different instances are created as level objects.


The HUD has also been tweaked, and the display errors on the altimeter have been corrected. But there are more changes I want to make before I'll show you what it looks like.

Don't lose faith

Posted on 2011-10-11 in Stingray Incursion
So y'all figured the project was dead?... No, it's not.

Progress has been steady, but nothing that was worth posting. I've finished all the tiles for the basic level layout. There were quite a few to do, but now that it's done I can start working on the level generation. There is already something happening there, but with missing content things didn't work very well.

Other than that, I've also started on the targeting system, i.e. basic hit testing. The chopper's gun follows the cursor on screen and the system correctly identifies which object is being aimed at. Some simple reverse-projection calls and maths, really. The biggest challenge was doing that as part of the pre-process loops, to avoid having to traverse the object structures a second time. Of course it's easy to just make a bunch of global variables... but come on... seriously? So, since there are already loops happening to identify viewable objects and LOD distance in the pre-process phase, there was no point in looping through that list again to find my target. Also, the gun does fire and ammo gets expended, but there's no visual cue and no damage is being dealt yet.

Next on my agenda, I'm looking for a small success item, so I'm gonna do the HUD. But not as you might think ;)

There's also other stuff that I'm starting to think of. Especially sound and music. I've asked Ramon to start on an intro sequence as well. Whether that will be stills based or movie based we're not sure yet. But now that the basic level layout options are done for the most part I can get on with rounding out the experience, story line, missions etc. At some point though I'll have to go back and revise all the art, get normal maps in there etc. That will be the polish phase.

The Real Stingray

Posted on 2011-09-07 in Stingray Incursion
Finally, a much anticipated moment has arrived... Click through to see the real Stingray model in action.

The office asked me to do a presentation on Stingray for one of the Knowledge Sharing Sessions they have on a monthly basis. A couple of guys at the office have been keeping tabs on the progress and they've deemed it worthy of a recommendation for a session. We'll mostly cover the performance side of things - optimal code with optimal structures - to keep it relevant.

As part of the preparation for it, we decided to wrap up the production on the actual Stingray chopper model and stick it into the game for real. Ramon had actually delivered the model to me a while ago, and it's been lying in my inbox ever since. But over the weekend I decided to take it in, clean it up, finish the unwrapping and well... see for yourself...


I decided to start with a 'stealth' texture. There will be more options later on, which will be determined by your upgrades. The other neat success about this is that the canopy of the chopper is the first real transparent item in the game. I've had to tweak the shaders a bit to get the sun/moon reflection off it looking good, and it pretty much all worked out well.


I've also built some more items for the levels and overall there's much more going on in the scene than before. The frame rate on my laptop seems to still hover around 30 after I revised some infrastructure and optimised distance sorting, but I think that's the limit of the hardware on here. I should start taking screens from my gaming rig now. All in all it's now really starting to look like a game... At least that's what Ramon said :D

Graph Theory

Posted on 2011-08-25 in Stingray Incursion
Moving on to random level generation...

Noise maps are a tricky thing. The theory of it, as with everything, is pretty straightforward. Implementation, on the other hand, proved to be more than a day's worth of frustration. But I did come up with something useful for generating the levels in Stingray.


The second problem is knowing which tiles belong together. Apart from the fact that four tiles at a time share a single texture map, there's no other obvious link besides the naming of the tiles, which is not something useful. I've opted to build a tool to link tiles to one another in all four directions. It's hard work up front (read: tedious), but it will make it fairly easy to determine which tiles to place where on the map. Considering the number of tiles I already have (and those still to come), the links between all these tiles would make for some excellent graph theory examples :)
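To make the graph idea concrete, here is a minimal sketch of a directional tile-link structure of the kind described above. The class and tile names are made up for illustration; the real tool presumably stores this per content set.

```python
class Tile:
    def __init__(self, name):
        self.name = name
        # Tiles that may sit adjacent to this one, per compass direction.
        self.links = {"north": set(), "south": set(), "east": set(), "west": set()}

OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

def link(a, b, direction):
    """Record that b may be placed next to a in `direction` (and vice versa)."""
    a.links[direction].add(b)
    b.links[OPPOSITE[direction]].add(a)

# Linking two hypothetical tiles once gives the generator both directions:
beach, sea = Tile("beach_edge"), Tile("open_sea")
link(beach, sea, "east")

assert sea in beach.links["east"]
assert beach in sea.links["west"]
```

A random level generator can then pick any member of `links[direction]` when filling the map, which is exactly the kind of adjacency query the linking tool makes cheap.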

So, to achieve this, I've worked hard on the content set editor to facilitate both the actual linking and testing the linking. These screenshots also show the various elements that the drawing engine supports, although I haven't made maps for any of the slots except the diffuse.


Testing the links is simple - you select the tile in the appropriate direction and it's drawn next to the tile being edited. It's worthwhile to note that the coordinate system in XNA (Y-up) is different from the modelling software's (Z-up). So, when modelling you have to remember that north becomes south, and west becomes east... maybe that's the future of world politics as well?
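One plausible version of that axis conversion, sketched in plain Python: the modelling package's 'up' (Z) moves into XNA's Y, and the old Y is negated, which is what turns modelled 'north' into in-game 'south'. The exact mapping depends on the handedness and 'forward' conventions of the exporter and camera, so treat this as illustrative only.

```python
def z_up_to_y_up(x, y, z):
    """Convert a Z-up (modelling) coordinate to a Y-up (XNA-style) one.

    Keeps X, promotes the old 'up' (Z) to Y, and negates the old Y so
    the ground plane flips front-to-back. Hypothetical mapping; the real
    one depends on the handedness conventions in play.
    """
    return (x, z, -y)

# A point one unit 'north' on the modelling ground plane (+Y)...
assert z_up_to_y_up(0.0, 1.0, 0.0) == (0.0, 0.0, -1.0)
# ...ends up pointing toward -Z, i.e. what reads as 'south' in-game.
```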

After cracking on with the content, I realized that I was going to have to redo some of it...

It sounds like a bit of a setback, and to a certain extent it is, but I was never under any illusion that I'd be able to get away without LOD. So, I revised my content model and drawing engine a bit, and then proceeded to model different versions of some of the same models. The palm tree below is exhibit A.


I reached a point where I was forced to solve my frame-rate issues. I built an object brush for the level editor. It allowed me to pretty quickly paint many, many, many palm trees. After battling a bit to determine the exact height underneath each instance so that they 'stick' to the level tiles, I ended up with a frame rate of about 12... And it was even slower in the game, where the depth and shadow maps are also rendered. So, LOD as a first step. Secondly, I've had to limit the content that is used for the depth-map and shadow-map renders. Not all content will generate shadows, and not all content will receive shadows. Discriminating between the two has helped get the frame rate back to around 35... which is better, given everything that's currently happening, but I'll still need to spend time revising. I hope to avoid having to do LOD on the tiles - it would probably cause tears and look pretty bad. So hopefully the far-plane distance and drawing distance will help to limit the lag from the tiles.

Also, Ramon has made progress with the Stingray...

After some good time spent at the art centre (my desk top PC at home), I've managed to make some cool additions to the level layouts.

There's not much to say here. Doing this using tiles is a bit of a pain, but it proves interesting, and gives us the ability to upsize the level area dramatically without a big impact on the footprint on disk, in memory, or on the frame rate for that matter. Also, since the inspiration dates from the late 80's and early 90's, the tiling is sort of mandatory. Think of the new Ford Mustang. It's a faithful homage to the old classic. The lines and the overall design hark back explicitly, yet manage to convey a modern design. This is what these tiles are all about. Back in the old days they had maybe... 5 or 6 different ones. Today we have excellent capacity and capability, so I can have 600 different tiles split between the various geographical areas. But of course, I'd still want you to get the sense that it's tiled... just like the old games.
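A quick back-of-the-envelope shows why tiling keeps the footprint small. The numbers below are illustrative, not the game's actual figures; only the 600-tile count comes from the text above.

```python
# Each level is just a grid of indices into the shared tile set.
unique_tiles = 600                 # distinct tile models, shared by every level
level_w, level_h = 256, 256        # hypothetical level size, in tiles
bytes_per_index = 2                # a 16-bit index comfortably covers 600 tiles

level_map_bytes = level_w * level_h * bytes_per_index
assert level_map_bytes == 131072   # 128 KiB per level, however big it looks

# Doubling each dimension quadruples only this small index map --
# the tile geometry and textures are stored exactly once.
assert (2 * level_w) * (2 * level_h) * bytes_per_index == 4 * level_map_bytes
```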

The screens below show two of the new tile areas, cliffs and the coastal regions. There's also a shot showing the recon vehicle in game. It's still with the basic texture. And I've not yet built in the positioning of objects (static or actor), so the palm tree and recon are in the same spot for the time being :)

AboveCliffs.jpg OceanView.jpg WithRecon.jpg

The Dark Side

Posted on 2011-07-20 in Stingray Incursion
Tricky shadow mapping bits coming up...

The principle behind this is quite simple. There is no shortage of tutorials out there on this topic, and plenty of discussions about the best method to use. The one I have opted for is the custom shadow mapping option. I suppose at some point I will try and figure out proper hardware-supported stencil buffering etc., but there are well documented pros and cons attached to each method. See here for details. It is a very, very old post, but it serves to illustrate my point.

So anyway, moving on to what I'm trying to accomplish... Here's the depth buffer I'm generating from the viewpoint of the sun. This bit is pretty straightforward, using its own (but small) shader and a render target. One note on this - most native DirectX implementations opt to use the R32 surface format for this to retain accuracy. For XNA, I initialize the render target to the R1010102 format. This is memory overkill at the moment, since I'm only using the red channel for depth information. The problem is that other formats like Single do not support blending (or require Point filtering instead of Linear), so passing it to the real scene render shader is not supported when the BlendState is set for alpha. So, to make the best of it, I'll probably change it to a normal RGBA32 format and pack the float depth value into the various components instead. Should make for some pretty psychedelic depth buffers :)
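The packing idea is the standard trick of spreading one high-precision float across four 8-bit channels (in HLSL it's usually done with a multiply by powers of 256 and a frac()). A plain-Python sketch of the same idea, assuming 8 bits per channel:

```python
def pack_depth(d):
    """Pack a depth value in [0, 1) into four 8-bit channels (r, g, b, a)."""
    assert 0.0 <= d < 1.0
    bits = int(d * (1 << 32))              # scale into a 32-bit integer
    return ((bits >> 24) & 0xFF, (bits >> 16) & 0xFF,
            (bits >> 8) & 0xFF, bits & 0xFF)

def unpack_depth(r, g, b, a):
    """Recover the depth value from the four channels."""
    bits = (r << 24) | (g << 16) | (b << 8) | a
    return bits / (1 << 32)

# Round-tripping loses less than 1/2^32 of precision:
d = 0.3141592
r, g, b, a = pack_depth(d)
assert abs(unpack_depth(r, g, b, a) - d) < 1e-9
```

Visualised as a colour, the low channels cycle rapidly with depth, which is exactly where the "psychedelic" depth buffers come from.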


Currently the actual generated shadows are all over the place. I'm thinking an error in the projection from the camera viewpoint to the light viewpoint for depth comparison. No luck so far, but I'll keep at it. After that, there's blurring to be done as an additional step. The only problem with all this is that the frame rate is cut in half... At this point I'm not discriminating as to what I'm rendering for the depth buffer, but that said there isn't a lot on the test level to exclude either.


I've managed to solve the problems with the projection from the camera view to the depth buffer. The HLSL function tex2Dproj doesn't actually do what it says on the tin. Basically, once you've transformed the pixel to the depth buffer space using the appropriate projection, simply calling tex2Dproj directly with your resulting projected vector isn't going to work. Even though the function does the projection divide, it doesn't remap the result into the 0.0 to 1.0 space that is used for the UV lookup. A simple matrix calculation can do that for you though, and then you can use the function to fetch the appropriate texel.
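The missing remap is just a scale-and-bias from normalized device coordinates to texture space (which is why a small matrix can do it). A Python sketch of the step, assuming NDC x/y in [-1, 1] and a V axis that grows downward:

```python
def projected_to_uv(x, y, z, w):
    """Map a clip-space position into depth-map UV coordinates.

    After the perspective divide, x and y lie in [-1, 1] (NDC).
    Texture lookups want [0, 1], with V growing downward, hence the
    flip on y. This remap is the step tex2Dproj does not do for you.
    """
    ndc_x, ndc_y = x / w, y / w            # the projection divide
    u = ndc_x * 0.5 + 0.5
    v = -ndc_y * 0.5 + 0.5
    return u, v

# The centre of the view maps to the middle of the texture:
assert projected_to_uv(0.0, 0.0, 0.5, 1.0) == (0.5, 0.5)
# The top-left NDC corner (-1, +1) maps to UV (0, 0):
assert projected_to_uv(-1.0, 1.0, 0.5, 1.0) == (0.0, 0.0)
```

In the shader the same scale-and-bias can be folded into the light's view-projection matrix, so the per-pixel cost is unchanged.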


This is what I ended up with. Very blocky and very low fidelity, but at least it's accurate. There is still a problem with the clamp filtering or something. The far tiles are dark, which they shouldn't be. But all in good time...

Big Update!

Posted on 2011-07-13 in Stingray Incursion
There's no release or anything, but the more we finish the more excited I'm getting about this project. Honestly...

Ramon has almost completed mapping the chopper. I know he's having difficulties and headaches since it's a rather complex model, but if anyone can get it done, he can. This is what he's come up with so far...


In other news, I've finished a prototype 'skybox'. I say prototype, because at this point it's not really all that good looking, but the idea is proven and all that remains is some tweaking. The textures are transitioned based on the sunlight system. Actually, the sunlight system drives the whole skybox by itself, including the rendering of it. All nicely packed into a single box of pleasure :) The screenshots were taken at dawn, noon and during late evening respectively.

skyboxDawn.jpg skyboxNoon.jpg skyboxLateEve.jpg

Transforming Normals

Posted on 2011-07-05 in Stingray Incursion
I've been thinking about my problems lighting my scene using directional light (the mentioned sunlight system) for a while now.

The screens of the level editor I've posted have hacked code to get it working correctly. I basically passed the original normals straight on to the pixel shader code instead of transforming them.

This means that all the tiles get the same amount of sunlight. It's obviously a problem when you start to rotate stuff, since the shadow areas will change depending on the orientation. Thus I need to transform the normals. However, doing so resulted in very strange results. Tiles to the left were lit differently from tiles to right. It's actually an age-old problem, and perfectly obvious when you visualise the process of the vertex and pixel shaders.

The problem comes in with using the same matrix as the one for the vertex transformations (which, incidentally, all tutorials do). But this article explains quite clearly that using the main transform matrix as-is is not correct. Also see my news post about the MATHS :)

As the article explains, if you use the correct distinction between points and vectors, the same transform matrix could apply if you don't do any scaling. Which means you modelled everything correctly at the start. But I did a test, and even though you can declare the input to the vertex shader as float4 for the normal, XNA does NOT pass in a normal with the w component. In the .X files, the normals only have three components, and XNA's vertex structures are based around the Vector3 struct. This means the w component does not exist within the C# space. So it's up to you to set w to zero in the shader code for normals (it seems it's set to 1 by default based on my test results).

The conclusion, then, is that XNA does not employ "homogeneous vectors and matrices" at all. So you have to make it do so, otherwise you won't be able to move your models around your world. Well, you will, but it will all be dark.
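The point/vector distinction above is easy to demonstrate numerically: with homogeneous coordinates, a 4x4 transform carrying a translation moves points (w = 1) but must leave direction vectors such as normals (w = 0) alone. A small self-contained sketch:

```python
def transform(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component column vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# Translation by (5, 0, 0); no rotation or scaling involved.
translate = [
    [1, 0, 0, 5],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

point = (0, 0, 0, 1)    # w = 1: positions are affected by translation
normal = (0, 1, 0, 0)   # w = 0: directions must not be

assert transform(translate, point) == (5, 0, 0, 1)
assert transform(translate, normal) == (0, 1, 0, 0)

# If w silently defaults to 1 for the normal, the same multiply gives
# (5, 1, 0, 1) -- a garbage 'normal', hence the uniformly dark models.
assert transform(translate, (0, 1, 0, 1)) == (5, 1, 0, 1)
```

(As the linked article notes, once non-uniform scaling enters the picture even w = 0 isn't enough, and the inverse-transpose of the matrix is needed instead.)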


Posted on 2011-06-21 in Stingray Incursion
It's been slow progress for us, but, at least I've managed to complete the unwrapping of the recon vehicle. It's got a basic color map for now, with little or no details at the moment.

Also, I intended to make the map only 512x512, but that proved to be a bit too small for getting the colors in the right places. Anyway, here's another render of it with a small thumbnail of the map. With these units there is a difficult decision as to what level of detail is required, and I suppose it will take a few iterations to get it right. For the most part I envisage that the player won't be too near them at any time. But, if there are any animated cut scenes later on starring these models, they should at least look the part. So, this model clocks in at just under 7000 triangles. Whether that's too much or too little, I don't know. From these pre-renders, the quality looks ok-ish to me. And I've still got to do all the normal maps as well.


Ramon also sent me a render of the chopper. This is already a week out of date, and he has already started unwrapping it, I believe. As you can see, he put a lot of effort into the rotor mechanism. It's something that will be on the screen almost all the time, and I think it's stellar. There are also a lot of animation loops he's still got to do: the wheels, bay doors, and the various guns with their different animations. Hopefully I'll be able to figure out how to get it all into the game itself.


There's so much more content to make still, but slow and steady wins the race :)

Weekend overtime

Posted on 2011-06-12 in Stingray Incursion
I've had a good weekend insofar as Stingray is concerned. I also suppose that it's time I add it to the features on this website. Anyway, the alpha blending problem was sort-of solved after I posted on Stack Overflow.

I say sort-of, because if you read the articles that were linked by Andrew, it's quite obvious that I didn't understand some pretty basic concepts very well. For instance, the depth buffer. That forced me to rethink my design a bit, and how I process the content for drawing. It also forced me to do some math, and I realised that for items like the palm tree, it might just be possible to model it completely instead of using very few faces with alpha blending. And as a result it actually looks better.

Apart from the above, I've managed to get somewhere with one of the enemy units as well. A simple recon vehicle. Here is a render of it. There's still a bit to do before I can use it as a game model, one of which is the texture coordinate unwrapping, which will probably prove to be a pain.


You might have noticed that there is a rather complete lack of concept art so far. The fact of the matter is, I just don't have the time to sit and churn out pencil sketches or otherwise to design the game elements. However, as things start ramping up, I'm pretty sure I will have to take a step back and spend some time on this.

By far the most time-intensive part of game development, in my experience, is content generation.

Once you've defined the scope, the types, and the number of units you will be putting into the game, there's really no getting around the fact that making all of the content, be it 3D models or sprites for the UI, takes a very long time.

Much more so than the actual coding of the game. Needless to say, as an indie I don't have a team of artists at my disposal. So between myself and Ramon, we're filling up our free time (an increasingly scarce commodity!!) quite comprehensively.


Currently I'm concentrating on level objects, like the palm tree above. There are some problems with the alpha map, as you can see. Depending on the lighting it's either white, or it's the fill color. I'm not yet sure what the reason is, but I'll muck around with the shader and the device state to try and solve it. Ramon is currently concentrating on the actual player helicopter model. We've discussed various "modes" for the chopper, and there will be an upgrade system. Mostly, though, we want a visual aspect to each of the upgrades and special abilities. This might complicate matters dramatically with the animation loops etc., but we'll get to that when we get to that :)

So, most of my initial teething problems have now been overcome. Partly to do with differences between XNA 3.1 and XNA 4.0, and partly other stuff, like shader code and model export problems.

The upshot of it all is that I'm finally in a position to actually start building my game. To date it's mostly been infrastructure, and there is still a bit left, of course. But as you can see, the level editor is now gaining momentum.


Those are the 8 basic tiles I've modeled so far. There are plenty more to come, and they've been manually placed in XML because the editor is still lacking vital functions. Something else I've managed to accomplish is a solar model, or sunlight & moonlight model if you want. It manages itself through calls from the Update() method. The main game code does not have to do anything special to change the direction or the light color; it all happens within the instance itself. You simply instantiate it at a certain time of day and it starts ticking over. The effect as it goes from dawn to dusk, through the color and direction changes, is quite amazing to see. Of course, in the finished game, a complete cycle of 24 game-time hours will probably take around 3 hours real time. It should be subtle, otherwise it will distract the player. However, if someone does play for 1.5 hours straight, the changes will be noticed. That's the sort of scenario I'm looking for.
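A self-updating solar model like the one described can be sketched as follows. The class name, the 3-real-hours-per-game-day ratio (taken from the estimate above), and the simple sine elevation curve are all illustrative assumptions, not the game's actual code.

```python
import math

GAME_DAY_REAL_SECONDS = 3 * 60 * 60      # a 24-hour game day in ~3 real hours

class Sunlight:
    """Hypothetical sketch: elapsed real time drives a game clock,
    which in turn yields the sun's elevation."""

    def __init__(self, game_hour):
        self.game_hour = game_hour       # 0.0 .. 24.0

    def update(self, elapsed_real_seconds):
        """Advance the game clock; called every frame from Update()."""
        self.game_hour += elapsed_real_seconds * 24.0 / GAME_DAY_REAL_SECONDS
        self.game_hour %= 24.0

    def elevation(self):
        """Sun elevation in radians: 0 at 06:00 and 18:00, peak at noon."""
        return math.sin((self.game_hour - 6.0) / 12.0 * math.pi) * (math.pi / 2)

sun = Sunlight(game_hour=6.0)
assert abs(sun.elevation()) < 1e-9       # sunrise: sun on the horizon
sun.update(GAME_DAY_REAL_SECONDS / 4)    # a quarter of a game day later...
assert abs(sun.game_hour - 12.0) < 1e-9  # ...it is noon
assert abs(sun.elevation() - math.pi / 2) < 1e-9
```

The light colour can be derived the same way, blending warm tones near the horizon toward white at the zenith, which matches the "instantiate it and it ticks over" design: the game loop only forwards elapsed time.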

And as you can see, I managed to solve the problem with the incorrect rendering of hidden faces. Some settings in the shaders solved it. Not sure what device state in XNA 4.0 is appropriate to fix it yet though.

Transition to XNA 4.0

Posted on 2011-05-24 in Stingray Incursion
I never tried XNA 4.0 while it was in CTP, so the list of changes from 3.1 to 4.0 was new for me, but it's going well. However, there's one problem which I have not been able to solve yet. And no matter what I try, it simply won't go away.

The above is a screen dump of my actual back buffer from the level editor. And, as highlighted, there seems to be some problem with the clipping or something. Areas which are supposed to be hidden behind other areas are drawn and seen. Sort of like a Z order problem when working with sprites.

I've searched high and low for solutions to this, but Google has not been forthcoming. And the App Hub registration is so broken that I cannot sign up (South Africa is not supported). My options now are to either:

  • Continue development while ignoring this 'artifact'.
  • Revert back to XNA 3.1 and go with DirectX 9.0c and shader profile 2.0.

Of course, I'll still continue to look for a solution. The members over at Stack Overflow might hit on something...