Robin Minto

Software development, security and miscellany

Alt.NET and Me

Microsoft, community and .NET Renaissance

I wasn’t there in the beginning. In 2003, whilst I was taking some time out to go snowboarding, a small group of pioneers were inciting the .NET revolution. Dissatisfied with the oppression of Microsoft, they wrought the iron of .NET into the sword of Alt in the fires of pain.

Anyway, I was in the Alps and the snow was great! So much so that I went back in 2004.

But the revolution was brewing.


Every day’s a school day

“You have to run just to keep up”
— My old boss, around 1997.

The pace of change in software development is relentless, and “every day’s a school day” has epitomised my career: it can often feel like you’re running just to keep up. I could have learnt different things in different ways, but I’ve never stopped learning.

There have certainly been points at which my learning accelerated. One was when new members joined the team, bringing fresh experience and insight. Another was joining the developer community.

Discovering community

Although I was learning, I was almost a Dark Matter Developer. I was getting stuff done but I was blissfully unaware of the vibrant community of developers that surrounded me. One of the team introduced me to the blog of Brighton (UK)-based developer Mike Hadlow. It was reading the comments on this blog post that I noticed a phrase that piqued my interest — “ALT.NET”.


What was this strange “ALT.NET UK” that they talked about? I Googled this magical thing and found that the Alt.NET UK conference was imminent — I registered immediately and I was in.

I met an amazing group of people at the Alt.NET UK unconference (including Mike Hadlow!). That led me to the London .NET user group, coding dojos, Progressive.NET, OWASP, DevOpsDays and the list goes on. I have many friends from that first Alt.NET event.

What’s in a name?

“There are only two hard things in Computer Science: cache invalidation and naming things, and off-by-one errors.”
— Phil Karlton et al

In Computer Science, naming things is hard. The same applies here. As Scott Hanselman said around the time of the first US conference, the Alt.NET name is polarising. Some took it to mean anti-Microsoft. Like others, I took “alternative” to mean something that embraces choice.

Why shouldn’t we build software using any tools, irrespective of where they come from? The key is that we build great software. The same is true of languages and frameworks. I love C# and .NET but if Python is more appropriate for a particular use case, use that. Why can’t we look to the communities around other languages for inspiration too? We can, and that’s Alt.NET.

There was a feeling that Alt.NET was elitist and the Alt meant an alternative group of people. Perhaps. However, I wasn’t part of the elite and Alt.NET won me over. And the Alt.NET name still has a lot of power. It can be a rallying cry.

.NET Renaissance

I want a wider set of developers to discover a rich .NET ecosystem. Microsoft is treading a difficult path, one seemingly guided by dragging large enterprise customers on .NET Framework into the future that is .NET Core, at the expense of new developers, who might struggle to adopt .NET Core. Will backwards compatibility, and the complexity it brings, mean that they choose PHP, Node.js, Go or something else instead?

Microsoft is a very different organisation to the one it was in the last decade. There are great people there doing great things, but it is a large organisation and small missteps can do a lot of damage. I want to see Microsoft continue to foster the ecosystem without stifling it. Much progress has been made here. Visual Studio wizards allow you to choose a unit testing framework other than MSTest; Newtonsoft Json.NET and jQuery were embraced, not extinguished. These are limited examples but they show the right intent. Even then, when Microsoft “bless” something, does that mean others can’t compete?

.NET Core has amazing potential. It’s enabled a cross-platform world that was unthinkable at one time. It is even present in AWS Lambda ready for the next evolution of cloud computing. We have an opportunity to grasp all of that potential as a community and run with it.

Ian Cooper has detailed some of the reasons we need to take up the baton. Mark Rendle and Dylan Beattie have already written much more about the possibilities. As Gáspár Nagy says, it’s “high time to think positively”. I’m certainly not the only one looking forward to being a part of the .NET Renaissance!

HTTP Public Key Pinning (HPKP)

There’s a new HTTP header on the block - HTTP Public Key Pinning (HPKP). It allows the server to publish a security policy in the same vein as HTTP Strict Transport Security and Content Security Policy.
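
For illustration, a pinning policy might look like this (the pin values, max-age and report URI below are placeholders following the RFC 7469 format, not a real policy):

```http
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
    pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
    max-age=5184000; includeSubDomains;
    report-uri="https://example.com/hpkp-report"
```

At least two pins are required: one for a certificate in the current chain and one backup.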

The RFC was published last month, so browser support is limited: Chrome 38 and Firefox 35 and newer. However, there are helpful articles from Scott Helme, Tim Taubert and Robert Love on the topic, and OWASP has some general information on certificate and key pinning. Scott has even built support for HPKP reporting into his helpful reporting service.

Although Chrome and Firefox will honour your public key pins, testing the header is slightly tricky as they haven't implemented reporting yet (as of Chrome 42 and Firefox 38). I spent some time trying to coax both into reporting, working under the assumption that they must have implemented the whole spec, right? It seems not.

In writing this, I also wanted to note the command I used to calculate the certificate digest that's used in the header. In contrast to other examples, this connects to a remote host to get the certificate (including allowing for SNI), outputs to a file and exits openssl when complete (example.com below stands in for the host being pinned):

echo |
openssl s_client -connect example.com:443 -servername example.com |
openssl x509 -pubkey -noout | openssl pkey -pubin -outform der |
openssl dgst -sha256 -binary | base64 > certdigest.txt

I won't be using HPKP in my day job until reporting support is available and I can validate that the configuration won't break clients. There's great potential here though once the support is available.

Server trials and “trebuildulations”


[tree-bild-yuh-ley-shuh n]
plural noun: trebuildulations
: grievous trouble or severe trial whilst rebuilding or repairing
: made up word
: nod to Star Trek
"the trebuildulations of a server"

What started with a warning about disk space turned into a complete server rebuild, which occupied all of my free time this week. I’m writing down some of the issues for the benefit of my future self and others.

Old Windows

When I see a disk space warning, I head into the drive and look for things to delete. In this case, I immediately noticed Windows.old taking up 15GB – this remnant of the upgrade to Server 2012 R2 last year was ripe for removal.

If this were Windows 7, I would run Disk Cleanup, but on Server that requires the Desktop Experience feature to be installed. It isn’t installed by default and can’t be removed once it is. So, I set about trying to remove the folder at the command line.

At this point, it seems I failed to remove the junctions from Windows.old but succeeded in taking ownership, resetting permissions and removing the folder. The folder was gone but all was not well. On restart, the Hyper-V management service wouldn’t start.

I forget the error but I eventually determined that the permissions had been reset on C:\ProgramData\Application Data. Unrestricted by folder permissions, Windows links this folder to itself resulting in C:\ProgramData\Application Data\Application Data\Application Data\Application Data\Application Data... This causes lots of MAX_PATH issues at the very least.

Despite correcting the permissions, I wasn’t quite able to fix everything and Hyper-V continued to fail. A rebuild was in order.

We couldn't create a new partition

Microsoft have refined the Windows Server installation process over the years so that it is now normally relatively painless, even without an unattended install. Boot from a USB key containing the installation media, select some localisation options, enter a licence key and off it goes. Not this time.

My RAID volume was visible to the installer and I could delete, create and format partitions but when I came to the next step in the process:

We couldn't create a new partition or locate an existing one. For more information, see the Setup log files.

This foxed me for a while. I tried DISKPART from the recovery prompt, I tried resetting disks to non-RAID and I tried disabling S.M.A.R.T. in the BIOS. Nothing worked. I did notice mentions of removing USB devices and disconnecting other hard drives, suggesting there’s some hidden limit on the number of drives or partitions that the installer can handle. I could have removed each of the three data drives one by one to test that theory, but I decided to jump in and remove all three.

Success! A short while later I had a working Windows Server 2012 R2.

Detached Disks in the Storage Pool

I reconnected the three data drives; now it was time to see if Storage Spaces would come back online as promised.

The steps were straightforward:

  • Set Read-Write Access for the Storage Space
  • Attach Virtual Disk for each of the virtual disks
  • Online each of the disks

The disks were back online and the data was available. Great!

This was a short-lived success story – the disks were offline after reboot and had to be re-attached. Thankfully, I was not alone and a PowerShell one-liner fixed the issue.

Get-VirtualDisk | Set-VirtualDisk -IsManualAttach $False

Undesirable Desired State Configuration Hang

I thought I’d take the opportunity to switch my imperative PowerShell setup scripts for Desired State Configuration-based goodness. This was a pretty smooth process but there was one gotcha.

DSC takes care of silent MSI installations, but EXE packages require the appropriate “/quiet” arguments. Missing arguments in my configuration script meant that DSC sat waiting for someone to click an invisible dialog box in the EXE’s installer.
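
As a sketch (resource name and path are hypothetical), the offending resource looked something like this; without the Arguments line, the installer waits on an invisible dialog:

```powershell
Package ExampleTool
{
    Ensure    = 'Present'
    Name      = 'Example Tool'                   # name as registered in Programs and Features
    Path      = 'C:\Installers\ExampleTool.exe'  # hypothetical installer path
    ProductId = ''                               # empty for EXE (non-MSI) packages
    Arguments = '/quiet /norestart'              # the silent switches I had omitted
}
```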

Having fixed my script, I killed the PowerShell process and re-ran the configuration, only to be presented with this:

Cannot invoke the SendConfigurationApply method. The PerformRequiredConfigurationChecks method is in progress and must 
return before SendConfigurationApply can be invoked.

The issue even survived a reboot.

Again, I was not alone and some more PowerShell later, my desired state is configured.
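
For my future self, the remedy is sketched below (the path is the standard DSC location; on WMF 5 and later, Remove-DscConfigurationDocument -Stage Pending does the same job):

```powershell
# Delete the pending configuration document that PerformRequiredConfigurationChecks holds on to
Remove-Item "$env:windir\System32\Configuration\pending.mof" -Force

# Kill the WMI provider host so the Local Configuration Manager starts fresh
Get-Process -Name WmiPrvSE -ErrorAction SilentlyContinue | Stop-Process -Force
```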

I’m pleased to report the server is back to a happy state and all is well in its world.

IIS configuration file contention

I’ve been automating the configuration of IIS on Windows servers using PowerShell. I’ll tell anyone who’ll listen that PowerShell is awesome and combined with source control, it’s my way of avoiding snowflake servers.

To hone the process, I’ll repeatedly build a server with all of its configuration from a clean image (strictly speaking, a Hyper-V checkpoint), and I occasionally get an error in a configuration script that has previously worked:

new-itemproperty : Filename: \\?\C:\Windows\system32\inetsrv\config\applicationHost.config 
Error: Cannot write configuration file

Why is this happening? The problem is intermittent so it’s difficult to say for sure, but it does seem to occur more often when the IIS Management Console is open. My theory is that, if the timing is right, the Management Console holds a lock on the config file when PowerShell attempts to write to it and this error occurs.


I'm now blaming AppFabric components for this issue. I noticed the workflow service was accessing the IIS config file, and also found this article on the IIS forums: AppFabric Services preventing programmatic management of IIS. The workflow services weren't intentionally installed (we only use the AppFabric Cache client NuGet package), so I've removed the AppFabric install completely and haven't seen a recurrence of the problem.

Disabling contactless payment cards, or preventing “card clash” with Oyster

I’ve been carrying an RFID-based Oyster card for years. It’s a fantastic way to use the transport network in London especially if you pay-as-you-go rather than using a seasonal travelcard. You just “touch in and out” and fares are deducted.

It’s no longer necessary to work out in advance how many journeys you’re going to make to decide whether you need a daily travelcard: the amount is automatically capped at the price of a daily travelcard, if you get to that point. “Auto top-up” means money is transferred from your bank to your card when you drop below a certain threshold, so you never have to visit a ticket office (especially given they’ve closed them all) or a machine. It’s amazing to think they’ve been around since 2003.

More recently it’s become possible to pay for travel with a contactless payment card, a.k.a. Visa payWave or Mastercard PayPass. However, that leads to problems if, like most people, you carry more than one card in your wallet or purse: the Oyster system may not be able to read your Oyster card or, worse, if your contactless payment card is compatible, it takes payment from the first card it can communicate with. If you have an Oyster seasonal travelcard, you don’t want to pay for your journey again with a credit card.

TFL have recently started promoting this phenomenon under the name “card clash”.

There’s even a pretty animation:


I first noticed card clash in 2010 when I was issued a new credit card with contactless payment. It went straight in my wallet with my Oyster and other payment cards and the first time I tried to touch in by presenting my wallet (as I’ve always done with Oyster) – beeeep beeeep beeeep (computer says no).

It didn’t take me long to work out that the Oyster reader couldn’t distinguish between two RFID-enabled cards. I soon had a solution in the form of Barclaycard OnePulse – a combined Oyster and Visa card. Perfect!


I could now touch in and out without having to get my card out of my wallet again. But not for long…

I don’t know about you, but I like to carry more than one credit card so that if one is declined or faulty, I can still buy stuff. I had a card from Egg Banking for that purpose. To cut a long story slightly shorter, Egg were bought by Barclaycard, who then replaced my card with a new contactless-enabled card. Oh dear, card clash again.

I wrote to Barclaycard to ask if they could issue a card without contactless functionality. Here’s the response I got:

All core Barclaycards now come with contactless functionality; this is part of ensuring our cardholders are ready for the cashless payment revolution. A small number of our credit cards do not have contactless yet but it is our intention these will have the functionality in due course. That means we are unable to issue you a non-contactless card.

Please be assured that contactless transactions are safe and acceptance is growing. It uses the same functionality as Chip and PIN making it safe, giving you the same level of security and as it doesn't leave your hand there is even less chance of fraud.

I trust the information is of assistance.

Should you have any further queries, do not hesitate to contact me again.

I was very frustrated by this – I was already ready for the “cashless payment revolution” with my OnePulse card! Now, I was back in the situation where I would have to take the right card out of my wallet – not the end of the world but certainly not the promised convenience either. Barclaycard’s answer is to carry two wallets – helpful!

I’m certainly not the only one who has issues with card clash, nor the only one to notice TFL’s campaign to warn travellers of the impending doom as contactless payment cards become more common.


So what’s the solution? If your card company isn’t flexible about what type of cards they issue and you can’t change to one that is (I hear HSBC UK may still be issuing cards without contactless payment on request), another option is to disable the contactless payment functionality. Some people have taken to destroying the chip in the card by putting a nail through it, cutting it with a knife or zapping it. However, that also destroys the chip and PIN functionality which seems over the top. You can probably use the magnetic strip to pay with your card but presenting a card with holes in it to a shop assistant is likely to raise concerns.

RFID cards have an aerial which allows communication with the reader and supplies power to the chip via electromagnetic induction (the zapping technique above overloads the chip with a burst of energy by induction). This means we can disable RFID without disabling chip and PIN by breaking the aerial circuit, meaning no communication or induction.

The aerial layout varies. I dissected an old credit card to get an idea where the aerial was in my card (soaking in acetone causes the layers of the card to separate). Here you can see that the aerial runs across the top and centre of the card.


There are some helpful x-rays on the interwebs now showing some alternate layouts.


The aerial can be broken with a hole punch or soldering iron but I’ve found that making a small cut across the aerial is sufficient. It’s not necessary to cut all the way through the card – all that’s needed is to cut through to the layer containing the aerial. Start with a shallow cut and make it deeper if required. From the images above, a cut to the top of the card is likely to disable most cards.


The central cut on this card wasn’t effective. The cut at the top was.

If you have an NFC-enabled smartphone, it’s possible to test the card to check that contactless is disabled. I’m using NFC TagInfo on a Samsung Galaxy Nexus. Here’s the card above before:


and after the aerial is cut (the chip no longer registers):


Job done! The card continues to work with chip and PIN but doesn’t interfere with Oyster.

Barclaycard have thrown another spanner in the works for me – OnePulse is being withdrawn. The solution seems to be to switch to contactless payments instead of Oyster. Here’s hoping that TFL can finish implementing contactless payments (promised “later in 2014”) before my OnePulse is withdrawn.

Disclaimer: if you’re using sharp knives, have a responsible adult present. If you damage yourself or your cards, don’t blame me!

New host, new look – Azure and responsive design. Part 2.

Part 2. A new responsive design.

Mobile is the new black. Mobile phones are increasingly used to access the web and websites need to take that into account.

I’m using BlogEngine.NET, which can deliver pages using a different template to browsers with small screens, but that’s not ideal for larger phones and tablets. There is a better way: responsive web design. With a responsive design, CSS media queries and flexible images provide a layout that adjusts to the screen (or browser) size.
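
The core mechanism can be sketched in a few lines of CSS (the class name and breakpoint here are illustrative, not from my stylesheet):

```css
/* Two columns on wide screens... */
.content { width: 50%; float: left; }

/* ...collapsing to a single column below the breakpoint */
@media (max-width: 767px) {
    .content { width: 100%; float: none; }
}
```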

The BlogEngine.NET team have recognised that responsive design is the way forward so the recently released version uses the Bootstrap framework for the default template and administration UI. They’ve done a great job.


As part of moving my blog to Windows Azure as described in part 1, I felt a new look was in order but I didn’t want to use the new BlogEngine.NET template so set about designing my own.

Bootstrap isn’t the only framework that can help you build a responsive design. There’s Foundation, HTML5 Boilerplate, HTML KickStart and Frameless to name a few. I wanted a framework without too much in-built styling. Although Bootstrap can be customised, there are a lot of choices if you take that route.


I chose Skeleton as the framework for my design. It’s a straightforward grid system with minimal styling. And I’ve retained the minimal styling, only introducing a few design elements. My design is inspired by the Microsoft design language (formerly Metro) – that basically comes down to flat, coloured boxes. Can you tell I’m not a designer?

An added complexity is the introduction of high pixel density displays such as Apple’s Retina. Images that look fine on a standard display look poor on Retina and the like. Creating images with twice as many pixels is one solution, but I chose the alternative route of Scalable Vector Graphics. One icon library provides an amazing range of icons (1,207 at the time of writing), intended for Windows Phone but perfect for my design. They appear top-right and at the bottom of the blog.

Another website that came in very handy was an icon generator for Apple touch and favicon icons.

So, I’m now running on Windows Azure with a new responsive design. Perhaps I’ll get round to writing some blog articles that justify the effort?

New host, new look – Azure and responsive design. Part 1.

Part 1. Getting up and running on Windows Azure.

Microsoft’s cloud computing platform has a number of interesting features, but the ability to create, deploy and scale web sites is particularly appealing to .NET developers. Many ISPs offer .NET hosting but few make it as easy as Azure.

Web server setup and maintenance is taken care of and managed by Microsoft, which I already get with my current host, Arvixe. However, on Azure apps can be deployed directly from GitHub and Bitbucket (and more), and it’s possible to scale to 40 CPU cores with 70 GB of memory in a few clicks. If you don’t need that level of performance, you might even be able to run on the free tier.

Here’s how I got up and running (I’ve already gone through the process of setting up an Azure account).

Create a new website from the gallery:


In the gallery, select BlogEngine.NET. Choose a URL and a region (I chose North or West Europe as those are closest to my location).

Wait. 90 seconds later, a default BlogEngine.NET website is up and running.

I’ve created a source control repository in Mercurial. This contains:

  • extensions (in App_Code, Scripts and Styles)
  • settings, categories, posts, pages (in App_Data)
  • the new design for the blog (in Themes) – see part 2

Next, I need to configure my web site to deploy from Bitbucket where I’ve pushed that repo. “Set up deployment from source control” is an option in the dashboard:


Select Bitbucket and you’ll be asked to log in and allow Azure to connect to your repos:


Azure will then deploy that repo to the website from the latest revision.

All that’s left is to log into the blog and make a couple of changes. Firstly, I delete the default blog article created by BlogEngine.NET. Secondly, I’m displaying tweets on my blog using the “Recent Tweets” widget, so I install that.

I’m using a custom domain rather than the default subdomain. That means I have to scale up from the free tier to shared:


Windows Azure must verify that I am authorised to use my custom domain. I do this by making a DNS change (my DNS records are hosted with DNSimple), adding CNAME records with the awverify prefix that point at the corresponding awverify verification addresses for the site. If I hadn’t been using the domain at the time, I could have added the CNAME records without the awverify prefix. Verification step complete, I can then add the domain names to Azure.
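
In zone-file terms, the verification records look something like this sketch (example.com stands in for my domain and contoso for the Azure site name):

```text
; added to the example.com zone for Azure domain verification
awverify      IN  CNAME  awverify.contoso.azurewebsites.net.
awverify.www  IN  CNAME  awverify.contoso.azurewebsites.net.
```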


Finally, having checked everything is working properly I change the DNS entries (as above without the awverify prefix) to point to my new web site and it’s job done!

I took the opportunity to update the design that I use. I’ll deal with that in part 2.

Analysing Forefront TMG logs

I’m lamenting the demise of Forefront TMG but I still think it’s a great product and continue to use it. Of course, I’m currently planning what I’m going to replace it with but I expect it’ll have to be a combination of products to get the features that I need.

Anyway, I’m currently doing some log analysis and wanted to capture some useful information about the process. TMG is using SQL Server Express for logging and I want to do some analysis in SQL Server proper.

First, export the logs from TMG. I won’t repeat the steps from that post but in summary:

  1. Run the Import and Export Data application (All Programs / Microsoft SQL Server 2008 / Import and Export Data). Run this elevated/as administrator.
  2. Choose the database (not today’s databases or you’ll cause contention).
  3. Select Flat File Destination and include Column names in the first data row.
  4. Select Tab {t} as the Column delimiter.
  5. Export.

I took the files from this export and wanted to import them into SQL Server. Rather than import everything into varchar fields, I wanted to reproduce the schema that TMG uses. I couldn’t find the schema on the interwebs so I grabbed the databases from an offline TMG and scripted the tables.

Here’s the script to create the FirewallLog table:

CREATE TABLE [dbo].[FirewallLog](
    [servername] [nvarchar](128) NULL,
    [logTime] [datetime] NULL,
    [protocol] [varchar](32) NULL,
    [SourceIP] [uniqueidentifier] NULL,
    [SourcePort] [int] NULL,
    [DestinationIP] [uniqueidentifier] NULL,
    [DestinationPort] [int] NULL,
    [OriginalClientIP] [uniqueidentifier] NULL,
    [SourceNetwork] [nvarchar](128) NULL,
    [DestinationNetwork] [nvarchar](128) NULL,
    [Action] [smallint] NULL,
    [resultcode] [int] NULL,
    [rule] [nvarchar](128) NULL,
    [ApplicationProtocol] [nvarchar](128) NULL,
    [Bidirectional] [smallint] NULL,
    [bytessent] [bigint] NULL,
    [bytessentDelta] [bigint] NULL,
    [bytesrecvd] [bigint] NULL,
    [bytesrecvdDelta] [bigint] NULL,
    [connectiontime] [int] NULL,
    [connectiontimeDelta] [int] NULL,
    [DestinationName] [varchar](255) NULL,
    [ClientUserName] [varchar](514) NULL,
    [ClientAgent] [varchar](255) NULL,
    [sessionid] [int] NULL,
    [connectionid] [int] NULL,
    [Interface] [varchar](25) NULL,
    [IPHeader] [varchar](255) NULL,
    [Payload] [varchar](255) NULL,
    [GmtLogTime] [datetime] NULL,
    [ipsScanResult] [smallint] NULL,
    [ipsSignature] [nvarchar](128) NULL,
    [NATAddress] [uniqueidentifier] NULL,
    [FwcClientFqdn] [varchar](255) NULL,
    [FwcAppPath] [varchar](260) NULL,
    [FwcAppSHA1Hash] [varchar](41) NULL,
    [FwcAppTrusState] [smallint] NULL,
    [FwcAppInternalName] [varchar](64) NULL,
    [FwcAppProductName] [varchar](64) NULL,
    [FwcAppProductVersion] [varchar](20) NULL,
    [FwcAppFileVersion] [varchar](20) NULL,
    [FwcAppOrgFileName] [varchar](64) NULL,
    [InternalServiceInfo] [int] NULL,
    [ipsApplicationProtocol] [nvarchar](128) NULL,
    [FwcVersion] [varchar](32) NULL
)

and here's the script to create the WebProxyLog table:

CREATE TABLE [dbo].[WebProxyLog](
    [ClientIP] [uniqueidentifier] NULL,
    [ClientUserName] [nvarchar](514) NULL,
    [ClientAgent] [varchar](128) NULL,
    [ClientAuthenticate] [smallint] NULL,
    [logTime] [datetime] NULL,
    [service] [smallint] NULL,
    [servername] [nvarchar](32) NULL,
    [referredserver] [varchar](255) NULL,
    [DestHost] [varchar](255) NULL,
    [DestHostIP] [uniqueidentifier] NULL,
    [DestHostPort] [int] NULL,
    [processingtime] [int] NULL,
    [bytesrecvd] [bigint] NULL,
    [bytessent] [bigint] NULL,
    [protocol] [varchar](13) NULL,
    [transport] [varchar](8) NULL,
    [operation] [varchar](24) NULL,
    [uri] [varchar](2048) NULL,
    [mimetype] [varchar](32) NULL,
    [objectsource] [smallint] NULL,
    [resultcode] [int] NULL,
    [CacheInfo] [int] NULL,
    [rule] [nvarchar](128) NULL,
    [FilterInfo] [nvarchar](256) NULL,
    [SrcNetwork] [nvarchar](128) NULL,
    [DstNetwork] [nvarchar](128) NULL,
    [ErrorInfo] [int] NULL,
    [Action] [varchar](32) NULL,
    [GmtLogTime] [datetime] NULL,
    [AuthenticationServer] [varchar](255) NULL,
    [ipsScanResult] [smallint] NULL,
    [ipsSignature] [nvarchar](128) NULL,
    [ThreatName] [varchar](255) NULL,
    [MalwareInspectionAction] [smallint] NULL,
    [MalwareInspectionResult] [smallint] NULL,
    [UrlCategory] [int] NULL,
    [MalwareInspectionContentDeliveryMethod] [smallint] NULL,
    [UagArrayId] [varchar](20) NULL,
    [UagVersion] [int] NULL,
    [UagModuleId] [varchar](20) NULL,
    [UagId] [int] NULL,
    [UagSeverity] [varchar](20) NULL,
    [UagType] [varchar](20) NULL,
    [UagEventName] [varchar](60) NULL,
    [UagSessionId] [varchar](50) NULL,
    [UagTrunkName] [varchar](128) NULL,
    [UagServiceName] [varchar](20) NULL,
    [UagErrorCode] [int] NULL,
    [MalwareInspectionDuration] [int] NULL,
    [MalwareInspectionThreatLevel] [smallint] NULL,
    [InternalServiceInfo] [int] NULL,
    [ipsApplicationProtocol] [nvarchar](128) NULL,
    [NATAddress] [uniqueidentifier] NULL,
    [UrlCategorizationReason] [smallint] NULL,
    [SessionType] [smallint] NULL,
    [UrlDestHost] [varchar](255) NULL,
    [SrcPort] [int] NULL,
    [SoftBlockAction] [nvarchar](128) NULL
)

TMG stores IPv4 and IPv6 addresses in the same field as a UNIQUEIDENTIFIER. This means we have to parse them if we want to display a dotted quad, or find a function that will do it for us.

Here’s my version:

CREATE FUNCTION [dbo].[ufn_GetIPv4Address] (@uidIP UNIQUEIDENTIFIER)
RETURNS NVARCHAR(20)
AS
BEGIN
        DECLARE @binIP VARBINARY(4)
        DECLARE @h1 VARBINARY(1)
        DECLARE @h2 VARBINARY(1)
        DECLARE @h3 VARBINARY(1)
        DECLARE @h4 VARBINARY(1)
        DECLARE @strIP NVARCHAR(20)

        SELECT  @binIP = CONVERT(VARBINARY(4), @uidIP)
        SELECT  @h1 = SUBSTRING(@binIP, 1, 1)
        SELECT  @h2 = SUBSTRING(@binIP, 2, 1)
        SELECT  @h3 = SUBSTRING(@binIP, 3, 1)
        SELECT  @h4 = SUBSTRING(@binIP, 4, 1)
        SELECT  @strIP = CONVERT(NVARCHAR(3), CONVERT(INT, @h4)) + '.'
                + CONVERT(NVARCHAR(3), CONVERT(INT, @h3)) + '.'
                + CONVERT(NVARCHAR(3), CONVERT(INT, @h2)) + '.'
                + CONVERT(NVARCHAR(3), CONVERT(INT, @h1))

        RETURN @strIP
END

Action values are stored as an integer so I’ve created a table to decode them:

CREATE TABLE [dbo].[Action](
    [Id] [int] NOT NULL,
    [Value] [varchar](50) NOT NULL,
    [String] [varchar](50) NOT NULL,
    [Description] [varchar](255) NOT NULL
)

INSERT INTO [dbo].[Action]
        ( Id, Value, String, Description )
VALUES
(0, 'NotLogged', '-', 'No action was logged.'),
(1, 'Bind', 'Bind', 'The Firewall service associated a local address with a socket.'),
(2, 'Listen', 'Listen', 'The Firewall service placed a socket in a state in which it listens for an incoming connection.'),
(3, 'GHBN', 'Gethostbyname', 'Get host by name request. The Firewall service retrieved host information corresponding to a host name.'),
(4, 'GHBA', 'gethostbyaddr', 'Get host by address request. The Firewall service retrieved host information corresponding to a network address.'),
(5, 'Redirect_Bind', 'Redirect Bind', 'The Firewall service enabled a connection using a local address associated with a socket.'),
(6, 'Establish', 'Initiated Connection', 'The Firewall service established a session.'),
(7, 'Terminate', 'Closed Connection', 'The Firewall service terminated a session.'),
(8, 'Denied', 'Denied Connection', 'The action requested was denied.'),
(9, 'Allowed', 'Allowed Connection', 'The action requested was allowed.'),
(10, 'Failed', 'Failed Connection Attempt', 'The action requested failed.'),
(11, 'Intermediate', 'Connection Status', 'The action was intermediate.'),
(12, 'Successful_Connection', '- Initiated VPN Connection', 'The Firewall service was successful in establishing a connection to a socket.'),
(13, 'Unsuccessful_Connection', 'Failed VPN Connection Attempt', 'The Firewall service was unsuccessful in establishing a connection to a socket.'),
(14, 'Disconnection', 'Closed VPN Connection', 'The Firewall service closed a connection on a socket.'),
(15, 'User_Cleared_Quarantine', 'User Cleared Quarantine', 'The Firewall service cleared a quarantined virtual private network (VPN) client.'),
(16, 'Quarantine_Timeout', 'Quarantine Timeout', 'The Firewall service disqualified a quarantined VPN client after the time-out period elapsed.')

The next step is to import the exported text files into the relevant table in SQL Server. Note that the SQL Server Import and Export Wizard has a default length of 50 characters for all character fields – that won’t be sufficient for much of the data. I used the schema as a reference to choose the correct lengths.
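
As an alternative to the wizard, each exported file can be loaded with BULK INSERT (a sketch; the file path is hypothetical, and FIRSTROW = 2 skips the header row included in the export):

```sql
-- Load the tab-delimited TMG export into the matching table
BULK INSERT dbo.FirewallLog
FROM 'C:\TmgExport\FirewallLog.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2);
```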

I can now look at the log data in ways that the log filtering can’t and without stressing the TMG servers.

-- Outbound traffic ordered by connections descending
SELECT ApplicationProtocol, A.String AS Action, COUNT(*) AS Count
FROM FirewallLog FL
INNER JOIN Action A ON A.Id = FL.Action
WHERE DestinationNetwork = 'External' AND FL.Action NOT IN (7)
GROUP BY ApplicationProtocol, A.String
ORDER BY COUNT(*) DESC

-- Successful outbound traffic ordered by connections descending
SELECT ApplicationProtocol, A.String AS Action, COUNT(*) AS Count
FROM FirewallLog FL
INNER JOIN Action A ON A.Id = FL.Action
WHERE DestinationNetwork = 'External' AND A.String IN ('Initiated Connection')
GROUP BY ApplicationProtocol, A.String
ORDER BY COUNT(*) DESC

-- Successful outbound traffic showing source and destination IP addresses, ordered by connections descending
SELECT  ApplicationProtocol ,
        dbo.ufn_GetIPv4Address(SourceIP) AS SourceIP ,
        dbo.ufn_GetIPv4Address(DestinationIP) AS DestinationIP ,
        COUNT(*) AS Count
FROM    FirewallLog
WHERE   ApplicationProtocol IN ( 'HTTP', 'HTTPS', 'FTP' )
        AND DestinationNetwork = 'External'
        AND Action NOT IN ( 7, 8, 10, 11 )
GROUP BY ApplicationProtocol ,
        dbo.ufn_GetIPv4Address(SourceIP) ,
        dbo.ufn_GetIPv4Address(DestinationIP)
ORDER BY COUNT(*) DESC
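The ufn_GetIPv4Address function in the query above converts the numeric form in which the logs store addresses into a readable dotted quad. As a rough illustration of the idea (assuming, for simplicity, an address held as an unsigned 32-bit integer in network byte order — the TMG schema's actual encoding may differ), the conversion looks like this in Python:

```python
def ipv4_from_uint(value: int) -> str:
    """Convert an unsigned 32-bit integer (network byte order) to dotted-quad text."""
    if not 0 <= value <= 0xFFFFFFFF:
        raise ValueError("not a 32-bit value")
    # Peel off each octet, most significant first.
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ipv4_from_uint(0xC0A80001))  # 192.168.0.1
```

The SQL function does the same arithmetic with integer division and modulo.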

Removing IIS headers using Powershell, ASafaWeb and server configuration

I’ve been using the ASafaWeb security analyser to scan various websites that I work on. It picks up basic configuration problems in .NET websites and, very handily, will scan periodically to make sure that misconfiguration doesn’t creep back in.

ASafaWeb was created by Troy Hunt, a software architect and Microsoft MVP for Developer Security based in Australia. Troy writes some great articles on improving application security, with a focus on .NET, and he links to those articles in ASafaWeb to illustrate why and how improvements should and can be made.

Most of the issues can be addressed in the application, generally in web.config, but the issue I’m interested in here can be solved by configuring at the server level. Here’s the output from ASafaWeb:


See that “X-Powered-By: ASP.NET” header? That one’s inherited from the IIS root configuration. We could remove it in every web.config but it’s better to remove it once, at the server level.

Troy links to his blog post on removing all of those headers – Shhh… don’t let your response headers talk too loudly. As he mentions, there are many different ways of removing those headers – he goes into IIS Manager UI and removes it from there.

I’m trying to make sure that I script server configuration changes. This is a great way of documenting changes and allows new servers to be configured simply. I want servers that can rise like a phoenix from the metaphorical ashes of a new virtual machine rather than having a fragile snowflake that is impossible to reproduce (thanks to Martin Fowler and his ThoughtWorks colleagues for the imagery).

So, the script to remove the “X-Powered-By” header turns out to be very straightforward once you figure out the correct incantations. This assumes you have Powershell and the Web Server (IIS) Administration Cmdlets installed.

Import-Module WebAdministration
Clear-WebConfiguration "/system.webServer/httpProtocol/customHeaders/add[@name='X-Powered-By']"
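For comparison, the same header can be removed declaratively for a single application in its web.config — useful if you don’t control the server root:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- remove the header inherited from applicationHost.config -->
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

The PowerShell approach is still preferable here because it fixes the server once rather than patching every site.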

That’s a lot of words for a very little configuration change but I wanted to talk about ASafaWeb and those ThoughtWorks concepts of server configuration. I also need to mention my mate Dylan Beattie who helped me out without knowing it. How? StackOverflow of course.

Accurate time synchronisation using ntpd for Windows

Computers can be finickity about time but they’re often not very good at keeping it. My desktop computer, for instance, seems to lose time at a ridiculous rate. Computer hardware has included a variety of devices for measuring the passage of time from Programmable Interval Timers to High Precision Event Timers. Operating systems differ in the way that they use that hardware, perhaps counting ticks with the CPU or by delegating completely to a hardware device. This VMware PDF has some interesting detail.

However it’s done, it doesn’t always work. The hardware is affected by temperature and other environmental factors so we generally synchronise with a more reliable external time source, using NTP. Windows has NTP support built in, the W32Time service, but it only uses a single time server and isn’t intended for accurate time. From this post on high accuracy on the Microsoft Directory Services Team blog,

“W32time was created to assist computers to be equal to or less than five minutes (which is configurable) of each other for authentication purposes based off the Kerberos requirements”

They quote this knowledgebase article:

“We do not guarantee and we do not support the accuracy of the W32Time service between nodes on a network. The W32Time service is not a full-featured NTP solution that meets time-sensitive application needs. The W32Time service is primarily designed to do the following:

  • Make the Kerberos version 5 authentication protocol work.
  • Provide loose sync time for client computers.”

So, plus or minus five minutes is good enough for Kerberos authentication within a Windows domain but it’s not good enough for me.


I’ve been experimenting with the NTPD implementation of the NTP protocol for more accurate time. NTPD polls multiple NTP servers and only adjusts the clock when a majority agree. I’m using the Windows binaries from Meinberg. Meinberg also have a monitoring and configuration application, called NTP Time Server Monitor, which is very useful.
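Under the hood, NTP estimates how far a clock is out using four timestamps per exchange: client send, server receive, server send, client receive. A minimal sketch of the standard offset and round-trip delay calculations (the timestamps here are illustrative seconds, not real NTP fixed-point values):

```python
def ntp_offset(t1, t2, t3, t4):
    """Estimated clock offset: t1/t4 taken on the client, t2/t3 on the server."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def ntp_delay(t1, t2, t3, t4):
    """Round-trip network delay, excluding time spent inside the server."""
    return (t4 - t1) - (t3 - t2)

# Client clock 1s behind the server, 0.25s each way on the wire:
# client sends at 0.0, server receives at 1.25 and replies at 1.25,
# client receives at 0.5 on its own slow clock.
print(ntp_offset(0.0, 1.25, 1.25, 0.5))  # 1.0 -> step the client forward 1s
print(ntp_delay(0.0, 1.25, 1.25, 0.5))   # 0.5 -> round trip
```

NTPD runs this continuously against several servers and disciplines the clock gradually, which is why a misbehaving local timer (see below) undermines it so badly.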

The implementation is straightforward, so I won’t go into it in detail: install the service, install the monitor, add some local time servers to your config and away you go.
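A minimal ntp.conf for this setup might look like the following — the server names and drift file path are examples, so substitute servers near you from pool.ntp.org (or your own network) and whatever path your Meinberg install uses:

```
server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst
driftfile "C:\NTP\ntp.drift"
```

Four or more servers gives NTPD enough sources to out-vote a single bad clock; `iburst` speeds up the initial synchronisation.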

I do, however, want to describe an issue that we had with synchronisation on our servers. Wanting more accurate time for our application, we installed NTPD and immediately started getting clock drift. Without NTPD running, the clocks were quite stable but with it running, they would drift out by a number of seconds within a few minutes.

A quick Google led me to this FixUnix post and I read this quote from David:

“Do you have the Multi-Media timer option set? It was critical to getting good stability with my systems”

The link was broken but I eventually found this page which describes the introduction of the -M flag to NTPD for Windows. This runs the MultiMedia timer continuously, instead of allowing it to be enabled occasionally by applications like Flash.

Cutting a long story slightly shorter, it turns out that the -M flag was causing the clock drift on our Windows servers. Removing that option from the command line means that our servers are now happily synchronising with an offset of less than 300ms from the time source.


As I mentioned, my desktop computer is very inaccurate. NTPD can’t keep it in check so I’m trying a different approach – Tardis 2000. So far, so good. I’m not sure about the animated GIFs on the website though.