Robin Minto

Software development, security and miscellany

HTTP Public Key Pinning (HPKP)

There’s a new HTTP header on the block - HTTP Public Key Pinning (HPKP). It allows the server to publish a security policy in the same vein as HTTP Strict Transport Security and Content Security Policy.

The RFC was published last month so browser support is limited - Chrome 38, Firefox 35 and newer. However, there are helpful articles from Scott Helme, Tim Taubert and Robert Love on the topic, and OWASP has some background on certificate and key pinning in general. Scott has even built support for HPKP reporting into his helpful reporting service - https://report-uri.io/.

Although Chrome and Firefox will honour your public key pins, testing the header is slightly tricky as they haven't implemented reporting yet (as of Chrome 42 and Firefox 38). I spent some time trying to coax both into reporting, working under the assumption that they must have implemented the whole spec, right? It seems not.

In writing this, I also wanted to note the command I used to calculate the certificate digest that's used in the header. In contrast to other examples, this one connects to a remote host to get the certificate (allowing for SNI via -servername), writes the digest to a file and exits openssl when complete.
echo | 
openssl s_client -connect robinminto.com:443 -servername robinminto.com |
openssl x509 -pubkey -noout | openssl pkey -pubin -outform der |
openssl dgst -sha256 -binary | base64 > certdigest.txt
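
For reference, the resulting header looks something like this (wrapped for readability; the pin values and report address are placeholders, and the RFC requires a max-age plus a second, backup pin for a key that isn't in the served certificate chain):

Public-Key-Pins: pin-sha256="digest+of+current+public+key+from+above=";
                 pin-sha256="digest+of+backup+public+key=";
                 max-age=5184000; includeSubDomains;
                 report-uri="https://report-uri.io/..."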

I won't be using HPKP in my day job until reporting support is available and I can validate that the configuration won't break clients. There's great potential here though once the support is available.

Server trials and “trebuildulations”

trebuildulation

[tree-bild-yuh-ley-shuhn]
noun
plural noun: trebuildulations
: grievous trouble or severe trial whilst rebuilding or repairing
: made up word
: nod to Star Trek
"the trebuildulations of a server"

What started with a warning about disk space turned into a complete server rebuild that occupied all of my free time this week. I’m writing down some of the issues for the benefit of my future self and others.

Old Windows

When I see a disk space warning, I head into the drive and look for things to delete. In this case, I immediately noticed Windows.old taking up 15GB – this remnant of the upgrade to Server 2012 R2 last year was ripe for removal.

If this were Windows 7, I would run Disk Cleanup, but on Server that requires the Desktop Experience feature, which isn’t installed by default and can’t be removed once it is. So, I set about trying to remove the folder at the command line.
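
For the record, the commonly suggested recipe (and a hedged approximation of what I ran, from an elevated PowerShell prompt) goes something like this:

# take ownership, grant Administrators full control, then delete
# careful: Windows.old contains junction points, and recursive
# permission changes can follow them (see below)
takeown /F C:\Windows.old /R /A /D Y
icacls C:\Windows.old /grant Administrators:F /T /C
Remove-Item C:\Windows.old -Recurse -Force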

At this point, it seems I failed to remove the junctions from Windows.old but succeeded in taking ownership, resetting permissions and removing the folder. The folder was gone but all was not well. On restart, the Hyper-V management service wouldn’t start.

I forget the error but I eventually determined that the permissions had been reset on C:\ProgramData\Application Data. That folder is a compatibility junction pointing back to C:\ProgramData and its default permissions deny listing it; with that restriction gone, the folder effectively links to itself, resulting in C:\ProgramData\Application Data\Application Data\Application Data\Application Data\Application Data... This causes lots of MAX_PATH issues at the very least.
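
For anyone in the same spot, this is roughly how I'd check the junction and reinstate the deny entry that normally stops it being listed. I'm going from memory on the exact default ACL, so compare against a healthy machine first:

fsutil reparsepoint query "C:\ProgramData\Application Data"
# should report a junction whose substitute name is C:\ProgramData
icacls "C:\ProgramData\Application Data" /deny "Everyone:(RD)"
# re-applies the deny on "list folder / read data" for Everyone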

Despite correcting the permissions, I wasn’t quite able to fix everything and Hyper-V continued to fail. A rebuild was in order.

We couldn't create a new partition

Microsoft have refined the Windows Server installation process over the years so that it is now normally relatively painless, even without an unattended install. Boot from a USB key containing the installation media, select some localisation options, enter a licence key and off it goes. Not this time.

My RAID volume was visible to the installer and I could delete, create and format partitions but when I came to the next step in the process:

We couldn't create a new partition or locate an existing one. For more information, see the Setup log files.

This foxed me for a while. I tried DISKPART from the recovery prompt, I tried resetting disks to non-RAID and I tried disabling S.M.A.R.T. in the BIOS. Nothing worked. I did notice mention of removing USB devices and disconnecting other hard drives, suggesting there’s some hidden limit on the number of drives or partitions that the installer can handle. I could have removed each of the three data drives one by one to see if that theory had merit, but I decided to jump in and remove all three.
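
For completeness, the DISKPART attempt (from the Shift+F10 command prompt during setup) was along these lines, although it got me no further:

diskpart
list disk
select disk 0
clean
exit

Note that clean wipes the partition table on the selected disk, so be very sure about the disk number before running it.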

Success! With the data drives out of the way, a short while later I had a working Windows Server 2012 R2.

Detached Disks in the Storage Pool

I reconnected the three data drives and it was now time to see if Storage Spaces would come back online as promised.

The steps were straightforward:

  • Set Read-Write Access for the Storage Space
  • Attach Virtual Disk for each of the virtual disks
  • Online each of the disks
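
Those steps can be scripted too; a rough PowerShell equivalent (not exactly what I clicked through) would be:

# make the storage pool writable again, attach each virtual disk, then bring the disks online
Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsReadOnly $false
Get-VirtualDisk | Connect-VirtualDisk
Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false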

The disks were back online and the data was available. Great!

This was a short-lived success story – the disks were offline after reboot and had to be re-attached. Thankfully, I was not alone and a PowerShell one-liner fixed the issue.

Get-VirtualDisk | Set-VirtualDisk -IsManualAttach $False

Undesirable Desired State Configuration Hang

I thought I’d take the opportunity to switch my imperative PowerShell setup scripts for Desired State Configuration-based goodness. This was a pretty smooth process but there was one gotcha.

DSC takes care of silent MSI installations, but EXE packages require the appropriate “/quiet” arguments. Because those arguments were missing from my configuration script, DSC sat waiting for someone to click an invisible dialog box in the EXE’s installer.
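
For illustration, here's a cut-down Package resource of the kind involved, with made-up names and paths; the Arguments line is the one I had missed:

Configuration InstallTools
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        Package ExampleTool
        {
            Ensure    = 'Present'
            Name      = 'Example Tool 1.0'              # display name after install (hypothetical)
            Path      = 'C:\Installers\ExampleTool.exe' # hypothetical EXE installer
            ProductId = ''                              # empty for non-MSI packages
            Arguments = '/quiet /norestart'             # without this, the installer waits for clicks
        }
    }
}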

Having fixed my script, I killed the PowerShell process and re-tried to be presented with this:

Cannot invoke the SendConfigurationApply method. The PerformRequiredConfigurationChecks method is in progress and must 
return before SendConfigurationApply can be invoked.

The issue even survived a reboot.

Again, I was not alone and, some more PowerShell later, my desired state was configured.
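
I didn't keep the exact commands, but the usual fix on WMF 4 (which is what Server 2012 R2 ships with) is along these lines; WMF 5 adds Remove-DscConfigurationDocument for the same job:

# stop the WMI provider host running the stuck consistency check,
# then remove the pending configuration document
Get-Process -Name WmiPrvSE -ErrorAction SilentlyContinue | Stop-Process -Force
Remove-Item "$env:windir\System32\Configuration\Pending.mof" -Force -ErrorAction SilentlyContinue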

I’m pleased to report the server is back to a happy state and all is well in its world.

IIS configuration file contention

I’ve been automating the configuration of IIS on Windows servers using PowerShell. I’ll tell anyone who’ll listen that PowerShell is awesome and, combined with source control, it’s my way of avoiding snowflake servers.

To hone the process, I’ll repeatedly build a server with all of its configuration from a clean image (strictly speaking, a Hyper-V checkpoint), and I occasionally get an error from a configuration script that has previously worked:

new-itemproperty : Filename: \\?\C:\Windows\system32\inetsrv\config\applicationHost.config 
Error: Cannot write configuration file

Why is this happening? The problem is intermittent so it’s difficult to say for sure, but it does seem to occur more often if the IIS Management Console is open. My theory is that, if the timing is right, the Management Console has a lock on the config file when PowerShell attempts to write to it and this error is the result.
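
If that theory is right, a simple retry around the write ought to paper over it. A minimal sketch, using a hypothetical binding change as the payload:

Import-Module WebAdministration

$maxAttempts = 5
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        # hypothetical write to applicationHost.config
        New-ItemProperty 'IIS:\Sites\Default Web Site' -Name bindings `
            -Value @{ protocol = 'http'; bindingInformation = '*:8080:' }
        break
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds 2  # give whatever holds the config file a moment to let go
    }
}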

UPDATE

I'm now blaming AppFabric components for this issue. I noticed the workflow service was accessing the IIS config file and also found this article on the IIS forums - AppFabric Services preventing programmatic management of IIS. The workflow services weren't intentionally installed and, since we're using the AppFabric Cache client NuGet package, I've removed the AppFabric install completely and haven't had a recurrence of the problem.