BizTalk and Certificate Revocation Lists (CRLs) - Part II
(This is the third in a series of three posts about CRLs - the first was Web Services and the 15 Second Delay, and the second was BizTalk and Certificate Revocation Lists (CRLs) - Part I).

Note: A lot of the information in this post comes from a great MSDN article located here.

Caveat: My client uses 64-bit servers (AMD Opterons), running 64-bit versions of Windows 2003 R2 and BizTalk 2006. IIS is running in 32-bit compatibility mode (as we use SharePoint). I haven't yet worked out if the CRL problem occurs on 32-bit servers – I certainly haven't noticed it on our 32-bit servers yet.

For 2 months, my BizTalk application was working fine. The system passed performance testing, and was deployed on the Live servers in preparation for final connectivity testing.

Then one Monday last week, the test team complained that they were experiencing sporadic timeouts. On the same day, I was doing some testing on an unrelated BizTalk application on a separate server... and I noticed that I would occasionally get request-response latency approaching 70 secs...

Given that the same day I'd noticed I no longer had access to iTunes Radio from that morning (bah!), I assumed that changes had been made to our proxy server or firewall. I fired up TCP View on the server I was working on, and there was our old friend SYN_SENT: something was blocking access to the CRL again. I spoke to the Tech Support team and discovered that no changes had been made to the proxy server. Leaving them to check for changes to our firewall and security policies, I decided to do some research into why this delay exists (if the call is blocked) and if there was a way around it. Here's what I discovered (refer to this article for a more in-depth explanation of Certificates and CRLs):

  • Any given Digital Certificate contains a property called the CRL Distribution Point, which is a collection of URIs.
  • When a certificate is validated, a CRL retrieval attempt is made using each URI in the list. Retrieval stops with the first URI to return a valid CRL.
  • When a valid CRL is obtained, it is stored in the Certificate Store for the Local Machine (under Certificates (Local Computer)/Intermediate Certification Authorities/Certificate Revocation Lists).
  • A CRL is a signed object in its own right and, like a certificate, it contains an expiry/update date called the Next Update date.
  • If the CRL already exists in the Certificate Store and is still valid, then this CRL is used; otherwise an attempt is made to download an updated CRL.
  • URI schemes valid for CRLs include http://, ldap://, and file:// – it is the issuer of the certificate who decides upon the contents of the CRL Distribution Point.
  • In large corporations, it is common to use Active Directory (AD) as the provider of CRLs: AD can download the required CRLs and either publish them to a master location, or distribute them to the servers that need them.

One thing I was curious about was this 15 second delay which kept popping up.

The Xceed Software post I'd read made reference to a 15 second delay being hard-coded into the WinVerifyTrust API call.

Looking through the documentation for WinVerifyTrust I noticed two things:

  1. Microsoft recommend the use of CertGetCertificateChain for validating a certificate (instead of WinVerifyTrust)
  2. WinVerifyTrust enumerates a registry key (HKLM\SOFTWARE\Microsoft\Cryptography\Providers\Trust\) to find out which API call to use to verify the trust of the given object

I'm not about to trace what WinVerifyTrust does to actually check the CRL, but I'd suspect that it ends up delegating to either CertGetCertificateChain or CertVerifyRevocation (and I'd bet that internally, CertGetCertificateChain calls CertVerifyRevocation to verify the CRL for a given certificate).

Suffice it to say that CertGetCertificateChain builds a chain of certificates, starting from the given certificate and working all the way up to the root CA, optionally checking the revocation status of each certificate in the chain; whilst CertVerifyRevocation verifies the revocation status of a single certificate.

And CertVerifyRevocation takes, as one of its parameters, a struct called CERT_REVOCATION_PARA (the chain engine used by CertGetCertificateChain exposes an equivalent timeout through its configuration).
The format of that structure (abridged) is:

typedef struct _CERT_REVOCATION_PARA {
  DWORD cbSize;
  DWORD cCertStore;
  HCERTSTORE* rgCertStore;
  LPFILETIME pftTimeToUse;
  DWORD dwUrlRetrievalTimeout;
  BOOL fCheckFreshnessTime;
  DWORD dwFreshnessTime;
  LPFILETIME pftCurrentTime;
  LPFILETIME pftCacheResync;
} CERT_REVOCATION_PARA, *PCERT_REVOCATION_PARA;

Heh, look, there's a member called dwUrlRetrievalTimeout.

Wonder if that's relevant??? ;-)

The documentation has this to say:

This member contains the time-out limit, in milliseconds. If zero, the revocation handler's default time-out is used.

And what's the revocation handler's default time-out?

Well, Microsoft doesn't specify this directly... but I notice in a related Knowledge Base article that a value of 15000 milliseconds is used… i.e. 15 seconds!

So that's as far as we can go with that: unless IIS includes an option to configure this timeout, we can't change it (and, as it turns out, it sort of does – see option 4 below).

Whilst researching this post, I noticed that one solution that is frequently touted is to modify the following registry key:

HKCU\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing\State

But that's not much use, as that's for the Current User (hence the HKCU). Great if I were using my own local user account for the application pools; bad if I'm using a non-interactive service account (which we are). Plus I'm not sure this would work for IIS… maybe I'll try it at some stage.

(Note: it looks like Microsoft are aware of this issue, because in Windows Vista/Longhorn there's now a Group Policy setting which lets you set this default timeout for non-interactive processes, i.e. IIS App Pools!)

So what's the solution in this case?

Well, unless the technical support guys can work out what they changed to block CRL access (I suspect they turned on authentication on the proxy), we have four choices:

  1. Use Active Directory to store and publish CRLs (which we should have been doing from the start IMO)
    This is Microsoft's preferred way of doing it for large customers.
    More information on configuring CRLs with AD can be found here.
  2. Manually download the required CRL and install it
    This is my preferred solution for this particular issue, and is detailed below.
  3. Disable CRL checking for the server
    This is an interesting one. I'm not convinced that this can be done - there are a few posts about how to do this, including one on how to do it for IIS here.
    However, this seems to be related to certificate exchange for HTTP requests/responses, as opposed to certificate validation for signed code, which is a whole different thing. Plus, turning off certificate checking is a rather large security hole, as you don't know if a given certificate is still valid.
  4. Change the default CRL timeout period for CAPI
    I noticed in the Knowledge Base article for an update to IIS 5.0 that new registry keys had been added, including allowing a value called ChainUrlRetrievalTimeoutMilliseconds to be set.

    Then when browsing through the PKI documentation, I noticed a reference to the same registry keys, plus a note saying "this setting was first introduced with MS04-011" (the IIS 5.0 update linked to above).

    So it looks like it is possible to set the default timeout.

    I haven't tried this, so I can't verify that it works, but to me it's not the correct solution: the CRL should be available – from AD, from the URL, or by installing it manually. Setting the timeout to a lower value just ignores the problem, and creates a potential security hole, as you can't be sure the certificate used to sign the code is still valid.
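For reference, here's a .reg sketch of what option 4 would look like. The key path below is the chain-engine configuration key as I understand it from the PKI documentation (the setting introduced with MS04-011), so verify it against the docs for your OS version before applying – the value is in milliseconds:

```reg
Windows Registry Editor Version 5.00

; Assumed path - check against the PKI documentation before use
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\OID\EncodingType 0\CertDllCreateCertificateChainEngine\Config]
; 5 seconds per CRL URL, instead of the 15 second default (0x1388 = 5000)
"ChainUrlRetrievalTimeoutMilliseconds"=dword:00001388
```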

Manually downloading and installing a CRL
Needless to say, I thought I'd have a go at manually downloading the CRL and installing it – and it worked a treat. Problem solved (at least until the next CRL update is needed, in August 2007). Still, it gives us a breather to get it properly sorted.

Finding the URL for the CRL is easy: look in the certificate details for the CRL Distribution Point, and copy the URL from there. In this case, it's the Microsoft Code Signing Public Certification Authority CRL:

You can put this URL in a web browser, and download the CRL.

(Note: if you're doing this on Windows Server 2003, you'll need to add the site to your list of Trusted Sites, otherwise you won't be able to download the CRL file.)

Once you have the file, you can install it following the instructions here:
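If you'd rather script the install than do it by hand, certutil should be able to do the same job (it ships with the Windows Server 2003 Administration Tools; syntax here is from memory, so check certutil -? first):

```
rem add the downloaded CRL to the Local Machine's Intermediate CA store
certutil -addstore CA CodeSignPCA.crl

rem list the store to confirm it's there (and check its Next Update date)
certutil -store CA
```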

And lo and behold, the problem was fixed.
At least, it's fixed until August 30th 2007, when the CodeSignPCA.crl expires... ;-)
But by then, I'm sure we'll have found a permanent fix!

Saturday, 24 March 2007 20:41:17 (GMT Standard Time, UTC+00:00)


BizTalk and Certificate Revocation Lists (CRLs) - Part I

(This post follows on from last week's post Web Services and the 15 Second Delay).

Having accounted for 15 secs of the 45 secs latency I'd been observing in my BizTalk application, I started looking at where the additional 30 secs was coming from.


When I looked at my log files, I noticed the entire process from start to end (including calling external Web Services) was now taking just over three seconds. However, when invoking the BizTalk Web Service from a test client, the measured latency was still over 30 seconds. Something wasn't right…


I put logging code in the BizTalk Web Service around the ServerProxy.Invoke() call (the method call that submits the received SOAP message to BizTalk).

In my BizTalk application, the BizTalk Web Service is used to submit a message to the MessageBox, via the SOAP Adapter and a static request-response Receive Port using the PassThru pipeline. There is an Orchestration which subscribes to messages coming from this receive port, processes them, and then returns a response back to the receive port (and therefore the Web Service). My logging code logged the time at which the orchestration received a request message, and the time at which it sent a response message.


What I saw in my log files was that there was a 15 sec delay between calling ServerProxy.Invoke(), and the orchestration logging that it had received a request, and then another 15 sec delay between the orchestration sending a response, and the return from ServerProxy.Invoke() in the WebService.



Our BizTalk Load Test environment is composed of a single BizTalk 2006 Server, a single SQL Server, and SAN disk storage. Given that everything that happens between ServerProxy.Invoke() and an orchestration instance being started is non-user code (i.e. not my code!!), the problem was most likely environmental i.e. network, disk, database, security or "other".


So my first step was to attach SQL Server Profiler and log all the database calls going between the BizTalk Server and SQL Server.

If you haven't used Profiler before, be aware that it captures a huge amount of data – going through its logs can be very very painful.

However, in this case we're only interested in calls that originated from BizTalk – specifically, from one of the Service Accounts that BizTalk runs under: the account for the Isolated host instance, and the account for my Orchestration Host's host instance.


Why these two? The SOAP adapter runs under the Isolated host, which means that it'll be this Isolated host instance that puts the request into the MessageBox. And the Orchestration Host's host instance will be the one which picks up the request from the MessageBox, and then writes the response back to the MessageBox.

For the Orchestration Host's host instance, finding out the service account name is easy – just look in the Services console, find the service named something like BizTalk Service BizTalk Group : <host name>, and then note the value in the Log On As column.


For the Isolated Host's host instance, we need to check the identity account for the Application Pool under which the BizTalk Web Service is running (if you're using IIS 6 – if you're using IIS 5, you'd use the service account for IIS itself, usually the ASPNET account).


So all we have to do is look for entries in the Profiler log file which have a LoginName property equal to one of these two account names. Fortunately, you can set up a filter in Profiler quite easily – either before you run the trace (so that only events you're interested in are captured) or after.


I chose to capture all events and then filtered on entries which had an EventClass of RPC:Completed, and a LoginName value of the Isolated host account name.

(Note: because I'm only interested in the SQL statements being executed by BizTalk to call Stored Procedures, I only need to see RPC events – none of the other events are relevant here as they pertain to Audit events, Keep-Alive requests, or Transaction statements).


Running through the filtered log means I only have to trawl through 300-odd lines, rather than 10,000…!

What I'm looking for is the time at which the Isolated host submitted the SOAP request to the MessageBox, so I'm looking for a call to a Stored Procedure (SPROC) called bts_InsertMessage:



(Note: You'll notice in the above image that just before bts_InsertMessage is called, there are calls to bts_InsertPredicate_<hostname> and bts_InsertProperty followed by a call to bts_FindSubscription.

InsertPredicate and InsertProperty populate the MessageProps table, and FindSubscription checks whether the combination of these predicates and properties match a current subscription – if so, bts_InsertMessage is called; if not, a "subscription not found" exception is raised).


So now I know at what time the SOAP request was placed in the MessageBox: 17 seconds after ServerProxy.Invoke() was called. Now I need to check what time the message was picked up by my Orchestration Host's host instance.

To do this, I filter the Profiler log on the service account name used by the host instance.

However, this time I'm looking for a call to bts_DeQueueMessages_<hostname> which did some work – that is, a call which has a higher duration than the others:



However, this doesn't clear up the mystery – this just shows that the message was picked up by the Orchestration Host's host instance 4 secs after being inserted by the Isolated host instance.


And then something hits me: what if this is related to the problems I was having earlier, with the Web Service and the CRL?

What if "17 seconds" is really "15 seconds plus some BizTalk processing time"?


I started up TCP View, passed a test message through – and sure enough, there was a CRL call with a SYN_SENT state.

Only this time, the TCP connection was to a different address:

What are the chances that the BizTalk assemblies are digitally signed? Specifically Microsoft.BizTalk.WebServices.ServerProxy.dll, the assembly used by BizTalk Web Services.

Let's have a look:



And if we drill down into the Certificate Details, and look at the CRL Distribution Point, we see:



So there we are – because the code is hosted in IIS, and is digitally signed, IIS will attempt to validate the certificate, which involves attempting to download the CRL for the certificate.


Unlike my Web Service, I don't think I'd have much luck in convincing Microsoft to *not* digitally sign the BizTalk assemblies… ;-)


So, we set the default proxy on the BizTalk server (using the proxycfg command line tool discussed in the previous post).


And sure enough, the delay in the BizTalk Web Service went away.

The 45 secs latency dropped to 2 secs… and then dropped to 0.5 secs once I did some low-latency tuning.


So there we go. Now, the only reason my client has this issue is that they don't want separate BizTalk servers which just run the BizTalk Web Services – i.e. dedicated BizTalk servers, most likely in a DMZ, which do nothing but put received SOAP messages into the MessageBox, with all the other BizTalk servers behind the firewalls in the secure portion of the network. This is Microsoft's recommended way of doing it, but it is costlier in terms of licensing.


We decided we could live with the Load Test server having access to the internet via a proxy server, so applied this change to all our BizTalk servers (both Live and Test).

Saturday, 17 March 2007 22:22:34 (GMT Standard Time, UTC+00:00)


Web Services and the 15 Second Delay
Background: The large(ish) BizTalk system I've been working on for the last few months consumes a few external Web Services. Whilst doing some early performance tuning, I noticed that I was getting pretty serious request-response latency from the primary BizTalk Web Service... Like over 45 secs, instead of the expected 0.5 – 2 secs...

Fortunately, I routinely put tracing/audit calls in my BizTalk code, including timing info - and I always log the times around orchestration entry/exit... So I was quite surprised to see that a request to one of the external Web Services was taking 15 secs to respond... especially as the web service was hosted (temporarily) on the same server...

Although I didn't have the source code, with the help of Lutz Roeder's trusty .NET Reflector I was poking around the innards within minutes… and couldn't find anything that would cause a delay of 15 secs.

What's more, after further testing I found that the problem only occurred the first time after IIS was restarted. After the first time, it would return within 2 secs (still not very good, but much better). One thing I noticed about the Web Service was that it made use of some code in an unmanaged assembly (old school C++ DLL)...

On a hunch, I wrote a simple test Web Service which simply referenced some functions in the DLL (using the DllImport attribute)... and bang, a 15 sec delay before the Web Service constructor was called.

I also tried the same trick in a console app... and there was no delay.

By this time I was fairly stumped. This issue only happened on our Load Test server: it didn't happen on the Dev or Test servers. And it seemed to only happen to code hosted under IIS.

I figured it was time to dig a bit deeper and see what IIS was doing: starting up Process Monitor (to monitor file and registry access) and TCP View (to monitor network activity), I called my test Web Service.

Filtering the Process Monitor logs to the w3wp.exe process (the IIS Worker Process for Application Pools) I noticed that w3wp was reading a lot of security and cryptography keys from the registry:

Hmm. Interesting.

Then I looked at the TCP View log. And I noticed something very interesting coming from the w3wp process: a connection was being attempted to an external address – and was being blocked.

How do I know it was being blocked? Well, the TCP View log showed a SYN_SENT, which appeared and was visible for 15 secs before disappearing:

Now if you're not familiar with how TCP works, then you might miss the significance of this: when a TCP client attempts to contact a TCP server, the first thing performed is a handshake. This involves the client sending a SYN, the server responding with a SYN-ACK, and the client responding with an ACK – the "three-way handshake".

In the TCP View tool, the rightmost column indicates the state of the local TCP connection end-point. You'll see that in this case, the state is SYN_SENT: this state indicates that the local TCP client is still waiting for a SYN-ACK in response to the SYN it sent.

Why would this be? Well, many reasons, but mostly because the address can't be reached e.g. a proxy server is required, or a firewall rule is blocking the call.

And what's the relevance of that address? A CRL (Certificate Revocation List) is used to indicate which Certificates (issued by a Certificate Authority) are no longer valid.

And this is only done when you are using an assembly which has been digitally signed.

(Note: Microsoft became really serious about CRLs back in 2001, when VeriSign accidentally issued two Microsoft certificates to a third party – which meant that the third party could sign code that appeared to have come from Microsoft.)

So what does this have to do with my Web Service and a 15 second delay?

Looking through the assemblies in the Web Service, I noticed that the unmanaged assembly was digitally signed. Aha! So when the unmanaged assembly was referenced in a Web Service, IIS would hand off control to the Windows Cryptography API (CAPI) and ask it to verify the certificate. And this involves retrieving a CRL for the certificate.

After doing some more poking around, I found this blog post (from one of the developers at Xceed Software, a company specializing in .NET components).

The thing that drew my attention was this comment:

Some people behind a very strict firewall, ignoring outgoing connections instead of rejecting them, had a timeout while loading our assemblies. That timeout (at least 15 seconds) is hardcoded in WinVerifyTrust!

Eureka! That's exactly my problem!

And it explained why it was only happening on our Load Test server: Our Dev and Test servers have access to the internet, whilst the Load Test server does not: it has no proxy server configured.

To test this theory, I configured the Load Test server to use a proxy server. If you haven't done this before: you can't do it via the Internet Explorer settings – that only configures a proxy for applications which go via wininet.dll (the Internet Explorer API), which uses the proxy server settings in the Internet Explorer/Internet Options/Connections/LAN Settings panel.

CAPI, however, uses winhttp.dll (Microsoft Windows HTTP Services). And to set the proxy server for that, you need to use the proxycfg command line tool which sets proxy server settings for winhttp.

If you run proxycfg -u, this will import your Internet Explorer proxy server settings:
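The other proxycfg options are worth knowing here too – the proxy name below is just a placeholder for your own server:

```
rem import the current user's Internet Explorer proxy settings
proxycfg -u

rem or set an explicit proxy, bypassing it for local addresses
proxycfg -p "myproxy:8080" "<local>"

rem display the current winhttp proxy settings
proxycfg

rem revert to direct access (no proxy)
proxycfg -d
```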

So I did this, restarted IIS and presto! The problem went away.

However, giving the Load Test server access to the Internet is a fairly large security hole (at least it is in our network, where the Load Test server is behind both the firewall and the DMZ).

So, we contacted the vendor who supplied the unmanaged DLL, and managed to get them to release a new version which wasn't digitally signed.

Which solved the problem of the latency in the Web Service (which accounted for 15 secs of the 45 secs latency).

Solving the remaining 30 secs latency is the subject of next week's post.

Saturday, 10 March 2007 12:01:13 (GMT Standard Time, UTC+00:00) #    Comments [0]  |  Trackback


All content © 2017, Daniel Probert
Powered by: newtelligence dasBlog 2.3.12105.0

The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.


Theme design by Jelle Druyts
