Friday, November 22, 2013

A case of Occam's razor

I wanted to write about a seemingly bizarre issue with a web page fetch that ultimately proved to be none other than yet another validation of Occam's razor: the principle that the simplest explanation for a problem is generally the right one.

So, to give some background: I'm involved in doing some statistical calculations over a large number of web pages, and this has the side effect of highlighting web pages that deviate from the norm. So I end up going through many web pages that stand out from the pack at first glance.

The fetcher I use talks HTTP directly, and deals leniently with the web servers out there that don't always implement HTTP according to spec. On this particular occasion, one web site responded to the fetcher with content that was nowhere close to what the browser retrieved.

Let me post here what the HTML looked like:

<html lang="en">
    <title>PHP Application - AWS Elastic Beanstalk</title>
    <link href="" rel="stylesheet" type="text/css"></link>
    <link href="" rel="icon" type="image/ico"></link>
    <link href="" rel="shortcut icon" type="image/ico"></link>
    <!--[if IE]><script src=""></script><![endif]-->
    <link href="/styles.css" rel="stylesheet" type="text/css"></link>
    <section class="congratulations">
Your AWS Elastic Beanstalk <em>PHP</em> application is now running on your own dedicated environment in the AWS&nbsp;Cloud<br />

        You are running PHP version 5.4.20<br />


    <section class="instructions">
What's Next?</h2>
<li><a href="">AWS Elastic Beanstalk overview</a></li>
<li><a href="">Deploying AWS Elastic Beanstalk Applications in PHP Using Eb and Git</a></li>
<li><a href="">Using Amazon RDS with PHP</a>
<li><a href="">Customizing the Software on EC2 Instances</a></li>
<li><a href="">Customizing Environment Resources</a></li>
AWS SDK for PHP</h2>
<li><a href="">AWS SDK for PHP home</a></li>
<li><a href="">PHP developer center</a></li>
<li><a href="">AWS SDK for PHP on GitHub</a></li>

    <!--[if lt IE 9]><script src=""></script><![endif]-->

This is nowhere close to the HTML retrieved by the browser. You can try it. The web page is about hair products.

My experience is that, based on the HTTP headers and the originating IP, some web servers will return different content. Sometimes the server has identified an IP as belonging to a bot and decides to return an error response or an outright wrong page.
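One way to test for this kind of header-based cloaking is to replay a browser-like set of headers programmatically and compare what comes back. A minimal sketch using Python's standard library; the URL and header values here are placeholders, not the actual site or the fetcher's real header set:

```python
import urllib.request

# Hypothetical URL and headers -- stand-ins for the real values.
url = "http://example.com/"
headers = {
    "User-Agent": "rtw",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-us,en;q=0.5",
}

req = urllib.request.Request(url, headers=headers)
# urllib normalizes header names via str.capitalize(), e.g. "User-agent".
assert req.get_header("User-agent") == "rtw"

# A real check would fetch with different header sets and diff the bodies:
# body = urllib.request.urlopen(req).read()
```

Fetching the same URL with and without the suspect headers, and diffing the responses, quickly shows whether the server is keying off them.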

So I tested the IP theory by running the fetcher from a different network, with a different outgoing IP. This time, the correct page was retrieved. Then I used curl to retrieve the page from the same network that had given me the incorrect page. To my surprise, curl retrieved the correct page. In fact, curl got the correct page from both networks.

This was quite puzzling. I thought that perhaps the web server was doing some sophisticated fingerprinting: having identified the User-Agent and maybe other headers the fetcher was using, it had decided to send it the wrong page.

So, using Wireshark, I captured all the HTTP headers sent by the fetcher. Another team member then used curl, specifying these same headers:

curl -H 'User-Agent: rtw' \
     -H 'Host:' \
     -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
     -H 'Accept-Language: en-us,en;q=0.5' \
     -H 'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7' \
     -H 'Keep-Alive: 115' \
     -H 'Connection: keep-alive' \
     -H 'Accept-Encoding: gzip,deflate'

I was positive that curl would then fail. But of course it still returned the correct page. So my theory of sophisticated fingerprinting was wrong, or maybe the fingerprinting was even more sophisticated than I thought. I was stumped.

And then I realized that I had missed looking at a very crucial piece of data in this whole operation: the IP the fetcher used to get the page. The first thing the fetcher does is resolve the hostname to an IP, and since DNS queries can be expensive and we do lots of them, the IP is retrieved from a memcached instance if it is available. An IP may be cached for a number of hours. From the fetcher logs, I could see the IP that it was using:

DNS resolved from cache -> /

But as dig showed, that was the incorrect IP:

>>$ dig
; <<>> DiG 9.3.6-P1-RedHat-9.3.6-20.P1.el5 <<>>
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28108
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 4, ADDITIONAL: 4

;    IN    A

;; ANSWER SECTION:
                300      IN    CNAME
                60       IN    A
                60       IN    A

;; AUTHORITY SECTION:
                1703     IN    NS
                1703     IN    NS
                1703     IN    NS
                1703     IN    NS

;; ADDITIONAL SECTION:
                92612    IN    A
                92612    IN    A
                92612    IN    A
                92510    IN    A

;; Query time: 11 msec
;; WHEN: Fri Nov 22 12:40:20 2013
;; MSG SIZE  rcvd: 345

All that remained now was to validate this far simpler hypothesis. It was trivial to do so: all I had to do was remove the domain->IP mapping from memcached.

>>$ telnet localhost 11211
Connected to localhost.localdomain (
Escape character is '^]'.
VALUE 4096 4
Connection closed by foreign host.
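The memcached text protocol makes this kind of eviction easy: you send `delete <key>\r\n` and the server replies `DELETED\r\n` (or `NOT_FOUND\r\n` if the key is absent). A hedged sketch of doing the same over a raw socket; the key name below is a placeholder, since the fetcher's actual key format isn't shown above:

```python
import socket

def memcached_delete(key: str, host: str = "localhost", port: int = 11211) -> str:
    """Delete `key` via the memcached text protocol; return the server's reply."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(f"delete {key}\r\n".encode("ascii"))
        # Reply is a single line: "DELETED" or "NOT_FOUND".
        return conn.recv(1024).decode("ascii").strip()

# Hypothetical usage (requires a running memcached and the real key name):
# memcached_delete("dns:www.example.com")
```

This is exactly what the telnet session above does by hand, just scripted.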

This time, the fetcher logs showed that it was indeed picking up the correct IP. And of course it fetched the correct page, with all the hair product details.

DNS resolved -> /
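The caching behaviour at the heart of all this can be sketched roughly as follows. This is a simplified stand-in, not the fetcher's actual code: a plain dict replaces memcached, and the resolver function is injectable so the example runs offline.

```python
import time
import socket

class DnsCache:
    """Resolve hostnames, caching results for `ttl` seconds.

    Mimics the fetcher's memcached-backed cache: a cached IP is served
    until the entry expires or is evicted, even if DNS has since changed.
    """

    def __init__(self, ttl=4 * 3600, resolver=socket.gethostbyname):
        self.ttl = ttl
        self.resolver = resolver       # injectable for testing
        self._cache = {}               # host -> (ip, expiry timestamp)

    def resolve(self, host):
        entry = self._cache.get(host)
        if entry is not None and entry[1] > time.time():
            return entry[0]            # "DNS resolved from cache"
        ip = self.resolver(host)       # fresh DNS lookup
        self._cache[host] = (ip, time.time() + self.ttl)
        return ip

    def evict(self, host):
        # Equivalent of deleting the key from memcached by hand.
        self._cache.pop(host, None)
```

With a multi-hour TTL, a site whose DNS changes under you (or whose old IP gets reassigned, say to somebody's Elastic Beanstalk instance) keeps resolving to the stale address until the entry expires or is evicted.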

So once again, I was reminded of Occam's razor and how important it is to:

1. Remember all the assumptions we make about how a certain software system works.
2. Validate all the assumptions, starting with the simplest first.

 Happy debugging the Net!
