Saturday, December 20, 2014

deleting an iterator in accumulo

I learnt the hard way that setting an iterator in the accumulo shell sets it on a table permanently. To make matters worse, I had set this iterator on the accumulo.metadata table, which made everything fail.

Removing the iterator was tricky. First I had to find out what Accumulo had decided to call the iterator, since I had specified only the Java class and not a name. Here is the command I had originally run:

setiter -class org.apache.accumulo.core.iterators.FirstEntryInRowIterator -p 99 -scan

Here is how I found out what the iterators on the accumulo.metadata table were called:


config -t accumulo.metadata -f iterator


SCOPE      | NAME                                                  | VALUE
table      | table.iterator.majc.bulkLoadFilter .................. | 20,org.apache.accumulo.server.iterators.MetadataBulkLoadFilter
table      | table.iterator.majc.vers ............................ | 10,org.apache.accumulo.core.iterators.user.VersioningIterator
table      | table.iterator.majc.vers.opt.maxVersions ............ | 1
table      | table.iterator.minc.vers ............................ | 10,org.apache.accumulo.core.iterators.user.VersioningIterator
table      | table.iterator.minc.vers.opt.maxVersions ............ | 1
table      | table.iterator.scan.firstEntry ...................... | 99,org.apache.accumulo.core.iterators.FirstEntryInRowIterator
table      | table.iterator.scan.firstEntry.opt.scansBeforeSeek .. | 10
table      | table.iterator.scan.vers ............................ | 10,org.apache.accumulo.core.iterators.user.VersioningIterator
table      | table.iterator.scan.vers.opt.maxVersions ............ | 1

The iterator I added appeared under the name "table.iterator.scan.firstEntry", so I tried to delete that:
root@work accumulo.metadata> deleteiter -n table.iterator.scan.firstEntry -t accumulo.metadata

2014-12-20 15:13:42,854 [shell.Shell] WARN : no iterators found that match your criteria
It turns out you specify only the last component of the property name, along with the scope:

root@work accumulo.metadata> deleteiter -scan -n firstEntry -t accumulo.metadata
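As the config output shows, the table properties follow the pattern table.iterator.&lt;scope&gt;.&lt;name&gt;, which is why deleteiter wants only the trailing name plus a scope flag. A throwaway sketch of pulling those pieces out of a property name (hypothetical helper, just to illustrate the naming convention):

```python
def split_iterator_property(prop: str):
    """Split 'table.iterator.<scope>.<name>[.opt...]' into (scope, name)."""
    parts = prop.split(".")
    assert parts[:2] == ["table", "iterator"], "not an iterator property"
    return parts[2], parts[3]

print(split_iterator_property("table.iterator.scan.firstEntry"))  # ('scan', 'firstEntry')
```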

Wednesday, November 19, 2014

grep many files while printing file name

A useful trick I found:

find . -exec grep -n hello /dev/null {} \;

When grep is given more than one file to search, it prints the file name in front of each match (with -n adding the line number). Since find runs grep on one file at a time, passing the handy /dev/null as a second file forces that behavior.

Attribution: http://www.linuxquestions.org/questions/programming-9/find-grep-command-to-find-matching-files-print-filename-then-print-matching-content-328036/
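When grep isn't at hand, the same filename:line output is easy to mimic in a few lines of Python (a toy sketch, not a replacement for the find/grep one-liner):

```python
import pathlib
import re

def grep_files(pattern, paths):
    """Yield 'file:lineno:line' for each matching line, like grep -n across files."""
    rx = re.compile(pattern)
    for path in paths:
        for lineno, line in enumerate(pathlib.Path(path).read_text().splitlines(), 1):
            if rx.search(line):
                yield f"{path}:{lineno}:{line}"
```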

Friday, May 23, 2014

Decoding HTML pages with Content-Encoding : deflate

Web servers do not all implement Content-Encoding: deflate the same way. Some return a zlib stream with the header specified in RFC 1950, while others return the raw DEFLATE data (RFC 1951) alone.

Java's Inflater class can be used to deal with both cases, but first we must check for the header. The first two bytes identify a zlib stream, and the check is simple:


    static boolean isZlibHeader(byte[] bytes, int offset) {
        // mask to unsigned values before comparing
        int byte1 = bytes[offset] & 0xFF;
        int byte2 = bytes[offset + 1] & 0xFF;

        // 0x78 is the usual CMF byte (deflate, 32K window); these FLG bytes
        // make the 16-bit header a multiple of 31, as RFC 1950 requires
        return byte1 == 0x78 && (byte2 == 0x01 || byte2 == 0x5E || byte2 == 0x9C || byte2 == 0xDA);
    }

    private void inflateToFile(byte[] encBytes, int offset, int size, BufferedOutputStream f) throws IOException {
        // Inflater(true) expects a raw deflate stream, so skip the 2-byte zlib header if present
        boolean zlib = isZlibHeader(encBytes, offset);
        Inflater inflater = new Inflater(true);
        inflater.setInput(encBytes, zlib ? offset + 2 : offset, zlib ? size - 2 : size);
        byte[] buf = new byte[4096];
        int nbytes;
        do {
            try {
                nbytes = inflater.inflate(buf);
                if (nbytes > 0) {
                    f.write(buf, 0, nbytes);
                }
            } catch (DataFormatException e) {
                // propagate instead of looping forever on the same bad input
                throw new IOException("malformed deflate data", e);
            }
        } while (nbytes > 0);
        inflater.end();
    }
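For reference, the same logic is easy to sketch in Python with the standard zlib module (a sketch of the idea above, not the fetcher's actual code; a negative wbits value tells zlib to expect a raw DEFLATE stream with no wrapper):

```python
import zlib

def is_zlib_header(data: bytes) -> bool:
    # RFC 1950 header: CMF byte 0x78 plus one of the valid FLG bytes
    return len(data) >= 2 and data[0] == 0x78 and data[1] in (0x01, 0x5E, 0x9C, 0xDA)

def inflate(data: bytes) -> bytes:
    if is_zlib_header(data):
        # full zlib stream: let zlib verify the header and the adler32 trailer
        return zlib.decompress(data)
    # bare deflate data (RFC 1951): negative wbits means "no zlib wrapper"
    return zlib.decompress(data, -zlib.MAX_WBITS)
```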

An example URL that had to be processed this way: http://game4fun.square7.ch. The Wireshark capture showed Content-Encoding set to deflate in the response headers, with the zlib header (the first two bytes, "78 9c") visible at the start of the de-chunked body.

Tuesday, May 20, 2014

Mapping sockets of a process to the remote end point

Recently, one of our long running processes started showing a high number of open file handles; we were leaking handles somewhere. The first step is to figure out which handles are open, which is easy on Linux with /proc. Just plug in the PID of your process:

ls -ltr /proc/21657/fd

This spits out all the open file handles for the process with PID 21657. Here is an example of an open socket:

lrwx------ 1 user user 64 May 20 13:20 649 -> socket:[2336308491]

This alone doesn't tell us much. Our application uses sockets for many reasons: there are connections to mysql, memcache and mongodb, sockets listening and responding to requests, and connections made to web servers.

To get an idea of the two end points of the socket, we need to look at /proc/net/tcp (as well as tcp6, udp, udp6):

user@host ~$ cat /proc/net/tcp6 | grep 2336308491
 129: 0000000000000000FFFF00004D29650A:9030 0000000000000000FFFF00005881754A:01BB 08 00000000:00000001 00:00000000 00000000   237        0 2336308491 1 ffff81061cf60740 371 40 0 4 -1

This is a connection to an SSL port on the remote end: 0x01BB = 443.
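The address and port fields in /proc/net/tcp6 are hex: the port reads directly, while the address is four 32-bit words in host byte order (little-endian on x86, as in the capture above). A small sketch of decoding one endpoint (hypothetical helper, assuming a little-endian machine):

```python
import socket
import struct

def parse_endpoint(hexpair: str):
    """Decode an ADDRESS:PORT field from /proc/net/tcp6,
    e.g. '0000000000000000FFFF00005881754A:01BB' -> ('74.117.129.88', 443)."""
    addr_hex, port_hex = hexpair.split(":")
    port = int(port_hex, 16)
    # each 32-bit word is printed in host (little-endian) order; flip to network order
    words = [addr_hex[i:i + 8] for i in range(0, 32, 8)]
    raw = b"".join(struct.pack("<I", int(w, 16)) for w in words)
    if raw[:12] == b"\x00" * 10 + b"\xff\xff":
        # IPv4-mapped IPv6 address: last 4 bytes are the IPv4 address
        return socket.inet_ntoa(raw[12:]), port
    return socket.inet_ntop(socket.AF_INET6, raw), port
```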

Tuesday, January 14, 2014

telnet relocation error: symbol krb5int_labeled_fopen

Ever had telnet not work on a machine? It happened to me recently on CentOS 5.8, with this error message:

telnet: error: relocation error: symbol krb5int_labeled_fopen, version krb5support_0_MIT not defined in file libkrb5support.so.0 with link time reference (fatal)

Googling hinted at a conflict in the shared library providing kerberos authentication, so I ran telnet under LD_DEBUG=all:

LD_DEBUG=all telnet host port

which showed me the problem:

29655:    symbol=krb5int_labeled_fopen;  lookup in file=/usr/local/greenplum-db/lib/libkrb5support.so.0 [0]
29655:    telnet: error: relocation error: symbol krb5int_labeled_fopen, version krb5support_0_MIT not defined in file libkrb5support.so.0 with link time reference (fatal)


So an installation of Greenplum had inserted its own version of the kerberos support library ahead of the system one in the search path the linux loader uses, and that version did not export the function in question.

All of this can be verified quickly with nm:

login@host ~$ nm -D /usr/local/greenplum-db/lib/libkrb5support.so.0 | grep krb5int_labeled_fopen

login@host ~$ nm -D /usr/lib64/libkrb5support.so.0 | grep krb5int_labeled_fopen
00000033aea040b0 T krb5int_labeled_fopen

The Greenplum installation was using LD_LIBRARY_PATH to give its libraries preferential status, so prepending /usr/lib64 was sufficient for telnet to find the right library.

login@host ~$ export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH

login@host ~$ telnet host port
Trying host...
Connected to host (ip).
Escape character is '^]'.


Friday, November 22, 2013

A case of Occam's razor

I wanted to write about a seemingly bizarre issue with a web page fetch that ultimately proved to be none other than another validation of Occam's razor: the simplest explanation is generally the right one.

So, to give some background, I'm involved in doing some statistical calculations over a large number of web pages and this has the side effect of highlighting web pages that deviate from the norm. So I end up going through many web pages that stand out from the pack at first glance.

The fetcher I use talks HTTP directly, and deals leniently with the web servers out there that don't always implement HTTP according to spec. On this particular occasion, one web site, http://hairtype.naturallycurly.com, responded to the fetcher with content that was nowhere close to what the browser retrieved.

Let me post here what the HTML looked like:


<html lang="en">
<head>
    
    
    <title>PHP Application - AWS Elastic Beanstalk</title>
    
    <link href="http://fonts.googleapis.com/css?family=Lobster+Two" rel="stylesheet" type="text/css"></link>
    <link href="https://awsmedia.s3.amazonaws.com/favicon.ico" rel="icon" type="image/ico"></link>
    <link href="https://awsmedia.s3.amazonaws.com/favicon.ico" rel="shortcut icon" type="image/ico"></link>
    <!--[if IE]><script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script><![endif]-->
    <link href="/styles.css" rel="stylesheet" type="text/css"></link>
</head>
<body>
    <section class="congratulations">
        <h1>
Congratulations!</h1>
Your AWS Elastic Beanstalk <em>PHP</em> application is now running on your own dedicated environment in the AWS&nbsp;Cloud<br />

        You are running PHP version 5.4.20<br />

    </section>

    <section class="instructions">
        <h2>
What's Next?</h2>
<ul>
<li><a href="http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/">AWS Elastic Beanstalk overview</a></li>
<li><a href="http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_PHP_eb.html">Deploying AWS Elastic Beanstalk Applications in PHP Using Eb and Git</a></li>
<li><a href="http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_PHP.rds.html">Using Amazon RDS with PHP</a>
<li><a href="http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html">Customizing the Software on EC2 Instances</a></li>
<li><a href="http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers-resources.html">Customizing Environment Resources</a></li>
</li>
</ul>
<h2>
AWS SDK for PHP</h2>
<ul>
<li><a href="http://aws.amazon.com/sdkforphp">AWS SDK for PHP home</a></li>
<li><a href="http://aws.amazon.com/php">PHP developer center</a></li>
<li><a href="https://github.com/aws/aws-sdk-php">AWS SDK for PHP on GitHub</a></li>
</ul>
</section>

    <!--[if lt IE 9]><script src="http://css3-mediaqueries-js.googlecode.com/svn/trunk/css3-mediaqueries.js"></script><![endif]-->
</body>
</html>

This is nowhere close to the HTML retrieved by the browser. You can try it. The web page is about hair products.

My experience is that sometimes, based on the HTTP headers and originating IP, some web servers can return different content. Sometimes, the server has identified an IP as a bot and decided to return an error response or an outright wrong page.

So I tested the IP theory by running the fetcher from a different network, with a different outgoing IP. This time, the correct page was retrieved. Then I used curl from the same network that had given me the incorrect page. To my surprise, curl retrieved the correct page, and it did so from both networks.

This was quite puzzling. I thought that perhaps the web server was doing some sophisticated fingerprinting: having identified the User-Agent and maybe other headers the fetcher was using, it had decided to send it a wrong page.

So using wireshark, I captured all the HTTP headers sent by the fetcher. Another team member then used curl, specifying these same headers.


curl -H 'User-Agent: rtw' -H 'Host: hairtype.naturallycurly.com' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: en-us,en;q=0.5'  -H 'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7' -H 'Keep-Alive: 115' -H 'Connection: keep-alive' -H 'Accept-Encoding: gzip,deflate' http://hairtype.naturallycurly.com


I was positive that curl would then fail. But of course it still returned the correct page. So my theory of sophisticated fingerprinting was wrong, or maybe it was even more sophisticated than I thought. I was stumped.

And then I realized that I had overlooked a crucial piece of data in this whole operation: the IP address the fetcher used to get the page. The first thing the fetcher does is resolve the host name, and since DNS queries can be expensive and we do lots of them, the IP is served from a memcached instance when available. An IP may stay cached for a number of hours. From the fetcher logs, I could see the IP it was using:

DNS resolved from cache hairtype.naturallycurly.com -> /54.243.101.48

But as dig showed, that was the incorrect IP :

>>$ dig hairtype.naturallycurly.com
; <<>> DiG 9.3.6-P1-RedHat-9.3.6-20.P1.el5 <<>> hairtype.naturallycurly.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28108
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 4, ADDITIONAL: 4

;; QUESTION SECTION:
;hairtype.naturallycurly.com.    IN    A

;; ANSWER SECTION:
hairtype.naturallycurly.com. 300 IN    CNAME    secure-nc-04-2015-1845606936.us-east-1.elb.amazonaws.com.
secure-nc-04-2015-1845606936.us-east-1.elb.amazonaws.com. 60 IN    A 23.23.197.30
secure-nc-04-2015-1845606936.us-east-1.elb.amazonaws.com. 60 IN    A 54.225.215.76

;; AUTHORITY SECTION:
us-east-1.elb.amazonaws.com. 1703 IN    NS    ns-1119.awsdns-11.org.
us-east-1.elb.amazonaws.com. 1703 IN    NS    ns-1793.awsdns-32.co.uk.
us-east-1.elb.amazonaws.com. 1703 IN    NS    ns-235.awsdns-29.com.
us-east-1.elb.amazonaws.com. 1703 IN    NS    ns-934.awsdns-52.net.

;; ADDITIONAL SECTION:
ns-235.awsdns-29.com.    92612    IN    A    205.251.192.235
ns-934.awsdns-52.net.    92612    IN    A    205.251.195.166
ns-1119.awsdns-11.org.    92612    IN    A    205.251.196.95
ns-1793.awsdns-32.co.uk. 92510    IN    A    205.251.199.1

;; Query time: 11 msec
;; SERVER: 10.101.51.60#53(10.101.51.60)
;; WHEN: Fri Nov 22 12:40:20 2013
;; MSG SIZE  rcvd: 345


All that remained was to validate this far simpler hypothesis. That was trivial: all I had to do was remove the domain->IP mapping from memcached.


>>$ telnet localhost 11211
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
get hairtype.naturallycurly.com
VALUE hairtype.naturallycurly.com 4096 4
6?e0
END
delete hairtype.naturallycurly.com
DELETED
get hairtype.naturallycurly.com
END
quit
Connection closed by foreign host.

This time, the fetcher logs showed that indeed, it was picking the correct IP. And of course it fetched the correct page with all the hair product details.


DNS resolved hairtype.naturallycurly.com -> /23.23.197.30


So once again, I was reminded of Occam's razor and how important it is to

1. Remember all the assumptions we make about how a certain software system works.
2. Validate all the assumptions, starting with the simplest first.
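In hindsight, the fix is to make the cache honor the TTL that DNS itself supplies rather than a fixed multi-hour expiry. A minimal sketch of the idea (hypothetical class, not our fetcher's actual memcached code):

```python
import time

class DnsCache:
    """Cache resolved IPs, but only for as long as the DNS record's TTL allows."""

    def __init__(self, resolve, clock=time.monotonic):
        self._resolve = resolve   # callable: host -> (ip, ttl_seconds)
        self._clock = clock
        self._entries = {}        # host -> (ip, expires_at)

    def lookup(self, host):
        now = self._clock()
        entry = self._entries.get(host)
        if entry and entry[1] > now:
            return entry[0]       # still fresh
        ip, ttl = self._resolve(host)
        self._entries[host] = (ip, now + ttl)
        return ip
```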

 Happy debugging the Net!

Wednesday, November 06, 2013

pretty print compressed JSON file

The highly useful json.tool stops short of parsing JSON that is not strictly standard, such as objects with unquoted keys. In particular, if you use the json-smart*.jar to produce your JSON files, you can be out of luck with json.tool. But you can use jsonlint like this to get a readable view of your JSON file:


[~] echo '{json:"obj"}' | python -mjson.tool
Expecting property name: line 1 column 2 (char 1)
[~] echo '{json:"obj"}' | jsonlint -p 2> /dev/null
{
  json: "obj"
}
[~]
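The underlying reason is that Python's json module accepts only strict JSON, where keys must be double-quoted. A quick sketch of the same difference in code (hypothetical helper, mirroring the shell session above):

```python
import json

def try_pretty(text: str) -> str:
    """Pretty-print strict JSON; report the parse error otherwise."""
    try:
        return json.dumps(json.loads(text), indent=2)
    except json.JSONDecodeError as e:
        return f"parse error: {e}"

print(try_pretty('{json:"obj"}'))     # unquoted key: rejected, just like json.tool
print(try_pretty('{"json":"obj"}'))   # strict JSON: pretty-printed
```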