Tuesday, October 27, 2009
Mac/SnowLeopard and Wireshark
Wireshark has to be tweaked a bit due to permission issues on the Berkeley Packet Filter devices. This has the info.
Since these settings are lost on reboot, a start up item can be created as shown here.
Also, if a module not found dialog box appears, follow these steps (taken from the long thread here):
You need to set the path to the folder that Wireshark looks in for MIBs &c, because the default in version 1.0.6 for Mac OS X is incorrect. In Wireshark, do the following:
• From the Edit menu, select Preferences...
• In the left pane of the Wireshark: Preferences window, click on Name Resolution
• For SMI (MIB and PIB) paths, click the Edit button
• In the SMI Paths window, click the New button
• In the SMI Paths: New window, in the name text box, type /usr/share/snmp/mibs/ and click OK
• Click OK
• Click OK
• From the File menu, select Quit
I found that it was also necessary to quit and restart X11 for the changed Wireshark preferences to take effect:
• From the X11 menu, select Quit X11
This gets rid of the MIB-loading errors for me. YMMV. By the way, Wireshark is supposed to still be usable, even if you can’t get rid of these errors.
Posted by thushara at 10:57 PM
Friday, October 23, 2009
many options of lsof
This is a handy reference to the many ways of using lsof.
Posted by thushara at 11:15 AM
Thursday, October 22, 2009
Mac and launchd - removing programs that refuse to be killed
Mac OS X uses the launchd daemon to start processes at boot time, much like the init.d scripts in Linux. Except launchd has a configuration file (with the extension .plist) for each such process, and these files live in a few different places. So if you want to stop an auto-launch, you will be hunting around, which I just did.
The official doc lists all the locations where a plist file can live.
In my case, I was using an educational prototype from UW called vanish. This spawns a couple of processes using launchd. So if I kill the processes, they come right back up. I found the plist files under ~/LaunchAgents and removing them did the trick.
Posted by thushara at 1:16 PM
Tuesday, October 20, 2009
Java ByteBuffer : how does this work?
The first time you encounter the ByteBuffer, you may run into some surprises. The function that flips most folks is in fact ByteBuffer.flip(). To understand flip() and not be flipped by it and other such idioms, we will look at what this class is and how it should be used.
Basically, a ByteBuffer allows us to accumulate data read repeatedly from some input, like a non-blocking socket. ByteBuffer keeps track of the position where the next byte will be written, so you don't need to. You can keep writing to the same ByteBuffer and rest assured that previous data will not be over-written.
This is pretty handy in asynchronous I/O (using the Java NIO package), as data from asynchronous sockets doesn't always arrive all at once. We need to map buffers to sockets and keep reading until there is no more data from the remote end.
So what about this flip()? Well, the way the ByteBuffer class was designed, both read and write operations start at position and stop at limit. So, if you followed that, after filling the buffer from a socket, position has advanced to the end of the data, and reading now will return nothing, as position is already at the end of the useful input. So flip() sets limit to the current position (the end of the useful input, to be exact) and position to 0 (the start), making the data just written available for reading.
So think about read and write operations on the ByteBuffer as manipulating data between position and limit, and you will see more clearly the need to flip once in a while.
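A minimal sketch of the write/flip/read cycle described above (not from the original post):

```java
import java.nio.ByteBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);

        // Writes advance position; limit stays at capacity (16).
        buf.put((byte) 'h').put((byte) 'i');
        System.out.println(buf.position()); // 2

        // flip(): limit = current position (2), position = 0.
        buf.flip();
        System.out.println(buf.position() + " " + buf.limit()); // 0 2

        // Reads now run from position up to limit.
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        System.out.println(new String(out)); // hi
    }
}
```

Without the flip(), the reads would start at position 2 and find nothing useful.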
Posted by thushara at 5:04 PM
Reading UTF-8 data from asynchronous sockets to the file system
Using asynchronous sockets, data is generally read into ByteBuffer objects. The general pattern is to read multiple times until there is no more data, and each time the ByteBuffer fills up, transfer its contents to a larger buffer, like a ByteArrayOutputStream.
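That pattern might look like the sketch below. The channel here is a stand-in assumption: for testability it uses a ReadableByteChannel over an in-memory stream rather than a real non-blocking SocketChannel, but the drain loop is the same shape.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class DrainChannel {
    // Read all currently available data from the channel into outStrm.
    static void drain(ReadableByteChannel channel, ByteArrayOutputStream outStrm)
            throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        int n;
        // read() returns -1 when the remote end closes (and 0 when a
        // non-blocking socket has nothing to deliver right now).
        while ((n = channel.read(buf)) > 0) {
            buf.flip(); // switch from filling the buffer to draining it
            outStrm.write(buf.array(), buf.position(), buf.remaining());
            buf.clear(); // make the whole buffer writable again
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a socket: a channel over an in-memory byte stream.
        ReadableByteChannel ch =
                Channels.newChannel(new ByteArrayInputStream("hello".getBytes("UTF-8")));
        ByteArrayOutputStream outStrm = new ByteArrayOutputStream();
        drain(ch, outStrm);
        System.out.println(outStrm.toString("UTF-8")); // hello
    }
}
```

With a real selector-driven socket you would call drain() each time the selector reports the channel readable, accumulating into the same ByteArrayOutputStream.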
Now if you want to manipulate the collected data (which is now in the ByteArrayOutputStream) as a String, it has to be decoded. This can be done using a CharsetDecoder object like this:
// read data to outStrm using nio
CharBuffer charBuffer = CharBuffer.allocate(outStrm.size());
byte[] ba = outStrm.toByteArray();
ByteBuffer byteBuffer = ByteBuffer.wrap(ba);
Charset charset = Charset.forName("UTF-8");
CharsetDecoder decoder = charset.newDecoder();
CoderResult res = decoder.decode(byteBuffer, charBuffer, true);
res = decoder.flush(charBuffer);
String out = charBuffer.flip().toString();
However, all this decoding does is translate the UTF-8 byte sequences into their respective code points. As a result, we can't write this data to a file (OutputStream) correctly.
If you were to write the out string to an output stream, the result is not guaranteed to be valid UTF-8. Of course it will work for the single-byte characters, but not necessarily for the multi-byte ones. For example, the byte sequence 0xC2 0xA0 is the UTF-8 encoding of the non-breaking space, code point U+00A0. The decoding above turns it into the single char 0x00A0, and if you now write that to an output stream, it will not be stored as UTF-8, as the decoding stripped the UTF-8 encoding and replaced it with code points.
So the correct approach is to simply write the raw bytes straight to the output stream, with no decode step.
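The code for this step appears to be missing from the post; here is a minimal sketch of the idea, assuming the raw socket bytes are in a ByteArrayOutputStream named outStrm and a hypothetical output file out.txt:

```java
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class WriteUtf8 {
    public static void main(String[] args) throws IOException {
        // Stand-in for the bytes accumulated from the socket:
        // the UTF-8 encoding of a non-breaking space (U+00A0).
        ByteArrayOutputStream outStrm = new ByteArrayOutputStream();
        outStrm.write(new byte[] {(byte) 0xC2, (byte) 0xA0});

        // Write the raw bytes straight to the file; no decoding happens,
        // so the UTF-8 encoding survives on disk.
        try (FileOutputStream fos = new FileOutputStream("out.txt")) {
            outStrm.writeTo(fos);
        }
    }
}
```

Decode to a String only when you need to inspect or manipulate the text in memory; for a byte-for-byte copy to disk, the bytes should pass through untouched.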
This presents the original UTF-8 bytes to the output stream, and thus the file will be saved as correct UTF-8 data.
Posted by thushara at 4:23 PM