Wednesday, December 27, 2006

Suse on wireless

Just installed my Airlink wireless card on Suse. I used ndiswrapper and the WinXP drivers that came on the CD.
I could ping the wireless card and iwconfig showed the access point, etc., but I couldn't connect to the internet through wireless.
A post from otisthegbs here helped me get dhclient to bring up a connection through the wireless card. I reproduce it below:

the way SuSE uses its dhcp client is a real bitch. I bet that if you ps -aux grep ifdhcp you notice that you'll have multiple instances of it, and since those daemons are running they constantly screw up the default route. I had the exact same problem as you. What I did was downloaded dhclient, which is part of the dhcp package at ISC, and told those startup daemon to go to hell. Then you can just "sudo dhclient " whenever you need a IP

I had two dhcpd processes running. I killed both, ran dhclient wlan0, and after that I could connect to the internet through wlan0.
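Concretely, the recovery sequence looks like this. A sketch only: the grep is shown against canned ps output so it can run anywhere, and daemon names like dhcpcd vary by distro and release:

```shell
# Check for stray DHCP client daemons. On a live system you would pipe the
# real thing: ps aux | grep -i dhcp
ps_sample='root   812  0.0  dhcpcd eth0
root   955  0.0  dhcpcd wlan0'
count=$(echo "$ps_sample" | grep -c dhcpcd)
echo "$count stray DHCP clients"          # here: 2

# On the real box (needs root; adjust the daemon name to what ps shows):
# sudo killall dhcpcd        # stop the daemons fighting over the default route
# sudo dhclient wlan0        # let ISC dhclient grab a lease on the wireless card
```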

Saturday, November 25, 2006

Suse 10.1 - is this what Vista should look like?

After seeing the XGL magic in Suse 10.1, I really must try Vista.
I installed Suse 10.1 over the weekend because I had heard about the XGL magic. XGL was not installed by default, so I used yast2 to get the right packages installed.

I could then get gnome-xgl-settings to work, but it couldn't enable XGL because my graphics card (NVIDIA GeForce FX 5200) was not supported. This post suggested that I download the driver from nvidia.

The package I downloaded from nvidia actually built a kernel module, installed the driver, and prompted me to log out and back in, and voila, I had XGL.

[BTW, I couldn't run the installer while X was running, so I simply changed the default runlevel in /etc/inittab to 3, rebooted, and installed from the console.]
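The runlevel change is a one-line edit. A sketch of it, run here against a scratch copy rather than the real /etc/inittab (the id:5 line is the usual graphical-login default):

```shell
# Work on a scratch copy; the real target is /etc/inittab (edit as root).
printf 'id:5:initdefault:\n' > /tmp/inittab.sample   # runlevel 5 = graphical
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /tmp/inittab.sample
cat /tmp/inittab.sample                              # id:3:initdefault:
# After a reboot into runlevel 3, X is not running and the nvidia installer
# can build and load its kernel module from the console.
```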

Lots of fun using the 3D desktop!

Wednesday, November 15, 2006

debugging the linux kernel

I'm taking baby steps today, debugging the kernel.
This seems a very useful resource.

I tested the serial ports between the two machines using the instructions there, and it works great.

A tiny missing step that would be useful to a newbie like me unfamiliar with patching the kernel:

Under "Applying the kgdb patch", you need to first cd into the linux directory before executing the patch command. So between steps 1 and 2, there has to be a step:

cd ${BASE_DIR}/linux-2.6.7

Also the path to the patches is missing a sub-directory. The patches all unzip into ${BASE_DIR}/patch-kgdb/linux-2.6.7-kgdb-2.2/, so the patch commands in steps 2 to 7 should read this way:

patch -p1 < ${BASE_DIR}/patch-kgdb/linux-2.6.7-kgdb-2.2/core-lite.patch
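When in doubt whether a patch will apply cleanly, patch's --dry-run flag is handy. A toy demonstration (file names made up, not the actual kgdb patches):

```shell
# Same patch invocation, but verified with --dry-run before touching the tree.
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'hello\n' > file.c
cat > demo.patch <<'EOF'
--- a/file.c
+++ b/file.c
@@ -1 +1 @@
-hello
+hello kgdb
EOF
patch -p1 --dry-run < demo.patch   # reports success without modifying file.c
patch -p1 < demo.patch             # now apply for real
cat file.c                         # hello kgdb
```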

Snag 2:

I realized I needed the qt-devel package to do 'make xconfig', which is fully graphical. So I decided to do make menuconfig instead (which I've done before, and which uses a text-based menu). Then I ran into this compiler error. By following the simple patch (remove 'static' from the declaration of 'current_menu'), I could proceed with the build. Now I had the menu where I could make the kernel selections - progress!

Snag 3:

I realized that the 2.6 kernel was written for GCC 3 and does not compile with GCC 4 (which is what most newer machines have). I decided to be macho and 'fix' the kernel code to get the compiler to sing. Still going strong after one day...
This is a good article on the GCC 4.0 changes; they affect Linux 2.6.7 as well.

Snag 4:

I managed to fix all the compiler and linker errors (they were due to the stricter way GCC 4.0 treats the inline and static keywords). I transferred the image over to the test machine and changed grub, but it wouldn't boot - there was a problem mounting the file system.

Upon investigating, I found that Linux needs the initrd image to boot (the initrd image loads the drivers into RAM). But to build the initrd image, you need to install the modules as well: mkinitrd, which you use to make the initrd image, takes the kernel version and looks inside /lib/modules/ for the drivers. Since I didn't have the modules installed on the test machine for this version of the kernel, I just specified a valid kernel that was on the box to mkinitrd, and it successfully built me an initrd image. I moved it over to /boot, edited grub to note that, and rebooted. No joy! It was still failing to mount.

Over the weekend, I was chatting with a good friend who is now hacking away in the Linux kernel (he was previously an engineer on the Windows kernel), and he said that this is probably because Linux checks the driver versions. So I decided to install the modules on the dev machine, make a correct initrd image, and copy it over to the test machine. This time it actually booted!

Here is where you can read how to do a regular kernel compilation (for the modules install part).
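For reference, the module-install and initrd sequence sketched out. The release string 2.6.7-kgdb is a hypothetical name for the patched build, and mkinitrd options vary by distro, so the privileged steps are shown as comments:

```shell
# Where mkinitrd looks for drivers, given the kernel release string:
krel="2.6.7-kgdb"                  # hypothetical release name for this build
echo "/lib/modules/${krel}"        # /lib/modules/2.6.7-kgdb

# The actual sequence on the dev machine (root required):
# make modules                     # build the driver modules
# sudo make modules_install        # install them under /lib/modules/$krel
# sudo mkinitrd                    # rebuild the initrd against those modules
# Then copy bzImage and the initrd to /boot on the test machine and update grub.
```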

Snag 5:

SIGSEGV in the kernel! Well, I was happy. We had come this far, and here was my chance to look at some kernel code and see what was going wrong.

Here's the immediate output from gdb:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1]
0x00000000 in ?? ()
(gdb) bt
#0 0x00000000 in ?? ()
#1 0xc03051db in psmouse_interrupt (serio=0xc048cde0, data=250 '\uffff', flags=0,
regs=0x0) at drivers/input/mouse/psmouse-base.c:206

And here is line 206 from psmouse-base.c:

rc = psmouse->protocol_handler(psmouse, regs);

So it seems that somehow the protocol handler for the mouse is not set. This is a USB mouse, so perhaps I forgot to set an option in make menuconfig...

hacking inside the grub prompt

Yesterday, I was setting up Linux on a new PC and went ahead with a custom partition table using Disk Druid. I didn't set up the /boot partition, and as a result the grub configuration file (/etc/grub.conf) was incorrect. Grub then stopped at the grub prompt, giving me a chance to correct the problem temporarily.

I found this article very helpful in telling grub what I wanted, so that it proceeded with finding the Linux image and loading it.
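For the record, a session at the grub prompt typically looks something like this (device and file names here are hypothetical; yours come from grub's tab completion):

```
grub> root (hd0,0)                              # the partition that contains /boot
grub> kernel /boot/vmlinuz-2.6.x root=/dev/sda1
grub> initrd /boot/initrd-2.6.x
grub> boot
```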

Incidentally, this machine had a SATA drive, and ata_piix would complain and kernel panic at boot. I removed the SSC jumper (the spread spectrum clocking feature) and it booted fine after that.

sudo without password

You can execute root-level commands with sudo without typing passwords by editing the /etc/sudoers file to have a line like this (if your user name is alice):

alice   ALL=(ALL) NOPASSWD: ALL

The file is read-only, so you edit it with the following command (which opens the file in vi and validates the syntax on save):

visudo -f /etc/sudoers

Thursday, November 09, 2006

gcc offsetof macro warning

Nathan Sidwell had this really neat trick that gets us around the gcc compiler warning on the usage of the offsetof macro.

Rather than using the offsetof macro, use the one below:

#define myoff(T,m) ((size_t)(&((T *)1)->m) - 1)

The trick is to use the address '1' rather than '0' (which the traditional offsetof macro uses) and then subtract the 1 back out, thus pacifying gcc, which objects to dereferencing a null pointer.

This compiler warning has been hotly debated.

Wednesday, September 06, 2006

remove carriage returns from Windows files on Linux

Sometimes you have to work from Linux with a file created on Windows. The file has "\r" (0x0d) characters that can cause trouble for some Linux apps.

You can easily remove them using the "tr" command:

cat windows_file | tr -d "\r"

The command above writes the result to stdout; redirect it to a new file, which will then be free of "\r" characters.

What problems may happen
The "\r" character is generally interpreted by terminals as moving the cursor to the beginning of the current line, while "\n" moves the cursor to the next line. A text file created on Windows will generally have a "\r\n" sequence at the end of each line.

What would happen if you were to remove the last character from each line and try to display the result on a terminal? After each line, the cursor would return to the beginning of the line ("\r") but not advance to the next line. Thus each line would overwrite the previous one.

This can happen if you have a perl script that uses 'chomp' to remove the last character of each line.
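The steps above, run end to end:

```shell
# Make a DOS-style file, strip the CRs, then verify the bytes.
printf 'one\r\ntwo\r\n' > /tmp/dos.txt
tr -d '\r' < /tmp/dos.txt > /tmp/unix.txt
od -c /tmp/unix.txt    # only \n line endings remain
```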

Wednesday, May 03, 2006

cpan2rpm - perl modules and rpms

Today I discovered this amazingly useful utility. It helps build the perl module rpms that other perl module rpms list as dependencies.

If you install with rpm, the dependent rpms need to be listed in the rpm database on your machine. The number one source for perl modules is CPAN, but they don't necessarily release rpms for the modules. With cpan2rpm, you can get the required perl module from CPAN, build the rpm, and then install the module with rpm.

The really nice thing about cpan2rpm is that it can locate the module on CPAN automatically. For example, if you invoke it as cpan2rpm XML::Simple, it will locate the XML::Simple module on CPAN, copy it over, and build the rpm at /usr/src/redhat/RPMS/noarch/perl-XML-Simple-2.14-1.noarch.rpm.

Sometimes cpan2rpm won't be able to find the module on CPAN. Then you can locate it manually in the CPAN archives, download it, and point cpan2rpm at the local gzip file; it will go ahead and build the rpm:

cpan2rpm /home/downloads/HTML-Format-1.23.tar.gz

Rather long-winded compared to simply installing the module, but if a later part of the install uses rpm, the rpm database needs to know about the dependencies, so this is a necessary step.
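Putting the whole loop together. The package-name rule below is an assumption inferred from the path above, and the build and install steps need cpan2rpm and root, so they are shown as comments:

```shell
mod="XML::Simple"
# cpan2rpm appears to name the package perl-<Module-With-Dashes>; derive that:
pkg="perl-$(echo "$mod" | tr ':' '-' | tr -s '-')"
echo "$pkg"                                # perl-XML-Simple

# The actual build + install:
# cpan2rpm "$mod"
# sudo rpm -ivh /usr/src/redhat/RPMS/noarch/${pkg}-*.noarch.rpm
```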

Once again, it is a good thing that there's more than one way to do it!

Tuesday, May 02, 2006

Backing up DVDs - Elizabethtown

I used DVDShrink to create a backup of this movie. The movie played fine from the hard disk, but after copying it over to a DVD, I found that the DVD did not play past chapter 10.

I then used PgcEdit to open the movie files on the hard disk. It immediately found the problem - apparently there is a bug in DVDShrink that creates a slightly corrupted index. PgcEdit offered to correct the problem and change the DVD files accordingly.

After this step, I could burn a working DVD with Roxio.

I'm not sure if DVDShrink encountered some form of copy protection or if it was actually a bug in DVDShrink. Most recently released movies seem to come with different and newer DRM techniques.

Wednesday, April 26, 2006

Hotel Rwanda

This is a well-made movie about a man who saved 1,200 people from one of the bloodiest massacres in recent history. The movie is based on the true story of Paul Rusesabagina.

Watching this movie, I couldn't quite come to grips with the limits of human atrocities against humans, and with the inability of the better-off nations of the first world to act quickly to avert such horrors.

Human Rights Watch reports on the massacre. Reading it leads me to believe that the reason for the lack of intervention was economics. The cost of saving a million lives from terrible violence was too much to bear - for the wealthiest of nations.

There was also the fear of losing personnel. I quote:

"But instead of using the peacekeeping troops to stop the genocide, the U.N. sought primarily to protect its soldiers from harm. Dallaire was ordered to make avoiding risk to soldiers the priority, not saving the lives of Rwandans. To do so, he regrouped his troops, leaving exposed the Rwandans who had sought shelter in certain outposts under U.N. protection. In the most dramatic case—for which responsibility may belong to commanding officers in Belgium as much as to Dallaire—nearly one hundred Belgian peacekeepers abandoned some two thousand unarmed civilians, leaving them defenseless against attacks by militia and military. As the Belgians went out one gate, the assailants came in the other. More than a thousand Rwandans died there or in flight, trying to reach another U.N. post."

After exploiting Rwanda from 1916 through 1961, Belgium could not deal with the economics of saving a million Tutsi lives. These were the same people Belgium had favored during its regime - the people who, under Belgian command, resorted to extreme violence against the Hutus. Some claim that the later uprising of the Hutus against the Tutsis was largely due to the ethnic division enforced by Belgium's divide-and-rule policies.

What is still not clear to me is why the first world did not threaten the genocidal government with economic sanctions. This would not have required much capital. Perhaps they looked at the situation from an economic viewpoint and just said: "why bother, what is in it for me?" I quote:

"Discussion about the size, mandate, and strategy for a new peacekeeping force continued until May 17, in part because of U.S. rigidity in applying its new standards for approval of peacekeeping operations, in part because of hesitations sparked by RPF opposition to any intervention. Manoeuvering by nations supplying troops and those supplying equipment consumed another two months, so that the second peacekeeping force landed only after the RPF had defeated the genocidal government. The slowness and ineptness of national and international bureaucracies in mounting the operation was not unusual, nor was the attempt by participating nations to get the most or give the least possible. What was extraordinary was that such behavior continued to be acceptable in the context of genocide, by then openly acknowledged by national and international leaders."

Monday, February 13, 2006

Search your intranet with Nutch!

Our company maintains several wikis, one for each department. This is where the knowledge of how to set up servers, the milestones and deliverables of projects, new project ideas, etc. lives. There is (actually was) one problem - the wiki was not searchable. The search engine that came with the wiki was pretty much broken; a search would most times hang the browser.

So I was looking around and found this amazing open source project - Nutch - with which I built the infrastructure for a crawl of a few intranet sites. Within a couple of days, I had the system running, and the results were really good.

Then another developer hooked the search engine up to the wiki's search button.

Later on, I extended the crawl to other web-based information we have, like the bug database and the system that logs Perforce changelists. The results were quite good; now, from a single place, we can find a lot of information about an item scattered across many links.

I remember that back at Microsoft, someone was always suggesting how all the systems that maintained different datasets should somehow be unified, so that from one point we could gather all the data for one item. That, of course, is a huge dev/test effort requiring a major overhaul of a bunch of working systems. Enter search - a good one - and all this costly development work disappears.

I want to commend the people working on the Nutch project for giving us such a cool set of tools. Keep up the excellent work!