[philiptellis] /bb|[^b]{2}/
Never stop Grokking


Friday, December 31, 2004

Nvidia GeForce MX 2 with linux 2.6.6+

I've been using a GeForce MX 2 for well over a year. It worked quite well with RH8, FC1 and Knoppix. I needed to use the proprietary drivers provided by Nvidia to get hardware acceleration though.

Motherboard: ASUS A7N266

A couple of months ago, I upgraded to FC2, and the nvidia driver wouldn't work anymore. I had to run back to Bangalore, and since no one at home really needed hardware acceleration, I switched back to the free nv driver from X (well, I was using x.org by then).

This December... well, yesterday actually, I decided to try out 3ddesktop, but of course, this requires hardware acceleration. So I started. Went through a lot to get it to work, and the details are boring. However, what I learnt could help others, so I'll document that.

The problem:

When starting X with the nvidia driver, the screen blanked out and the system froze. Pushing the reset button was the only thing that worked.

Solutions and Caveats:

Get the latest NVIDIA drivers and try.

At the time of writing, the latest drivers from the nvidia site are in the 1.0-6629 package. This doesn't work with the GeForce MX 2 and many other older chips, so if you try to use it, you'll spend too much time breaking your head for nothing. Instead, go for the 1.0-6111 driver, which does work well...

On kernels below 2.6.5 that is. FC2 ships with a modified 2.6.5 kernel that has a forced 4K stack and CONFIG_REGPARM turned on. The NVIDIA drivers are (or were) compiled with 8K stacks and do not work with CONFIG_REGPARM turned on. I'd faced similar problems when I first used the nvidia driver, and recompiling my kernel with 8K stacks fixed the problem.
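For reference, the two switches in question look like this in a 2.6-era i386 .config (a sketch; the option names are from vanilla 2.6 sources, and FC2's 4K stacks actually came in via a patch):

# what the FC2 kernel turns on:
CONFIG_4KSTACKS=y
CONFIG_REGPARM=y

# what the old drivers want instead:
# CONFIG_4KSTACKS is not set
# CONFIG_REGPARM is not set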

Searching the net, I came across dozens of articles that spoke about 4K stacks v/s 8K stacks in the 2.6 kernel, but also said that from 5xxx onwards, the nvidia driver supported 4K stacks and CONFIG_REGPARM.

I tried getting prebuilt kernels (smaller download) with 16K stacks, but it didn't help, so I finally decided to download the entire 32MB kernel source for 2.6.10.
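The build itself was the usual routine, something like this (a sketch, assuming the standard kernel.org tarball on an i386 box):

wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.10.tar.bz2
tar xjf linux-2.6.10.tar.bz2
cd linux-2.6.10
make menuconfig                # leave CONFIG_4KSTACKS off, under "Kernel hacking"
make bzImage modules
make modules_install install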

While compiling, I came across this thread on NV News (pretty much the best resource for nvidia issues on linux). In short, the 6111 driver wouldn't work with kernels above 2.6.5 or something like that. I needed to patch the kernel source.

The patch was tiny enough: in arch/i386/mm/init.c, add a single line:
EXPORT_SYMBOL(__VMALLOC_RESERVE);
after the __VMALLOC_RESERVE definition.
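If you'd rather not hand-edit kernel source, a sed one-liner can apply the change (a sketch, assuming GNU sed 4's in-place flag and that the definition line starts with "unsigned int __VMALLOC_RESERVE"):

cd linux-2.6.10
sed -i '/^unsigned int __VMALLOC_RESERVE/a EXPORT_SYMBOL(__VMALLOC_RESERVE);' arch/i386/mm/init.c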

Stopped compilation, made the change and restarted compilation.

Also had to rebuild the NVIDIA driver package, again as documented in that thread:

- extract the sources with the command: ./NVIDIA-Linux-x86-1.0-6111-pkg1.run --extract-only
- in the file "./NVIDIA-Linux-x86-1.0-6111-pkg1/usr/src/nv/nv.c", replace the 4 occurrences of 'pci_find_class' with 'pci_get_class'
- repack the nvidia installer with the following command:

sh ./NVIDIA-Linux-x86-1.0-6111-pkg1/usr/bin/makeself.sh --target-os Linux --target-arch x86 NVIDIA-Linux-x86-1.0-6111-pkg1 NVIDIA-Linux-x86-1.0-6111-pkg2.run "NVIDIA Accelerated Graphics Driver for Linux-x86 1.0-6111" ./nvidia-installer

The new installer is called "NVIDIA-Linux-x86-1.0-6111-pkg2.run"

With these changes, the driver compiled successfully and I was able to insert it.

I had a minor problem when rebooting. usbdevfs has become usbfs, so a change has to be made in /etc/rc.sysinit. Change all occurrences of "usbdevfs usbdevfs" to "usbfs none".
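That change is scriptable too (a sketch; keep a backup in case the pattern doesn't match your rc.sysinit):

cp /etc/rc.sysinit /etc/rc.sysinit.bak
sed -i 's/usbdevfs usbdevfs/usbfs none/g' /etc/rc.sysinit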

Once you've done this, X should start with acceleration on.

3ddesktop is slow, but it starts up. Tux Racer works well.

What I think is really cool about this solution, is that I did not have to make a single post to a single mailing list or forum. All the information I needed was already on the net. It was just a matter of reading it, understanding what it said, and following those instructions. For example, there were many threads on screen blanking with the 6629 driver, and somewhere in there was mentioned that the new driver didn't support older hardware, but that the 6111 did. That was the key that brought me the solution. I knew the 6111 didn't work out of the box, because I'd already tried it, but now I could concentrate on threads about the 6111 exclusively, only looking for anything that sounded familiar.

Saturday, November 13, 2004

/home is NTFS

A little over a year ago, at my previous company, I had to change my second harddisk on my PC. It was a bit of an experience, because the service engineer who came to do the job had never encountered linux before, but seemed to think that he could deal with it just like he did windows.

The engineer put in the new hard disk as a secondary master (my old one was a secondary slave to the CDD).

He then booted using a Win 95 diskette... hmm... what's this? Then he started some Norton disk copy utility. It's a DOS app that goes into a graphics mode... why?

Then started transferring data... hey, wait a minute, I don't have any NTFS partitions. Hit reset! Ok, cool down for a minute. I've got three ext3 partitions. So, now it's time to assess the damage.

Boot into linux - hmm, /dev/hdd1 (/home) won't mount, down to root shell. Get /home out of /etc/fstab and reboot. Ok, runlevel 3 again. Check other partitions - hdd5 (/usr) ... good, hdd6 (/usr/share) ... good... everything else is on hda... good. all my data is in /home ... not good

So, I start trying to figure out how to recover. google... no luck. google again... one proprietary app, and a couple of howtos on recovering deleted files from ext2 partitions... no good. google again, get some docs on the structure of ext2, and find a util called e2salvage which wouldn't build. time to start fooling around myself.

start by reading man pages. tune2fs, e2fsck, debugfs, mke2fs... so I know that mke2fs makes backups of the superblock, but where are they?

mke2fs -n /dev/hdd1... ok, that's where
dd if=/dev/hdd5 of=superblock bs=4096 count=2
hmm, so that's what a superblock looks like
dd if=/dev/hdd5 of=superblock2 bs=4096 count=2 skip=32768
hey, that's not a superblock. Ok, try various combinations, finally get this:
dd if=/dev/hdd5 of=superblock2 bs=1024 count=8 skip=131071
that's 32768*4-1
Ok, so that's where the second superblock is.
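As an aside, on a partition whose primary superblock is still readable, dumpe2fs will simply list where all the backups live (a sketch; no use on hdd1, of course, but a nice check of the arithmetic against hdd5):

dumpe2fs /dev/hdd5 | grep -i superblock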

Check hdd1 - second superblock blown away as well. Look for the third... 98304*4-1=393215.. ok, that's good. should I dd it to the first? Hmm, no, e2fsck can do that for me... but, I shouldn't work on the original partition. Luckily I have 30GB of free space to play with, and /home is just 6GB.

dd if=/dev/hdd1 of=/mnt/tmp/home.img
cp home.img bak.home.img


Now I start playing with home.img.

The instructions in e2salvage said to try e2fsck before trying e2salvage, so I try that.

e2fsck home.img
no use... can't find superblock
e2fsck -b 32768 -B 4096 home.img
works... starts the fsck, and gets rid of the journal. this is gonna take too long if I do it manually, so I quit, and restart with:
e2fsck -b 32768 -B 4096 -y home.img
The other option would have been to -p(reen) it, but that wouldn't give me any messages on stdout, so I stuck with -y(es to all questions).

2 passes later it says, ok, got whatever I could.

mount -oloop,ro home.img /mnt/home
yippeee, it mounted
cd /mnt/home; ls
lost+found

ok, so everything's in lost+found, and it will take me ages to sift through all this. Filenames might give me some clues.
find . -type f | less
Ok, scroll, scroll, scroll... hmm, this looks like my home directory... yes.
cp -a \#172401 /mnt/home/philip
scroll some more, find /usr/share/doc (which I keep in /home/doc and symlink from /usr/share/doc). move it back to /usr/share/doc. find jdk1.1.8 documentation... pretend I didn't see that.

find moodle home - yay. find yabb home - yay again. Ok, find a bit more that's worth saving, and copy it over. Many files in each of these directories are corrupted, including mailboxes, and some amount of test data, but haven't found anything serious missing.

All code was in CVS anyway, so rebuilt from there where I had to.

Now decided to try e2salvage anyway, on the second copy of hdd1. It wouldn't compile. Changed some code to get it to compile, it ran, found inodes, directories and the works, then segfaulted. The program tries to read from inode 2, which doesn't exist on my partition, and then it tries to printf that inode without checking the return value.

I'd have fixed that, but the result is used in further calculations, so I just left it at that. The old hard disk was taken away, so I don't have anything to play with anymore.

It'll take me a little while to figure out all that was lost, but so far it doesn't look like anything serious.

Saturday, September 25, 2004

Fallback Messaging

One of the things that drew me to Everybuddy, its USP really, was fallback messaging. I haven't seen any other client (other than eb's offspring -- ayttm and eb-lite) implement this feature, which is why I've never switched to another client.

So, what is fallback messaging?

Consider two friends who communicate via various network-oriented means (IM, Email, SMS, etc.); we'll call them bluesmoon and mannu (because they are two people who communicated this way for several years before meeting IRL). Now, said friends are extremely tech savvy, and have accounts on virtually every server that offers free accounts, and then some.

So, you've got them on MSN, Yahoo!, AOL, ICQ... um, ok, not ICQ because ICQ started sucking, Jabber, and that's just IM. They prolly have 3 or 4, maybe 5 accounts on each of these services, ok, maybe just one on AOL. Then they have email accounts. The standard POP3 accounts, 3 gmail accounts, a Yahoo! account for every yahoo ID, and likely no hotmail accounts (even though they have MSN passports) because we all know that hotmail is passé.

These guys also have lj accounts and one or two cellular phones on different service providers.

Ok, now that we have our protagonists well defined, let's set up the scene.

Act 1, scene 1

Mannu and bluesmoon are chatting over, umm, we'll pick MSN to start with. So mannu and bluesmoon are chatting over MSN, when all of a sudden <insert musical score for suspense here> a message pops up:

The MSN server is going down for maintenance, all conversations will now end

Seen it before, right?

Sweet. So MSN decides that we're not allowed to talk anymore.

What are our options? Oh well, Yahoo!'s still online, so switch to Yahoo!. It's much nicer too, because you can chat while invisible. MSN (officially) doesn't let you do that.

So, they switch to Yahoo!, but... what the heck were they chatting about when the server went down? Context lost. They need to start a new conversation, most likely centred around cursing MSN. What's worse is that the earlier conversation was being archived because they may have needed it as a reference later. The new conversation can also be archived, but it's a pain to merge all these different archives later.

Anyway, they plough ahead. The conversation veers back on topic... but now the main net connection goes down. The only things that work are websites and email. What do you do? What do you do? Ok, Dennis Hopper I am not, so let's forget I said that.

The easiest option would be for bluesmoon to send a mail to mannu saying, "Hey dude, my net connection went down, gotta end the convo here.", or he could send the same in an SMS. But to do that he's gotta start yet another program and type out stuff out of context again, or worse, type out an SMS that he has to pay for!

So, here's where fallback messaging comes in.

Act 1, Scene 1, Take 2

<jump back to the MSN chat>

Where were we? Oh yeah, the MSN server goes down. Now, what if the chat client we were using was smart enough to figure this out, and do something about it? What's that something, you say? Switch to the next available service. So, in this case, the chat program would automatically and seamlessly switch to using Yahoo!

There are several user-centric pluses here. The people chatting do not need to know that a server went down, let alone care about it and figure out what to do. Archives will be maintained across sessions. The context of the conversation will be preserved. Mannu and bluesmoon can go on chatting as if nothing happened.

If all the IM protocols go down, the chat client could switch to Email or SMS. Of course, mannu would have to tell it explicitly to use one of these, because the conversation will no longer be online. There's gonna be delays between sending the message and getting a response, so the chatters need to know about this.

So, how does your chat client know that you have buddies on multiple services, and about their email addresses and phone numbers?

Well, the chat client would have to group accounts on various services into a single contact. This kind of grouping also has other benefits.

Two people chatting with each other now don't have to think about user names and different services and what not. Mannu wants to chat with bluesmoon, he just selects bluesmoon from his buddy list. He doesn't have to care whether bluesmoon has an account on MSN, Yahoo!, AOL or whatever. Why should he care? So, I'd be chatting one to one with another person, without caring about what happens behind the scenes. Isn't that what makes for a good play?

Well, at some point mannu would have to care about services and user names, because he'd actually have to manually add and group all these accounts into one contact. Perhaps he could also set the order in which to fall back. That's all a one-time set up. For the continuous ease of use that follows, I'd say it's worth it.

Final questions...

Is this really possible? Yeah, sure it is. You can thank Torrey Searle for that. Torrey implemented everybuddy to do just this, and threw in loads of sanity checking - thanks for that dude. It's what drew me to use and then work on the project for so long.

So, is this really possible? Probably not until IM companies decide that the network is just a transport, and it's the value a user derives from using that transport that makes them choose one service over another. It's why we choose the Mumbai-Pune expressway over NH4 that runs through the ghats, even though there's a toll.

Update: I did a talk on fallback messaging at Linux Bangalore 2004.

Friday, August 27, 2004

Undelete in FreeBSD

A colleague of mine deleted a source file he'd been working on for over a week.

How do you undelete a file on a UFS partition? I'd done it before on ext2, I'd also recovered lost files from a damaged FAT32 partition (one that no OS would recognise), heck, I'd even recovered an ext3 file system that had been overwritten by NTFS. Why should undelete be tough?

Well, the tough part was that in all earlier instances, I had a spare partition to play on, _and_ I had root (login, not sudo) on the box, so could effectively boot in single user mode and mount the affected partition read-only. Couldn't do that here. I'd have to work on a live partition. sweet.

The first thing to do was power down the machine (using the power switch) to prevent any writes to the disk via cron or anything else. We then set about trying to figure out our strategy. A couple of websites had tools that could undelete files, but they'd have to be installed on the affected partition, so that was out of the question.

Now the machine has two partitions, one for / and one for /home. / is significantly smaller than /home, but has enough space to play with about 100MB at a time. Decided to give it a try, copying /home to /tmp 10MB at a time.

Command:
dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10

searched through the 10MB file for a unique string that should have been in the file. No match. Next 10 MB:
dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=10

This was obviously going to take all night, so we decided to script it. (The code is broken into multiple lines for readability; we actually had it all on one line.)
for i in 10 20 30 40 50 60 70 80 90; do
    dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=$i;
    grep unique-string deleted-file && echo $i
done


We'd note down the numbers that hit a positive and then go back and get those sections again. Painful.

Ran through without luck. Had to then go from 100 to 190, so scripted that too with an outer loop:

for j in 1 2 3 4 5 6 7 8 9; do
    for i in 00 10 20 30 40 50 60 70 80 90; do
        dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=$j$i;
        grep unique-string deleted-file && echo $j$i
    done
done

The observant reader will ask why we didn't just put in an increment like i=$[ $i+10 ]. Well, that runs too fast, and we wouldn't be able to break out easily; we'd have to do a Ctrl+C for every iteration to break out. This way the sheer pain of having to type in every number we wanted was enough to keep the limits low. That wasn't the reason, though. We did it because it would also be useful when we had to test only specific blocks that didn't have successive numbers.

IAC, the number of loops soon increased to 3, and the script further evolved to this:

for s in 1 2 3 4; do
    for j in 0 1 2 3 4 5 6 7 8 9; do
        for i in 00 10 20 30 40 50 60 70 80 90; do
            dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=$s$j$i &>/dev/null;
            grep unique-string deleted-file && echo $s$j$i
        done
    done
done

Pretty soon hit a problem when grep turned up an escape sequence that messed up the screen. Also decided that we may as well save all the positive-hit files instead of rebuilding them later, so... broke out of the loops, changed dd's output file to deleted-file-$s$j$i, and changed the grep line to this:

grep -q unique-string deleted-file-$s$j$i || rm deleted-file-$s$j$i

We were now happy enough with the script to leave it to itself. Might have even changed the iteration to an auto-increment, except there was no point changing it now, since what we had would work for the conceivable future (going to five-digit skips would be as easy as changing s to 10 11 12..., and we didn't expect to have to go much further than 12 because the partition didn't have that much used space).

We finally hit some major positives between 8700 and 8900. Then started the process of extracting the data. 10MB files are too big for editors, and contain mostly binary data that we could get rid of. There were also going to be a lot of false positives, because the unique (to the project) string also showed up in some config files that hadn't been deleted.

First ran this loop:

for i in deleted-file-*; do strings $i | less; done

and tried to manually search for the data. Gave up very soon and changed it to this:

for i in deleted-file-*; do echo $i; strings $i | grep unique-string; done

This showed us all lines where unique-string showed up so we could eliminate files that had no interesting content.

We were finally left with 3 files of 10MB each and the task of extracting the deleted file from here.

The first thing was to find out where in the file the code was. We first tried this:

less deleted-file-8650

search for the unique string and scroll up to the start of the text we wanted. Ctrl+G told us the position in the file that we were at (as a percent of the total). Then scroll to the end and again find the percent.

Now, we were reading 10 blocks of 1MB so using the percentage range, could narrow that down to 1 block.

Again got a percentage value within this 1MB file, and now swapped the block size and count a bit. So we went from 1 block of 1024k to 256 blocks of 4k each. Also had to change the offset from 8650 to 256 times that much. bc came in handy here.
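In dd terms, the rework looked something like this (a sketch; of=section is just an illustrative name, and 8650 stands for the 1MB offset we'd narrowed things down to):

dd if=/dev/ad0s1e of=section bs=1024k count=1 skip=8650
dd if=/dev/ad0s1e of=section bs=4k count=256 skip=$(echo '8650 * 256' | bc)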

I should probably mention that at this point we'd taken a break and headed out to Guzzler's Inn for a couple of pitchers and to watch the Olympics. 32/8 was a slightly hard math problem on our return. Yathin has a third-party account of that.

We finally narrowed the search down to two 2K sections and one 8K section, with about 100 bytes of binary content (all ASCII NULs) at the end of one of the 2K sections. That section was the end of the file. Used gvim to merge the pieces into one 12K C++ file, complete with copyright notice and all.

If you plan on doing this yourself, then change the three for loops to this:
i=10;
while [ $i -lt 9000 ]; do
    dd ...
    i=$[ $i+10 ];
done

Secondly, you could save a lot of time by using grep -ab right up front so you'd get an actual byte count of where to start looking, and just skip the rest. Some people have suggested doing the grep -ab right on the filesystem, but that could generate more data than we could store (40GB partition, and only 200MB of space to store it on).
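On the 10MB chunks, that goes something like this (a sketch; OFFSET stands for whatever byte offset grep prints):

# -a treats binary data as text, -b prefixes each match with its byte offset
grep -ab unique-string deleted-file-8650
# then jump straight there, rounding down to a 4k boundary
dd if=deleted-file-8650 bs=4k skip=$((OFFSET / 4096)) count=4 | strings | less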

Friday, July 23, 2004

A first post

What does one write in a first post? A new beginning? A fresh start? The first step? It has always been hard for me to start a new project. Be it an article, a large work of programming, or cleaning my room, it is hard to start. The first step is always the hardest. It is easy to proceed if the project is interesting enough.

The question that begs to be answered with this project is why? I already have a journal, so why create a separate blog? What do I hope to achieve? How will this be different? Do I plan on maintaining two publishing outlets?

My journal - Blue Swiss Cheese, or The blues side of the moon - is, and remains, a journal of what I've been up to. A collection of writings aimed at the community nature of livejournal. A space to jot down informal thoughts, birthday wishes, opinions and reports of meetings with interesting persons.

On the other side of the moon, I hope to publish articles of a more technical or documentary nature. I'd like to move my restaurant reviews here, my series on web standards, and my travelogues too. I'd like to categorise articles, although I'm still not sure how. This space will have articles rather than posts, and the language will, I hope, be more formal, or at the least less casual.

Well, so much for an editorial, if I may call this one. Let the writing begin.

Wednesday, January 28, 2004

The Elements of Style

Programmers have a strong tendency to underrate the importance of good style. Eternally optimistic, we all like to think that once we throw a piece of code together, however haphazardly, it will work properly the first time and ever after. Why waste time cleaning up something that is almost certain to be correct? Besides, it probably will be used for only a few weeks.

There are really two answers to the question. The first is suggested by the word "almost". A slap-dash piece of code that falls short of perfection can be a difficult creature to deal with. The self-discipline of writing it cleanly the first time increases your chances of getting it right and eases the task of fixing it if it is not. The programmer who leaps to the coding pad or the terminal and throws a first draft at the machine spends far more time redoing and debugging than does his or her more careful colleague.

The second point is that phrase "only a few weeks". Certainly we write code differently depending on the ultimate use we expect to make of it. But computer centres are full of programs that were written for a short-term use, then were pressed into years of service. Not only pressed, but sometimes hammered and twisted. It is often simpler to modify existing code, no matter how badly written, than to reinvent the wheel yet again for a new application. Big programs — operating systems, compilers, major applications — are never written to be used once and discarded. They change and evolve. Most professional programmers spend much of their time changing their own and other people's code. We will say it once more — clean code is easier to maintain.

One excuse for writing an unintelligible program is that it is a private matter. Only the original programmer will ever look at it, and surely he need not spell out everything when he has it all in his head. This can be a strong argument, particularly if you don't program professionally. But even if only you personally want to understand the message, if it is to be readable a year from now you must write a complete sentence.

You learn to write as if to someone else because next year you will be "someone else". Schools teach English composition, not how to write grocery lists. The latter is easy once the former is mastered. Yet when it comes to computer programming, many programmers seem to think that a mastery of "grocery list" writing is adequate preparation for composing large programs. This is not so.

The essence of what we are trying to convey is summed up in the elusive word "style". It is not a list of rules so much as an approach and an attitude. "Good programmers" are those who already have learnt a set of rules that ensures good style; many of them will read this book and see no reason to change. If you are still learning to be a "good programmer", however, then perhaps some of what we consider good style will have rubbed off in the reading.

— The Elements of Programming Style (Second Edition)
by Kernighan and Plauger

The above text is part of the Epilogue of The Elements of Programming Style
by Kernighan and Plauger.

See also: Elements of Programming Style fortune mod.

Friday, January 02, 2004

Endians

How would one transfer an int to a byte array?

This is code that I've seen:

memcpy(&byte_array[start], &value, sizeof(value));

It, however, doesn't consider the architecture's byte order, and will not work correctly on the "other" byte sex.
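Incidentally, you can watch byte order at work from the shell (a sketch using od; a little-endian machine prints 1, a big-endian one prints 16777216):

printf '\1\0\0\0' | od -An -td4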

This is my version:
size_t x;
for (x = 0; x < sizeof(value); x++, value >>= 8)
        byte_array[start + x] = value & 0xff;  /* least significant byte first */
It works because it deals only with values, and cares nothing about byte order. Yes, I am looking at the value as a sequence of bits, but I do not care how many bits there are; regardless of byte order, the bits are always arranged with the msb to the left and the lsb to the right.
