
Linux

Minimal pppd Setup For GPRS Dongles

So you have a mobile broadband dongle and you want to use it on Linux. One option is to just plug it in: under GNOME 3 it just works and can be set up instantly with NetworkManager. However, I require something more permanent: something that will start at boot, be less interactive and stay up. There is a lot of misinformation out there about how to do this, with various programs and scripts to copy and paste: lots of wvdial configurations and lots of poking about in /etc/ppp creating and modifying files etc.
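As a rough idea of what a minimal pppd-only setup can look like (a sketch, not necessarily what the post settles on), something like this is enough for many dongles; the device node /dev/ttyUSB0, the APN “internet” and the peers file name are all assumptions.

```sh
# Minimal sketch, run as root. /dev/ttyUSB0 and the APN "internet" are
# assumptions -- substitute whatever your dongle and provider use.
cat > /etc/ppp/peers/gprs <<'EOF'
/dev/ttyUSB0
115200
connect "/usr/sbin/chat -v -f /etc/chatscripts/gprs"
noauth
defaultroute
usepeerdns
persist
EOF

cat > /etc/chatscripts/gprs <<'EOF'
ABORT BUSY ABORT 'NO CARRIER' ABORT ERROR
'' AT
OK AT+CGDCONT=1,"IP","internet"
OK ATD*99#
CONNECT ''
EOF

# Bring the link up; with "persist" pppd redials if the connection drops.
pppd call gprs
```

Calling pppd call gprs from an init script (or the distribution's network configuration) covers the start-at-boot requirement without any interactive tools.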

vimrc for nagios

I use Nagios to monitor servers. It’s great. I just thought I would share a quick snippet that I put in my vimrc. It sorts the servers listed in host_name lines so they are in alphabetical order. Put the following in your vimrc, move your cursor over a host_name line, hit F2, and it will sort the server list.
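As a sketch of what such a mapping could look like (a hypothetical example, not the author’s exact snippet), the following sorts the comma-separated list on the current host_name line:

```sh
# Hypothetical example, not the snippet from the post: append a mapping to
# ~/.vimrc that sorts the comma-separated server list on a host_name line.
cat >> ~/.vimrc <<'EOF'
nnoremap <F2> :s/^\(\s*host_name\s\+\)\(.*\)$/\=submatch(1) . join(sort(split(submatch(2), ',\s*')), ',')/<CR>
EOF
```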

Spotify Traffic Analysis

A colleague asked me how much bandwidth Spotify uses. I basically had no idea. I want to run Spotify on my mobile at some point, so it got me thinking, and I decided to do some basic analysis. I ran the client behind an HTTP proxy for a day or so and ran a tcpdump at the same time with a filter to capture all the traffic to the proxy. The dump ran from 19/10/2010 11:46 to 20/10/2010 17:19 and produced a 362M capture file. For most of that time I was not playing any music; then for the last few hours I played music I had not played before.
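A capture along those lines could be taken with something like the following; the interface name and the proxy host and port are placeholders, not details from the post.

```sh
# Hypothetical capture command: grab full packets of all traffic between this
# host and the proxy (proxy.example.com:3128 is a placeholder).
tcpdump -i eth0 -s 0 -w spotify.pcap host proxy.example.com and tcp port 3128

# Wireshark's capinfos then summarises capture duration and byte counts,
# which is enough for a rough bandwidth figure.
capinfos spotify.pcap
```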

Major Filesystem Corruption

On Sunday (03/10/2010) my laptop had a bad crash. The screen went blank; I could still see the cursor and move it around, but could do nothing else. I could not even log in on a virtual console: after I typed the user name it came straight back with a login prompt again. I rebooted; GRUB loaded, Linux ran, the initramfs loaded and started, and then it moaned about not being able to mount the root filesystem and dropped me to a prompt.

ODBC 2 OpsCenter Access

This is a quick guide on how to configure an ODBC connection from Windows to a Symantec OpsCenter server. OpsCenter is a Symantec product that integrates various backup products, including NetBackup, to generate various reports and warnings about how the systems are performing. It used to be a purchasable option with NetBackup 6; since the release of NetBackup 7 a cut-down version is bundled in. It works quite well and can generate pretty reports in various forms and do various things to them. However, it’s not very customisable. OpsCenter actually uses the SQL Anywhere RDBMS as a back end for all the data, so if ODBC is configured anything can query the very same data and produce reports. I’m going to use Crystal Reports to generate many pretty pie charts.
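The post itself goes the Windows/Crystal Reports route; purely to illustrate the idea that anything speaking ODBC can query the same data, here is a hypothetical sanity check from a Linux box with unixODBC, assuming a DSN called “opscenter” has been set up against the SQL Anywhere driver and that the credentials are placeholders.

```sh
# Hypothetical check that the OpsCenter database answers over ODBC.
# "opscenter", "dba" and "password" are placeholders for a locally
# configured DSN and its credentials.
echo "SELECT 1" | isql -v opscenter dba password
```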

ODBC 2 OpsCenter

OpsCenter is a Symantec product that makes reports for various backup products, including Symantec NetBackup. I recently decided that I wanted to improve the reporting of our backups, so I looked into playing with OpsCenter. It turns out that it’s not actually that configurable, which is a shame as it’s mostly quite good.

SMS Notifications for Nagios

I use Nagios for monitoring. Until recently I used a regular modem to send SMS text messages to various people when systems go wrong. The way this works is by using smsclient, which dials up to a TAP server. [TAP](http://en.wikipedia.org/wiki/Telelocator_Alphanumeric_Protocol) is a fairly archaic way of sending messages. It’s been fairly reliable; however, it has two major drawbacks: sending takes a long time and it’s limited to 160 characters. As far as I can tell it will not do long text messages, which are really just multiple short messages combined together in a special way.
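For context, a modem/TAP notification is typically wired into Nagios as a small wrapper script used as the notification command. This is only a hedged sketch: the sms_client service name and argument layout below are assumptions, since they depend entirely on the local smsclient configuration.

```sh
#!/bin/sh
# Hypothetical notification wrapper for Nagios: send a message via the
# sms_client tool from the smsclient package over TAP. The "tap_provider"
# service name and the argument layout are assumptions -- check your
# smsclient configuration for the real ones.
NUMBER="$1"
MESSAGE="$2"
exec /usr/bin/sms_client "tap_provider:${NUMBER}" "${MESSAGE}"
```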

Weird Traceroute

I was looking at a development web site I am involved with and I was interested in where the site was in the big bad world, so I decided to traceroute to it [1]. What seemed very unusual was that the 5th hop reported an IP address in the 10.0.0.0/8 private address space. To quote Sam “10.what now?”. I’m still amazed that packets with private source addresses are routed across the Internet![2]

Extra IDE

ExtraIDE is a patch to the Linux kernel that enables more than the standard 20 IDE disk drives in a Linux system. Each IDE channel driven by the old-style drivers, be it PATA or SATA (i.e. not libata), takes up two drive slots and letters even if the physical card has only one SATA disk attached per channel. This severely limits the total number of IDE disks in one system. This patch adds four extra major device numbers and the necessary bits to extend past ide[0-9] and hd[a-t] to ide[0-9a-d] and hd[a-zA]. The real fix is to improve the libata drivers to include support for my old broken controllers, or to upgrade to new SATA controllers. In any case, I have actually run with some form of this patch for the best part of 3 years. It was never really worth submitting to the mainline, but I did post it to the LKML.

GPG Bench Cipher

I benchmarked a few of the gpg ciphers. I created a 1GB file from /dev/urandom with dd. Running “gpg --version” gives a list of available ciphers. I then ran “time cat test | gpg --symmetric --cipher-algo TWOFISH > test.enc” for each cipher to see how fast they all were. It’s not amazingly accurate, but it gave a good indication of which one to avoid! I ran this on my “Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz”; kernel 2.6.30 reports 4256 bogomips for it, for what it’s worth. Anyway, on with the results.
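Strung together, the benchmark boils down to a small loop like this; the cipher list is just a sample of what “gpg --version” reports, so adjust it to your build, and expect a passphrase prompt on each run.

```sh
# Create the 1GB test file, then time a symmetric encryption with each cipher.
dd if=/dev/urandom of=test bs=1M count=1024

for cipher in 3DES CAST5 BLOWFISH AES AES192 AES256 TWOFISH; do
    echo "== ${cipher} =="
    time cat test | gpg --symmetric --cipher-algo "${cipher}" > test.enc
done
```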

Big Disk

So today I played a bit with some big cheap disks. I have 3 fairly old desktops, each with 4*1TB disks, all exported via ATA over Ethernet to a much more modern sort of disk head. Basically it’s a “build a large store on half a shoestring” project. I’ve not quite got the network side of it sorted yet; currently the disk head’s gigabit card is being saturated. On a single disk node each disk can do about 80MB/s sustained read. If all 4 are read at the same time that drops to about 50MB/s each, which is 200MB/s in total and seems quite amazing for a Pentium 4 3.0GHz. It also seems a bit high, as the PCI bus can only do 133MB/s; I’m guessing that the onboard SATA ports are somehow separated from the extra PCI SATA card I added. Interestingly, one disk node can sustain 80MB/s read from disk to the disk head. Again this backs down to about 30MB/s per disk if all 4 are read, that’s 120MB/s total, so no surprise this is saturating the gigabit link. So the major bottleneck is the disk head. Currently the max RAID sync speed is 10MB/s per disk, i.e. 120MB/s total for 12 disks. Ideally 3 NICs in the disk head would be best, but then there is no way for the data to get to the disk head.
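The AoE plumbing for a setup like this looks roughly as follows; the shelf/slot numbers, interface and device names are assumptions rather than the author’s actual layout.

```sh
# On each disk node: export the four data disks over ATA over Ethernet with
# vblade (device names and shelf/slot numbers are assumptions).
vblade 0 0 eth0 /dev/sdb &
vblade 0 1 eth0 /dev/sdc &
vblade 0 2 eth0 /dev/sdd &
vblade 0 3 eth0 /dev/sde &

# On the disk head: load the aoe driver and pick up the exported devices,
# which appear as /dev/etherd/e0.0, /dev/etherd/e0.1 and so on, ready to be
# assembled into an md array.
modprobe aoe
aoe-discover
aoe-stat
```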

WD ShareSpace

(Update 01/09/2009: all of the below is no longer necessary, as the latest firmware has an option in the web interface to turn on remote root SSH shell access. YAY for Western Digital!)

Anchor Fix

AnchorFix is a Greasemonkey script that I wrote to fix anchor links. The script searches for links that have anchors and adds an anchor icon that links to that anchor. Use case: sending a URL to someone without having to scroll to the top to find an anchor link, or worse, reading the HTML source and hand-editing the URL to add the anchor. It also searches for links that point to an anchor on the current page and signifies this by adding an anchor icon after the link text. Use case: reading a page with a menu system at the top, where some links are off-site and some are anchors to the current page; after reading the whole page, the anchor icons show which links are worth clicking.

ICMP Redirect

Today I found out where Linux exposes the extra routing information gathered from ICMP redirects. ip route show cache will show the entire cached routing table. It’s a bit hard to read, so ip route show cache 1.2.3.4 is better. For example, 192.168.1.0/24 is a network that is connected via a host on my 192.168.0.0/24 network. My default gateway (192.168.0.1) has a static routing entry pointing at the host that gateways for the 192.168.1.0/24 network (192.168.0.57). So when a random host on the 192.168.0.0/24 network pings a host on the 192.168.1.0/24 network, it first sends the packet to 192.168.0.1, which replies with an ICMP redirect saying that in future it would be better to send directly to 192.168.0.57.
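A quick way to watch this happen, using the addresses above (192.168.1.10 is a made-up host behind the 192.168.0.57 gateway):

```sh
# Trigger the redirect, then inspect the cached route for that destination.
# 192.168.1.10 is a hypothetical host on the 192.168.1.0/24 network.
ping -c 3 192.168.1.10
ip route show cache 192.168.1.10
# The cached entry should now show 192.168.0.57 as the gateway rather than
# the default 192.168.0.1.
```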

Libata Errors

Over the years I have learned to read the old IDE subsystem errors from Linux and can generally get a feel for the sort of hardware error that’s coming. However, I have yet to get the same feel for libata errors; I’m just not used to reading them. A friend linked me to a page on the libata wiki.

Fedora needs legwork

I played with Fedora more today. There are loads of packages in the default repos these days; however, I still find myself missing things from Debian. For example, I can’t do a “yum search” while doing a “yum upgrade”. Why? I also miss the small helper scripts that save on legwork, for example update-grub: in Debian, when a new kernel gets installed, the initrd and the grub config get updated and it just works. With Fedora I have to manually update the grub config, and if I forget to create the initrd it means a trip to the 13 PCs I just upgraded!
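For comparison, the manual legwork on the Fedora side amounts to something like this on a grub2/dracut-era release; command names vary between Fedora versions, so treat these as illustrative rather than what the post’s 13 PCs actually needed:

```sh
# The steps update-grub rolls into one on Debian, done by hand on Fedora
# (grub2/dracut era commands; older releases used grub.conf and mkinitrd).
# KVER is the newly installed kernel version, e.g. from "rpm -q kernel".
KVER=the-new-kernel-version
dracut --force /boot/initramfs-${KVER}.img ${KVER}
grub2-mkconfig -o /boot/grub2/grub.cfg
```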

Linux MD

/proc/mdstat information: Linux has a software RAID subsystem and it is called md. It is generally quite well documented; however, the md status file in the proc pseudo-filesystem is not documented at all. So this is one of those cases where you have to read the source to understand what’s going on. I’ll jump right in; this is what my mdstat looks like:
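(Not the array from the post, just a generic illustration of the format: a clean two-disk RAID1 reports something along these lines.)

```
# Generic example, not the author's array: a clean two-disk RAID1.
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      488254464 blocks [2/2] [UU]

unused devices: <none>
```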