Linux System Administration

Linux Command Line, Generating a Random File


It is very easy to create a file full of random data using the Linux command line. It works much like the command to fill a file with zeros; for example, to create a roughly 1 MB file of zeros:

dd if=/dev/zero of=zero.filename bs=1024 count=1000

You do the same using /dev/urandom:

dd if=/dev/urandom of=random.filename bs=1024 count=1000

Resulting in a 1MB file:

1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB) copied, 0.0294247 s, 34.8 MB/s

This transfers random data from the virtual device /dev/urandom to the output file. We use /dev/urandom instead of /dev/random because /dev/random produces random data very slowly (it blocks while gathering entropy). /dev/urandom is much faster and still very random, if not quite as random as /dev/random. This should work on any system with dd and /dev/urandom.
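As a side note, on systems with GNU coreutils the same thing can be done in one step with head. This is a sketch assuming the GNU version, where the M suffix means 1048576 bytes (so the size differs slightly from the dd example above):

```shell
# Grab exactly 1 MiB of random data in a single step (GNU head)
head -c 1M /dev/urandom > random.filename
ls -l random.filename
```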

Tweaking TCP for Fast (100mbps+) Connections and Transfers on Linux

We recently did some speed testing on a few of the servers on our network, and we were not receiving the speeds expected considering they were sitting on physical 100 Mbit/s ethernet ports. The servers were indeed on physical 100 Mbit/s connections, yet wget (TCP/IP, HTTP port 80) download tests showed a max of only about 1.5 MB/sec (note the 8 bits/byte conversion: this translates to roughly 12 Mbit/s).

This is due to how TCP sizes its transmission window for a connection. I believe that by default, TCP on most systems assumes roughly a 10 Mbit/s maximum transfer rate, so it does not show performance gains on a larger pipe without modifying the kernel options that govern TCP window size and features. Some distributions may make this change for you automatically, but many will not.
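To see why the defaults fall short, the window needed to keep a link busy is the bandwidth-delay product: link speed times round-trip time. A quick sketch (the 20 ms RTT is an assumed example; measure yours with ping):

```shell
# 100 Mbit/s / 8 bits-per-byte * 0.020 s RTT = 250,000 bytes in flight
echo $((100 * 1000000 / 8 * 20 / 1000))
```

A legacy 64 KB window over a 20 ms path tops out around 3 MB/s no matter how fast the link is, which is in the neighborhood of the slow transfers we saw.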

To keep things short and sweet, we took the following advice on tweaking TCP parameters on Linux kernel systems. This covers Linux 2.1 and above – which means CentOS, RedHat, Ubuntu, Debian and many more distributions.

The TCP Parameters we will change are:

  • /proc/sys/net/core/rmem_max – Maximum TCP Receive Window
  • /proc/sys/net/core/wmem_max – Maximum TCP Send Window
  • /proc/sys/net/ipv4/tcp_timestamps – (RFC 1323) timestamps add 12 bytes to the TCP header, so disabling them trims a little per-packet overhead
  • /proc/sys/net/ipv4/tcp_sack – tcp selective acknowledgements.
  • /proc/sys/net/ipv4/tcp_window_scaling – support for large TCP Windows (RFC 1323). Needs to be set to 1 if the Max TCP Window is over 65535.

Recall that /proc/ is the volatile portion of the kernel configuration: you can change it on the fly, but it will be reset on reboot unless the settings are also applied via an init file or /etc/sysctl.conf. To change the settings once (to test):

echo 256960 > /proc/sys/net/core/rmem_default
echo 256960 > /proc/sys/net/core/rmem_max
echo 256960 > /proc/sys/net/core/wmem_default
echo 256960 > /proc/sys/net/core/wmem_max
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
echo 1 > /proc/sys/net/ipv4/tcp_sack
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling

And to apply them for good, add the following lines to /etc/sysctl.conf:

net.core.rmem_default = 256960
net.core.rmem_max = 256960
net.core.wmem_default = 256960
net.core.wmem_max = 256960
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1

Use ‘sysctl -p’ to apply the changes in this file to your running Linux instance. Feel free to experiment with these numbers to see how they impact your transfers; it depends a lot on how many files you are transferring and how large they are. We made these changes on the SERVER side. (Strictly speaking, the client’s receive window can also limit a download, but in our testing the server-side change was what mattered.)
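To confirm what the running kernel actually has in place (before or after tweaking), reading the /proc entries needs no root:

```shell
# Inspect the current values; reads are harmless, writes require root
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_window_scaling
```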

There are several other variables to consider, and these all depend on your application, so change them only if you understand what you are attempting to do. After changing these settings, we saw speeds of about 10 MB/sec (80 Mbit/s) on a 100 Mbit/s connection. Much of the remaining 20 Mbit/s is lost to TCP and other network-layer overhead, which is unavoidable.

A Poor Man’s VPN: Proxy Web Connection to Remote Server (via SSH and Tunnel)

Did you ever have a situation where you needed to access a website that had an IP restriction in place? I recently had a situation where I needed to access the web via my university connection (due to IP restrictions placed on accessing databases of research papers). They do not have a VPN setup so it is hard to do this off-campus.

I do however have access to a linux machine on campus. I am familiar with port forwarding using SSH but I had never used it to actually tunnel web traffic using a web browser on Windows. Turns out it is surprisingly easy!

The ssh command to use is:

ssh -C2qTnN -D 8080 username@remote_host

This command connects to remote_host over SSH and creates a tunnel on your localhost, port 8080. Note that you need to have private key authentication already set up for this host – it will not work with password authentication.

The description of the switches are (from the ssh man page):

  • -C : Compression
  • -2 : Use SSHv2
  • -q : quiet!
  • -T : Disable pseudo-tty allocation
  • -n : Prevents reading from stdin (you need to have private key authentication set up, to prevent password authentication)
  • -N : Do not execute a remote command (or launch a shell). Just use the ssh process for port forwarding
  • -D : Allocate a socket to listen on the local side. When a connection is made to this port, it is forwarded to the remote machine. This makes SSH work as a SOCKS server. Only root can forward privileged ports this way.

From here, set up Firefox or your browser of choice to use a SOCKS proxy on localhost:8080. The man page says SOCKS4 and SOCKS5 should both work, but I had to use SOCKS v4; SOCKS v5 did not seem to work for me.
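Before touching the browser, you can sanity-check the tunnel from another terminal with curl, which speaks SOCKS itself (the URL here is just an example target; 8080 matches the -D port above):

```shell
# Fetch a page through the SOCKS4 tunnel; if this works, the browser will too
curl --socks4 localhost:8080 http://example.com/
```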

How to Install SNMP on Tomato Router Firmware and Graph Traffic with Cacti

You’ve flashed your old WRT54G or other vanilla router with the Tomato firmware. This alone turns your router into a lean, mean routing machine with QOS, SSH and more, but let’s say we want to take it a bit further. What if we want to get some more stats out of it?

In order to do this, we first need to set up a way to pull this information from the router. The best way to do this is to install an SNMP (Simple Network Management Protocol) daemon on the system.

The main roadblock we face here is that the system mainly runs in volatile system memory, meaning that every time the system is rebooted the filesystem is reset. Fortunately Tomato provides a way to get around this using CIFS shares. Follow the steps below (as modified from here) to install an SNMP server on a Tomato router.

  1. Create a network (samba, CIFS) share somewhere on the network. This computer must be on all of the time in order for Tomato to run the SNMP server.
  2. Download the file from one of these locations:

    Expand the binary and .conf file into the share or a subdirectory (for example, <share name>/snmp)

    MD5 for snmpd binary is ae0d622648efdb8dceb7b3b5a63e23ac

  3. Set up the shared directory on the router. Visit Administration -> CIFS Client and add the share, filling in your correct share information.
  4. Log into the Tomato router via ssh, and start SNMPd on the router by issuing the command:
    /cifs1/snmp/snmpd -c /cifs1/snmp/snmpd.conf &
  5. Test that SNMP is running and can be accessed on another computer on the network. To test it, you can use snmpwalk like so:
    snmpwalk -c public -v 2c <IP Address of Router>

    If it works properly, it will list the available OIDs from the router. You do not need to take note of these, but they will be used in the graphing software later.

  6. Finally, we need to launch the SNMP server when the router is restarted. You do this by adding the command to start it in the area Administration -> Scripts -> Firewall:
    sleep 30
    /cifs1/snmp/snmpd -c /cifs1/snmp/snmpd.conf -s &

    This launches the snmp server 30 seconds after the router is started or rebooted.

That’s it! SNMP is now running on the router.
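One caveat worth noting: the Firewall script can fire again when the WAN connection bounces, so guarding against a second copy of the daemon is a reasonable tweak. A sketch, assuming the router’s busybox ps and grep behave as usual:

```shell
sleep 30
# The [s] trick stops grep from matching its own process-table entry
if ! ps | grep '[s]nmpd' >/dev/null; then
    /cifs1/snmp/snmpd -c /cifs1/snmp/snmpd.conf &
fi
```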

Now to add this SNMP host to your graphing software. For this example, I will use Cacti, which I will assume you have already set up. If you need to set it up, please follow the directions on the Cacti site for installation.

First, add the router as a new device, using the information below (change the IP to suit your needs):


After adding the device, you have several options depending on what sort of data you are looking for. For system information on the router – for example CPU usage, memory usage, etc; you can go directly to Create -> New Graphs. Select your device and then add the graph you are looking for.

The graph will show as a broken image at first, or a blank graph with “NaN” as the data source. Give it a few minutes to update, and the information should start to flow through. The ucd/net options work best, but feel free to experiment.

To get traffic stats on the interface, you first need to “Walk” the device.  Go back to your device list, and edit the device you added. Under “Associated Data Queries”, Add Data Query, add “SNMP – Interface Statistics” with Re-Index period as “Uptime goes backwards”. After adding it you should see under status something like: Success [39 Items, 6 Rows].

Since these data sources are now added, you can go back to Add a new Graph. After selecting the device, you should see a list of these new interfaces. Select the interfaces you wish to graph, and select the graph type (I suggest In/Out bits with Total).

After a few minutes, the data should start filling in, and after a while you will have a complete traffic graph.


In conclusion, with a little work you can get enterprise-class graphing from your consumer router. The total project took me about 45 minutes, and that included figuring out all of the data sources and the correct way to enter everything.

Let me know your experiences, suggestions and corrections!

Command Line Packet Sniff Existing Running Process in Linux

Have you ever come across a server that is doing a lot of traffic? Maybe you have logged in to see a process running at 100% CPU, so you know the culprit, but instead of kill -9ing it, wouldn’t it be great to see what exactly it is up to? Or even if you see a process and don’t know exactly what it is doing, and you are just curious what it is up to?

As with most issues, there are several ways to skin this cat. You can use tcpdump or wireshark to sniff all of the network traffic on the device. If you know the port the program is running on (you can use lsof for that), you can restrict the capture to that port. But what if the program is jumping ports, or even uses a side port for some sort of data transmission (UDP?).

The main problem with going down this route is that on a server doing any significant amount of traffic, it is like searching for a needle in a haystack. If a single process is taking up all of your bandwidth, you can probably find it pretty fast. But if the process is not doing a ton of traffic, it can be hard to track down.

Strace to the rescue

You can use the great program strace to sniff the network activity of a program you execute, or even of a currently running one. This works well because when you are trying to isolate the network traffic of a currently running process, your options are limited. Using strace is the only way I know of to see ALL of the traffic coming from a process.

To check the traffic of a currently running process X:

strace -p X -f -e trace=network -s 10000

The command breaks down:

  • -p: attach to the process with the given PID
  • -f: follow forks
  • -e: trace only a set of system calls. In our case we use trace=network, which follows network-related system calls.
  • -s: set the maximum output string size. The default is 32, which does not give a lot of information.
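Putting it together: look up the PID first with pgrep, then attach (‘myserver’ is a placeholder process name; substitute the process you are chasing):

```shell
# Grab the oldest matching PID and attach strace to its network syscalls
PID=$(pgrep -o myserver)
strace -p "$PID" -f -e trace=network -s 10000
```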

Finally, if you have a new program to execute and want to watch its network traffic, you execute the command under strace. This is good to use if you work in a highly secure environment and need to find out what sort of network traffic a distributed binary generates. Checking whether a program is ‘phoning home’ is a good example.

Here is the command that launches a new process:

strace -f -e trace=network -s 10000 /usr/bin/command arguments

Hopefully using strace in this manner will help you debug some issues on your server – I know I have used it several times.

Ubuntu Server in Place Network Upgrade From 8.10 to 9.04

Ubuntu Upgrade

It is easy to do an in-place upgrade of Ubuntu Server from 8.10 ‘Intrepid Ibex’ to 9.04 ‘Jaunty Jackalope’. You can do this remotely over ssh or whatever you use to control your server. Best practice says to back up your server before doing the upgrade. I’ve done several servers this way with no issues!

Issue the command:

sudo apt-get update; sudo apt-get upgrade; sudo apt-get install update-manager-core; sudo do-release-upgrade

Follow any prompts to first upgrade the current distribution with the newest packages, then do the release upgrade.
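To double-check which release you ended up on, run this before and after (lsb_release is not always installed, hence the fallback):

```shell
# Print the distribution release string
lsb_release -d 2>/dev/null || cat /etc/issue
```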

Installing And Compiling Zabbix Client / Agent


Zabbix is an excellent system monitoring package. It does everything from basic availability checking to detailed system resource analysis. It can graph the variables pulled from the system and alert admins if there is a problem or something needs attention.

Once you have the Zabbix server set up, you need to install the client on any systems you want to monitor. Windows systems have a precompiled binary to install. On Linux, Unix or FreeBSD systems you’ll need to compile binaries. If you have a range of homogeneous systems, you can copy the binary to them, or compile it with static dependencies. Below are the steps to compile, configure and install the Zabbix client:

Steps to install a Zabbix Client

  1. Download the Zabbix source code; decompress it with ‘tar zxvf’ and cd into the directory
  2. Configure the make program: ./configure --enable-agent
  3. Compile and install the program: make install
  4. Add zabbix group and user: groupadd zabbix; adduser -g zabbix -s /sbin/nologin -M -p RANDOMPASS zabbix
  5. Create log file: touch /var/log/zabbix_agentd.log; chown zabbix.zabbix /var/log/zabbix_agentd.log
  6. Copy init script to /etc/init.d. Scripts are located in ./misc/init.d/ and your distro directory.
  7. Make sure bin directory in init script is where Zabbix actually compiled to.
  8. chmod 755 /etc/init.d/zabbix_agentd
  9. chkconfig zabbix_agentd on
  10. Copy agent config script to /etc/zabbix/zabbix_agentd.conf. Current one is:
    # This is config file for zabbix_agentd
    # To get more information about ZABBIX, go
    # This is the ip and port of the main zabbix server
    # ListenIP=
    # LogFile=/var/log/zabbix_agentd.log
  11. Start the zabbix service: service zabbix_agentd start
  12. Open firewall for zabbix port (10050) if necessary.
  13. Log into Zabbix on the server, Add server to hosts – use correct templates and groups depending on what type of server it is.
  14. Add monitoring and notification as appropriate.
  15. Consider if all necessary services are being monitored; test that detection of down services and notifications work properly.
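For reference, a minimal zabbix_agentd.conf might look like the following sketch; the IP, hostname and paths are placeholders for your environment, not values from the install above:

```
# /etc/zabbix/zabbix_agentd.conf -- minimal sketch; all values are placeholders
# IP of the main Zabbix server (only this host may query the agent)
Server=192.168.1.10
# Must match the host name configured for this machine in the Zabbix web UI
Hostname=myserver
# Default agent port (matches the firewall rule in step 12)
ListenPort=10050
LogFile=/var/log/zabbix_agentd.log
```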