System Administration

Adding Random Quotes to the Bash Login Screen

According to “official” system administrator rules and guidelines, you shouldn’t add so-called vanity scripts to the login prompt – only utilities that add something useful to the system (for example, current system load, memory and disk usage, etc). However, I have some systems that I frequently connect to and thought it would be neat to add a random quote script to my bash login. That said, this should only be done on ‘non-production’ systems, and it adds an attack vector, so please be careful where you use it.

The goal of this is to print a little quote, at random, every time you log into your system. My thought was to do it not only as a little source of inspiration but also to add some perspective to what I’m doing sitting in front of the computer all of the time.

Originally I was going to write the script solely in bash, since it is so flexible (and just as a proof of concept), but dealing with RSS in bash isn’t exactly pretty and I just wanted to get this together as quickly as possible. PHP makes parsing XML easy, and there are a number of ways to accomplish it. I chose to use the ready-made rssphp script to do this; if you are curious about how you can handle this yourself using SimpleXML, check out the tutorial over at Pixel2Life. The end result of my solution is a bash script calling a PHP script to grab the quote.
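For the curious, here is roughly what the bash-only route looks like – a sketch that scrapes the expected feed shape with sed, run against a saved sample file (the sample path and contents are stand-ins of my own, since the real feed URL isn’t reproduced here). It works, but it is exactly the kind of brittle that made me reach for PHP:

```shell
# Save a sample of the RSS shape we expect: title = author, description = quote
cat > /tmp/sample-feed.xml <<'EOF'
<rss><channel><item>
<title>Henry Kissinger</title>
<description>The absence of alternatives clears the mind marvelously.</description>
</item></channel></rss>
EOF

# Scrape the first title/description pair out with sed -- this breaks the
# moment the feed changes its formatting, which is why PHP's XML parsing wins
title=$(sed -n 's:.*<title>\(.*\)</title>.*:\1:p' /tmp/sample-feed.xml | head -n1)
desc=$(sed -n 's:.*<description>\(.*\)</description>.*:\1:p' /tmp/sample-feed.xml | head -n1)
echo "\"$desc\" :: $title"
```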

The Code

First create a file named /etc/update-motd.d/10-quote. The name does not matter much – the number prefix determines the order in which the scripts in /etc/update-motd.d are run. Do an ls on that directory to see everything that gets called when you log in. Add the following lines to this file (and make it executable with chmod +x), assuming you are placing your scripts in /etc/scripts/:

#!/bin/sh
echo ""
/usr/bin/php /etc/scripts/getquote.php
echo ""

Download v1 of rssphp and extract it to the /etc/scripts/ directory. We will require() that file in our PHP code.

Create the file /etc/scripts/getquote.php and add the following (the require path below assumes the rssphp class file ended up at /etc/scripts/rss_php.php – adjust it to match what you extracted):

<?php
require '/etc/scripts/rss_php.php'; // the rssphp class downloaded above
$rss = new rss_php;
$rss->load('http://...'); // your quote feed URL goes here
$rssitems = $rss->getItems();

if ($rssitems) {
// print_r($rssitems); // uncomment to inspect the feed structure
echo $rssitems[0]['description'].' :: '.$rssitems[0]['title']."\n";
}


I am using the RSS feed from QuoteDB as the source of my quotes. Of all the places I checked (and I checked a lot), they seemed to have the most appropriate ones for this use. Feel free to use any source you wish – as long as the XML fields title/description hold the quote, you will be able to use it. The RSS URL was not obvious from the site and took some digging to find.

We also add the if statement to let the script degrade nicely in case the server has no network connectivity. After a short period – a second or two – the fetch will time out and let you log in.
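If the fetch can hang rather than fail fast, you can also put a hard cap on it from the shell side with coreutils timeout – a sketch, with a stand-in echo in place of the real PHP call and a 2-second budget that is my own choice, not part of the original setup:

```shell
# Give the quote fetch at most 2 seconds, then carry on with the login
timeout 2 echo '"Example quote." :: Example Author'

# If the command hangs, timeout kills it and exits non-zero (status 124),
# so login proceeds instead of blocking
timeout 1 sleep 5 || echo "quote fetch timed out, continuing login"
```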

The end result is a pretty quote in our motd:

Linux vps01.[redacted].com 2.6.18-2-pve #1 SMP Mon Feb 1 10:45:26 CET 2010 x86_64 GNU/Linux
Ubuntu 10.04.1 LTS

"The absence of alternatives clears the mind marvelously." :: Henry Kissinger


It should be pretty straightforward; let me know if you run into any problems!

Firesheep Should Be A Call To Arms For System, Network & Web Admins

Firesheep by Eric Butler has just been released to the world. This Firefox plugin does a few things that have already been fairly easy to do for a while, but rolled up in one easy to use package:

  1. Sniffs data on unencrypted Wireless Networks
  2. Looks for unencrypted login cookies sent to known popular insecure sites
  3. Allows you to log in to that account with ‘One Click’

So what sites are impacted by default? Basecamp, Cisco, CNET, Dropbox, Enom, Evernote, Facebook, Flickr, Github, Google, HackerNews, Harvest, Windows Live, NY Times, Pivotal Tracker, Slicehost, tumblr, Twitter, WordPress, Yahoo, and Yelp are among them. A plugin system allows anyone to add their own sites (and cookie styles) to the plugin.

Yikes! It goes without saying that this is a major security problem for anyone who uses unencrypted wireless networks. Included on this list are many universities and companies such as Starbucks.

It is a bit funny, because just last night I was talking with my friend Jorge Sierra about this very problem. My university is in fact one of those that uses unencrypted wifi. I installed the unencrypted password extension for Chrome to warn me whenever I am submitting an unencrypted password to a site. I was surprised how often that little box popped up!

Why Open WiFi?

I am not sure – my undergrad university requires that any traffic going over wifi pass through their VPN, which encrypts the traffic and prevents this program from working. Is open wifi still the ‘poison of choice’ for network admins because setting up a VPN-style system is too much for some organizations? Maybe – but it is clearly the wrong answer.

The other clear reason is that open wifi is easier to use, and this is a valid complaint from a user-experience perspective. I’ve seen plenty of folks have a hard time even with a simple WPA password, and a shared password makes it even harder for a user to sign on. Hotels and coffee houses across the world opt for open wifi because it is simply the easiest for consumers to use. This is a problem we tech people need to solve.

Even when a network is encrypted via WEP or WPA (version 1), these are very insecure protocols that can still be cracked with relative ease. This plugin could in fact be modified to include the cracking as well and cover an even wider range of wireless networks. This brings me to my second point.

Web Developers Must Encrypt All Login Forms

If you run ANY consumer facing app you should be passing any and all login information via an SSL secured website.

For hosts on a static IP address you simply need to purchase an SSL certificate. They are seriously under $20 these days (my cost as a reseller is $12) and are simple to install. Your code should be set up to always use this site and to never allow username and password to be sent unencrypted over the network. This is important not only at the end user’s connection (possibly over open wifi) but also for end-to-end encryption of this data.

Let’s say you are running a site on a shared IP address. You usually still have options: most hosts I know of offer an SSL connection via a shared hostname, which can be used to access your site’s information over SSL and is normally included with the service.

Ideally every site would have an SSL certificate, but a few things need to happen first. People who buy web hosting are almost always looking for the cheapest deal, and they will not be getting SSL at those bottom-tier prices. Hosting needs a paradigm shift: people who run websites should understand that it is worth paying for servers configured and run by people who know security, and that $10-a-year hosting isn’t sustainable. Some say there is a significant overhead to running SSL on websites. It will, in fact, add some processing and bandwidth overhead, but that is the cost of providing a secure service to end users.

In my opinion, you either host your website with a large provider that is set up with a secure infrastructure, or you pay more for an expert to host your website in a secure manner.

Another roadblock is the exhaustion of the free pool of IPv4 address blocks. Web hosts need to move to IPv6 to free up IP addresses so that every website can sit on its own address, which would make SSL certificate installation much easier.

Back to Reality

What can you do, right now, about this problem? If you have to use an unencrypted wireless network, you should run some sort of VPN to encrypt your traffic over the air, since that is the most likely place it would be sniffed. You can get a cheap VPS for under $10 a month and proxy all of your traffic over SSH. It is not the fastest method, but it will secure your data.
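The SSH-proxy option is a one-liner with OpenSSH’s dynamic port forwarding. A sketch – the user and hostname are placeholders, and the -G flag shown makes ssh print its resolved options instead of connecting, which is handy for checking the command before using it for real:

```shell
# Open a local SOCKS5 proxy on port 1080, tunnelled through your VPS:
#   -D 1080 : dynamic (SOCKS) forwarding on localhost:1080
#   -N      : no remote command, just the tunnel
#   -C      : compress traffic
# "user@your-vps.example.com" is a placeholder for your own VPS login.
# With -G, ssh only prints the client options it would use:
ssh -G -C -N -D 1080 user@your-vps.example.com | grep -i forward
```

Drop -G to actually open the tunnel, then point your browser’s SOCKS proxy at localhost:1080.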

You can also install the Chrome Extension to warn you if you are about to submit form information via an unencrypted website. It isn’t the prettiest extension but it does get the job done.

Hopefully network, web and system administrators will get their acts together and push for a solution to this problem. It is a big one, and one that isn’t apparent to the end user until their data, financial details and/or identity are stolen. We can fix this.

/via TechMeme

Fixing ip_conntrack Bottlenecks: The Tale Of The DNS Server With Many Tiny Connections

Server management is a funny thing. No matter how long you have been doing it, new interesting and unique challenges continue to pop up keeping you on your toes. This is a story about one of those challenges.

I manage a server which has a sole purpose: serving DNS requests. We use PowerDNS, which has been great. It is a DNS server whose backend is SQL, making administration of large numbers of records very easy. It is also fast, easy to use, open source and did I mention it is free?

The server has been humming along for years now. The traffic graphs don’t show a lot of data moving through it because it only serves DNS requests (plus MySQL replication) in the form of tiny UDP packets.

We started seeing these spikes in traffic but everything on the server seemed to be working properly. Test connections with dig proved that the server was accurately responding to requests, but external tests showed the server going up and down.

The First Clue

I started going through logs to see if we were being DoSed or if it was some sort of configuration problem. Everything seemed to be running properly and the requests, while voluminous, seemed to be legit. Within the flood of messages I spied error messages such as this:

printk: 2758 messages suppressed.
ip_conntrack: table full, dropping packet.

Ah ha! A clue! Let’s check the current numbers for ip_conntrack, the kernel’s connection-tracking table, which the netfilter firewall uses to keep tabs on packets heading into the system.

[root@ns1 log]# head /proc/slabinfo
slabinfo - version: 2.0
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
ip_conntrack_expect 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
ip_conntrack 34543 34576 384 10 1 : tunables 54 27 8 : slabdata 1612 1612 108
fib6_nodes 5 119 32 119 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 4 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ndisc_cache 1 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
rawv6_sock 4 11 704 11 2 : tunables 54 27 8 : slabdata 1 1 0
udpv6_sock 0 0 704 11 2 : tunables 54 27 8 : slabdata 0 0 0
tcpv6_sock 8 12 1216 3 1 : tunables 24 12 8 : slabdata 4 4 0

Continuing this line of logic, let’s check the current value for this setting:

[root@ns1 log]# sysctl net.ipv4.netfilter.ip_conntrack_max
net.ipv4.netfilter.ip_conntrack_max = 34576

So it looks like we are hitting up against this limit. Once the number of tracked connections reaches this number, the kernel simply drops further packets. It does this so that it will not overload and freeze up from too many packets coming into it at once.

This system is running CentOS 4.8; newer versions of RHEL 5 have the default set at 65536. By convention we keep this number at a power of 2. The upper bound depends on your memory – each tracked connection consumes some – so be careful, as setting it too high may cause you to run out.
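Since the value is conventionally a power of 2, a quick helper for rounding your observed connection count up to the next power of 2 can help pick a new ceiling – plain shell, where the 34576 figure is the table size from the slabinfo output above:

```shell
# Round a connection count up to the next power of two -- useful for
# choosing a new ip_conntrack_max value
next_pow2() {
  n=$1
  p=1
  while [ "$p" -lt "$n" ]; do
    p=$((p * 2))
  done
  echo "$p"
}

next_pow2 34576   # -> 65536, the default newer kernels ship with
```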

Fixing The ip_conntrack Bottleneck

In my case I decided to go up two steps to 131072. To set it temporarily, use sysctl:

[root@ns1 log]# sysctl -w net.ipv4.netfilter.ip_conntrack_max=131072
net.ipv4.netfilter.ip_conntrack_max = 131072

Test everything out. If you have problems with your network or the system crashing, a reboot will set the value back to the default. To make the setting permanent across reboots, add the following lines to /etc/sysctl.conf (then run sysctl -p to apply them):


# need to increase this due to volume of connections to the server
net.ipv4.netfilter.ip_conntrack_max = 131072
My theory is that since the server was dropping packets, remote hosts were re-sending their DNS requests, causing a ‘flood’ of traffic to the server – the spikes you see in the traffic graph above whenever traffic was mildly elevated. The bandwidth spikes were amplification caused by the resent requests. After increasing ip_conntrack_max I immediately saw the bandwidth resume to normal levels.

Your server should now stand up to an onslaught of tiny packets, legitimate or not. If you have even more connections than ip_conntrack can safely track, you may need to move to the next level: hardware firewalls and other methods of packet inspection off-server on dedicated hardware.


The image of the kittens used for the featured image has nothing to do with this post. There are no known good photos of a “UDP Packet”, and I thought that everyone likes kittens, so there it is. Credit flickr user mathias-erhart.

How to Stop an Apache DDoS Attack with mod_evasive

The first inkling that I had a problem with a DDoS (Distributed Denial of Service) attack was a note sent to my inbox:

lfd on High 5 minute load average alert – 89.14

Apache DDoS

My initial thought was that a site on my server was getting Slashdotted or encountering the Digg or Reddit effect. I run Chartbeat on several sites where this occasionally happens and I will usually get an alert from them first. A quick look at the Extended status page from Apache showed that I had a much different kind of problem.

If a site is getting a lot of “good” natural traffic you will see a few things:

  • clients will be requesting all kinds of files from your site as a normal web browser would, and
  • the referring agent will show the link of the sending site, which you can verify.

In my case I had about 400 or so IP addresses requesting “/” from a little-trafficked site of mine. Fortunately my Apache is well-tuned enough that it did not take the server down, which was crucial for diagnosing this problem. Otherwise the entire server may go down, and your only option is to reboot and stop Apache from starting on boot. My Apache logs showed the requests:

- - [21/Oct/2010:00:10:06 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30618)"
- - [21/Oct/2010:00:10:06 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv: Gecko/20081201 Firefox/"
- - [21/Oct/2010:00:10:06 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30618)"
- - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.03 [en]"
- - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/ Safari/525.19"
- - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv: Gecko/20081201 Firefox/"
- - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/ Safari/525.19"
- - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.03 [en]"

I found out this is from a type of DoS called a Slowloris attack. In this attack, an HTTP request is made to the server and the connection is ‘held open’ to make multiple requests. A compromised botnet is used to hammer the server from all over the world on many different connections and IP addresses. From the standpoint of Apache these are legitimate connections, despite their frequency – it simulates someone sitting at a browser and hitting the refresh command a few times a second. While not a devastating attack with regards to bandwidth, it ties up your server and causes legitimate connections to be rejected.

Fortunately, after some asking around, I found the mod_evasive Apache module. It is a very basic module with a simple function: it keeps a hash table of IPs and the pages they request, and when a threshold is reached for a page or site, it “blocks” the IP with a 403 “Forbidden” error.

Installing the module is easy:

  1. Download the module onto your server:

    # wget

  2. Run the Apache apxs command on the module, which will compile and install it and add it to your httpd.conf (for Apache 2.0 – you are running 2.0, aren’t you?):

    # apxs -cia mod_evasive20.c

  3. Set up the configuration file. You can enter the following into your httpd.conf main server configuration (for Apache 2.0 again)

    DOSHashTableSize 3097
    DOSPageCount 3
    DOSSiteCount 100
    DOSPageInterval 3
    DOSSiteInterval 5
    DOSBlockingPeriod 300
    DOSLogDir "/var/log/httpd/modevasive/"
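To make those thresholds concrete: with DOSPageCount 3 and DOSPageInterval 3, a client requesting the same page more than three times inside a roughly 3-second window gets the 403. The decision rule, modelled in plain shell (an illustration of the rule only, not mod_evasive’s actual implementation):

```shell
# Simulate one client's hits on one page inside a single interval and
# apply the DOSPageCount threshold to decide whether to block
DOSPageCount=3
hits=5   # requests for "/" seen inside the DOSPageInterval window
if [ "$hits" -gt "$DOSPageCount" ]; then
  verdict="403 Forbidden (blocked for DOSBlockingPeriod seconds)"
else
  verdict="200 OK"
fi
echo "$verdict"
```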

Now this works great for throwing a forbidden page at the client, but that client is still taking up a slot on your server. Once all of your slots are filled, other requests are queued, leaving your server anywhere from lethargic to non-responsive for real requests.

To fix this problem, mod_evasive has a DOSSystemCommand option. Using this option, your server executes a command whenever a client trips the mod_evasive triggers. I use ConfigServer Firewall (csf) on my server, so I added the following command, which places a 1-hour temporary firewall ban on the IP and effectively drops all of its traffic to the server:

DOSSystemCommand "/usr/bin/sudo /usr/sbin/csf -td %s 3600"

But wait a second! Apache doesn’t normally have access to the firewall. This is my one reservation about this procedure: you need to give Apache access to the firewall programs (as root) via sudo so that it can execute the firewall block. This has other security implications, especially if you are on a multi-tenant server. We use visudo to do this.

# visudo

Add the following to the file:

User_Alias APACHE = apache
Cmnd_Alias FIREWALL = /sbin/iptables, /usr/sbin/csf, /sbin/ifconfig, /sbin/route
APACHE ALL = NOPASSWD: FIREWALL

Where apache is the apache user (typically web, www, apache or httpd – this depends on the system) and the FIREWALL binaries are the ones used in the csf script.

Now my system watches for 3 or more connections to the same page in 3 seconds and not only serves a forbidden response to them but will drop their traffic completely via iptables. Within about 30 seconds my server load was back to normal, serving connections faithfully just like any other time.

This module worked great in my situation, but DDoS attacks come in many flavors and sizes. You may have luck with the mod_qos Apache module, where you can fine tune connections to certain pages. If an attack is bad enough, it is possible you will need to move to a hardware based solution because you can only do so much at the server level. A hardware firewall mixed with load balancers, caches like varnish, and other tricks can help to mitigate these DoS attacks.

The Ultimate Guide to DVD Encoding with Handbrake

It’s no secret that I’m a huge fan of Handbrake. After committing to copying my DVD collection to my storage array, I tried and tested just about all the software out there for converting video to H.264, with an emphasis on quality and speed. Many software packages have problems with quality or desynchronized audio; Handbrake is my hands-down favorite when it comes to converting video — and that includes both free and commercial software.

One of the complaints I hear about Handbrake is that there are too many options. Well, the good news for someone looking for simplicity is that the built-in presets mostly take care of them for you. And for anyone who likes to dive into the nitty gritty of video compression, it also allows for a lot of tweaking to get the most out of your movie while maintaining small file sizes and high quality.

This guide was done on the Windows version but the other versions are all very similar. Also it assumes that there is no copy protection on the disc. If you have copy protection and it is legal in your country to remove it, you can use DVD43 (free), AnyDVD (paid), or other software to do so. If you have a particular question regarding Handbrake, be sure to ask in the comments below and I will get back to you as soon as possible. Thanks to the developers of Handbrake for the great piece of software they have put together! Many of the answers I am supplying below are available via the Handbrake wiki, which is also where you should go if you are looking for more detailed information on a particular item. On to the list!

The Main Menu

You will mainly use 2 buttons here – Source and Start. Source lets you select the DVD you wish to rip from, which is either the disc directly, or the “video_ts” folder if it is stored on your drive. Start begins the encoding process. If you want to run more than one at once, you would Start the first one, then “Add to Queue” to add it to the list of encodings it will do next. The Show Queue button allows you to reorganize or delete upcoming encodings.

Source Information

This area shows you information about your current disc. It will automatically select the largest title and most movies have only one angle. If you only want to encode a certain chapter or section of the disc, this is where you would select it.

Output File

Select where you would like to send the file when done, easy enough!

Output Settings

Container: MP4 or MKV. This depends on where you want to play the file – MP4 is usually more compatible but MKV can have more flexibility.
Large File Size: This mainly seems to be deprecated, but you used to have problems playing files larger than 4GB on some devices if it was not checked.
Web Optimized: Assists with streaming via web – if you do not plan on hosting the video online it is safe to keep this off.
iPod 5G support: Only useful if you plan on playing video on a 5th Generation iPod.


Presets

Presets are the easiest way to get encoding fast. The Regular->Normal setting seems to work best both on my local PC and my HTPC. For movies with a lot of special effects where I am looking for higher quality (sacrificing longer encoding times and more hard drive usage), I move to High Profile. If you are planning on making the movie portable, select the device which most closely matches the one you have. It should shrink the video down to whatever size you need.

You can also save new presets. If you like to encode a movie a certain way or for a certain device, feel free to change the settings and then ‘Add’ a new profile.


Picture Settings

Source: Resolution of the original video.
Aspect ratio: Whether the original should be “stretched” and by how much.
Anamorphic: How much the original retains its resolution. I keep this on “Strict” most of the time. If you move it to different settings, you may need to specify the resolution, but in general it is best to keep the resolution the same.
Cropping: Best left on automatic – it will remove any black bars on the top, bottom or sides of the video. If you find it missed them, select custom and specify how much you want to chop off.

Video Filters

Detelecine: This has to do with how the video is interlaced when framerates are changed around. See Handbrake’s wiki for a technical explanation.
Decomb: Similar to Detelecine, this looks at every pixel. While having better results, it will slow down your encode. See more details.
Deinterlace: Deinterlaces every frame and can result in some loss of quality to the video. Removes the lines you sometimes see when anything is moving quickly across the screen.
Denoise: This will filter out noise and film grain from the video.
Deblock: This will remove blocky video due to compression or low quality.
Grayscale Encoding: Your movie will be in black & white.

Generally speaking I leave these options off, as they slow down encoding and also alter the video from what is on the DVD. If you want to keep the original quality, keep them off; otherwise feel free to customize to your needs. There is a great Handbrake wiki settings page which explains them further in technical detail.


Video

Video codec: H.264 or MPEG-4 via ffmpeg. In most situations H.264 is the superior video codec.
Framerate: I highly recommend keeping this the same as the source; otherwise your audio and video might become desynchronized.
Quality: I usually go for Constant Quality anywhere from 60 to 75. Target Size is good if you are trying to fit the file on a CD or DVD, and Avg Bitrate is used on some older devices.


Audio

If you want to encode the video only with English audio, it is usually OK to leave this area alone. You can add other audio tracks to your movie by clicking “Add track” and then selecting the source. Mix down if you don’t expect to play the movie on a full-speaker system, but keep the bitrate at 160 or more. DRC stands for Dynamic Range Compression – setting this higher will help reduce the differences between the soft and loud moments in the movie.


Subtitles

MakeUseOf covered subtitles in Handbrake last year; you can review the details in that article.


Chapters

Some software will allow you to fast forward or rewind based on the chapters in the file. Handbrake will automatically carry your DVD chapters into the file; tweak them here as needed.


Advanced

Handbrake allows you to change all sorts of settings within the H.264 encoder. These are for advanced users, but below is a summary of the settings just in case you find a need to tweak them for a movie you are encoding.

Reference Frames, B-Frames, Adaptive B-Frames, Direct Prediction, Weighted B-Frames, Pyramidal B-Frames, Motion Estimation Method, Subpixel Motion Estimation, Analysis, 8×8 DCT, CABAC Entropy Coding, No Fast-P-Skip, No DCT-Decimate, and Deblocking.

Instead of rehashing what is already over on the Handbrake wiki article on H.264 options, you can just head over there for the gory details. These tweaks control how the video is compressed; the only reason to change them is if you are encoding a video and not getting the quality or compression results you expect.

Get Encoding!

Hopefully this guide will help you to understand Handbrake and all that it is capable of. This is one of the most important tools in the savvy media connoisseur’s toolbox, and I hope you have found it as useful as I have. Let us know if you have any questions in the comments area below.

Google Adds Two-Factor Authentication To Google Apps (For Real, This Time)

I’m not trying to say I had anything to do with Google adding two-factor authentication to Google Apps. I’m really not. But on September 9th, MakeUseOf published an article named How To Secure Your Google Apps Account with Two Factor Authentication. In this article, I wrote:

All of this brings up the question: why doesn’t Google enable a direct way to use two factor authentication with their Gmail, Calendar and other services? Many folks such as myself use Google services for all too many things in their lives, and that login is potentially the most important one of their online life. I would suggest that Google gets onto the security boat and enables this as an option for everyday folks.

Today, 11 days later, Google released their own two-factor authentication scheme for Google Apps accounts (Premier, Education and Government editions). An example of accurate prognostication? Or just dumb luck? Either way, great job, Google!

If you are a Google Apps user, your Administrator will need to enable the feature for your account. Standard edition users will have this feature available shortly. Highly recommended for password and data security if you store your data in the Google cloud.

Another Bash One Liner To Delete Old Directories

We received tips from blog readers Christian and Michael with alternatives to the command for deleting all directories older than a certain period of time. Both work in bash and can be used in scripts to clean up old backup directories, or in any situation where you need to delete old directories from the command line.

From Christian:

find /home/backup/ -maxdepth 1 -type d -mtime +7 -exec rm -r {} \;

From Michael:

find /home/backup/ -maxdepth 1 -type d -mtime +7 -exec echo "Removing Directory => {}" \; -exec rm -rf "{}" \;

The first one works quietly, while the second one will display what is being deleted. These are probably faster than putting it into a for loop, so feel free to use whatever works best in your particular situation!
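If you want to see exactly what -mtime +7 will match before pointing either one-liner at real backups, you can rehearse in a throwaway directory – all paths below are temporary stand-ins for /home/backup/:

```shell
# Build a fake backup tree: one directory backdated past the cutoff, one fresh
base=$(mktemp -d)
mkdir "$base/old" "$base/new"
touch -d "10 days ago" "$base/old"   # backdate the old directory's mtime (GNU touch)

# Michael's variant, aimed at the sandbox instead of /home/backup/
find "$base" -maxdepth 1 -type d -mtime +7 \
  -exec echo "Removing Directory => {}" \; -exec rm -rf "{}" \;

ls "$base"   # only the fresh directory should remain
```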