My Thoughts on the iPhone 4 on Verizon


The Verizon iPhone is a win for consumers all around. The competition between Verizon and AT&T will only heat up, bringing better value to cell phone plans across the board (the current pricing trend is out of control). Which carrier will be better for you? That depends on whose network covers the places you frequent most. On the Verizon plus side, you will be able to turn the iPhone into a network hot spot without jailbreaking (for an additional fee, likely $30/month); on the minus side, it cannot make voice calls and transfer data at the same time. For AT&T's part, their network has grown up around the massive amount of data the iPhone uses, and it remains to be seen whether Verizon's network can handle it.

I would recommend waiting until May or June for the next-generation iPhone from Verizon to see how things pan out. If you buy now, you will likely be disappointed when your phone is outdated a few months later.


Adding Random Quotes to the Bash Login Screen

According to "official" system administrator rules and guidelines, you shouldn't add so-called vanity scripts to the login prompt – only utilities that add something useful to the system (for example, current system load, memory and disk usage, etc.). However, I have some systems that I frequently connect to and thought it would be neat to add a random quote script to my bash login. That said, this should only be done on non-production systems, and it adds a security vector, so please be careful where you use it.

The goal of this is to add a little quote, at random, every time you log into your system. My thoughts were to do it not only as a little source of inspiration but also to add perspective to what I’m doing sitting in front of the computer all of the time.

Originally I was going to write the script solely in bash since it is so flexible (and just as a proof of concept), but dealing with RSS in bash isn't exactly pretty and I wanted to get this together as quickly as possible. PHP makes parsing XML easy, and there are a number of ways to accomplish it. I chose to use the ready-made script at rssphp.net; if you are curious about how to handle this yourself using SimpleXML, check out this tutorial over at Pixel2Life. The end result of my solution is a bash script calling a PHP script to grab the quote.

The Code

First create a file named /etc/update-motd.d/10-quote. The name does not matter much – the numeric prefix decides the order in which the scripts in /etc/update-motd.d are run. Do an ls on that directory to see everything that gets called when you log in. Add the following lines to this file, assuming you are placing your scripts in /etc/scripts/:

#!/bin/sh
echo ""
/usr/bin/php /etc/scripts/getquote.php
echo ""

Download v1 of rssphp and extract it to the /etc/scripts/ directory; we will require_once that file in our PHP code.

Create the file /etc/scripts/getquote.php and add the following:

<?php
// Pull in the rss_php library (downloaded from rssphp.net and extracted to /etc/scripts/)
require_once('/etc/scripts/rss_php.php');

// Fetch a random quote from QuoteDB's RSS feed
$rss = new rss_php;
$rss->load('http://www.quotedb.com/quote/quote.php?action=random_quote_rss');
$rssitems = $rss->getItems();

// Only print if the feed actually returned items, so login still works offline
if ($rssitems) {
    // print_r($rssitems);
    echo $rssitems[0]['description'].' :: '.$rssitems[0]['title']."\n";
}
?>

I am using the RSS feed from QuoteDB as the source of my quotes. Of all the places I checked (and I checked a lot) they seemed to have the most appropriate ones for this use. Feel free to use any source you wish – as long as the XML title/description fields hold the quote, you will be able to use it. The RSS URL was not obvious from the site and I had to do some digging to find it; in the end I am using http://www.quotedb.com/quote/quote.php?action=random_quote_rss.

We also add the if statement so the script degrades gracefully if the server has no network connectivity to reach the feed. After a short period – a second or two – it will time out and let you log in.
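
If the delay is longer than you would like, you can cap it explicitly. This is just a sketch, and it assumes rss_php fetches the feed through PHP's standard stream wrappers (which honor default_socket_timeout); add it near the top of getquote.php:

// Give up on quotedb.com after 2 seconds so logins are never held up
ini_set('default_socket_timeout', 2);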

The end result is a pretty quote in our motd:

Linux vps01.[redacted].com 2.6.18-2-pve #1 SMP Mon Feb 1 10:45:26 CET 2010 x86_64 GNU/Linux
Ubuntu 10.04.1 LTS

"The absence of alternatives clears the mind marvelously." :: Henry Kissinger

root@vps01:~#

It should be pretty straightforward; let me know if you run into any problems!

Find Out If A Twitter Username Exists Using PHP/JSON

I've been trying to grab a Twitter screen name that people continually register and do not use. Twitter eventually deletes it, but I suppose it is in high enough demand that someone else registers it right away (and then also never uses it). I wrote up a quick and dirty PHP script to check the Twitter API to see if a screen name exists and, if it doesn't, shoot off a short email. I've been using changedetection to do roughly the same thing, but it has fallen short on two counts: first, it reports follower count changes; second, it only runs once a day, which hasn't been fast enough for me to grab my desired screen name.

You can set this script to run via cron at any interval you like; I have it running every 30 minutes. Test it with sample information to make sure it works under your setup. I'm using the default mail() function, and I also included some JSON decoding, so you could use this script to play around with decoding a user's information in PHP.

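A minimal sketch of this kind of check, assuming the unauthenticated users/show JSON endpoint Twitter exposed at the time (the screen name and email address below are placeholders):

<?php
// The screen name we are trying to grab and where to send the alert (placeholders)
$screenName = 'desiredname';
$notify     = 'you@example.com';

// Twitter's v1 API returned the user's profile as JSON when the account
// existed, and an HTTP 404 when it did not.
$url  = 'http://api.twitter.com/1/users/show.json?screen_name=' . urlencode($screenName);
$json = @file_get_contents($url);

if ($json === false) {
    // The fetch failed - most likely a 404, meaning the name is up for grabs
    mail($notify, "Twitter name '$screenName' may be available",
         "users/show returned nothing for $screenName - go try to register it.");
} else {
    // Decode the profile to poke around at the user's details
    $user = json_decode($json);
    echo $user->screen_name . ' exists with ' . $user->followers_count . " followers\n";
}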

If you are running it from cron, add the sample entry:

0 * * * * /usr/bin/php /path/to/script.php

The above would run the script every hour on the hour.
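
And to match the 30-minute interval I mentioned above:

*/30 * * * * /usr/bin/php /path/to/script.php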

Of course you can easily modify this sample code if you are looking to do any kind of similar checking in your web app. If you find this kind of script useful, please let me know!

Firesheep Should Be A Call To Arms For System, Network & Web Admins

Firesheep by Eric Butler has just been released to the world. This Firefox plugin does a few things that have been fairly easy to do for a while, but rolls them up into one easy-to-use package:

  1. Sniffs data on unencrypted Wireless Networks
  2. Looks for unencrypted login cookies sent to known popular insecure sites
  3. Allows you to log in to that account with 'one click'

So what sites are impacted by default? Amazon.com, Basecamp, bit.ly, Cisco, CNET, Dropbox, Enom, Evernote, Facebook, Flickr, GitHub, Google, Hacker News, Harvest, Windows Live, NY Times, Pivotal Tracker, Slicehost, Tumblr, Twitter, WordPress, Yahoo, and Yelp, among others. A plugin system allows anyone to add their own sites (and cookie formats) to the extension.

Yikes! It goes without saying that this is a major security problem for anyone who uses unencrypted wireless networks. Included on that list are many universities and companies such as Starbucks.

It is a bit funny, because just last night I was talking with my friend Jorge Sierra about this very problem. My university is in fact one of those that uses unencrypted wifi. I installed the unencrypted password extension for Chrome to warn me whenever I submit a password to a site unencrypted. I was surprised how often that little box was popping up!

Why Open WiFi?

I am not sure – my undergrad university requires that any traffic going over wifi goes through their VPN which encrypts the traffic and prevents this program from working. Is open wifi still the ‘poison of choice’ for network admins because setting up a VPN-style system is too much for some organizations? Maybe – but it is clearly the wrong answer.

The other clear reason is that open wifi is easier to use, and this is a valid complaint from a user experience perspective. I've seen plenty of folks have a hard time even with a simple WPA password, and a shared password makes it even harder for a user to sign in. Hotels and coffee houses across the world opt for open wifi because it is simply the easiest for consumers to use. This is a problem we tech people need to solve.

Even when a network is encrypted with WEP or WPA (version 1), these are weak protocols that can still be cracked with relative ease. This plugin could in fact be extended to include that cracking as well and cover an even wider range of wireless networks. Which brings me to my second point.

Web Developers Must Encrypt All Login Forms

If you run ANY consumer-facing app, you should be passing any and all login information over an SSL-secured connection.

For hosts on a static IP address you simply need to purchase an SSL certificate. They are seriously under $20 these days (my cost as a reseller is $12) and are simple to install. Your code should be set up to always use the HTTPS site and never allow a username and password to be sent unencrypted over the network. This matters not only on the end user's connection (possibly over open wifi) but also for end-to-end encryption of the data.
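
As a rough sketch of that "always use HTTPS for logins" rule, an Apache rewrite like the following forces any request for a login path onto SSL (this assumes mod_rewrite is enabled and a hypothetical /login path; adapt it to your app):

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/login(.*)$ https://%{HTTP_HOST}/login$1 [R=301,L]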

Let's say you are running a site on a shared IP address. You usually still have options. Most hosts I know of offer an SSL connection via a shared hostname – e.g. https://server.name/~username/. This URL can be used to reach your site over SSL, and it is normally included with the service.

Ideally every site would have an SSL certificate, but a few things need to happen for that to be realistic. People who buy web hosting are almost always looking for the cheapest deal, and they will not get SSL at those bottom-tier prices. Hosting needs a paradigm shift: people who run websites should expect their servers to be configured and operated by someone who knows security, and should accept that $10-a-year hosting isn't sustainable. Some say there is significant overhead to running SSL on websites; it does add some processing and bandwidth overhead, but that is the cost of providing a secure service to end users.

In my opinion, you either host your website with a large provider that is set up with a secure infrastructure, or you pay more for an expert to host it securely.

Another roadblock is the exhaustion of freely available IPv4 address blocks. Web hosts need to move to IPv6 to free up addresses so that every website can sit on its own IP, which would make SSL certificate installation much easier.

Back to Reality

What can you do, right now, about this problem? If you have to use an unencrypted wireless network, you should be running some sort of VPN to encrypt your traffic over the air, as this is the most likely place it will be sniffed. You can get a cheap VPS for under $10 a month and proxy all of your traffic over SSH. It is not the fastest method, but it will secure your data.
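
For the SSH route, a SOCKS proxy is the simplest setup; something along these lines (the hostname is a placeholder for your own VPS), with the browser then pointed at localhost:1080 as a SOCKS proxy:

ssh -D 1080 -C -N you@yourvps.example.com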

You can also install the Chrome Extension to warn you if you are about to submit form information via an unencrypted website. It isn’t the prettiest extension but it does get the job done.

Hopefully network, web and system administrators will get their acts together and push for a solution to this problem. It is a big one and one that isn’t apparent to the end user until their data, financial details and/or identity is stolen. We can fix this.

/via TechMeme

Fixing ip_conntrack Bottlenecks: The Tale Of The DNS Server With Many Tiny Connections

Server management is a funny thing. No matter how long you have been doing it, new interesting and unique challenges continue to pop up keeping you on your toes. This is a story about one of those challenges.

I manage a server which has a sole purpose: serving DNS requests. We use PowerDNS, which has been great. It is a DNS server whose backend is SQL, making administration of large numbers of records very easy. It is also fast, easy to use, open source and did I mention it is free?

The server has been humming along for years now. The traffic graphs don’t show a lot of data moving through it because it only serves DNS requests (plus MySQL replication) in the form of tiny UDP packets.

We started seeing spikes in traffic, but everything on the server seemed to be working properly. Test connections with dig proved that the server was answering requests correctly, yet external monitoring showed the server going up and down.

The First Clue

I started going through logs to see if we were being DoSed or if it was some sort of configuration problem. Everything seemed to be running properly and the requests, while voluminous, seemed to be legit. Within the flood of messages I spied error messages such as this:

printk: 2758 messages suppressed.
ip_conntrack: table full, dropping packet.

Ah ha! A clue! Let's check the current numbers for ip_conntrack, the kernel's connection-tracking facility, which the firewall uses to keep tabs on every connection heading into the system.

[root@ns1 log]# head /proc/slabinfo
slabinfo - version: 2.0
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
ip_conntrack_expect 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
ip_conntrack 34543 34576 384 10 1 : tunables 54 27 8 : slabdata 1612 1612 108
fib6_nodes 5 119 32 119 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 4 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ndisc_cache 1 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
rawv6_sock 4 11 704 11 2 : tunables 54 27 8 : slabdata 1 1 0
udpv6_sock 0 0 704 11 2 : tunables 54 27 8 : slabdata 0 0 0
tcpv6_sock 8 12 1216 3 1 : tunables 24 12 8 : slabdata 4 4 0

Continuing this line of logic, let's check our current value for this setting:

[root@ns1 log]# sysctl net.ipv4.netfilter.ip_conntrack_max
net.ipv4.netfilter.ip_conntrack_max = 34576

So it looks like we are hitting up against this limit. Once the number of tracked connections reaches it, the kernel simply drops new packets. It does this so that it does not exhaust its memory and freeze up when too many connections arrive at once.

This system is running CentOS 4.8; newer RHEL 5 releases ship with the default set to 65536. The value is conventionally kept at a power of 2. The safe ceiling depends on your memory – each ip_conntrack entry occupies the 384 bytes shown in the objsize column above – so be careful not to set it so high that the kernel runs out of memory.

Fixing The ip_conntrack Bottleneck

In my case I decided to go up two doublings to 131072. To set it temporarily, use sysctl:

[root@ns1 log]# sysctl -w net.ipv4.netfilter.ip_conntrack_max=131072
net.ipv4.netfilter.ip_conntrack_max = 131072
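
To confirm the live connection count now sits comfortably below the new ceiling, compare it against the maximum (on this 2.6.18-era kernel the counter lives alongside the max setting; the path may differ on other kernels):

sysctl net.ipv4.netfilter.ip_conntrack_count
sysctl net.ipv4.netfilter.ip_conntrack_max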

Test everything out; if you run into network problems or the system crashes, a reboot will set the value back to the default. To make the setting permanent across reboots, add the following lines to your /etc/sysctl.conf file:

# need to increase this due to volume of connections to the server
net.ipv4.netfilter.ip_conntrack_max=131072
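
You can then load the value from the file without rebooting:

sysctl -p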

My theory is that because the server was dropping packets, remote hosts were re-sending their DNS requests, causing a 'flood' of traffic to the server and the spikes in the traffic graph whenever traffic was even mildly elevated. The bandwidth spikes were an amplification effect from all of those retried requests. After increasing ip_conntrack_max, bandwidth immediately returned to normal levels.

Your server should now hold up against an onslaught of tiny connections, legitimate or not. If you have more connections than ip_conntrack can safely track, you may need to move to the next level: hardware firewalls and other off-server packet inspection on dedicated hardware.

Some resources used in my investigation of this problem:
[1] http://wiki.khnet.info/index.php/Conntrack_tuning
[2] http://serverfault.com/questions/111034/increasing-ip-conntrack-max-safely
[3] http://www.linuxquestions.org/questions/red-hat-31/ip_conntrack-table-full-dropping-packet-615436/

The image of the kittens used for the featured image has nothing to do with this post. There are no known good photos of a “UDP Packet”, and I thought that everyone likes kittens, so there it is. Credit flickr user mathias-erhart.

How to Stop an Apache DDoS Attack with mod_evasive

The first inkling that I had a problem with a DDoS (Distributed Denial of Service) attack was a note sent to my inbox:

lfd on server1.myhostname.com: High 5 minute load average alert – 89.14


My initial thought was that a site on my server was getting Slashdotted or encountering the Digg or Reddit effect. I run Chartbeat on several sites where this occasionally happens and I will usually get an alert from them first. A quick look at the Extended status page from Apache showed that I had a much different kind of problem.

If a site is getting a lot of “good” natural traffic you will see a few things:

  • clients will be requesting all kinds of files from your site as a normal web browser would, and
  • the referring agent will show the link of the sending site, which you can verify.

In my case I had about 400 IP addresses requesting "/" from a little-trafficked site of mine. Fortunately my Apache is tuned well enough that it did not take the server down, which was crucial for diagnosing the problem. Otherwise the entire server may go down, and your only option is to reboot and stop Apache from starting on boot. My Apache logs showed the requests:

186.58.179.33 - - [21/Oct/2010:00:10:06 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET
CLR 3.5.30729; .NET CLR 3.0.30618)"
189.76.197.117 - - [21/Oct/2010:00:10:06 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.8.1.19) Gecko/20081201 Firefox/2.0.0.19"
186.58.179.33 - - [21/Oct/2010:00:10:06 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET
CLR 3.5.30729; .NET CLR 3.0.30618)"
186.6.168.11 - - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.03 [en]"
197.0.165.121 - - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/0.4.154.25 Safari/525.19"
189.76.197.117 - - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.8.1.19) Gecko/20081201 Firefox/2.0.0.19"
197.0.165.121 - - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/0.4.154.25 Safari/525.19"
186.6.168.11 - - [21/Oct/2010:00:10:07 -0400] "GET / HTTP/1.1" 200 12474 "-" "Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.03 [en]"

I found out this is a type of DoS attack along the lines of Slowloris: HTTP connections to the server are held open and used to make request after request. A botnet of compromised machines is used to hammer the server from all over the world across many different connections and IP addresses. From Apache's standpoint these are legitimate requests despite their frequency – it looks like someone sitting at a browser hitting refresh a few times a second. While not a devastating attack in terms of bandwidth, it ties up your server and causes legitimate connections to be rejected.

Fortunately after some asking around I found the mod_evasive Apache module. This module is a very basic one that has a simple function: it will keep a hash table of IPs and pages requested and when a threshold is reached on a page or site it will “block” the IP with a 403 “Forbidden” error.

Installing the module is easy:

  1. Download the module onto your server:

    # wget http://www.zdziarski.com/blog/wp-content/uploads/2010/02/mod_evasive_1.10.1.tar.gz

  2. Run the Apache apxs command on the module, which will compile it, install it, and activate it in your httpd.conf file (for Apache 2.0 – you are running 2.0, aren't you?):

    # apxs -cia mod_evasive20.c

  3. Set up the configuration. You can enter the following into the main server configuration in your httpd.conf (again for Apache 2.0):


    DOSHashTableSize 3097
    DOSPageCount 3
    DOSSiteCount 100
    DOSPageInterval 3
    DOSSiteInterval 5
    DOSBlockingPeriod 300
    DOSLogDir "/var/log/httpd/modevasive/"
    DOSEmailNotify your@emailaddress.com
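
One assumption worth checking: mod_evasive uses DOSLogDir for its per-IP lock files, so the directory should exist and be writable by the user Apache runs as (substitute your Apache user):

mkdir -p /var/log/httpd/modevasive
chown apache:apache /var/log/httpd/modevasive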

Now this works great for throwing a forbidden page at the client, but that client is still taking up a slot on your server. Once all of your slots are filled, other requests get queued, and your server becomes lethargic or even non-responsive to real requests.

To fix this problem the mod_evasive module has a DOSSystemCommand option. Using this option you can have your server execute a command when a client trips the mod_evasive triggers. I use ConfigServer Firewall (csf) on my server, so I added the following command, which places a one-hour temporary firewall ban on the IP and effectively drops all of its traffic to the server:

DOSSystemCommand "/usr/bin/sudo /usr/sbin/csf -td %s 3600"
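
If you are not running csf, the same idea works with plain iptables; mod_evasive substitutes the offending IP for %s. This is only a sketch – unlike csf's timed deny, the block below stays in place until you remove it:

DOSSystemCommand "/usr/bin/sudo /sbin/iptables -I INPUT -s %s -j DROP"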

But wait a second! Apache doesn't normally have access to the firewall. This is my one reservation about this procedure: you need to give Apache access to the firewall programs (as root) via sudo so that it can execute the firewall block. That has security implications of its own, especially if you are on a multi-tenant server. We use visudo to do this.

# visudo

Add the following to the file:

User_Alias APACHE = apache
Cmnd_Alias FIREWALL = /sbin/iptables, /usr/sbin/csf, /sbin/ifconfig, /sbin/route
APACHE ALL = (ALL) NOPASSWD: FIREWALL

Where apache is the apache user (typically web, www, apache or httpd – this depends on the system) and the FIREWALL binaries are the ones used in the csf script.

Now my system watches for 3 or more requests to the same page within 3 seconds and not only serves those clients a forbidden response but drops their traffic completely via the firewall. Within about 30 seconds my server load was back to normal, serving connections faithfully just like any other day.

This module worked great in my situation, but DDoS attacks come in many flavors and sizes. You may have luck with the mod_qos Apache module, which lets you fine-tune connections to certain pages. If an attack is bad enough, you may need to move to a hardware-based solution, because you can only do so much at the server level. A hardware firewall mixed with load balancers, caches like Varnish, and other tricks can help mitigate these DoS attacks.

Using Google Analytics Or Other Javascript With Smarty Template Engine

On a website I was working on recently I added the Google Analytics tracking code to the footer of a Smarty template, like this:

footer.tpl:

<script type="text/javascript">
// ... the standard Google Analytics tracking snippet, pasted in as-is ...
</script>
However, since the JavaScript used by Google Analytics contains { and } characters, which the Smarty template engine also uses as delimiters, Smarty tries to interpret the code and, depending on your settings, will either fail silently or fail with an error such as this:

Smarty error: [in footer.tpl line 148]: syntax error: unrecognized tag 'var'

The fix is simple. Enclose your Google Analytics code, or any other JavaScript, in {literal} and {/literal}. The literal tag tells Smarty to output the enclosed block, well, literally.

The final code will look something like this:

{literal}
<script type="text/javascript">
// ... the standard Google Analytics tracking snippet ...
</script>
{/literal}

Your website should now run properly with the Google Analytics code in place.