The Dirty Little Secret About SSL Certificates

The dirty little secret about SSL certificates is that:

Anyone can become a certificate authority.

The tools to become a certificate authority, and therefore to issue your own SSL certificates, are included in a wide variety of systems – chances are, if you have an Ubuntu or CentOS install, you already have everything you need to become an SSL certificate authority via OpenSSL.

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
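To underline just how little is required, here is a sketch of using such a homemade CA to sign a certificate for any hostname you please. The filenames and the non-interactive -subj/-nodes style flags are my own additions for illustration:

```shell
# Create a throwaway CA (no passphrase here, purely for illustration)
openssl genrsa -out demo-ca.key 2048
openssl req -new -x509 -days 365 -key demo-ca.key -subj "/CN=Demo CA" -out demo-ca.crt

# Generate a key and a signing request for an arbitrary hostname...
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=mail.example.com" -out server.csr

# ...and sign it with the CA. Nothing stops you from putting any name you like here.
openssl x509 -req -days 365 -in server.csr -CA demo-ca.crt -CAkey demo-ca.key \
    -CAcreateserial -out server.crt

# The chain verifies cleanly against our homemade CA
openssl verify -CAfile demo-ca.crt server.crt
```

The only thing separating this from a "real" certificate is that no browser ships demo-ca.crt in its trust store.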

The security – and by that I mean trust – that SSL certificates provide in major modern browsers comes from the fact that only certificates signed by a limited number of authorities are trusted. Currently there are about 50 trusted certificate authorities in the world. [Wikipedia] If the certificate presented to your browser is signed by one of those CAs, then your browser trusts that it is a legitimate certificate.

Unfortunately, in the real world no computer system should be assumed safe. I would presume that all of the major CAs – Thawte, Comodo, DigiNotar and others – keep their private keys under lock and key, but simply put, no computer system is safe from intrusion.

The Difference Between Encryption and Trust

SSL certificates play two roles in a browsing session – encryption and trust.

When you visit an SSL site over the HTTPS protocol, your session is encrypted between two endpoints. In a typical situation, the connection between your browser and the server is encrypted, so any party trying to sniff your data in between the two endpoints cannot read it.

Trust also comes into play when you use an SSL certificate. When you visit mail.google.com, you assume that the certificate is held only by Google, and therefore that the data you are receiving really is from mail.google.com, not mail.attacker.com.

The Man-In-The-Middle Attack

A man-in-the-middle attack occurs when your internet connection has been intercepted and someone is actively sniffing your data between the two endpoints. When traffic is unencrypted, this is trivial. When it is encrypted, for example with an SSL certificate, it becomes much more difficult. If the attacker just wants to observe the traffic rather than modify it, it looks something like this:

MITM Intercepts traffic from legitimate HTTPS server -> MITM decodes the content and then re-encodes with its own SSL certificate -> MITM passes all traffic back and forth using the fake SSL certificate on the client’s side, while using the real SSL certificate on the server side.

This all relies on the client’s browser accepting the SSL certificate that the MITM presents. This is why the recent DigiNotar false SSL certificate in Iran for *.google.com is so troubling. Once you have a “legitimate” SSL certificate then a MITM can decode the data without the client even knowing. This violates both the trust and encryption aspects of SSL certificates.
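One low-tech countermeasure is to compare certificate fingerprints out of band with the site operator. As a sketch – using a locally generated certificate as a stand-in, since the hostname and filenames here are placeholders:

```shell
# Generate a throwaway self-signed certificate to stand in for a server's cert
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
    -subj "/CN=demo.example.com" -out demo.crt -days 1 2>/dev/null

# Print its SHA-1 fingerprint. If the fingerprint your browser sees differs
# from the one the site operator publishes, something is sitting in the middle.
openssl x509 -in demo.crt -noout -fingerprint -sha1

# Against a live site, the equivalent check would look something like:
#   openssl s_client -connect mail.google.com:443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -fingerprint -sha1
```

Of course, this only helps the tiny fraction of users who would ever think to check.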

What is being done to protect us against MITM attacks like this?

Google is using its massive fleet of web crawlers to take inventory of all the SSL certificates it finds. It no doubt factors this into its search rankings as well (a site that bothers to get an SSL certificate is probably a higher-value site), but the data can also increase security when integrated into Chrome. The EFF runs the SSL Observatory, which has a similar function. The fraudulent *.google.com certificate was discovered because Chrome raised an error when it noticed the serial number of the certificate did not match what Google had crawled previously. This is all well and good, but it does not work in all browsers, it still allows the site to load, and I doubt a non-technically-savvy person would have caught it.

Revocation lists help to recall bad certificates, but by the time a certificate is discovered and revoked the damage has already been done.

The problem is that the whole CA system is flawed. Putting trust in 50 or so companies is a disservice to end users. Suppose the US government pressures one of the CAs to issue a similar certificate – to say nothing of a hacker gaining access to a CA's root private key.

There are also some at work on an SSL certificate system tied into DNSSEC [Ed note: strangely enough, their certificate is currently expired]. The problem again is that the root DNS servers hold a lot of power, and traffic can be spoofed.

Convergence is another tool, from @moxie__, currently available as a Firefox plugin. It lets you specify trust authorities that can then tell you when a certificate is suspect. I wasn't able to try it, as I've upgraded to Firefox 6.0 and it isn't compatible, but it appears to have promise. My concern is that the average user doesn't have enough sense to run any security plugin that requires input. Any final solution to the SSL CA problem will need to be standards-based, not a plugin.

What Can You Do To Help

Support the IETF and other research into alternatives to the current SSL Certificate Authority system. The SSL CA system is broken, and we need a replacement ASAP if we expect to keep our connections encrypted and private.

Zalman ZM-VE200 Review – You Need This External Hard Drive Enclosure

Fellow tech friends, I have a find for you. If your job or hobby has you meddling with a bunch of .iso files – whether to boot from them or just to access the data inside – then I have the device for you.

It all started after I backed the Kickstarter project for the isostick. Having never before heard of a device that would accept .iso images on a filesystem and present them to the computer as a disc drive, I thought this was (and is) a pretty cool idea.

When browsing through the comments, I saw folks mentioning that this is just like the Zalman ZM-VE200 external hard drive enclosure. So of course I decided to do some research on this newly discovered gadget.

Overview

ZM-VE200 Size Comparison

Size Comparison: ZM-VE200 on Lower Left, Normal External Drive on Top, External Disc Drive on Lower Right.

The Zalman ZM-VE200 is, at its core, an external SATA hard drive enclosure. These have been around for a long time, letting you put a hard drive in an external enclosure and access its filesystem over a USB port. They are great for when you need to transfer a large amount of data and have an internet connection that isn't up to the task in any reasonable amount of time.

This enclosure can work just like that – as a plain external USB drive. However, Zalman has added an extra layer of functionality with additional components that provide features I frankly haven't seen anywhere else.

Zalman’s Additional Hardware Magic

The additional circuitry allows you to select an ISO which is present on the drive, and load it just as if it were a DVD or CDROM on the system. This means that instead of carrying around discs to install operating systems on, you simply put the ISOs on the drive and then select the correct ISO when you boot.

The Zalman ZM-VE200 Screen

When you boot/plug in the drive you actually have 3 modes available to you: Disc, Hard Drive, or Dual. In Disc mode, files you place in the _ISO folder on the drive are selectable via the wheel on the side of the device. As shipped, the drive needs to be formatted as NTFS in order to show the ISO files; with updated firmware you can use either FAT or NTFS.
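The workflow is simple enough to sketch in a few commands. The mountpoint and image name below are placeholders standing in for wherever your system mounts the drive:

```shell
# Stand-in for the enclosure's mountpoint, e.g. /media/zalman on a real system
DRIVE=./zalman-demo

# The firmware looks for images in a folder named _ISO at the drive root
mkdir -p "$DRIVE/_ISO"

# Drop your installer images in; an empty file stands in for a real .iso here
touch "$DRIVE/_ISO/ubuntu-11.04-desktop-amd64.iso"

# Anything listed here becomes selectable on the device's scroll wheel
ls "$DRIVE/_ISO"
```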

Operation

Plugging in the hard drive

The first thing you need to do is install a SATA drive in the enclosure. This is pretty much a no-brainer; it only plugs in one direction. Slide the drive and circuit board back into the case and use the included screws to secure the case to the drive and board. The screws are hidden by little rubber seals on the edge of the case.

The Menu Navigation Wheel

When plugging it into the system, you interact with the drive in a few ways. The initial scroll wheel position, when powered up, determines the mode:

  • Hold Up to enter ODD or “Disc” mode
  • Hold Center to enter Dual mode (both HDD and ODD modes)
  • Hold Down to enter HDD only mode

eSATA Port on the ZM-VE200

An eSATA port and cable are also supplied. I did not test this mode. It still requires the USB cable for power, and I would assume you would see faster transfer rates in eSATA mode.

Finally there is a small switch that enables write-protect mode. This makes it so that you won’t be able to accidentally change the data on the drive.

The only problem I had with the drive was when I first plugged it into my system via a USB extension cable. The drive did not even turn on; it just clicked a little. I changed USB ports and it worked fine. I have also run into a situation where I plugged the drive into a system that was off and then booted it, and the screen lit up but stayed blank. I believe this is because the drive requires more power than some USB ports can deliver, so if you have problems with it, try another USB port first.

I also occasionally had problems mounting the ISO file; booting into ODD mode (hold the scroll wheel "up") usually fixed this.

Final Thoughts

When installing operating systems from this drive, the process is notably faster: the transfer speed you see off of the virtual "disc" is much higher than from a normal CD or DVD drive. While there were some technical hiccups and gotchas, the drive works very well.

This “gadget” is a must-have tool for system technicians who find themselves constantly burning ISOs to discs. My co-worker who initially made fun of my fondness for new gadgets has since said I’ll have to pry this drive from his cold, dead hands. It is so useful that I am now recommending it to all of my sysadmin friends. At $50 it is a steal and you will even make your money back because you won’t be burning so many discs.

Official Zalman ZM-VE200 Product Site

Buy From Amazon

(Updated Amazon link to SE product on 5/12/2012 – Thanks Skip!)

What a Resilver Looks Like in ZFS (and a Bug and/or Feature)

At home I have an (admittedly small) ZFS array set up to experiment with this awesome newish RAID technology. I think it has been around long enough that it can now be used in production, but I’m still getting used to the little bugs/features, and here is one that I just found.

After figuring out that 2 of the 3 1TB Seagate Barracuda drives in the array had failed, I had to write the entire array off as a loss and test out my backup strategy. Fortunately it worked and there was no data loss. After receiving the replacement drives from Seagate, I rebuilt the ZFS array (using raidz again) and went along my merry way. After another 6 months or so, I started getting some funky results from the remaining original drive. Thinking it might have the same issue as the others, I removed the drive and ran SeaTools on it (by the way, SeaTools doesn't offer a 64-bit Windows version – what year is this?).

The drive didn’t show any signs of failure, so I decided to wipe it and add it back into the array to see what happens. That, of course, is easier said than done.

One of the problems I ran into is that I am running ZFS on Ubuntu via fuse. Ubuntu has a nasty habit of shuffling drive identifiers around when USB devices are plugged in. So now when this drive is plugged in, it shows up at /dev/sde instead of /dev/sdd, which is now occupied by a USB-attached drive.

No problem, I figure, I’ll offline the bad drive in the zpool and replace it with the new drive location. No such luck.

First I offlined the drive using zpool offline media /dev/sdd:

dave@cerberus:~$ sudo zpool status
  pool: media
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        media       DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            sdd     OFFLINE      0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

Now that it’s offline, I thought you should be able to detach it. No such luck – since it is a ‘primary’ device of the zpool it does not allow you to remove it.

dave@cerberus:~$ sudo zpool detach media /dev/sdd
cannot detach /dev/sdd: only applicable to mirror and replacing vdevs

What they want you to do is replace the drive with another drive. This drive (the same drive, with all info wiped from it) is now on /dev/sde. I try to replace it:

dave@cerberus:~$ sudo zpool replace media /dev/sdd /dev/sde
invalid vdev specification
use '-f' to override the following errors:
/dev/sde is part of active pool 'media'
dave@cerberus:~$ sudo zpool replace -f media /dev/sdd /dev/sde
invalid vdev specification
the following errors must be manually repaired:
/dev/sde is part of active pool 'media'

Even with -f it doesn’t allow the replacement, because the system thinks that the drive is part of another pool.

So basically you are stuck if you are trying to test a replacement with a drive that has already been used in the pool. I'm sure I could replace it with another 1TB disk, but what is the point of that?

I ended up resolving the problem by removing the external USB drive, which put the drive back at its original /dev/sdd location. Without my issuing any commands, the system recognized the drive as the old one and started resilvering it.

root@cerberus:/home/dave# zpool status
  pool: media
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress for 0h9m, 4.62% done, 3h18m to go
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0    13  30.2G resilvered
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

It is interesting to see what it looks like from an i/o perspective. The system reads from the two good drives and writes to the new (bad) one. Using iostat -x:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          29.77    0.00   13.81   32.81    0.00   23.60

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.80    0.00    33.60     0.00    42.00     0.01   15.00  15.00   1.20
sdb               0.00     0.00  625.00    0.00 108033.20     0.00   172.85     0.56    0.90   0.49  30.80
sdc               0.00     0.00  624.20    0.00 107828.40     0.00   172.75     0.50    0.81   0.47  29.60
sdd               0.00     1.20    0.00  504.40     0.00 107729.60   213.58     9.52   18.85   1.98 100.00

It seems that ZFS can identify a hard drive by GUID somehow but doesn't automatically use that to locate it in the pool. This means you can't test a drive by removing it, formatting it, and putting it back in a new location. Basically, ZFS assumes that your drives are always going to be at the same /dev location, which isn't always true – as soon as you attach a USB drive in Ubuntu, things are going to shift around.

After the resilver is complete, the zpool status is:

root@cerberus:/home/dave# zpool status
  pool: media
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed after 0h16m with 0 errors on Sun May 15 07:35:46 2011
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0    13  50.0G resilvered
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors

You can now clear the error with:

root@cerberus:/home/dave# zpool clear media
root@cerberus:/home/dave#

Zpool status now shows no errors:

root@cerberus:/home/dave# zpool status
  pool: media
 state: ONLINE
 scrub: resilver completed after 0h16m with 0 errors on Sun May 15 07:35:46 2011
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0  50.0G resilvered
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors

So now the question I have is this: can you manually update or remove the drive status somewhere on the system? How did ZFS know that this drive already had a pool on it? I zeroed the drive and verified with fdisk that there were no partitions on it. Is there a file somewhere on the system that stores this information, or is it written somewhere on the drive itself?
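My guess – and it is only a guess, as I haven't dug into the zfs-fuse source – is that ZFS writes vdev labels at both the start and the end of the device, so zeroing the front of the disk (where the partition table lives) leaves the trailing labels intact; `zdb -l /dev/sde` should dump any labels that survive. The principle is easy to demonstrate on a plain file:

```shell
# Make a 100 MB sparse file standing in for a disk
truncate -s 100M disk.img

# Write a marker in the last few bytes, roughly where ZFS keeps its trailing labels
printf 'FAKELABEL' | dd of=disk.img bs=1 seek=$((100*1024*1024 - 9)) conv=notrunc 2>/dev/null

# "Zero the drive" the way a quick wipe usually does: clobber the first 1 MB
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc 2>/dev/null

# The trailing marker is untouched
tail -c 9 disk.img; echo
```

If that is what is happening here, a full-length wipe of the drive (or a label-clearing tool, where the ZFS implementation provides one) would be needed before the pool would treat it as a fresh disk.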

ZFS is great, but it still has some little issues like this that give me pause before using it in a production system. Then again, I suppose all massive disk array systems have their little quirks!

Disabling The hald-addon-storage Service On CentOS/RedHat

The hald – Hardware Abstraction Layer daemon – runs several processes in order to keep track of what hardware is installed on your system. This includes polling USB drives and 'hot-swap' devices to check for changes, along with a host of other tasks.

You might see it running on your system as follows:

2474 ? S 0:00 \_ hald-runner
2481 ? S 0:00 \_ hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
2487 ? S 0:00 \_ hald-addon-keyboard: listening on /dev/input/event0
2495 ? S 41:47 \_ hald-addon-storage: polling /dev/hdc

If your system is static and its devices do not change, you can disable this polling with a policy entry.

Create a file in your policy directory, for example /etc/hal/fdi/policy/99-custom.fdi. Add text along these lines (a sketch; the match key may differ between HAL versions, so adjust for the device you want to stop polling):

<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="storage.drive_type" string="cdrom">
      <remove key="info.addons" type="strlist">hald-addon-storage</remove>
    </match>
  </device>
</deviceinfo>
Save, then reload hald with /etc/init.d/haldaemon restart.

You will find that the service is no longer polling your hardware.

To turn it back on, remove the policy entry and restart haldaemon again; the polling will be back in service.

Solution Credit: Linuxforums User cn77

Why I’m Dropping Boxee for XBMC

The Boxee platform had so much promise. Since releasing the Boxee Box in November 2010, Boxee has absolutely abandoned the PC users who brought the platform to prominence. Having waited since November for Boxee 1.0, PC Boxee users (including me) are now in open revolt.

I've been a huge proponent of the platform since the Alpha in October of 2008. The developers have done a great job building a product on top of XBMC, adding streaming capabilities from popular sources such as Netflix and Hulu. The product, overall, is a great idea.

Since the Boxee Box release, the PC version of Boxee has been left derelict – an afterthought at this point. Hulu content is inaccessible (from what I understand, largely due to Hulu), and the other web content is near-unwatchable due to the idiosyncrasies of each web player and how Boxee displays them. Only with frequent updates to correct these glaring problems would Boxee offer any value over XBMC and other home media software.

The problem is, I suspect, that they do not think that there is a way to monetize the build-your-own-HTPC crowd. They would be wrong, however. With proper integration with a payment system (what happened to Boxee Payments?) you have an army of faithful, media consuming techies ready and willing to shell out for shows on Hulu, Amazon Video on Demand, and other sources. I don't want another box next to my PC, I want my existing equipment (Acer Revo) to serve out all my TV and video.

With their monetization stream in flux, it seems they are pouring their energy into the Boxee Box. I feel this is shortsighted, as the hardware game is too competitive; their best bet for a lasting product is to focus on Boxee as a platform instead of a hardware product. That means updating and supporting Boxee across all devices, not just the Boxee Box.

Boxee, it was awesome while it lasted, but you leave me no choice but to switch. With options such as XBMC Dharma, QVIVO, Roku, and network-connected Blu-ray players and TVs, your time is now limited. I sincerely hope you turn around and start supporting the PC users who held you in such high esteem just a year ago.

For me, the last straw was when I got a new TV and Boxee couldn't handle high-def movies at the new 1080p resolution. XBMC handles them with no problems. So, that being said, XBMC Dharma is my new media platform of choice. Here's hoping Boxee can get back on the right track and fully support its product as a platform, not just as a locked-down hardware device.

Experimenting with Pascal on Ubuntu

I’ve been busy lately on a number of projects, one of which is a programming class I am currently taking. The class itself is interesting; we are learning about the different types of programming languages. For our latest project, we were tasked with writing a simple program in Pascal. Pascal isn’t used much any more, since it lacks some of the features most modern languages have, but it is good to know at least a little about it in case you ever run across old Pascal programs in the wild.

The main complaint about Pascal is that its syntax is a bit verbose. There are a number of other complaints, but they are beyond the scope of this howto.

Installing The Pascal Compiler on Ubuntu

Installing Pascal in modern Ubuntu is a cinch. The Free Pascal Compiler, or fpc, is all that you need to get started. It works great on 32-bit or 64-bit systems. Install with:

sudo apt-get install fpc

Any prerequisites will automatically download and install along with fpc.

Getting Started in Pascal

To test the compiler let’s start with a simple Hello World program. Open up hello.pas and enter:

program Hello;

begin
  Writeln('Hello World');
end.

Compile with fpc hello.pas and run:

dave@cerberus:~/Pascal$ ./hello
Hello World

Selection Sort in Pascal

Now that we’ve verified the compiler works, here is the code I wrote for my assignment. Basically, we were asked to selection sort two arrays of differing length. Apparently one of the (bad) original features of Pascal is that you must declare the length of an array, which makes arrays a pain to work with.

In this situation it is just two arrays, so it isn’t too bad. Create your input by making two text files, arrayA.txt and arrayB.txt, with one number per line. The source code for sort.pas is:

program Sort;

var
  A: array[1..10] of Integer;
  B: array[1..20] of Integer;
  F: Text;
  i, j, k, l, m, temp: Integer;

begin
  {Read in array A}
  Assign(F, 'arrayA.txt');
  Reset(F);
  i := 0;
  while not EOF(F) do begin
    Inc(i);
    Read(F, A[i]);
  end;

  {Read in array B}
  Assign(F, 'arrayB.txt');
  Reset(F);
  j := 0;
  while not EOF(F) do begin
    Inc(j);
    Read(F, B[j]);
  end;
  i := 10;
  j := 20;

  {Print out the unsorted arrays}
  WriteLn('Unsorted Arrays:');
  WriteLn('Array A:');
  for k := 1 to i do
    Write(A[k], ' ');
  WriteLn();
  WriteLn('Array B:');
  for k := 1 to j do
    Write(B[k], ' ');
  WriteLn();
  WriteLn('=========================');
  WriteLn('Sorting Arrays...');
  WriteLn('=========================');

  {Selection sort array A}
  for l := 1 to i do
    for m := l + 1 to i do
      if A[l] > A[m] then
      begin
        temp := A[l];
        A[l] := A[m];
        A[m] := temp;
      end;

  {Selection sort array B}
  for l := 1 to j do
    for m := l + 1 to j do
      if B[l] > B[m] then
      begin
        temp := B[l];
        B[l] := B[m];
        B[m] := temp;
      end;

  {Print out the sorted arrays}
  WriteLn('Selection Sorted Arrays:');
  WriteLn('Array A: ');
  for k := 1 to i do
    Write(A[k], ' ');
  WriteLn();
  WriteLn('Array B: ');
  for k := 1 to j do
    Write(B[k], ' ');
  WriteLn();
end.

Compile and run (the linker warning is safe to ignore):

dave@cerberus:~/Pascal$ fpc sort.pas
Free Pascal Compiler version 2.4.0-2 [2010/03/06] for x86_64
Copyright (c) 1993-2009 by Florian Klaempfl
Target OS: Linux for x86-64
Compiling sort.pas
Linking sort
/usr/bin/ld: warning: link.res contains output sections; did you forget -T?
73 lines compiled, 0.1 sec
dave@cerberus:~/Pascal$ ./sort
Unsorted Arrays:
Array A:
28 24 85 55 43 6 23 13 59 71
Array B:
13 37 36 53 24 83 27 42 62 71 9 92 1 41 6 3 88 77 65 67
=========================
Sorting Arrays...
=========================
Selection Sorted Arrays:
Array A:
6 13 23 24 28 43 55 59 71 85
Array B:
1 3 6 9 13 24 27 36 37 41 42 53 62 65 67 71 77 83 88 92
dave@cerberus:~/Pascal$

And there you have it. Compiling Pascal programs on Ubuntu is an easy way to get your feet wet in programming. Pascal is a great beginner’s language, and if you want to learn more there are a number of great resources available for learning Pascal.

Twitter Blocked in Egypt: A View From Inside Their Network

I keep various VPSes across the globe for research purposes. One of those locations is in Egypt. So what happens when I do a normal traceroute?

[root@vps01-eg ~]# traceroute google.com
traceroute to google.com (74.125.230.81), 30 hops max, 40 byte packets
 1  host-x.com.eg (196.x.x.x)  0.033 ms  0.024 ms  0.017 ms
 2  host-x.com.eg (196.x.x.x)  0.780 ms  0.883 ms  1.091 ms
 3  10.x.x.x (10.x.x.x)  0.691 ms  0.700 ms  0.693 ms
 4  172.x.x.x (172.x.x.x)  1.623 ms  1.613 ms  1.603 ms
 5  172.x.x.x (172.x.x.x)  1.751 ms  2.060 ms  2.055 ms
 6  172.20.2.1 (172.20.2.1)  7.860 ms  7.855 ms  7.846 ms
 7  172.20.3.37 (172.20.3.37)  6.387 ms  7.281 ms  6.525 ms
 8  * * *
 9  * * *
10  64.214.151.81 (64.214.151.81)  140.963 ms  140.328 ms  140.305 ms
11  po2-40G.ar4.NYC1.gblx.net (67.16.132.174)  140.821 ms  143.027 ms  141.632 ms
12  72.14.198.93 (72.14.198.93)  129.329 ms  129.689 ms  129.658 ms
13  216.239.43.114 (216.239.43.114)  187.277 ms  132.211 ms  131.428 ms
14  216.239.46.219 (216.239.46.219)  136.966 ms  136.579 ms  136.060 ms
15  64.233.175.115 (64.233.175.115)  136.051 ms  135.554 ms  135.244 ms
16  74.125.230.81 (74.125.230.81)  134.443 ms  136.053 ms  142.846 ms

But when trying to traceroute to Twitter?

[root@vps01-eg ~]# traceroute twitter.com
traceroute to twitter.com (128.242.245.36), 30 hops max, 40 byte packets
 1  host-x.com.eg (196.x.x.x)  0.049 ms  0.037 ms  0.027 ms
 2  host-x.com.eg (196.x.x.x)  0.807 ms  1.028 ms  1.247 ms
 3  10.x.x.x (10.x.x.x)  0.650 ms  0.736 ms  0.784 ms
 4  172.x.x.x (172.x.x.x)  1.942 ms  1.945 ms  1.941 ms
 5  172.x.x.x (172.x.x.x)  2.393 ms  2.391 ms  2.387 ms
 6  172.20.2.1 (172.20.2.1)  8.086 ms  8.091 ms  8.087 ms
 7  172.20.3.37 (172.20.3.37)  10.399 ms  10.519 ms  11.178 ms
 8  * * *
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
<continues till timeout>

So basically, the connection is being dropped somewhere inside Egypt's national private network. So when Vodafone says they didn't block Twitter, you can believe them – this is a block at the national level. When the government controls the internet, this is the risk you run. When you read about the proposed US "Internet kill switch" bill, you can bet your britches that it is bad for freedom of speech and bad for the citizens of that country.

IPs and hostnames blocked to protect the innocent.