Usually I try to push clients towards SCP (via a client such as WinSCP); inevitably, though, there are clients who do not understand this newer method of accessing their files securely online, and who for one reason or another insist on using FTP. As they say, the customer is always right?
Anyway, there are currently three mainstream FTP servers available via the yum command on CentOS 6.x: ProFTPd, PureFTPd, and vsftpd. So which FTP server is the best? I will summarize each server below; feel free to skip ahead to the summary.
ProFTPd is a modular FTP server which has been around for a long time. The large control panels (cPanel, DirectAdmin) all support ProFTPd and have for years.
ProFTPd is certainly the most feature-rich of the bunch. There are a ton of plugins available for it, and its creator modeled its configuration architecture on Apache's. It is licensed under the GPL.
Configuration of ProFTPd is fairly straightforward, and a quick Google search turns up plenty of example configuration files.
ProFTPd is available on a wide variety of system architectures and operating systems.
Of the bunch, ProFTPd has the most CVE vulnerabilities listed. The high number is most likely an indicator of ProFTPd's widespread use, which makes it a target for attackers.
PureFTPd’s mantra is “Security First.” This is evident in its low number of CVE entries (see below).
Licensed under the BSD license, PureFTPd is also available on a wide range of operating systems (but not Windows).
Configuration of PureFTPd is simple, and it can even run with no configuration file at all. Although not as widely used as ProFTPd, PureFTPd has many configuration examples available online.
PureFTPd’s “Security First” mantra puts it at the lead in the security department with the fewest security vulnerabilities.
vsftpd, which stands for “Very Secure FTP Daemon,” is another GPL-licensed FTP server. It is a lightweight server built with security in mind.
Its lightweight nature allows it to scale very efficiently, and many large sites (ftp.redhat.com, ftp.debian.org, ftp.freebsd.org) currently utilize vsftpd as their FTP server of choice.
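vsftpd's configuration is also compact. Below is a minimal sketch of what a locked-down /etc/vsftpd/vsftpd.conf might contain; these are real vsftpd directives, but the selection here is illustrative, not a complete hardened configuration:

```ini
# Disallow anonymous logins; allow local system users to log in and upload
anonymous_enable=NO
local_enable=YES
write_enable=YES
# Jail users to their home directories
chroot_local_user=YES
# Log all FTP transfers
xferlog_enable=YES
```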
vsftpd has fewer vulnerabilities listed in CVE than ProFTPd, but more than PureFTPd. This could be because its name implies it is a secure FTP service, or because it is so widely used on large sites, putting it under more scrutiny than the others.
Summary & FTP Server Recommendations
Considering the evaluations above, any of these servers could work in a given situation; generally speaking, though:
- If you want a server with the most flexible configuration options and external modules: ProFTPd
- If you have just a few users and want a simple, secure FTP server: PureFTPd
- If you want to run an FTP server at scale with many users: vsftpd
Of course, everyone’s requirements are different so make sure you evaluate the options according to your own needs.
Disagree with my assessment? Let me know why!
The dirty little secret about SSL certificates is that the tools to become a certificate authority, and therefore to issue your own SSL certificates, are included in a wide variety of systems. Chances are, if you have an Ubuntu or CentOS install, you already have the capability of becoming an SSL certificate authority via OpenSSL:
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
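From there, signing a server certificate with your new CA takes only a few more commands. A sketch, assuming the ca.key and ca.crt generated above (the filenames and the CN are illustrative):

```shell
# Create a key and a certificate signing request for your server
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=www.example.com" -out server.csr

# Sign the request with your CA (prompts for the ca.key passphrase)
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out server.crt

# Confirm the new certificate chains back to your CA
openssl verify -CAfile ca.crt server.crt
```

Of course, no browser will trust certificates from this CA unless ca.crt is manually imported as a trusted root, which is exactly the point of the discussion below.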
The security, and by that I mean trust, that SSL certificates provide in major modern browsers is that only certificates that are signed by a limited number of authorities are trusted. Currently there are about 50 trusted certificate authorities in the world. [Wikipedia] If the certificate that is presented to your browser is signed by one of those CAs, then your browser trusts that it is a legitimate certificate.
Unfortunately, in the real world no computer system should be assumed safe. I would presume that all of the major CAs (Thawte, Comodo, DigiNotar and others) keep their private keys under lock and key, but simply put, no computer system is safe from intrusion.
The Difference Between Encryption and Trust
SSL certificates play two roles in a browsing session – encryption and trust.
When you visit an SSL site over the HTTPS protocol, your session is encrypted between two endpoints. In a typical situation, the connection between your browser and the server is encrypted, so any party trying to sniff your data between the two endpoints cannot see it.
Trust also comes into play when you use an SSL certificate. When you visit mail.google.com, you assume that the certificate is held only by Google, and therefore that the data you are receiving actually comes from mail.google.com, not mail.attacker.com.
The Man-In-The-Middle Attack
A man-in-the-middle attack occurs when your internet connection has been intercepted and someone is actively sniffing your data between the two endpoints. When traffic is unencrypted, this is trivial. When it is encrypted, for example with an SSL certificate, it becomes much more difficult. If the attacker does not modify the data and just wants to see what passes between the two endpoints, it looks something like this:
MITM Intercepts traffic from legitimate HTTPS server -> MITM decodes the content and then re-encodes with its own SSL certificate -> MITM passes all traffic back and forth using the fake SSL certificate on the client’s side, while using the real SSL certificate on the server side.
This all relies on the client's browser accepting the SSL certificate that the MITM presents. This is why the recent fraudulent DigiNotar-issued SSL certificate for *.google.com, used in Iran, is so troubling. Once an attacker has a “legitimate” SSL certificate, a MITM can decode the data without the client even knowing. This violates both the trust and encryption aspects of SSL certificates.
What is being done to protect us against MITM attacks like this?
Google is using its massive number of web crawlers to take inventory of all the SSL certificates it finds. It no doubt uses this in its search rankings as well (a site that bothers to get an SSL certificate is probably a higher-value site), but when integrated into Chrome it can also increase the security of sites. The EFF runs the SSL Observatory, which has a similar function. The fraudulent *.google.com certificate was discovered when Chrome raised an error because the certificate's serial number did not match what Google had crawled previously. This is all well and good, but it does not work in all browsers, it still allows the site to load, and I doubt a non-technically-savvy person would have caught it.
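You can perform a similar check yourself: inspect the certificate a server actually presents and compare its serial number and fingerprint against a previously recorded value. A sketch with openssl (example.com is a placeholder hostname and the filename is illustrative; the first command requires network access):

```shell
# Grab the certificate the server actually presents to clients
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
    | openssl x509 -outform PEM > presented.crt

# Print its serial number and SHA-256 fingerprint for comparison
# against a known-good record
openssl x509 -noout -serial -fingerprint -sha256 -in presented.crt
```

If the fingerprint differs from what you recorded earlier, either the site rotated its certificate or something is sitting between you and the server.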
Revocation lists help to recall bad certificates, but by the time a certificate is discovered and revoked the damage has already been done.
The problem is that the whole CA system is flawed. Putting trust in 50 or so companies is a disservice to end users. Let's say the US government pressures one of the CAs to issue a similar certificate; or a hacker gains access to a CA's root private key.
There is also work on an SSL certificate system built on top of DNSSEC [Ed note: strangely enough, their certificate is currently expired]. The problem again is that the root DNS servers hold a lot of power, and traffic can be spoofed.
Convergence is another tool, from @moxie__, currently available as a Firefox plugin. It allows you to specify trust authorities which can then tell you when a certificate is insecure. I wasn't able to try it since I've upgraded to Firefox 6.0 and it isn't compatible, but it appears to have promise. My concern is that the average user doesn't have enough sense to run any security plugin that requires input. Any final solution to the SSL CA problem will need to be standards-based, not a plugin.
What Can You Do To Help
Support the IETF and other research into alternatives to the current SSL Certificate Authority system. The SSL CA system is broken, and we need a replacement ASAP if we expect to keep our connections encrypted and private.
Firesheep by Eric Butler has just been released to the world. This Firefox plugin does a few things that have been fairly easy to do for a while, rolled up into one easy-to-use package:
- Sniffs data on unencrypted Wireless Networks
- Looks for unencrypted login cookies sent to known popular insecure sites
- Allows you to login to that account with ‘One Click’
So what sites are impacted by default? Amazon.com, Basecamp, bit.ly, Cisco, CNET, Dropbox, Enom, Evernote, Facebook, Flickr, Github, Google, HackerNews, Harvest, Windows Live, NY Times, Pivotal Tracker, Slicehost, tumblr, Twitter, WordPress, Yahoo, and Yelp are among them. A plugin system allows anyone to add their own sites (and cookie styles) to the extension.
Yikes! It goes without saying that this is a major security problem for anyone who uses unencrypted wireless networks. Included on this list are many universities and companies such as Starbucks.
It is a bit funny, because just last night I was talking with my friend Jorge Sierra about this very problem. My university, in fact, is one of those that uses unencrypted wifi. I installed the unencrypted password extension for Chrome to warn me when I am submitting an unencrypted password to a site. I was surprised how often that little box popped up!
Why Open WiFi?
I am not sure. My undergrad university requires that any traffic going over wifi go through their VPN, which encrypts the traffic and prevents this program from working. Is open wifi still the 'poison of choice' for network admins because setting up a VPN-style system is too much for some organizations? Maybe, but it is clearly the wrong answer.
The other clear reason is ease of use, and this is a valid complaint from a user-experience perspective. I've seen plenty of folks have a hard time even with a simple WPA password, and a shared password makes it even harder for a user to sign in. Hotels and coffee houses across the world opt for open wifi because it is simply the easiest for consumers to use. This is a problem we tech people need to solve.
Even if the network is encrypted with WEP or WPA (version 1), these protocols are insecure and can still be cracked with relative ease. This plugin could in fact be modified to include the cracking step as well, covering an even wider range of wireless networks. That brings me to my second point.
Web Developers Must Encrypt All Login Forms
If you run ANY consumer-facing app, you should be passing any and all login information over an SSL-secured connection.
For hosts on a static IP address, you simply need to purchase an SSL certificate. They are seriously under $20 these days (my cost as a reseller is $12) and simple to install. Your code should always use the secure site and never allow a username and password to be sent unencrypted over the network. This matters not only on the end user's connection (possibly over open wifi) but also for end-to-end encryption of the data.
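One common way to enforce this, if your site runs on Apache with mod_rewrite enabled, is to redirect any login request to HTTPS. A hedged sketch for an .htaccess file (the `login` path is an example; adjust it to your application's actual login URL):

```apache
RewriteEngine On
# If the connection is not already HTTPS, redirect login requests to the secure site
RewriteCond %{HTTPS} !=on
RewriteRule ^login https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

Note that a redirect alone does not stop the browser's first, unencrypted request; the real fix is to link to the HTTPS form in the first place and treat the redirect as a safety net.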
Let's say you are running a site on a shared IP address. You usually still have options. Most hosts I know of offer an SSL connection via the shared server name – e.g. https://server.name/~username/. This URL can be used to access your site over SSL, and it is normally included with the service.
Ideally, every site would have an SSL certificate, but a few things need to happen first. People who buy web hosting are almost always looking for the cheapest deal, and they will not get SSL at those bottom-level prices. Hosting needs a paradigm shift: people who run websites need to understand that it is better to have their servers configured and run by people who know what they are doing from a security standpoint, and that paying $10 a year for hosting isn't sustainable. Some say there is significant overhead to running SSL on websites, and it does add some processing and bandwidth overhead, but this is necessary to provide secure services to end users.
In my opinion, you either host your website with a large provider that is set up with a secure infrastructure, or you pay more for an expert to host your website in a secure manner.
Another roadblock is the exhaustion of the free pool of IPv4 address blocks. Web hosts need to move to IPv6 to free up IP addresses so that every website can be on its own IP address, which would make SSL certificate installation much easier.
Back to Reality
What can you do about this problem right now? If you have to use an unencrypted wireless network, you should run some sort of VPN to encrypt your traffic over the air, as this is the most likely place it would be sniffed. You can get a cheap VPS for under $10 a month and proxy all of your traffic over SSH. Not the fastest method, but it will secure your data.
You can also install the Chrome Extension to warn you if you are about to submit form information via an unencrypted website. It isn’t the prettiest extension but it does get the job done.
Hopefully network, web and system administrators will get their acts together and push for a solution to this problem. It is a big one and one that isn’t apparent to the end user until their data, financial details and/or identity is stolen. We can fix this.
I’m not trying to say I had anything to do with Google adding two-factor authentication to Google Apps. I’m really not. But on September 9th, MakeUseOf published an article named How To Secure Your Google Apps Account with Two Factor Authentication. In this article, I wrote:
All of this brings up the question: why doesn’t Google enable a direct way to use two factor authentication with their Gmail, Calendar and other services? Many folks such as myself use Google services for all too many things in their lives, and that login is potentially the most important one of their online life. I would suggest that Google gets onto the security boat and enables this as an option for everyday folks.
Today, 11 days later, Google released their own Two-Factor authentication scheme for Google Apps account (Premier, Education and Government). An example of accurate prognostication? Or just dumb luck? Either way, great job Google!
If you are a Google Apps user, your Administrator will need to enable the feature for your account. Standard edition users will have this feature available shortly. Highly recommended for password and data security if you store your data in the Google cloud.
It is very easy to create a file of random data using the Linux command line. It is much like the command to fill a file with zeros; for example, a 1 MB file:
dd if=/dev/zero of=zero.filename bs=1024 count=1000
You do the same using /dev/urandom:
dd if=/dev/urandom of=random.filename bs=1024 count=1000
Resulting in a 1MB file:
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB) copied, 0.0294247 s, 34.8 MB/s
This transfers random data from the virtual device /dev/urandom to the output file. We use /dev/urandom instead of /dev/random because /dev/random generates random data very slowly. /dev/urandom is much faster and remains very random, if not quite as random as /dev/random. This should work on any system with dd and /dev/urandom.
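To double-check the math: bs=1024 with count=1000 yields exactly 1,024,000 bytes. A quick sketch to confirm the size of the generated file (stat -c is GNU coreutils syntax):

```shell
# Create a 1 MB random file and confirm its exact size in bytes
dd if=/dev/urandom of=random.filename bs=1024 count=1000 2>/dev/null
stat -c %s random.filename   # prints 1024000
```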
Did you ever have a situation where you needed to access a website that had an IP restriction in place? I recently needed to access the web via my university's connection (due to IP restrictions on databases of research papers). They do not have a VPN set up, so this is hard to do off-campus.
I do, however, have access to a Linux machine on campus. I am familiar with port forwarding using SSH, but I had never used it to tunnel web traffic from a browser on Windows. It turns out to be surprisingly easy!
The ssh command to use is:
ssh -C2qTnN -D 8080 username@remote_host
This command sshes to the remote_host, and creates a tunnel on your localhost, port 8080. Note that you need to have private key authentication already set up for this host – it will not work with password authentication.
The descriptions of the switches (from the ssh man page) are:
- -C : Compression
- -2 : Use SSHv2
- -q : quiet!
- -T : Disable pseudo-tty allocation
- -n : Prevents reading from stdin (you need to have private key authentication set up, to prevent password authentication)
- -N : Do not execute a remote command (or launch a shell). Just use the ssh process for port forwarding
- -D : Allocate a socket to listen on the local side. When a connection is made to this port, it is forwarded to the remote machine. This makes SSH work as a SOCKS server. Only root can forward privileged ports like this.
From here, set up Firefox or your browser of choice to use a SOCKS proxy on localhost:8080. The man page says SOCKS4 and SOCKS5 should both work, but I had to use SOCKS v4; SOCKS v5 did not seem to work for me.
The Remote Desktop connection settings for Windows Server 2008, and I believe Windows Vista, includes 3 levels of service:
- Don’t allow connections to this computer
- Allow connections from computers running any version of Remote Desktop (less secure)
- Allow connections only from computers running Remote Desktop with Network Level Authentication (more secure)
At first blush, you would probably choose the "more secure" option. Practically, this mainly means it only allows connections from the latest Remote Desktop software, which ships with Windows Vista. It is probably another attempt by Microsoft to force consumers and businesses into upgrading to Windows Vista. But… I digress.
When connecting with an older Terminal Services (TS) client in XP or even Vista, you will get this message:
“Remote computer requires Network Level Authentication, which your computer doesn’t support”
Not all is lost. There are two ways around this. The first and most obvious solution is to select the less secure option and disable Network Level Authentication (NLA). If you are in an environment that does not allow this change, or you otherwise need to keep Network Level Authentication enabled, you can still get a Remote Desktop connection from Windows XP.
The first step is to download the latest Remote Desktop Client for Windows XP. As of the writing of this article, the latest version is 6.1.
For XP SP3: here
For XP SP2: here
That is not all. On XP, you also need to enable CredSSP – the Credential Security Service Provider.
CredSSP is a new Security Service Provider (SSP) that is available in Windows XP SP3 by using the Security Service Provider Interface (SSPI). CredSSP enables a program to use client-side SSP to delegate user credentials from the client computer to the target server.
Directions on how to do this are available from Microsoft here:
The quick and dirty summary:
- Click Start, click Run, type regedit, and then press ENTER.
- In the navigation pane, locate and then click the following registry subkey:
- In the details pane, right-click Security Packages, and then click Modify.
- In the Value data box, type tspkg. Leave any data that is specific to other SSPs, and then click OK.
- In the navigation pane, locate and then click the following registry subkey:
- In the details pane, right-click SecurityProviders, and then click Modify.
- In the Value data box, type credssp.dll. Leave any data that is specific to other SSPs, and then click OK.
- Exit Registry Editor.
- Restart the computer.
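For what it's worth, the same change can be scripted with reg.exe. A hedged sketch: the subkeys involved are, on a stock system, HKLM\SYSTEM\CurrentControlSet\Control\Lsa (for Security Packages) and HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders. The existing values shown below assume XP SP3 defaults, so check what your own machine lists first rather than pasting these blindly:

```
rem Append tspkg to the Security Packages multi-string (existing defaults assumed)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Security Packages" /t REG_MULTI_SZ /d "kerberos\0msv1_0\0schannel\0wdigest\0tspkg" /f

rem Append credssp.dll to the SecurityProviders list (existing defaults assumed)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders" /v SecurityProviders /t REG_SZ /d "msapsspc.dll, schannel.dll, digest.dll, msnsspc.dll, credssp.dll" /f
```

As with the manual steps, a restart is required afterwards.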
For more information on CredSSP including how to deploy this setting using Group Policy, see the CredSSP page here.
Let me know if you have any other tips or a simpler way to connect to the more secure version of Remote Desktop.