System Administration

Evaluating FTP Servers: ProFTPd vs PureFTPd vs vsftpd

Usually I try to push clients towards SCP (via a client such as WinSCP). Inevitably, though, there are clients who do not understand this method of accessing their files securely online and who, for one reason or another, insist on using FTP for their online file access. As they say – the customer is always right?

Anyway, there are currently three mainstream FTP servers available via yum on CentOS 6.x: PureFTPd, ProFTPd and vsftpd. So which FTP server is the best? Each is summarized below, followed by my recommendations.
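
All three are a package install away. As a rough sketch (assuming CentOS 6.x; the proftpd and pure-ftpd packages typically come from the EPEL repository rather than base):

# vsftpd ships in the base CentOS repository
yum install vsftpd
# proftpd and pure-ftpd typically come from EPEL (assumes EPEL is already enabled)
yum install proftpd pure-ftpd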

ProFTPd

ProFTPd is a modular FTP server which has been around for a long time. The large control panels (cPanel, DirectAdmin) all support ProFTPd and have for years.

The most feature-rich of the bunch is certainly ProFTPd. There are a ton of modules available for it, and its creator modeled its configuration architecture on Apache's. It is licensed under the GPL.

Configuration of ProFTPd is fairly straightforward, and example configuration files abound with a quick Google search.
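
As a hedged illustration of that Apache-like directive style (a minimal sketch, not a complete or recommended configuration):

# /etc/proftpd.conf – minimal standalone sketch
ServerName      "Example FTP Server"
ServerType      standalone
Port            21
User            nobody
Group           nobody
# Jail users to their home directories
DefaultRoot     ~
MaxInstances    30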

ProFTPd is available on a wide variety of system architectures and operating systems.

ProFTPd Security

Of the bunch, ProFTPd has the most CVE vulnerabilities listed. The high number is most likely an indicator of ProFTPd's widespread use, which makes it a target for attackers.

ProFTPd CVE Entries: 40
Shodan ProFTPd entries: 127

PureFTPd

PureFTPd's mantra is 'Security First.' This is evident in the low number of CVE entries (see below).

Licensed under the BSD license, PureFTPd is also available on a wide-range of operating systems (but not Windows).

Configuration of PureFTPd is simple; it can even run with no configuration file at all, taking all of its options on the command line. Although it is not as widely used as ProFTPd, many PureFTPd configuration examples are available online.
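
A hedged sketch of that command-line style (flag meanings per the pure-ftpd man page; adjust values to taste):

# -c 50 : at most 50 simultaneous clients
# -C 8  : at most 8 connections per IP address
# -E    : authenticated users only (no anonymous logins)
# -j    : create a user's home directory on first login
pure-ftpd -c 50 -C 8 -E -j &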

PureFTPd Security

PureFTPd's "Security First" mantra puts it in the lead in the security department, with the fewest security vulnerabilities.

PureFTPd CVE Entries: 4
Shodan Pure-FTPd Entries: 12

vsftpd

vsftpd is another GPL-licensed FTP server; the name stands for "Very Secure FTP Daemon." It is a lightweight FTP server built with security in mind.

Its lightweight nature allows it to scale very efficiently, and many large sites (ftp.redhat.com, ftp.debian.org, ftp.freebsd.org) currently utilize vsftpd as their FTP server of choice.
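
For illustration, a hedged sketch of a minimal /etc/vsftpd/vsftpd.conf for local users only (option names from the vsftpd.conf man page):

# Disable anonymous logins; allow local system users to log in and write
anonymous_enable=NO
local_enable=YES
write_enable=YES
# Keep users jailed to their home directories
chroot_local_user=YES
# Log transfers
xferlog_enable=YES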

vsftpd Security

vsftpd has fewer vulnerabilities listed in CVE than ProFTPd but more than PureFTPd. This could be because its name implies it is a secure FTP service, or because its use on large, high-profile sites puts it under more scrutiny than the others.

vsftpd CVE Entries: 12
Shodan vsftpd entries: 41

Summary & FTP Server Recommendations

Considering the evaluations above, any of these servers could work in most situations. Generally speaking, though:

  • If you want a server with the most flexible configuration options and external modules: ProFTPd
  • If you have just a few users and want a simple, secure FTP server: PureFTPd
  • If you want to run an FTP server at scale with many users: vsftpd

Of course, everyone’s requirements are different so make sure you evaluate the options according to your own needs.

Disagree with my assessment? Let me know why!

Simple Disk Benchmarking in Linux Using ‘dd’

A great way to do a real-world disk test on your Linux system is with a program called dd.

dd is a standard Unix utility for copying and converting data; its name is commonly traced back to the "Data Definition" statement in IBM's JCL.

A simple command to do a real-world disk write test in Linux is:

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync

This creates a file named 'test' with all zeroes in it. The flag conv=fdatasync tells dd to sync the write to disk before it exits. Without this flag, dd will perform the write but some of the data will still be sitting in the page cache, not giving you an accurate picture of the disk's true write performance.
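
A rough read test can be done the same way. A minimal sketch (the cache drop is needed so the read actually hits the disk rather than memory; run as root):

# Flush the page cache so the read goes to the disk, not memory
sync
echo 3 > /proc/sys/vm/drop_caches
# Read the test file back and discard the output
dd bs=1M if=test of=/dev/null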

A sample of the run is below, with a simple SATA disk:

[14:11][root@server:~]$ dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.19611 s, 103 MB/s

Now, there are some major caveats to using dd for disk benchmarking. The first is that it only tests sequential filesystem writes. Second, depending on your filesystem (I'm looking at you, ZFS), the file write may simply land in memory to be flushed to disk later; the same goes for a system with a caching RAID controller.

A much more accurate way of benchmarking a disk is to use a tool specifically geared towards the task, which will write much more data over a longer period of time. Bonnie++ is a particularly useful tool for this purpose.
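
As a hedged example of a Bonnie++ run (flag meanings per the bonnie++ man page; the path and size are placeholders):

# -d : directory to run the test in
# -s : total size to write (should be at least 2x RAM)
# -n : number of files for the file-creation test (0 skips it)
# -u : user to run as when started as root
bonnie++ -d /mnt/test -s 8G -n 0 -u nobody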

Now don’t forget to remove that test file.

The Easy CIDR Cheatsheet

Even though I’ve been working with Classless Inter-Domain Routing (henceforth known as CIDR) for years now, I always need a bit of help remembering how many addresses are in each block and how many sub-blocks fit into larger blocks. I have the following printed out for easy reference, and here it is for your geeky enjoyment:

CIDR        Total number    Network             Description:
Notation:   of addresses:   Mask:
--------------------------------------------------------------
/0          4,294,967,296   0.0.0.0             Every Address
/1          2,147,483,648   128.0.0.0           128 /8 nets
/2          1,073,741,824   192.0.0.0           64 /8 nets
/3          536,870,912     224.0.0.0           32 /8 nets
/4          268,435,456     240.0.0.0           16 /8 nets
/5          134,217,728     248.0.0.0           8 /8 nets
/6          67,108,864      252.0.0.0           4 /8 nets
/7          33,554,432      254.0.0.0           2 /8 nets
/8          16,777,216      255.0.0.0           1 /8 net (Class A)
--------------------------------------------------------------
/9          8,388,608       255.128.0.0         128 /16 nets
/10         4,194,304       255.192.0.0         64 /16 nets
/11         2,097,152       255.224.0.0         32 /16 nets
/12         1,048,576       255.240.0.0         16 /16 nets
/13         524,288         255.248.0.0         8 /16 nets
/14         262,144         255.252.0.0         4 /16 nets
/15         131,072         255.254.0.0         2 /16 nets
/16         65,536          255.255.0.0         1 /16 (Class B)
--------------------------------------------------------------
/17         32,768          255.255.128.0       128 /24 nets
/18         16,384          255.255.192.0       64 /24 nets
/19         8,192           255.255.224.0       32 /24 nets
/20         4,096           255.255.240.0       16 /24 nets
/21         2,048           255.255.248.0       8 /24 nets
/22         1,024           255.255.252.0       4 /24 nets
/23         512             255.255.254.0       2 /24 nets
/24         256             255.255.255.0       1 /24 (Class C)
--------------------------------------------------------------
/25         128             255.255.255.128     Half of a /24
/26         64              255.255.255.192     Fourth of a /24
/27         32              255.255.255.224     Eighth of a /24
/28         16              255.255.255.240     1/16th of a /24
/29         8               255.255.255.248     6 Usable addresses
/30         4               255.255.255.252     2 Usable addresses
/31         2               255.255.255.254     Unusable
/32         1               255.255.255.255     Single host
--------------------------------------------------------------
Reserved Space:
	0.0.0.0/8	
	127.0.0.0/8
	192.0.2.0/24
	10.0.0.0/8
	172.16.0.0/12
	192.168.0.0/16
	169.254.0.0/16

Of course I’m not the first one to come up with this; it is modified from the cheat sheet by Samat Jain.
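
If the sheet isn't handy, the numbers can be recomputed on the spot. A minimal sketch using shell arithmetic (a /22 in this example):

# Number of addresses in a /22: 2^(32-22) = 1024
echo $((2 ** (32 - 22)))
# How many /24 nets fit in a /22: 2^(24-22) = 4
echo $((2 ** (24 - 22)))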

Let me know if you have any improvements or suggestions.

The Dirty Little Secret About SSL Certificates

The dirty little secret about SSL certificates is that:

Anyone can become a certificate authority.

The tools to become a certificate authority, and therefore to sign your own SSL certificates, are included in a wide variety of systems – chances are that if you have an Ubuntu or CentOS install, you already have the capability of becoming an SSL certificate authority via OpenSSL.

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
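
With that CA key in hand, signing a certificate for any hostname you like is just a few more commands. A hedged sketch (the file names are illustrative):

# Generate a key and a signing request for the server
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
# Sign the request with your home-grown CA
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt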

The security, and by that I mean trust, that SSL certificates provide in major modern browsers comes from the fact that only certificates signed by a limited number of authorities are trusted. Currently there are about 50 trusted certificate authorities in the world. [Wikipedia] If the certificate presented to your browser is signed by one of those CAs, then your browser trusts that it is a legitimate certificate.

Unfortunately, in the real world no computer system should be assumed safe. I would presume that all of the major CAs (Thawte, Comodo, DigiNotar and others) keep their private keys under lock and key, but simply put, no computer system is safe from intrusion.

The Difference Between Encryption and Trust

SSL certificates play two roles in a browsing session – encryption and trust.

When you visit an SSL site over the HTTPS protocol, you are encrypting your session between two places. In a typical situation, the connection between your browser and the server is encrypted, so any party trying to sniff traffic between the two endpoints cannot see your data.

Trust also occurs when you use an SSL certificate. When you visit mail.google.com, you assume that the certificate is only held by Google and therefore the data you are actually receiving is from mail.google.com, not mail.attacker.com.

The Man-In-The-Middle Attack

A man-in-the-middle attack occurs when your internet connection has been intercepted and someone is actively sniffing your data between the two endpoints. When traffic is unencrypted, this is trivial. When it is encrypted, for example with an SSL certificate, it becomes much more difficult. If the attacker just wants to see what is passing between the two endpoints, without modifying it, it looks something like this:

MITM Intercepts traffic from legitimate HTTPS server -> MITM decodes the content and then re-encodes with its own SSL certificate -> MITM passes all traffic back and forth using the fake SSL certificate on the client’s side, while using the real SSL certificate on the server side.

This all relies on the client’s browser accepting the SSL certificate that the MITM presents. This is why the recent fraudulent DigiNotar SSL certificate for *.google.com, used in Iran, is so troubling. Once an attacker holds a “legitimate” SSL certificate, a MITM can decode the data without the client even knowing. This violates both the trust and encryption aspects of SSL certificates.
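
One manual spot-check, assuming the openssl command-line tool is available, is to pull the certificate a site actually presents and compare its issuer and fingerprint from a few different networks:

# Fetch the presented certificate and print its issuer and SHA1 fingerprint
echo | openssl s_client -connect mail.google.com:443 2>/dev/null | openssl x509 -noout -issuer -fingerprint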

What is being done to protect us against MITM attacks like this?

Google is using its massive fleet of web crawlers to take inventory of all the SSL certificates it finds. It no doubt includes this in its search rankings as well (a site that bothers to get an SSL certificate is probably a higher-value site), but the data can also be used to improve security when integrated into Chrome. The EFF also runs the SSL Observatory, which has a similar function. The fraudulent *.google.com certificate was discovered because Chrome raised an error when it noticed the serial number of the certificate did not match what Google had crawled previously. This is all well and good, but it does not work in all browsers, it still allows the site to load, and I doubt a non-technically-savvy person would have caught it.

Revocation lists help to recall bad certificates, but by the time a certificate is discovered and revoked the damage has already been done.

The problem is that the whole CA system is flawed. Putting trust in 50 or so companies really is a disservice to end users. Imagine the US government putting pressure on one of the CAs to issue a similar certificate, or a hacker gaining access to a CA’s root private key.

There is also some work on an SSL certificate system tied into DNSSEC [Ed note: strangely enough, their certificate is currently expired]. The problem again is that the root DNS servers hold a lot of power, and traffic can be spoofed.

Convergence is another tool from @moxie__ which is currently available as a Firefox plugin. It allows you to specify trust authorities which can then tell you when a certificate is insecure. I wasn’t able to try it as I’ve upgraded to Firefox 6.0 and it wasn’t compatible, but it appears to have promise. My concern is that Joe user doesn’t have enough sense to run any security plugins that require any type of input. Any final solution to the SSL CA problem will need to be standards-based and not solved as a plugin.

What Can You Do To Help

Support the IETF and other research into alternatives to the current SSL Certificate Authority system. The SSL CA system is broken, and we need a replacement ASAP if we expect to keep our connections encrypted and private.

Zalman ZM-VE200 Review – You Need This External Hard Drive Enclosure

Fellow tech friends, I have a find for you. If you have a job, or hobby, or whatever where you find yourself meddling with a bunch of .iso files, whether to boot off of them or just to access the data on them, then I have the device for you.

It all started after I backed the Kickstarter project for the isostick. Having never heard of a device before that would accept .iso images on a filesystem and then present them to the computer as a disc drive, I thought this was (and is) a pretty cool idea.

When browsing through the comments, I saw folks mentioning that this is just like the Zalman ZM-VE200 external hard drive enclosure. So of course I decided to do some research on this newly discovered gadget.

Overview

ZM-VE200 Size Comparison

Size Comparison: ZM-VE200 on Lower Left, Normal External Drive on Top, External Disc Drive on Lower Right.

The Zalman ZM-VE200 is, at its core, an external SATA hard drive enclosure. These have been around for a long time, letting you put a hard drive in an external enclosure and access the filesystem via a USB port. They are great when you need to transfer a large amount of data and have an internet connection that isn’t up to the task in any reasonable amount of time.

This enclosure can work just like that, as a plain external USB drive. However, Zalman has added extra circuitry that provides features I frankly haven’t seen anywhere else.

Zalman’s Additional Hardware Magic

The additional circuitry allows you to select an ISO which is present on the drive, and load it just as if it were a DVD or CDROM on the system. This means that instead of carrying around discs to install operating systems on, you simply put the ISOs on the drive and then select the correct ISO when you boot.

The Zalman ZM-VE200 Screen

When you boot or plug in the drive you actually have three modes available to you: Disc, Hard Drive or Dual. In Disc mode, files you place in the _ISO folder on the drive are selectable via the wheel on the side of the device. As shipped, the drive needs to be formatted as NTFS in order to show the ISO files; with updated firmware you can use either FAT or NTFS.

Operation

Plugging in the hard drive

The first thing you need to do is install a SATA drive into the enclosure. This is pretty much a no-brainer; it only plugs in one direction. Slide the drive and circuitry back into the case and use the included screws to secure the case to the drive/circuit board. The screws are hidden behind little rubber seals on the edge of the case.

The Menu Navigation Wheel

When plugging it into the system, you interact with the drive in a few ways. The initial scroll wheel position, when powered up, determines the mode:

  • Hold Up to enter ODD or “Disc” mode
  • Hold Center to enter Dual mode (both HDD and ODD modes)
  • Hold Down to enter HDD only mode

eSATA Port on the ZM-VE200

An eSATA port and cable are also supplied. I did not use this mode in my testing; it requires that you still plug in the USB cable for power, and I would assume you would see faster transfer rates in eSATA mode.

Finally there is a small switch that enables write-protect mode. This makes it so that you won’t be able to accidentally change the data on the drive.

The only problem I had with the drive was when I first plugged it into my system via a USB extension cable. The drive did not even turn on; it just clicked a little bit. I changed USB ports and then it seemed to work fine. Also, I’ve run into a situation where I plugged the drive into a system that was off and then booted it, and the screen lit up but stayed blank. I believe this is because the drive requires more power than some USB ports can deliver, so if you have problems with it, try another USB port first to see if that fixes your problem.

I also had occasional problems mounting an ISO file; booting into ODD mode (holding the scroll wheel “up”) usually fixed it.

Final Thoughts

Installing operating systems from this drive is notably faster: the transfer speed you see off of the virtual “disc” is much higher than that of a normal CD or DVD drive. While there were some technical hiccups and gotchas, the drive works very well.

This “gadget” is a must-have tool for system technicians who find themselves constantly burning ISOs to discs. My co-worker who initially made fun of my fondness for new gadgets has since said I’ll have to pry this drive from his cold, dead hands. It is so useful that I am now recommending it to all of my sysadmin friends. At $50 it is a steal and you will even make your money back because you won’t be burning so many discs.

Official Zalman ZM-VE200 Product Site

Buy From Amazon

(Updated Amazon link to SE product on 5/12/2012 – Thanks Skip!)

What a Resilver Looks Like in ZFS (and a Bug and/or Feature)

At home I have an (admittedly small) ZFS array set up to experiment with this awesome newish RAID technology. I think it has been around long enough that it can now be used in production, but I’m still getting used to the little bugs/features, and here is one that I just found.

After figuring out that 2 of the 3 1TB Seagate Barracuda hard drives in the array had failed, I had to write the entire array off as a loss and test out my backup strategy. Fortunately it worked and there was no data loss. After receiving the replacement drives from Seagate, I rebuilt the ZFS array (using raidz again) and went along my merry way. After another 6 months or so, I started getting some funky results from the remaining original drive. Thinking it might have the same issue as the others, I removed it and ran SeaTools on it (by the way, SeaTools doesn’t offer a 64-bit Windows version – what year is this?).

The drive didn’t show any signs of failure, so I decided to wipe it and add it back into the array to see what happens. That, of course, is easier said than done.

One of the problems I ran into is that I am running ZFS on Ubuntu via FUSE. Ubuntu has this nasty habit of shuffling drive identifiers around when USB devices are plugged in, so now when this drive is plugged in it shows up as /dev/sde instead of /dev/sdd, which is now occupied by a USB-attached drive.

No problem, I figure, I’ll offline the bad drive in the zpool and replace it with the new drive location. No such luck.

First I offlined the drive using zpool offline media /dev/sdd:

dave@cerberus:~$ sudo zpool status
  pool: media
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        media       DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            sdd     OFFLINE      0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

Now that it’s offline, I thought you should be able to detach it. No such luck – since it is a ‘primary’ device of the zpool it does not allow you to remove it.

dave@cerberus:~$ sudo zpool detach media /dev/sdd
cannot detach /dev/sdd: only applicable to mirror and replacing vdevs

What they want you to do is replace the drive with another drive. This drive (the same drive, with all info wiped from it) is now on /dev/sde. I try to replace it:

dave@cerberus:~$ sudo zpool replace media /dev/sdd /dev/sde
invalid vdev specification
use '-f' to override the following errors:
/dev/sde is part of active pool 'media'
dave@cerberus:~$ sudo zpool replace -f media /dev/sdd /dev/sde
invalid vdev specification
the following errors must be manually repaired:
/dev/sde is part of active pool 'media'

Even with -f it doesn’t allow the replacement, because the system thinks that the drive is part of another pool.

So basically you are stuck if you are trying to test a replacement with a drive that has already been used in the pool. I’m sure I could replace it with another 1TB disk, but what is the point of that?

I ended up resolving the problem by removing the external USB drive, which put the wiped drive back into its original /dev/sdd slot. Without my issuing any commands, the system recognized the drive as the old one and started resilvering it.

root@cerberus:/home/dave# zpool status
  pool: media
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress for 0h9m, 4.62% done, 3h18m to go
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0    13  30.2G resilvered
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

It is interesting to see what it looks like from an I/O perspective. The system reads from the two good drives and writes to the new (bad) one. Using iostat -x:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          29.77    0.00   13.81   32.81    0.00   23.60

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.80    0.00    33.60     0.00    42.00     0.01   15.00  15.00   1.20
sdb               0.00     0.00  625.00    0.00 108033.20     0.00   172.85     0.56    0.90   0.49  30.80
sdc               0.00     0.00  624.20    0.00 107828.40     0.00   172.75     0.50    0.81   0.47  29.60
sdd               0.00     1.20    0.00  504.40     0.00 107729.60   213.58     9.52   18.85   1.98 100.00

It seems that ZFS is able to identify a hard drive by its GUID somehow but doesn’t automatically use it in the pool. This makes it so that you can’t test a drive by removing it, wiping it, and putting it into a new location. Basically, ZFS assumes that your drives are always going to be at the same /dev location, which isn’t always true; as soon as you attach a USB drive in Ubuntu, things shift around.
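
One hedged way around the shifting names, assuming your distribution exposes /dev/disk/by-id and you are willing to rebuild the pool, is to create the pool against the persistent by-id links instead of the sdX names. The device names below are purely illustrative:

# Persistent names survive USB devices coming and going
ls -l /dev/disk/by-id/
# Hypothetical example – substitute your own drive IDs
sudo zpool create media raidz \
    /dev/disk/by-id/ata-ST31000528AS_SERIAL1 \
    /dev/disk/by-id/ata-ST31000528AS_SERIAL2 \
    /dev/disk/by-id/ata-ST31000528AS_SERIAL3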

After the resilver is complete, the zpool status is:

root@cerberus:/home/dave# zpool status
  pool: media
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed after 0h16m with 0 errors on Sun May 15 07:35:46 2011
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0    13  50.0G resilvered
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors

You can now clear the error with:

root@cerberus:/home/dave# zpool clear media
root@cerberus:/home/dave#

Zpool status now shows no errors:

root@cerberus:/home/dave# zpool status
  pool: media
 state: ONLINE
 scrub: resilver completed after 0h16m with 0 errors on Sun May 15 07:35:46 2011
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0  50.0G resilvered
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors

So now the question I have is this: are you able to manually update or remove the drive status somewhere on your system? How did ZFS know that this drive had already been part of a pool? I zeroed the drive and verified with fdisk that there were no partitions on it. Is there a file somewhere on the system that stores this information, or is it written somewhere on the drive?
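
My understanding (worth verifying against your ZFS version) is that pool membership lives in ZFS labels written at both the beginning and the end of the device itself, which is why wiping just the partition table doesn’t remove it. A hedged sketch for inspecting and, on newer ZFS builds, clearing those labels:

# Dump the ZFS labels stored on the device itself
sudo zdb -l /dev/sde
# Newer ZFS implementations can clear the label directly
# (this subcommand may not exist in older zfs-fuse builds)
sudo zpool labelclear -f /dev/sde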

ZFS is great, but it still has some little issues like this that give me pause before using it in a production system. Then again, I suppose all massive disk array systems have their little quirks!

Disabling The hald-addon-storage Service On CentOS/RedHat

hald – the Hardware Abstraction Layer daemon – runs several processes in order to keep track of what hardware is installed on your system. This includes polling USB drives and ‘hot-swap’ devices to check for changes, along with a host of other tasks.

You might see it running on your system as follows:

2474 ?        S      0:00  \_ hald-runner
2481 ?        S      0:00      \_ hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
2487 ?        S      0:00      \_ hald-addon-keyboard: listening on /dev/input/event0
2495 ?        S     41:47      \_ hald-addon-storage: polling /dev/hdc

If your system is static and the devices do not change, you can actually disable this service using a policy entry.

Create a file in your policy directory, for example /etc/hal/fdi/policy/99-custom.fdi. Add the text:

<?xml version="1.0" encoding="UTF-8"?>

<deviceinfo version="0.2">
    <device>
        <match key="storage.removable" bool="true">
            <remove key="info.addons" type="strlist">hald-addon-storage</remove>
        </match>
    </device>
</deviceinfo>

Save the file and restart hald using /etc/init.d/haldaemon restart.

You will find that the service is no longer polling your hardware.
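
A quick way to confirm it is gone (a simple sketch):

# Before the policy change this showed a hald-addon-storage line; afterwards it should not
ps ax | grep [h]ald-addon-storage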

Of course, to turn it back on, remove that policy entry and restart the haldaemon again; it will be back in service.

Solution Credit: Linuxforums User cn77