The results of the investigation into the recent DigiNotar SSL CA breach read like a laundry list of "Things Not To Do" on your critical servers and networks: no antivirus, no centralized logging, and outdated, vulnerable software exposed to the Internet, among other items. What's funny about that list is that if the breached systems had been part of DigiNotar's PCI cardholder data environment, DigiNotar could never have passed a PCI QSA audit, as all three items I noted above are required by the PCI DSS. While I couldn't verify that DigiNotar accepts credit card payments for its SSL certificates, it almost assuredly does (or did!). It almost certainly had undergone a PCI QSA audit, too.
What are we to conclude from this? If my two assumptions are correct, then it would appear that DigiNotar protected the servers and networks involved in accepting and processing credit card transactions better than it protected the servers and networks that generated SSL certificates.
There is no reason not to have antivirus loaded on every server and workstation, and no reason not to conduct regular vulnerability scans of your external services to identify vulnerable software. For medium-sized businesses (50 or more users, 2 or more IT guys), one person in IT should be designated to watch vendor websites for security announcements and new releases for every piece of Internet-exposed software in use. The organization should be committed to protecting at least the external services, even if it can't spare the resources to do the same for the internal network.
On to the central point of this blog post: Centralized Logging. This is where things get more involved and difficult. It is not hard to purchase and set up a machine with 1TB of drive space that can adequately serve as a collector of logging data, and it is not much harder to configure most common systems (switches, routers, firewalls, and Windows and Unix servers) to log to it. The difficulty lies in making that data useful in near-real time, rather than only as a source of information after a breach. To make the data useful you need an event correlator, which is usually part of a larger product category called SIEM (Security Information and Event Management). To date, I have not come across any SIEM products that are affordable for most small businesses, and that is to say nothing of the cost in personnel time to properly wield such a product. From what I have seen, the open-source SIEM products are even harder to configure and use than the commercial ones, so I can't recommend any free (or low-cost) alternatives.
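To make the event-correlation idea concrete, here is a minimal sketch in Python. This is an illustration of the concept, not a SIEM: the rule (five failed logins from one source within sixty seconds), the threshold, the window, and the tuple-based input format are all assumptions I've chosen for the example.

```python
from collections import deque

# Illustrative rule parameters (assumptions, not standards):
THRESHOLD = 5   # failed logins...
WINDOW = 60     # ...within this many seconds trips the rule

def correlate(events):
    """events: iterable of (timestamp_seconds, source_ip) tuples,
    one per failed login. Returns the set of source IPs that
    produced THRESHOLD or more failures inside any WINDOW-second span."""
    recent = {}     # source_ip -> deque of timestamps still inside the window
    flagged = set()
    for ts, ip in sorted(events):
        q = recent.setdefault(ip, deque())
        q.append(ts)
        # Drop timestamps that have aged out of the sliding window.
        while q and ts - q[0] > WINDOW:
            q.popleft()
        if len(q) >= THRESHOLD:
            flagged.add(ip)
    return flagged

# Example: one noisy host hammering logins, one host with a single failure.
events = [(t, "10.0.0.99") for t in range(0, 50, 10)] + [(5, "10.0.0.7")]
print(correlate(events))  # only the noisy host trips the rule
```

A real correlator does this across many log sources and many rules at once, which is exactly where the product cost and the personnel cost come from.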
So, what is a smaller company to do? That's a good question. If you can afford a SIEM product, buy one and pay a Managed Security Services Provider (MSSP) (like True!) to set up and manage the device. If you can't afford a full SIEM product, at least purchase an inexpensive server with two 1TB drives, install Ubuntu, put the drives in a software RAID-1 configuration, and set up a syslog daemon (syslog-ng is perfect) to collect logs from the network. That way, if you are breached, you (or the investigator you hire, like True!) will have far more information at your disposal to determine the extent of the breach.
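For the collector itself, a minimal server-side syslog-ng configuration might look like the fragment below. This is a sketch, assuming a syslog-ng 3.x install on Ubuntu; the port and the file-naming macros are conventional choices, not requirements.

```
# /etc/syslog-ng/syslog-ng.conf (fragment) -- accept logs from the network
source s_net {
    udp(ip(0.0.0.0) port(514));
    tcp(ip(0.0.0.0) port(514));
};

# One file per sending host per day, stored on the RAID-1 volume
destination d_hosts {
    file("/var/log/remote/$HOST/$YEAR-$MONTH-$DAY.log" create_dirs(yes));
};

log { source(s_net); destination(d_hosts); };
```

Each switch, router, firewall, and server is then pointed at the collector's IP on port 514 through whatever remote-syslog setting it offers, and the logs accumulate per host, ready for an investigator if they're ever needed.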