My .com client has a mid-sized server farm at a Savvis IDC in the US. Since they are an e-tailer, Christmas is one of their busiest seasons. As part of this year's ramp-up we've purchased and installed many new servers for the website, most of them Dell hardware.
Each web server node in this install runs a variation of Red Hat Linux, consisting of a base RH-9 install with numerous add-ons and tweaks specific to the company. Dell hardware is quite Linux friendly; my only real gripe is with their choice of network chipsets. Why is it that on an ultra-cheap PE750 you get a good, well-supported Intel 10/100/1000 ethernet controller, but on a $4000 PE1750 you get a Broadcom 10/100/1000 controller?
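If you're ever unsure which chipset a particular box actually shipped with, it only takes a second to check. A quick sketch, assuming the interface is eth0 (adjust to your setup):

    # List the ethernet controllers on the PCI bus
    lspci | grep -i ethernet

    # Show which kernel driver (e.g. e1000 or tg3) is bound to the interface
    ethtool -i eth0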
I have never really had much luck with Broadcom-based network cards; usually I have to fight with the buggy tg3 kernel driver or download and compile a driver module from Broadcom that isn't without its own bugs. Both Cisco and Foundry enterprise-grade switches seem to have auto-negotiation issues with the Broadcom chipset that ships in the Dell hardware. This means I have to force both sides of the link to 100-FDX, which is a pain in the butt.
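For what it's worth, here's roughly how I pin down the Linux side of the link. This is a sketch assuming eth0 and Red Hat-style network scripts; older initscripts may not honor ETHTOOL_OPTS, in which case the ethtool line can just go in rc.local:

    # Force 100 Mbit full duplex right now (lost on reboot)
    ethtool -s eth0 speed 100 duplex full autoneg off

    # To make it stick across reboots, add this line to
    # /etc/sysconfig/network-scripts/ifcfg-eth0
    ETHTOOL_OPTS="speed 100 duplex full autoneg off"

The switch port has to be hard-coded to 100/full as well, otherwise you get a duplex mismatch and miserable throughput.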
Also, I've recently had trouble with the new Dell CERC (Adaptec) SATA RAID cards and the Linux kernel. Both the 2.4.x and 2.6.x aacraid drivers that support the new CERC card seem to crash under heavy I/O, which doesn't really inspire confidence when you're setting up a small fileserver. The last time this happened to me I ended up installing FreeBSD instead due to its much more stable Adaptec RAID controller support. In general, I've actually had much better luck with Dell PERC cards based on AMI MegaRAID chipsets; the Linux driver for these seems quite stable and has been for a long time.
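These days I try to shake out a new RAID card before it goes anywhere near production. Nothing fancy, just a rough sketch of the brute-force I/O hammering I run (the paths and sizes here are placeholders):

    # Write and re-read several gigabytes to stress the controller
    for i in 1 2 3 4 5; do
        dd if=/dev/zero of=/data/stress.$i bs=1M count=2048
        sync
    done
    for i in 1 2 3 4 5; do
        dd if=/data/stress.$i of=/dev/null bs=1M
    done

    # Watch for aacraid timeouts or SCSI resets while it runs
    dmesg | tail -50

If a box survives a night of that without the aacraid driver wedging, I'm a lot more comfortable trusting it with real data.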