Building a Beast!

Aug 6th, 2010

This is another step in my evolutionary quest to build a Linux-based server from cost-effective generic components that can rival the performance of some of the best (and most expensive) commercial systems on the market.

First, let’s look at the kit I’m using: a Chenbro rack-mount case with six hot-swap bays, an ASUS AM3 motherboard with SATA III and USB 3, a 3.2GHz AMD Phenom II X6, 16GB of dual-channel RAM, five WD VelociRaptor 10K rpm hard drives and a decent 2U power supply.

Note that although we’re using SATA III hardware, Linux support for it is currently not terribly good, so we’re going with a SATA II configuration while leaving ourselves open to SATA III when the kernel catches up with the spec.
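
If you want to confirm what the drives actually negotiated, the kernel logs the link speed for each port as it probes them. Here’s a minimal Python sketch (my own addition, not part of the original build notes; it assumes dmesg is readable and still contains the boot-time ata messages) that pulls those lines out, so you can see at a glance whether a port came up at 3.0 Gbps (SATA II) or 6.0 Gbps (SATA III):

    #!/usr/bin/env python
    # Sketch only: scan the kernel ring buffer for negotiated SATA link
    # speeds. Assumes the boot-time "ataX: SATA link up N Gbps" messages
    # are still present in the dmesg output.
    import re
    import subprocess

    def sata_link_speeds():
        """Return (port, speed) pairs parsed from dmesg."""
        log = subprocess.check_output(["dmesg"], universal_newlines=True)
        return re.findall(r"(ata\d+(?:\.\d+)?): SATA link up ([\d.]+ Gbps)", log)

    if __name__ == "__main__":
        for port, speed in sata_link_speeds():
            print("%s negotiated %s" % (port, speed))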

So, moving on to the case. There are many cheap cases knocking around; this one is a little more expensive but well worth the money. Not only do you get the hot-swap system, but the built-in fans, brackets, cables and so on are all well suited to self-building and nothing is left out.

We start by fitting the motherboard, CPU, PSU, memory and hard drives into the case (not forgetting the CPU fan!) and trying an initial boot. This is where I hit my first snag. The initial boot lasted around six seconds before the machine shut itself down; my assumption, based on the red flashing light on the motherboard, was a memory incompatibility, which is not uncommon with modern boards and 4GB DIMM modules. So, what to do? I swapped out pretty much everything in an effort to pin the problem down, until eventually I started the machine by shorting the power pins directly, simply because I hadn’t re-patched the appropriate front-panel cable – and it worked!

Closer examination revealed a problem with the front-panel PCB on the case; it looked like a short somewhere, so I’m afraid I took it apart. In the end it was a solder short across the power switch itself – simply breaking the strand and re-assembling solved the problem, and we were up and running. Isn’t it amazing how much of a mess one can make with cables?! Anyway, so much for the work in progress; here’s what it looks like when everything is put back where it should be and the cables are tidied up.

In this instance I’m using a motherboard with onboard VGA, and one quirk of this configuration is that I only get five internal SATA channels. If you look closely you’ll see I’ve added a Lycom twin-channel PCI Express SATA III expansion card for about twenty quid, and as these boards only come with a single onboard 1G NIC, I’ve also added a Lycom PCI Express Gigabit NIC card. Both are Marvell-chipset devices and seem to run quite happily with Linux.
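
Before worrying about drivers, it’s worth checking that both add-in cards actually show up on the PCI bus. A rough check along these lines (an illustration of mine, assuming lspci is installed and that the Lycom cards identify themselves with a Marvell vendor string) is enough:

    #!/usr/bin/env python
    # Sketch only: list PCI devices whose description mentions Marvell,
    # which should catch both the SATA III card and the Gigabit NIC.
    import subprocess

    def marvell_devices():
        out = subprocess.check_output(["lspci"], universal_newlines=True)
        return [line for line in out.splitlines() if "Marvell" in line]

    if __name__ == "__main__":
        for line in marvell_devices():
            print(line)

If nothing comes back, reseat the cards before blaming the kernel.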

Why do I need seven channels? Well, for a start there are six bays in the hot-swap module, and unfortunately ASUS have started using an optional on-board IDE controller which doesn’t seem to work with Linux at all, so I’ve had to use a SATA CD-ROM drive rather than an IDE one.

So, here it is in all its glory, everything fitted and running except the rack-mount rails.

So why is it a beast?

Well, essentially the performance, compared to machines we’ve seen in the past or indeed to cheap rack-mount servers you might buy from the likes of Dell, is absolutely staggering.

First we have a six-core processor running at 3.2GHz and overclocked to 3.8GHz, producing over 44,000 bogomips in 64-bit mode. Then we have sustained disk throughput of over 500MB/sec using software RAID. Compare this to the average desktop machine, which sports around 4,000 bogomips and can generally chug through data on a hard drive at around 60MB/sec. This thing can easily run 10 virtual machines under KVM, each of which will easily outstrip an average desktop workstation; that’s a lot of power in a 2U package.
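
For anyone who wants to reproduce those figures, here’s a rough Python sketch of how I’d sanity-check them (my own illustration, not the exact benchmarks used here): total bogomips is just the per-core values in /proc/cpuinfo added up, and the throughput figure comes from timing a large streaming read. The device path is an assumption – point it at your own RAID device or a big file, expect to need root for raw block devices, and remember the page cache will flatter a second run.

    #!/usr/bin/env python
    # Sketch only: sum per-core bogomips and time a streaming read.
    import time

    def total_bogomips():
        """Add up the per-core bogomips figures from /proc/cpuinfo."""
        total = 0.0
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("bogomips"):
                    total += float(line.split(":")[1])
        return total

    def sequential_read_mb_per_sec(path, size_mb=512, block=1024 * 1024):
        """Read size_mb megabytes from path and return the rate in MB/sec."""
        start = time.time()
        done = 0
        with open(path, "rb") as dev:
            while done < size_mb * 1024 * 1024:
                chunk = dev.read(block)
                if not chunk:
                    break
                done += len(chunk)
        return (done / (1024.0 * 1024.0)) / (time.time() - start)

    if __name__ == "__main__":
        print("Total bogomips: %.0f" % total_bogomips())
        # Example device path is an assumption -- substitute your own:
        # print("Sequential read: %.0f MB/sec" % sequential_read_mb_per_sec("/dev/md0"))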

Catalogue of parts …

  • Chenbro RM 217 2U Rackmount Case
  • 510W EPS 12V PSU
  • ASUS M4A88TD-V EVO/USB3
  • AMD Phenom II X6 1090T
  • 5 × 450GB SATA III WD VelociRaptor
  • 16GB of DDR3-1333 UDIMM
  • Samsung SATA DVD
  • PE-115 SATA III 2-Port PCIe
  • Lycom 1Gbit Marvell NIC

Now, I’ve just priced up a Dell PowerEdge R510 with an equivalent hardware configuration using Dell’s online pricing tool. I don’t know how they would actually compare head-to-head, but based on benchmarks I’ve carried out on similar systems I think the Beast would win, albeit there shouldn’t be a vast amount in it either way. The cost of the Dell machine is just shy of £8,000; if anyone wants a Beast and doesn’t want to build their own, I can provide one for £2,499 with a warranty and options for on-site maintenance and support.

(Feel free to contact me via the Forums!)

Does sort of make you wonder whether companies aren’t paying a little over the odds for their servers … (!)

Oh, and you’re asking if it’s going to work and be reliable?

Well, these are the boxes that underpin the Linux.co.uk cluster .. so .. :)


2 Responses to “Building a Beast!”

  1. Any ideas on what an updated spec would look like? We usually base our CentOS server kit on Intel chips, so I’d be interested to know what you’re using now.

  2. madpenguin says:

    AMD Phenom II X6 1100T, ASUS M4A89GTD motherboard and an LSI MegaRAID controller. This pushes it to 6622 bogomips, better memory throughput on a wider bus, and around 1GB/sec on the RAID array. I’m also using Samsung Spinpoint F3s, which are a quarter of the price and deliver pretty good sequential performance.
