Storage Upgrade Stage 3 - Building Butters

Finally, after two weekends and hours of work, I get to do what I started out trying to do - building a six drive RAID 5 array out of terabyte hard drives. Cartman's old 5U case was all cleared out, I had all the components, and now all I had to do was assemble the beast. Well, almost - first there was a little problem with the Adaptec 3805 controller.

The 3805 is actually a SAS controller, using mini-SAS plugs that handle four drives each. On the web site the specification says that the board comes with a pair of mini-SAS to SATA cables, but there were no such cables in my box. Turns out I had ordered the OEM version of the board (the only one available), and it had no cables in it - which makes sense, it's an OEM board, and the OEM is always going to want to do something unique with it.

Fine, I'll order my own cables. But NOBODY has stock on mini-SAS cables. I flip out at the supplier, and he calls Adaptec, and they offer to give me a pair of cables for free (which was mighty nice of them) if I'll pay the shipping. Totally worth it - I had ordered the wrong product and they were willing to fix it. A FedEx overnight shipment later, I had cables.

There's so much room in the 5U case that things came together rather quickly. The motherboard dropped in without a hitch, as did the drive array caddy. Then came the tricky bit...

The drives don't fit!

Like Cartman, Butters has a separate pair of mirrored boot drives, although in this case the drives are 7200rpm SATA II drives, rather than the Ultra-160 SCSI drives of Cartman.

In the 5U case, the boot drives hang from the card retaining bar... and the first hitch of the build occurs. In a test hanging (shown to the right), the pair of drives hit the CPU fans. This is bad.

When a situation like this arises, first you curse. Then the full reality of the situation hits - all the work you've done for the past few days may have been for naught, this machine won't fit into this case.

I ran into the same issue with Cartman during his rebuild - I had to modify the cooling blocks to use lower-profile fans to avoid conflicting with the hanging hard drives. But I didn't have that option this time... no handy low-profile fans, no alternative cooling blocks. I needed a different solution.

Solution - move the drives.

And here's the solution - move the drives. It's not like the new machine is full of cards anyway; it has exactly one, the Adaptec 3805 RAID controller. And that card is low-profile.

So I removed all the card holders from the bar and moved the mounting bracket so that the drives would hang away from the CPU fans. Problem solved. 

That was really the only hitch in the assembly of Butters, and it only took me a few minutes to solve it. I like this new drive position better, it puts the drives right inline with the main fan, so there'll be plenty of cooling air coming over those drives.

 A little more fussing with wiring and I was on my way with a successful boot of the new motherboard...

Notice that I plugged one of the 1TB drives into the machine as well, getting ready for the transfer of all that data back onto a shiny new 5TB array.

Ah, if only it was that easy. First I had to get a server install done. Which you'd think would be easy - a brand new motherboard, it should be no problem to get things up and going with Windows Server 2003, right?

Wrong.

Since I was planning to use this machine to run virtual machines, of course I wanted a 64 bit operating system on it - there's 16GB of RAM in there, how else would I address it all?
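
Just to put numbers on that (ignoring PAE tricks, which I had no interest in), here's a quick sketch of the address-space arithmetic:

    # A 32-bit OS can directly address 2**32 bytes - 4GB - which 16GB of RAM blows right past.
    print(2**32 / 2**30)   # 4.0 GB addressable with 32 bits
    print(2**64 / 2**30)   # 64-bit address space: roughly 17 billion GB of headroom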

So I installed Windows Server 2003 SP2 64 bit edition. And the installation went cleanly, but didn't recognize the pair of built-in gigabit NICs. I wasn't all that surprised, after all, brand new motherboard, I'd need to install the drivers separately. Now if only I could find them.

On the Tyan web site you can see all sorts of drivers for the S2927, including drivers for Windows 2003 Server 64 bit, so you'd think there would be NIC drivers there. In fact, under the heading "Driver Packs" there is a pack for Windows 2003 Server 64 bit which SAYS it has LAN/NIC drivers. However, if you actually download it, there are no NIC drivers in there. Open up the zip file and the README doc lists everything in the driver pack - and it does NOT include the NIC drivers.

I tried installing it anyway, but to no avail - the NICs were still unrecognized.

However, the Adaptec software worked great AND I was able to start building the 5TB array. But the array was going to take more than 24 hours to prep itself, so it was worth tinkering with other configurations before settling on this one.

So I headed over to the nVidia site... perhaps the reference drivers for the nVidia chipset would handle the NICs better. The chipset on this motherboard is the nVidia nForce Professional 3600 series. And lo and behold, there ARE reference drivers for Windows 2003 Server 64 bit. But they TOO could not recognize the NICs.

I even tried the prerelease tool on the download page to detect what drivers to use, and it recommended the Vista drivers! Figuring it couldn't be any worse, I tried them too... and this time the NICs were recognized, but were not functioning.

So now I'm afraid - afraid that my motherboard is defective. But with nothing left to lose, I thought "what the heck, let's try Windows 2008 Server!" I had Release Candidate 0 handy; it was worth a shot.

Windows 2008 Server RC0 is a massive 2.5GB - I had to make a DVD for the install. But it installed flawlessly and recognized the motherboard, including the NICs. I was fully operational. And Windows 2008 Server is beautiful... but it's a release candidate!

With the motherboard finally working perfectly, I installed the Adaptec RAID controller software. It installed, and it recognized the controller AND the drives. For the first time I had everything working, admittedly on a release candidate. How could I resist? I configured the 5TB array and let it rip.

The build ran overnight and finished perfectly. I had a 5TB drive array!
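
In case the arithmetic looks odd - six 1TB drives, but only a 5TB array - that's just RAID 5 doing its job: one drive's worth of space goes to parity. A quick sketch:

    # RAID 5 usable capacity: (number of drives - 1) x drive size.
    drives, size_tb = 6, 1.0
    print((drives - 1) * size_tb)   # 5.0 TB usable out of six 1TB drives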

I shutdown Butters, closed it up and stuck it in the rack.

Powered it up again, but when it booted, there was no drive array! I rebooted again, still no array. What was going on? Pulled Butters back out of the rack, opened it up, booted it again... still no array.

I went into the 3805 BIOS to configure the array and it didn't show up until I selected "Refresh Array." Then it showed the complete array, in perfect condition!

Baffled, I exited the BIOS settings which caused a reboot... and the array vanished again. This time when I finished booting into Windows, I opened up the Adaptec configuration manager... it showed a failed controller and failed drives. I selected "Refresh Array", and it still showed everything as failed - but Windows suddenly found the array! The drive letter popped up and everything acted fine.

Oddly enough, I was a bit suspicious.

So I started loading data onto the array. I wasn't going to erase any of the backups yet - essentially, I was waiting for it to fail.

Loading went much faster than backing up, since the drive was plugged directly into the machine. Within a few hours, I had everything reloaded.

I was still suspicious.

I configured the file shares and got both the music and television archives up and running. They worked perfectly.

Now I really had a problem - I was running a release candidate OS, the configuration software says the array has failed (although the BIOS says it's fine, once you refresh), but Windows itself is perfectly happy with it. And my family was happy to have the music and video back online. I couldn't very well take it back down. As long as it didn't reboot, the array seemed to stay up. Scary.

I sent a tech support request to Tyan, hopefully they'll have something useful to say. I really ought to go back to Windows 2003 Server 64 bit, but only if I can get the NICs to work.

Sunday, October 14, 2007 4:33:23 PM (Pacific Standard Time, UTC-08:00) - Comments [1]

 

Storage Upgrade Stage 2 - Moving Cartman

My parts arrived during the week, but it wasn't until the weekend that I actually had time to start putting things together.

However, before I could build the new machine with the new parts, I had to get Cartman out of the 5U case, which meant moving him into the 4U case I had. The 4U case was populated with a rather old Linux machine that I hadn't powered up in a couple of years. So all of that came out, leaving a clean 4U case, ready for loading:

The 4U Case Emptied

Once the case was clear, out came the rack again, and Cartman was shut down and pulled. I put Cartman on a separate table from my regular service table and gradually disassembled it, moving the parts into the 4U case.

Cartman motherboard in 4U case

Here you can see Cartman's motherboard loaded into the 4U case. It's an old Tyan board with dual P3 processors and 512MB of RAM. A great board in its day, it's terribly dated now.

Notice also a pair of PCI-X slots, both of which are normally occupied - one with the Adaptec 29160 SCSI controller, the other with the Adaptec 2810SA RAID controller. The RAID controller just runs the big SATA storage array; the boot mirror runs off the SCSI controller, as do the DVD drive and the external tape drive.

The drive array sits in the big gap on the left side of the case (right side if you're looking in the front), and the pair of boot drives live in the little gap full of wires between the drive array and the DVD.

Mounting the motherboard is always the trickiest bit of the build; once that's done, the rest goes quickly. The only difference between Cartman in the 5U case and Cartman in the 4U case is a four drive RAID array instead of six drives.

Cartman up and running in the 4U case

This is what Cartman looks like fully loaded into the 4U case and booting up. You can see the SCSI ribbon cable running to the pair of drives in the center front of the case and the four blue SATA cables running from the RAID controller to the array chassis. Even the DVD is SCSI, although where the drives are SCSI-160 LVD, the DVD is SCSI-40.

The other two cards in Cartman are the video and gigabit ethernet. This is an old machine, very little is onboard. But you can appreciate why I have to replace Cartman. All those things we're used to having right on the motherboard have to be added card-by-card.

Cartman was none the worse for wear after the move, still working the same way, with just the RAID array down.

The new drive array

In fact, as you can see from this shot of the front of the case, I did not install the terabyte drives into the chassis, since I currently have backup data on three of the drives, and I need six for the new array in the 5U case.

When I'm finally able to clean off those backup drives (when I believe everything is stable), I'll build a RAID 1+0 array using the old Adaptec controller. That will stay under the 2TB limit with the 1TB drives and give me the reliability and performance I want.
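
The back-of-the-envelope on that plan, for the record (a sketch, using the roughly 2.1TB ceiling the old Adaptec controller imposes):

    # RAID 1+0 mirrors pairs of drives, then stripes across the pairs - half the raw space is usable.
    drives, size_tb = 4, 1.0
    usable_tb = drives / 2 * size_tb
    print(usable_tb)   # 2.0 TB from four 1TB drives, inside the old controller's ~2.1TB limit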

Ultimately, Cartman will be retired, but really only the motherboard. All the drives are fresh; what's needed is a new multi-core, multi-processor motherboard with a ton of RAM.

I'm thinking that since the 5U machine has an AMD motherboard, I'll put an Intel board in this machine. Probably something from ASUS, we'll see.

Cartman went back into the server closet without incident... tomorrow the 5U case gets a new motherboard, and a new machine will be born: Butters!

Saturday, October 13, 2007 5:03:23 PM (Pacific Standard Time, UTC-08:00) - Comments [0]

 

Upgrading Storage Capacity on Cartman

So if you didn't get the hint, I'm upgrading the capacity of my servers in my server closet.

Last time I upgraded capacity it was in Cartman, migrating from a 400GB six drive RAID array to a 2TB six drive RAID array. With the new 1TB hard drives, I was ready to move to a 5TB six drive RAID array.

I bought ten Seagate Barracuda ES.2 1TB drives, six for Cartman, and four to go into a different rebuilt server.

So stage 1, which actually started back on Friday, was to back everything up... rather than take the chance of pulling Cartman and plugging into him directly, I hooked up the 1TB drives, one at a time, to Phillip, one of my workstations, and copied everything off. It took until Sunday to finish the copy across three drives.

Backing up onto 1TB drives.

Here you see one of the drives getting loaded across the network the slow, but low risk way. Even with gigabit ethernet, transferring data from the old array on Cartman through the network, into Phillip and then out via SATA takes a long time.

By the way - they may call them 1TB drives, but they format to 933GB. That whole 1000 bytes vs. 1024 bytes thing is getting out of hand. It was fine when we were dealing with smaller drives, but when you're talking 933GB vs. 1000GB, that's 7% of the capacity of the drive missing. At some point you have to call foul - this is not a 1TB drive.
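
Here's the arithmetic behind the grumbling, if you want to check it (a sketch - the exact figure depends on the drive's real byte count, which is why mine show a couple of GB more than the textbook number):

    # A "1TB" drive is sold as 10**12 bytes; Windows reports capacity in binary gigabytes (2**30 bytes each).
    marketing_bytes = 10**12
    binary_gb = marketing_bytes / 2**30
    print(round(binary_gb, 1))                        # ~931.3 GB as Windows counts it
    print(round((1000 - binary_gb) / 1000 * 100, 1))  # ~6.9% of the label lost to unit games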

So I had three drives filled with the contents of the old array, that left me with seven drives empty to build the new array... although I only needed six.

Once the backup was finished, it was time to pull Cartman, which meant opening up the server closet and pulling the rack.

Server rack pulled out

Other half of the rack closet

Here's the server rack pulled, Cartman is near the bottom, just above the 2000VA UPS. The long grey 1U box is an Exabyte 1x10 SCSI tape backup unit. You can also see the power supply of my temporary Exchange rig that has been running some two years as just a power supply, motherboard and hard drive sitting on a towel. I'm tempting fate, I know.

You can see how the rack pulls out on the rails, using folding arms in behind with cables running across the arms.

Beside the server rack, the second shot is the network rack that has the dual internet connections and all the patch bay wiring for network, telephone and cable. The 1U console is pulled out to shut down Cartman; it's wired back to the server rack where the KVM switch is.

 

Here's a look into Cartman for the first time in a couple of years:

A naked Cartman!

Looks about the same as last time.

That's the end of the photos, because things went downhill from here and I stopped thinking camera and started thinking much meaner thoughts.

I carefully extracted the six 400GB drives that have been the 2TB array for the past couple of years. I figured I could always go back to the original drives. I replaced those drives with six blank 1TB drives. Fired up Cartman and built a new array.

The Adaptec 2810SA controller recognized the drives fine, but wouldn't create an array bigger than 2.1TB. It appears to be a hard limit of the controller. I upgraded firmware on the controller, to no avail. I tried configuring it in Windows 2003 Server and directly in the firmware, hit the same limit either way.
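
My guess - and it is only a guess, Adaptec doesn't say - is that the 2810SA is stuck at the classic 32-bit block addressing ceiling: a 32-bit sector count times 512-byte sectors. The math lines up suspiciously well:

    # 32-bit LBA: at most 2**32 sectors of 512 bytes can be described in a single array.
    print(2**32 * 512 / 10**12)   # ~2.199 TB - right about where the 2810SA gave up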

So much for that - now I have to make a choice. I could build two three drive arrays of 2TB each, or replace the controller. I wasn't going to sacrifice an extra drive for this, I needed a new controller.

So now that I admitted I needed new hardware, it was time to revisit my thoughts of hardware migration in general.

The original versions of these servers go all the way back to 2000, with upgrades along the way. One of the issues I've run into again and again is that migrating to new servers is hard, so hard that old servers are tough to retire - they just go on and on until they fail and you're forced to give them up. Cartman, after all, is a dual P3 machine, still going on. I've upgraded the OS, replaced the CPU fans, swapped the drives a couple of times... but it's still an old machine.

My new vision of the rack is to go to completely virtualized servers. I want to build a pair of high performance multiple processor servers with lots of RAM and 64 bit operating systems running multiple virtual machines. I need a pair so that I can fail between them - they will back each other up and each should be capable of running the entire server farm itself.

Cartman, obviously, is not qualified for this job. So Cartman will have to go away eventually.

Through a series of unexpected events that I shall not go into in detail, I ended up in possession of a Tyan S2927 motherboard with a pair of AMD dual core processors and a bunch of RAM. This was a motherboard able to take on my virtualization mission - it just needed to find a place to live.

The need for a new RAID controller capable of handling arrays bigger than two terabytes gave me the excuse to make the big move - migrate Cartman out of the 5U case with six drive bays into the 4U case with four drive bays, and move the new Tyan motherboard into the case with a controller able to get me my 5TB array.

However, that meant I had to wait for more parts to arrive, which meant waiting a few days. I'm ordering in more RAM for the S2927 board (might as well fill it) and an Adaptec 3805, which will go into a PCI-e slot and handle the big array.

I ended my day by putting Cartman back in the rack and back online again, although without the drive array. We could live without the big storage for a few days.

Sunday, October 7, 2007 4:09:23 PM (Pacific Standard Time, UTC-08:00) - Comments [0]

 

Migrating web servers, upgrading dasBlog...

Decided not to work on Sunday for a change.

Instead, I upgraded servers! Ah, such a geek.

My old web server Stan is very very old... P3 1GHz with 512MB of RAM. Running Windows 2000, it has been a workhorse of a machine. I put Stan together in November of 2000. Hard to believe it has been essentially running unmodified for over six years. But that also means those hard drives have over 50,000 hours on them, which makes them ticking time bombs. And that's what the SMART reporting is saying too.
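
The hour count isn't an exaggeration, by the way - it's just what continuous duty adds up to:

    # Stan's drives have been spinning more or less non-stop since November 2000.
    years_in_service = 6.5
    print(round(years_in_service * 365 * 24))   # about 57,000 hours of spin time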

Stan is just too old to upgrade, he needs to be replaced.

His replacement is Jimmy, a machine I already had in the rack that was a testbed for betas of SQL Server 2005. Jimmy is a P4 3GHz with 2GB of RAM, running Windows Server 2003 R2 SP2. It takes some time to get used to the little differences between IIS5 and IIS6, but it's all bearable.

Migrating a web server is a pain in the butt. Lots of little configuration details you have to get right. To do the testing, I copied a backup of Stan's web sites onto Jimmy. However, since there are multiple sites on the web server, I depend on host header identification to sort out what site is what, which means I need to use the correct names of the web sites to access them. So what's a boy to do? I want to leave the sites up and running on the old server while I mess around with the new one.

I could have faked out a DNS server, but that seemed like a lot of work. Instead I modified the HOSTS file on my main workstation so that the web sites on Jimmy were pointed to directly. Funny how old technology serves the purpose so well.

Since HOSTS takes priority over any DNS lookup, I was able to point sites (like www.campbellassociates.ca) to the IP address of Jimmy directly. Then I could tweak and test to my heart's content.
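
If you've never poked at it, HOSTS is just a text file at %SystemRoot%\system32\drivers\etc\hosts - one IP address and one name per line. While I was testing, mine had entries along these lines (the address here is made up for the blog, but that's the shape of it):

    # Send requests for the production site names to Jimmy instead of Stan while testing
    192.168.1.50    www.campbellassociates.ca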

One whammy I ran into was with FrontPage Server Extensions. For the most part my web server runs the little web sites of friends and family, and they all use FrontPage, whether Microsoft wants them to or not. While it set up the extensions easily enough, I couldn't administer the sites to set up access for the authoring accounts - no matter what account information I entered, it failed.

Turned out it wasn't me, it was a feature of Windows Server 2003 Service Pack 1. The service pack added a loopback check, making sure that the local computer name always matches the host header. And since I'm using multiple host headers, that's just not going to work. The fix is in Knowledge Base Article 896861. You have two choices: turn off loopback checking, or enter all the domain names that are legal for loopback checking.

I turned it off. Call me lazy.
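
For anyone chasing the same problem, the lazy fix from KB 896861 boils down to a single registry value - I'm quoting this from memory, so check the article rather than trusting my typing:

    REM Turn off the SP1 loopback check, then restart IIS (or reboot)
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1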

Upgraded dasBlog as well. What I was really after was Akismet, the comment spam filtering solution. Unfortunately, the shipping edition of dasBlog doesn't have direct support for it. But the daily builds have it. I'm not normally a guy who runs a daily build, but for Akismet, it's worth it. Take that, comment spammers!

 

Sunday, May 20, 2007 10:05:15 PM (Pacific Standard Time, UTC-08:00) - Comments [0]

 

Rebuilding Cartman...

I've been slowly working my way through the server rack, upgrading all of my servers. Some of the machines are as much as five years old, and all the spinning parts (CPU fans, case fans, hard drives) are essentially ticking time bombs. In addition there is new hardware to be added to the rack, which means virtually everything in the rack has to move... the new configuration with eight servers completely fills the 30U rack.

What makes this especially challenging is that they ARE servers... they're constantly in use. I can take them down for a few minutes, but after a half hour the phone starts to ring. However, some servers are more sensitive to this than others - and Cartman is one of the least sensitive, since it's largely an internal-only server.

Cartman has a variety of tasks. Primarily he's a file server, but also a domain controller (one of two), DHCP and DNS server. As a file server, he has a 400GB RAID array... doesn't sound like much, but I built it in October of 2001. It's done with a Promise SX6000 controller and six 80GB hard drives. At the time, it was a monster. Since it's essentially been on since it was first built, those drives have over 30,000 hours of spin time... very scary.

Before tearing Cartman apart I used Acronis True Image to image the boot drives, and I backed the entire 400GB drive array up on a single external USB 400GB drive. And yes, I used xcopy with verify and double checked everything before I tore it down.
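
For the curious, the "xcopy with verify" pass was nothing fancier than a handful of switches, roughly like this (the drive letters are just for illustration):

    REM /E every subdirectory (even empty ones), /V verify each write, /H hidden and system files, /K keep attributes, /C keep going past errors
    xcopy D:\shares E:\backup\shares /E /V /H /K /C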

This is what I saw after hauling Cartman out of the rack and popping the cover. Essentially identical to what I saw in October 2001 - one crammed case. You can see the six ATA/100 ribbon cables coming out of the Promise controller running to the two three drive caddies holding the 80GB drives. In the middle are the two 17GB SCSI drives that are used as boot drives, which, along with the SCSI DVD drive, are run from the Adaptec 29160 SCSI controller. Oh, and an Exabyte external tape drive plugs in there too.

Disassembly of this beast starts with the metal bar running across the case that also supports the two SCSI hard drives (and a fan). Then the entire front drive array holding the DVD, floppy and two drive caddies was removed. Both the SCSI and RAID controllers were pulled as well, leaving the case pretty darn bare. With everything out I powered up the machine just to take a look and noticed that one of the CPU fans was barely spinning any more. I had planned on replacing them anyway, this was just extra incentive.

However, the motherboard is so busy that the fancy new Socket 370 cooling blocks I bought wouldn't even fit in the space! But I was able to use the old blocks by replacing the worn-out fans with the fans from the new blocks.

After a thorough cleaning, I installed a gigabit network card and began the rest of the reassembly. I'm retiring the Promise controller altogether, going to a SATA array using six Hitachi Deskstar 7K400 drives. Yep, that's right... from a 400GB array to 400GB drives, for a total of two terabytes! And to drive this puppy, I'd need a SATA controller, so I went back to Adaptec for their 2810SA controller.

It actually supports eight drives, but I only had space for six. You can see the controller card and the new caddies to hold the drives. SATA cables are much tidier than ATA cables, so I got a bunch of space back in the case.

Here you can see the Chenbro caddies with three SATA cables apiece. There's one power plug for all three drives (which is very nice) and it also has a heavy blower fan pumping directly onto the drives.

The old 17GB Atlas V drives are replaced with shiny new 147GB Atlas 10Ks. More disk space!

With everything crammed back in the case, it was time to get things set up. Even before I started the install of Windows 2003 server I wanted to get the array set up. What was interesting is that every card installed in the machine had a boot BIOS in it - the SCSI controller, the RAID controller AND the gigabit network card! Getting the BIOS set up to boot from the right device took some fiddling.

Then I decided to start the array configuration from the BIOS, so I set up a RAID 5 array. Being a diligent geek, I went to the Adaptec web site to check for latest drivers, BIOS updates, and so on. Adaptec had updates for both the 2810SA and the 29160, so I updated both BIOSes. What's stunningly annoying is that you HAVE to install BIOS updates from a floppy. The software is hard coded to read from drive A and nowhere else. Presumably I could set up a USB drive to do this, but this old SuperMicro motherboard ain't that smart.

I was glad I'd checked all this in advance, all over the readme files for the firmware were warnings that doing these upgrades would destroy the existing arrays, and you'd need to back everything up. Since I had nothing on the drives, I had nothing to fear.

Feeling smug with all my firmware flashed, I headed off into the BIOS set up for the 2810SA to get my spiffy new drive array configured. Apparently I did it wrong because I selected “Clean” to start the array rather than “Build/Verify.”

But I didn't know this at the time - off it went, ticking away to itself. I thought it might take a long time to set up a two terabyte array, but it was done in about 15 minutes... well, almost done. It got to 99% and then said “Controller Kernel Stopped Running!” And then the machine would reboot. That didn't seem good.

Every time I restarted the machine and went back into the 2810SA BIOS, I'd get the same error and reboot the machine.

In an effort to be positive about my situation, I ignored the failure and moved on - set up Windows 2003 Server. Once it was up and running, I tried to install the drivers for the controller card, but it wouldn't recognize it. That can't be good either. I filed a tech support request with Adaptec, but wouldn't hear back for 48 hours - by then I had solved it on my own.

I went to bed late, very grumpy. The next morning I woke up thinking maybe the firmware update was a mistake. So I reverted - got the old firmware, set up new floppies and attempted to install it. But it kept failing with the same error. Couldn't revert.

Then, in a flash of insight, I realized what was happening to the controller - it was crashing! And right at the point of completing the array. After it rebooted, the controller would restart, see the array almost finished configuring and attempt to finish it... crashing the controller again! So, how to stop the array from rebuilding? Pull all the hard drives out! That'll slow the bugger down.

Sure enough, as soon as I pulled the drives, I was able to revert the firmware. Why I still reverted the firmware, I'm not sure - I guess I had a course in mind and thinking wasn't going to divert it. With the firmware reverted, the array had died, so when I plugged the drives back in, nothing bad happened.

Now afraid of the BIOS configuration stuff, I booted back into Windows, and reverted the driver as well to match the firmware. If you've never done this, you're a happier person than me: reverting to an older driver is a bugger. Windows 2003 Server has a rollback driver option, but it doesn't work if you haven't previously installed the older driver. So I had to do this the hard way - uninstall the driver and then carefully locate all the backup copies of the DLLs and kill them by hand. Once I had them all, installing the old driver worked, AND the controller came up just fine.

Now I was able to set up the RAID 5 array from Adaptec's client for Windows, which was a whole bunch clearer about the right ways to do things. And that's when I discovered that correctly building a two terabyte array takes an entire day.

The next day I discovered that my two terabyte array is actually a 1.8TB array. I also discovered that Windows understands TB - it displays that way in Windows Explorer. Funny, huh? I wonder if they have PB (as in petabyte, a thousand terabytes) in there as well.
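
Where did the missing 0.2TB go? It's the usual decimal-versus-binary accounting - the sketch below assumes a "400GB" drive really means 400 billion bytes:

    # Six 400GB drives in RAID 5 leave five drives' worth of usable space.
    usable_bytes = 5 * 400 * 10**9
    print(round(usable_bytes / 2**40, 2))   # ~1.82 binary TB, which Explorer shows as 1.8TB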

The rest of the set up was uneventful, really... things got loaded back on, DHCP and DNS configured, and so on. The next level of excitement would come with the most dangerous update of all... converting an Exchange 2000 server to 2003!

Wednesday, February 23, 2005 6:52:24 PM (Pacific Standard Time, UTC-08:00) - Comments [8]

 

Rack Attack!

Well, I finally broke down and started to rework my racks. I've literally avoided pulling them for more than a year, just patching things together whichever way I could. Take a look at the mess they were in before I started:

Several highlights of this mess I call my racks... notice the two bars poking out the front, those are the rails that the entire rack slides out on. Notice that between the two racks there's a new server (named “Tweak”) that has been sitting like that for six months. And notice the freakshow of a wiring mess as I've added VOIP boxes, a new router, a new wireless access point (sitting on top of Tweak), and so on. Hey, it's been more than a year!

The racks themselves are 30U Middle Atlantic AXS racks. The left hand one is for networking, it has a cable channel mounted on the left side for all the wiring. On the right is the server rack, which I had modified to be 30 inches deep instead of the standard 20 inches that Middle Atlantic makes for these racks. They're intended for stereo equipment, I use them for the computer gear because this way the server closet is much smaller - you don't need room to walk around it.

This is the rack pulled out onto the rails and ready for some service work. You can see the cable channel clearly now.

From the other side you can see the mess of wiring strung between the two racks... and the mess of wire in the back. It's not as bad as it looks (which is good, because it looks pretty bad). Notice also the “wall-shaker” style air conditioner that keeps the whole closet cool.

Besides the tangled mess of wiring, I also needed to add more power plugs, re-arrange some components, add new gigabit switches and additional wiring between the two racks.

A couple of hours later, the mess of wires is gone from the rack. This shot also shows the new double-sided power bar I added at the back to give myself more outlets, and the Oregon Scientific wireless temperature sensor (reading 71.6F) that lets me know the temperature inside the closet. Normally it's about 68F in there. There are alarms if it climbs above 75F. Also, this gives you a pretty good look at the folding arms that hold the rack from sliding off the end of the rails, and provide a channel to route the wires on and off the rack.

Here's the beauty shot of the network rack reconfigured and back in the closet. Here's an inventory (from top-to-bottom):

  • Gear shelf holding the Xincom 603 Dual WAN NAT router
  • Linksys SR2024 24 port Gigabit switch
  • 2U cable tray
  • 2U 48 port Ethernet patch panel
  • 2U cable tray
  • Linksys SR224G 24 port 10/100 switch (with Gigabit uplink)
  • 1U Keyboard/Mouse/Monitor console
  • Cisco 3620 (mounted backwards)
  • 3U 48 port keystone patch panel (telephone and cable patches)
  • The old Nexland dual WAN NAT router
  • 5U gap (more UPSes will go in here in the future)
  • 1U power bar
  • 3U Hewlett-Packard rack-mount oscilloscope (long story)
  • 2U Minuteman 1000VA UPS (cut off in the photo)

That one bright green Ethernet cable you see in the shot is the patch cable for Tweak, the server still sitting on its side between the racks. I ran a new patch for it through the rack properly.

Next up, the server rack! And believe me, the network rack was the easy part of this whole process.

Tuesday, February 15, 2005 7:41:09 PM (Pacific Standard Time, UTC-08:00) - Comments [7]

 

Doing DotNetRocks!

On June 24th I was a guest on DotNetRocks... but we didn't talk about .NET, we talked about my favorite subject, TOYS! Actually, the focus was on water-cooled computers, which is definitely toy-ish, although we digressed into a number of equally entertaining topics.

There were a variety of questions, so I figured I'd best answer them here. First off, I put together a little photo-pictorial of one of my water cooling conversions.

One of the gizmos I used in that photo-pictorial but didn't take a photo of is this little motherboard power adapter that I plug into my power supply so that I can fire it up without having to actually turn the machine on. It's very useful for being able to run the pump without heating anything delicate up.

It's just a female 20 pin ATX plug that connects pins 13 (ground) and 14 (power supply on) together. So there ya go Geoff, don't say I never did nuthin fer ya.

If you looked at the water cooling page above, you may have noticed I'm using rackmount cases for my workstations. I have a server closet that's all rackmounted, but I also had my desk custom built with rackmount bays as well.

This is one of the bays being fitted out while the office was still under construction. The rack itself is a 12U Middle Atlantic SRSR Rotating Sliding Rail System rack. This rack actually slides out of the bay and then rotates once fully extended so you can get at the back of the case without digging around blind. I have two of these in the office, one for each main workstation bay. There's enough room on the rack for a UPS, two PCs and other sundry gear.

Although we didn't talk a whole lot about it on the show, for rackmount junkies, here are a couple of links to my server rack set ups. The first link is to my old rack, which ran from September 2000 to December 2002. After that, my new rack server closet was up and running, which is how it continues to this day.

This shot was taken today... the rack is essentially the same as it was December 19, 2002, except that it's a whole bunch messier. Over the summer I'll be rebuilding most of the systems in here; after all, some of the hard drives are now four years old and essentially ticking time bombs, well past their MTBF (Mean Time Between Failures).

Sunday, June 27, 2004 5:50:16 PM (Pacific Standard Time, UTC-08:00) - Comments [1]

 
