Storage Upgrade Stage 3 - Building Butters

Finally, after two weekends and hours of work, I get to do what I started out trying to do - build a six drive RAID 5 array out of terabyte hard drives. Cartman's old 5U case was all cleared out and I had all the components; now all I had to do was assemble the beast. Well, almost - first I had a little problem with the Adaptec 3805 controller.
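For the record, the capacity math goes like this (my own back-of-the-envelope sketch, nothing Adaptec-specific): RAID 5 gives up one drive's worth of space to parity, which is how six 1TB drives become a 5TB array.

```python
def raid5_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    # RAID 5 spends one drive's worth of space on parity,
    # so usable capacity is (n - 1) * drive size.
    if drive_count < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (drive_count - 1) * drive_size_tb

# Six 1TB drives leave five drives' worth of usable space.
print(raid5_usable_tb(6, 1.0))  # 5.0
```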

The 3805 is actually a SAS controller, using mini-SAS plugs that handle four drives each. The specification on the web site says the board comes with a pair of mini-SAS to SATA cables, but there were no such cables in my box. Turns out I had ordered the OEM version of the board (the only one available), and it had no cables in it - which makes sense; it's an OEM board, and an OEM is always going to want to do something unique with it.

Fine, I'll order my own cables. But NOBODY has stock on mini-SAS cables. I flip out at the supplier, and he calls Adaptec, and they offer to give me a pair of cables for free (which was mighty nice of them), if I'll pay the shipping. Totally worth it, I had ordered the wrong product and they were willing to fix it. A FedEx overnight shipment later, I had cables.

There's so much room in the 5U case that things came together rather quickly. The motherboard dropped in without a hitch, as did the drive array caddy. Then came the tricky bit...

The drives don't fit!

Like Cartman, Butters has a separate pair of mirrored boot drives, although in this case the drives are 7200rpm SATA II drives, rather than the Ultra-160 SCSI drives of Cartman.

In the 5U case, the boot drives hang from the card retaining bar... and the first hitch of the build occurs. In a test hanging (shown to the right), the pair of drives hit the CPU fans. This is bad.

When a situation like this arises, first you curse. Then the full reality of the situation hits: all the work you've done for the past few days may have been for naught, because this machine won't fit into this case.

I ran into the same issue with Cartman during his rebuild, when I had to modify the cooling blocks to use lower-profile fans to avoid conflicting with the hanging hard drives. But I didn't have that option this time... no handy low-profile fans, no alternative cooling blocks. I needed a different solution.

Solution - move the drives.

It's not like the new machine is full of cards anyway; it has exactly one, the Adaptec 3805 RAID controller. And that card is low-profile.

So I removed all the card holders from the bar and moved the mounting bracket so that the drives would hang away from the CPU fans. Problem solved. 

That was really the only hitch in the assembly of Butters, and it only took me a few minutes to solve. I like this new drive position better; it puts the drives right in line with the main fan, so there'll be plenty of cooling air coming over those drives.

 A little more fussing with wiring and I was on my way with a successful boot of the new motherboard...

Notice that I plugged one of the 1TB drives into the machine as well, getting ready for the transfer of all that data back onto a shiny new 5TB array.

Ah, if only it was that easy. First I had to get a server install done. Which you'd think would be easy: a brand new motherboard, it should be no problem to get things up and going with Windows Server 2003, right?


Since I was planning to use this machine to run virtual machines, of course I wanted a 64 bit operating system on it - there's 16GB of RAM in there, how else would I address it all?

So I installed Windows Server 2003 SP2 64 bit edition. And the installation went cleanly, but didn't recognize the pair of built-in gigabit NICs. I wasn't all that surprised, after all, brand new motherboard, I'd need to install the drivers separately. Now if only I could find them.

On the Tyan web site you can see all sorts of drivers for the S2927, including drivers for Windows 2003 Server 64 bit, so you'd think there would be NIC drivers there. In fact, under the heading "Driver Packs" there is a pack for Windows 2003 Server 64 bit which SAYS it has LAN/NIC drivers. However, if you actually download it, there are no NIC drivers in there. Indeed, if you open up the zip file, the README doc lists everything in the driver pack, and it does NOT include the NIC drivers.

I tried installing it anyway, but to no avail - the NICs were still unrecognized.

However, the Adaptec software worked great AND I was able to build the 5TB array. But it was going to take more than 24 hours to prep itself, so it was worth tinkering with other configurations before settling for this one.

So I headed over to the nVidia site... perhaps the reference drivers for the nVidia chipset would handle the NICs better. The chipset on this motherboard is the nVidia nForce Professional 3600 series. And lo and behold, there ARE reference drivers for Windows 2003 Server 64 bit. But they TOO could not recognize the NICs.

I even tried the prerelease tool on the download page to detect what drivers to use, and it recommended the Vista drivers! Figuring it couldn't be any worse, I tried them too... and this time the NICs were recognized, but were not functioning.

So now I'm afraid - afraid that my motherboard is defective. But now that I have nothing to lose I thought "what the heck, let's try Windows 2008 Server!" I had Release Candidate 0 handy, it was worth a shot.

Windows 2008 Server RC0 is a massive 2.5GB; I had to make a DVD for the install. But it installed flawlessly and recognized the motherboard, including the NICs. I was fully operational. And Windows 2008 Server is beautiful... but it's a release candidate!

Now that my motherboard was working perfectly, I installed the Adaptec RAID controller software. It installed, recognized the controller AND the drives. For the first time I had everything working, admittedly on a release candidate. How could I resist? I configured the 5TB array and let it rip.

The build ran overnight and finished perfectly. I had a 5TB drive array!

I shutdown Butters, closed it up and stuck it in the rack.

Powered it up again, but when it booted, there was no drive array! I rebooted again, still no array. What was going on? Pulled Butters back out of the rack, opened it up, booted it again... still no array.

I went into the 3805 BIOS to configure the array and it didn't show up until I selected "Refresh Array." Then it showed the complete array, in perfect condition!

Baffled, I exited the BIOS settings which caused a reboot... and the array vanished again. This time when I finished booting into Windows, I opened up the Adaptec configuration manager... it showed a failed controller and failed drives. I selected "Refresh Array", and it still showed everything as failed - but Windows suddenly found the array! The drive letter popped up and everything acted fine.

Oddly enough, I was a bit suspicious.

So I started loading data onto the array. I wasn't going to erase any backups, though; I was still waiting for it to fail.

Loading went much faster than backing up, since the drive was plugged directly into the machine. Within a few hours, I had everything reloaded.

I was still suspicious.

I configured the file shares and got both the music and television archives up and running. They worked perfectly.

Now I really had a problem - I was running a release candidate OS, the configuration software says the array has failed (although the BIOS says it's fine, once you refresh), but Windows itself is perfectly happy with it. And my family was happy to have the music and video back online. I couldn't very well take it back down. As long as it didn't reboot, the array seemed to stay up. Scary.

I sent a tech support request to Tyan, hopefully they'll have something useful to say. I really ought to go back to Windows 2003 Server 64 bit, but only if I can get the NICs to work.

Sunday, October 14, 2007 4:33:23 PM (Pacific Standard Time, UTC-08:00)


Storage Upgrade Stage 2 - Moving Cartman

My parts arrived during the week, but it wasn't until the weekend that I actually had time to start putting things together.

However, before I could build the new machine with the new parts, I had to get Cartman out of the 5U case, which meant moving him into the 4U case I had. The 4U case was populated with a rather old Linux machine that I hadn't powered up in a couple of years. So all of that came out, leaving a clean 4U case, ready for loading:

The 4U Case Emptied

Once the case was clear, out came the rack again, and Cartman was shut down and pulled. I put Cartman on a separate table from my regular service table and gradually disassembled it, moving the parts into the 4U case.

Cartman motherboard in 4U case

Here you can see Cartman's motherboard loaded into the 4U case. It's an old Tyan board with dual P3 processors and 512MB of RAM. A great board in its day, it's terribly dated now.

Notice also a pair of PCI-X slots, both of which are normally occupied - one with the Adaptec 29160 SCSI controller, the other with the Adaptec 2820SA RAID controller. The RAID controller just runs the big SATA storage array; the boot mirror runs off the SCSI controller, as do the DVD drive and the external tape drive.

The drive array sits in the big gap on the left side of the case (right side if you're looking in the front), and the pair of boot drives live in the little gap full of wires between the drive array and the DVD.

Mounting the motherboard is always the trickiest bit of the build; once that's done, the rest goes quickly. The only difference between Cartman in the 5U case and Cartman in the 4U case is a four drive RAID array instead of a six drive one.

Cartman up and running in the 4U case

This is what Cartman looks like fully loaded into the 4U case and booting up. You can see the SCSI ribbon cable running to the pair of drives in the center front of the case and the four blue SATA cables running from the RAID controller to the array chassis. Even the DVD is SCSI, although where the drives are SCSI-160 LVD, the DVD is SCSI-40.

The other two cards in Cartman are the video and gigabit ethernet. This is an old machine, very little is onboard. But you can appreciate why I have to replace Cartman. All those things we're used to having right on the motherboard have to be added card-by-card.

Cartman was none the worse for wear after the move, still working the same way, with just the RAID array being down.

The new drive array

In fact, as you can see from this shot of the front of the case, I did not install the terabyte drives into the chassis, since I currently have backup data on three of the drives, and I need six for the new array in the 5U case.

When I'm finally able to clean off those backup drives (when I believe everything is stable), I'll build a RAID 1+0 array using the old Adaptec controller. That will stay under the 2TB limit with the 1TB drives and give me the reliability and performance I want.
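The arithmetic behind that choice (again, just my own sketch): RAID 1+0 mirrors everything, so four 1TB drives yield a 2TB array - under the old controller's ceiling, while still striping for performance.

```python
def raid10_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    # RAID 1+0 mirrors every drive, so only half the raw
    # capacity is usable. Needs an even count, minimum four.
    if drive_count < 4 or drive_count % 2 != 0:
        raise ValueError("RAID 1+0 needs an even number of drives, at least four")
    return (drive_count // 2) * drive_size_tb

# Four 1TB drives -> 2TB usable, under the old controller's ~2.1TB limit.
print(raid10_usable_tb(4, 1.0))  # 2.0
```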

Ultimately, Cartman will be retired, but really only the motherboard. All the drives are fresh; what's needed is a new multi-core, multi-processor motherboard with a ton of RAM.

I'm thinking that since the 5U machine has an AMD motherboard, I'll put an Intel board in this machine. Probably something from ASUS, we'll see.

Cartman went back into the server closet without incident... tomorrow the 5U case gets a new motherboard, and a new machine will be born: Butters!

Saturday, October 13, 2007 5:03:23 PM (Pacific Standard Time, UTC-08:00)


Upgrading Storage Capacity on Cartman

So if you didn't get the hint, I'm upgrading the capacity of my servers in my server closet.

Last time I upgraded capacity it was in Cartman, migrating from a 400GB six drive RAID array to a 2TB six drive RAID array. With the new 1TB hard drives, I was ready to move to a 5TB six drive RAID array.

I bought ten Seagate Barracuda ES.2 1TB drives, six for Cartman, and four to go into a different rebuilt server.

So stage 1, actually started back on Friday, was to back everything up. Rather than taking the chance of pulling Cartman and plugging into him directly, I hooked up the 1TB drives, one at a time, to Phillip, one of my workstations, and copied everything off. It took until Sunday to finish the copy across three drives.

Backing up onto 1TB drives.

Here you see one of the drives getting loaded across the network - the slow, but low-risk way. Even with gigabit ethernet, transferring data from the old array on Cartman through the network, into Phillip and then out via SATA takes a long time.

By the way - they may call them 1TB drives, but they format to 933GB. That whole 1000 bytes vs. 1024 bytes thing is getting out of hand. It was fine when we were dealing with smaller drives, but when you're talking 933GB vs. 1000GB, that's 7% of the capacity of the drive missing. At some point you have to call foul - this is not a 1TB drive.
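Sketching out the math (the couple of GB difference from my 933 figure is presumably the drive's own formatting overhead): the label counts decimal bytes, the OS counts binary ones.

```python
LABEL_BYTES = 1_000_000_000_000    # what the marketing "1TB" actually means

reported_gb = LABEL_BYTES / 2**30  # the OS counts in binary gigabytes (GiB)
shortfall = 1 - reported_gb / 1000

print(f"{reported_gb:.0f}GB reported")  # ~931GB
print(f"{shortfall:.1%} gone")          # ~6.9%
```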

So I had three drives filled with the contents of the old array, that left me with seven drives empty to build the new array... although I only needed six.

Once the backup was finished, it was time to pull Cartman, which meant opening up the server closet and pulling the rack.

Server rack pulled out

Other half of the rack closet

Here's the server rack pulled, Cartman is near the bottom, just above the 2000VA UPS. The long grey 1U box is an Exabyte 1x10 SCSI tape backup unit. You can also see the power supply of my temporary Exchange rig that has been running some two years as just a power supply, motherboard and hard drive sitting on a towel. I'm tempting fate, I know.

You can see how the rack pulls out on the rails, using folding arms behind it with cables running across the arms.

Beside the server rack, the second shot is the network rack that has the dual internet connections and all the patch bay wiring for network, telephone and cable. The 1U console is pulled out to shut down Cartman; it's wired back to the server rack where the KVM switch is.


Here's a look into Cartman for the first time in a couple of years:

A naked Cartman!

Looks about the same as last time.

That's the end of the photos, because things went downhill from here and I stopped thinking camera and started thinking much meaner thoughts.

I carefully extracted the six 400GB drives that have been the 2TB array for the past couple of years. I figured I could always go back to the original drives. I replaced those drives with six blank 1TB drives, fired up Cartman, and set about building a new array.

The Adaptec 2810SA controller recognized the drives fine, but wouldn't create an array bigger than 2.1TB. It appears to be a hard limit of the controller. I upgraded firmware on the controller, to no avail. I tried configuring it in Windows 2003 Server and directly in the firmware, hit the same limit either way.
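My best guess at the cause (an assumption on my part, not anything from Adaptec's docs) is the classic 32-bit LBA ceiling: with 512-byte sectors, a 32-bit sector address tops out just shy of 2.2 decimal terabytes, which lines up with the wall I hit.

```python
SECTOR_BYTES = 512
MAX_SECTORS = 2**32   # a 32-bit logical block address

limit_bytes = MAX_SECTORS * SECTOR_BYTES
print(limit_bytes / 10**12)  # ~2.2 decimal TB
print(limit_bytes / 2**40)   # exactly 2.0 binary TiB
```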

So much for that - now I have to make a choice. I could build two three drive arrays of 2TB each, or replace the controller. I wasn't going to sacrifice an extra drive for this, I needed a new controller.

So now that I admitted I needed new hardware, it was time to revisit my thoughts of hardware migration in general.

The original versions of these servers go all the way back to 2000, with upgrades along the way. One of the issues I've run into again and again is that migrating to new servers is hard - so hard that old servers are tough to retire; they just go on and on until they fail and you're forced to give them up. Cartman, after all, is a dual P3 machine, still going. I've upgraded the OS, replaced the CPU fans, swapped the drives a couple of times... but it's still an old machine.

My new vision of the rack is to go to completely virtualized servers. I want to build a pair of high performance multiple processor servers with lots of RAM and 64 bit operating systems running multiple virtual machines. I need a pair so that I can fail over between them - they will back each other up, and each should be capable of running the entire server farm itself.

Cartman, obviously, is not qualified for this job. So Cartman will have to go away eventually.

Through a series of unexpected events that I shall not go into in detail, I ended up in possession of a Tyan S2927 motherboard with a pair of AMD dual core processors and a bunch of RAM. This was a motherboard able to take on my virtualization mission - it just needed to find a place to live.

The need for a new RAID controller capable of handling arrays bigger than two terabytes gave me the excuse to make the big move - migrate Cartman out of the 5U case with six drive bays into the 4U case with four drive bays, and move the new Tyan motherboard into the case with a controller able to get me my 5TB array.

However, that meant I had to wait for more parts to arrive, which meant waiting a few days. I'm ordering in more RAM for the S2927 board (might as well fill it) and an Adaptec 3805, which will go into a PCI-e slot and handle the big array.

I ended my day by putting Cartman back in the rack and back online again, although without the drive array. We could live without the big storage for a few days.

Sunday, October 7, 2007 4:09:23 PM (Pacific Standard Time, UTC-08:00)


More Disk Space GOOD!

Got home from Bulgaria to discover that my new hard drives had arrived!


What sort of drives, you might ask? Why, Seagate Barracuda ES.2 1TB hard drives, of course! Because we all need ten terabytes.

Friday, October 5, 2007 12:47:14 PM (Pacific Standard Time, UTC-08:00)


Fastest Water Cooled Refit in the West!

So Patch Tuesday came by as usual, and machines were patched. I have one workstation, Terrance, that still isn't running Vista. Terrance is the giant triple screen machine, running a total of 4960x1600 worth of displays. Plugged into it is the MOTU Traveller that I do all my recording with for RunAs Radio and .NET Rocks. So needless to say, it needs to be completely reliable, since I record every week.

So, as I said, Patch Tuesday came by as usual. And as usual, Terrance didn't auto-install the patches, just downloaded 'em and let me know. And, as usual, I hit the button. And, as usual, the patches required a reboot. What could go wrong?

Terrance didn't come back.

Terrance sits at the bottom of the stack of two workstations, underneath Phillip. Phillip is a far crankier computer, struggling with barely sufficient cooling. So I'm usually tinkering with Phillip to keep him happy, while Terrance is totally reliable.

Except that Terrance didn't come back.

I left Terrance off for a couple of days. I had already finished recording for the week, so I had until the following Tuesday to get things fixed up, and I figured I could wait for the weekend.

When the weekend came, I hauled Terrance out of the rack, which means shutting down and removing Phillip as well. Annoying. Moved Terrance up to the service counter and re-installed Phillip so at least I had one workstation up and running.

With the cover off and plugged into the test harness, Terrance still wouldn't power up. Well, wouldn't power up is a bit of an exaggeration... the motherboard power light is on, but the main power light wouldn't turn on, and it seemed like the power switch was useless. Sometimes I'd see the LEDs on the RAM chips sequence a bit, but the drives never spun up.

I suspected that my groovy new water cooled power supply might be the culprit. Fortunately, I have a power supply tester, so I plugged it in and powered it up.

Here's a look inside Terrance. You can see the dual video card set up (running a pair of nVidia 7800s) that are both water cooled, along with the CPU, northbridge, southbridge, hard drive and power supply. This is also the moment of truth, with the power supply tester on the left plugged into the water cooled power supply. The power supply is the blue thing on the lower right of the photo, the black block attached to it is the water cooled part. The heat sinks for the power supply are connected to the block and water passes through the block to cool it. No fan in the power supply.

So, turn on power supply, and the power supply tester should report voltages, all that good stuff.

Only it doesn't do anything.

Being the suspicious type, I pull out my spare power supply, a nice Enermax Whisper unit, not water cooled, but nice and quiet. Plug power supply tester into that, fire it up, and everything lights up. Ah ha, one dead power supply.

So, to replace the power supply, I need to breach the water loop. I hate breaching the water loop; it's messy. But Terrance is one of my external water loop equipped machines. That means I have a pair of hoses running out the back of the machine that can be connected together to be self-contained, or connected to a larger external cooling system. Ultimately my plan is to plug into the wall for water cooling, but goodness knows when I'm going to have time to finish that.

But, back to the problem at hand - I have hoses with self-sealing couplings running out the back. That makes draining the water system a whole lot simpler.

In my hand I have a bulb pump which is connected to one side of the external loop. The other side has an unsealed coupler connected to it and is stuffed into a clean yogurt container to catch all the water. 20-30 squeezes of the bulb pump later, all the water is drained out of the system into the yogurt container.

Now to actually breach the loop - doing the pump out isn't really a breach anymore because I use those self-sealing couplings to keep everything tidy. The trick to removing things from a water cooled machine is to realize they still have some water in them, so closing off the water connections is a good idea.

On Terrance, the power supply was added to the water loop between the southbridge block and the hard drive block. So I want to connect the southbridge hose to the hard drive and leave the power supply out. But to keep the power supply from leaking, I connected the existing hose running between the power supply and the hard drive to the other end of the power supply.

So you can see the hose loop on the two water cooling connectors of the power supply, and in my hand is the hose that used to run to the power supply from the southbridge chip. I just had to rotate that hose around and connect it back to the hard drive and everything was sealed up again, no mess.

So the rest was easy - extract the dead power supply, install the Enermax supply, reconnect everything and I'm off, right?

Here's a shot of the freshly wired in Enermax supply. Looks good, huh?

Except for the part where it doesn't power up at all. The motherboard light turns on, but the machine won't power up. I'm suspicious. So I retest the old water cooled power supply. It's still dead. But something else is wrong.

Experience has shown me that the one thing that can stop a water cooled computer in its tracks is a bad pump. So I unplug the pump and the machine powers up normally. I have a bad pump as well. Good news, I have a spare pump.

For whatever reason, I failed to photograph the pump installation. It's very tricky: I had to turn Terrance on his side, remove the side plate (five screws off the bottom, five screws off the side, four screws off each end), disconnect the hoses from the reservoir, wiggle the reservoir off (it's press fit), slide the pump out, slide in a new pump, remount the reservoir, reconnect the hoses, put the side plate back on, and reinstall all the screws. Apparently I was so busy I never snapped a photo the whole time.

This is a shot immediately after finishing the pump swap-out. The dead pump is out of the machine, the new pump is in the machine. Everything else is the same, and Terrance now works.

Refill of water was pretty simple, just power up and keep pouring water. Once the lines had been burped (leaving it running for a half hour or so with the reservoir lid off), I buttoned everything up and put Terrance back in place.

Total service time, about two hours.

So, questions: did the pump kill the power supply, or the power supply kill the pump? And what killed either one or both?

And I need to restock on spare parts.

Sunday, May 27, 2007 10:43:53 PM (Pacific Standard Time, UTC-08:00)


Going Vista!

So I was looking around my desk the other day at all my shiny machines and thinking "gee, everything is working entirely too well, I should break something!"

Actually, I had, like so many others, set up the February Community Technology Preview of Vista in a Virtual PC on my big workstation. And it worked like a hot damn. But it wasn't as pretty as it ought to be. The great new UI that is one of the big features of Vista won't run under VPC.

And that's when I started looking around my desk. After all, with all these computers, surely ONE of them can be sacrificed to the beta OS gods? Right?

So I took the plunge, burning a DVD of the 64 bit version of Vista Feb CTP and blowing away Phillip, my secondary workstation machine running a 4000+ Opteron, 2GB of RAM on an ASUS A8N-SLI motherboard with that honkin great Sapphire ATI X1900XT video card (because 512MB of RAM in your video card is a special kinda love).

And what can I say? Vista is beautiful.

But there's more to an OS than beauty - can it run the things I need? The first challenge was video drivers, but ATI came to the rescue with a lightweight, easy-to-install 64 bit beta driver, only 38MB!

The next thing I worried about was a bit tougher - Phillip is water cooled, and the only real fan in there is a Vantec 120mm fan connected to a Matrix Orbital LCD controller. This USB device has a bit of software installed on the machine so you can control the display and also vary fan speed based on a temperature sensor. Without this driver working, the fan would not spin, and ultimately, Phillip was doomed.

Amazingly, Matrix Orbital makes 64 bit drivers for their products, and LCDC, the software of choice for making the controller do its thing, fired up with no problems at all.

So now I have a functional Vista machine. Sure it's beta, but so far so good! Lots more software to install and test; I'll keep y'all posted on the love.


Toys | Vista
Thursday, March 23, 2006 10:25:25 PM (Pacific Standard Time, UTC-08:00)


A new addition to the family!

Ah, not that kind of addition. Geez, what, you think I'm crazy?

It's a new computer, of course!

I've been on the lookout for a TabletPC for a long time. I've never looked at them as a replacement desktop machine; I have one of those in my Dell XPS laptop. But it's barely portable, and the battery life is measured in seconds (3600 of 'em, to be precise). On the other hand, it has enough horsepower to grunt through running multiple VPC sessions when the work calls for it.

So since it's not a desktop replacement, I think a TabletPC should be all-out portable, and that, to me at least, means a slate style. Just the screen, the pen, and thou.

But most of the major manufacturers that make tablets, like Toshiba, don't make a slate. And I'm always skittish around unknown or marginal brands.

Then I got a good look at Motion Computing. TabletPCs are what they're all about. They're a premium product at a premium price, but sometimes, you gotta pay to get what you want.

So I finally broke down and bought the LE1600, with all the goodies on it. And am I ever impressed.

My handwriting is appallingly bad, a product of communicating primarily by typing since grade school. But somehow, that little gizmo can figure out my scratch.

And forget no keyboard - when you need a keyboard, the slate plugs into one. The convertible keyboard is not only a fine keyboard but also a stand for the machine, and it snaps over the display when you want to travel with it.

The battery life is good with the little battery across the top - about two hours. For real battery life, you add on the extended battery that fits across the back; I've gone six hours using wireless... I can't imagine how long it would last with the antennas shut down.

I'm having a great time using the slate with Visio, using the slate very much like the proverbial cocktail napkin.

So here's the family photo:

The big ol' Dell is on the right, the LE 1600 on the left, and glowing menacingly in the background is the great 4960x1600 display array that is my main workstation.

I plan to carry both laptops with me on major excursions now, so I ordered a Brain Cell insert for my Tom Bihn Brain Bag - the bag actually has room for both machines.

Am I done yet? Nope, I'm on the lookout to replace the Dell. It's three years old now.

What is my perfect maximum horsepower laptop? Glad you asked.

  • 1920x1200 display - an absolute must have. More screen space good.
  • Dual core 64 bit processor - more horsepower good.
  • 4GB RAM - more RAM good.
  • 64 bit OS - gotta take advantage of the horsepower and the RAM.

After all, I do a lot of demos of SQL Server 2005 and beyond. I need to be able to run multiple VPCs fast. And big ones, too. My 64 bit workstation with 4GB of RAM and Windows Virtual Server 2005 64 bit is the best environment possible for running VPCs, and I want as close to that as I can when I'm on the road.

Does such a machine exist? Well, not in the mainstream, but it's imminent. Hypersonic PC now makes the Aviator EX7, which has all the key bits, including shipping from the factory with Windows XP x64. Another thing that interests me about Hypersonic is that they offer custom paintwork... hmm, maybe a .NET Rocks logo laptop? Alienware makes the machine in the m7700i, but not the OS. Hence they play games with offering 3GB of RAM. Perhaps the Dell buyout has distracted them.

So many toys, so little time...

Wednesday, March 22, 2006 10:14:22 PM (Pacific Standard Time, UTC-08:00)


Water Cooling Unconversion

The nVidia 6800 Ultras that used to inhabit Phillip were the bane of my water cooling plans. I water cool for the quiet, not the performance. But these 6800s are so hot, my quietness plan was all messed up.

And it's entirely my fault, too. I bought waaay too much video card. I wanted to experiment with SLI, using two video cards to run one display. Granted, the one display was a Samsung 243T with a native resolution of 1920x1200. The cards performed amazingly well, except that they were so hot I ended up adding an additional radiator and two fans to the system to keep them cool.

The latest refit retired these toaster ovens, and I figured rather than let them sit there and rot, I'd let my friendly neighborhood computer store resell them for me. In fact, the owner came and picked them up; he had them sold before I was ready.

Unfortunately, the new owner wasn't into water cooling, so I had to convert these water cooled video cards back into air cooled ones. I had kept all the fan equipment in the original boxes, so it wasn't tough to find. Reassembly, however, is tricky.


Freshly removed from Phillip and drained of water, one water-cooled nVidia 6800 Ultra.

Four center plate screws, six spring loaded edge screws and two voltage regulator mount screws later, the water cooling block is removed from the board. Notice the less than perfect impressions on the cooling block from the RAM chips of the video card. The system was never unstable, but it sure looks like this block wasn't as tightly fitted as it could be.

Deploying all the air cooling hardware. The copper block goes onto the GPU (along with the black backing plate), the angled aluminum block with the white blobs on it goes onto the RAM chips, the voltage regulator heat sink is the bottom right hand corner of the picture, and the fan assembly itself is in the top right hand corner.

After cleaning off the old thermal paste, I applied new stuff to the GPU, used the original white contact pads for the RAM, and carefully put everything together. The copper GPU plate goes on first, using spring loaded screws that go through the plate, board and into the black backing plate. Then the RAM cooler goes on with six different spring loaded screws. Then comes the heat sink for the voltage regulators, held on with a pair of spring loaded clips. Finally, the fan itself is held on by three screws and plugged into the board.

Innit purdy? I like the water cooled version better myself.

That was the first one; the second was even easier. Then a careful repack into the box, including power cables and DVI-VGA adapters for each.

These video cards were not a great investment for me - I think they were worth about 20% of what I paid for them a year later. That's not counting the water jackets, which I still have and can't imagine what I'll do with. Maybe eBay.

Sunday, February 19, 2006 9:45:19 PM (Pacific Standard Time, UTC-08:00) #    Comments [1]  | 


Refit and Clean up of a Water Cooled SLI System#

Before I could get into rebuilding Terrance, my triple-screen system, there was an obstacle that had to be resolved first: Phillip, the SLI system that sits on top of it.

This is what my workstation rack looked like - at the bottom, barely visible, is a Minuteman 1000RM E rackmount UPS. I had my electrician rewire the outlets in my workstation bays so that the power passed through the rack closet, so this UPS could move into the rack closet, saving me 2Us and two fans.

Above the UPS is Terrance, the triple-screen system. It was the quietest thing in the stack, a P4-based system with a Matrox Parhelia to drive the three Viewsonic 18" displays. And sitting on top was the problem child: Phillip, the AMD-based gaming system with a pair of nVidia 6800 Ultras configured for SLI. This is plugged into a Samsung 243T. This rig put out 100 frames per second in Half Life 2 at 1920x1200. It also nearly melted in the process. The 6800 Ultras are just too hot. I ended up strapping a big radiator to the top of the case with a pair of Vantec 120mm Stealth fans mounted on it. Yes I know: fan bad. But melting worse.

Here's a top view of the SLI system, you can see the size of the additional radiator. This kept the system cool even under heavy SLI use. In exchange, of course, for ugliness and noise. Whenever I would record .NET Rocks! I'd have to turn this machine off.

The solution was to get rid of the 6800 Ultras. I considered going with later model SLI cards, say a pair of 7800GTs. These are actually cooler than the 6800s, and have more horsepower. Then I got a look at ATI's X1900. A 512MB video card with comparable performance to many SLI systems. In one card. How great is that? So I switched - trade in the 6800 Ultras for one Sapphire X1900XT (and a bunch of money).

I'm very much of the mindset that anything worth doing is worth doing excessively. And since I was going to totally overhaul Terrance, why not do the same for Phillip? The problem was, there really wasn't much better than the existing gear. The ASUS A8N SLI motherboard is great. The AMD processor in it is, granted, only a single-core 4000+, but it's still a great processor. 7200rpm hard drive, dual burners... what could I really do to improve it? The new video card gets rid of the heat problem, so other than that, a couple of gigs of stinky fast Corsair RAM is all I could come up with.

The upside to this is that it meant I had one machine that would stay operational - it didn't need to have a scratch re-install because I wasn't changing the motherboard, just the video card.

However, it also meant breaching the water loop of the biggest, ugliest water cooled machine I've ever built.

The number one problem you face when breaching a water loop is how to do it without making a mess. The first thing I always do is take the cap off the reservoir, which lets air into the system. The water loop is more or less airtight, so creating some pressure relief lets water leave the lines. Next I open up the highest point in the loop, which is normally the top of the radiator. In this case (look at the photo above) the top of the radiator is quite high up, and the line is essentially dry when the pump isn't running.

Ordinarily I'd use my little bulb pump to force all the water out of the system, but since its fatal encounter with the resident terrier, it was up to my lungs. So I pulled the line from the top of the upper radiator, then added a bit of hose onto the radiator connector and aimed it at ye olde yogurt container. Then I blew into the other end. And blew, and blew. There's a lot of water in the system.

Eventually I drained enough that the lower line of the upper radiator was also dry, and then I pulled that off as well, and reconnected it to the upper connector of the lower radiator.

That got the upper radiator out of the loop. I was careful in actually removing it because it still had a lot of water in it. I had to rotate it a bunch of times to get the majority of the water out.

I took a break at this point, you can see in the above photo the now completed water loop without the additional radiator. This is how I originally configured the system until I discovered that 6800 Ultras run at sun-like temperatures.

Next step, extract the 6800s.

The 6800s were plumbed into the system between the processor block and the Northbridge block, which is between the two video cards. And boy, was that ever fun to get together the first time. However, getting them out wasn't so bad - the connectors for the water blocks sit relatively high up, so with most of the water out, the lines were pretty much high and dry. I had to cut a new segment of hose to run between the processor block and Northbridge block.

Once I got the cards out, this is what I found.

The second video card in the SLI pair had sprung a leak. That goop is from the water loop dripping down into and beside the PCI-E slot. Beats me why the thing still worked. I wasn't all that concerned, since all this was on the second PCI-E slot, and I was switching to a single card. Notice I've already turned the ASUS Patent Pending SLI mode card over, although I don't think it's actually inserted correctly...

You can see where the leak came off the water jacket and dripped down onto the motherboard. I strongly suspect I melted the seals on this water jacket when it overheated... before I realized I needed a second radiator for it.

I'd worry about the 6800 Ultras later. Now it was time to fit the new video card and get things back up and running again.

You can see the new card and the new hose running from CPU to Northbridge. It looks too high in this photo, but it wasn't. Unfortunately, Innovatek hasn't made a water jacket for the X1900XT yet, so I'm going to have to leave the fan on the video card for now. The good news is that it's speed sensitive, so when I'm not running anything graphically intense, it's pretty quiet.

One interesting problem was that the power adapter cord that came with the video card was only a four prong cable, and there's a six prong plug on the board. I tried it, and it didn't work - the machine kept coming up with a BIOS level error on the display saying "plug power into the video card." I used one of my six prong spares and it powered up fine. Price of being first with one of these cards, I guess.

What isn't in the above photo is the 2GB matched pair of Corsair 3500LLPro I stuffed in, fast response RAM with lots of head room and blinky lights.

Phillip powered up fine in this new configuration, and Half-Life 2 plays great on it.

One machine down, one to go.

Saturday, February 18, 2006 2:29:44 PM (Pacific Standard Time, UTC-08:00) #    Comments [1]  | 


Cynicism and High Resolution Monitors#


When last we left my latest journey into the realm of the resolutionally absurd, I had a couple of large boxes on the floor, one of which contained an Apple 30" Cinema display. This display uses dual-link DVI, which has been around for a while, but is not widely understood. The reality of the DVI system is that it supports a lot of different modes, and dual-link is the most powerful and most expensive of them. Inside a dual-link DVI cable are two entirely separate sets of video signals. This is necessary to handle the 2560x1600 resolution of the Apple 30" Cinema display. A single-link DVI pretty much maxes out at 1920x1200.
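For the curious, the single-link limit comes down to pixel clock: one TMDS link tops out at 165 MHz, and the required clock scales with resolution and refresh rate. A back-of-the-envelope sketch (using a ballpark 10% blanking overhead rather than exact VESA timings, so the numbers are rough) shows why 1920x1200 fits on one link while 2560x1600 needs two:

```python
# Rough pixel-clock estimate: active pixels * refresh rate * blanking overhead.
# The 10% blanking factor is a ballpark figure, not an exact CVT timing.
SINGLE_LINK_MHZ = 165  # DVI single-link TMDS pixel clock limit

def pixel_clock_mhz(width, height, refresh_hz, blanking=1.10):
    return width * height * refresh_hz * blanking / 1e6

# 1920x1200 at 60Hz squeaks in under one link...
print(round(pixel_clock_mhz(1920, 1200, 60)))  # ~152 MHz, under 165
# ...but 2560x1600 at 60Hz does not, hence the second link.
print(round(pixel_clock_mhz(2560, 1600, 60)))  # ~270 MHz, needs dual-link
```

The exact figures depend on the blanking standard the hardware uses, but the conclusion doesn't change: the Apple panel simply needs more bits per second than one link can carry.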

So to drive the Apple 30" Cinema display at 2560x1600, I needed a dual-link capable video card and a dual-link cable. The display came with a six foot hard-wired cable, much to my annoyance, since I need at least twelve feet of cable to reach my workstation bay. That meant finding a dual-link extension cable.

You'd think finding a dual-link video card would be easy, and you'd be right, unless you wanted assurance that you're actually getting one. It used to be that dual-link DVI was a rare and expensive feature, requiring you to order specific cards for the purpose. That's not true anymore: pretty much every nVidia 7800 series video card supports dual-link; it's so common that it's not mentioned anywhere in the specifications or documentation at all.

So, when I looked at my situation - new monitor, unfamiliar cabling protocol, extending the wiring, and relying on an essentially undocumented feature of a video card - I figured there was no hope in hell of it actually working. Cynical? Perhaps. But just because you're not cynical doesn't mean you aren't screwed.

And realize that the machine I wanted to rebuild is my main workstation - granted I have backup machines, but taking the main workstation down is not something I do casually.

So, I built a testbed. Left the existing machine entirely alone and bought the parts to rebuild it, with the intention of testing all those parts independently of the existing machine.

Since I was going to need two video cards, one to drive the Apple display and the other to drive the two wing displays, I wanted to get a motherboard with two PCI-Express slots in it. This means an SLI board like the ASUS A8N I currently had in my gaming system. Normally SLI uses two video cards to run one display, thereby doubling the frame rate. For my purposes, I'd be using two video cards independently, but with symmetrical performance. Sure, I could have done this with one AGP card and one PCI card - but that would suck. Dual PCI-E is the way to go.

I chose the ASUS A8N32-SLI motherboard for the job, and just for good measure, plugged an AMD 4800 Dual Core in it. Hey, two video cards deserve two processors, right? The video cards I chose are MSI's implementation of the nVidia 7800GT. These are high performance video cards, but not top of the line: I'd had enough of the heat problems with the 6800 Ultras to know better. These are great cards, lots of horsepower, but not so much that they're running in a state of near meltdown.

So, to build the test bed, I rigged up the motherboard with the processor, some spare RAM I had lying around, a hard drive and DVD player. Just sitting there at the service desk on a towel. I stuck one video card in it because I wanted to work out the first issue: could I make the Apple 30" Cinema display work in 2560x1600 mode with an extension cable. The list of failure points was long, but the key ones were whether or not I had the right video card, and whether or not 2560x1600 signals would travel through an extension cable and still be bearable on the far end.

Here's what the rig looked like:

You might just spy the screwdriver stuffed under the back of the board. The video card sticks down enough that it was popping itself out of the slot when I was testing, freaking me out when suddenly nothing worked. Getting the "machine" up and running wasn't all that difficult. I first fired things up with the video card plugged into the little 15" LCD panel you see sitting behind the board. Once I was sure the basic configuration worked, I fired it against the Apple display without the extension cable.

That worked as well, so I went ahead and did an install of Windows XP. This takes awhile, between the hard drive formatting and basic install. The rig was plenty quick. Once the base OS install was finished, I focused on video drivers. This was best done by getting network drivers running first, and downloading the latest video drivers.

640x480 on a 30" display is hilarious - the icons are the size of your fist. Then I got the nVidia reference drivers installed, and the resolution bumped up to 1280x800. Better, but not what I wanted. My mistake was plugging the display into the top connector on the video card: only the lower connector has dual-link support. Once it was down there, I got this:

And just in case you can't read it clearly:

Final test was the extension cable. Plugging it in was fine. The screen was clear and stable with and without the extension cable. On reboot, the starting low-resolution screens had little bits of distortion in them, but as soon as it kicked into high resolution mode again, it looked perfect.

So, tests complete, I guess I'm ready to tear apart some gear and get these new displays integrated into the office.

Sunday, February 12, 2006 3:46:18 PM (Pacific Standard Time, UTC-08:00) #    Comments [3]  | 


Time for New Toys!#

I've said this before, and it's still true - Christmas isn't a good time for me in terms of me getting toys. It's good for everyone else because they all come to me for advice about toys. Spouses are especially interesting around Christmas: some say "don't buy anything until you talk to Richard" and others say "don't you dare talk to him, he's a bad influence on you!" Either way, I'm happy to help folks out selecting gadgets. I just don't expect them for myself. After all, considering how difficult it is to buy for me when I'm doing it, I wouldn't even try to put that sort of pressure on my loved ones.

My happy time comes after Christmas for a variety of reasons. Since Christmas is over, I'm not stepping on anyone's presents if I buy myself something. Also, since January is a slow time for gear sales, my regular suppliers really appreciate my buying spree.

So, with that preamble, let me give you a few photos of what showed up over here in ToyLand...

Yes indeed, I finally pulled the trigger on ordering DigitalTigers' ZenView PowerTrio HD. The combined resolution of the three monitors involved is 4960x1600 - just over 7.9 million pixels. Woohoo! The package arrived remarkably fast, around a week or so. There are two boxes, one containing the Apple 30" Cinema Display, and the other has everything else in it: the two Samsung 204T panels, the stand, cables and instructions.
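The pixel arithmetic on that works out, assuming the two Samsung 204Ts (native 1600x1200) hang in portrait on either side of the 2560x1600 Apple panel - a quick sanity check:

```python
# Combined desktop: two Samsung 204Ts in portrait flanking the Apple 30".
apple = (2560, 1600)
samsung_portrait = (1200, 1600)  # 204T is 1600x1200 native, rotated 90 degrees

total_width = samsung_portrait[0] + apple[0] + samsung_portrait[0]
total_pixels = total_width * 1600  # all three panels are 1600 tall this way

print(total_width)   # 4960
print(total_pixels)  # 7936000 - "just over 7.9 million pixels"
```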

The "everything else" box was nicely packed. Each set of components sat in its own foam layer. The picture above is the top layer containing the stand. Beneath that were the layers of the Samsung displays, each separated by foam and/or styrofoam. It's a great package.

Here's a look inside the Apple box. Note the large monitor. That's 30" diagonal, baby.

I didn't realize that the LG.Philips display was actually the Apple 30" Cinema Display. The big whammy there is that there's only a one-year warranty on this display, unlike the Samsung panels (and virtually every other LCD out there) that have three-year warranties. Also, the cables on the Apple display are hard-wired, which meant using an extension cable. My displays sit about 12 feet from the computer when you account for going through the desk, in the cable channel at the back of the desk and into the slide-out workstation bay. For the Samsung displays, I bought 12 foot DVI cables, but I had to get a dual-link DVI extension cable.

Although the stand is branded DigitalTigers, I'm pretty sure it's an Ergotron stand, if for no other reason than I already had the same style stand for my old triple screen display. Although admittedly, to handle these enormous monitors, the stand is a bit bigger.

Next step - serious testing.

Thursday, February 9, 2006 10:06:04 PM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


Hell Hath No Fury...# ...like a wife whose computer is dead because the water pump stopped working.

Not your usual everyday computer problem either, is it?

I had the electrician in here on Wednesday, I wanted the circuits for the workstation bays rewired so that they passed through the server closet. Why? I wanted to move the UPSes for the workstation bays into the server closet. This would accomplish two things: reduced noise and more space in the tiny 12U workstation bays.

Since my electrician wired the place during the renovation, he didn't have a problem with what I was doing, why I was doing it and why it had to happen. He's learned that with me, weird is the norm. The whole thing was done in just a few hours.

Of course, while the electrical work was going on, the workstation circuits had to be disconnected, which meant all the workstations were off. My wife and I worked from our laptops for the few hours that the work took. When the work was done, however, there was a casualty - the water pump.

Naturally, my machines both powered up again just fine. But my wife's workstation wouldn't power up at all. I figured it was the power supply, and I always have a spare, so I pulled the machine out, popped the cover and plugged the power supply into just the main power of the board to see if it would start, and it did.

Feeling smug at my immediate diagnosis, I pulled all the power plugs off the gear in the machine, unmounted the power supply and performed the swap out, plugging everything back in. And it didn't work. Doncha love it when that happens?

Fortunately, this had happened to me before, and I am blessed with a pretty good memory when it comes to stupid things happening to me. So I unplugged the pump and turned the machine on. It wouldn't start. I switched the power supply off for a few seconds, then turned it back on again, then tried to power up the machine and it worked. So I plugged the pump in - boom, dead machine again. Unplug the pump, power the machine, no workie. Turn off the power supply, turn it back on, power the machine, and it works. See the pattern?

What's happening is that the controlling circuitry in the pump is causing a dead short in the power supply. The power supply, to protect itself, effectively shuts off and won't power up. Until you cycle the power supply itself, it's not going to turn on again.

So, I've fried another pump. How? Beats me, it sucks. I go to my favorite supplier of Innovatek gear and discover there's a shiny NEW version of the pump available, with improved electronics. Hmmm - maybe it's not just me? So I order the pump immediately with overnight shipping. Admittedly, it was late at night on Wednesday when I did this, so the order wasn't filled until Thursday.

Lo and behold, on Friday the pump actually showed up! It's a miracle! So now the fun of retrofitting a pump comes into play. And you know what that means - time to drain the system.

Ah, how awful life would be without a bulb pump. You can see I'm using the pump to push air into the system and force the water out into the yogurt container.

After the bulk of the water was drained out, I turned the machine up on its side and removed the side of the case - the only way to extract the pump.

The reservoir is mounted to the pump, which is at the lowest part of the case, normally. Turning it up on its side drains the last bit of water out of it, trying to minimize the mess - I've learned this from experience.

The reservoir is pressure-fit onto the pump and takes some twisting to get off the pump. You can see in the photo that the reservoir is sitting on top of the case, its rubber-gasketed pump mount visible.

Here you can see the pump has been removed, its slide-off base still in place. What I had forgotten was that the pump slides off backward (down) into the case. I had to pull the drive assembly out a few inches to get the pump loose.

The new pump dropped into place easily enough; it's the updated model of the old pump, hopefully with this short-out problem resolved. I've had two pumps go this way now, though admittedly both were a couple of years old.

Once the pump is reinstalled, the reservoir is pressed into place, and then the hoses are fitted back on.

Just to complicate matters further, I swapped out the old GeForce4 video card for my more advanced ATI Radeon 9800XT with the double-sided water jacket.

With everything hooked back up, it was time to put the case back together and get things running again. These rackmount cases are awesome, but unfortunately no longer available anywhere. They're true workstation cases - no locking face plate, and there are no rivets or welds anywhere; the entire case is assembled with screws, so that every part can be removed.

Add in the lucky coincidence that the standard Innovatek radiator fits in the case along with a 120mm fan and you have, in my opinion, the best darn rack mount water cooled PC case possible. That's why I have three of 'em.

With the old pump now sitting outside the case, you can see I have the pump bypass plugged into the power supply to run the pump without powering up the machine. Distilled water and a little Innovatek water conditioner go in; the old water gets discarded.

After some time tapping and burping lines to get all the bubbles out, the water loop ran steady, so it was time to power up fully. The machine came to life without consequence, recognized the new video card and everything was good to go.

For the moment the machine is back in its workstation bay, cover off while I check temperatures and keep an eye on things in general. Once you've breached a water loop, it's worth keeping an eye on it for a while to make sure it's not leaking and nothing stupid is happening.

Meantime, I still have to actually take advantage of the electrical changes and shuffle my UPSes around.

Saturday, June 25, 2005 11:35:26 AM (Pacific Standard Time, UTC-08:00) #    Comments [3]  | 


Revenge of the Radiator#

I hinted back before Tech Ed that Phillip, my big gaming machine, had died. Actually, it died only a couple of weeks after I put it together, I just didn't want to talk about it.

One thing I noticed about this machine when I built it was that it ran hot. My other machines keep their temperatures under 35C with no load; this one struggled to hold 45C with no load. Add in SETI@Home working the processor at full bore and holding 45C required at least 50% fan power. And then there are those darn video cards...

So one day I finally sat down to try out the full potential of the new machine with its lovely, top-of-the-line SLI video cards. I set up Half Life 2 running in 1920x1200 mode. It runs smoothly at around 100fps, in the really gnarly stuff it gets as low as 70... obnoxious, innit?

Enjoying myself immensely, I set off on a campaign of maximum destruction in Half Life 2, taking in the view, when I felt the heat on my back. I turned around to see the temperature of the water loop hit 75C. Yep, most of the way to boiling the water. I shut down HL2 to get rid of the load, but the damage was done. Within minutes the machine had died and wasn't coming back. Motherboard baked.

I went and did some math and discovered that each video card ran at about 80 watts. The processor generated only 55W! So when the video cards were working hard, the machine cooked.
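To put that math in perspective, here's a rough sketch of how fast a small loop heats up once the radiator can't shed the load. The 215 W figure comes from the wattages above; the one-litre loop volume is my own assumption, so treat the exact minutes as ballpark:

```python
# Heat load into the water loop when both cards and the CPU work hard.
watts = 80 * 2 + 55          # two 6800 Ultras plus the CPU: 215 W
litres = 1.0                 # assumed loop volume (a guess)
specific_heat = 4186         # J per kg per degree C, for water

# If the radiator sheds none of that heat, the loop warms at:
deg_per_min = watts / (litres * specific_heat) * 60
print(round(deg_per_min, 1))  # ~3.1 C per minute

# From a 25 C start, 75 C is only minutes of gaming away:
minutes_to_75 = (75 - 25) / deg_per_min
print(round(minutes_to_75))   # ~16 minutes
```

In practice the radiator does shed some heat, so it takes longer, but the direction is the same: the loop loses the race.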

Now I had two problems - the first was fixing the machine, which meant a motherboard transplant. This is not normally something I fear, but with water cooling it's much more difficult. Especially the water cooling on this machine, with two video cards and a Northbridge chip right between them. The hoses are short and twist all over the place. And the last thing you want to do is breach the water loop.

Here's Phillip sitting on the service desk. Notice I plugged a speaker in to get a listen to any BIOS error beeps. Unfortunately there were none, supporting my belief that the motherboard was cooked.

My first attempt at extraction was to pop both video cards out of their slots. I powered up again at this point in the hopes that perhaps the video cards were dead and now I'd get a missing video card beep pattern - alas, no luck, no noise, no nothing. The machine is still dead. I'd have to peel all the water cooling gear off.

The motherboard extracted. What you didn't see is that I had to unscrew the motherboard from the case and pull it clear of the jacks in the back, then lift the board up with the water gear still on it. The problem was the Northbridge chip, which had a pair of nylon nut-and-bolt sets holding it on. The only way to get those off was to get to the nuts under the board.

The CPU was a bit tricky to remove just because there was so much surface area, but twisting and prying got the block off.

With the motherboard free, I cleaned everything up and transferred the RAM and processor to the new motherboard. In addition I removed the Northbridge fan from the new board (and put it onto the old board so it would be stock again and RMA-able).

Here the new motherboard is all prepped with fresh thermal grease, ready to be installed. And yes, I would remember to clean off the CPU block before I mounted it on the CPU again.

Remounting the water gear on the board starts with the Northbridge chip block, since it once again has to be bolted down from the back. The board sits in the case at an angle so that I can get to the back of it, and I slid the block and bolts into the holes, then gently placed the nuts on the bolts until the threads caught. Then it's a process of turning each nut a bit at a time so that the block sits squarely over the Northbridge chip.

Once the Northbridge is in place, everything else is lifted up so that the board can slide into place. The motherboard is screwed down and then the video cards go into place, allowing the CPU block to be replaced as well. Then come the power/reset switch plugs, power/hard drive LED plugs, USB plug for the Matrix Orbital controller, SATA plug for the hard drive, IDE plug for the two DVD drives, and all the power plugs for the motherboard (there are three: main plug, secondary 12 volt and a molex for the video), plus the additional power plugs for the video cards.

A quick top-up of water into the reservoir and I was ready to power up again for the first time in more than a month.

And the beast lives! If you look close, the screen is stopped on a BIOS error because there's no CPU fan. Which is a reasonable error since there is no CPU fan. Some quick BIOS tweaks took care of that.

So, remember when I said there were two problems? The first one is now resolved - the machine is back to life with a motherboard transplant. Problem number two is how to avoid cooking the motherboard again. Within minutes of powering up, running no high-load software (like SETI@Home), the machine is already at 44C. Add SETI@Home and the temperature immediately rises a couple of degrees, causing the fan controller to turn up the fan to cool it back down again.

Fire up Half Life 2... well, I wasn't going to do that again.

I found the answer at Sprite - the guys I get most of my gear from. For whatever reason, they happened to have an Innovatek RADI-Dual in stock. I don't know why, they'd never sell the thing... well, okay, maybe not never.

This radiator is twice the size of the ones that I use in the case, and has mountings for two 120mm fans. It wouldn't fit in the case, but it would offer a whole bunch more cooling. Would it be enough? With it immediately available, it was too easy not to try it.

I mounted a pair of ultra-quiet Vantec 120mm fans, directly powered... I've burned up a couple of these lovely fans with controllers, so I didn't want to take the chance. And besides, even at full power these fans only generate 28dB of noise, so you can't hear 'em at all.

To connect the radiator into the loop I disconnected the top-side connector of the existing radiator and moved it to the bottom feed on the new radiator, then added a new hose from the top connector of the new radiator down to the old radiator. Powered up and started adding water to the pump reservoir as fast as I could to fill that new radiator. Several ounces later, everything was full and ticking along.

It's a little on the Mad Max side of things, but it sure does work!

Check out the front view of the machine, you can see the temperature of the water - just below 31C!

When I fired up SETI@Home, the temperature didn't move at all. So then the real test: play some Half Life 2. After one hour of play, the water temperature got to 32C. Methinks the fix is in!

Obviously, the system can't stay like this. But I'm afraid the only real answer to this problem is going to be much more radical: converting to central water cooling. That would involve putting a set of pumps, radiators and reservoir inside the server closet and running hoses through the walls to the two workstation bays in the office. The same way that you have a wall plate for power and a wall plate for network access, there would be a wall plate with water input and output. Then you'd just plug the machines in.

There are a bunch of advantages to the central water cooling solution. The first is that there will be a lot more water, and that water will be chilled. So the ability to cool will increase substantially. The machines will be even quieter, having no fans (except the whisper fans in the power supply), no pumps and no radiators. Another huge bonus will be that the heat of the machines will actually be taken out of the room, dumped into the server closet with its great big AC unit.

The downside is that the machines are no longer self-contained for cooling. When I have to service them, I'd need to use an external water cooling module, something like the CoolerMaster Aquagate or the Koolance Exos 2. All resolvable stuff.

So, for the moment, everything seems to be functioning here in water cooling land. I'm watching Phillip closely for any water leaks, I'm a bit concerned that the heat event might have damaged some seals. But so far, so good.

Wednesday, June 15, 2005 10:31:09 AM (Pacific Standard Time, UTC-08:00) #    Comments [3]  | 


My poor, neglected blog...#

Six weeks since my last entry... and it's not that I don't have anything to say, but I've been so busy, by the time I get home, I just want to sleep.

Various highlights of the past six weeks:

  • Hung out with Tim Huckaby and his family the weekend of April 16th, lots of fun!
  • Kate Gregory and I did a duet deep dive at the end of April, talking about VSTO.
  • All the Canadian RDs got together at Microsoft Canada in Mississauga, where we found out that Craig Flanagan, our intrepid leader, was moving on to bigger and more XBoxie things.
  • Fellow RD Guy Barrette spent a week out here doing talks on Visual Studio 2005 and had a chance to visit my little toyland.
  • I test ran my SQL Querying talk for Tech Ed at both the Victoria .NET User Group and VANTUG!

Which brings me up to current events... I leave this afternoon for the Netherlands to present at SDC 2005 at Papendal outside Arnhem. From there I'm headed to New London, Connecticut to spend some time with Carl and do a few shows (including something new!). After THAT, Carl and I are both headed down to Tech Ed in Orlando (same flights and everything).

I'm doing two sessions at Tech Ed, one is my Advanced Querying Techniques, Tips & Tricks session, which drills into various querying tricks I've collected over the years. This year I'm doing it with Steve Forte, and we're going to compare and contrast SQL Server 2000 and SQL Server 2005 to demonstrate how many of these slick querying techniques change with the latest and greatest.

The other session is a reprisal of my SQL Profiler for the Developer session that I did last year - there won't be any ice cream bars this year I'm afraid. However, I do have a special guest, Vipul Shah is going to show off some of the new goodies in SQL Server 2005 for Profiler junkies.

So finally, I'll stagger home around June 9th, all spring conferenced out.

Maybe then I'll get to fixing my monster machine... it burned up a week after I finished building it, and it's sat there dead ever since. Did I mention I've been busy? There aren't going to be any easy fixes; everything worked perfectly, but there's just not enough cooling in that little eight-inch radiator.

Friday, May 27, 2005 10:57:27 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


A Water Cooling Weekend#

Yep - that happy time again. I got some new parts in and decided it was time to rebuild one of my water cooled machines. And, since I was gonna dig into one, I figured it was a good time to do checkups on the others.

First up was my wife's system. It's a P4 with a 533MHz FSB, rock stable and the first machine I ever water cooled.

Doesn't the water look nasty? This is what happens when you let the water get too low - it cooks. A quick top off with distilled water fixed it up fine. It needs topping every three months or so. If you look really close at the pump, you'll see a white stain - this is a bit of leakage, caused by overheating. Note that at no time did the machine ever fail: like I said, rock solid.

After the refill, I ran the system for a while and then used my handy-dandy Raytek infrared thermometer to check the processor block temperature. Look closely at the processor water block and you'll see a little red dot beside the word X-Flow, which marks the spot where the thermometer is reading the temperature. 101.5F is a good operating temp.

Next up is the development workstation, the machine with three monitors attached.

This machine, like the last, has a nice, tidy water cooling rig. The video card in question is a Matrox Parhelia. Notice there are three SATA cables? Two for my RAID 1 drive array and one for my Plextor PX-716SA SATA DVD-RW! This machine was a little low on water, not as bad as the other, but still needed to be topped up.

The gaming system had been running for a while with air cooling while I awaited the arrival of several parts: a water cooling block for the Athlon 64, as well as new nVidia 6800 Ultra video cards and their cooling blocks. Everything showed up during the week, and this was my first chance to put it all together.

I had to strip the motherboard out of the machine to replace fans with water blocks - both the processor and Northbridge chip had mounts that had to be accessed from the back of the motherboard.

Here's a look at the motherboard with fans installed. That little Northbridge fan is particularly squealy. On the right is the little Northbridge cooling block (which I had stashed away) plus the new Socket 939 water cooling block and mount.

Here's a look with the fans stripped off and the processor and Northbridge cleaned and ready for water block mounting.

The processor mounted up with no problems at all - screw in the new support frame, put a fresh coat of thermal paste on the chip, polish up the block, place it on top of the chip and snap the locks in place. Alas, the Northbridge chip wasn't so easy.

The first problem was the alignment on the mounting holes. The water block sits right between the two video cards, so the hose connectors have to face directly toward the front of the motherboard. The way the water block went together, there was no way to mount it that way. I had to disassemble the block to flip the mounting plate over.

In the shot you can see the bits of the water block, from the clamping nuts, to the actual hose connectors, the block itself and the mounting bracket. I didn't pull the copper base out of the block - there was no reason to (except to show you), and I'd risk damaging a water seal.

After flipping the mounting bracket, I discovered that the mounting posts that crappy little fan used were too short for the water block. Off to Home Depot for some nylon nuts and bolts. A quick rub-down with thermal paste, a bit of fiddling and the Northbridge block got mounted.

You can see the hose connectors are facing forward, if not exactly square to the front of the motherboard. I was willing to favor the first video card, since it wasn't quite as close to the Northbridge water block. Notice also in this shot the EIGHT SATA connectors on this ASUS A8N-SLI Deluxe motherboard. Of which I'm using only one.

With the motherboard squared away, it was time to set my sights on retrofitting the video cards, a pair of ASUS Extreme N6800 Ultras.

This is the card, still fully intact, with the Innovatek water block sitting beside it, along with back plate and mounting hardware. Notice on the water block there's the center plate for cooling the GPU, four contact points for RAM and the left-most edge screws down over the voltage regulators. All those holes in the block get filled with various kinds of screws.

The video card stripped, ready for cleaning and water block mounting. Note the four screws from the GPU block, five screws from the RAM block, three screws for the fan and two plastic posts for the voltage regulators. Did I mention this video card is much, much lighter with all that crap removed?

After cleaning the chips off, applying new thermal paste and assembling all the bits very carefully, you can see the back plate with its four screws, the five spring loaded screws for the RAM mounts and two screws holding down the block on the voltage regulator. The video card is all heavy again.

Did I mention there were two of them? SLI video, doncha know.

With all the water blocks appropriately installed, it was time to reassemble the machine.

This is the assembled version. The blue cables are power cords; the video cards take two molex connectors each, plus there's another molex plugged into the motherboard, in addition to its normal main and secondary power plugs. The red/black wires are temperature sensors (four), black/red/yellow are fan connectors, of which there are two - one for the radiator fan, the other plugged into a water speed meter.

All those sensors and fans are connected to a Matrix Orbital display, wired to the system via USB - the silver braided cable (you have to look real close for that one). Oh, and the red braided cable running over the top goes to the single SATA drive.

The water plumbing is as follows:

  • Radiator
  • Reservoir
  • Pump
  • CPU
  • Video Card 1
  • Video Card 2
  • Northbridge
  • Hard drive
  • Water meter
  • Back to radiator

One temperature sensor is inside the case, the others are in the water loop, between the pump and CPU, Northbridge and hard drive, meter and radiator.

The controller is set up to vary the radiator fan speed automatically based on water temperature. While I'm still playing with the tuning, right now it's set to keep the system at 45C (113F). At 44C or below it'll slow the fan down to 25% of maximum speed. Above 46C it'll increase the speed of the fan to 100% to bring the temperature down.
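That fan curve is simple enough to sketch as code. Here's a minimal Python version of the same three-band logic - the temperatures and percentages are the ones from my tuning above, but the function name is made up and the hold-steady behavior in the band between 44C and 46C is my guess at what the controller does:

```python
def radiator_fan_percent(water_temp_c, current_percent):
    """Three-band fan control around a 45C target.

    At 44C or below, idle the fan at 25% of maximum; above 46C,
    run it at 100% to pull the temperature back down. In between
    I assume the controller just holds the current speed, so the
    fan isn't constantly hunting around the target.
    """
    if water_temp_c <= 44.0:
        return 25
    if water_temp_c > 46.0:
        return 100
    return current_percent

print(radiator_fan_percent(43.0, 100))  # cooled off: back down to 25
print(radiator_fan_percent(47.5, 25))   # too hot: full speed, 100
print(radiator_fan_percent(45.0, 25))   # in the dead band: stays at 25
```

The dead band is the interesting design choice: without it, a controller sitting right at the target would flip the fan speed back and forth on every reading.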

And if you're wondering why I'm posting this on a Monday... well, it took longer than planned to get everything finished. As usual.

Monday, April 11, 2005 8:33:01 PM (Pacific Standard Time, UTC-08:00) #    Comments [7]  | 


TechEd 2005 (Jonathan Goodyear is up to no good)#

Fellow RD Jonathan Goodyear filled the RDs in on a little secret that's going to take place at TechEd 2005... unfortunately, I can't tell you what it is.

But you can get a hint over at Jon's site.

Just another reason to attend TechEd 2005, as if you needed any more incentive.

Oh, and I'll be there too: I'm presenting two sessions, my famous SQL Profiler for the Developer session (which I'm told would have won “funniest session of Tech Ed” last year if such an award existed) and one of my favorite sessions of all time, but never-before-presented-at-TechEd, SQL Querying Tips & Techniques session.

Thursday, March 3, 2005 11:15:36 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


NASA on .NET Rocks!#

Just finished recording a new episode of .NET Rocks! My second as co-host.

On this week's show we interviewed Chris Maxwell and Randy Kim, who work at NASA's Ames Research Center on a product called WorldWind. It's similar to Google's Keyhole, but it's free (well, paid for by the US taxpayer), and it's got a stronger educational bent. Essentially they've gathered together lots of different bits of satellite data that you can use to explore the planet. Very, very cool. And all written in C# and the .NET Framework 1.1!

The chat room tonight was really cool - plus the WorldWind folks have their own chat channel as well, so we had lots of intermingling between the groups.

Even Robert Scoble showed up and hey, we got Scobleized!


Friday, February 25, 2005 10:02:21 PM (Pacific Standard Time, UTC-08:00) #    Comments [5]  | 


Rebuilding Cartman...#

I've been slowly working my way through the server rack, upgrading all of my servers. Some of the machines are as much as five years old, and all the spinning gear (CPU fans, case fans, hard drives) is essentially a collection of ticking time bombs. In addition there is new hardware to be added to the rack, which means virtually everything in the rack has to move... the new configuration of eight servers completely fills the 30U rack.

What makes this especially challenging is that they ARE servers... they're constantly in use. I can take them down for a few minutes, but after a half hour the phone starts to ring. However, some servers are more sensitive to this than others - and Cartman is one of the least sensitive, since he's largely an internal-only server.

Cartman has a variety of tasks. Primarily he's a file server, but he's also a domain controller (one of two), and the DHCP and DNS server. As a file server, he has a 400GB RAID array... doesn't sound like much, but I built it in October of 2001. It's done with a Promise SX6000 controller and six 80GB hard drives. At the time, it was a monster. Since it's essentially been on since it was first built, those drives have over 30,000 hours of spin time... very scary.
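That spin-time figure checks out with some quick date arithmetic - the exact build day in October 2001 is my assumption, but it doesn't move the answer much:

```python
from datetime import date

# From the October 2001 build to this February 2005 rebuild,
# assuming the array really spun continuously the whole time.
built = date(2001, 10, 1)      # assumed build date
rebuilt = date(2005, 2, 23)

hours = (rebuilt - built).days * 24
print(hours)  # 29784 - right around the 30,000-hour mark
```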

Before tearing Cartman apart I used Acronis True Image to image the boot drives, and I backed the entire 400GB drive array up on a single external USB 400GB drive. And yes, I used xcopy with verify and double checked everything before I tore it down.

This is what I saw after hauling Cartman out of the rack and popping the cover. Essentially identical to what I saw in October 2001 - one crammed case. You can see the six ATA/100 ribbon cables coming out of the Promise controller running to the two three-drive caddies holding the 80GB drives. In the middle are the two 17GB SCSI drives that are used as boot drives, which, along with the SCSI DVD drive, are run from the Adaptec 29160 SCSI controller. Oh, and an Exabyte external tape drive plugs in there too.

Disassembly of this beast starts with the metal bar running across the case that also supports the two SCSI hard drives (and a fan). Then the entire front drive array holding the DVD, floppy and two drive caddies was removed. Both the SCSI and RAID controllers were pulled as well, leaving the case pretty darn bare. With everything out I powered up the machine just to take a look and noticed that one of the CPU fans was barely spinning any more. I had planned on replacing them anyway, this was just extra incentive.

However, the motherboard is so busy that the fancy new Socket 370 cooling blocks I bought wouldn't even fit in the space! But I was able to reuse the old blocks by replacing the worn-out fans with the fans from the new blocks.

After a thorough cleaning, I installed a gigabit network card and began the rest of the reassembly. I'm retiring the Promise controller altogether, going to a SATA array using six Hitachi Deskstar 7K400 drives. Yep, that's right... from a 400GB array to 400GB drives, for a total of two terabytes! And to drive this puppy, I'd need a SATA controller, so I went back to Adaptec for their 2810SA controller.

It actually supports eight drives, but I only had space for six. You can see the controller card and the new caddies to hold the drives. SATA cables are much tidier than ATA cables, so I got a bunch of space back in the case.

Here you can see the Chenbro caddies with three SATA cables apiece. There's one power plug for all three drives (which is very nice) and it also has a heavy blower fan pumping air directly onto the drives.

The old 17GB Atlas V drives are replaced with shiny new 147GB Atlas 10Ks. More disk space!

With everything crammed back in the case, it was time to get things set up. Even before I started the install of Windows 2003 Server I wanted to get the array set up. What was interesting is that every card installed in the machine had a boot BIOS in it - the SCSI controller, the RAID controller AND the gigabit network card! Getting the BIOS set up to boot from the right device took some fiddling.

Then I decided to start the array configuration from the BIOS, so I set up a RAID 5 array. Being a diligent geek, I went to the Adaptec web site to check for the latest drivers, BIOS updates, and so on. Adaptec had updates for both the 2810SA and the 29160, so I updated both BIOSes. What's stunningly annoying is that you HAVE to install BIOS updates from a floppy. The software is hard-coded to read from drive A and nowhere else. Presumably I could set up a USB drive to do this, but this old SuperMicro motherboard ain't that smart.

I was glad I'd checked all this in advance - all over the readme files for the firmware were warnings that doing these upgrades would destroy the existing arrays, and that you'd need to back everything up. Since I had nothing on the drives, I had nothing to fear.

Feeling smug with all my firmware flashed, I headed off into the BIOS set up for the 2810SA to get my spiffy new drive array configured. Apparently I did it wrong because I selected “Clean” to start the array rather than “Build/Verify.”

But I didn't know this at the time - off it went, ticking away to itself. I thought it might take a long time to set up a two terabyte array, but it was done in about 15 minutes... well, almost done. It got to 99% and then said “Controller Kernel Stopped Running!” And then the machine would reboot. That didn't seem good.

Every time I restarted the machine and went back into the 2810SA BIOS, I'd get the same error and reboot the machine.

In an effort to be positive about my situation, I ignored the failure and moved on - set up Windows 2003 Server. Once it was up and running, I tried to install the drivers for the controller card, but it wouldn't recognize it. That can't be good either. I filed a tech support request with Adaptec, but I wouldn't hear back for 48 hours; by then I'd solved it on my own.

I went to bed late, very grumpy. The next morning I woke up thinking maybe the firmware update was a mistake. So I reverted - got the old firmware, set up new floppies and attempted to install it. But it kept failing with the same error. Couldn't revert.

Then, in a flash of insight, I realized what was happening to the controller - it was crashing! And right at the point of completing the array. After it rebooted, the controller would restart, see the array almost finished configuring and attempt to finish it... crashing the controller again! So, how to stop the array from rebuilding? Pull all the hard drives out! That'll slow the bugger down.

Sure enough, as soon as I pulled the drives, I was able to revert the firmware. Why I still reverted the firmware, I'm not sure - I guess I had a course in mind and thinking wasn't going to divert it. With the firmware reverted, the array had died, so when I plugged the drives back in, nothing bad happened.

Now afraid of the BIOS configuration stuff, I booted back into Windows, and reverted the driver as well to match the firmware. If you've never done this, you're a happier person than me: reverting to an older driver is a bugger. Windows 2003 Server has a rollback driver option, but it doesn't work if you haven't previously installed the older driver. So I had to do this the hard way - uninstall the driver and then carefully locate all the backup copies of the DLLs and kill them by hand. Once I had it all, installing the old driver worked, AND it came up just fine.

Now I was able to set up the RAID 5 array from Adaptec's client for Windows, which was a whole bunch clearer about the right ways to do things. And that's when I discovered that correctly building a two terabyte array takes an entire day.

The next day I discovered that my two terabyte array is actually a 1.8TB array. And Windows understands TB - it displays that way in Windows Explorer. Funny, huh? I wonder if they have PB (as in petabyte, a thousand terabytes) in there as well.
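The shrinkage is just decimal versus binary units: drive makers count a "400GB" drive as 400 billion bytes, while Windows reports in binary terabytes (2^40 bytes each). A quick sketch of the arithmetic, using the drive counts from this build:

```python
# Six 400GB drives in RAID 5: one drive's worth of space goes to parity.
drives = 6
drive_bytes = 400 * 10**9              # "400GB" as the manufacturer counts it
usable_bytes = (drives - 1) * drive_bytes   # 2,000,000,000,000 bytes

# Windows Explorer reports in binary terabytes (2**40 bytes each),
# which is where the "missing" capacity goes.
tb_binary = usable_bytes / 2**40
print(round(tb_binary, 2))  # 1.82 - hence the 1.8TB array
```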

The rest of the set up was uneventful, really... things got loaded back on, DHCP and DNS configured, and so on. The next level of excitement would come with the most dangerous update of all... converting an Exchange 2000 server to 2003!

Wednesday, February 23, 2005 6:52:24 PM (Pacific Standard Time, UTC-08:00) #    Comments [8]  | 


Rack Attack!#

Well, I finally broke down and started to rework my racks. I've literally avoided pulling them for more than a year, just patching things together whichever way I could. Take a look at the mess they were in before I started:

Several highlights of this mess I call my racks... notice the two bars poking out the front; those are the rails that the entire rack slides out on. Notice that between the two racks there's a new server (named “Tweak”) that has been sitting like that for six months. And notice the freakshow of a wiring mess as I've added VOIP boxes, a new router, a new wireless access point (sitting on top of Tweak), and so on. Hey, it's been more than a year!

The racks themselves are 30U Middle Atlantic AXS racks. The left hand one is for networking, it has a cable channel mounted on the left side for all the wiring. On the right is the server rack, which I had modified to be 30 inches deep instead of the standard 20 inches that Middle Atlantic makes for these racks. They're intended for stereo equipment, I use them for the computer gear because this way the server closet is much smaller - you don't need room to walk around it.

This is the rack pulled out onto the rails and ready for some service work. You can see the cable channel clearly now.

From the other side you can see the mess of wiring strung between the two racks... and the mess of wire in the back. It's not as bad as it looks (which is good, because it looks pretty bad). Notice also the “wall-shaker” style air conditioner that keeps the whole closet cool.

Besides the tangled mess of wiring, I also needed to add more power plugs, re-arrange some components, add new gigabit switches and additional wiring between the two racks.

A couple of hours later, the mess of wires is gone from the rack. This shot also shows the new double-sided power bar I added at the back to give myself more outlets, and the Oregon Scientific wireless temperature sensor (reading 71.6F) that lets me know the temperature inside the closet. Normally it's about 68F in there. There are alarms if it climbs above 75F. Also, this gives you a pretty good look at the folding arms that hold the rack from sliding off the end of the rails, and provide a channel to route the wires on and off the rack.

Here's the beauty shot of the network rack reconfigured and back in the closet. Here's an inventory (from top-to-bottom):

  • Gear shelf containing:
      • Xincom 603 Dual WAN NAT router
      • Linksys SR2024 24 port Gigabit switch
  • 2U cable tray
  • 2U 48 port Ethernet patch panel
  • 2U cable tray
  • Linksys SR224G 24 port 10/100 switch (with Gigabit uplink)
  • 1U Keyboard/Mouse/Monitor console
  • Cisco 3620 (mounted backwards)
  • 3U 48 port keystone patch panel (telephone and cable patches)
  • The old Nexland dual WAN NAT router
  • 5U gap (more UPSes will go in here in the future)
  • 1U power bar
  • 3U Hewlett-Packard rack-mount oscilloscope (long story)
  • 2U Minuteman 1000VA UPS (cut off in the photo)

That one bright green Ethernet cable you see in the shot is the patch cable for Tweak, the server still sitting on its side between the racks. I ran a new patch for it through the rack properly.

Next up, the server rack! And believe me, the network rack was the easy part of this whole process.

Tuesday, February 15, 2005 7:41:09 PM (Pacific Standard Time, UTC-08:00) #    Comments [7]  | 


Water Cooling Vindicated...#

When last we left the Water Cooling War, I was battling unreliability in my 800MHz FSB machines. All along I thought it was the RAM overheating, and I was hoping the heat spreaders I added would do the trick. One of the systems (the one from my last blog entry on water cooling) has been pretty darn stable. But the other machine (the triple screen beastie) was still getting BSODs every few days.

The interesting thing about these failures is that the hard drives would disappear afterward. It actually has two Samsung 160GB SATA drives in it, which I had planned to use as mirrored drives with the onboard Promise RAID 1 controller. After all, this is my main development machine; it's worthwhile making sure the data doesn't go anywhere. But one of the drives had disappeared, and I figured it had failed, so I was running the other drive solo.

And with the BSODs, both hard drives would be gone. I'd wait a half hour or so (I do have other machines to work from, after all) and the drive would come back. I knew I'd have to replace the hard drives eventually - they have a three year warranty - but extricating water cooled hard drives isn't a lot of fun.

Well, it all came to a head last night when the drive just wouldn't come back. So I hauled the machine out and ripped both hard drives out... very carefully, so that I wouldn't cause any leaks.

As you can see in the photo, the hard drives have blocks on either side, connected by a plastic tube. This plastic tube is just press fit into place to handle the variations in width between hard drives; pull too hard and it pops off and water goes everywhere.

I replaced both drives with identical models, put the whole thing back together and voilà - two hard drives, mirrored and happy. I fired up Acronis to restore the image backup I have of my workstation (the easiest way to recover a system - you DO have a backup strategy, doncha?) and left the machine whirring away 'til morning.

When I returned in the morning, the machine had failed, both hard drives disappeared. Guess it wasn't the drives failing after all.

So, what could it be? The onboard controller? These ASUS P4C800-E motherboards have TWO different SATA controllers on them, could they both be bad? I think not. So what would knock out both drives?

Gosh, lookie there... the SATA power adapter plugs into a 4-pin molex connector and provides connectors for BOTH SATA drives. Could there be a connection problem?

I replaced the adapter with a new one, and fired the system up - both drives recognized, no problem. I have to guess that once the system heated up, the connectors got loose and deprived the drives of power. This, naturally, would cause Windows to BSOD... and then there'd be no drives left.

So in a way, overheating is still the culprit, but I suspect that water cooling is not responsible for this. Of course, I won't know for sure 'til it's been running for a few months.

Tuesday, January 25, 2005 11:55:56 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


MPx200 Alive and Well in Canada...#

Back in March, I was in Kuala Lumpur with my friend Adam Cogan, doing a Business Intelligence workshop.

At the time, I grabbed a Motorola MPx200 cell phone. Pretty much all phones in KL are unlocked, so you can buy a phone and plug your own GSM chip into it, and off you go.

I use Rogers for my cell phone service, and they offer both CDMA and GSM service, so when I got home, I switched my cell number to GSM and got the chip, plugged it into the phone, and off I went. And it did work - for phone calls at least.

The MPx200 runs Windows Mobile 2002 as its operating system, and includes a version of Internet Explorer and even MSN messenger. And, of course, Solitaire. But I couldn't get the web stuff working - I talked to Rogers, and I was honest with them, telling them that I wasn't running one of their phones. They were all enthusiastic about what I was attempting, wanted to know about the phone... in the end I couldn't get the stuff to work, and I only have so many cycles for fighting with such things, so I let it go.

Recently, I managed to get the phone upgraded to Windows Mobile 2003 - it's not officially released by Motorola, although it is coming with their new MPx220. But a DNR listener happened to have a copy, so I grabbed it and, willingly risking my phone, upgraded it. It worked great, and now I have TWO games on my phone... Jawbreaker AND Solitaire. And the battery life is substantially improved.

Feeling all inspired, I called Rogers again this weekend, prepared to commit a good hour to being on hold, being switched from person to person, etc, and being told “Now you understand we can't guarantee this will work...” It took three different people to reconfigure my account and get the right set of settings into the phone, but in the end, it worked! The trick is to make clear that the phone has more in common with the Treo 600 than with any Motorola cell phone that Rogers handles. A proper data account and the detailed internet settings were all it took to make the thing fly.

So my thanks to Rogers for helping me out, the geek potential of my cell phone took a nice leap upward... what could be more fun than writing MSN messages with a telephone keypad?


Monday, September 27, 2004 10:29:15 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


MOM 2005 and SQL Server 2005 at VANTUG#

Had a great time last night at the VANTUG General Meeting. Excellent turnout too - almost every seat was filled.

I demonstrated Microsoft Operations Manager 2005. This is a very cool technology for those of us who are responsible for the care and feeding of a bunch of servers. MOM 2005 is part of Microsoft System Center, which also includes Systems Management Server. And Microsoft System Center is a key player in the Dynamic Systems Initiative (DSI), one of Microsoft's three major initiatives (along with Trustworthy Computing and the .NET Platform).

The Dynamic Systems Initiative focuses on making all aspects of software and hardware manageable. At the software level that means building applications with management in mind - the most obvious step being putting Windows Management Instrumentation (WMI) points throughout your application. Software like SQL Server and Exchange reports a steady stream of WMI data points to anyone listening. Think of it as SNMP on steroids. And MOM is all about consuming WMI data.

You can instrument your own applications in .NET using the System.Management namespaces, creating your own custom events, performance counters, and so on. It's not a simple thing to do, but if you're serious about building an application that can be maintained and used in the long term, it's worth the effort. Log files are not enough any more; you want to work and play in the DSI world.

Ultimately, doing DSI properly means spending less money and time on keeping servers operational. It's a worthy goal, but like all of Microsoft's major initiatives, it's not going to happen overnight: it's a combination of new software, new thinking and new work. A couple of years from now, we'll look back on the way we manage systems today the same way we look at email pre-Internet.

Besides having fun with MOM 2005, I showed off the new error handling abilities of SQL Server 2005, essentially doing an abbreviated version of my T-SQL Error Handling in SQL Server 2005 session from Tech Ed Malaysia.

After that, it was all about toys. Here's the goodies I showed off:

Unfortunately, we didn't get to talking about my home server rig, but I'm sure I'll get a chance in the next month or so to talk about the trials and tribulations of keeping a half dozen servers happy and healthy at home.


DSI | Speaking | SQL Server | Toys
Thursday, September 23, 2004 10:34:44 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


Tech Ed Malaysia Post-Mortem#

I'm finally back home from Kuala Lumpur after 22 hours of travelling. I couldn't bring myself to blog in my near hallucinatory travel state, so I've had a good night's sleep, woken up very early and am busy catching stuff up, including this.

Tech Ed Malaysia was lots of fun, the attendees were very enthusiastic and friendly. The speaker cadre was astounding, very talented folks and lots of fun to be around.

Although the hotel was in the middle of nowhere (I started calling it the “New Jersey of Kuala Lumpur”), we did manage to get into town a number of times and do some fine exploring. We ate a Chinese dinner from an outdoor kitchen in an alleyway, and bought all kinds of gadgets from various shops.

Saturday was my last full day in Kuala Lumpur, and I started it out doing DotNetRocks from my hotel room. I had brought my full audio rig with me, including the large condenser mike and digitizer. What I hadn't realized when I first packed it up is that the power supply for the digitizer does not support 220 volts, unlike my laptop power supply. Tim Huckaby lent me his power converter (thanks Tim!), but I didn't have the smarts to actually test it out in advance.

Since Kuala Lumpur is exactly twelve time zones away from New London, Connecticut, where Carl and the DNR studios actually are, I was doing the show Saturday morning live with them working Friday night. So I'm up and on my laptop at 7am, talking to Carl (where it's 7pm). We talk about the toys and how to do the show and agree to reconvene at 8:15am for a sound check before the show starts at 9am.

By 7:30am I'm off for a shower and some breakfast. I'd wanted to talk about the Low Yat Plaza we'd found the night before, which is this incredible toy boy heaven. Kim Tripp had taken some photos of it and I wanted to blog those before the show so everyone could see this crazy place. Kim loaded the pictures on a USB key for me and I went back to my room.

I get back to my room and I can't open the door - mysteriously, the flip bar lock had set itself! I went down to the front desk to get some help, and they sent a maintenance guy up. In theory, these locks can only be unlocked when the door is closed, but the fellow was confident he could open it. Apparently it happens all the time?!?

It took him a half hour of fiddling, but he did get the door open. Then it was my turn for some stupidity. I wired up my audio rig, but the digitizer wouldn't power up. I couldn't make Tim's power converter work! By then I had missed the setup time and the show had to start, so I had a good hour to futz around before break time, when we'd have another chance to test things out. I figured I might as well dump the pictures and return Kim's USB key; she was leaving in another hour or so.

When I told Kim my woes, she produced another power converter for me to try, so I went back up to my room, new block in hand. Kim's block had European plugs on it, and the European sockets on my plug adapter were too loose to hold her heavy block in place, so I had to use Tim's block as a converter to hold things together... the resulting contraption stuck the better part of a foot out of the wall, so I braced it with my chair to stop it from falling out.

So, from the wall working outward, the plug adapter (my laptop plug is coming out the top), into which Tim's converter (acting as an adapter) is plugged, then Kim's converter, and finally the digitizer brick.

This did the trick - I was finally up and running. At the half way point in the show we took some time to test latency (ping-pong), which turned out to be brutal: apparently doing VOIP half way around the world takes six seconds. Here's what my running audio rig looked like:

So in the end, the show came off fine, with the severe lag I couldn't chat with Carl and Rory much, but just sort of blurted out how much fun I was having in KL, the toys to talk about and the contest. As soon as the contest was under way I disconnected and ran to return Kim's power block to her before she had to leave.

With Kim gone, Goksin, Malek, Adam, Brian and I were free to do some serious toy shopping. We headed first to Central Market, picking up T-shirts, jewelry and other odds and ends. Brian and Adam cut out early; Brian had to head to the airport by 6pm, and Adam wanted to show Brian his tool before he left. So the three remaining shoppers headed back to Low Yat Plaza to really check out the toys. Our mission: to get gigged.

Malek, Goksin and I all wanted gigabyte storage bits. Malek and Goksin were after 1GB SD cards, and we all wanted a 1GB USB key. Kim had been gloating all week about her gigabyte scores in Singapore, having acquired a 1GB Compact Flash card AND a 1GB USB key. And to top it off, she'd bought a 2GB Compact Flash card on our first visit to Low Yat the night before. She's the gig queen, and we wanted to at least be in the club.

We started at the very top level of the mall and worked downward, and it didn't take long to find this tiny Pretek 1GB USB drive. I started calling it “the gigabyte you can fit up your nose.” Unfortunately, we were in a showroom and they had none to sell! Supposedly a store on the lower levels had them, so we headed there. Along the way we did find two 1GB SD cards, so Malek and Goksin were in the gig club for sure.

When we finally found the Pretek dealer, he had only ONE of the little drives. Very annoying, but we bought it anyway. Then we went back up to the top floor to complain. The fellow there was nice enough to call around to the other shops in the mall and found one more little drive, so I got one as well. So we three boys are all in the gig club, although Queen Kim leads the way as usual.

Speaking of Queen Kim: as we were wrapping up our toy feeding frenzy, Kim phoned from Singapore! She was holding in her hands the Canon 20D digital SLR, and wanted to confer with the ToyBoy on pricing... she figured the price converted to about $1675 US... I told her to buy, buy, buy! The camera isn't even available in the US yet (I believe it's going to be released October 15th), and the best pre-order street price I've found is $1500... a $175 premium is worth it to be first!

Ultimately, I don't know if she actually bought the camera. I hope she did; I'm sure I'll hear about it soon enough.

After shopping we ate dinner at a Japanese buffet, then headed back to the hotel - Malek had a ride to the airport for 11pm. Enough time for a couple of quick beers before he was gone.

The next morning Goksin and I got up early and had breakfast together. Goksin left at 6am, I left at 7am. I called Goksin when I was ticketed and through customs at the airport; poor Malek was still there. Apparently Emirates airlines had botched his reservation and left him for dead. Goksin, in the finest tradition of the Anti-Suckiness club, got a ticket for his buddy to get him as far as London; I'm sure he'll get himself the rest of the way home from there, with a fine tale to tell. The three of us had a cup of tea together in the airport before dispersing to our various gates for home.

Speaking | Tech Ed | Toys | Travel
Monday, September 20, 2004 5:24:34 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


Kuala Lumpur - ToyBoy Heaven!#

Last night we went out into the streets of Kuala Lumpur and found the ultimate toyboy mall - six floors of toys!

Going up the escalator, you're looking into the center of the mall: it's a big cylinder, six floors high, geek stores all the way around.

Looking back down, it's all cell phone shops, PC hardware, gadgets, toys, you name it...

How many malls do you know that have huge billboard signs for hard drives?

Toys | Travel
Friday, September 17, 2004 6:56:36 PM (Pacific Standard Time, UTC-08:00) #    Comments [5]  | 


Everything Takes A Little Longer...#

Well, one night stretched into four, as usual...

After my main workstation freaked out and stuff stopped working reliably, I pulled the motherboard and had it replaced under warranty. It meant peeling all the water cooling off very carefully to extract the board without making a mess, but I got it done.

It took a couple of days to actually get the motherboard back home. Sprite got it in late Thursday night, but I didn't swap it until Saturday morning. Then there are always the fiddly bits of getting it mounted again, and getting the water cooling blocks cleaned up and re-installed.

Along the way I had blown the drives off, thinking that the network and RAID problems might be fixed with a clean install. I'd always wanted to re-install that system anyway: when I did the initial installation a few months back, somehow I'd left Hyperthreading off, which actually installs a different kernel in XP. I never found a way to fix it; everywhere I looked, the answer was “re-install from scratch.”

I like a scratch re-install - it feels fresh. So off I went on my happy re-install path. Unfortunately, the built-in NIC is a late-generation Intel gigabit jobbie that the XP disk doesn't recognize, so all the network setup has to be done post-installation, once I can get the driver loaded.

Once I got to the point of setting up the network, I couldn't register with my domain - the error message said the user or password was wrong. I went around in circles for a couple of hours on this one, interspersed with a little stress-relieving Doom 3 when I couldn't figure out the problem (only id has really figured out the therapeutic value of shotgun blasts).

I could surf the Internet from the machine, but I couldn't activate and I couldn't register with the domain. And the error messages were hopeless. Then I noticed the date was set to January 1, 2002. So I opened a command window, used net time /set to grab the time from my domain servers, and everything was fine - activated and domain registered. I think Kerberos is a fine idea, I just wish they'd give some hints in the error messages; “bad user name or password” doesn't cut it.
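For anyone hitting the same wall: Kerberos refuses to authenticate when the client and server clocks differ by more than the allowed skew (five minutes by default), which is why a machine stuck in 2002 gets “bad user name or password.” A minimal sketch of the check, with made-up times:

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # Kerberos' default clock-skew tolerance

def within_skew(client_time: datetime, server_time: datetime) -> bool:
    """True if the two clocks are close enough for Kerberos to accept."""
    return abs(client_time - server_time) <= MAX_SKEW

server = datetime(2004, 9, 4, 10, 0, 0)          # hypothetical domain controller time
fresh_install = datetime(2002, 1, 1, 0, 0, 0)    # the date my machine woke up with
print(within_skew(fresh_install, server))                   # False - rejected
print(within_skew(server - timedelta(minutes=2), server))   # True - after net time /set
```

Two minutes of drift passes; two years obviously does not.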

Oh, and I threw on Windows XP Service Pack 2, of course. Then I loaded a couple of SETI@Home work units to run and went to bed. In the morning, the screen was covered in white bars and I had no user interface... although SETI@Home was still doing its thing. I used my handheld infrared thermometer to check the temperature of the Matrox Parhelia... the water loop was showing 110F (remember, this had been running under full load all night), the water block on the video card was at 120F, and the uncooled RAM chips were at 160F. A wee bit too hot.

As it happened, I had some Tweakmonster BGA RAM sinks on hand; mix in a little Arctic Silver Thermal Adhesive, and I'm cooling video RAM.

Notice I've STILL managed to avoid breaching the water loop!

Sunday, September 5, 2004 9:05:25 AM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


Win a Battle, Lose a Battle...#

Fresh off the victory of my water cooled secondary workstation, I decided to finish off my primary workstation. The primary is the triple-screened machine running a Matrox Parhelia video card. I do my development work across its lovely 3840x1024 set of displays.

I had a couple of minor things still to do on it: adding a monitoring display for the front of the case that shows temperatures and fan speed. Of course, in this situation I'm monitoring the temperature inside the case and in the water loop (using a water loop sensor), and the “fan” I'm monitoring is actually a water loop impeller.

Harmless revisions, right? So off I go, fitting the bits into the case, breaching the water loop carefully and inserting the new components, and everything goes fine.

Until the motherboard (an ASUS P4C800-E) freaks out and stops working. Well, just the network adapter, and the RAID controller, and... well, I don't want the motherboard any more. It's all under warranty, but the price of building your own machines from parts is that you have to fix 'em too. So I called up my buddy Mike Neilsen at Sprite Computers (Mike supplies most of my components) and he ordered a replacement motherboard for me.

It didn't take me long to pry the motherboard out of the case, and I managed to do it without breaching the water loop!

So I guess tomorrow I'm rebuilding my main workstation... a night without my beloved triple screen display! Horrors!

Wednesday, September 1, 2004 5:11:38 PM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


The Water Cooling War...#

There's a certain contingent of folks (and you know who you are) who are living their water cooling dreams vicariously through me, and they’ll all be happy to know that I put aside some time to try and clean up my water cooling problems and finish some machines.

When last we left my intrepid bout of fiscal and productivity irresponsibility, I had hit a reliability wall with my 800MHz FSB machines. These are the high-performance units that get damn hot. The problem, as near as I could figure, was in the RAM. The two machines I had set up for water cooling would periodically hang, and careful study of the occasional Blue Screen of Death seemed to indicate RAM problems.

I got my first hint that the RAM might be trouble when I stuck a finger on one of the chips and left the fingerprint behind. Yeah, they were hot all right. My theory (I mentioned it on DotNetRocks some time ago) is that with the reduced airflow in the case caused by removing all those fans and replacing them with water cooling blocks, the RAM no longer gets enough air circulation.

My first attempt at a solution was to put heat spreaders on the RAM. These are copper plates that strap to either side of the RAM modules; presumably they pull heat out of the RAM itself by giving it more area to spread into. So far I've noticed that the heat spreaders are just as bloody hot as the RAM. Next I'm going to try adding a low-velocity slot fan in the back of the case to draw more air through, and see if that helps.

Of course, the wonder of periodic failures is that you never know if you got it right or not. You're waiting to see if a presence is absent... in this case, six months without a Blue Screen of Death will be my only proof that I was right.

Also, the crazy water cooling machine, the one with the Orbital Matrix controller in it to vary fan speed by temperature, had a leaky pump. I slathered the pump with silicone, but it leaked again. Then it stopped leaking. Then it started again. Then it stopped. Then I realized I was fighting with a hundred-dollar part in a two-thousand-dollar machine and bought a new pump. So that had to go in.

And finally, since I had been noodling with this bloody machine for so long, a new cooling plate had come out for the ATI 9800 XT Pro I had, so I had to buy that too.

So, here’s a detailed record of the entire process:

1. Shut down machine.
2. Remove the cap from the reservoir.
3. Disconnect the highest point in the water loop, which for this machine is the top plug of the radiator, being careful to make sure the water line is drained (which is why you remove the cap from the reservoir, so some air can get in).
4. Unplug the ATX power connector from the motherboard.
5. Plug the bypass plug into the ATX power connector.

6. Squirt water around the room as the power comes on for the pump.
7. Switch off the power supply so the pump turns off.
8. Get the water catcher (aka the yogurt container) and position it so the removed hose is aimed into the container.
9. Switch power supply on again.
10. Stare in confusion as the pump does not turn on.
11. Fiddle with bypass plug, switch position, try to find combination that works.
12. Go play Unreal Tournament 2004 for a half hour because blowing up digital stuff is less expensive than beating this stupid machine with a sledgehammer.
13. Return post-destruction to discover that no combination of power switch and plug will make the pump turn on.
14. Test with another power supply (What? You don’t have a spare power supply? What kind of geek are you?)
15. Pump still won’t start. Unplug pump.
16. System powers up without pump using either power supply.
17. Test power supply with replacement pump, the replacement pump works fine.
18. Devise a bulb pump to remove the water from the system without the pump.
19. Put cap back on reservoir before water comes flying out from the pressure of the bulb pump.
20. Drain majority of water from the system using bulb pump.

21. Stand machine up on its side, drain balance of water.
22. Remove side of the case in order to extricate pump.
23. Disconnect hoses, pull off reservoir, remove pump.

24. Mop up additional water spills from pump removal.
25. Pull video card for additional cooling block installation.

26. Remove the existing water connector from the A side cooling block already mounted to the video card.
27. Add water interconnect to A side cooling block to allow B side to plug in.

28. Remove existing nylon screws holding A side cooling block on.
29. Put conductive goop on the backside RAM chips of the video card.
30. Place B side cooling block onto card, pressing interconnect into place.
31. Insert nylon screws through B side block, card, and into A side block.

32. Discover that video card will no longer go back into the AGP slot; the B side cooling block hits the hoses coming off the Northbridge chip.
33. Play some more Unreal Tournament 2004.
34. Install pump.
35. Install reservoir on pump.
36. Re-install hoses onto pump.
37. Remove the water block from the Northbridge chip.
38. Rotate Northbridge water block 180 degrees.
39. Reinstall Northbridge water block.
40. Install video card.
41. Reroute plumbing to deal with rotated Northbridge and moved connector on video card.
42. Replace pressure fit water flow sensor with screw down type.

See? Easy. Just follow this simple 42 step process!

Man, does the inside of that machine look like R2D2 barfed or what?

Monday, August 30, 2004 11:42:27 PM (Pacific Standard Time, UTC-08:00) #    Comments [6]  | 


Touring Toys...#

Kim Tripp, Clemens Vasters, Goksin Bakir, Christian Weyer and I all went out yesterday (Tuesday) to take a look around Seattle. The weather was just plain nasty, so we went for indoor type things.

In the end, we went to the Museum of Flight and did a tour of the Boeing 747 factory in Everett.

Kim Tripp, the Gadget Girl herself, has this groovy navigation system in her SUV... the discussion turned to making a business out of improved voice communication for navigation.

Clemens is an advocate of the sexy female approach, so that when you miss a turn, the system says “Don't worry baby, I know you'll get us back on track.”

Kim is on the mother-in-law side of things... “You idiot! Turn around right now! I'm not calculating another route for you!”

Methinks this week's ToyBoy spot on DNR is going to be over the top...

Drivel | Toys | Travel
Wednesday, August 25, 2004 10:20:00 AM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


Toy Boy hangs with Gadget Girl...#

I'm down in Redmond hanging with Kim Tripp, aka The Gadget Girl.

Microsoft is throwing an evangelism party, and all us crazy evangelistic RDs are invited, along with lots of Microsoft folks.

Kim was nice enough to invite a few folks out for a day before the show started, and offer to put me up for the night... since I'm only a couple of hours drive away, I didn't have to fly, which is always good.

Apparently I'm not the only one that's toy crazy. We're having fun comparing our various gadgets.

More to come, I'm sure.

Drivel | Toys
Monday, August 23, 2004 10:51:27 PM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


A Toshiba Tablet Give Away on DotNetRocks!#

Well, Carl's really gone and done it this time.

Microsoft is providing a Toshiba M200 Tablet PC as a prize, to be given away sweepstakes-style live on the DotNetRocks show August 26th.

All you gotta do is fill in a form. Find out the particulars at the DotNetRocks web site.

The prize draw is now less than a month away! Woohoo!

And don't worry, I'll still be on every week near the end of the show to talk about a cool new toy, and a not so cool toy.

Drivel | Toys
Tuesday, July 27, 2004 10:49:46 PM (Pacific Standard Time, UTC-08:00) #    Comments [3]  | 


ReelFast Film Festival...#

My friend Kent Alstad talked me into joining his team for the ReelFast Film Festival. The idea is for a team of up to ten people to write, shoot and edit a film of ten minutes or less in 48 hours using an “inspiration package.”

The concept is elegantly simple and very cool. To enter, you fill in an application form, cough up $250 and create an inspiration package. The inspiration package contains:

  • a sound bite
  • a photograph
  • a location idea
  • a surprise (typically a prop)
  • a food donation for ten people

So the contest starts on Friday, August 13th at 5pm. You pick up your package and then you have 48 hours to return a completed film. Our general plan is to write the script Friday night, shoot all Saturday and edit all Sunday.

So Kent, being the brilliant and wise project manager that he is, pulled the team together this weekend for a dry run. He's collected lots of advice from experienced film folks as well as ReelFast veterans. The idea was to do an end-to-end test of our system, using a script that has a few scenes, shooting them and editing them into a rough cut, just to see how long things take and how difficult they are. We learned a ton of stuff.

Kent went out and picked up a second-hand Steadicam Jr. off eBay, which makes a world of difference in the quality of the filming. Handheld cameras are too jerky, and tripod-mounted cameras limit your shots too much... being able to walk beside an actor as they walk, without jerking all over the place, is amazing. For a few hundred dollars, it sure changes the look of your home movies.

Drivel | Toys
Sunday, July 25, 2004 10:22:50 AM (Pacific Standard Time, UTC-08:00) #    Comments [3]  | 


Doing DotNetRocks!#

On June 24th I was a guest on DotNetRocks... but we didn't talk about .NET, we talked about my favorite subject, TOYS! Actually, the focus was on water-cooled computers, which is definitely toy-ish, although we digressed into a number of equally entertaining topics.

There were a variety of questions, so I figured I'd best answer them here. First off, I put together a little photo-pictorial of one of my water cooling conversions.

One of the gizmos I used in that photo-pictorial, but didn't take a photo of, is this little motherboard power adapter that I plug into my power supply so that I can fire it up without actually turning the machine on. It's very useful for running the pump without heating anything delicate up.

It's just a female 20-pin ATX plug that connects pins 13 (ground) and 14 (power supply on) together. So there ya go Geoff, don't say I never did nuthin fer ya.

If you looked at the water cooling page above, you may have noticed I'm using rackmount cases for my workstations. I have a server closet that's all rackmounted, but I also had my desk custom built with rackmount bays as well.

This is one of the bays being fitted out while the office was still under construction. The rack itself is a 12U Middle Atlantic SRSR Rotating Sliding Rail System rack. This rack actually slides out of the bay and then rotates once fully extended so you can get at the back of the case without digging around blind. I have two of these in the office, one for each main workstation bay. There's enough room on the rack for a UPS, two PCs and other sundry gear.

Although we didn't talk a whole lot about it on the show, for rackmount junkies, here are a couple of links to my server rack setups. The first link is to my old rack, which ran from September 2000 to December 2002. After that, my new rack server closet was up and running, and so it continues to this day.

This shot was taken today... the rack is essentially the same as it was on December 19, 2002, except that it's a whole bunch messier. Over the summer I'll be rebuilding most of the systems in here; after all, some of the hard drives are now four years old and essentially ticking time bombs, well past their MTBF (Mean Time Between Failures).
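To put “ticking time bombs” in rough numbers: under the usual exponential-failure assumption, the chance a drive is still alive after t hours of service is exp(-t/MTBF). A quick sketch, where the 50,000-hour MTBF is a hypothetical figure of mine, not any particular drive's spec:

```python
import math

def survival_probability(hours_in_service: float, mtbf_hours: float) -> float:
    """Probability a drive is still alive, assuming exponential failures."""
    return math.exp(-hours_in_service / mtbf_hours)

mtbf = 50_000                # hypothetical spec-sheet MTBF, hours
four_years = 4 * 365 * 24    # ~35,000 hours of continuous spinning
print(f"survival odds after 4 years: {survival_probability(four_years, mtbf):.1%}")
```

With those assumed numbers, roughly half the drives would be expected to have died by now - hence the rebuild.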

Sunday, June 27, 2004 5:50:16 PM (Pacific Standard Time, UTC-08:00) #    Comments [1]  | 


The most beautiful PC you ever will see...#

Hey, I like building my own gear, I'm an old time geek after all.

The case modding culture is a bit baffling to an old guy like me, though. I could handle the modders, trying to tweak their machines to maximum performance. Heck, we did that back in the old Z80 days. I must have done a couple of dozen lower-case mods for the TRS-80 Model 1.

But with the multiplayer first person shooter games came LAN parties, where a whole pack of geeks would get together somewhere and blow the heck out of each other. And that naturally leads to wanting to show off your machine... hence case modding.

There are case mods and there are case mods. However, this guy... this guy is in a league of his own. This is the most beautiful PC you ever will see.


Tuesday, June 15, 2004 7:40:20 AM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


There's no such thing as too much screen space!#

My friend Scott Hanselman has been worrying about his multi-monitor productivity. I agree whole-heartedly: this blog is being typed on a machine with a Matrox Parhelia video card driving three Viewsonic VP181b LCD panels for a combined resolution of 3840x1024. There is no such thing as too much screen space.

(Yeah, my desk is a mess, so what?)

You can see my email on the left, blog in the middle and Scott's blog on the right. When I'm developing, the code window is in the middle, with the property windows stretching over to the next monitor on the right, docs further right, and the running app on the left... ah, bliss.

9XMedia, Mass Multiples and Panoram Technologies all sell pre-configured multi-monitor setups. I'm still looking for an excuse to set up one of 9XMedia's 5x2 arrays of 22 inch displays... how's 19200x4800 grab ya! And only a mere $190,350! Even in more reasonable configurations, these ready-made solutions are substantially more expensive than just putting something together yourself.
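The arithmetic behind these monster desktops is simple enough to sketch; the panel resolutions below are the ones mentioned in this post (1280x1024 per panel on my triple rig, 3840x2400 per panel on the 9XMedia wall):

```python
def combined_resolution(cols: int, rows: int, panel_w: int, panel_h: int):
    """Total desktop resolution for a grid of identical panels."""
    return cols * panel_w, rows * panel_h

# My triple-head rig: three 1280x1024 panels side by side
print(combined_resolution(3, 1, 1280, 1024))   # (3840, 1024)

# 9XMedia's 5x2 wall of 3840x2400 panels
print(combined_resolution(5, 2, 3840, 2400))   # (19200, 4800)
```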

Viewsonic sells multiple-display stands for their thin-bezel monitors. Most Viewsonic LCD panels have standard VESA 75mm or 100mm mounts, so the stand that comes with the monitor can easily be removed and replaced with a third-party stand. Mediamounts and ICW make some great stands.

I'm also in love with the whole concept of LCD TV, although many of the dedicated LCD TVs, like Panasonic's line, are ridiculously priced. When my wife wanted a TV in the kitchen, I combined an ICW wall mount with a Samsung 15 inch LCD monitor with a built-in tuner and got a slim, trim LCD TV solution for a few hundred dollars. Today the 15 inch model is gone, but Samsung makes a 17 inch bigger brother, also reasonably priced.

But if you want to talk serious monitorage, I'd look at Samsung's 241MP, a beauty of a 16:9 24 inch LCD panel running a native 1920x1200. A pair of those would make a hell of a desktop, regardless of the absurd price. Another outrageous monitor is Viewsonic's VP2290b, a 22 inch LCD beast with a native 3840x2400 resolution. 96 PPI? BAH! Try 204 PPI! And priced accordingly, too.
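The 204 PPI figure falls straight out of the resolution and the diagonal. A quick sketch, where the 22.2 inch and 17.1 inch diagonals are my assumed figures for the VP2290b and a typical desktop panel:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch, from native resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(3840, 2400, 22.2)))   # ~204 for the VP2290b
print(round(ppi(1280, 1024, 17.1)))   # ~96 for a typical desktop LCD
```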

Some general thoughts on the state of LCDs today...

Current-generation LCDs run at 25ms response times or better, which makes gaming just fine. In fact, I've played Unreal Tournament on my triple-screen setup, so I have a 143 degree field of view of how badly I suck at Unreal Tournament. The whole limited-viewing-angle thing is going away as well, with 160 degree and better viewing angles now common.

Good wallpaper can be a challenge. 9XMedia gives away a bunch. My current one is a shot of Mars from the rovers that happened to be the right proportions.

This whole DVI vs. VGA battle is silly too. The center display of my triple rig uses a DVI cable; the left and right are on VGA. You can't see the difference in the displays. The only real difference is that DVI cables are limited in length and compatibility. A good quality VGA cable is a better bet every time.

When I'm buying LCD panels, I like name brands (Viewsonic is my favorite), and as much brightness and contrast as I can afford at the time. And as much resolution as I can get away with - no such thing as too much screen space!

Warranties for LCDs are very picky - if you read them closely, you'll see they don't cover a couple of failed pixels. That said, all the new Viewsonic displays I've bought recently (and that's quite a few) have been flawless: not a single bad pixel.

The most vulnerable part of an LCD panel is the backlight, which is generally a fluorescent bulb. Fluorescent bulbs need a ballast, which makes them susceptible to power-fluctuation damage. In an environment where power is really bad, you can cook off a backlight in less than a year. Sure, it's under warranty, but what a pain in the butt. Plugging your LCD monitors into a small UPS will protect them.

Many art folks can't stand LCD panels - they're much more limited in the number of colors they can display. Case in point: my 12 year old daughter. She's a Photoshop nut, has a Wacom tablet, and drawing is what she's all about. When I built her machine, I stuck on one of the 15 inch LCD panels I had in the office. Within seconds she noticed that the colors in her drawings were wrong. I couldn't see the difference, but then, I have no art skills. The machine went into her room with a 19 inch CRT attached.

LCD technology is still evolving - CRTs are as good as they're going to get, but LCDs are nowhere near done. The difference between my four year old Viewsonic 18 inch LCD and my new Viewsonic 18 inch panel is substantial: brighter, faster, more contrast, thinner bezel, lighter and less expensive. And it's only going to get better.

Wednesday, June 9, 2004 9:35:16 AM (Pacific Standard Time, UTC-08:00) #    Comments [4]  | 


I'm a PC Plumber#

You may recall I had a little water problem with my PC... well, I managed to find and fix the leak.

The challenge of fixing the leak was finding it. I slid some paper underneath the machine (after removing the covers) to see where the water dripped out, and let it run for a while. That led me to the pump. But even knowing the water was somehow coming from the pump, I couldn't actually see the leak. So the pump had to come out.

Now, removing the pump means breaching the water loop (technically, it's already breached by the leak, but still). So I had to figure out how to get the water out of the system without getting it all over the office. The trick is to find the highest point of the water loop, where the water naturally drains when it's not in use - or create a high point by lifting a hose as high as possible until it's full of air. Hopefully this is on a nice, long hose that you can unplug, lift out and stick into a bucket (or in my case, a large, empty yogurt container). Then you need to fire up the pump again, preferably without burning up your system.

I have this little plug that I stick onto the main power supply connector that does two things: it makes sure the motherboard is unplugged so the board won't power up, and it lies to the power supply so that it will turn on without being plugged into the motherboard. The result is that the pump fires up (along with the hard drive, and anything else plugged into the Molex connectors).

This is the moment when you find out whether you unplugged the right end of the hose - either the water pumps out into the bucket, or it shoots all over the case (ask me how I know). I took a shot of the pump extracted from the case; you can see the four bolts, on their rubber bushings, that hold the pump in.

Now that the pump was extracted, I made a little closed loop, hooking a hose from the output to the input of the pump, then poured a bit of water back into the pump and fired it up.

You may notice the water looks rather white and foamy - it is. The pump is under so little pressure that the water is just ripping around the loop and swirling in the reservoir. Good thing I didn't fill the reservoir all the way up; it would have shot water everywhere like an overflowing blender.

After a few seconds, I could see drips coming out of the pump housing, right beneath the reservoir mount. Turns out I actually had two leaks. The output mount sticking out of the top of the pump (which already had silicone on it) was still leaking, and there was a crack in the pump housing around the pump's pickup from the reservoir. It took several tries and lots of silicone to actually get all the leaks plugged. Pulling the pump was definitely the right call - I would have liked to fix it in place, but this was the only way to get it right.

Once the pump had run overnight without a leak, I drained it, put it back into the machine and refitted all the hoses. After refilling the system, running it briefly to burp air from the lines and topping it up, I let it go for the night with the paper in place to catch any more leaks.

And this morning it has a clean bill of health. Nothing on the paper, water temperature holding steady at 38C.

Sunday, June 6, 2004 8:52:17 AM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


Why sometimes intolerance is a virtue...#

So a few weeks ago I bought a new laptop, one of the Dell XPS tanks. It's a monster, but the performance is untouchable. And I can stand all the teasing by my fellow RDs; they just wish theirs was so big.

But it had this weird little foible... some web pages rendered really poorly. The fonts were all jagged, and sometimes pages painted incredibly slowly. In some cases, web pages were just plain messed up. And I just put up with it - it wasn't that important to me to fix.

So then a friend of mine bought a Dell, partly on my recommendation. No, he didn't buy the XPS, he bought something a bit more moderate. In fact, the only thing his machine has in common with mine is that they're both Dells. Different processor, video card, etc., etc... but he had the exact same screen rendering problem!

However, not as patient as I, he insisted there must be an answer. I figured since we both had the problem, it had to be something in the default Dell configuration. It's a reasonable assumption, but finding the actual cause could be almost impossible. I suggested we could just blow the drives away and do scratch installs of XP (something I'm prone to doing anyway, just to be sure), expecting the problem would go away.

Maybe a half hour later, he IMs me: in the Advanced display settings there's an option for Large fonts. It increases the default font sizes of everything on your machine by 25%. And for Dell laptops with high resolution screens (like this awesome 1920x1200 screen), it's set to Large by default. Setting it back to Normal got rid of the problem. The fonts are really small on the screen now, but more importantly, everything renders normally, and nice and fast.
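For reference, “Normal” fonts in Windows means 96 DPI and “Large” means 120 DPI, which is where the 25% comes from. A trivial sketch of the numbers (the 8-point font is just an example; the fractional pixel sizes give a feel for why content laid out for 96 DPI renders roughly when stretched):

```python
NORMAL_DPI = 96    # Windows "Normal size" fonts
LARGE_DPI = 120    # Windows "Large size" fonts

scale = LARGE_DPI / NORMAL_DPI
print(f"Large fonts scale factor: {scale}")   # 1.25, i.e. the 25% bump

# An 8-point font in pixels at each setting (a point is 1/72 inch)
for dpi in (NORMAL_DPI, LARGE_DPI):
    print(dpi, "DPI ->", round(8 * dpi / 72, 2), "px")
```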

Why put up with tech that's not just the way you want it?

Drivel | Toys
Wednesday, June 2, 2004 1:07:22 PM (Pacific Standard Time, UTC-08:00) #    Comments [3]  | 


Home from Tech Ed...#

Things I found upon returning home from Tech Ed:

  • My wife and children still remember my name
  • My water-cooled machine leaked enough to run out of water and shut down
  • There was a power outage, my stand-by generator worked fine
  • My cat is mad at me for being away nine days
  • I've lost my insane craving for Häagen-Dazs
  • I had a lot of toys delivered while I was gone!

The solar recharger for AA batteries arrived. Very cool. A bit bigger than I thought, even though they listed the dimensions and I measured them out a couple of times.

Pile of MSDN stuff, equal numbers of checks and bills, bunch of magazines and books...

Here's the coolest gizmo to arrive so far: a Xincom DPG-402. This is a dual WAN NAT router. I already have the equivalent device from Nexland, but since they were bought by Symantec, the product seems all but dead.

Yes, I have two Internet connections - DSL and cable. I hate being offline, and most of the time these are both up. When either one of them is down, the dual WAN NAT router takes care of switching everyone over to the other WAN connection. It's quite transparent - if it weren't for the warning emails, I'd have no idea one of my connections was down.

Unfortunately, like most NAT routers, the Nexland can only handle one IP per WAN port. But the Xincom can handle more: you can pass multiple IPs through a given WAN port, where only one of the IPs uses NAT and the rest pass through to specific machines. Cool.
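The failover behaviour is simple to describe: the router health-checks each WAN link and sends traffic through the highest-priority link that's still up. Here's a toy sketch of that selection logic (the link names and status flags are hypothetical, not the Xincom's actual firmware):

```python
# Toy model of dual-WAN failover: prefer the primary link, fall back to the
# secondary when the primary's health check fails, report a total outage
# when both are down.
def pick_active_wan(links):
    """links: list of (name, is_up) pairs in priority order."""
    for name, is_up in links:
        if is_up:
            return name
    return None  # nothing left to route through

# Normal operation: DSL (primary) is up, so it carries the traffic.
print(pick_active_wan([("dsl", True), ("cable", True)]))   # dsl
# DSL drops: everyone is transparently switched over to cable.
print(pick_active_wan([("dsl", False), ("cable", True)]))  # cable
```

The real box does this continuously and sends the warning emails mentioned above when a link changes state, which is the only way I ever notice.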


Sunday, May 30, 2004 1:26:01 PM (Pacific Standard Time, UTC-08:00) #    Comments [1]  | 


Skype Power...#

So yesterday, as the Tech Ed conference center was emptying out, I get a request via MSN from Scott Hanselman, “Help a brother out, contact me via Skype, I want to show a friend of mine how good it is.”

Skype is beta Voice-Over-IP (VOIP) software, free to download, that lets you use your PC like a telephone, albeit just to contact other Skype users. Scott had a friend who was going overseas and wanted to stay in touch with loved ones.

I'd used Skype at home, but not on my laptop. So I had some reservations:

  • One of my fundamental rules at conferences is “Thou shalt not install software on your computer before you have finished all your sessions.” And I had one session to go.
  • I was connected to the Internet via the TechEd wireless network, which is not all that fast (there are 10,000 geeks online, after all) and is heavily filtered.
  • I have no external speakers or microphone set up for my laptop.

But Scott was persistent, so I figured what the heck, and downloaded Skype.

Now since it was the end of the day, there were relatively few people on the Tech Ed network, so my download went quickly. But what I didn't expect was that I could install the software, sign up for an account, enter Scott as a contact and press connect in less than five minutes.

And I was totally blown away when Scott's voice came out over my laptop speakers clear and crisp, and even more stunned that he could hear me as well, although apparently I was quite quiet.

No tuning, no fiddling, no specialized hardware - download, install, connect, and it just works!

Friday, May 28, 2004 6:16:26 AM (Pacific Standard Time, UTC-08:00) #    Comments [0]  | 


Look ma, there's a puddle under my computer!#

Not a good hardware day.

I like water-cooled computers. Why? Because they're quiet. No 5000rpm high-velocity fans on the processor, video card, etc, etc. Just a couple of slow, silent fans on the radiator and the power supply. The way it oughta be.

Putting water cooling into a computer system isn't a trivial task, but it's not rocket science either. I've built three of them so far, and the noise level in my office is better for it.

Unfortunately, now I have to deal with a new kind of problem... leaks.

See if you can see where this machine is leaking from.


See it? Me neither.


Thursday, May 20, 2004 6:06:56 PM (Pacific Standard Time, UTC-08:00) #    Comments [2]  | 


Toy solutions for toy problems...#

My buddy Stephen Forte has convinced me to climb Mt. Kilimanjaro with him this October.

It's not a technical climb (no hanging from ropes here); really it's just a hike... to 19,300 feet. It takes place over 18 days, although only (heh - only) nine of those are days of climbing. There's a four day safari at the end.

When I travel, I like to take lots of pictures. Like, a couple of hundred in a day. A digital camera is essential, but so is the laptop to dump all those photos out (and write silly captions for them).

Needless to say, laptops are not considered essential hardware for a hike like this, but I still want to take lots of pictures. So, I have two problems to deal with. The first is capacity - where am I going to store all these pictures? The second problem is power - not a lot of outlets on Kilimanjaro.

My camera of choice at the moment is the Olympus C-750 my wife bought me for Christmas. It uses xD picture cards of up to 512MB. Each 512MB card should hold about 600 pictures, so three ought to do it, if I can limit myself to a hundred photos a day.
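The card count checks out. A quick back-of-the-envelope sketch, using the trip numbers above (the per-card and per-day figures are the rough estimates from the paragraph, not camera specs):

```python
# Back-of-the-envelope storage check for the Kilimanjaro trip.
PHOTOS_PER_CARD = 600   # roughly 600 shots per 512MB xD card
PHOTOS_PER_DAY = 100    # the self-imposed limit
TRIP_DAYS = 18          # the whole trip, climb and safari included

total_photos = PHOTOS_PER_DAY * TRIP_DAYS           # 1800 photos
cards_needed = -(-total_photos // PHOTOS_PER_CARD)  # ceiling division: 3 cards

print(total_photos, cards_needed)  # 1800 3
```

Three cards covers the whole 18 days exactly, with zero headroom - which is precisely why the hundred-a-day limit matters.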

One of the best things about this camera (besides the amazing 10x optical zoom) is that it uses AA batteries. I have a couple of sets of nickel-metal hydride batteries for it. I suppose I could carry ten pounds of AA batteries to cover all those pictures, but then I had a better idea. Silicon Solar makes a fairly small solar AA battery recharger. It charges four batteries at a time - the same number the camera takes. So I can have one set charging while I'm snapping away with the other.

Of course, now that I'm thinking solar, maybe bringing a laptop along isn't so crazy after all.

Toys | Travel
Tuesday, May 18, 2004 7:28:16 PM (Pacific Standard Time, UTC-08:00) #    Comments [5]  | 


There's feature creep in hardware, too...#

Okay, I admit it, I love toys. No, not little plastic whatzits (although they can be pretty cool too, I always had a soft spot for Transformers), but technology toys. I buy first, I buy often, and I have boxes and boxes of crap that didn't survive their first test runs. I like being on the bleeding edge and I have lots of blood to give.

Plus I'm pretty good at retaining what I see, so once I've looked into a toy, I usually remember it. I think it was Steve Forte who first called me the Toy Master, during a phone conversation about whether or not some gadget actually existed, when I was firing links over IM to him without a pause in the chatter.

So when my friend Michele Bustamante asked me about networking her two demo laptops together, I knew it was going to turn into a toyfest. It's all about feature creep, y'know.

Besides asking me about the weather in Amsterdam (which was just about the worst I've seen in May yet), we talked about networking her machines together: crossover network cables, switches, point-to-point wireless and other good stuff like that. But, as with all “clients”, if you don't get to the heart of the matter, if you don't ask the magic question “So what do you want to do?”, you really miss out on hitting a home run in terms of solving problems.

Besides just wanting to network her two demo laptops together, Michele was also thinking about her Web Services Interoperability Education Day on May 22nd, just before Tech Ed San Diego. There, she figured she'd have at least three machines involved, and possibly two projectors, and wanted to have all the machines talking to each other, possibly with Internet connectivity, and so on, and so on...

Crossover Ethernet cables are fine, as long as you're prepared to live with fixed IPs and no additional connectivity. And they fall down as soon as there are three machines involved. It's a one-off solution, and you always need more.

For years I've been carrying around a little D-Link DI-713 whenever I was travelling to any form of geekfest, geekfest being defined as any place where more than two geeks gather. Because as soon as you have more than two geeks together, you have connectivity issues. We all have laptops, we all want Internet connectivity, and we all want to fire files back and forth between each other.

If you're in a hotel, you soon find out that hotel broadband, while nice, is really a per-machine product, and so if you have three laptops in a room, you end up hopping the wire from machine to machine and then arguing with the manager at the end of the day as to whether you should be charged $10 for the day, or $10 per machine per day...

The D-Link box solved the problem: it's a NAT router, a switch and a wireless access point all in one. So you can plug the hotel broadband into it and everyone can share, as well as network with each other, or use the wireless connection. It even provides DHCP, so we don't have to mess with the network settings.

Unfortunately, my little D-Link gave up the ghost a few months ago. It owed me nothing, having been the saving grace of many a geekfest, and having logged tens of thousands of miles in baggage, multiple irradiations and so on. It won't be missed for long, though; it'll be replaced.

Meantime, it was apparent to me that Michele needed the same little gizmo for her demos. All those laptops are likely using DHCP, and they need to speak to each other, and could use some Internet access... so a quick sprint around the Internet (I have a great favorites section called Shopping) returned this list of products:

There are more, but they're essentially all the same: a NAT router, a four-port switch and a wireless access point. These four were all 802.11a/b/g compatible too. There were a bunch that left out 802.11a, which is fine with me; in my experience it only takes a piece of paper to block an 802.11a signal.

The non-a variants get as cheap as $50 US; the tri-mode units are $100-$200 US. They're all relatively compact, but the SMC unit is the smallest (a mere 5"x3.5"x1.25") and hey, if you're travelling, that's important. They all have decent web-based configuration, and they all routinely update their firmware. You couldn't go wrong with any of these units, really, but I liked the SMC for its compact size and decent looks.

I see these gizmos as essential fare for anyone who's going to be working with more than one geek at a time. When we're all speaking at conferences, there's always a gathering somewhere, often hosted by the speaker who got the biggest room. This solves the networking problem.


Toys | Speaking | Travel
Sunday, May 16, 2004 6:38:22 AM (Pacific Standard Time, UTC-08:00) #    Comments [1]  | 


All content © 2023, Richard Campbell