RunAs Radio #8: Brian Komar on PKI

Look out! Two security guys on the show! Brian Komar and Greg really dug into the details of the Public Key Infrastructure in this show. I was mostly along for the ride, but really enjoyed discussing the challenges around Extended Validation Certificates. And then there was the whole problem of the green vs. red address bar background in Internet Explorer 7...

Wednesday, May 30, 2007 6:29:58 AM (Pacific Standard Time, UTC-08:00)

 

Fastest Water Cooled Refit in the West!

So Patch Tuesday came by as usual, and machines were patched. I have one workstation, Terrance, that still isn't running Vista. Terrance is the giant triple screen machine, running a total of 4960x1600 worth of displays. Plugged into it is the MOTU Traveller that I do all my recording with for RunAs Radio and .NET Rocks. So needless to say, it needs to be completely reliable, since I record every week.

So, as I said, Patch Tuesday came by as usual. And as usual, Terrance didn't auto-install the patches, just downloaded 'em and let me know. And, as usual, I hit the button. And, as usual, the patches required a reboot. What could go wrong?

Terrance didn't come back.

Terrance sits at the bottom of the stack of two workstations, sitting underneath Phillip. Phillip is a far crankier computer, struggling with barely sufficient cooling. So I'm usually tinkering with Phillip to keep him happy, while Terrance is totally reliable.

Except that Terrance didn't come back.

I left Terrance off for a couple of days. I had already finished recording for the week, so I had until the following Tuesday to get things fixed up, and I figured I could wait for the weekend.

When the weekend came, I hauled Terrance out of the rack, which means shutting down and removing Phillip as well. Annoying. Moved Terrance up to the service counter and re-installed Phillip so at least I had one workstation up and running.

With the cover off and plugged into the test harness, Terrance still wouldn't power up. Well, wouldn't power up is a bit of an exaggeration... the motherboard power light is on, but the main power light wouldn't turn on, and it seemed like the power switch was useless. Sometimes I'd see the LEDs on the RAM chips sequence a bit, but the drives never spun up.

I suspected that my groovy new water cooled power supply might be the culprit. Fortunately, I have a power supply tester, so I plugged it in and powered it up.

Here's a look inside Terrance. You can see the dual video card setup (a pair of nVidia 7800s), both water cooled, along with the CPU, northbridge, southbridge, hard drive and power supply. This is also the moment of truth, with the power supply tester on the left plugged into the water cooled power supply. The power supply is the blue thing on the lower right of the photo; the black block attached to it is the water cooled part. The heat sinks for the power supply are connected to the block, and water passes through the block to cool it. No fan in the power supply.

So, turn on power supply, and the power supply tester should report voltages, all that good stuff.

Only it doesn't do anything.

Being the suspicious type, I pull out my spare power supply, a nice Enermax Whisper unit, not water cooled, but nice and quiet. Plug power supply tester into that, fire it up, and everything lights up. Ah ha, one dead power supply.

So, to replace the power supply, I need to breach the water loop. I hate breaching the water loop - it's messy. But Terrance is one of my external water loop equipped machines. That means I have a pair of hoses running out the back of the machine that can be connected together to be self-contained, or connected to a larger external cooling system. Ultimately my plan is to plug into the wall for water cooling, but goodness knows when I'm going to have time to finish that.

But, back to the problem at hand - I have hoses with self-sealing couplings running out the back. That makes draining the water system a whole lot simpler.

In my hand I have a bulb pump which is connected to one side of the external loop. The other side has an unsealed coupler connected to it and is stuffed into a clean yogurt container to catch all the water. 20-30 squeezes of the bulb pump later, all the water is drained out of the system into the yogurt container.

Now to actually breach the loop - doing the pump out isn't really a breach anymore because I use those self-sealing couplings to keep everything tidy. The trick to removing things from a water cooled machine is to realize they still have some water in them, so closing off the water connections is a good idea.

On Terrance, the power supply was added to the water loop between the southbridge block and the hard drive block. So I want to connect the southbridge hose to the hard drive and leave the power supply out. But to keep the power supply from leaking, I connected the existing hose running between the power supply and the hard drive to the other end of the power supply.

So you can see the hose loop on the two water cooling connectors of the power supply, and in my hand is the hose that used to run to the power supply from the southbridge chip. I just had to rotate that hose around and connect it back to the hard drive and everything was sealed up again, no mess.

So the rest was easy - extract the dead supply, install the Enermax supply, reconnect all the power leads and I'm off, right?

Here's a shot of the freshly wired in Enermax supply. Looks good, huh?

Except for the part where it doesn't power up at all. The motherboard light turns on, but the machine won't power up. I'm suspicious. So I retest the old water cooled power supply. It's still dead. But something else is wrong.

Experience has shown me that the one thing that can stop a water cooled computer in its tracks is a bad pump. So I unplug the pump and the machine powers up normally. I have a bad pump as well. Good news, I have a spare pump.

For whatever reason, I failed to photograph the pump installation. It's very tricky: I had to turn Terrance on his side, remove the side plate (five screws off the bottom, five screws off the side, four screws off each end), disconnect the hoses from the reservoir, wiggle the reservoir off (it's press fit), slide the pump out, slide in a new pump, remount the reservoir, reconnect the hoses, put the side plate back on, and reinstall all the screws. Apparently I was so busy I never snapped a photo the whole time.

This is a shot immediately after finishing the pump swap-out. The dead pump is out of the machine, the new pump is in the machine. Everything else is the same, and Terrance now works.

Refill of water was pretty simple, just power up and keep pouring water. Once the lines had been burped (leaving it running for a half hour or so with the reservoir lid off), I buttoned everything up and put Terrance back in place.

Total service time, about two hours.

So, questions: did the pump kill the power supply, or the power supply kill the pump? And what killed either one or both?

And I need to restock on spare parts.

Sunday, May 27, 2007 10:43:53 PM (Pacific Standard Time, UTC-08:00)

 

ASP.NET Scaling Chalk Talk at TechEd US

I've been assembling my notes for my Chalk Talk on ASP.NET Scaling at TechEd US in Orlando.

The Chalk Talk will be held on Friday June 8 at 1pm, in the ASP.NET Community Area.

The biggest challenge in talking about scaling is to not fall into a discussion on performance. Most folks mix conversation about scaling and performance together, on the assumption that excellent performance provides excellent scaling. It isn't true - in some cases, to get great scalability, you have to impede performance. In reality, the best case scenario for scaling up an application is to maintain performance, not to improve it.

Performance is all about how quickly your web page responds to a request; scale is about how many requests you can handle at once. The "at once" part of that statement is important, since the idea that excellent performance provides excellent scale is only true when requests are not "at once" but merely close together. If you could compute every page in 50ms and you only got a request every 100ms, you'd only ever be handling one request at a time... your great performance has given you the illusion of great scale. A lot of people consider this scaling, but it's not really. Real scale is all about how your site handles simultaneous traffic.
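
To put rough numbers on that (a back-of-the-envelope sketch using Little's Law, not a benchmark), the number of requests in flight is just the arrival rate times the response time:

    // Little's Law: requests in flight = arrival rate x response time.
    // The numbers here are illustrative, not measurements.
    class ScaleMath
    {
        static void Main()
        {
            double responseTime = 0.050;  // 50ms per page
            double trickle = 10;          // one request every 100ms
            double flood = 200;           // genuinely simultaneous traffic

            System.Console.WriteLine(trickle * responseTime); // 0.5 - rarely two at once
            System.Console.WriteLine(flood * responseTime);   // 10 requests in flight, always
        }
    }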

There are two fundamental techniques for scaling: specialization and distribution.

Specialization is the process of separating out specific tasks that your web application does and building/buying specialized resources to handle those tasks better. You already do this - you have a separate database from your web servers. When you get into large scale web sites, image handling often becomes a specialization. You could set up dedicated image servers, or even offload that work to a third-party company like Akamai. Getting the load of image handling out of your web servers allows them to handle more of the requests that they need to handle: processing ASP.NET web pages. Obviously the challenge of making specialization work is going through every web page and altering the image tags so that they point at the image servers: time consuming, but not especially hard. That's scaling by specialization.
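
To make the image tag alteration concrete, here's a hypothetical helper - the host name and method are mine, not anything from the talk:

    // Hypothetical helper for pointing image references at dedicated image
    // servers. The host name is a placeholder.
    public static class ImageHost
    {
        const string Host = "http://images.example.com";

        public static string Url(string imagePath)
        {
            return Host + "/" + imagePath.TrimStart('/');
        }
    }

    // In a page: <img src="<%= ImageHost.Url("logo.png") %>" />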

The other technique for scaling is distribution. The key to distribution is creating multiple copies of the same resources and balancing work between them. Typically this means multiple, identical web servers behind a load balancer. The challenge to making distribution work well is effective load balancing, and that means a lack of affinity: no data specific to a given session can be kept on the web server; all of that information has to be available to every web server in the farm. There are a variety of affinity-bound resources in ASP.NET, the best known of which is Session, and there are a variety of methods for making those resources non-affine, the best known being to write them to SQL Server.
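
For reference, the SQL Server option is a one-element change in web.config - a minimal sketch, with a placeholder connection string:

    <!-- Out-of-process session via SQL Server: one change in web.config.
         The connection string is a placeholder. -->
    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=mySessionServer;Integrated Security=SSPI"
                    timeout="20" />
    </system.web>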

This is where we get into the performance/scaling compromise: moving Session data out of the web server and over to SQL Server definitely slows down performance, in exchange for being much more scalable. But this is not a simple curve - sure, this method is slower per request on average, but that per-request speed holds steady as the number of simultaneous requests increases.

Distribution also opens up advantages for reliability and maintainability, in exchange for dealing with the complexity of multiple servers. That's outside the scope of purely looking at scalability, but it's certainly relevant to the equation overall. It's also important to remember that scalability isn't the only reason to have a web farm.

Of course, you can combine these two techniques, having specialized resources and distributing them across multiple servers. And this adds an additional advantage: You can scale each of those specialized resources independently. So if you need to improve the scalability of images, expand the image server farm.

The key to both these techniques is good instrumentation: You need to know where the problems are. Specialization helps because it creates clear boundaries between the various resources involved in a web application. And often you'll find that the non-affinity step you skipped becomes your key problem scaling up - and it will be instrumentation that shows that to you. Of course, then we get into the argument of whether or not the instrumentation *is* the problem, because it too exerts a certain amount of load on the servers.
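
A cheap way to start instrumenting is to watch the performance counters ASP.NET already publishes. A minimal sketch, assuming the standard counters that install with the framework:

    // Sampling the standard ASP.NET performance counters once a second.
    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterWatch
    {
        static void Main()
        {
            PerformanceCounter reqPerSec = new PerformanceCounter(
                "ASP.NET Applications", "Requests/Sec", "__Total__");
            PerformanceCounter queued = new PerformanceCounter(
                "ASP.NET", "Requests Queued");

            while (true)
            {
                Console.WriteLine("{0:F1} req/s, {1} queued",
                    reqPerSec.NextValue(), queued.NextValue());
                Thread.Sleep(1000);
            }
        }
    }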

There's more than just this to talk about as well: there are a variety of techniques for getting to a non-affinity solution, and there are also the challenges of caching at scale and cache invalidation.
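
Invalidation is the part that usually hurts. As one well-known example, ASP.NET 2.0 can tie declarative output caching to a database table - a sketch, where the database entry and table name are placeholders and the sqlCacheDependency entry still has to be registered in web.config:

    <%-- Cached for 60 seconds per distinct id, flushed when the Products
         table changes. Names are placeholders. --%>
    <%@ OutputCache Duration="60" VaryByParam="id"
        SqlDependency="MyDatabase:Products" %>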

And don't forget the database! As you scale up your web farm, the database can represent a serious bottleneck. Solving that is a huge task on its own, involving its own implementations around specialization and distribution.

I had originally suggested this topic as a breakout session, but I'm really looking forward to doing it as a Chalk Talk, for the higher level of interaction I expect to have with the audience. Chalk Talks are a lot more intimate, I'm going to steer clear of a slide deck and focus on using the white board to look at the various evolutions of a web application as it scales up.

Hope to see you there!

Friday, May 25, 2007 10:56:04 AM (Pacific Standard Time, UTC-08:00)

 

RunAs Radio #7: Rory McCaw on MOM 2007

In our seventh show, Greg and I dug into the thinking around the latest edition of Microsoft Operations Manager with Rory McCaw. Of course, now it's called System Center Operations Manager, but I think I'll always love the MOM acronym.

Let us know what you think of this show and the others, send us an email to info@runasradio.com.

Wednesday, May 23, 2007 6:34:58 AM (Pacific Standard Time, UTC-08:00)

 

Cooking Up a No Code ASP.NET Tuning Solution!

I’ve been talking about load-testing ASP.NET applications, and what it’s like when you fail. Well, now I can finally talk about why I’ve been thinking about all this stuff. I just spent the last two weeks talking to people about our launch and getting feedback from analysts about the Strangeloop AppScaler, and now I can finally talk about it in public!

 

Here are the basics: You already know how application tuning impairs the development process. Not only does it take a long time for pretty limited returns, but it takes you from this lightweight, fast ASP.NET development process—the whole reason you started using ASP.NET in the first place—to this much more ponderous endeavor, where every piece of performance tuning you do places new requirements on everything else you’re coding moving forward. Well, the Strangeloop AppScaler basically takes that entire application tuning process and puts it in a box. It’s a very, very cool thing. But now that we're out in the open, what I really want to talk about is how we got here.

 

It all started with looking for a better way to do session. Everybody’s talked about session, and everybody knows that it could be handled better, but nobody had actually done it. The default, of course, is in-process session. Since we all start development on a single web server, in-process makes sense. But as the application becomes more important, more web servers are needed, and the idea of going out-of-process comes up.

 

Microsoft provides two out-of-process approaches. One is using SQL Server, which you likely already have, since it’s where you store your data. But SQL Server is kind of overkill for session, since you're just storing a blob of session data there: you don't really need the power of SQL Server for that. SQL Server is reliable, but slow. The alternative is State Server, which is substantially faster, but isn't reliable and generally isn't a great bit of software.

 

And switching session methods is pretty trivial, since all you have to change is the web.config file. Although one issue people occasionally run into is that they haven't marked their objects for serialization. In very rare cases, they can't serialize their objects, but for the most part, it’s just about setting properties correctly.
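
Marking objects for serialization is usually just one attribute - a minimal sketch, with an illustrative class:

    // Anything stored in Session must be serializable for the out-of-process
    // stores. Usually one attribute does it; the class here is illustrative.
    using System;
    using System.Collections.Generic;

    [Serializable]
    public class ShoppingCart
    {
        public List<int> ProductIds = new List<int>();
        public decimal Total;
    }

    // Session["cart"] = new ShoppingCart();  // now works with StateServer or SQLServer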

 

Most people deal with the issue by leaving session in-process and using a load balancer that supports sticky sessions, where the load balancer uses the ASP.NET cookie (or IP address) to *stick* a given user to the same web server each time they visit the site. While it certainly solves the problem of session, it undermines the value of a web farm. If the server goes down, the session’s lost. Some servers can end up much busier than others, so you aren't really balancing your load. And server updates tend to be a major pain.

 

To really load balance, to get the full advantage of your web farm in terms of performance and reliability, you need to get the session data out of the web server and go out-of-process. When you do that, you can load balance properly and go to any web server you want, but it means that session processing takes longer. So originally, our mission was to really look at session and figure out a way to get in-process performance but with out-of-process flexibility.

 

When we did all the math to figure out exactly why doing session out-of-process was so much slower, we found that network trips were a major part of the processing time. Every request/response pair with out-of-process session means two additional network round trips: one to fetch the session data at the beginning of the response computation, and one to write the modified session data back out at the end. But the only reason all these network trips happen is that the request travels all the way to the web server before the server realizes it needs session data. So we thought, “What if we put the session data in front of the web server, so by the time the request gets to the web server, it already has the data?”

 

That’s what AppScaler does (well, one of the things it does). As a request comes in, it passes through AppScaler, and AppScaler says, “I don’t care what server you’re going to, here’s the session data you need,” and attaches the session data to the request. When the request arrives at the web server, the session provider strips the session data out of the request and the page processes normally. When the page finishes computing the response, the provider attaches the session data to the response and sends it back toward the browser. On the way out, the response passes through AppScaler, which removes the session data and stores it away in its network cache, and everything proceeds normally from there.

 

So suddenly, we’d eliminated all these extra network trips, but we were still out of process, so you still have all that flexibility. Pretty cool, right? Then we took it a step further and said, “Gee whiz, since we’re already here doing this, why don’t we just do viewstate too?” As you know, viewstate can get totally out of hand, typically due to the use of third-party controls, which is why the really performance-conscious sites don’t use third-party controls at all. And giving up third-party controls means either slowing down your development process to create controls yourself, or just not using all the controls that you otherwise might. With AppScaler, you can use all the controls you want (within reason). It takes that viewstate out of the page before it goes to the browser, so you don’t pay the performance penalty.
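
For comparison, the by-hand alternative this automates is hunting down the controls that don't need to round-trip state and switching viewstate off one at a time (the control names here are illustrative):

    <%-- The manual approach: disable viewstate control by control. --%>
    <asp:GridView ID="ProductGrid" runat="server" EnableViewState="false" />
    <asp:Label ID="StatusLabel" runat="server" EnableViewState="false" />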

 

So fixing session and viewstate were the first features of AppScaler, and the results were pretty impressive—we were really cutting down page sizes and seeing substantial performance gains. And that’s when we had the big realization: Now that we’re sitting here in front of the web server farm where we can see all this traffic, there are all kinds of smart things we can do to optimize the performance of ASP.NET applications!

 

Fixing browser caching was low-hanging fruit for us. With browser caching, you mark various resource files (images, JS and CSS files, for example) as cacheable at the browser, normally with some sort of time limit (a day, a week, etc.). Once the browser caches those items, it won’t request them again for as long as the cache is valid. That gives substantial performance gains since you cut down a lot of the resource requests that make a web page work.
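
Done by hand from ASP.NET code, marking a response cacheable looks something like this (for static files you'd normally configure content expiration in IIS instead):

    // Inside a page or handler: mark the response as browser-cacheable
    // for a week.
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetExpires(DateTime.Now.AddDays(7));
    Response.Cache.SetMaxAge(TimeSpan.FromDays(7));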

 

The downside to browser caching is when you go to update the website. Unless you’re extremely careful, you can end up with browsers messing up your new pages because they use cached items instead of new items. And of course the pages that get messed up are the ones the CEO is looking at, because he hangs out on the site all the time and has everything under the sun in the browser cache. In my experience, people abandon browser caching after an event like that, and never use it again.

 

AppScaler fixes browser caching by dealing with expiration properly. First off, you specify what to cache in AppScaler, so that you don’t have to fiddle with settings on your web servers. AppScaler just automatically marks those resource files for caching as they pass through on the way to the browser. But then the really clever bit comes into play: AppScaler watches the resource files on the web server so that when there is an update, it sees it and knows the underlying files have changed.

 

Once AppScaler knows a resource file has changed, it dynamically renames it in the request/response pairs so that the browser doesn’t have it cached. It keeps up the renaming until the cache expires. So suddenly browser caching doesn’t cause problems with website updates.
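
You can hand-roll a crude version of that renaming trick by stamping resource URLs with the file's last-write time, so a changed file produces a new URL - the helper below is hypothetical, not anything AppScaler ships:

    // Hypothetical cache-busting helper: a changed file gets a new URL,
    // so browsers can't serve a stale cached copy against a new page.
    public static class ResourceUrl
    {
        public static string Versioned(string virtualPath)
        {
            string physical = System.Web.HttpContext.Current.Server.MapPath(virtualPath);
            long stamp = System.IO.File.GetLastWriteTimeUtc(physical).Ticks;
            return virtualPath + "?v=" + stamp;
        }
    }

    // In a page: <link rel="stylesheet" href="<%= ResourceUrl.Versioned("styles/site.css") %>" />

The difference, of course, is that AppScaler does this without anyone writing or maintaining that helper.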

 

Our experience with ASP.NET has demonstrated again and again that caching is king. And when we studied the potential of caching with AppScaler, we realized that self-learning caching was the number one performance return we could offer with this idea. Being between the browser and the web farm is the perfect place to cache and to coordinate cache expiries. As a developer, you know you have to cache, and you can write code to do it, but it’s a lot of programming, and it changes the way you have to code going forward. More than that, you have to figure out what to cache. You might guess wrong. Or more likely, because of the time and effort involved, you’re probably only going to cache a few things that are obvious.

 

AppScaler Response Cache evolved from that experience. It started out as a system for monitoring traffic, looking for where the request/response pairs match, and how frequently a response is different for a given request. It looks at parameters, such as querystring and POST elements, to identify different requests. So by watching all traffic going to and from the application, AppScaler learns what to cache, and when to expire it.
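
A toy sketch of the idea (emphatically not AppScaler's implementation): two requests count as "the same" when the path, querystring and POST fields all match.

    // Toy sketch: the cache key is everything that distinguishes one
    // request's response from another's.
    using System.Collections.Generic;
    using System.Web;

    public class ResponseCacheSketch
    {
        private readonly Dictionary<string, byte[]> cache =
            new Dictionary<string, byte[]>();

        public string KeyFor(HttpRequest request)
        {
            // Path + querystring + POST fields identify "the same request".
            return request.Path + "?" + request.QueryString + "|" + request.Form;
        }
    }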

 

Based on those recommendations, you can tell AppScaler to actually cache the items, or you can put it into an automatic mode, where AppScaler will cache what it thinks it should. This automated caching feature is incredibly useful for dealing with Slashdot or Digg events, where suddenly traffic is up 10 or 100 times.

 

But ultimately, the real advantage is the lack of coding – writing caching code in ASP.NET works, but it slows down the development cycle going forward. AppScaler gives you the same benefits, but without the impact on your development.
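
For contrast, here's the kind of caching code you'd otherwise be writing and maintaining all over the application - the key and loader method are illustrative:

    // Manual ASP.NET caching in page code-behind: check, load, insert
    // with a five-minute expiry. LoadProducts() is a hypothetical helper.
    DataTable products = (DataTable)Cache["products"];
    if (products == null)
    {
        products = LoadProducts();
        Cache.Insert("products", products, null,
            DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration);
    }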

 

Now for the record, if all of this sounds very straightforward, it’s because I’m just giving the highlights here. Making all of this work together has been an extremely complex, time-consuming project. Also, while I’m really excited about it, I want to be clear that this is not going to fix every problem. If your pages are a megabyte apiece and half of that is viewstate, for example, we’re going to have a tough time helping you at any significant level of scale. You’re still going to have to do some basic tuning. But it’s when you get into the really exotic tuning, when you’re doing these minuscule kinds of tweaks and breaking pages down fraction by fraction to find out where you can squeeze a little more performance out of them—the stuff that really impairs your coding more than anything else—that’s when AppScaler can really help you out. And this is just a subset of the things it can do. I listed four features here. There are more than twenty others on the books today, and the list keeps growing.

 

Monday, May 21, 2007 2:15:50 PM (Pacific Standard Time, UTC-08:00)

 

Migrating web servers, upgrading dasBlog...

Decided not to work on Sunday for a change.

Instead, I upgraded servers! Ah, such a geek.

My old web server Stan is very, very old: a P3 1GHz with 512MB of RAM. Running Windows 2000, it has been a workhorse of a machine. I put Stan together in November of 2000. Hard to believe it has been running essentially unmodified for over six years. But that also means those hard drives have over 50,000 hours on them, which makes them ticking time bombs. And that's what the SMART reporting is saying, too.

Stan is just too old to upgrade, he needs to be replaced.

His replacement is Jimmy, a machine I already had in the rack as a testbed for betas of SQL Server 2005. Jimmy is a P4 3GHz with 2GB of RAM, running Windows Server 2003 R2 SP2. It takes some time to get used to the little differences between IIS5 and IIS6, but it's all bearable.

Migrating a web server is a pain in the butt. Lots of little configuration details you have to get right. To do the testing, I copied a backup of Stan's web sites onto Jimmy. However, since there are multiple sites on the web server, I depend on host header identification to sort out what site is what, which means I need to use the correct names of the web sites to access them. So what's a boy to do? I want to leave the sites up and running on the old server while I mess around with the new one.

I could have faked out a DNS server, but that seemed like a lot of work. Instead I modified the HOSTS file on my main workstation so that the web sites on Jimmy were pointed to directly. Funny how old technology serves the purpose so well.

Since HOSTS takes priority over any DNS lookup, I was able to point sites (like www.campbellassociates.ca) to the IP address of Jimmy directly. Then I could tweak and test to my heart's content.
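
For anyone who hasn't played this trick, it's a one-line-per-site edit to %SystemRoot%\system32\drivers\etc\hosts - the IP address below is a made-up stand-in for Jimmy's real one:

    # Point the production host name at Jimmy while testing.
    # Repeat for each site hosted on the server.
    192.168.1.50    www.campbellassociates.ca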

One whammy I ran into was with FrontPage Server Extensions. For the most part my web server runs the little web sites of friends and family, and they all use FrontPage, whether Microsoft wants them to or not. While the extensions installed easily enough, I couldn't administer the sites to set up access for the authoring accounts - no matter what account information I entered, it failed.

Turned out it wasn't me, it was a feature of Windows Server 2003 Service Pack 1. The service pack added a loopback check, making sure that the local computer name always matches the host header. And since I'm using multiple host headers, that's just not going to work. The fix is in Knowledge Base Article 896861. You have two choices: turn off loopback checking, or enter all the domain names that are legal for loopback checking.
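
Both fixes from KB 896861 come down to a registry value (the host name below is just an example, and you'll need to restart IIS afterward):

    REM Option 1: list the host names that are allowed past the loopback check.
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d www.campbellassociates.ca

    REM Option 2: disable the loopback check entirely (the lazy option).
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1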

I turned it off. Call me lazy.

Upgraded dasBlog as well. What I was really after was Akismet, the comment spam filtering solution. Unfortunately, the shipping edition of dasBlog doesn't have direct support for it, but the daily builds do. I'm not normally a guy who runs a daily build, but for Akismet, it's worth it. Take that, comment spammers!

 

Sunday, May 20, 2007 10:05:15 PM (Pacific Standard Time, UTC-08:00)

 

RunAs Radio #6: Wes Miller on 64 bit

Greg and I dive into a discussion of 64-bit technologies on the desktop and server in the sixth show of RunAs Radio with Wes Miller.

As always, you can send email to info@runasradio.com or comment here for feedback on shows you'd like to see, questions, criticisms, etc.

 

Wednesday, May 16, 2007 5:02:15 AM (Pacific Standard Time, UTC-08:00)

 

DevTeach in Montreal now, this fall in Vancouver!

I'm in Montreal this week for DevTeach, the biggest little developers show in Canada.

Carl is here as well, along with many more of my favorite speakers.

On Wednesday I'll be doing my famous SQL Querying Tips & Tricks session, updated for 2007 (now with more Running Totals!).

But the biggest news came this morning: DevTeach is coming to Vancouver, November 26-30, 2007!

I'm sure we'll pack the house in Vancouver; the number of speakers I've talked to over the years who have been waiting for a chance to come to Vancouver under the guise of a conference is amazing. I think it's the best city in the world, but then I'm biased.

November is the rainy season for Vancouver, but if you like to ski, the end of November is right around the time the mountains open. There are three local mountains (Grouse, Seymour and Cypress) that you can take a local bus to. And then of course there's Whistler/Blackcomb, a couple of hours away. And there's another dozen ski mountains further away than that.

And besides, you're there to geek, and there's gonna be a lot of geekiness around at the end of November!

 

Tuesday, May 15, 2007 8:56:13 AM (Pacific Standard Time, UTC-08:00)

 

Failing From Your Own Success
Know what to expect when you take your application out of the nursery and into the real world
Friday, May 11, 2007 9:25:52 AM (Pacific Standard Time, UTC-08:00)

 

RunAs Radio #5: John Savill on Application Virtualization

Seems the last few interviews Greg and I have had just flew by, and our conversation with John Savill on Application Virtualization is no exception.

There's so much to talk about there - if you'd like to hear more, send us an email at info@runasradio.com.

 

Wednesday, May 9, 2007 8:07:45 AM (Pacific Standard Time, UTC-08:00)

 

Peeking Over the Fence into the Networking Guys' Backyard Reveals a Brilliant Load Testing Solution

We’ve been going through beta testing at Strangeloop, which means I’ve had the chance to do some serious scaling of ASP.NET. One of the interesting experiences that keeps coming up in this process is the reaction we get from customers when we’re helping them do load testing.

 

One of the things we can offer our early beta test customers is the opportunity to load test their site, with and without Mobius in the loop. We need the test data anyway, and quite a few candidates don’t really have much in the way of load testing resources ready to go. And then we test their site in our lab with our Spirent Avalanche, and they go “Wow! I need one of those!”

 

So what’s a Spirent Avalanche, you ask? Funny you should ask… It’s 3Us of load testing love.

 

Josh Bixby, our VP of product development, noticed it when he was at trade shows. One of the benefits of having our feet in both the development camp and the networking camp is that we naturally see things on the network side that a lot of developers don’t. Josh pointed out that virtually every company making networking appliances had one of these 3U boxes in their demo racks. But I’d never heard of it before. So we checked it out, and realized it was the best answer I’ve ever seen to doing load testing. I know that load testing isn’t something people want to think about unless they HAVE to think about it. But if you do have to think about it, you have to check this out.

 

I don’t need to emphasize how much of a pain load testing is. Typically, you have two options, both of which suck: If you’re doing it yourself, you may be spending literally a week setting up a load test farm, and you’re probably spending more energy making the configuration work than actually doing the test. Which is no surprise, since most likely you’re using any piece of junk you can find, trying to network together a bunch of machines with different NICs, different performance, different speeds, etc., before you even begin to configure the test. I had one customer that bought me ten identical, dedicated servers for load testing - for about the same cost as an Avalanche - but that’s the exception, not the rule. And it still gives you much less control, you have to do all your own analytics, etc.

 

It’s easy to think “Oh, I’ll just use Mercury Interactive (sorry, HP Mercury) to do my load testing.” Easy until you see the price. Paying six digits for load testing with a 20% annual maintenance contract isn’t so easy. And that’s just for software – you still supply the hardware. I don’t think anyone told Mercury that the Dot Com Boom was over.

 

So taking a page from the network guys, there’s a third way to do load testing: You get a Spirent Avalanche, hook it up, and let it do the job. One 3U box with four gigabit Ethernet ports that can generate nearly two million users by itself. So you’ve got the hardware and the software all in one box.

 

Of course, the Avalanche isn’t cheap either, although they’ve nailed the gradually pregnant business model well – you can rent the gear, and those rental charges get applied to a purchase. We spent less than $100,000 on our 2700 with all the features we needed to do web testing. It also uses TCL-based scripting, which is usually the realm of networking guys, not developers, and can be difficult to understand. TCL provides the Avalanche with the flexibility to do load testing on a lot more than just web stuff.

 

However, bundled with the Avalanche is a product called TracePlus/Web Detective (Spirent Edition), made by System Software Technology (SST). SST makes a variety of different TracePlus products for networking and web, including this version specifically for working with the Avalanche. TracePlus provides the classic capture mechanisms that you see with most load generating tools, where the tool captures your navigation of the web pages and captures them as HTTP commands. The Avalanche internally converts this to its TCL commands.

 

The Avalanche has some ability to do reporting internally (pretty graphs), but the main way we’re using it is in “Excel mode”, where it generates CSV files that we can load into spreadsheets for analysis.

 

We’re also finding that the Avalanche doesn’t understand ASP.NET things like viewstate very well, but then, neither does WAST. We’re using Visual Studio 2005 Team Edition for Testers to get really smart functional testing around specific ASP.NET features.

 

Even with these complications, it’s such a better way to do load testing than setting up servers, and infinitely better than letting your paying customers do the testing. So if you’re doing load testing, why aren’t you using one of these? Why don’t more people know about this? This is pretty standard equipment if you build networking gear. It’s not like the Avalanche is some new, earth-shattering product. It’s not even mentioned on the main page of Spirent’s Web site?!?

 

I have yet to find anyone else in the ASP.NET world using a Spirent Avalanche. I really think it’s just a cultural issue, where great stuff is getting lost in translation between the networking world and the Web development world.

 

Important lesson: If you’re not paying attention to the networking space, you should be. You may just be wasting your time wrestling with a problem that other smart people have already solved. That’s one of the cool things about working with Strangeloop; we really get to straddle the line between those two worlds.

 

Friday, May 4, 2007 4:40:47 PM (Pacific Standard Time, UTC-08:00)

 

RunAs Radio #4: Simon Goldstein on Compliance

Greg and I had a great time talking to Simon Goldstein about Compliance - HIPAA, SOX and ISO. Simon is a co-worker of Greg's and one of the only people I've ever met who can make a compliance discussion fun and interesting.

I get the sense we're not done with this topic - please let us know!

Wednesday, May 2, 2007 7:48:25 AM (Pacific Standard Time, UTC-08:00)

 
