DevConnections Day 3: End of the Tradeshow, Beginning of Sessions#

And just like that, the tradeshow is over. Well, by the afternoon, anyway. I worked in the booth for the morning shift, but had to ditch after lunch to work with Kent on our first session of the conference: ASP.NET Scaling Strategies and Tactics. All these sessions grew out of the consulting and research we've done creating Strangeloop.

The session starts with the strategies of scaling, and really there are only two: Specialization and Distribution. Most folks think only about distribution when they're scaling a web site, that is, adding more servers. But specialization not only plays a critical role, it should play it first. Specialization is all about breaking your web application down into smaller pieces, whether that's separate SSL servers, image servers, and so on.

Once you've done some specialization, distribution gets easier and more flexible.

That's the strategic part of the session; then we dig into the tactics, the details of what it takes to put those strategies into practice. For example, you can set up your own image servers to take the load off your ASP.NET servers, or you can switch to a Content Delivery Network (like Akamai) to handle images. Most of the time, these tactics are specific to the application, i.e., it depends.

When the session was over, I hustled across the conference center to do a .NET Rocks Live with Carl. Our guest - Kent Alstad. Since Kent was on the ASP.NET Scalability Panel back at Tech Ed in June, we've received a number of emails from folks asking for more... so we delivered. Since Kent was with us already, it was pretty easy.

We had a great crowd for the .NET Rocks Live, and they really whooped it up. I'm sure you'll hear it when the show is published.

After that session I dropped into the Speaker Party for a couple of hours, up in the penthouse suites of The Hotel at Mandalay Bay. Waaay too many people in too small a space, incredibly loud and lots and lots of fun.

I didn't stay long though, I headed out to dinner at Sensi at the Bellagio with the Strangeloop folks and a few key influencers.

Tomorrow is another crazy busy day!

Wednesday, November 7, 2007 4:55:14 PM (Pacific Standard Time, UTC-08:00)

 

DevReach Day Two#

Carl and I grabbed an interview with Dino Esposito in a quiet room during the conference; his viewpoint on Silverlight and ASP.NET technologies is always interesting.

Dino's session on "What Partial Rendering is not AJAX" rang true for me as well - his point is that the essence of AJAX is pushing page rendering to the browser, rather than computing it on the server. But partial rendering still computes the HTML on the server and sends it to the browser to display. This undermines the goal of AJAX.
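
As a rough illustration of that distinction (my own sketch, not code from Dino's session - the page methods and the data are hypothetical), compare what the server hands back in each style:

    using System.Web.Services;
    using System.Web.UI;

    public partial class Products : Page
    {
        // Partial rendering style: the server still computes the markup, and the
        // browser just swaps it into the page (this is what UpdatePanel does for you).
        [WebMethod]
        public static string GetProductRowsAsHtml()
        {
            return "<tr><td>Widget</td><td>$9.99</td></tr>";
        }

        // AJAX in the sense Dino means: the server returns data (serialized to JSON
        // by the ASP.NET AJAX page-method plumbing), and script in the browser
        // builds the markup from it.
        [WebMethod]
        public static object GetProducts()
        {
            return new[] { new { Name = "Widget", Price = 9.99 } };
        }
    }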

I had the last session of the day (and of the conference), and a huge crowd for my load testing talk. As usual, relatively few folks in the audience had done load testing before, so a lot of the talk focused on the fundamentals of why and where to load test. The data we've gathered around Strangeloop is great material for getting people started.

Tuesday, October 2, 2007 1:41:14 PM (Pacific Standard Time, UTC-08:00)

 

DevReach Day One#

Sold out! Yep, the show is packed. It's not the biggest show in the world, but the attendees are focused and excited to be here. The keynote today included the local Microsoft folks, Telerik and, of course, Tim Huckaby! Tim's stories about building great applications that change the world are hard to touch. The audience was spellbound.

My work came in the afternoon: I took the Scaling Habits of ASP.NET Applications out for a spin again, with lots of interesting questions and discussion afterward.

In the evening Carl and I ran a panel discussion on WPF with Tim Huckaby, Brian Noyes and Todd Anglin.

Tomorrow is the last day, then we're touring Sofia!

Monday, October 1, 2007 1:25:14 PM (Pacific Standard Time, UTC-08:00)

 

Speaking at the SoCalCodeCamp!#

Blame Michele Leroux Bustamante for this one - she talked me into coming down to do a couple of presentations at the SoCal Code Camp.

I did my Querying Talk again, but also took The Scaling Habits of ASP.NET out for a spin for the first time since the Vancouver TechFest.

Scaling Habits is a fun talk for me because it really is a tour through the evolution of an ASP.NET application - from those early days where you're one guy with a clever idea for a web app, through to what it takes to run a large scale site with multiple servers and the related bureaucracy for operating it.

Along the way I talk about the elements of the evolving site - how much traffic is typical, the kinds of metrics that matter, and so on. And most importantly, what it takes to move to the next level of evolution for the application.

At the core of this whole concept is the idea of the Performance Equation.

A quick description of each factor in the performance equation:

  • R - Response time (in seconds)
  • Payload - Total number of bytes being transmitted
  • Bandwidth - The transfer rate available
  • RTT - Round Trip Time
  • AppTurns - Number of requests that make up the web page
  • Concurrent Requests - How many requests will be run simultaneously to build the page
  • Cs - Compute time on the server
  • Cc - Compute time on the client

Now I can't take credit for this equation - I did not invent it. The original comes from the "Field Guide to Application Delivery Systems" by Peter Sevcik and Rebecca Wetzel of NetForecast. However, I did make one change to it: the original equation does not account for simultaneous downloading of resource files or for the base overhead of the page file itself. That's represented by the separate addition of an RTT and by dividing the rest of the AppTurns by the number of concurrent requests.
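
Putting those pieces together, the modified equation works out to roughly the following (a reconstruction from the factor list above, since the original equation graphic isn't reproduced here):

    R ≈ Payload/Bandwidth + RTT + (AppTurns × RTT)/Concurrent Requests + Cs + Cc

The lone RTT covers the base page request itself; the resource requests that follow overlap with each other, which is why the AppTurns term gets divided by Concurrent Requests.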

So all of these factors go into the time it takes for a web page to fully render on your web browser after you request it.

When I display the equation to an audience, I always ask the question: "What part do you work on?" When I'm talking to ASP.NET developers, invariably the answer is Cs - Compute time on the server. After all, that's the code you wrote. But if you don't know what Cs is in relation to all the other factors of the equation, how do you know if that's the right thing to work on?

Some other interesting issues I've run into once I started looking at web performance this way:

  • In many cases bandwidth is just not the issue - we have lots. But when it *is* an issue, we often don't test with the same bandwidth the customer has, so we don't realize when bandwidth is a problem.
  • Round Trip Time is the ping time between the customer and the server. Again, since we often test with servers so close to us that the ping time is ultra-low, our test conditions don't match our customers'. It's amazing how huge a factor bad RTT can be for performance.
  • AppTurns of course exacerbates RTT, because it's a multiplier - if you have a dozen JS files, a dozen CSS files and thirty images (which is remarkably common), you're talking about over 50 AppTurns, and even divided by Concurrent Requests, that adds whole seconds to the response time (see the quick arithmetic after this list).
  • Normally, with Internet Explorer and Firefox, the number of Concurrent Requests is four. It can be adjusted at the client computer, but that's very rarely done. It is possible to do a trick with URI renaming, where each resource appears to come from a separate server, to fool the browser into making more than four concurrent requests.
  • Compute time on the client becomes a significant issue when you get heavy with the JavaScript, most often seen with AJAX-style pages. In my opinion, getting the browser more involved in generating a web page is a good idea, but you need to account for the cost involved. If you're only looking at server compute times, then of course AJAX looks like a brilliant solution - because you've hidden the cost.
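
To put rough numbers on the AppTurns point above, here's a quick back-of-the-envelope sketch using the reconstructed equation; every input is illustrative, not a measurement from a real site:

    using System;

    class PerfEquationSketch
    {
        static void Main()
        {
            double payloadBytes = 500 * 1024;     // ~500 KB of page plus resources
            double bandwidthBps = 1.5e6 / 8;      // a 1.5 Mbps link, in bytes per second
            double rtt = 0.1;                     // 100 ms round trip time
            double appTurns = 54;                 // 12 JS + 12 CSS + 30 images
            double concurrentRequests = 4;        // typical browser default (see above)
            double cs = 0.05, cc = 0.10;          // server and client compute time

            double r = payloadBytes / bandwidthBps
                     + rtt
                     + (appTurns * rtt) / concurrentRequests
                     + cs + cc;

            // Prints roughly 4.3 seconds - and about 1.45 s of that is nothing but round trips.
            Console.WriteLine("{0:F1} seconds", r);
        }
    }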

Now that's not to say that Compute Time on the Server isn't important to the equation - it *might* be. But you should know for sure before you pour your time into improving it. Going through the exercise of breaking down where the total response time goes is a critical first step to making sure your effort is going to the right place.

Thanks again to all the folks at the SoCal Code Camp - I had a fantastic time, I'd love to come down again!

Sunday, July 1, 2007 5:07:14 PM (Pacific Standard Time, UTC-08:00)

 

ASP.NET Scaling Chalk Talk at TechEd US#

I've been assembling my notes for my Chalk Talk on ASP.NET Scaling at TechEd US in Orlando.

The Chalk Talk will be held on Friday June 8 at 1pm, in the ASP.NET Community Area.

The biggest challenge in talking about scaling is to not fall into a discussion on performance. Most folks mix conversation about scaling and performance together, on the assumption that excellent performance provides excellent scaling. It isn't true - in some cases, to get great scalability, you have to impede performance. In reality, the best case scenario for scaling up an application is to maintain performance, not to improve it.

Performance is all about how quickly your web page responds to a request; scale is about how many requests you can handle at once. The "at once" part of that statement is important, since the idea that excellent performance provides excellent scale is only true when requests are not really "at once", but just fairly close together. If you could compute every page in 50ms and you only got requests every 100ms, you'd only be handling one request at a time... your great performance has given you the illusion of great scale. A lot of people consider this scaling, but it's not, really. Real scale is all about how your site handles simultaneous traffic.

There are two fundamental techniques for scaling: specialization and distribution.

Specialization is the process of separating out specific tasks that your web application does and building/buying specialized resources to handle those tasks better. You already do this - you have a separate database from your web servers. When you get into large scale web sites, image handling often becomes a specialization. You could set up dedicated image servers, or even offload that work to a third-party company like Akamai. Getting the load of image handling off your web servers allows them to handle more of the requests they really need to handle: processing ASP.NET web pages. Obviously the challenge of making specialization work is going through every web page and altering the image tags so that they point at the image servers: time consuming, but not especially hard. That's scaling by specialization.
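
As a sketch of what that image-tag surgery can look like (the helper class and the "ImageHost" setting are hypothetical, invented here for illustration), the idea is to centralize the host name so that moving images to dedicated servers or a CDN becomes a one-line configuration change:

    using System.Configuration;

    public static class SiteImages
    {
        // Reads a hypothetical appSetting:
        //   <add key="ImageHost" value="http://images.example.com" />
        // and falls back to a relative path when the setting is absent (e.g. in development).
        public static string Url(string fileName)
        {
            string host = ConfigurationManager.AppSettings["ImageHost"] ?? "";
            return host.TrimEnd('/') + "/images/" + fileName;
        }
    }

    // In a page: <img src="<%= SiteImages.Url("logo.png") %>" alt="logo" />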

The other technique for scaling is distribution. The key to distribution is creating multiple copies of the same resources and balancing work between them. Typically this means multiple, identical web servers and a load balancer. The challenge to making distribution work well is effective load balancing, and that means a lack of affinity: no data specific to a given session kept on any one web server - all of that information has to be available to every web server in the farm. There are a variety of resources in ASP.NET with server affinity, the best known of which is Session, and there are a variety of methods for removing that affinity, the best known being to write them to SQL Server.

This is where we get into the performance/scaling compromise: moving Session data out of the web server and over to SQL Server definitely slows down performance, in exchange for being much more scalable. But this is not a simple curve - sure, this method is slower per request on average, but that per-request speed stays consistent far longer as the number of simultaneous requests increases.

Distribution also opens up advantages for reliability and maintainability, in exchange for dealing with the complexity of multiple servers. That's outside the scope of purely looking at scalability, but it's certainly relevant to the equation overall. It's also important to remember that scalability isn't the only reason to have a web farm.

Of course, you can combine these two techniques, having specialized resources and distributing them across multiple servers. That brings an additional advantage: you can scale each of those specialized resources independently. So if you need to improve the scalability of images, expand the image server farm.

The key to both these techniques is good instrumentation: you need to know where the problems are. Specialization helps because it creates clear boundaries between the various resources involved in a web application. And often you'll find that the non-affinity step you skipped becomes your key problem scaling up - and it will be instrumentation that shows that to you. Of course, then we get into the argument of whether or not the instrumentation *is* the problem, because it too exerts a certain amount of load on the servers.

There's more to talk about as well: there are a variety of techniques for going to a non-affinity solution, and there are also the challenges of caching at scale and of invalidation.

And don't forget the database! As you scale up your web farm, the database can represent a serious bottleneck. Solving that is a huge task on its own, involving its own implementations around specialization and distribution.

I had originally suggested this topic as a breakout session, but I'm really looking forward to doing it as a Chalk Talk, for the higher level of interaction I expect to have with the audience. Chalk Talks are a lot more intimate, so I'm going to steer clear of a slide deck and focus on using the whiteboard to look at the various evolutions of a web application as it scales up.

Hope to see you there!

Friday, May 25, 2007 10:56:04 AM (Pacific Standard Time, UTC-08:00)

 

Cooking Up a No Code ASP.NET Tuning Solution!#

I’ve been talking about load-testing ASP.NET applications, and what it’s like when you fail. Well, now I can finally explain why I’ve been thinking about all this stuff. I just spent the last two weeks talking to people about our launch and getting feedback from analysts about the Strangeloop AppScaler, and at last I can talk about it in public!

 

Here are the basics: You already know how application tuning impairs the development process. Not only does it take a long time for pretty limited returns, but it takes you from this lightweight, fast ASP.NET development process—the whole reason you started using ASP.NET in the first place—to this much more ponderous endeavor, where every piece of performance tuning you do places new requirements on everything else you’re coding moving forward. Well, the Strangeloop AppScaler basically takes that entire application tuning process and puts it in a box. It’s a very, very cool thing. But now that we're out in the open, what I really want to talk about is how we got here.

 

It all started with looking for a better way to do session. Everybody’s talked about session, and everybody knows that it could be handled better, but nobody had actually done it. The default, of course, is in-process session. Since we all start development on a single web server, in-process makes sense. But as the application becomes more important, more web servers are needed, and the idea of going out-of-process comes up.

 

Microsoft provides two out-of-process approaches. One is using SQL Server, which you likely already have, since it’s where you store your data. But SQL Server is kind of overkill for session, since you're just storing a blob of session data there: you don't really need the power of SQL Server for that. SQL Server is reliable, but slow. The alternative is State Server, which is substantially faster, but isn't reliable and generally isn't a great bit of software.
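
For reference, here’s roughly what the two out-of-process options look like in web.config - a sketch with placeholder server names, and you’d use only one of the two elements:

    <!-- State Server: fast, but not reliable (session lives in one process's memory) -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateserver:42424"
                  timeout="20" />

    <!-- SQL Server: reliable, but slower per request -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=sessiondb;Integrated Security=SSPI"
                  timeout="20" />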

 

And switching session methods is pretty trivial, since all you have to change is the web.config file. One issue people occasionally run into, though, is that they haven't marked their objects for serialization. In very rare cases they can't serialize their objects, but for the most part, it’s just about setting properties correctly.
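
The serialization requirement usually just means decorating the types you put into Session - a minimal sketch, with a hypothetical class:

    using System;
    using System.Collections.Generic;

    [Serializable]
    public class ShoppingCart
    {
        public List<int> ProductIds = new List<int>();
        public decimal Total;
    }

    // In-process session never serializes anything, so this only bites you when you
    // switch to StateServer or SQLServer mode - then any non-serializable object in
    // Session fails at runtime.
    // Session["Cart"] = new ShoppingCart();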

 

Typically, people deal with the issue by leaving session in-process and using a load balancer that supports sticky sessions, where the load balancer uses the ASP.NET cookie (or IP address) to *stick* a given user to the same web server each time they visit the site. While that certainly solves the problem of session, it undermines the value of a web farm. If a server goes down, its sessions are lost. Some servers can end up much busier than others, so you aren’t really balancing your load. And server updates tend to be a major pain.

 

To really load balance, to get the full advantage of your web farm in terms of performance and reliability, you need to get the session data out of the web server and go out-of-process. When you do that, you can load balance properly and go to any web server you want, but it means that session processing takes longer. So originally, our mission was to really look at session and figure out a way to get in-process performance but with out-of-process flexibility.

 

When we did all the math to figure out exactly why doing session out-of-process was so much slower, we found that network trips were a major part of the processing time. Every request/response pair with out-of-process session means two additional network round trips: one to fetch the session data at the start of computing the response, and one to write the modified session data back out at the end. But the only reason all these network trips happen is that the request travels all the way to the web server before the server realizes it needs session data. So we thought, “What if we put the session data in front of the web server, so by the time the request gets to the web server, it already has the data?”

 

That’s what AppScaler does (well, one of the things it does). As a request comes in, it passes through AppScaler, and AppScaler says, “I don’t care what server you’re going to, here’s the session data you need.” Then it attaches the session data onto the request. When the request arrives at the web server, the session provider strips the session data out of the request and the page processes normally. When it finishes computing the response it attaches the session data to the response and sends it back to the browser. On the way out the response passes through AppScaler, and AppScaler removes the session data and stores it away in its network cache, and everything proceeds normally from there.

 

So suddenly, we’d eliminated all these extra network trips, but we were still out of process, so you still have all that flexibility. Pretty cool, right? Then we took it a step further and said, “Gee whiz, since we’re already here doing this, why don’t we just do viewstate too?” As you know, viewstate can get totally out of hand, typically due to the use of third-party controls, which is why the really performance-conscious sites don’t use third-party controls at all. And giving up third-party controls means either slowing down your development process to create controls yourself, or just not using all the controls that you otherwise might. With AppScaler, you can use all the controls you want (within reason). It takes that viewstate out of the page before it goes to the browser, so you don’t pay the performance penalty.

 

So fixing session and viewstate were the first features of AppScaler, and the results were pretty impressive—we were really cutting down page sizes and seeing substantial performance gains. And that’s when we had the big realization: Now that we’re sitting here in front of the web server farm where we can see all this traffic, there are all kinds of smart things we can do to optimize the performance of ASP.NET applications!

 

Fixing browser caching was low-hanging fruit for us. With browser caching, you mark various resource files (images, JS and CSS files, for example) as cacheable at the browser, normally with some sort of time limit (a day, a week, etc). Once the browser caches those items, it won’t request them again for as long as the cache is valid. That gives substantial performance gains, since you cut out a lot of the resource requests that make a web page work.
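
For comparison, hand-rolling this in plain ASP.NET looks something like the following - a minimal sketch of a handler that serves a resource file and marks it cacheable at the browser for a week (the lifetime and the file path are arbitrary):

    using System;
    using System.Web;

    public class CachedResourceHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "image/png";
            // Tell the browser (and any proxies) it may cache this response for a week.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));
            context.Response.Cache.SetMaxAge(TimeSpan.FromDays(7));
            context.Response.WriteFile(context.Server.MapPath("~/images/logo.png"));
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }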

 

The downside to browser caching is when you go to update the website. Unless you’re extremely careful, you can end up with browsers messing up your new pages because they use cached items instead of new items. And of course the pages that get messed up are the ones the CEO is looking at, because he hangs out on the site all the time and has everything under the sun in the browser cache. In my experience, people abandon browser caching after an event like that, and never use it again.

 

AppScaler fixes browser caching by dealing with expiration properly. First off, you specify what to cache in AppScaler, so that you don’t have to fiddle with settings on your web servers. AppScaler just automatically marks those resource files for caching as they pass through on the way to the browser. But then the really clever bit comes into play: AppScaler watches the resource files on the web server so that when there is an update, it sees it and knows the underlying files have changed.

 

Once AppScaler knows a resource file has changed, it dynamically renames it in the request/response pairs so that the browser doesn’t have it cached. It keeps up the renaming until the cache expires. So suddenly browser caching doesn’t cause problems with website updates.
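
The manual version of that renaming trick - what teams usually hand-roll when there’s no box doing it for them - is to version the resource URL so a changed file shows up under a new name as far as the browser is concerned. A sketch, with a hypothetical helper:

    using System.IO;
    using System.Web;

    public static class VersionedUrl
    {
        // Appends the file's last-write time as a version token, so an updated file
        // produces a different URL and the stale cached copy gets bypassed.
        public static string For(string virtualPath)
        {
            string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
            long version = File.GetLastWriteTimeUtc(physicalPath).Ticks;
            return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
        }
    }

    // Usage in markup: <link rel="stylesheet" href="<%= VersionedUrl.For("~/site.css") %>" />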

 

Our experience with ASP.NET has demonstrated again and again that caching is king. And when we studied the potential of caching with AppScaler, we realized that self-learning caching was the number one performance return we could offer with this idea. Being between the browser and the web farm is the perfect place to cache and to coordinate cache expiries. As a developer, you know you have to cache, and you can write code to do it, but it’s a lot of programming, and it changes the way you have to code going forward. More than that, you have to figure out what to cache. You might guess wrong. Or more likely, because of the time and effort involved, you’re probably only going to cache a few things that are obvious.

 

AppScaler Response Cache evolved from that experience. It started out as a system for monitoring traffic, looking for where request/response pairs match and how frequently the response differs for a given request. It looks at parameters such as querystring and POST elements to tell requests apart. So by watching all the traffic going to and from the application, AppScaler learns what to cache, and when to expire it.
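
For contrast, here’s roughly what the hand-coded ASP.NET version of “cache this response, keyed on these parameters” looks like - a sketch with an arbitrary duration and parameter name:

    using System;
    using System.Web;
    using System.Web.UI;

    public partial class ProductList : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Programmatic equivalent of <%@ OutputCache Duration="300" VaryByParam="category" %>:
            // cache the rendered response on the server for five minutes, with a separate
            // cache entry per value of the "category" parameter.
            Response.Cache.SetCacheability(HttpCacheability.Server);
            Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(5));
            Response.Cache.SetValidUntilExpires(true);
            Response.Cache.VaryByParams["category"] = true;
        }
    }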

 

Based on those recommendations, you can tell AppScaler to actually cache the items, or you can put it into an automatic mode, where AppScaler will cache what it thinks it should. This automated caching feature is incredibly useful for dealing with Slashdot or Digg events, where suddenly traffic is up 10 or 100 times.

 

But ultimately, the real advantage is the lack of coding – writing caching code in ASP.NET works, but it slows down the development cycle going forward. AppScaler gives you the same benefits, but without the impact on your development.

 

Now for the record, if all of this sounds very straightforward, it’s because I’m just giving the highlights here. Making all of this work together has been an extremely complex, time-consuming project. Also, while I’m really excited about it, I want to be clear that this is not going to fix every problem. If your pages are a megabyte apiece and half of that is viewstate, for example, we’re going to have a tough time helping you at any significant level of scale. You’re still going to have to do some basic tuning. But it’s when you get into the really exotic tuning, when you’re doing these miniscule kinds of tweaks and breaking pages down fraction by fraction to find out where you can squeeze a little more performance out of it—the stuff that really impairs your coding more than anything else—that’s when AppScaler can really help you out. And this is just a subset of the things it can do. I listed four features here. There are more than twenty others on the books today, and the list keeps growing.

 

Monday, May 21, 2007 2:15:50 PM (Pacific Standard Time, UTC-08:00)

 

Migrating web servers, upgrading dasBlog...#

Decided not to work on Sunday for a change.

Instead, I upgraded servers! Ah, such a geek.

My old web server Stan is very, very old... a P3 1GHz with 512MB of RAM. Running Windows 2000, it has been a workhorse of a machine. I put Stan together in November of 2000. Hard to believe it has been running essentially unmodified for over six years. But that also means those hard drives have over 50,000 hours on them, which makes them ticking time bombs. And that's what the SMART reporting is saying, too.

Stan is just too old to upgrade, he needs to be replaced.

His replacement is Jimmy, a machine I already had in the rack as a testbed for betas of SQL Server 2005. Jimmy is a P4 3GHz with 2GB of RAM, running Windows Server 2003 R2 SP2. It takes some time to get used to the little differences between IIS5 and IIS6, but it's all bearable.

Migrating a web server is a pain in the butt. Lots of little configuration details you have to get right. To do the testing, I copied a backup of Stan's web sites onto Jimmy. However, since there are multiple sites on the web server, I depend on host header identification to sort out what site is what, which means I need to use the correct names of the web sites to access them. So what's a boy to do? I want to leave the sites up and running on the old server while I mess around with the new one.

I could have faked out a DNS server, but that seemed like a lot of work. Instead I modified the HOSTS file on my main workstation so that requests for those web sites pointed directly at Jimmy. Funny how old technology serves the purpose so well.

Since HOSTS takes priority over any DNS lookup, I was able to point sites (like www.campbellassociates.ca) to the IP address of Jimmy directly. Then I could tweak and test to my heart's content.
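
For anyone who hasn't played this trick: the HOSTS file (%SystemRoot%\system32\drivers\etc\hosts) is just name-to-address lines, so the override looked something like this (the address here is a placeholder for Jimmy's real IP):

    # Temporary override while testing the new box
    192.168.1.50    www.campbellassociates.ca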

One whammy I ran into was with FrontPage Server Extensions. For the most part my web server runs the little web sites of friends and family, and they all use FrontPage, whether Microsoft wants them to or not. While the extensions installed easily enough, I couldn't administer the sites to set up access for the authoring accounts - no matter what account information I entered, it failed.

Turned out it wasn't me - it was a feature of Windows Server 2003 Service Pack 1. The service pack added a loopback check, making sure that the local computer name always matches the host header. And since I'm using multiple host headers, that's just not going to work. The fix is in Knowledge Base Article 896861. You have two choices: turn off loopback checking, or enter all the domain names that are legal for loopback checking.
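
For the record, the turn-it-off route from that KB article boils down to a single registry value - shown here as a sketch, and keep in mind it relaxes a security check:

    REM KB 896861, the blunt option: disable the loopback check entirely
    REM (a reboot may be needed for the change to take effect)
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f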

I turned it off. Call me lazy.

Upgraded dasBlog as well. What I was really after was Akismet, the comment spam filtering solution. Unfortunately, the shipping edition of dasBlog doesn't have direct support for it, but the daily builds do. I'm not normally a guy who runs a daily build, but for Akismet, it's worth it. Take that, comment spammers!

 

Sunday, May 20, 2007 10:05:15 PM (Pacific Standard Time, UTC-08:00)

 

Failing From Your Own Success#
Know what to expect when you take your application out of the nursery and into the real world
Friday, May 11, 2007 9:25:52 AM (Pacific Standard Time, UTC-08:00)

 

Peeking Over the Fence into the Networking Guys' Backyard Reveals a Brilliant Load Testing Solution#

We’ve been going through beta testing at Strangeloop, which means I’ve had the chance to do some serious scaling of ASP.NET. One of the interesting experiences that keeps coming up in this process is the reaction we get from customers when we’re helping them do load testing.

 

One of the things we can offer our early beta test customers is the opportunity to load test their site, with and without Mobius in the loop. We need the test data anyway, and quite a few candidates don’t really have much in the way of load testing resources ready to go. And then we test their site in our lab with our Spirent Avalanche, and they go “Wow! I need one of those!”

 

So what’s a Spirent Avalanche, you ask? Funny you should ask… It’s 3Us of load testing love.

 

Josh Bixby, our VP of product development, noticed it when he was at trade shows. One of the benefits of having our feet in both the development camp and the networking camp is that we naturally see things on the network side that a lot of developers don’t. Josh pointed out that virtually every company making networking appliances had one of these 3U boxes in their demo racks. But I’d never heard of it before. So we checked it out, and realized it was the best answer I’ve ever seen to doing load testing. I know that load testing isn’t something people want to think about unless they HAVE to think about it. But if you do have to think about it, you have to check this out.

 

I don’t need to emphasize how much of a pain load testing is. Typically, you have two options, both of which suck: if you’re doing it yourself, you may spend literally a week setting up a load test farm, and you’re probably spending more energy making the configuration work than actually doing the test. That’s no surprise, since most likely you’re using any piece of junk you can find, trying to network together a bunch of machines with different NICs, different performance, different speeds, etc., before you even begin to configure the test. I had one customer that bought me ten identical, dedicated servers for load testing - for about the same cost as an Avalanche - but that’s the exception, not the rule. And it still gives you much less control - you have to do all your own analytics, etc.

 

It’s easy to think “Oh, I’ll just use Mercury Interactive (sorry, HP Mercury) to do my load testing.” Easy until you see the price. Paying six digits for load testing with a 20% annual maintenance contract isn’t so easy. And that’s just for software – you still supply the hardware. I don’t think anyone told Mercury that the Dot Com Boom was over.

 

So taking a page from the network guys, there’s a third way to do load testing: You get a Spirent Avalanche, hook it up, and let it do the job. One 3U box with four gigabit Ethernet ports that can generate nearly two million users by itself. So you’ve got the hardware and the software all in one box.

 

Of course, the Avalanche isn’t cheap either, although they’ve nailed the gradually pregnant business model well – you can rent the gear, and those rental charges get applied to a purchase. We spent less than $100,000 on our 2700 with all the features we needed to do web testing. It also uses TCL-based scripting, which is usually the realm of networking guys, not developers, and can be difficult to understand. TCL provides the Avalanche with the flexibility to do load testing on a lot more than just web stuff.

 

However, bundled with the Avalanche is a product called TracePlus/Web Detective (Spirent Edition), made by System Software Technology (SST). SST makes a variety of TracePlus products for networking and the web, including this version specifically for working with the Avalanche. TracePlus provides the classic capture mechanism you see in most load generating tools: it records your navigation of the web pages and captures it as HTTP commands, which the Avalanche internally converts into its TCL commands.

 

The Avalanche has some ability to do reporting internally (pretty graphs), but the main way we’re using it is in “Excel mode”, where it generates CSV files that we can load into spreadsheets for analysis.

 

We’re also finding that the Avalanche doesn’t understand ASP.NET things like viewstate very well, but then, neither does WAST. We’re using Visual Studio 2005 Team Edition for Testers to get really smart functional testing around specific ASP.NET features.

 

Even with these complications, it’s such a better way to do load testing than setting up servers, and infinitely better than letting your paying customers do the testing. So if you’re doing load testing, why aren’t you using one of these? Why don’t more people know about this? This is pretty standard equipment if you build networking gear. It’s not like the Avalanche is some new, earth-shattering product. It’s not even mentioned on the main page of Spirent’s Web site?!?

 

I have yet to find anyone else in the ASP.NET world using a Spirent Avalanche. I really think it’s just a cultural issue, where great stuff is getting lost in translation between the networking world and the Web development world.

 

Important lesson: If you’re not paying attention to the networking space, you should be. You may just be wasting your time wrestling with a problem that other smart people have already solved. That’s one of the cool things about working with Strangeloop; we really get to straddle the line between those two worlds.

 

Friday, May 4, 2007 4:40:47 PM (Pacific Standard Time, UTC-08:00)

 
