Category Archives: Web Development

Cranked stuff about Web Development

Speeding Up Your Web Site with YSlow for Firebug

I’m always looking for an edge over our competitors to make our e-commerce sites better from a usability standpoint. One of the easiest ways to improve the experience is to make sure your site is responsive when people visit it, no matter what kind of connection or computer they have. I decided to do some research on how to improve our sites’ download times and came across YSlow for Firebug.

YSlow is a Firefox extension that plugs into the Firebug extension. Any developer who doesn’t use Firebug is really missing out, so if you don’t have it, get it. Anyway, you can install YSlow right into Firefox and access it through Firebug.

Upon analyzing our site the first time, we received a score of 42 from YSlow, which was an F. Ouch. That didn’t make me feel all that great about our site. You can see screen shots of our initial scores here and here. We scored really low on all but four of the thirteen performance criteria. I decided to attack the easiest tasks first: Minify JS, Add an Expires header, and Gzip components.

I minified our JavaScript files using a utility called JSMin. It basically removes all whitespace and line returns from your file. It doesn’t compress the code all the way, but I wanted it to remain a little readable in case I needed to look at the code on the live site.

Next, I wanted to handle adding an Expires header. Since we use ASP.NET and C# for our web application, I was able to write an HttpHandler to do this for me. What was even better was that I could handle the Expires header and another issue, ETags configuration, in the same snippet of code. For each request, our HttpHandler adds an empty ETag and an Expires header set three days in the future. Both are used to determine when a cached copy of a web page needs to be refreshed: the ETag identifies the version of a resource so the browser can check whether its cached copy is still current, and the Expires header tells the browser how long it can keep using the cached copy before asking again.
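
Roughly, the code boils down to something like this (shown here as a standalone IHttpModule for illustration; the class name and wire-up are placeholders, not our exact production code):

    using System;
    using System.Web;

    // Simplified sketch: stamp every response with an Expires header three days
    // out and a blank ETag just before the headers are sent.
    public class CacheHeadersModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.PreSendRequestHeaders += delegate(object sender, EventArgs e)
            {
                HttpResponse response = ((HttpApplication)sender).Response;
                response.Cache.SetExpires(DateTime.Now.AddDays(3)); // Expires: now + 3 days
                response.AppendHeader("ETag", "");                   // empty ETag, per YSlow's advice
            };
        }

        public void Dispose() { }
    }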

Lastly, I wanted to GZip all of our components. This just required configuration of our IIS Server. You can also do this directly within your .NET application, but I didn’t see the value in this as IIS could do it for us.
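
For anyone curious, a rough sketch of the in-application route (not what we deployed, since IIS handles it for us) looks something like this in Global.asax:

    using System;
    using System.IO.Compression;
    using System.Web;

    // Rough sketch: compress responses from within the application by wrapping
    // the output stream in a GZipStream whenever the browser says it accepts gzip.
    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string acceptEncoding = Request.Headers["Accept-Encoding"] ?? "";

            if (acceptEncoding.Contains("gzip"))
            {
                Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }
    }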

After implementing these changes and a few other mundane ones, I ran YSlow again. Lo and behold, we’d gone from a score of 42 to a score of 76. Not bad! We’re now scoring a “high C” according to YSlow. From a usability standpoint, I could definitely tell that the site responded much faster than it did when we were scoring a 42. For those of you who would like to see screen shots of the stats, you can see them here and here. Looking at the numbers, we cut the data downloaded from 413.1k to 234k, which is a huge improvement.

I strongly recommend that anyone developing web applications take a look at YSlow. You might not be able to implement changes for every point it flags, but even 2 or 3 changes should net you some great improvements in the performance of your site.

ASP.Net HyperLink Control And HTML-Encoded Ampersands

I just ran into some odd behavior with the HyperLink control in ASP.Net. Per the W3C, you’re supposed to HTML-encode ampersands when building URLs in your HTML code, writing &amp; instead of a bare ‘&’. The reason is that a bare ‘&’ is assumed to start an entity reference. What’s nice is most web browsers can recover from this type of error, but if you want your site to pass validation, you need to use &amp;.

So I hooked up all of our URLs to use this method, especially where we write out URLs from our C# classes. What I found odd was that if I did this with a HyperLink control instead of an HtmlAnchor control, .NET would encode the already-encoded ampersand again, writing &amp;amp; out in the URL instead of &amp;. Naturally this broke our site, because the query string wasn’t parsed properly. The fix was to use an HtmlAnchor instead.
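
Here’s a trimmed-down illustration of the two controls side by side (the control IDs and URL are made up, not our actual markup):

    using System;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;
    using System.Web.UI.WebControls;

    public partial class ProductListPage : Page
    {
        // Declared in the markup as:
        //   <a id="productLink" runat="server">Parrot Costume</a>
        //   <asp:HyperLink ID="productHyperLink" runat="server">Parrot Costume</asp:HyperLink>
        protected HtmlAnchor productLink;
        protected HyperLink productHyperLink;

        protected void Page_Load(object sender, EventArgs e)
        {
            // The HtmlAnchor renders the href exactly as written: id=42&amp;color=red
            productLink.HRef = "product.aspx?id=42&amp;color=red";

            // The HyperLink control encoded the ampersand again for us (&amp;amp;),
            // which is what broke query string parsing, so we dropped it for the anchor.
            productHyperLink.NavigateUrl = "product.aspx?id=42&amp;color=red";
        }
    }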

I’m not really sure why .NET does this or if there’s another workaround for it, but this solution worked for me. I’d be curious to know the reason behind the behavior though.

Yahoo Implements OpenID

OpenID

I was reading on TechCrunch today that Yahoo has implemented OpenID, effectively tripling the number of OpenID accounts. They’ll be going into beta at the end of the month. This is a huge win for the project, but it got me thinking.

Remember way back when Microsoft Passport came out (Microsoft calls it Live ID now, I believe, and it’s used mostly on their own sites) and it was supposed to be the answer to all our password woes? Create a Passport account and log in with the same username and password on any site that implemented it. Well, how far did it get? Nowhere. At least nowhere fast. I think the reason is that implementation wasn’t all that easy and there was no real need for it without the abundance of internet users that we have today.

So what will make OpenID different? Well, first, the number of social networking and information sites, not to mention the sheer number of people online, will make the adoption of a single sign-on more appealing at some point. Second, with huge names like Yahoo, Google, Verisign, and IBM getting into the mix, something cool like this has a shot at gaining some traction. I know I’d love to have one login for all the sites I use daily. Remembering usernames and passwords is a pain.

Take this one step further. I’m in the e-commerce industry, and I started thinking that I’d love to use something like this on all of the e-commerce sites we run. I would basically have one central spot to store authentication and account information instead of separate databases. So what if major brands started getting in on this? Think about it. Amazon, Gap, Target, WalMart, Best Buy, etc. are all on OpenID. You could effectively shop with the same credentials everywhere. No more forgotten-password reminders, because you’d use this ID every day. You’d never forget! How cool would that be?

ESPN’s Online Video Player

ESPN Sports

I watch a ton of ESPN on T.V. Ask my fiancee, Shannan. I DVR Around the Horn and Pardon the Interruption. I watch SportsCenter all the time, especially during baseball season. I watch this stuff so much that Shannan can recognize and name the hosts and commentators on other T.V. shows.

One thing about ESPN that drives me insane is their web site. Specifically, the built-in video player on their index page. It plays the default video automatically, so even if I don’t care about that video, it starts playing and making noise while I’m reading something else. Talk about annoying. It even woke me up one night because I’d left my laptop on with ESPN loaded in the browser. It just started playing on its own!

So, this is a note to everyone working on ESPN’s web page. Cut the crap. Don’t play the video by default. You might find you save your company money every month in bandwidth in the process too.

Replacing SLI Systems Search with Google Mini

SLI Systems Search
Google Mini

For the last few years, we’ve used SLI Systems Search for site search on both Fright Catalog and YumDrop. When we started the Import Costumes project, we decided to try something different because of the increasing cost of SLI’s service. The cost is based on the number of search queries, so the more popular the sites become, the more the cost of search increases.

Instead of building out our own search functionality, we decided to purchase a Google Mini on the recommendation of a fellow e-tailer we know. They said the integration was fairly easy and that, for their searching needs, it fit the bill. So we decided to give it a shot. For the $1,995 it cost, we’ll definitely save some money on search in the long term. We knew we’d be giving up some of the features we get with SLI, most notably the way SLI’s search algorithm “learns” different searching patterns and improves the results for any given search term, as well as its “automatic” related and suggested search terms.

The biggest task in the Mini’s integration was refining search queries. SLI allows us to refine by category and price, so we wanted to be able to do the same thing with the Mini. Luckily, you can do this by searching meta tags for different values. It took a while to figure out that some of the search parameters, like as_q and partialfields, weren’t working as I expected, so I ended up building the query term much like you’d type it on Google, e.g. site:importcostumes.com inmeta:price:$10.00..$20.00 parrot when you’re looking for parrot-type products from $10 to $20.
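
A stripped-down version of how we build and send that query term looks roughly like this (the appliance host name and output parameter are placeholders, not our actual configuration):

    using System;
    using System.Net;
    using System.Web;

    // Stripped-down sketch: the user's search words plus an inmeta range filter
    // for price, sent to the Mini, which hands back raw XML results.
    public static class MiniSearch
    {
        public static string Search(string terms, decimal minPrice, decimal maxPrice)
        {
            string query = string.Format(
                "site:importcostumes.com inmeta:price:${0:0.00}..${1:0.00} {2}",
                minPrice, maxPrice, terms);

            string url = "http://mini.example.com/search?output=xml_no_dtd&q=" +
                         HttpUtility.UrlEncode(query);

            using (WebClient client = new WebClient())
            {
                return client.DownloadString(url);   // raw XML back from the appliance
            }
        }
    }

Calling MiniSearch.Search("parrot", 10m, 20m) produces the same query term as the example above.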

Since product content doesn’t change all that often, we’ve also been able to cache search results as XML files on the file system. We keep them around for a 24-hour period, just in case something does change. This is great because it keeps some load off the Google Mini while speeding up the display of results for popular search criteria.
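
The caching itself is nothing fancy; roughly like this (the cache directory, file naming, and fetch call are simplified stand-ins for our real plumbing):

    using System;
    using System.IO;
    using System.Net;
    using System.Text;
    using System.Web;

    // Rough sketch of the 24-hour file cache: serve the saved XML if it's less
    // than a day old, otherwise ask the Mini again and save a fresh copy.
    public static class SearchResultCache
    {
        private const string CacheDir = @"C:\cache\search";

        public static string GetResults(string query)
        {
            // Turn the query into a safe file name for the cache entry.
            string fileName = Convert.ToBase64String(Encoding.UTF8.GetBytes(query))
                                     .Replace("/", "_") + ".xml";
            string cacheFile = Path.Combine(CacheDir, fileName);

            if (File.Exists(cacheFile) &&
                DateTime.Now - File.GetLastWriteTime(cacheFile) < TimeSpan.FromHours(24))
            {
                return File.ReadAllText(cacheFile);   // still fresh, skip the Mini
            }

            string xml = FetchFromMini(query);        // stale or missing, so refetch
            File.WriteAllText(cacheFile, xml);        // reset the 24-hour clock
            return xml;
        }

        private static string FetchFromMini(string query)
        {
            // Same request shown in the earlier sketch, reduced to its essentials.
            using (WebClient client = new WebClient())
            {
                return client.DownloadString(
                    "http://mini.example.com/search?output=xml_no_dtd&q=" +
                    HttpUtility.UrlEncode(query));
            }
        }
    }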

We’ll definitely miss the learning capabilities of SLI, and not being able to automatically get related and suggested search terms is a bummer. That might be something the Google Search Appliance can do more easily, since you can upload data feeds to it. Maybe we’ll graduate up to that as we get more of our sites running search from the Mini. Adding related and suggested terms by hand is a pain, though you can batch upload them; even then, you still have to build the lists manually.

One other feature I’d like to see from the Mini (if anyone from Google is reading this) is a way to automate the emailing of search reports on a regular basis. It’d be nice to have a report sent to me monthly with the top search terms as well as the results they returned. That kind of data is great for deciding what products to buy and which to sell aggressively.

Revel Video Wedding Videographer Web Site

Shannan (my fiancee) and I hired J.G. Lis of Revel Video to be our wedding videographer. During our initial meeting, he mentioned he needed his web site updated. Ironically, that’s what I do for a living, so we decided to exchange services. His existing site was basically a mashup of images that was hard to maintain; J.G. kept having to ask his designer to make simple textual updates.

YumDrop.com now on Facebook!

YumDrop.com is now on Facebook! You can become a fan of YumDrop by clicking here. If you’re looking for sexy lingerie at good prices, check out the site! YumDrop is quickly becoming one of our most popular web sites, and it’s growing every day. Help us grow its popularity by becoming a fan and sharing it with your friends today!

YumDrop.com is also on MySpace. We’ve had an account there for a while, but we’re trying to build up our following. If you have a MySpace account, become a friend of YumDrop!

Usability Testing: The Beginning

We did usability testing for YumDrop way back when we redesigned it the first time. It worked out pretty well. Sales went up. Much success! However, we never did it again, which is taboo #1 when it comes to usability testing. I’ve been meaning to get it rolling again and finally did, with an impromptu session with my brother.

The amazing thing was that I was able to do it OVER THE PHONE! I basically just asked vague questions to see what he thought of different pages on our sites. From the 20-minute session, I was able to get a few good ideas, which is really all you need to start. The only downside is that I couldn’t see what he was clicking on or hovering the mouse over. The important thing, I think, is that I did it, even if it wasn’t perfect. Something is better than nothing, after all.

One thing I took from Krug’s book is that it doesn’t really matter so much how you do your usability testing (beyond a few key approaches); what matters is that you do it and keep doing it. As you continue down the testing road, you’ll keep smoothing out the stumbling points people have with your web site.

Book Review: Don’t Make Me Think

I recently purchased Steve Krug’s book Don’t Make Me Think: A Common Sense Approach to Web Usability for a plane trip to California for my brother’s wedding.


[Book cover image courtesy of Amazon.com]

I was able to read most of the book in about 2-3 hours (which Steve notes was on purpose). Along the way I kept some notes on a piece of paper, as well as some in my head. What I found after finishing was that I was starting to look at the web sites I work on (FrightCatalog.com, YumDrop.com, and ImportCostumes.com) in a different way. Steve is very accurate in stating that developers look at their web sites very differently than the people who visit them. We think that if we like a design aspect or find a feature useful, the entire world will. After all, we’re developing for the average user, right? Wrong. There is no average user (and if there were, developers certainly wouldn’t be in that group). We, as developers, aren’t the target audience. The rest of the world is, and we need to make sure our sites are as easy to use as possible so people don’t have to (as Steve states time and time again) muddle through.

Toward the end of the book, Steve delves into usability testing, something we’ve only done briefly for YumDrop (and it did work when we did it) but haven’t continued. We will now. To improve our conversions, we need to get people to look at more product. That means seeing how they use the site and what prevents them from purchasing. One of the first changes we made was altering the header of the Fright Catalog index page, putting search in the upper left and calling more attention to the button. We found that only 18% of visitors used search, which is abysmal. My guess is they didn’t see it over on the right side of the page, where it was before, because people scan (they don’t read) web sites from left to right, like a newspaper or book. I’ll report later on whether that percentage increases.

If you don’t have a copy, I highly recommend you buy it. Steve does an excellent job of opening your eyes to a new way of developing web sites and retaining users. I’m really excited to apply more and more of what he talks about in his book to our sites. I’ll also be reading it multiple times, which is easy because the book is short. I’m sure I missed some good tidbits!

Learning From Analytics

We’ve been stuck in a "rut" of sorts for the past few years with FrightCatalog.com, redesigning and realigning the site to what we thought would generate more sales. In the past, this has worked, so to speak. This year, not so much. Traffic is way up, but conversions are flat. So we started really digging into the numbers we’ve collected through Google Analytics. Based on what we saw, and a new way of "thinking," we’ve realigned our index page to reflect what we think (based now on numbers rather than a gut feeling) will get our customers into the sections they really want to see. If this works, we’ll start tweaking the index page on a more granular level and moving these changes into other popular landing pages.

In the end, our SEO is good and our site looks great, but we have trouble converting our visitors into customers, which is crucial for growth. It’s refreshing to finally be making decisions and changes based on actual numbers instead of a gut feeling. What we’re really interested in at this point is how well we can interpret the numbers in Google Analytics and whether we can make use of them instead of a paid service such as Omniture.