Why I’m Not Buying Shares in Facebook, Inc. Any Time Soon

So last Friday was a day a lot of people, and investors, had been waiting for: the highly anticipated initial public offering (IPO) of Facebook, Inc. (ticker symbol FB). Shares opened at $38 and were quickly up to $42. Then on Monday, the first full day of trading after the IPO, the stock dipped 13% at one point before settling just over 11% down at $34.01. As of today, 5/23, the stock is down about $6 from its opening price. Now, this isn’t surprising after an IPO; not every stock shoots up like Google (GOOG) and keeps on going. But even though some stocks can be hot when they IPO, I wasn’t interested in getting in on Facebook, even with shares finally accessible to individual investors like myself. Typically it’s really hard for an individual investor to get in on an IPO.

I’m not convinced Facebook has a solid business plan right now. Their only legitimate way of generating revenue is the ads they display on the right-hand side of their pages. Just recently, news came out that 44% of Facebook users polled said they would never click on a Facebook ad. And this was after GM said they were pulling the $10 million budget they had earmarked for Facebook ads because they don’t feel the ads are effective (or don’t work, depending on which article you read). Now, none of this means that marketing on Facebook, or social marketing in general, isn’t effective. GM will still invest $30 million in social network marketing, so I’m not saying it’s a fool’s errand.

What this data does suggest, though, is that Facebook might not be able to generate the revenue investors initially thought, especially as new user registrations slow. That means they’ll have to find another way to generate revenue. Might we see “sponsored” ads in our news feeds like you see on Twitter? Perhaps, but people could ignore those too. And will you be able to “ignore” ads the way you can ignore posts in your feed?

Personally, I think ads might have been the wrong thing for Facebook to depend on. What they should have done was charge companies like Zynga, which build apps and games on the Facebook platform, to use the Facebook API. Right now Zynga is basically getting a free ride on the back of Facebook. Further, once Zynga started selling virtual items to its users, it should have been a no-brainer for Facebook to hit Zynga up for some sort of licensing fee. And don’t tell me Zynga wouldn’t pay. It’s the same model Apple uses in their App Store when iOS users buy an app, and it would work just as well for Facebook.

The bottom line is that Facebook will have to find new sources of revenue besides advertising. They could try to sell their users’ personal information, but why alienate your user base? They’re already recovering from the boondoggle their IPO has become, as covered in these articles here and here. Facebook will surely recover and will probably be a decent long-term investment, but only once they prove they can regularly meet and exceed investor expectations. Only time will tell.

Update: Market valuation guru Aswath Damodaran gives Facebook a valuation of $29 a share. That’s a far cry from its IPO price. So it would seem that waiting is indeed the right tactic when it comes to investing in Facebook. Let’s see what they can do as a publicly traded, and scrutinized, company.

Helicon Tech ISAPI 3 Rewrite Problem with ASP.NET 4.0, IIS6 & Extensionless URLs

For years we have used Helicon Tech’s ISAPI Rewrite plugin for IIS to generate pretty URLs for our ASP.NET sites. A few months back, while migrating our ASP.NET web applications to the 4.0 Framework, I ran into an issue with ISAPI Rewrite and our site’s URLs. It turned out that ISAPI Rewrite wasn’t even getting the chance to process our URL rewrites.
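For context, our rules were of this flavor; here’s a hypothetical example in the mod_rewrite-style syntax ISAPI_Rewrite 3 uses (article.aspx and the slug parameter are made-up names, not our actual rules):

# Map a pretty URL like /articles/some-title to a real .aspx page
RewriteEngine on
RewriteRule ^/articles/([^/]+)$ /article.aspx?slug=$1 [NC,L]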

The bottom line is that with the 4.0 Framework on IIS 6.0, extensionless URLs are turned on by default. Since our rewrites depended on the .ASPX extension to map our pretty URLs to actual pages, I had to turn this feature off. To do that, I had to go into the registry and find this key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\4.0.30319.0\

Then I had to add/edit this DWORD value:

EnableExtensionlessUrls = 0
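If you’d rather script the change, something like this one-liner should be equivalent (a sketch; on 64-bit machines the key may live under Wow6432Node instead):

rem Equivalent of the registry edit above (untested sketch)
reg add "HKLM\SOFTWARE\Microsoft\ASP.NET\4.0.30319.0" /v EnableExtensionlessUrls /t REG_DWORD /d 0 /f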

I restarted IIS and ISAPI Rewrite worked like a charm. Note that I didn’t have this issue on my development box, which runs 64-bit Windows 7 Professional and IIS 7.5; it only appeared on our testing and production environments, which run Windows 2003 and IIS 6.0.

Breaking ASP.NET 4.0 Framework on Windows 7 and IIS 7.5 with ASP.NET 1.1 Framework

The other day, I wanted to migrate some database changes from our development environment to our staging servers. The tool we typically use to do this is kind of old and requires the ASP.NET 1.1 Framework. So, during installation of the tool on my new Windows 7 box, a message told me it required the 1.1 Framework, and I decided to install it. During that installation, a warning popped up about known compatibility issues on my system, but I decided to proceed anyway. I mean, how bad could it be?

Well, it turned out kind of bad. None of my ASP.NET 4.0 web applications on my development box would run. I kept getting an error in the framework itself:

Calling LoadLibraryEx on ISAPI filter “C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed

Apparently you can’t run the 1.1 Framework alongside the 4.0 Framework, at least not on Windows 7. It might work fine on XP or Windows Server 2003, but I’m not 100% sure and I wasn’t going to waste too much time finding out.

To make a long story short, I had to uninstall all of the ASP.NET Frameworks from my development box, re-install the 2.0 and 4.0 Frameworks, and then make sure they were registered by running aspnet_regiis -r in the installation directory of each Framework version. It took me about half a day of beating my head against my desk to figure out this faux pas. Better to listen to the warning messages next time…
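For reference, the registration step was roughly this from a command prompt (assuming the standard 32-bit Framework install paths; use the Framework64 directories if that’s where your sites run):

rem Re-register each Framework version from its own install directory
cd %WINDIR%\Microsoft.NET\Framework\v2.0.50727
aspnet_regiis -r
cd %WINDIR%\Microsoft.NET\Framework\v4.0.30319
aspnet_regiis -r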

Ruby on Rails: Sending Email with ActionMailer via SES and SMTP

I’m a huge fan of Amazon Web Services. They’ve served me quite well with some of my ASP.NET applications, so I’d be foolish (in my opinion anyway) not to use them in my Rails applications. I need to send transactional email from some of my Rails web apps, so I set up my SES account. I didn’t want to use the amazon-ses-mailer gem since SES now supports SMTP, so I generated my SMTP credentials and went into my application.rb file to start configuring ActionMailer.

I was a little stumped for a while on how to configure ActionMailer’s SMTP settings to use SES via SMTP. A few posts here and here, and finally this StackOverflow post, led me to this configuration:

config.action_mailer.default_url_options = {
  :host => "thenameofyourdomain.com"
}

config.action_mailer.raise_delivery_errors = true
config.action_mailer.delivery_method = :smtp

config.action_mailer.smtp_settings = {
  :address => "email-smtp.us-east-1.amazonaws.com",
  :port => 465,
  :domain => "thenameofyourdomain.com",
  :authentication => :login,
  :user_name => "your-ses-smtp-username",
  :password => "your-ses-smtp-password"
}

Now, SES’s SMTP endpoint uses TLS (SSL) when authenticating, so you need one more snippet of code in an initializer that runs when your application starts up:

require 'net/smtp'

module Net
  class SMTP
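    # Report TLS as enabled so Net::SMTP wraps the connection in SSL,
    # which SES's port-465 endpoint requires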
    def tls?
      true
    end
  end
end

The final hint that got this working was the require ‘net/smtp’. Without it, the email wouldn’t actually get delivered. I wanted to record this mostly for my own future reference, but hopefully it helps any other Rails developers who get stuck on this topic.
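To sanity-check the configuration, a throwaway mailer like this does the trick (a minimal sketch; the class, action, and addresses are all made up, and the from address must be verified in SES):

class TestMailer < ActionMailer::Base
  # SES will reject mail from unverified senders
  default :from => "verified-sender@thenameofyourdomain.com"

  # A bare-bones message just to confirm delivery works
  def ping(recipient)
    mail(:to => recipient, :subject => "SES SMTP test", :body => "It works!")
  end
end

Calling TestMailer.ping("you@example.com").deliver from the rails console should then land a message in your inbox.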

Create an XML Sitemap on Heroku via Amazon S3

I’ve started hosting a few simple Rails applications on Heroku and so far I’m really pleased with their hosting service. This post isn’t so much about Heroku as it is about how to serve an XML sitemap for your application. Heroku apps don’t give you file system access from within your application, so you’re forced to host your sitemap on an external service, like Amazon S3. There’s a great plugin called sitemap_generator that lets you generate a sitemap and upload it to your Amazon S3 account using carrierwave and Fog.
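For reference, the generator side looks something like this in config/sitemap.rb (a sketch under my setup’s assumptions; the hosts, paths, and routes are placeholders, and I’m using the gem’s carrierwave-backed WaveAdapter):

SitemapGenerator::Sitemap.default_host = "http://thenameofyourdomain.com"
SitemapGenerator::Sitemap.sitemaps_host = "http://yourbucket.s3.amazonaws.com/"
SitemapGenerator::Sitemap.public_path = 'tmp/'          # Heroku's filesystem is read-only except tmp
SitemapGenerator::Sitemap.sitemaps_path = 'sitemaps/'   # folder within the bucket
SitemapGenerator::Sitemap.adapter = SitemapGenerator::WaveAdapter.new

SitemapGenerator::Sitemap.create do
  add '/about'   # add your application's URLs here
end

With that in place, rake sitemap:refresh rebuilds the sitemap, uploads it, and pings the search engines.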

Even though sitemap_generator will ping all of the major search engines when you build your sitemap (which you should rebuild regularly with a rake task), you will still want to register the sitemap in Google Webmaster Tools. Unfortunately, Webmaster Tools will only let you submit a sitemap served from your own domain, not another host. What can we do to fix that?

Well, the easiest solution I came up with was to create a controller that handles requests for your sitemap and redirects them to the sitemap’s location on S3 (via CloudFront, obviously). So, let’s get to the code. Create a file called sitemap_controller.rb in app/controllers and paste this in:

class SitemapController < ApplicationController
  def index
    redirect_to SITEMAP_PATH
  end
end

This redirects a call to the index action of this controller to the value of SITEMAP_PATH. But what is SITEMAP_PATH? Well, in my case, my application relies heavily on a custom Rails engine where all of my controllers and models are defined, so I figured it would be nice to configure the location of the sitemap on a per-application basis. In my actual Rails application, I created an initializer that sets the value of SITEMAP_PATH. Put this in sitemap.rb in config/initializers:

SITEMAP_PATH = "http://somepathtoyoursitemap.com/"

That's the actual location of your sitemap on S3 (again, most likely via CloudFront). Now all that's left is to wire up a Rails route to respond to requests for sitemap.xml. That's done easily enough with the following:

match "/sitemap.xml", :controller => "sitemap", :action => "index"

That's it! Simply restart your app if it's already running so the initializer loads, and your sitemap will be accessible.

Thinking Sphinx – Indexing Models Defined in a Rails Engine

I’m back in the Ruby on Rails game after a long hiatus and my, things have changed a lot. And they’ve changed for the better. The application I’m working on, like many other web applications, requires an internal search feature. Sphinx was very reliable for me in the past; however, it seems that ultrasphinx and acts_as_sphinx have been replaced by a better Rails plugin, Thinking Sphinx. Getting started was super easy. After installing Sphinx and setting up the Thinking Sphinx gem (version 2.0.11) in my application’s Gemfile, I was ready to go.

But I ran into a problem. The platform I’m building leverages a Rails Engine to implement most of the application’s functionality, and Thinking Sphinx wasn’t setting up any models to index, even though I had defined them. It turns out that if your models don’t live in a path Thinking Sphinx looks at, i.e. app/models of the application itself, you’re in trouble. After a bunch of searching, though, I found the solution to my problem. Create an initializer sphinx.rb in your application’s config/initializers directory and add:

module ThinkingSphinx
  class Context
    def load_models
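      # Referencing each indexed model constant here forces Rails to load it,
      # so Thinking Sphinx can see its index definition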
      MyModule::MyClass
    end
  end
end

I defined my models in a sub-folder of app/models and put them in a module, hence the MyModule::MyClass. This explicitly tells Thinking Sphinx which models to load. Running rake thinking_sphinx:config after that change set up the Sphinx config file as I expected it would. Then I ran rake thinking_sphinx:index and I was off and running. Jumping into the Rails console, I was able to verify that searching worked as expected. Hope that helps!

Water Damaged iPhone 4S – What Now?

A few weeks back my wife went to her work holiday party for a couple hours while I stayed home watching the baby. Great bonding time with my daughter! Anyway, she came home a few hours later with some bad news: she’d dropped her iPhone into a toilet (I’m famously telling people she tried to make it swim… though we all know that iPhones, indeed, can’t swim). Ruh roh!

She’d picked up some rice on the way home to dry it out, but unfortunately she’d already tried to power it on after fishing it out of the water. Regardless, we dried it off some more and put it into the bag of rice for the next 24 hours. Oddly, the LED flash started flashing while it sat in the rice, which led me to believe we were up the proverbial creek as far as getting it to work again; electronics don’t just do things like that on their own unless something is wrong. We still let it sit for a while before messing with it.

The next day, with the battery dead, I blew some compressed air into the port on the bottom of the phone, the speaker vents, and the earpiece vent. After that, I plugged it in to charge. To my surprise, after the initial charge took hold, I heard it chime. Weee! So I ran over to check it out and yes, it was on, but nothing showed on the screen. Doh! After a few resets and restores, it was obvious that while the phone could receive calls and texts (and make them using Siri), the display was shot. The screen wasn’t 100% dead, as it turned out; you could make out some icons and settings pages in the right light.

Yesterday I decided to take it in to our local Apple store to see what they could do for us (after making my Genius Bar appointment, of course). I had read online that they would replace it for a $199 “repair” fee even if you didn’t have AppleCare+. I explained our story to the Apple Genius, telling him exactly what had happened to the phone. I didn’t try to B.S. him or anything; I just flat out told him the truth. He said, “Well, I can take it out back, pull it apart, and see what might be wrong with it.” Having nothing to lose, I said sure. After about 5-10 minutes he came back and, after a long pause, said, “Well, today is your wife’s lucky day.” I was floored; we were going to get it fixed! But it got better. He said that only one of the moisture sensors had been activated and there was no sign of water damage in the phone other than that. Since there was no sign of real water damage, they were going to swap the phone out. FOR FREE! I couldn’t believe it. He returned to the back of the store to prepare some paperwork for the phone replacement. When he came back, he handed me a new phone out of a nondescript black box (not one of the retail boxes), made sure it worked, and sent me on my way.

Obviously we got a little lucky with the phone and the water damage, but the best part of the whole experience was that it was 100% hassle free. I didn’t have to argue with the guy. I didn’t have to plead my case about not having the funds to buy a new one or pay the $199 “repair” fee (which I probably would have done had it come to that). A lot of people complain about Apple, their products, or even their service, but I’ve had nothing but awesome experiences with them. They stand by their customers and their products.

Apple Still Standing By 17″ MacBook Pro Batteries

In November of last year, I posted a picture of yet another swollen MacBook Pro 17″ battery. At the time, I figured the battery was out of warranty, so I didn’t run right off to Apple to get a new one. I ended up purchasing a new battery at an Apple store while running errands one day, and didn’t have the old battery with me to show them. This week, though, I had to go to Apple to get my wife’s iPhone 4S looked at (the topic of another post) and I decided to bring my swollen battery with me. I hadn’t thrown it away (bad!) and hadn’t taken it to be recycled. Good thing. After getting the iPhone sorted out, I showed the Apple Genius my MacBook battery. No questions asked, he walked over to the shelf, picked up a new battery, opened it up, and handed it over. He just took the old one and put it in the box. I was floored. No hemming and hawing over it not being under warranty. He just flat out handed me a new one. So now I have two!

I’ve said it before, but I’ll say it again: no matter what you read about issues with these batteries, it’s my experience that Apple will stand by their products and give you a replacement. I’m sure this battery will swell again at some point, and I will no doubt be bringing it back to my local Apple store for another replacement!

Netflix Annoyances – Can’t Gift DVD Subscriptions

Netflix continues to annoy me, and I’m not even a member anymore. My wife and I canceled our account earlier this year when Netflix announced the price hikes on their DVD and streaming subscriptions. I had thought about canceling just the streaming subscription because the titles available sucked, especially compared to what we could get on HBO/Cinemax/Showtime, but I was just overly annoyed so I gave them the boot. We weren’t alone, either, as thousands of subscribers punted on their Netflix subscriptions. My mother, on the other hand, kept hers.

She lives alone, and having access to movies to watch, especially during the winter, was worth the price she paid. This year for Christmas (hopefully she’s not reading this, in case I figure out how), I wanted to get her a year-long subscription to Netflix. I figured it was a gift she could not only use, but enjoy. When I went to Netflix’s site, though, all I could see were subscriptions to their streaming packages. A full year of their streaming service would cost $99.85. Ok, great, $100 for crappy titles on-demand. No DVDs. Not even the option to gift the DVD subscription instead of streaming.

Now, I could be totally off base here and their streaming service could be 1,000 times better than it was, but I haven’t had anyone tell me, “Bill, get Netflix! Their streaming is awesome now!” If someone had, I’d probably at least check it out for a month. But no, no evidence of that. So my question is: why would I spend $99.85 on something I’m not convinced is of value as a gift? And why wouldn’t Netflix offer both as gifts? I understand that streaming is the wave of the future, but until you can get every title on-demand, it just doesn’t seem worth it. What do you think?

SqlCacheDependency and Query Notifications

There’s a lot of scattered information out there on how to configure ASP.NET applications to leverage Microsoft SQL Server’s Query Notification and Service Broker services for caching. The two best step-by-step tutorials I’ve found online are:

http://www.simple-talk.com/sql/t-sql-programming/using-and-monitoring-sql-2005-query-notification/

http://dimarzionist.wordpress.com/2009/04/01/how-to-make-sql-server-notifications-work/

Both of those articles should get you started for sure. While leveraging Query Notifications for caching in a few of my sites, though, I ran into issues that would crash our application after a period of time. The biggest issue I found was the following exception in our logs:

When using SqlDependency without providing an options value, SqlDependency.Start() 
must be called prior to execution of a command added to the SqlDependency instance.

I never did quite get a handle on what was going on here. I did figure out, though, that I could always find this in my Application log around the time that exception was thrown:

The query notification dialog on conversation handle '{A1FB449B-DEB3-E011-B6D2-002590198D55}.' closed due to the following error: '-8470Remote service has been dropped.'.

So, does this mean that I called SqlDependency.Stop() and now queued notifications aren’t going to be delivered? Are these critical errors that keep the application from coming back? I’ve read that a lot of the Query Notification messages you see in the log aren’t critical errors and can be ignored, but I can’t ignore how closely this error coincides with the exception above.

Anyway, I finally decided to pull this stuff out of our application until I get a better handle on what’s going on. The last straw was that I was trying to sync some database changes during a maintenance period and couldn’t, because of a bunch of these SQL Query Notification issues. As I write this, I can’t even get my database back online; I’m still waiting for ALTER DATABASE SET SINGLE_USER to complete (approaching 3 hours!!!). As I keep waiting, my Application log keeps filling up with the following Query Notification messages:

Query notification delivery could not send message on dialog ‘{FE161F6A-D6B3-E011-B6D2-002590198D55}.’. Delivery failed for notification ‘85addbaa-ce66-431d-870f-d91580a7480a;d527d584-9fd4-4b13-85bc-87cb6c2e166f‘ because of the following error in service broker: ‘The conversation handle “FE161F6A-D6B3-E011-B6D2-002590198D55” is not found.’.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

I had a response to a post I made on the ASP.NET Forums where it was suggested that, with all the cached items in the system, SQL Server simply could not catch up. This is a problem because not only does it slow the entire system down, but when you have to cycle the SQL Server service itself, it takes forever for the system to come back up because all of the notifications get requeued or something.
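For what it’s worth, one cleanup I’ve seen suggested for this situation (and would only attempt during a maintenance window, with a fresh backup) is to manually end the leftover Service Broker conversations so the backlog stops churning. Roughly, in the affected database:

-- A rough sketch, not something I ran verbatim: ends EVERY conversation in the
-- database, including any non-Query-Notification ones, and drops their pending
-- messages. Treat it as a reset, not a fix.
DECLARE @handle UNIQUEIDENTIFIER;
SELECT TOP 1 @handle = conversation_handle FROM sys.conversation_endpoints;
WHILE @handle IS NOT NULL
BEGIN
    END CONVERSATION @handle WITH CLEANUP;
    SET @handle = NULL;
    SELECT TOP 1 @handle = conversation_handle FROM sys.conversation_endpoints;
END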