Category Archives: Amazon Web Services

Ruby on Rails: Sending Email with ActionMailer via SES and SMTP

I’m a huge fan of Amazon Web Services. They’ve served me quite well with some of my ASP.NET applications, so I’d be foolish (in my opinion anyway) not to use them in my Rails applications. I need to send transactional email from some of my Rails web apps, so I set up my SES account. I didn’t want to use the amazon-ses-mailer gem since SES now supports SMTP directly. So I generated my SMTP credentials and went into my application.rb file to start configuring ActionMailer.

I was a little stumped for a while on how I should configure ActionMailer’s SMTP settings to use SES via SMTP. A few blog posts, and finally a StackOverflow post, led me to this configuration:

config.action_mailer.default_url_options = {
  :host => "thenameofyourdomain.com"
}

config.action_mailer.raise_delivery_errors = true
config.action_mailer.delivery_method = :smtp

config.action_mailer.smtp_settings = {
  :address => "email-smtp.us-east-1.amazonaws.com",
  :port => 465,
  :domain => "thenameofyourdomain.com",
  :authentication => :login,
  :user_name => "your-ses-smtp-username",
  :password => "your-ses-smtp-password"
}

Now, SES’s SMTP endpoint requires a TLS (SSL) connection, so you need one more snippet of code in an initializer that runs when your application starts up:

require 'net/smtp'

# Monkey patch Net::SMTP to always report TLS support so that
# every SMTP connection (including the one to SES) is encrypted
module Net
  class SMTP
    def tls?
      true
    end
  end
end

The final hint for me to get this working was the require ‘net/smtp’. Without it the email wouldn’t actually get delivered. I wanted to record this mostly for my reference in the future, but hopefully it helps any other Rails developers who get stuck on this topic.
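
As a side note: if your version of the Mail gem supports STARTTLS, you may be able to skip the monkey patch entirely by pointing ActionMailer at SES’s STARTTLS port (587) instead of 465. This is just a sketch of that alternative, assuming your Rails version honors the :enable_starttls_auto option:

config.action_mailer.smtp_settings = {
  # SES also listens on port 587, where the Mail gem can negotiate
  # TLS automatically via STARTTLS
  :address              => "email-smtp.us-east-1.amazonaws.com",
  :port                 => 587,
  :domain               => "thenameofyourdomain.com",
  :authentication       => :login,
  :user_name            => "your-ses-smtp-username",
  :password             => "your-ses-smtp-password",
  :enable_starttls_auto => true
}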

Create an XML Sitemap on Heroku via Amazon S3

I’ve started hosting a few simple Rails applications on Heroku and so far, I’m really pleased with their hosting service. This post isn’t as much about Heroku as it is about how to serve an XML sitemap for your application. Heroku apps don’t give you file system access from within your application, so you’re forced to host your sitemap on an external service, like Amazon S3. There’s a great plugin called sitemap_generator that lets you generate a sitemap and upload it to your Amazon S3 account using CarrierWave and Fog.
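
For reference, here’s roughly what the plugin’s config/sitemap.rb might look like with its CarrierWave-backed adapter. Treat this as a sketch: the bucket and host names are placeholders, and you should check the adapter name against the version of the plugin you install.

# config/sitemap.rb -- a minimal sketch; adjust hosts and paths to your setup
SitemapGenerator::Sitemap.default_host = "http://thenameofyourdomain.com"

# Host where the generated sitemap files will actually live (S3/CloudFront)
SitemapGenerator::Sitemap.sitemaps_host = "http://your-bucket.s3.amazonaws.com/"
SitemapGenerator::Sitemap.public_path = "tmp/"
SitemapGenerator::Sitemap.adapter = SitemapGenerator::WaveAdapter.new

SitemapGenerator::Sitemap.create do
  # Add your application's URLs here
  add "/about", :changefreq => "monthly"
end

With something like that in place, running rake sitemap:refresh rebuilds the sitemap, uploads it, and pings the search engines.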

Even though sitemap_generator will ping all of the major search engines when you build your sitemap (which you should rebuild regularly with a rake task), you will want to configure the sitemap in Google Webmaster Tools. Unfortunately, Webmaster Tools will only let you submit a sitemap served from your own domain, not another host. What can we do to fix that?

Well, the easiest solution I came up with was to create a controller that handles your sitemap by redirecting to the location of your sitemap on S3 (via CloudFront, obviously). So, let’s get to the code. Create a file called sitemap_controller.rb in app/controllers and paste this in:

class SitemapController < ApplicationController
  def index
    redirect_to SITEMAP_PATH
  end
end

This will redirect a call to the index action of this controller to the value of SITEMAP_PATH. But what is SITEMAP_PATH? In my case, my application relies heavily on a custom Rails engine where all of my controllers and models are defined, so I figured it would be nice to configure the location of the sitemap on a per-application basis. So in my actual Rails application, I created an initializer and set the value of SITEMAP_PATH. Put this in config/initializers/sitemap.rb:

SITEMAP_PATH = "http://somepathtoyoursitemap.com/"

That's the actual location of your sitemap on S3 (again, most likely via CloudFront). Now all that's left is to wire up a Rails route to actually respond to a request for sitemap.xml. That's done easily enough with the following:

match "/sitemap.xml", :controller => "sitemap", :action => "index"

That's it! Simply restart your app if it's already running so the initializer will load, then access your sitemap.
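
One last note: since the sitemap should be rebuilt regularly, it's worth scheduling that rake task rather than running it by hand. If you happen to use the whenever gem (an assumption on my part; any scheduler will do), the schedule might look like this:

# config/schedule.rb -- assumes the whenever gem
every 1.day, :at => "4:30 am" do
  # Rebuild the sitemap, upload it to S3, and ping the search engines
  rake "sitemap:refresh"
end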

Uploading Content to Amazon S3 with CloudBerry Labs’ S3 Explorer

I recently made the move to Amazon S3 and CloudFront to store and serve static content, in particular images, for some of my e-commerce web sites. We have thousands of images to serve to our visitors, in all different sizes. To get started, I went to Google to search for some quality tools. I stumbled upon CloudBerry Labs’ application S3 Explorer and downloaded it to give it a try. Installation was a snap, and fairly quickly I was configuring my Amazon S3 account in S3 Explorer. What’s very cool about this is that you can save as many S3 accounts as you have, storing them for use later on. To configure an S3 connection, you will need your Amazon Access Key and your Amazon Secret Key. Now it was time to upload!

Like I mentioned earlier, we have thousands of images. In fact, we have over 27,000 images. And that’s just in one image dimension size! We have 6 sizes, so that’s well over 160,000 images. That would be a bear to do through Amazon’s S3 web interface. Especially if I needed to set headers and permissions. CloudBerry S3 Explorer came in handy for this. I selected one set of images and before I started the upload, it allowed me to set any HTTP Headers I needed on my images. After that, up they went. I’d say with my connection, it took an hour or so to get all of them up to S3, depending on the file sizes. After uploading, I needed to set permissions, which I was able to do by just selecting all of the S3 objects and setting the proper permissions. This was kind of slow because CloudBerry S3 Explorer needed to get information on all of the objects I had selected, which was over 27,000.

All in all, I think it took me a couple of days to sporadically upload and set up all of our images. The beauty is that now we’re serving them from CloudFront, which makes our sites quite a bit faster. A total win-win for us.

A few notes about this wonderful application:

  • It’s incredibly easy to set permissions on objects. There’s a check box if you want to open the objects up for the world to download, which was nice for us. It would have been nice to be able to do this before upload, like HTTP Headers, but I didn’t see how.
  • Very easy to set HTTP Headers and any metadata you need on your objects. And you can do it before the upload starts!
  • One thing that confused me a little on Windows 7: when I minimized S3 Explorer, it went into my task bar and not with the other minimized applications. It took me a little while to figure out where it was hiding. At first I just thought the application had crashed on me.
  • Overwriting objects preserves HTTP Headers and permissions, something I was a little concerned about.
  • Moving data between S3 folders and buckets was really easy. Again, it preserves HTTP Headers and permissions.

So, all in all, my impressions of this application are really good, and I was only using the Freeware version. The Pro version, for only $39.99, offers unlimited S3 accounts and multi-threading, which speeds up your uploads. Other features available in the Pro version are:

  • Compression
  • Encryption
  • Search
  • Chunking
  • FTP Support
  • Sync

For more information on CloudBerry Labs’ S3 Explorer, check out their product page for S3 Explorer. Hopefully you’ll find this nifty little application as useful as I did!

Determine if Amazon S3 Object Exists with ASP.NET SDK

After my earlier posts on invalidating Amazon CloudFront objects, I thought it would be important to check whether an Amazon S3 object exists before trying to invalidate it. With the limit of 1,000 free invalidation requests before Amazon charges you for them, this seemed a prudent thing to do. So, I turned to the Amazon Web Services ASP.NET SDK to help me out. This is what I came up with:

// Requires: using Amazon.S3; using Amazon.S3.Model; using System.Net;
public bool S3ObjectExists(string bucket, string key)
{
    using (AmazonS3Client client = new AmazonS3Client(this._awsAccessKey, this._awsSecretKey))
    {
        GetObjectRequest request = new GetObjectRequest();
        request.BucketName = bucket;
        request.Key = key;

        try
        {
            // If GetObject succeeds and hands back a response stream,
            // the object exists
            using (S3Response response = client.GetObject(request))
            {
                if (response.ResponseStream != null)
                {
                    return true;
                }
            }
        }
        catch (AmazonS3Exception)
        {
            // Thrown when the object isn't there
            return false;
        }
        catch (WebException)
        {
            // Thrown when the object is there but the request can't complete
            return false;
        }
        catch (Exception)
        {
            // Catch-all while testing; see notes below
            return false;
        }
    }
    return false;
}

I decided that if I found a valid ResponseStream on the S3Response, then I had a valid object. All I’m checking is the object key itself, i.e. an image path in S3. Another note: I’m catching three different exceptions but returning false for all three. The reason I coded it this way for now is that I wanted to see which exceptions GetObject might throw depending on what was wrong with the request. This was done purely for testing purposes and will probably change in the future. For instance, I discovered that AmazonS3Exception is thrown when the object isn’t there, while WebException is thrown when the object is there but the request cannot be completed. I’m still in the testing phase with this, but I hope this helps some other Amazon Web Services developers out there.

Invalidating Content on Amazon CloudFront with ASP.NET SDK

I’m working on integrating some of Amazon’s web services into our eCommerce platform. I’ve been working on performance enhancements on and off for the last year, and content delivery is the last big step for us. Getting started on S3 and CloudFront was pretty easy, but I ran into some issues when updating content in our S3 buckets. Luckily, Amazon added the ability to invalidate CloudFront content at the end of August. Since we use ASP.NET, I’ve started to work with their .NET SDK. Turns out, it’s pretty easy to invalidate some content.

public bool InvalidateContent(string distributionId, List<string> files)
{
    using (AmazonCloudFrontClient client = new AmazonCloudFrontClient(Settings.AWSAccessKey, Settings.AWSSecretKey))
    {
        // Build the batch of paths to invalidate; the caller reference just
        // needs to be unique per request, so an HTTP date works fine
        PostInvalidationRequest request = new PostInvalidationRequest();
        InvalidationBatch batch = new InvalidationBatch();

        batch.Paths = files;
        batch.CallerReference = GetHttpDate();

        request.InvalidationBatch = batch;
        request.DistributionId = distributionId;

        PostInvalidationResponse response = client.PostInvalidation(request);
        if (String.IsNullOrEmpty(response.RequestId))
        {
            return false;
        }

        // Poll Amazon until the invalidation is no longer "InProgress"
        bool bInProgress = true;
        while (bInProgress)
        {
            GetInvalidationRequest getReq = new GetInvalidationRequest();
            getReq.DistributionId = distributionId;
            getReq.InvalidationId = response.Id;

            GetInvalidationResponse getRes = client.GetInvalidation(getReq);
            bInProgress = getRes.Status.Equals("InProgress");

            if (bInProgress)
            {
                // Wait a bit before asking Amazon again
                Thread.Sleep(Settings.AmazonWaitTime);
            }
        }
    }

    return true;
}

InvalidateContent expects a CloudFront distribution ID and a list of S3 files to invalidate. The static Settings class is just a class that reads in configuration settings from a configuration file, be it App.config or any configuration file you wish to set up. The basic values to store are your AWS Access Key, your AWS Secret Key, and a wait time before requesting information from Amazon again. There is also a method GetHttpDate(), which just returns the current UTC time formatted as an HTTP date:

private string GetHttpDate()
{
    // e.g. "Tue, 15 Nov 2011 08:12:31 GMT"
    return System.DateTime.UtcNow.ToString("ddd, dd MMM yyyy HH:mm:ss ", System.Globalization.CultureInfo.InvariantCulture) + "GMT";
}

InvalidateContent() will make a PostInvalidationRequest (which is part of the ASP.NET AWS SDK) through an AmazonCloudFrontClient object. If this request posts successfully, Amazon will return a RequestId value that you can use to poll for the completion of your request. Keep in mind that you can only invalidate 1,000 S3 objects a month for free. After that, Amazon will start to charge you $0.005 per object invalidated. This is per file, not per invalidation request. Hopefully you found this helpful!