Reasons why Redis is a great option for a worker queue in Magento



Alan Kent asked about this on Twitter, so here it is: you may know that I am partial to Redis as a worker queue for Magento.  Here are some reasons why.

  1. It is stupid simple to set up (meaning newbies and experts alike can use it)
  2. It is blazing fast (its slow log is measured in microseconds)
  3. It does pub/sub for one-to-many messaging
  4. It does BLPOP for many-to-one (worker) consumption
  5. It is already supported by the existing Cm_Redis modules

Things you do not get

  1. Durability
  2. Complex routing rules
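
To illustrate how little code a many-to-one worker queue takes (point 4 above), here is a minimal sketch using the phpredis extension; the queue name and payload are made up for the example.

```php
// Producer: push a job onto a list (any web request can do this).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->rPush('magento:jobs', json_encode(array(
    'task'    => 'send_order_email',
    'orderId' => 100000123,
)));

// Worker: block for up to 30 seconds waiting for the next job.
$worker = new Redis();
$worker->connect('127.0.0.1', 6379);
while (true) {
    $item = $worker->blPop(array('magento:jobs'), 30);
    if ($item) {
        // $item[0] is the list name, $item[1] is the payload
        $job = json_decode($item[1], true);
        // ... do the work ...
    }
}
```

That is the whole mechanism: no broker configuration, no exchange or routing setup, just a list and a blocking pop.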

When I worked on my last talk for Imagine I spent a week trying to get a fast, working implementation of ActiveMQ and RabbitMQ.  I looked at the result and thought “this is way more complicated than it needs to be” and tried Redis.  What I had spent a week working on was up and running in four hours, including a Java-based worker and the PHP refactoring.

The HA portion is missing out of the box, but I believe you can achieve it with Sentinel.  In some ways it’s like MySQL: it’s beastly easy to get set up and running, but if you need to do more, you can.

New Zend Server Job Queue Library


This will probably be one of the last posts I do on the Zend Server Job Queue functionality.  From this point on they will probably be less frequent, though I’m sure several posts will allude to it.  The reason I’m going to put it on the back burner is because I have written a library which is now available on GitHub.  What it does is encapsulate all of the functionality that I think needs to be there to be able to implement asynchronous functionality.  I’ve been working on it off and on for at least a year, trying out different things.

As I was working through it there were a few things that I wanted to accomplish.

  1. It needed to be easy to use
  2. It needed to be installable in a few minutes
  3. It needed to hide a lot of the implementation details

I think that I’ve been able to accomplish most of that.  With that said, let’s take a quick run through.

Creating the job class

There is only one class that you need to know.  It is called JobAbstract and it lives in the com\zend\jobqueue namespace.  Yes, the library requires PHP 5.3, though it can drive PHP 5.2 code; I’ve already built a Magento extension for it.  To implement a job, on the code side, you simply create objects that represent your tasks, extending the JobAbstract class and implementing the job() method.  So if you had a job that you wanted to have send out an email, it would look something like this.
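
Hedging a bit here: everything except JobAbstract and job() is invented for the example, but an email job might look like this.

```php
use com\zend\jobqueue\JobAbstract;

// Hypothetical job class for the example; only JobAbstract
// and job() come from the library itself.
class SendEmailJob extends JobAbstract
{
    private $to;
    private $subject;
    private $body;

    public function __construct($to, $subject, $body)
    {
        $this->to      = $to;
        $this->subject = $subject;
        $this->body    = $body;
    }

    // job() runs on the job queue server, not on the web server
    public function job()
    {
        mail($this->to, $this->subject, $this->body);
    }
}
```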

Then when you want to execute this job you simply call the job
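
Something along these lines, assuming a hypothetical SendEmailJob class that extends JobAbstract:

```php
// Queue the job; execute() hands it to the Manager, which posts
// it to the backend and returns a small response object.
$job = new SendEmailJob('customer@example.com', 'Your order', 'Thanks!');
$response = $job->execute();
```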

This will invoke the Manager object (which we’ll look at in a bit), which will then send the request to the specified job queue server, or to a load balancer.  The response is important if you want to get the results of the job.  If the request is serviced through a load balancer you don’t necessarily know where your job is going to execute.  Thus, if you want the result of the job you need to know how to get to the job.  That information is stored in the response.  It’s a very simple, serializable object that is intended for persistent storage, such as a session or database.

Setting up the Manager

The manager is mostly self-contained.  The only thing it requires is that you provide a URL for the backend job queue end point.  The end point needs to only contain the following code.
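
The method name below is an assumption for this sketch; the point is that the endpoint does nothing except hand the request to the Manager, which deserializes and runs whatever job was posted to it.

```php
// jobqueue.php - backend entry point
// (the autoload path and Manager::run() are assumptions)
require_once 'com/zend/jobqueue/autoload.php';

use com\zend\jobqueue\Manager;

$manager = new Manager();
$manager->run(); // deserializes the posted job and calls its job() method
```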

Now, your application may require more at the end point to bootstrap it, but that’s all it needs to interact with your front end servers.

On the front end you need to tell the Manager what the URL is to connect to a job queue server.  As I said before, it can be an individual server or a load balancer in front of a hundred job queue servers, it really doesn’t matter, except that you can scale quite easily without any configuration changes.  To set the URL simply tell the Manager what the URL is.
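
The setter name here is an assumption, but conceptually it is a one-liner, likely placed somewhere in your bootstrap:

```php
use com\zend\jobqueue\Manager;

// Point the Manager at the backend entry point
// (or at a load balancer in front of many backends)
Manager::setUrl('http://jobs.example.local/jobqueue.php');
```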

And that’s pretty much it.

Communicating with the backend

Once you’ve sent the job off to the backend to execute you might want to interact with the results of the job.  Once the job object has finished executing, the Manager will serialize the job so you can get the results.  This would typically be done via getters and setters.  I have a good example of this in the library code on GitHub.

To execute this job you would call
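
Using the GetRemoteLinks job that ships with the library as the example (the constructor argument is my guess; check the source on GitHub):

```php
$job = new org\eschrade\job\GetRemoteLinks('http://www.eschrade.com/');
$response = $job->execute();
// persist $response (session, database) for the status check later
```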

Then later on
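
Something like the following, where $response is the object you got back when the job was queued; the method names (getCompletedJob(), getLinks()) are my approximation, so verify them against the library source:

```php
// Ask the Manager whether the job identified by $response is done
$job = Manager::getCompletedJob($response);
if ($job !== null) {
    // The original job object comes back; read its results via getters
    $links = $job->getLinks();
}
```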


That’s pretty much it.

Implementing asynchronous functionality in Magento


ECommerce is a small thing, right?  Nobody’s doing it and it’s so simple that everyone who does it is doing it right.  When that Cyber Monday hits, nobody panics; sites stay up, they’re able to handle the load and nobody gets yelled at, right?

OK, maybe 20 years ago.

PHP eCommerce had humble beginnings.  Very humble beginnings.  And when those humble beginnings started to show there was a company that seized upon that opportunity.  The result is an eCommerce platform called Magento.  If you are reading this, it is likely that you know about Magento.  Maybe you use it, maybe you don’t.  But you probably have an opinion.  Whether you like the software or not that is the sign of a strong ecosystem.

I was asked to submit for the Magento Imagine conference in Los Angeles.  The topic of my talk was based off of a talk I did at ZendCon 2010 called “Do You Queue?”  That talk was about some general scalability considerations along with an example of a library that I wrote which allows you to utilize the Zend Server Job Queue to do easy asynchronous execution.  The talk I gave at Imagine was the same talk.

Except that it required a lot more code.  The reason for this is because I took the code that I wrote for ZendCon and created an abstraction layer that directly integrated with Magento without any changes in the core code.  What this means is that ANYONE who had any need for asynchronous execution (doing stuff outside of the inline code) can take that code and bake it into their own Magento installation.

That code is available on GitHub, the links for which I will provide in a moment.

There are three extensions (and a fourth library that they are based off of) that I wrote which can take you from simply implementing this asynchronous processing to actually doing Ajax-based payment processing.

On a very simplified level, the way the Zend Server Job Queue works is that you can tell the Job Queue to execute a URL asynchronously from the source request.  In other words if you have something that needs to execute some complex, or long running code, you can do so by simply calling a URL where that logic resides.

Which is cool, but I prefer more elegant constructions.  Maybe, just maybe, there’s a shortcut (and a gold star for you if you get the movie reference).

What I did was build a library that allows you to take this URL-based approach and change it to an object-based approach.  There are a couple of classes to be aware of, all part of the library, which you can download from GitHub.  They are based on PHP 5.3.

  • Manager 
    • Handles connecting to the queue and passing results back and forth
  • JobAbstract 
    • Abstract class that a job would be based off of
  • Response 
    • The response from the manager when a job is queued.  Contains the server name and job number

The only two classes you need to be concerned about are JobAbstract and Response.  Any job needs to extend the JobAbstract class.  This is sort of a “gateway” class that both takes the input and provides the result of the job, typically via a getter and a setter.  To see an example of this, download the Job API code and look at the class in the file jobs/org/eschrade/job/GetRemoteLinks.php.

To execute a job, simply instantiate the job, provide whatever data it needs and call the execute() method.
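
With the GetRemoteLinks job from the library as the example (the constructor argument is assumed):

```php
$job = new org\eschrade\job\GetRemoteLinks('http://www.eschrade.com/');
$response = $job->execute(); // returns the Response object
```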


That method returns a response which provides the server name and job ID. It is used later on when you check to see if the job has completed.  That is done with some very simple code as well.
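
Roughly like this, with the Manager method name being my best approximation of the library's API:

```php
// $response is the Response you stored when the job was queued
$job = Manager::getCompletedJob($response);
if ($job === null) {
    // not finished yet - check again later
} else {
    // the original job object, ready to hand over its results
    $links = $job->getLinks();
}
```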


If the manager returns a null value it means that the job has not finished executing.  If it has completed then it will return the instance of the original object back to you so you can use it to retrieve the results.

I don’t want to spend too much time on the details, so to see this working, download Zend Server (a 30 day trial license is available) along with the library code.  It should be a very quick install.  On Linux machines it should work out of the box, though on Windows machines you may need to set a named queue (zend_jobqueue.named_queues) that matches your hostname to the value of zend_jobqueue.default_binding.  In my case, the value of zend_jobqueue.named_queues is LAP-KEVIN:tcp://

That whole introduction is to bring you to a place where you can get a minimal view of how the Magento extension I built works.  I would recommend understanding how the base code works before diving into the Magento portion.

There are two primary Magento extensions that I built that utilize this.  The first is the abstraction layer that implements my prior job API.  The second is an example, called Async_Payment, which intercepts payment requests and does them asynchronously.

The Job Queue layer is an extension called Zendserver_Jobqueue and is available on GitHub.  Once installed it will require you to provide an entry point URL for the location where the jobs will actually execute.  This can be the local machine, a remote machine or a load balancer.  It is set in the regular configuration GUI in Magento.  My value is http://mage.local/jobqueue/, since it uses the regular router.  If you have custom routing you may need to change that.  The URL needs to call Zendserver_Jobqueue_IndexController::indexAction(), which is where the Job Queue manager is invoked.

If you look in the controller code you will also see a quick example that shows how this works.  There is a sample job provided called Zendserver_Jobqueue_Job_Nots.  What it does is take a Boolean value and negate it, storing the result for later.  The job extends Zendserver_Jobqueue_JobAbstract which, in turn, extends com\zend\jobqueue\JobAbstract.
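
I'm sketching from memory here, so the property and accessor names are assumptions, but the Nots job is essentially this shape:

```php
class Zendserver_Jobqueue_Job_Nots extends Zendserver_Jobqueue_JobAbstract
{
    private $value;

    public function setValue($value)
    {
        $this->value = (bool) $value;
    }

    public function getValue()
    {
        return $this->value;
    }

    // "nots" the Boolean: the negated value becomes the job's result
    public function job()
    {
        $this->value = !$this->value;
    }
}
```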

As a side note, the version of Magento that I had when writing this, which was off of the 1.5 development branch, did not support PHP 5.3 namespaces, so I needed to build a mechanism that included the Zend Framework autoloader, which does.  My understanding is that this is an issue that will be fixed shortly.

The next extension is the one called Async_Payment.  What it does is use an observer to redirect payment requests to the controller in Async_Payment.  This is configured under the ASYNCHRONOUS PAYMENT category, which shows the different payment methods but adds another tab called Asynchronous Settings.  The setting there allows you to turn the asynchronous processing on and off (handled in the observer).  To make it work you need to give it the view templates to watch for.  When it sees that one of those templates (comma separated) is being rendered, it appends some JavaScript that overrides some of the functionality of the one-click checkout method to redirect the payment request to the Async_Payment controller.  Still following me?  My value is checkout/onepage.phtml.  So when that view is being rendered, the extension knows that it needs to inject some JavaScript into the view to take hold of the payment request.

The final payment request is redirected to the Async_Payment_IndexController class.  What it does is take the data being submitted, which is exactly the same as the normal payment request, and passes it into a job, which is then executed in Async_Payment_IndexController::taskexecAction().  Then the browser will call Async_Payment_IndexController::oneclickpingAction() to check the queue manager to see if the asynchronous payment has been completed.

The asynchronous payment job is actually quite simple.  It pretends to be a browser, makes an HTTP request to the original payment URL and returns the result.  The next time the browser calls oneclickpingAction(), the raw result is returned to the browser, which interprets it as if it had been a normal request, and you’re on your way.

Where to go from here?  First, download Zend Server.  There’s a 30 day trial license that you can use to try this stuff out.  Second, download the simplified Job Queue library.  Run the unit tests and debug the code; that’s the best way to understand what’s actually going on.  After that, download and install the Magento extensions.  I think it is critical to work in this order, especially if you’re a coder.  Jumping straight into the Magento extensions will probably just confuse you if the basic job queuing mechanism isn’t properly understood first.

Have fun, and drop me a line on email (kevin @ zend) or on Twitter.

Pre-caching FTW


I just had an epiphany.  I’ve talked about pre-caching content and the benefits thereof before.  But this is the first time I realized not only that there are benefits, but that doing it is BETTER than caching inline.  Let me sum up… no, there is too much.  Let me explain.

Typically caching is done like this (stolen from the ZF caching docs):
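
From memory, the pattern in the ZF docs looks roughly like this; the cache directory, lifetime and key are placeholders:

```php
$cache = Zend_Cache::factory(
    'Core',
    'File',
    array('lifetime' => 7200, 'automatic_serialization' => true),
    array('cache_dir' => '/tmp/')
);

if (($data = $cache->load('mykey')) === false) {
    // Cache miss: do the expensive work inline, then store it
    $data = buildExpensiveData(); // placeholder for the real work
    $cache->save($data, 'mykey');
}
```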

Pretty easy.  But what happens if you have code like this:
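
Something like this, with the credentials obviously faked:

```php
if (($feed = $cache->load('youtube_uploads')) === false) {
    // The GData login happens inline, on a cache miss,
    // in the middle of a page request
    $httpClient = Zend_Gdata_ClientLogin::getHttpClient(
        'me@example.com',
        'my-password',
        Zend_Gdata_YouTube::AUTH_SERVICE_NAME
    );
    $yt   = new Zend_Gdata_YouTube($httpClient);
    $feed = $yt->getUserUploads('default');
    $cache->save($feed, 'youtube_uploads');
}
```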

What’s so important about this code?  Is it the fact that it makes a remote call?  Is it because it uses GData?  Nope.  It’s because it has a username and a password.  Given the previous caching pattern, what happens if that password changes (like mine did)?  Your site is down.

So, why do I now think that pre-caching is better than inline caching?  Look at my front page.  You would never know that I’m currently having a problem because it’s still reading from the same cache key (with non-expiring data).

THAT is why I’m forming the opinion that pre-caching/asynchronous caching not only has benefits over inline caching, but that it may actually be better.  I’m not one to make blanket statements, and I’m not going to.  But I am toying with the idea of using pre-caching as the default mechanism for caching instead of the other way around.

Google Analytics feed handling


So there I was, looking at some other websites out there (because I think my site design sucks.  Thanks, me).  One of the things that virtually no blogs do is promote specific content.  In other words, highlight content that is most popular over a certain time frame.  So I was thinking to myself, how would I do that?  One option would be to have a database table that could record each click.  That, however, is boring and requires changes to my DB schema (evil!).  What I want to do is take my most popular pages of the last week and highlight them at the top of the web site.

Then I realized that I’m already doing it, with Google Analytics.

But how would I do it?  Turns out there’s already a proposal in the Zend Framework wiki for a Google Analytics GData service.  It’s not in the main line but it’s in good working order and you can git it from GetHub (bad joke intentional).  So I downloaded it from there and placed it in my Blog /library directory, breaking the coding standard that states that only things in the Zend Framework may have the Zend_ pseudo namespace.  Oh well, it works.

The way I have implemented this is to set it up as a precache.  What that means is that I use the Zend Server Job Queue to run it at periodic intervals, say once a day, and then take the results and cache them in a non-expiring cache.

This code makes use of the Task class that I had built out earlier on (go down to the “Doing it Cool-ly” section).
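
The proposed Analytics service API may well change, so take the class and method names here as approximations rather than gospel; the option keys match the ones described below:

```php
class Admin_Task_PopularContent extends Esc_Queue_TaskAbstract
{
    protected function _execute(Zend_Application $app)
    {
        $options = $app->getOption('analytics');

        // Log in to Google and create the (proposed) Analytics service
        $client = Zend_Gdata_ClientLogin::getHttpClient(
            $options['user'], $options['pass'], 'analytics'
        );
        $service = new Zend_Gdata_Analytics($client);

        // Query pageviews per page for the configured time window
        $query = $service->newDataQuery()
            ->setProfileId($options['profileId'])
            ->addDimension('ga:pagePath')
            ->addMetric('ga:pageviews')
            ->setStartDate(date('Y-m-d', strtotime($options['start'])))
            ->setEndDate(date('Y-m-d', strtotime($options['end'])))
            ->addSort('ga:pageviews', true)
            ->setMaxResults($options['count']);

        $feed = $service->getDataFeed($query);

        // Store the results in a non-expiring cache for the front end
        $manager = $app->getBootstrap()->getResource('cachemanager');
        $manager->getCache('popular')->save($feed, 'popular');
    }
}
```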


You might notice a few things.  First is that I have several options that I retrieve from my Zend_Application class.  Here is a copy of those options.
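
The exact key names are mine, but the shape of the entries in application.ini is roughly this (the profile ID is x'ed out here too):

```ini
analytics.user      = "me@example.com"
analytics.pass      = "xxxxxxxx"
analytics.count     = 5
analytics.start     = "-1 week"
analytics.end       = "now"
analytics.profileId = "xxxxxxxx"
```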

The count is the number of items to retrieve.  Start and end are set for strtotime().  However, the interesting one that I have x’ed out (because I don’t know if it’s a security risk) is profileId.  That is the individual website profile identifier that uniquely identifies an individual site for you.  This is different from the tracker number, such as UA-13220492-1.  To find out what the profile ID number is log in to Analytics, go to your website and hover over “View Report”.  In the URL you will see a query string value for the key “id”.  That is your profile number.

So what does this code do?  First of all it logs in to Google using the credentials you supplied.  After that we create a new service class and create a query.  In the query I need to set at least the profile ID.  But what I can also do is state the type of results I want, the metrics, start and end time and a few other things.  After I’ve done that I retrieve the data feed.

The code after that is simply code that I use to match up the URLs that Google reports back to me with pages I have in the database.  First I remove all of the stale data from the array that was built from the Analytics results (the foreach followed by strpos); then I iterate over the Google results and add the content I want to highlight into the array.  Sweet.  Done.

Please note that the code for this may change as it is not part of Zend Framework (yet).  Or it might be declined.  Who knows?  Not me.  But until then, this seems to work pretty well for when you want to make content available based off of Google Analytics data.

Magento-based asynchronous execution


Working with an off-the-shelf shopping cart usually requires a little bit of patience. Scaling an e-commerce site does have its share of problems. There is a LOT of interactivity that needs to be implemented. This can be things along the lines of generating targeted ads, sending email or charging a credit card.

To charge a credit card, the ecommerce software will usually take the credit card information from the end user, put it into some form of web service request, and submit that request to a remote system.  While the web service request is taking place, the PHP process handling it is unable to take additional requests to serve regular pages.

One option that you have is to complain that PHP doesn’t have threading.  That’s not the best thing to do.  As Marco Tabini said recently on Twitter “Every time someone mentions threading in PHP, an angel’s wings enter a race condition”.  Threading solves some problems.  However, chances are that while you may want threading you probably don’t need it.

However, while you probably don’t need threading, there are plenty of times when being able to do things asynchronously would be beneficial.  The example that I started looking at was a credit card request.  While waiting for the credit card transaction to occur you have one of two options.  1) Let the screen be blank while you’re waiting for the transaction or, 2) use some kind of output buffering and progressive rendering to let the end user know that the transaction is, in fact, being processed.

However, there is another, better, option. Rather than either spending loads of CPU time to process loads of logic, such as personalized ads, or have long wait times, such as processing a credit card, you can have this processed “behind the scenes” so you can immediately respond to your customer.

A simple example of what a Job Queue architecture can look like is almost like a hub and spoke architecture except that instead of the hub being the center it is actually the outside.  Ok, so a simple Job Queue architecture is exactly the opposite of a hub and spoke architecture.  Sue me.

The way it works is that there is a backend server, or cluster of servers, that handles servicing Job Queue requests.  The requests are made from your front end web servers and are sent to a URL on the backend.  The URL is where the logic that needs to be run lives.

Using a simple architecture you can just have that URL be a simple script that is run.  However, I prefer a more structured solution if I am going to integrate asynchronous processing in my application.

This is where the Magento connection starts.  I have already written about how to implement a structured asynchronous mechanism.  It is the same implementation that I use on this blog site.  What I’ve done is take that implementation and re-implement it so that it works within the context of a Magento application.  I have placed this implementation on GitHub.  It is not yet part of Magento Connect, though I intend to put it there, and I intend for it to be provided free of charge.  However, I also wanted to give others the chance to look at it and improve it prior to putting it on Magento Connect.

Implementing your own task, be it pre-processing advertisements or processing a credit card, is very easy.  Processing a credit card, however, should be done with the addition of encrypting the data, as the data is stored “as-is” in the Job Queue database.

Defined in this library is a class called ZendServer_JobQueue_Job_Abstract.  This is the base class for defining a task.  There is only one method that you need to implement, though you can implement as many of your own methods as you want, such as getters and setters.  The method is called _execute() and it is where you implement the logic you want to run.  However, it is important to note that because the task runs on a completely different machine, once it has been set to execute, no local changes you make will be reflected in the job if it has already started running.

In the code download there is an example of how to implement this class.  It is called ZendServer_JobQueue_Model_Mock.  All it does is write to the PHP error log, but does so asynchronously from the Job Queue URL.  The code looks like this

class ZendServer_JobQueue_Model_Mock extends ZendServer_JobQueue_Job_Abstract
{
    protected function _execute()
    {
        error_log('Mock Model run');
    }
}

One thing to note.  It’s freaking easy to implement this!  If you want to run this, here is your code.

$task = new ZendServer_JobQueue_Model_Mock();
$task->execute(); // queues the task on the Job Queue server

Wham.  Bam.  Done.  It is now running on your Job Queue server.  I won’t get into all of the details on how it’s done, though.   You can take a look at the abstract class and understand the details yourself.  It is open source after all.

But if you were to run this code right now you would probably get an exception thrown.  That is because you have not configured your Job Queue yet.  In order to do that you need to look in the etc/config.xml file.  You need to edit the element config/modules/ZendServer_JobQueue/jobqueue/url and specify the URL of the job queue entry point.  Since there is an index controller for the ZendServer_JobQueue extension and I just used the standard router, the URL would be $HOST/jobqueue.  It is highly recommended that you make this URL available only over localhost or a private network.  It is not restricted by default, so I recommend that you set this up using either a virtual host that listens only on the local interface or a machine that is behind a firewall.

So, that’s pretty much it.  Though I suppose you’ll need Zend Server as well. 

To install Zend Server, set up your system to install it (or download it, for Windows).  It comes with a 30 day free trial.  Give it a shot.  If you have trouble, feel free to post on the forums or post a comment here and I can try to answer it.

Happy coding!

Pre-caching PHP content with Zend_Cache_Manager and the Zend Server Job Queue


With the web being what it is today, there can be a lot of times when you want to aggregate data from many different sources and bring it together in a single page.  I have not done much of that on my site, simply because it means I would need to learn a bunch of different APIs.  However, since YouTube is the #2 search engine, I figured it might not be a bad idea to aggregate some of my YouTube content on my page automatically.  I don't necessarily want to do a blog post about each individual video I post, but I wanted there to be some place where I could just list them out.

I have two places where I post content: YouTube and Facebook.  However, polling each site individually on every request is not conducive to a page that renders quickly.  The thing you do NOT want to do is poll YouTube each time someone comes to an individual page.  The way around this is to cache the contents of the YouTube or Facebook query.  Subsequent visitors then re-use the previously fetched data when they view that page, which makes most new requests much faster since they don't have to re-load the data from YouTube or Facebook.  However, there's a bit of a problem there as well.  Every X number of minutes the cache will expire and someone will take the hit of connecting to YouTube.  With a moderately low-traffic site such as mine there is a decent probability that the cache will expire between individual page requests, and that is a hit I didn't want my users to endure.  And, working for Zend, I can't have a page that renders slowly, can I?

So what I did was create a new Zend Server Job Queue task, which I have detailed several times (there should be links on the side), that connects to both YouTube and Facebook.  The task inserts the results into a cache (you could use a database if you liked) so that when someone comes to a page they see the cached data rather than triggering a poll of YouTube.  From a settings perspective, the cache is set to never expire its content, but because I set the task to run once an hour the content is refreshed anyway.  Using this pre-population method I am able to keep requests snappy while at the same time providing mostly up-to-date content.

The task to do this is relatively simple.  First I edit my application.ini file to set up the cache manager.

resources.cachemanager.video.frontend.name = Core
resources.cachemanager.video.frontend.options.automatic_serialization = true
resources.cachemanager.video.frontend.options.lifetime = null
resources.cachemanager.video.backend.name = File

By defining these ini settings, Zend_Application will automatically instantiate an instance of Zend_Cache_Manager and set up a cache that is named "video" with the individual options as specified.  What this means is that I could create another cache interface by taking these configuration lines and giving it its own configuration settings.  It could be different settings or even a completely different backend, or a different front end.

Then I create my task class.

class Admin_Task_VideoPreCache extends Esc_Queue_TaskAbstract
{
    protected function _execute(Zend_Application $app)
    {
        $yt = new Zend_Gdata_YouTube();
        $options = $app->getOption('video');
        $uploads = $yt->getUserUploads($options['youtube']['id']);
        $manager = $app->getBootstrap()->getResource('cachemanager');
        /* @var $manager Zend_Cache_Manager */
        $manager->getCache('video')->save($uploads, 'youtube');

        $query = 'SELECT title, description, embed_html FROM video WHERE owner=' . $options['facebook']['id'];
        $url = '' . urlencode($query);
        $data = simplexml_load_string(file_get_contents($url));
        $videos = array();
        foreach ($data->video as $video) {
            $videos[] = array(
                'title'       => (string) $video->title,
                'description' => (string) $video->description,
                'embed_html'  => (string) $video->embed_html,
            );
        }
        $manager->getCache('video')->save($videos, 'facebook');
    }
}

Because the Zend_Application instance is always passed in, I can easily get access to the predefined cache manager object when I need to store the data at the end of the task.  In the task I use Zend_Gdata_YouTube to query YouTube and a simple FQL query to get the Facebook videos (which stopped working somewhere between test, staging and production.  Go figure).

The next thing I have to do is make that data available to a view.  To do that I need to create a new controller action that queries the cache manager.

    public function myvideosAction()
    {
        $app = $this->getInvokeArg('bootstrap')->getApplication();
        /* @var $app Zend_Application */
        $cm = $app->getBootstrap()->getResource('cachemanager');
        /* @var $cm Zend_Cache_Manager */
        $this->view->youtube = $cm->getCache('video')->load('youtube');
        $this->view->facebook = $cm->getCache('video')->load('facebook');
    }

Then all I need to do in my view is iterate over the data and I'm pretty much good to go.  Because the cache data has been prepopulated my visitors should never have to take the hit of populating the cache and by using the Zend Server Job Queue the task of populating the cache is extremely easy to do.

Sharing feedback with Twitter using – Part 2


In our previous installment we looked at setting our backend up so it could automatically retrieve the shortened URL for a given URL and store it as part of the data for a given instance of a Content model.  What we're going to do this time is take a look at the front end components.

Sometimes I find that doing things backwards can actually make things a little more clear.  That way you can see the end result and then, as you work backwards, see how all the pieces work together.

With that in mind, let's start with our view code, since that's the most important part of the whole thing.  The first thing we are going to do is define our HTML.

<div id="sliderMessage">Message is previewed before it is sent</div>
<div id="sliderContainer">
  <div id="slider" style="width: 80%; margin: auto;"></div>
  <div style="width: 191px; margin-right: 40px;">
    <div id="customTwitterMessage">
      <textarea id="twitterMessage" name="twitterMessage"></textarea>
      <font size="1">(Maximum characters: 140)
      You have characters left.</font>
    </div>
    <div id="kudoTweetButton">
      <a href="" target="_blank"><span style="color: white;">Tweet!</span></a>
    </div>
    <div class="kudos" id="slider-5" style="display: block;">Great Post!</div>
    <div class="kudos" id="slider-4">Good Post</div>
    <div class="kudos" id="slider-3">Decent Post</div>
    <div class="kudos" id="slider-2">Didn't Like</div>
    <div class="kudos" id="slider-1">Not Good</div>
    <div id="sliderThanks">Thanks!</div>
  </div>
</div>

There are a few elements in here.  The first is the slider with the ID of "slider".  The slider allows you to choose how high you want to rate the individual posting.  After that we have some code for writing custom Twitter messages if the review is really low.  It has the requisite 140 character limitation on it.  That is relatively simple to do, so I won't go into counting the characters.

Below that is the Tweet button. It floats to the right, so it is printed before our ratings.  After that are DIV tags that contain the individual messages.  They all have an ID that corresponds to the value of the slider and are all hidden, to start out with, except for "slider-5".  As the slider moves, each box will be displayed.

Rating the post

We have a couple of page-specific JavaScript variables that we need to have.  None of them are "required" to do this, but they are what makes it a little more automated.  All of the view script values are set in a controller.

var currentSlider = 5;
var twitterUser = "<?php echo $this->twitterUser ?>";
var bitLy = "<?php echo $this->content->getBitly() ?>";
<?php
$tags = array();
foreach ($this->content->getTags() as $t) {
    $t = (string) $t;
    $tags[] = '#' . preg_replace('/[\W_]/', '', $t);
}
?>
var contentTags = <?php echo json_encode($tags) ?>;
var twitterText = "";

currentSlider is the default value for the rating.  twitterUser is there so that if you rate a posting badly you can mention the Twitter user instead of just saying it sucked.  In other words, it gives them a chance to redeem themselves.  bitLy is the variable that contains the shortened URL we retrieved before.  After that we echo out all of the tags that we have, making them a little friendlier to Twitter by removing any non-word characters and underscores, since tags on Twitter generally don't have underscores.  It also adds the hash on the front of each tag.  They are then rendered as JSON because that's the easiest way to pass the information to the JavaScript in the view.  twitterText contains the full message that will be sent.

Speaking of twitterText we need to be able to set it.  That is done via the writeNormalTwitterMessage() function.  Is there an "abnormal" Twitter message?  Yep, but we'll look at that later.

function writeNormalTwitterMessage()
{
    // Hide the custom message box; it is only used for low ratings
    $("#twitterMessage").hide();
    var count = 0;
    twitterText = $("#slider-" + currentSlider).text() + " " + bitLy + " ";
    while (twitterText.length < 140 && count < contentTags.length
           && twitterText.length + contentTags[count].length < 140) {
        twitterText += " " + contentTags[count++];
    }
    twitterText = escape(twitterText);
}

Because this function is only called when the slider is moved, the custom message box is first hidden.  It is only used for non-kudos.  Then it takes the text of the currently selected DIV element and starts the string with that value, appending the short URL to the end of it.  Then it iterates over a loop, adding the tags that we created previously until we reach the 140-character limit or run out of tags.  Then we escape that value and store it in the twitterText variable.
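Stripped of the jQuery lookups, the packing logic reduces to a pure function, which is easier to reason about.  The function name and parameters here are illustrative, not from the original code.

```javascript
// Start with the caption and short URL, then append hashtags until
// the next tag would push the message past the 140-character limit.
function buildTweet(caption, shortUrl, tags) {
    var text = caption + " " + shortUrl + " ";
    var count = 0;
    while (text.length < 140 && count < tags.length
           && text.length + tags[count].length < 140) {
        text += " " + tags[count++];
    }
    return text;
}
```

Note that, as in the original, the guard accounts for the tag but not its joining space, so the message can land exactly on 140 characters.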

Now we have to implement the functionality in the slider so that when we slide it, it is able to actually set the message in the function we had just defined.

if (twitterUser && bitLy) {
    $("#slider").slider({
        min: 1,
        max: 5,
        value: 5,
        slide: function(event, ui) {
            $("#slider-" + currentSlider).hide();
            $("#slider-" + ui.value).show();
            currentSlider = ui.value;
            if (ui.value >= 3) {
                writeNormalTwitterMessage();
            } else {
                $("#twitterMessage").show();
                if (ui.value == 1) {
                    $("#twitterMessage").val("@" + twitterUser + " " + bitLy + " wasn't good because ");
                } else {
                    $("#twitterMessage").val("@" + twitterUser + " I didn't like " + bitLy + " because ");
                }
            }
        }
    });
}

It looks like a bunch of code, but it's not.  What we do is bind to the slider and configure it with an options object.  We set the min as 1, the max as 5, and the default value as 5, or fully awesome kudos.  Then, for the slide event, we define our functionality.  We first hide the previous slider caption DIV and then show the new one, storing the new value so we can hide it on the next slide.  Then we check the value of the slider that was passed.  If it is greater than or equal to 3, the author did a good job and all we want to do is post the kudos.  If the value is 2 or 1, we want to give the author the chance to redeem him- or herself, so we show a text box pre-populated with a starting message.

The last thing to do from this side is to actually submit the text.  However, Twitter, for very good reasons, does not allow a web page to kick off some JavaScript and post a status update.  Otherwise you'd be seeing Twitter accounts used as spambots the likes of which you have never seen.  You could do it via the API, but your blog post isn't so important that someone will grant your website permission to post on their behalf.  So, to post this to Twitter, rather than using a form, we simply present a URL to be clicked on.  And the way we present that is via this code.

    url = "http://twitter.com/home?status=" + twitterText;
    $("#kudoTweetButton a").attr("href", url);

What this does is set the href attribute to our twitterText value so that when the user clicks on it they will be brought to the Twitter page with kudos pre-populated.  It will look something like this.
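A note on the escaping: escape() is long deprecated, and encodeURIComponent() correctly encodes characters such as '#' that escape() leaves alone, which matters for hashtags.  A modern sketch of the same link-building step (the twitter.com/home?status= URL is the old pre-populated-status endpoint; today you would use twitter.com/intent/tweet):

```javascript
// Build a link that opens Twitter with the status text pre-populated.
// encodeURIComponent() is the modern replacement for escape().
function tweetLink(text) {
    return "http://twitter.com/home?status=" + encodeURIComponent(text);
}

console.log(tweetLink("Loved this #php"));
// → http://twitter.com/home?status=Loved%20this%20%23php
```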

Kudos post

The Twitter user then clicks "update" and the kudo is delivered.

Try it yourself a little bit and see what you think.

New changes made to the site


Well, another week, another set of changes.  There are four primary changes that I've made to the site since last week.  They are, in no particular order:

  1. Email subscriptions
  2. The addition of comments
  3. A Twitter-based rating widget
  4. Related links

Related Links

The first is related links.  What it basically does is allow me to enter links that I think might be pertinent to various articles on this site.  Each link can be tagged, and any place an article with the same tags is displayed, the related links will be displayed too.  But it doesn't end there.  When I submit a link I make a request off to bit.ly to get the short URL for it.  This allows you to share that URL easily over Twitter or Facebook.  But my purpose is actually tracking.  bit.ly tracks individual URLs according to who submitted them, not just based off of the URL.  So what that allows me to do is see how many times someone went to a given page because of me.  Perhaps it's narcissistic, but this is the web, where narcissism abounds.

But because I am depending on a third-party web service, I don't want to handle errors or slow web service requests in my main web request.  If there's a timeout or something on the web service end, it could end up timing out on the browser end, and I don't want that.  To solve that problem I used, TADA!, the Job Queue.  I swear, after having implemented my earlier task system I have gone Job Queue crazy.  The code for making this call is

class Admin_Task_InsertLink extends Esc_Queue_TaskAbstract
{
    private $_link;
    private $_tags;

    public function __construct($link, $tags)
    {
        $this->_link = $link;
        $this->_tags = $tags;
    }

    public function _execute(Zend_Application $app)
    {
        // Fetch the remote page so we can pull out its title
        // (this loading step was elided in the original listing)
        $dom = new DOMDocument();
        @$dom->loadHTMLFile($this->_link);
        $xpath = new DOMXPath($dom);
        $ns = $xpath->query('/html/head/title');
        if ($ns->length) {
            $title = $ns->item(0);
            if (!$title) {
                return;
            }
            $title = $title->nodeValue;
            $api = new Esc_Api_Bitly($app->getOption('bitly'));
            $lt = new Model_DbTable_Link();
            $l = $lt->fetchNew();
        }
    }
}

... Plus a bunch of stuff for saving the tags

The API class is defined as

class Esc_Api_Bitly
{
    private $_options = array();

    public function __construct(array $options)
    {
        if (!isset($options['login']) || !isset($options['key'])) {
            throw new Esc_ApiException('login and key are required options');
        }
        $this->_options = $options;
    }

    public function getShortUrl($url)
    {
        // bit.ly version 2 "shorten" endpoint (the URL was stripped
        // from the original listing; restored here from the parameters)
        $url = 'http://api.bit.ly/shorten?version=2.0.1&longUrl='
               . urlencode($url)
               . '&login='
               . $this->_options['login']
               . '&apiKey='
               . $this->_options['key']
               . '&format=json';
        $results = json_decode(file_get_contents($url), true);
        if (!$results['errorCode']) {
            $res = array_shift($results['results']);
            return $res['shortUrl'];
        }
        return null;
    }
}

I found myself using the API all over the place so I just created a simple API class that I can re-use.

So when I post a link I simply invoke the code

$ilTask = new Admin_Task_InsertLink($link, $tags);
// execute() queues the task with the Job Queue server rather than running it inline
$ilTask->execute($app);

It sends it off to the Job Queue and does all the necessary processing offline.

Twitter based rating widget

If you look at the top part of the side bar you will see a slider.

Rating with a tweet

When you tweet it will open up a new window with the shortened URL and the tags all set up.

Twitter output

Since I will be doing another post on this later on, that's the extent of the details I will provide at this time.  Do, however, use it.  But be kind.


Comments

Comments are easy.  Comments with @#^$@#$^ CSS is hard.  If you notice, all of the comments have a "notify" option.  So basically, if you make a comment, after it has been approved, an email is sent out to everyone who asked to be notified.  But do we want to wait while all of the commenters are notified?  Of course not!  Job Queue!!

class Admin_Task_SendArticleCommentNotification extends Esc_Queue_TaskAbstract
{
    private $_commentKey;

    public function __construct($commentKey)
    {
        $this->_commentKey = $commentKey;
    }

    protected function _execute(Zend_Application $app)
    {
        $ct = new Model_DbTable_Comment();
        $c = $ct->find($this->_commentKey)->current();
        if ($c) {
            $options = $app->getOption('site');
            if (!$options['baseurl']) return;
            /* @var $c Model_Comment */
            $content = $c->getContent();
            /* @var $content Model_Content */
            $mail = new Zend_Mail();
            $link = $content->getPageId();
            $title = $content->getTitle();
            $mail->setSubject('New comment: ' . $title);
            $text = $c->getText();
            // Plain-text body; the original markup was elided in the post
            $text = "A new comment has been made

{$text}

Read more at {$options['baseurl']}/{$link} - {$title}";
            $mail->setBodyText($text);
            $mail->setFrom($options['email'], $options['title']);
            $subRs = $content->getCommentsWithNotifications();
            $addr = array();
            foreach ($subRs as $sub) {
                /* @var $sub Model_Comment */
                $addr[$sub->getEmail()] = 1;
            }
            $thisEmail = $c->getEmail();
            foreach (array_keys($addr) as $to) {
                if ($to == $thisEmail) continue;
                $email = clone $mail;
                $email->addTo($to);
                $email->send();
            }
        }
    }
}

If you get email notifications don't worry.  All of the text is filtered.
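The de-duplication in that notification loop — collect each subscriber address once, then skip the commenter's own address — is worth isolating.  A sketch in JavaScript, with illustrative names:

```javascript
// Return the unique set of subscriber addresses, excluding the
// address of the person who just commented.
function notificationList(subscriberEmails, commenterEmail) {
    var seen = {};
    var out = [];
    subscriberEmails.forEach(function (addr) {
        if (addr === commenterEmail || seen[addr]) {
            return; // skip self and duplicates
        }
        seen[addr] = true;
        out.push(addr);
    });
    return out;
}
```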


Email subscriptions

Last but not least, some people like to get emailed when a new post is made instead of getting it via Atom.  I'm cool with that.  So I added the ability to subscribe to the site via email.  As soon as a new post goes out it will be sent to every subscriber.  Once again: Job Queue!!!  In fact, one of the reasons for this post is to test this functionality out.  I've already tested it in dev and staging, but not yet in production.

class Admin_Task_SendNewArticleNotification extends Esc_Queue_TaskAbstract
{
    private $_contentKey;

    public function __construct($contentKey)
    {
        $this->_contentKey = $contentKey;
    }

    protected function _execute(Zend_Application $app)
    {
        $ct = new Model_DbTable_Content();
        $c = $ct->find($this->_contentKey)->current();
        if ($c) {
            $options = $app->getOption('site');
            if (!$options['baseurl']) return;
            /* @var $c Model_Content */
            $mail = new Zend_Mail();
            $link = $c->getPageId();
            $title = $c->getTitle();
            $mail->setSubject('New post: ' . $title);
            $text = $c->getContentSnip(2048);
            // Plain-text body; the original markup was elided in the post
            $text = "A new posting has been made

Article Preview

{$text}

Read more at {$options['baseurl']}/{$link} - {$title}";
            $mail->setBodyText($text);
            $mail->setFrom($options['email'], $options['title']);
            $st = new Model_DbTable_Subscriber();
            $subRs = $st->fetchAll();
            foreach ($subRs as $sub) {
                /* @var $sub Model_Subscriber */
                $email = clone $mail;
                $email->addTo($sub->getEmail());
                $email->send();
            }
        }
    }
}

Whew!  That's a lot of stuff.  Plus, it's relatively new, so there might be some bugs in it.  But even so, it was fun to build, and more fun will be had in the near future.