
SimpleCloud Part 5 – SimpleDB

I started this series back in December.  In fact I wrote 3 or 4 blog posts the day before I took two weeks of vacation.  It’s now approaching the end of the next quarter so I figured I should actually make some progress on this.

The last post dealt with the concept of storage in the cloud.  In this one we are going to talk about database access.  You have probably heard about document databases.  While an RDBMS is awesome when you have related data and need ACID compliance, it is hard to scale.  When I was a consultant I was onsite with a customer who had a large Oracle implementation with some performance issues, and an Oracle consultant was there at the same time.  The Oracle consultant was flabbergasted that I could get done in a week what their analysis could take several weeks to months to do.  The nature of a relational database dictates that it will require a LOT of logic, horsepower and consultant dollars to scale to larger workloads.

So, accessing data in a scalable environment will generally be easier (possible?) if you use non-relational data.  Well... not NON-relational, just not enforcing those relations in the same way an ACID-compliant RDBMS would.  So a document database makes a lot of sense, and Amazon’s SimpleDB fits the bill nicely.  If you’re on EC2 it really makes the most sense, unless you need immediate consistency of data.  One of the ways you make data access scalable/highly available is by having many, many machines that can provide access to that data.  But it takes time to propagate that data to those machines and, as with the relational database I was talking about earlier, if you need immediate consistency across those nodes you need a lot of logic, horsepower and a bit of luck that you don’t accidentally deadlock the whole thing.  It’s just not worth it.  SimpleDB has what’s called “eventual consistency”.  In other words, when you update, insert or delete, the data will eventually (within 2 seconds according to AWS, I think) be consistent across all nodes.  Most of the time you can stand having data out of date for a little bit.
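To make that concrete, here is a minimal sketch (my own illustration, not code from the example application) of coping with eventual consistency on the read side: after a write, poll briefly for the document before treating it as missing.  It assumes fetchDocument() returns false when nothing matches; the attempt count and delay are arbitrary.

// Hypothetical helper: poll for a just-written document until the write
// has propagated, giving SimpleDB a few seconds to become consistent.
function waitForDocument($adapter, $collection, $id, $attempts = 5)
{
    while ($attempts-- > 0) {
        $doc = $adapter->fetchDocument($collection, $id);
        if ($doc !== false) {
            return $doc;   // the write is now visible
        }
        usleep(500000);    // wait half a second before retrying
    }
    return false;          // still not visible; treat as not found
}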

We will create our configuration just like we did with the storage adapter.

cloud.document_adapter="Zend_Cloud_DocumentService_Adapter_SimpleDb"
cloud.aws_accesskey="XXXXXXXXXXXXXXXXXXX"
cloud.aws_secretkey="XXXXXXXXXXXXXXXXXXX"

And when we want to get our document adapter we do just as we did before

$config = new \Zend_Config_Ini(__DIR__.'/../config/config.ini');
 
\Zend_Registry::set(
    'DocumentAdapter',
    \Zend_Cloud_DocumentService_Factory::getAdapter(
        $config->cloud
    )
);

Now that we have our document adapter in the registry we can work with it.  I used it in two different places.  First, in the job itself, so that the job would be able to insert references to the completed images for querying later on.  Second, when we query them later on.

The code in the asynchronous job is

$documentAdapter = \Zend_Registry::get('DocumentAdapter');
$docClass = $documentAdapter->getDocumentClass();
$doc = new $docClass(
  array(
    'filename' => $fileName,
    'name'     => $this->_name,
    'height'   => $height,
    'width'    => $width,
    'size'     => filesize($tmpfname),
    'date'     => date('c')
  ),
  $this->_sourceId . '_' . $width
);
 
$documentAdapter->insertDocument("images", $doc);

What this does is ask the document adapter for the document class (just in case there are some adapter-specific pieces of functionality), create the new document, and insert it into the DB.  When creating a new document object, the first parameter of the constructor is a name=>value array of the data you want to store and the second parameter is the optional primary key for the data.  When you insert the document you need to specify the collection for the document to be inserted into (images, in this case), followed by the actual document object.
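If you know the key, fetching a single document back works much the same way.  A quick sketch (fetchDocument() is part of the document service API, and I'm assuming it returns false when nothing matches; $sourceId and $width stand in for the values used at insert time):

$documentAdapter = \Zend_Registry::get('DocumentAdapter');
$doc = $documentAdapter->fetchDocument('images', $sourceId . '_' . $width);
if ($doc !== false) {
    echo $doc->filename;   // fields are exposed as document properties
}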

When querying the collection we do so by simply… well… querying the collection.

$session = new \Zend_Session_Namespace('ProcTask');
$adapter = \Zend_Registry::get('DocumentAdapter');
$query = $adapter->select();
$query->where('name = ?', array($session->name))->from("images");
$results = $adapter->query('images', $query);

Notice a few things.  First, we’re not creating our select object directly; we’re asking the adapter for it.  Just like with the document object, the select object may have some adapter-specific logic.  Actually, that’s quite likely.  Then you provide your query parameters, which can be done in a prepared statement-like syntax.  Before passing the query object to the adapter, you must provide the collection name to the query object.  Then, to get your data, you need to pass in the collection name along with the query.  Why do you need to do that for both the query object and the adapter?  I dunno.  Maybe it’s a bug, or maybe it’s a feature.  I haven’t looked.

Once you have your data you can simply iterate over it and read each member like you would a stdClass object.

foreach ($results as $result) {
  echo $result->height;
  echo $result->width;
  echo $result->size;
  echo $result->date;
}

Done

SimpleCloud Part 2 – The Job Manager

In the previous installment I talked a little about the cloud, what Zend is doing in the cloud, and what the example application for my ZPCAP webinar did.  One of the primary characteristics of scalability is the ability to process data as resources are available.  To do that I implemented the Zend Server Job Queue with an abstraction layer of which I have now written three different versions.  I think the fourth will be the charm :-).

The Zend Server Job Queue works by making an HTTP call to a server, which will execute a PHP script.  That HTTP request is the “job” which is going to be executed.  The job is simply the Job Queue daemon pretending to be a browser.  While that works pretty well, I prefer a mechanism that is more structured than simply running an arbitrary script.  Having small, defined, structured tasks allows you to spread those jobs over many servers quite easily.
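For contrast, this is roughly what the raw, unstructured usage looks like (a sketch; the URL and job name are illustrative).  The daemon will simply make an HTTP request to that URL when resources allow:

// Queue an arbitrary script as an HTTP job -- it works, but the "job"
// is nothing more than a URL to be hit.
$queue = new ZendJobQueue();
$queue->createHttpJob(
    'http://localhost/jobs/nightly-cleanup.php',
    array(),                              // no job variables
    array('name' => 'nightly-cleanup')    // label for the Job Queue UI
);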

So what I did was write a relatively simple management system that allows me to define those tasks and execute them on pretty much any server behind a load balancer.  And in the cloud, that load balancer can have a thousand machines behind it AND it can be reconfigured without changing your application.  One of the keys of elastic scalability is that you can throw an application “out there” and it will “work”.  That is why the Zend Server Job Queue is a good idea in the cloud: it uses a protocol that requires only one entry point to be defined, and the rest is up to the infrastructure to work out.  (I personally am of the opinion that PHP developers are too dependent on config files.)

There are two parts to this manager: 1) the queueing mechanism, and 2) the executing mechanism.  Both are handled in the same class, named com\zend\jobqueue\Manager.  When a job is executed on the front end, it does not actually run; instead, it sends a request to the load balancer using a REST-like API.  The Job Queue mechanism, by default, manages the queue on the local host, and I wanted each job server to manage its own queue.  This REST-like API sends the request to the load balancer, which sends it to a host.  That REST-like call contains the serialized object of the job that needs to be executed, along with any dependent data/references to data.  That host then queues the job on itself and returns a serialized PHP object that provides the host name and the job number.  This result object can then be attached to a session so you can directly query the job queue server on subsequent requests.

The code for the manager is as follows.

namespace com\zend\jobqueue;
 
class Manager
{
 
    const CONFIG_NAME    = 'JobQueueConfig';
 
    public function sendJobQueueRequest(JobAbstract $job)
    {
        $url = \Zend_Registry::get(self::CONFIG_NAME)->queueurl . '?' . http_build_query(array('name' => base64_encode(get_class($job))));
        $http = new \Zend_Http_Client($url);
        $http->setMethod('POST');
        $http->setRawData(base64_encode(serialize($job)));
        $body = $http->request()->getBody();
        $response = unserialize(base64_decode($body));
        if (!$response instanceof Response) {
            throw new \Exception('Unable to get a properly formatted response from the server');
        }
        return $response;
    }
 
    public function getCompletedJob(Response $res)
    {
        $jq = new \ZendJobQueue($res->getServerName());
        $job = $jq->getJobStatus($res->getJobNumber());
        $status = $job['status'];
        if ($status == \ZendJobQueue::STATUS_OK) {
            $output = \Zend_Http_Response::fromString($job['output']);
            $response = unserialize(base64_decode(trim($output->getBody())));
            return $response;
        }
    }
 
    public function executeJob()
    {
        $params = \ZendJobQueue::getCurrentJobParams();
        if (isset($params['obj'])) {
            $obj = unserialize(base64_decode($params['obj']));
            if ($obj instanceof JobAbstract) {
                try {
                    $obj->run();
                    echo base64_encode(serialize($obj));
                    \ZendJobQueue::setCurrentJobStatus(\ZendJobQueue::OK);
                    exit;
                } catch (\Exception $e) {
                    zend_monitor_set_aggregation_hint(get_class($obj) . ': ' . $e->getMessage());
                    zend_monitor_custom_event('Failed Job', $e->getMessage());
                    echo base64_encode(serialize($e));
                }
            }
        }
        \ZendJobQueue::setCurrentJobStatus(\ZendJobQueue::FAILED);
    }
 
    public function createJob($name)
    {
        $q = new \ZendJobQueue();
        $qOptions = array('name' => base64_decode($name));
 
        $num = $q->createHttpJob(
            \Zend_Registry::get(self::CONFIG_NAME)->executeurl,
            array(
                'obj' => file_get_contents('php://input')
            ),
            $qOptions
        );
 
        $response = new Response();
        $response->setJobNumber($num);
        $response->setServerName(php_uname('n'));
        echo base64_encode(serialize($response));
    }
}

 

Sequence of Events

sendJobQueueRequest() is the first to be called.  The job is passed as a parameter and is subsequently serialized.  A connection is made to the URL, which is stored in a Zend_Config object.  That URL can be a local host name or the load balancer’s host name.  Using this, you can also set up different pools of servers quite easily, simply by creating multiple load balancers and having each pool managed based on its individual resource needs.
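For reference, the jobqueue.ini that feeds this config object (loaded in the bootstrap we will see later) might look something like this; the hostnames are illustrative, and queueurl and executeurl are the only keys the Manager reads:

queueurl="http://loadbalancer.example.com/jobs/queue.php"
executeurl="http://localhost/jobs/execute.php"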

sendJobQueueRequest() called on the front end will cause createJob() to be called on the back end.  This queues the job locally by specifying a LOCAL URL that will be responsible for executing the job and creates a response object which contains the unique hostname of the machine and the unique job number on that machine.  It is serialized and echoed.  sendJobQueueRequest() then reads the response and unserializes it into a Response object which can be attached to a session.

This is the code on the backend URL that will be executed to queue the job.

use com\zend\jobqueue\Manager;
require_once '../bootstrap.php';
 
$q = new Manager();
$q->createJob($_GET['name']);

 

Don’t worry about the bootstrap.php yet.  It simply contains some configuration mechanisms and instantiates the SimpleCloud adapters.  We’ll cover that later.

This is the code for the response object (created in createJob()). The front end machine can call getCompletedJob() and pass the response object to check and see if the job is done.

namespace com\zend\jobqueue;
 
class Response
{
 
    private $_jobNumber;
    private $_serverName;
 
    public function getJobNumber()
    {
        return $this->_jobNumber;
    }
 
    public function getServerName()
    {
        return $this->_serverName;
    }
 
    public function setJobNumber($num)
    {
        $this->_jobNumber = $num;
    }
 
    public function setServerName($name)
    {
        $this->_serverName = $name;
    }
 
}

 

At some point in the future, as resources are available, the URL noted by Zend_Registry::get(self::CONFIG_NAME)->executeurl in createJob() will be executed.  The code at that URL is

use com\zend\jobqueue\Manager;
require_once '../bootstrap.php';
 
$q = new Manager();
$q->executeJob();

 

Pretty simple, eh?  That’s because most of the magic happens in the Manager class.  This is when executeJob() is called.  It takes that serialized object, unserializes it, and executes the run() method.  We will look at the difference between execute() and run() in a subsequent post.  If the job executes fine, the job is re-serialized and echoed.  If there is an exception thrown, THAT is serialized.

That’s the manager.  Next we will look at the abstract job class and after that we will get into the SimpleCloud components.

SimpleCloud Part 3 – The Abstract Job

We have so far looked at setting the stage and managing the job.  How about executing the job itself?  The job we will look at here is relatively generic.  I will get into more detail after I have talked about the SimpleCloud elements.  This is simply to show you the theory behind how jobs are executed.

The abstract class is pretty simple.

namespace com\zend\jobqueue;
 
abstract class JobAbstract
{
    protected abstract function _execute();
 
    public final function run()
    {
        $this->_execute();
    }
 
    public function execute()
    {
        $mgr = new Manager();
        return $mgr->sendJobQueueRequest($this);
    }
 
}

 

There are only three methods.  The first is _execute().  This method needs to be overridden.  It is the code that will be executed on the remote server.  And because it will be serialized and executed on the remote host, the code for your job class will need to be deployed there.  You could actually send the source code for the class along with the serialized version and make the backend COMPLETELY stupid, but I would think that anyone remotely security minded could see the problem with that.

To implement a new job, do something like this:

namespace org\eschrade\jobs;
 
use com\zend\jobqueue\JobAbstract;
 
class SendEmail extends JobAbstract
{
    private $mail;
 
    public function __construct(\Zend_Mail $mail)
    {
        $this->mail = $mail;
    }
 
    protected function _execute()
    {
        $this->mail->send();
    }
}

 

Then to send the job to the queue call:

use org\eschrade\jobs\SendEmail;
 
$mail = new Zend_Mail();
$mail->setSubject('This is a test');
$mail->addTo('[email protected]', 'Kevin');
$mail->setBodyText('Some boring text');
 
$sendMail = new SendEmail($mail);
$sendMail->execute();

 

The execute() method is called on the front end.  But it doesn’t really execute.  It calls the queue manager and queues it on the backend servers.

Then on the backend servers (remember the executeJob() method?) the run() method is called, which actually calls the _execute() method, which contains the logic.  And while I didn’t show it here, because this job is re-serialized after execution you can store status information or any other data in the object itself and read it back once it’s unserialized on the front end after calling getCompletedJob() on the job manager.  If the job is completed, getCompletedJob() will return the unserialized instance of, in this case, org\eschrade\jobs\SendEmail as it existed at the end of its run.
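To make that concrete, here is a minimal sketch of a later request checking on the job.  It assumes the Response object returned by execute() was stashed in the session (as the upload code in the next part does); the session namespace name is illustrative.

use com\zend\jobqueue\Manager;
use org\eschrade\jobs\SendEmail;
 
$session = new Zend_Session_Namespace('ProcTask');
$mgr = new Manager();
 
// getCompletedJob() returns the unserialized job once the backend reports
// STATUS_OK, and null while the job is still pending (or after a failure).
$job = $mgr->getCompletedJob($session->response);
if ($job instanceof SendEmail) {
    // done -- the object is in the state it was in at the end of its run
}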

Now, to get to the SimpleCloud portion of this series: storage.  The link for part 4, discussing storage, is in the related stuff section.

SimpleCloud Part 4 – Storage

Now that we’ve gotten some job processing code done, let’s get into the good stuff.  The first thing we’re going to look at is the storage mechanism in SimpleCloud.  The example we used was uploading an image to the server so it could be resized for viewing at multiple resolutions and the like.  Now, you could simply attach the file contents to the job class, serialize it, and unserialize it on the other side.  But the Job Queue server is really not designed for that (nor are most other queueing applications).  So what we’re going to do is use the storage mechanism in SimpleCloud (in this case, S3) to store the uploaded files temporarily and then to hold the resized versions.

The first thing we need to do is create the adapter.  I am simply putting it into the Zend_Registry for later retrieval.  It, along with the Document and Queue adapters, is created in the bootstrap file.  The bootstrap file loads the autoloader, creates the config objects and then creates all of the cloud adapters.

set_include_path(
    realpath(__DIR__ . '/../library')
    . PATH_SEPARATOR
    . get_include_path()
);
 
require_once 'Zend/Loader/Autoloader.php';
Zend_Loader_Autoloader::getInstance()->setFallbackAutoloader(true);
 
use com\zend\jobqueue\Manager;
 
$config = new Zend_Config_Ini(__DIR__.'/../config/jobqueue.ini');
Zend_Registry::set(Manager::CONFIG_NAME, $config);
$config = new Zend_Config_Ini(__DIR__.'/../config/config.ini');
 
Zend_Registry::set('Config', $config);
 
Zend_Registry::set(
    'StorageAdapter',
    Zend_Cloud_StorageService_Factory::getAdapter(
        $config->cloud
    )
);

 

The most important line is the getAdapter() line.  That takes the configuration options and creates an adapter based on those options.  It’s really quite simple.  In this case I’m using the S3 adapter.

cloud.storage_adapter="Zend_Cloud_StorageService_Adapter_S3"
cloud.aws_accesskey="XXXXXXXXXXXXXXXXXXX"
cloud.aws_secretkey="XXXXXXXXXXXXXXXXXXX"
cloud.bucket_name="zendcapimages"

A bucket name needs to be specified, and I believe it needs to be created ahead of time.  This allows you to separate your applications while still using the same account keys.  Easy, huh?  You haven’t even tried using it yet!
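(If the bucket does need to exist beforehand, a one-off setup sketch using the underlying Zend_Service_Amazon_S3 class directly might look like this; the variable names are illustrative, and the keys are the same ones from the config above.)

$s3 = new Zend_Service_Amazon_S3($accessKey, $secretKey);
$s3->createBucket('zendcapimages');   // one-time setup per application

Here is the job (distilled to the essentials; the full version will be downloadable) that is used to process the images.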

namespace org\eschrade\jobs;
 
use com\zend\jobqueue\JobAbstract;
 
class ProcessImages extends JobAbstract
{
 
    private $_resolutions = array();
    private $_sourceFile;
    private $_sourceId;
 
    public function __construct()
    {
        $this->_sourceId = sha1(uniqid(php_uname(), true));
    }
 
    public function setSourceFile($file)
    {
        $fileName = 'tmp/fileProcess-' . sha1(uniqid(php_uname(), true));
        $storage = \Zend_Registry::get('StorageAdapter');
        $storage->storeItem($fileName, $file);
        $this->_sourceFile = $fileName;
    }
 
    protected function _execute()
    {
        try {
 
            $storageAdapter = \Zend_Registry::get('StorageAdapter');
 
            foreach ($this->_resolutions as $width) {
 
                $image = imagecreatefromstring($storageAdapter->fetchItem($this->_sourceFile));
 
// cut
                if ($res) {
                    $fileName = 'public/' . $this->_sourceId . '-' . $width . '.png';
                    $tmpfname = tempnam("/tmp", $this->_sourceId);
                    if ($tmpfname && imagepng($newImage, $tmpfname)) {
                        $storageAdapter->storeItem($fileName, fopen($tmpfname, 'r'));
// cut
                        continue;
                    }
                }
// cut
            }
        } catch (\Exception $e) {
            zend_monitor_set_aggregation_hint(time());
            zend_monitor_custom_event("Internal Server Error", $e->getMessage(), array($e->getTraceAsString()));
        }
    }
}

 

The parts pertaining to the storage adapter are the storeItem() and fetchItem() calls.  The point here is that the storage and retrieval of file data is pretty much transparent.  Store/Fetch.
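In isolation, the entire storage surface the job touches is just two calls (a quick sketch; the key name and variable are illustrative):

$storage = \Zend_Registry::get('StorageAdapter');
$storage->storeItem('tmp/example-key', $contents);    // write raw bytes (or a stream resource)
$contents = $storage->fetchItem('tmp/example-key');   // read them back

Integrating between the front and back end is pretty easy, too.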

use org\eschrade\jobs\ProcessImages;
 
// $upload: a Zend_Form with a file element, created earlier (not shown here)
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
    $req = new Zend_Controller_Request_Http();
    $response = new Zend_Controller_Response_Http();
    if ($upload->isValid($req->getPost())) {
 
        $procTask = new ProcessImages();
        $upload->file->receive();
        $fileName = $upload->file->getFileName();
        $procTask->setSourceFile(file_get_contents($fileName));
 
        $session = new Zend_Session_Namespace('ProcTask');
        $session->response = $procTask->execute();
 
        $response->setRedirect('status.php')->sendHeaders();
        exit;
    }
}

 

So, what is going on here?  The most important parts are the call to setSourceFile(), which uploads the file to S3, and the call to execute().  Additionally, IIRC, there is also a stream API where you can pass a file resource and it uses that instead of the full file contents.  That’s very useful for storing large files.  But remember in the earlier post where I said that calling execute() doesn’t actually execute the job, but queues it, and that the result is a response object that provides the job number and the server host name?  There you see it getting attached to the session.  This code then forwards to another page, which we will look at in a bit.

But, as you can see, using SimpleCloud to upload files to a storage service is stupid easy when using Zend Framework.

SimpleCloud Part 1 – Setting the stage

Earlier in December I did a webinar on the Zend PHP Cloud Application Platform.  It's not some new product or anything like that, but rather a view of how our software is going to fit together.  It's not something that will be "released" in the typical software fashion.  Instead it is the mindset of our product development teams when they look at building new features.  Cloud-based pricing for Zend Server, AWS/Cloud integration in Zend Studio, and, of course, SimpleCloud.

SimpleCloud is an initiative started last year (2009) for the purpose of allowing you to build cloud-portable applications.  In other words, you would be able to build an application on your local machine and have it (mostly) transparently work on any of the three supported cloud platforms.  The example application I built for that webinar was one that used not just "the Cloud", but all of the cloud services available in SimpleCloud, the Zend Server Job Queue (to scale data processing) and, of course, Studio with its AWS integration.

The application was one that took an uploaded image and resized it.  Simple enough, unless you want it to scale.  The example application that I wrote can theoretically scale quite high.  Not because I'm a great programmer, but because I utilized the underlying architecture of people smarter than me.  That's kind of what the cloud is.  Do you have the expertise to ward off a massive, worldwide DDoS?  Apparently Amazon does.  One of the prime rules of being human is to not only know your strengths, but also your weaknesses.  Humility is very hard for humans, and allowing for the fact that someone may be better than you at something is hard to admit.

The purpose of this application was to demonstrate how you can build an application a) for scalability, and, supplementally, b) for the cloud.  It's definitely not there to be pretty.  :-)  So what it does is implement several cloud-based features.  You could implement all of these on your own, but doing so (especially if you are a business) would probably cost you more.  Part of the cloud's appeal is that someone else is the specialist.  Could you use RabbitMQ?  Sure.  But then you have to manage it.  Could you have a massively distributed file system?  Sure! But then you have to manage it.

When you boil it all down; when you distill it to its essentials; when you reduce it to its finest ingredients, the cloud is just an on-demand managed service provider.  Nothing more.

So, what does this application do?

  1. It receives an image to be uploaded
  2. It stores this image on a file system
  3. It executes a job on the Zend Server Job Queue to resize the images
  4. It communicates with the browser, letting the end user know which image sizes have been processed
  5. It lets you browse files with metadata
  6. It lets you download resized files

Could you do all of that on your own?  Sure.  Could you do it for a couple of thousand users?  Sure.  Could you do it for a couple of thousand users who all decided to upload their images at the same time?  Nope.  Probably not.  The cloud isn't just about scalability, but elastic scalability.  And the chances are pretty high that you are not good at that, unless you are a large company with loads of resources to call upon.

So let's, then, take a look at what this looks like.  Check the "Related" panel for the link to part 2.
