WordPress + Web Services

Most WordPress setups live on a hosting service (shared or VPS) – one server talking to one database, boom, done. As sites grow, or as big sites start to use WordPress, setups grow to many servers and a handful of databases, a handful of Memcached boxes, and maybe some external Web Services or 3rd-party APIs. If the site is heavy on Editorial content, you may not ever interact with external services – but if your dynamic pages pull content from many APIs, internal and external, and you don’t want to write your entire site in JavaScript, it’s time to learn some tricks of the trade for powering and scaling many requests at a time over HTTP.

Web Services

So what’s a “Web Service”? At its simplest, a Web Service is a bunch of related URLs that return data, and the Service is usually RESTful – meaning, “Clients initiate requests to servers; servers process requests and return appropriate responses.”

http://www.woo.com/get/151
http://www.woo.com/hi-five?id=12345&friend_id=54321
http://www.woo.com/hi-five/delete?id=676767

Responses are typically in XML or JSON. It sucks to parse XML, so most APIs either return JSON by default or allow you to specify format=json in the request. Most also go as far as to return JSONP if you so desire, so you can make requests to external servers without ever leaving the confines of JavaScript.

Why does JSON rule so much?

<?php

// this produces an (associative) array
$data = json_decode( $response, true );

Parsing XML into a PHP array can look like this:

<?php
require( 'wordpress/wp-load.php' );

ini_set( 'memory_limit', '2048M' );
ini_set( 'max_execution_time', 6000 );

// recursively apply $func to every string in a nested array
function walk( $arr, $func ) {
    if ( ! is_array( $arr ) ) {
        return $arr;
    }

    foreach ( $arr as $key => $value ) {
        if ( is_array( $value ) ) {
            $arr[ $key ] = walk( $value, $func );
        } elseif ( is_string( $value ) ) {
            $arr[ $key ] = call_user_func( $func, $value );
        }
    }
    return $arr;
}

$z = new XMLReader;
$z->open( 'blob-of.xml' );

while ( $z->read() && $z->name !== 'someNode' );

while ( $z->name === 'someNode' ) {
    $node = new SimpleXMLElement( $z->readOuterXML() );
    $data = (array) $node;
    $data = walk( $data, 'trim' );

    // nightmare of casting nested SimpleXMLElements into Arrays
    // etc etc

    $z->next( 'someNode' );
}
$z->close();

JSON is just way easier and ready-made for PHP.

Services expose APIs (Application Programming Interfaces) that are used to build requests. You should be familiar with available APIs on the web by now: Facebook Connect, Twitter, Google Maps, etc.

WP_Http

WordPress has an API for dealing with HTTP – procedural and object-oriented. It is also an abstraction layer, meaning that it will pick from a variety of HTTP connection methods based on your current setup. I’m going to spend my time introducing you to cURL, but WordPress doesn’t require that you have the cURL extension installed. The order in which it looks for connection methods is cURL, streams (fopen), and fsockopen.

<?php

$url = 'http://www.google.com/a-url/that/returns/json';
$request = new WP_Http; // or use the $wp_http global
$result = $request->request( $url ); // returns a WP_Error on failure
$json = $result['body'];

$data = json_decode( $json, true );
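
WP_Http also has procedural wrappers – wp_remote_get() and friends – and the response can come back as a WP_Error, so it is worth checking before touching the body. A minimal sketch of the same request:

<?php

$response = wp_remote_get( 'http://www.google.com/a-url/that/returns/json' );

if ( is_wp_error( $response ) ) {
    // handle / log the failure however you like
    $data = null;
} else {
    $json = wp_remote_retrieve_body( $response );
    $data = json_decode( $json, true );
}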

cURL

cURL (or, “Client for URLs”) is a command line tool for getting or sending files using URL syntax. Its most bare-bones use is to return the response of a URL. In Terminal:

curl www.emusic.com

This will return the contents of whatever “www.emusic.com” returns. To just read the HTTP Response Headers:

curl -I www.emusic.com

You can post from the command line as well:

curl -d "foo=bar&pass=12345&id=blah" http://www.woo.com/post-to-me

You can even make an XML-RPC call from the command line like so:

curl -i -H 'Content-Type: text/xml' --data '<?xml version="1.0"?><methodCall><methodName>demo.sayHello</methodName><params></params></methodCall>' 'http://www.woo.com/xmlrpc.php'

cURL Extension

The cURL extension for PHP is, at its core, a wrapper around libcurl. libcurl currently supports the http, https, ftp, gopher, telnet, dict, file, and ldap protocols. libcurl also supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading (this can also be done with PHP’s ftp extension), HTTP form-based upload, proxies, cookies, and user+password authentication.

Because the cURL extension has external dependencies, it is good to use a package manager to install it (MacPorts on OS X, yum on Linux – or PECL on either). You can run the following commands using MacPorts to install some PHP 5.4 extensions for HTTP:

sudo port install php54-curl php54-http

cURL Abstraction

We could start from scratch and write our own interface to cURL, or we could find someone who has done all of the dirty work for us. We are still going to write our own methods for talking to a cURL class, but we want to find a source that has already implemented the basics of cURL and cURL Multi.

WordPress has done this with the WP_Http class to an extent (no support or abstraction for cURL Multi). I use a variation on this guy’s code – an Object-Oriented cURL class that supports this form of “multi-threading.” The approach is very clean because it allows you to use the same interface to request one or multiple URLs:

// One URL
$curl = new CURL();
$curl->retry = 2;
$opts = array( CURLOPT_RETURNTRANSFER => true, CURLOPT_FOLLOWLOCATION => true  );
$curl->addSession( 'http://yahoo.com/', $opts );
$result = $curl->exec();
$curl->clear();

// 3 URLs
$curl = new CURL();
$opts = array( CURLOPT_RETURNTRANSFER => true, CURLOPT_FOLLOWLOCATION => true  );
$curl->addSession( 'http://yahoo.com/', $opts );
$curl->addSession( 'http://google.com/', $opts );
$curl->addSession( 'http://ask.com/', $opts );
$result = $curl->exec();
$curl->clear();

cURL Multi

cURL Multi has a procedural API in the cURL PHP extension, and it’s pretty nasty to deal with. There is a standard way to drive it, which we took care of by hiding it inside our OO class abstraction. The guts look like this:

<?php
$mh = curl_multi_init();

// Add all sessions to multi handle
foreach ( $this->sessions as $i => $url )
    curl_multi_add_handle( $mh, $this->sessions[$i] );

do
    $mrc = curl_multi_exec( $mh, $active );
while ( $mrc == CURLM_CALL_MULTI_PERFORM );

while ( $active && $mrc == CURLM_OK ) {
    if ( curl_multi_select( $mh ) != -1 ) {
        do
            $mrc = curl_multi_exec( $mh, $active );
        while ( $mrc == CURLM_CALL_MULTI_PERFORM );
    }
}

It’s just a busy loop that eventually ends up returning an array of responses. You don’t get your response until all of the requests have completed, so you are only as fast as your slowest link.
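
For context, here is roughly how the responses get collected once that loop exits – a sketch, assuming $this->sessions holds the individual cURL handles that were added to the multi handle (and that CURLOPT_RETURNTRANSFER was set on each):

<?php
$responses = array();
foreach ( $this->sessions as $i => $ch ) {
    // grab the body of each completed request
    $responses[$i] = curl_multi_getcontent( $ch );
    curl_multi_remove_handle( $mh, $ch );
    curl_close( $ch );
}
curl_multi_close( $mh );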

Batch Requests

Almost every page on eMusic’s website is the result of multiple Web Service requests. Making these requests serially (one at a time) takes way too long, so we obviously opt to use cURL Multi. I wrote a wrapper class for our methods, simply called API.

Here are some methods it contains:

<?php
class API {
    .....
    private function parse_response( $str ) {
        if ( is_string( $str ) && ( '404 Not Found' === $str || 0 === stripos( $str, 'Not Found' ) ) )
            return $str;

        if ( is_array( $str ) )
            return $str;

        $str = trim( $str );

        $arr = array();
        /**
         * Do a cheesy RegEx scan to indicate that it looks like JSON
         */
        if ( preg_match( self::JSON_REGEX, $str ) ) {
            $str = preg_replace( '/\t|\s\s+/', ' ', $str );
            /**
             * The Bjork problem...
             * ö, etc can break JSON deserialization,
             * so we need to clean up the JSON response
             */
            if ( ! seems_utf8( $str ) ) {
                $str = emusic_fix_bad_chars( $str );
            }

            $arr = json_decode( $str, true );

            if ( ! empty( $arr ) )
                return $arr;
        }

        /**
         * Only return documents, not string version of empty array
         */
        if ( null === $arr )
            return $str;
    }

Take an array of URLs and make the requests in batch – the result is an array of responses:

public function multi_request( $urls ) {
	$this->curl->sessions = array();

	$opts = array( CURLOPT_ENCODING => 'identity' );

	$this->_add_existing_headers( $opts );

	foreach ( $urls as $u )
		$this->curl->addSession( $u, $opts );

	$result = $this->do_request( true );

	return $result;
}

Pass a URL and array of params and make a GET request:

public function get( $url, $params = '', $cache = true, $ttl = 0 ) {
    if ( ! empty( $params ) )
	$url = self::url( $url, $params );

    self::save_url( $url );

    $cached = null;

    if ( $cache )
	$cached = Cache::get( $this->bucketize( $url ), $url );

    if ( $cached ) {
	return $cached;
    } else {
	$this->curl->sessions = array();

	$opts = array( CURLOPT_TIMEOUT => 8 );

	$this->_add_existing_headers( $opts );

	$this->curl->addSession( $url, $opts );
	$result = $this->do_request();

	if ( $result && ! $this->is_http_error() ) {
	    $result = $this->parse_response( $result );
	    $this->add_to_cache( $url, $result, $ttl );
	    return $result;
	}
    }
}

Pass a URL and array of params and make a POST request:

public function post( $url, $args = '' ) {
	$this->curl->sessions = array();

	$opts = array(
		CURLOPT_POST => 1,
		// we trust these endpoints, so SSL verification is skipped –
		// leave verification ON when talking to the outside world
		CURLOPT_SSL_VERIFYHOST	=> 0,
		CURLOPT_SSL_VERIFYPEER	=> 0,
	);

	if ( ! empty( $args ) )
		$opts[CURLOPT_POSTFIELDS] = http_build_query( $args );
	else
		$opts[CURLOPT_POSTFIELDS] = '';

	$this->_add_existing_headers( $opts );

	$this->curl->addSession( $url, $opts );
	$response = $this->do_request();

	if ( empty( $response ) && ! $this->is_http_error() ) {
		return null;
	} else {
		return $this->parse_response( $response );
	}
}

Caching

When dealing with Web Services, you often (almost always) want to cache your responses for an arbitrary or specific amount of time. At eMusic, we have a large number of Web Services that often do very expensive things. The purpose of breaking functionality out into services is being able to scale and optimize each service individually outside of the realm of the core application logic. As long as the APIs remain unchanged, you can mess with the backend of your service and not affect my use of it.

Some of our services include: catalog-service, ratings-service, save-for-later-service, user-service, pricing-service, auth-service and the list goes on and on.

To take advantage of caching while making batch requests, I wrote another method that does the following:

  1. Iterate through an array of URLs
  2. If it’s in the cache, remove that item from batch list
  3. After all items have been checked against the cache, make a batch request if any URLs are still in the queue
  4. For each request in the batch that is made, md5 the URL and use it as the key and the response as the value to place in the cache
  5. If all of your URLs are cached, you make zero requests and just return the cached responses in an array

Here’s an example of how the logic works; your implementation will probably differ:

<?php

.....
public function batch( $urls, $ttl = 0, $usecache = true ) {
    $response = array();
    ob_start();

    if ( empty( $ttl ) )
        $ttl = CACHE::API_CACHE_TTL;

    array_map( 'API::save_url', $urls );

    $batch_urls = array();
    if ( is_array( $urls ) ) {
        if ( $usecache ) {
            foreach ( $urls as $index => $url ) {
                $in = Cache::get( $this->bucketize( $url ), $url );

                if ( $in ) {
                    $response[$index] = $in;
                } else {
                    $batch_urls[$index] = $url;
                }
            }
        } else {
            // cache explicitly bypassed: request everything
            $batch_urls = $urls;
        }
    }
    $calls = $this->multi_request( $batch_urls );

    if ( is_array( $calls ) && count( $calls ) > 0 ) {
        $keys = array_keys( $batch_urls );
        $calls = array_combine( $keys, array_values( $calls ) );

        foreach ( $calls as $index => $c ) {
            if ( $c ) {
                $response[$index] = $this->parse_response( $c );
                $this->add_to_cache( $batch_urls[$index], $response[$index], $ttl );
            } else {
                $response[$index] = null;
            }
        }
    } else if ( ! empty( $calls ) && ! empty( $batch_urls ) ) {
        // a single response comes back as a string, not an array
        reset( $batch_urls );
        $index = key( $batch_urls );
        $data = $this->parse_response( $calls );
        $this->add_to_cache( $batch_urls[$index], $data, $ttl );

        $response[$index] = $data;
    }
    ob_end_clean();

    return $response;
}
....

I have abstracted the Cache class because I sometimes use my API class without using WordPress. In my eMusic implementation, I use wp_cache_set and wp_cache_get, but this allows me to implement Cache however I want in other environments.
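
As a sketch of what that abstraction can look like (the class and method names are mine, not from a library), the WordPress-backed version is just a thin pass-through to the object cache – note that it takes group first, key second, matching the calls above:

<?php
class Cache {
    const API_CACHE_TTL = 300;

    public static function get( $group, $key ) {
        return wp_cache_get( $key, $group );
    }

    public static function set( $group, $key, $value, $ttl = self::API_CACHE_TTL ) {
        return wp_cache_set( $key, $value, $group, $ttl );
    }
}

In a non-WordPress environment, the same two methods could wrap Memcache::get / Memcache::set directly.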

Request Map

Making batch requests is all well and good, but I still have to loop through the responses to do anything with them. If my batch request was for expanded catalog metadata on 10 separate albums, then a loop works just fine. If I am collecting all of the data I need for an entire page by making all of my service calls in batch, I might want to associate a callback with each individual URL. To accomplish this, I wrote a class called Request Map.

Here is a Request Map instance which makes all of the calls we need from Services to populate data on the homepage of eMusic:

<?php
$map = new RequestMap();
$map->add( get_twitter_url( 'eMusic' ), function ( $data ) {
    global $tweets;
    $tweets = $data;
} );
$map->add( get_charts( array( 'perPage' => 10 ) ), function ( $data ) {
    global $chart;

    if ( !empty( $data['albums'] ) ) {
        $chart = $data['albums'];
    }
} );
$map->add( get_trending_artists( array( 'perPage' => 5, 'return' => true ) ), function ( $data ) {
    global $trending;
    if ( !empty( $data['artists'] ) ) {
        $trending = $data['artists'];
    }
} );
$map->send();

RequestMap::add takes two required arguments – a URL and a callback that receives the data as a PHP array – plus an optional array of extra params to pass along to the callback. Every time RequestMap::send is called, a batch request is fired for any URLs in the queue. When finished, the appropriate data is passed to each callback. RequestMap::flush will reset the queue:

<?php
$map = new RequestMap();
$map->add( 'http://woo.com/1', function ( $data ) {....} );
$map->add( 'http://woo.com/2', function ( $data ) {....} );
$map->add( 'http://woo.com/3', function ( $data ) {....} );
$map->send();
$map->flush();
$map->add( 'http://woo.com/4', function ( $data ) {....} );
$map->add( 'http://woo.com/5', function ( $data ) {....} );
$map->add( 'http://woo.com/6', function ( $data ) {....} );
$map->send();

Sending in two batches like this is necessary if the second set of requests requires data from the first, since PHP does not have true threading. Here is the code for my RequestMap class:

<?php
class RequestMap {
    /**
     * @var array
     */
    private $requests;

    /**
     * @var array
     */
    private $responses;

    /**
     * @var int
     */
    private $ttl;

    /**
     * @var boolean
     */
    private $useCache = true;

    /**
     * @var boolean
     */
    protected $error = false;

    public function __construct() {
        $this->flush();
    }

    /**
     * @return boolean
     */
    public function is_error() {
        return $this->error;
    }

    public function flush() {
        $this->requests = array();
    }

    /**
     * @return int
     */
    public function getTtl() {
        if ( empty( $this->ttl ) )
            $this->ttl = CACHE::API_CACHE_TTL;

        return $this->ttl;
    }

    public function setTtl( $ttl ) {
        $this->ttl = $ttl;
    }

    /**
     * @return boolean
     */
    public function getUseCache() {
        return $this->useCache;
    }

    public function setUseCache( $usecache ) {
        $this->useCache = $usecache;
    }

    /**
     * @param string $url
     * @param callable $callback
     * @param array $vars
     */
    public function add( $url, $callback, $vars = array() ) {
        $params = new stdClass();
        $params->url = $url;
        $params->callback = $callback;
        $params->params = (array) $vars;
        $this->requests[] = $params;
    }

    /**
     * @return array
     */
    private function getRequestUrls() {
        return array_map( function ( $item ) {
            return $item->url;
        }, $this->requests );
    }

    /**
     * @param stdClass $item
     * @param array $response
     */
    private function exec( $item, $response ) {
        $params = array_merge( array( $response ), $item->params );
        call_user_func_array( $item->callback, $params );
    }

    public function send() {
        if ( ! empty( $this->requests ) ) {
            $this->responses = API()->batch( $this->getRequestUrls(), $this->getTtl(), $this->useCache );

            if ( is_array( $this->responses ) ) {
                foreach ( $this->responses as $i => $response ) {
                    if ( ! empty( $this->requests[$i] ) ) {
                        $this->exec( $this->requests[$i], $response );
                    }
                }
            }
        }

        $this->flush();
    }
}

It’s really not that complicated.

Conclusion

I would advise using methods like those I have listed above instead of relying on JavaScript and JSONP to talk to external services. Pages can be served super-fast out of Memcached if set up properly, and your page load times will be faster – perceived or otherwise – with as little initial JavaScript as possible.

Term / Taxonomy is broken in WordPress

I meant to include this in my MySQL post, but making a separate post to address this one particular issue works just fine. In my previous post, I said that the Term tables are a mess. Here is why:

Let’s say I create 2 taxonomies, location and radio_station (FYI: an example of a “taxonomy” is a “tag” or a “category”). Here’s how I register them (simplified) in my theme or plugin:

<?php
register_taxonomy( 'radio_station', 'post', array( 'labels' => array(
    'name'          => 'Radio Stations',
    'singular_name' => 'Radio Station'
) ) );
register_taxonomy( 'location', 'post', array( 'labels' => array(
    'name'          => 'Locations',
    'singular_name' => 'Location'
) ) );

Once this is done, you should have extra links in the Posts fly-out.

Click each link (Tags, Radio Stations, Locations) and add “Brooklyn” as a term. This is a completely valid practice. Now change ONE of them to “Brooklyn’s Finest” and go back and check out the other 2. THEY ALL CHANGED! So that sucks, right?

Why does this happen? Terms are stored in wp_terms and are constrained by uniqueness. So when you added Brooklyn that second time – even though it had a different taxonomy – it pointed at that first term, and more importantly, shackled itself to it forever by associating it with that first term’s term_id in the wp_term_taxonomy table.

wp_term_taxonomy is the first-class table of the bunch (wp_terms, wp_term_taxonomy, and wp_term_relationships). Terms are arbitrary. They are associated with taxonomies in the wp_term_taxonomy table. term_taxonomy_id is the PRIMARY KEY, and term_id is the foreign reference. Weirdly, taxonomy is not a foreign reference alluding to some wp_taxonomy table – there is no wp_taxonomy table! wp_term_relationships joins term_taxonomy_id with Post IDs.

So let’s say you have 1,000,000 terms but only 5 taxonomies. If 90,000 of them are post_tags, the taxonomy field for all of them will be post_tag. The opposite is not true: 5 taxonomies only point at ONE Brooklyn. In this scenario, shared terms make no sense. Changing the name of a term that is stored at term_id in wp_terms will change the name for each and every taxonomy that term_id is associated with.
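
To see the sharing in action, look at the rows directly (the IDs below are hypothetical):

// ONE row in wp_terms...
SELECT term_id, name FROM wp_terms WHERE slug = 'brooklyn';
// => 42, 'Brooklyn'

// ...pointed at by THREE rows in wp_term_taxonomy
SELECT term_taxonomy_id, term_id, taxonomy FROM wp_term_taxonomy WHERE term_id = 42;
// => 101, 42, 'post_tag'
// => 102, 42, 'radio_station'
// => 103, 42, 'location'

Rename term_id 42 and all three taxonomies feel it at once.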

Discuss.

Understanding MySQL in WordPress

MySQL

You would be surprised how many people I have interviewed for a job who think their PHP chops are great, but their MySQL, not so much. Many great WordPress developers I know have said the same thing. Not cool. Bad SQL can bring down your site! Forget crazy scaling challenges – basic WP Query usage can bring your site down if you aren’t careful. WordPress is a framework that provides you with a lot out of the box, but you are still going to end up writing a bunch of your own PHP to make it work the way you want. The same should be true, but often isn’t, for SQL queries.

As is usually the case, for basic usage (e.g. your mom’s blog), don’t mess with SQL. If you work on an enterprise-level site and have all kinds of monitoring and benchmarking tools in place, let’s mess with some SQL 🙂

Schema

WordPress at its core has a tiny and non-complex database schema.

  • Posts, Comments, Users
  • Post-, Comment-, User- meta
  • Options
  • Links
  • Terms, Term Taxonomy, Term Relationships

The Posts table (wp_posts) is a one-size-fits-all data store for “content.” Posts, Custom Post Types, whatever. At its core, it stores a title, a blob of text (your blog post), and possibly an excerpt. It also stores the post date, a reference to the user who wrote it, and the post_type. Everything else that has to do with Posts is relegated to the wp_postmeta table. The Post Meta table (wp_postmeta) is a key / value store that arbitrarily stores anything you want. Like, literally ANYTHING. The meta_value column is of type LONGTEXT, which can store up to 4GB of data. LONGTEXT is a poor choice for key / value pairs – more in a bit.
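
In practice, the meta API will happily serialize whatever you hand it into that LONGTEXT column. A quick illustration with the core functions (assuming $post_id is a valid Post ID):

<?php
// any structure you pass gets serialized into meta_value (LONGTEXT)
update_post_meta( $post_id, 'my_key', array( 'any' => 'structure', 'you' => 'want' ) );

// true = return a single value instead of an array of values
$value = get_post_meta( $post_id, 'my_key', true );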

The Comments table (wp_comments) stores the contents of comments, a reference to the user, and some additional metadata about anonymous users (User Agent, IP, etc). The Comment Meta table (wp_commentmeta) is identical to the Post Meta table, albeit with different column names.

The Users table (wp_users) stores the basics about your site’s registered users: email address, login name, display name, the username as a slug (“nicename”), etc. The User Meta table (wp_usermeta) is identical to the Post Meta table, albeit with different column names (noticing a trend?).

(We’ve already covered over 50% of the default schema…)

The Options table (wp_options) is identical to the Post Meta table, albeit with different column names…

The Links table (wp_links) is “probably” not used to its full extent by most casual bloggers. It probably isn’t used by anyone using WordPress as a CMS. The Links product seems like a labor of love from the early days of WordPress (IMHO) and shows Matt‘s fondness for XFN.

The Term tables (wp_terms, wp_term_taxonomy, wp_term_relationships) are, in my opinion, a mess and are easier to understand through the API methods for interacting with Terms. More on this later…

SQL-generation

Like most things in WordPress, SQL “just happens,” and you don’t have to know anything about it beforehand to start working with WP. You could actually go your entire life using WordPress and never once need to know anything about SQL – that is, if your entire life using WordPress is spent in a lightweight manner, on a personal blog that doesn’t need to scale. The second you start “needing to scale” is the second you start “needing to know SQL.” Why? I thought WordPress was written by geniuses who never make mistakes? Well, sorta true, sorta very not true. WordPress works perfectly and scales perfectly for probably 99% of use cases. The 1% of cases where it needs some help are sites that get millions of page views a day / month and sites that use complex database queries across millions of rows – which is most enterprise-level sites.

There is a lot of code in WordPress that talks to the database for you, but almost all of that code at its core reaches the same end result: a string of SQL that will get passed to some function as a database query.

There are a couple of common ways this code is generated.

WP Query

WP Query is one of WordPress’ most powerful tools. You can pass all kinds of request params to WP Query and receive in return a collection of rows containing your site’s blog posts / data. The WP_Query class will get posts for you. It will do JOINs for you and resolve foreign ids. While it can be optimized to an extent, it can’t read your mind, and it can’t predict the future. It also can’t stop you from making bad decisions or forcing the writing of slow queries.

Many parts of WP Query, get_posts(), and the like are filterable. There are all kinds of places to hook in and alter portions of generated SQL.

Here’s an example of filterable SQL in WordPress Core:

<?php
// line 2570 of query.php
// WP_Query->get_posts

$where	  = apply_filters_ref_array( 'posts_where_paged', array( $where, &$this ) );
$groupby  = apply_filters_ref_array( 'posts_groupby',	  array( $groupby, &$this ) );
$join	  = apply_filters_ref_array( 'posts_join_paged',  array( $join, &$this ) );
$orderby  = apply_filters_ref_array( 'posts_orderby',	  array( $orderby, &$this ) );
$distinct = apply_filters_ref_array( 'posts_distinct',	  array( $distinct, &$this ) );
$limits	  = apply_filters_ref_array( 'post_limits',       array( $limits, &$this ) );
$fields	  = apply_filters_ref_array( 'posts_fields',	  array( $fields, &$this ) );

// line 2589
// for cache plugins, but wide open for everyone
$where	  = apply_filters_ref_array( 'posts_where_request',	array( $where, &$this ) );
$groupby  = apply_filters_ref_array( 'posts_groupby_request',	array( $groupby, &$this ) );
$join	  = apply_filters_ref_array( 'posts_join_request',	array( $join, &$this ) );
$orderby  = apply_filters_ref_array( 'posts_orderby_request',	array( $orderby, &$this ) );
$distinct = apply_filters_ref_array( 'posts_distinct_request',	array( $distinct, &$this ) );
$fields	  = apply_filters_ref_array( 'posts_fields_request',	array( $fields, &$this ) );
$limits	  = apply_filters_ref_array( 'post_limits_request',	array( $limits, &$this ) );

// line 2614
// screw it, lets filter the resulting string of SQL
$this->request = apply_filters_ref_array('posts_request', array( $this->request, &$this ) );

Hopefully this shows the following things:

  • You can filter every part of every query which has its origins in WP_Query->get_posts()
  • You can make every query better if you know what you are doing
  • You can make every query worse if you don’t know what you are doing
  • In the end, you can clobber everything that any plugin has done by just arbitrarily altering the SQL string

WP Query is useful if you want to do something like a Taxonomy Query and have it “just work.” Let’s look at a WP Query I wrote at eMusic and then dissect it:
<?php

function get_stories( $post_types = array() ) {
    global $regions_map, $editorial_types;
    $key = 'stories-' . get_region();
    if ( !empty( $post_types ) ) {
        $key = 'stories-' . get_region() . '-' . md5( join( '', $post_types ) );
    }
    $group = 'editorial';
    $response = wp_cache_get( $key, $group );

    if ( empty( $response ) ) {
        $params = array(
            'posts_per_page'=> 10,
            'order'         => 'DESC',
            'post_type'     => $editorial_types,
            'orderby'       => 'date',
            'tax_query'     => array(
                array(
                'taxonomy'  => 'region',
                'field'     => 'term_id',
                'terms'     => array(
                    $regions_map[ 'ALL' ],
                    $regions_map[ strtoupper( get_region() ) ]
                ),
                'operator'  => 'IN'
                )
            ),
            'no_found_rows'             => true,
            'update_post_meta_cache'    => false,
            'update_post_term_cache'    => false
        );

        if ( !empty( $post_types ) ) {
            $params['post_type'] = $post_types;
        }

        $posts = new WP_Query( $params );

        if ( $posts->have_posts() ) {
            wp_cache_set( $key, $posts->posts, $group, 1800 );
            $response = $posts->posts;
        }
    }

    return $response;
}

In case it isn’t immediately obvious, I am requesting the 10 most recent posts written by our Editorial staff in reverse-chronological order filtered by the taxonomy “region” and including a subset of our large family of custom post types.

If my request is in the cache, awesome – serve it from there. Otherwise, make a WP Query request and cache the resulting posts.

Let’s list a number of optimizations I did:

  • Turned off SQL_CALC_FOUND_ROWS
  • Turned off slurping of post meta
  • Turned off slurping of terms
  • Populated a map of term_name => term_id for regionalization
  • Taking advantage of the WP cache abstraction layer

This particular WP Query won’t slurp post meta or terms, but it is a good idea to turn them off everywhere you know you do not want them to be requested.
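
One way to flip those flags globally is a pre_get_posts action – a sketch, and be careful: no_found_rows kills the pagination count, so gate these however your site requires:

<?php
add_action( 'pre_get_posts', function ( $query ) {
    // skip SQL_CALC_FOUND_ROWS and the meta / term slurping
    $query->set( 'no_found_rows', true );
    $query->set( 'update_post_meta_cache', false );
    $query->set( 'update_post_term_cache', false );
} );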

Let’s look at the SQL queries that make up my optimized request:

// hop to retrieve term_taxonomy_id
// for regions because Tax Query won't take term_taxonomy_id
// as the value for "field"
SELECT term_taxonomy_id FROM wp_term_taxonomy WHERE taxonomy = 'region' AND term_id IN (1308,1306)

// a JOIN on the Term Relationships table to get posts in the
// requested region
SELECT   wp_posts.* FROM wp_posts  INNER JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id) WHERE 1=1  AND ( wp_term_relationships.term_taxonomy_id IN (1307,1309) ) AND wp_posts.post_type IN ('emusic_feature', 'emusic_spotlight', 'book_feature', 'book_spotlight', 'emusic_bookshelf', 'emusic_qa', 'book_qa', 'emusic_jukebox_jury', 'emusic_who', 'emusic_select', 'emusic_about', 'emusic_guide', 'emusic_list', 'emusic_stack', 'emusic_radio_program', 'emusic_review', 'book_review', 'emusic_hub', 'book_hub', 'emusic_list_hub', 'emusic_guide_hub', 'emusic_six_degrees', 'book_six_degrees', 'emusic_icon', 'book_icon') AND (wp_posts.post_status = 'publish') GROUP BY wp_posts.ID ORDER BY wp_posts.post_date DESC LIMIT 0, 10
2 queries, 565ms.

If I want to affect these queries, I can hook into WP Query with the filters I showed above. The problem I have with doing that is this: I don’t want to write a bunch of functions just to remove a few clauses from one SQL statement:

<?php
// function to make my site really slow
function my_where_filter( $where ) {
    $my_where = preg_replace( '/this is overkill/', '', $where );
    return $my_where;
}

add_filter( 'posts_where_paged', 'my_where_filter' );

// all kinds of other stuff runs...
// other plugins collide with me...
// all I am doing is changing text...

$rows = result_of_slow_query();
remove_filter( 'posts_where_paged', 'my_where_filter' );

Imagine if I was filtering even 2 or 3 more clauses. What a mess…

Now, let’s remove my optimizations and see what happens when WP Query just runs like normal:

<?php

function get_stories( $post_types = array() ) {
	global $regions_map, $editorial_types;
	$response = null;

	$params = array(
	    'posts_per_page'=> 10,
	    'order'         => 'DESC',
	    'post_type'     => $editorial_types,
	    'orderby'       => 'date',
	    'tax_query'     => array(
	        array(
	        'taxonomy'  => 'region',
	        'field'     => 'name',
	        'terms'     => array( 'ALL', strtoupper( get_region() ) ),
	        'operator'  => 'IN'
	        )
	    )
	);

	if ( !empty( $post_types ) ) {
	    $params['post_type'] = $post_types;
	}

	$posts = new WP_Query( $params );

	if ( $posts->have_posts() ) {
	    $response = $posts->posts;
	}

	return $response;
}

This version of the WP Query produces:

// hop to get term_id for regions
SELECT wp_term_taxonomy.term_id
FROM wp_term_taxonomy
INNER JOIN wp_terms USING (term_id)
WHERE taxonomy = 'region'
AND wp_terms.name IN ('all','us')

// term_id is unusable, so go back and grab term_taxonomy_id
SELECT term_taxonomy_id FROM wp_term_taxonomy WHERE taxonomy = 'region' AND term_id IN (1308,1306)

// finally make the request
SELECT SQL_CALC_FOUND_ROWS  wp_posts.* FROM wp_posts  INNER JOIN wp_term_relationships ON (wp_posts.ID = wp_term_relationships.object_id) WHERE 1=1  AND ( wp_term_relationships.term_taxonomy_id IN (1307,1309) ) AND wp_posts.post_type IN ('emusic_feature', 'emusic_spotlight', 'book_feature', 'book_spotlight', 'emusic_bookshelf', 'emusic_qa', 'book_qa', 'emusic_jukebox_jury', 'emusic_who', 'emusic_select', 'emusic_about', 'emusic_guide', 'emusic_list', 'emusic_stack', 'emusic_radio_program', 'emusic_review', 'book_review', 'emusic_hub', 'book_hub', 'emusic_list_hub', 'emusic_guide_hub', 'emusic_six_degrees', 'book_six_degrees', 'emusic_icon', 'book_icon') AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') GROUP BY wp_posts.ID ORDER BY wp_posts.post_date DESC LIMIT 0, 10

// we asked for 10, but I guess go ahead and do this in case
// we want to indicate pagination or something
SELECT FOUND_ROWS()
4 queries, 658ms.

So we got this request from 4 database queries down to 2 using my optimizations. One of the optimizations that helped was $regions_map: by passing term_ids directly, it skips the query that resolves term names into term_ids. If we look closely at the first 2 queries in the non-optimized WP Query, we notice that they can be combined.

// produces term_id, so we have to make another database query
SELECT wp_term_taxonomy.term_id
FROM wp_term_taxonomy
INNER JOIN wp_terms USING (term_id)
WHERE taxonomy = 'region'
AND wp_terms.name IN ('all','us')

// produces term_taxonomy_id, skip the second call
SELECT wp_term_taxonomy.term_taxonomy_id
FROM wp_term_taxonomy
INNER JOIN wp_terms USING (term_id)
WHERE taxonomy = 'region'
AND wp_terms.name IN ('all','us')

I have another optimization called $regions_tax_map:

<?php

// term_id - for use with WP Query
$regions_map = array(
    'EU'    => 1303,
    'CA'    => 1304,
    'UK'    => 1305,
    'US'    => 1306,
    'ALL'   => 1308
);

// term_taxonomy_id, for when we want to write our own SQL
$regions_tax_map = array(
    'EU'    => 1304,
    'CA'    => 1305,
    'UK'    => 1306,
    'US'    => 1307,
    'ALL'   => 1309
);

When I snarked above that I thought the Term tables in WordPress are a mess, this is why. For any database call to have any teeth when joining with wp_term_relationships (as it inevitably does when you are using taxonomy to filter by region, etc), you have to have a term’s term_taxonomy_id for the lookup. There are many cases where this is not practical – a site that has 100s of models / makes of car in the term tables, but something like regional site code that only has 5 variations can be mirrored outside of the database and placed into code for quick access.

So let’s get that optimized WP Query down to only ONE call to the database by removing the GROUP BY clause and posts with a status of “private,” and by making use of our term_taxonomy_id shortcuts:

function get_stories( $post_types = array() ) {
    global $regions_tax_map, $editorial_types, $wpdb;
    $key = 'stories-' . get_region();
    if ( !empty( $post_types ) ) {
        $key = 'stories-' . get_region() . '-' . md5( join( '', $post_types ) );
    }
    $group = 'editorial';
    $response = wp_cache_get( $key, $group );

    if ( empty( $response ) ) {
        $types = empty( $post_types ) ? $editorial_types : $post_types;
        $joined = sprintf( "'%s'", join( "','", $types ) );

        $sql = <<<SQL
         SELECT wp_posts.* FROM wp_posts
         INNER JOIN wp_term_relationships
         ON (wp_posts.ID = wp_term_relationships.object_id)
         WHERE wp_term_relationships.term_taxonomy_id
            IN (
                {$regions_tax_map['ALL']},
                {$regions_tax_map[ strtoupper( get_region() ) ]}
            )
            AND wp_posts.post_type IN ({$joined})
            AND wp_posts.post_status = 'publish'
         ORDER BY wp_posts.post_date DESC
         LIMIT 0, 10
SQL;

        $response = $wpdb->get_results( $sql );

        if ( !empty( $response ) ) {
            wp_cache_set( $key, $response, $group, 1800 );
        }
    }

    return $response;
}
ONE query, 42ms.

As you can see, we are making ONE query that JOINs wp_posts and wp_term_relationships. We skip WP Query completely. We skip all 840 lines of WP_Query->get_posts(). Removing the GROUP BY clause drops the query time from 565ms to 42ms. HUGE.

We debugged the queries. We realized that all of that black voodoo is there to make sure 99% of the human population can get what they want from WP Query, but we have scalability to think about. We want to optimize ALL of our interactions with the database.

This request is a SMALL portion of our sidebar on pages that support it on eMusic’s website. However, I wasn’t originally caching this call. I was relying on the built-in WordPress post cache to take care of it. This ONE module almost brought down our site on one of our busiest traffic days at the end of last year. The panic immediately subsided when we enlisted more MySQL slaves, but I learned an important lesson: SLOW is SLOW. Don’t be a hero and cling to how you think or want WordPress to work, learn how things perform and make them as fast as possible. Also, cache like the wind.

$wpdb Abstraction Layer

A more hands-on approach is to use the Database Abstraction Layer in WordPress, namely: the $wpdb global object. $wpdb is great and provides ORM-like helper methods for UPDATEs and INSERTs. SELECTs are mostly hand-written when using $wpdb, which can be a good thing (as we demonstrated above).
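
The INSERT / UPDATE helpers look like this – $wpdb builds and escapes the SQL for you (wp_my_table is a made-up table for illustration):

<?php

// INSERT INTO wp_my_table (name, score) VALUES ('Bjork', 100)
$wpdb->insert( 'wp_my_table', array( 'name' => 'Bjork', 'score' => 100 ), array( '%s', '%d' ) );

// UPDATE wp_my_table SET score = 99 WHERE id = 5
$wpdb->update( 'wp_my_table', array( 'score' => 99 ), array( 'id' => 5 ), array( '%d' ), array( '%d' ) );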

Here are the best methods to use when retrieving one or many rows of data:

<?php

// get one row of data
$sql = $wpdb->prepare( 'SELECT ID, post_title, post_author, post_content FROM wp_posts WHERE id = %d', get_primary_key() );
$row = $wpdb->get_row( $sql, ARRAY_A );

// get a list of post IDs
$sql = $wpdb->prepare( 'SELECT ID FROM wp_posts WHERE post_type = "your_mom_jokes" AND post_author = %d', get_your_mom() );
$ids = $wpdb->get_col( $sql );

// get a bunch of rows (no placeholders here, so prepare() isn't needed)
$sql = 'SELECT ID, post_title, post_author, post_content FROM wp_posts WHERE post_type = "bad_haiku"';
$rows = $wpdb->get_results( $sql, ARRAY_A );

InnoDB vs MyISAM

WordPress “out-of-the-box” uses the MyISAM storage engine (the default storage engine for MySQL < 5.5). For “most” users, this is just fine and dandy. MyISAM has the advantage of built-in full-text search. InnoDB is the default engine for MySQL >= 5.5. InnoDB has row-level locking, as opposed to MyISAM’s table-level locking. Table-level locking hinders concurrency in the database when you have an astronomical number of rows in a high-traffic environment. Just one full-table scan query can cause all sorts of trouble when using MyISAM in production. Every mature RDBMS has built-in full-text functionality, except for InnoDB (frown).

We’re going to look at this more in a minute, but realize that WordPress has some serious negatives when it comes to Search. For a giant site, you are better off looking at solutions like Lucene, Solr, and Elasticsearch and providing them as a Web Service. Most sites won’t be serving every piece of content out of their WordPress database only, so this shouldn’t be outside the realm of possibility for your site’s architecture.

LONGTEXT Columns

One particular thing about the WordPress database schema that is dangerous is its use of unindexed (and un-indexable) LONGTEXT columns to store the values of key / value pairs in 4 of its tables. I suppose it makes sense to have maximum flexibility when storing data for a user base in the many millions, but the second you decide to query against the *values* in your key / value pairs, you are dancing with the devil. The “key” is stored as a VARCHAR with a standard BTREE index, and the “value” is stored as a LOB type (LONGTEXT, maximum size 4GB). While the value for a particular key may be readily found because of the index on the key column, you cannot go the other way – you cannot, for instance, find all the users who use a particular nickname without doing a full table scan on an unindexed column. Try doing that on a table with over 10,000,000 rows and see how it performs.
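
Concretely (the nickname value here is made up): the first query below can lean on the indexed columns; the second still has to compare the LONGTEXT value of every row the meta_key index narrows it to – which for a common key is most of the table:

// fast: user_id and meta_key are indexed
SELECT meta_value FROM wp_usermeta WHERE user_id = 12345 AND meta_key = 'nickname';

// slow: every matching row's LONGTEXT meta_value must be compared
SELECT user_id FROM wp_usermeta WHERE meta_key = 'nickname' AND meta_value = 'brooklyn_zoo';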

Search

LOB columns cannot be indexed using standard BTREE indexes. Therefore any search against such a column invokes a full table scan, which means every block in the table must be read from disk into memory and searched. Using LIKE with wildcards (% or _) is even worse, because the entire column contents must be searched (possibly as much as 4GB of text) and, as we know, string comparison is CPU-intensive.

Another problem is the way that WordPress searches by default. Let’s say I want to search for “Arcade Fire.” WordPress is going to do wildcard searches for post_title LIKE "%Arcade%", post_title LIKE "%Fire%", post_content LIKE "%Arcade%", and post_content LIKE "%Fire%". I’m going to let you guess how well this performs. For a search like this, we really want to match the exact phrase if we’re using WordPress, or let some super advanced tool like Lucene take over and give your search results better handling.

<?php
add_filter( 'posts_search', 'refine' );

function refine( $where ) {
    if ( !empty( $where ) ) {
        // match the exact phrase, escaped for SQL, instead of each word separately
        $term = esc_sql( '%' . get_search_query() . '%' );
        $where = " AND ((wp_posts.post_title LIKE '$term') OR (wp_posts.post_content LIKE '$term')) ";
    }
    return $where;
}

You can also speed up instances of Search by forcing use of an index. Before we served Search natively through the WordPress front end, we were making Editorial Search available via XML-RPC to a Java client. We found that requesting a limited number of posts ordered by date was sped up by forcing index usage:

SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts USE KEY (type_status_date)....

I know many people are against this practice, but I know of others who are for it in certain circumstances: namely, the MySQL performance team at Facebook.

HyperDB

If you are working in a high-traffic environment, chances are you are going to need more power than just one database server can handle. You’ve probably already scaled to several web servers; adding slaves to your master database should come next.

HyperDB is awesome and works like a champ right out of the box. You can even enable replication lag detection by uncommenting some lines of code that ship with it and installing mk-heartbeat.
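
HyperDB reads its configuration from a db-config.php file in your site root. A minimal master + slave setup looks roughly like this (the hostnames are placeholders):

<?php
// db-config.php – one master (takes writes) and one slave (reads only)
$wpdb->add_database( array(
    'host'     => 'master.db.example.com',
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 1,
    'read'     => 1,
) );

$wpdb->add_database( array(
    'host'     => 'slave1.db.example.com',
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 0,
    'read'     => 1,
) );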

Conclusion

Debug your code. Find out how fast / slow everything you are doing is. Don’t assume everything in WordPress is fast. It’s not.

eMusic.com is faster on WordPress

We’re optimizing every day (and still have work to do), but we get lots of traffic and perform tons of parallel Web Service calls – the site performs like a beast on WordPress and Amazon EC2.

From http://www.pcmag.com/article2/0,2817,2399365,00.asp, “Top Retail Sites Still Way Too Slow, Study Finds”:

Only eight Web sites finished with an average load time under four seconds: Nike.com (2.269 sec), JCPenney.com (2.65 sec), Amazon.com (2.776 sec), eMusic.com (3.279 sec), eCrater (3.542 sec), WellsFargo.com (3.81 sec) and CDUniverse (3.81 sec).

Bon Iver, “re: stacks”

I was trying to learn this song and found so many terrible tabs online that I decided to fire one up myself. It is also impossible to keep up with which chords change during which verse without visibly seeing the differences. The strum is consistent throughout – listen to the record.

Capo VI or VII - guitar on the record is tuned to Open D, but way sharp

Intro

D  ----0------- --------------0---- --------- ----------0---- ----0--------- ------
A  ----0------- ----0---------0---- --------- ----------0---- ----0--------- ------
F# ----3------- ----0---------1---- --------- ----------1---- ----1--------- ------
D  ----2------- ----0---------0---- --------- ----------2---- ----0----H---- ------
A  -0---------- -----------2------- --------- -------0------- -2-----(0-2)-- ------
D  ------------ -0----------------- --------- --------------- -------------- ------

----0------- --------------0---- --------- ----0---------0----- -----0----------
----0------- ----0---------0---- --------- ----0---------0----- -----0----------
----3------- ----0---------1---- --------- ----1---------1----- -----1----------
----2------- ----0---------0---- --------- ----2---------0----- -----0----------
-0---------- -----------2------- --------- -0---------2-------- -----2-----H----
------------ -0----------------- --------- -------------------- -2-------(0-2)--

------------------------
----0------H--------0---
----0----0---1------0---
----0----0----------0---
---------0---2----------
-0---------------0------


Verse 1

  This my ex-ca-va-tion   to...		    ...day is Qumran
----0------- --------------0----- --------- ----0----- ----0------------ -----------
----0------- ----0---------0----- --------- ----0----- ----0------------ -----------
----3------- ----0---------1----- --------- ----1----- ----1------------ -----------
----2------- ----0---------0----- --------- ----2----- ----0------------ -----------
-0---------- -----------2-------- --------- -0-------- -2--------------- -----------
------------ -0------------------ --------- ---------- ----------------- -----------
				            ( 2/4 bar )

   Everything that
           happens   from now on    This is
				     pouring rain, this is paralyzed
----0------- --------------0----- -------- ----0--------0---- -----0---------- ------
----0------- ----0---------0----- -------- ----0--------0---- -----0---------- ------
----3------- ----0---------1----- -------- ----1--------1---- -----1---------- ------
----2------- ----0---------0----- -------- ----2--------0---- -----0---------- ------
-0---------- -----------2-------- -------- -0--------2------- -----2----H----- ------
------------ -0------------------ -------- ------------------ -2------(0-2)--- ------

---------------------------------- --------------------------
----0------H---------0------H----- ----0------H-----0--------
----0----0---1-------0----0---1--- ----0----0---1---0--------
----0----0-----------0----0------- ----0----0-------0--------
---------0---2------------0---2--- ---------0---2------------
-0----------------0--------------- -0---------------0--------


Verse 2

I keep throwing it down   two                   hundred at a time
----0------- --------------0----- ---------- ----------0----- ----0---------- -------
----0------- ----0---------0----- ---------- ----------0----- ----0---------- -------
----3------- ----0---------1----- ---------- ----------1----- ----1---------- -------
----2------- ----0---------0----- ---------- ----------2----- ----0---------- -------
-0---------- -----------2-------- ---------- -------0-------- -2------------- -------
------------ -0------------------ ---------- ---------------- --------------- -------

   It's hard to find
	when you      knew it      when your money's
			  		     gone and you're
						       drunk as hell
----0------- --------------0---- --------- ----0---------0--- -----0--------- -------
----0------- ----0---------0---- --------- ----0---------0--- -----0--------- -------
----3------- ----0---------1---- --------- ----1---------1--- -----1--------- -------
----2------- ----0---------0---- --------- ----2---------0--- -----0--------- -------
-0---------- -----------2------- --------- -0---------2------ -----2----H---- -------
------------ -0----------------- --------- ------------------ -2------(0-2)-- -------

---------------------------------- --------------------------
----0------H---------0------H----- ----0------H-----0--------
----0----0---1-------0----0---1--- ----0----0---1---0--------
----0----0-----------0----0------- ----0----0-------0--------
---------0---2------------0---2--- ---------0---2------------
-0----------------0--------------- -0---------------0--------


Chorus

On your back with your racks as the
	stacks as your load
In the back and the racks and the
	 stacks are your load
In the back with your racks and you're
	unstacking your load

--------------------- ----------------------------------
-----0-------0------- ----------------------------------
----(3)------0------- -----0--0--0--0---0---0--0--0--0--  3Xs
-----2--------------- -----4--5--4--0---0---4--5--4--0--
-0-----------2------- ----------------------------------
--------------------- -5---5--5--5--5---5---5--5--5--5--


Verse 3

   I've been twisting
		to the sun                I needed to replace
----0------- --------------0---- -------- -----0------ ----0----------- -----------
----0------- ----0---------0---- -------- -----0------ ----0----------- -----------
----3------- ----0---------1---- -------- -----1------ ----1----------- -----------
----2------- ----0---------0---- -------- -----2------ ----0-----H----- -----------
-0---------- -----------2------- -------- --0--------- -2-----(0---2)-- -----------
------------ -0----------------- -------- ------------ ---------------- -----------
					  ( 2/4 bar )

     The fountain in the
	front yard is rusted out
			      All my love was down...
                                               in a frozen ground
----0------- -------------0---- -------- ----0--------0---- -----0--------- -------
----0------- ----0--------0---- -------- ----0--------0---- -----0--------- -------
----3------- ----0--------1---- -------- ----1--------1---- -----1--------- -------
----2------- ----0--------0---- -------- ----2--------0---- -----0--------- -------
-0---------- ----------2------- -------- -0--------2------- -----2----H---- -------
------------ -0---------------- -------- ------------------ -2------(0-2)-- -------

---------------------------------- --------------------------
----0------H---------0------H----- ----0------H-----0--------
----0----0---1-------0----0---1--- ----0----0---1---0--------
----0----0-----------0----0------- ----0----0-------0--------
---------0---2------------0---2--- ---------0---2------------
-0----------------0--------------- -0---------------0--------


Verse 4

   There's a black crow
	    sitting across from me   his wiry legs are crossed
----0------- --------------0----- ---------- ----0----- ----0-------------- -------
----0------- ----0---------0----- ---------- ----0----- ----0-------------- -------
----3------- ----0---------1----- ---------- ----1----- ----1-------------- -------
----2------- ----0---------0----- ---------- ----2----- ----0------H------- -------
-0---------- -----------2-------- ---------- -0-------- -2------(0---2)---- -------
------------ -0------------------ ---------- ---------- ------------------- -------
				             ( 2/4 bar )

   And he's dangling my
	keys, he even fakes a toss   Whatever
				     could it be   That has brought
                                                        me to this loss
----0------- --------------0----- ------- ----0---------0---- -----0-------- ------
----0------- ----0---------0----- ------- ----0---------0---- -----0-------- ------
----3------- ----0---------1----- ------- ----1---------1---- -----1-------- ------
----2------- ----0---------0----- ------- ----2---------0---- -----0-------- ------
-0---------- -----------2-------- ------- -0---------2------- -----2----H--- ------
------------ -0------------------ ------- ------------------- -2------(0-2)- ------

---------------------------------- --------------------------
----0------H---------0------H----- ----0------H-----0--------
----0----0---1-------0----0---1--- ----0----0---1---0--------
----0----0-----------0----0------- ----0----0-------0--------
---------0---2------------0---2--- ---------0---2------------
-0----------------0--------------- -0---------------0--------


Chorus

On your back with your racks as the
	stacks as your load
In the back and the racks and the
	stacks of your load
In the back with your racks and you're
	unstacking your load

--------------------- ----------------------------------
-----0-------0------- ----------------------------------
----(3)------0------- -----0--0--0--0---0---0--0--0--0--  3Xs
-----2--------------- -----4--5--4--0---0---4--5--4--0--
-0-----------2------- ----------------------------------
--------------------- -5---5--5--5--5---5---5--5--5--5--


Verse 5

    This is not the
             sound of a new man 		or a crispy realization
----0------- --------------0---- ------- ------- ----0----- ----0---------- --------
----0------- ----0---------0---- ------- ------- ----0----- ----0---------- --------
----3------- ----0---------1---- ------- ------- ----1----- ----1---------- --------
----2------- ----0---------0---- ------- ------- ----2----- ----0-----H---- --------
-0---------- -----------2------- ------- ------- -0-------- -2------(0-2)-- --------
------------ -0----------------- ------- ------- ---------- --------------- --------
					         ( 2/4 bar )

    It's the sound of
        the unlocking and the lift away
				  your love will be ... safe with me
----0------- --------------0----- -------- ----0---------0---- -----0--------- -----
----0------- ----0---------0----- -------- ----0---------0---- -----0--------- -----
----3------- ----0---------1----- -------- ----1---------1---- -----1--------- -----
----2------- ----0---------0----- -------- ----2---------0---- -----0--------- -----
-0---------- -----------2-------- -------- -0---------2------- -----2----H---- -----
------------ -0------------------ -------- ------------------- -2------(0-2)-- -----

---------------------------------- --------------------------
----0------H---------0------H----- ----0------H-----0--------
----0----0---1-------0----0---1--- ----0----0---1---0--------
----0----0-----------0----0------- ----0----0-------0--------
---------0---2------------0---2--- ---------0---2------------
-0----------------0--------------- -0---------------0--------

WordPress + Memcached

One of the most bizarre critiques of WordPress that I often hear is “it doesn’t come with caching” – which makes no sense, because the cache is one of the best features of WordPress out of the box. That’s kind of like saying “my iPod sucks because it doesn’t have any songs in it” – when you first buy it. Your iPod can’t predict the future and come pre-loaded with songs you love, and your WordPress environment can’t come pre-installed without knowing a minimal number of things. You have to pick a username / password, you have to point at a database, and if you want to cache, you have to pick how you want to cache (you don’t HAVE to cache – but really, you HAVE to cache).

If you just have a blog that gets “light” traffic, and that blog only sits on one server: install W3 Total Cache (or WP Super Cache) and you can skip the rest of this post.

For W3TC, make sure it is not in preview mode and follow its instructions / guidance for how to get started making your site faster. W3TC can be used for a multi-server setup as well, but if you are running a website that is in a load-balanced environment, you are probably going to be more prone to use the tools I am about to show you.

Memcached

Memcached is a distributed in-memory caching system that stores key-value pairs in RAM. That will make sense to a lot of people; if it doesn’t to you, check this out:

// hey - connect me to the server where I store stuff!
connection = connect_to_server();

// store this value in that RAM of yours
// "key" holds "value"
connection.set( key, value );

// give me that value I stored with "key"
connection.get( key );

It was originally developed by Brad Fitzpatrick for LiveJournal. The concept is simple: instead of running 18 web servers / each reading from their own limited cache, each server reads / writes from a common distributed set of caches that they all can share. As long as each server is aware of the same set of “Memcached boxes,” what one server stored in the cache can be read by every other server in the cluster of web servers. The advantage to doing this is immediately apparent: 18 servers reading from the same 30GB of collective cache is more efficient than each server reading from its own smaller local cache.

Memcached (pronounced: Mem-cash-dee), or Memcache-daemon, is a process that listens by default on port 11211. Like httpd (H-T-T-P-daemon), it runs in the background, often started automatically at server boot. A lot of huge websites use Memcached – at least: Facebook, YouTube, and Twitter.

Memcached has external dependencies, so it is best to install it using a package manager like MacPorts (on OS X) or yum (Linux). Package managers give you the ability to type commands like port install memcached and port load memcached to do almost all of the initial setup for Memcached.

Memcached does not have to run on external servers. If your production site uses external servers, and you want to develop locally with Memcached as a local cache, that is just fine.

We’ll come back and discuss more tools for interacting with Memcached once we know a little more about it.

Since WordPress is written in PHP and makes a bunch of expensive SQL queries to its companion MySQL database, we need to figure out how to start taking advantage of Memcache in PHP to store the results of expensive queries, functions, and external HTTP requests in cache.

Memcache PHP Extension

Now that you (maybe) have Memcached installed, you need a way for PHP to talk to it, because it can’t out of the box. You have at least 2 choices for installing PHP extensions:

  1. Package manager
  2. PECL

If you have PECL / PEAR installed, you can run commands like pecl install memcache. Using MacPorts, it will be something more like port install php5-memcache. So what does this get you?

Mainly, access to the Memcache class (after you restart Apache) provided by the PHP extension, and a procedural API for adding / updating / deleting / retrieving items from your Memcache server(s). Some examples:

<?php
/**
 * Object-oriented approach
 *
 */
$memcache = new Memcache;
// connect locally on the default port
$memcache->connect( '127.0.0.1', 11211 );

// key = set this item in the cache
// value = a string in this case, also supports objects and arrays
// false = do NOT use zlib compression to store the value
// 30 = the number of seconds to cache the item
$memcache->set( 'some_key', 'this is the value I am storing', false, 30 );

// replace / update the value of an item
$memcache->replace( 'some_key', 'but this is the value I am updating', false, 90 );

// retrieve an item
$memcache->get( 'some_key' );

/**
 * Procedural approach (functions not classes)
 *
 */
$memcache = memcache_connect( '127.0.0.1', 11211 );
// add() only succeeds if the key does NOT already exist...
memcache_add( $memcache, 'some_key', 'this is the value I am storing', false, 30 );
// ...while set() stores the value either way
memcache_set( $memcache, 'some_key', 'this is the value I am storing', false, 30 );

// replace / update the value of an item
memcache_replace( $memcache, 'some_key', 'but this is the value I am updating', false, 90 );

// retrieve an item
memcache_get( $memcache, 'some_key' );

These functions work just fine, but when using WordPress, we aren’t going to call them directly – these functions will be “abstracted.”

WP Object Cache

WordPress has a cache abstraction layer built in. What this means is that WordPress has an API for interacting with what is called the WP Object Cache – a PHP class: WP_Object_Cache. The WP Object Cache class is “pluggable” – meaning, if *you* define a class called WP_Object_Cache in your codebase and include it at runtime, that’s the class WordPress will use; otherwise, it defaults to its own. WordPress also has a procedural API for interacting with the cache object (likely what you’ll use in your code). Here’s a peek at future code you may write:

<?php

//
// Function (or "template tag") to abstract the fetching of data.
// Data comes from cache, or call is made to db then cached.
// wp_cache_set( key, value, cache_group, expires (n seconds) )
//

function get_expensive_result() {
    global $wpdb;

    $key = 'expensive_query';
    $group = 'special-group';
    $response = wp_cache_get( $key, $group );

    if ( empty( $response ) ) {
        // add the result of an expensive query
        // to the cache for 3 hours
        $response = $wpdb->get_results(.....);
        if ( !empty( $response ) )
            wp_cache_set( $key, $response, $group, 60 * 60 * 3 );
    }
    return $response;
}

$data = get_expensive_result();

WP Object Cache is agnostic as to how it is implemented. Meaning, it does either a) whatever the default behavior in WP is, which is to “cache” non-persistently (for the length of the request), or b) whatever your WP Object Cache (your “pluggable” override) class says to do.

Here’s the catch – it won’t do anything until you add this to wp-config.php:

define( 'WP_CACHE', true );

Here’s the other catch – all that will do is try to load wp-content/advanced-cache.php and wp-content/object-cache.php which we haven’t installed yet. Those files are typically made available by a persistent cache plugin (we want our cached items to be available across page loads until they expire – duh), but let’s pause and come back to this subject a little later.
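
To make “pluggable” concrete before we move on: if a file named wp-content/object-cache.php exists, WordPress loads it in place of its default object cache. Here’s a deliberately naive sketch of the shape of such a drop-in – a real backend (like the Memcached one we’ll install below) implements many more functions (wp_cache_add(), wp_cache_delete(), wp_cache_flush(), and friends):

<?php
// wp-content/object-cache.php - a toy, non-persistent sketch just
// to show the shape of a drop-in. Do NOT ship this; a real backend
// implements the full wp_cache_* API.
$GLOBALS['toy_cache'] = array();

function wp_cache_init() {
    // a real backend would connect to its Memcached pool here
}

function wp_cache_set( $key, $value, $group = 'default', $expire = 0 ) {
    $GLOBALS['toy_cache'][ "$group:$key" ] = $value;
    return true;
}

function wp_cache_get( $key, $group = 'default' ) {
    if ( isset( $GLOBALS['toy_cache'][ "$group:$key" ] ) )
        return $GLOBALS['toy_cache'][ "$group:$key" ];
    return false;
}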

Transients API

The Transients API in WordPress is an abstraction of an abstraction. Memcached should always be viewed as a “transient cache” – sometimes you will request data, it won’t be there, and it will need to be re-cached. The Transients API is a simple procedural API for acting on items. It is actually the exact same thing as wp_cache_*, and even allows you to pass in an expiration – it just doesn’t allow you to specify a group (‘transient’ is the group). It is really an exercise in semantics. An advantage: if you aren’t using a WP Object Cache backend (a persistent cache plugin), your “transients” get stored in the wp_options table of your site’s WP database instead.

<?php
// Transients procedural API
// (there is no update_transient() - calling set_transient()
// again updates the value)
set_transient( 'woo', 'woo' );
set_transient( 'woo', 'hoo' );
get_transient( 'woo' );
delete_transient( 'woo' );

// multisite
get_site_transient( 'multi-woo' );

// behind the scenes (with a persistent cache backend), this is happening
wp_cache_set( 'woo', 'woo', 'transient' );
wp_cache_set( 'woo', 'hoo', 'transient' );
wp_cache_get( 'woo', 'transient' );
wp_cache_delete( 'woo', 'transient' );

I typically use Transients when I want to store something for an extremely short period of time and then manually delete the item from the cache. Here’s an example:

<?php
function this_takes_90_secs_to_run_in_background() {
    if ( !get_transient( 'locked' ) ) {
        // set a busy indicator so this doesn't run on every
        // page request - it could get called 200 times otherwise.
        // the expiration is a safety net: if the job dies before
        // delete_transient() runs, the lock clears itself
        set_transient( 'locked', 1, 2 * 60 );

        // do stuff....

        // ok I'm done!
        delete_transient( 'locked' );
    }
}

add_action( 'init', 'this_takes_90_secs_to_run_in_background' );

Could this exact same thing have been accomplished by using wp_cache_* functions? You betcha, because they were (behind the scenes)! Like I said, I use Transients in specific instances – it’s really an issue of semantics.

So our next step is to make WordPress ready to start caching.

Memcached plugin / WP Object Cache backend

Wouldn’t you know it? There’s a plugin that implements the WP Object Cache on top of Memcache, from one of the lead developers of WordPress, Ryan Boren. It’s not really a “plugin” – it’s a drop-in file. Once you download the plugin, move object-cache.php into the wp-content directory, and you have pluggably overridden the WP Object Cache. If you are brave and actually look at the code in the lone file, you will notice a bunch of functions (a procedural API) for interacting with the WP Object Cache object (an instance of the Memcache class).

You will also notice that at its core, it is just calling the methods of the Memcache class – it has simply abstracted these steps for you. It works with Multisite and has intelligently cooked up a way to cache by blog_id, group, and key. You get caching by blog_id for free; there is nothing you need to do for this to work.

Caching by group is really “namespacing” your keys. Let’s say you want to call 10 different things in your cache “query” – your choices for providing context are a) using a longer key like “bad-query” or b) just using “query” and adding the group “bad” into the fold:

<?php
wp_cache_set( 'bad-query', 'something' );
wp_cache_set( 'query', 'something', 'bad' );

This is where cache and transients completely overlap. Since you can’t specify a group with transients, “bad-query” would be your only option for the key to avoid name collision. Like I said, it is all semantics.

So we know we can call wp_cache_delete( $key, $group ) to delete an item from the cache, right? So that must be why groups are helpful as well, to remove a bunch of things from the cache at once? Nope. You can’t remove a group of items from the cache out of the box. Luckily, this infuriated me enough that I wrote a plugin that will do this for you. More in a bit, we still need to obtain an advanced-cache.php file.

Batcache

The advanced-cache.php piece of the puzzle is implemented by Batcache. Batcache is a full-page caching mechanism and was written by Andy Skelton from Automattic. WordPress.com used to host Gizmodo’s live blogs, because Gizmodo’s own architecture would go down during Apple keynotes after ridiculous spikes in traffic. The idea is this: if a page on your site is requested x number of times in an elapsed period of time, cache the entire HTML output of the page and serve it straight from Memcached for a defined amount of time. Batcache is a bonafide plugin, but the advanced-cache.php portion is a drop-in that should be moved to the wp-content directory.

Batcache will serve these fully-cached pages to anonymous users and send their browser a Cache-Control: max-age={MAX_AGE_OF_PAGE}, must-revalidate; HTTP header. There are some dangers you should be aware of when serving highly-cached pages from a dynamic website.

Batcache has a class member array called $unique which you should populate with values that would ordinarily make your page display different content for different users: Regionalization, perhaps Browser alias, country of origin of User, etc. If you are getting fancy and adding classes to the <body> tag based on browser, and you don’t add those classes to the $unique array in Batcache, you may end up serving HTML pages with an “ie ie7” class in the class attribute of the <body> tag to most of your users, regardless of what browser they are actually using. Fail.
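
Hedging a bit on the exact knobs: the stock drop-in reads a $batcache settings array from wp-config.php, and anything you put in 'unique' becomes part of the cache key. A sketch (the mobile sniff below is a crude illustration, not bulletproof detection):

<?php
// in wp-config.php, before advanced-cache.php is loaded
$batcache = array(
    'seconds' => 60,   // the window for counting requests
    'max_age' => 300,  // how long a cached page may be served
);

$ua = isset( $_SERVER['HTTP_USER_AGENT'] ) ? $_SERVER['HTTP_USER_AGENT'] : '';
// vary the cached page for mobile browsers so they don't get
// (or poison the cache with) the desktop markup
$batcache['unique']['mobile'] = ( false !== strpos( $ua, 'Mobile' ) );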

Batcache serves these pages to “anonymous” users only. How does it know they are anonymous? It looks in the $_COOKIE super-global for some WordPress-specific cookies.

// Never batcache when cookies indicate a cache-exempt visitor.
if ( is_array( $_COOKIE ) && ! empty( $_COOKIE ) ) {
    foreach ( array_keys( $_COOKIE ) as $batcache->cookie ) {
        if ( $batcache->cookie != 'wordpress_test_cookie' &&
            (
                substr( $batcache->cookie, 0, 2 ) == 'wp' ||
                substr( $batcache->cookie, 0, 9 ) == 'wordpress' ||
                substr( $batcache->cookie, 0, 14 ) == 'comment_author'
            )
        ) {
            return;
        }
    }
}

That’s all well and good unless you have implemented a custom authentication system like we have at eMusic.

The advanced-cache.php file that ships with Batcache is really a starting point. I know it may seem daunting or too complicated to dig into someone else’s plugin code, but if you are using Batcache, your site demands another level of skill and customization.

Using Memcached at eMusic

At eMusic, we use 4 dedicated Memcached servers in production, which combine for ~28GB of RAM. When you have THAT much memory to interact with, some interesting things can happen. Here are a few:

  • Your keys with no expiration will seemingly never expire – until the cache starts evicting LRU (Least Recently Used) keys. Lesson learned: always indicate expiration!
  • If you didn’t divide your keys up into a bunch of smaller groups, trying to flush one portion of the cache will end up flushing tons of data that doesn’t need to be refreshed, and might send a blast of traffic to your database cluster or web service tier. Lesson learned: use MANY cache groups.
  • If you aren’t updating or deleting cache keys in your code at all, you may find that you often end up with stale data – especially if you work with an editorial team / writers. They’ll regularly come to you with a “hey, my data is not updated on the live site!”
  • Don’t assume WordPress is getting it right when it comes to caching its own data – dig in and find out how it really works. You may (will) find some objectionable things.

Johnny Cache

Remember how I said that you can’t “flush” the cache by cache group? Turns out that’s a big problem for us. Why? If we roll code or add / change a feature, we sometimes want to clear a certain cache to reflect a certain change.

Cache groups are a WordPress concept, not a Memcached concept. WordPress adds the group names to keys by convention, so if you know how to parse and create keys the way the Memcached WP Object Cache backend does, you can sort through the keys on your Memcached servers and group them together yourself to inspect them in the admin. If you have a list of keys for a group, you can loop through them and call wp_cache_delete( $key, $group ) on each.

Here’s some Memcache extension code to retrieve keys:

<?php
$memcache = new Memcache();
$memcache->connect( $server, 11211 );
$list = array();
// stats for every slab (memory page class) on the server
$allSlabs = $memcache->getExtendedStats( 'slabs' );
$items = $memcache->getExtendedStats( 'items' );
foreach ( $allSlabs as $server => $slabs ) {
    foreach ( $slabs as $slabId => $slabMeta ) {
        // dump the keys stored in this slab
        $cdump = $memcache->getExtendedStats( 'cachedump', (int) $slabId );
        foreach ( $cdump as $keys => $arrVal ) {
            if ( !is_array( $arrVal ) ) continue;
            foreach ( $arrVal as $k => $v ) {
                $list[] = $k;
            }
        }
    }
}

The Memcached backend creates keys like so:

// blog_id:group:key
1:catalog:artist-info-246809809

To parse the list of keys and sort them into groups, try this:

<?php
$keymaps = array();
foreach ( $list as $item ) {
    $parts = explode( ':', $item );
    if ( is_numeric( $parts[0] ) ) {
        $blog_id = array_shift( $parts );
        $group = array_shift( $parts );
    } else {
        $group = array_shift( $parts );
        $blog_id = 0;
    }

    if ( count( $parts ) > 1 ) {
        $key = join( ':', $parts );
    } else {
        $key = $parts[0];
    }
    $group_key = $blog_id . $group;
    if ( isset( $keymaps[$group_key] ) ) {
        $keymaps[$group_key][2][] = $key;
    } else {
        $keymaps[$group_key] = array( $blog_id, $group, array( $key ) );
    }
}

ksort( $keymaps );
foreach ( $keymaps as $group => $values ) {
    list( $blog_id, $group, $keys ) = $values;
    foreach ( $keys as $key ) {
        .........
    }
}
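
With $keymaps in hand, the elided inner loop is where the flushing happens. A sketch, reusing the $memcache connection from above (this mirrors what my plugin does – it is not an API WordPress hands you):

<?php
// flush one group ('bad', say) by deleting its raw keys straight
// from Memcached. wp_cache_delete( $key, $group ) also works, but
// the backend prepends the CURRENT blog_id to the key, so raw
// deletes are safer when sweeping keys across blogs.
foreach ( $keymaps as $values ) {
    list( $blog_id, $group, $keys ) = $values;

    if ( 'bad' !== $group )
        continue;

    foreach ( $keys as $key ) {
        // rebuild the raw key the same way the backend builds it
        $raw = $blog_id ? "{$blog_id}:{$group}:{$key}" : "{$group}:{$key}";
        $memcache->delete( $raw );
    }
}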

Johnny Cache is the front-end I wrote for the WP Admin to do this. It allows you to select one Memcached server at a time. Once selected, the server’s keys (not values) are retrieved, then parsed and ordered by group and blog. The plugin allows you to do the following:

  • Flush keys by cache group
  • Remove single items from the cache
  • View the contents of single items in the cache
  • Flush the cache for a specific user by User ID – helpful if you made a change by hand in the database for a specific user

Johnny Cache is still a work in progress, but flushing cache by group was important enough that I wrote my own code to handle it.

Minify-cation

Working in a load-balanced environment is a lot different than working on one server. Nowhere is this clearer than when working with tools that expect you to dynamically create static files on the server and then point a URL directly at them. Almost all of the time, this doesn’t work. Also, if generating the static files is expensive, the task is best performed once, cached, and the results shared with every other server.

I wrote a plugin called Minify that magically grabs all of your page’s scripts and styles, hashes the src names to create an identifier, and then combines the styles or scripts into one file and runs them through some minification classes.

I get around having to serve flat files by creating a rewrite which points to a “make” file. The make file either reads the result of this smashing-together-of-sources, or does the smashing and then saves / serves the result. The advantage here is that every request for our JavaScript and CSS comes straight out of Memcached.
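
Roughly, the make file does this (a sketch with invented names – the real plugin parses the hash / increment out of the rewrite, and lookup_sources() is a hypothetical stand-in for mapping a hash back to its source files):

<?php
// make.php - the rewrite target behind minify-{HASH}-{INCR}.{EXT}
$hash = $_GET['hash'];
$css  = wp_cache_get( $hash, 'minify' );

if ( false === $css ) {
    $css = '';
    // smash the source files together...
    foreach ( lookup_sources( $hash ) as $file )
        $css .= file_get_contents( $file );

    // ...run the result through your minification class of choice...

    // ...and cache it, so EVERY server serves it from Memcached
    wp_cache_set( $hash, $css, 'minify' );
}

header( 'Content-Type: text/css' );
echo $css;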

Gotcha alert! To cache-bust a CDN like Akamai, you need to change the file name every time you roll new code, to make sure your users aren’t getting served stale files. Query strings suck for that, so I made an increment part of the rewrite ( get_site_option( 'minify:incr' ) ). Our paths end up looking like:

http://www.emusic.com/wp-content/cache/minify-bdda2ca041434058e578f7b84eb7482b-23875598.css

// here's how it is translated
http://{HOST}/wp-content/cache/minify-{HASH}-{INCR}.{EXTENSION}

Works for both JavaScript and CSS. It’s magic.

Sitemaps

Same concept. If I want to serve google-news-sitemap.xml, I make a rewrite that points to google-news-sitemap.php. If the result is in the cache, serve it; otherwise build it, store it, then serve it.
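
In sketch form (build_news_sitemap() is a hypothetical stand-in for however you generate the XML):

<?php
// google-news-sitemap.php - the rewrite target
$xml = wp_cache_get( 'google-news-sitemap', 'sitemaps' );

if ( false === $xml ) {
    $xml = build_news_sitemap();
    // cache for an hour; every server shares this copy
    wp_cache_set( 'google-news-sitemap', $xml, 'sitemaps', 60 * 60 );
}

header( 'Content-Type: text/xml' );
echo $xml;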

Memcached on the command line

One final note on interacting with Memcached servers: you can telnet to a server and issue commands over the plain-text protocol it speaks.

$ telnet localhost 11211
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
get foo
VALUE foo 0 2
hi
END
stats
STAT pid 8861

More here. Also take a look at Memcache-top.

Conclusion

There is no “definitive” way to use cache, so experiment and think through your caching strategy. A finely-tuned cache will drastically improve your website’s resilience and performance.

WordPress in Dev, QA, and Prod

If you work at a “corporation” and are using WordPress as a CMS, chances are you have multiple environments in which you do your development and testing before ultimately deploying to production. Syncing your environments (code, configs, database) can be a nightmare – unless you take some time to figure out just how similar your environments can be, and focus only on how they differ. I am going to share with you the techniques we use at eMusic to seamlessly switch environments without any manual intervention in the database, codebase, cache, or configs (after the initial setup).

Database

You shouldn’t have to touch your database! You should embrace the idea that “production” is your authoritative database and only pull data DOWN, never push it UP. I know many tools exist to publish database rows from one environment’s database to another, but it is way easier to know you can always pull fresh data and “everything just works” rather than pushing data in real-time at runtime and praying for the best.

This will of course require some planning. Say I made a new “page” that uses rewrite and has some query vars that end up producing a very dynamic page – like an artist’s page for one of their albums. Before we roll this code to prod, I need to create the page in the WordPress admin on production, so the URL will work and the page will be “live” when the code rolls.

Periodically – perhaps after each roll to production – a backup can be made of production data (an export) and “pulled down” to every other environment’s database (an import on each). I prefer Sequel Pro to manage our many databases, and love its easy-to-use export / import GUI tools. Sequel Pro gives you the option to compress your SQL export using Gzip. I have had some edge cases where compressed exports exposed bad character encoding in some of my tables (someone pasted from Word without using the GUI button in TinyMCE…); exporting SQL uncompressed has not. Uncompressed SQL for a database with many millions of rows will produce a huge file, so if you are confident in the cleanliness of your character encoding, try a compressed export for a much smaller file size (8-10x smaller).

Why is this a good method?

  • Staging can always be replenished with fresh data, erasing any unnecessary dummy / test posts
  • What you do in dev, stays in dev, probably local to YOUR machine, so it can’t break anything
  • Prod remains prod, you can’t blow it up in real time with a bad import or a haywire push of rows / deltas

HEY WAIT A SECOND, ALL OF THE URLS IN THE DATABASE POINT TO PROD!!!!!

Output Buffering

Right, so you will need to somehow filter EVERYTHING that comes out of the database and replace production URLs with your current environment’s URLs. This can seem like an impossible task – until you understand how output buffering works.

An output buffer captures all output that would normally be sent to the browser as bytes of HTML, until you get its contents and choose to echo them out. Meaning, I can code my HTML document like I normally would, but the output buffer will swallow all of the output until I give the command to display it. Sometimes this is called “flushing” the output buffer. Steve Souders even suggests calling the PHP function flush() right after your HTML </head> to send bytes to the user’s browser earlier and faster. Our technique kinda does the opposite of that, but what we get in return is far more valuable: a codebase that adapts to any environment, and one that rewrites the URLs on the fly for us.

This code can really go anywhere before you start echo’ing content or writing HTML, but the safest place to put it is in wp-config.php:

<?php 
define( 'DEV_HOST', $_SERVER['HTTP_HOST'] );

// pick ONE of these - constants can only be defined once:
// single site install
define( 'PROD_HOST', 'www.emusic.com' );
// if you're using Multisite
define( 'PROD_HOST', DOMAIN_CURRENT_SITE );

function your_callback_functions_name( $page ) {
    return str_replace( PROD_HOST, DEV_HOST, $page ); 
}

// likewise, pick ONE of these:
// pre-PHP 5.3
ob_start( 'your_callback_functions_name' );
// PHP 5.3 (anonymous function)
ob_start( function ( $page ) {
    return str_replace( PROD_HOST, DEV_HOST, $page ); 
} );

You can also include an array of URLs to replace with your DEV HOST. This is important if you have a production environment that has a host name for “wp-admin,” maybe a host name for XML-RPC servers, and a host name for your production site.

To set up a local host name for your site in Apache – you need to add an entry to your {PATH_TO_APACHE}/conf/extra/httpd-vhosts.conf file:

##
#
# Example Local Configuration (on a Mac)
#
##

<Directory /Users/staylor/Sites/>
Options Indexes MultiViews ExecCGI FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
<VirtualHost *:80>
    DocumentRoot "/Users/staylor/Sites/emusic/trunk"
    ServerName emusic.local
</VirtualHost>

You also need to add an entry in your /etc/hosts file:

127.0.0.1 emusic.local

AND AFTER THAT, you need to flush your local DNS cache to accept your new host:

#!/bin/bash
sudo {PATH_TO_APACHE_BIN}/apachectl restart
dscacheutil -flushcache

DEV_HOST requires a hard-coded host name (e.g. emusic.local) if $_SERVER['HTTP_HOST'] produces an IP address.
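
One way to handle that case (a sketch):

<?php
// if the request came in on a bare IP, HTTP_HOST is useless for
// host replacement - fall back to a hard-coded local name
if ( filter_var( $_SERVER['HTTP_HOST'], FILTER_VALIDATE_IP ) ) {
    define( 'DEV_HOST', 'emusic.local' );
} else {
    define( 'DEV_HOST', $_SERVER['HTTP_HOST'] );
}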

Here’s an example of filtering by searching for multiple URLs and replacing them with the current host (the callback is called – and passed the contents of the buffer – when the script and all of its output reach the end; you can nest output buffers as well, more on that later):

// PHP 5.3
// these aren't real hosts
define( 'TOOLS_HOST', 'tools.emusic.com' );
define( 'XML_RPC_HOST', 'xml-rpc.emusic.com' );
define( 'PROD_HOST', 'www.emusic.com' );

ob_start( function ( $page ) {
    return str_replace( array( TOOLS_HOST, XML_RPC_HOST, PROD_HOST ), DEV_HOST, $page ); 
} );

Ok, I got this to work, but wait a second, we have DIFFERENT host names in staging for XML-RPC servers, etc, how do I tackle that….?

Machine-specific configs

We still use wp-config.php, but with a twist. We use MANDATORY machine configs. I’ll explain how. Inside of wp-config.php (right near the top):

define( 'DB_CONFIG_FILE', '/wp-config/hyperdb.php' );

if ( !is_file( DB_CONFIG_FILE ) ) {
    die( "You need a HyperDB config file in this machine's /wp-config folder" );
}

if ( is_file( '/wp-config/config.php' ) ) {
    require( '/wp-config/config.php' );
} else {
    die( 'You need a config file for your environment called config.php and located in /wp-config, and it needs to be owned by Apache.' );
}

Granted, you don’t have to use HyperDB. But, if you get any sort of serious traffic on your production site, you kinda HAVE to use HyperDB. HyperDB is outside the realm of this post, but just know it is a magical master-slave manager for a fancy production MySQL cluster.
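
For flavor, the guts of a /wp-config/hyperdb.php might look something like this (host names invented – see HyperDB’s sample config for the full set of options):

<?php
// a minimal master + replica setup. HyperDB sends writes to
// servers flagged 'write' and spreads reads across 'read' servers.
$wpdb->add_database( array(
    'host'     => 'db-master.internal',   // invented host name
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 1,
    'read'     => 1,
) );

$wpdb->add_database( array(
    'host'     => 'db-replica.internal',  // invented host name
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 0,
    'read'     => 1,
) );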

The setup here is simple enough. Create a folder called wp-config (or whatever you would like) in the root directory of your machine and make sure Apache can read it.

Ok great, what goes in this /wp-config/config.php file…? Well, really anything you want, but probably these things:

// local constant for use in code
define( 'EMUSIC_CURRENT_HOST', $_SERVER['HTTP_HOST'] );

global $super_admins, $memcached_servers;
// define those here if you want, or not...

define( 'SUNRISE', 1 );

// your local database credentials
define( 'DB_NAME', 'my_database_name' );
define( 'DB_USER', 'root' );
define( 'DB_PASSWORD', 'mypassword' );
define( 'DB_HOST', '127.0.0.1' );
define( 'DB_CHARSET', 'utf8' );

// you may have environment specific S3 buckets, or not
// it's a good idea to keep these credentials in a file like this for security
// and ease of editing
define( 'AMAZON_S3_KEY', 'your_bucket_key' );
define( 'AMAZON_S3_SECRET', 'your_bucket_secret' );
define( 'AMAZON_S3_BUCKET', 'your_bucket' );

// multisite values
define( 'WP_ALLOW_MULTISITE', true );
define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );
define( 'DOMAIN_CURRENT_SITE', 'www.emusic.com' );
$base = '/';
define( 'PATH_CURRENT_SITE', '/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );

// maybe some hard-coded production values to filter
define( 'PRODUCTION_CURRENT_SITE', 'www.emusic.com' );
define( 'PRODUCTION_CURRENT_TOOLS', 'tools.emusic.com' );
define( 'XMLRPC_DOMAIN', 'xmlrpc.emusic.com' );

// environment-specific endpoints
define( 'XMLRPC_ENDPOINT', EMUSIC_CURRENT_HOST );
define( 'TOOLS_DOMAIN_FQDN', EMUSIC_CURRENT_HOST );

ob_start( function ( $page ) {
   // this might be overkill, but it makes sure you are 
   // getting rid of any "wrong" URL that made its way into
   // the database somehow
    return str_replace(
        array(
            XMLRPC_DOMAIN,
            TOOLS_DOMAIN_FQDN,
            DOMAIN_CURRENT_SITE,
            PRODUCTION_CURRENT_TOOLS 
        ),
        EMUSIC_CURRENT_HOST,
        $page
     );
} );

// custom WordPress configs:
// we use an svn:external for WordPress code
// in a different directory other than root
//
// we move our assets directory and exclude it from Subversion
// more on that later...
define( 'WP_CONTENT_URL',
    'http://' . DOMAIN_CURRENT_SITE . '/wp-content' );
define( 'WP_CONTENT_DIR',
    $_SERVER['DOCUMENT_ROOT'] . '/wp-content' );
define( 'EMUSIC_UPLOADS', 'assets' );

This is an example config that might look completely different on each environment’s machine(s):

  • Each environment might have specific overrides (a $super_admins array?)
  • You might specify 6 Memcached servers in one environment and zero in another
  • Your database credentials will undoubtedly change across environments

Sunrise

We use Multisite, so we also have to do some work in wp-content/sunrise.php. When using Multisite, WordPress allows you to get in there early by setting define( 'SUNRISE', 1 ) and completely changing what site and blog you are on by adding your own monkey business in wp-content/sunrise.php, if you so choose. You can also start adding filters and actions before the meat and potatoes of WordPress starts doing its thing. We are filtering output with the output buffer, but we have to filter PHP variables in code using WordPress filters:

$domain = EMUSIC_CURRENT_HOST;

// a bunch of code here is omitted that looks at domain
// and does some crazy stuff to switch between "blogs"
// without actually adding Wildcard DNS

// Multisite can require some unconventional
// code and admin wrangling to get things to work
// properly when you have a custom WP location,
// custom media location, AND a custom Multisite media
// location

// all you need to know:
// this code will produce $the_id (representing blog_id) and
// possibly alter $domain

// and then...

function get_environment_host( $url ) {
    global $domain;
    return str_replace(
        array(
            XMLRPC_DOMAIN,
            TOOLS_DOMAIN_FQDN,
            DOMAIN_CURRENT_SITE
        ),
        $domain,
        $url
    );
}

add_filter( 'home_url', 	'get_environment_host' );
add_filter( 'site_url', 	'get_environment_host' );
add_filter( 'network_home_url', 'get_environment_host' );
add_filter( 'network_site_url', 'get_environment_host' );
add_filter( 'network_admin_url','get_environment_host' );
add_filter( 'post_link', 	'get_environment_host' );

add_filter( 'pre_option_home', function ( $str ) use ( $domain ) {
    return 'http://' . $domain;
} );

add_filter( 'pre_option_siteurl', function ( $str ) use ( $domain ) {
    return 'http://' . $domain;
} );

// our custom image location for blogs / sites in our network
if ( $the_id > 1 ) {
    define( 'UPLOADBLOGSDIR', 0 );
    define( 'UPLOADS', 0 );
    define( 'BLOGUPLOADDIR',
        $_SERVER['DOCUMENT_ROOT'] . "/blogs.dir/{$the_id}/files/" );

    add_filter( 'pre_option_upload_path', function () use ( $the_id ) {
        return $_SERVER['DOCUMENT_ROOT'] . "/blogs.dir/{$the_id}/files/";
    } );

    add_filter( 'pre_option_upload_url_path', function () use ( $the_id, $domain ) {
        return 'http://' . $domain . "/blogs.dir/{$the_id}/files/";
    } );

// our custom image location for our main site
} else {
    add_filter( 'pre_option_upload_path', function ( $str ) {
        return $_SERVER['DOCUMENT_ROOT'] . '/' . EMUSIC_UPLOADS;
    } );

    add_filter( 'pre_option_upload_url_path', function ( $str ) {
        return 'http://' . EMUSIC_CURRENT_HOST  . '/' . EMUSIC_UPLOADS;
    } );
}

Ok wow, this is great, but what about images, how do I sync them…?

Images / S3

Trying to keep images sync’d in Subversion is a nightmare. Your production code probably isn’t a Subversion checkout (although WordPress.com is, they run trunk). It is more than likely a Subversion export that has been rsync’d across all of your many load-balanced servers / EC2 instances. If you don’t have a dedicated server for wp-admin, your images might not even end up on the same server – they could be spread across several. So that begs the following questions:

  • How is it humanly possible for servers to share images?
  • What happens if a server receives a request for an image it doesn’t have?
  • What happens when I pull the prod database to my local db but I don’t have any of those image paths in my file system?
  • How do I pull all of those images from production (each server) and somehow check them into Subversion… in real-time(!) ?

I am of the following opinion – you can’t! But this isn’t limited to image assets. How do you serve a sitemap that is supposed to be a static file when you have 18 servers and the file is generated dynamically and saved locally? How do you minify JavaScript and then save it to flat files that can be served by any server? My answer: you can’t, and shouldn’t!

So let’s start looking at solutions!

W3 Total Cache is the subject of intense debate across the WordPressosphere. Many say it does too much. When you are already using Batcache and Memcached, it sorta becomes overkill. But for me, there was one feature I always thought was invaluable. The CDN portion! So what does this CDN portion do?

The CDN code in W3TC gives you an admin interface to enter CDN (Amazon S3 or CloudFront, Microsoft Azure, etc.) credentials, and then, magically, when you upload a media file, it will upload that file to Amazon S3. Not only that, it will rewrite your image / media URLs to said remote location. So you keep working and uploading, and it takes care of the rest. It’s magic!
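
Conceptually, you could wire up a poor-man’s version of those two behaviors yourself with a couple of core hooks. A sketch – my_s3_upload() is a hypothetical stand-in for an S3 client, and this is NOT W3TC’s actual code (W3TC does its URL rewriting in an output buffer with regex, as we’ll see below):

<?php
// 1) push new uploads to S3 as their metadata is generated
add_filter( 'wp_generate_attachment_metadata', function ( $meta, $id ) {
    // my_s3_upload() is a hypothetical stand-in for your S3 client
    my_s3_upload( AMAZON_S3_BUCKET, get_attached_file( $id ) );
    return $meta;
}, 10, 2 );

// 2) rewrite attachment URLs to their S3 counterparts
add_filter( 'wp_get_attachment_url', function ( $url ) {
    return str_replace(
        'http://' . EMUSIC_CURRENT_HOST,
        'http://' . AMAZON_S3_BUCKET . '.s3.amazonaws.com',
        $url
    );
} );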

To use this feature, I had to rip all kinds of code OUT of W3TC and make some changes here and there. One of the first things I knew I needed was a config that would work across all environments (much like our database setup does). I needed to accomplish the following things:

  • At no point in any environment do I need to have the images in my local file system
  • Every environment’s media URLs should be seamlessly rewritten to their S3 counterpart
  • When I import the production database into any environment, media assets should appear as if they were always there.

This might sound super-sophisticated, but we’re only doing 2 crucial tasks:

  1. Adding an action that will upload the media to Amazon S3. W3TC already did all of that heavy-lifting. Thanks!
  2. Adding an output buffer to match media URLs against RegEx and replace them. W3TC already did all of that heavy-lifting. Thanks!

I had to make changes for this to work – I altered the way the configs work, made sure the GUI couldn’t override the config when messing around in the admin, and added some constants that are defined in config.php for machine-specific configuration. But ultimately, I took an existing technology and tweaked it to work to our advantage.

If you noticed, I said that W3TC uses an output buffer – their output buffer starts after ours, meaning we use nested output buffers. Their callback will run before ours, so the result of their output buffer callback gets passed to ours.
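
A quick toy demo of that inner-to-outer ordering:

<?php
// the inner buffer (started last) flushes first, so the outer
// callback receives the inner callback's output
ob_start( function ( $page ) {          // ours (outer)
    return str_replace( 'INNER', 'OUTER', $page );
} );

ob_start( function ( $page ) {          // theirs (inner)
    return str_replace( 'original', 'INNER', $page );
} );

echo 'original';
// when both buffers flush at shutdown, the browser receives "OUTER"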

The Result

Our stuff “just works.” To get started developing with WordPress at eMusic, you need the following:

  • A production account on eMusic.com
  • Machine configs present in /wp-config
  • A copy of the production database
  • A Subversion checkout
  • Some entries in /etc/hosts for our many Web Services (from our Java universe)

That’s it. Here’s me talking about this and more at WordCamp San Francisco 2011:

Concerts I Attended in 2011

Here’s the full list of shows I saw in 2011:

01/18 Broken Social Scene, Terminal 5 (link)
01/29 Iron & Wine, Radio City Music Hall (link)
01/31 The Hold Steady, Music Hall of Williamsburg (link)
02/02 Best Coast / Wavves, Webster Hall (link)
02/04 Tapes ‘n Tapes, Music Hall of Williamsburg (link)
02/05 Railroad Earth, Brooklyn Bowl
02/15 Drive-By Truckers, Bowery Ballroom (link)
02/27 Louis C.K., Caroline’s (link)
03/05 Akron / Family, Bowery Ballroom (link)
03/08 Bright Eyes / Superchunk / Wild Flag, Radio City Music Hall (link)
03/29 LCD Soundsystem, Terminal 5 (link)
04/02 LCD Soundsystem, Madison Square Garden (link)
04/06 Explosions in the Sky / Low, Radio City Music Hall (link)
05/09 Interpol / Blonde Redhead, Terminal 5 (link)
05/19 Fleet Foxes, United Palace (link)
05/31 Ray LaMontagne, Central Park Summerstage (link)
06/07 Okkervil River, Terminal 5 (link)
06/19 Drive-By Truckers, Brooklyn Bowl (link)
08/10 Bon Iver, Prospect Park Bandshell (link)
09/08 Broken Social Scene / TV on the Radio, Williamsburg Waterfront (link)
09/17 The Hold Steady, Beekman Beer Garden (link)
09/22 Wilco, Central Park Summerstage (link)
09/23 Wilco, Central Park Summerstage (link)
09/25 The Shins, Bowery Ballroom (link)
09/30 Tapes ‘n Tapes, Bowery Ballroom
10/03 Explosions in the Sky / The Antlers, Wellmont Theatre (link)
10/13 Iron & Wine / The Low Anthem, Terminal 5 (link)
11/10 Louis C.K., Beacon Theatre (link)
11/22 M83 / Active Child, Webster Hall (link)
11/25 Donna the Buffalo, City Winery
12/01 Dum Dum Girls, Brooklyn Bowl (link)
12/11 The War on Drugs, Bowery Ballroom (link)
12/13 The National, Beacon Theatre (link)
12/14 My Morning Jacket / Band of Horses, Madison Square Garden (link)

If I had to rank the top 5:

#1 My Morning Jacket / Band of Horses, Madison Square Garden

This show was amazing. Anytime one is in the same room as Band of Horses playing “The Funeral,” it is a good day. MMJ were on fire. There are still no shows that have topped their 2006 midnight show at Bonnaroo – the one where they played for 4 hours and covered The Who’s “A Quick One (While He’s Away).” MMJ being able to fill Madison Square Garden really caps off their decade of being nothing but road warriors. “Steam Engine” was unreal… as was most everything else… the full setlist is at the link below:

Read More: http://www.brooklynvegan.com/archives/2011/12/my_morning_jack_33.html

#2 & #3 The Hold Steady, Music Hall of Williamsburg + The Hold Steady, Beekman Beer Garden

It’s always great to see professionals show up to work and blow the roof off of the place every time. The show at MHOW was their 8th Anniversary show – they had played their first show as a band 8 years to the day earlier at Northsix (now known as the Music Hall of Williamsburg). The first encore was perfect: the first song on their first record, “Positive Jam.” The show at Beekman Beer Garden was perfect – balls-to-the-wall from start to finish. South Street Seaport, the Brooklyn / Manhattan Bridges, and the East River as a backdrop. The tunes were hot: they opened with “Ask Her for Her Adderall” > “Massive Nights” – they played them all, including an AC/DC cover.

Read More Here: http://www.brooklynvegan.com/archives/2011/02/the_hold_steady_32.html
And here:  http://www.brooklynvegan.com/archives/2011/09/the_hold_steady_35.html

#4 Bon Iver, Prospect Park Bandshell

The best record of the year, performed on an amazing summer night in an amazing location. We also had sick seats – Ralphie got us into the VIP section. I dare you to name a better 3-song stretch than “Holocene” > “re: stacks” > “Flume.”

Read More: http://www.brooklynvegan.com/archives/2011/08/bon_iver_the_ro_1.html

#5 The War on Drugs, Bowery Ballroom

Another of the year’s best records performed pretty much to perfection. It’s rare that an NYC show with this much hype actually lives up to it. “Come to the City” was possibly my favorite concert moment of 2011.

Read More: http://www.brooklynvegan.com/archives/2011/12/the_war_on_drug_9.html

WORST SHOW OF 2011

DUM DUM GIRLS @ BROOKLYN BOWL – SUCKED SO SO HARD!

Best Albums of 2011 (First Draft)

Bon Iver – Bon Iver
Hurry Up We’re Dreaming – M83
The Whole Love – Wilco
S/T II: The Cosmic Birth and Journey of Shinju TNT – Akron / Family
Only in Dreams – Dum Dum Girls
Codes and Keys – Death Cab for Cutie
Outside – Tapes ‘n Tapes
Let England Shake – PJ Harvey
Slave Ambient – The War on Drugs
Tomboy – Panda Bear
Helplessness Blues – Fleet Foxes
In the Mountain In the Cloud – Portugal. The Man.
Torches – Foster the People

Fix for “The plugin does not have a valid header”

If you’ve ever seen this error when trying to install a plugin in your WordPress Admin Panel, you have probably considered going on a murderous rampage:

“The plugin does not have a valid header.”

This error is probably due to your list of Plugins being cached in whatever WP Object Cache your blog / site is using. None of the sites you will find when searching Google will tell you this. They will tell you that you have an invalid Plugin Header. You might. But probably, your list of files is cached.

There are 2 problems here:

1) The list of plugins ( $cache_plugins ) is not pluggable

2) The generic error for all missing plugins is “The plugin does not have a valid header”

function validate_plugin( $plugin ) {
    if ( validate_file( $plugin ) )
        return new WP_Error( 'plugin_invalid',
            __( 'Invalid plugin path.' ) );
    if ( ! file_exists( WP_PLUGIN_DIR . '/' . $plugin ) )
        return new WP_Error( 'plugin_not_found',
            __( 'Plugin file does not exist.' ) );

    $installed_plugins = get_plugins();
    if ( ! isset( $installed_plugins[ $plugin ] ) )
        return new WP_Error( 'no_plugin_header',
            __( 'The plugin does not have a valid header.' ) );
    return 0;
}

TO FIX THIS GARBAGE: in /wp-admin/includes/plugin.php, around line 219, comment out the lines below, then visit your Plugins list:

// if ( ! $cache_plugins = wp_cache_get('plugins', 'plugins') )
$cache_plugins = array();

// if ( isset($cache_plugins[ $plugin_folder ]) )
// return $cache_plugins[ $plugin_folder ];

When you’re done and everything works, un-comment those lines. If this didn’t fix the problem, then yes, you probably fucked up when creating the Plugin header comment.
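
For what it’s worth, there’s a less invasive route: since the core code above reads the list via wp_cache_get( 'plugins', 'plugins' ), you can usually just delete that cache key instead of commenting out core. A throwaway snippet like this (run once from a plugin or theme, then remove it):

<?php
// nuke the cached plugin list so get_plugins() re-reads the
// plugin files from disk on the next request
wp_cache_delete( 'plugins', 'plugins' );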