Performance is a Problem

The (failing) New York Times has many apps and services. Some live on physical servers, others reside in The Cloud™. I manage an app that runs on multiple containers orchestrated by Kubernetes in (A)mazon (W)eb (S)ervices. Many of our apps are migrating to (G)oogle (C)loud (P)latform. Containers are great, and Kubernetes is (mostly) great. “Scaling” usually happens horizontally: you add more replicas of your app to match your traffic/workload. Kubernetes does the scaling and rolling updates for you.

Most apps at the Times are adopting the 12 Factor App methodology – if you haven’t been exposed, give this a read. tl;dr containers, resources, AWS/GCP, run identical versions of your app everywhere. For the app I manage, the network of internationalized sites including Español, I use a container for nginx and another for php-fpm – you can grab it here. I use RDS for the database, and Elasticache for the Memcached service. All seems good. We’re set for world domination.

Enter: your application’s codebase.

If your app doesn’t talk to the internet or a database, it will probably run lightning fast. This is not a guarantee, of course, but network latency will not be an issue. Your problem would be mathematical. Once you start asking the internet for data, or connect to a resource to query data, you need a game plan.

As you may have guessed, the network of sites I alluded to above is powered by WordPress™. WordPress, on its best day, is a large amount of PHP 5.2 code that hints at caching and queries a database whose schema I would generously call Less Than Ideal. WP does not promise you fast queries, but it does cache data in a non-persistent fashion out of the box. What does this mean? It sets a global variable called $wp_object_cache and stores your cache lookups for the length of the request. For the most part, WP will attempt to read from the cache before making expensive database queries. Because it would be nice to make this cache persistent (work across multiple requests/servers), WP will look for an object cache drop-in: wp-content/object-cache.php, and if it is there, will use your implementation instead of its own.
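
To make that concrete, here is a minimal sketch of the Cache API in action – the key, group, and helper function are hypothetical:

// First lookup: not in $wp_object_cache yet, so this returns false
$author = wp_cache_get( 'author_42', 'my_plugin' );
if ( false === $author ) {
  // do the expensive work once (hypothetical helper)...
  $author = my_plugin_expensive_author_lookup( 42 );
  // ...then store the result
  wp_cache_set( 'author_42', $author, 'my_plugin' );
}

// Later in the same request this is served from the in-memory array.
// With an object-cache.php drop-in backed by Memcached, it also
// survives into subsequent requests.
$author = wp_cache_get( 'author_42', 'my_plugin' );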

Before we go any further, let me say this: it is basically mandatory that you use persistent caching.

Keep in mind: you haven’t written any code yet – you have only installed WP. Many of us do this and think we are magically done. We are not. If you run a simple blog that gets very little traffic, you are probably OK to move on and let your hosting company do all of the work.

If you are a professional web developer/software engineer/devops ninja, you need to be aware of things – probably before you start designing your app.

Network requests are slow

When it comes to dealing with the internals of WordPress with regard to database queries, there is not a lot you can do to speed things up. The queries are already written in a lowest-common-denominator way against a schema which is unforgiving. WP laces in caching where it can, and simple object retrieval works as expected. WordPress does not cache queries; what it does is limit complex queries to only return IDs (primary keys), so that single-item lookups can be triggered and populate the cache. Here’s a crude example:

// expensive query that returns only IDs (primary keys)
$ids = $wpdb->get_col( '...INSERT SLOW QUERY...' );
// warm the post, term, and meta caches for those IDs in one pass
_prime_post_caches( $ids, true, true );
// recursively get each item - now served from the primed cache
$posts = array_map( 'get_post', $ids );

get_post() will get the item from the cache and hydrate it, or go to the database, populate the cache, and hydrate the result. This is the default behavior of WP_Query internals: expensive query, followed by recursive lookups from the cache after it has been primed. Showing 20 posts on the homepage? Uncached, that’s at least 2 database queries. Each post has a featured image? More queries. A different author? More queries. You can see where this is going. An uncached request is capable of querying everything in sight. Like I said, it is basically mandatory that you use an object cache. You will still have to make the query for items, but each individual lookup could then be read from the cache.
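
For illustration, the lookup inside get_post() boils down to roughly this pattern – a simplification, not the literal core code:

// Simplified sketch of the WP_Post::get_instance() pattern
$_post = wp_cache_get( $post_id, 'posts' );
if ( ! $_post ) {
  global $wpdb;
  $_post = $wpdb->get_row(
    $wpdb->prepare( "SELECT * FROM $wpdb->posts WHERE ID = %d LIMIT 1", $post_id )
  );
  if ( $_post ) {
    wp_cache_add( $_post->ID, $_post, 'posts' );
  }
}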

You are not completely out of the woods when you read from the cache. How many individual calls to the cache are you making? Before an item can be populated in $wp_object_cache non-persistently, it must be requested from Memcached. Memcached has a mechanism for requesting items in batch, but WordPress does not use it. Cache drop-ins could implement it, but the WP internals have no mechanism for collecting keys up front and fetching them all in a single batch request.
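
For reference, the batch mechanism on the Memcached side is getMulti() (and the corresponding setMulti()/deleteMulti()). A sketch of the difference, with hypothetical keys:

$memcached = new Memcached();
$memcached->addServer( '127.0.0.1', 11211 );

// What effectively happens today: one round trip per key
$values = [];
foreach ( [ 'post:1', 'post:2', 'post:3' ] as $key ) {
  $values[ $key ] = $memcached->get( $key );
}

// What a batch-aware cache layer could do: one round trip for all keys
$values = $memcached->getMulti( [ 'post:1', 'post:2', 'post:3' ] );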

We know that we could speed this all up by also caching the query, but then invalidation becomes a problem. YOU would have to implement a query cache, YOU would have to write the unit tests, YOU would have to write the invalidation logic. Chances are that YOUR code will have hard-to-track-down bugs, and introduce more complexity than is reasonable.

WP_Query might be doing the best it can, for what it is and what codebase it sits in. It does do one thing that is essential: it primes the cache. Whether the cache is persistent or not, get_post() uses the Cache API internally (via WP_Post::get_instance()) to reduce the work of future lookups.

Prime the Cache

Assuming you are writing your own code to query the database, there are 3 rules of thumb:

1) make as few queries as is possible
2) tune your queries so that they are as fast as possible
3) cache the results

Once again, cache and invalidation can be hard, so you will need unit tests for your custom code (more on this in a bit).
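
As a combined illustration of rules 1 and 3, here is a minimal sketch of caching a custom query and invalidating the entry when posts change – the key, group, and query are hypothetical:

function my_plugin_recent_ids() {
  $ids = wp_cache_get( 'recent_ids', 'my_plugin' );
  if ( false === $ids ) {
    global $wpdb;
    $ids = $wpdb->get_col(
      "SELECT ID FROM {$wpdb->posts}
       WHERE post_status = 'publish' AND post_type = 'post'
       ORDER BY post_date DESC LIMIT 20"
    );
    wp_cache_set( 'recent_ids', $ids, 'my_plugin', 5 * MINUTE_IN_SECONDS );
  }
  return $ids;
}

// Invalidation: bust the entry whenever a post is saved.
add_action( 'save_post', function () {
  wp_cache_delete( 'recent_ids', 'my_plugin' );
} );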

This is all well and good for the database, but what about getting data from a web service, another website, or a “microservice,” as it were? We already know that network latency can pile up by making too many requests to the database throughout the lifecycle of a request, but querying the database is usually “pretty fast” – at least not as slow as making requests over HTTP.

HTTP requests are as fast as the network latency to hit a server and how slow that server is to respond with something useful. HTTP requests are somewhat of a black box and can time out. HTTP requests should not be made serially – meaning, you can’t afford to make a request, wait, make a request, wait, make a request…

HTTP requests must be batched

If you need data from 3 different web services that don’t rely on each other’s responses, you should request them asynchronously and simultaneously. WordPress doesn’t officially offer this functionality unless you reach down into Requests and write your own userland wrapper and cache. I highly suggest installing Guzzle via Composer and including it in your project where needed.
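
Pulling Guzzle in is a one-liner, assuming Composer is already set up in your project:

composer require guzzlehttp/guzzle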

Here is some example Guzzle code that deals with Concurrency:

use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client(['base_uri' => 'http://httpbin.org/']);

// Initiate each request but do not block
$promises = [
  'image' => $client->getAsync('/image'),
  'png'   => $client->getAsync('/image/png'),
  'jpeg'  => $client->getAsync('/image/jpeg'),
  'webp'  => $client->getAsync('/image/webp')
];

// Wait on all of the requests to complete. Throws a ConnectException
// if any of the requests fail
$results = Promise\unwrap($promises);

// Wait for the requests to complete, even if some of them fail
$results = Promise\settle($promises)->wait();

// You can access each result using the key of the original promise;
// with settle(), each entry is an array whose 'value' is the response.
echo $results['image']['value']->getHeader('Content-Length')[0];
echo $results['png']['value']->getHeader('Content-Length')[0];

Before you start doing anything with HTTP, I would suggest taking a hard look at how your app intends to interact with data, and how that data fetching can be completely separated from your render logic. WordPress has a bad habit of mixing user functions that create network requests with PHP output to do what some would call “templating.” Ideally, all of your network requests will have completed by the time you start using data in your templates.

In addition to batching, almost every HTTP response is worthy of a cache entry – preferably one that is long-lived. Most responses are for data that never changes, or does so rarely. Data that might need to be refreshed can be given a short expiration. Think in minutes and hours where possible. Our default cache policy at eMusic for catalog data was 6 hours. The data only refreshed every 24. If your cache size grows beyond its limit, entries will be passively evicted. You can play with this to size your cache in a way that even entries with no expiration will routinely get rotated.

A simple cache for an HTTP request might look like:

$response = wp_cache_get( 'meximelt', 'taco_bell' );
if ( false === $response ) {
  $endpoint = 'http://tacobell.com/api/meximelt?fire-sauce';
  $response = http_function_that_is_not_wp_remote_get( $endpoint );
  wp_cache_set( 'meximelt', $response, 'taco_bell' );
}

How your app generates its cache entries is probably its bottleneck

The key take-away from the post up until now is: how your app generates its cache entries is probably its bottleneck. If all of your network requests are cached, and especially if you are using an edge cache like Varnish, you are doing pretty well. It is in those moments when your app needs to regenerate a page, or make a bunch of requests over the network, that you will experience the most pain. WordPress is not fully prepared to help you in these moments.

I would like to walk through a few common scenarios for those who build enterprise apps and are not beholden to WordPress to provide all of the development tools.

WordPress is not prepared to scale as a microservice

WordPress is supposed to be used as a one-size-fits-all approach to creating a common app with common endpoints. WordPress assumes it is the router, the database abstraction, the thin cache layer, the HTTP client, the rendering engine, and hey why not, the API engine. WordPress does all of these things while also taking responsibility for kicking off “cron jobs” (let’s use these words loosely) and phoning home to check for updates. Because WordPress tries to do everything, its code tries to be everywhere all at once. This is why there are no “modules” in WordPress. There is one module: all of it.

A side effect of this monolith is that none of the tools accomplish everything you might expect. A drawback: because WordPress knows it might have to do many tasks in the lifecycle of one request, it eagerly loads most of its codebase before performing any worthwhile work. Just “loading WordPress” can take more than 1 second on a server without opcache enabled properly. The cost of loading WordPress at all is high. Another reason it has to load so much code is that the plugin ecosystem has been trained to expect every possible API to be in scope, regardless of what is ever used. This is the opposite of service discovery and modularity.
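
For what it’s worth, “enabled properly” mostly comes down to a few lines of php.ini – these values are a reasonable starting point, not a tuned recommendation:

; php.ini (or a file dropped into conf.d)
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
; in production, skip the stat() check on every request
opcache.validate_timestamps=0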

The reason I bring this up: microservices are just abstractions to hide the implementation details of fetching data from resources. The WordPress REST API, at its base, is just grabbing data from the database for you, at the cost of loading all of WordPress and adding network latency. This is why it is not a good solution for providing data to yourself over HTTP, but a necessary solution for providing your data to another site over HTTP. Read-only data is highly-cacheable, and eventual consistency is probably close enough for jazz in the cache layer.

Wait, what cache layer? Ah, right. There is no HTTP cache in WordPress. And there certainly is no baked-in cache for served responses. wp-content/advanced-cache.php + WP_CACHE allows you another opportunity to bypass WordPress and serve full responses from the cache, but I haven’t tried this with Batcache. If you are even considering being a provider of data to another site and want to remain highly-available, I would look at something like Varnish or Fastly (which is basically Varnish).

A primer for introducing Doctrine

There are several reasons that I need to interact with custom tables on the network of international sites I manage at the Times. Media is a global concept – one set of tables is accessed by every site in the network. Each site has its own set of metadata, to store translations. I want to be able to read the data and have it cached automatically. I want to use an ORM-like API that invalidates my cache for me. I want an API to generate SELECT statements, and I want queries to automatically prime the cache, so that when I do:

$db->query('EXPENSIVE QUERY that will cache itself (also contains item #1)!');
// already in the cache
$db->find(1);

All of this exists in Doctrine. Doctrine is an ORM. Doctrine is powerful. I started out writing my own custom code to generate queries and cache them. I caused bugs. I wrote some unit tests. I found more bugs. After enough headaches, I decided to use an open source tool, widely accepted in the PHP universe as a standard. Rather than having to write unit tests for the APIs themselves, I could focus on using the APIs and write tests for our actual implementation.

  • WordPress is missing an API for SELECT queries

    Your first thought might be: “phooey! we got WP_Query!” Indeed we do. But what about tables that are not named wp_posts… that’s what I thought. There are a myriad of valid reasons that a developer needs to create and interact with a custom database table, tables, or an entirely separate schema. We need to give ourselves permission to break out of the mental prison that might be preventing us from using better tools to accomplish tasks that are not supported by WP out of the box. We are going to talk about Doctrine, but first we are going to talk about Symfony.

  • Symfony

    Symfony is the platonic ideal of a modular PHP framework. This is, of course, debatable, but it is probably the best thing we have in the PHP universe. Symfony, through its module ecosystem, attempts to solve specific problems in discrete ways. You do not have to use all of Symfony, because its parts are isolated modules. You can even opt to just use a lighter framework, called Silex, that gives you a bundle of powerful tools with Rails/Express-like app routing. It isn’t a CMS, but it does make prototyping apps built in PHP easy, and it makes you realize that WP doesn’t attempt to solve technical challenges in the same way.

  • Composer

    You can use Composer whenever you want, with any framework or codebase, without Composer being aware of all parts of your codebase, and without Composer knowing which framework you are using. composer.json is package.json for PHP. Most importantly, Composer will autoload all of your classes for you. require_once 'WP_The_New_Class_I_added.php4' is an eyesore and unnecessary. If you have noticed, WP loads all of its library code in wp-settings.php, whether or not you are ever going to use it. This is bad, so bad. Remember: without a tuned opcache, just loading all of these files can take upwards of 1 second. Bad, so so bad. Loading classes on-demand is the way to go, especially when much of your code only initializes to *maybe* run on a special admin screen or the like.

    Here is how complicated autoloading with Composer is – in wp-config.php, please add:

    require_once 'vendor/autoload.php';
    

    I need a nap.
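
    For reference, the corresponding composer.json just needs a require block and an autoload section for your own classes – the namespace and path here are hypothetical:

    {
      "require": {
        "guzzlehttp/guzzle": "^6.0"
      },
      "autoload": {
        "psr-4": {
          "NYT\\": "lib/php/"
        }
      }
    }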

  • Pimple

    Pimple is the Dependency Injection Container (sometimes labeled without irony as a DIC), also written by Fabien Potencier, the creator of Symfony. A DI container allows us to lazy-load resources at runtime, while also keeping their implementation details opaque. The benefit is that we can more simply mock these resources, or swap out the implementation of them, without changing the details of how they are referenced. It also allows us to group functionality in a sane way, and more cleanly expose a suite of services to our overall app container. Finally, it makes it possible to have a central location for importing libraries and declaring services based on them.

    A container is an instance of Pimple, populated by service providers:

    namespace NYT;
    
    use Pimple\Container;
    
    class App extends Container {}
    
    // and then later...
    
    $app = new App();
    $app->register( new AppProvider() );
    $app->register( new Cache\Provider() );
    $app->register( new Database\Provider() );
    $app->register( new Symfony\Provider() );
    $app->register( new AWS\Provider() );
    

    Not to scare you, and this is perhaps beyond the scope of this post, but here is what our Database provider looks like:

    namespace NYT\Database;
    
    use Pimple\Container;
    use Pimple\ServiceProviderInterface;
    use Doctrine\ORM\EntityManager;
    use Doctrine\ORM\Configuration;
    use Doctrine\Common\Persistence\Mapping\Driver\StaticPHPDriver;
    use Doctrine\DBAL\Connection;
    use NYT\Database\{Driver,SQLLogger};
    
    class Provider implements ServiceProviderInterface {
      public function register( Container $app ) {
    
        $app['doctrine.metadata.driver'] = function () {
          return new StaticPHPDriver( [
            __SRC__ . '/lib/php/Entity/',
            __SRC__ . '/wp-content/plugins/nyt-wp-bylines/php/Entity/',
            __SRC__ . '/wp-content/plugins/nyt-wp-media/php/Entity/',
          ] );
        };
    
        $app['doctrine.config'] = function ( $app ) {
          $config = new Configuration();
    
          $config->setProxyDir( __SRC__ . '/lib/php/Proxy' );
          $config->setProxyNamespace( 'NYT\Proxy' );
          $config->setAutoGenerateProxyClasses( false );
    
          $config->setMetadataDriverImpl( $app['doctrine.metadata.driver'] );
          $config->setMetadataCacheImpl( $app['cache.array'] );
    
          $cacheDriver = $app['cache.chain'];
          $config->setQueryCacheImpl( $cacheDriver );
          $config->setResultCacheImpl( $cacheDriver );
          $config->setHydrationCacheImpl( $cacheDriver );
    
          // this has to be on, only thing that caches Entities properly
          $config->setSecondLevelCacheEnabled( true );
          $config->getSecondLevelCacheConfiguration()->setCacheFactory( $app['cache.factory'] );
    
          if ( 'dev' === $app['env'] ) {
            $config->setSQLLogger( new SQLLogger() );
          }
          return $config;
        };
    
        $app['doctrine.nyt.driver'] = function () {
          // Driver calls DriverConnection, which sets the internal _conn prop
          // to the mysqli instance from wpdb
          return new Driver();
        };
    
        $app['db.host'] = function () {
          return getenv( 'DB_HOST' );
        };
    
        $app['db.user'] = function () {
          return getenv( 'DB_USER' );
        };
    
        $app['db.password'] = function () {
          return getenv( 'DB_PASSWORD' );
        };
    
        $app['db.name'] = function () {
          return getenv( 'DB_NAME' );
        };
    
        $app['doctrine.connection'] = function ( $app ) {
          return new Connection(
            // these credentials don't actually do anything
            [
              'host' => $app['db.host'],
              'user' => $app['db.user'],
              'password' => $app['db.password'],
            ],
            $app['doctrine.nyt.driver'],
            $app['doctrine.config']
          );
        };
    
        $app['db'] = function ( $app ) {
          $conn = $app['doctrine.connection'];
          return EntityManager::create( $conn, $app['doctrine.config'], $conn->getEventManager() );
        };
      }
    }	
    

    I am showing you this because it will be relevant in the next 2 sections.

  • wp-content/db.php

    WordPress allows you to override the default database implementation with your own. Rather than overriding wpdb (this is a class, yet eschews all naming conventions), we are going to instantiate it like normal, but leak the actual database resource in our override file so that we can share the connection with our Doctrine code. This gives us the benefit of using the same connection on both sides. In true WP fashion, $dbh is a protected member and should not be publicly readable, but a loophole in the visibility scheme for the class lets us read it anyway – for backwards compatibility, members that were previously declared with var (whadup PHP4) are readable through the magic methods that decorate the class.

    Here is what we do in the DB drop-in:

    $app = NYT\getApp();
    
    $wpdb = new wpdb(
      $app['db.user'],
      $app['db.password'],
      $app['db.name'],
      $app['db.host']
    );
    
    $app['dbh'] = $wpdb->dbh;
    

    The container allows us to keep our credentials opaque, and allows us to switch out *where* our credentials originate without having to change the code here. The Provider class exposes “services” for our app to use that don’t get initialized until we actually use them. Wouldn’t this be nice throughout 100% of WordPress? The answer is yes, but as long as PHP 5.2 support is a thing, Pimple cannot be used, as it requires closures (the = function () {} bit).

  • WP_Object_Cache

    Doctrine uses a cache as well, so it would be great if we could not only share the database connection, but also share the Memcached instance that is powering the WP_Object_Cache and the internal Doctrine cache.

    Here’s our cache Provider:

    namespace NYT\Cache;
    
    use Pimple\Container;
    use Pimple\ServiceProviderInterface;
    use Doctrine\ORM\Cache\DefaultCacheFactory;
    use Doctrine\ORM\Cache\RegionsConfiguration;
    use Doctrine\Common\Cache\{ArrayCache,ChainCache,MemcachedCache};
    use \Memcached;
    
    class Provider implements ServiceProviderInterface {
      public function register( Container $app ) {
        $app['cache.array'] = function () {
          return new ArrayCache();
        };
    
        $app['memcached.servers'] = function () {
          return [
            [ getenv( 'MEMCACHED_HOST' ), '11211' ],
          ];
        };
    
        $app['memcached'] = function ( $app ) {
          $memcached = new Memcached();
          foreach ( $app['memcached.servers'] as $server ) {
            list( $node, $port ) = $server;
            $memcached->addServer(
              // host
              $node,
              // port
              $port,
              // bucket weight
              1
            );
          }
          return $memcached;
        };
    
        $app['cache.memcached'] = function ( $app ) {
          $cache = new MemcachedCache();
          $cache->setMemcached( $app['memcached'] );
          return $cache;
        };
    
        $app['cache.chain'] = function ( $app ) {
          return new ChainCache( [
            $app['cache.array'],
            $app['cache.memcached']
          ] );
        };
    
        $app['cache.regions.config'] = function () {
          return new RegionsConfiguration();
        };
    
        $app['cache.factory'] = function ( $app ) {
          return new DefaultCacheFactory(
            $app['cache.regions.config'],
            $app['cache.chain']
          );
        };
      }
    }
    
    

    WP_Object_Cache is an idea more than an actual implementation, and with so many engines for it in the wild, the easiest way to ensure we don’t blow up the universe is to have our cache class implement an interface. This guarantees that our method signatures match the reference implementation, and that our class contains all of the expected methods.

    namespace NYT\Cache;
    
    interface WPCacheInterface {
      public function key( $key, string $group = 'default' );
    
      public function get( $id, string $group = 'default' );
      public function set( $id, $data, string $group = 'default', int $expire = 0 );
      public function add( $id, $data, string $group = 'default', int $expire = 0 );
      public function replace( $id, $data, string $group = 'default', int $expire = 0 );
      public function delete( $id, string $group = 'default' );
    
      public function incr( $id, int $n = 1, string $group = 'default' );
      public function decr( $id, int $n = 1, string $group = 'default' );
    
      public function flush();
      public function close();
    
      public function switch_to_blog( int $blog_id );
    
      public function add_global_groups( $groups );
      public function add_non_persistent_groups( $groups );
    }
    

    Here’s our custom object cache that uses the Doctrine cache internals instead of always hitting Memcached directly:

    namespace NYT\Cache;
    
    use NYT\App;
    
    class ObjectCache implements WPCacheInterface {
      protected $memcached;
      protected $cache;
      protected $arrayCache;
    
      protected $globalPrefix;
      protected $sitePrefix;
      protected $tablePrefix;
    
      protected $global_groups = [];
      protected $no_mc_groups = [];
    
      public function __construct( App $app ) {
        $this->memcached = $app['memcached'];
        $this->cache = $app['cache.chain'];
        $this->arrayCache = $app['cache.array'];
    
        $this->tablePrefix = $app['table_prefix'];
        $this->globalPrefix = is_multisite() ? '' : $this->tablePrefix;
        $this->sitePrefix = ( is_multisite() ? $app['blog_id'] : $this->tablePrefix ) . ':';
      }
    
      public function key( $key, string $group = 'default' ): string
      {
        if ( false !== array_search( $group, $this->global_groups ) ) {
          $prefix = $this->globalPrefix;
        } else {
          $prefix = $this->sitePrefix;
        }
        return preg_replace( '/\s+/', '', WP_CACHE_KEY_SALT . "$prefix$group:$key" );
      }
    
      /**
       * @param int|string $id
       * @param string     $group
       * @return mixed
       */
      public function get( $id, string $group = 'default' ) {
        $key = $this->key( $id, $group );
        $value = false;
    
        if ( $this->arrayCache->contains( $key ) && in_array( $group, $this->no_mc_groups ) ) {
          $value = $this->arrayCache->fetch( $key );
        } elseif ( in_array( $group, $this->no_mc_groups ) ) {
          $this->arrayCache->save( $key, $value );
        } else {
          $value = $this->cache->fetch( $key );
        }
    
        return $value;
      }
    
      public function set( $id, $data, string $group = 'default', int $expire = 0 ): bool
      {
        $key = $this->key( $id, $group );
        return $this->cache->save( $key, $data, $expire );
      }
    
      public function add( $id, $data, string $group = 'default', int $expire = 0 ): bool
      {
        $key = $this->key( $id, $group );
        if ( in_array( $group, $this->no_mc_groups ) ) {
          $this->arrayCache->save( $key, $data );
          return true;
        } elseif ( $this->arrayCache->contains( $key ) && false !== $this->arrayCache->fetch( $key ) ) {
          return false;
        }
    
        return $this->cache->save( $key, $data, $expire );
      }
    
      public function replace( $id, $data, string $group = 'default', int $expire = 0 ): bool
      {
        $key = $this->key( $id, $group );
        $result = $this->memcached->replace( $key, $data, $expire );
    
        if ( false !== $result ) {
          $this->arrayCache->save( $key, $data );
        }
    
        return $result;
      }
    
      public function delete( $id, string $group = 'default' ): bool
      {
        $key = $this->key( $id, $group );
        return $this->cache->delete( $key );
      }
    
      public function incr( $id, int $n = 1, string $group = 'default' ): bool
      {
        $key = $this->key( $id, $group );
        $incr = $this->memcached->increment( $key, $n );
        return $this->cache->save( $key, $incr );
      }
    
      public function decr( $id, int $n = 1, string $group = 'default' ): bool
      {
        $key = $this->key( $id, $group );
        $decr = $this->memcached->decrement( $key, $n );
        return $this->cache->save( $key, $decr );
      }
    
      public function flush(): bool
      {
        if ( is_multisite() ) {
          return true;
        }
    
        return $this->cache->flush();
      }
    
      public function close() {
        $this->memcached->quit();
      }
    
      public function switch_to_blog( int $blog_id ) {
        $this->sitePrefix = ( is_multisite() ? $blog_id : $this->tablePrefix ) . ':';
      }
    
      public function add_global_groups( $groups ) {
        if ( ! is_array( $groups ) ) {
          $groups = [ $groups ];
        }
        $this->global_groups = array_merge( $this->global_groups, $groups );
        $this->global_groups = array_unique( $this->global_groups );
      }
    
      public function add_non_persistent_groups( $groups ) {
        if ( ! is_array( $groups ) ) {
          $groups = [ $groups ];
        }
        $this->no_mc_groups = array_merge( $this->no_mc_groups, $groups );
        $this->no_mc_groups = array_unique( $this->no_mc_groups );
      }
    }
    

    object-cache.php is the PHP4-style function set that we all know and love:

    use NYT\Cache\ObjectCache;
    use function NYT\getApp;
    
    function _wp_object_cache() {
      static $cache = null;
      if ( ! $cache ) {
        $app = getApp();
        $cache = new ObjectCache( $app );
      }
      return $cache;
    }
    
    function wp_cache_add( $key, $data, $group = '', $expire = 0 ) {
      return _wp_object_cache()->add( $key, $data, $group, $expire );
    }
    
    function wp_cache_incr( $key, $n = 1, $group = '' ) {
      return _wp_object_cache()->incr( $key, $n, $group );
    }
    
    function wp_cache_decr( $key, $n = 1, $group = '' ) {
      return _wp_object_cache()->decr( $key, $n, $group );
    }
    
    function wp_cache_close() {
      return _wp_object_cache()->close();
    }
    
    function wp_cache_delete( $key, $group = '' ) {
      return _wp_object_cache()->delete( $key, $group );
    }
    
    function wp_cache_flush() {
      return _wp_object_cache()->flush();
    }
    
    function wp_cache_get( $key, $group = '' ) {
      return _wp_object_cache()->get( $key, $group );
    }
    
    function wp_cache_init() {
      global $wp_object_cache;
      $wp_object_cache = _wp_object_cache();
    }
    
    function wp_cache_replace( $key, $data, $group = '', $expire = 0 ) {
      return _wp_object_cache()->replace( $key, $data, $group, $expire );
    }
    
    function wp_cache_set( $key, $data, $group = '', $expire = 0 ) {
      if ( defined( 'WP_INSTALLING' ) ) {
        return _wp_object_cache()->delete( $key, $group );
      }
      return _wp_object_cache()->set( $key, $data, $group, $expire );
    }
    
    function wp_cache_switch_to_blog( $blog_id ) {
      return _wp_object_cache()->switch_to_blog( $blog_id );
    }
    
    function wp_cache_add_global_groups( $groups ) {
      _wp_object_cache()->add_global_groups( $groups );
    }
    
    function wp_cache_add_non_persistent_groups( $groups ) {
      _wp_object_cache()->add_non_persistent_groups( $groups );
    }
    
    
  • All of the above is boilerplate. Yes, it sucks, but once it’s there, you don’t really need to touch it again. I have included it in the post in case anyone wants to try some of this stuff out on their own stack. So, finally:

    Show Me Doctrine! This is an Entity – we will verbosely describe an Asset, but as you can probably guess, we get a kickass API for free by doing this:

    namespace NYT\Media\Entity;
    
    use Doctrine\ORM\Mapping\ClassMetadata as DoctrineMetadata;
    use Symfony\Component\Validator\Mapping\ClassMetadata as ValidatorMetadata;
    use Symfony\Component\Validator\Constraints as Assert;
    
    class Asset {
      /**
       * @var int
       */
      protected $id;
      /**
       * @var int
       */
      protected $assetId;
      /**
       * @var string
       */
      protected $type;
      /**
       * @var string
       */
      protected $slug;
      /**
       * @var string
       */
      protected $modified;
    
      public function getId() {
        return $this->id;
      }
    
      public function getAssetId() {
        return $this->assetId;
      }
    
      public function setAssetId( $assetId ) {
        $this->assetId = $assetId;
      }
    
      public function getType() {
        return $this->type;
      }
    
      public function setType( $type ) {
        $this->type = $type;
      }
    
      public function getSlug() {
        return $this->slug;
      }
    
      public function setSlug( $slug ) {
        $this->slug = $slug;
      }
    
      public function getModified() {
        return $this->modified;
      }
    
      public function setModified( $modified ) {
        $this->modified = $modified;
      }
    
      public static function loadMetadata( DoctrineMetadata $metadata ) {
        $metadata->enableCache( [
          'usage' => $metadata::CACHE_USAGE_NONSTRICT_READ_WRITE,
          'region' => static::class
        ] );
    
        $metadata->setIdGeneratorType( $metadata::GENERATOR_TYPE_IDENTITY );
    
        $metadata->setPrimaryTable( [
          'name' => 'nyt_media'
        ] );
    
        $metadata->mapField( [
          'id' => true,
          'fieldName' => 'id',
          'type' => 'integer',
        ] );
    
        $metadata->mapField( [
          'fieldName' => 'assetId',
          'type' => 'integer',
          'columnName' => 'asset_id',
        ] );
    
        $metadata->mapField( [
          'fieldName' => 'type',
          'type' => 'string',
        ] );
    
        $metadata->mapField( [
          'fieldName' => 'slug',
          'type' => 'string',
        ] );
    
        $metadata->mapField( [
          'fieldName' => 'modified',
          'type' => 'string',
        ] );
      }
    
      public static function loadValidatorMetadata( ValidatorMetadata $metadata ) {
        $metadata->addGetterConstraints( 'assetId', [
          new Assert\NotBlank(),
        ] );
      }
    }
    

Now that we have been exposed to some new nomenclature, and a bunch of frightening code, let’s look at what we now get. First we need a Media Provider:

namespace NYT\Media;

use NYT\{App,LogFactory};
use Pimple\Container;
use Pimple\ServiceProviderInterface;

class Provider implements ServiceProviderInterface {
  public function register( Container $app ) {
    .....

    $app['media.repo.asset'] = function ( $app ) {
      return $app['db']->getRepository( Entity\Asset::class );
    };

    $app['media.repo.post_media'] = function ( $app ) {
      return $app['db']->getRepository( Entity\PostMedia::class );
    };
  }
}

Now we want to use this API to do some powerful stuff:

// lazy-load repository class for media assets
$repo = $app['media.repo.asset'];

// find one item by primary key, will prime cache
$asset = $repo->find( $id );

We can prime the cache whenever we want by requesting assets filtered by params we pass to ->findBy( $params ). I hope you’ve already figured it out, but we didn’t have to write any of this magic code – it is exposed automatically. Any call to ->findBy() will return from the cache or generate a SQL query, cache the resulting IDs by a hash, and normalize the cache by creating an entry for each found item. Subsequent queries have access to the same normalized cache, so queries can be optimized, prime each other, and invalidate entries when mutations occur.

// pull a bunch of ids from somewhere and get a bunch of assets at once
// will prime cache
$repo->findBy( [ 'id' => $ids ] );

// After priming the cache, sanity check items
$assets = array_reduce( $ids, function ( $carry, $id ) use ( $repo ) {
  // all items will be read from the cache
  $asset = $repo->find( $id );
  if ( $asset ) {
    $carry[] = $asset;
  }
  return $carry;
}, [] );

$repo->findBy( [ 'slug' => $oneSlug ] );
$repo->findBy( [ 'slug' => $manySlugs ] );

Here’s an example of a WP_Query-type request with 10 times more horsepower underneath:

// get a page of assets filtered by params
$assets = $repo->findBy(
  $params,
  [ 'id' => $opts['order'] ],
  $opts['perPage'],
  ( $opts['page'] - 1 ) * $opts['perPage']
);

With Doctrine, you never need to write SQL, and you certainly don’t need to write a Cache layer.

Lesson Learned

Let me explain a complex scenario where the cache priming in Doctrine turned a mess of a problem into an easy solution. The media assets used on our network of internationalized sites do not live in WordPress. WordPress has reference to the foreign IDs in the database in a custom table: nyt_media. We also store a few other pieces of identifying data, but only to support the List Table experience in the admin (we write custom List Tables, another weird nightmare of an API). The media items are referenced in posts via a [media] shortcode. Shortcodes have their detractors, but the Shortcode API, IMO, is a great way to store contextually-placed object references in freeform content.

The handler for the media shortcode does A LOT. For each shortcode, we need to request an asset from a web service over HTTP or read it from the cache. We need to read a row in the database to get the id to pass to the web service. We need to read site-specific metadata to get the translations for certain fields that will override the fields returned from the web service.

To review, for each item [media id="8675309"]:
1. Go to the database, get the nyt_media row by primary key (8675309)
2. Request the asset from a web service, using the asset_id returned by the row
3. Get the metadata from nyt_2_mediameta by nyt_media_id (8675309)
4. Don’t actually make network requests in the shortcode handler itself
5. Read all of these values from primed caches

Shortcodes get parsed when 'the_content' filters run. To short-circuit that, we hook into 'template_redirect', which happens before rendering. We match all of the shortcodes in the content for every post in the request – this is more than just the main loop; it could also include Related Content modules and the like. We build a list of all of the unique IDs that will need data.
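
A minimal sketch of that collection step, assuming the [media id="..."] format shown above – the function name is hypothetical:

/**
 * Collect the unique [media id="..."] IDs referenced in a set of posts.
 * Runs from a 'template_redirect' callback, before any 'the_content'
 * filters fire.
 */
function nyt_collect_media_ids( array $posts ): array {
  $ids = [];
  foreach ( $posts as $post ) {
    if ( preg_match_all( '/\[media id="(\d+)"\]/', $post->post_content, $matches ) ) {
      $ids = array_merge( $ids, array_map( 'intval', $matches[1] ) );
    }
  }
  return array_unique( $ids );
}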

Once we have all of the IDs, we look in the cache for existing entries. If we parsed 85 IDs, and 50 are already in the cache, we will only request the missing 35 items. It is absolutely crucial for the HTTP portion that we generate a batch query and that no single requests leak out. We are not going to make requests serially (one at a time), our server would explode and time out. Because we are doing such amazing work so far, and we are so smart about HTTP, we should be fine, right?
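
In code, “only request the missing items” amounts to a getMulti(), an array_diff(), and a single batch request – a sketch, with a hypothetical key scheme and fetch helper:

// Keys for every ID parsed out of the shortcodes
$keys = array_map( function ( $id ) {
  return "media_asset:$id"; // hypothetical key scheme
}, $ids );

// One round trip to Memcached for everything we already have
$cached = $memcached->getMulti( $keys ) ?: [];

// Whatever is left gets fetched in one batch HTTP request
$missing = array_diff( $keys, array_keys( $cached ) );
if ( $missing ) {
  $fetched = fetch_assets_in_batch( $missing ); // hypothetical batch HTTP call
  $memcached->setMulti( $fetched, 6 * HOUR_IN_SECONDS );
  $cached  = array_merge( $cached, $fetched );
}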

Let us open our hymnals to my tweet-storm from Friday:

I assumed all primary key lookups would be fast and free. Not so.

Let’s look at these lines:

// ids might be a huge array
$assets = array_reduce( $ids, function ( $carry, $id ) use ( $repo ) {
  // none of these items will be read from the cache,
  // because we have not primed it yet
  $asset = $repo->find( $id );
  if ( $asset ) {
    $carry[] = $asset;
  }
  return $carry;
}, [] );

// later on in the request
// we request the bylines (authors) for all of the above posts

A ton of posts, each with several media assets. We also store bylines in a separate table. Each post was individually querying for its bylines. The cache was not primed.

When the cache is primed, this request has a chance of being fast-ish, but we are still hitting Memcached a million times (I haven’t gotten around to the batch-requesting cache lookups).

The Solution

Thank you, Doctrine. Once I realized all of the cache misses stampeding my way, I added calls to prime both caches.

// media assets (shortcodes)
// many many single lookups reduced to one
$repo->findBy( [ 'id' => $ids ] );

Remember, WP_Query always makes an expensive query. Doctrine has a query cache, so this call might result in zero database queries. When it does make a query, it will prime the cache in a normalized fashion that all queries can share. Winning.

For all of those bylines, generating queries in Doctrine remains simple. Once I had a list of master IDs:

// eager load bylines for all related posts (there were many of these)
$app['bylines.repo.post_byline']->findBy(
  [
    'postId' => array_unique( $ids ),
    'blogId' => $app['blog_id'],
  ]
);

Conclusion

Performance is hard. Along the way last week, I realized that the default Docker image for PHP-FPM does not enable the opcache by default for PHP 7. Without it on, wp-settings.php can take more than 1 second to load. I turned on the opcache. I will prime my caches from now on. I will not take primary key lookups for granted. I will consistently profile my app. I will use power tools when necessary. I will continue to look outside of the box and at the PHP universe as a whole.

Final note: I don’t even work on PHP that much these days. I have been full throttle Node / React / Relay since the end of last year. All of these same concepts apply to the Node universe. Your network will strangle you. Your code cannot overcome serial network requests. Your cache must be smart. We are humans, and this is harder than it looks.

is_ssl() is inadequate in load-balanced environments

If your web server is serving your site directly – and is not behind Varnish, Fastly, Akamai, an ELB, or some other proxy – this post is not for you. If you are indeed behind a proxy, you may find yourself in an awkward place when your site upgrades to HTTPS. My site, the (failing) New York Times, recently did so.

NYT

Various properties in the NYT ecosystem still run on WordPress. One of those, which will remain unnamed, is behind AWS Elastic Load Balancing, which is behind Fastly. HTTPS terminates at Fastly, which sets some HTTP headers and passes them down to the ELB on port 80. The ELB then sends port 80 traffic to a reverse proxy that figures out that a certain app in our Kubernetes cluster should receive the traffic.

If you notice, all of the backend connections are happening on port 80, which is not where HTTPS traffic goes. How does this work in WordPress then, which expects HTTPS to be on port 443? (long pause) It doesn’t.

What does WordPress do?

WP checks 2 things:
1) whether $_SERVER['HTTPS'] is truthy
2) whether we are on port 443

See here: https://core.trac.wordpress.org/browser/tags/4.7/src/wp-includes/load.php#L960.

When a request is delivered via proxy, all links that are generated dynamically will have a protocol of http:, not https:. This makes sense because nothing that WordPress is checking for resembles an HTTPS request.

Why, god?

Surely, this has to be a problem elsewhere that has been identified and addressed? Correct. This is where the concepts of trusted proxies and proxy HTTP headers come in. The server handling the request on behalf of the backend app can send HTTP headers downstream, alerting an app running on port 80 that it was requested over HTTPS on port 443 originally. These headers look like so:

X-Forwarded-For: OriginatingClientIPAddress, proxy1-IPAddress, proxy2-IPAddress
X-Forwarded-Proto: https
X-Forwarded-Port:  443

X-Forwarded-Proto is useful. Fastly can be configured to set this header when Fastly-SSL is true:

if (req.http.Fastly-SSL) {
  set req.http.X-Forwarded-Proto = "https";
}

The ELB can set its lb_protocol to tcp, instead of http. This will pass the HTTP request through, instead of creating a new HTTP request with proxy headers that may overwrite those from Fastly.

Fixing WordPress

But we still have a major problem: WordPress doesn’t have any way of patching is_ssl() to respect these client headers. The reasons why are too exhausting to debate – see: https://core.trac.wordpress.org/ticket/31288.

tl;dr The advice in the ticket is, basically, write your own code to handle this issue. Write your own code to handle trusted HTTP proxies and patch your servers to overwrite fastcgi params (vars that end up in $_SERVER in PHP). Most humans on earth will want to avoid this.

Next big question: how does a “real” PHP framework handle this? This is where we open our hymnals to Symfony\HttpFoundation: https://github.com/symfony/http-foundation/blob/master/Request.php#L1186. Well, there ya go.

Next problem: how can I patch is_ssl() with HttpFoundation\Request->isSecure()? Once again, you can’t, but you can use WordPress against itself and patch URL generation, which is where we have the most blatant problem (https URL in the browser bar, http links on the page).

What we did

Here’s our custom NYT\isSSL() function:

namespace NYT;

function isSSL() {
  $app = getApp();
  $request = $app['request'];
  if ( empty( $request->getTrustedProxies() ) ) {
    $request->setTrustedProxies( $request->getClientIps() );
  }
  return $request->isSecure();
}

All of our client IPs are in private IP ranges, so we are assuming that our proxy is trusted. This logic could be extended to include a whitelist instead – your call.
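
If you would rather not trust whatever client IPs show up, a whitelist version might look like this – same HttpFoundation Request from the container, and the CIDR ranges are just the RFC 1918 private blocks, so adjust them to your network:

// Trust only the private ranges our proxies actually live in,
// instead of whatever getClientIps() returns.
$request->setTrustedProxies( [
  '10.0.0.0/8',
  '172.16.0.0/12',
  '192.168.0.0/16',
] );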

$app is a DI Container:

namespace NYT;

use Pimple\Container;

class App extends Container {}

We have a Symfony provider for our DI container, excerpted here:

namespace NYT\Symfony;

use Pimple\Container;
use Pimple\ServiceProviderInterface;
use Symfony\Component\HttpFoundation\Request;

class Provider implements ServiceProviderInterface {
  public function register( Container $app ) {
    $app['request'] = function () {
      return Request::createFromGlobals();
    };
  }
}

We also have a Sunrise class to patch multisite:

namespace NYT;

class Sunrise {
  protected $siteHostname;

  public function __construct( App $app ) {
    ...
    add_filter( 'option_home', [ $this, 'filterHost' ] );
    add_filter( 'option_siteurl', [ $this, 'filterHost' ] );
    ...
  }

  public function filterHost( string $host ): string
  {
    $path = parse_url( $host, PHP_URL_PATH );
    $protocol = isSSL() ? 'https://' : 'http://';
    return $protocol . $this->siteHostname . $path;
  }
}

Conclusion

This “works” for front-end traffic to our site, but it does not fix every other place where is_ssl() will return the wrong value. We don’t use WordPress login on the front end (behind Fastly), so we are ignoring the fact that our cookies will still be in the wrong scheme. None of our Fastly stuff is in front of our internal tools, so we don’t require extra HTTP headers to determine the protocol.

If you search the WP codebase for is_ssl(), your stomach might sink and you might be convinced of the need for is_ssl() to be patched universally.

WordCamp Saratoga 2014 Slides

I spoke earlier today at the first-ever WordCamp Saratoga. Saratoga is a great town Upstate; this was my first time visiting. My talk centered on my experiences as a developer, spanning from moving to NYC in December 2006 to the present, where I am a WordPress core committer and developer. The main point: anyone can be involved as much or as little as they want, and I am proof that you can succeed just by sticking with it.

Some related tweets from today’s chat:

[EMBEDDED TWEETS]

First Draft at The New York Times + WordPress


Last week, we launched a new vertical for politics at the Times: First Draft. First Draft is a morning newsletter/politics briefing and a web destination that is updated throughout the day by reporters at the Times’ Washington bureau.

First Draft is powered by WordPress. As I have noted previously, the Times runs on a variety of systems, but WordPress powers all of its blogs. “Blogs” have a bad connotation these days – not everything that has an informal tone and renders posts in reverse-chronological order is necessarily a blog, but anything that does should probably be using WordPress. WordPress has the ability to run any website, which is why it powers 23% of the entire internet. We are always looking for projects to power with WordPress at the Times, and First Draft was a perfect fit.

The features I am going to highlight that First Draft takes advantage of are things we take for granted in WordPress. Because we get so much for free, we can easily and powerfully build products and features.

Day Archives

First Draft is made up of Day archives. The idea is: you open the site in a tab at the beginning of the day, and you receive updates throughout the day when new content is available. A lot of the code that powers the New York Times proper is proprietary PHP. For another system to power First Draft, someone would have to re-invent day archives. And they might not think to also re-invent month and year archives. You may laugh, but those conversations happen. Even if the URL structure is present in another app, your API for retrieving data needs to be able to handle the parsing of these requests.

Day archives in WP “just work” – same with a lot of other types of archives. We think nothing of the fact that pretty URLs for hierarchical taxonomies “just work.” Proprietary frameworks tend to be missing these features out of the box.
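
To wit, linking to a day archive is a one-liner, and the matching pretty URL comes along for free – the output below assumes a standard date-based permalink structure:

// e.g. https://example.com/2014/09/29/
echo get_day_link( 2014, 9, 29 );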

Date Query

When WP_Date_Query landed in WordPress 3.7, it came with little fanfare. “Hey, sounds cool, maybe I’ll use it someday…” It’s one of those features that, when you DO need it, you can’t imagine a time when it didn’t exist. First Draft had some quirky requirements.

There is no “home page” per se; the home page needs to redirect to the most current day … unless it’s the weekend: posts made on Saturdays and Sundays are rolled up into Friday’s stream.

Home page requests are captured in 'parse_request'; we need to find a post that was made on a weekday, Monday through Friday (not the weekend):

$q = $wp->query_vars;

// This is a home request
// send it to the most recent weekday
if ( empty( $q ) ) {
    $q = new WP_Query( array(
        'ignore_sticky_posts' => true,
        'post_type' => 'post',
        'posts_per_page' => 1,
        'date_query' => array(
            'relation' => 'OR',
            array( 'dayofweek' => 2 ),
            array( 'dayofweek' => 3 ),
            array( 'dayofweek' => 4 ),
            array( 'dayofweek' => 5 ),
            array( 'dayofweek' => 6 ),
        )
    ) );

    if ( empty( $q->posts ) ) {
        return;
    }

    $time = strtotime( reset( $q->posts )->post_date );
    $day = date( 'd', $time );
    $monthnum = date( 'm', $time );
    $year = date( 'Y', $time );

    $url = get_day_link( $year, $monthnum, $day );
    wp_redirect( $url );
    exit();
}

If the request is indeed a day archive, we need to make sure we aren’t on Saturday or Sunday:

$vars = array_keys( $q );
$keys = array( 'year', 'day', 'monthnum' );

// this is not a day query
if ( array_diff( $keys, $vars ) ) {
    return;
}

$time = $this->vars_to_time(
    $q['monthnum'],
    $q['day'],
    $q['year']
);
$day = date( 'l', $time );

// Redirect Saturday and Sunday to Friday
$new_time = false;
switch ( $day ) {
case 'Saturday':
    $new_time = strtotime( '-1 day', $time );
    break;
case 'Sunday':
    $new_time = strtotime( '-2 day', $time );
    break;
}

// this is a Saturday/Sunday query, redirect to Friday
if ( $new_time ) {
    $day = date( 'd', $new_time );
    $monthnum = date( 'm', $new_time );
    $year = date( 'Y', $new_time );

    $url = get_day_link( $year, $monthnum, $day );
    wp_redirect( $url );
    exit();
}

In 'pre_get_posts', we need to figure out if we are on a Friday, and subsequently get Saturday and Sunday posts as well, assuming that Friday is not today:

$query->set( 'posts_per_page', -1 );

$time = $this->vars_to_time(
    $query->get( 'monthnum' ),
    $query->get( 'day' ),
    $query->get( 'year' )
);
$day = date( 'l', $time );

if ( 'Friday' === $day ) {
    $before_time = strtotime( '+3 day', $time );

    $query->set( '_day', $query->get( 'day' ) );
    $query->set( '_monthnum', $query->get( 'monthnum' ) );
    $query->set( '_year', $query->get( 'year' ) );
    $query->set( 'day', '' );
    $query->set( 'monthnum', '' );
    $query->set( 'year', '' );

    $query->set( 'date_query', array(
        array(
            'compare' => 'BETWEEN',
            'before' => date( 'Y-m-d', $before_time ),
            'after' => date( 'Y-m-d', $time ),
            'inclusive' => true
        )
    ) );
}


Adjacent day navigation links have to be aware of weekend rollups:

function first_draft_adjacent_day( $which = 'prev' ) {
    global $wp;

    $fd = FirstDraft_Theme::get_instance();

    if ( ! is_day() ) {
        return;
    }

    $archive_date = sprintf(
        '%s-%s-%s',
        $wp->query_vars[ 'year' ],
        $wp->query_vars[ 'monthnum' ],
        $wp->query_vars[ 'day' ]
    );

    if ( $archive_date === date( 'Y-m-d' ) && 'next' === $which ) {
        return;
    }

    $archive_time = strtotime( $archive_date );
    $day_name = date( 'l', $archive_time );

    if ( 'Thursday' === $day_name && 'next' === $which ) {
        $time = strtotime( '+1 day', $archive_time );
        $day = date( 'd', $time );
        $monthnum = date( 'm', $time );
        $year = date( 'Y', $time );

        $before_time = strtotime( '+3 day', $time );

        $ids = new WP_Query( array(
            'ignore_sticky_posts' => true,
            'fields' => 'ids',
            'posts_per_page' => -1,
            'date_query' => array(
                array(
                    'compare' => 'BETWEEN',
                    'before' => date( 'Y-m-d', $before_time ),
                    'after' => date( 'Y-m-d', $time ),
                    'inclusive' => true
                )
            )
        ) );

        if ( empty( $ids->posts ) ) {
            return;
        }

        $count = count( $ids->posts );

    } elseif ( 'Friday' === $day_name && 'next' === $which ) {
        $after_time = strtotime( '+3 days', $archive_time );

        $q = new WP_Query( array(
            'ignore_sticky_posts' => true,
            'post_type' => 'post',
            'posts_per_page' => 1,
            'order' => 'ASC',
            'date_query' => array(
                array(
                    'after' => date( 'Y-m-d', $after_time )
                )
            )
        ) );

        if ( empty( $q->posts ) ) {
            return;
        }

        $date = reset( $q->posts )->post_date;
        $time = strtotime( $date );

        $day = date( 'd', $time );
        $monthnum = date( 'm', $time );
        $year = date( 'Y', $time );

        $ids = new WP_Query( array(
            'ignore_sticky_posts' => true,
            'fields' => 'ids',
            'posts_per_page' => -1,
            'year' => $year,
            'monthnum' => $monthnum,
            'day' => $day
        ) );

        $count = count( $ids->posts );

    } else {
        // find a post with an adjacent date
        $q = new WP_Query( array(
            'ignore_sticky_posts' => true,
            'post_type' => 'post',
            'posts_per_page' => 1,
            'order' => 'prev' === $which ? 'DESC' : 'ASC',
            'date_query' => array(
                'relation' => 'AND',
                array(
                    'prev' === $which ? 'before' : 'after' => array(
                        'year' => $fd->get_query_var( 'year' ),
                        'month' => (int) $fd->get_query_var( 'monthnum' ),
                        'day' => (int) $fd->get_query_var( 'day' )
                    )
                ),
                array(
                    'compare' => '!=',
                    'dayofweek' => 1
                ),
                array(
                    'compare' => '!=',
                    'dayofweek' => 7
                )
            )
        ) );

        if ( empty( $q->posts ) ) {
            return;
        }

        $date = reset( $q->posts )->post_date;
        $time = strtotime( $date );
        $name = date( 'l', $time );

        $day = date( 'd', $time );
        $monthnum = date( 'm', $time );
        $year = date( 'Y', $time );

        if ( 'Friday' === $name ) {
            $before_time = strtotime( '+3 days', $time );

            $ids = new WP_Query( array(
                'ignore_sticky_posts' => true,
                'fields' => 'ids',
                'posts_per_page' => -1,
                'date_query' => array(
                    array(
                        'compare' => 'BETWEEN',
                        'before' => date( 'Y-m-d', $before_time ),
                        'after' => date( 'Y-m-d', $time ),
                        'inclusive' => true
                    )
                )
            ) );
        } else {
            $ids = new WP_Query( array(
                'ignore_sticky_posts' => true,
                'fields' => 'ids',
                'posts_per_page' => -1,
                'year' => $year,
                'monthnum' => $monthnum,
                'day' => $day
            ) );
        }

        $count = count( $ids->posts );
    }

    if ( 'prev' === $which && $time === strtotime( '-1 day' ) ) {
        $text = 'Yesterday';
    } else {
        $text = first_draft_month_format( 'D. M. d', $time );
    }

    $url = get_day_link( $year, $monthnum, $day );
    return compact( 'text', 'url', 'count' );
}

WP-API (the JSON API)

The New York Times is already using the new JSON API. When we needed to provide a stream for live updates, the WP-API was a far better solution (even in its alpha state) than XML-RPC. I implore you to look another developer in the face and tell them you want to build a cool new app, and you want to share data via XML-RPC. I’ve done it, they will not like you.

We needed to make some tweaks – date_query needs to be an allowed query var:

public function __construct() {
    ...
    add_filter( 'json_query_vars', array( $this, 'json_query_vars' ) );
    ...
}

public function json_query_vars( $vars ) {
    $vars[] = 'date_query';
    return $vars;
}

This will allow us to produce URLs like so:


<meta name="live_stream_endpoint" content="http://www.nytimes.com/politics/first-draft/json/posts?filter[posts_per_page]=-1&filter[date_query][0][compare]=BETWEEN&filter[date_query][0][before]=2014-09-29&filter[date_query][0][after]=2014-09-26&filter[date_query][0][inclusive]=true"/>

Good times.

oEmbed

We take for granted: oEmbed is magic. Our reporters wanted to be able to “quick-publish” everything. Done and done. oEmbed also has the power of mostly being responsive, with minimal tweaks needed to ensure this.

How many proprietary systems have an oEmbed system like this? Probably none. Being able to paste a URL on a line by itself and, voilà, get a YouTube video or Tweet is pretty insane. Inline TinyMCE previews while editing are even crazier.

Conclusion

There isn’t a lot of excitement about new “blogs” at the Times, but that distaste should not be confused with distaste for WordPress as a platform. WordPress is still a powerful tool, and is often a better solution than reinventing existing technologies in a proprietary system. First Draft is proof of that.

WordPress 4.0: Under the Hood

Today was the launch of WordPress 4.0, led by my friend, Helen Hou-Sandí. Helen was a great lead and easy to collaborate with. She had her hands in everything – I was hiding in the shadows dealing with tickets and architecture.

It seems like just 4.5 months ago I was celebrating the release of WordPress 3.9, probably because I was … “Some people, when they ship code, this is how they celebrate”:

[OLD INSTAGRAM POST]

WordPress 4.0 has an ominous-sounding name, but it was really just like any other release. I had one secret ambition: sweep out as many cobwebs as I could from the codebase and make some changes for the future. LOTS of people contribute to WordPress, so me committing changes to WordPress isn’t a solo tour, but there were a few things I was singularly focused on architecturally. I also contributed to 2 of the banner features in the release.

Cleanup from 3.9

  • Fixed RTL for playlists
  • Fixed <track>s when used as the body of a video shortcode
  • You can now upload .dfxp and .srt files
  • Added code to allow the loop attribute to work when MediaElement is playing your audio/video with Flash
  • MediaElement players now have the flat aesthetic and new official colors
  • You can now override all MediaElement instance settings, instead of just pluginPath
  • Brought the list of upload_filetypes for multisite into modernity, based on .com upgrades and the supported extensions for audio and video
  • In the media modal, you can now set artist and album for your audio files inline
  • Gallery JS defaults are easier to override now: https://core.trac.wordpress.org/changeset/29284

Scrutinizer

Scrutinizer CI is a tool that analyzes your codebase for mistakes, sloppiness, complexity, duplication, and test coverage. If you work at a big fancy company that employs continuous delivery methodologies, you may already have tools and instrumentation running on your code each time you commit, or scheduled throughout the day. I set up Scrutinizer to run every time I updated my fork of WordPress on GitHub.

Scrutinizer is especially great at identifying unused/dead code cruft. My first tornado of commits in 4.0 were removing dead code all over WordPress core. Start here: https://core.trac.wordpress.org/changeset/28263

A good example of dead code: https://core.trac.wordpress.org/changeset/28292

extract()

extract() sucks. Here is a post about it: https://josephscott.org/archives/2009/02/i-dont-like-phps-extract-function/

Long story short: I eliminated all* of them in core. A good example: https://core.trac.wordpress.org/changeset/28469

* There is one left: here
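
To make the objection concrete, here is a minimal sketch (with made-up function names, not code from core) of why extract() hurts: variables appear out of thin air, and tools like Scrutinizer can’t follow them.

// With extract(), $type and $size materialize invisibly from $args.
function media_markup_with_extract( $args ) {
    extract( $args );
    return "$type-$size";
}

// Without extract(), every variable has an obvious origin and a default.
function media_markup_explicit( $args ) {
    $type = isset( $args['type'] ) ? $args['type'] : 'audio';
    $size = isset( $args['size'] ) ? $args['size'] : 'full';
    return "$type-$size";
}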

Hack and HHVM

Facebook has done a lot of work to make PHP magically fast through HipHop, Hack, and HHVM. HHVM ships with a tool called hackificator that will convert your .php files to .hh files, unless you have some code that doesn’t jive with Hack’s stricter requirements.

While I see no path forward to be 100% compatible with the requirements for .hh files, we can get close. So I combed the hackificator output for things we COULD change and did.

Access Modifiers

WordPress has always dipped its toes in the OOP waters, but for a long time didn’t go all the way, because it had to support aspects of PHP4 (there are still some files that need to be PHP4-compatible for install). For a lot of the classes in WordPress, now was a great time to upgrade them to PHP5 and use proper access modifiers (Hack also requires them).
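
As an illustration, here is a before/after sketch with a made-up class (not an actual core class): the PHP4 style uses var and a constructor named after the class; the PHP5 style uses explicit visibility.

// PHP4-style: "var" properties, constructor named after the class,
// no visibility keywords anywhere.
class Example_List_Table_Old {
    var $items;

    function Example_List_Table_Old() {
        $this->items = array();
    }
}

// PHP5-style: explicit access modifiers and __construct().
class Example_List_Table_New {
    public $items = array();
    protected $screen;

    public function __construct( $screen = null ) {
        $this->screen = $screen;
    }

    private function reset() {
        $this->items = array();
    }
}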

I broke list tables several times along the way, but we cleaned them up and eventually got there.

wp_insert_post()/wp_insert_attachment() are now one

These two functions were very similar, but it was hard to see where they diverged, ESPECIALLY because of extract(). Once I removed the extract() code from them, it was easier to annotate their differences, however esoteric.

Funny timeline:

wp_handle_upload() and wp_handle_sideload() are now one

Similar to the above, these functions were almost identical. They diverged in esoteric ways. A new function exists, _wp_handle_upload(), that they both now wrap.

Done here: https://core.trac.wordpress.org/changeset/29209

wp_script_is() now recurses properly

Dependencies are a tree, not a flat list, so checking whether a script is enqueued should recurse through its dependencies and their dependencies. Previously, only the immediate dependencies were checked. I added a new method, recurse_deps(), to WP_Dependencies.
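
The gist of the recursion, as a standalone sketch rather than the actual WP_Dependencies code:

// Does $handle depend on $target anywhere in its dependency tree?
function script_depends_on( $handle, $target, array $registered ) {
    if ( empty( $registered[ $handle ] ) ) {
        return false;
    }
    foreach ( $registered[ $handle ] as $dep ) {
        if ( $dep === $target || script_depends_on( $dep, $target, $registered ) ) {
            return true;
        }
    }
    return false;
}

// 'my-plugin' depends on 'backbone', which depends on 'underscore' and 'jquery'.
$registered = array(
    'my-plugin'  => array( 'backbone' ),
    'backbone'   => array( 'underscore', 'jquery' ),
    'underscore' => array(),
    'jquery'     => array(),
);

var_dump( script_depends_on( 'my-plugin', 'jquery', $registered ) ); // bool(true)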

https://core.trac.wordpress.org/changeset/29252
And oops: https://core.trac.wordpress.org/changeset/29253

ORDER BY

Commit: https://core.trac.wordpress.org/changeset/29027
Make/Core post about it: A more powerful ORDER BY in WordPress 4.0
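
The short version: 'orderby' now accepts an array of field => direction pairs instead of a single string. Something like:

// Order by menu_order ascending, then by title descending.
$query = new WP_Query( array(
    'post_type' => 'post',
    'orderby'   => array(
        'menu_order' => 'ASC',
        'title'      => 'DESC',
    ),
) );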

LIKE escape sanity

I helped @miqrogroove shepherd this: https://core.trac.wordpress.org/changeset/28711 and https://core.trac.wordpress.org/changeset/28712

Make/Core post: like_escape() is Deprecated in WordPress 4.0
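
In practice, the replacement pattern looks roughly like this ($search_term stands in for whatever the user typed):

global $wpdb;

// Escape the term for use inside a LIKE pattern, then let
// prepare() handle the normal SQL escaping.
$like = '%' . $wpdb->esc_like( $search_term ) . '%';
$ids  = $wpdb->get_col( $wpdb->prepare(
    "SELECT ID FROM {$wpdb->posts} WHERE post_title LIKE %s",
    $like
) );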

wptexturize() overhaul

This was all @miqrogroove, I just supported him. He crushed a lot of tickets and made the function a whole lot faster. We need people like him to dig deep in areas like this.

Taxonomy Roadmap

Potential roadmap for taxonomy meta and post relationships: there is one
We cleared one of the major tickets: #17689. Here: https://core.trac.wordpress.org/changeset/28733

Variable variables

Variable variables are weird and are disallowed by Hack. They allow you to do this:

$woo = 'hoo';
$$woo = 'yeah';
echo $hoo;
// yeah!

I removed all(?) of these from core … some 3rd-party libraries might still contain them.

Started that tornado here: https://core.trac.wordpress.org/changeset/28734
One of my favorite commits: https://core.trac.wordpress.org/changeset/28743

Unit Tests

I did a lot of cleanup to Unit Tests along the way. Unit tests are our only way to stay sane while committing mountains of new code to WordPress.

Some highlights:

Embeds

In 3.9, I worked with Gregory Cornelius and Andrew Ozz to implement TinyMCE previews for audio, video, playlists, and galleries. We didn’t get to previews for 3rd-party embeds (like YouTube) in time, so I threw them into my Audio/Video Bonus Pack plugin as an extra feature. Once 4.0 started, Janneke Van Dorpe suggested we throw that code into core, so we did. From there, she and Andrew Ozz did many iterations of making embed previews of YouTube, Twitter, and the like possible.

Other improvements I worked on:

  • When using Insert From URL in the media modal, your embeds will appear as a preview inline.
  • Added oEmbed support for Issuu, Mixcloud, Animoto, and YouTube Playlist URLs.
  • You can use a src attribute for embed shortcodes now, instead of using the shortcode’s body (still works, but you can’t use both at the same time)
  • I added an embed handler for YouTube URLs like: http://youtube.com/embed/acb1233 (the YouTube iframe embed URLs) – those are now converted into proper URLs like: http://youtube.com/watch?v=abc1233
  • This is fucking insane, I forgot I did this – if you select a poster image for a video shortcode in the media modal, and the video is confirmed to be an attachment that doesn’t have a poster image (videos are URLs so that external URLs don’t have to be attachments), the association will be made in the background via AJAX: https://core.trac.wordpress.org/changeset/29029

While we were working on embeds, we/I decided to COMPLETELY CHANGE MCE VIEWS FOR AUDIO/VIDEO:

https://core.trac.wordpress.org/changeset/29178
Wins: https://core.trac.wordpress.org/changeset/29179

In 3.9, we were checking the browser to see which audio/video files could be played natively and only showed those in the TinyMCE previews. Now we show all of them. This is due to the great work that @avryl and @azaozz did implementing iframe sandboxes for embeds. I took their work and ran with it – and completely changed the way audio/video/playlists render in the editor. The advantage is that the code that generates the shortcode output only has to exist in PHP; it needs no JS equivalent.

Media Grid

@ericandrewlewis drove this train for most of the release. I came in towards beta to make sure all of the code churn was up to Koop-like standards and dealt with some esoteric issues as they arose. It was great having my co-worker dive so deep into media – another asset for a core team that sorely needs people with knowledge of that domain.

As an example of the things I worked on – I stayed up til 6am one night chatting with Koop to figure out how to attack media-models.js. Once I filled my brain, I got to places like:
https://core.trac.wordpress.org/changeset/29490

So many contributors

There are too many people, features, and commits to mention in one blog post. This is just a journal of my time spent. You all rocked. Let’s keep going.

WordPress 4.0 “Benny”

WordPress 3.9 + Audio/Video

Previous posts on Make/Core:
Audio / Video 2.0 Update – Media Modal 
Audio / Video 2.0 Update – Playlists 
Audio / Video 2.0 Update 
Audio / Video 2.0 – codename “Disco Fries”

If you remember WordPress 3.6, we were scrambling to make Post Formats work. They did not, so they were dropped. What remained in the aftermath was rudimentary support for audio and video. You could display one audio file at a time and/or one video file at a time using a shortcode. Good, but not good enough. WordPress 3.9 has a TON of improvements, several related to visual editing, media, and a second pass at defining what audio and video can do in WordPress.

HTML5 audio and video on the web are still the Wild Wild West; I viewed 3.9 as a way to help tame the beast.

Media code from 3.5

Koop wrote an astonishing amount of beautiful Backbone-driven code in WordPress 3.5 related to overhauling and rethinking Media in WordPress. Gregory Cornelius, Andrew Ozz, and I spent the better part of 3.9 swimming around it and its relationship to TinyMCE. While there isn’t a ton of written documentation for media, I did fall on the sword and added JSDoc blocks to every class in media-views, media-model, and media-editor JS files. It is now possible to follow the chain of inheritance for every class, which is 7 levels deep at times. We’ve also built some new features, and learned how to interact with these existing APIs.

TinyMCE Views – Visual previews of your media

TinyMCE is the visual editor in WordPress. Behind the scenes, the visual editor is an iframe that contains markup. In 3.9, gcorne and azaozz did the mind-bending work of making it easier to render “MCE views” – content that has a connection to the world outside the visual editor’s iframe – via a TinyMCE plugin and mce-view.js. A lot of the work I did in building previews for audio and video inside of the editor was implementing the features and APIs they created. gcorne showed us the possibilities by making galleries appear in the visual editor. Everything else followed his lead.

Themes now have proper CSS

We went back in time to the last 5 default themes and added the basic styles necessary for audio and video to behave in a unified way. Meaning: if you switch from the TwentyEleven theme to TwentyFourteen, videos should always have the same aspect ratio. The same goes for the admin – the video should always appear with predictable dimensions.

<audio> and <video> are now responsive

Because of the above CSS changes, audio and video are responsive throughout WordPress and on mobile. Win.

Attachment Pages

If I asked you the question – do players automatically appear for audio and video files on their respective attachment pages? You might answer, of course they do! … they did not, they do now!

Chromeless YouTube

MediaElement supports the playback of YouTube videos without the look and feel of a YouTube player. This is great because the style of the video player will match the style of your other players.

MediaElement updated

MediaElement.js has been updated to the latest and greatest version. HUGE thanks to John Dyer for working so closely with us and accepting pull requests when we badger him on random Saturday afternoons.

Playlists

Turning mp3 URLs into players is awesome and happens automagically in WordPress now. But what if you are sharing an entire album of your band’s tunes, or sharing your music recital on your website? Rendering 10 separate players is visually weird. We already have “galleries” for images, can we reuse the admin UI for those and make it work for playlists of audio or video files? We can (after some sweat and tears), so we did. I remember staying up all night in 2006 trying to figure out how to put my band’s music on our website. If even a niche user base of musicians are able to publish their music because of this feature, it will have been worth it.
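
The resulting shortcodes look something like this (the IDs are attachment IDs from your media library):

// an audio playlist built from three attachments
[playlist ids="123,456,789"]

// a video playlist
[playlist type="video" ids="321,654"]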

Manage Shortcodes

Your audio and video shortcodes now have live previews in the editor, but that’s not all… you can now click the preview to pop open the media modal and edit your content. Once there you can:

  • Add alternate playback formats for maximum native HTML5 playback
  • Add a poster image for your video, if it wasn’t done automatically on upload
  • Add subtitles to your video

It’s pretty slick.

Core Changes

Some other cool little treats:

  • Featured Image is turned on for attachment:audio and attachment:video = when you upload your audio and video files, if the files contain cover images, they are automatically slurped for you, uploaded, and associated as the featured image for the media file. Meaning: you will automatically have a video poster image, or your audio playlist will display the album cover along with the track.
  • Images in ID3 tags are stored via hash to prevent re-uploading = if you upload 10 tracks from an album that all have the same album cover, only one cover will be uploaded and associated with all of the tracks.
  • Artist and Album are editable = your media item’s title is always used as the “song title,” but now, if your item did not contain metadata for artist and album, you can set it on the Edit Media screen.
  • The old “crystal” icon set for media items has been updated and MP6ified. They look WAY better.

Have fun with WordPress 3.9 🙂

xoxo

Rethinking Blogs at The New York Times

The New York Times

See Also: The Technology Behind the NYTimes.com Redesign

The Blogs at the Times have always run on WordPress. The New York Times, as an ecosystem, does not run on one platform or one technology. It runs on several. There are over 150 developers at the Times split across numerous teams: Web Products, Search, Blogs, iOS, Android, Mobile Web, Crosswords, Ads, BI, CMS, Video, APIs, Interactive News, and the list goes on. While PHP is frequently used, Elastic Search and Node make an appearance, and the Newspaper CMS, “Scoop,” is written in Java. Interactive likes Ruby/Rails.

The “redesign,” which launched last week, was really a re-platform: a statement of where Times development needs to head, and a rethinking of our development processes and tools. The customer-facing redesign consisted of 2 main pieces:

  • a new Article “app” that runs inside of our new platform
  • the “reskinning” of our homepage and section fronts

What is launching today is the re-platform of Blogs from a WordPress-only service to Blogs via WordPress as an app inside of our new platform.

The Redesign

Most people who use the internet have visited an NYTimes article page –

the old design:
http://www.nytimes.com/2013/12/29/arts/music/lordes-royals-is-class-conscious.html

Lorde

the new:
http://www.nytimes.com/2014/01/15/arts/music/jay-z-offers-a-view-of-his-legacy-at-barclays-center.html?ref=music

Jay-Z at Barclay's

What is not immediately obvious to the reader is how all of this works behind the scenes.

How Things Used to Work

For many years at the Times, article pages were generated into static HTML files when published. This was good and bad. Good because: static files are lightning fast to serve. Bad because: those files point at static assets (CSS, JavaScript files) that can only change when the pages are re-generated and re-published. One way around this was to load a CSS file that had a bunch of @import statements (eek), with a similar loading scheme for JS (even worse).

Blogs used to load like any custom WordPress project:

  • configured as a Multisite install (amassing ~200 blogs over time)
  • lots of custom plugins and widgets
  • custom themes + a few child themes

A lot of front-end developers also write PHP and vice versa. At the Times, in many instances, the team working on the Blogs “theme” was not the same team working on the CSS/JS. So, we would have different Subversion repos for global CSS, blogs CSS; different repos for global JS, blogs JS; and a different repo for WordPress proper. When I first started working at the Times, I had to create a symlink farm of 7 different repos that would represent all of the JS and CSS that blogs were using. Good times.

On top of that, all blogs would inherit NYTimes “global” styles and scripts. A theme would end up inheriting global styles for the whole project, global styles for all blogs, and then sometimes, a specific stylesheet for the individual blog. For CSS, this would sometimes result in 40-50 (sometimes 80!) stylesheets loading. Not good.

WordPress would load jQuery, Prototype, and Scriptaculous with every request (I’m pretty sure some flavor of jQuery UI was in there too). As a result, every module within the page would just assume that our flavor of jQuery global variable NYTD.jQuery was available anywhere, and would assume that Prototype.js code could be called at will. (Spoiler alert: that was a bad idea.)

Blogs do not use native WP comments. There is an entire service at the Times called CRNR (Comments, Ratings, and Reviews) that has its own user management, taxonomy management, and community moderation tools. Modules like “CRNR” would provide us with code to “drop onto the page.” Sometimes this code included its own copy of jQuery, different version and all.

Widgets on blogs could be tightly coupled with the WordPress codebase, or they could be some code that was pasted into a freeform textarea from some other team. The Interactive News team at the Times would sometimes supply us code to “drop into the C-Column” – translation: add a widget to the sidebar. These “interactives” would sometimes include their own copy of jQuery (what version…? who knows!).

How Things Work Now

The new platform has 2 main technologies at its center: the homegrown Madison Framework (PHP as MVC), and Grunt, the popular task runner that runs on Node. Our NYT codebase is a collection of several Git repos that get built into apps via Grunt and deployed by RPMs/Puppet. Any app that wants to live inside of the new shell (and inherit the masthead, “ribbon,” and navigation automatically) must register its existence. After it does, it can “inherit” from other projects. I’ll explain.

Foundation

Foundation is the base application. Foundation contains the Madison PHP framework, the Magnum CSS/Responsive framework, and our base JavaScript framework. Our CSS is no longer a billion disparate files – it is LESS manifests, with plenty of custom mixins, that compile into a few CSS files. At the heart of our JS approach is RequireJS, Hammer, SockJS and Backbone (authored by Times alum Jeremy Ashkenas).

Madison is an MVC framework that utilizes the newest and shiniest OO features of PHP and is built around 2 main software design patterns: the Service Locator pattern (via Pimple) and Dependency Injection. The main “front” of any request to the new stack goes through Foundation, as it contains the main controller files for the framework. Apps register their main route via Apache rewrite rules; Madison knows which app to launch by convention, based on the code that was deployed via the Grunt build.
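
For the unfamiliar, the Service Locator half looks roughly like this with Pimple – a generic, Pimple 1.x-style sketch, not actual Madison code, and Posts_Service is a made-up class:

// Register service definitions on the container...
$container = new Pimple();

$container['db'] = function ( $c ) {
    return new PDO( 'mysql:host=localhost;dbname=blogs', 'user', 'pass' );
};

$container['posts'] = function ( $c ) {
    return new Posts_Service( $c['db'] ); // hypothetical service class
};

// ...and resolve them wherever they are needed.
$posts = $container['posts'];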

Shared

Shared is a collection of reusable modules. Write a module once, and then allow apps to include it at will. Shared is where Madison’s “base” modules exist. Modules are just PHP template fragments which can include other PHP templates. Think of a “Page” module like so:

Page
- load Top module
- load Content module
- load Bottom module

Top (included in Page)
- load Styles module
- load Scripts module
- load Meta module

...

In your app code, if you try to embed a module by name, and it isn’t in your app’s codebase, the framework will automatically look for it in Shared. This is similar to how parent and child themes work in WordPress. This means: if you want to use ALL of the default modules, only overriding a few, you need to only specify the overriding modules in your app. Let’s say the main content of the page is a module called “PageContent/Thing” – you would include the following in your app to override what is displayed:

// page layout
$layout = array(
    'type' => 'Page',
    'name' => 'Page',
    'modules' => array(
        array(
            'type' => 'PageContent',
            'name' => 'Thing'
        ),
        .....
    )
);

// will first look in
nyt5-app-blogs/Modules/PageContent/Thing.tpl.php
// if it doesn't find it
nyt5-shared/PageContent/php/src/Thing.tpl.php

So there’s a lot happening, before we even get to our Blogs app, and we haven’t even really mentioned WordPress yet!

App-specific

Each app contains a build.json file that explains how to turn our app into a codebase that can be deployed as an application. Each app might also have the following folder structure:

js/
js/src
js/tests
less/
php/
php/src
php/tests

Our build.json file lists our LESS manifests (the files to build via Grunt) and our JS manifests (the files to parse using r.js/Require). Our php/src directory contains the following crucial pieces:

Module/ <-- contains our Madison override templates
WordPress/ <-- contains our entire WP codebase
ApplicationConfiguration.php <-- optional configuration
ApplicationController.php <-- the main Controller for our app
wp-bootstrap.php <-- loads in global scope to load/parse WordPress

The wp-bootstrap.php file is the most interesting portion of our WordPress app, and where we do the most unconventional work to get these 2 disparate frameworks to work together. Before we even load our app in Madison proper, we have already loaded all of WordPress in an output buffer and stored the result. We can then access that result in our Madison code without any knowledge of WordPress. Alternately, we can use any WP code inside of Madison. Madison eschews procedural programming and enforces namespace-ing for all classes, so collisions haven’t happened (yet?).
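
A stripped-down sketch of that idea – not the actual wp-bootstrap.php, and the global variable name here is made up:

// wp-bootstrap.php (sketch): run WordPress before Madison boots.
ob_start();

// Load WordPress; the theme renders only the page "content",
// not a complete HTML document.
define( 'WP_USE_THEMES', true );
require __DIR__ . '/WordPress/wp-blog-header.php';

// Capture the rendered markup so Madison modules can place it later.
$GLOBALS['nyt_wp_rendered_content'] = ob_get_clean();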

Because we are turning WP content into Module content, we no longer want our themes to produce complete HTML documents: we only want to produce the “content” of the page. Our Madison page layout gives us a wrapper and loads our app-specific scripts and styles. We have enough opportunities to override default template stubs to inject Blog-specific content where necessary.

In the previous incarnation of Blogs, we had to include tons of global scripts and styles. Using RequireJS, which leans on Dependency Injection, we ask for jQuery in any module and ensure that it only loads once. If we in fact do need a separate version somewhere, we can be assured that we aren’t stomping global scope, since we aren’t relying on global scope.

Using LESS imports instead of CSS file imports, we can modularize our code (even using 80 files if we want!) and combine/minify on build.

Loading WordPress in our new unconventional way lets us work with other teams and other code seamlessly. I don’t need to include the masthead/navigation markup in my theme. I don’t even need to know how it works. We can focus on making blogs work, and inherit the rest.

What I Did

For the first few months of the project, I was able to work in isolation and move the Blogs codebase from SVN to Git. I was happy that we were moving the CSS to LESS and the JS to Require/Backbone, so I took all of the old files and converted them into those modern frameworks. The Times had 3 themes that I was given free rein to rewrite and squish into one lighter, more flexible theme. Since the Times has been using WordPress since 2005, there was code from the dark ages of the internet that I was able to look at with fresh eyes and transition. Once a lot of the brute force initial work was done, I worked with a talented team of people to integrate some of the Shared components and make sure we had stylistic parity between the new Article pages and Blogs.

To see some examples in action, a sampling:

Dealbook

Bits

Well

The Lede

City Room

ArtsBeat

Public Editor’s Journal

Paul Krugman

WordPress: Autowiring Custom Post Type Metadata

The New York Times Co.

Write Less Code

I recently did a project at the New York Times, a corporate website that was highly dynamic. A lot of the front-end work was done ahead of time with dummy content. I was brought in at the end to rewrite the core logic and set up all of the dynamic pieces. EVERYTHING had to be dynamic. There were several times that I had to quickly replace a dummy HTML list with content from a collection of objects belonging to a custom post type. I didn’t want to re-invent the wheel every time I added a new post type. I wanted to write one register_post_type() call with a helper as the value for 'register_meta_box_cb'.

Here’s How

Custom post types in WordPress are really object types, much like a blog post is an instance of the Post object type. When you register a custom post type, you are really registering a new “thing” that isn’t really a “post” – it’s something else. Once you have registered this thing, you will probably use the same API as Post to interact with your data: WordPress core functions to retrieve and save your data.

By far the most annoying things about custom post types are how much code it takes to register one and how much duplicate code it takes to save arbitrary metadata. An example:

I want to create a new object called “nyt_partner” – I am going to use the title, the content, and featured image, but I also need to associate some arbitrary data with each instance of “nyt_partner”: phone number, address, twitter account, etc. I am only going to read the data (not search for it), so object (post) metadata works just fine.

Here is some example code for how one currently registers the post type, then registers the metabox to display a form for new fields, and then saves the data when the post is saved:
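
A typical version of that boilerplate – with illustrative function names, saving a single twitter field (a real version would also verify a nonce) – looks something like this:

function nyt_register_partner() {
    register_post_type( 'nyt_partner', array(
        'public'               => true,
        'label'                => 'Partners',
        'supports'             => array( 'title', 'editor', 'thumbnail' ),
        'register_meta_box_cb' => 'nyt_partner_meta_boxes',
    ) );
}
add_action( 'init', 'nyt_register_partner' );

function nyt_partner_meta_boxes() {
    add_meta_box( 'nyt-partner-twitter', 'Twitter', 'nyt_partner_twitter_box', 'nyt_partner', 'normal' );
}

function nyt_partner_twitter_box( $post ) {
    $twitter = get_post_meta( $post->ID, 'twitter', true );
    printf(
        '<label>Twitter <input type="text" name="twitter" value="%s" /></label>',
        esc_attr( $twitter )
    );
}

function nyt_partner_save( $post_id ) {
    if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE ) {
        return;
    }
    if ( isset( $_POST['twitter'] ) ) {
        update_post_meta( $post_id, 'twitter', sanitize_text_field( $_POST['twitter'] ) );
    }
}
add_action( 'save_post', 'nyt_partner_save' );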

All that code, and all we are doing is saving a twitter field. Gross. What if our site is very custom and we are using objects all over the place? What if everything on the site needs to be editable? This code is going to bloat almost immediately, so we need to find more ways to reuse.

The first thing we need to do is use a class to contain our logic, and ditch all of the procedural code from the last example. We are going to seriously optimize this code later, but here it is as a class:

This object is better, but it can still bloat very quickly. For each post type that has custom data, you have to add a meta box in one callback, and then register the UI in another. Every time your new object is saved, you have to run it through your own save logic, which adds even more bloat. For objects that are really complex, you actually might want to create a class per type, but most of the time, the data you are saving are attributes or simple fields. It would be great if we could create a few methods to autowire the creation and saving of a field.

In the next example, we will use closures and parent scope to dramatically decrease the necessary code to register a field:

For the time being, if I need to add another post type that has one field, I can just add these lines and be done with it:

All of the magic is rolled up into the NYT_Post_Types::create_field_box() method. So, if you need to add a bunch of post types at once that only save a field, you only have to edit the init method. This works if I have only one field. If I have several, I need to add a method:

To specify the fields while registering the post type:

Another piece of magic that we wired up – you can autowire a save method for a post type (that does not use autowiring for the UI) by adding a save_{post_type} method to your class. If you create a post type called balloon, all you have to do is add a method called save_balloon to your class. Our one registered save_post callback is smart enough to call it. This is great because you don’t have to duplicate the logic to determine if the post is eligible for save.

The autowiring methods (create_field_box() and create_fields_box()) dynamically create class properties with closures, but first look for an existing method. You can’t have both. Closures actually create properties on the class, not new class methods. This makes sense because you are really decorating your object with instances of the Closure class, which is what closures are. Closures should look very familiar to you if you write JavaScript with jQuery.
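
A bare-bones sketch of the idea – a hypothetical class, not the exact NYT_Post_Types implementation: generate a closure, hang it on the instance as a property, and fall back to it only when no real method exists.

class Post_Type_Autowire_Sketch {
    public function create_field_box( $post_type, $field ) {
        $save = "save_{$post_type}";

        // Respect a hand-written method if one exists; otherwise autowire a closure.
        if ( ! method_exists( $this, $save ) ) {
            // Assigning a closure like this creates a *property*, not a method.
            $this->$save = function ( $post_id ) use ( $field ) {
                if ( isset( $_POST[ $field ] ) ) {
                    update_post_meta( $post_id, $field, sanitize_text_field( $_POST[ $field ] ) );
                }
            };
        }
    }

    // The single registered 'save_post' callback decides which handler to call:
    // add_action( 'save_post', array( $instance, 'save_post' ), 10, 2 );
    public function save_post( $post_id, $post ) {
        $save = "save_{$post->post_type}";

        if ( method_exists( $this, $save ) ) {
            $this->$save( $post_id );                 // hand-written method
        } elseif ( isset( $this->$save ) ) {
            call_user_func( $this->$save, $post_id ); // autowired closure
        }
    }
}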

Some of your custom post types will need unique method callbacks for 'register_meta_box_cb', but my bet is that MOST of them can share logic similar to what I have demonstrated above. At eMusic, we had 56 custom post types powering various parts of the site. I used similar techniques to cut down the amount of duplicated logic across the codebase.

You may not need to use these techniques if your site is simple. And note: you can’t use closures in any version of PHP before 5.3.

WordPress 3.6 + Audio/Video

I have already written once about the new support for Audio / Video in WordPress 3.6:
http://make.wordpress.org/core/2013/04/08/audio-video-support-in-core/

I also spoke about the new features in my Code Poet interview:
http://build.codepoet.com/2013/07/18/scott-taylor-interview/

I wanted to use this space to give the users some information about the new code and how to take advantage of it.

Upload Limits

A lot of Apache / PHP installs have an arbitrarily low file upload size limit set (usually 2MB). An average .mp3 file is at least 3-5MB. Video can be way bigger. If you are on a shared host that doesn’t allow uploads over a certain size, you may have to get more creative about where your audio and video files are stored. The only difference will be not having them in your media library. You can use all of the new a/v features with remote files.

If you have access to your own server and can tweak your configs, you can change your settings to allow larger uploads. There are a bunch of ways to override the limit – either by altering your php.ini file and restarting Apache, or by setting PHP values in .htaccess. The best way is to change your .ini settings:

# find the location of your php.ini file:
php -i | grep php.ini

; in php.ini, the default is often something small like:
upload_max_filesize = 2M

; change this to something bigger
upload_max_filesize = 2000M

; also change this value
post_max_size = 2000M

You may need to change permissions of php.ini to make it writable.

Mime-Types

Paul Irish’s HTML5 Boilerplate has all of the settings you need to serve the mime types used by the HTML5 <audio> and <video> tags: the file

The most useful lines:

<IfModule mod_mime.c>
  # Audio
    AddType audio/mp4              m4a f4a f4b
    AddType audio/ogg              oga ogg

  # Video
    AddType video/mp4              mp4 m4v f4v f4p
    AddType video/ogg              ogv
    AddType video/webm             webm
    AddType video/x-flv            flv
</IfModule>

If you don’t do this, your server may send the files with a mime-type of application/octet-stream. They might play, might not, but it’ll probably get weird.

Shortcodes

The simplest way to use the new media shortcodes is to not write them by hand. When you click “Insert Into Post” from Media Library for an audio or video file now, the shortcode will be written into the editor for you. Probably don’t mess with it. If you are a video savant and have multiple copies of your video or audio in all of the HTML5 formats, you can use them all, but you will have to write in the extra attributes by hand.

Examples of shortcodes inserted by media library:

// by default uses the "src" attribute
[audio src="http://tunez.net/cool-song-bro.mp3"]

// multiple files for maximum HTML5 playback
[audio mp3="http://tunez.net/cool-song-bro.mp3"
    ogg="http://tunez.net/cool-song-bro.ogg"]

The [audio] shortcode also allows the following attributes: loop, autoplay, and preload (defaults to none).

// by default uses the "src" attribute
[video width="400" height="300" src="http://tunez.net/cool-movie-bro.mp4"]

// multiple files for maximum HTML5 playback
[video width="400" height="300"
    mp4="http://tunez.net/cool-movie-bro.mp4"
    ogv="http://tunez.net/cool-movie-bro.ogv"
    webm="http://tunez.net/cool-movie-bro.webm"]

The [video] shortcode also allows the following attributes: poster, loop, autoplay, and preload (defaults to metadata).

Embed Handlers

If you put an audio or video URL on a line by itself – boom, you’re done. A player will show up if the file extension is in the list of supported types.

http://mp3-url

This is where my text starts - notice the breathing room
between the URL and the content.

Metadata

Your audio and video uploads now have metadata that is extracted when they are uploaded. Prior to 3.6, no metadata was generated for audio and video files. A/V files are typically created with ID3 tags. ID3 tags contain data like artist, album, song, genre, etc for audio files and length, dimensions, codecs, etc for video.

To access this data:

// assuming you have an attachment ID
$meta = wp_get_attachment_metadata( $attachment->ID );

// debug to see what is available
print_r( $meta );

// always check for the property's existence before doing anything
if ( ! empty( $meta['length_formatted'] ) )
     echo $meta['length_formatted'];

None of the data is guaranteed to be there. If you have uploaded audio or video previously, this data won’t be generated on the fly. We are using the getID3 library to parse the files on upload. The code is elaborate. It is also possible that your media files don’t contain ID3 tags. If your files DO contain data, some of it is displayed in the sidebar on the Edit Media page. Some of the fields on the Edit Media page are pre-populated now with data from your ID3 tags, if they were present on upload.

Images embedded in MP3s

If you add the following lines to your functions.php:

add_post_type_support( 'attachment:audio', 'thumbnail' );
add_post_type_support( 'attachment:video', 'thumbnail' );
add_theme_support( 'post-thumbnails', array( 'attachment:audio', 'attachment:video' ) );

Then, if your mp3 contains an image, the image will be extracted, uploaded to your media library, and set as the post thumbnail for your audio file.
