Application Performance: a Skillshare

Brian Vuyk on background

Brian Vuyk

A red and blue stopwatch with three lines indicating speedy movement

Every other week our team comes together for breakout skillshares, where a few team members share technologies, ideas, processes, and skills with each other. Recently we learned about application performance from one of our senior engineers, Brian. It was too good not to share some of the insights.

Brian has been a member of the Savas Labs team for two years and has been building custom software since 2006. When he's not behind the computer, Brian and his family enjoy a hobby farm where they keep chickens and bees, and are (too often, if you ask him) visited by unexpected wild guests. This makes for very exciting updates from Brian that the whole team enjoys.

A message in Slack from Brian that reads: "AFK for a bit. I need to inspect my bees while conditions are good. Weather has been all over the map this week, so I'm grabbing the window while I can"
A message in Slack from Brian that reads: "Rural problems... I have a skunk that I am trying to trap in my barn. It's currently curled up in the front of my snowblower. Meanwhile, my chickens are dead set on getting in there which wouldn't end well for them"
A beehive

But when Brian’s not keeping us up to date on his farm adventures, he’s helping our clients with their performance needs and sharing his knowledge with the rest of our team. In the past five years, Brian has particularly enjoyed implementing microservice architectures when they are the right approach for a partner's application. Here is a snippet of insight from the skillshare.

PHP application performance

One of the languages underpinning the frameworks Brian has been working with for 15 years is PHP. PHP underlies popular web frameworks such as Drupal, WordPress, Symfony, and Laravel, and is at the core of many of the websites and applications you use every day. In fact, W3Techs estimates that PHP is used by 79% of the websites whose server-side programming language is known. Furthermore, this market share has remained consistent (within ~1% fluctuation) since mid-2013.

Historically, PHP was not regarded as a high-performance language. Early iterations of the language required the code to be compiled on every run or page request. One of the most important early additions to PHP performance was opcode caching. Early opcode caches such as APC saved the pre-compiled units of PHP code (called ‘opcodes’) in memory, allowing them to be quickly reused on subsequent requests for dramatically faster server-side execution.

The introduction of PHP 7 (version 8 is now recommended) represented a large step forward in PHP performance: a rewritten engine and many small optimizations roughly doubled the raw speed of the language for typical workloads, with the bundled OPcache (an opcode cache, shipped with PHP since 5.5) as a standard part of most deployments. Each subsequent minor version (7.1, 7.2, and 7.3) incrementally improved that speed further.
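As a concrete reference point, OPcache is typically tuned through a handful of php.ini directives. A minimal sketch follows; the directive names come from the standard OPcache extension, but the values shown are illustrative starting points, not one-size-fits-all recommendations:

```ini
; Enable the opcode cache for web and CLI SAPIs.
opcache.enable=1
opcache.enable_cli=1

; Memory reserved for compiled opcodes, in megabytes.
opcache.memory_consumption=128

; Maximum number of scripts the cache will hold.
opcache.max_accelerated_files=10000

; In production, skip re-checking files for changes on every request.
; (Requires a cache reset or server restart when deploying new code.)
opcache.validate_timestamps=0
```

The last directive trades deploy-time convenience for per-request speed, which is usually the right trade in production but the wrong one in local development.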

How important is raw speed?

Like so many things, it truly depends, and the application layer does not typically account for the largest bottlenecks.

For many projects that use PHP to drive web applications and sites, shortcomings with the underlying language speed are often mitigated by judicious use of caching in several layers - from high-level HTTP caches such as Varnish to application-level caching of processed data.
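As a sketch of that application-level layer, here is a minimal in-memory memoization cache. The class name and the "expensive" computation are hypothetical stand-ins; a production setup would more likely back this with APCu, Redis, or a framework cache backend:

```php
<?php
// Sketch of application-level caching: memoize an expensive computation
// so repeated lookups skip the work. ReportCache is a hypothetical
// stand-in for a real cache backend (APCu, Redis, etc.).
class ReportCache
{
    private array $cache = [];

    public function get(string $key, callable $compute): mixed
    {
        if (!array_key_exists($key, $this->cache)) {
            $this->cache[$key] = $compute();
        }
        return $this->cache[$key];
    }
}

$cache = new ReportCache();
$calls = 0;
$expensive = function () use (&$calls): int {
    $calls++; // stands in for a slow database aggregation
    return 42;
};

$first  = $cache->get('monthly-report', $expensive);
$second = $cache->get('monthly-report', $expensive); // served from cache
```

The second call returns the cached value without re-running the computation, which is the entire point: pay the cost once, serve processed data many times.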

However, there are also use cases where caching cannot easily be applied. For instance, a project may need to return (near) real-time data in responses, or may use PHP to perform computationally intensive tasks such as data analytics or aggregation. In these cases, the raw speed of the language can be critically important.

Below, we are going to look at some ways that PHP’s performance can be improved substantially for these use cases.

State, Bootstrapping, and the shared-nothing architecture

Traditionally, one of the primary design philosophies behind PHP has been the concept of ‘shared-nothing’ architecture where a single request is served by a single thread. With each request, the PHP worker thread loads both the PHP engine and your application, processes the request, then tears it all back down again. As a result, any state information associated with a request is completely removed before the next request is received.

This model has some compelling benefits:

  • Developers rarely have to concern themselves with concepts like garbage collection and memory management.
  • PHP can scale linearly - that is, you can scale your application by simply adding more PHP processes (or workers).
  • A single PHP server can handle requests from many different applications, since each request is handled completely independently of the others.
  • It is impossible for state information to leak between requests.

However, this model also has a couple of major drawbacks from a resource efficiency standpoint:

  • The process of setting up (or bootstrapping) and tearing down your application with each request is computationally expensive, and can account for as much as half of the overall request processing time.
  • The single-threaded nature of request handling means that any communication with the filesystem, database, network resources, or other I/O requires your PHP worker thread to sit idle while waiting for a response.

While the benefits of this type of architecture are very compelling for rapid development and simple scalability, the associated resource inefficiencies can lead to both increased response times and significantly increased server costs.

Long live the PHP worker process!

Recognizing the tradeoffs described above, several projects have emerged in the PHP ecosystem over the past few years that implement different design philosophies to reduce both response time and server costs, while finding ways to mitigate additional development complexity.

The first tradeoff typically made is the switch from ‘shared-nothing’ single-threaded request handlers to PHP application servers with long-lived worker threads. In this scenario, bootstrapping happens once, when the worker thread is initially started. The worker thread then processes multiple requests passed to it by the application server, and remains alive for the lifetime of the application server.

By reducing or eliminating the bootstrapping phase on every request, we are able to deliver a significant reduction in request response times, often doubling the server’s request throughput.
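The difference is where the bootstrap happens. Here is a minimal sketch of the long-lived worker model; the container contents, handler, and request source are hypothetical placeholders, and in practice an application server such as RoadRunner or Swoole supplies the actual request loop:

```php
<?php
// Sketch of the long-lived worker model: the expensive bootstrap runs
// once, then the same process serves many requests. The container,
// handler, and request source are hypothetical placeholders.

// Expensive bootstrap: load configuration, build the service container.
$container = ['config' => ['greeting' => 'Hello']];
$bootstraps = 1; // in the shared-nothing model this would grow per request

$handleRequest = function (array $request) use ($container): string {
    return $container['config']['greeting'] . ', ' . $request['name'];
};

$responses = [];
foreach ([['name' => 'Ada'], ['name' => 'Grace']] as $request) {
    // No per-request bootstrap: the container built above is reused.
    $responses[] = $handleRequest($request);
}
```

Every request after the first skips the bootstrap entirely, which is where the throughput gain comes from.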

Of course, this comes with tradeoffs around application state and memory leakage. Most existing code and libraries were written with the assumption that state would be destroyed at the end of each request. While most code will likely continue to work fine, some classes may unintentionally retain state between requests.

Take, for example, an object that accumulates status messages generated during a request. If that object is not properly re-initialized before the next request, status messages intended for one user could be shown to another.

There is also potential for memory leakage: an object that collects data may likewise not be sufficiently flushed or re-initialized between requests, growing in memory consumption as each request adds more data until an out-of-memory error kills the worker thread.
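To make the hazard concrete, here is a hedged sketch; the Messenger class is a hypothetical stand-in for any service that accumulates per-request state, and the fix is an explicit per-request reset, which also caps memory growth:

```php
<?php
// Sketch of the state-leak hazard in a long-lived worker, and the fix.
// Messenger is a hypothetical service that accumulates status messages.
class Messenger
{
    private array $messages = [];

    public function add(string $message): void
    {
        $this->messages[] = $message;
    }

    public function all(): array
    {
        return $this->messages;
    }

    public function reset(): void
    {
        $this->messages = []; // also caps memory growth across requests
    }
}

$messenger = new Messenger(); // lives for the worker's lifetime

// Request 1 (user A)
$messenger->add('Your order shipped.');
$forUserA = $messenger->all();

// Without this reset, user B would see user A's message below.
$messenger->reset();

// Request 2 (user B)
$messenger->add('Password updated.');
$forUserB = $messenger->all();
```

Frameworks targeting long-lived workers generally formalize this pattern, resetting or rebuilding stateful services between requests so application code does not have to remember to.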


Clearly, we're just scratching the surface of a topic that could be the focus of an entire career! If your interest is piqued, please feel free to reach out to us - we'd love to chat. 

Need to boost your performance?

We'd be happy to lend an ear.