0️⃣ Zero downtime deployment strategies for PHP apps


Hey guys,

We all know that deploying a PHP application is super easy. Traditionally, we would do something like:

git pull

composer install

(some extra commands, e.g.: php artisan config:cache)

But it comes with a risk: the deployment takes time and can last up to several minutes.

Issues can pop up in the middle of the deployment (pulling, composer install, ...), and we can never be sure that no traffic is hitting the app while the new version is being deployed 🔥

Additionally, there is no guarantee that your application stays up and running without any interruption or issue during the process 🥹.

So here are some strategies I'll share below; pick the one that fits your applications best 😉

Strategies

Traditional build with duct tape

Still the same build as in the introduction, but with some improvements.

Instead of pulling, we'll clone the project into a new folder (using the current timestamp as the folder name).

Then we run a fresh deployment process (prepare .env, composer install, ...).

After the process is done, point nginx or apache to the new folder path and reload nginx/apache.
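
Here's a minimal sketch of that flow, assuming the app lives under /var/www/my-app, nginx serves /var/www/my-app/current, and the "point to the new folder" step is done by swapping a symlink (all paths and the repo URL below are just placeholders):

```bash
# Clone the new release into a timestamped folder
RELEASE="/var/www/my-app/releases/$(date +%Y%m%d%H%M%S)"
git clone --depth 1 git@example.com:acme/my-app.git "$RELEASE"

cd "$RELEASE"
cp /var/www/my-app/shared/.env .env             # reuse the existing .env
composer install --no-dev --optimize-autoloader
php artisan config:cache                        # plus whatever extra scripts you run

# Atomically point the web root at the new release, then reload
ln -sfn "$RELEASE" /var/www/my-app/current
sudo nginx -s reload
```

A nice side effect: the old release folders stick around, so rolling back is just pointing the symlink back.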

  • PROs:

    • ??
  • CONs:

    • A workaround

    • Involves a lot of manual work

    • Doesn't feel very reliable

High-perf mode

Nowadays, we have 2 high-perf runners: Swoole & RoadRunner.

Both are built differently but serve the same purpose: keeping your PHP app running in a high-perf mode.

Basically, your app becomes a long-running process (unlike the traditional way), just like a Node.js Express or Go server. Your project is loaded into memory, so if there are file changes, they won't take effect until you restart the process.

With that, you can follow the traditional build process (git pull, composer install, scripts, ...) just fine.

After those actions, simply restart the process gracefully and boom, your app is up.
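
As a rough sketch, assuming a Laravel app running on Octane (the path and branch are placeholders; use the equivalent reload command for your runner):

```bash
cd /var/www/my-app                               # placeholder path

git pull origin main
composer install --no-dev --optimize-autoloader
php artisan config:cache

# Gracefully reload the long-running workers so the new code is picked up
php artisan octane:reload                        # Laravel Octane (Swoole or RoadRunner)
# ./rr reset                                     # plain RoadRunner: resets the worker pool
```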

  • PROs:

    • Easy

    • You also get to run your PHP app in high-perf mode

  • CONs:

    • There will be a little downtime (might be up to a few seconds) when restarting the process.

    • Quite repetitive when deploying several nodes

      • If you don't deploy in parallel, there will be inconsistency between nodes for a short amount of time until the deployment completes.

Build your own Docker Image

Docker has been around for a decade, and it was built for this 😎

This would work best if you have a CI/CD pipeline too. The flow is (a script-style sketch of these steps follows the list):

  • Run tests (unit, integration)

  • Build image

  • (Optional) Test image

  • Push the image to the registry (Docker or your private cloud)

  • Swap the image version to latest

  • Run health check

  • Done
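
A stripped-down script version of those steps could look like this (the image name, registry, container name, and the Kubernetes deployment are all placeholders; the "swap" step depends entirely on where you run your containers):

```bash
set -euo pipefail

IMAGE="registry.example.com/acme/my-app"         # placeholder registry/image
VERSION="$(git rev-parse --short HEAD)"

# Run tests before building anything
vendor/bin/phpunit

# Build and push the image
docker build -t "$IMAGE:$VERSION" .
docker push "$IMAGE:$VERSION"

# Swap the running version (example: a Kubernetes rolling update)
kubectl set image deployment/my-app app="$IMAGE:$VERSION"
kubectl rollout status deployment/my-app --timeout=120s

# Health check against the public endpoint
curl --fail https://example.com/health
```

The rolling update is where the zero downtime actually comes from: the old containers keep serving traffic until the new ones report healthy.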

With Docker, you can run your application in serverless mode too, using GCP Cloud Run, AWS Fargate, or Fly.io 🚀
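
For example, on Cloud Run a deploy is a single command (the service, project, region, and image below are made up), and traffic only shifts once the new revision is ready:

```bash
gcloud run deploy my-app \
  --image gcr.io/my-project/my-app:1.2.3 \
  --region europe-west1
```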

  • PROs:

    • Reliable

    • Zero downtime, as long as traffic only switches to the new container once it passes its health check

    • Swapping the image is fast, so we don't have to worry about inconsistent versions across nodes when running multiple nodes.

  • CONs:

    • Costs $$$ for image storage on a private registry.

Ending

Well, those are some of the strategies I've worked with, and I'm totally in love with the Docker way (I'm also running it in high-perf mode 🥰).

Hope this helps you achieve zero-downtime deployments.