Kubernetes Probes for Node Applications

Author
Florian Schaschko
Edited on August 23, 2024

Only healthy pods are happy pods. And happy pods make for happy users. Ask yourself two questions:

  • Do you want your users to be happy? Of course you do!
  • Do you enjoy your apps crashing like your dreams? Neither do we!

The good news is, if you're running Node applications in Kubernetes, you're in luck. Kubernetes has built-in mechanisms to keep your applications running smoothly. One of these mechanisms is probes, which help Kubernetes determine whether your applications are healthy and running as expected, and restart them automatically if they're not.

In this article, we'll show you how to use Kubernetes probes to keep your Node applications running like a well-oiled machine. Let's get started!

Monitoring Node with Node

You might say:

Why can't I just use a process manager like pm2 to keep my app running or recover it from crashes?

Well, let's talk about that. While pm2 can be effective for managing processes on a single (multicore) VPS, it falls short in a cloud-native environment. Ideally, you're no longer dealing with non-containerized applications anyway. But if you're still running a legacy setup these days, consider these two tips:

  1. Don't. It's 2024, and you're better than that.
  2. Containerize your app. If that's not an option, consider using onboard tools like systemd instead.

You could also consider using nodemon, but let's be real: it shines in development, yet it's not made for production. So using Node to monitor Node in production just doesn't cut it.

Self healing K8s to the rescue

Fortunately, we're not in the 2000s anymore.

And guess what? Your cloud infrastructure already comes with built-in solutions to keep your workloads in tip-top shape. Or restart them, if they're not. You've already containerized your app, now use that. Let's combine the power of Kubernetes with Node to craft a self-healing application!

One of the many great things about Kubernetes is that it keeps an eye on your containers by default and gives them a reboot if they crash. But what if your app running in a container is behaving unpredictably — not responding, or tangled up without raising any alerts? Sure, you could dig into the root causes, but that's as thrilling as watching paint dry. Let's do something fun instead!

No, seriously... Even the most well-coded Node apps misbehave sometimes. Therefore, we need a mechanism that does the right thing when things go wrong. This is where Kubernetes probes come in. Probes help Kubernetes assess when to restart your app. Your Node app will thank you, and your users will love the reliability. Why? Because if you additionally have multiple instances running (which you should) and one fails:

  1. Kubernetes detects it.
  2. Traffic will automatically route to the healthy instances.
  3. The failed container will be restarted.
  4. Everyone is happy.

Hello, zero downtime, even when your app is misbehaving.
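The multiple-instances part is just a replica count on your Deployment. A minimal sketch (my-app and its labels are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3          # multiple instances, so one failing pod doesn't take you down
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
```

With a Service selecting `app: my-app` in front, traffic only ever reaches pods that are up.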

Enter Kubernetes probes

Roll up your sleeves now, we have some work to do. Consider a simple express app:

import express from "express";
const app = express();
const port = 3000;
 
app.get("/", async (req, res) => {
  res.send("Hello World!");
});
 
app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});

What we'll do now is implement a liveness probe that responds to a GET request. Kubernetes can then use this probe to determine if the app is healthy. Let's add a /livez endpoint to our express app first:

import express, { Request, Response } from "express";
const app = express();
const port = 3000;
 
app.get("/", async (req, res) => {
  res.send("Hello World!");
});
 
app.get("/livez", async (_req: Request, res: Response) =>
  res.status(200).json({ status: "ok" }),
);
 
app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});

Then, on the Kubernetes side, we add a corresponding probe to our Deployment:

# ... other parts of the deployment
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          livenessProbe:
            httpGet:
              path: /livez
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 15

What did we do here? We added a livenessProbe to the Deployment. The kubelet will send a GET request to the /livez endpoint of our application every 15 seconds. If the response code falls outside the 200–399 range, the probe counts as failed, and Kubernetes will restart the container after a certain number of consecutive failures (failureThreshold defaults to 3). The initialDelaySeconds parameter tells Kubernetes to wait 30 seconds before the first probe. This is useful for applications that need some time to start up.
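If the defaults don't fit your app, the probe timing can be tuned further. A sketch with the knobs spelled out (the values are examples, not recommendations):

```yaml
livenessProbe:
  httpGet:
    path: /livez
    port: 3000
  initialDelaySeconds: 30  # wait before the first probe
  periodSeconds: 15        # interval between probes
  timeoutSeconds: 2        # how long to wait for a response
  failureThreshold: 3      # consecutive failures before a restart
  successThreshold: 1      # must be 1 for liveness probes
```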

That's it! You've just implemented a liveness probe for your Node application running in Kubernetes. Now, Kubernetes will automatically restart your application container if it becomes unresponsive. This can help you avoid downtime and keep your applications running smoothly.
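To see the endpoint's behavior in isolation, the same /livez handler can be sketched with Node's built-in http module and probed once, the way the kubelet would. This is a dependency-free sketch, not the article's Express setup:

```javascript
// Minimal /livez endpoint using only Node's built-in http module.
import http from "node:http";

const server = http.createServer((req, res) => {
  if (req.url === "/livez") {
    // "Connection: close" so the probe socket doesn't linger.
    res.writeHead(200, { "Content-Type": "application/json", Connection: "close" });
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Listen on an ephemeral port and probe ourselves once.
server.listen(0);
await new Promise((resolve) => server.once("listening", resolve));
const { port } = server.address();

const response = await fetch(`http://127.0.0.1:${port}/livez`);
const body = await response.json();
console.log(response.status, body.status); // 200 ok

server.close();
server.closeAllConnections?.();
```

The kubelet does essentially the same thing: an HTTP GET, then a check of the status code.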

You can do the same to implement a readinessProbe (do it!). This will tell Kubernetes when your application is ready to receive traffic. It wouldn't make sense to route traffic to your application if it's not ready to handle it, right?
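As a starting point, the readinessProbe looks just like the livenessProbe; only the consequence differs: a failing readiness check takes the pod out of the Service's endpoints instead of restarting it. A sketch, assuming a hypothetical /readyz endpoint (not shown in the app above) that returns 503 until dependencies such as database connections are up:

```yaml
readinessProbe:
  httpGet:
    path: /readyz   # hypothetical endpoint; return 503 until dependencies are ready
    port: 3000
  periodSeconds: 10
  failureThreshold: 3
```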

Legacy setups

A word about legacy setups:

  • They are still around
  • They are more common than many would prefer to acknowledge
  • If you have such a setup, you might not have the luxury of leveraging Kubernetes probes

Example: You're running your Node process on a Linux VM. In this case, you should probably do so with systemd. Consider using sd_notify to signal the health of your application to the system. This way, you can ensure that your application is restarted if it crashes. In addition, you're watching over your app from one layer above, effectively bypassing the Node-with-Node monitoring problem.
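A rough sketch of such a unit file (paths and names are placeholders, and Type=notify assumes your app actually sends READY=1 via sd_notify, e.g. through an npm binding):

```ini
[Unit]
Description=My Node app

[Service]
Type=notify            # app must signal readiness with READY=1 via sd_notify
ExecStart=/usr/bin/node /opt/my-app/server.js
WatchdogSec=30         # app must send WATCHDOG=1 within this interval
Restart=on-failure     # restart on crashes and watchdog timeouts
RestartSec=5

[Install]
WantedBy=multi-user.target
```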

And systemd can do a lot more, even sandboxing to some extent. We love systemd for what it is, but we don't recommend it for an application that would benefit from running in Kubernetes instead.

Summary

In this article, we illustrated how Kubernetes probes can help keep your Node applications running smoothly. By implementing probes, Kubernetes can determine whether your application is healthy and running as expected, detecting failures and minimizing downtime. You can also combine probes with external tools like UptimeRobot to monitor your applications and get alerts if they go down.


If you need help setting up probes for your Node applications or in your cluster with Kubernetes, feel free to contact us. We're here to help!