
SIGTERM: Linux Graceful Termination | Exit Code 143, Signal 15

What Is SIGTERM (Signal 15)?

SIGTERM (signal 15) is used in Unix-based operating systems, such as Linux, to terminate a process. The SIGTERM signal provides an elegant way to terminate a program, giving it the opportunity to prepare for shutdown and perform cleanup tasks, or to refuse to shut down under certain circumstances. Unix/Linux processes can respond to SIGTERM in a variety of ways: they can catch it, block it, or ignore it altogether.
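For example, a long-running program can register a handler for SIGTERM and use it to trigger an orderly shutdown. Below is a minimal Go sketch; the cleanup step is a hypothetical placeholder:

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        // Register interest in SIGTERM (and SIGINT for local testing).
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

        fmt.Println("running; send SIGTERM with: kill", os.Getpid())

        sig := <-sigs // block until a signal arrives
        fmt.Println("received", sig, "- cleaning up before exit")

        // Hypothetical cleanup: flush buffers, close connections, and so on.
        time.Sleep(2 * time.Second)

        os.Exit(0)
    }

A process that wants to ignore SIGTERM entirely can call signal.Ignore(syscall.SIGTERM) instead, which is one reason a plain kill does not always stop a process.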

SIGTERM is the default signal sent by the Unix/Linux kill command – when a user executes kill, behind the scenes, the operating system sends SIGTERM to the process. If the process does not stop in response (because it handles or ignores the signal), the user can escalate to the more forceful SIGKILL signal, described below.

In Docker containers, a container that is terminated via a SIGTERM signal shows exit code 143 in its logs. If you are a Kubernetes user, this article will help you understand what happens behind the scenes when Kubernetes terminates a container, and how to work with the SIGTERM signal in Kubernetes.

SIGTERM (Exit Code 143) vs SIGKILL (Exit Code 137)

SIGTERM (Unix signal 15) is a “polite” Unix signal: it terminates the process by default, but it can be handled or ignored by the process. This gives the process a chance to complete essential operations or perform cleanup before shutting down. In other words, the intent is still to end the process, but only after giving it the opportunity to clean up first.

SIGKILL (Unix signal 9) is a “brutal” Unix signal that kills the process immediately. It is not possible to handle or ignore SIGKILL, so the process does not have an opportunity to clean up. SIGKILL should be used by Unix/Linux users as a last resort, because it can lead to errors and data corruption.

In some cases, even SIGKILL cannot take effect immediately. If a process is blocked in uninterruptible sleep (for example, waiting on network or disk I/O), the kernel cannot deliver the signal until the operation completes. Separately, a process that has already exited but has not been reaped by its parent remains in the process table as a zombie process; zombies cannot be removed by signals and disappear only when the parent reaps them, the parent exits, or the system restarts.

In Docker containers, exit codes 143 and 137 correspond to SIGTERM and SIGKILL, following the convention of reporting 128 plus the number of the terminating signal:

  • Docker exit code 143 – means the container received a SIGTERM from the underlying operating system (128 + 15)
  • Docker exit code 137 – means the container received a SIGKILL from the underlying operating system (128 + 9)
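The 128-plus-signal convention can be observed outside Docker as well. The following Go sketch (assuming a Linux host with the sleep command available) starts a child process, sends it SIGTERM, and derives the shell-style exit code from the terminating signal:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        time.Sleep(100 * time.Millisecond)
        cmd.Process.Signal(syscall.SIGTERM) // terminate the child

        err := cmd.Wait()
        if exitErr, ok := err.(*exec.ExitError); ok {
            ws := exitErr.Sys().(syscall.WaitStatus)
            if ws.Signaled() {
                // SIGTERM is signal 15, so this prints 143; SIGKILL (9) would give 137.
                fmt.Println("killed by", ws.Signal(), "-> exit code", 128+int(ws.Signal()))
            }
        }
    }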

How Can You Send SIGTERM to a Process in Linux?

The most common way to end a process in Unix/Linux is to use the kill command, like this: kill [ID]

By default, the kill command sends a SIGTERM signal to the process.

To discover [ID], the process ID, use the command ps aux, which lists all running processes.

How to send SIGKILL

In extreme cases, you may need to immediately terminate a process using SIGKILL. Use this command to send SIGKILL: kill -9 [ID]
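If you need to do the same from your own tooling rather than from the shell, a common pattern is to send SIGTERM first and fall back to SIGKILL only if the process does not exit in time. Here is a rough Go sketch of that escalation; the PID and timeout are placeholders:

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // terminate sends SIGTERM to pid and escalates to SIGKILL if the
    // process is still alive after the timeout.
    func terminate(pid int, timeout time.Duration) error {
        if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Signal 0 performs an existence check without sending anything.
            if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
                return nil // process is gone
            }
            time.Sleep(200 * time.Millisecond)
        }
        return syscall.Kill(pid, syscall.SIGKILL)
    }

    func main() {
        pid := 12345 // placeholder PID, e.g. taken from ps output
        if err := terminate(pid, 10*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, "terminate failed:", err)
        }
    }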

Handling zombie processes

When you list running processes, you may find processes that show defunct in the CMD column. These are zombie processes that did not terminate correctly. A zombie process:

  • is no longer executing
  • has had its memory and other resources released
  • still retains a process ID and an entry in the process table

Zombie processes remain in the process table until their parent process reaps them, the parent exits, or the operating system restarts. In many cases, zombie processes accumulate because a parent process forked multiple child processes and never reaped them. To avoid this situation, ensure that the parent waits on every child it creates, or explicitly sets the SIGCHLD disposition to SIG_IGN via sigaction so the kernel reaps terminated children automatically.
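The sigaction/SIG_IGN advice applies to programs that install their own C-level signal handlers. In a Go parent process, the practical equivalent is simply to wait on every child you start, as in this small sketch:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sleep", "1")
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }

        // Without this Wait call, the finished child would linger as a
        // <defunct> (zombie) entry until the parent exits.
        if err := cmd.Wait(); err != nil {
            log.Println("child exited with error:", err)
        }
    }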

How Can You Send SIGTERM to a Container in Kubernetes?

If you are a Kubernetes user, you can send a SIGTERM to a container by terminating a pod. Whenever a pod is terminated, by default, Kubernetes sends the containers in the pod a SIGTERM signal.

Pods are often terminated automatically as a result of scaling or deployment operations. To terminate a pod manually, you can run kubectl delete pod or make the equivalent API call to delete the pod.

Note that after a grace period, which is 30 seconds by default, Kubernetes sends SIGKILL to any containers that have not yet exited, terminating them immediately.
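The same deletion, with an explicit grace period, can also be performed through the Kubernetes API. The sketch below uses the client-go library (assuming the client-go and apimachinery modules are available and the program runs with in-cluster credentials); the namespace and pod name are placeholders:

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Deleting the pod triggers SIGTERM in its containers; after the
        // grace period (here 30 seconds) any remaining containers receive SIGKILL.
        grace := int64(30)
        err = clientset.CoreV1().Pods("default").Delete(
            context.TODO(),
            "my-pod", // placeholder pod name
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        if err != nil {
            log.Fatal(err)
        }
    }

From the command line, the equivalent is kubectl delete pod <pod-name> --grace-period=30.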

The Kubernetes Graceful Termination Process and SIGTERM

Kubernetes manages clusters of containers, performing many automated operations on your applications. For example, it can scale applications up or down, update them, and remove applications from the cluster. Therefore, there are many cases in which Kubernetes needs to shut down a pod (with one or more containers), even if they are functioning properly.

There are also cases in which Kubernetes will shut down pods because they are malfunctioning, or because there are insufficient resources on the host machine (known as eviction). Whenever Kubernetes needs to terminate a pod, for any reason, it sends SIGTERM to the containers running in the pod.

Here is the full process that occurs when Kubernetes wants to terminate a pod:

  1. Pod is set to Terminating status – Kubernetes removes the pod from all Services, so it stops receiving new traffic. At this point, the containers in the pod are unaware of the change.
  2. preStop hook – this is a command or HTTP request that Kubernetes executes in the pod’s containers just before SIGTERM is sent. You can use this hook to start a graceful shutdown. While it is preferable to handle the SIGTERM signal directly (which is sent in the next step), if you cannot do so for any reason, the preStop hook lets you trigger a graceful shutdown without code changes to the application.
  3. SIGTERM signal sent to the pod – Kubernetes sends SIGTERM to the main process of each container in the pod. Ideally, your applications should handle the SIGTERM signal and initiate a clean shutdown process. Note that even if you use the preStop hook, you still need to test and be aware of how your application handles SIGTERM. Conflicting or duplicate reactions to preStop and SIGTERM can lead to production issues.
  4. Grace period – after SIGTERM is sent, Kubernetes waits for terminationGracePeriodSeconds, which is 30 seconds by default, to allow the containers to shut down. You can customize the grace period in each pod’s YAML template. Note: Kubernetes does not pause the countdown for the preStop hook – the grace period starts counting as soon as termination begins, so a slow preStop hook leaves less time for the application to react to SIGTERM. If the container exits on its own before the grace period ends, Kubernetes stops waiting and moves to the next step.
  5. SIGKILL signal sent to the pod – any container processes still running on the host are immediately terminated, and the kubelet cleans up all related Kubernetes objects.

Handling SIGTERM and preStop in Kubernetes Applications

To ensure that pod termination does not interrupt your applications or impact end users, you should handle termination gracefully.

Practically speaking, this means ensuring your application handles the SIGTERM signal and performs an orderly shutdown process when it receives it. This should include completing transactions, saving transient data, closing network connections, and erasing unneeded data.

Note that unlike in a regular Linux system, in Kubernetes, SIGTERM is followed by SIGKILL after a grace period. So your container must be prepared to shut down; it cannot simply ignore the signal.
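For an HTTP service, handling SIGTERM usually means refusing new work, draining in-flight requests, and exiting well inside the grace period. Here is a minimal Go sketch; the port and shutdown timeout are illustrative:

    package main

    import (
        "context"
        "log"
        "net/http"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}

        // ctx is cancelled when Kubernetes sends SIGTERM to the container.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        go func() {
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatal(err)
            }
        }()

        <-ctx.Done() // SIGTERM received: Kubernetes has begun terminating the pod

        // Finish in-flight requests, but stay well under the 30-second grace period.
        shutdownCtx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
        defer cancel()
        if err := srv.Shutdown(shutdownCtx); err != nil {
            log.Println("forced shutdown:", err)
        }
        log.Println("clean exit")
    }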

Another option for handling graceful termination is the preStop hook – this lets you perform shutdown processes without code changes to the application. If you use the preStop hook, make sure that the actions performed do not duplicate, or conflict, with the actions the application performs when it receives the SIGTERM signal. It is usually best to handle either SIGTERM or preStop, to avoid conflicts.

Which Kubernetes Errors Are Related to SIGTERM?

Any Kubernetes error that results in a pod shutting down will result in a SIGTERM signal sent to containers within the pod:

  • At the Kubernetes level, you will see the Kubernetes error by running kubectl describe pod.
  • At the container level, you will see the exit code – 143 if the container terminated gracefully with SIGTERM, or 137 if it was forcefully terminated after the grace period.
  • At the host level, you will see the SIGTERM and SIGKILL signals sent to the container processes.

One exception is the OOMKilled error. This occurs when a container or pod exceeds the memory allocated to it on the host. When a container is terminated due to OOMKilled, the kernel’s out-of-memory killer stops the process with SIGKILL immediately, without a preceding SIGTERM and without a grace period.

How Does SIGTERM Impact NGINX Ingress Controllers?

When running applications on Kubernetes, you must ensure that ingress controllers do not experience downtime. Otherwise, whenever the controller restarts or is redeployed, users will experience a slowdown or service interruption. If an ingress pod is terminated, this can result in dropped connections – this must be avoided in production.

Problem: NGINX does not perform graceful termination on SIGTERM

If you are using the official NGINX Ingress Controller, when the controller pod is terminated, Kubernetes sends a SIGTERM signal as usual.

However, the NGINX controller does not handle SIGTERM in the way Kubernetes expects:

  • When NGINX receives SIGTERM, it shuts down immediately. Basically, NGINX treats SIGTERM like SIGKILL.
  • When NGINX receives a SIGQUIT signal, it performs a graceful shutdown.

Solution: Use the preStop hook

As we discussed in the Handling SIGTERM and preStop section above, Kubernetes provides a second option for handling graceful termination – the preStop hook. You can use the preStop hook to send a SIGQUIT signal to NGINX, just before SIGTERM is sent. This avoids NGINX shutting down abruptly, and gives it the opportunity to terminate gracefully.

Troubleshooting Kubernetes Pod Termination with Komodor

As a Kubernetes administrator or user, pods or containers terminating unexpectedly can be a pain, and can result in severe production issues. The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming.

Some best practices can help minimize the chances of SIGTERM or SIGKILL signals affecting your applications, but eventually something will go wrong—simply because it can.

This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.

Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:

  • Change intelligence: Every issue is a result of a change. Within seconds we can help you understand exactly who did what and when.
  • In-depth visibility: A complete activity timeline, showing all code and config changes, deployments, alerts, code diffs, pod logs, and more. All within one pane of glass with easy drill-down options.
  • Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across your entire system.
  • Seamless notifications: Direct integration with your existing communication channels (e.g., Slack) so you’ll have all the information you need, when you need it.

If you are interested in checking out Komodor, use this link to sign up for a Free Trial.
