Softplus function — Smooth approximation of the ReLU function

Step by step implementation with its derivative

neuralthreads
Dec 1, 2021

In this post, we will talk about the Softplus function. The Softplus function is a smooth approximation of ReLU: it replaces the sharp kink at x = 0 in the ReLU graph with a smooth curve.

You can download the Jupyter Notebook from here.

Back to the previous post

Back to the first post

3.6 What is the Softplus activation function and its derivative?

This is the definition of the Softplus function.
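softplus(x) = ln(1 + exp(x))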

And it is very easy to find the derivative of the Softplus function.
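By the chain rule,

softplus'(x) = exp(x) / (1 + exp(x)) = 1 / (1 + exp(-x)) = sigmoid(x)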

This is the graph of the Softplus function and its derivative.

Softplus function and its derivative
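If you want to reproduce a similar plot yourself, here is a minimal sketch using Matplotlib (Matplotlib is an assumption here; the original notebook does not use it):

import numpy as np                             # importing NumPy
import matplotlib.pyplot as plt                # assuming Matplotlib is installed
x = np.linspace(-6, 6, 200)                    # evenly spaced inputs
plt.plot(x, np.log(1 + np.exp(x)), label="softplus(x)")
plt.plot(x, 1 / (1 + np.exp(-x)), label="softplus'(x)")
plt.xlabel("x")
plt.legend()
plt.show()
Plotting the Softplus function and its derivative (sketch)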

We can easily implement the Softplus function and its derivative in Python.

import numpy as np                             # importing NumPy
np.random.seed(42)

def softplus(x):                               # Softplus
    return np.log(1 + np.exp(x))

def softplus_dash(x):                          # Softplus derivative
    return 1/(1 + np.exp(-x))
Defining the Softplus function and its derivative
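One caveat worth noting: np.exp(x) overflows for large positive x (roughly x > 700 in double precision). If that matters for your inputs, NumPy's np.logaddexp evaluates log(exp(0) + exp(x)) = log(1 + exp(x)) without overflow. This variant is a sketch, not part of the original notebook:

def softplus_stable(x):                        # numerically stable Softplus
    return np.logaddexp(0, x)                  # log(exp(0) + exp(x)) = log(1 + exp(x))
A numerically stable Softplus (optional)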

Let us have a look at an example

x = np.array([[-20], [0.5], [1.2], [-2.3], [0]])
x
softplus(x)
softplus_dash(x)
Softplus example
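For reference, the outputs should be approximately (rounded to four decimals):

softplus(x)       ≈ [[2.061e-09], [0.9741], [1.4633], [0.0955], [0.6931]]
softplus_dash(x)  ≈ [[2.061e-09], [0.6225], [0.7685], [0.0911], [0.5000]]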

I hope now you understand how to implement the Softplus function and its derivative.

In the next post, we will talk about the Softmax activation function and its Jacobian, which is the most important post in this chapter.

Watch the video on YouTube and subscribe to the channel for more videos and posts like this.
Every slide is 3 seconds long and without sound. You may pause the video whenever you like.
You may put on some music too if you like.

The video is basically everything in this post, only in slides.

Many thanks for your support and feedback.

If you like this course, then you can support me at

It would mean a lot to me.

Continue to the next post — 3.7 Softmax Activation function and its Jacobian.
