Automating the Matched Filter using Neural Nets

Praveen Krishna Murthy
5 min read · Aug 17, 2019



Matched Filter

The matched filter is a concept from signal processing. The technique maximizes the signal-to-noise ratio (SNR) when detecting a known signal in noise. Let us first understand the basic definition and the math behind it.

The matched filter is defined as the convolution of the input signal with a conjugated, time-reversed version of the reference signal:

y(n) = x(n) * h(n), where h(n) = s*(−n)

Here x(n) is the input, s(n) is the reference signal, and the superscript * in h(n) denotes the complex conjugate (for real-valued signals, simply h(n) = s(−n)).
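This can be sketched in a few lines of Python. For real-valued audio the conjugate is a no-op, but it is kept for generality; the function name and the embedding offset below are my own, not from the original implementation.

```python
def matched_filter(x, s):
    """Full convolution of input x with the conjugated, time-reversed reference s."""
    h = [c.conjugate() for c in reversed(s)]  # h(n) = s*(-n)
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# The output peaks where the reference best aligns with the input.
s = [1.0, 2.0, 3.0]
x = [0.0] * 4 + s + [0.0] * 3        # reference embedded at offset 4
y = matched_filter(x, s)
peak = max(range(len(y)), key=lambda i: y[i])  # peak == 4 + len(s) - 1 == 6
```

The peak lands at offset + len(s) − 1, which is how the filter localizes a known click in a longer recording.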

Motivation

Well, simple automation saves a lot of time. The manual effort of creating reference signals for matched filtering should be eliminated. The process can be automated with a neural network, which can be regarded as a non-linear signal estimator.

Data

Machine learning revolves around data. We have all heard quotes like "data is the new oil" or "the new plutonium". Well, we have it in abundance for this domain. The matched filter here is applied to the sounds of underwater mammals, which is interesting. Marine mammals make a certain type of sound that researchers call "clicks": short sounds produced at regular intervals. Let me show this chamber of secrets!

Time-domain signal of an underwater mammal (cachalot, 4 s) and a single click (20 ms)

These clicks vary in time and in temporal structure for each mammal. There are several different families of underwater mammals. This experiment was performed mainly on the dolphin family: oceanic dolphins, Risso's dolphin, the killer whale, and so on. A detailed description of the data used is shown below.

Number of mammals : 6–8
Sampling Frequency : 48 kHz
Audio Length : 200 seconds
Number of Samples : 9,600,000

So the neural networks were trained with around 9.6 million samples.

Design

Let us lay out a design before going any further. We definitely need inputs and labels for the neural network training. Here is the complete block diagram.

Block diagram of the neural-network-based matched filter

The input, of course, is the audio signal, which is passed to the matched filter block, where the conventional matched filter is applied. The matched filter output is passed through a denoiser block, where the signal is cleaned using a signal-processing technique. The data-formatting block then formats the data according to the input-layer configuration of the neural network. The denoised signal serves as the label and the audio signal as the input to the network. The trained neural network is expected to produce the matched filter output when an audio signal is passed in.
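The actual denoiser follows a technique from the DAFX book cited in the references. As a crude stand-in, a simple amplitude gate illustrates the idea of cleaning the matched filter output before it becomes a training label (the function name and threshold are mine, not from the original pipeline):

```python
def denoise(y, thresh):
    """Crude denoiser: zero every sample whose magnitude falls below thresh.
    A stand-in for the spectral method actually used; it keeps only the
    strong matched-filter peaks that become training labels."""
    return [v if abs(v) >= thresh else 0.0 for v in y]

cleaned = denoise([0.1, -0.8, 0.3, 2.0], 0.5)  # -> [0.0, -0.8, 0.0, 2.0]
```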

Data Formatting

The data formatting can be explained in detail with a figure.

Data Formatting

The audio signals are one-dimensional, i.e., Nx1 signals. When such a signal is passed through the conventional matched filter block, an Nx1 output is expected on the other side, as shown in the figure.

The neural network is fed with batch-wise time-series data. An Mx1 slice of the Nx1 signal is a small part of the input. This slice is fed together with the corresponding slice of the label for the neural network to learn from. The second frame of the time series is then fed with its corresponding matched filter output, and this continues along the length of the audio signal.

M can vary as the user decides; it can be 14, 21, 38, 42, and so on.
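The slicing described above can be sketched as follows. Non-overlapping frames are assumed here, and any trailing samples that do not fill a complete frame are dropped; the original formatting details are not specified, so treat this as an illustration:

```python
def frame_signal(x, M):
    """Split an N-sample signal into consecutive non-overlapping Mx1 frames."""
    return [x[i:i + M] for i in range(0, len(x) - M + 1, M)]

# A 100-sample signal with M = 21 yields 4 complete frames.
frames = frame_signal(list(range(100)), 21)
```

The same framing is applied to the denoised matched filter output, so each input frame is paired with a label frame of identical shape.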

Neural Network Architecture

Back when this idea was implemented, I was a newbie to machine learning. So I first tried out the concept with an MLP!

Multi-Layered Perceptron

The multi-layered perceptron is a simple neural network architecture with three layers: input, hidden, and output. Each layer has weights, which are real-valued numbers learned during training. The trained MLP can be imagined as a nonlinear function estimator.

Multi-Layered Perceptron
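A minimal sketch of what such a network computes, assuming one tanh hidden layer and a linear output; the actual layer sizes and activations used in the experiment are not stated, so these are illustrative choices:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Input -> tanh hidden layer -> linear output: a small nonlinear estimator."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]
```

With trained weights, each Mx1 input frame maps to an Mx1 estimate of the matched filter output.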

Recurrent Neural Network

After training and evaluating this network, curiosity arose to try a different neural network architecture. Since many experiments have shown that time series work much better with recurrent neural networks because of the presence of feedback, the experiments were conducted with the network architecture shown below.

Recurrent Neural Network

A simple recurrent architecture from MATLAB's Neural Network Toolbox was used to build the network. Here, 1:20 corresponds to the feedback delays: for every sample at time t, the samples from t−1 back to t−20 are fed back, and the neuron weights are adjusted accordingly.
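The 1:20 feedback can be pictured as a tapped delay line: at each time step the network also sees the d preceding samples. A sketch of building that delayed view, zero-padded before the start of the signal (the helper name is mine):

```python
def tapped_delay_inputs(x, d=20):
    """For each sample at time t, gather the d preceding samples (oldest first),
    zero-padded before the start of the signal - the 1:20 delay window."""
    padded = [0.0] * d + list(x)
    return [padded[t:t + d] for t in range(len(x))]

# With d=2, each row holds the two samples before the current one.
rows = tapped_delay_inputs([1.0, 2.0, 3.0], d=2)  # -> [[0.0, 0.0], [0.0, 1.0], [1.0, 2.0]]
```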

Experiments and Results

Multi-Layer Perceptron

The experiments were conducted with different batch sizes. The table shows batch sizes of 14, 21, 38, and 42. These numbers are arbitrary; other batch sizes can be tried too!

Parameters Used for MLP Architecture

Now let’s see the result obtained!

Expected Matched Filter Output

Neural Network outputs,

Neural Network Outputs

Yay! The neural network output looks really good. This is just the subjective analysis (an objective analysis was done with PSNR, and it was pretty good too).

But let me try a signal that was not involved in training at all, i.e., a mammal whose click structure was completely different (as if a model trained to classify cats and dogs were given a zebra). When I ran this kind of test on the RNN, the results were as below.

It introduced some noise at low amplitudes. This is due to the fact that the signal structure was not present during training at all. Nevertheless, the subjective result still looks pretty good.

Conclusion

  • The matched filter can be automated using neural nets.
  • The SNR trade-off with respect to input dimensions was analyzed (yes, there is one!).
  • For better results, the dataset can of course be broadened and the network upgraded to an LSTM or a bi-directional network.

References

Denoiser: U. Zölzer, DAFX: digital audio effects. John Wiley & Sons, 2011.

Code: https://github.com/Praveenk8051/ML_AI-Projects-Matlab
