Thursday 25 July 2024

Why Mel-frequency Cepstrum Analysis Is Not Always The Ideal Solution For Vibration Analysis

The Mel-frequency Cepstrum (MFC) and its associated outputs, the Mel-frequency Cepstral Coefficients (MFCCs), are commonly used for speech applications such as speaker and speech recognition, using neural networks. Unfortunately, the nature of the MFC means that it is not always ideally suited to applications such as vibration analysis and predictive maintenance.

The MFC uses logarithmically spaced filter banks to replicate how the human ear hears sound. This approach can lead to very large savings in the number of MIPS required for the recognition part of speaker and speech recognition. Unfortunately, this logarithmic frequency spacing hides frequencies that are closely spaced, meaning that the approach is sub-optimal for applications such as machine vibration analysis, where small variations in vibrational frequency can indicate problems with the machine, particularly the bearings.

The following diagram shows a simple Mel-spaced filterbank, with 12 separate filters:


As can be seen from the diagram, resolving closely spaced frequencies is a particular problem for higher frequency harmonics, where the filters have a wider bandwidth.
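To make the bandwidth growth concrete, here is a minimal Python sketch that computes the centre frequencies and bandwidths of a 12-filter Mel-spaced filterbank using the standard Mel-scale conversion. The 8 kHz sample rate is an assumption chosen for illustration, not a value from the diagram:

import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

fs = 8000.0          # Assumed sample rate (Hz)
numFilters = 12      # As in the diagram above

# 12 triangular filters need 14 equally spaced points on the Mel scale
melPoints = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), numFilters + 2)
hzPoints = mel_to_hz(melPoints)

for k in range(numFilters):
    lower, centre, upper = hzPoints[k], hzPoints[k + 1], hzPoints[k + 2]
    print(f"Filter {k + 1:2d}: centre {centre:7.1f} Hz, bandwidth {upper - lower:7.1f} Hz")

With only 12 filters across a 4 kHz band, the lowest filter is roughly 200 Hz wide while the highest is over a kilohertz wide, so two vibration peaks that are only tens of hertz apart at the top of the band fall into the same filter and cannot be separated.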

The problem can also be seen in the following two images, which are sampled from identical machines running with two different error modes. It can be seen that it is the higher frequency peaks (1 kHz to 2 kHz) that vary the most, and this is exactly the region, for this Mel-spaced filterbank, where the filter bandwidths start to get excessively wide.

Vibration Mode #1

Vibration Mode #2

The solution to this problem is to use a regular Fast Fourier Transform (FFT) for the front-end processing of these types of applications. Spectral analysis of the anticipated vibration modes will show the frequency resolution required, and that resolution in turn defines the FFT size required for the application.
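As a minimal sketch of that sizing step (the sample rate and required resolution below are illustrative assumptions, not values from this post):

import numpy as np

fs = 10000.0              # Assumed sample rate (Hz)
requiredResolution = 5.0  # Assumed spacing of the closest anticipated vibration peaks (Hz)

# FFT bin spacing is fs / N, so we need N >= fs / resolution, rounded up to a power of two
minFftLength = fs / requiredResolution
fftLength = int(2 ** np.ceil(np.log2(minFftLength)))

print(f"Minimum FFT length: {minFftLength:.0f} samples")
print(f"Chosen FFT length:  {fftLength} samples (bin spacing {fs / fftLength:.2f} Hz)")

In practice the FFT length also has to be traded off against how quickly the vibration signature changes, because a longer transform averages the spectrum over a longer time window.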

The SigLib Digital Signal Processing and Machine Learning library includes examples for machine vibration monitoring. These can be found here.

Copyright © 2024 Delta Numerix


Thursday 27 June 2024

Using Generative AI And Large Language Models (LLMs) To Write DSP Code

Back in March 2023 I wrote the following blog post about using Generative AI and Large Language Models (LLMs) to write code: Are Chat-GPT and Google Bard The New Frontier For Writing DSP Code?

Since then, I have used these tools in many projects and have made a number of observations. In general, the more complex the task you set for the LLM, the more the performance of the different tools diverges, and the more likely it is that, as a programmer, you will have to test the code extensively to find the bugs.

Using these tools is a bit like an artist generating a preliminary sketch, rather than the final polished painting, with all of the correct detail.

I have found three main uses for Generative AI in coding:

  • Writing code to meet a specification
  • Documenting / commenting existing code
  • Converting code from one language / system to another

I have tried all of the following: Gemini, Google Code Assist, Chat-GPT, Bing and Co-Pilot. I have had the best coding results with Gemini (formerly Bard); however, if I find that it is struggling then I will try the others, because they all have strengths and weaknesses.

It is important that you know what you want to do because there is no guarantee you will receive a correct answer! 

A useful trick is to try the same request multiple times because, unlike a traditional search engine, an LLM will give you a different response each time. Handily, Gemini automatically generates three draft solutions and you can click on the tabs provided to review each.

I have observed that LLMs are much better at writing Python than lower-level languages (C/C++ etc.). In Python, they will almost certainly produce a working solution using the Numpy/Scipy library functions, which may just need some final tuning.

If you are writing code for a lower-level language then the best option is often to take a two-stage approach (a hypothetical sketch follows the list below):

  • Generate Python/Numpy/Scipy code
  • Convert the Python code to C - LLMs are very good at converting Numpy/Scipy functions to C
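To illustrate why this works well, the concise Numpy code an LLM produces in stage one usually maps onto an explicit loop form that translates almost line-for-line into C. Both functions below are hypothetical illustrations written for this example, not LLM output:

import numpy as np

# Stage one: concise Numpy code of the kind an LLM typically produces
def fir_filter_numpy(x, h):
    return np.convolve(x, h)[:len(x)]

# Stage two: an explicit loop form that converts almost line-for-line into C
def fir_filter_loop(x, h):
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(h)):
            if n - k >= 0:
                y[n] += h[k] * x[n - k]
    return y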

Generative AI is very good at converting between languages and Gemini will add comments to code that does not contain original comments. This is particularly useful if you work with a colleague who is not very diligent with their code commenting ;-). It is worth noting, however, that the generated comments are sometimes wrong because the AI has misunderstood the intention of the code.

Sometimes the conversion process will skip complex code sections in a program entirely, so if this happens then the next step is to copy those sections and convert them separately.

Converting code from Python to C/C++ is generally very easy because they both use 0-based array indexing. Matlab, however, is more complex because it uses 1-based array indexing and this confuses the LLM. When converting Matlab code to Python or C/C++, I generally use the following request, which I then follow with the code section:

convert the following matlab code, with 1 based array indexing, to Python and Numpy, with 0 based array indexing

One final example of a gotcha is that Matlab's FIR filter design functions are specified by filter order, whereas Scipy's are specified by the number of coefficients (taps).
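For example, Matlab's fir1(64, ...) returns 65 coefficients, whereas scipy.signal.firwin() is given the coefficient count directly. A small sketch (the sample rate and cut-off frequency are arbitrary illustrative values):

from scipy import signal

fs = 48000.0        # Assumed sample rate (Hz)
cutoff = 3000.0     # Assumed cut-off frequency (Hz)
filterOrder = 64    # The order you would pass to Matlab's fir1(64, ...)

# firwin() takes the number of taps, which is the filter order plus one
numTaps = filterOrder + 1
coefficients = signal.firwin(numTaps, cutoff, fs=fs)
print(len(coefficients))    # 65 coefficients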

As well as documenting code, LLMs are very good at debugging code; however, it is often important to explicitly specify the language in the request, rather than leaving it to the LLM to decide what language the code is written in.

Finally, Test! Test! Test!

Copyright © 2024 Delta Numerix


Wednesday 8 May 2024

Digital Filter Plus Filter Design Tool Now Open Sourced

Back in the 1980s one of my first tasks, as a junior engineer, was to write a very simple filter design program, in GWBasic!

In the 1990s I updated it to Borland C and added some new functionality.

In the 2000s I added a GUI front end and more functionality.

For the last 20 years this has served me and my customers well and has been my go-to filter design tool, but it has languished in recent times, so I've finally found the time to open-source it and it's now part of the SigLib DSP library.

https://github.com/Numerix-DSP/siglib

Over the last few months, at the request of customers, I've also added commercial grade audio Automatic Gain Control and multi-dimensional Kalman Filters to the library.

Enjoy :-)

SigLib and all of its components, including Digital Filter Plus, are licensed for free for educational and personal use only. All other uses require a developer's license.

The 33rd Annual Running Of The University Of Oxford Digital Signal Processing Course Will Now Include A Live Option, In 2024

The 33rd annual running of the University Of Oxford Digital Signal Processing course will be running live, in Oxford, UK, from Tuesday 4th to Friday 7th June 2024.

The courses are presented by experts from industry for engineers in industry and, over the last 30 years, have trained many hundreds of engineers from all areas of science and engineering.

Here is a summary of the two courses.

Digital Signal Processing (Theory and Application) - Tuesday 4th to Thursday 6th June 2024.

https://www.conted.ox.ac.uk/courses/digital-signal-processing-theory-and-application

This course provides a good understanding of DSP principles and their implementation and equips the delegate to put the ideas into practice and/or to tackle more advanced aspects of DSP. 'Hands-on' laboratory sessions are interspersed with the lectures to illustrate the taught material and allow you to pursue your own areas of interest in DSP. The hands-on sessions use specially written software running on PCs.

Subjects include:

  • Theoretical Foundations
  • Digital Filtering
  • Fourier Transforms And Frequency Domain Processing
  • DSP Hardware And Programming
  • ASIC Implementation
  • Typical DSP Applications

Digital Signal Processing Implementation (algorithms to optimization) - Friday 7th June 2024.

A one-day supplement to the Digital Signal Processing course that takes the theory and translates it into practice.

https://www.conted.ox.ac.uk/courses/digital-signal-processing-implementation-algorithms-to-optimisation

The course will include a mixed lecture and demonstration format and has been written to be independent of target processor architecture.

The course will show how to take common DSP algorithms and map them onto common processor architectures. It will also give guidelines on how to choose a DSP device, in particular how to choose and use the correct data word length for any application.

Attendee Feedback From Previous Courses:

It was informative, enjoyable and stimulating

Excellent content, very lively thanks to the 2 excellent presenters - Anonymous

A very good introduction to DSP theory

Excellent lecturers! Really useful information and very understandable

Great mix of theory and practice

The lecturers gave a detailed and excellent explanation of the fundamental topics of DSP with real world engineering practice.

This session closes the gap and clears up much confusion between classroom DSP theories and actual DSP implementation.

Very good session, with in-depth discussion on the math and background.


These courses will be held at the University of Oxford, UK

Friday 8 March 2024

Plotting a Spectrogram In Python, Using Numpy and Matplotlib

When performing frequency domain (FFT) based processing it is often useful to display a spectrogram of the frequency domain results. While there is a very good SciPy spectrogram function, this takes time domain data and does all of the clever stuff. However, if you are processing data in the frequency domain, you often just want to build the spectrogram dataset yourself and keep appending FFT results to it.

A spectrogram displays three dimensions of data, with the following configuration:

  • Time is on the X axis
  • Frequency is on the Y axis
  • Frequency magnitude is shown in colour

This program will use the Matplotlib function imshow() to display the spectrogram.

There are three key tricks to using imshow() for this purpose:

  • Rotate the dataset so that time is on the x-axis and frequency is on the y-axis
  • Scale the x and y axes labels to correctly show time and frequency
  • Remove the second half of the FFT results - once the magnitude of the FFT result has been calculated, the two halves of the result are mirror images so we can discard the upper half.

Here's the code:

import matplotlib.pyplot as plt
import numpy as np

# Global Configuration
plotXAxisSecondsFlag = True                             # Set to True to Plot time in seconds, False to plot time in samples
plotYAxisHzFlag = True                                  # Set to True to Plot frequency in Hz, False to plot frequency in bins

Fs = 10000                                              # Sampling Frequency (Hz)
timePeriod = 10                                         # Time period in seconds
sampleLength = Fs*timePeriod

sinusoidFrequency = 1000                                # Frequency of sine wave (Hz)

fftLength = 256                                         # Length of the FFT
halfFftLength = fftLength >> 1

window = np.hanning(fftLength)

time = np.arange(sampleLength) / float(Fs)              # Generate sinusoid + harmonic with half the magnitude
x = np.sin(2*np.pi*sinusoidFrequency*time) * ((time[-1] - time)/1000) # Decreases in amplitude over time
x += 0.5 * np.sin(2*2*np.pi*sinusoidFrequency*time) * (time/1000)     # Increases in amplitude over time

# Add FFT frames to the spectrogram list - Note, we use a Python list here because it is very easy to append to
spectrogramDataset = []

i = 0
while i < (len(x) - fftLength):                         # Step through whole dataset
  x_discrete = x[i:i + fftLength]                       # Extract time domain frame
  x_discrete = x_discrete * window                      # Apply window function
  x_frequency = np.abs(np.fft.fft(x_discrete))          # Perform FFT
  x_frequency = x_frequency[:halfFftLength]             # Remove the redundant second half of the FFT result
  spectrogramDataset.append(x_frequency)                # Append frequency response to spectrogram dataset
  i = i + fftLength

# Plot the spectrogram
spectrogramDataset = np.asarray(spectrogramDataset)     # Convert to Numpy array then rotate the dataset so that frequency is on the y-axis
spectrogramDataset = np.rot90(spectrogramDataset)
z_min = np.min(spectrogramDataset)
z_max = np.max(spectrogramDataset)
plt.figure()
plt.imshow(spectrogramDataset, cmap='gnuplot2', vmin = z_min, vmax = z_max, interpolation='nearest', aspect='auto')
plt.title('Spectrogram')
freqbins, timebins = np.shape(spectrogramDataset)
xlocs = np.float32(np.linspace(0, timebins-1, 8))
if plotXAxisSecondsFlag == True:
  plt.xticks(xlocs, ["%.02f" % (i*spectrogramDataset.shape[1]*fftLength/(timebins*Fs)) for i in xlocs]) # X axis is time (seconds)
  plt.xlabel('Time (s)')
else:
  plt.xticks(xlocs, ["%.02f" % (i*spectrogramDataset.shape[1]*fftLength/timebins) for i in xlocs])      # X axis is samples
  plt.xlabel('Time (Samples)')

ylocs = np.int16(np.round(np.linspace(0, freqbins-1, 11)))
if (plotYAxisHzFlag == True):
  plt.yticks(ylocs, ["%.02f" % (((halfFftLength-i-1)*Fs)/fftLength) for i in ylocs])  # Y axis is Hz
  plt.ylabel('Frequency (Hz)')
else:
  plt.yticks(ylocs, ["%d" % int(halfFftLength-i-1) for i in ylocs])                   # Y axis is Bins
  plt.ylabel('Frequency (Bins)')
plt.show()

 

Friday 12 January 2024

The 33rd Annual Running Of The University Of Oxford Digital Signal Processing Course Will Be Held Online Again, In 2024

The course first moved online in 2020 and has received excellent reviews from the attendees.

The course will run from Monday 8th April to Friday 17th May 2024, with live online classes one afternoon per week.

Based on the classroom course, Digital Signal Processing (Theory and Application), this online course consists of weekly live online tutorials and also includes a software lab that can be run remotely. We'll include all the same material, many of the existing labs and all the interaction of the regular course.

Online tutorials are delivered via Microsoft Teams once each week and practical exercises are set to allow you to practice the theory during the week. 

You will also have access to the course VLE (virtual learning environment) to communicate with other students and to view and download course materials; tutor support is available throughout.

Code examples will be provided although no specific coding experience is required. 

The live tutorials will be on Wednesday each week from 13:00 - 14:30 and 15:00 - 16:30 (GMT) with a 30-minute break in between.

You should allow for 10 - 15 hours study time per week in addition to the weekly lessons and tutorials.

After completing the course, you should be able to understand the workings of the algorithms we explore in the course and how they can solve specific signal processing problems.

Full details are available here: https://www.conted.ox.ac.uk/courses/digital-signal-processing-online.


Friday 29 December 2023

SigLib Now Includes Kalman Filtering Functions

Over the holiday period I decided to refresh my knowledge of Kalman filters by watching the excellent video series here: Kalman Filter YouTube Lessons.

Here is a diagram to show the architecture of the Kalman Filter:


As a result, I have now added 1D and 2D Kalman filters to the SigLib DSP Library, which can be found here: https://github.com/Numerix-DSP/siglib.
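For reference, here is a minimal sketch of the standard scalar (1D) predict/update recursion. The noise parameters and measurement data are made-up illustrative values, and this is not the SigLib API:

import numpy as np

processVariance = 1e-4      # Q: how much the true state is expected to drift per step (assumed)
measurementVariance = 0.25  # R: variance of the sensor noise (assumed)

measurements = 5.0 + np.sqrt(measurementVariance) * np.random.randn(50)  # Simulated noisy readings

xEst = 0.0                  # Initial state estimate
pEst = 1.0                  # Initial estimate variance

for z in measurements:
    # Predict: constant-value state model, so only the uncertainty grows
    xPred = xEst
    pPred = pEst + processVariance

    # Update: blend the prediction with the new measurement using the Kalman gain
    kGain = pPred / (pPred + measurementVariance)
    xEst = xPred + kGain * (z - xPred)
    pEst = (1.0 - kGain) * pPred

print(f"Final estimate: {xEst:.3f}")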

In the comments here https://www.youtube.com/watch?v=Fuy73n6_bBc&list=PLX2gX-ftPVXU3oUFNATxGXY90AULiqnWT&index=27, Gergely Takács mentioned his excellent Matlab and Python examples that can be found here: https://github.com/gergelytakacs/planeKF. Inspired by Gergely I have added the same example to the SigLib DSP Library examples.