Tuesday, 26 August 2025

Numerix-DSP Digital Signal Processing And Machine Learning Videos

Here is a selection of DSP and ML related videos presented by John Edwards

DSP Online Conference

2022 - Building A Tensorflow Lite Neural Network Vibration Classifier, With A Little Help From DSP

2021 - An Introduction To High Efficiency And Multi-rate Digital Filters

2020 - Frequency Domain Signal Processing


TinyML Foundation / Edge AI Foundation

2020 - “Low MIPS & Memory Machine Learning Industrial Vibration Monitoring Solution - AKA Not All AI Applications Are Cat v Dogs on Facebook ;-)”


SigLib

SigLib DSP Library Introduction

SigLib Vibration Monitoring Machine Learning Demonstration


Data Science Festival 2020 - Lunch & Learn - "The Frequency Domain And How It Can Be Used To Aid Artificial Intelligence"


The 34th Annual Running Of The University Of Oxford Digital Signal Processing Course Will Be Held Online Again, In 2025

The course first moved online in 2020 and has received excellent reviews from the attendees.

The course will run from Wednesday 22 Oct 2025 to Wednesday 26 Nov 2025, with live online classes one afternoon per week.

Based on the classroom course, Digital Signal Processing (Theory and Application), this online course consists of weekly live online tutorials and also includes a software lab that can be run remotely. We'll include all the same material, many of the existing labs and all the interaction of the regular course.

Online tutorials are delivered via Microsoft Teams once each week and practical exercises are set to allow you to practice the theory during the week. 

You will also have access to the course VLE (virtual learning environment), where you can communicate with other students and view and download course materials; tutor support is available throughout.

Code examples will be provided although no specific coding experience is required. 

The live tutorials will be on Wednesday each week from 13:00 - 14:30 and 15:00 - 16:30 (GMT) with a 30-minute break in between.

You should allow for 10 - 15 hours study time per week in addition to the weekly lessons and tutorials.

After completing the course, you should be able to understand the workings of the algorithms we explore in the course and how they can solve specific signal processing problems.

Full details are available here: https://www.conted.ox.ac.uk/courses/digital-signal-processing-online.

Copyright © 2025 Delta Numerix

Wednesday, 22 January 2025

The 34th Annual Running Of The University Of Oxford Live Digital Signal Processing Course, In May 2025

The 34th annual running of the University of Oxford Live Digital Signal Processing course will take place in Oxford, UK, from Tuesday 20th to Friday 23rd May 2025.

The courses are presented by experts from industry, for Engineers in industry, and over the last 30 years have trained many hundreds of Engineers from all areas of Science and Engineering.

Here is a summary of the two courses.

Digital Signal Processing (Theory and Application) - Tuesday 20th to Thursday 22nd May 2025.

https://www.conted.ox.ac.uk/courses/digital-signal-processing-theory-and-application

This course provides a good understanding of DSP principles and their implementation and equips the delegate to put the ideas into practice and/or to tackle more advanced aspects of DSP. 'Hands-on' laboratory sessions are interspersed with the lectures to illustrate the taught material and allow you to pursue your own areas of interest in DSP. The hands-on sessions use specially written software running on PCs.

Subjects include:

  • Theoretical Foundations
  • Digital Filtering
  • Fourier Transforms And Frequency Domain Processing
  • DSP Hardware And Programming
  • ASIC Implementation
  • Typical DSP Applications

Digital Signal Processing Implementation (algorithms to optimisation) - Friday 23rd May 2025.

A one-day supplement to the Digital Signal Processing course that takes the theory and translates it into practice.

https://www.conted.ox.ac.uk/courses/digital-signal-processing-implementation-algorithms-to-optimisation

The course will include a mixed lecture and demonstration format and has been written to be independent of target processor architecture.

The course will show how to take common DSP algorithms and map them onto common processor architectures. It will also give guidelines for how to choose a DSP device, in particular how to choose and use the correct data word length for any application.

Attendee Feedback From Previous Courses:

John is like a textbook in human form ;-)  

It was informative, enjoyable and stimulating 

Excellent content, very lively thanks to the 2 excellent presenters - Anonymous

A very good introduction to DSP theory

Excellent lecturers! Really useful information and very understandable

Great mix of theory and practice

The lecturers gave a detailed and excellent explanation of the fundamental topics of DSP with real world engineering practice.

This session closes the gap and clears up much confusion between classroom DSP theories and actual DSP implementation.

Very good session, with in-depth discussion on the math and background.


These courses will be held at the University of Oxford, UK

Copyright © 2025 Delta Numerix


Wednesday, 15 January 2025

Understanding First Order Filters

While sorting through some very old papers I came across a solution to an interesting problem that I struggled with when I was learning DSP. I have no idea where the original problem came from so I've replicated it here, as best I can remember, along with the solution:

The following first order direct form II filter:

                 w(n)
x(n) -->+-------------------+-->y(n)
        ^         |         ^
        |       +----+      |
        |       |z^-1|      |
        |       +----+      |
        |         |         |
        |         v         |
        ----*-----------*----
           a1  w(n-1)  b1

is defined by the following equations:

y(n) = w(n) + b1.w(n-1)     (1)

w(n) = x(n) + a1.w(n-1)     (2)

Question: Derive the difference equation in terms of y and x.

Hint: Rearranging to a direct form I filter structure will help.

Solution

Diagrammatically

The original system is a Linear Time Invariant (LTI) system so the feedforward and feedback sections can be swapped without changing the system response:

x(n) -------------+-------------->y(n)
         |        ^        |
       +----+     |      +----+
       |z^-1|     |      |z^-1|
       +----+     |      +----+
         |        |        |
         v        |        v
         ----*----+----*----
            b1        a1

Hence:

y(n) = x(n) + b1.x(n-1) + a1.y(n-1)


Mathematically

From (2):

w(n-1) = x(n-1) + a1.w(n-2)     (3)

Substituting (2) and (3) into (1), to compute the output:

y(n) = x(n) + a1.w(n-1) + b1.[x(n-1) + a1.w(n-2)]     (4)

Rearranging to combine w terms:

y(n) = x(n) + b1.x(n-1) + a1.[w(n-1) + b1.w(n-2)]     (5)

From (1):     y(n-1) = w(n-1) + b1.w(n-2)     (6)

Substituting (6) into (5) gives:

y(n) = x(n) + b1.x(n-1) + a1.y(n-1)
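
As a quick cross-check (not part of the original exercise), the following Python sketch runs the Direct Form II recursion and the derived difference equation side by side, using arbitrary example coefficients, and confirms that the outputs match:

import numpy as np

# Quick cross-check: run the Direct Form II structure and the derived
# difference equation on the same input and confirm the outputs match.
# The coefficient values and input are arbitrary, for illustration only.
a1, b1 = 0.5, 0.25
x = np.random.randn(100)

# Direct Form II:  w(n) = x(n) + a1.w(n-1);  y(n) = w(n) + b1.w(n-1)
y_df2 = np.zeros_like(x)
w_prev = 0.0
for n in range(len(x)):
    w = x[n] + a1 * w_prev
    y_df2[n] = w + b1 * w_prev
    w_prev = w

# Derived difference equation:  y(n) = x(n) + b1.x(n-1) + a1.y(n-1)
y_diff = np.zeros_like(x)
for n in range(len(x)):
    x_prev = x[n - 1] if n > 0 else 0.0
    y_prev = y_diff[n - 1] if n > 0 else 0.0
    y_diff[n] = x[n] + b1 * x_prev + a1 * y_prev

print(np.allclose(y_df2, y_diff))    # True

The same output can also be generated with scipy.signal.lfilter([1, b1], [1, -a1], x), since both structures implement the transfer function H(z) = (1 + b1.z^-1) / (1 - a1.z^-1).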


Copyright © 2025 Delta Numerix

Tuesday, 19 November 2024

Using Generative AI And Large Language Models (LLMs) To Write DSP Code - Autumn 2024 Update

Having previously written a couple of blog posts regarding the use of LLMs to write DSP code, I've spent the last few months working on a project that has shown the landscape has changed dramatically.

The previous blog posts are here:

Using Generative AI And Large Language Models (LLMs) To Write DSP Code


At the start of the project, Claude 3.5 Sonnet was a step up from Chat-GPT 3, and Gemini really didn't cut the mustard at all. Then Chat-GPT 4o was released, and this is a game changer: it has much more knowledge about the nuances of signal processing libraries such as scipy.signal, and while it still sometimes struggles to write C code, I find it is by far the best option.

While I am lucky enough to have paid access to Chat-GPT 4o, not everyone does; however, there is an option through GitHub Marketplace that should work for most people. The nice thing about this is that you can easily try different LLMs, but I find that sticking to GPT 4o is the best option for me.

Copyright © 2024 Delta Numerix


Thursday, 25 July 2024

Why Mel-frequency Cepstrum Analysis Is Not Always The Ideal Solution For Vibration Analysis

The Mel-frequency Cepstrum (MFC) and its associated outputs, the Mel-frequency Cepstral Coefficients (MFCCs), are commonly used for speech applications such as speaker and speech recognition, using neural networks. Unfortunately, the nature of the MFC means that it is not always ideally suited to applications such as vibration analysis and predictive maintenance.

The MFC uses logarithmically spaced filter banks to replicate how the human ear hears sound. This approach can lead to very large savings in the number of MIPS required for the recognition part of speaker and speech recognition. Unfortunately, this logarithmic frequency spacing hides frequencies that are closely spaced, meaning that the approach is sub-optimal for applications such as machine vibration analysis, where small variations in vibrational frequency can indicate problems with the machine, particularly the bearings.

The following diagram shows a simple Mel-spaced filterbank, with 12 separate filters:


As can be seen from the diagram, resolving closely spaced frequencies is a particular problem for higher frequency harmonics, where the filters have a wider bandwidth.
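
To make this concrete, here is a small Python sketch (illustrative only: the 12 filters match the diagram above, but the 8 kHz sample rate is an assumed value) that computes the edge frequencies of a Mel-spaced filterbank and prints each filter's bandwidth:

import numpy as np

# Illustrative sketch: compute the edge frequencies of a 12-filter Mel-spaced
# filterbank and print each filter's bandwidth, to show how the bandwidths
# widen at higher frequencies.  The 8 kHz sample rate is an assumed value.

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

sample_rate = 8000.0
num_filters = 12

mel_edges = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), num_filters + 2)
hz_edges = mel_to_hz(mel_edges)

for i in range(num_filters):
    lo, hi = hz_edges[i], hz_edges[i + 2]    # each triangular filter spans two edge points
    print(f"Filter {i + 1:2d}: {lo:7.1f} - {hi:7.1f} Hz   bandwidth = {hi - lo:6.1f} Hz")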

The problem can also be seen in the following two images, which show data sampled from identical machines running in two different error modes. It can be seen that it is the higher frequency peaks (1 kHz to 2 kHz) that vary the most, and this is exactly the region where, for this Mel-spaced filterbank, the filter bandwidths start to get excessively wide.

Vibration Mode #1

Vibration Mode #2

The solution to this problem is to use a regular Fast Fourier Transform (FFT) for the front-end processing in these types of applications. Spectral analysis of the anticipated vibration modes shows the frequency resolution required, and this then defines the FFT size required for the application.
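
As a back-of-the-envelope example (the sample rate and required resolution below are assumed values, purely for illustration), the FFT size follows directly from the bin spacing, which is the sample rate divided by the FFT length:

import numpy as np

# Illustrative calculation only: the sample rate and required resolution
# below are assumed values, not taken from the measurements above.
sample_rate = 16000.0      # Hz
delta_f_required = 5.0     # minimum separation between vibration modes, Hz

n_min = sample_rate / delta_f_required          # minimum FFT length for that bin spacing
fft_size = int(2 ** np.ceil(np.log2(n_min)))    # round up to a power of two

print(f"FFT size = {fft_size}, bin spacing = {sample_rate / fft_size:.3f} Hz")
# FFT size = 4096, bin spacing = 3.906 Hz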

The SigLib Digital Signal Processing and Machine Learning library includes examples for machine vibration monitoring. These can be found here.

Copyright © 2024 Delta Numerix


Thursday, 27 June 2024

Using Generative AI And Large Language Models (LLMs) To Write DSP Code

Back in March 2023 I wrote the following blog post about using Generative AI and Large Language Models (LLMs) to write code: Are Chat-GPT and Google Bard The New Frontier For Writing DSP Code?

Since then, I have used these tools in many projects and have made a number of observations. In general, the more complex the task you set the LLM, the more likely the performance of each tool is to diverge, and the more likely it is that, as a programmer, you will have to test the code extensively to find the bugs.

Using these tools is a bit like an artist generating a preliminary sketch, rather than the final polished painting, with all of the correct detail.

I have found three main uses for Generative AI in coding:

  • Writing code to meet a specification
  • Documenting / commenting existing code
  • Converting code from one language / system to another

I have tried all of the following: Gemini, Google Code Assist, Chat-GPT, Bing and Co-Pilot. I have had the best coding results with Gemini (Bard); however, if I find that it is struggling then I will try them all, because they all have strengths and weaknesses.

It is important that you know what you want to do because there is no guarantee you will receive a correct answer! 

A useful trick is to try the same request multiple times because, unlike a traditional search engine, an LLM will give you a different response each time. Handily, Gemini automatically generates 3 draft solutions, and you can click on the tabs provided to review each.

I have observed that LLMs are much better at writing Python than lower level languages (C/C++ etc.). In Python, they will almost certainly produce a working solution using the Numpy/Scipy library functions, which may just need some final tuning.

If you are writing code for a lower level language then the best option is often to take a two stage approach:

  • Generate Python/Numpy/Scipy code
  • Convert the Python code to C - LLMs are very good at converting Numpy/Scipy functions to C

Generative AI is very good at converting between languages, and Gemini will add comments to code that does not contain original comments. This is particularly useful if you work with a colleague who is not very diligent with their code commenting ;-). It is worth noting, however, that the generated comments are sometimes wrong because the AI has misunderstood the intention of the code.

Sometimes the conversion process will skip complex sections of a program entirely; if this happens, the next step is to copy those sections and convert them separately.

Converting code from Python to C/C++ is generally very easy because they both use 0-based array indexing. Matlab, however, is more complex because it uses 1-based array indexing, and this confuses the LLM. When converting Matlab code to Python or C/C++, I generally use the following request, which I then follow with the code section:

convert the following matlab code, with 1 based array indexing, to Python and Numpy, with 0 based array indexing
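
As a simple illustration of the index shift involved (the snippet is illustrative only, not taken from a real conversion), here is a MATLAB-style first-difference loop and its 0-based Python/Numpy equivalent:

import numpy as np

# Illustrative only: a MATLAB first-difference loop and its 0-based
# Python/Numpy equivalent, showing the index shift the prompt above asks for.
#
# MATLAB (1-based):
#   for n = 2:N
#       y(n) = x(n) - x(n-1);
#   end

x = np.random.randn(16)
N = len(x)
y = np.zeros(N)
for n in range(1, N):          # MATLAB's 2:N becomes range(1, N)
    y[n] = x[n] - x[n - 1]     # x(n) -> x[n], x(n-1) -> x[n-1]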

One final example of a gotcha is that Matlab's FIR filter design functions take the filter order, whereas Scipy's take the number of coefficients.
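
For example (an illustrative sketch, assuming a hypothetical MATLAB design of order 64 such as fir1(64, 0.25)), the equivalent scipy.signal.firwin call needs order + 1 taps:

from scipy.signal import firwin

# Illustrative only: matching a hypothetical MATLAB design of order 64,
# e.g. b = fir1(64, 0.25), which returns 65 coefficients.
# scipy.signal.firwin takes the number of coefficients (taps) directly,
# so the MATLAB order must be incremented by one.
matlab_order = 64
h = firwin(matlab_order + 1, 0.25)   # 65 taps; cutoff normalised to Nyquist in both cases
print(len(h))                        # 65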

As well as documenting code, LLMs are very good at debugging code; however, it is often important to explicitly specify the language in the request, rather than leaving it to the LLM to decide what language the code is written in.

Finally, Test! Test! Test!

Copyright © 2024 Delta Numerix