Sunday, 9 October 2016

FIR Filter Design Notes

When designing an FIR filter it is handy to know how many coefficients are required for your desired implementation.

The Kaiser approximation is an algorithm that estimates how many coefficients your filter will need, but not what those coefficients are. It is accurate when the filter is designed with an approximation algorithm such as Parks-McClellan (Remez exchange).
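As a minimal sketch, Kaiser's estimate can be computed directly from the passband/stopband ripple specifications and the transition width normalised to the sample rate (the function name and the example numbers below are illustrative, not SigLib's API):

```python
import math

def kaiser_estimate(delta_p, delta_s, f_pass, f_stop, f_s):
    """Kaiser's estimate of the FIR length needed by a Parks-McClellan
    design: N ~= (-10*log10(delta_p*delta_s) - 13) / (14.6*df) + 1,
    where df is the transition width normalised to the sample rate."""
    df = abs(f_stop - f_pass) / f_s
    n = (-10.0 * math.log10(delta_p * delta_s) - 13.0) / (14.6 * df) + 1.0
    return math.ceil(n)

# Example: ~0.1 dB passband ripple (delta_p ~ 0.0115), 60 dB stopband
# attenuation (delta_s = 0.001), 1 kHz transition band at 48 kHz sampling
print(kaiser_estimate(0.0115, 0.001, 9000.0, 10000.0, 48000.0))  # -> 121
```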

For filters designed with windowing functions there is no direct equivalent to Kaiser's approximation; instead you have to look at the characteristics of your signal requirements and compare them to the capabilities of the windowing functions. Note : filters designed using windowing functions will typically be longer than those designed using Parks-McClellan.
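For reference, the window method itself is only a few lines of code; here is a minimal windowed-sinc lowpass sketch using numpy and a Hamming window (the cutoff, length and sample rate are illustrative values, not a recommendation):

```python
import numpy as np

fs = 48000.0       # sample rate (Hz)
fc = 9500.0        # cutoff frequency (Hz)
numtaps = 121      # filter length, e.g. taken from a length estimate

# Ideal lowpass impulse response, centred for linear phase
n = np.arange(numtaps) - (numtaps - 1) / 2.0
h = (2.0 * fc / fs) * np.sinc(2.0 * fc / fs * n)

h *= np.hamming(numtaps)   # apply the window to control the sidelobes
h /= np.sum(h)             # normalise for unity gain at DC
```

The choice of window (Hamming, Blackman, Kaiser, etc.) trades stopband attenuation against transition width, which is exactly the comparison against your signal requirements described above.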

SigLib includes a range of window-based filter design functions. At present it does not include a version of the Parks-McClellan algorithm because there are so many already available. I recently had a customer use this :

I haven't used this myself but I have heard good reports.

Monday, 4 July 2016

Updated Numerix Host Library

The Numerix Host Library is a library of C/C++ functions that can be used to write DSP data to files for debugging, logging and exchange between different applications such as SigLib, Matlab, Octave, Open Office, Libre Office, Excel, Python and others.

It currently supports the .bin, .csv, .dat (gnuplot), .sig (SigLib), XMOS .xmt and .wav file formats.

Numerix Host Library can be downloaded from here.

Friday, 3 June 2016

Frequency Domain Convolution And Filtering

Performing convolution and filtering in the frequency domain is a useful way of increasing the performance of these functions; it exploits the fact that convolution in the time domain is equivalent to multiplication in the frequency domain.

The downside is increased end-to-end latency, due to the need to translate between the time and frequency domains (and back again).

Two common methods of performing this task are overlap-add and overlap-save.
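As a minimal sketch of the overlap-add method (the block length and FFT sizing below are illustrative choices, not SigLib's implementation):

```python
import numpy as np

def overlap_add_convolve(x, h, block_len=256):
    """Convolve a long signal x with an FIR h using the overlap-add
    method: each block is convolved via frequency-domain multiplication
    and the overlapping tails are summed back into the output."""
    M = len(h)
    Nfft = 1
    while Nfft < block_len + M - 1:      # FFT size must hold L + M - 1 samples
        Nfft *= 2
    H = np.fft.rfft(h, Nfft)             # transform the filter once, up front
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        Y = np.fft.rfft(block, Nfft) * H           # convolution = multiplication
        yblk = np.fft.irfft(Y, Nfft)[:len(block) + M - 1]
        y[start:start + len(yblk)] += yblk         # overlap-add the tail
    return y
```

The result matches direct time-domain convolution to numerical precision; overlap-save differs only in how the input is segmented and in discarding the circular wrap-around rather than adding tails.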

These algorithms are summarized in the following document :

Tuesday, 2 February 2016

The Next Round Of The University Of Oxford, UK Digital Signal Processing Courses Takes Place In July 2016

As part of the University Of Oxford Summer Engineering Program for Industry, the Digital Signal Processing courses are returning in July.
The courses are presented by experts from industry, for engineers in industry.
Here is a summary of the two courses.

Digital Signal Processing (Theory and Application) - Tuesday 5th to Thursday 7th July 2016
This course provides a good understanding of DSP principles and their implementation and equips the delegate to put the ideas into practice and/or to tackle more advanced aspects of DSP. 'Hands-on' laboratory sessions are interspersed with the lectures to illustrate the taught material and allow you to pursue your own areas of interest in DSP. The hands-on sessions use specially written software running on PCs.

Subjects include :

Theoretical Foundations
Digital Filtering
Fourier Transforms And Frequency Domain Processing
DSP Hardware And Programming
ASIC Implementation
Typical DSP Applications

Digital Signal Processing Implementation (algorithms to optimization) - Friday 8th July 2016

A one-day supplement to the Digital Signal Processing course that takes the theory and translates it into practice.
The course will include a mixed lecture and demonstration format and has been written to be independent of target processor architecture.
The course will show how to take common DSP algorithms and map them onto common processor architectures. It will also give guidelines for choosing a DSP device, in particular how to choose and use the correct data word length for any application.

Attendee Feedback From Previous Courses :

It was informative, enjoyable and stimulating
Excellent content, very lively thanks to the 2 excellent presenters - Anonymous
A very good introduction to DSP theory
Excellent lecturers! Really useful information and very understandable
Great mix of theory and practice
The lecturers gave a detailed and excellent explanation of the fundamental topics of DSP with real world engineering practice.
This session closes the gap and clears up much confusion between classroom DSP theories and actual DSP implementation.
Very good session, with in-depth discussion on the math and background.

These courses will be held at the University of Oxford, UK

Sunday, 3 January 2016

VMWare Virtual Machine Notes

I need somewhere to store my notes so this seems to be as good a place as any.

How To Remove Old Ubuntu Kernel Packages
Stolen from my Brother here :

First, figure out which running kernel you have:
uname -r
Do NOT remove that kernel!

Figure out which kernels you have installed
dpkg -l | grep linux-image | grep ii
Then remove them:
sudo apt-get autoremove <package 1> <package 2>

Shrinking Virtual Machines To Use Less Disk Space
In virtual machine :
sudo apt-get -y autoremove
sudo apt-get clean
cat /dev/zero > zero.fill;sync;sleep 1;sync;rm -f zero.fill

Download vdiskmanager-windows from here :
On the host :
vmware-vdiskmanager -k Ubuntu.vmdk

Sunday, 30 August 2015

Why Use A High Level Language For DSP ?

The field of Digital Signal Processing is constantly pushing the price / performance envelope of technology, and traditionally this has required systems developers to use assembly language for the majority of the time critical signal processing routines. Today's commercial pressures have moved the "goal-posts" dramatically, and typical project development timescales require a larger part of the application to be developed in a high level language. Another benefit of using a high level language for system development is that a system can be rapidly prototyped to prove the algorithms, and then hand optimised in assembly code for the time critical areas.

Primary Reasons For Using High Level Languages

  • High productivity
  • Portability
  • Maintainability
  • Code reuse
  • Optimising system cost / performance
  • Rapid prototyping and algorithm proving
  • Integration with real-time kernels and operating systems
  • Ease of debug
  • Availability of algorithms

The latest generation of compilers allows high level code to be compiled to a quality of assembly code that is very close to that which would be generated by hand. The development process is therefore very much easier than writing the algorithm in assembly code from scratch. An increasingly common development route is to develop the algorithms on a PC or Workstation and then rewrite the application for the target processor. Using the same language for development and deployment often allows the same code to be used for both, with the different I/O requirements handled through the use of conditional compilation of the source.

Modern high performance DSPs are also changing the way we view algorithmic efficiency, and an increasing number of projects are written in a high level language because the savings at development time far outweigh the extra cost of using faster processors at deployment. The architectures of the latest DSPs are also becoming more complex, for example with the integration of parallel execution units, which makes it increasingly difficult for programmers to learn how to fully optimise their algorithms. When this complexity is coupled with the fact that the majority of DSP algorithms are block oriented vector processing algorithms, it becomes possible for high level language compilers to produce code that is very close to fully optimised.

DSP Tech Brief : The Zoom-FFT

The Zoom-FFT is a process where an input signal is mixed down to baseband and then decimated, prior to passing it to a standard FFT. The advantage is, for example, that if you have a sample rate of 10 MHz and require at least 10 Hz resolution over a small frequency band (say 1 kHz), then you do not need a 1 mega-point FFT; just decimate by a factor of 4096 and use a 256 point FFT, which is obviously quicker.

Advantages of the Zoom FFT are :

  • Increased frequency domain resolution
  • Reduced hardware cost and complexity
  • Wider spectral range

Applications of the Zoom FFT include :

  • Ultrasonic blood flow analysis
  • R.F. communications
  • Mechanical stress analysis
  • Doppler radar

The following diagram shows the zoom process :

While the following diagram shows the basic architecture of the Zoom-FFT :

One common question is : Is the zoom FFT the same as the chirp z-transform ?

The answer is : Absolutely not. The standard FFT calculates the spectrum at N equally spaced points around the unit circle in the z-plane; the chirp z-transform moves these points along a contour that can lie anywhere in the z-plane. In contrast, the zoom-FFT uses digital down conversion techniques to localise the standard FFT to a narrow band of frequencies centered on a higher frequency. The chirp z-transform is often used to analyse signals, such as speech, that have certain frequency domain characteristics. The zoom-FFT is used to reduce the sample rate required when analysing narrowband signals - e.g. in HF communications.
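Using the numbers from the example above (10 MHz sample rate, decimate by 4096, 256 point FFT), the zoom process can be sketched as follows; the centre frequency is a hypothetical choice, and the boxcar average stands in for a proper decimation filter:

```python
import numpy as np

fs = 10e6            # input sample rate (10 MHz)
decim = 4096         # decimation factor
N = 256              # FFT length after decimation
f_centre = 2.5e6     # hypothetical centre of the band of interest
resolution = fs / (decim * N)        # ~9.54 Hz per output bin

# Test tone exactly 16 bins above the band centre
f_tone = f_centre + 16 * resolution
t = np.arange(decim * N) / fs
x = np.cos(2.0 * np.pi * f_tone * t)

# 1. Mix down to complex baseband with a local oscillator at f_centre
bb = x * np.exp(-2j * np.pi * f_centre * t)
# 2. Lowpass filter and decimate (boxcar average of each block, as a
#    simple stand-in for a real anti-alias decimation filter)
bb = bb.reshape(N, decim).mean(axis=1)
# 3. Standard FFT of the short, decimated signal
spectrum = np.abs(np.fft.fft(bb))
peak_bin = int(np.argmax(spectrum))  # the tone appears 16 bins up
```

Only 256 output samples reach the FFT, yet each bin spans under 10 Hz, which is the whole point of zooming.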

These functions, and more, are available in the SigLib DSP Library.