Monday, 31 December 2012

What Analog I/O Hardware Can Be Used For Digital Signal Processing On A Laptop Or Desktop Computer ?

While it is entirely possible to use a standard audio interface for audio frequency I/O, I have had great results with the Behringer U-CONTROL UCA202 (http://www.behringer.com/EN/Products/UCA202.aspx).

The UCA202 supports the Windows WDM/KS API as well as ASIO, although I have found that the WDM/KS interface provides low enough latency for most of my applications that I have not needed ASIO. I have used this interface in a number of applications and also, to great effect, for the laboratory exercises on the Oxford University Digital Signal Processing courses.

What Software Is Good For Learning Or Implementing Digital Signal Processing On A Desktop or Laptop Computer ?


I nearly always start out at the C level and then port to Assembly code, as required.

Under Linux, GCC is the obvious choice but under Windows I use Microsoft Visual C++ V10.0 Express Edition, which can be downloaded for free (http://www.microsoft.com).

For audio I/O I use the standard sound card and PortAudio (http://www.portaudio.com). PortAudio is a great open-source API for audio I/O and runs on a number of different operating systems so software written with this is very portable.
A great example of what can be done with PortAudio is Audacity (http://audacity.sourceforge.net). As an aside, Audacity is also a great tool to use for analyzing sampled waveforms and I have used it for analyzing all sorts of signals from radar to gas turbine (jet engine) vibration.
PortAudio supports several different host-APIs and the main one I use under Windows is Microsoft DirectX SDK (http://www.microsoft.com/en-us/download/details.aspx?id=6812).
See my earlier blog post (http://realgonegeek.blogspot.co.uk/2012/12/compiling-portaudio-and-visual-studio-c.html) for further details on using PortAudio under Windows. I have found WDM/KS to have a very low (and predictable) latency but I have also found that it often supports fewer sample rate options than MME or DS.

If you wish to do some signal processing with the above software then a very simple way to generate data graphs direct from a C program is Signal Visualizer (http://signalvisualize.sourceforge.net). The nice thing about this is that it is an IP based client-server package that allows the DSP client to run on an embedded machine and display the graphs on a remote Windows or Linux computer.
Other tools that I have used for back-end display of data are GnuPlot (http://www.gnuplot.info/) and XMGrace (http://plasma-gate.weizmann.ac.il/Grace/).

If you want to write a complete GUI application for processing signals then for a host API I almost exclusively use C++ and wxWidgets (http://www.wxwidgets.org), which is an open source API that supports portability on a number of OSs in the same way that PortAudio does for audio I/O. When using wxWidgets, you should also install Bakefile (http://www.bakefile.org) which makes building wxWidgets examples much easier.

If you want to hack an application that puts all of these together and allows you to process signals via a soundcard then you can download System Analyzer (http://www.numerix-dsp.com/files/). This uses the Numerix Graphical Library (http://www.numerix-dsp.com/files/) and the free version of the Numerix SigLib DSP Library (http://www.numerix-dsp.com/free/).

What about if you want to use a different language other than C/C++ ?
I write nearly all of my DSP code in C or Assembly so rather than rewrite the functions in the target language for a particular project I typically use SWIG (http://www.swig.org/) to allow access to these C/ASM functions from other languages such as C#, Perl, PHP, Python etc.
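As a sketch of how this works, a SWIG interface file simply declares the C functions to be exposed; the module name, header and function below are hypothetical, purely for illustration :

```
/* dsp.i - hypothetical SWIG interface file */
%module dsp

%{
#include "dsp.h"    /* hypothetical header declaring the C functions */
%}

/* Declarations to wrap - SWIG generates the glue code for the target language */
double dot_product(const double *a, const double *b, int n);
```

Running e.g. swig -python dsp.i generates wrapper source that you compile and link with the C library. Note that array arguments like these typically need SWIG typemaps (or the carrays.i library) before they are convenient to call from the scripting language.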

I Am Performing A Fast Fourier Transform (FFT) On A Sine Wave Of Magnitude +/- 1 Why Is The Output Magnitude Much Larger ?


When I was writing general purpose DSP algorithms, this was one of the most frequently asked questions, so I thought I would discuss the subject.

When performing a Fourier transform on a sine wave of magnitude +/- 1, the output represents the two phasors, each of magnitude 0.5, that sum together to give the original sine wave.

However when performing an N point discrete Fourier transform (DFT or FFT) on the same, but now discrete, sine wave then the results are scaled by the factor N.

As an example, if you perform a 512 point DFT or FFT on a sine wave of magnitude +/- 1 then the un-scaled output phasors will each have magnitude 256 (0.5 x 512).

While this is a minor inconvenience when using a floating point processor, it can be a major issue when using fixed point processing because of the overflow and scaling issues. Maybe fixed point programming should be the subject of a future blog ;-)

DSP Tech Brief : D.C. Offset with Sigma-Delta ADCs/DACs


A question that I frequently received, back in the day when I was an FAE working for a general purpose DSP hardware company, used to go like this :
Me : Good afternoon, technical support, how may I help
Customer : Hi, I have your ADC input card. I have shorted the inputs of my ADC and measured the samples on the input. They have magnitudes like 31, 32 or 33 instead of zero. I have purchased a 16 bit converter but if I take off these 5 odd bits I am only left with 10, which is not what I paid for.

How do you answer a question like this ?
Well the answer always revolved around the type of ADC/DAC that was being used. In almost all cases it was a Sigma-Delta converter and many of these devices introduce a D.C. offset in the conversion process. These devices are designed for low cost, high quality consumer audio applications that are typically A.C. coupled, so a couple of mV D.C. offset is not an issue.

Is this an issue ?
D.C. offset does not necessarily lead to a loss of resolution provided that the converter is being used in the correct way. Of course, if D.C. measurement accuracy is critical to an application then Sigma-Delta converters are probably not the correct device to use unless they include D.C. offset correction capabilities.

Another issue commonly seen with Sigma-Delta converters is the group delay associated with the digital filtering element of the converter. This delay can sometimes be of the order of tens of milliseconds for both ADCs and DACs, so these devices are not suitable for embedded control applications where you may require a servo loop time of the order of 1 ms. For these types of applications something like a Successive Approximation device might be more appropriate.

Despite these issues, Sigma-Delta converters are very cheap and offer very high quality.

Saturday, 29 December 2012

Book Recommendation : Martin Sauter - From GSM To LTE


If you are looking for a medium level introduction to 2G, 3G and LTE then this is a great book authored by my friend, Martin.
This book doesn't go into the same minute detail as books that focus on each standard but it is a really useful overview of the different standards and how they work. This is a book that I still refer to when I need to remind myself how certain mobile comms functionality works.

http://books.google.co.uk/books/about/From_GSM_to_LTE.html?id=uso-6LN2YjsC

P.S. If you are interested in understanding the capabilities of mobile comms at a user level then I can thoroughly recommend his blog (http://mobilesociety.typepad.com/).

Science Book Recommendation : Piers Bizony - Atom


I recently read this book and found it to be an incredible read. This is an amazing book that not only discusses the atom (as you would guess from the title) but also why we, and everything we see, are here. He answers some key questions about the universe in this book but also discusses some key questions that science cannot yet answer.
Bizony is an excellent author, very readable, and I would recommend his books to anyone who is interested in science.
http://books.google.co.uk/books/about/Atom.html?id=_2ggAQAAIAAJ

How Do I Read And Write .wav Files For Storing Digital Signal Processing Data ?

There are a bunch of simple functions in the Numerix Host Library (http://www.numerix-dsp.com/files/).

Although .wav files are native to Microsoft OSs, the file format is actually a very convenient format for storing data in any environment and I have used it in embedded systems, telecomms systems and all sorts of applications.


Friday, 28 December 2012

What Is The Best Way To Learn Digital Signal Processing


Although my go-to book for DSP is always Digital Signal Processing by Oppenheim and Schafer, things have moved on a long way over the last 30 years in terms of books that present DSP in an easier to learn format. The book that I recommend is : Smith, Steven W., “The Scientist and Engineer's Guide to Digital Signal Processing”. The complete book is available for download from : http://www.dspguide.com. I can also thoroughly recommend : Orfanidis, Sophocles J., “Introduction to Signal Processing”, Prentice Hall, Inc., which is also available for download from : http://eceweb1.rutgers.edu/~orfanidi/intro2sp/.

If you have ever wanted to learn how to write an FFT algorithm (or if you just fancy an intellectual programming challenge, one quiet evening) then I can thoroughly recommend you follow the step by step guide in Steven Smith’s book. Give it a go and see if your solution matches Steven’s, on the next page – no cheating ;-)

Digital Signal Processing - A Very Portable Skill


One of the lessons I learnt many years ago was how portable DSP skills are, from comms to medical to military to control theory. One project I worked on many years ago was a zoom-FFT based spectrum analyzer. I was a junior DSP engineer at the time and the senior engineer in the company referred me to the following applications note, which used to be available on the Motorola web site :
Park S., Principles of Sigma-Delta Modulation for Analog-to-Digital Converters (APR8.PDF), Motorola Inc., USA
Now Sigma-Delta conversion seems to have very little in common with zoom-FFTs but I soon learnt how similar these techniques are and I still refer students to this app note, if they are ever going into narrow band applications (RF, ultrasound, sonar etc).
Sadly this app note is no longer available on the Motorola web site but a web search should locate a copy.

The next round of the Oxford University DSP courses take place in January 2013



Digital Signal Processing (theory and application) - Mon 21 to Wed 23 Jan 2013
http://www.conted.ox.ac.uk/courses/details.php?id=H600-24
This course provides a good understanding of DSP principles and their implementation and equips the delegate to put the ideas into practice and/or to tackle more advanced aspects of DSP. 'Hands-on' laboratory sessions are interspersed with the lectures to illustrate the taught material and allow you to pursue your own areas of interest in DSP. The hands-on sessions use specially written software running on PCs.

Digital Signal Processing Implementation (algorithms to optimization) - Thu 24 Jan 2013
A one-day supplement to the Digital Signal Processing course that takes the theory and translates it into practice.
http://www.conted.ox.ac.uk/courses/details.php?id=H600-25
The course will include a mixed lecture and demonstration format and has been written to be independent of target processor architecture.
The course will show how to take common DSP algorithms and map them onto common processor architectures. It will also give a guideline for how to choose a DSP device, in particular how to choose and use the correct data word length for any application.

Thursday, 27 December 2012

How to compile C programs that include trigonometry or math functionality such as sin(), cos() and tan()


With some gcc installations (e.g. on my Raspberry Pi) you need to include the math library (-lm) in the link process :
gcc -Wall program.c -o program -lm

Monday, 24 December 2012

How long does the Raspberry Pi take to calculate a C coded 1024 point floating point FFT ?

Answer : 1.63 milliseconds
Note :
 20 years ago (1992) the first floating point DSP with a dedicated C compiler (The Texas Instruments TMS320C30 (30 MHz)) took 16 milliseconds to run the same code
 10 years ago (2002) the state of the art floating point DSP (The Texas Instruments TMS320C6701 (167 MHz)) took 0.82 milliseconds to run the same code
 Today (2012) a modern Pentium laptop (2.4 GHz) can run the same code in 0.13 milliseconds
Maybe I will add another comment, in the future, to compare the cost and power consumption of the various devices.

Android DHCP Solutions

When I first bought an Android device I was frustrated because the WiFi would work fine for a few days, then stop, and after a while it would start working again.
After a lot of research, here is my conclusion. It may not be right but the fix works for me :
The Android device holds on to its DHCP IP address even after the router has released it and re-assigned it to another device. When the Android device tries to reconnect, the router will not let it keep the old IP address, so the link is never set up correctly.
Solution :
 1/ When you have a successful connection get the IP and MAC addresses of the Android then set up the router so that that IP address is reserved for that particular device.
 2/ On the Android assign a fixed IP address that is within the same subnet as your router but outside its DHCP range. E.g.
  Subnet mask : 255.255.255.0
  Subnet address space : 192.168.1.1 to 192.168.1.254
  DHCP range (range set in router) : 192.168.1.1 to 192.168.1.150
  Fixed IP address range (addresses set on the devices) : 192.168.1.151 to 192.168.1.254

Compiling PortAudio and Visual Studio (C++) 2010 / 2013

Install Visual Studio (C++) 2010
This does not include 64 bit compilation support so install the Windows Software Development Kit version 7.1 from here : http://msdn.microsoft.com/en-us/windowsserver/bb980924.aspx
Convert and build the project : double click the .sln file and, when the conversion has completed, right click on the project and select Rebuild.
Additional instructions for compiling PortAudio :
http://portaudio.com/docs/v19-doxydocs/compile_windows.html
The converted project generates a .lib to link with your application and a .dll that you can either put in the application directory or into C:\Windows\System32.


One of the things I didn't understand was how the PortAudio host APIs (WMME, DS etc.) mapped to the hardware interfaces. There is an example (pa_devs.c) that I found really useful. I just compiled it, linked it against the library built above, copied the .dll into the current directory and ran it. The output on the screen is most informative.

Which API to use ? :
In general I have had great results with the Direct Sound API and tend to use that.
I found the WDM-KS API has very low latency (for a laptop computer running an OS) of 4600 samples from output to input. The downside is that for most audio interfaces it only seems to support 44.1 kHz (at least on the hardware that I am using) but that is OK for most applications.

Which APIs are compiled into the library ? :
These are selected using the PA_USE_XXXX #defines which can be configured using the compiler switches. To do this in Visual Studio Express do the following :
Right click on the project (portaudio)
Select "Properties"
Select the Configuration and Platform you wish to modify
Navigate to : Configuration Properties | C/C++ | Preprocessor | Preprocessor Definitions
Set your required API definition to 0 (disable) or 1 (enable) e.g. : PA_USE_WDMKS=1

For Visual Studio Express 2013 I made the following changes :
  1. Change the General Options | Target Name to add the extension _x86 : $(ProjectName)_x86
  2. Define PA_WDMKS_NO_KSGUID_LIB in the Preprocessor Definitions
and everything compiled fine.

When I used the DirectSound version of PortAudio I used the latest version of : DirectX (9.0a).

Here is a useful tip when using PortAudio with a GUI :
Modify PaUtil_DebugPrint in pa_front.c as shown below and the debug statements will be written to the file debug.log.
void PaUtil_DebugPrint( const char *format, ... )
{
    va_list ap;
    FILE *logFile = fopen( "debug.log", "a" );

    if( logFile == NULL )    /* Fail silently if the log file can not be opened */
        return;

    va_start( ap, format );
    // vfprintf( stderr, format, ap );
    vfprintf( logFile, format, ap );
    va_end( ap );

    // fflush( stderr );
    fclose( logFile );
}


I may be alone on my planet but at least everybody listens to me ;-)