STM32 in final 'product'

Let me write a round-up of this research on using an MCU as a speaker processor.

By this time I had switched from Keil MDK to STM32CubeIDE, which at this moment (November 2021) seems to be the weapon of choice for most STM32 developers.

Again, for future reference, a quick how-to on including a math/DSP library.

Together with your installation of the IDE you will also have installed some repositories of ARM software. You are looking for the CMSIS-DSP library. The path will be something like /STM32Cube/Repository/STM32Cube_FW_F4_V1.25.2/Drivers/CMSIS/DSP/ in your home directory (on Linux). It will be similar on macOS or Windooz.

Start a new project in the IDE, right-click the project's name and create a new source folder. Name this folder DSP. Now right-click again and select Import > File System. Select both the source (Source) and the header (Include) folders from the above-mentioned 'repository', as in the screenshot. Click Finish.

 

Now you have both the source and header files in your project, but your compiler knows nothing about them. So: right-click the project and open its properties. Find the compiler's include path settings and add the path to your new include folder. Check the screenshot.

Now you can pull in all these nice ARM DSP tricks with #include "arm_math.h". Try to build (that's embedded speak for compile)... Hundreds of errors! But if you check the first ones you will see that the compiler needs two preprocessor symbols. Find those in the properties and add both: ARM_MATH_CM4 (we are using an M4 core) and __FPU_PRESENT = 1U (yes, we do have a floating point dohickey in our processor). Oh, and that's two (2!) underscores: __
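To check that everything is set up correctly, here is a minimal smoke test (a sketch; the function name is mine):

    #include "arm_math.h"

    /* Quick sanity check that CMSIS-DSP compiles and links:
       take a square root the DSP way. */
    float32_t cmsis_dsp_smoke_test(void)
    {
        float32_t out = 0.0f;

        /* arm_sqrt_f32() returns ARM_MATH_SUCCESS for non-negative input */
        if (arm_sqrt_f32(2.0f, &out) == ARM_MATH_SUCCESS)
        {
            return out; /* ~1.41421 */
        }
        return 0.0f;
    }

If this builds and returns something close to 1.414, your include path and preprocessor symbols are fine.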


Now you can start programming, making use of block processing: taking chunks of, say, 48 samples and exploiting the increased efficiency of the CMSIS routines (a sketch follows below).

Check all those filtering functions!!
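To give an idea of the shape of such code, a minimal block-processing sketch (assuming a 2-stage cascade; the coefficient values are placeholders you fill in from your own filter design):

    #include "arm_math.h"

    #define BLOCK_SIZE 48  /* samples per processing chunk */
    #define NUM_STAGES 2   /* two cascaded biquads */

    /* 5 coefficients per stage: b0, b1, b2, a1, a2.
       Note: CMSIS expects a1/a2 already negated! */
    static float32_t coeffs[5 * NUM_STAGES] = { 0 }; /* your design here */
    static float32_t state[4 * NUM_STAGES];          /* 4 state words per stage */
    static arm_biquad_casd_df1_inst_f32 filt;

    void filter_init(void)
    {
        arm_biquad_cascade_df1_init_f32(&filt, NUM_STAGES, coeffs, state);
    }

    /* Call for every chunk of 48 samples, e.g. from an I2S DMA callback */
    void filter_block(float32_t *in, float32_t *out)
    {
        arm_biquad_cascade_df1_f32(&filt, in, out, BLOCK_SIZE);
    }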

There's loads to discover but let me disclose this one: 

Decades ago I met Ed Long (yup, the guy who 'invented' Time-Alignment) at a pro-audio trade fair. He demonstrated his closed-box, single 15" sub. That was astounding (as was the excursion). His trick was to use electronics branded ELF, not filters, to EQ the sub.


Much later Linkwitz developed a filtering method known as the Linkwitz Transform, a similar way to correct the response of a subwoofer.

I never got any good results with this method, as professional DSPs don't provide a way to import your own biquad coefficients. And approximating it with shelving filters and EQ will never have the required precision.


While using this CMSIS library function:

    void arm_biquad_cas_df1_32x64_q31(const arm_biquad_cas_df1_32x64_ins_q31 *S, const q31_t *pSrc, q31_t *pDst, uint32_t blockSize)

Processing function for the Q31 Biquad cascade 32x64 filter (32-bit coefficients and data, 64-bit internal state).

I managed to reproduce that sensation I had at that trade fair decades ago... Thanks Ed!
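For future reference, a minimal setup sketch for it (one stage; the Q31 coefficients and the post-shift are placeholders, to be filled in from your own Linkwitz Transform design):

    #include "arm_math.h"

    #define BLOCK_SIZE 48
    #define NUM_STAGES 1   /* the Linkwitz Transform is a single biquad */
    #define POST_SHIFT 2   /* example: allows coefficient magnitudes up to 4.0 */

    /* b0, b1, b2, a1, a2 in Q31, a1/a2 negated (CMSIS convention),
       all divided by 2^POST_SHIFT to fit the [-1, 1) fixed point range */
    static q31_t coeffs[5 * NUM_STAGES] = { 0 }; /* your LT design here */
    static q63_t state[4 * NUM_STAGES];          /* 64-bit state words */
    static arm_biquad_cas_df1_32x64_ins_q31 lt_filter;

    void lt_init(void)
    {
        arm_biquad_cas_df1_32x64_init_q31(&lt_filter, NUM_STAGES,
                                          coeffs, state, POST_SHIFT);
    }

    void lt_block(q31_t *in, q31_t *out)
    {
        arm_biquad_cas_df1_32x64_q31(&lt_filter, in, out, BLOCK_SIZE);
    }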

Do understand that you really, really need the above precision (in fixed point) to get any decent results! If you are into the math: understand how a biquad calculation works and how the representation of the coefficients (with poles all close to the unit circle) will affect the calculations at very low frequencies.
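A back-of-the-envelope sketch of why (assuming a biquad pole pair at radius $r$ and angle $\theta$):

$$a_1 = -2\,r\cos\theta, \qquad a_2 = r^2, \qquad \theta = \frac{2\pi f_0}{f_s}$$

At $f_0 = 30$ Hz and $f_s = 48$ kHz, $\theta \approx 0.0039$ rad, so $\cos\theta \approx 1 - 7.7\times10^{-6}$: everything that distinguishes your transform from a plain pass-through lives in the last few bits of the coefficients, and the 64-bit state keeps the recursion from rounding it away.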

(Anyone remember that at some point (2010?) we got this 'use for better LF response' check box in London Architect?)

On Instagram you can find some pictures of cabinets where I applied this.


Oh, and this is the latest board I developed, everything stalled a bit by the global chip shortage:




Practical FIR

Part 3

As so often in engineering, when some new technology is introduced we go into 'nec plus ultra' mode. Suddenly you will find a huge range of applications that are all, and exclusively, about that new tech. Same for FIR filtering.

The first appearance of FIR in (DIY) audio was as a means to do room EQ-ing. The idea was to measure a system's response in a room, do quite some averaging and apply an overall EQ that would correct anything and everything. First, there is of course no such thing as room EQ: the only way to change the acoustics of a room is by changing its dimensions and/or changing the absorbing/reflecting surfaces. But the solution is better than using only conventional (min. phase) EQ as we have been doing with our 31-band graphics.

Some of the room effects are min. phase, and thus one could use a graphic EQ to try to address those problems. The definition of min. phase behaviour dictates that your ideal EQ point will apply both the necessary magnitude and phase compensation.

So for these min. phase phenomena the EQ will NOT introduce 'PHASE ISSUES' (cringe).

However. Loads of stuff you would like to EQ in acoustics is NOT min. phase. Remember playing a festival where, during nightfall, the HF content of your PA changes drastically? And changing your graphic EQ doesn't help in the way you wanted? Well, that's a non-min.-phase phenomenon, and it calls for a lin. phase EQ, which as we know by now is only possible with FIR.

 

Another example would be (conventional) EQ-ing across a line array. Hopefully everybody understands you can't apply different EQ settings to different elements in your line array, right? The different phase behaviour of the different elements would in that case wreak havoc on the overall dispersion of the array! The only way you can get away with this is by using lin. phase EQ. Something that a certain French manufacturer of brown boxes does understand.

So when and where to apply lin.phase FIR filters as a means of filtering without affecting phase is something that needs thought!

It is not the miracle solution for everything; it is just another tool, and it has pros and cons like any other tool.

So yes, as has been explained in a million places: FIR filters can be computationally heavy, but why should that be mentioned over and over as a problem?

A more interesting difference, if you really want to set IIR versus FIR, is the way the precision of the calculations works out sonically. Fixed point vs floating point. Internal bit depth. The way rounding errors work in your algorithm. Audibility of the pre-ringing (or not?). That sort of thing. In the MCU/DSP part of this blog I will write a bit about this.

If all this is cleared, one big topic still is to be debated:

IS PHASE AUDIBLE?

(or, more precisely: does applying an allpass filter change the way (stereo) audio is perceived?)

One will understand that I (and my peers) do have an opinion contrary to that of the big names in our industry. This is the internet: I can't demonstrate, but I encourage you to find out for yourself!



Filtering by IIR or FIR

Part 1

Before we dive into the depths of FIR or IIR filtering, let's talk about audio in general.

We have this custom in audio engineering of analysing everything in the frequency domain, apparently something we inherited from electrical engineering. It does seem quite logical though; after all, music is all about sine waves, right?

Wrong! 

Look at some recording you made with your favourite DAW (Audacity, for example).

The signal (in stereo) can look something like this. It is the equivalent in voltage of the air pressure, or ultimately the position of your eardrum, of the stuff we call sound. No sine wave in sight, is there?

Another argument against this preoccupation with analysis in the frequency domain could be: tell me the frequency of a handclap, or the whistling wind, or all the consonants we use...

Should we stop talking about sound in the frequency domain? No! But do realize we are using a model of reality when we do.

Years ago, through the work of Helmholtz and Fourier, the method of decomposing a chunk of a recording like the one in the picture above into its sinusoidal components was worked out. (In fact, Helmholtz wrote a paper on what happens after this decomposition; look for 'auditory roughness'.)

And we now all have the computational power to use this analysis by FFT. If you are totally repelled by any mathematics whatsoever, think about the FFT as follows: imagine a prism. It decomposes a light beam into the different waves it is built up from, into those well known rainbows. The FFT engine works in a similar way in that it decomposes a chunk of audio into its sinusoidal components.
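And for the MCU-minded, that prism is only a few lines of CMSIS-DSP (a sketch; the buffer names are mine, and in real code you would run the init only once at startup):

    #include "arm_math.h"

    #define FFT_LEN 1024

    static arm_rfft_fast_instance_f32 fft;
    static float32_t spectrum[FFT_LEN];   /* interleaved re/im pairs */
    static float32_t mags[FFT_LEN / 2];   /* one magnitude per frequency bin */

    /* Decompose one chunk of audio into its sinusoidal components.
       Note: arm_rfft_fast_f32() overwrites its input buffer. */
    void analyse_chunk(float32_t *chunk /* FFT_LEN samples */)
    {
        arm_rfft_fast_init_f32(&fft, FFT_LEN);
        arm_rfft_fast_f32(&fft, chunk, spectrum, 0 /* 0 = forward transform */);
        arm_cmplx_mag_f32(spectrum, mags, FFT_LEN / 2);
        /* mags[k] is the strength of the sine at k * fs / FFT_LEN Hz */
    }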

Now on to the topic at hand.

In some other parts of this blog you can read about the use of a (dual) FFT analyser to get a frequency domain analysis of your device under test. In DIY loudspeaker circles a more common tool is Room EQ Wizard (REW), used to obtain what they call the frequency response (commonly abbreviated to FR). Mostly they forget about the 'phase' part of the Bode plot.

That isn't really a bad thing, as for the most part a loudspeaker (and its filtering) behaves as a minimum phase system. Minimum phase (not minimal!) means that the phase response is dictated by the frequency (magnitude) response and vice versa.

So an FR measurement also holds the information needed to derive the associated phase response. No magic here, just plain laws of physics. Every deviation you see in the frequency response must have a corresponding phase deviation from that so-desired 0-phase line (all under the premise of 'minimum phase', LTI behaviour).
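For the math-inclined: that relation can even be written down explicitly. For an ideal minimum phase system (a sketch; sign conventions vary per textbook):

$$\varphi_{\min}(\omega) = -\mathcal{H}\{\ln|H(\omega)|\}$$

where $\mathcal{H}$ is the Hilbert transform. The phase follows from the (log) magnitude alone, which is what tools like REW exploit when they generate a 'minimum phase' trace from a magnitude measurement.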

There is one big exception: all pass filters! An all pass filter has a typical phase response WITHOUT anything reflected in the frequency response. If you think about it: an ideally crossed multi-speaker system will have a flat FR and an overall all pass phase behaviour (as long as conventional, IIR filters are used). Maybe something like this for a 2-way system:

 

If you are still uncomfortable with the concept of phase, don't worry too much. Just don't confuse phase and delay. As defined elsewhere in this blog: we use the word phase(-difference) for that part of the overall frequency-dependent time difference between two components that remains after plain, simple 'excess delay' is stripped away. So:

Time alignment does not equal Phase alignment!!

('time' alignment as a scientific term is a bit Einstein-esque; it is signal alignment... but hey...)
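In formula form (a sketch; $\tau$ is the excess delay that gets stripped away, and a pure delay contributes the phase $-\omega\tau$):

$$\varphi(\omega) = \varphi_{\text{measured}}(\omega) + \omega\,\tau$$

Whatever is left after removing that straight line is the phase we argue about, and no amount of delay-knob twisting will flatten it.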

...on to FIR filtering, part 2