Jean Marie Reynaud Sonate..some restoration


Today I got a selection of components and a couple of cabinets from this famous '80s HiFi speaker.

The main reason for this investigation: a long-time audio friend is in the process of re-entering the realm of mastering.

As such he is setting up a spare room to be his studio. Several off-the-shelf commercial studio speakers passed in review. DSP has certainly entered this world as well, so none of them sounded 'bad'. But at the same time they all lack the involvement to inspire you to do the work.

This might be considered a good thing by some, but as sound engineers (without a live audience) we do need an anchor.

The same guy is also a passionate high-end hifi loony with a long-time affection for JMR loudspeakers (Jean Marie Reynaud).

 

So let's see what we can learn from this.

These Sonata speakers seem to have some physical 'time-alignment'.

It is a late-'70s, early-'80s design! Ed Long published his papers on time-alignment in 1976. There were no widespread dual-FFT measurement systems in those days.

Sour grapes if somebody immediately diagnoses diffraction 'problems' from the steep baffle step just by looking at the speakers. Read on to understand why this is less of a problem than you might think..

 

Let's measure and see if these speakers are also phase-aligned; the simple 3rd-order filter doesn't look that complicated.

Some things are a bit unclear: I got 4 woofers and 4 tweeters. Two woofers were rather crudely refoamed by somebody a couple of years ago, and there are 2 tweeters with and 2 without ferrofluid (of which 1 is blown).

So the first thing I did was refoam the surrounds (see above), measure the T/S parameters and compare them to the previously refoamed ones. Yup, they are different...

Also: is the ferrofluid original, or a modification by someone at a later stage?



And yes, both tweeter and woofer are most certainly time-aligned as well as phase-aligned! I had to use the ferrofluid tweeters to get the best matching response.

The crossover frequency works out to a bit above 4 kHz. Probably some Butterworth roll-off, as the drivers sum to about +3 dB around that point.

That's remarkably high, and only possible because the woofers are treated with some sort of shiny coating the original Seas drivers don't seem to have.

Having such a high crossover frequency means almost the complete voicing is done by one driver alone, giving you those benefits the fullrange-driver community is looking for.

Next we can compare the 2 different refoam types: 

 

The red line is the one refoamed by yours truly, the blue line is the one refoamed in an earlier repair. So yes, those differences in measured T/S do matter. (Not much, but hey.)
 

 

 

 

 

So, for the conclusion: how does it sound? Well, in one word: very musical.

Still with a presentation very similar to a modern (FIR-filtered) system, thanks to the correct phase/time alignment.

On the downside: also a bit old-fashioned, as in dusty and tired..

..to be continued, to see if I can get this a bit up to date..

Dynamic lin. phase EQ..some trials..

Linear phase EQ, part 1

First let's talk about EQ in general:

A few decades of running a sound rental company left me with a huge collection of superfluous copies of several brands of 31-band graphic EQs.
My personal favorite was such a Klark Teknik DN360-->

 

What was (still is??) the purpose of a 31-band EQ inserted in the mains of our PAs?

Often people start jabbering about room EQ, tuning a system to a room, or other semi-scientific nonsense, never failing to mention phase issues..

Once you grow to be a bit more aware of what you are doing, you will realize that the use of such a graphic has (had) two purposes:

1. you will use it in the creative process of doing your mix

2. you will try to correct problems you are experiencing with the current program material, the system (and the level)

The first one is not up for debate here, so let's investigate the second topic.

Loudspeaker systems in general are giant distortion generators (compared to electronics). To elaborate on the causes, or to specify which distortion, would lead way too far in this article; just bear with me and understand that we are not talking about rattles and buzzes, but about the harmonic distortion we sometimes even appreciate.

So with this in mind we could agree that the tweaking in the 2-4 kHz region on your graphic might be triggered by the perceived distortion.. NOT the level..

If you remember: you dipped that frequency because it was 'ugly', not because it was 'loud'.

All analog EQs and the major part of their digital counterparts are minimum phase. Meaning: yes, every EQ setting will affect the phase in that region in a way that is associated with its magnitude response. Law of physics. Mostly no problem, as the thing you are correcting will be minimum phase too.

The example above: using your EQ to dip that ugly distortion will affect the phase in a way you don't want. Maybe that's the reason you keep tweaking in that region during a 'difficult' show.

 

Another reason why you keep changing the EQ settings in that region could be Fletcher and Munson. Everybody knows these equal-loudness contours; the 'loudness' button on your '70s hifi amp is a direct consequence.

Look closely at the 2-4 kHz region: not only is your hearing more sensitive in that area (duh):

Also, that sensitivity changes with level, both in frequency and in amount. Go from 80 to 100 dB to get an idea.


We have tried different solutions for this. I was never happy with any dynamic EQ or de-esser function on any compressor of any off-the-shelf product.

Multiband compressor techniques are more something I could appreciate. True linear phase, that is! Here is an example of how I did this using a freely programmable platform like BSS Soundweb:

The signal is split into two pathways using linear-phase FIR filters. Great care has been taken that those filters, when recombined, don't affect the signal in any other way than adding some delay. Both paths are compressed with different settings for attack, release, ratio and so on. The compressor is made more 'sensitive' to the frequencies of interest by using an EQ on the side chain (having its split feed before the FIR gives it some 'look-ahead').

Still not quite happy, as in: you don't want 'compressor' action, you want 'EQ' action.

So recently I started to experiment with my home-brew DSP boards. I took a metal horn/driver combo; despite the felt damping it will have a ringing-bell associated distortion, certainly if it is not mounted firmly in a box. (Still a really nice, controlled top end, though.)



..soon more




Dynamic lin. phase EQ..some trials..

Linear phase EQ, part 2

 

The fun in coding your own DSP is that you quickly learn this Egg of Columbus, once you free your mind of thinking in the frequency domain (paraphrasing Moroder):

Everything you dream up about multiplying functions in the frequency domain, like applying dynamic EQ, can in fact be a rather simple arithmetic operation in the (discretely sampled) time domain.

So EQ can work out as follows: apply the reverse filter to the signal stream on a sample-per-sample basis. Then subtract that stream, also sample per sample, from the original stream. Take care you do your calculation on the correct sample!

Mind-boggling to realize that every single sample is a representation of the music playing (right at that moment in time).


void doProcess(q15_t* Lin, q15_t* Rin, q15_t* Lout, q15_t* Rout)
{
	/* subtract the two legs of the balanced input to get the mono signal */
	arm_sub_q15(Rin, Lin, Mono, n);
	arm_q15_to_float(Mono, Monofl, n);

	/* several FIR filters: the band split plus the EQ band of interest */
	arm_fir_f32(&SLF, Monofl, Low, n);
	arm_fir_f32(&SHF, Monofl, High, n);
	arm_fir_f32(&SEQ, Monofl, EQout, n);

	/* RMS threshold + attack/release:
	   blocksize = 48, sample freq = 48 kHz --> 1 ms per callback */
	arm_rms_f32(EQout, n, &rms);
	if ((rms > 0.005f) && (gainEQ <= 0.8f))
	{
		gainEQ = gainEQ + 0.1f;   /* fast attack */
	}
	if ((rms < 0.005f) && (gainEQ > 0.0f))
	{
		gainEQ = gainEQ - 0.005f; /* slow release */
	}

	/* scaling + the actual EQ subtraction */
	arm_scale_f32(EQout, gainEQ, EQout, n);
	arm_sub_f32(High, EQout, Result, n);

	/* revert float back to signed int. Lout = low, Rout = high */
	arm_float_to_q15(Low, Lout, n);
	arm_float_to_q15(Result, Rout, n);
}

That is a very crude bit of code to run the tests. Basically I use some FIR filters to split the stream into three parts: a high band to feed the HF horn, a low band to feed the woofer, and a bell-shaped EQ stream around 2200 Hz to do the magic. Totally mocked up out of thin air (and some prior listening).

If you are really interested you can work out the attack/release timing and the chosen gain slopes (also derived by some listening).

Now for the promising conclusions:

Yes. 

EQ-ing with dynamic linear-phase EQ does help to create a loudspeaker that is way more versatile, in that you can play it more dynamically (read: louder) without being annoyed by that ripping horn distortion.

But.

Care should be taken not to overdo the processing, because it sounds like exactly that: overprocessing. Also: as (live) sound engineers we are totally used to how all these elements like loudspeakers and/or microphones behave, and in fact we make use of the nonlinearities in the creative process.

 

More research is required..

Here is some food for thought: below is a measurement of the first two distortion harmonics of the HF horn under test. We all know: 2nd harmonic is OK, 3rd is bad, right? Now look where I chose (by ear) that EQ point: right at 2.2 kHz, where that 3rd harmonic bumps up (and the 2nd goes down!).


Brown = SPL, fundamental

Red = SPL, 2nd harmonic

Yellow = SPL, 3rd harmonic






Yet one more thing I would like to add:

In my discussions with my great audio friend prof. dr. SA (you know who you are) he once coined the phrase 'equal group distortion'. Naturally this is not a real term; what he was saying is: it is the discontinuities and bumps in distortion patterns that make us jump into action..

PMC LB1 a historic investigation in legacy sound, part 1

Part 1

Today some vintage PMC LB1 studio monitors arrived at my desk.

For those who don't know them: just follow the link for a description of these UK-manufactured benchmark loudspeakers, developed by some former BBC employees.

They came accompanied by an assorted collection of 'spare' tweeters and woofers.

 

 

Upon first test nothing worked as it should: we quickly found that the 1+/1- and 2+/2- connections of the Speakons were paralleled. Now that is a bit unusual, as in pro audio we use an NL4 Speakon to feed a loudspeaker with a high/low split signal, and nowadays the majority of professional amps have 2 channels feeding 1 Speakon. So the box effectively shorts the 2 channels. Not good. Oh well.. easily sorted.

 

Still not a very good sound. So let's open it up: and behold, some fire has been inside.

Now this really made me curious. I know the loudspeakers have been driven with Bryston 4B amps (not too shabby!), so an underpowered clipping amp couldn't be the reason.

So let's investigate...


 

 

The first thing I did was try to find a schematic for this crossover online. No luck. So I drew it up myself:

Now, to get an idea of the design philosophy, we examine the values of the components closely. Immediately the doubling of C1 vs C2 (6.8 uF) springs out, which points to a textbook Linkwitz-Riley 24 dB filter. In those days modelling software (like LinearX LEAP) was available but apparently not widespread (I acquired my copy somewhere in 1996). So we can investigate this pointer further. I de-soldered the 2 coils and measured them. Not very accurately, but both the values and, even more, their ratio (1:4.5) prove this indeed is a textbook LR filter.

To determine for what crossover frequency they designed that filter we have to do a bit of guesswork, because you have to know the impedance with which this filter is terminated. What if they just took the (DC!) 6 ohm resistance that is mentioned on the tweeter? (Of course this is not correct, but read on..) In that case the crossover works as an (electrically!) perfect 24 dB LR crossover at 2100 Hz.

Anybody who has dabbled a bit in passive crossover design knows the horrors: components all interact with each other, are never ideal (parasitic inductance/capacitance), and the terminating impedance is never a constant (ohmic) resistance. So yes, changing e.g. capacitors for a different brand (with the SAME value, duh) will have an impact on the sonic behavior of a filter. But that is a different story.

First let's measure the tweeters to see if that terminating resistance indeed is a more or less constant 6 ohms..


And yes: the green line is an impedance measurement of the raw driver. That is rather flat to begin with, so one immediately thinks: ferrofluid. Indeed, in the '90s that was considered quite the bomb. Some manufacturers used ferrofluid in everything (even 15" speakers, bad idea!).

One of the 'spare' tweeters had some note saying "possibly faulty", so I opened it up to find out. 

Now, opening up drivers and in general re-coning or fitting new membranes is NOT a good idea (with nowadays' manufacturing tolerances), but that is another rant; this tweeter was 'possibly' faulty anyway:

Yes, JL (in 2015), this tweeter is definitely faulty.. I might even say totally foobarred.. But it is also filled with (somewhat dried-out) ferrofluid, so: confirmed!

Now what type/brand of tweeter would this be? Because they look awfully familiar..

And, surprise, surprise: when I pulled away the PMC branding sticker I found the original VIFA D27TG35 sticker. Take note of that exact number, because this is remarkable: in that same era (somewhere in the '90s) I designed and manufactured a small multipurpose loudspeaker (built maybe a 100 units or so) with that exact same tweeter. When they went out of production I bought the remaining stock, and that was the end of that build. (Fairly recently I did a restoration of one of the installs-->)


And no, some Peerless or (rebranded Vifa) tweeter with a similar number will not be a replacement! But you can have your own opinion: nobody will get hurt..

Onwards with the filter, because some components still need to be explained: more precisely the burned resistor(s). The first one is a series resistor with a paralleled capacitor. This serves as an attenuator for the tweeter and also as a slight top boost (ferrofluid, y'ken) with its +3 dB point at 9 kHz. Fair enough. But it makes the filter 'see' a higher load.

But what's that 100 nF (with 8 ohms in series) doing there? Those values make no sense at all; it does precisely nothing in the audible band. Also as a Zobel network (to stabilize the amp load) it is nonsense in that position.. oh well.. brainfart from the designer??

So this leaves us the one resistor that is paralleled with the combined load of tweeter and attenuator, to get back to the correct load for the LR filter. From a designer's perspective this is an awful solution: that resistor gets hammered with a lot of power, so no wonder it is burned.

Previous repairs show an installed (also way too small) 13 ohm resistor. Now that's an odd value. Not likely, in my opinion.

Back to the measurement above: if I insert a 10 ohm resistor and re-measure the tweeter with attenuator, I get that yellowish line: an almost flat, straight 6 ohms, which brings us back to the desired flat-line 6 ohms for our LR network topology...
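The forensics above can be redone as plain arithmetic. Note that the 9 ohm series resistor below is a hypothetical stand-in (the actual attenuator value isn't listed here); the point is how a 10 ohm shunt lands the combination right back on 6 ohms, while the repair-shop 13 ohm would not:

```c
/* L-pad impedance as seen by the crossover.
   Rt = tweeter (taken as a flat 6 ohm thanks to the ferrofluid),
   Rs = series attenuator resistor (9 ohm is a made-up illustration
        value, not a measured one),
   Rp = the burned parallel resistor under investigation. */
double series(double a, double b)   { return a + b; }
double parallel(double a, double b) { return a * b / (a + b); }

double lpad_load(double Rt, double Rs, double Rp) {
    return parallel(series(Rs, Rt), Rp);
}
```

With these illustration values a 10 ohm shunt gives (9+6)·10/25 = 6.0 ohms exactly, while 13 ohms gives about 7 ohms: enough to detune an LR filter designed around a 6 ohm termination.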

Now how's that for loudspeaker forensics?


 

..on to Part 2


PMC LB1 a historic investigation in legacy sound, part 2

 Part 2

 ..onwards..

So now what will we do? Upgrade that filter with audiophile components?  Redesign it with what we know now? Or even use some DSP to make an active system?

NONE of the above.

Audio reproduction is a construct, so if we want to know why that box was so successful in those days we have to tread really carefully!


So what I did was replace those burned resistors with beefier ones of the original value. However, I did get rid of that mysterious non-working 100 nF to make some room. Purists forgive me if you hear a change in the 200 kHz region.

I also replaced the electrolytic caps with some MKT ones I had lying around. We all agree that (old) electrolytics are a pain, right?

They serve as impedance equalization for the woofer, so they're not in the 'signal path' anyway.

So how does it sound?

At first I was a bit underwhelmed, so let's look at the measurement to see if I perhaps made a mistake by swapping the polarity of one of the components. (Note to self: why don't you ever mark down how things were connected?)

No, totally correct. As expected: perfect all-pass behavior. Also the inverted-tweeter experiment gives a deep null.

It does sound that way too: like a well-balanced loudspeaker. Albeit a bit dull and boring; perhaps too much zero-phase FIR filtering for me lately?

puzzled...

 

..So let's turn it up and see what happens...

AHA!! Euphonics!! I can do euphonics; after all, I have been a live sound engineer for many years. So I need a bigger amp, as I was testing with one of my whimsy yet brilliant-sounding gainclone single-chip amps.

Unfortunately I don't have a spare Bryston lying around, but I do have a class A amp from a prior experiment.

Now it starts to become clear why people have blown these speakers to smithereens: they just say more, more, more, more, more..

More of what? More of that fantastic low end! That tweeter I do know: not the best 7 kHz I ever heard, but that transmission line is real fun!

Here you go: a measurement very close to the port. You can see the smoothly playing 60 Hz (such a nice frequency).

But it does look a bit worrisome around 220 Hz, right? That's a general problem with transmission lines; maybe I can repair that a bit with some more experimental stuffing.

Now, will this loudspeaker work as a generic monitor loudspeaker in 2023?

We would first have to define what a monitor speaker is. 

Contrary to some big names in the loudspeaker design industry (not studio people), I have the feeling that working as a mixing engineer is also part of the creational phase. Not re-production, as in playing back the end result.

So a studio monitor is not there as a reference of the end result but to make you do your creative part in a stimulating way. And press you to reach further, higher, better, newer.

These monitors came into fashion when we (in live sound) started working with bandpass (6th-order) subs.. and hey: let's try some Portishead / Massive Attack and drum 'n' bass (Propellerheads!)

...woohaaaaa... this is the shit..damn!

How revealing is that!

Now, does recent music like Billie Eilish, Whispering Sons or (modern) classical music work? Nah, unless you really enjoy that '90s sauce on everything.


That being said: some of these sound-system blokes are experimenting a lot with 1/4-wave subs, as if it is something new.. So who knows what will come into fashion again; after all, everything seems to go in circles.


 


And another day of listening + experimenting: the red line is the port measurement without any additional stuffing, the whitish line is with some extra 'sheep wool' stuffing. Sure: the low end is cleaner + sounds more tight with the stuffing, but the 'fun' is gone..

 

More research is required: I will use the extra parts to make a home-brew TL and in the meantime get a better (DSP) crossover, because the group delay of that conventional filter is starting to annoy me. The restored boxes will serve as a reference (in my memory database) for that.

Keep checking!

PMC LB1 a historic investigation in legacy sound, part 3

Part 3

 

As I mentioned in the previous post: I was getting annoyed by the overall group delay introduced by the textbook IIR (LR 24 dB/oct) filtering.

So here's the experiment: a home-brew TL with a very cheap 4" speaker. I totally forgot to take pictures of the making; not that interesting anyway: it is a standard 1/4-lambda back-loaded TL with some folding.

Very similar to the PMC design

The big difference will be the filtering: I used one of my DSP boards to make an overall zero-phase-shift crossover using FIR filtering. How to do so will be a topic for a different series of posts. Soon. Maybe.

 

 

While presenting them to my audio peers it became clear to us that there's no way back to IIR filtering once you have tasted the reverberant field as presented by a zero-phase system..

But that wasn't the topic of this experiment. We were investigating the sonic properties of a transmission line. Funny coincidence: right at this moment the Amsterdam Dance Event is happening and all the newspapers are full of interviews with the current stars of EDM. The accompanying illustrations show pictures of studios with big (PMC) transmission line monitors.

So this is hot stuff at the moment.



 Now how does a TL sonically compare to a different approach?

 

Very same speaker, different horn-loaded tweeter, but we can ignore that. Literally. We have been practising listening for some time.

In this small cabinet I used a digital implementation of the Linkwitz Transform to get some LF response squeezed out of that wee speaker.

In the STM32 part of this blog I did talk a bit about the intricacies of this.

So we now have two things to compare AFTER each other. As always when listening to and evaluating audio stuff: you have to remember it for a few seconds. Which isn't simple and definitely needs some training..

 

 

How do they compare?


The purple line is close to the port of the TL (transmission line), while the green line is close to the speaker of the LT (Linkwitz Transform) closed box.

Clearly the TL has quite some more response down around 60 Hz (again, such a nice frequency), although at the cost of a somewhat funky phase response.

And then the resonances at 180 and 300 Hz (hey.. odd harmonics.. hmmm): they certainly spoil the fun in this experiment and are the main reason that some music really shines while other material sounds horrific. Missing-fundamental psychoacoustics at play here.

But we are not finished yet.

 

I totally like that 60 Hz. From such a small box. So I made another, somewhat bigger TL with a 10" speaker. Right at this moment it is playing at my feet under my desk. Even at actually quiet levels it really brings an extra dimension to the genre I am currently digging into:

Hauntology 

(EDM is too boring for me)

So you can get an idea of the sonic landscape.


Now what about those pipe harmonics in this setup?

Well, very steep FIR filtering at 80 Hz proved to solve that.

How? By using downsampling to a sample frequency that doesn't take zillions of coefficients to get the desired resolution of the linear-phase filter.

Naturally, (a lot of) delay has to be added to the (high-end) desktop speakers.


And now things get really interesting: 

I never was a fan of 'separate' subwoofers, also in live sound. Setting the delays from subs to mains, or vice versa, always gives you grief when they are some distance apart. Most certainly if you understand what phase alignment involves.

Why would that be, as wavelengths around those frequencies are in meters? How could some centimeters make a difference? Well, maybe the (much higher in frequency) harmonic distortion produced by the subs has to be in time with your mains?

With the above very steep filtering I could, at least at lower levels, get rid of that distortion, and now the sub behaves as it should: it seems as if my wee desktop speakers are producing a tremendous amount of accurate bass. Most certainly with instruments I know well (like bass guitar).

I also tried a similar setup with a conventional cabinet, but that doesn't give you the same results: a TL has a lot more efficiency at the frequencies of interest, meaning the cone has to move less and thus produces less distortion compared to an (augmented) closed box.

So that leaves only the quirky phase response (at 60-70 Hz) of a TL to be evaluated against program material. (I could differentiate it to get the group delay, for better insight.)

Let's see..

..Audio reproduction is a construct..



Making a DI box

Over the past decades making a DI box has been one of my recurring activities.

From repurposing scavenged transformers into makeshift 'passive' boxes to experiments with unbalanced-to-balanced chips.

The problem is phantom power.

Although rated at 48 volts, its usefulness as a PSU for any circuitry is really limited, because of the way it is implemented in almost every mic preamplifier:

The 48 volts is fed through two 6k8 resistors, one in each leg of the symmetrical input, giving a source impedance of 3k4. Which means that if your circuit draws, say, 10 mA of current, only 14 volts of that supply is left to begin with.

Your own circuit will have some resistors in the supply lines too, to decouple your circuit's output from the supply, so the voltage to work with, and thus the headroom of your DI, drops even further.

Hence the really limited headroom of a lot of modern commercial DI boxes.
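The supply arithmetic above, spelled out (10 mA is the example draw from the text):

```c
/* The phantom-supply arithmetic: 48 V arrives through two 6k8 feed
   resistors. Taking both legs together they act in parallel, 3k4
   effective, so every mA your circuit draws costs 3.4 V of supply. */
double phantom_volts_left(double draw_amps) {
    double source = 48.0;         /* nominal P48 */
    double feed   = 6800.0 / 2.0; /* two 6k8 resistors in parallel */
    return source - draw_amps * feed;
}
```

At 10 mA you are down to 14 V before your own decoupling resistors even enter the picture; halve the current draw and you win back 17 V of headroom, which is why low-current class A designs make sense here.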

Old-skool '70s electronics to the rescue:

 

This circuit is no rocket science and well known in every pro-audio circle, mostly credited to Bo Hansen..

Quite a few mods though. Just as a reminder for myself, a picture of the early limited prototype (limited as in: only 20 pcs of that specific transformer in my very old stock):


 
 
 
 


By popular demand we decided to try a reissue fairly recently.
The problem was finding a nice, affordable transformer: these Swedish ones are really nice, but man, do they charge.. Purely by coincidence I stumbled across OEP. Now those I do remember from the classic BSS AR116 DI (which has nothing to do with the modern AR133!!). And with some googling I found them to be used by the modern Telefunken as well (speaking of brand-name marketing!!)
 

So here we go: a batch of class A, current-driven, transformer-balanced DIs in a stainless steel housing.
Yes, stainless steel.
Every DI gets abused once in a while as a stage weight, box tilt or even as a hammer, so to prevent them from turning into ugly lumps of corroded, dented metal we folded and water-jet cut some stainless steel sheets:
(Of course I didn't do it all by myself: credits to Koos Roadservice.)
 
 

 
 
 
Still not totally satisfied with the end result, I taught myself how to powder coat (not difficult at all).
 
So here it is.. open for orders now.. hahahaha...

KBLsystems class A transformer balanced DI



Fighting long boot times when using older RPI

It has been quite a while since I did my 'audiophile' music player build.

raspberry-pi-with-balanced-output 

It has been almost a decade now, and there are a zillion dedicated distros around to make this really easy.

Just google: RPI musicplayer.

All really nice and slick.. but.. taking.. ages.. to.. boot, certainly on older Pi's.

So while reviving this old project to include a transformer-balanced output (driven with a class A running NE5532!), I worked myself through the big pile of clogged info to get a faster-booting system.

Now the easiest way would be to get yourself a custom-built image using a build system like Buildroot, Yocto or OpenWrt.

I did have a go at all of them, but the easiest way to get yourself going (total noob, like me) would be to start off with Buildroot.

Just follow the walkthrough (maybe add in an SSH server like Dropbear) and you will quickly get your own fast-booting image. That is: it isn't complicated, but the compilation takes hours!

The nastier part comes when you want to take it further: not many people talk about this, so for future reference here are some reminders (as long as they are of any value in the always-changing wonderful world of Linux 😂).

The RPI has its own rather special way of booting, during which you can configure all kinds of hardware add-ons by using so-called dtoverlays.

If you have been working with RPI and audio you will have come across these: to configure, edit the config.txt file in the 'boot' partition of your disk image. (Sometimes it will have just a number instead of the name 'boot'.)

For a first test add 'dtparam=audio=on' to this file. This will select the onboard (shitty) audio out. In theory. But the inner workings of Linux in these Buildroot builds don't seem to load the necessary kernel modules automatically. Something with udev and dts/dtbo. Complicated stuff. Tell us when you know how to do this!

I got it working with 'modprobe snd-bcm2835'.

We did select the packages to get ALSA and other obvious stuff like mpd, mpc and a text editor in our simple build, right?

Now, to play some music (from that USB stick) you have to mount it first: 'blkid' will give you a device name or UUID; create a mount point and mount. Configure '/etc/mpd.conf' to point to that music directory, and also add a default ALSA audio output setting while you're there.

Restart mpd: first kill it by the PID number found with 'ps', then start it again with 'mpd' (no sysctl or systemd in default BusyBox). Now mpc will hopefully have control of, and some information on, mpd.

Now on to even more remarkably easy-to-forget nooks and crannies:

We do want that i2s DAC thing running. In the picture is a simple (i2s slave) interface that doesn't need any configuration (over i2c). I think most of these simple boards will work; this one is an old HiFiBerry DAC with a PCM5102A. That is a significant hint for getting it going:

First comment out the 'dtparam=audio=on' and add 'dtoverlay=hifiberry-dac' to the aforementioned config.txt file.

But you will not have any overlays in that 'boot' directory. So download them from some GitHub repo or strip them from some other distro. Probably the kernel version and the .dtbo files have some relation...

Now for the similar modprobing:


You need all 4 of them to get it going:

snd-soc-bcm2835-i2s, snd-soc-core, snd-soc-pcm5102a, snd-soc-rpi-simple-soundcard.

Finally! Audio from your i2s board!


Now, to make it persistent after reboot you have to make a little script in '/etc/init.d'. Give it a number somewhat lower than the mpd start-stop script, which in my case was S95. (That's a capital S!)

Don't forget to mount your USB music drive before starting mpd. I edited the mpd start script to mount the drive: the usual fstab method doesn't work too well, presumably because the default mpd start script runs before the USB drive is ready to mount.

Other nice stuff to implement:

We can use the sysfs GPIO interface to manipulate GPIO ports. Make sure the pins are not in use for other functions (like i2s.. duh). Here is a small script:

 
#!/bin/sh

#gpio17 to drive a led:
echo 17 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio17/direction

#gpio27 to have a push button to do "mpc next":
echo 27 > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio27/direction

while true
do
if mpc status | grep -q 'playing'; then
  echo 1 > /sys/class/gpio/gpio17/value
else
  echo 0 > /sys/class/gpio/gpio17/value
fi

if cat /sys/class/gpio/gpio27/value | grep -q '1'; then
  mpc next
fi

sleep 0.5

done

 

 

Oh, and one more thing:

Before starting to use something like Buildroot, think of a way to separate everything from your everyday work computer. It will be a big mess if you don't. I used VMware. Do configure plenty of drive space!











TAC Scorpion revival

Back in the days when everybody was using some Soundcraft n*200 (n = 1, 2, 3, 4..) mixing desk, I used to prefer a TAC Scorpion console. We are talking end of the '80s, beginning of the '90s here..

So this is my (way too much work) project to investigate why I preferred that console over others, and at the same time make it a bit more 2022.


Now this is most certainly not how these consoles looked, so let's start with a picture of an original TAC Scorpion:

This is not my original console; that one has disappeared into oblivion. But it is very similar to the one shown on the left. All the VU meters were broken; they all had that after a year or two..

So instead of trying to fix those, the first thing I did was get rid of some heavy metal casing. Boy, are these things made like heavily armoured tanks.

I do like VU meters on every channel though! Not so much for adjusting gain (oh man, I can rant on about that one), but they really come in handy to quickly see which synth/sequencer/vocal channel is doing that solo in a live situation😏

So I made a VU board with a vintage chip everybody knows: the LM3915. As there is no audio passing through this chip I didn't care where it came from: Aliexpress FTW! I designed a simple PCB in cool black, and here you go: a DIY VU meter:
I mounted them to the faders, which I had to clean anyway, and I used the same feed point for the fader and the metering input (as in PFL), so no long lines running across the console..





..to be continued

STM32 in final 'product'

Let me write a round-up of this research on using an MCU as a speaker processor.

By this time I had switched from Keil MDK to STM32CubeIDE, which at this moment (November 2021) seems to be the weapon of choice for most STM developers.

Again, for future reference, a quick how-to on including a math/DSP library.

Together with your installation of the IDE you will also have installed some repositories of ARM software products. You are looking for the CMSIS/DSP libraries. The path will be something like /STM32Cube/Repository/STM32Cube_FW_F4_V1.25.2/Drivers/CMSIS/DSP/ in your home directory (on Linux). It will be similar on MacOS or Windooz.

Start a new project in the IDE, right-click the project's name and Create new source folder. Name this folder DSP. Now right-click again and select Import file system. Select both the source (Source) and the header (Include) folders as in the screenshot from the above-mentioned 'repository'. Click finish.

 

Now you will have both the source and header files in your project, but your compiler knows nothing about them, so: right-click the project and find the project properties. Find the compiler's include-path settings and add the new path to your include folder. Check the screenshot.

Now you can include all these nice ARM DSP tricks with: #include "arm_math.h". Try to build (that's embedded speak for compile)... Hundreds of errors, but if you check the first ones you will see that the compiler needs two preprocessor directives. Find those in the properties and add both: ARM_MATH_CM4 (we are using an M4 core) and __FPU_PRESENT = 1U (yes, we do have a floating point dohickey in our processor). Oh, and that's two/2 underscores! __


Now you can start programming your code making use of block processing, taking chunks of say 48 samples and using the increased efficiency of the CMSIS libraries.

Check all those filtering functions!!

There's loads to discover but let me disclose this one: 

Decades ago I met Ed Long (yup, the guy who 'invented' Time-alignment) at a pro-audio trade fair. He demonstrated his closed-box, single 15" sub. That was astounding (as was the excursion). His trick was to use electronics branded as ELF, and not filters, to EQ the sub.


Much later, Linkwitz developed a filtering method known as the Linkwitz Transform: a similar way to correct the response of a subwoofer.

I never got any good results with this method, as professional DSPs don't provide a way to import your own biquad coefficients. And using shelving filters and EQ will never give the required precision.


While using this CMSIS library function:

void         arm_biquad_cas_df1_32x64_q31 (const arm_biquad_cas_df1_32x64_ins_q31 *S, const q31_t *pSrc, q31_t *pDst, uint32_t blockSize)
 Processing function for the Q31 Biquad cascade 32x64 filter.

I managed to reproduce that sensation I had at that trade fair decades ago..Thanks Ed!

Do understand that you really, really need the above precision (in fixed point) to get any decent results! If you are into the math: understand how a biquad calculation works and how the representation of coefficients (which all sit close to the unit circle) will affect the calculations at very low frequencies.

(Anyone remember that at some point (2010?) we got this 'use for better LF response' check box in LondonArchitect?)

On Instagram you can find some pictures of cabinets where I applied this.

----->

Oh, and this is the latest board I developed, everything being stalled a bit by the global chip shortage:




Practical FIR

Part 3

As so often in engineering, when some new technology is introduced we go into 'nec plus ultra' mode. Suddenly you will find a huge range of applications in which everything revolves around that new tech. Same for FIR filtering.

The first appearance of FIR in (DIY) audio was as a means of room EQ-ing. The idea was to measure a system's response in a room, do quite some averaging, and apply an overall EQ that would correct all and everything. First off, there is of course no such thing as room EQ: the only way to change the acoustics of a room is by changing its dimensions and/or changing the absorbing/reflecting surfaces. But the solution is better than using only conventional (minimum phase) EQ, as we have been doing with our 31-band graphics.

Some of the room effects are minimum phase, and thus one could use a graphic EQ to try to address the problems. The definition of minimum phase behaviour dictates that your ideal EQ point will have both the necessary magnitude and phase compensation.

So for these minimum phase phenomena the EQ will NOT introduce 'PHASE ISSUES' (cringe).

However, loads of stuff you would like to EQ in acoustics is NOT minimum phase. Remember playing a festival where during nightfall the HF content of your PA changes drastically? And changing your graphic EQ doesn't help the way you wanted? Well, that's a non-minimum-phase phenomenon, and it calls for a linear phase EQ, which as we know now is only possible with FIR.

 

Another example would be (conventionally) EQ-ing across a line array. Hopefully everybody understands you can't apply different EQ settings to different elements of your line array, right? The different phase behaviour of the different elements would in that case wreak havoc on the overall dispersion of the array! The only way you can get away with this is by using linear phase EQ. Something that French manufacturer of brown boxes does understand.

So when and where to apply linear phase FIR filters, as a means of filtering without affecting phase, is something that needs thought!

It is not the miracle solution for everything; it is just another tool, and it has pros and cons like any other tool.

So yes, as has been explained in a million places: FIR filters can be computationally heavy, but why should that be mentioned over and over as a problem?

A more interesting difference, if you really want to set IIR versus FIR, is the way the precision of the calculations works out sonically. Fixed point vs floating point. Internal bit depth. The way rounding errors work in your algorithm. Audibility of the pre-ringing (or not?). That sort of thing. In the MCU/DSP part of this blog I will write a bit about this.

If all this is cleared, one big topic still is to be debated:

IS PHASE AUDIBLE?

(or, more precisely: does applying an allpass filter change the way (stereo) audio is perceived?)

One will understand that I (and my peers) do have an opinion contrary to that of the big names in our industry. This is the internet: I can't demonstrate, but I encourage you to find out for yourself!



Filtering by IIR or FIR

Part 1

Before we dive into the depths of FIR or IIR filtering, let's talk about audio in general.

We have this custom in audio engineering of analysing everything in the frequency domain; apparently something we inherited from electrical engineering. It does seem quite logical though; after all, music is all about sine waves, right?

Wrong! 

Look at some recording you made with your favourite DAW (Audacity, for example).

The signal (in stereo) can look something like this. It is the equivalent in voltage of the air pressure, or eventually the position of your eardrum, of the stuff we call sound. No sine wave in sight, is there?

Another argument against this preoccupation with analysis in the frequency domain could be: tell me the frequency of a handclap, or the whistling wind, or all the consonants we use...

Should we stop talking about sound in the frequency domain? No! But do realize we are using a model of reality when we do.

Years ago, through the work of Helmholtz and Fourier, the method of decomposing a chunk of a recording like the one in the picture above into its sinusoidal components was worked out. (In fact Helmholtz wrote a paper on what happens after this decomposition; look for 'auditory roughness'.)

And we now all have the computational power to use this analysis via the FFT. If you are put off by any mathematics whatsoever, think of the FFT as follows: imagine a prism. It decomposes a light beam into the different waves it is built up from, into those well known rainbows. The FFT engine works in a similar way, in that it decomposes a chunk of audio into its sinusoidal components.

Now on to the topic at hand.

In some other parts of this blog you can read about the use of a (dual) FFT analyser to get a frequency domain analysis of your device under test. In DIY loudspeaker circles a more common tool is Room EQ Wizard (REW), used to obtain what they call the frequency response (commonly abbreviated to FR). Mostly they forget about the 'phase' part of the Bode plot.

That isn't really a bad thing, as for the most part loudspeakers (and filtering) behave as minimum phase systems. Minimum phase (not minimal!) means that the phase response is dictated by the frequency (magnitude) response and vice versa.

So an FR measurement also holds the information to derive the associated phase response. No magic here, just plain laws of physics. So every deviation you see in the frequency response must have a corresponding phase deviation from that so-desired 0-phase line. (All under the premise of 'minimum phase' and LTI.)

There is one big exception: allpass filters! Allpass filters have a typical phase response WITHOUT anything reflected in the frequency response, and vice versa. If you think about it: an ideally crossed multi-way speaker system will have a flat FR and an overall allpass phase behaviour (as long as conventional, IIR filters are used). Maybe something like this for a 2-way system:

 

If you are still uncomfortable with the concept of phase, don't worry too much. Just don't confuse phase and delay. As defined elsewhere in this blog: we use the word phase (-difference) for that part of the overall frequency dependent time difference between two components where plain, simple 'excess delay' is stripped away. So:

Time alignment does not equal Phase alignment !!

('Time' alignment as a scientific term is a bit Einstein-esque; it is signal alignment.. but hey..)

...onto FIR filtering part2


Filtering by FIR

 Part 2

At different places in this blog you can find information on how to set up your sound system in such a way that it behaves as an overall allpass filter.
Using conventional IIR filters, or even plain old analogue filters if the components don't have a delay offset that needs to be dealt with.

Now: a long, long time ago, when I got my first DSP (somewhere in the beginning of the '90s), I had to do system setup just by ear. Measurement systems did exist (Meyer SIM) but were simply not affordable. In those days I ran components full range and set delay by ear. It is simple: just use common sense to get in the ballpark and tweak till audio nirvana.
Next you set the X-over as dictated by the textbooks, and there goes your audio nirvana out of the window.
Quickly we realised you need a way of phase alignment too, to get your filtering correct.
All rather common these days, but that sense of audio nirvana described above never really happened in these filtered systems.
And phase by itself is inaudible, so all the big names say...

Stubborn as we are, we kept on searching, using weak filter slopes (6 dB) and coaxial sources. Till at some point that "new thing", FIR filtering, entered our industry in the form of the brick wall filters in the Dolby Lake processor (2004).
Now that was eye-opening, most certainly after we realised that you could also develop your own filters and not use those dreaded brick wall filters, because they sound horrible. Unfortunately my business was way too small to get access to loading custom FIR into a Dolby Lake. So the quest for a platform with custom FIR started.
 
But first let's explain how FIR filtering works.
In fact it is a very simple operation once you are in the digital domain. Imagine a chunk of sampled data over time, maybe from some temperature sensor. Now if you would like to know the average over a certain period, you could add, say, 10 samples and divide by 10. Or, in other words, multiply each of those 10 samples by 0.1 and add the results to get the averaged output value. And keep doing this over and over again. This is called a moving average filter, and it is in fact a very crude high-cut FIR filter.
The number of 'taps' in the above filter is 10, and the filter 'kernel' is an array with 10 (the number of taps) positions, each holding the value 0.1.

You could imagine those filter kernels to be more elaborate, right? 
Now, with the same mathematics as used in the measuring setup described in another post, you can derive those filter kernels: the Fourier Transform!

Just use an iFFT to calculate the Impulse Response from a desired Frequency Response.
The process of using an Impulse Response as a filter kernel is also called Convolution, which you will have heard of as being used in reverbs, cabinet emulations etc.
 
    The Impulse Response of your desired Frequency Response will be used as the filter kernel!

Rephase
Now how can we do that without getting familiar with Matlab (like I had to 20 years ago)?
Well: thank Thomas Drugeon for rephase. It's a very nifty little program, around for some 10 years now, to help you develop your filter kernels. For free! (But do donate some if you like it!!)
 
 
 
So to round it up: FIR filtering is about using an Impulse Response derived from your desired Frequency Response. That by itself doesn't mean all FIR filters are linear phase. A lot of people in our industry use FIR as a synonym for linear phase (meaning not having any impact on the phase) and start talking about the 'latency' of these filters. This concept is both limited and wrong.
 
Let me explain in simple terms.
To make a linear phase filter you start off by taking the IR of a conventional filter, as described above. It will have its associated phase response. Next make a copy, reverse all coefficients and glue this to the original IR. The filter kernel is now totally symmetric around the middle coefficient and as such will not have any impact on the phase (uhmm, well, just believe me, there's heavy math behind this).
But it will also be twice as long as would be needed for a non-linear-phase filter.
So it takes some time for the signal to pass through such a filter; in fact half the length of the filter kernel (as expressed in number of taps).
This will not get any faster if processors run faster, so the expression 'latency', as we use it when talking about a PC and audio processing, is not correct.
 
Let's call it filter processing delay, and do understand this depends only on the sample freq. and the number of taps used for the filter.
 
Which is something we would have to discuss too, but you can find the background in different places. For now, do understand that both the sample freq. and the number of available taps determine the resolution of your FIR: the lower the freq. and the higher the Q of the filters, the more taps you will need.
 
In a real world live situation there is a limit to the time difference between the moment a sound is produced and the moment that sound comes out of the speakers. (From there on it's acoustic path length differences, but we understand those intuitively, right?) So you have to determine what is acceptable for your situation.

Examples?
Maybe the sound coming from your main system shouldn't arrive after the backline/drum kit? In a club situation, that is, where the levels of the backline/acoustic drums are indeed of the same order as the levels of your PA. So say something like 3 metres (or 10 ms) in this situation?
But what about that little processed wedge next to that grand piano? (I ended up putting it on a little riser to overcome the timing issues.)
 
 
...Maybe best to keep on using your ears!