This in-depth article about HDR imaging was written by Sven Bontinck, a professional photographer and hobby musician.
A matter of perception.
To be able to use HDR in imaging, we must first understand what dynamic range actually means. I sometimes notice people mistaking the contrast in pictures for dynamic range. The two concepts are related, but they are not the same. Let me start by explaining briefly how humans receive information through our eyes and ears. This is important because it influences the way we perceive what we see and hear and how we interpret that information.
We all know about the retina in our eyes, where we find the light-sensitive sensors, the rods and cones. The cones provide our daytime vision and the perception of colours. The rods allow us to see at low light levels and provide black-and-white vision. However, there is a third kind of photoreceptor, the so-called photosensitive ganglion cells. These cells give our brain information about the length of day versus the length of night, but they also play an important role in pupillary control. Every sensor needs a minimum amount of stimulus to be able to react. At the same time, every kind of sensor has a maximum amount it may be exposed to. Above that limit, certain protection mechanisms kick in to prevent damage to the sensors.
With our eyes we can see in very low light conditions thanks to the rods. We all recognize what happens when we stay in the dark for some time. Our pupils dilate as much as possible, and after a short time we start seeing more information where we first saw almost nothing, although we will hardly be able to distinguish colours in what we see. This is because the low-light-sensitive rods are doing the work at that point. But the dilation of the pupils is also an adaptive process that lets more light enter our eyes and lowers our minimum sensitivity level.
On the other hand, when we return to normal or brighter light conditions, our pupils constrict again to lower the amount of light that can enter our eyes. When we try to look at the sun without protection, everybody knows it is almost impossible to do so without squinting our eyelids. Again, this is an attempt to protect our retina by reducing the light even more than our pupils can. The photosensitive ganglion cells contribute to regulating the width of our pupils and work independently of the other photosensitive cells. They are directly connected via the optic nerve to the central nervous system and bypass the visual systems in our brain. The speed of constriction is about three times faster than the speed of dilation, which shows how important this protective system must be. It acts very fast and responds to the brightest part of the light entering our eyes, but it does so all by itself. It is not something we can control the way we move an arm or a leg, for example.
Protecting our eyes is linked with seeing black and white.
This may seem like a weird statement, but let me explain how it works. Now that we understand the basics of how our eyes and brain work together to sense light, and how they cooperate to protect our sensors, we can understand what differences in light levels will cause.
Since the most important and fastest protection mechanism exists to prevent us from being blinded by a strong light source, the brightest light entering our eyes will be perceived as white, while everything of lower intensity or luminance will be perceived as darker, depending on its level. Our eyes can see intensity differences of about 1000 to 1 in daylight (in digital terms, this is around 10 bits of information, because 2^10 equals 1024, which is about the same maximum value). A value of 1 will be the darkest value and will be perceived as black without any detail, whilst a value of 1000 will be seen as pure white. This is the dynamic range we can see at one single moment.
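The relation between a contrast ratio and a bit depth mentioned here is easy to verify; a quick, purely illustrative Python sketch:

```python
import math

def ratio_to_bits(ratio):
    """Number of bits needed to cover a given contrast ratio,
    treating each doubling of intensity as one extra bit."""
    return math.log2(ratio)

# The eye's roughly 1000:1 daylight range corresponds to about 10 bits,
# since 2**10 = 1024 just covers it.
print(round(ratio_to_bits(1000), 2))   # 9.97
```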
Everything outside that range will be perceived either as overly bright white or as black without any detail. As soon as the light conditions change, our pupils react correspondingly and adapt. Of course, we have to realise that if an object appears black at one moment, we can see more detail in it if we come closer and increasingly eliminate the influence of the brightest parts, because our pupils can dilate to let more light enter our eyes. Even holding your hand in the line of sight of the light source, shielding your eyes from the direct light, can increase the detail you see in parts that seemed completely black before.
So it is clear that our perception of light intensity is not absolute, but relative to the amount of light reflected from objects or emitted from light sources. At the same time, that perception depends on whether a light source is shining directly into our eyes or not. All this makes it pretty hard to give precise figures for the dynamic range of human perception. The fact that our rods have a much larger dynamic range than our cones (they can perceive a difference of about 1,000,000 to 1) adds to the complexity of this subject.
The more we look under low-light circumstances, the more the rods take over from the cones, so the dynamic range we see gradually increases as the light becomes darker, but only if there is no light source in our direct sight. The most important things to remember are these. One, the perception of light is constantly adapted to what we look at and to the amount of light entering our eyes. Two, direct versus indirect light makes a difference, because our rods cannot increase the dynamic range when a direct light source is shining into our eyes. And finally, the protection mechanism against overly bright light will always dictate the adaptation process and the perception of what our brain labels as white.
Different dynamic ranges to take into account.
This long introduction was necessary to understand that when using HDR in imaging, different dynamic ranges influence each other and need to be matched. Just as our human vision has a certain dynamic range, a camera also has a range it can capture during one given time span. Modern cameras measure light by using their CCD or CMOS sensor chip to evaluate the differences between the levels that enter the lens and thus reach the individual light-sensitive pixels on the chip. These levels are digitized into values that can be stored and manipulated. Compared with music, this process is the same as what happens in an analogue-to-digital converter when analogue sound enters the mic, goes to the soundcard and is then transformed by the ADC into digital numbers.
The kind of image file everybody knows is the standard JPG file. Simply explained, this file contains the light information of the three primary colours: red, green and blue. These three separate channels contain nothing but individual series of numbers representing the digitized brightness of each colour pixel. They are essentially three separate grey-scale images. Combined, they form a colour image with information about colour, brightness and also saturation, the last as an indirect consequence of combining the first two kinds of information.
The well-known JPG file stores that information with a dynamic range of 8 bits per colour. Combined, this means a 24-bit file. Since each colour has 8 bits to reproduce everything from black to white, there are 256 different values (2^8 = 256) that can be used for each primary colour level. The combination of the three colours produces 256*256*256 = 16,777,216 possible colours. At first sight this seems a lot, but we must not forget that this number of combinations does not stand for a very high dynamic range as well. The dynamic range is dictated only by the number of bits per colour, not by the combination of the three primary colours.
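The arithmetic behind these figures can be sketched in a few lines of Python:

```python
def levels(bits_per_channel):
    """Distinct values one colour channel can hold."""
    return 2 ** bits_per_channel

def total_colours(bits_per_channel, channels=3):
    """All combinations across the colour channels."""
    return levels(bits_per_channel) ** channels

print(levels(8))          # 256 values per primary colour
print(total_colours(8))   # 16777216 combined colours
```

Note that the dynamic range follows only from the first number: 16 million colours still means just 256 steps from black to white per channel.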
Back in the years when I was studying photography, I learned that a trained eye could distinguish about 130 to 140 different values from black to white on high-quality photo paper under normal, good lighting conditions. Any smaller number of values would be seen as separate bands in a gradient from black to white. Any larger number is sufficient to create a fluid gradient without any visible banding.
This means that a dynamic range of 8 bits (per colour) is sufficient to create images containing all information from black to white. Remember that 8 bits mean 256 levels that can be stored or reproduced, enough to cover the 130 levels we can differentiate between. However, let's not forget that this is the range a JPG file can contain; it is not the camera range we are talking about here, nor our human dynamic range. The JPG range is determined by the bit depth this standard is set to: 8 bits. As I wrote before, a modern camera always captures a (much) bigger difference between the brightest and the darkest value entering the pixels of the sensor. The reason this is important is that, although 8 bits at first seem sufficient to reproduce everything, they are not sufficient when we have to do calculations with that range.
Bits do make a big difference.
Working with 8-bit images causes certain problems when we have to shift the data for specific reasons. Shifting the colour (brightness) data upwards to brighten up details in shadows can cause banding, because the information with low values is recalculated to reach higher values. This way we start seeing more detail in the darkest parts. The curves function in Photoshop is the most obvious example of that technique. Compare this with what a compressor does to music: it raises the low-level sounds to a higher level so we can hear them more clearly. Back to imaging, there is one problem when we do this with 8-bit images: banding can occur if we go too far with our correction. The reason is simple. If we think again of the range of about 130 to 140 level differences we can differentiate between in a picture, we can end up inside that range if we stretch the low levels too far towards the higher levels. If, for example, a dark value of 13 is recalculated to a value of 15, we have just created a gap of two values between those two. Let's compare this with a 7-bit image to explain the problem.
To cover the same minimum and maximum perceived light intensity with 7 bits per colour, the steps between the levels need to be twice as large. This is because we only have 128 levels left to fill a gradient from black to white, not enough to make it fluid for our eyes. Now that we understand the importance of having enough bits, it is easier to understand why HDR images are useful when we want to shift colour and/or brightness data to manipulate or correct images afterwards. The higher the dynamic range a camera can capture and store, the more we can stretch the levels before banding occurs.
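The gap effect described above can be simulated: stretching quantized 8-bit values leaves output codes that no input can ever reach. A small illustrative sketch (the gain of 1.6 is an arbitrary example of a strong shadow lift):

```python
# Apply a shadow-lifting gain to every possible 8-bit value and see
# which output codes remain reachable.
gain = 1.6
outputs = sorted({min(255, round(v * gain)) for v in range(256)})
gaps = [v for v in range(256) if v not in outputs]

print(len(outputs))   # 161 reachable codes
print(len(gaps))      # 95 codes no input maps to: visible banding
```

With more bits per channel to start from, the same stretch leaves far fewer unreachable codes, which is exactly why a higher captured bit depth postpones banding.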
We are very sensitive to banding, and if it occurs, people see it immediately. There is a second big advantage HDR images give us. If a higher range of levels is captured, we can easily manipulate the image afterwards, provided we actually use the whole bit depth in our imaging program. With classic 8-bit images, the internal processor of the camera decides how the captured range is recalculated into those 8 bits (tone mapping), according to your own manual exposure settings or to some automatic or semi-automatic program settings. Your camera stores that already reduced bit depth into the file on your memory card or computer. By then, you cannot go back to the camera's original, higher bit depth anymore.
If you use RAW files, you use the complete internal dynamic range the camera is capable of capturing. To compare this with music again: think of recording in 16-bit CD quality and afterwards trying to manipulate it as 24-bit in a DAW. You gain no extra information by doing so. Interpolation will fill in the gaps, but information that is not there cannot be (re)calculated; it is only smoothed to prevent stepping. If, instead, you record your music in 24-bit, you have access to a much bigger dynamic range and a more detailed file.
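The "no extra information" point is easy to demonstrate: promoting 8-bit values into a 16-bit range changes the numbers, but not how many distinct levels exist. A purely illustrative sketch:

```python
# 8-bit samples promoted to 16-bit still hold only 256 distinct values.
eight_bit = list(range(256))
promoted = [v << 8 for v in eight_bit]   # scale each value into the 16-bit range

print(len(set(promoted)))   # 256: no new levels appeared
# Using all 65536 levels would require information that was never captured.
```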
Matching those different dynamic ranges.
Now that we understand a few bits about bits, pun intended, we start to see that each of these ranges (the camera's dynamic range, the dynamic range of the medium we use to view the image, and the constantly changing range of our human vision) needs some sort of standardisation to match one range to another. The most important thing to remember is that our human way of processing light determines how things are perceived, no matter what clever calculations or conversions are done to match the different ranges.
If, for example, we use an LED screen with a contrast ratio of 1,000,000 to 1 (some modern screens have such a dynamic range), this far exceeds the daylight range we can see, and we will lose information because the image information is spread over a wider range than 1000 to 1. At first sight this will look like a screen with very high contrast, but if you lose information because of it, the ranges are not matched very well. On the other hand, if we capture an image with more than 8 bits and tone-map this information into that range, it will contain more information than we would normally see. This can be a good thing if we don't exaggerate.
Pushing a range of 14 bits of information into an 8-bit one gives a very greyish, unsaturated image, albeit with much more information about the intensity differences that were present in the scene. Such an image often has to be enhanced by raising the contrast and the saturation. Otherwise it no longer looks natural, because we are used to seeing certain contrast ratios between the objects we look at. What we would normally see as a black shadow will still show a lot of detail, and that can look weird.
Used in moderation, however, it can enrich a picture to some extent. The effect is most visible in highlights, such as bright cloud formations, and in shadow areas, where many more nuances and details become visible that would otherwise be bleached out in the white parts or vanish into black.
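To illustrate why a tone-mapped image keeps shadow detail, here is a deliberately simplified sketch: a plain power (gamma) curve stands in for the far more sophisticated tone-mapping operators real tools use, and the values are invented for the example.

```python
def tonemap(value, in_bits=14, out_bits=8, gamma=2.2):
    """Compressively map a high-bit-depth linear value into a smaller
    range. A power curve keeps more shadow detail than linear scaling."""
    in_max = 2 ** in_bits - 1
    out_max = 2 ** out_bits - 1
    normalized = value / in_max
    return round((normalized ** (1 / gamma)) * out_max)

# A deep 14-bit shadow value:
shadow = 128                          # out of 16383
print(round(shadow / 16383 * 255))    # 2: linear scaling crushes it to near-black
print(tonemap(shadow))                # the gamma curve lifts it well above that
```

This is also why the result looks flat and grey at first: detail that "should" be black now sits in the visible range, so contrast and saturation usually need to be added back.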
Master Fader v2.0 is just around the corner and we want you to be prepared. There is one important thing iOS7 users should be aware of before this update happens. iOS7 has a new feature allowing the operating system to update apps automatically. These updates can happen before you’re even aware of them. This feature is great for consumer apps like games. However, for professional apps like Master Fader and MyFader, an automatic update could leave you at a show needing to do an unexpected firmware update. Since updating the firmware can take up to 15 minutes, this may cause serious problems by delaying setup or the start of your show. Obviously this is not something you want to happen and we don’t either.
Because we want you to be able to update when you want and on your own time, we strongly encourage you to disable this automatic app update feature on all of your iOS7 devices.
Here’s how to disable automatic app updates and not get caught off guard at your next gig:
If you decide to disable this feature as recommended, updates to Master Fader and MyFader will happen as they have before iOS7:
We also suggest backing up your iOS device to iTunes or iCloud before updating. Click here to learn how to back up and restore your content.
-Offer Valid in Canada Only- Are You In the U.S.? Click HERE
$100 Instant Savings on Mackie DL806 or DL1608
The DL806 and DL1608 Digital Mixers with iPad control redefine live sound mixing. And from November 1, 2013 until November 30, 2013 you can get $100.00 instant savings right at checkout. Just visit a participating authorized Mackie dealer in Canada to take advantage of this awesome deal.
-Offer Valid in the U.S. Only- Are You In Canada? Click HERE
The DL806 and DL1608 Digital Mixers with iPad control redefine live sound mixing. And from November 1, 2013 until November 30, 2013 you can get $100.00 cash back. Just visit a participating authorized U.S. Mackie dealer for this awesome deal. Speak with a store representative or download your rebate coupon and follow the instructions below.
To Receive Your Rebate
Purchase a Mackie DL806 or DL1608 Digital Live Sound Mixer from an authorized U.S. Mackie dealer between November 1, 2013 and November 30, 2013 to qualify for a rebate. Customers qualify for a $100 rebate per mixer. No other mixers qualify. Your rebate must be postmarked no later than December 15, 2013 to be eligible. Mail the fully completed rebate coupon, correctly indicating which mixer you purchased, along with the original box UPC codes and a copy of your receipt (keep the original) to the address below. (If you bought floor demos, you must send a copy of a store-printed receipt that indicates “no box”).
Mail Rebate Materials To:
Attn: DL Series
5235 Mission Oaks Blvd #211
Camarillo, CA 93012
Steinberg today releases WaveLab 8.0.3, a maintenance update for its WaveLab 8 and WaveLab Elements 8 mastering solutions.
The new update provides many improvements in areas such as loudness measurement, audio montage, plug-ins, user interface and key commands. The update also adds compatibility with Cocoa-based plug-ins and includes an updated version of Voxengo CurveEQ.
The update is available as a free download from the Steinberg support website.
Steinberg has released a new driver update for its UR22 and CI1 USB audio interfaces.
The Yamaha Steinberg USB Driver 1.8.3 for Windows resolves some minor issues and ensures full compatibility with Windows 8.1.
The Yamaha Steinberg USB Driver 1.8.1 for Mac resolves some minor issues.
Back in my university days, my very first DSP lectures were actually not about audio but about image processing. Due to my interest in photography, I have followed this amazing and ever-evolving domain over time. Later on, High Dynamic Range (HDR) image processing emerged, and besides its high impact on digital photography, I immediately started to ask myself how such techniques could be translated into the audio domain. And to be honest, for quite some time I didn't have a clue.
This image shows a typical problem digital photography still suffers from: the highlights are completely washed out, and the lowlights turn abruptly into black without containing further nuances. The dynamic range performance is quite poor, and this is not what the human eye would perceive, since the eye features both a higher dynamic range per se and a better adaptation to different (and maybe difficult) lighting conditions.
On top of that, we have to expect severe dynamic range limitations in the output media, whether that is a cheap digital print, a crappy TFT display or the limited JPG file format, just as examples. Analog film and prints have such problems in principle as well, but not to the same extent, since they typically offer more dynamic resolution and their saturation behavior is rather soft, unlike digital hard clipping. And this is where HDR image processing chimes in.
HDR processing typically distinguishes between single- and multi-image processing. In multi-image processing, a series of Low Dynamic Range (LDR) images is taken at different exposures and combined into one single new image that contains an extended dynamic range (thanks to some clever processing). Afterwards, this version is rendered back into an LDR image by utilizing special “tone mapping” operators, which perform a sort of dynamic range compression to obtain a better dynamic range impression, but now in an LDR file.
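The multi-image idea can be sketched in a few lines of Python. This is a deliberately naive merge, not the "clever processing" real HDR tools use: each 8-bit exposure is divided by its exposure time to estimate scene radiance, and the estimates are averaged with weights that distrust near-black and near-white (clipped) pixels. All numbers are invented for the example.

```python
def merge_exposures(pixels_per_exposure, exposure_times):
    """Naive HDR merge for one scene point captured at several exposures:
    linearize by dividing by exposure time, then take a weighted average
    favouring mid-tones, where the sensor response is most trustworthy."""
    def weight(p):                        # hat function: ~0 at the extremes
        return min(p, 255 - p) + 1e-6
    radiance = 0.0
    total_w = 0.0
    for p, t in zip(pixels_per_exposure, exposure_times):
        w = weight(p)
        radiance += w * (p / t)           # estimate of scene radiance
        total_w += w
    return radiance / total_w

# The same scene point at three shutter speeds; the long exposure drives
# it to 255 (clipped), so it contributes almost nothing to the estimate.
print(merge_exposures([40, 160, 255], [1/250, 1/60, 1/15]))
```

A tone-mapping operator would then compress the merged radiance values back into an 8-bit file for display.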
In single-image processing, one single HDR image must already be available, and then just the tone mapping is applied. As an example, the picture below takes advantage of single-image processing from a RAW file, which typically has a much higher bit depth (12 or even 14 bits with today's sensor tech) as opposed to JPG (8 bits). As a result, a lot of dynamic information can be preserved even if the output file is still just a JPG. As added sugar, such a processed image also translates far better across a wide variety of output devices, displays and viewing light conditions.
Hey Marseilles is everything a great indie band should be – a bit different, slightly quirky, left-of-center, with irresistible songs that burrow their way into your brain, set up camp, and refuse to leave. The band’s decidedly unconventional mix of folk and classical – or, as they themselves describe it, “folkestral” – has gained them an enthusiastic following from coast to coast and well beyond.
The band has been maintaining a busy touring schedule in support of their sophomore release, Lines We Trace. With a different venue in a different city almost every night, their touring setup needs to be mean, lean, flexible, and dependable. And Mackie’s compact mixers and powered monitors are made for the job. As accordion/keyboardist Philip Kobernik explains, the Mackie 402VLZ4 four channel ultra-compact mixer is an intrinsic part of his eclectic rig.
“I go back and forth between a Roland SR7 digital accordion, a Nord Electro, and a Moog Sub Fatty,” says Kobernik. “I send a mono feed from the Mackie mixer directly to the FOH engineer. Having the Mackie onstage with me allows me to control my own mix, and gives me a whole lot more control over my sound.”
Onstage, a second mono output from the mixer feeds a Mackie DLM12 12-inch two-way powered loudspeaker. “It really packs a wallop for such a small, lightweight package,” says Kobernik. “I use it mainly as my onstage monitor, but it’s come in really handy in some other situations too. We’ve done a number of semi-acoustic gigs at record stores and radio stations, and I’ll just plug my Nord Electro into the DLM12.”
The DLM12‘s onboard EQ and DSP adds to its versatility, says Kobernik. “Onstage, I keep the signal pretty dry, and use mainly the effects in my keyboards. But when we do those smaller shows, it’s great to be able to add a little EQ to boost the bottom end a bit, and maybe a touch of chorus or some reverb. It’s really handy to have those effects, right on the back of the unit.”
Catch Hey Marseilles when they come through your city. For more info on the band, including their music, videos, and touring schedule, visit heymarseilles.com
Cubase Elements 7, the smallest retail version of Steinberg's award-winning Cubase music production software, is now available as a free trial version for download.
Functionally identical with the retail version, the trial version will run for 30 days, so there’s plenty of time to record audio, try out the powerful recording, editing and mixing tools as well as the included instruments and effect plug-ins.
Try the new MixConsole with its integrated EQ/Dynamics channel strip modules, the Chord Track for easy chord management or the VST Amp Rack guitar tone suite and experience the highly intuitive Cubase workflow relished by professionals around the globe.
No registration is required. Just download, install and enter the world of Cubase.
As you might know, Apple's Mac OS X 10.9 has just been released. Currently, we are testing all Steinberg products for OS X 10.9 compatibility. The testing is still going on, so we can't provide you with the final results.
However, what is known so far is that all Steinberg software products (except for Cubase LE / AI / Elements 7.0.6) are affected by an issue with the CoreAudio2ASIO component when using audio hardware in class-compliant mode which may lead to dangerous digital noise. Some Steinberg hardware products need to be updated as well to ensure full compatibility with the new OS X version. We therefore recommend you not to upgrade to OS X 10.9 yet.*
We are already working on updates for affected products in order to make them available as soon as possible. We will keep you up to date.
*In case you are using Cubase 7 and want to update to OS X 10.9 anyway, please make sure to deactivate the "Displays have separate Spaces" option in the Mission Control Preferences.
Steinberg is happy to announce the availability of the Cubase Elements, Cubase AI and Cubase LE 7.0.6 maintenance updates.
The update focuses on visibility and workflow enhancements for MixConsole, including the ability to display channel names in two rows as well as one-click plug-in access. The new version also includes audio stability improvements under Windows, improved UAD compatibility as well as several resolved issues in different areas.
The update is now available for download from the Steinberg website. For detailed information please view the corresponding version history on the download page.
If you are preparing a track and something like “where is my damn 64dB/octave brickwall filter?” comes to mind, it might be because you are not at the start of a mixing process but are working on audio restoration instead. Or the sources might just be crap. Always remember: garbage in, garbage out.
I’ve just re-discovered preFIX.
Blowtorches? Out-of-control after parties? Lead singer tantrums?
Just how much can one mixer take?
We know the VLZ4 Series is tough. That’s why it’s a road warrior for live sound. In the latest Mixbuster mashup, the mad scientists here at Mackie bring back the competition to go up against the all-new VLZ4 to see if it remains the cream of the crop.
Who will prevail? Watch the video to find out.
As you might know, Windows 8.1 is about to be released. Currently, we are testing all Steinberg products for Windows 8.1 compatibility. The testing is still going on, so we can't provide you with the final results.
However, what is known so far is that some Steinberg hardware products need to be updated to ensure full compatibility with Windows 8.1. Our development team is already working on driver updates for the affected hardware products in order to make them available as soon as possible. Currently, we recommend you not to upgrade to Windows 8.1 yet and wait for the final testing results.
We will keep you up to date.
Sascha, are you a musician yourself, or do you have some other sort of musical background? And how did you get started developing your very own audio DSP effects?
I started learning to play bass guitar in early 1988, when I was 16. Bass is still my main instrument, although I also play a tiny bit of 6-string, but I’d say I suck at that.
The people I played with in a band in my youth were mostly close friends I grew up with, and most of us kept on making music together when we finished school a couple of years later. I still consider that period (mid-nineties) as sort of my personal heyday, musical-wise. It’s when you think you’re doing brilliant things but the world doesn’t take notice. Anyway. Although we all started out doing Metal, we eventually did Alternative and a bit of Brit-influenced Wave Rock back then.
That was also the time when more and more affordable electronic gear came up, so apart from doing the usual rock-band lineup, we also experimented with samplers, DATs, click tracks and PCs as recording devices. While that in fact made the ‘band’ context more complex – imagine loading in a dozen disks into the E-MU on every start of the rehearsal until we equipped it with an MO drive – we soon found ourselves moving away from writing songs through jamming and more to actually “assembling” them by using a mouse pointer. In hindsight, that was really challenging. Today, the DAW world and the whole process of creating music is so much simpler and intuitive, I think.
My first “DAW” was a PC running at 233Mhz, and we used PowerTracks Pro and Micro Logic – a stripped-down version of Logic -, although the latter never clicked with me. In 1996 or 97 – can’t remember – I purchased Cubase and must have ordered right within a grace period, as I soon got a letter from Steinberg saying they now finished the long-awaited VST version and I could have it for free, if I want. WTF? I had no idea what they were talking about. But Virtual Studio Technology, that sounded like I was given the opportunity to upgrade myself to being “professional”. How flattering, you clever marketing guys. Yes, gimme the damn thing, hehe.
When VST arrived, I was blown away. I had a TSR-8 reel machine, a DA-88 and a large Allen&Heath desk within reach and was used to run the computer as a midi sequencer mainly. And now, I could do it all inside that thing. Unbelievable. Well, the biggest challenge then was finding an affordable audio card, and I bought myself one that only had S/PDif in & outputs and was developed by a German electronics magazine and sold in small amounts through a big retail store in Cologne, exclusively. 500 Deutschmarks for 16 bits on an ISA card. Wow.
The first plugin I bought was Waves Audio Track, sort of a channel strip, which was a cross-promotion offer from Steinberg back then, 1997, I guess. I can still recall its serial number by heart.
Soon, the plugin scene lifted off, and I collected everything I could, like the early mda stuff, NorthPole and other classics. As our regular band came to nothing, we gathered our stuff and ran sort of a small project studio where we recorded other bands and musicians and started using the PC as the main recording device. I upgraded the audio hardware to an Echo Darla card, but one of my mates soon brought in a Layla rack unit so that we had plenty of physical ins and outs.
You really couldn’t foresee where the audio industry would go, at least I couldn’t. I went fine with this “hybrid” setup for quite a long time, and did lots of recording and editing back then, but wasn’t even thinking of programming audio software myself at all. I had done a few semesters of EE studies, but without really committing myself much.
Then the internet came along. In 1998, I made a cut and started taking classes in Informatics. Finished in 2000, I moved far away, from West Germany, to Berlin and had my first “real” job in one of those “new economy” companies, doing web-based programming and SQL. That filled the fridge and was fun to do somehow, but wasn’t really challenging. As my classes included C, C++ and also Assembler, and I still got a copy of Microsoft’s Visual Studio, I signed up to the VST SDK one day. At first, I might have done pretty much the same thing as everybody: compile the “gain” and “delay” plugin examples and learn how it all fits together. VST was still at version 1 at that time, so there were no instruments yet, but I wasn’t interested much in those anyway, or at least I could imagine writing myself a synthesizer. What I was more interested in was how to manipulate the audio so that it could sound like a compressor or a tube device. I was really keen on dynamics processing at that time, perhaps because I always had too few of those units. I had plenty available when I was working part-time as a live-sound engineer, but back in my home studio, a cheap Alesis, dbx or Behringer was all I could afford. So why not try to program one? I basically knew how to read schematics, I knew how to solder, and I thought I knew how things should sound like, so I just started out hacking things together. Probably in the most ignorant and naive way, from today’s perspective. I had no real clue, and no serious tool set, apart from an old student’s copy of Maple and my beloved Corel 7. But there were helpful people on the internet and a growing community of people devoted to audio software, and that was perhaps the most important factor. You just weren’t alone.
For your “digitalfishphones” series you developed two dynamics processors: a compressor and a transient shaper. How do such devices relate to one another, and what makes them stand out from each other (meaning both their application and their technical design underneath)?
Of course, I have to look at it from today’s perspective, which is very different from the golden “digitalfishphones” days, a whole decade ago.
In hindsight, it was all a happy accident. I had only a coarse idea of how stuff worked. I knew a bit of electronics, I’d always done DIY projects since my childhood, but I was missing lots of fundamentals at the time I wrote plugins like endorphin and dominion, especially the math to do proper circuit modeling. Things went better with the fish fillets, but they’re also largely based on empirical programming, so to speak. If there’s one thing that has guided me throughout all these years, it’s my sense of hearing, my listening experience and my intuition. In fact, these senses – and “skills”, if you want – still serve as my main rules.
In former times, almost 80 to 90 percent of the time spent on the plugins was listening and tweaking, apart from just trying things out, even if they looked wrong on paper. Perhaps that’s the “magic” of the plugins: their interior is far from optimised; they’re not streamlined processors. They’re not just addressing one main issue, but rather doing lots of stuff, and sometimes perhaps doing unnecessary things. For example, there is this saturation control on Blockfish. A similar thing is part of endorphin’s interior. The way I did saturation during that period was common to all dfp plugins: take a low-shelf filter, feed the output to an asymmetrical clipping stage that uses a dynamic DC signal, and let the output of that clipper feed a second low-shelf filter. Both filters are mirror images of one another. Sometimes this setup gets enriched by global feedback or a complete compression stage as a “nested” element. Everything that adds up to some “signature sound” seemed valid to me. I wanted things a bit more complex and unpredictable. I grew up with analog audio gear; I know how that sounds. And I had a feeling that there has to be more to a sound than just serving up an algorithmic solution to a problem like “when going over the threshold, lower the audio by one-third”. And I still think the same way today, although I have acquired a lot more technical background in the meantime and probably found out about the origin of “soul” in some of the real-life processes that interested me most. But, sure, the learning goes on.
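To make that chain concrete, here is a minimal sketch of the shelf → asymmetric clipper → mirrored shelf topology described above. It assumes simple one-pole shelves and a tanh clipper with a static DC offset; the actual dfp plugins use a dynamic DC signal and different filters, and every parameter value here is purely illustrative.

```python
import numpy as np

def one_pole_lowpass(x, fs, fc):
    """Simple one-pole lowpass used to split off the low band."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros_like(x)
    state = 0.0
    for n, xn in enumerate(x):
        state = (1.0 - a) * xn + a * state
        y[n] = state
    return y

def low_shelf(x, fs, fc, gain):
    """Low shelf built as 'dry + scaled low band' (gain is linear)."""
    return x + (gain - 1.0) * one_pole_lowpass(x, fs, fc)

def asym_clip(x, dc):
    """Asymmetrical soft clipper: a DC offset shifts the signal into
    the curve and is subtracted again, which creates even harmonics."""
    return np.tanh(x + dc) - np.tanh(dc)

def dfp_style_saturation(x, fs, fc=250.0, shelf_gain=2.0, dc=0.2):
    pre = low_shelf(x, fs, fc, shelf_gain)               # emphasize lows
    clipped = asym_clip(pre, dc)                         # nonlinearity
    return low_shelf(clipped, fs, fc, 1.0 / shelf_gain)  # mirrored shelf

# usage: saturate a short 110 Hz test tone
fs = 48000
t = np.arange(fs // 10) / fs
sig = 0.5 * np.sin(2 * np.pi * 110.0 * t)
out = dfp_style_saturation(sig, fs)
```

The mirrored second shelf undoes the spectral tilt of the first, so the net effect is mostly the *distortion* the tilt provoked, with the lows driven harder into the clipper than the highs.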
What did you learn during your time developing audio effects for Magix/Samplitude? How have DSP designs changed over the years, and what is the challenge today?
The time at Magix and the projects I had been involved in were perhaps a bit unusual, mainly because of the constellation, or at least I always felt that way. I was originally hired to do web-based development, but when they heard about the dfp plugins having caused a bit of a stir, they offered me a job in the Samplitude team, doing DSP exclusively for the Magix products. It was quite clear that my contribution was and would always be a bit different from their existing portfolio. The main DSP team consisted of “real” engineers who were into some serious stuff, while I was like a code punk and tweak head, trying to make the best of it. While these two approaches grew closer together throughout the years, I was very aware that I needed to improve my own skills, mature and become a better team player.
What I definitely learned was how to acquire new methods, work discipline, organize myself and cope with increasing demands from the market and the target audience. I soon found out that making audio software on a commercial scale must be quite different from a freeware show, as I had formerly just felt like building a bunch of plugins out of curiosity and releasing them into the wild, without thinking much about what happens next. Suddenly, the learning process included a great deal of responsibility, reliability and facing your own mistakes. Such things always come back to you when you least expect them, like for instance when someone inherits your code five years later and wants to do a platform port.
And a feeling of responsibility might even remain after you’ve left a company. A challenge, yes, but perhaps a personal one…
Of course, DSP designs did change massively throughout the last decade. While things were mostly about doing basic processing jobs in the earlier years, we now have hundreds of tools that specialize in one job, tools that accurately emulate a certain behaviour or signature sound, which is far beyond basic processing. In a nutshell, I’d say recent DSP work often has “soul”, in the most positive way. Not only have computers become faster, it’s obviously also the knowledge of the developers that has grown. For instance, I had done a tape simulation in 2003, for Samplitude. But it wasn’t possible to do it really faithfully, considering the DSP power of a typical PC at that time, and also considering the time schedule for developing a full-blown emulation of all the processes involved. At least I couldn’t, at that time. It always bothered me somehow.
At u-he, I recalled a couple of ideas that were on my mind all these years but just never materialized. I basically knew how tape machines worked, but it soon turned out I had to invest almost a whole year of research and dive more deeply into the depths of magnetic recording, more than I first thought. Considering the modeling we felt we should do in the Satin plugin, it is great that current CPUs allow for more complex algorithms, process parallel as vectors, and run stuff at many times the original sampling rate, so that you can, for instance, implement a high-frequency bias oscillator within a virtual tape machine model to linearize a virtual hysteresis curve of a record-head model, including various dynamic side effects, instead of just using a polynomial and doing nonlinear waveshaping.
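The bias idea mentioned above can be demonstrated with a toy model. This is only a sketch of the principle, assuming a crude dead-zone nonlinearity as a stand-in for an unbiased tape's flat magnetization region; nothing here reflects Satin's actual model or parameters.

```python
import numpy as np

def dead_zone(x, width=0.3):
    """Toy 'unbiased tape' nonlinearity: small signals vanish,
    like the flat middle of a magnetization curve."""
    return np.sign(x) * np.maximum(np.abs(x) - width, 0.0)

fs = 48000
os_factor = 8                  # oversample so the bias tone fits below Nyquist
fs_os = fs * os_factor
t = np.arange(fs_os // 10) / fs_os

audio = 0.25 * np.sin(2 * np.pi * 100.0 * t)     # quiet programme material
bias = 0.6 * np.sin(2 * np.pi * 80000.0 * t)     # ultrasonic bias tone

raw = dead_zone(audio)            # without bias: signal lost in the dead zone
biased = dead_zone(audio + bias)  # bias pushes the signal through the curve

# crude 'playback head': a short moving average that removes the bias
kernel = np.ones(os_factor * 4) / (os_factor * 4)
recovered = np.convolve(biased, kernel, mode="same")
```

Without bias, the quiet 100 Hz tone falls entirely inside the dead zone and vanishes; riding on the ultrasonic bias it passes through the nonlinearity almost linearly, and averaging the bias back out recovers it. That, in miniature, is why the bias oscillator linearizes the record path.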
So, as technology allows us to invent or apply more sophisticated algorithms, I believe one challenge is putting a definite end to the hardware-vs.-software debate, since we’ve all come a long way since the beginning of computer-based recording and native signal processing, and we’re approaching a point where it’s often only a matter of workflow and ergonomics rather than sound.
Another – and perhaps equally important – challenge is the product idea itself. You can do everything in software, and you have low cost structures compared to hardware. So, product designers and developers are tempted to put lots of stuff into their software. “We do it because we can”, you know. I see a big challenge in making a product that a) produces excellent sonic results, b) allows for sufficient “artistic freedom”, and is c) still easy to use and intuitive right from the start. With hardware, the parts directly available to the user are usually critical cost factors, like knobs, switches, displays and the available front-plate space for a given form factor. So, when a device turns out easy to use, its design process has already passed through economic decisions. This constraint is usually non-existent in audio-software interface design. We tend to put in what we like, or what our customers want us to put in. Depending on a maker and its target audience, this might work out fine, but sometimes it won’t, and then you have these feature-bloated monsters with cluttered UIs and a parameter set that requires reading 50 manual pages and watching 10 YouTube tutorials. Keeping it all under control, and following a consistent set of rules, THAT is a challenge!
I agree so much with you about your product idea/design excursion! There is so much difference between using a “one knob, one job” interface and twiddling through countless menu options of a “jack of all trades” approach. Talking about modeling: how important is that from your point of view, and how does it compare to rather empirical approaches?
I’m pretty sure there are a lot of products on the market that claim they’re using modeling techniques but rather follow an empirical approach under the hood. People might find that dubious, but I think it could still be valid up to a point where just the sonic results matter, and when the product is supposed to offer a relatively narrow parameter range and deliver a very specific sound. One is often tempted to think circuit or physical modeling is the more precise way and far superior.
But quite often the contrary is true, since one has to weigh the computational costs of solving an immense number of equations in realtime against a perhaps much smarter process that is less ‘correct’, but lighter on the CPU and probably just spot-on and musical right away. Physical or circuit modeling would of course be appropriate when you’re aiming at a more generic solution, for instance when designing a product like a guitar tube amplifier capable of tones ranging from clean twang to ultra high-gain. Or when you’re trying to mimic what happens across a snare drum head by implementing waveguide-mesh techniques or following the Huygens–Fresnel principle.
But chances are your model gets so huge and complicated, and includes so many nonlinearities, that it makes realtime calculation impossible. Even industry-standard Spice models have difficulty accurately modeling non-linear circuit parts and processes. Then comes the point where you have to strip things down and replace them with black boxes or general high-level approaches. I personally am quite happy with a mixed approach: some nice modeling going on, but still deciding on an artistic scale and letting my ears, my intuition and my personal experience be the final judges.
Analog vs. digital, and considering the CPU power available today: are we there yet, or does the available technology still restrict us when designing digital audio effect processors? Or in which regards is the digital domain already superior?
My assumption is that as long as computers keep getting faster, DSP algorithms will also increase in quality. Today, everybody knows about frequency warping in digital IIR filter designs, which wasn’t considered a ‘problem’ at all a decade ago. People back then were just glad that computers could run filter algorithms at all within a complex DAW song project. And now, a typical DAW channel strip has to implement filters emulating the finest analogue studio mixing desks ever made, including nonlinearities, high bandwidth and zero-delay-feedback topology. Customer demand has grown, which is basically a good thing. With current software technology, chances are a typical DAW production sounds way better than 10 years ago.
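For readers wondering what that warping is: the bilinear transform, the standard route from an analog filter prototype to a digital IIR filter, compresses the frequency axis towards Nyquist, so an unwarped design lands its cutoff too low. A small sketch of the mapping and the usual prewarping fix — generic textbook math, not any particular product's code:

```python
import numpy as np

fs = 44100.0  # sample rate, arbitrary example

def bilinear_warp(f_analog):
    """Digital frequency actually realized when an analog prototype
    frequency goes through the bilinear transform without prewarping."""
    return (fs / np.pi) * np.arctan(np.pi * f_analog / fs)

def prewarp(f_target):
    """Analog design frequency needed so that the bilinear transform
    lands exactly on f_target."""
    return (fs / np.pi) * np.tan(np.pi * f_target / fs)

# low frequencies barely move, high ones get squeezed noticeably:
for f in (100.0, 1000.0, 10000.0, 18000.0):
    print(f, bilinear_warp(f), bilinear_warp(prewarp(f)))
```

At 100 Hz the error is negligible, but an 18 kHz filter designed without prewarping ends up several kilohertz low — which is exactly why analogue-style EQ emulations have to take this into account.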
In my opinion, the gap in terms of audio quality is closing, if it hasn’t already closed. I sold my last big studio desk 8 years ago, and frankly, I’m not looking back. My mixes have become significantly better, although they were worse when I first went DAW-only. Although, I have to admit, I’m not always sure whether that’s only because of software quality or because my mixing abilities adapted and improved…
Digital tools are surely superior in fields that don’t rely on algorithms based on analogue counterparts. This especially includes techniques like FIR-based filters or processes based on the Fourier transform. Analog circuits can’t do linear-phase filtering, nor can they do high-quality time or pitch correction. Basically, the entire field of audio restoration, sample-accurate manipulation and of course forensics is unthinkable without modern digital tools.
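As a concrete example of the linear-phase point: a windowed-sinc FIR lowpass whose symmetric impulse response gives an exactly constant group delay, something no causal analog (minimum-phase) circuit can achieve. This is a generic textbook design with arbitrary example parameters, not code from any product mentioned here:

```python
import numpy as np

def linear_phase_lowpass(num_taps, fc, fs):
    """Windowed-sinc FIR lowpass. Symmetric taps guarantee exactly
    linear phase, i.e. a constant delay of (num_taps - 1) / 2 samples."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = (2.0 * fc / fs) * np.sinc(2.0 * fc / fs * n)  # truncated ideal lowpass
    h *= np.hamming(num_taps)                         # window tames the ripple
    return h / h.sum()                                # unity gain at DC

# a 101-tap lowpass at 2 kHz for 48 kHz material
taps = linear_phase_lowpass(101, 2000.0, 48000.0)

# apply with a plain convolution; every frequency is delayed by
# exactly 50 samples, so transients keep their shape
noise = np.random.default_rng(0).standard_normal(4096)
filtered = np.convolve(noise, taps, mode="same")
```

The perfect tap symmetry is the whole trick: each frequency component is delayed by the same amount, which is precisely the behaviour the analog domain cannot deliver.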
Restrictions in the digital realm… well, I see those more in terms of ergonomics and haptics. Since I grew up with analog gear, large, bulky devices, long-throw faders and heavy knobs, I sometimes miss that extra touch, the space and general overview. DAW controllers still don’t give me that feeling, since there’s still the mouse and the keyboard demanding my attention.
You’ve already announced two brand-new dynamics processors: a tape emulator and a compressor. What makes them different, and how do they stand out from the crowd?
The compressor (currently named Presswerk) wasn’t meant as a complete product initially. We wanted to modernize and generalize our dynamics tools in the u-he code framework, so that we could easily take something ‘off the shelf’ for a particular product, like a synth, for instance. Compressors generally consist of many interconnected building blocks, so we suddenly had numerous modules and sub-modules, which we arranged in an Über-Compressor fashion, out of curiosity. It sounded really good, although we didn’t do any rocket science; we just implemented reasonably good ingredients. We then handed an early alpha on to our testers and were overwhelmed, as they said it sounded huge, fat, had balls, whatever. I wasn’t sure what it was, but it could have been those extra things like the saturation and warmth controls acting directly on the gain-reduction process. Presswerk was then destined to become an actual product and suddenly needed a definite parameter set, a concrete workflow, a proper GUI, all that stuff. This is still going on; we’re still in the early stages of this product, so it might take some time until we know what it’s going to be in the end.
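To illustrate what those building blocks typically are: a deliberately minimal feed-forward compressor — envelope follower, gain computer, gain stage — which is roughly the skeleton any compressor elaborates on. All names and values here are illustrative; this is not Presswerk's internals.

```python
import numpy as np

def simple_compressor(x, fs, threshold_db=-20.0, ratio=4.0,
                      attack_ms=5.0, release_ms=100.0):
    """Minimal feed-forward compressor:
    envelope follower -> gain computer -> gain applied to the input."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for n, xn in enumerate(x):
        level = abs(xn)
        coeff = atk if level > env else rel       # attack vs. release
        env = coeff * env + (1.0 - coeff) * level # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        # gain computer: above threshold, keep only 1/ratio of the overshoot
        gr_db = over * (1.0 / ratio - 1.0) if over > 0.0 else 0.0
        out[n] = xn * 10.0 ** (gr_db / 20.0)      # apply gain reduction
    return out

# usage: a loud tone gets squashed, a quiet one passes untouched
fs = 48000
t = np.arange(fs) / fs
loud = 0.9 * np.sin(2 * np.pi * 100.0 * t)
quiet = 0.01 * np.sin(2 * np.pi * 100.0 * t)
loud_out = simple_compressor(loud, fs)
quiet_out = simple_compressor(quiet, fs)
```

Each of these three stages can be swapped, nested or fed back on, which is how a handful of blocks turns into the kind of module zoo described above.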
Satin, our tape machine, was also a bit of a happy accident. I had been carrying ideas for a generic tape device around for almost 10 years, and one day I was fiddling with virtual hysteresis and high-frequency bias signals to linearize an operation curve, when suddenly my tiny model of a record head’s voice coil and magnetic tape came to life. Still scratchy and somewhat misaligned, but the more I got it ‘right’, the better it could cope with real-world data and parametrization.
Having generic models of tape-recorder ingredients is the heart of Satin. We never wanted to simulate one specific device, so everything should be fully customizable. We have continuous speed ranging from domestic hi-fi to pro-studio, we have exchangeable industry-standard equalization curves, and we can even change the gap width of our virtual heads, which has a great impact on frequency response and on the typical peaks, dips and resonances along the spectrum, for instance the head bump. That’s why we communicate it as a tape construction kit. You can tweak the model to the point where it becomes your personal custom tape device. We’re engineers to the core, but we also love to make way for the artist in us.
Steinberg and Yamaha Commercial Audio are teaming up at this year’s SAE Alumni Convention in Amsterdam, showing their recent range of collaboratively developed products, like the Nuage system solution, alongside other big-name products from both brands.
Date: October 24 – 25, 2013
Location: MuzyQ Amsterdam
Opening times for the public:
Thursday, October 24, 10 a.m. – 6 p.m.
Friday, October 25, 10:30 a.m. – 4 p.m.
Entrance is free!
Steinberg is pleased to announce a new firmware update for UR28M and UR824. The version 2 update adds new functionality to the USB audio interfaces such as new guitar amp effects and iPad connectivity. On top of this free update, all registered UR users are eligible for a free, downloadable version of Cubase AI 7.
Steinberg is happy to announce the release of the latest VST Sound Instrument Set, Allen Morgan Signature Drums Volume 2. Delivering new sounds for Groove Agent ONE, the drum machine available in previous versions of Cubase, the Nuendo Expansion Kit and Sequel 3, this content set features over 700 pre-processed hybrid sounds and 150 MIDI loops.
“With this latest edition of drum kits for Groove Agent ONE I set out to design sonic tools that will allow you to create a truly unique listening experience. Drums and sounds vary from dark and ambient soundscapes to 'in your face' ripped up electronic sounds — and plenty of surprises in between. These sounds and accompanying loops will fit perfectly in any production looking to stand out, grab the listener by the ears and make a statement,” says US-based producer and mixing engineer Allen Morgan.
Steinberg today starts the WaveLab survey 2013. Any WaveLab user is cordially invited to participate in this survey in order to contribute to the further development of the world's leading mastering platform!
It will only take about 4–5 minutes to complete the survey. All responses are strictly anonymous; the data is coded and will be reported only in aggregate.
We welcome your engagement and support, which allows us to tailor WaveLab even better to your individual needs and workflows.
In recent history, I’ve constantly extended and improved my Stateful Saturation approach, and with ThrillseekerVBL I’ve managed to introduce authentic analog-style distortion right into VST land, which is what I’ve always had in mind and dreamed of. And there’s so much overwhelming feedback on that – thank you sooo much!
For quite a while, I’ve dreamed about a brand new series of plug-ins which will combine the strengths of both worlds: analog modeling on the one side and pure digital techniques on the other – incorporating techniques such as look-ahead, FIR filtering or even stuff from the digital image processing domain, such as HDR (High Dynamic Range) processing.
High Dynamic Range (HDR) processing is something pretty much new in the audio domain. While there are lots of theories and implementations available for HDR imaging, the concept has so far been only sparsely adopted for audio. SlickHDR is going to be a very first approach to applying high dynamic range processing to audio within a VST-compatible plug-in.
This highly successful workshop series has toured cities and churches across North America for more than eleven years. With a handful of dates and private training sessions remaining in 2013, Mackie has already confirmed their participation for 2014.
Designed for the church audio volunteer, live sound personnel, media and praise band members, each workshop is produced by industry veteran Hector La Torre and presented by sound reinforcement engineer Mike Sokol (see bios at www.fitsandstarts.com). The workshops offer attendees a rare opportunity to learn about church sound systems in a professional manner.
The HOW-TO Church Sound Workshops offer several training categories in each session, including: Microphones, Mixing Consoles, Speakers & Amplifiers, Signal Processors/Effects, Recording, Podcasting and more. Each session is a full 8+ hour hands-on Live Sound for Worship workshop with fully integrated equipment, as well as a Recording and Podcasting Seminar.
“Mackie has a long history of involvement with the worship market, and our products are a big part of delivering the message clearly to people of faith all across the world,” says Mackie Marketing and Communications Manager Shaunna Krebs. “We’re really pleased to be involved in the HOW-TO workshops, and look forward to doing our part to improve church sound.”
“We’ve worked with Mackie equipment for quite a few years now, and couldn’t be happier with the gear,” added Hector La Torre of Fits and Starts. “The price-to-performance factor makes the entire product line-up – from powered loudspeakers to analog and digital mixers – a perfect fit for houses of worship.”