Hans, this is not directly related to this beta release, but I have a couple of questions about audio processing.
1: Has there been any work on leveraging the power of GPUs in addition to, or instead of, CPUs to process audio? Is there more "bang for your buck", so to speak, in power and efficiency terms going this route?
2: What about AI? Or Machine-Learning? Can any of this powerful, new technology be used to improve audio processing?
Just wondered where things are innovation-wise with this amazing technology that you are bringing us.
1. On a modern many-core CPU you can already run dozens of Stereo Tool instances simultaneously. Some things would indeed be suitable for running on a GPU, but many others are not (for example, a compressor always depends on the previous value, while the whole idea of GPUs is that many cores do the same thing at the same time, so that wouldn't work). So there is not really anything to be gained by this, and it would cost a lot of development effort.
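To illustrate the point about compressors, here is a minimal sketch (with made-up threshold, ratio, and smoothing parameters, not Stereo Tool's actual algorithm) of why a compressor resists GPU parallelization: each sample's gain depends on an envelope carried over from the previous sample, so the loop is inherently serial.

```python
def compress(samples, threshold=0.5, ratio=4.0, attack=0.01, release=0.001):
    """Toy feed-forward compressor; parameters are illustrative only."""
    envelope = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Envelope follower: the new value depends on the PREVIOUS one,
        # which is exactly the dependency that blocks parallel execution.
        coeff = attack if level > envelope else release
        envelope += coeff * (level - envelope)
        # Reduce gain once the envelope exceeds the threshold.
        if envelope > threshold:
            gain = threshold / envelope + (1 - threshold / envelope) / ratio
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

Because sample N's envelope is computed from sample N-1's, thousands of GPU cores can't each take a slice of the signal and run independently - they would all have to wait on each other.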
2. We're using AI for PhoneBooster! So yes. It's not easy though. In order to train an AI model you need to know what you want to achieve as the end result, and then you can train the AI to get there. For something like PhoneBooster that makes perfect sense: you can take the original audio, process it to sound as if it went through a phone line, and then train the AI to get the original back. But for other parts of processing it's not so easy. If you already have a way to get the result that you want, there's no need to train an AI anymore - and since the AI will never be perfect, a direct method will always outperform an AI trained to achieve the same result. I could imagine using an AI for things like a Declipper, similarly to how we did it for PhoneBooster. But we already have a Declipper... The Delossifier could be a good candidate; that could possibly be improved by AI training. If you have any other ideas, let me know!
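The training-data idea described above - degrade clean audio, then use the (degraded, clean) pairs as model input and target - can be sketched roughly like this. The one-pole filters and the 300-3400 Hz phone band are assumptions for illustration; this is not Thimeo's actual PhoneBooster pipeline.

```python
import math

def phone_line(clean, sample_rate=8000, low=300.0, high=3400.0):
    """Crude phone-line simulation: one-pole low-pass, then one-pole
    high-pass, to band-limit the signal to roughly 300-3400 Hz."""
    lp_coeff = 1.0 - math.exp(-2 * math.pi * high / sample_rate)
    hp_coeff = 1.0 - math.exp(-2 * math.pi * low / sample_rate)
    lp = hp_track = 0.0
    degraded = []
    for x in clean:
        lp += lp_coeff * (x - lp)            # attenuate above ~3400 Hz
        hp_track += hp_coeff * (lp - hp_track)
        degraded.append(lp - hp_track)       # attenuate below ~300 Hz
    return degraded

def make_training_pairs(clean_clips):
    """Supervised pairs: the degraded clip is the model input, the
    original clean clip is the training target."""
    return [(phone_line(clip), clip) for clip in clean_clips]
```

The key property is that the target is known exactly, because we started from the clean audio ourselves - which is why this setup works for PhoneBooster-style restoration but not for tasks where no "ground truth" output exists.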