Quote:
Quote:
Comparison preset for classic vs. new. The only observation I have at the moment is that I can hear some minor distortion remaining in the classic preset, but that should be correctable with a little more time. CPU load is lower, which is the important point of this.
To me, the new MB version really sounds much more natural and - I don't know how else to describe it - 'musical'. If I switch to the old MB, it sounds 'restrained', 'held back'. Not for everything - many tracks sound very similar - but it's exactly the tracks that I could never get to sound right with the old MB that really sound better with the new one.
Btw - for a closer approximation I lowered the new MB output level to -2 dB, but even then this difference was still there.
Might be confirmation bias: you believe that the new stuff is better, so you "hear" that it's "better".
I can likely mimic other presets, and as for the one I posted for comparison, I didn't spend enough time on it. What I was really trying to show was that a reasonably close approximation can be had for less CPU load. Given that it sounds good enough for me, why MUST I choose the new stuff over the old stuff? Because the old one is no longer the preferred choice? I've tried listening to both objectively, and if there is a difference, it is not significant enough to justify a 20% relative increase in CPU usage.
The choices you're making with your product are strikingly similar to Intel's removal of optimized SSE2 code and SSE3 options from the compiler and IPP. In both situations, end users are expected to either upgrade or lag behind.
Now, it's been a real struggle to investigate the SSE levels. I felt that was easier to chase than the other possibility, which centers on how you test filters: with "best case scenario" unit testing that optimizes a specific section of code in isolation. When you test that way, an implementation that uses more memory might come out as the optimal choice, but that doesn't account for what happens when all the pieces of the product interact in a more memory-sensitive environment. There, the amount of available cache may be the limiting factor, and the "sub-optimal" implementation at the unit-test level may be the optimal one for the software package as a whole.
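To make that concrete, here's a rough sketch of the two ways of timing the same filter - once on its own, the way an isolated unit test sees it, and once with something else churning through a couple of megabytes between blocks so the filter's working set keeps getting pushed out of cache. This is my own toy code, not anything from the actual product; the Biquad, the buffer sizes, and the 256-sample block are made up purely for illustration.

Code:
// Toy benchmark, illustration only: time a filter in isolation vs. with
// simulated cache pressure from "the rest of the product".
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for "a specific section of code" that a unit test would optimize.
struct Biquad {
    float b0 = 0.2f, b1 = 0.2f, b2 = 0.2f, a1 = -0.5f, a2 = 0.1f;
    float z1 = 0.0f, z2 = 0.0f;
    float tick(float x) {
        float y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

// Run the filter over 'audio' in 256-sample blocks. If 'pressure' is given,
// touch that buffer between blocks so the filter's working set keeps getting
// evicted - a crude stand-in for everything else competing for the cache.
static double run_ms(std::vector<float>& audio, std::vector<float>* pressure) {
    Biquad f;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t block = 0; block < audio.size(); block += 256) {
        std::size_t end = std::min(audio.size(), block + 256);
        for (std::size_t i = block; i < end; ++i)
            audio[i] = f.tick(audio[i]);
        if (pressure)
            for (float& v : *pressure) v += 1.0f;
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::vector<float> audio(1 << 20, 0.5f);     // ~4 MB of samples
    std::vector<float> pressure(1 << 19, 0.0f);  // ~2 MB of "other" data

    std::printf("isolated (unit-test style): %.2f ms\n", run_ms(audio, nullptr));
    std::printf("with cache pressure:        %.2f ms\n", run_ms(audio, &pressure));
    std::printf("(checksum %f, keeps the optimizer honest)\n", audio[0] + pressure[0]);
}

On a small-L2 machine the second number is the one that tracks real-world behaviour, and it can rank two implementations in the opposite order from the first.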
It bears repeating that the performance issues show up on systems with 1MB of L2 cache. That crosses AMD and Intel, with the Intel systems being the early Core-based laptops. The problem is that if this is what's going on - if it's not the SIMD / SSE level but the actual implementation - then it's a procedural issue that would need a change in perspective and approach.
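For what it's worth, here's a sketch of how one could check whether a given machine falls into that 1MB L2 class. CPUID leaf 0x80000006 reports the L2 size in KB in ECX[31:16] on both AMD and Intel parts; the l2_cache_kb helper and the 1024 KB cutoff are just my own illustration of the figure mentioned above, not anything the product does.

Code:
// Sketch only: report the L2 cache size so you can tell whether a machine
// is in the small-L2 class where the regression shows up.
#include <cstdio>

#if defined(_MSC_VER)
  #include <intrin.h>
#else
  #include <cpuid.h>
#endif

// CPUID leaf 0x80000006: ECX[31:16] = L2 size in KB, on both AMD and Intel.
static unsigned l2_cache_kb() {
#if defined(_MSC_VER)
    int regs[4];
    __cpuid(regs, 0x80000000);
    if (static_cast<unsigned>(regs[0]) < 0x80000006u) return 0;
    __cpuid(regs, 0x80000006);
    return static_cast<unsigned>(regs[2]) >> 16;
#else
    unsigned eax, ebx, ecx, edx;
    if (__get_cpuid_max(0x80000000, nullptr) < 0x80000006u) return 0;
    if (!__get_cpuid(0x80000006, &eax, &ebx, &ecx, &edx)) return 0;
    return ecx >> 16;
#endif
}

int main() {
    unsigned kb = l2_cache_kb();
    std::printf("L2 cache: %u KB\n", kb);
    if (kb > 0 && kb <= 1024)
        std::printf("this is the 1MB-or-less L2 class discussed above\n");
}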
I'll post something in another thread over the next day or so about best-case vs. worst-case testing.