Quote:
I'm seeking to see an apology there, but I don't exactly find one.
I'm not sorry for not correctly reading what you typed. Twice. Accidents happen. I'm not perfect, and I'm never going to be sorry for not being perfect, or I would hate myself.
Quote:
It's more of a "don't be offended with what I typed" than it is an "I'm sorry, I misunderstood what you typed."
I am sorry that you were offended by my misunderstanding. I wasn't disheartened when you said I couldn't even read English.

So yeah, it was an honest mistake on my part, so please don't rake me over the coals about it any more, or I'll stop caring about chatting with you... which I do enjoy for the moment. I'm all about the pursuit of absolute truth, where possible, when it comes to all things technical.

It's one of the ONLY areas where that's even possible in our largely random and organic universe. (At least, as we understand it right now, anyway. Who knows, maybe it's all going *exactly* as planned, bwahaha.)
Quote:
Regardless, if you'd like to ease up on the "you're an idiot" content of your posts to me, I'll be happy to continue discussion. I could be wrong about the cache, but the evidence (reports from people) is decidedly on my side for the moment.
I don't think you're wrong about the cache, if the issue is overall stuttering that is not caused by CPU time being over-utilized (very close to 100%). The CPU use itself won't be affected until the stuttering happens, and even then it will stay the same or decrease... (that being said, here's the part where we already both agree)

The cache SIZE being an issue WILL become more of a problem the closer CPU use gets to 100%, because the remaining CPU time in each audio block gets increasingly close to the amount of time spent fetching uncached code and data from memory.
Using a benchmark of system memory read speed, and knowing the total worst-case instruction and data throughput requirements, the CPU time required per block, and of course the block length... one can figure out exactly what percentage of the available time *would* be spent reading code/data. And that's *without* any caching at all.
I'm NOT 100% sure that your system, at 80-something-percent CPU use, would have enough system memory bandwidth to keep up within the ~15% of CPU time that's not getting used. But I'm about 90% sure it would. Basically, you check whether the total code/data fetch throughput the application needs is less than 15% of your total system memory throughput... and, even more importantly, how much less (sketched below). If caching is a large problem for the overall real-world PERFORMANCE (not to be confused with CPU use) of StereoTool, then you should not be able to use 100% of the CPU with StereoTool. The worse the caching bottleneck is, the farther away from 100% CPU you'll be when the stuttering first starts to happen.
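To make that concrete, here's a minimal back-of-the-envelope sketch in Python. Every number in it is a made-up placeholder for illustration, NOT a measurement of StereoTool or of any real system:

Code:
# All values are illustrative assumptions, not real measurements.
block_len_s      = 0.010   # assumed 10 ms audio block
cpu_use          = 0.85    # ~85% CPU use -> ~15% headroom per block
mem_read_bw      = 20e9    # assumed benchmarked memory read speed, bytes/s
worst_case_bytes = 8e6     # assumed worst-case code+data fetched per block

headroom_s = block_len_s * (1.0 - cpu_use)   # idle time left in each block
fetch_s    = worst_case_bytes / mem_read_bw  # time to stream that much, uncached

# If fetch_s approaches or exceeds headroom_s, the block can't finish on
# time even though "CPU Usage" looks fine -- that's when stuttering starts.
print(f"uncached fetches eat {fetch_s / headroom_s:.0%} of the per-block headroom")

With these placeholder numbers the uncached fetches would only eat about a quarter of the headroom, i.e. nowhere near a bottleneck... which is exactly the "about 90% sure" hunch above.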
There actually *should* be some kind of software to measure this, if it's even possible at all. I might have a look around later today or tomorrow if I think of it. It would be a very interesting thing to be able to measure, wouldn't it?
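In the meantime, here's a crude sketch of how the raw memory-read-speed half of the equation could at least be measured, using Python with numpy. Dedicated tools like the STREAM benchmark, or hardware performance counters (e.g. via Linux's perf), would be far more precise:

Code:
import time
import numpy as np

# Crude streaming-read bandwidth estimate: sum a buffer far larger than
# any CPU cache, so every element has to come from main memory.
buf = np.ones(512 * 1024 * 1024 // 8)   # ~512 MB of float64, pages pre-touched

start = time.perf_counter()
total = buf.sum()                        # forces one full pass over the buffer
elapsed = time.perf_counter() - start

print(f"~{buf.nbytes / elapsed / 1e9:.1f} GB/s streaming read")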
So yes, it can be a very real performance problem. But it should not cause "CPU Usage" to increase on any modern OS kernel that I've developed server applications for recently (OSX, Linux, WindowsNT, Oracle, Solaris, AIX, and BSD within the last 10 years), since the problem is that the CPU would be *under*-utilized.
Anyway, I shan't beat a dead horse; I know you "got it" like 2-3 posts ago.

Cheers.