Apologies if this has already been asked — I couldn't find the answer anywhere.
MIC looks to be an interesting platform for high-end audio digital signal processing, particularly as it would be much easier to port our existing SSE/AVX x86 code to it than to a more traditional DSP platform such as SHARC (never mind OpenCL or other GPGPU approaches, which aren't well suited to the kind of work we do).
It's a high-level requirement for our (admittedly niche) sector that a constant audio I/O stream be maintained, able to respond to user input with a consistent latency of under 10 ms. So, for example, when a player hits a key on a synth keyboard, the synthesized sound must start playing only a few milliseconds later.
At the code-execution level, this means we need a high degree of certainty about the round-trip time for a packet of data farmed out for processing: delivery to the remote core, remote-core scheduler wake-up, the processing window, delivery back to the host, and host signalling/wake-up. The host code runs as a Windows (or OS X) user-mode application, so there's some OS scheduler unpredictability to deal with there as well.
I wondered whether low-latency operation has been considered yet (I imagine some finance customers will have similar requirements...) or whether that's something for future MIC products.