Bug 1592 - Wishlist request for DSP EAP_* function header/information
Status: RESOLVED WONTFIX
Product: Development platform
Component: general
Version: 4.0
Hardware: All
OS: All
Importance: Low enhancement with 3 votes
Target Milestone: ---
Assigned To: Quim Gil
QA Contact: dev-platform-general-bugs
Reported: 2007-06-27 13:50 UTC by Simon Pickering
Modified: 2009-11-18 18:31 UTC
CC List: 5 users

Description Simon Pickering (reporter) maemo.org 2007-06-27 13:50:53 UTC
I'd like to add a wishlist request for a header file listing the EAP_* (and
any other) functions that wrap the audio codec driver and are contained in
avs_kernel.out (the DSP kernel). Some example code or a short explanation of
each function would also be nice, though that is not essential.

This would allow us to write code that uses the audio output (and input?)
capabilities of the N800 (and probably the 770), and would let people hack on
writing more DSP sinks for various music/sound formats (OGG, DTMF, G.729,
etc.).
Comment 1 Quim Gil nokia 2007-07-05 15:14:46 UTC
Hi, Simon.

Sorry, we don't have any plans to provide the DSP/EAP_* information publicly.
The DSP is a Nokia proprietary component; opening any aspect of it is out of
our scope, and it is unlikely that this will change in the short term.

I can still add this feature request to the Wishlist section to have it on
record, if you wish.
Comment 2 Simon Pickering (reporter) maemo.org 2007-07-05 17:49:33 UTC
Thanks for your reply, Quim. Yes, I would like it in the wishlist, please.
Comment 3 Quim Gil nokia 2007-08-03 06:46:36 UTC
Done: http://maemo.org/intro/roadmap.html

Feel free to provide any additional details here. We are especially interested
in the use cases you cannot perform at the moment as a developer. Then we can
have a look at those use cases and see what alternative solutions could be
found.
Comment 4 Simon Pickering (reporter) maemo.org 2007-08-06 15:32:47 UTC
The thing we can't do (at least not without some reverse engineering and/or a
DSP kernel rebuild) is access the audio input and output hardware codec. We
could write our own driver to access the codec, but my understanding is that
this would require a DSP kernel rebuild, meaning we'd lose the Nokia
proprietary code contained therein. This is probably unacceptable for most
people (as the standard audio DSPsinks would then not work, nor would the VOIP
codecs, etc.), meaning that this would simply be a learning exercise rather
than a useful endeavour.

If we had a way of interfacing with the audio codec driver, people could write
their own DSP tasks to perform audio encoding (e.g. Speex) and audio decoding
(e.g. Speex, OGG, game sound formats). 

I think the real advantage of being able to use the audio hardware directly is
that it would enable DSPsinks to be written that take some load off the ARM
when doing other intensive tasks (e.g. playing games and wanting to decode
audio for things like SNES emulation).

These are not trivial undertakings; however, they are easier than having to
write the driver, build a new kernel, and then write a DSPsink. The idea is to
get as many people as possible hacking on the platform to make it as great as
it can be, and this is one of those areas people could certainly hack on. 

I should add that there is code available targeted at this particular DSP (I
doubt it would work "out of the box", but it should make these tasks less
daunting): certainly there is OGG decoder source, Speex source, and I think
that the Neuros uses this chip too (though I don't know whether they release
DSP source). Therefore, there is a good chance that people will try programming
for the DSP (and it wouldn't be a waste of Nokia's time to release some more
information).

Cheers, Si.
Comment 5 Frantisek Dufka maemo.org 2007-08-09 16:08:50 UTC
Just wanted to add that OGG support on the DSP would have a high chance of
becoming reality if we had the API for sound output available. See also
http://www.gossamer-threads.com/lists/maemo/developers/24244#24244

It's true that even now we could make the decoding run on the DSP, route the
decoded audio data back to the ARM core, and then send it back to the DSP
again for playback via alsa or gstreamer, but this would be pretty suboptimal,
so there is little or no reason to try it. Well, at least unless someone has
plenty of spare time, wants to do it as an exercise in building a project for
the DSP, and would not mind the risk that the result may be no better than the
current 'decode on ARM' solution.
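
For reference, the ARM-side half of that detour might look roughly like the
minimal sketch below. This is only an assumption of how the pieces would fit
together: the dsptask device name is made up, and error handling is omitted.

    /* Hypothetical sketch: pull decoded PCM from a DSP task and hand it
     * straight to ALSA. The task name below does not exist; a real one
     * would come from whatever decoder task gets written. */
    #include <alsa/asoundlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define FRAMES 1024                     /* frames per transfer */

    int main(void)
    {
        short buf[FRAMES * 2];              /* 16-bit stereo */
        snd_pcm_t *pcm;
        int dsp = open("/dev/dsptask/ogg_dec", O_RDONLY);  /* hypothetical */

        snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000);           /* 0.5 s latency */

        /* Copy loop: DSP -> ARM buffer -> ALSA. This is the extra hop the
         * paragraph above calls suboptimal. */
        for (;;) {
            ssize_t n = read(dsp, buf, sizeof(buf));
            if (n <= 0)
                break;
            snd_pcm_writei(pcm, buf, n / 4);               /* 4 bytes/frame */
        }

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        close(dsp);
        return 0;
    }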
Comment 6 Siarhei Siamashka 2007-08-23 10:16:26 UTC
I'm not trying to deny the value of this EAP_* function information, but I
suspect that using the DSP purely as a number-crunching device can still
provide a great performance improvement for audio and video playback.

I think the overhead on the ARM core introduced by getting decompressed data
from the DSP and pushing it back again for playback might be heavily
overrated. A simple calculation shows that 44100 Hz * 2 channels * 16 bits is
only 176400 bytes of raw decompressed audio data per second. Moving this
amount of data would take much less than 1% of CPU resources, considering
memcpy performance. On the other hand, decoding mp3 or vorbis audio on the ARM
core takes more than 10% of the CPU. In any case, it would be better to
implement some proof-of-concept code which generates a simple signal on the
DSP, gets the data to the ARM core, and tries to play it; then we would be
able to estimate the real ARM core overhead of this solution and make a
decision.
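
As a back-of-the-envelope check of those numbers (the memcpy bandwidth used
here is only an assumed ballpark for this class of ARM core, not a measured
figure):

    /* Rough arithmetic only; the 100 MB/s memcpy figure is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        const double rate = 44100.0;            /* samples per second */
        const double channels = 2.0;
        const double bytes_per_sample = 2.0;    /* 16-bit */
        const double pcm_bps = rate * channels * bytes_per_sample; /* 176400 */
        const double memcpy_bps = 100e6;        /* assumed ~100 MB/s sustained */

        /* The data crosses the ARM twice: DSP -> ARM, then ARM -> audio out. */
        printf("PCM stream: %.0f bytes/s\n", pcm_bps);
        printf("copy cost: %.2f%% of assumed memcpy bandwidth\n",
               100.0 * 2.0 * pcm_bps / memcpy_bps);
        return 0;
    }

With those assumptions the copy cost comes out at around a third of a percent,
which is consistent with the "much less than 1%" estimate above.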

One more reason to implement a decoder on the DSP so that it sends
decompressed data back to the ARM core is the better flexibility of this
solution. Not everyone wants to be restricted to using only the built-in
speakers. The MP3 decoder dsp task shipped as a binary blob on the internet
tablets is useless for A2DP because of this. Do free implementations need to
clone the same design?

With all that said, EAP_* function headers would still be very useful. The ARM
core overhead might be small, but sound output latency would definitely be
lower with the ability to play sounds directly from the DSP. Latency is not so
critical for multimedia players, as it can be taken into account and
compensated for. But if we try to use the DSP to decode and play sound effects
in games, it might be quite important.

And by the way, the dsp task for pcm audio output is rather small, so reverse
engineering it should not be technically very challenging. If we get enough
free codecs working on the DSP and the absence of EAP_* function headers
proves to be a substantial limitation, there is a good chance that this EAP_*
information will eventually become available, either from somebody doing the
reverse engineering or from Nokia once it sees enough reasons to open it up.
Comment 7 Frantisek Dufka maemo.org 2007-08-23 11:15:23 UTC
(In reply to comment #6)
> I think the overhead on the ARM core introduced by getting decompressed data
> from the DSP and pushing it back again for playback might be heavily
> overrated. A simple calculation shows that 44100 Hz * 2 channels * 16 bits is
> only 176400 bytes of raw decompressed audio data per second. Moving this
> amount of data would take much less than 1% of CPU resources, considering
> memcpy performance.

Well, I was thinking more about latency. Also, when the ARM cpu is heavily
loaded it may make a difference. Even now we sometimes have audio skipping
when doing something intensive.

> 
> One more reason to implement a decoder on the DSP so that it sends
> decompressed data back to the ARM core is the better flexibility of this
> solution. Not everyone wants to be restricted to using only the built-in
> speakers. The MP3 decoder dsp task shipped as a binary blob on the internet
> tablets is useless for A2DP because of this. Do free implementations need to
> clone the same design?

Good point. Something like 'gstreamer on dsp' would be interesting, i.e. the
possibility to chain smaller dsp tasks (like mp3/vorbis decoding + pcm)
together without using the arm core. Is there some way for dsp tasks to
communicate with each other?
Comment 8 Simon Pickering (reporter) maemo.org 2007-08-23 12:02:57 UTC
> One more reason to implement a decoder on the DSP so that it sends
> decompressed data back to the ARM core is the better flexibility of this
> solution. Not everyone wants to be restricted to using only the built-in
> speakers. The MP3 decoder dsp task shipped as a binary blob on the internet
> tablets is useless for A2DP because of this. Do free implementations need to
> clone the same design?

Completely agreed; ideally one would be able to stack decoders/encoders on the
DSP side, as Frantisek says here:

> Good point. Something like 'gstreamer on dsp' would be interesting, i.e. the
> possibility to chain smaller dsp tasks (like mp3/vorbis decoding + pcm)
> together without using the arm core. Is there some way for dsp tasks to
> communicate with each other?

I have been looking into this. I think it may be possible as the dsp kernel and
tasks all run in the same memory space (that's my understanding). Therefore it
ought to be possible to pass (via the ARM) a pointer to a different dsp task's
private memory (in the task header) and access the data there. I need to write
some test code after I've finished fiddling with Tremor (or at least found out
whether my simple 'dspgateway port' will work).
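
Purely as a hypothetical sketch of what that task-to-task sharing could look
like, assuming the tasks really do share one address space: none of the names
or structures below come from dspgateway, and the ARM would only be involved
once, to hand the consumer task the address of the producer's buffer.

    /* Hypothetical shared ring buffer between two DSP tasks. All names are
     * made up; nothing here is taken from the real DSP kernel or dspgateway. */
    struct pcm_ring {
        volatile unsigned short read_idx;   /* consumer (e.g. pcm output task) */
        volatile unsigned short write_idx;  /* producer (e.g. vorbis decoder)  */
        unsigned short          size;       /* number of slots in data[]       */
        short                   data[4096]; /* decoded 16-bit PCM samples      */
    };

    /* Producer side, inside the decoder task. */
    void ring_push(struct pcm_ring *r, short sample)
    {
        unsigned short next = (unsigned short)((r->write_idx + 1) % r->size);
        while (next == r->read_idx)
            ;                               /* busy-wait: buffer full */
        r->data[r->write_idx] = sample;
        r->write_idx = next;
    }

    /* Consumer side, inside the output task; the pointer value itself would
     * be delivered once via the ARM in some setup message. */
    short ring_pop(struct pcm_ring *r)
    {
        while (r->read_idx == r->write_idx)
            ;                               /* busy-wait: buffer empty */
        short s = r->data[r->read_idx];
        r->read_idx = (unsigned short)((r->read_idx + 1) % r->size);
        return s;
    }

    int main(void)
    {
        /* On the device the two halves would live in different DSP tasks;
         * here the queue is just exercised in one process as a sanity check. */
        static struct pcm_ring r = { 0, 0, 4096, { 0 } };
        ring_push(&r, 1234);
        return ring_pop(&r) == 1234 ? 0 : 1;
    }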

Ideally we'd use the EAP_* stuff in yet another task to allow a final
output-to-audio stage to be tacked onto a decoder chain. The same would go for
a2dp if we could access the bluetooth hardware from the DSP; otherwise we'd
just go to an SBC codec and then back to the ARM.

> I'm not trying to deny the value of this EAP_* function information, but I
> suspect that using the DSP purely as a number-crunching device can still
> provide a great performance improvement for audio and video playback.

Do you have something in mind? I can write some basic DSP-side and ARM-side
code that you can drop a function into, to see how it affects the ARM-side cpu
load, etc.
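
Roughly what the ARM-side half of such a test harness could look like (again
just a sketch under assumptions: the dsptask device name below is invented,
and the loop only discards the data instead of playing it):

    /* Pull ten seconds' worth of data from a (hypothetical) DSP task and
     * report how much CPU time this process spent doing it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        long total = 0;
        const long target = 176400L * 10;   /* ten seconds of 44.1 kHz stereo */
        struct rusage ru;
        int dsp = open("/dev/dsptask/sinegen", O_RDONLY);  /* hypothetical */

        if (dsp < 0) {
            perror("open dsptask");
            return 1;
        }
        while (total < target) {
            ssize_t n = read(dsp, buf, sizeof(buf));
            if (n <= 0)
                break;
            total += n;                     /* a real test would also play it */
        }
        close(dsp);

        getrusage(RUSAGE_SELF, &ru);
        printf("moved %ld bytes, user %ld.%06ld s, sys %ld.%06ld s\n", total,
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
        return 0;
    }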

Would this all be better on the mailing list, rather than turning the bug
report into a random discussion (or is that what they're for?)?