Re: Polypaudio for Gnome 2.10, the next steps
- From: Mike Hearn <mike navi cx>
- To: Seth Nickell <snickell redhat com>
- Cc: desktop-devel-list gnome org
- Subject: Re: Polypaudio for Gnome 2.10, the next steps
- Date: Tue, 23 Nov 2004 00:09:42 +0000
On Mon, 2004-11-22 at 19:01 -0500, Seth Nickell wrote:
> Substantially, I agree with Mike. It seems like this problem should be
> solved at the Alsa layer. In fact, Alsa already has this implemented, we
> just need to have it setup by default on cards that don't support
> hardware mixing.
It's getting easier. For Fedora Core 3 the only change I had to make was
putting this in /etc/asound.conf:
    pcm.!default {
        type plug
        slave.pcm "dmix"
    }
Unfortunately, alsalib is still substantially buggy, at least on my
chipset (which, I admit, is the chipset from hell). Hopefully they'll get
there soon.
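For anyone trying this, a quick sanity check that dmix is actually doing
software mixing is to start two streams at once (this assumes working
audio hardware and a couple of .wav files to hand; the filenames are just
placeholders):

    # If dmix is active, both streams play simultaneously; without it,
    # the second aplay typically fails with "Device or resource busy".
    aplay -D default sample1.wav &
    aplay -D default sample2.wav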
> People will undoubtedly raise "remote terminal" issues as a reason to go
> with a sound server approach. While we *should* make sure terminal
> services work, they aren't the primary target, and we shouldn't be
> centering the design around them. It seems like the right layer to
> attack remote audio issues is in gstreamer anyway while the data is
> still compressed and can be more readily transported across the network
> (this approach could possibly also be helpful in maintaining audio/video
> sync across the network).
Actually, grabbing the audio at the GStreamer layer has the problem that
only GStreamer apps can do remote audio, which would not make the LTSP
people very happy. It'd be like the situation where only gnome-vfs apps
can access remote servers from the file picker.
The proposal sent to fedora-devel-list outlines one way in which
[nearly] all applications on Linux could expose their audio output in a
flexible way that'd allow some arbitrary application to pick it up, mix
it, compress it, forward it, basically do anything you want with it, but
on an app-by-app level. If that app uses GStreamer, great.
The app-by-app part is the important one; otherwise a sound server could
just record the sound card's output. Obviously you want to keep multiple
terminal sessions separate, so you have to be able to reroute audio from
each app individually at some layer, and using the alsalib layer seems
to make sense.
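As a rough illustration of per-app rerouting at the alsalib layer (not
part of the actual proposal), alsalib's config syntax can already choose
the slave device per process from an environment variable, so each
session could point its apps at a different sink. The AUDIODEV variable
name here is made up for the sketch:

    # Hypothetical ~/.asoundrc: route audio to whatever device the
    # AUDIODEV environment variable names, falling back to dmix.
    pcm.!default {
        type plug
        slave.pcm {
            @func getenv
            vars [ AUDIODEV ]
            default "dmix"
        }
    }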
thanks -mike