Re: Collaboration on standard Wayland protocol extensions



On Mon, Mar 28, 2016 at 11:33:15PM -0400, Drew DeVault wrote:
On 2016-03-29 10:30 AM, Jonas Ådahl wrote:
I'm just going to put down my own personal thoughts on these. I mostly
agree with Carsten on all of this. In general, my opinion is that it is
completely pointless to add Wayland protocols for things that have
nothing to do with Wayland whatsoever; we have other display-protocol-agnostic
methods for that which fit much better.

I think these features have a lot to do with Wayland, and I still
maintain that protocol extensions make sense as a way of providing them.
I don't want to commit my users to D-Bus or something similar, and I'd
prefer not to have to make something unique to sway. It's probably
going to be protocol extensions for some of this stuff, and I think it'd
be very useful for other compositors to offer the same flexibility.

As a rule of thumb for whether a feature needs a Wayland protocol or
not, one can consider whether a client needs to reference a client-side
object (such as a surface) on the server. If it does, we should add a
Wayland protocol; otherwise not. Another way of seeing it would be: if
this could be shared between Wayland/X11/Mir/..., then don't do it in
any of those.
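
To make that rule of thumb concrete, here is a minimal hypothetical
sketch on the client side; the "zext_example_capture_v1" extension, its
generated header and its request are all invented for illustration and
only follow the usual wayland-scanner naming convention:

#include <wayland-client.h>

/* Generated header for the made-up extension; it does not exist. */
#include "zext-example-capture-v1-client-protocol.h"

static void
capture_my_surface(struct zext_example_capture_v1 *manager,
                   struct wl_surface *surface)
{
    /* The request takes a wl_surface argument, i.e. it references a
     * client-side object on the server. That is exactly the case where
     * a Wayland protocol is warranted: no display-server-agnostic IPC
     * could name this surface. */
    zext_example_capture_v1_capture_surface(manager, surface);
}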

I prefer to think of it as "who has logical ownership of the resource
being provided". The compositor owns your output and input devices and
so on, and it should be responsible for making them available.

I didn't say the display server shouldn't be the one exposing such an
API; I just think it is a bad idea to duplicate every display-server-agnostic
API for every possible display server protocol.


- Screen capture
Why would this ever be a Wayland protocol? If a client needs to capture
its own content it doesn't need to ask the compositor; otherwise it's
the job of the compositor. If there needs to be a complex pipeline setup
that adds subtitles, muxing, sound effects and whatnot, we should make
use of existing projects that aim to create inter-process video
pipelines (pinos[0], for example).

FWIW, I believe remote desktop/screen sharing support partly falls under
this category as well, with the exception that it may also need input
event injection (which of course shouldn't be a Wayland protocol).

As a side note, for GNOME, I have been working on an org.gnome-prefixed
D-Bus protocol for remote desktop that enables the actual remote desktop
parts to be implemented in a separate process by providing pinos
streams, and I believe that at some point it would be good to have an
org.freedesktop.* (or equivalent) protocol doing that in a more
desktop-agnostic way. Such a protocol could just as well be read-only,
with the stream passed to something like ffmpeg (maybe one could even
pipe it from gst-launch directly to ffmpeg) in order to do screen
recording.
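
As a rough sketch of what that read-only consumer side could look like
(the "pinossrc" element name and its default stream selection are
assumptions about the Pinos GStreamer plugin; the rest is stock
GStreamer):

#include <gst/gst.h>

int
main(int argc, char **argv)
{
    GstElement *pipeline;
    GMainLoop *loop;
    GError *error = NULL;

    gst_init(&argc, &argv);

    /* Which stream to read would come from the screen cast D-Bus API;
     * here we just record whatever pinossrc picks up by default. */
    pipeline = gst_parse_launch(
        "pinossrc ! videoconvert ! x264enc ! matroskamux "
        "! filesink location=screencast.mkv",
        &error);
    if (pipeline == NULL) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);

    return 0;
}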

I know that GNOME folks really love their D-Bus, but I don't think it
makes sense to use it for this. Not all DEs/WMs use D-Bus, and it would
be great if the tools didn't have to know how to talk to it, but instead
had some common way of getting pixels from the compositor.

So if you have a compositor or a client that wants to support three
display server architectures, it needs to implement all three of those
APIs separately? Why can't we provide an API that ffmpeg etc. can use no
matter whether the display server happens to be the X server, sway or
Unity-on-Mir?

I don't see the point of avoiding D-Bus just because you aren't using it
yet. It's already there, installed on your system; it's already used by
various other parts of the stack, and it will require a lot less effort
from clients and servers if they want to support more than just Wayland.


I haven't heard of Pinos before, but brief searches online make it look
pretty useful for this purpose. I think it could play a role here.


Pinos communicates via D-Bus, but pixels/frames are of course never
passed directly; they go via shared memory handles. What a screen
cast/remote desktop API would do is more or less start/stop a pinos
stream, optionally inject events, and let the client know which stream
it should use.
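
Purely as an illustration of that shape, the calling side could look
something like the following; the bus name, object path, interface and
method are made up for this sketch and are not the actual org.gnome API:

#include <gio/gio.h>

/* Ask the compositor to start a screen cast and return the pinos
 * stream the client should attach to (e.g. with a pinossrc element). */
static char *
start_screen_cast(void)
{
    GDBusConnection *bus;
    GVariant *reply;
    char *stream_path = NULL;

    bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, NULL);
    reply = g_dbus_connection_call_sync(bus,
                                        "org.example.ScreenCast",
                                        "/org/example/ScreenCast",
                                        "org.example.ScreenCast",
                                        "Start",
                                        NULL,
                                        G_VARIANT_TYPE("(s)"),
                                        G_DBUS_CALL_FLAGS_NONE,
                                        -1, NULL, NULL);
    if (reply != NULL)
        g_variant_get(reply, "(s)", &stream_path);

    return stream_path;
}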


I don't think we should start writing Wayland protocols for things that
have nothing to do with Wayland only because the program they are going
to be implemented in may already be doing Wayland things. There simply
is no reason for it.

We should simply use the IPC system we already have and already use for
things like this (for example color management, inter-process video
pipelines, geolocation, notifications, music player control, audio
device discovery, accessibility, etc.).
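
The notification case in that list, for instance, already works over the
session bus today; a minimal sketch against the existing
org.freedesktop.Notifications interface (the strings are just
placeholders):

#include <gio/gio.h>

int
main(void)
{
    GDBusConnection *bus;
    GVariantBuilder actions, hints;

    bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, NULL);

    /* Empty "actions" (as) and "hints" (a{sv}) arguments. */
    g_variant_builder_init(&actions, G_VARIANT_TYPE("as"));
    g_variant_builder_init(&hints, G_VARIANT_TYPE("a{sv}"));

    g_dbus_connection_call_sync(bus,
                                "org.freedesktop.Notifications",
                                "/org/freedesktop/Notifications",
                                "org.freedesktop.Notifications",
                                "Notify",
                                g_variant_new("(susssasa{sv}i)",
                                              "example-app", 0u, "",
                                              "Hello", "Sent over D-Bus",
                                              &actions, &hints, -1),
                                NULL,
                                G_DBUS_CALL_FLAGS_NONE,
                                -1, NULL, NULL);
    return 0;
}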

Most of what you mentioned (geolocation, notifications, music control,
audio device discovery) doesn't have anything to do with Wayland. Why
would they have to use the same communication system? Things like how
output/input devices are handled, screen capture, and so on are very
clearly Wayland-related, and I think a Wayland solution for them is
entirely acceptable.

Sorry, I don't see how you make the connection between "Wayland" and
"screen capture" other than that it may be implemented in the same
process. Wayland is meant to be used by clients to pass content to, and
receive input from, the display server. It is not intended to be a
catch-all IPC replacing D-Bus.


Jonas

