Re: Collaboration on standard Wayland protocol extensions



On 2016-03-29  3:10 PM, Carsten Haitzler wrote:
I don't really understand why forking from the compositor and bringing
along the fds really gives you much of a gain in terms of security. Can
you explain?

why?

there is no way a process can access the socket with privs (or even know the
extra protocol exists) unless it is executed by the compositor. the compositor
can do whatever it deems "necessary" to ensure it executes only what is
allowed. eg - a whitelist of binary paths. i see this as a lesser chance of a
hole.

I see what you're getting at now. We can get the pid of a Wayland
client, though, and from that we can look at /proc/<pid>/cmdline, from
which we can get the binary path. We can even look at /proc/<pid>/exe
and produce a checksum of it, so that programs become untrusted as soon
as they change.
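As a sketch of that fingerprinting idea - assuming the compositor already has the client's pid, e.g. from wl_client_get_credentials() - the /proc lookup and checksum could look like this (hypothetical helper, Linux-only):

```python
import hashlib
import os

def client_fingerprint(pid):
    """Resolve a client's binary via /proc and hash it, so the client
    becomes untrusted as soon as the binary on disk changes."""
    # /proc/<pid>/exe is a symlink to the binary that was executed.
    exe_path = os.readlink(f"/proc/{pid}/exe")
    # Opening the symlink reads the actual binary, even if it was
    # deleted or replaced on disk after exec.
    with open(f"/proc/{pid}/exe", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return exe_path, digest
```

The compositor could then compare the path and digest against a whitelist before exposing privileged protocols to that client.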

i know - but for just capturing screencasts, adding watermarks etc. - all you
need is to store a stream - the rest can be post-processed.

Correct, if you record to a file, you can deal with it in post. But
there are other concerns, like what output format you'd like to use and
what encoding quality you want, weighing factors like disk space, CPU
usage, etc. And there's still the live-streaming use case, which we
should support and which your solution does not address.

why do we need the fullscreen shell? that was intended, from memory, for
environments where apps are only ever fullscreen. xdg shell has the ability for
a window to go fullscreen (or back to normal); this should do just fine. :)
sure - let's talk about this stuff - fullscreening etc.

I've been mixing up fullscreen-shell with that one thing in xdg-shell.
My bad.

let's talk about the actual apps surfaces and where they go - not
configuration of outputs. :)

No, I mean, that's what I'm getting at. I don't want to talk about that
because it doesn't make sense outside of e. On Sway, the user is putting
their windows (fullscreen or otherwise) on whatever output they want
themselves. There aren't output roles. Outputs are just outputs and I
intend to keep it that way.

Troublemaking software is going to continue to make trouble. Further
news at 9. That doesn't really justify making trouble for users as well.

or just have the compositor "work" without needing scripts and users to have to
learn how to write them. :)

Never gonna happen, man. There's no way you can foresee and code for
everyone's needs. I'm catching on to this point you're heading towards,
though: e doesn't intend to suit everyone's needs.

Here's the wayland screenshot again for comparison:

https://sr.ht/Ai5N.png

Most apps are fine with being told what resolution to be, and they
_need_ to be fine with this for the sake of my sanity. But I understand
that several applications have special concerns that would prevent this

but for THEIR sanity, they are not fine with it. :)

Nearly all toolkits are entirely fine with being any size, at least
above some sane minimum. A GUI that cannot deal with being a
user-specified size is a poorly written GUI.

no. this has nothing to do with floating. this has to do with minimum and in
this case especially - maximum sizes. it has NOTHING to do with floating. you
are conflating sizing with floating because floating is how YOU HAPPEN to want
to deal with it.

Fair. Floating is how I would deal with it. But maybe I'm missing
something: where do the min/max size hints come from? All I seem to
know of is the surface geometry request, which isn't a hint so much as
something every single app does. If I didn't ignore it, all windows
would be fucky and the tiling layout wouldn't work at all. Is there some
other hint coming from somewhere I'm not aware of?

you COULD deal with it as i described - pad out the area or
scale retaining aspect ratio - allow the user to configure the response. if i
had a small calculator on the left and something that can size up on the right
i would EXPECT a tiling wm to be smart and do:

+---+------------+
|   |............|
|:::|............|
|:::|............|
|:::|............|
|   |............|
+---+------------+
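The pad-and-centre behaviour in that diagram can be sketched as a small placement helper (hypothetical function, not actual sway or enlightenment code):

```python
def place_in_tile(tile_w, tile_h, min_w=0, min_h=0, max_w=None, max_h=None):
    """Fit a client with min/max size hints into an allotted tile,
    centring it and leaving padding when it can't fill the tile."""
    # Respect the client's maximum size: never ask it to be bigger.
    w = min(tile_w, max_w) if max_w is not None else tile_w
    h = min(tile_h, max_h) if max_h is not None else tile_h
    # Respect the minimum even if the tile is smaller (window overflows).
    w, h = max(w, min_w), max(h, min_h)
    # Centre the window in the tile; the remainder is padding.
    x, y = (tile_w - w) // 2, (tile_h - h) // 2
    return x, y, w, h
```

A calculator with a 300x200 maximum in an 800x600 tile would end up centred with padding around it, which is exactly the picture above.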

Eh, this might be fine for a small number of windows, and maybe even is
the right answer for Sway. I'm worried about it happening for most
windows and I don't want to encourage people to make their applications
locked into one aspect ratio and unfriendly to tiling users.

What I really want is _users_ to have control. I don't like it that
compositors are forcing solutions on them that don't allow them to be
in control of how their shit works.

they can patch their compositors if they want. if you are forcing users to
write scripts you are already forcing them to "learn to code" in a simple way.
would it not be best to try and make things work without needing scripts/custom
code per user and have features/modes/logic that "just work"?

There's a huge difference between the skillset necessary to patch a
Wayland compositor to support scriptable output configuration and to
write a bash script that uses a tool the compositor shipped for this
purpose.
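To make that concrete: with a shipped tool, "scriptable output configuration" collapses to a few declarative lines, no compiler required. Something like this sway-style config fragment (output names and exact syntax illustrative):

```
# ~/.config/sway/config - no patching, no C, just declare the layout
output HDMI-A-1 resolution 1920x1080 position 0 0
output eDP-1 resolution 2560x1440 position 1920 0
```

A user who wants dynamic behaviour can drive the same commands from a shell script via the compositor's IPC tool, which is a far smaller ask than patching the compositor.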

*I* do not want adhoc panels/taskbars/tools written by separate projects within
my DE because they cause more problems than they solve. been there. done that.
not going back. i learned my lesson on that years ago. for them to be fully
functional you have to have pagers and taskbars in them, and unless you ALSO
then bind all this metadata for the pagers, virtual desktops and their content
to a protocol that is also universal, then it's rather pointless. this then
ties your desktop to a specific design of how desktops are (eg NxM grids, and
only ONE of those in an entire environment - when with enlightenment each
screen has an independent NxM grid PER SCREEN that can be switched separately).

Again, the scope of this is not increasing ad infinitum. I never brought
virtual desktops and pagers into the mix. There is a small number of
things that are clearly the compositor's responsibility, and that small
list covers the only things I want to manipulate with a protocol.
Handling screen capture hardly has room for innovation - there are
pixels on screen, and they need to be given to ffmpeg et al. This isn't
locking you into some particular user-facing design choice in your DE.
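To illustrate how little the compositor has to decide here: a sketch of handing raw RGBA frames to ffmpeg over a pipe, leaving format and quality entirely to the encoder. The helper names are hypothetical; the ffmpeg rawvideo flags are its documented interface:

```python
import subprocess

def ffmpeg_args(width, height, fps, out):
    """Build an ffmpeg invocation that reads raw RGBA frames from stdin;
    ffmpeg, not the compositor, owns format and quality decisions."""
    return ["ffmpeg", "-f", "rawvideo", "-pixel_format", "rgba",
            "-video_size", f"{width}x{height}", "-framerate", str(fps),
            "-i", "-", out]

def stream(width, height, frames, out="capture.mp4", fps=30):
    """Pipe frames (each width*height*4 bytes) straight to the encoder."""
    ff = subprocess.Popen(ffmpeg_args(width, height, fps, out),
                          stdin=subprocess.PIPE)
    for frame in frames:
        ff.stdin.write(frame)
    ff.stdin.close()
    return ff.wait()
```

Swapping `out` for a streaming URL covers the live case with the same pipe; nothing about the DE's design leaks into the interface.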

I'm not suggesting anything radical to try and cover all of these use
cases at once. Sway has a protocol that lets a surface indicate it wants
to be docked somewhere, which allows custom taskbars and things like
dmenu and so on to exist pretty easily, and this protocol is how swaybar
happens to be implemented. This doesn't seem very radical to me; it
doesn't enforce anything on how each of the DEs chooses to implement
their this and that.

then keep your protocol. :) i know i have no interest in supporting it - as
above. :)

Well, so be it.

We've both used this same argument from each side multiple times, it's
getting kind of old. But I think these statements hold true:

There aren't necessarily enough people to work on the features I'm
proposing right now. I don't think anyone needs to implement this _right
now_. There also aren't ever enough people to give every little feature
of their DE the attention that leads to software that is as high quality
as a similar project with a single focus on that one feature.

that is true. :)

Interesting that this immediately follows up the last paragraph. If you
acknowledge that your implementation of desktop feature #27 can't
possibly be as flexible/configurable/usable/good as some project that's
entirely focused on just making that one feature great, then why would
you refuse to implement the required extensibility for your users to
bring the best tools available into your environment?

Users don't necessarily want to buy into a desktop environment
wholesale. They may want to piece it together however they see fit and
it's their god damn right to. Anything else is against the spirit of
free software.

i disagree. i can't take linux and just use some bsd device driver with it - oh
dear, that's against the spirit of free software! i have to port it and
integrate it (as a kernel module). wayland is about making the things that HAVE
to be shared protocol just that. the things that don't absolutely have to be,
we don't. you are able to patch, modify and extend your de/wm all you like -
most de's provide some way to do this. gnome today uses js. e uses loadable
modules. i am unsure about kde. :)

Sure, but you can use firefox and vim and urxvt while your friend
prefers termite and emacs and chromium, and your other friend uses gedit
and gnome-terminal and surf.

In this case, I'm not seeing how your points about what order things
need to be done in matter. Now is the right time for me to implement
this in Sway. The major problems you're trying to solve are either
non-issues or solved issues on Sway, and it makes sense to do this now.
I'd like to do it in a way that works for everyone.

you need to solve clients that have a minx/max size without introducing the
need for a floating property. that is something entirely different.
not solved.

You're right, I do have to solve this. But my project and its
contributors have the bandwidth to address this and the things I'm
bringing up at the same time.

what happens when you need to restart sway after some development? where do all
your terminals/editors/ide's, browsers/irc clients go? they vanish and you have
to re-run them?

Most of my users aren't developers working on sway all the time. Sway
has an X backend like Weston, I use that to run nested sways for
development so I'm not restarting Sway all the time. The compositor
crashing without losing all of the clients is a pipe dream imo, I'm not
going to look into it for now.

aaah ok. so compositor adapts. then likely i would express this as a
"minimize your decorations" protocol from compositor to client, client to
compositor then responds similarly like "minimize your decorations" and
compositor MAY choose to not draw a shadow/titlebar etc. (or client
responds with "ok" and then compositor can draw all it likes around the
app).

I think Jonas is on the right track here. This sort of information could
go into xdg_*. It might not need an entire protocol to itself.

i'd lean on a revision of xdg :)

I might lean the other way now that I've seen that KDE has developed a
protocol for this. I think that would be a better starting point since
it's proven and already in use. Thoughts?
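For reference, KDE's protocol negotiates decorations roughly the way Carsten describes: the client requests a mode and the compositor answers with the mode it will actually use. A simplified sketch of that interface shape (abbreviated from org_kde_kwin_server_decoration, not the verbatim spec):

```xml
<!-- sketch: client/compositor negotiation over who draws decorations -->
<interface name="server_decoration" version="1">
  <enum name="mode">
    <entry name="none" value="0"/>    <!-- no decorations at all -->
    <entry name="client" value="1"/>  <!-- client draws its own -->
    <entry name="server" value="2"/>  <!-- compositor draws them -->
  </enum>
  <!-- client asks for a mode... -->
  <request name="request_mode">
    <arg name="mode" type="uint"/>
  </request>
  <!-- ...compositor replies with the mode that will actually be used -->
  <event name="mode">
    <arg name="mode" type="uint"/>
  </event>
</interface>
```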

... you might be surprised. 4k ones are already out there. ok, not 1.3ghz -
2ghz - but no way you can capture even 4k with the highest end arms unless you
avoid conversion. you keep things in yuv space and drop your bandwidth
requirements hugely. in fact you never leave yuv space and make use of the hw
layers, and the video decoder decodes directly into scanout buffers. you MAY be
able to stuff the yuv buffers back into an encoder and re-encode again ... just.
but it'd be better not to decode AND encode at all - take the mp4/whatever
stream directly and shuffle it down the network pipe. :)

believe it or not TODAY tablets with 4k screens ship. you can buy them. they
are required to support things like miracast (mp4/h264 stream over wifi). it's
reality today. products shipping in the 100,000's and millions. :)
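To put rough numbers on the conversion argument - raw bandwidth for 4k30 capture in RGBA versus staying in YUV 4:2:0 (a back-of-the-envelope estimate, not a benchmark):

```python
def capture_mb_per_s(width, height, fps, bytes_per_pixel):
    """Raw pixel bandwidth a capture path has to move, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

# 4k30 capture, two pixel formats:
rgba_4k = capture_mb_per_s(3840, 2160, 30, 4)    # RGBA after colour conversion
nv12_4k = capture_mb_per_s(3840, 2160, 30, 1.5)  # NV12 (YUV 4:2:0), no conversion
```

That works out to roughly 995 MB/s in RGBA against roughly 373 MB/s in NV12, which is why skipping the colour conversion (or never decoding at all) matters so much on ARM-class memory buses.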

Eh, alright. So they'll exist soon. I feel like both strategies can
coexist, in that case. If you want to livestream your tablet, you'll
have a performance hit and it might just be unavoidable. If you just
want to record video, use the compositor's built in thingy. I'm okay
with unavoidable performance concerns in niche situations - most people
aren't going to be livestreaming from their tablet pretty much ever.
Most people aren't even going to be screen capturing on their tablet to
be honest. It goes back to crippling the common case for the sake of the
niche case.

--
Drew DeVault

