GSK review and ideas



I just read through the wip/otte/rendermode branch, and I must say I
really like it. It makes it very explicit that the render nodes are
immutable: there is no "append" operation or any setters for any data
(other than the mostly-debug name). I also like that it has several
types of nodes, because that makes it much easier to reason about
their semantics. If you have a single node type with a bunch of
options it is very hard to understand how they combine, and I'm pretty
sure there would be cases where e.g. the cairo and the GL backends
disagreed on this.

However, I don't think the branch goes far enough. There are several
more changes I'd like to make:

First of all, I think GtkSnapshot as-is doesn't make sense in gtk. It
is a natural companion to GskRenderNode and should live next to it
(modulo some tiny helpers for css rendering that can stay in
gtk+). I'm thinking we could call it GskRenderTreeBuilder or
something, because it's very similar to the String / StringBuilder
pattern.
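
To make the pattern concrete, here is a minimal sketch of what such a
builder could look like. All the GskRenderTreeBuilder names below are
made up; the point is only the shape of the API (mutable while
building, immutable result at the end):

  /* Hypothetical builder API, mirroring String / StringBuilder */
  GskRenderTreeBuilder *builder = gsk_render_tree_builder_new ();

  gsk_render_tree_builder_push_transform (builder, &transform);
  gsk_render_tree_builder_append_texture (builder, texture, &bounds);
  gsk_render_tree_builder_pop (builder);

  /* finishing the builder hands back the immutable node tree */
  GskRenderNode *root = gsk_render_tree_builder_free_to_node (builder);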

I think we can drop the make_immutable vfunc for render nodes now,
because no render node has any setters anymore: they are all
immutable, since every property is construct-only.
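
(Construct-only here is the standard GObject mechanism: a property
installed with G_PARAM_CONSTRUCT_ONLY can only be set at
g_object_new() time, so the object is effectively frozen after
construction. A generic sketch, not the actual gsk code:)

  /* in a node's class_init: this property can only be set at
   * construction time, never afterwards */
  g_object_class_install_property (object_class, PROP_BOUNDS,
      g_param_spec_boxed ("bounds", "Bounds", "Bounds of the node",
                          GRAPHENE_TYPE_RECT,
                          G_PARAM_READWRITE | G_PARAM_CONSTRUCT_ONLY));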

I also think we should drop the get_bounds vfunc. All the child nodes
already have a "graphene_rect_t bounds" field. We just need to add one
to the containers, and calculate it during construction. Then we can
push the field up to GskRenderNode and drop a lot of indirect calls.
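
A sketch of what that hoisting could look like (struct layout is my
assumption based on the mail, not the actual branch code):

  /* Hypothetical: bounds lives in the base struct, so reading it
   * needs no vfunc dispatch. */
  struct _GskRenderNode
  {
    GObject parent_instance;
    graphene_rect_t bounds;  /* computed once, at construction */
    /* ... */
  };

  /* getting the bounds becomes a plain field access */
  static inline void
  gsk_render_node_get_bounds (GskRenderNode   *node,
                              graphene_rect_t *bounds)
  {
    *bounds = node->bounds;
  }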

gsk is pretty tiny now, and it is really quite tied to gdk, in that
GskRenderer is basically the one way you're supposed to draw to a
toplevel GdkWindow. I don't really see why it has to have its own
namespace. I think we should just move all the gsk stuff into gdk.
There are no pre-existing naming conflicts, and I think it will make
things clearer.

I think the many layers of rendering are a bit confusing; there are so
many different _begin_frame() calls. One obvious cleanup here would be
to break out the cairo fallback handling in GdkWindow into a
GdkDrawContext that we could use instead of the GL one. Then we could
get rid of code like:

  if (draw_context)
    gdk_draw_context_begin_frame (draw_context, real_region);
  else
    gdk_window_begin_paint_internal (window, real_region);
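
With a cairo-based fallback GdkDrawContext, both branches would
collapse into the same call (a sketch of the intent; the fallback
context type doesn't exist yet):

  /* the cairo fallback is just another GdkDrawContext implementation,
   * so there is only one code path regardless of backend */
  gdk_draw_context_begin_frame (draw_context, real_region);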



The only thing GskRenderNode objects are used for now is to create a
tree of them and hand that off to a GskRenderer for rendering of a
frame. All the other operations on the tree (other than the debug
stuff in the inspector) are handled purely inside gsk. For instance,
there is no reason to hang on to a render node for any lifetime other
than the rendering of the frame. Having all this code doing atomic
operations to refcount the nodes during construction, then nothing,
then an atomic unref for every node after the frame is done seems
quite unnecessary.

Instead I propose we add a new object, GskRenderTree, which creates
and owns all the render nodes that are part of the frame. The nodes
would be tied to a render tree forever and have the same lifetime as
the tree. This would allow us to allocate and free render nodes very
efficiently: you'd just allocate a chunk of memory for the tree and
hand out pieces of it as new nodes are created. Then we can free the
whole thing with a few calls to free() at the end. Most of the data in
the current render node implementations is either inline or could
easily be allocated from the same chunk of memory during creation. The
only real exceptions are cairo_surface_t and GskTexture, but we can
easily make the render tree own those too. Then we can also drop the
finalize vfunc for the render nodes.
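
A minimal sketch of the allocation scheme (names and layout are made
up; the point is the bump-pointer arena owned by the tree):

  /* Hypothetical arena allocator: nodes are carved out of large
   * blocks owned by the tree, and all freed together at the end. */
  typedef struct
  {
    guint8    *block;      /* current bump-allocation block */
    gsize      used;       /* bytes handed out from it so far */
    gsize      block_size; /* size of each block */
    GPtrArray *blocks;     /* all blocks; created with g_free as the
                            * free func, so destroying the array frees
                            * everything */
  } GskRenderTree;

  static gpointer
  gsk_render_tree_alloc (GskRenderTree *tree,
                         gsize          size)
  {
    gpointer mem;

    /* real code would also align `size` and handle requests larger
     * than block_size; omitted for brevity */
    if (tree->block == NULL || tree->used + size > tree->block_size)
      {
        tree->block = g_malloc (tree->block_size);
        tree->used = 0;
        g_ptr_array_add (tree->blocks, tree->block);
      }

    mem = tree->block + tree->used;
    tree->used += size;
    return mem;
  }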

It's not clear to me how clipping is supposed to work. In the
traditional rendering model we always applied the widget allocation as
a clip region on the cairo_t when we propagated to a child. This meant
that no widget could ever draw outside its allocation. But in the new
world we only apply clipping when we manually emit a clip node, such
as in a viewport. Typically widgets don't draw outside their
allocation, but it could easily happen by mistake, or due to external
things like css transformations affecting what you draw.
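
The traditional model was, roughly, the equivalent of this (GTK3-style,
simplified):

  /* old model: the child's allocation is applied as a clip when
   * propagating the draw, so it cannot paint outside it */
  cairo_save (cr);
  gdk_cairo_rectangle (cr, &allocation);
  cairo_clip (cr);
  gtk_widget_draw (child, cr);
  cairo_restore (cr);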

I don't think this is necessarily a *bad* thing. In fact, it seems
like the more natural model from the perspective of doing cool
things. However, it violates certain properties we rely on in Gtk+,
particularly involving gtk_widget_queue_draw() and the tracking of the
dirty region (which eventually gets passed to the renderer as a
clip). If things can draw outside their allocation, then they will not
repaint properly, due to the clip-to-dirty-region.

This, combined with the fact that damage-style partial updates of the
front buffer are hard, flickery and generally poorly supported on
OpenGL, means we should consider always updating the entire toplevel
each time we paint. This is what games do, and we need to make that
fast anyway for e.g. the resize case, so we might as well always do
it. It also means we can further simplify clipping in general, because
then we always start with a rectangle and only ever clip by rects,
which I think means we can do the clipping on the GPU.

Also related to clipping, we're currently not doing any culling at
all. I think we need to make gtk_container_snapshot_child() take the
current clip into consideration when recursing. Right now we're
creating nodes for children that are not visible.

Of course, such culling relies on the fact that the children don't
draw outside their allocation. Culling seems quite important to be
able to handle 10000s of rows in a listbox. So maybe we need to
actually clip every widget.

The above two points kinda contradict each other, so we need to decide
what to do here. My hunch is that the natural approach is a bit of
both: we always repaint the entire window (ignoring the damage
region), but clip each widget to its allocation. That means clipping
becomes a simple int-rect intersection (not a full cairo_region_t)
which can be done fast (and maybe on the GPU), while still allowing
high-level culling at the widget allocation level. Does that seem
right?
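
Concretely, the recursion could then look something like this
(gtk_widget_snapshot_with_clip() is a made-up name for whatever
gtk_container_snapshot_child() grows into; just a sketch of the idea):

  GtkAllocation allocation;
  GdkRectangle  child_clip;

  gtk_widget_get_allocation (child, &allocation);

  /* entirely outside the current clip: cull it, emitting no nodes */
  if (!gdk_rectangle_intersect (clip, &allocation, &child_clip))
    return;

  /* otherwise recurse with the tightened clip, so the child cannot
   * draw (or snapshot its own children) outside its allocation */
  gtk_widget_snapshot_with_clip (child, builder, &child_clip);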

Related to this *again* are transformations. I don't think the way
these work right now actually does the right thing. Consider for
instance a clip node inside a transform node. For this to work right
we need to transform the clip node rect into window coordinates, which
may end up as a very weird region (especially for a 3d
transform!). Another example: if we have something inside the
transformation that is more complex than a textured polygon, such as
e.g. some GLSL code that renders a shadow or a rounded rect, then that
code needs to respect arbitrary transformations during rendering,
which is far from trivial! So I wonder if it isn't better to always
treat a non-trivial transformation as a place where you render the
child node into an offscreen and then draw the result as a
texture. (Although maybe we can special-case very simple cases like a
transformed textured quad.)
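
A sketch of that decision, assuming graphene matrices (the rendering
helpers are hypothetical names, not existing API):

  /* Hypothetical: simple 2d transforms can be applied directly;
   * anything else goes through an offscreen. */
  if (graphene_matrix_is_identity (transform) ||
      graphene_matrix_is_2d (transform))
    {
      /* e.g. a plain translation/scale: render the child directly */
      render_node_directly (renderer, child, transform);
    }
  else
    {
      /* render the child into an offscreen, then draw that texture
       * through the arbitrary (possibly 3d) transform */
      GskTexture *offscreen = render_node_to_texture (renderer, child);
      render_textured_quad (renderer, offscreen, transform);
    }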

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Alexander Larsson                                            Red Hat, Inc 
       alexl@redhat.com            alexander.larsson@gmail.com
He's a sword-wielding Amish master criminal on the run. She's a plucky 
antique-collecting Valkyrie with only herself to blame. They fight crime! 

