(Sorry for breaking threading, etc, posting from my phone)
So, I think you misunderstood my opinion. I think the API change to make render nodes immutable, etc. is correct. The issue I have is at a higher level, in the gtk widget tree. Let me try to explain.
Every frame, when we draw a toplevel, we have each widget in the gtkwidget tree submit geometry in the form of render nodes. The final result is a transient, immutable entity describing the entire set of rendering operations for the frame. We can then do complex work on this in the backend to render it efficiently.
However, submitting geometry shouldn't automatically mean redoing all the work from scratch. We submit a description, which includes references to textures, vertex arrays, shaders, etc. But we shouldn't have to, for example, re-upload the textures each time we submit. At the same time, we can't pre-calculate many of these things ahead of time, even if they are theoretically known by size-allocate time. For instance, we don't yet have a reference to the GL context, and we probably want to wait as long as possible to avoid later size-allocates invalidating the work before render time.
As a simple example, consider a widget that renders just a textured quad. The texture depends on the size of the widget and some widget state. Let's consider how this is drawn. By render time on the first frame we know the size and the state, so we can generate and upload the texture data. Then we create a new render node referencing it and hand it off to the tree. However, we also keep the texture around, because the next frame we can submit a new render node referencing the same texture, unless something changed.
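To make that concrete, here is roughly what I mean, in code. This is only a sketch: the shape of the get_render_node vfunc, my_quad_upload_texture() and my_texture_node_new() are made-up names standing in for whatever the real API ends up being, and the GObject boilerplate is left out.

  typedef struct {
    GtkWidget parent;
    int state;            /* whatever widget state the texture depends on */
    MyTexture *texture;   /* cached GPU texture, NULL when it needs regenerating */
  } MyQuad;

  /* Called every frame when the toplevel collects render nodes. */
  static GskRenderNode *
  my_quad_get_render_node (GtkWidget *widget, GskRenderer *renderer)
  {
    MyQuad *self = (MyQuad *) widget;
    GtkAllocation alloc;

    gtk_widget_get_allocation (widget, &alloc);

    /* First frame (or first frame after an invalidation): we now know
     * the size and the state and have access to the GL context, so
     * generate and upload the texture data. */
    if (self->texture == NULL)
      self->texture = my_quad_upload_texture (self, renderer,
                                              alloc.width, alloc.height);

    /* Every frame: hand back a fresh immutable node that merely
     * references the cached texture. Creating the node is cheap. */
    return my_texture_node_new (self->texture, alloc.width, alloc.height);
  }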
So, what can change? If the widget state changes then we manually mark the texture invalid and queue a redraw on the widget. This will trigger a repaint and then we recreate the texture. What if the size changes? Here we can catch size-allocate and detect a size change (as opposed to a pure move) and drop the texture. In the move case we just queue a draw, but don't drop the texture.
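Continuing the same sketch, the invalidation side could look like this (again with made-up helper names; the point is just which events drop the texture and which don't):

  static void
  my_quad_set_state (MyQuad *self, int state)
  {
    if (self->state == state)
      return;

    self->state = state;

    /* The texture contents depend on the state, so mark it invalid
     * and queue a redraw; the next frame regenerates it. */
    g_clear_pointer (&self->texture, my_texture_free);
    gtk_widget_queue_draw (GTK_WIDGET (self));
  }

  static void
  my_quad_size_allocate (GtkWidget *widget, GtkAllocation *allocation)
  {
    MyQuad *self = (MyQuad *) widget;
    GtkAllocation old;

    gtk_widget_get_allocation (widget, &old);
    GTK_WIDGET_CLASS (my_quad_parent_class)->size_allocate (widget, allocation);

    /* A real size change invalidates the texture... */
    if (allocation->width != old.width || allocation->height != old.height)
      g_clear_pointer (&self->texture, my_texture_free);

    /* ...but in both the move and the resize case we only need to
     * resubmit geometry, so a plain queue-draw is enough. */
    gtk_widget_queue_draw (widget);
  }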
Everything in the simple case above can be handled with the things we have now, but gtk could have APIs to make this simpler. In particular, the example above is exactly what gtk should do automatically when falling back to Cairo rendering of a widget, so we need to do this anyway. I.e. I propose adding something between queue-redraw (which just resubmits geometry) and queue-resize (which requests a layout change). Let's call it queue-rerender for this mail. If a cairo-using widget is just moved, then queue-redraw is called and the texture from the last time we called widget.draw is reused; but if, say, the font style or the icon theme changes, then queue-rerender is called and the old texture is invalidated.
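In code, the three levels would then be something like this (gtk_widget_queue_rerender() being the proposed call, it does not exist today):

  /* Widget was only moved: the texture from the last widget.draw is
   * still valid, we just need to resubmit geometry. */
  gtk_widget_queue_draw (widget);

  /* Font style, icon theme or widget state changed: the pixels are
   * stale, so drop the cached texture and call widget.draw again,
   * but the size request is still valid. (Proposed API.) */
  gtk_widget_queue_rerender (widget);

  /* The widget may need a different size: full layout pass. */
  gtk_widget_queue_resize (widget);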
Things also get complicated if a widget wants to do something that is not a straight rendering of the child nodes. For example, it renders the child widgets into an offscreen and runs a shader on it (another case is efficient scrolling à la the pixel cache). In this case we want to cache things and avoid rerendering the offscreen. I think you misunderstood this part. I don't mean that we should keep the render tree between frames. However, if nothing changes in the widget subtree we can cache the rendered offscreen and reuse that. The only complexity here is that queue-redraw needs to properly bubble up the tree so that we can catch it and mark the container widget for redraw, and optionally stop the bubbling (if the child is not visible). This bubbling currently happens via some ugly low-level gdkwindow callback, and needs to be brought up to a proper gtkwidget API.
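Sketched out, what I have in mind for such a container is something like the following. The queue_redraw_child vfunc is made up here; the point is just that the bubbling becomes a proper gtkwidget-level thing the container can intercept.

  /* Hypothetical vfunc hit as a child's queue-redraw bubbles up the
   * widget tree through this container. */
  static gboolean
  my_effect_container_queue_redraw_child (GtkWidget *widget,
                                          GtkWidget *child)
  {
    MyEffectContainer *self = (MyEffectContainer *) widget;

    /* E.g. the child is scrolled out of view: nothing visible changes,
     * so stop the bubble here and keep the cached offscreen. */
    if (!my_effect_container_child_is_visible (self, child))
      return TRUE;

    /* Otherwise the cached offscreen (with the shader applied) is
     * stale; drop it and let the redraw keep bubbling up so that we
     * get repainted too. */
    g_clear_pointer (&self->offscreen, my_offscreen_free);
    return FALSE;
  }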
Another useful aspect of such a queue-rerender API is that it can be used to control caching of GPU resources. For instance, a container can queue-rerender a child that is made invisible or clipped, to save texture memory.
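For instance (again using the proposed call), a stack-like container hiding a page could do something like:

  static void
  my_stack_set_visible_child (MyStack *self, GtkWidget *child)
  {
    /* The old page is no longer visible: tell it to drop its cached
     * textures etc. to save GPU memory. (Proposed API.) */
    if (self->visible_child)
      gtk_widget_queue_rerender (self->visible_child);

    self->visible_child = child;
    gtk_widget_queue_draw (GTK_WIDGET (self));
  }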