Re: Doubts about GPeriodic
- From: Havoc Pennington <hp pobox com>
- To: Owen Taylor <otaylor redhat com>
- Cc: gtk-devel-list gnome org
- Subject: Re: Doubts about GPeriodic
- Date: Thu, 21 Oct 2010 03:09:18 -0400
Another issue: it seems like the ticker needs to be per-native-window:
* the GL context is per-window so the vsync mechanism also is
* we ought to shut down the ticker on windows that aren't visible
* each screen has its own vsync and the window is the normal
convention to imply a screen
* the general principle that widgets should get context and
state from their parent widgets, in most cases ultimately from the
toplevel, but by chaining through parents rather than from global
singletons or global state. I attempted to explain this in
http://log.ometer.com/2010-09.html#19 (so any gtk widget that's a
child of a clutter stage, for example, would want to be asking that
clutter stage for the paint clock; see the sketch just after this list)
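A minimal sketch of what I mean by chaining, with made-up names
(gtk_widget_get_ticker() and toplevel_ensure_ticker() are hypothetical,
not in GTK or in the GPeriodic patch):

  #include <gtk/gtk.h>

  /* hypothetical: a widget finds its ticker by walking up the widget
   * tree instead of consulting a global singleton */
  static GPeriodic *
  gtk_widget_get_ticker (GtkWidget *widget)
  {
    GtkWidget *parent = gtk_widget_get_parent (widget);

    if (parent != NULL)
      return gtk_widget_get_ticker (parent);   /* chain up through parents */

    /* the toplevel (a native window) owns its own clock, so it can be
     * tied to that window's GL context / vsync and shut down while the
     * window isn't visible */
    return toplevel_ensure_ticker (widget);    /* hypothetical helper */
  }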
Native windows would be either toplevels or embedded clutter/glarea
widgets, generally. But maybe just saying "any native window can have
its own clock" is right.
There probably shouldn't even be a global API because using it would
be broken, right?
When not actually using GL or vsync, I think all native windows
could just inherit a single global ticker that would just be a
timeout, but that's more of an implementation detail than an API
thing.
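Roughly something like this, as a sketch (on_fallback_timeout() and
run_shared_frame() are made-up names, standing in for whatever actually
dispatches the ticker's callbacks):

  #include <glib.h>

  /* one shared, timeout-driven ticker for all the native windows that
   * aren't tied to a vsync'd GL context */
  static gboolean
  on_fallback_timeout (gpointer user_data)
  {
    run_shared_frame (user_data);   /* hypothetical dispatch */
    return TRUE;                    /* keep the timeout installed */
  }

  static guint
  install_fallback_ticker (gpointer ticker)
  {
    /* ~60 Hz; just an ordinary main loop timeout, no vsync involved */
    return g_timeout_add (1000 / 60, on_fallback_timeout, ticker);
  }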
Another thought, in the patch
periodic->last_run = now;
I think this will look a little rocky: the frames are going to
display at actual-screen-Hz intervals, no matter what time it is when
you record last_run and no matter what time it is when you call your
drawing APIs. So things look better if you keep the "tween timestamp"
on Hz intervals. The last_run time probably has very little to do with
when frames hit the display. Animations should go ahead and paint
assuming they are hitting the display at a fixed fps.
In the litl shell, FWIW, the pseudocode for the tween time on each frame is:

  int frame_time = 1000 / fps;        /* ms per frame at the target fps */
  int actual_time = <time since start of animation> - current_ticker_time;
  int frames_late = (actual_time / frame_time) - 1;   /* integer division, i.e. floor */

  current_ticker_time += frame_time;  /* the normal one-frame advance */
  if (frames_late > 0) {
      /* a full frame or more behind: skip ahead, staying on a frame boundary */
      current_ticker_time += (frame_time * (frames_late + 1));
  }
The idea of this is: decide to drop frames based on floor(frames_late)
and then skip ahead by ceil(frames_late). The point of that is to bias
against dropping a frame until we're a full frame behind, but then be
sure we drop enough frames to get ahead a bit when we do drop them,
and always stay on a multiple of the refresh rate.
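To make that concrete, plugging made-up numbers into the pseudocode
above (60 fps, and suppose we wake up 50 ms past the ticker's current
position):

  int current_ticker_time = 0;
  int frame_time  = 1000 / 60;                        /* 16 ms */
  int actual_time = 50 - current_ticker_time;         /* 50 ms behind */
  int frames_late = (actual_time / frame_time) - 1;   /* 50 / 16 - 1 = 2 */

  current_ticker_time += frame_time;                  /* +16 */
  if (frames_late > 0)
    current_ticker_time += frame_time * (frames_late + 1);   /* +48, 64 total */

so the ticker jumps ahead by four whole frame intervals, which drops
some frames but leaves us a bit ahead rather than perpetually just
behind.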
Due to this, and also the desire to not explode when the computer's
clock is changed, I would define the ticker to be a monotonic value
that is in time units but is not a wall-clock time. I.e. if I change my
computer's clock back an hour, the ticker should keep marching
forward, and the ticker is allowed to be fudged to make animations
pretty.
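In GLib terms I'd assume that means basing the ticker on
g_get_monotonic_time() rather than wall-clock time, plus an offset you
can nudge; a rough sketch (ticker_offset_us and the fudge function are
made up, not part of the patch):

  #include <glib.h>

  static gint64 ticker_offset_us = 0;

  static gint64
  ticker_now_ms (void)
  {
    /* monotonic, so setting the computer's clock back an hour
     * doesn't move the ticker backward */
    return (g_get_monotonic_time () + ticker_offset_us) / 1000;
  }

  static void
  ticker_fudge_forward (gint64 ms)
  {
    /* only ever push the ticker forward, to keep animations pretty */
    ticker_offset_us += ms * 1000;
  }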
Havoc