Re: Cluechaining and Original Clues
- From: Jim McDonald <Jim mcdee net>
- To: Joe Shaw <joe joeshaw org>
- Cc: dashboard-hackers gnome org
- Subject: Re: Cluechaining and Original Clues
- Date: Sat, 20 Dec 2003 16:41:31 +0000
On Sat, 2003-12-20 at 15:58, Joe Shaw wrote:
> Hi,
> [...]
> > It appears that the backends which generate new clues don't remove the
> > old ones from the packets, which means that some backends will return
> > the same information twice (once for the original clue in the original
> > cluepacket, and again for the same information in the duplicate
> > cluepacket). Is this meant to be the way that this works, or should
> > duplicate cluepackets from chainers remove the original clues?
> I might be remembering wrong, but I am pretty sure they are supposed to
> be there, since there's the possibility that a backend might not get the
> original cluepacket but might instead only get the rewritten one. We
> might have changed that, though. It also allows backends to know about
> the previous clues, so they can return cached information or just ignore
> them if they know they've dealt with them in the past. There was also
> the plan to add a maximum chain depth, but I don't know if that was ever
> implemented. I don't think it ever really came up, since we had
> relatively few chaining backends.
Okay, so from that I'd say that the backends should either receive a single cluepacket with all of the chained clues, or multiple cluepackets each with their own separate set of clues. The current system of sending multiple cluepackets with overlapping sets of clues seems to be redundant.
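To make the two options concrete, here's a rough sketch (Python rather than the real Dashboard classes, and the field names are made up for illustration) of a single accumulating cluepacket versus separate per-chainer packets. I've also stuck a chain_depth field in there, since the maximum chain depth Joe mentioned would hang off something like that:

from dataclasses import dataclass, field

@dataclass
class Clue:
    type: str   # e.g. "email_address", "full_name"
    text: str

@dataclass
class CluePacket:
    clues: list = field(default_factory=list)
    chain_depth: int = 0

def chain_in_place(packet, new_clues):
    # Option A: one packet accumulates every chained clue, so each
    # backend sees the whole set exactly once.
    packet.clues.extend(new_clues)
    packet.chain_depth += 1
    return packet

def chain_as_new_packet(packet, new_clues):
    # Option B: the chainer emits a fresh packet holding only its own
    # clues; the original packet is delivered unchanged alongside it.
    return CluePacket(clues=list(new_clues),
                      chain_depth=packet.chain_depth + 1)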
> Of course, there's nothing concrete about the design, and if anyone
> comes up with a better, more elegant way to deal with it, we'll run with
> it.
Well, we could either make chainers separate from backends, so that the cluepacket goes through all of the chainers before it reaches the backends, or we could leave things as they are but have cluechainers strip out the original clues.
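The first of those would look roughly like the dispatch loop below (again just an illustrative Python sketch building on the CluePacket above, and chain()/query() are made-up method names, not the real interfaces): the packet passes through every chainer before any backend sees it, so nothing is delivered twice.

def dispatch(packet, chainers, backends):
    # Stage 1: every chainer runs first and may add derived clues to
    # the one packet.
    for chainer in chainers:
        packet.clues.extend(chainer.chain(packet))
    # Stage 2: only the final, fully-chained packet reaches the
    # backends, so no backend sees the same clue twice.
    matches = []
    for backend in backends:
        matches.extend(backend.query(packet))
    return matches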
> As for duplicates, I think they're fine. There was supposed to be a
> result filtering class which would remove duplicates and cull out
> results with lower relevances.
I still think that straight duplicates are a waste of time and effort. Most backends don't keep their own cache, so for network-heavy backends in particular this could be a severe waste of resources as the number of cluechainers and backends grows.
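For what it's worth, the filtering itself is easy enough; something like the sketch below (the match fields are invented) would drop straight duplicates and keep only the highest-relevance hit per backend and target. But it doesn't win back the network round-trips the backends have already made, which is the part I'd like to avoid.

def filter_matches(matches):
    # Keep only the highest-relevance match for each (backend, target)
    # pair; straight duplicates and lower-scored repeats are dropped.
    best = {}
    for m in matches:
        key = (m["backend"], m["target"])
        if key not in best or m["relevance"] > best[key]["relevance"]:
            best[key] = m
    return sorted(best.values(), key=lambda m: m["relevance"], reverse=True)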
> I know it doesn't really answer your questions concretely, but I hope it
> helps. :)
Yeah, that pretty much covers it. I was just interested in which bits of the current code are bugs and which are design decisions :)
> Joe
Cheers,
Jim.
--
Jim McDonald - Jim mcdee net