Re: Nautilus/Medusa search index enhancements

To answer B first: Rebecca is medusa's maintainer.  Seth helped port it
to GNOME 2.  As for A, your plan isn't gone.  Medusa still works, and it
is very fast.  I believe making medusa a user-level application helps it
play well on many OSes and simplifies user preferences.

Oh. Well, I guess A is still doable. But B, which would have been absolutely great, can't be done anymore. Please tell me whether medusa still has that permissions code, just in case someone wants to start it as a system daemon.


Well, I'd like that, and it can be done.  It's a usability issue
complicated by what the user intends the index for.  I maintain indexes
both as root and as a user, and I'm frustrated by managing which paths
each index covers and what it indexes.  As a user I'm not interested in
the noise in my results from cache/, tmp/, .bak, and '~' files, but root
is.  I want to index several remote volumes, but root doesn't.

I think the problem can be simplified by using a preconfigured HOME
index for each user, and an ALL index shared by all users.

Most people create a /home/shared or /var/shared style directory, so that would be left out of the index as you proposed.

 Medusa may need a
preferences dialog to set file types and paths to ignore.  Or the
directory properties in nautilus, and file types/programs, could be
extended.

Distributed searches are separate indexes, so it might be easier to
implement your ideas now that medusa knows about multiple indexes.  We
need an intersection/union routine for multiple indexes.  Such a trick
could be further extended to integrate results from Google.
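For illustration only (this is not medusa's API), the intersection/union routine could amount to set operations over per-index hit lists. Everything below is an assumption for the sketch, including the simplification of treating a hit as a bare path:

```python
# Hypothetical sketch of combining hit lists from several indexes
# (e.g. a user's HOME index and a shared ALL index).  Real medusa
# results would carry more metadata than a path string.

def union_hits(*result_sets):
    """Merge results from multiple indexes (OR semantics)."""
    merged = set()
    for hits in result_sets:
        merged |= set(hits)
    return sorted(merged)

def intersect_hits(*result_sets):
    """Keep only results present in every index (AND semantics)."""
    if not result_sets:
        return []
    common = set(result_sets[0])
    for hits in result_sets[1:]:
        common &= set(hits)
    return sorted(common)

home_hits = ["/home/me/notes.txt", "/home/me/song.mp3"]
all_hits = ["/home/me/song.mp3", "/usr/share/doc/README"]

print(union_hits(home_hits, all_hits))
print(intersect_hits(home_hits, all_hits))
```

Integrating an external source like Google would just mean feeding one more hit list into the same merge step.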

Google? Isn't that a bit much? I know the indexer can be powerful, but isn't it supposed to index and retrieve file system objects? What I'd really want to see is Medusa asking the remote NFS server for search results, instead of having to index remote volumes (which can be really slow on slow mounts or large file servers). It's seconds versus hours.

Wrong. FAM is a file notification daemon. Do you really expect the local search daemon to ask FAM to monitor EVERY file on the file server? That's not possible.

Maybe not, but I'd like to see incremental indexing.  I'll resort to
cron if I have to.

Incremental indexing is a good idea. The problem is that to make it work, you need to trap writes to the filesystem, queue whatever was trapped, and then reindex it when the machine is idle. That evidently only works when you're running as root, as a daemon, not as a user; it breaks when you're trying to do user-level things. Having medusa run as root was a good idea. Slocate does it; what was the motivation to make it run as a user now?


Nautilus doesn't know how to select the index.  My own thought was to
have an ALL index owned by system/nobody, and a user index named HOME.
The user can select which index to search.

UI nightmare. People just want to search. Why would they need to know about "indexes"?

KDE/GNOME/medusa developers can certainly write a wrapper to access
search, and extend Konqueror to use it.  GStreamer is in this position
right now: they have a good low-level architecture, and some users are
making KDE interfaces to call it, but KDE isn't willing to accept it as
part of KDE because it isn't native C++.

Well, we've always known that the KDE people suffer from a bit of the "not invented here" syndrome. But they gotta draw the line at some point. When there's an infrastructure which works fine for searches, they won't have an excuse to reinvent the wheel, just as they didn't with glibc.

Rebecca is.  I think I'm the only user looking at medusa code on a
weekly basis.  My priority is to get it working with nautilus, so that
other users can sample its power.  I need some content indexers

Yes, please. Can an "MP3" content indexer store the ID3 tag attributes in the index, so I can ask the index to look for an "Author" called "ATB"? If you could do this, I'd extend my "enqlocate" script to automagically query your medusa index. Oh, do you want a copy of the script? It enqueues anything found by locate that matches a certain regexp.

, and a better
means of controlling what and where indexing takes place.
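The ID3 extraction the MP3 indexer idea implies is cheap: an ID3v1 tag is just the last 128 bytes of the file ("TAG" followed by fixed-width fields). A sketch of what such an indexer could pull out (not medusa code; a real indexer would feed these attributes into the index rather than return a dict, and ID3v2 is a different, more involved format):

```python
# Parse an ID3v1 tag from MP3 file data.  Layout per the ID3v1 spec:
# "TAG"(3) + title(30) + artist(30) + album(30) + year(4)
# + comment(30) + genre(1) = 128 bytes at the end of the file.
def read_id3v1(data):
    """Return ID3v1 attributes from raw file bytes, or None if absent."""
    tag = data[-128:]
    if len(tag) < 128 or tag[:3] != b"TAG":
        return None

    def field(start, length):
        # Fields are padded with NULs or spaces; ID3v1 text is Latin-1.
        return tag[start:start + length].rstrip(b"\x00 ").decode("latin-1")

    return {
        "title": field(3, 30),
        "artist": field(33, 30),   # ID3 calls the "Author" field artist
        "album": field(63, 30),
        "year": field(93, 4),
    }
```

A query like artist == "ATB" would then be an attribute match against these stored fields rather than a content grep.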
This I want to know:
1. Is the ability to index as root and to run as a daemon still present? Is the init.d medusa script still included (although not enabled) with the distribution? If those things are still present, we could reach a compromise: make Medusa automatically exclude NFS-mounted volumes from indexes, and have Medusa relay search URIs to remotely running medusa-searchd instances on NFS volumes. That way I can impress my net admin, and you can have the security you're striving for.
2. What's the ETA for making the changes to the medusa infrastructural database to support indexing per volume, instead of absolute paths?
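Automatically excluding NFS-mounted volumes, as point 1 suggests, could start from the mount table. A Linux-specific sketch that reads the /proc/mounts format (the mounts text is passed in as a string here so the logic is testable; medusa itself may detect remote volumes some other way):

```python
# Decide whether a path lives on an NFS mount by inspecting mount-table
# text in /proc/mounts format: "device mountpoint fstype options ...".
def nfs_mount_points(mounts_text):
    """Return mount points whose filesystem type is nfs or nfs4."""
    points = []
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2] in ("nfs", "nfs4"):
            points.append(parts[1])
    return points


def on_nfs(path, mounts_text):
    """True if `path` is at or below any NFS mount point."""
    return any(path == m or path.startswith(m + "/")
               for m in nfs_mount_points(mounts_text))
```

In the compromise described above, the indexer would skip any directory for which on_nfs() is true and hand the corresponding search URI to the remote medusa-searchd instead.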

good luck and thanks!!!

     Rudd-O



