Re: [Evolution] Dealing with huge imap accounts / best practise?



pete:


Hm, I wish I could avoid caches -- please help me understand the
following behaviour, though:

- Create a fresh IMAP account with local sync disabled (20,000+
  emails on the server, a lot of subfolders)
- Use another mail client to move an email into a subfolder of
  that same IMAP account; in my case the mail was flagged as read
- In Evolution, search for that mail from the top folder, including
  all subfolders, by entering the unique subject
- BTW: Indeed, the status line displays something like "server-based
  search"... nice
- Wait until all processes finish (10 minutes here)

Result: Mail not found.
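As an aside, the repro setup can be scripted as well. A minimal sketch with Python's imaplib that APPENDs a test mail with a unique subject, already flagged \Seen, straight into a subfolder -- host, credentials and folder name below are placeholders, not our actual setup:

```python
import imaplib
import time
from email.message import EmailMessage

def make_test_mail(subject: str) -> bytes:
    """Build a minimal RFC 5322 message with a unique subject."""
    msg = EmailMessage()
    msg["From"] = "tester@example.org"
    msg["To"] = "pete@example.org"
    msg["Subject"] = subject
    msg.set_content("search target")
    return msg.as_bytes()

# Usage sketch (placeholders -- adjust for your server):
# imap = imaplib.IMAP4_SSL("imap.example.org")
# imap.login("pete", "secret")
# # APPEND directly into the subfolder, already marked as read:
# imap.append('"INBOX/sub"', "(\\Seen)",
#             imaplib.Time2Internaldate(time.time()),
#             make_test_mail("evo-search-test-0001"))
# imap.logout()
```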

- Try Send / Receive messages
- This took 30 minutes
- Search repeated as above

Result: Mail not found.

When I manually open the subfolder containing that mail once, and
then restart the search from the top folder, the mail is found.

What is happening here? 


You say the status line says that it's a server based search - then
surely it is the server, not evolution, that is the issue?

Could be -- but still I don't really get it:

The status line showed the IMAP server's hostname, yes, but it also
disappeared while the search continued for a long time. I don't
know whether this is the intended behaviour.

How would we debug server-side searches...? I fear it might get
complicated here. I'd love to hear from you all that this should
work, though; I believe we won't be the only ones using
Evolution/Dovecot.
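For what it's worth, IMAP SEARCH only ever runs against the currently selected mailbox, so a cross-folder search has to SELECT and SEARCH every folder in turn. One way to take Evolution out of the picture is to replay that at the protocol level with Python's imaplib. A minimal sketch -- hostname, credentials and subject are placeholders, and real LIST responses can be messier than this parser assumes:

```python
import imaplib
import re

def parse_list_line(line: bytes) -> str:
    """Extract the mailbox name from one LIST response line, e.g.
    b'(\\HasNoChildren) "/" "INBOX/sub"'  ->  'INBOX/sub'."""
    m = re.match(rb'\([^)]*\) "[^"]+" (?P<name>.+)', line)
    return m.group("name").decode().strip('"')

def search_all_folders(imap, subject):
    """SELECT every folder in turn and run the same SUBJECT search a
    client would; returns {folder: [matching UIDs]}."""
    hits = {}
    _, listing = imap.list()
    for line in listing:
        if rb"\Noselect" in line:   # skip folders that cannot be opened
            continue
        folder = parse_list_line(line)
        imap.select(f'"{folder}"', readonly=True)
        _, data = imap.uid("SEARCH", "SUBJECT", f'"{subject}"')
        uids = data[0].split()
        if uids:
            hits[folder] = uids
    return hits

# Usage sketch (placeholders -- adjust for your server):
# imap = imaplib.IMAP4_SSL("imap.example.org")
# imap.login("pete", "secret")
# print(search_all_folders(imap, "evo-search-test-0001"))
# imap.logout()
```

If this finds the mail in the subfolder while Evolution does not, the server side is fine and the problem is on the client side (and vice versa).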

What happens after I open the targeted folder with the mail once?
The mail then gets found, although local caching is explicitly
switched off?

Out of interest, those search times and send/receive look to be
excessive.  20k mails is not big - I have 100+k in a single folder; I
know the number of folders is important, but I have a fair few and I
can delete the cache folder and Evo will rebuild the message list in a
few minutes.  Are you on a particularly slow connection? Is the server
underpowered?  Do you have any control over the server?

Thanks for addressing this. The reason is probably that we have
thousands of subfolders and quite a deep structure. And indeed, my
test setup is quite slow: a 1 TB HDD with btrfs DUP on it, to
simulate some corner cases from actual users here. The rest is
pretty fast and under my control. I'll probably switch the test bed
to an SSD.
