Re: [Anjuta-list] Symbol Name generation from file for autocomplete/calltip [LONG]



OK, I went through the code and the relevant function seems to be this:
tags_manager.c:606 update_using_ctags (TagsManager * tm, gchar * filename)

You're right, it does almost the same thing.

However, AFAICS, there are a couple of minor problems:

1) It only extracts the "local" tags for the file, which means that you are 
handling the global calltip/autocomplete using linux-gnome-c.api and the 
local tags with tags.cache. This is probably because the function does not 
pre-process the file to pull in the included headers, which was the original 
problem.

2) Anjuta is probably not using this file for anything apart from 'symbol' 
auto-completion. The original poster's suggestion (if I understood him 
correctly) was that autocompletion, calltips and perhaps keyword highlighting 
should depend on the included files, and it should work irrespective of 
whether the header is a system header or a local header. In order for this to 
work correctly, we need to:
    a) Pre-process the file on a periodic basis (maybe on each save?) with 
the project-level compiler settings for include directories, defines, etc.
    b) Run ctags on the pre-processed file and extract all symbols and, for 
functions, prototypes and macros, the definition, and store these in the 
TagsManager struct for that file/project (see the sketch just after this 
list).
    c) Use this for auto-completion, calltips and maybe even syntax 
highlighting (instead of hardcoding those in Scintilla's properties file).
    d) Do away with the system-level API file with hardcoded definitions.
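
To make steps (a) and (b) of that list concrete, here is a rough sketch of 
the preprocess-then-ctags step. Everything in it is illustrative - the 
function name, the hardcoded gcc/ctags command lines and the temporary file 
path are made up; the real thing would take the flags from the project 
settings and parse the ctags output into the TagsManager struct:

#include <stdio.h>
#include <stdlib.h>

/* cpp_flags would come from the project settings,
 * e.g. "-I./include -DHAVE_CONFIG_H" */
static int
preprocess_and_tag (const char *filename, const char *cpp_flags,
                    const char *tags_file)
{
    char cmd[1024];

    /* (a) Pre-process the source so that both system and local
     *     headers get pulled in. */
    snprintf (cmd, sizeof (cmd), "gcc -E %s %s > /tmp/anjuta_pp.c",
              cpp_flags, filename);
    if (system (cmd) != 0)
        return -1;

    /* (b) Run ctags on the pre-processed output; "-x" gives a
     *     readable cross-reference we can parse into the
     *     TagsManager struct. */
    snprintf (cmd, sizeof (cmd), "ctags -x /tmp/anjuta_pp.c > %s",
              tags_file);
    return (system (cmd) == 0) ? 0 : -1;
}

Parsing the "-x" output (tag name, kind, line number, file and source line, 
one tag per line) into the struct is just string splitting, so I have left 
it out.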

AFAICS, there are two ways to go about it:

a) Pre-process each file fully and extract all symbols/definitions from it.
b)  Extract the header file names from the file.
    For each header
        If the header has not already been CTAG-ed
            CTAG the header and add its definitions to a project-level API file
        End If
    End For

(a) is probably simpler.
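
For what it's worth, (b) could look something like the sketch below. Again, 
the names are invented and this is not existing Anjuta API; already_tagged 
would be a GHashTable created with g_str_hash/g_str_equal that remembers 
which headers have been CTAG-ed already:

#include <glib.h>
#include <stdlib.h>

static void
tag_headers (GList *header_paths, GHashTable *already_tagged,
             const char *project_api_file)
{
    GList *node;

    for (node = header_paths; node != NULL; node = node->next)
    {
        const char *header = (const char *) node->data;
        char *cmd;

        /* Skip headers that have already been CTAG-ed. */
        if (g_hash_table_lookup (already_tagged, header) != NULL)
            continue;

        /* Append this header's tags to the project-level API file. */
        cmd = g_strdup_printf ("ctags -x '%s' >> '%s'",
                               header, project_api_file);
        if (system (cmd) == 0)
            g_hash_table_insert (already_tagged, g_strdup (header),
                                 GINT_TO_POINTER (1));
        g_free (cmd);
    }
}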

Note that this is in no way a perfect solution. I can think of quite a few 
drawbacks to this, namely:

a) Memory consumption - this will be quite high when the user has a large 
number of open files, since we need to maintain a separate TagsManager object 
for each file. Of course, we can have a project-level TagsManager object like 
we do now and sacrifice some accuracy for speed and low memory usage.
b) CTAGS has some inherent limitations which we'll have to live with. 
Firstly, it does not extract the full function/macro definition but just the 
line containing the definition - this might be a problem for multiline 
declarations (using the Python program dynamically would be too slow, I 
believe). Also, ctags cannot parse a stream - which is why, in the second 
function, I was writing the pre-parsed file to a temporary file before 
running ctags on it.
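
That workaround boils down to something like the sketch below (not the 
actual code I have; the mkstemp template and the --language-force flag are 
just one way to do it, the flag being needed because the temp file has no .c 
extension):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int
ctags_on_buffer (const char *preprocessed_text, const char *tags_file)
{
    char tmp_name[] = "/tmp/anjuta_ppXXXXXX";
    char cmd[512];
    int fd, status;

    /* ctags will not read from a pipe/stdin, so dump the
     * pre-processed text into a temporary file first. */
    fd = mkstemp (tmp_name);
    if (fd < 0)
        return -1;
    if (write (fd, preprocessed_text, strlen (preprocessed_text)) < 0)
    {
        close (fd);
        unlink (tmp_name);
        return -1;
    }
    close (fd);

    /* The temp file has no .c extension, so force the language. */
    snprintf (cmd, sizeof (cmd), "ctags -x --language-force=c %s > %s",
              tmp_name, tags_file);
    status = system (cmd);

    unlink (tmp_name);    /* clean up */
    return status;
}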

I can give this a go if no one else is working on it? Or do the developers 
have a completely different plan for this?

- Biswa.

On Monday 10 September 2001 10:17 pm, Naba Kumar wrote:
> On 10 Sep 2001 22:12:45 +0530, Naba Kumar wrote:
> > > CAVEAT: This is currently a bit slow (~ .6 secs per file - doesn't
>
> ..
>
> > function]? The function I wrote was quite fast (around 0.5 secs for each
> > file or lesser).
>
> Of course, they are almost the same :), but anjuta manages a cache to speed
> up the tags extraction. That is what I meant by 'fast'. :-)
>
> --
>
> Regards,
> -Naba
>
> -------------------------------------------------------------
> We do not colonize.  We conquer.  We rule.  There is no other way for
> us.
> 		-- Rojan, "By Any Other Name", stardate 4657.5
> -------------------------------------------------------------



