[gnome-continuous-yocto/gnomeostree-3.28-rocko: 3531/8267] bitbake: codeparser.py: support deeply nested tokens
- From: Emmanuele Bassi <ebassi src gnome org>
- To: commits-list gnome org
- Cc:
- Subject: [gnome-continuous-yocto/gnomeostree-3.28-rocko: 3531/8267] bitbake: codeparser.py: support deeply nested tokens
- Date: Sun, 17 Dec 2017 00:45:50 +0000 (UTC)
commit caf1a69577b10bbb0e7914964e2ef4bb69c18def
Author: Patrick Ohly <patrick ohly intel com>
Date: Fri Nov 18 16:23:22 2016 +0100
bitbake: codeparser.py: support deeply nested tokens
For shell constructs like
echo hello & wait $!
the process_tokens() method ended up with a situation where "token"
in the "name, value = token" assignment was a list of tuples
and not the expected tuple, causing the assignment to fail.
There were already two for loops (one in _parse_shell(), one in
process_tokens()) which iterated over token lists, but the actual
nesting can be deeper than a single level of lists.
Now there is just one such loop in process_token_list() which calls
itself recursively when it detects that a list entry is another list.
As a side effect (improvement?!) of the loop removal in
_parse_shell(), the local function definitions in process_tokens() get
executed less often.
Fixes: [YOCTO #10668]
(Bitbake rev: d18a74de9ac75ba32f84c40620ca9d47c1ef96a3)
Signed-off-by: Patrick Ohly <patrick ohly intel com>
Signed-off-by: Richard Purdie <richard purdie linuxfoundation org>
bitbake/lib/bb/codeparser.py | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
---
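To make the failure mode concrete, here is a minimal standalone sketch. The token names and shapes below are only illustrative stand-ins for pyshyacc output, not the real structures: entries in the token tree may be (name, value) tuples or nested lists of such tuples, so a flat "name, value = token" unpacking breaks as soon as a nested list appears, while a recursive walk handles any depth.

    # Illustrative token tree: tuples mixed with nested lists (hypothetical
    # names, standing in for what the parser yields for "echo hello & wait $!").
    tokens = [
        ("simple_command", "echo hello"),
        [
            ("async", "&"),
            [("simple_command", "wait $!")],
        ],
    ]

    def process_token_list(tokens):
        for token in tokens:
            if isinstance(token, list):
                # Recurse instead of attempting "name, value = token" on a list.
                process_token_list(token)
                continue
            name, value = token
            print("handling", name, value)

    process_token_list(tokens)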
diff --git a/bitbake/lib/bb/codeparser.py b/bitbake/lib/bb/codeparser.py
index 25938d6..5d2d440 100644
--- a/bitbake/lib/bb/codeparser.py
+++ b/bitbake/lib/bb/codeparser.py
@@ -342,8 +342,7 @@ class ShellParser():
         except pyshlex.NeedMore:
             raise sherrors.ShellSyntaxError("Unexpected EOF")
 
-        for token in tokens:
-            self.process_tokens(token)
+        self.process_tokens(tokens)
 
     def process_tokens(self, tokens):
         """Process a supplied portion of the syntax tree as returned by
@@ -389,18 +388,24 @@ class ShellParser():
             "case_clause": case_clause,
         }
 
-        for token in tokens:
-            name, value = token
-            try:
-                more_tokens, words = token_handlers[name](value)
-            except KeyError:
-                raise NotImplementedError("Unsupported token type " + name)
+        def process_token_list(tokens):
+            for token in tokens:
+                if isinstance(token, list):
+                    process_token_list(token)
+                    continue
+                name, value = token
+                try:
+                    more_tokens, words = token_handlers[name](value)
+                except KeyError:
+                    raise NotImplementedError("Unsupported token type " + name)
+
+                if more_tokens:
+                    self.process_tokens(more_tokens)
 
-            if more_tokens:
-                self.process_tokens(more_tokens)
+                if words:
+                    self.process_words(words)
 
-            if words:
-                self.process_words(words)
+        process_token_list(tokens)
 
     def process_words(self, words):
         """Process a set of 'words' in pyshyacc parlance, which includes