finn pushed to branch finn/replace-logger-with-echo at BuildGrid / buildgrid
Commits:
-
886e3ff4
by Laurence Urhegyi at 2018-11-22T18:18:58Z
-
131c6d87
by Finn at 2018-11-23T09:01:50Z
-
2360d613
by Finn at 2018-11-23T09:01:50Z
-
15e7a095
by Finn at 2018-11-23T09:01:50Z
-
b21c1258
by Finn at 2018-11-23T09:01:50Z
-
7e184bf9
by Finn at 2018-11-23T09:01:50Z
-
38ed83ba
by Finn at 2018-11-23T09:01:50Z
10 changed files:
- + COMMITTERS.md
- CONTRIBUTING.rst
- − MAINTAINERS
- buildgrid/_app/bots/buildbox.py
- buildgrid/_app/bots/host.py
- buildgrid/_app/commands/cmd_bot.py
- buildgrid/_app/commands/cmd_cas.py
- buildgrid/_app/commands/cmd_execute.py
- buildgrid/_app/commands/cmd_operation.py
- buildgrid/_app/commands/cmd_server.py
Changes:
1 |
+## COMMITTERS
|
|
2 |
+ |
|
3 |
+| Name | Email |
|
|
4 |
+| -------- | -------- |
|
|
5 |
+| Carter Sande | <carter.sande@duodecima.technology> |
|
|
6 |
+| Ed Baunton | <edbaunton@gmail.com> |
|
|
7 |
+| Laurence Urhegyi | <laurence.urhegyi@codethink.co.uk> |
|
|
8 |
+| Finn Ball | <finn.ball@codethink.co.uk> |
|
|
9 |
+| Paul Sherwood | <paul.sherwood@codethink.co.uk> |
|
|
10 |
+| James Ennis | <james.ennis@codethink.com> |
|
|
11 |
+| Jim MacArthur | <jim.macarthur@codethink.co.uk> |
|
|
12 |
+| Juerg Billeter | <juerg.billeter@codethink.co.uk> |
|
|
13 |
+| Martin Blanchard | <martin.blanchard@codethink.co.uk> |
|
|
14 |
+| Marios Hadjimichael | <mhadjimichae@bloomberg.net> |
|
|
15 |
+| Raoul Hidalgo Charman | <raoul.hidalgocharman@codethink.co.uk> |
|
|
16 |
+| Rohit Kothur | <rkothur@bloomberg.net> |
|
... | ... | @@ -32,40 +32,31 @@ side effects and quirks the feature may have introduced. More on this below in |
32 | 32 |
|
33 | 33 |
.. _BuildGrid mailing list: https://lists.buildgrid.build/cgi-bin/mailman/listinfo/buildgrid
|
34 | 34 |
|
35 |
- |
|
36 | 35 |
.. _patch-submissions:
|
37 | 36 |
|
38 | 37 |
Patch submissions
|
39 | 38 |
-----------------
|
40 | 39 |
|
41 |
-We are running `trunk based development`_. The idea behind this is that merge
|
|
42 |
-requests to the trunk will be small and made often, thus making the review and
|
|
43 |
-merge process as fast as possible. We do not want to end up with a huge backlog
|
|
44 |
-of outstanding merge requests. If possible, it is preferred that merge requests
|
|
45 |
-address specific points and clearly outline what problem they are solving.
|
|
46 |
- |
|
47 |
-Branches must be submitted as merge requests (MR) on GitLab and should be
|
|
48 |
-associated with an issue, whenever possible. If it's a small change, we'll
|
|
49 |
-accept an MR without it being associated to an issue, but generally we prefer an
|
|
50 |
-issue to be raised in advance. This is so that we can track the work that is
|
|
40 |
+Branches must be submitted as merge requests (MR) on GitLab and should have a
|
|
41 |
+corresponding issue raised in advance (whenever possible). If it's a small change,
|
|
42 |
+an MR without an associated issue is okay, but generally we prefer an
|
|
43 |
+issue to be raised in advance so that we can track the work that is
|
|
51 | 44 |
currently in progress on the project.
|
52 | 45 |
|
46 |
+When submitting a merge request, please obtain a review from another committer
|
|
47 |
+who is familiar with the area of the code base which the branch affects. An
|
|
48 |
+approval from another committer who is not the patch author will be needed
|
|
49 |
+before any merge (we use GitLab's 'approval' feature for this).
|
|
50 |
+ |
|
53 | 51 |
Below is a list of good patch submission practices:
|
54 | 52 |
|
55 | 53 |
- Each commit should address a specific issue number in the commit message. This
|
56 | 54 |
is really important for provenance reasons.
|
57 |
-- Merge requests that are not yet ready for review must be prefixed with the
|
|
58 |
- ``WIP:`` identifier, but if we stick to trunk based development then the
|
|
59 |
- ``WIP:`` identifier will not stay around for very long on a merge request.
|
|
60 |
-- When a merge request is ready for review, please find someone willing to do
|
|
61 |
- the review (ideally a maintainer) and assign them the MR, leaving a comment
|
|
62 |
- asking for their review.
|
|
55 |
+- Merge requests that are not yet ready for review should be prefixed with the
|
|
56 |
+ ``WIP:`` identifier.
|
|
63 | 57 |
- Submitted branches should not contain a history of work done.
|
64 | 58 |
- Unit tests should be a separate commit.
|
65 | 59 |
|
66 |
-.. _trunk based development: https://trunkbaseddevelopment.com
|
|
67 |
- |
|
68 |
- |
|
69 | 60 |
Commit messages
|
70 | 61 |
~~~~~~~~~~~~~~~
|
71 | 62 |
|
... | ... | @@ -89,6 +80,57 @@ For more tips, please read `The seven rules of a great Git commit message`_. |
89 | 80 |
|
90 | 81 |
.. _The seven rules of a great Git commit message: https://chris.beams.io/posts/git-commit/#seven-rules
|
91 | 82 |
|
83 |
+.. _committer-access:
|
|
84 |
+ |
|
85 |
+Committer access
|
|
86 |
+----------------
|
|
87 |
+ |
|
88 |
+Committers in the BuildGrid project are those folks to whom the right to
|
|
89 |
+directly commit changes to our version controlled resources has been granted.
|
|
90 |
+While every contribution is
|
|
91 |
+valued regardless of its source, not every person who contributes code to the
|
|
92 |
+project will earn commit access. The `COMMITTERS`_ file lists all committers.
|
|
93 |
+ |
|
94 |
+.. _COMMITTERS: https://gitlab.com/BuildGrid/buildgrid/blob/master/COMMITTERS.md
|
|
95 |
+.. _Subversion: http://subversion.apache.org/docs/community-guide/roles.html#committers
|
|
96 |
+ |
|
97 |
+ |
|
98 |
+How commit access is granted
|
|
99 |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
100 |
+ |
|
101 |
+After someone has successfully contributed a few non-trivial patches, some full
|
|
102 |
+committer, usually whoever has reviewed and applied the most patches from that
|
|
103 |
+contributor, proposes them for commit access. This proposal is sent only to the
|
|
104 |
+other full committers -- the ensuing discussion is private, so that everyone can
|
|
105 |
+feel comfortable speaking their minds. Assuming there are no objections, the
|
|
106 |
+contributor is granted commit access. The decision is made by consensus; there
|
|
107 |
+are no formal rules governing the procedure, though generally if someone strongly
|
|
108 |
+objects the access is not offered, or is offered on a provisional basis.
|
|
109 |
+ |
|
110 |
+This of course relies on contributors being responsive and showing willingness
|
|
111 |
+to address any problems that may arise after landing patches. However, the primary
|
|
112 |
+criterion for commit access is good judgment.
|
|
113 |
+ |
|
114 |
+You do not have to be a technical wizard, or demonstrate deep knowledge of the
|
|
115 |
+entire codebase to become a committer. You just need to know what you don't
|
|
116 |
+know. If your patches adhere to the guidelines in this file, adhere to all the usual
|
|
117 |
+unquantifiable rules of coding (code should be readable, robust, maintainable, etc.),
|
|
118 |
+and respect the Hippocratic Principle of "first, do no harm", then you will probably
|
|
119 |
+get commit access pretty quickly. The size, complexity, and quantity of your patches
|
|
120 |
+do not matter as much as the degree of care you show in avoiding bugs and minimizing
|
|
121 |
+unnecessary impact on the rest of the code. Many full committers are people who have
|
|
122 |
+not made major code contributions, but rather lots of small, clean fixes, each of
|
|
123 |
+which was an unambiguous improvement to the code. (Of course, this does not mean the
|
|
124 |
+project needs a bunch of very trivial patches whose only purpose is to gain commit
|
|
125 |
+access; knowing what's worth a patch post and what's not is part of showing good
|
|
126 |
+judgment.)
|
|
127 |
+ |
|
128 |
+When submitting a merge request, please obtain a review from another committer
|
|
129 |
+who is familiar with the area of the code base which the branch affects. Asking on
|
|
130 |
+Slack is probably the best way to go about this. An approval from a committer
|
|
131 |
+who is not the patch author will be needed before any merge (we use GitLab's
|
|
132 |
+'approval' feature for this).
|
|
133 |
+ |
|
92 | 134 |
|
93 | 135 |
.. _coding-style:
|
94 | 136 |
|
... | ... | @@ -198,35 +240,6 @@ trunk. |
198 | 240 |
|
199 | 241 |
.. _coverage report: https://buildgrid.gitlab.io/buildgrid/coverage/
|
200 | 242 |
|
201 |
- |
|
202 |
-.. _committer-access:
|
|
203 |
- |
|
204 |
-Committer access
|
|
205 |
-----------------
|
|
206 |
- |
|
207 |
-We'll hand out commit access to anyone who has successfully landed a single
|
|
208 |
-patch to the code base. Please request this via Slack or the mailing list.
|
|
209 |
- |
|
210 |
-This of course relies on contributors being responsive and showing willingness
|
|
211 |
-to address any problems that may arise after landing branches.
|
|
212 |
- |
|
213 |
-When submitting a merge request, please obtain a review from another committer
|
|
214 |
-who is familiar with the area of the code base which the branch effects. An
|
|
215 |
-approval from another committer who is not the patch author will be needed
|
|
216 |
-before any merge (we use gitlab's 'approval' feature for this).
|
|
217 |
- |
|
218 |
-What we are expecting of committers here in general is basically to escalate the
|
|
219 |
-review in cases of uncertainty.
|
|
220 |
- |
|
221 |
-.. note::
|
|
222 |
- |
|
223 |
- We don't have any detailed policy for "bad actors", but will of course handle
|
|
224 |
- things on a case by case basis - commit access should not result in commit
|
|
225 |
- wars or be used as a tool to subvert the project when disagreements arise.
|
|
226 |
- Such incidents (if any) would surely lead to temporary suspension of commit
|
|
227 |
- rights.
|
|
228 |
- |
|
229 |
- |
|
230 | 243 |
.. _gitlab-features:
|
231 | 244 |
|
232 | 245 |
GitLab features
|
1 |
-Finn Ball
|
|
2 |
-E-mail: finn ball codethink co uk
|
|
3 |
-Userid: finnball
|
... | ... | @@ -13,6 +13,7 @@ |
13 | 13 |
# limitations under the License.
|
14 | 14 |
|
15 | 15 |
|
16 |
+import logging
|
|
16 | 17 |
import os
|
17 | 18 |
import subprocess
|
18 | 19 |
import tempfile
|
... | ... | @@ -29,7 +30,8 @@ def work_buildbox(context, lease): |
29 | 30 |
"""
|
30 | 31 |
local_cas_directory = context.local_cas
|
31 | 32 |
# instance_name = context.parent
|
32 |
- logger = context.logger
|
|
33 |
+ |
|
34 |
+ logger = logging.getLogger(__name__)
|
|
33 | 35 |
|
34 | 36 |
action_digest = remote_execution_pb2.Digest()
|
35 | 37 |
|
... | ... | @@ -13,6 +13,7 @@ |
13 | 13 |
# limitations under the License.
|
14 | 14 |
|
15 | 15 |
|
16 |
+import logging
|
|
16 | 17 |
import os
|
17 | 18 |
import subprocess
|
18 | 19 |
import tempfile
|
... | ... | @@ -26,7 +27,7 @@ def work_host_tools(context, lease): |
26 | 27 |
"""Executes a lease for a build action, using host tools.
|
27 | 28 |
"""
|
28 | 29 |
instance_name = context.parent
|
29 |
- logger = context.logger
|
|
30 |
+ logger = logging.getLogger(__name__)
|
|
30 | 31 |
|
31 | 32 |
action_digest = remote_execution_pb2.Digest()
|
32 | 33 |
action_result = remote_execution_pb2.ActionResult()
|
... | ... | @@ -20,7 +20,6 @@ Bot command |
20 | 20 |
Create a bot interface and request work
|
21 | 21 |
"""
|
22 | 22 |
|
23 |
-import logging
|
|
24 | 23 |
from pathlib import Path, PurePath
|
25 | 24 |
import sys
|
26 | 25 |
from urllib.parse import urlparse
|
... | ... | @@ -120,8 +119,7 @@ def cli(context, parent, update_period, remote, client_key, client_cert, server_ |
120 | 119 |
context.cas_client_cert = context.client_cert
|
121 | 120 |
context.cas_server_cert = context.server_cert
|
122 | 121 |
|
123 |
- context.logger = logging.getLogger(__name__)
|
|
124 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
122 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
125 | 123 |
|
126 | 124 |
interface = bot_interface.BotInterface(context.channel)
|
127 | 125 |
|
... | ... | @@ -20,7 +20,6 @@ Execute command |
20 | 20 |
Request work to be executed and monitor status of jobs.
|
21 | 21 |
"""
|
22 | 22 |
|
23 |
-import logging
|
|
24 | 23 |
import os
|
25 | 24 |
import sys
|
26 | 25 |
from urllib.parse import urlparse
|
... | ... | @@ -63,8 +62,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
63 | 62 |
|
64 | 63 |
context.channel = grpc.secure_channel(context.remote, credentials)
|
65 | 64 |
|
66 |
- context.logger = logging.getLogger(__name__)
|
|
67 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
65 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
68 | 66 |
|
69 | 67 |
|
70 | 68 |
@cli.command('upload-dummy', short_help="Upload a dummy action. Should be used with `execute dummy-request`")
|
... | ... | @@ -75,7 +73,7 @@ def upload_dummy(context): |
75 | 73 |
action_digest = uploader.put_message(action)
|
76 | 74 |
|
77 | 75 |
if action_digest.ByteSize():
|
78 |
- click.echo('Success: Pushed digest "{}/{}"'
|
|
76 |
+ click.echo('Success: Pushed digest=[{}/{}]'
|
|
79 | 77 |
.format(action_digest.hash, action_digest.size_bytes))
|
80 | 78 |
else:
|
81 | 79 |
click.echo("Error: Failed pushing empty message.", err=True)
|
... | ... | @@ -92,7 +90,7 @@ def upload_file(context, file_path, verify): |
92 | 90 |
for path in file_path:
|
93 | 91 |
if not os.path.isabs(path):
|
94 | 92 |
path = os.path.abspath(path)
|
95 |
- context.logger.debug("Queueing {}".format(path))
|
|
93 |
+ click.echo("Queueing path=[{}]".format(path))
|
|
96 | 94 |
|
97 | 95 |
file_digest = uploader.upload_file(path, queue=True)
|
98 | 96 |
|
... | ... | @@ -102,12 +100,12 @@ def upload_file(context, file_path, verify): |
102 | 100 |
for file_digest in sent_digests:
|
103 | 101 |
file_path = os.path.relpath(files_map[file_digest.hash])
|
104 | 102 |
if verify and file_digest.size_bytes != os.stat(file_path).st_size:
|
105 |
- click.echo('Error: Failed to verify "{}"'.format(file_path), err=True)
|
|
103 |
+ click.echo("Error: Failed to verify '{}'".format(file_path), err=True)
|
|
106 | 104 |
elif file_digest.ByteSize():
|
107 |
- click.echo('Success: Pushed "{}" with digest "{}/{}"'
|
|
105 |
+ click.echo("Success: Pushed path=[{}] with digest=[{}/{}]"
|
|
108 | 106 |
.format(file_path, file_digest.hash, file_digest.size_bytes))
|
109 | 107 |
else:
|
110 |
- click.echo('Error: Failed pushing "{}"'.format(file_path), err=True)
|
|
108 |
+ click.echo("Error: Failed pushing path=[{}]".format(file_path), err=True)
|
|
111 | 109 |
|
112 | 110 |
|
113 | 111 |
@cli.command('upload-dir', short_help="Upload a directory to the CAS server.")
|
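The new `click.echo` messages above consistently use a `key=[value]` convention. A small helper that produces messages in that style could look like this (the `fmt_status` name is ours, not part of BuildGrid):

```python
def fmt_status(outcome, **fields):
    # Renders CLI messages in the key=[value] style used in the diff,
    # e.g. fmt_status("Success: Pushed", path="/tmp/a").
    pairs = " ".join("{}=[{}]".format(key, value) for key, value in fields.items())
    return "{} {}".format(outcome, pairs) if pairs else outcome
```

Bracketed values make empty strings and trailing whitespace visible in logs, which is presumably why the commit adopts this convention over bare interpolation.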
... | ... | @@ -121,7 +119,7 @@ def upload_directory(context, directory_path, verify): |
121 | 119 |
for node, blob, path in merkle_tree_maker(directory_path):
|
122 | 120 |
if not os.path.isabs(path):
|
123 | 121 |
path = os.path.abspath(path)
|
124 |
- context.logger.debug("Queueing {}".format(path))
|
|
122 |
+ click.echo("Queueing path=[{}]".format(path))
|
|
125 | 123 |
|
126 | 124 |
node_digest = uploader.put_blob(blob, digest=node.digest, queue=True)
|
127 | 125 |
|
... | ... | @@ -134,12 +132,12 @@ def upload_directory(context, directory_path, verify): |
134 | 132 |
node_path = os.path.relpath(node_path)
|
135 | 133 |
if verify and (os.path.isfile(node_path) and
|
136 | 134 |
node_digest.size_bytes != os.stat(node_path).st_size):
|
137 |
- click.echo('Error: Failed to verify "{}"'.format(node_path), err=True)
|
|
135 |
+ click.echo("Error: Failed to verify path=[{}]".format(node_path), err=True)
|
|
138 | 136 |
elif node_digest.ByteSize():
|
139 |
- click.echo('Success: Pushed "{}" with digest "{}/{}"'
|
|
137 |
+ click.echo("Success: Pushed path=[{}] with digest=[{}/{}]"
|
|
140 | 138 |
.format(node_path, node_digest.hash, node_digest.size_bytes))
|
141 | 139 |
else:
|
142 |
- click.echo('Error: Failed pushing "{}"'.format(node_path), err=True)
|
|
140 |
+ click.echo("Error: Failed pushing path=[{}]".format(node_path), err=True)
|
|
143 | 141 |
|
144 | 142 |
|
145 | 143 |
def _create_digest(digest_string):
|
... | ... | @@ -160,8 +158,8 @@ def _create_digest(digest_string): |
160 | 158 |
@pass_context
|
161 | 159 |
def download_file(context, digest_string, file_path, verify):
|
162 | 160 |
if os.path.exists(file_path):
|
163 |
- click.echo('Error: Invalid value for "file-path": ' +
|
|
164 |
- 'Path "{}" already exists.'.format(file_path), err=True)
|
|
161 |
+ click.echo("Error: Invalid value, " +
|
|
162 |
+ "path=[{}] already exists.".format(file_path), err=True)
|
|
165 | 163 |
return
|
166 | 164 |
|
167 | 165 |
digest = _create_digest(digest_string)
|
... | ... | @@ -171,11 +169,11 @@ def download_file(context, digest_string, file_path, verify): |
171 | 169 |
if verify:
|
172 | 170 |
file_digest = create_digest(read_file(file_path))
|
173 | 171 |
if file_digest != digest:
|
174 |
- click.echo('Error: Failed to verify "{}"'.format(file_path), err=True)
|
|
172 |
+ click.echo("Error: Failed to verify path=[{}]".format(file_path), err=True)
|
|
175 | 173 |
return
|
176 | 174 |
|
177 | 175 |
if os.path.isfile(file_path):
|
178 |
- click.echo('Success: Pulled "{}" from digest "{}/{}"'
|
|
176 |
+ click.echo("Success: Pulled path=[{}] from digest=[{}/{}]"
|
|
179 | 177 |
.format(file_path, digest.hash, digest.size_bytes))
|
180 | 178 |
else:
|
181 | 179 |
click.echo('Error: Failed pulling "{}"'.format(file_path), err=True)
|
... | ... | @@ -190,8 +188,8 @@ def download_file(context, digest_string, file_path, verify): |
190 | 188 |
def download_directory(context, digest_string, directory_path, verify):
|
191 | 189 |
if os.path.exists(directory_path):
|
192 | 190 |
if not os.path.isdir(directory_path) or os.listdir(directory_path):
|
193 |
- click.echo('Error: Invalid value for "directory-path": ' +
|
|
194 |
- 'Path "{}" already exists.'.format(directory_path), err=True)
|
|
191 |
+ click.echo("Error: Invalid value, " +
|
|
192 |
+ "path=[{}] already exists.".format(directory_path), err=True)
|
|
195 | 193 |
return
|
196 | 194 |
|
197 | 195 |
digest = _create_digest(digest_string)
|
... | ... | @@ -204,11 +202,11 @@ def download_directory(context, digest_string, directory_path, verify): |
204 | 202 |
if node.DESCRIPTOR is remote_execution_pb2.DirectoryNode.DESCRIPTOR:
|
205 | 203 |
last_directory_node = node
|
206 | 204 |
if last_directory_node.digest != digest:
|
207 |
- click.echo('Error: Failed to verify "{}"'.format(directory_path), err=True)
|
|
205 |
+ click.echo("Error: Failed to verify path=[{}]".format(directory_path), err=True)
|
|
208 | 206 |
return
|
209 | 207 |
|
210 | 208 |
if os.path.isdir(directory_path):
|
211 |
- click.echo('Success: Pulled "{}" from digest "{}/{}"'
|
|
209 |
+ click.echo("Success: Pulled path=[{}] from digest=[{}/{}]"
|
|
212 | 210 |
.format(directory_path, digest.hash, digest.size_bytes))
|
213 | 211 |
else:
|
214 |
- click.echo('Error: Failed pulling "{}"'.format(directory_path), err=True)
|
|
212 |
+ click.echo("Error: Failed pulling path=[{}]".format(directory_path), err=True)
|
... | ... | @@ -20,7 +20,6 @@ Execute command |
20 | 20 |
Request work to be executed and monitor status of jobs.
|
21 | 21 |
"""
|
22 | 22 |
|
23 |
-import logging
|
|
24 | 23 |
import os
|
25 | 24 |
import stat
|
26 | 25 |
import sys
|
... | ... | @@ -64,8 +63,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
64 | 63 |
|
65 | 64 |
context.channel = grpc.secure_channel(context.remote, credentials)
|
66 | 65 |
|
67 |
- context.logger = logging.getLogger(__name__)
|
|
68 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
66 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
69 | 67 |
|
70 | 68 |
|
71 | 69 |
@cli.command('request-dummy', short_help="Send a dummy action.")
|
... | ... | @@ -76,7 +74,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
76 | 74 |
@pass_context
|
77 | 75 |
def request_dummy(context, number, wait_for_completion):
|
78 | 76 |
|
79 |
- context.logger.info("Sending execution request...")
|
|
77 |
+ click.echo("Sending execution request...")
|
|
80 | 78 |
action = remote_execution_pb2.Action(do_not_cache=True)
|
81 | 79 |
action_digest = create_digest(action.SerializeToString())
|
82 | 80 |
|
... | ... | @@ -96,7 +94,7 @@ def request_dummy(context, number, wait_for_completion): |
96 | 94 |
result = None
|
97 | 95 |
for stream in response:
|
98 | 96 |
result = stream
|
99 |
- context.logger.info(result)
|
|
97 |
+ click.echo(result)
|
|
100 | 98 |
|
101 | 99 |
if not result.done:
|
102 | 100 |
click.echo("Result did not return True." +
|
... | ... | @@ -104,7 +102,7 @@ def request_dummy(context, number, wait_for_completion): |
104 | 102 |
sys.exit(-1)
|
105 | 103 |
|
106 | 104 |
else:
|
107 |
- context.logger.info(next(response))
|
|
105 |
+ click.echo(next(response))
|
|
108 | 106 |
|
109 | 107 |
|
110 | 108 |
@cli.command('command', short_help="Send a command to be executed.")
|
... | ... | @@ -132,12 +130,12 @@ def run_command(context, input_root, commands, output_file, output_directory): |
132 | 130 |
|
133 | 131 |
command_digest = uploader.put_message(command, queue=True)
|
134 | 132 |
|
135 |
- context.logger.info('Sent command: {}'.format(command_digest))
|
|
133 |
+ click.echo("Sent command=[{}]".format(command_digest))
|
|
136 | 134 |
|
137 | 135 |
# TODO: Check for missing blobs
|
138 | 136 |
input_root_digest = uploader.upload_directory(input_root)
|
139 | 137 |
|
140 |
- context.logger.info('Sent input: {}'.format(input_root_digest))
|
|
138 |
+ click.echo("Sent input=[{}]".format(input_root_digest))
|
|
141 | 139 |
|
142 | 140 |
action = remote_execution_pb2.Action(command_digest=command_digest,
|
143 | 141 |
input_root_digest=input_root_digest,
|
... | ... | @@ -145,7 +143,7 @@ def run_command(context, input_root, commands, output_file, output_directory): |
145 | 143 |
|
146 | 144 |
action_digest = uploader.put_message(action, queue=True)
|
147 | 145 |
|
148 |
- context.logger.info("Sent action: {}".format(action_digest))
|
|
146 |
+ click.echo("Sent action=[{}]".format(action_digest))
|
|
149 | 147 |
|
150 | 148 |
request = remote_execution_pb2.ExecuteRequest(instance_name=context.instance_name,
|
151 | 149 |
action_digest=action_digest,
|
... | ... | @@ -154,7 +152,7 @@ def run_command(context, input_root, commands, output_file, output_directory): |
154 | 152 |
|
155 | 153 |
stream = None
|
156 | 154 |
for stream in response:
|
157 |
- context.logger.info(stream)
|
|
155 |
+ click.echo(stream)
|
|
158 | 156 |
|
159 | 157 |
execute_response = remote_execution_pb2.ExecuteResponse()
|
160 | 158 |
stream.response.Unpack(execute_response)
|
... | ... | @@ -21,7 +21,6 @@ Check the status of operations |
21 | 21 |
"""
|
22 | 22 |
|
23 | 23 |
from collections import OrderedDict
|
24 |
-import logging
|
|
25 | 24 |
from operator import attrgetter
|
26 | 25 |
from urllib.parse import urlparse
|
27 | 26 |
import sys
|
... | ... | @@ -67,8 +66,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
67 | 66 |
|
68 | 67 |
context.channel = grpc.secure_channel(context.remote, credentials)
|
69 | 68 |
|
70 |
- context.logger = logging.getLogger(__name__)
|
|
71 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
69 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
72 | 70 |
|
73 | 71 |
|
74 | 72 |
def _print_operation_status(operation, print_details=False):
|
... | ... | @@ -21,7 +21,6 @@ Create a BuildGrid server. |
21 | 21 |
"""
|
22 | 22 |
|
23 | 23 |
import asyncio
|
24 |
-import logging
|
|
25 | 24 |
import sys
|
26 | 25 |
|
27 | 26 |
import click
|
... | ... | @@ -35,7 +34,7 @@ from ..settings import parser |
35 | 34 |
@click.group(name='server', short_help="Start a local server instance.")
|
36 | 35 |
@pass_context
|
37 | 36 |
def cli(context):
|
38 |
- context.logger = logging.getLogger(__name__)
|
|
37 |
+ pass
|
|
39 | 38 |
|
40 | 39 |
|
41 | 40 |
@cli.command('start', short_help="Setup a new server instance.")
|
... | ... | @@ -61,7 +60,7 @@ def start(context, config): |
61 | 60 |
pass
|
62 | 61 |
|
63 | 62 |
finally:
|
64 |
- context.logger.info("Stopping server")
|
|
63 |
+ click.echo("Stopping server")
|
|
65 | 64 |
server.stop()
|
66 | 65 |
loop.close()
|
67 | 66 |
|