I am trying to set custom causes for Jenkins builds using the jenkinsapi Python library.
The library's Job class has an invoke() method for triggering new builds, which accepts a cause parameter.
# this is a jenkinsapi.Job method
def invoke(self, securitytoken=None, block=False,
           build_params=None, cause=None, files=None, delay=5):
The cause param is dealt with like this:
if cause:
    build_params['cause'] = cause
I am trying to find out what format to use when defining a custom cause. To work this out, I first extracted the cause of an existing build to see what it looks like, using the jenkinsapi.Build method get_causes().
This yields a list of dictionaries as expected (only 1 cause), example:
[{'shortDescription': 'description of cause',
'userId': 'userid',
'userName': 'username'}]
With this knowledge, I tried invoking builds with cause specified as a list of dictionaries in the same format, but this didn't work: when I collected the causes from the new build, only the normal default cause was there.
So, my question is what do I need to do to create a custom cause?
I've found two ways to add a custom cause, but only one of them works through the Jenkins API. I'm still hoping there's an alternative solution.
To get the custom cause to work I had to enable remote build triggering (the "Trigger builds remotely" option, which is what the securitytoken corresponds to) in each Jenkins job's configuration.
After enabling that setting, I was able to trigger the job with a custom cause, which would show up in the console.
job.invoke(securitytoken="asecuretoken", cause="A custom cause.")
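For context, here is a minimal end-to-end sketch of that call; the server URL, credentials, and job name are placeholders I've assumed:

from jenkinsapi.jenkins import Jenkins

server = Jenkins("http://localhost:8080", username="user", password="apitoken")
job = server["my-job"]

# Requires the remote trigger setting above to be enabled on the job.
job.invoke(securitytoken="asecuretoken", cause="A custom cause.")

# Verify by reading the causes back from the most recent build
# (the triggered build may take a moment to appear).
build = job.get_last_build()
print(build.get_causes())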
The main trouble I have with this route is that it doesn't fill out as much information as I see from custom plugins. That's the alternative I've found to using the cause this way, but it requires more work to implement.
A good example that customizes the build cause based on a REST request is the GitLab Jenkins plugin.
Recently I had a problem when a coworker changed the return signature of a function. We have clients that call the function this way:
example = function()
But since I was depending on his changes, and he unintentionally changed it to this:
example, other_stuff = function()
I was not aware of the change. I did the merge and everything seemed OK, but then the error happened: I was expecting one value, but the call was now trying to unpack two.
So my question is: given that Python is not a statically typed language, is there a way to detect and prevent this kind of thing (a tool or something)? Sadly, it wasn't until a runtime error was raised that I noticed it. How should we handle this?
Sounds like a process error. An API shouldn't change its signature without considering its users. Within a project it's easy: just search. Externally, the API version number should be bumped and the change should be in the change notes.
The API should have unit tests which include return-value tests. "He unintentionally changed it" issues should all be caught there. Since this didn't happen, a bug report against the tests should be written.
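As an illustration, a minimal regression test for the return shape might look like this; mymodule and function are hypothetical stand-ins for the real API:

import unittest
from mymodule import function  # hypothetical module under test

class TestFunctionContract(unittest.TestCase):
    def test_returns_single_value(self):
        result = function()
        # If the function started returning a tuple, this fails at
        # test time instead of at a client's unpacking site.
        self.assertNotIsInstance(result, tuple)

if __name__ == "__main__":
    unittest.main()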
Of course, the coworker could just change those tests. But all of that should be in a code-reviewed change set in your source repository. The coworker should have to justify the change and explain how to mitigate breakage. Since this API appears to have external clients, it should be very difficult to get an API signature change through, as all clients will need to be notified.
I would like, given a Python module, to monkey-patch all functions, classes and attributes it defines. Simply put, I would like to log every interaction a script I do not directly control has with a module I do not directly control. I'm looking for an elegant solution that will not require prior knowledge of either the module or the code using it.
I found several high-level tools that help with wrapping, decorating, patching and so on, and I've gone over the code of some of them, but I cannot find an elegant way to create a proxy of any given module and proxy it automatically, as seamlessly as possible, apart from the logic appended to every interaction (recording input arguments and return values, for example).
In case someone else is looking for a more complete proxy implementation:
Although there are several Python proxy solutions similar to what the OP is looking for, I could not find one that also proxies classes and arbitrary class objects, as well as automatically proxying function return values and arguments, which is what I needed.
I've got some code written for that purpose as part of a full proxying/logging Python execution tool, and I may make it into a separate library in the future. If anyone's interested, you can find the core of the code in a pull request. Drop me a line if you'd like this as a standalone library.
My code automatically returns wrapper/proxy objects for any proxied object's attributes, functions and classes. The purpose is to log and replay some of the code, so I've got the equivalent "replay" code and some logic to store all proxied objects to a JSON file.
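For anyone who only needs the basic idea, here is a minimal sketch of a logging module proxy. This is not the library above; ModuleProxy and the use of math are purely illustrative:

import functools
import logging
import math  # the module being proxied, as an example

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("proxy")

class ModuleProxy:
    def __init__(self, module):
        self._module = module

    def __getattr__(self, name):
        attr = getattr(self._module, name)
        log.info("accessed %s.%s", self._module.__name__, name)
        if callable(attr):
            @functools.wraps(attr)
            def wrapper(*args, **kwargs):
                # Record input arguments and the return value.
                result = attr(*args, **kwargs)
                log.info("%s(%r, %r) -> %r", name, args, kwargs, result)
                return result
            return wrapper
        return attr

proxied_math = ModuleProxy(math)
proxied_math.sqrt(2)  # logs the access, the arguments, and the result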
I have a script which does the following:
Create campaign
Create AdSet (requires campaign_id)
Create AdCreative (requires adset_id)
Create Ad (requires creative_id and adset_id)
I am trying to lump all of them into a batch request. However, I realized that none of these gets created except for my campaign (step 1) if I use remote_create(batch=my_batch). This is probably due to the dependencies on the IDs needed by each of the subsequent steps.
I read the documentation and it mentions that one can specify dependencies between operations in the request (https://developers.facebook.com/docs/graph-api/making-multiple-requests) via {result=(parent operation name):(JSONPath expression)}.
Is this possible with the python API?
Can this be achieved with the way I am using remote_creates?
Unfortunately the Python SDK doesn't currently support this. There is a GitHub issue for it: https://github.com/facebook/facebook-python-ads-sdk/issues/256.
I have also encountered this issue and have described my workaround in the comments on the issue:
"I found a decent workaround for getting this behaviour without too much trouble. Basically I set the id fields that have dependencies to values like "{result=<name>:$.id}", and prior to calling execute() on the batch object I iterate over ._batch and add the corresponding 'name' entry to each operation. When I run execute, sure enough it works perfectly. Obviously this solution does have its limitations, such as where you are making multiple calls to the same endpoint that need to be fed into other endpoints: you would have duplicated resource names and would need to customize the names further to string them together.
Anyways, hope this helps someone!"
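Pulling that together, here is a hedged sketch of the workaround; the import paths, Field names, and the private _batch attribute follow older versions of the facebookads SDK and may differ in current releases:

from facebookads.api import FacebookAdsApi
from facebookads.adobjects.campaign import Campaign
from facebookads.adobjects.adset import AdSet

FacebookAdsApi.init(access_token="<ACCESS_TOKEN>")
batch = FacebookAdsApi.get_default_api().new_batch()

campaign = Campaign(parent_id="act_<AD_ACCOUNT_ID>")
campaign[Campaign.Field.name] = "My campaign"
campaign.remote_create(batch=batch)

adset = AdSet(parent_id="act_<AD_ACCOUNT_ID>")
adset[AdSet.Field.name] = "My ad set"
# Reference the campaign's id via the batch dependency placeholder
# (real ad sets need more required fields than shown here).
adset[AdSet.Field.campaign_id] = "{result=create_campaign:$.id}"
adset.remote_create(batch=batch)

# Name the first operation so the placeholder above can resolve to it.
batch._batch[0]["name"] = "create_campaign"

batch.execute()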
I'm expectedly getting a CypherExecutionException. I would like to catch it but I can't seem to find the import for it.
Where is it?
How do I find it myself next time?
Depending on which version of py2neo you're using, and which Cypher endpoint - legacy or transactional - this may be one of the auto-generated errors built dynamically from the server response. Newer functionality (i.e. the transaction endpoint) no longer does this and instead holds hard-coded definitions for all exceptions for just this reason. This wasn't possible for the legacy endpoint when the full list of possible exceptions was undocumented.
You should however be able to catch py2neo.error.GraphError instead which is the base class from which these dynamic errors inherit. You can then study the attributes of that error for more specific checking.
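For example, a minimal sketch against py2neo 2.x; the module path and the cypher attribute have changed across versions:

from py2neo import Graph
from py2neo.error import GraphError

graph = Graph("http://localhost:7474/db/data/")

try:
    graph.cypher.execute("some Cypher that fails")
except GraphError as error:
    # The dynamic subclass name identifies the specific server error.
    print(type(error).__name__, error)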
As the title suggests, I have a gtk.TreeView whose model is sorted and filtered. According to the documentation, "Drag and drop reordering of rows only works with unsorted stores." The only other information given relates to using external sources, which in this case I don't need.
I tried implementing it anyway, providing handlers for the drag-data-received and drag-drop signals, but I still get the following error:
GtkWarning: You must override the default 'drag_data_received' handler on GtkTreeView when using models that don't support the GtkTreeDragDest interface and enabling drag-and-drop. The simplest way to do this is to connect to 'drag_data_received' and call g_signal_stop_emission_by_name() in your signal handler to prevent the default handler from running. Look at the source code for the default handler in gtktreeview.c to get an idea what your handler should do. (gtktreeview.c is in the GTK source code.) If you're using GTK from a language other than C, there may be a more natural way to override default handlers, e.g. via derivation.
Despite this, and although I haven't implemented it fully yet, it looks like I could make it work, since it doesn't crash. Nevertheless, this is a warning I'd rather not have.
So, is there a Python equivalent of g_signal_stop_emission_by_name, or am I going about this the wrong way?
I got a bit confused, as I already had a "drag-drop" handler, but it was sorted once I implemented the following:
def __init__(self):
    # ... existing setup ...
    self.treeview.connect("drag_data_received", self.on_drag_data_received)

def on_drag_data_received(self, widget, drag_context, x, y,
                          selection_data, info, timestamp):
    # Stop the default handler, which silences the GtkWarning.
    widget.stop_emission("drag_data_received")
Just to add: according to the PyGTK docs, emit_stop_by_name and stop_emission are identical.
It is gobject.GObject.emit_stop_by_name(). I don't know if what you are doing will succeed, but there is no "standard" way I can think of.
Instead of implementing it yourself, you could try using Py-gtktree: see the example called drag_between_tree_and_list.py. You can sort the tree on the right and still be able to drag into it, with dragged items automatically placed in the "correct" position. It doesn't allow dragging just anywhere into the tree, but for a different reason: the example explicitly requests this.
I got rid of the warning by using treeview.stop_emission('drag_data_received') in my own drag-data-received signal handler. Perhaps the method by doublep will also work, although I haven't tried it.