I have a script which does the following:
Create campaign
Create AdSet (requires campaign_id)
Create AdCreative (requires adset_id)
Create Ad (requires creative_id and adset_id)
I am trying to lump all of them into a single batch request. However, I realized that none of these gets created except for the campaign (step 1) when I use remote_create(batch=my_batch). This is probably due to the dependencies on the ids needed by each of the subsequent steps.
I read the documentation and it mentions "Specifying dependencies between operations in the request" (https://developers.facebook.com/docs/graph-api/making-multiple-requests), which links calls via {result=(parent operation name):(JSONPath expression)}.
Is this possible with the python API?
Can this be achieved with the way I am using remote_creates?
Unfortunately the Python SDK doesn't currently support this. There is a GitHub issue for it: https://github.com/facebook/facebook-python-ads-sdk/issues/256.
I have also encountered this issue and have described my workaround in the comments on the issue:
"I found a decent workaround for getting this behaviour without too much trouble. Basically I set the id fields that have dependencies with values like "{result=:$,id}" and prior to doing execute() on the batch object I iterate over ._batch and add as the 'name' entry. When I run execute sure enough it works perfectly. Obviously this solution does have it's limitations such where you are doing multiple calls to the same endpoint that need to be fed into other endpoints and you would have duplicated resource names and would need to customize the name further to string them together.
Anyways, hope this helps someone!"
I'm new to Python and AWS Glue and I'm having trouble enabling auto-completion in certain cases.
First case:
dynamic_frame shows no API list.
If I force the creation of a Spark DataFrame, no API list is shown either; it seems DataSource0 is unknown.
But glueContext does have its API list shown:
Second case:
A Spark DataFrame can be created and the API list is shown. Since I'm new to AWS Glue, I'm not sure whether using the DynamicFrame object for the DataFrame conversion, instead of calling .toDF() directly, is best practice, and what the impact would be.
Third case:
Define the type to have the API list shown. Again, I'm new to Python and come from a C/Java/Scala background, so I have no idea whether this would be weird or "non-Pythonic" code style.
Environment:
Python 3.6 installed by Anaconda
pyspark and AWS Glue installed via pip
To solve the first case, try putting a type hint on the variable: DataSource0: DynamicFrame.
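For example, a minimal sketch of that hint (the catalog database and table names here are placeholders, not from your job):

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql import DataFrame

glueContext = GlueContext(SparkContext.getOrCreate())

# The type hint tells the linter what DataSource0 is, so the IDE can
# offer the DynamicFrame API list without resolving the call dynamically.
DataSource0: DynamicFrame = glueContext.create_dynamic_frame.from_catalog(
    database="my_database",   # placeholder catalog database
    table_name="my_table"     # placeholder table
)

# The same trick works for the converted Spark DataFrame.
df: DataFrame = DataSource0.toDF()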
The IDE's linter analyses your code in a separate background process to resolve the types and populate the auto-completion list. If you use a type hint you are helping the linter determine the type of the variable, so the IDE doesn't have to determine the type dynamically.
The reason auto-completion works with the library functions in the other examples is that in those cases no dynamic code needs to be resolved by the linter: the library functions by themselves are clearly determined.
Sometimes it also takes PyCharm a few seconds to resolve auto-completions, so don't force the menu; give the background linter process a few seconds between writing the variable and the dot (then Ctrl+Space to show the auto-complete).
In your case those library functions also have several levels of depth, which makes resolving the auto-suggestions heavier for the IDE. Flat really is better than nested here. But this isn't a problem, just an inconvenience: provided the code is correct it will execute as expected at run-time (it just makes writing the code harder without the occasional help of auto-suggestions).
I am currently working on a Django 2+ project involving a blockchain, and I want to make copies of some of my object's states into that blockchain.
Basically, I have a model (say "contract") that has a list of several "signature" objects.
I want to make a snapshot of that contract, with the signatures. What I am basically doing is taking the contract at some point in time (when it's created for example) and building a JSON from it.
My problem is: I want to update that snapshot anytime a signature is added/updated/deleted, and each time the contract is modified.
The intuitive solution would be to override every "delete", "create" and "update" of each of the models involved in that snapshot, and pray that all of them are implemented right and that I didn't forget any. But I think this is not scalable at all, and hard to debug and to maintain.
I have thought of a solution that might be more centralized: using a periodic job to get the last update date of my object, compare it to the date of my snapshot, and update the snapshot if necessary.
However with that solution, I can identify changes when objects are modified or created, but not when they are deleted.
So, this is my big question mark: how, with Django, can you identify deletions in relationships without any prior context, just by looking at the current database's state? Is there a Django module to record deleted objects? What are your thoughts on my issue?
I think that, as I understand your problem, what you need is a module like Django signals, which listens for changes to your models and, when one is identified (and all the desired conditions are met), executes certain commands in your application (it can even act on the database).
This is the most recent documentation:
https://docs.djangoproject.com/en/3.1/topics/signals/
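As a minimal sketch of that approach, assuming models named Contract and Signature (with Signature.contract as the foreign key) and a hypothetical update_snapshot() method that rebuilds the JSON:

from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

from myapp.models import Contract, Signature  # hypothetical app and models


@receiver(post_save, sender=Signature)
@receiver(post_delete, sender=Signature)
def refresh_snapshot_on_signature_change(sender, instance, **kwargs):
    # Fires on create, update and delete of a signature, so the deletion
    # case that a periodic comparison job cannot see is covered here.
    instance.contract.update_snapshot()


@receiver(post_save, sender=Contract)
def refresh_snapshot_on_contract_change(sender, instance, **kwargs):
    instance.update_snapshot()

The receivers are usually registered by importing this module from your AppConfig.ready() so they are connected at startup.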
I keep getting the following error while calling the nipyapi.canvas.update_variable_registry(versionedPG, variable) API from nipyapi.
Do I need to refresh the flow before making this call? Is there a nipyapi call to do that?
I referred to the following link https://community.cloudera.com/t5/Support-Questions/NIFI-processor-not-the-most-up-to-date/m-p/158171, which states that you can see these errors if you are modifying the component from two different places. But in my case, I am only running Python code to modify and update the processor and components.
Also, what does the 5 in the error below mean?
ERROR:main:[5, null, 0d389912-2f27-31da-d5d2-f399556fb35e] is not the most up-to-date revision. This component appears to have been modified
How do I get the most up-to-date revision of the processor?
Well, it seems that update_variable_registry is not the right way to update those variables.
According to an examination of the NiFi HTTP logs, you have to:
1. Create an update request through a POST. This is done using submit_update_variable_registry_request(...)
2. Wait for completion through a GET of this update request. This is done using get_update_request(...)
3. Finally, DELETE the update request. This is done using delete_update_request(...)
After trying that, it seems that only the first part is really needed. Parts 2 and 3 may just be elements of the UI refresh ...
This is resolved in version 0.13.3 of NiPyAPI (GitHub).
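With 0.13.3 or later the original call should therefore work directly. A minimal sketch, assuming a NiFi instance at a placeholder URL and a process group looked up by a hypothetical name:

import nipyapi

# Placeholder endpoint and process group name.
nipyapi.utils.set_endpoint('http://localhost:8080/nifi-api')

pg = nipyapi.canvas.get_process_group('my_process_group')
# The update argument is a list of (key, value) tuples.
nipyapi.canvas.update_variable_registry(pg, [('my_var', 'my_value')])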
I am trying to set custom causes for Jenkins builds using the jenkinsapi package.
jenkinsapi has an invoke() method for triggering new builds that accepts a cause parameter.
# this is a jenkinsapi.Job method
def invoke(self, securitytoken=None, block=False,
           build_params=None, cause=None, files=None, delay=5):
The cause param is dealt with like this:
if cause:
build_params['cause'] = cause
I am trying to find out what format to use when defining a custom cause. To do this, I first extracted the cause of a build to see what it looks like, using the jenkinsapi.Build method get_causes().
This yields a list of dictionaries as expected (only 1 cause), example:
[{'shortDescription': 'description of cause',
'userId': 'userid',
'userName': 'username'}]
With this knowledge, I tried invoking builds while specifying cause as a list of dictionaries in the same format, but this didn't work: upon collecting the causes from the new build, only the normal default cause was there.
So, my question is what do I need to do to create a custom cause?
I've found two ways to add the custom cause, but only one of them works through the Jenkins API. I'm still hoping there's an alternative solution.
To get the custom cause setting to work I had to enable this setting in each Jenkins job:
After enabling that setting, I was able to trigger the job with a custom cause, which would show up in the console.
job.invoke(securitytoken="asecuretoken", cause="A custom cause.")
The main trouble I have with this route is that it doesn't fill out the amount of information I see from custom plugins. Those are the alternative I've found to using the cause in this manner, but they require more work to implement.
A good example which customizes the build messages based on a REST request is the GitLab Jenkins Plugin.
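Putting the working route together, a minimal sketch; the server URL, credentials, job name and token are placeholders:

from jenkinsapi.jenkins import Jenkins

server = Jenkins('http://localhost:8080', username='user', password='apitoken')
job = server['my-job']

# block=True waits for the build to finish so the causes can be read back.
job.invoke(securitytoken='asecuretoken',
           cause='Triggered by the nightly sync script.',
           block=True)

build = job.get_last_build()
print(build.get_causes())  # the custom cause should show up here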
I'm very confused with the state and documentation of mapreduce support in GAE.
In the official doc https://developers.google.com/appengine/docs/python/dataprocessing/ there is an example, but:
the application uses mapreduce.input_readers.BlobstoreZipInputReader, and I would like to use mapreduce.input_readers.DatastoreInputReader. The documentation mentions the parameters of DatastoreInputReader, but not the value passed to the map function....
the "demo" application (Helloworld page) has a mapreduce.yaml file which IS NOT USED in the application???
So I found http://code.google.com/p/appengine-mapreduce/. There is a complete example with mapreduce.input_readers.DatastoreInputReader, but it is written that the reduce phase isn't supported yet!
So I would like to know if it is possible to implement the first form of mapreduce, with the DatastoreInputReader, to execute a real map/reduce and get a GROUP BY equivalent?
The second example is from the earlier release, which did indeed only support the mapper phase. However, as the first example shows, the full map/reduce functionality is now supported and has been for some time. The mapreduce.yaml is from that earlier version; it is not used now.
I'm not sure what your actual question is. The value sent to the map function by DatastoreInputReader is, not surprisingly, the individual entity taken from the kind being mapped over.
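To make that concrete, here is a minimal sketch of map and reduce functions you could register with the library's MapreducePipeline for a GROUP BY-style count; the kind and its category property are hypothetical:

def group_by_category_map(entity):
    # DatastoreInputReader calls this once per entity of the configured kind;
    # emit (group key, 1) for each one.
    yield (entity.category, 1)


def group_by_category_reduce(key, values):
    # All values emitted under the same key arrive together, so summing them
    # gives the GROUP BY ... COUNT(*) equivalent.
    yield '%s,%d\n' % (key, sum(int(v) for v in values))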