How to construct an APScheduler trigger class to pass to add_job? - python

I am using APScheduler and I need to add jobs with a programmatically created list of trigger options. That is, I can't write code that passes trigger parameters directly to add_job (such as second="*/5", etc.).
The documentation mentions that you can create a trigger instance and pass that to add_job as the trigger parameter, instead of "cron" or "interval", etc.
I would like to try that, as it appears that the trigger constructor takes kwargs-style parameters, so I should be able to pass it a dictionary.
I have not found an example of how to do this. I have tried:
from apscheduler.triggers import cron
# skipping irrelevant code
class Schedules(object):
    # skipping irrelevant code
    def add_schedule(self, new_schedule):
        # here I create trigger_args as {'second': '*/5'}, for example
        trigger = cron(trigger_args)
This fails with: TypeError: 'module' object is not callable
How do I instantiate a trigger object?

I found a solution to my main problem without figuring out how to create a trigger instance (though I am still curious as to how to do that).
The main issue I had is that I need to create the trigger parameters programmatically. Knowing a bit more now about parameter passing in Python, I see that if I build a dict of all the parameters, not just the trigger parameters, I can pass them this way:
job_def = {}
# here I programmatically create the trigger parameters and add them to the dict
job_def["func"] = job_function
job_def["trigger"] = "cron"
job_def["args"] = [3]
new_job = self.scheduler.add_job(**job_def)
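As for the remaining curiosity, instantiating a trigger directly: the TypeError occurs because apscheduler.triggers.cron is a module; the class inside it is CronTrigger. A minimal sketch (assuming APScheduler 3.x, where CronTrigger accepts the same keyword arguments that add_job accepts for trigger="cron"):

from apscheduler.triggers.cron import CronTrigger

trigger_args = {"second": "*/5"}
# unpack the programmatically built dict into the trigger constructor
trigger = CronTrigger(**trigger_args)
scheduler.add_job(job_function, trigger, args=[3])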

Related

How to design a logging event in Python in an efficient way, rather than simply adding the events inside a function?

I want to implement server-side event analytics (using https://segment.com/). I am clear on how to use the API: we have to add the event calls inside any function whose action we need to monitor. For example, to track the creation of a new user account in my application, I have an event inside the create_user function:
def create_user(email_id, name, id):
    # some code to add the user in my table
    ....
    ....
    # calling the segment api to track the event
    analytics.track(user_id=email_id, event="Add User", properties={"username": name})
The above code works, but design-wise I feel it can be done better: create_user should only contain the functionality of adding the user, yet here it also contains the tracking call, and I would have to do the same everywhere I need monitoring, which fills my code with irrelevant logic. I read about decorators, but my analytics events depend on the logic inside the function (for example, only if the user email is valid do I add the user to the DB and trigger the event), so decorators alone don't seem to help.
So I am seeking help in handling this scenario with a better approach while keeping the code clean. Is there a design approach for solving this case?
We can achieve this using a decorator plus one separate function, as shown below. With this code, you call the confirm_logging function from inside your main function, based on the condition under which the data should be logged, while the inputs to the function are logged temporarily on each call.
# module-level store for the temporary log entries
temp_log_data = []

def confirm_logging():
    '''
    Final logging function: once called from the main function, it
    logs the data as needed. Customize how it needs to be logged.
    '''
    print("Finally logging the data", temp_log_data)
    # Can be taken ahead into DB logging.
    temp_log_data.clear()

def logging_func(func):
    '''
    Decorator that temporarily logs the arguments of every decorated
    call into temp_log_data; the mechanism can be customized as required.
    '''
    def wrapper_function(*args, **kwargs):
        # The print statement below can be customized per your requirement;
        # you can also call any other function instead of print and use the args
        temp_log_data.append([args[0], args[1], args[2]])
        print("Temporary logging data here -", (args[0], args[1], args[2]))
        return func(*args, **kwargs)
    return wrapper_function

@logging_func
def create_user(greet, name, surname):
    '''
    Your main function, specific to the core functionality
    '''
    print("{} {} {}".format(greet, name, surname))
    if name == 'Abhi':
        confirm_logging()

create_user('Welcome', 'Abhi', 'Jain')
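Running this, the wrapper prints the temporary log entry first, create_user then prints the greeting, and because name == 'Abhi' the condition fires confirm_logging(), which prints the accumulated data and clears it.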

How to get context object in sla_miss_callback function

I am able to successfully implement and test on_success_callback and on_failure_callback in Apache Airflow, including passing parameters to them using the context object. However, I am not able to successfully implement sla_miss_callback. From various online sources I found that the arguments passed to this function are:
dag, task_list, blocking_task_list, slas, blocking_tis
However, unlike the success/failure callbacks, sla_miss_callback does not get the context object in its argument list, and if I try to run several kinds of operators (e.g. Python and Bash operators), they fail and the scheduler complains about no context being passed to the execute function.
In just one of the online sources I looked at (https://www.rea-group.com/blog/watching-the-watcher/) I found that the context object can be extracted using the self object, so I appended self to the five arguments described above, but it didn't work for me. I want to know how to retrieve or pass the context object to the sla_miss_callback function, not only for running different operators but also for retrieving other details about the DAG that missed the SLA.
It seems it is not possible to pass the context dictionary to the SLA callback (see the source code for sla_miss_callback), but I've found a reasonable workaround to access some other information about the DAG run, such as dag_id, task_id, and execution_date. You can also use any of the built-in macros/parameters, which should work fine. While I am using the SlackWebhookOperator for my other callbacks, I am using SlackWebhookHook for the sla_miss_callback. For example:
from airflow.hooks.base import BaseHook
from airflow.providers.slack.hooks.slack_webhook import SlackWebhookHook

def sla_miss_callback(dag, task_list, blocking_task_list, slas, blocking_tis, *args, **kwargs):
    dag_id = slas[0].dag_id
    task_id = slas[0].task_id
    execution_date = slas[0].execution_date.isoformat()
    hook = SlackWebhookHook(
        http_conn_id='slack_callbacks',
        webhook_token=BaseHook.get_connection('slack_callbacks').password,
        message=f"""
:sos: *SLA has been missed*
*Task:* {task_id}
*DAG:* {dag_id}
*Execution Date:* {execution_date}
""",
    )
    hook.execute()
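For completeness, a minimal sketch of wiring this up (the dag_id, schedule, and task below are hypothetical, assuming Airflow 2.x): the callback is attached at the DAG level, while the SLA itself is declared per task.

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="sla_demo",  # hypothetical name, for illustration only
    start_date=datetime(2021, 1, 1),
    schedule_interval="@hourly",
    sla_miss_callback=sla_miss_callback,  # the function defined above
) as dag:
    # a task that sleeps past its SLA will trigger the DAG-level callback
    BashOperator(
        task_id="slow_task",
        bash_command="sleep 60",
        sla=timedelta(seconds=30),
    )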

Use String as Function handle in Python

I'm currently writing my own event handling system in Python. There I allow the user to assign a callback for each event with a function:
myEventHandlingSystem.addCallback(event_name, event_callback)
However, each client has its own set of events, and sometimes the supported event list is very long. In such a situation, manually adding each event using the method described above is quite tedious and error-prone: you might forget to assign an event, or assign an event multiple times with different callbacks by mistake.
What I want to achieve is this: since each event has a string name, the user should define the callback with exactly the same name as the event. In the end, the user just needs to provide a list of events, and each callback will automatically be associated with its event (and of course, if a callback is named differently, nothing should be assigned to that event), like:
SUPPORTED_EVENT1 = 'evt_Event1'
SUPPORTED_EVENT2 = 'evt_Event2'

clientConfig = json.load(configHandle)
# the json config file contains a field 'SupportedEventList': [SUPPORTED_EVENT1, ...]

def evt_Event1(*args):
    ...

myEventHandlingSystem.addCallbackFromEventList(clientConfig['SupportedEventList'])
Note that supportedEventList is always a list of strings, where each element contains the name of an event. Also, the callbacks are handled in the file where the myEventHandlingSystem class is defined.
A solution to this would be a dict holding associations between the function names and the functions themselves. Remember, functions are first-class objects in Python, and as such you can store them in a dict just like references to any other data. Hence you could do something along the lines of:
def evt_Event1(*args):
    ...

ALLOWED_CALLBACKS = {"evt_Event1": evt_Event1}
supportedEventList = ["evt_Event1", ...]

for event in supportedEventList:
    myEventHandlingSystem.addCallback(event, ALLOWED_CALLBACKS[event])
EDIT: To address the question as it stands now. Python has a built-in function, globals(), which returns a dict of the names defined at module level in a given Python file, including its functions, e.g. {"evt_Event1": <function at ...>}.
You could do:
SUPPORTED_EVENT1 = 'evt_Event1'
SUPPORTED_EVENT2 = 'evt_Event2'

clientConfig = json.load(configHandle)
# the json config file contains a field 'SupportedEventList': [SUPPORTED_EVENT1, ...]

def evt_Event1(*args):
    ...

GLOBALS = globals()

for callback_name in clientConfig['SupportedEventList']:
    # look each function up by name and register it under that same name
    myEventHandlingSystem.addCallback(callback_name, GLOBALS[callback_name])
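To satisfy the requirement that a misnamed callback simply leaves its event unassigned, the lookup can be made defensive (a sketch against the question's addCallback API):

for callback_name in clientConfig['SupportedEventList']:
    func = GLOBALS.get(callback_name)
    # only register names that actually resolve to a function
    if callable(func):
        myEventHandlingSystem.addCallback(callback_name, func)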
If you wanted a register-type API, you could do something along the lines of:
# ./decorator.py
REGISTERED = {}

def register(func):
    REGISTERED[func.__name__] = func
    return func
and in the main code
from .decorator import REGISTERED, register
@register
def callback():
    ...
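The registry can then be wired into the event system in one loop:

for event_name, func in REGISTERED.items():
    myEventHandlingSystem.addCallback(event_name, func)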
Alternatively you could straight away register the new callback into your event handling system using this type of decorator:
def register(func):
    myEventHandlingSystem.addCallback(func.__name__, func)
    return func
Also, you could check out pluggy and similar plug-in frameworks that allow people to add plugin functionality, which roughly fits your pattern.

Odoo 11+ How to override function args and all calls to it without overriding every call

I want to override uom._compute_price() and compute_quantity() in Odoo to add product as an argument.
The problem is that these functions are called from many other functions in many other modules, like stock, account, sale, and purchase.
So I would have to override each calling function, which is about 140 occurrences.
Is there a better way to implement this without modifying the 140 calls?
For example, in the overridden functions, can I get the original caller object and its attributes?
If there is a way to get the product from self or any other object, you can make product_id a keyword-only argument and check whether it is None; if it is, you set it yourself from an object:

def _compute_price(self, same_arguments, *, product_id=None):
    if product_id is None:
        product_id = ...  # get the id from something
    # your code here

If you can't get the product_id from anywhere, I hope you find a solution for that; otherwise you have to edit all the calls.
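The point of the keyword-only argument is backward compatibility: the ~140 existing call sites keep working unchanged, while product-aware code opts in explicitly (a sketch; product is assumed to be in scope at the call site):

# existing calls keep working as before:
uom._compute_price(same_arguments)
# new code can pass the product explicitly:
uom._compute_price(same_arguments, product_id=product.id)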
Changing an upstream method's arguments is not recommended, and you are discovering why. However, your calls to it can add a key to the context, and the overridden method can react to it. Example:
def _compute_price(self, original_args):
    if self.env.context.get("product_id"):
        ...  # Do your stuff here
    return super()._compute_price(original_args)
Then, from other parts of your module, call the method with that context key:
uom.with_context(product_id=product.id)._compute_price(original_args)
Another option would be to add the context key in the overridden method itself, if you want it to be present in every call:
def _compute_price(self, original_args):
    # Get the product somehow here
    return super(type(self), self.with_context(product_id=product.id))._compute_price(original_args)
However, keep in mind that only addons that are aware of this context key and react to it will actually make use of it. The first approach should be the most accurate for most cases.

custom Plone Dexterity factory to create subcontent

I thought it would be possible to create a custom Dexterity factory that calls the default factory and then adds some subcontent (in my case Archetypes-based) to the created 'parent' Dexterity content.
I have no problem creating and registering the custom factory.
However, regardless of what method I use (to create the AT subcontent), the subcontent creation fails when attempted from within the custom factory.
I've tried everything from plone.api to invokeFactory to direct instantiation of the AT content class.
In most cases, the traceback shows that the underlying Plone/CMF code tries to get the portal_types tool using getToolByName and fails; similarly, when trying to instantiate the AT class directly, manage_afterAdd then tries to access reference_catalog, which fails.
Is there any way to make this work?
A different approach could simply be to add an event handler for IObjectAddedEvent and add your subcontent there using the common APIs.
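A minimal sketch of that approach (the folder interface, the handler module path, and the ZCML wiring are assumptions for illustration):

# handlers.py
def add_subcontent(folder, event):
    # runs after the Dexterity folder has been added, so acquisition
    # and the portal tools are already available here
    folder.invokeFactory("Page", "subpage")

# configure.zcml:
# <subscriber
#     for="my.package.interfaces.IMyFolder
#          zope.lifecycleevent.interfaces.IObjectAddedEvent"
#     handler=".handlers.add_subcontent"
#     />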
After some trial and error, it turns out this is possible:
from zope.container.interfaces import INameChooser
from zope.component.hooks import getSite
from plone.dexterity.factory import DexterityFactory

class CustomDexterityFactory(DexterityFactory):

    def __call__(self, *args, **kw):
        folder = DexterityFactory.__call__(self, *args, **kw)
        # we are given no context to work with, so we need to resort to the
        # getSite hook (or zope.globalrequest.getRequest) and then wrap the
        # folder in the context of the add view
        site = getSite()
        wrapped = folder.__of__(site["PUBLISHED"].context)
        # invokeFactory fails if the container has no id
        folder.id = "tmp_folder_id"
        # standard AT content creation
        wrapped.invokeFactory("Page", "tmp_obj_id")
        page = wrapped["tmp_obj_id"]
        # "title" is assumed to come from the factory arguments;
        # the name chooser is adapted on the wrapped container
        new_id = INameChooser(wrapped).chooseName(title, page)
        page.setId(new_id)
        page.setTitle(title)
        # empty the id, otherwise it will stick
        folder.id = None
        return folder
While the above works, at some point the created Page gets indexed (perhaps by invokeFactory), which leaves a bogus entry in the catalog. Code to remove that entry could be added to the factory.
Overall, it would be easier to just create an event handler, as suggested by @keul in his answer.
