How to check output dynamically with Luigi - python

I realize I likely need to use dynamic requirements to accomplish the following task, however I have not been able to wrap my head around what this would look like in practice.
The goal is to use Luigi to generate data and add it to a database, without knowing ahead of time what data will be generated.
Take the following example using mongodb:
import luigi
from uuid import uuid4
from luigi.contrib import mongodb
import pymongo

# Make up IDs, though in practice the IDs may be generated from an API
class MakeID(luigi.Task):

    def run(self):
        with self.output().open('w') as f:
            f.write(','.join([str(uuid4()) for e in range(10)]))

    # Write the data to file
    def output(self):
        return luigi.LocalTarget('data.csv')


class ToDataBase(luigi.Task):

    def requires(self):
        return MakeID()

    def run(self):
        with self.input().open('r') as f:
            ids = f.read().split(',')
        # Add some fake data to simulate generating new data
        count_data = {key: value for value, key in enumerate(ids)}
        # Add data to the database
        self.output().write(count_data)

    def output(self):
        # Attempt to read non-existent file to get the IDs to check if task is complete
        with self.input().open('r') as f:
            valid_ids = f.read().split(',')
        client = pymongo.MongoClient('localhost',
                                     27017,
                                     ssl=False)
        return mongodb.MongoRangeTarget(client,
                                        'myDB',
                                        'myData',
                                        valid_ids,
                                        'myField')


if __name__ == '__main__':
    luigi.run()
The goal is to obtain data, modify it and then add it to a database.
The above code fails when run because the output method of ToDataBase runs before the requires method, so while output has access to self.input(), the input file does not yet exist. Regardless, I still need some way to check that the data was actually added to the database.
This GitHub issue is close to what I am looking for, though as I mentioned, I have not been able to figure out dynamic requirements for this use case in practice.

The solution is to create a third task (Dynamic in the example below) that yields the task waiting on dynamic input from its run method, and to make that dependency a parameter rather than a requires method.
class ToDatabase(luigi.Task):
    fp = luigi.Parameter()

    def output(self):
        with open(self.fp, 'r') as f:
            valid_ids = [str(e) for e in f.read().split(',')]
        client = pymongo.MongoClient('localhost', 27017, ssl=False)
        return mongodb.MongoRangeTarget(client, 'myDB', 'myData',
                                        valid_ids, 'myField')

    def run(self):
        with open(self.fp, 'r') as f:
            valid_ids = [str(e) for e in f.read().split(',')]
        self.output().write({k: 5 for k in valid_ids})


class Dynamic(luigi.Task):

    def output(self):
        return self.input()

    def requires(self):
        return MakeID()

    def run(self):
        yield ToDatabase(fp=self.input().path)
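A minimal usage sketch, assuming the MakeID task from the question: running Dynamic first materializes data.csv, and only then is ToDatabase scheduled, so its output method never tries to read a file that does not exist yet.

if __name__ == '__main__':
    # Dynamic yields ToDatabase from run(), so ToDatabase.output() is only
    # evaluated after MakeID has written data.csv
    luigi.build([Dynamic()], local_scheduler=True)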

Related

How to use dependencies with yield in FastApi

I am using FastApi and I would like to know if I am using the dependencies correctly.
First, I have a function that yields the database session.
class ContextManager:
    def __init__(self):
        self.db = DBSession()

    def __enter__(self):
        return self.db

    def __exit__(self, exc_type, exc_value, traceback):
        self.db.close()


def get_db():
    with ContextManager() as db:
        yield db
I would like to use that function in another function:
def validate(db=Depends(get_db)):
    is_valid = verify(db)
    if not is_valid:
        raise HTTPException(status_code=400)
    yield db
Finally, I would like to use the last functions as a dependency on the routes:
@router.get('/')
def get_data(db=Depends(validate)):
    data = db.query(...)
    return data
I am using this code and it seems to work, but I would like to know if it is the most appropriate way to use dependencies. In particular, I am not sure whether I have to use 'yield db' inside the validate function or whether it would be better to use return. I would appreciate your help. Thanks a lot
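For what it's worth, a minimal sketch of the distinction, assuming the get_db and verify helpers from the question: yield in a FastAPI dependency exists so that code after the yield runs as teardown once the response has been sent; a dependency with no teardown of its own can simply return.

from fastapi import Depends, HTTPException

# Hedged sketch, not necessarily the recommended pattern: validate performs
# no cleanup after the response, so returning the session behaves the same
# as yielding it; get_db still closes the session via its own yield.
def validate(db=Depends(get_db)):
    if not verify(db):
        raise HTTPException(status_code=400)
    return db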

understanding of some Luigi issues

Look at class ATask
class ATask(luigi.Task):
    config = luigi.Parameter()

    def requires(self):
        # Some Tasks maybe
        pass

    def output(self):
        return luigi.LocalTarget("A.txt")

    def run(self):
        with open("A.txt", "w") as f:
            f.write("Complete")
Now look at class BTask
class BTask(luigi.Task):
    config = luigi.Parameter()

    def requires(self):
        return ATask(config=self.config)

    def output(self):
        return luigi.LocalTarget("B.txt")

    def run(self):
        with open("B.txt", "w") as f:
            f.write("Complete")
The first question: is there a chance that, while TaskA is running and writing "A.txt", TaskB will start before TaskA finishes writing?
The second question: if I start execution like
luigi.build([BTask(config=some_config)], local_scheduler=True)
and the pipeline fails somewhere inside, can I find out about this from the outside, e.g. through the return value of luigi.build or something else?
No, luigi won't start executing TaskB until TaskA has finished (i.e., until it has finished writing the target file).
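One detail worth hedging on: Luigi decides completeness by checking whether the output target exists, so the guarantee above is only as strong as the write being atomic. Writing through the target itself, as sketched below, goes via a temporary file that is renamed on close, so a half-written A.txt is never mistaken for a finished one.

    def run(self):
        # LocalTarget.open('w') writes to a temp file and atomically renames
        # it on close, so the scheduler never sees a partially written A.txt
        with self.output().open("w") as f:
            f.write("Complete")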
If you want a detailed response from luigi.build in case of error, you must pass an extra keyword argument, detailed_summary=True, to the build/run methods and then access summary_text, this way:
luigi_run_result = luigi.build(..., detailed_summary=True)
print(luigi_run_result.summary_text)
For details, see Response of luigi.build()/luigi.run() in the Luigi documentation.
Also, you may be interested in this answer about how to access the error / exception: https://stackoverflow.com/a/33396642/3219121
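A short sketch of acting on the result programmatically, assuming a Luigi version recent enough that build returns a LuigiRunResult when detailed_summary=True:

import luigi
from luigi.execution_summary import LuigiStatusCode

luigi_run_result = luigi.build([BTask(config=some_config)],
                               local_scheduler=True,
                               detailed_summary=True)
print(luigi_run_result.summary_text)
if luigi_run_result.status != LuigiStatusCode.SUCCESS:
    # at least one task failed, was still pending, or could not be scheduled
    handle_failure(luigi_run_result)  # hypothetical error handler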

MySQL Targets in Luigi workflow

My TaskB requires TaskA; on completion, TaskA writes to a MySQL table, and TaskB should then take that table as its input.
I cannot seem to figure out how to do this in Luigi. Can someone point me to an example or give me a quick example here?
The existing MySqlTarget in luigi uses a separate marker table to indicate when the task is complete. Here's the rough approach I would take... but your question is very abstract, so it is likely to be more complicated in reality.
import luigi
from datetime import datetime
from luigi.contrib.mysqldb import MySqlTarget


class TaskA(luigi.Task):
    rundate = luigi.DateParameter(default=datetime.now().date())
    target_table = "table_to_update"
    host = "localhost:3306"
    db = "db_to_use"
    user = "user_to_use"
    pw = "pw_to_use"

    def get_target(self):
        return MySqlTarget(host=self.host, database=self.db, user=self.user,
                           password=self.pw, table=self.target_table,
                           update_id=str(self.rundate))

    def requires(self):
        return []

    def output(self):
        return self.get_target()

    def run(self):
        # update table
        self.get_target().touch()


class TaskB(luigi.Task):

    def requires(self):
        return [TaskA()]

    def run(self):
        # reading from target_table
        pass
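A hedged sketch of how TaskB.run might read the table back, using pymysql purely as an assumed client library (any MySQL driver would do) and reusing TaskA's connection details:

import pymysql

class TaskB(luigi.Task):

    def requires(self):
        return [TaskA()]

    def run(self):
        # host/user/password/database are assumed to match TaskA above
        conn = pymysql.connect(host="localhost", port=3306,
                               user="user_to_use", password="pw_to_use",
                               database="db_to_use")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT * FROM table_to_update")
                rows = cur.fetchall()  # process the rows as needed
        finally:
            conn.close()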

How to make the spdylay module work like httplib/http.client?

I have to test a server based on Jetty. This server can work with its own protocol, HTTP, HTTPS, and lately it has started to support SPDY. I have some stress tests based on httplib/http.client -- each thread starts with a similar URL (some data in the query string are variable), adds its execution time to a global variable, and every few seconds prints some statistics. The code looks like:
t_start = time.time()
connection.request("GET", path)
resp = connection.getresponse()
t_stop = time.time()
check_response(resp)
QRY_TIMES.append(t_stop - t_start)
The client working with the native protocol shares the httplib API, so connection may be a native connection, an HTTPConnection, or an HTTPSConnection.
Now I want to add a SPDY test using the spdylay module. But its interface is opaque and I don't know how to reshape it into something similar to the httplib interface. I have made a test client based on the example, but since the 2nd argument to spdylay.urlfetch() is a class name and not an object, I do not know how to use it with my tests. I have already added tests to the on_close() method of my class, which extends spdylay.BaseSPDYStreamHandler, but that is not compatible with the other tests. If it were an instance, I would use it outside of the spdylay.urlfetch() call.
How can I use spdylay in code that works with httplib-style interfaces?
My only idea is to use a global dictionary where the URL is the key and the handler object is the value. It is not ideal because:
a new query with the same URL will overwrite the previous response
it is easy to forget to free the handler from the global dictionary
But it works!
import sys
import spdylay

CLIENT_RESULTS = {}


class MyStreamHandler(spdylay.BaseSPDYStreamHandler):
    def __init__(self, url, fetcher):
        super().__init__(url, fetcher)
        self.headers = []
        self.whole_data = []

    def on_header(self, nv):
        self.headers.append(nv)

    def on_data(self, data):
        self.whole_data.append(data)

    def get_response(self, charset='UTF8'):
        return (b''.join(self.whole_data)).decode(charset)

    def on_close(self, status_code):
        CLIENT_RESULTS[self.url] = self


def spdy_simply_get(url):
    spdylay.urlfetch(url, MyStreamHandler)
    data_handler = CLIENT_RESULTS[url]
    result = data_handler.get_response()
    del CLIENT_RESULTS[url]
    return result


if __name__ == '__main__':
    if '--test' in sys.argv:
        spdy_response = spdy_simply_get('https://localhost:8443/test_spdy/get_ver_xml.hdb')
I hope somebody can do spdy_simply_get(url) better.
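One possible refinement, sketched under the assumption that spdylay.urlfetch() only needs something it can call as HandlerClass(url, fetcher): build a throwaway subclass per call so the handler is captured in a closure instead of a shared global dictionary.

def spdy_get(url):
    captured = []

    class _Handler(MyStreamHandler):
        def on_close(self, status_code):
            captured.append(self)  # keep this call's handler locally

    spdylay.urlfetch(url, _Handler)
    return captured[0].get_response()

This removes both drawbacks above: nothing is shared between calls, and there is nothing to remember to delete.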

Best way to mix and match components in a python app

I have a component that uses a simple pub/sub module I wrote as a message queue. I would like to try out other implementations like RabbitMQ. However, I want to make this backend configurable so I can switch between my implementation and 3rd-party modules, for cleanliness and testing.
The obvious answer seems to be to:
Read a config file
Create a modifiable settings object/dict
Modify the target component to lazily load the specified implementation.
Something like:
# component.py
from test.queues import Queue


class Component:
    def __init__(self, Queue=Queue):
        self.queue = Queue()

    def publish(self, message):
        self.queue.publish(message)


# queues.py
import test.settings as settings


def Queue(*args, **kwargs):
    klass = settings.get('queue')
    return klass(*args, **kwargs)
I am not sure whether __init__ should take in the Queue class; I figure it would help in easily specifying the queue used while testing.
Another thought I had was something like http://www.voidspace.org.uk/python/mock/patch.html though that seems like it would get messy. The upside would be that I wouldn't have to modify the code to support swapping components.
Any other ideas or anecdotes would be appreciated.
EDIT: Fixed indent.
One thing I've done before is to create a common class that each specific implementation inherits from. Then there's a spec that can easily be followed, and each implementation can avoid repeating certain code they'll all share.
This is a bad example, but you can see how you could make the saver object use any of the classes specified and the rest of your code wouldn't care.
class SaverTemplate(object):
    def __init__(self, name, obj):
        self.name = name
        self.obj = obj

    def save(self):
        raise NotImplementedError


import json

class JsonSaver(SaverTemplate):
    def save(self):
        file = open(self.name + '.json', 'wb')
        json.dump(self.obj, file)
        file.close()


import cPickle

class PickleSaver(SaverTemplate):
    def save(self):
        file = open(self.name + '.pickle', 'wb')
        cPickle.dump(self.obj, file, protocol=cPickle.HIGHEST_PROTOCOL)
        file.close()


import yaml

class YamlSaver(SaverTemplate):
    def save(self):
        file = open(self.name + '.yaml', 'wb')
        yaml.dump(self.obj, file)
        file.close()


saver = PickleSaver('whatever', foo)
saver.save()
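To tie this back to the settings-driven loading in the question, a small sketch (the SAVER_BACKENDS mapping and the 'saver' settings key are assumptions, not part of the original answer):

SAVER_BACKENDS = {
    'json': JsonSaver,
    'pickle': PickleSaver,
    'yaml': YamlSaver,
}

def make_saver(name, obj):
    # look up the concrete subclass from config so callers never name one
    klass = SAVER_BACKENDS[settings.get('saver')]
    return klass(name, obj)

saver = make_saver('whatever', foo)
saver.save()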
