Wait for stdout on Popen - python

I am trying to set up an acceptance test harness for a Flask app, and I am currently struggling to wait for the app to start before making calls.
The following construct works fine:
class SpinUpTests(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        self.stubby_server.kill()
        self.stubby_server.communicate()

    def test_given_not_yet_running_when_created_without_config_then_started_on_default_port(self):
        self.not_yet_running(5000)
        self.stubby_server = subprocess.Popen(['python', '../../app/StubbyServer.py'], stdout=subprocess.PIPE)
        time.sleep(1)  # <--- I would like to get rid of this
        self.then_started_on_port(5000)
I would like to wait on stdout after

self.stubby_server = subprocess.Popen(['python', '../../app/StubbyServer.py'], stdout=subprocess.PIPE)

for this line, instead of using time.sleep(1):

Running on http://127.0.0.1:[port]/ (Press CTRL+C to quit)
I tried

for line in self.stubby_server.stdout.readline():

but readline() never finishes, though I already see the output in the test output window.
Any ideas how I can wait for the Flask app to start without having to use an explicit sleep()?

The retry package will help overcome your problem. You set what you want to retry, which exception to retry on, and timing parameters for how to retry. It's pretty well documented.
Here is an example of how I solved this in one of the projects I was working on.
Here is the snippet of code that will help you, in case that link does not work:
import subprocess

import requests
from requests.exceptions import RequestException
from retry.api import retry_call

@classmethod
def _start_app_locally(cls):
    subprocess.Popen(["fake-ubersmith"])
    retry_call(
        requests.get,
        fargs=["{}/status".format(cls.endpoint)],
        exceptions=RequestException,
        delay=1
    )
As you can see, I just tried to hit my endpoint with a GET using requests (fargs holds the arguments passed to the callable you hand to retry_call, here requests.get), and on the RequestException I was expecting, it retries with a 1 second delay.
Finally, "fake-ubersmith" is the command that runs your server; in your case that would be 'python', '../../app/StubbyServer.py'.

Related

Event Handling in Python Luigi

I've been trying to integrate Luigi as our workflow handler. Currently we are using Concourse; however, many of the things we're trying to do are a hassle to get around in Concourse, so we made the switch to Luigi as our dependency manager. No problems so far: workflows trigger and execute properly.
The issue comes in when a task fails for whatever reason. In this case it is specifically the requires block of a task, but all cases need to be taken care of. As of right now Luigi gracefully takes care of the error and writes it to STDOUT. It still emits an exit code of 0 though, which to Concourse means the job passed. A false positive.
I've been trying to get the event handling to fix this, but I cannot get it to trigger, even with an extremely simple job:
import sys

import luigi

@luigi.Task.event_handler(luigi.Event.FAILURE)
def mourn_failure(task, exception):
    with open('/root/luigi', 'a') as f:
        f.write("we got the exception!")  # testing in concourse image
    sys.exit(luigi.retcodes.retcode().unhandled_exception)

class Test(luigi.Task):
    def requires(self):
        raise Exception()
        return []

    def run(self):
        pass

    def output(self):
        return []
Then running the command in a Python shell:

luigi.run(main_task_cls=Test, local_scheduler=True)

The exception gets raised, but the event doesn't fire or something.
The file doesn't get written and the exit code is still 0.
Also, if it makes a difference, I have my Luigi config at /etc/luigi/client.cfg, which contains:
[retcode]
already_running=10
missing_data=20
not_run=25
task_failed=30
scheduling_error=35
unhandled_exception=40
I'm at a loss as to why the event handler won't trigger, but somehow I need the process to fail on an error.
It seems like the problem is where you place the raise Exception call.
If you place it in the requires method, it basically runs before your Test task's run method. So it's not as if your Test task failed, but rather the task it depends on (right now, empty...).
For example, if you move the raise into run, your code will behave as you expect:
def run(self):
    print('start')
    raise Exception()
To handle the case where your dependency fails (in this case, the exception is raised in the requires method), you can add another type of Luigi event handler, luigi.Event.BROKEN_TASK.
This will make sure the Luigi code emits the return code (different from 0) you expect.
Cheers!
If you'd like to catch exceptions in requires(), use the following:

@luigi.Task.event_handler(luigi.Event.BROKEN_TASK)
def mourn_failure(task, exception):
    ...
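Put together with the retcode config from the question, a complete handler might look like this (a sketch; the retcode lookup is taken from the question's own snippet):

import sys

import luigi
import luigi.retcodes

@luigi.Task.event_handler(luigi.Event.BROKEN_TASK)
def mourn_failure(task, exception):
    # requires() raised, so FAILURE never fires for the task itself;
    # force a non-zero exit code ourselves.
    sys.exit(luigi.retcodes.retcode().unhandled_exception)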
If I understand it correctly, you just want Luigi to return an error code when a task fails. I had many issues with this one, but it turns out to be quite simple: you just need to run it with luigi on the command line, not with python. Like this:

luigi --module my_module MyTask

I don't know if that was your problem too, but I was running with python, and then Luigi ignored the retcodes in luigi.cfg. Hope it helps.
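If you need to stay inside Python rather than the shell, recent Luigi versions also expose the same behaviour programmatically. A sketch, assuming luigi.retcodes.run_with_retcodes is available and my_module is your module name:

from luigi.retcodes import run_with_retcodes

# Parses argv like the `luigi` command does and exits with the code
# mapped in the [retcode] config section instead of always returning 0.
run_with_retcodes(['--module', 'my_module', 'MyTask', '--local-scheduler'])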

Python flask returning values in real time

I have created my micro web framework with Flask, which uses Fabric to call shell scripts on remote servers.
The shell script might take a long time to complete. I send a POST request from my browser and await the results.
Fabric displays the real-time contents on the flask run screen, but Flask returns the values to the browser only after the remote script has completed.
How can I make Flask print those real-time values on my browser screen?
My Flask piece:

@app.route("/abc/execute", methods=['POST'])
def execute_me():
    value = request.json['value']
    result = fabric_call(value)
    result = formations(result)
    return json.dumps(result)
My Fabric piece:

def fabric_call(value):
    with settings(host_string='my server', user='user', password='passwd', warn_only=True):
        proc = run(my_shell_script)
    return json.dumps(proc)
Update
I tried streaming as well, but it didn't work. The output is displayed to my curl POST only after the script's complete execution. What am I missing?
@app.route("/abc/execute", methods=['POST'])
def execute_me():
    value = request.json['value']

    def generate():
        for row in formations(fabric_call(value)):
            yield row + '\n'

    return Response(generate(), mimetype="text/event-stream")
First of all, you need to make sure your data source (formations()) is actually a generator that yields data when available. Right now it very much looks like it runs the command and only returns a value once it has completely finished.
Also, in case you are using AJAX to call your endpoint, remember that you cannot use e.g. jQuery's $.ajax(); you need to use XHR directly and poll for new data instead of relying on the onreadystatechange event, since you want data as soon as it's available and not only when the request has finished.
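For illustration, here is a minimal self-contained sketch of a streaming endpoint without Fabric (Fabric's run() only returns after the remote command has finished, which is why nothing arrives early); the subprocess command is a stand-in for the real script:

import subprocess

from flask import Flask, Response

app = Flask(__name__)

@app.route("/stream")
def stream():
    def generate():
        # Stand-in long-running command; replace with the real work.
        proc = subprocess.Popen(
            ['ping', '-c', '5', '127.0.0.1'],
            stdout=subprocess.PIPE,
        )
        for line in iter(proc.stdout.readline, b''):
            yield line  # each line is sent to the client as it arrives

    return Response(generate(), mimetype="text/plain")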

Block python's atexit during a crash?

I have a Python script which uses atexit.register() to run a function that persists a list of dictionaries when the program exits. However, this code also runs when the script exits due to a crash or runtime error, and usually the data ends up corrupted.
Is there any way to block it from running when the program exits abnormally?
EDIT: To clarify, this involves a program using Flask, and I'm trying to prevent the data persistence code from running on an exit that results from an error being raised.
You don't want to use atexit with Flask. You want to use Flask signals. It sounds like you are specifically looking for the request_finished signal.
from flask import request_finished

def request_finished_handler(sender, response, **extra):
    sender.logger.debug('Request context is about to close down. '
                        'Response: %s', response)
    # do some fancy storage stuff.

request_finished.connect(request_finished_handler, app)
The benefit of request_finished is that it only fires after a successful response. That means that so long as there isn't an error in another signal, you should be good.
One way: at global level in the main program:

abnormal_termination = False

def your_cleanup_function():
    # Add next two lines at the top
    if abnormal_termination:
        return
    # ...

# At end of main program:
try:
    pass  # your original code goes here
except Exception:  # replace according to what *you* consider "abnormal"
    abnormal_termination = True  # stop atexit handler

Not pretty, but straightforward ;-)
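Stitched together as a runnable sketch (the persistence logic and main loop are hypothetical stand-ins):

import atexit

abnormal_termination = False

def save_data():
    if abnormal_termination:
        return  # skip persistence after a crash
    print("persisting data...")  # stand-in for the real persistence code

atexit.register(save_data)

def main():
    pass  # stand-in for the real program

try:
    main()
except Exception:
    abnormal_termination = True
    raise  # still report the crash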

How can a Python function be called on script exit reliably?

How do you register a function, with all the correct handlers etc., to be called when the Python script exits (successfully or not)?
I have tried:
@atexit.register
def finalise():
    '''
    Function handles program close
    '''
    print("Tidying up...")
    ...
    print("Closing")

...but this does not get called when the user closes the command prompt window, for example (because @atexit.register-decorated functions do not get called when the exit code is non-zero).
I am looking for a way of guaranteeing finalise() is called on program exit, regardless of errors.
For context, my Python program is a continually looping service program that aims to run all the time.
Thanks in advance
I don't think it can be done in pure Python. From the documentation:

Note: the functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.

I think you may find this useful: How to capture a command prompt window close event in python
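One partial workaround (a sketch, not covered by the quote above): translate the signals Python can handle into a normal SystemExit, so atexit handlers still run for e.g. SIGTERM. This won't help for SIGKILL, a fatal interpreter error, or os._exit():

import atexit
import signal
import sys

@atexit.register
def finalise():
    print("Tidying up...")

def handle_signal(signum, frame):
    # Raising SystemExit lets the interpreter unwind normally,
    # which runs the atexit handlers.
    sys.exit(128 + signum)

signal.signal(signal.SIGTERM, handle_signal)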
Have you tried to just catch all kinds of exceptions in your main function or code block?
For context, my Python program is a continually looping service program that aims to run all the time.
This is very hard to get right (see How do you create a daemon in Python?). You should use a library like http://pypi.python.org/pypi/python-daemon/ instead.
This one should work: the finally block runs both on Ctrl-C and on an unhandled exception. Maybe you can use a similar construct and pack it as a decorator, or whatever.
def main():
    print raw_input('> ')
    # do all your stuff here

if __name__ == '__main__':
    try:
        main()
    finally:
        print 'Bye!'
func = lambda: None  # whatever cleanup you need to run at exit
atexit.register(func)

Use http://docs.python.org/2/library/atexit.html

Python Thread not returning the value

Using: Django with Python
Overall objective: call a function which processes a video conversion (internally it makes a curl call to the media server) and return to the user immediately.
Using a message queue would be overkill for the app, so I decided to use threads. I have written a class which overrides __init__ and run and makes the curl call:
class process_video(Thread):
    def __init__(self, video_id, video_title, fileURI):
        Thread.__init__(self)
        self.video_id = video_id
        self.video_title = video_title
        self.fileURI = fileURI
        self.status = -1

    def run(self):
        logging.debug("FileURI: " + self.fileURI)
        curlCmd = "curl --data-urlencode \"fileURI=%s\" %s/finalize" % (self.fileURI, settings.MEDIA_ROOT)
        logging.debug("Command to be executed: " + str(curlCmd))
        #p = subprocess.call(str(curlCmd), shell=True)
        output_media_server, error = subprocess.Popen(curlCmd, shell=True, stdout=subprocess.PIPE).communicate()
        logging.debug("value returned from media server:")
        logging.debug(output_media_server)
And I instantiate this class from another function, called createVideo, like this: success = process_video(video_id, video_title, fileURI)
Problem:
The user gets redirected back to the other view from createVideo and process_video gets called; however, for some reason the created thread (process_video) doesn't wait for the output from the media server.
I wouldn't rely on threads being executed correctly within web applications. Depending on the web server's MPM, the process that executes the request might get killed after the request is done (I guess).
I'd recommend making the media server request synchronously, but letting the media server return immediately after it has started the encoding (if you have control over its source code). Then a background process (or cron) could poll for the result regularly. This is only one solution; you should provide more information about your infrastructure (e.g. do you control the media server?).
Also check the duplicates in another question's comments for some answers about using task queues in such a scenario.
BTW, I assume that no exception occurs in the background thread?!
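A sketch of the suggested pattern (the endpoint names and job-id field are hypothetical): kick off the encode synchronously, have the media server return as soon as the job is queued, and poll for the result separately:

import requests

def start_encode(file_uri):
    # Assumes the media server queues the job and returns immediately.
    resp = requests.post("http://media-server/finalize", data={"fileURI": file_uri})
    resp.raise_for_status()
    return resp.json()["job_id"]

def poll_status(job_id):
    # Called from a background process or cron until the job is done.
    return requests.get("http://media-server/status/%s" % job_id).json()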
Here is what I did to get around the issue I was facing.
I used Django Piston to create an API for calling process_video with the parameters passed as GET, since I was getting a 403 CSRF error when I tried to send the parameters as POST.
From the createVideo function I called the API like this:

cmd = "curl \"%s/api/process_video/?video_id=%s&fileURI=%s&video_title=%s\" > /dev/null 2>&1 &" % (settings.SITE_URL, str(video_id), urllib.quote(fileURI), urllib.quote(video_title))

and this worked.
I feel it would have been better if I could have got the session id and POST parameters to work. Not sure how to get that CSRF thing out of the way.
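For the record, the 403 on POST was almost certainly Django's CSRF protection; a view can opt out with the stock csrf_exempt decorator (a sketch; the view name is hypothetical):

from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def process_video_api(request):
    # POST parameters now arrive without a CSRF token check.
    ...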
