I'm working on a project which uses kubernetes to manage a collection of flask servers and stores its data in redis. I have to run a lot of background tasks which handle and process data, and also check on the progress of that data processing. I'd like to know if there are frameworks or guides on how to do this optimally as my current setup leaves me feeling like it's suboptimal.
Here's basically how I have it set up now:
def process_data(data):
    # do processing
    return processed

def run_processor(data_key):
    if redis_client.exists(f"{data_key}_processed", f"{data_key}_processing") > 0:
        return
    redis_client.set(f"{data_key}_processing", 1)
    data = redis_client.get(data_key)
    processed = process_data(data)
    redis_client.set(f"{data_key}_processed", processed)
    redis_client.delete(f"{data_key}_processing")

@app.route("/start/data/processing/endpoint")
def handle_request():
    Thread(target=run_processor, args=(data_key,)).start()
    return jsonify(successful=True)
The idea is that I can call the handle_request endpoint as many times as I want and it will only run if the data is not processed and there isn't any other process already running, regardless of which pod is running it. One flaw I've already noticed is that the process could fail and leave f'{data_key}_processing' in place. I could fix that by adding and refreshing a timeout, but it feels hacky to me. Additionally, I don't have a good way to "check in" on a process which is currently running.
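For reference, a minimal sketch of that timeout idea using redis-py's atomic SET with nx/ex; the 300-second TTL is just an assumed upper bound on processing time:

def run_processor(data_key):
    if redis_client.exists(f"{data_key}_processed"):
        return
    # Claim the lock only if nobody else holds it; the expiry means a
    # crashed worker cannot block reprocessing forever.
    if not redis_client.set(f"{data_key}_processing", 1, nx=True, ex=300):
        return
    try:
        data = redis_client.get(data_key)
        redis_client.set(f"{data_key}_processed", process_data(data))
    finally:
        redis_client.delete(f"{data_key}_processing")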
If there are any useful resources or even just terms I could google, the help would be much obliged.
I would like to measure the coverage of my Python code which gets executed in the production system.
I want an answer to this question:
Which lines get executed often (hot spots) and which lines are never used (dead code)?
Of course this must not slow down my production site.
I am not talking about measuring the coverage of tests.
I assume you are not talking about test suite code coverage which the other answer is referring to. That is a job for CI indeed.
If you want to know which code paths are hit often in your production system, then you're going to have to do some instrumentation / profiling. This will have a cost. You cannot add measurements for free. You can do it cheaply though and typically you would only run it for short amounts of time, long enough until you have your data.
Python has cProfile to do full profiling, measuring call counts per function etc. This will give you the most accurate data but will likely have relatively high impact on performance.
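For completeness, a minimal cProfile sketch; main() stands in for whatever entry point you want to profile:

import cProfile
import pstats

# Run the entry point under the profiler and dump raw stats to a file.
cProfile.run("main()", "profile.out")

# Print the 20 most expensive functions by cumulative time.
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)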
Alternatively, you can do statistical profiling which basically means you sample the stack on a timer instead of instrumenting everything. This can be much cheaper, even with high sampling rate! The downside of course is a loss of precision.
Even though it is surprisingly easy to do in Python, this stuff is still a bit much to put into an answer here. There is an excellent blog post by the Nylas team on this exact topic though.
The sampler below was lifted from the Nylas blog with some tweaks. After you start it, it fires an interrupt every millisecond and records the current call stack:
import collections
import signal

class Sampler(object):
    def __init__(self, interval=0.001):
        self.stack_counts = collections.defaultdict(int)
        self.interval = interval

    def start(self):
        # SIGVTALRM fires whenever the virtual (CPU) timer expires.
        signal.signal(signal.SIGVTALRM, self._sample)
        signal.setitimer(signal.ITIMER_VIRTUAL, self.interval, 0)

    def _sample(self, signum, frame):
        # Walk the interrupted call stack and record it as one ';'-joined key.
        stack = []
        while frame is not None:
            formatted_frame = '{}({})'.format(
                frame.f_code.co_name,
                frame.f_globals.get('__name__'))
            stack.append(formatted_frame)
            frame = frame.f_back

        formatted_stack = ';'.join(reversed(stack))
        self.stack_counts[formatted_stack] += 1
        # Re-arm the one-shot timer for the next sample.
        signal.setitimer(signal.ITIMER_VIRTUAL, self.interval, 0)
You inspect stack_counts to see what your program has been up to. This data can be plotted in a flame-graph which makes it really obvious to see in which code paths your program is spending the most time.
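A minimal usage sketch; run_workload is a placeholder for whatever you want to observe:

sampler = Sampler()
sampler.start()

run_workload()  # placeholder for the code you want to profile

# Print the ten most frequently sampled stacks; the counts approximate
# where the program spends its (virtual) CPU time.
for stack, count in sorted(sampler.stack_counts.items(),
                           key=lambda kv: kv[1], reverse=True)[:10]:
    print(count, stack)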
If I understand it right, you want to learn which parts of your application are used most often by users.
TL;DR:
Use one of the metrics frameworks for Python if you do not want to do it by hand. Some of them are listed below:
DataDog
Prometheus
Prometheus Python Client
Splunk
It is usually done at the function level, and the right approach depends on the application:
If it is a desktop app with internet access:
You can create a simple db and collect how many times your functions are called. To accomplish this you can write a simple tracking function and call it inside every function that you want to track. After that you can define an asynchronous task to upload your data to the internet.
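For example, a rough sketch of such a tracking function as a decorator; the in-memory counter and the upload step are placeholders:

import collections
import functools

call_counts = collections.Counter()

def track(func):
    """Count how many times each decorated function is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        call_counts[func.__qualname__] += 1
        return func(*args, **kwargs)
    return wrapper

@track
def do_something():
    ...

# A separate background task could periodically persist or upload call_counts.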
If it is a web application:
You can track which functions are called from JS (mostly preferred for user behaviour tracking) or from the web API. It is good practice to start from the outside and work inwards. First detect which endpoints are frequently called (if you are using a proxy like Nginx, you can analyze the server logs to gather this information; it is the easiest and cleanest way). After that, add a logger to every other function that you want to track and simply analyze your logs every week or month.
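A rough sketch of counting endpoint hits from an Nginx access log, assuming the default "combined" log format where the request line is the first quoted field:

import collections

endpoint_hits = collections.Counter()

with open("/var/log/nginx/access.log") as log:
    for line in log:
        try:
            request = line.split('"')[1]      # e.g. 'GET /paths HTTP/1.1'
            endpoint_hits[request.split()[1]] += 1
        except IndexError:
            continue  # skip malformed lines

for path, hits in endpoint_hits.most_common(20):
    print(hits, path)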
But if you want to analyze your production code line by line (which is usually a very bad idea), you can start your application under a Python profiler. Python already ships with one: cProfile.
Maybe create a text file and, in every method of your program, append a short marker such as "Method one executed". Run the web application thoroughly about 10 times as a viewer would, then write a small Python program that reads the file, counts each specific marker (or pattern), and prints the totals.
I've built a Flask application that computes some paths in a graph. Usually it's a very heavy task and it takes a lot of time to finish the calculations. While I was busy configuring the algorithm, I didn't really pay attention to the server-side implementation. We've set up an Nginx server that serves the whole thing. Here's the main Flask route:
@app.route('/paths', methods=['POST'])
def paths():
    form = SampleForm(request.form)
    if form.validate_on_submit():
        point_a = form.point_a.data
        point_b = form.point_b.data
        start = form.start.data.strftime('%Y%m%d')
        end = form.end.data.strftime('%Y%m%d')
        hops = form.hops.data
        rendering_time, collections = make_collection(point_a, point_b, start, end, hops)
        return render_template(
            'result.html',
            searching_time=rendering_time,
            collections=collections)
    else:
        logger.warning('Bad form: {}'.format(form.errors))
        return render_template('index.html', form=form)
The whole calculation lies in the make_collection method. So whenever a user sends a request to server.com/paths, they have to wait until the method completes its calculations and returns something. This is not a pleasing solution; sometimes Nginx just times out.
The next version of this was based on a simple idea: delegate the heavy work to a thread and just return an empty page to the user. Later on we can update the page contents with the latest searching results.
@app.route('/paths', methods=['POST'])
def paths():
    form = SampleForm(request.form)
    if form.validate_on_submit():
        point_a = form.point_a.data
        point_b = form.point_b.data
        start = form.start.data.strftime('%Y%m%d')
        end = form.end.data.strftime('%Y%m%d')
        hops = form.hops.data
        finder = threading.Thread(
            target=make_collection,
            kwargs={
                'point_a': point_a,
                'point_b': point_b,
                'start': start,
                'end': end,
                'hops': hops})
        finder.start()
        rendering_time, collections = 0, []
        return render_template(
            'result.html',
            searching_time=rendering_time,
            collections=collections)
    else:
        logger.warning('Bad form: {}'.format(form.errors))
        return render_template('index.html', form=form)
The code above works fine and with acceptable searching time (it didn't change from the first version, as expected). The problem is, it works like that only on my local machine. When I deploy this behind Nginx, the performance is not even nearly close to what I'm expecting. For comparison, results that I find on my local machine in under 30 seconds, Nginx cannot fully find even in 300 seconds. What should I do?
P.S. Originally, setting up the Nginx server wasn't my part of the job and I'm not very familiar with how Nginx works, but if you need any info, please ask.
The first code snippet looks like an easy way to let the client fetch calculation results.
However, make_collection is a blocking call and Nginx will keep one of its workers busy with it. Since the usual way of configuring Nginx is to have one worker per CPU core, that leaves you with one worker less each time you make an HTTP request to /paths. If there are multiple requests to /paths, it is no surprise that you get poor performance. Not to mention the WSGI server you probably have (e.g. uWSGI, Gunicorn) and its own workers and threads per worker process.
The solution with threads might look good, but you can end up with a lot of threads. Pay attention to the GIL when using threads in Python, and try to avoid delegating CPU-bound work to threads unless you really know what you are doing.
In general you should try to avoid blocking calls like the one you make and instead offload them to a separate worker queue, keeping a reference so you can fetch the results later on.
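One way to sketch that pattern, assuming a Redis instance is available and using RQ (the rq package): the route names here are made up, a separate `rq worker` process executes the jobs, and attribute names vary slightly between RQ versions.

from flask import jsonify, render_template, request
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

@app.route('/paths', methods=['POST'])
def paths():
    form = SampleForm(request.form)
    if not form.validate_on_submit():
        logger.warning('Bad form: {}'.format(form.errors))
        return render_template('index.html', form=form)
    # Enqueue the heavy call; the request returns immediately with a job id.
    job = queue.enqueue(
        make_collection,
        form.point_a.data,
        form.point_b.data,
        form.start.data.strftime('%Y%m%d'),
        form.end.data.strftime('%Y%m%d'),
        form.hops.data)
    return render_template('result.html', job_id=job.id)

@app.route('/paths/<job_id>')
def paths_status(job_id):
    # Poll this endpoint from the result page until the job is done.
    job = queue.fetch_job(job_id)
    if job is None or not job.is_finished:
        return jsonify(done=False)
    rendering_time, collections = job.result  # return value of make_collection
    return jsonify(done=True, searching_time=rendering_time,
                   collections=collections)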
I have a script that in the end executes two functions. It polls for data on a time interval (it runs as a daemon, and the data is retrieved from a shell command run on the local system) and, once it receives this data, it will: 1) function 1: first write this data to a log file, and 2) function 2: inspect the data and then send an email IF that data meets certain criteria.
The logging will happen every time, but the alert may not. The issue is that, in cases where an alert needs to be sent, if the email connection stalls or takes a long time to connect to the server, it obviously causes the next polling of the data to stall (for an unknown amount of time, depending on the server), and in my case it is very important that the polling interval remains consistent (for analytics purposes).
What is the most efficient way, if any, to keep the email process working independently of the logging process while still operating within the same application and depending on the same data? I was considering creating a separate thread for the mailer, but that kind of seems like overkill in this case.
I'd rather not set a short timeout on the email connection, because I want to give the process some chance to connect to the server, while still allowing the logging to be written consistently on the given interval. Some code:
def send(self, msg_):
    """
    Send the alert message
    :param str msg_: the message to send
    """
    self.msg_ = msg_
    ar = alert.Alert()
    ar.send_message(msg_)

def monitor(self):
    """
    Post to the log file and
    send the alert message when
    applicable
    """
    read = r.SensorReading()
    msg_ = read.get_message()  # the data
    if msg_:  # if there is data in general...
        x = read.get_failed()  # store bad data
        msg_ += self.write_avg(read)
        msg_ += "==============================================="
        self.ctlog.update_templog(msg_)  # write general data to log
        if x:
            self.send(x)  # if bad data, send...
This is exactly the kind of case you want to use threading/subprocesses for. Fork off a thread for the email, which times out after a while, and keep your daemon running normally.
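A minimal sketch of that idea against the code above, using a daemon thread so a stalled SMTP connection cannot delay the next poll (self.send and x come from the snippet in the question):

import threading

# In monitor(), replace the direct call self.send(x) with:
if x:
    threading.Thread(target=self.send, args=(x,), daemon=True).start()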
Possible approaches that come to mind:
Multiprocessing
Multithreading
Parallel Python
My personal choice would be multiprocessing as you clearly mentioned independent processes; you wouldn't want a crashing thread to interrupt the other function.
You may also refer to this before making your design choice: Multiprocessing vs Threading Python
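A rough multiprocessing sketch for the mailer, assuming a module-level helper so the target can be pickled (alert.Alert comes from the question's code):

from multiprocessing import Process

def send_alert(msg):
    # Module-level function so multiprocessing can pickle the target.
    alert.Alert().send_message(msg)

# In monitor(), instead of self.send(x):
if x:
    Process(target=send_alert, args=(x,), daemon=True).start()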
Thanks everyone for the responses, they helped very much. I went with threading, but also updated the code to make sure it handled failing threads. I ran some regressions and found that the subsequent processes were no longer being interrupted by stalled connections and the log was being updated on a consistent schedule. Thanks again!!
I do not want to lose my sets if Windows is about to shut down/restart/log off/sleep. Is it possible to save them before shutdown? Or is there an alternative way to save the information without worrying it will get lost on Windows shutdown? JSON, CSV, DB? Anything?
import pickle

s = {1, 2, 3, 4}
# pickle it to a file when the PC is about to shut down, to save the information
with open("s.pick", "wb") as f:
    pickle.dump(s, f)
I do not want to lose my sets if Windows is about to shut down/restart/log off/sleep. Is it possible to save them before shutdown?
Yes, if you've built an app with a message loop, you can receive the WM_QUERYENDSESSION message. If you want to have a GUI, most GUI libraries will probably wrap this up in their own way. If you don't need a GUI, your simplest solution is probably to use PyWin32. Somewhere in the docs there's a tutorial on creating a hidden window and writing a simple message loop. Just do that on the main thread, and do your real work on a background thread, and signal your background thread when a WM_QUERYENDSESSION message comes in.
Or, much more simply, as Evgeny Prokurat suggests, just use SetConsoleCtrlHandler (again through PyWin32). This can also catch ^C, ^BREAK, and the user closing your console, as well as the logoff and shutdown messages that WM_QUERYENDSESSION catches. More importantly, it doesn't require a message loop, so if you don't have any other need for one, it's a lot simpler.
Or is there an alternative way to save the information without worrying it will get lost on Windows shutdown? JSON, CSV, DB? Anything?
The file format isn't going to magically solve anything. However, a database could have two advantages.
First, you can reduce the problem by writing as often as possible. But with most file formats, that means rewriting the whole file as often as possible, which will be very slow. The solution is to stream to a simpler "journal" file, pack it into the real file less often, and look for a leftover journal at every launch. You can do that manually, but a database will usually do it for you automatically.
Second, if you get killed in the middle of a write, you end up with half a file. You can solve that with the atomic-writing trick: write a temporary file, then replace the old file with the temporary one. This is hard to get right on Windows (especially with Python 2.x; see Getting atomic writes right), and again, a database will usually do it for you.
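A minimal sketch of the atomic-write trick on Python 3, where os.replace is the portable, effectively atomic rename on both POSIX and Windows:

import os
import pickle
import tempfile

def atomic_pickle(obj, path):
    # Write to a temporary file in the same directory, then swap it into
    # place, so being killed mid-write cannot leave a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.remove(tmp)
        raise

atomic_pickle({1, 2, 3, 4}, "s.pick")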
The "right" way to do this is to create a new window class with a msgproc that dispatches to your handler on WM_QUERYENDSESSION. Just as MFC makes this easier than raw Win32 API code, win32ui (which wraps MFC) makes this easier than win32api/win32gui (which wraps raw Win32 API). And you can find lots of samples for that (e.g., a quick search for "pywin32 msgproc example" turned up examples like this, and searches for "python win32ui" and similar terms worked just as well).
However, in this case, you don't have a window that you want to act like a normal window, so it may be easier to go right to the low level and write a quick&dirty message loop. Unfortunately, that's a lot harder to find sample code for—you basically have to search the native APIs for C sample code (like Creating a Message Loop at MSDN), then figure out how to translate that to Python with the pywin32 documentation. Less than ideal, especially if you don't know C, but not that hard. Here's an example to get you started:
import sys
import threading
import win32api
import win32con
import win32gui

def msgloop():
    while True:
        msg = win32gui.GetMessage(None, 0, 0)
        if msg and msg.message == win32con.WM_QUERYENDSESSION:
            handle_shutdown()
        win32api.TranslateMessage(msg)
        win32api.DispatchMessage(msg)
        if msg and msg.message == win32con.WM_QUIT:
            return msg.wparam

worker = threading.Thread(target=real_program)
worker.start()
exitcode = msgloop()
worker.join()
sys.exit(exitcode)
I haven't shown the "how to create a minimal hidden window" part, or how to signal the worker to stop with, e.g., a threading.Condition, because there are a lot more (and easier-to-find) good samples for those parts; this is the tricky part to find.
You can detect Windows shutdown/log off with win32api.SetConsoleCtrlHandler.
There is a good example: How To Catch "Kill" Events with Python.
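A minimal sketch of that approach for the set from the question, assuming a console script with pywin32 installed:

import pickle
import win32api
import win32con

s = {1, 2, 3, 4}

def on_console_event(event):
    # Logoff/shutdown (and Ctrl+C / console close) are all delivered here.
    if event in (win32con.CTRL_LOGOFF_EVENT, win32con.CTRL_SHUTDOWN_EVENT,
                 win32con.CTRL_C_EVENT, win32con.CTRL_CLOSE_EVENT):
        with open("s.pick", "wb") as f:
            pickle.dump(s, f)
    return True  # tell Windows the event was handled

win32api.SetConsoleCtrlHandler(on_console_event, True)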
I am trying to write a Heroku app in Python which will read and store data from a Xively feed in real time. I want the app to run independently as a sort of 'backend process' that simply stores the data in a database. (It does not need to 'serve up' anything for users/site visitors.)
Right now I am working on the 'continuous reading' part. I have included my code below. It simply reads the datastream once each time I hit my app's Heroku URL. How do I get it to operate continuously so that it keeps on reading the data from Xively?
import os
from flask import Flask
import xively

app = Flask(__name__)

@app.route('/')
def run_xively_script():
    key = 'FEED_KEY'
    feedid = 'FEED_ID'
    client = xively.XivelyAPIClient(key)
    feed = client.feeds.get(feedid)
    datastream = feed.datastreams.get("level")
    level = datastream.current_value
    return "level is %s" % (level)
I am new to web development, Heroku, and Python... I would really appreciate any help (pointers).
PS:
I have read about the Heroku Scheduler and, from what I understand, it can be used to schedule a task at specific time intervals; when it does so, it starts a one-off dyno for the task. But as I mentioned, my app is really meant to perform just one function: continuously reading and storing data from Xively. Is it necessary to schedule a separate task for that? And the one-off dyno that the scheduler starts will also consume dyno hours, which I think will exceed the free 750 dyno-hours limit (as my app's web dyno is already consuming 720 dyno-hours per month)...
Using the scheduler, as you and @Calumb have suggested, is one method to go about this.
Another method would be for you to setup a trigger on Xively. https://xively.com/dev/docs/api/metadata/triggers/
Have the trigger fire when your feed is updated. The trigger should POST to your Flask app, and the Flask app can then take the new data, manipulate it and store it as you wish. This would be the closest to real time, I'd think, because Xively pushes the update to your system.
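A rough sketch of the receiving side; the payload shape depends on how the trigger is configured, and save_to_database is a hypothetical storage helper:

from flask import Flask, request

app = Flask(__name__)

@app.route('/xively-trigger', methods=['POST'])
def xively_trigger():
    # Accept whatever JSON body the trigger sends and store it.
    payload = request.get_json(force=True, silent=True) or {}
    save_to_database(payload)  # hypothetical: write to your DB of choice
    return '', 204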
This question is more about high level architecture decisions and what you are trying to accomplish than a specific thing you should do.
Ultimately, Flask is probably not the best choice for an app to do what you are trying to do. You would be better off with just pure python or pure ruby. With that being said, using Heroku scheduler (which you alluded to) makes it possible to do something like what you are trying to do.
The simplest way to accomplish your goal (assuming that you want to change a minimal amount of code and that constantly reading data is really what you want to do; both of which you should reconsider) is to write a loop that runs when you call that task and grabs data for a few seconds. Just use a for loop and increment a counter for however many times you want to get the data.
Something like:
import time

for i in range(0, 5):
    key = 'FEED_KEY'
    feedid = 'FEED_ID'
    client = xively.XivelyAPIClient(key)
    feed = client.feeds.get(feedid)
    datastream = feed.datastreams.get("level")
    level = datastream.current_value
    time.sleep(1)
However, Heroku has limits on how long a request can run before it must return a value (the router times out after about 30 seconds); otherwise the router returns a 503 or 500. But you could use the scheduler to schedule this to run every certain amount of time.
Again, I think that Flask and Heroku are not the best solution for what it sounds like you are trying to do. I would review your use case and go back to the drawing board on what the best method to accomplish it is.