I'm writing a small and simple Telegram bot in Python. I've never used this language in my work and decided this is a good way to learn by practice.
To get updates, my app currently uses long polling called from an endless loop.
So I'm basically looking for the simplest way to run this app on OpenShift. I tried to use this Flask example, but it didn't work. There are plenty of other options for implementing infinite background processes with multiprocessing (from Django and Celery to Tornado), but all of them seem far too advanced and complicated for my rather modest needs.
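For reference, the endless loop is essentially a long poll against the Bot API's getUpdates method. A minimal sketch (it assumes the requests library; the token and the handler below are placeholders, not my real code):

```python
import time
import requests

TOKEN = '123456:ABC-DEF'  # placeholder bot token
API = 'https://api.telegram.org/bot{}/'.format(TOKEN)

def handle(message):
    # placeholder for the real message handling
    print(message.get('text'))

offset = None
while True:
    try:
        resp = requests.get(API + 'getUpdates',
                            params={'offset': offset, 'timeout': 30},
                            timeout=35)
        for update in resp.json().get('result', []):
            offset = update['update_id'] + 1
            if 'message' in update:
                handle(update['message'])
    except requests.RequestException:
        time.sleep(5)  # back off briefly if the API is unreachable
```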
If the polling is not event driven, then you could use cron (you can add the cron cartridge to your gear) to periodically trigger your Python script, which does the work and then "dies".
However, keep in mind that OpenShift is not really intended to host your worker process (unless you are on the Bronze plan or higher). Unless your gear receives an external request within a 24-hour period, it will be "idled" and your process will no longer run.
The "official" way to get around this is probably to get the Bronze plan (you will not be charged unless you require a 4th gear instance).
"Unofficially", you can create a Python gear that gives you a default website, then add a separate Python script that does your job and trigger it using cron. To keep the gear from idling, use something like UptimeRobot to ping your "website" every day.
How do I get SSL for my domains?
If you are still getting by on OpenShift Online's generous Free plan, you'll see a warning message at the top of your application's SSL configuration area. You can always take advantage of our *.rhcloud.com wildcard certificate in order to securely connect to any application via its original, OpenShift-provided hostname URL.
Tornado is very simple; I took my first steps in Telegram bot development using this server on the OpenShift platform.
OK, so I'm working on a project that has two Heroku apps: one is the writer, which writes to my DB after scraping a site, and one is the reader, which consumes said DB.
The former is just a Python script with a kind of while-1 loop (it's actually a Twitter stream). I want this to run every x minutes, independent of what the reader is doing.
Now, running the script locally works fine, but I'm not sure how to get this working on Heroku. I've tried looking it up but could not find a solid answer. I've read about background tasks, Redis Queue, one-off dynos, etc., but I'm not sure what to really use for my purpose. Some of my requirements are:
have the Python script keep logs of whatever I want;
in the future, I might want to add an admin panel for the writer that just shows me stats of the script (and the logs), so hooking up this admin panel (Flask) should be easy-ish and not break the script itself.
I would love any suggestions or pointers here.
I suggest writing the consumer as a server that waits around and then processes the stream at the timed interval. That is, you start it once and it runs forever, doing some processing every 10 minutes or so.
See the sched module in the Python standard library, which handles scheduling events at certain times and running them.
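A minimal sketch of that pattern using sched (the interval and the body of the job are placeholders):

```python
import sched
import time

INTERVAL = 600  # e.g. every 10 minutes

scheduler = sched.scheduler(time.time, time.sleep)

def process_stream():
    # placeholder for the real scraping / DB-writing work
    print('processing at', time.ctime())
    # re-schedule this function so the job repeats forever
    scheduler.enter(INTERVAL, 1, process_stream, ())

scheduler.enter(0, 1, process_stream, ())
scheduler.run()
```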
Simpler: use Heroku's scheduler service.
This technique is simpler -- it's just straight-through code -- but can lead to problems if you have two of the same consumer running at the same time.
I have over 100 web server instances running a PHP application using APC, and we occasionally (on the order of once per week across the entire fleet) see corruption of one of the caches, which results in a distinctive error log message.
Once this occurs, the application is dead on that node; any transactions routed to it will fail.
I've written a simple wrapper around tail -F which can spot the pattern any time it appears in the log file and evaluate a shell command (using bash eval) to react. I have it use the salt-call command from SaltStack to trigger a custom module which shuts down the nginx server, warms (refreshes) the cache, and, of course, restarts the web server. (Actually I have two forms of this wrapper, bash and Python.)
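The Python form of the wrapper is roughly this shape (the log path, pattern, and salt-call target below are placeholders, not the real values):

```python
import re
import subprocess

LOG = '/var/log/php/error.log'                # placeholder log path
PATTERN = re.compile(r'apc.*corrupt')         # placeholder error pattern
REACTION = ['salt-call', 'mymodule.recover']  # placeholder custom module

# Follow the log the way tail -F does and react whenever the pattern appears.
tail = subprocess.Popen(['tail', '-F', LOG], stdout=subprocess.PIPE)
for raw in iter(tail.stdout.readline, b''):
    if PATTERN.search(raw.decode('utf-8', 'replace')):
        subprocess.call(REACTION)
```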
This is fine, and the frequency of events is such that it's unlikely to be an issue. However, my boss is, quite reasonably, concerned about a common-mode failure pattern: the regular expression might appear in too many of these logs at once and take down the entire site.
My first thought was to wrap my salt-call in a Redis check (we already have a Redis infrastructure used for caching and certain other data structures). That would be implemented as an integer with an expiration. The check would call INCR, check the result, and sleep if more than N were returned (or if the Redis server were unreachable). If the result were below the threshold, salt-call would be dispatched and a decrement would be issued after the server is back up and running. (Expiration of the Redis key would kill off any stale increments after perhaps a day or even a few hours; our alerting system will already have notified us of down servers, and our response time is more than adequate for such time frames.)
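As a sketch only (the key name, threshold, and host are made up, and it assumes the redis-py client), the check would look something like:

```python
import time
import redis

KEY = 'apc:recovery:count'   # made-up key name
THRESHOLD = 3                # max nodes allowed to recover at once
TTL = 4 * 3600               # stale increments expire after a few hours

r = redis.StrictRedis(host='redis.example.internal', port=6379)

def may_recover():
    """INCR the counter; refuse if too many nodes are already recovering
    or if Redis is unreachable."""
    try:
        count = r.incr(KEY)
        r.expire(KEY, TTL)
        return count <= THRESHOLD
    except redis.RedisError:
        return False

def done_recovering():
    try:
        r.decr(KEY)
    except redis.RedisError:
        pass

if may_recover():
    # dispatch salt-call here (restart nginx, warm the cache), then:
    done_recovering()
else:
    time.sleep(300)  # sleep and let the wrapper try again later
```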
However, I was reading about the SaltStack event handling features and wondering whether it would be better to use those instead. (The advantage is that the nodes have neither the redis-cli tool nor the Python Redis libraries, whereas salt-call is obviously already there with its requisite support.) Using something in Salt would minimize the need to add extra packages and dependencies to these systems. (Alternatively, I could write all the Redis handling as a separate PHP command-line utility and just have my shell script call that.)
Is there a HOWTO for writing simple Saltstack modules? The docs seem to plunge deeply into reference details without any orientation. Even some suggestions about which terms to search on would be helpful (because their use of terms like pillars, grains, minions, and so on seems somewhat opaque).
The main doc for writing a Salt module is here: http://docs.saltstack.com/en/latest/ref/modules/index.html
There are many modules shipped with Salt that might be helpful for inspiration. You can find them here: https://github.com/saltstack/salt/tree/develop/salt/modules
One thing to keep in mind is that the Salt Minion doesn't do anything unless you tell it to do something. So you could create a module that checks for the error pattern you mention, but you'd need to add it to the Salt Scheduler or cron to make sure it gets run frequently.
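For orientation, a custom execution module is just a Python file placed in the _modules directory of your file roots (e.g. /srv/salt/_modules/) and synced out with saltutil.sync_modules. A minimal, hypothetical example (the module name, log path, and pattern are made up):

```python
# /srv/salt/_modules/apc_check.py  (hypothetical module name)
import subprocess

def log_has_error(path='/var/log/php/error.log', pattern='apc corruption'):
    '''
    Return True if the corruption pattern appears near the end of the log.

    CLI Example:
        salt '*' apc_check.log_has_error
    '''
    try:
        tail = subprocess.check_output(['tail', '-n', '500', path])
    except (OSError, subprocess.CalledProcessError):
        return False
    return pattern in tail.decode('utf-8', 'replace')
```

Once synced, you would still need the Salt scheduler or cron to run it regularly, as mentioned above.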
If you need more help you'll find helpful people on IRC in #salt on freenode.
Before you go any further: I am currently working in a very restricted environment. Installing additional DLLs/EXEs and other admin-like activities are frustratingly difficult. I am fully aware that some of the methodology described in this post is far from best practice...
I would like to start a long-running background process that starts/stops with Apache. I have a CGI-enabled Python script that takes as input all of the parameters necessary to run a complex "job". It is not feasible to run this job in the CGI script itself, because a) CGI is already slow to begin with, and b) multiple simultaneous requests would definitely cause trouble. The CGI script will do nothing more than enter the parameters into a "jobs" database.
Normally, I would set something up like MSMQ in conjunction with a Windows service. I would have a web service add a job to the queue, and the Windows service would poll the queue at some standard interval, processing jobs in sequence...
How could I accomplish the same in Apache? I can easily enough create a python script to serve as the background job processor. My questions are:
How do I start the process with Apache, leave it running alongside Apache, and stop it when Apache stops?
How can I monitor the process and make sure it stays alive while Apache is running?
Any tips or insight welcome.
Note: the OS is Windows Server 2008.
Here's a pretty hacky solution for anyone looking to do something similar.
Set up a Windows scheduled task that does the background processing. Set it to run once a day or at whatever interval you want (it's irrelevant, as you'll see in the next steps).
In the Settings tab of the scheduled task, make sure the "Allow task to be run on demand" option is checked. Also, under the "If the task is already running..." text, make sure the "Do not start a new instance" option is selected.
Then, from the CGI script, it is possible to invoke the scheduled task from the command line (using the subprocess module); see here. With the options set above, if the task is already running, any subsequent run-on-demand requests are ignored.
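A sketch of that invocation (the task name is hypothetical):

```python
import subprocess

TASK_NAME = 'BackgroundJobProcessor'  # hypothetical scheduled task name

def trigger_job_processor():
    # Ask Task Scheduler to run the task on demand; with "Do not start a
    # new instance" selected, this is a no-op if the task is already running.
    return subprocess.call(['schtasks', '/Run', '/TN', TASK_NAME])
```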
This seems like a simple question, but I am having trouble finding the answer.
I am making a web app which would require the constant running of a task.
I'll use sites like Pingdom or Twitterfeed as an analogy. As you may know, Pingdom checks uptime, so it is constantly checking websites to see if they are up, and Twitterfeed checks RSS feeds to see if they've changed and then tweets the changes. I too need to run a simple script that cycles through URLs in a database and performs an action on them.
My question is: how should I implement this? I am familiar with cron, currently using it to do my server backups. Would this be the way to go?
I know how to make a Python script which runs indefinitely, starting back at the beginning with the next URL in the database when I'm done. Should I just run that on the server? How will I know it is always running and doesn't crash or something?
I hope this question makes sense and I hope I am not repeating someone else or anything.
Thank you,
Sam
Edit: To be clear, I need the task to run constantly. As in, check URL 1 in the database, check URL 2 in the database, check URL 3 and, when it reaches the last one, go right back to the beginning. Thanks!
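To illustrate, the loop I have in mind is roughly this (the database query and the per-URL action are placeholders):

```python
import time

def get_urls():
    # placeholder for the real database query
    return ['http://example.com/a', 'http://example.com/b']

def check(url):
    # placeholder for whatever action is performed on each URL
    print('checking', url)

while True:
    for url in get_urls():
        check(url)
    time.sleep(1)  # brief pause before starting the cycle again
```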
If you need a task that runs repeatedly and can be launched from the command line, that's exactly what cron is ideal for.
I don't see any drawbacks to this approach.
Update:
Okay, I now see the issue somewhat differently. I see several solutions:
a) Run the cron task at set intervals and let it process the data once per run; the next batch gets processed on the next run. Use PIDs/a database/semaphores to avoid parallel processes (see the sketch after this list).
b) Update the processes that insert/update data in the database, so the information is processed as it is inserted/updated.
c) Write a daemon process that resides in memory and checks the data in real time.
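For option a), a minimal sketch of the lock guard (the lock path is arbitrary, and flock assumes a Unix host):

```python
import fcntl
import sys

LOCK_PATH = '/tmp/url_task.lock'  # arbitrary lock file path

lock_file = open(LOCK_PATH, 'w')
try:
    # Fail immediately if a previous cron run still holds the lock.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit(0)  # another run is still processing; exit quietly

# ... process the data for this run here ...
```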
cron would definitely be a way to go with this, as well as any other task scheduler you may prefer.
The main point is found in the title to your question:
Run a repeating task for a web app
The background task and the web application should be kept separate. They can share code, they can share access to a database, but they should be separate and discrete application contexts. (Consider them as separate UIs accessing the same back-end logic.)
The main reason for this is that web applications and background processes are architecturally very different and aren't meant to be mixed. Consider the structure of a web application held within a web server (Apache, IIS, etc.). When is the application "running"? When is it "on"? It's not really a running task; it's a service waiting for input (requests) to handle, generating output (responses), and then going back to waiting.
Web applications are for responding to requests. Scheduled tasks or daemon jobs are for running repeated processes in the background. Keeping the two separate will make your management of the two a lot easier.
For quite a long time I've wanted to start a pet project that will, in time, become a web hosting control panel, mainly focused on Python hosting, meaning I would like to give users a way to generate/start Django or other framework projects right from the panel. I seem to have found the perfect tool to build my app with: CherryPy. This would allow me to do it the way I want, building the app with its own HTTP/HTTPS server, all in my favorite programming language.
But now a new question arises: as CherryPy is a threaded server, will it be the right choice for this kind of task? There will be lots of time-consuming tasks, so if one of them blocks, the rest of the users trying to access other pages will be left waiting and eventually time out. I imagine this kind of problem wouldn't happen on a fork-based server. What would you advise?
"Threaded" and "Fork based" servers are equivalent. A "threaded" server has multiple threads of execution, and if one blocks then the others will continue. A "Fork based" server has multiple processes executing, and if one blocks then the others will continue. The only difference is that threaded servers by default will share memory between the threads, "fork based" ones by default will not share memory.
One other point: the subprocess module is not thread-safe, so if you try to use it from CherryPy you will get weird errors. (This is Python bug 1731717.)