Auto-execute a web service in Falcon - Python

I have a function which registers my web services with a Spring Eureka discovery server, but Eureka automatically de-registers them after a while. To solve this, I want to write a function that executes automatically every few seconds and re-registers my service again and again.
Please suggest what to do, and if you have a better approach to this problem, that would be great.

We can make another program which pings the health-check URL of the web server:
import requests

# 'headers' is defined elsewhere in the application
responsePythonAPI = requests.get("http://10.95.51.8:5050/health", headers=headers)
body = responsePythonAPI.json()
# The service counts as healthy only if both status fields check out.
pythonAPI = body["status"]["value"] == '200 OK' and body["status"]["code"] == 200
if pythonAPI:
    eureka.registerWebService()
else:
    eureka.deregisterWebService()
This program runs as soon as the application is up, and re-registers the service at an interval of 100 seconds.
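A minimal sketch of running that check on a timer thread, assuming an eureka client module exposing registerWebService()/deregisterWebService() and the health URL above:

import threading
import requests

import eureka  # your Eureka client module (assumption)

CHECK_INTERVAL = 100  # seconds
HEALTH_URL = "http://10.95.51.8:5050/health"

def check_and_register():
    try:
        body = requests.get(HEALTH_URL).json()
        if body["status"]["code"] == 200:
            eureka.registerWebService()
        else:
            eureka.deregisterWebService()
    finally:
        # Re-arm the timer so the check repeats every CHECK_INTERVAL seconds.
        threading.Timer(CHECK_INTERVAL, check_and_register).start()

check_and_register()  # start the loop once the application is up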


Shutting down Flask application from C#?

I am relaying HTTP requests from a C# application by sending JSON data to a localhost Flask application, which sends the requests with Python and relays the response back to my C# application. It needs to be done this way because the third-party server I am dealing with fingerprints SCHANNEL requests and sends back dummy data (it does this with PowerShell as well, but not curl, Postman, or Python).
var process = new Process();
process.StartInfo = new ProcessStartInfo()
{
    FileName = "cmd.exe",
    Arguments = @" /k python Assets\Scripts\server.py",
    UseShellExecute = true
};
process.Start();
I found this solution, which uses a /shutdown endpoint:
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()
But I get a warning that it is deprecated. I can live with that, but my OCD makes me want to do this properly; the warning tells me this is a hacky solution.
I am new to Python/Flask. What would be a good way to go about this?
Sidenote: process.Kill() doesn't work. Wish it did.
process.CloseMainWindow() seems to do the trick from my initial tests. Why process.Close() or process.Kill() do not work is beyond me (presumably because they act on the cmd.exe process, not on the python child it spawned).
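If you want to drop the deprecated werkzeug hook entirely, one common alternative (a sketch, not an official Flask API) is to have the endpoint signal its own process; the development server then exits as if Ctrl+C had been pressed:

import os
import signal
from flask import Flask

app = Flask(__name__)

@app.route('/shutdown', methods=['POST'])
def shutdown():
    # Send SIGINT to our own process. On Windows, consider signal.CTRL_C_EVENT
    # or os._exit(0) instead, since os.kill() support there is limited.
    os.kill(os.getpid(), signal.SIGINT)
    return 'Shutting down...'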

Flask redirect from a child process - make a waiting page using only Python

Today I am trying to make a "waiting page" using Flask.
I mean: a client makes a request, I want to show him a page like "please wait, the process can take a few minutes", and when the process ends on the server, display the result. I want to display the "wait" page before my function manageBill.teste runs, but redirect only works once the function has returned, right?
@application.route('/teste', methods=['POST', 'GET'])
def test_conf():
    if request.method == 'POST':
        if request.form.get('confList') != None:
            conf_file = request.form.get('confList')
            username = request.form.get('username')
            password = request.form.get('password')
            date = request.form.get('date')
            if date == '' or conf_file == '' or username == '' or password == '':
                return "You forgot to provide information"
            newpid = os.fork()
            if newpid == 0:  # in child process
                print('A new child ', os.getpid())
                error = manageBill.teste(conf_file, username, password, date)
                print("Error :" + error)
                return redirect('/tmp/' + error)
            else:  # in parent process
                return redirect('/tmp/wait')
            return error
    return manageBill.manageTest()
My /tmp route:
@application.route('/tmp/<wait>')
def wait_teste(wait):
    return "The process can take a few minutes; you will be redirected when the test is done.<br>" + wait
If you are using the built-in WSGI server (the default), requests are handled by threads. This is likely incompatible with forking.
But even if it weren't, you have another fundamental issue: a single request can only produce a single response. Once you return redirect('/tmp/wait'), that request is done. Over. You can't send anything else.
To support such a feature you have a few choices:
- The most common approach is to have AJAX make the request that starts the long-running process. Then set up an /is_done Flask endpoint that you check periodically via AJAX (this is called polling). Once the endpoint reports that the work is done, you can update the page, either with JS or by redirecting to a new page. A sketch of this approach follows below.
- Alternatively, make /is_done a page instead of an API endpoint queried from JS, and set an HTTP refresh on it (with some short timeout, like 10 seconds). The server can then answer the /is_done request with a redirect to the results page once the task finishes.
Generally you should strive to serve web requests as quickly as possible. You shouldn't leave connections open (to wait for a long task to finish), and you should offload long-running tasks to a queue system running separately from the web process. That way you can scale your ability to handle web requests and background work independently (and one failing does not bring the other down).
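A minimal sketch of the first (polling) approach; the /start and /is_done endpoints and the in-memory job store are illustrative assumptions, not part of the question:

import threading
import time
import uuid
from flask import Flask, jsonify, redirect

app = Flask(__name__)
jobs = {}  # job_id -> result, or None while the job is still running

def long_task(job_id):
    time.sleep(120)            # stand-in for manageBill.teste(...)
    jobs[job_id] = "no error"  # store whatever the task produced

@app.route('/start', methods=['POST'])
def start():
    job_id = str(uuid.uuid4())
    jobs[job_id] = None
    threading.Thread(target=long_task, args=(job_id,)).start()
    return redirect('/tmp/wait/' + job_id)

@app.route('/is_done/<job_id>')
def is_done(job_id):
    # Polled from the waiting page via JS every few seconds.
    return jsonify(done=jobs.get(job_id) is not None, result=jobs.get(job_id))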

In Python, how to keep track of variable in the response of a REST API for changes?

I have a REST API which returns a JSON response. I need to keep track of one field of the response and listen for any change in its value. If the value reaches a certain threshold, I need to perform some task (say, print an alert message). How can I accomplish this? Right now I have a daemon which runs periodically, making an HTTP request and reading the value. What is the correct approach if I want to perform the action the moment the variable reaches the threshold?
This is what I currently have:
import time

import requests
from daemon import runner  # python-daemon plumbing (unused in this snippet)

NUMBER_OF_MINUTES = 10
THRESHOLD = 100  # placeholder: whatever value should trigger the alert

def doSomething():
    print("Yay, we got there")

def getSomeData():
    url = "http://www.somewebsite.com/getdata?id=somevalue&name=someothervalue"
    response = requests.get(url)
    json_data = response.json()
    myField = json_data['somefield']
    if myField > THRESHOLD:
        doSomething()

def run():
    while True:
        getSomeData()
        time.sleep(60 * NUMBER_OF_MINUTES)

if __name__ == '__main__':
    run()
Do you have control of the API? If so, add a websocket endpoint that your frontend app can connect to. Your API can then let your frontend app know whenever the value changes, through whatever data structure is appropriate.
If you don't have control of the API, your current polling solution is about as good as it gets.
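A minimal sketch of the push approach using the third-party websockets package (the port, the field name, and where notify_change() gets called from are assumptions):

import asyncio
import json
import websockets

connected = set()

async def handler(ws):
    # Remember each connected client so updates can be pushed to it.
    connected.add(ws)
    try:
        await ws.wait_closed()
    finally:
        connected.discard(ws)

async def notify_change(value):
    # Call this from your API whenever the tracked field changes.
    message = json.dumps({"somefield": value})
    for ws in set(connected):
        await ws.send(message)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())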

Bottle equivalent of engine.restart()

I am trying to move from CherryPy to Bottle & gevent (as the server).
After I run:
application = bottle.default_app()  # bottle
WSGIServer(('', port), application, spawn=None).serve_forever()  # gevent
I want to restart the server just as if the reloader had reloaded it (but only when I tell it to).
So I want a page that asks for credentials, and only after correct authentication will the server restart.
Here is my working example in CherryPy:
@expose
def reloadMe(self, u=None, p=None):
    if u == "username" and p == "password":
        engine.restart()
        raise HTTPRedirect('/')
More simply, I am asking how to reload this script so that my edits to the source file take effect, but only when I retrieve a "restart" page.
I literally only need the Bottle equivalent of
engine.restart()  # cherrypy
Does no one know how to do this?
You can write a small shell script that restarts the gevent WSGI server, and then call that script from a handler like this:
@get('/restartmyserver')
def handler():
    http_auth_data = bottle.request.auth()  # returns a (username, password) tuple; basic auth only
    if http_auth_data[0] == user and http_auth_data[1] == password:
        os.system("your_shell_script_to_restart_gevent_wsgi")
    bottle.redirect('/')
Let me know if you need more info.
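An alternative that avoids the external shell script: re-exec the interpreter from the handler, so the new process re-reads the edited source and rebinds the port (a sketch, assuming Python 3, where the listening socket is not inherited across exec):

import os
import sys
import bottle

@bottle.get('/restartmyserver')
def restart_handler():
    auth = bottle.request.auth()  # (username, password) or None
    if auth == ("username", "password"):
        # Replace this process with a fresh interpreter running the same
        # script; the edited source file is loaded from disk on startup.
        os.execv(sys.executable, [sys.executable] + sys.argv)
    bottle.redirect('/')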

Background tasks on App Engine

How can I run background tasks on App Engine?
You may use the Task Queue Python API.
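A minimal sketch of enqueuing a push task with that API (the /worker URL and the handler name are assumptions; /worker must be mapped to a handler of your own):

from google.appengine.api import taskqueue
from google.appengine.ext import webapp

class EnqueueHandler(webapp.RequestHandler):
    def get(self):
        # Enqueue a push task; App Engine will POST the params to /worker
        # in the background and retry automatically on failure.
        taskqueue.add(url='/worker', params={'key': 'some-key'})
        self.response.out.write('task enqueued')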
GAE is a very useful tool for building scalable web applications. Among the limitations many have pointed out are the lack of support for background tasks, the lack of periodic tasks, and the strict limit on how long each HTTP request may take; if a request exceeds that limit, the operation is terminated, which makes running time-consuming tasks impossible.
How to run a background task?
In GAE, code is executed only in response to an HTTP request, and there is a strict time limit (I think 10 seconds) on how long the code can run. So if there are no requests, no code is executed. One suggested workaround is to use an external box to send requests continuously, thereby simulating a background task, but then we depend on one more element. Another alternative is to send a 302 redirect response so the client re-sends the request, but that also makes us dependent on an external element: the client. What if that external box is GAE itself? Everyone who has used a functional language without a looping construct knows the alternative: recursion replaces the loop. So what if we complete part of the computation and do an HTTP GET on the same URL with a very short timeout, say 1 second? The following PHP creates such a loop (recursion) on Apache:
<?php
$i = 0;
if (isset($_REQUEST["i"])) {
    $i = $_REQUEST["i"];
    sleep(1);
}
$ch = curl_init("http://localhost".$_SERVER["PHP_SELF"]."?i=".($i+1));
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 1);
curl_exec($ch);
print "hello world\n";
?>
Somehow this does not work on GAE. So what if we do an HTTP GET on some other URL, say url2, which does an HTTP GET on the first URL? This seems to work in GAE. The code looks like this:
class FirstUrl(webapp.RequestHandler):
    def get(self):
        self.response.out.write("ok")
        time.sleep(2)
        urlfetch.fetch("http://"+self.request.headers["HOST"]+'/url2')

class SecondUrl(webapp.RequestHandler):
    def get(self):
        self.response.out.write("ok")
        time.sleep(2)
        urlfetch.fetch("http://"+self.request.headers["HOST"]+'/url1')

application = webapp.WSGIApplication([('/url1', FirstUrl), ('/url2', SecondUrl)])

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
Since we have found a way to run background tasks, let's build abstractions for a periodic task (timer) and a looping construct that spans many HTTP requests (foreach).
Timer
Building a timer is straightforward. The basic idea is to keep a list of timers and the interval at which each should fire; once we reach that interval, we call the callback function. We use memcache to maintain the timer list. To find out when to call a callback, we store a key in memcache with the interval as its expiration time. We periodically (say every 5 seconds) check whether that key is present; if it is not, we call the callback and set the key again with the interval.
def timer(func, interval):
    timerlist = memcache.get('timer')
    if timerlist is None:
        timerlist = []
    timerlist.append({'func': func, 'interval': interval})
    memcache.set('timer-'+func, '1', interval)
    memcache.set('timer', timerlist)

def checktimers():
    timerlist = memcache.get('timer')
    if timerlist is None:
        return False
    for current in timerlist:
        if memcache.get('timer-'+current['func']) is None:
            # reset interval
            memcache.set('timer-'+current['func'], '1', current['interval'])
            # invoke callback function
            try:
                eval(current['func']+'()')
            except:
                pass
            return True
    return False
Foreach
This is needed when we want to perform a long-running computation, say some operation on 1000 database rows, or fetching 1000 URLs. The basic idea is to maintain a list of callbacks and arguments in memcache and, on each invocation, call the callback with the next argument.
def foreach(func, args):
    looplist = memcache.get('foreach')
    if looplist is None:
        looplist = []
    looplist.append({'func': func, 'args': args})
    memcache.set('foreach', looplist)

def checkloops():
    looplist = memcache.get('foreach')
    if looplist is None:
        return False
    if len(looplist) > 0 and len(looplist[0]['args']) > 0:
        arg = looplist[0]['args'].pop(0)
        func = looplist[0]['func']
        if len(looplist[0]['args']) == 0:
            looplist.pop(0)
        if len(looplist) > 0 and len(looplist[0]['args']) > 0:
            memcache.set('foreach', looplist)
        else:
            memcache.delete('foreach')
        try:
            eval(func+'('+repr(arg)+')')
        except:
            pass
        return True
    else:
        return False

# instead of
#   for index in range(0, 1000):
#       someoperation(index)
# we will say
#   foreach('someoperation', range(0, 1000))
Now building a program which fetches a list of URLs every hour is straightforward. Here is the code:
def getone(url):
    try:
        result = urlfetch.fetch(url)
        if result.status_code == 200:
            memcache.set(url, '1', 60*60)
            # process result.content
    except:
        pass

def getallurl():
    # list of urls to be fetched
    urllist = ['http://www.google.com/', 'http://www.cnn.com/', 'http://www.yahoo.com', 'http://news.google.com']
    fetchlist = []
    for url in urllist:
        if memcache.get(url) is None:
            fetchlist.append(url)
    # this is equivalent to
    #   for url in fetchlist: getone(url)
    if len(fetchlist) > 0:
        foreach('getone', fetchlist)

# register the timer callback
timer('getallurl', 3*60)
The complete code is here: http://groups.google.com/group/httpmr-discuss/t/1648611a54c01aa
I have been running this code on App Engine for a few days without much problem.
Warning: we make heavy use of urlfetch. The limit on the number of urlfetch calls per day is 160,000, so be careful not to reach that limit.
You can find more about cron jobs in Python App Engine here.
An upcoming version of the runtime will have some kind of periodic execution engine à la cron. See this message on the App Engine group:
"So, all the SDK pieces appear to work, but my testing indicates this isn't running on the production servers yet. I set up an 'every 1 minutes' cron that logs when it runs, and it hasn't been called yet."
Hard to say when this will be available, though...
Using the deferred Python library is the easiest way to run background tasks on App Engine with Python; it is built on top of the Task Queue API.
import logging

from google.appengine.ext import deferred

def do_something_expensive(a, b, c=None):
    logging.info("Doing something expensive!")
    # Do your work here

# Somewhere else:
deferred.defer(do_something_expensive, "Hello, world!", 42, c=True)
If you want to run periodic background tasks, see this question (AppEngine cron).
If your tasks are not periodic, see the Task Queue Python API or Task Queue Java API.
There is a cron facility built into App Engine.
Please refer to:
https://developers.google.com/appengine/docs/python/config/cron?hl=en
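For reference, a cron job is declared in a cron.yaml file alongside app.yaml; a minimal sketch (the URL and schedule are examples, and /tasks/fetchurls must be a handler of your own):

cron:
- description: fetch urls periodically
  url: /tasks/fetchurls
  schedule: every 1 hours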
Use the Task Queue - http://code.google.com/appengine/docs/java/taskqueue/overview.html
