For a percentage of production traffic, I want to duplicate the received request to a different version of my application. This needs to happen asynchronously so I don't double service time to the client.
The reason for doing this is so I can compare the responses generated by the prod version and a production candidate version. If their results are appropriately similar, I can be confident that the new version hasn't broken anything. (If I've made a functional change to the application, I'd filter out the necessary part of the response from this comparison.)
So I'm looking for an equivalent to:
class Foo(webapp2.RequestHandler):
    def post(self):
        handle = make_async_call_to('http://other_service_endpoint.com/', self.request)
        # process the user's request in the usual way
        test_response = handle.get_response()
        # compare the locally-prepared response and the remote one, and log
        # the diffs
        # return the locally-prepared response to the caller
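For the comparison-and-logging step I have in mind something like difflib (a rough sketch; the helper name and the idea of stripping volatile fields beforehand are just placeholders):

import difflib
import logging

def log_response_diff(local_body, shadow_body):
    # any fields expected to legitimately differ between versions would be
    # stripped from both bodies before this point
    diff = difflib.unified_diff(
        local_body.splitlines(), shadow_body.splitlines(),
        fromfile='production', tofile='candidate', lineterm='')
    for line in diff:
        logging.info(line)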
UPDATE
google.appengine.api.urlfetch was suggested as a potential solution to my problem. It behaves the way I wanted in production, but it's synchronous in the dev_appserver: the request doesn't go out until get_result() is called, and that call blocks:
import time
from google.appengine.api import urlfetch

start_time = time.time()

rpcs = []
print 'creating rpcs:'
for _ in xrange(3):
    rpcs.append(urlfetch.create_rpc())
    print time.time() - start_time

print 'making fetch calls:'
for rpc in rpcs:
    urlfetch.make_fetch_call(rpc, 'http://httpbin.org/delay/3')
    print time.time() - start_time

print 'getting results:'
for rpc in rpcs:
    rpc.get_result()
    print time.time() - start_time
creating rpcs:
9.51290130615e-05
0.000154972076416
0.000189065933228
making fetch calls:
0.00029993057251
0.000356912612915
0.000473976135254
getting results:
3.15417003632
6.31326603889
9.46627306938
UPDATE2
So, after playing with some other options, I found a way to make completely non-blocking requests:
import logging
import time
from google.appengine.api import apiproxy_stub_map, urlfetch

start_time = time.time()

rpcs = []
logging.info('creating rpcs:')
for i in xrange(10):
    rpc = urlfetch.create_rpc(deadline=30.0)
    url = 'http://httpbin.org/delay/{}'.format(i)
    urlfetch.make_fetch_call(rpc, url)
    rpc.callback = create_callback(rpc, url)
    rpcs.append(rpc)
logging.info(time.time() - start_time)

logging.info('getting results:')
while rpcs:
    rpc = apiproxy_stub_map.UserRPC.wait_any(rpcs)
    rpcs.remove(rpc)
logging.info(time.time() - start_time)
...but the important point to note is that none of the async fetch options in urlfetch work in the dev_appserver. Having discovered this, I went back to try @DanCornilescu's solution and found that it works properly in production, but not in the dev_appserver.
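For completeness, the create_callback helper isn't shown in the snippet above; it just needs to return a no-argument callable that consumes the finished RPC, for example something like:

def create_callback(rpc, url):
    # hypothetical helper: called by wait_any() once the RPC completes
    def callback():
        result = rpc.get_result()
        logging.info('%s -> %s', url, result.status_code)
    return callback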
The URL Fetch service supports asynchronous requests. From Issuing an asynchronous request:
HTTP(S) requests are synchronous by default. To issue an asynchronous request, your application must:
Create a new RPC object using urlfetch.create_rpc(). This object represents your asynchronous call in subsequent method calls.
Call urlfetch.make_fetch_call() to make the request. This method takes your RPC object and the request target's URL as parameters.
Call the RPC object's get_result() method. This method returns the result object if the request is successful, and raises an exception if an error occurred during the request.
The following snippets demonstrate how to make a basic asynchronous request from a Python application. First, import the urlfetch library from the App Engine SDK:
from google.appengine.api import urlfetch
Next, use urlfetch to make the asynchronous request:
rpc = urlfetch.create_rpc()
urlfetch.make_fetch_call(rpc, "http://www.google.com/")

# ... do other things ...

try:
    result = rpc.get_result()
    if result.status_code == 200:
        text = result.content
        self.response.write(text)
    else:
        self.response.status_code = result.status_code
        logging.error("Error making RPC request")
except urlfetch.DownloadError:
    logging.error("Error fetching URL")
Note: As per Sniggerfardimungus's experiment mentioned in the question's update, the async calls might not work as expected on the development server - being serialized instead of concurrent - but they do work as intended when deployed on GAE. Personally I haven't used the async calls yet, so I can't really say.
If the intent is to not block at all while waiting for the response from the production candidate app, you could push a copy of the original request and the production-prepared response onto a task queue, then answer the original request - with negligible delay (that of enqueueing the task).
The handler for the respective task queue would, outside of the original request's critical path, make the request to the staging app using the copy of the original request (async or not, doesn't really matter from the point of view of impacting the production app's response time), get its response and compare it with the production-prepared response, log the deltas, etc. This can be nicely wrapped in a separate module for minimal changes to the production app and deployed/deleted as needed.
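A rough sketch of that idea (the queue name, the task handler path and the stub comparison are made up for illustration; the candidate endpoint is the one from the question):

import json
import logging
import webapp2
from google.appengine.api import taskqueue, urlfetch

class Foo(webapp2.RequestHandler):
    def post(self):
        prod_response = handle_normally(self.request)  # placeholder for the usual processing
        # enqueue the comparison outside the request's critical path
        taskqueue.add(queue_name='shadow-compare',
                      url='/tasks/compare',
                      payload=json.dumps({'request_body': self.request.body,
                                          'prod_response': prod_response}))
        self.response.write(prod_response)

class CompareHandler(webapp2.RequestHandler):
    # assumed to be routed at /tasks/compare in the WSGI application
    def post(self):
        data = json.loads(self.request.body)
        # replay the original request against the candidate version
        candidate = urlfetch.fetch('http://other_service_endpoint.com/',
                                   payload=data['request_body'],
                                   method=urlfetch.POST, deadline=30)
        if candidate.content != data['prod_response']:
            logging.info('shadow traffic diff detected')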
Related
I'm trying to implement a fire-and-forget mechanism using FastAPI, and I'm facing a few difficulties implementing it.
I have two applications. One is developed with FastAPI and the other with Flask. FastAPI will run in AWS Lambda and it will send requests to the Flask app running on AWS ECS.
Currently, I am able to send a request to the Flask API and receive an immediate response from the FastAPI app. But I can see that FastAPI is still running bg_tasks.add_task(make_request, request) in the background, which times out once the Lambda execution limit (15 mins) is reached.
FastAPI application:
def make_request(data):
    """
    Function to make a post request to flask application
    :param data: Data from the user to write into the file
    :return: None
    """
    print("***** Inside post *****")
    requests.post(url=root_url, data=data)
    print("***** Post completed *****")


@router.post("/write-to-file")
async def write_to_file(request: Dict, bg_tasks: BackgroundTasks):
    """
    Function to queue the requests and return to the post function
    :param request: Request from the user
    :param bg_tasks: Background task instance
    :return: Some message
    """
    print(f"****** Request call started ******")
    bg_tasks.add_task(make_request, request)
    print(f"****** Request completed ******")
    return {"Message": "Data will be written into the file"}
Flask Application:
@app.route('/', methods=['POST'])
def write():
    """
    Function to write the request data into the file
    :return:
    """
    request_data = request.form
    try:
        print(f"Sleep time {int(request_data['sleep_time'])}")
        time.sleep(int(request_data["sleep_time"]))
        request_data = dict(request_data)
        request_data['current_time'] = str(datetime.now())
        with open("data.txt", "a") as f:
            f.write("\n")
            f.write(json.dumps(request_data, indent=4))
        return {"Message": "Success"}
    except Exception as e:
        return {"Message": e}
FastAPI (http://localhost:8000/write-to-file/) calls the write_to_file method, which adds the task (request) to the background queue and runs it in the background.
This function does not wait for the task to complete; it returns the response to the client immediately. The make_request method then triggers the Flask endpoint (http://localhost:5000/), which processes the request and writes to a file. Consider make_request as running inside one AWS Lambda: if the Flask application takes hours to process, the lambda will wait just as long.
Is it possible to kill the lambda once the request is published, or to do something else to solve the timeout issue?
With the current setup, your lambda will run for as long as the Flask endpoint needs to process your request. Effectively, both APIs run for exactly the same amount of time.
This is because the requests.post in the lambda function must wait for the response to finish. Given that you don't care about the results of that response, I can think of several other ways to solve this.
If I were you, I would move the queue processing to the ECS side. Then the lambda would only be responsible for putting a job into a queue, which the ECS worker would process when it has capacity.
This option would let you get rid of one of the APIs: you would be able to query the Flask API directly and kill the lambda function, or instead kill the Flask API and run a worker process on ECS.
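For example, with SQS the lambda side could shrink to roughly this (the queue URL, message shape and process() helper are assumptions, not from the question):

import json
from typing import Dict

import boto3
from fastapi import FastAPI

app = FastAPI()
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/write-to-file-jobs"  # hypothetical

@app.post("/write-to-file")
async def write_to_file(request: Dict):
    # enqueue the job and return immediately; the lambda no longer waits on Flask
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(request))
    return {"Message": "Data will be written into the file"}

# on the ECS side, a worker loop replaces the Flask endpoint:
while True:
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20).get("Messages", [])
    for msg in messages:
        process(json.loads(msg["Body"]))  # the former write() logic, placeholder here
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])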
Alternatively, you could respond early on the Flask API side, which would finish your HTTP request, and thus the lambda execution, sooner. This can be confusing to set up and defeats the purpose of exposing an HTTP API in the first place. Also, under some circumstances, the Flask request execution could be terminated by the webserver after a default timeout (~30 seconds).
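A sketch of one way to respond early on the Flask side, by handing the work to a plain thread (an illustration, not a robust solution):

import threading
from flask import Flask, request

app = Flask(__name__)

def do_write(form_data):
    # the original write() body (sleep, write to the file, ...) goes here
    pass

@app.route('/', methods=['POST'])
def write():
    # reply immediately; the actual work continues in the background thread
    threading.Thread(target=do_write, args=(dict(request.form),)).start()
    return {"Message": "Accepted"}, 202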
And finally, in case you really-really want to leave your code as it is now, you could set a request to timeout after a short period of time. In case you go this route, make sure to choose a long enough timeout for Flask to start processing the request:
try:
    requests.post(url=root_url, data=data, timeout=5)  # throw after 5 seconds of waiting
except requests.exceptions.Timeout:
    pass
I created a batch delayed HTTP (async) client which can trigger multiple async HTTP requests and, most importantly, can delay the start of the requests, so that for example 100 requests are not triggered at the same time.
But it has an issue. The .fetch() call is given a handleMethod callback which handles the response, but I found out that if the delay (sleep) after the fetch isn't long enough, the handle method is not even triggered (maybe the request is killed in the meantime).
It is probably related to the .run_sync method. How can I fix that? I want to keep the delays, but without this issue.
I need to parse the response regardless of how long the request takes, and regardless of the following sleep call (that call has another purpose, as I said, and should not be related to response handling at all).
from tornado import gen, httpclient, ioloop

class BatchDelayedHttpClient:
    def __init__(self, requestList):
        # class members
        self.httpClient = httpclient.AsyncHTTPClient()
        self.requestList = requestList
        ioloop.IOLoop.current().run_sync(self.execute)

    @gen.coroutine
    def execute(self):
        print("exec start")
        for request in self.requestList:
            print("requesting " + request["url"])
            self.httpClient.fetch(request["url"], request["handleMethod"], method=request["method"], headers=request["headers"], body=request["body"])
            yield gen.sleep(request["sleep"])
        print("exec end")
I have a REST API, which returns a JSON response. I need to keep track of one of the fields of the response, and listen for any changes in the value of this field. If the value reaches a certain threshold, I need to perform some task (say print an alert message). How can I accomplish this? Right now, I have a daemon which runs periodically, making an HTTP request and obtaining the value. What is the correct approach to do this, if I want to perform the action the moment the variable reaches the threshold?
This is what I currently have -
import daemon, time
import requests
from daemon import runner

NUMBER_OF_MINUTES = 10
threshold = 100  # placeholder; the alert threshold isn't defined in this snippet

def doSomething():
    print "Yay, we got there"

def getSomeData():
    url = "www.somewebsite.com/getdata?id=somevalue&name=someothervalue"
    response = requests.get(url)
    json_data = response.json()
    myField = json_data['somefield']
    if myField > threshold:
        doSomething()

def run():
    while True:
        getSomeData()
        time.sleep(60 * NUMBER_OF_MINUTES)

if __name__ == '__main__':
    run()
Do you have control of the API? If so, add a websocket endpoint that your frontend app can connect to. Your API can then let your frontend app know whenever the value changes, through whatever data structure is appropriate.
If you don't have control of the API, your current polling solution is about as good as it gets.
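For illustration, a push endpoint could look something like this (sketched with FastAPI purely as an example; the route and helper names are made up):

from fastapi import FastAPI, WebSocket

app = FastAPI()
clients = []

@app.websocket("/ws/somefield")
async def somefield_ws(websocket: WebSocket):
    # each frontend keeps one websocket open and just listens
    await websocket.accept()
    clients.append(websocket)
    try:
        while True:
            await websocket.receive_text()  # keep the connection alive
    finally:
        clients.remove(websocket)

async def notify_value_changed(new_value):
    # call this from wherever the API updates 'somefield'
    for ws in list(clients):
        await ws.send_json({"somefield": new_value})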
I am new to RabbitMQ and trying to figure out how I can have a client request memory and CPU utilization information from a server, following this tutorial (https://www.rabbitmq.com/tutorials/tutorial-six-python.html).
So the client requests CPU and memory values (I believe I will need two queues) and the server responds with them.
Is there any way to simply create a client.py and server.py for this case using the Pika library in Python?
I would recommend following the first RabbitMQ tutorials if you haven't already. The RPC example builds on concepts covered in the previous examples (direct queues, exclusive queues, acknowledgements, etc.).
The RPC solution proposed in the tutorial requires at least two queues, depending on how many clients you want to use:
One direct queue (rpc_queue), used to send requests from the client to the server.
One exclusive queue per client, used to receive responses.
The request/response cycle:
The client sends a message to the rpc_queue. Each message includes a reply_to property, with the name of the client's exclusive queue the server should reply to, and a correlation_id property, which is just a unique id used to track the request.
The server waits for messages on the rpc_queue. When a message arrives, it prepares the response, adds the correlation_id to the new message, and sends it to the queue defined in the reply_to message property.
The client waits on its exclusive queue until it finds a message with the correlation_id that was originally generated.
Jumping straight to your problem, the first thing to do is to define the message format you'll want to use on your responses. You can use JSON, msgpack or any other serialization library. For example, if using JSON, one message could look something like this:
{
    "cpu": 1.2,
    "memory": 0.3
}
Then, on your server.py:
def on_request(channel, method, props, body):
    response = {'cpu': current_cpu_usage(),
                'memory': current_memory_usage()}

    properties = pika.BasicProperties(correlation_id=props.correlation_id)

    channel.basic_publish(exchange='',
                          routing_key=props.reply_to,
                          properties=properties,
                          body=json.dumps(response))

    channel.basic_ack(delivery_tag=method.delivery_tag)

# ...
And on your client.py:
class ResponseTimeout(Exception): pass

class Client:
    # similar constructor as `FibonacciRpcClient` from tutorial...

    def on_response(self, channel, method, props, body):
        if self.correlation_id == props.correlation_id:
            self.response = json.loads(body.decode())

    def call(self, timeout=2):
        self.response = None
        self.correlation_id = str(uuid.uuid4())

        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.correlation_id),
                                   body='')

        start_time = time.time()
        while self.response is None:
            if (start_time + timeout) < time.time():
                raise ResponseTimeout()
            self.connection.process_data_events()
        return self.response
As you see, the code is pretty much the same as the original FibonacciRpcClient. The main differences are:
We use JSON as data format for our messages.
Our client call() method doesn't require a body argument (there's nothing to send to the server)
We take care of response timeouts (if the server is down, or if it doesn't reply to our messages)
Still, there are a lot of things to improve here:
No error handling: for example, if the client "forgets" to send a reply_to queue, our server is going to crash, and will crash again on restart (the broken message will be requeued indefinitely as long as it isn't acknowledged by our server)
We don't handle broken connections (no reconnection mechanism)
...
You may also consider replacing the RPC approach with a publish/subscribe pattern; in this way, the server simply broadcasts its CPU/memory state every X time interval, and one or more clients receive the updates.
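A rough sketch of that publish/subscribe variant with Pika (the 'stats' fanout exchange and the 5-second interval are arbitrary choices; current_cpu_usage()/current_memory_usage() are the same placeholders as above):

# server side: broadcast the current CPU/memory readings periodically
import json
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='stats', exchange_type='fanout')

while True:
    body = json.dumps({'cpu': current_cpu_usage(), 'memory': current_memory_usage()})
    channel.basic_publish(exchange='stats', routing_key='', body=body)
    time.sleep(5)

# client side: bind an exclusive queue to the exchange and consume updates
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='stats', exchange_type='fanout')
queue_name = channel.queue_declare(queue='', exclusive=True).method.queue
channel.queue_bind(exchange='stats', queue=queue_name)
channel.basic_consume(queue=queue_name,
                      on_message_callback=lambda ch, method, props, body: print(body),
                      auto_ack=True)
channel.start_consuming()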
I'm writing a Python app which is currently hosted on Heroku. It is in an early development stage, so I'm using the free account with one web dyno. Still, I want my heavier tasks to be done asynchronously, so I'm using the IronWorker add-on. I have it all set up and it handles the simplest jobs, like sending emails or anything that doesn't require any data being sent back to the application. The question is: how do I send the worker output back to my application from IronWorker? Or even better, how do I notify my app that the worker is done with the job?
I looked at other Iron solutions like cache and message queue, but the only thing I can find is that I can explicitly ask for the worker state. Obviously I don't want my web service to poll the worker, because it kind of defeats the original purpose of moving the tasks to the background. What am I missing here?
I see this question is high in Google so in case you came here with hopes to find some more details, here is what I ended up doing:
First, I prepared the endpoint on my app. My app uses Flask, so this is how the code looks:
#app.route("/worker", methods=["GET", "POST"])
def worker():
#refresh the interface or whatever is necessary
if flask.request.method == 'POST':
return 'Worker endpoint reached'
elif flask.request.method == 'GET':
worker = IronWorker()
task = worker.queue(code_name="hello", payload={"WORKER_DB_URL": app.config['WORKER_DB_URL'],
"WORKER_CALLBACK_URL": app.config['WORKER_CALLBACK_URL']})
details = worker.task(task)
flask.flash("Work queued, response: ", details.status)
return flask.redirect('/')
Note that in my case, GET is here only for testing; I don't want my users to hit this endpoint and invoke the task. But I can imagine situations when this is actually useful, specifically if you don't use any type of scheduler for your tasks.
With the endpoint ready, I started to look for a way of visiting that endpoint from the worker. I found this fantastic requests library and used it in my worker:
import sys, json
from sqlalchemy import *
import requests

print "hello_worker initialized, connecting to database..."

payload = None
payload_file = None
for i in range(len(sys.argv)):
    if sys.argv[i] == "-payload" and (i + 1) < len(sys.argv):
        payload_file = sys.argv[i + 1]
        break

f = open(payload_file, "r")
contents = f.read()
f.close()

payload = json.loads(contents)
print "contents: ", contents
print "payload as json: ", payload

db_url = payload['WORKER_DB_URL']
print "connecting to database ", db_url
db = create_engine(db_url)
metadata = MetaData(db)
print "connection to the database established"

users = Table('users', metadata, autoload=True)
s = users.select()

#def run(stmt):
#    rs = stmt.execute()
#    for row in rs:
#        print row

#run(s)

callback_url = payload['WORKER_CALLBACK_URL']
print "task finished, sending post to ", callback_url
r = requests.post(callback_url)
print r.text
So in the end there is no real magic here; the only important thing is to send the callback URL in the payload if you need to notify your page when the task is done. Alternatively, you can place the endpoint URL in the database if you use one in your app. Btw. the snippet above also shows how to connect to a PostgreSQL database from your worker and print all the users.
One last thing you need to be aware of is how to format your .worker file, mine looks like this:
# set the runtime language. Python workers use "python"
runtime "python"
# exec is the file that will be executed:
exec "hello_worker.py"
# dependencies
pip "SQLAlchemy"
pip "requests"
This will install the latest versions of SQLAlchemy and requests; if your project depends on a specific version of a library, you should do this instead:
pip "SQLAlchemy", "0.9.1"
Easiest way: push a message to your API from the worker - a log entry or anything else you need to have in your app.