How can I make a view wait for a response that arrives at another endpoint?
I am on the main page (index) and submit a form. A POST request is sent to another server. At that moment:
the other server processes the data and, depending on whether it is valid, makes a POST request (True or False) to my URL /answer;
I am redirected, for example, to another page.
How do I write the logic of that other page (page_2) so that Django waits for the POST request from the other server to /answer and, depending on whether it carries True or False, shows "everything OK" or "everything Bad" on that page?
urls.py
urlpatterns = [
    path('index/', index, name='index'),
    path('page_2/', page_2, name='page_2'),
    path('answer/', answer, name='answer'),
]
-------------------------------------------------
views.py
def index(request):
    requests.post('https://example.com', data='My data')
    return redirect('page_2')

def page_2(request):
    # wait for the request to /answer
    if request.session['answer'] is True:
        return 'Ok'
    return 'Bad'

def answer(request):
    data = request.data
    # send to page_2 or save in request.session['answer']
    return Response(status=200)
I reckon this is a strange situation, and it would be better to redesign the logic of your code so that the view functions handle each request as quickly as possible instead of busy-waiting for external events, since that increases response time.
However, to achieve this we need a communication channel between the index and answer views, something like this:
index: Hey answer! I've sent the request. I'm going to sleep, wake me up if you got its result.
answer: Oh I got it man. Here you are. Wake up!
index: Thanks. Now I process it and return my response.
This channel could be almost anything: a model in the database, some entries in Redis, some files on the filesystem, etc.
One possible solution using a model might be:
Create a model (call it ExampleRequest) with a boolean field named received.
In the index view, create an ExampleRequest instance with received = False before sending the request.
In the answer view, find the previously created ExampleRequest and set its received field to True.
In the index view, after sending the request, query the database in a while loop and check whether the created ExampleRequest instance has received = True. If it does, the external server has called answer, so break out of the loop and do the rest of the work; otherwise, time.sleep(1) and continue the loop. A rough sketch of this approach follows after the notes below.
Just note:
When multiple clients are using your website, several of them might hit the index view, so there will be more than one ExampleRequest instance. In the answer view you have to be able to work out which of those instances the incoming request belongs to, so you will probably need to store some unique data for each request in the ExampleRequest model.
Consider the situation where the other server never calls the answer view at all, so put an upper bound on the number of iterations of the index view's while loop.
Also, you may delete ExampleRequest instances after consuming them in the index view to keep your database small.
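Here is a rough sketch of that approach, assuming a unique token field to match the callback with the right request; the model, field, and payload names are illustrative assumptions, not your original code:
# models.py (illustrative)
import uuid
from django.db import models

class ExampleRequest(models.Model):
    token = models.UUIDField(default=uuid.uuid4, unique=True)  # matches the callback to the request
    received = models.BooleanField(default=False)
    answer = models.BooleanField(null=True)  # True/False reported by the other server

# views.py (illustrative)
import time
import requests
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from .models import ExampleRequest

def index(request):
    req = ExampleRequest.objects.create()
    # pass the token so the other server can echo it back to /answer
    requests.post('https://example.com', data={'token': str(req.token), 'payload': 'My data'})
    for _ in range(30):  # upper bound so we never wait forever
        req.refresh_from_db()
        if req.received:
            break
        time.sleep(1)
    result = 'Ok' if req.answer else 'Bad'
    req.delete()  # clean up so the table stays small
    return HttpResponse(result)

@csrf_exempt
def answer(request):
    req = ExampleRequest.objects.get(token=request.POST['token'])
    req.received = True
    req.answer = request.POST.get('result') == 'True'
    req.save()
    return HttpResponse(status=200)
Note that this still blocks a worker for up to 30 seconds per request, which is exactly the cost warned about above.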
I say it again: it's better if you can do the polling in the frontend instead of the backend, to avoid long response times and other synchronization issues.
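If you go that route, the backend only needs a small status endpoint for the frontend to poll every few seconds; a minimal sketch, assuming the same hypothetical ExampleRequest model as in the sketch above:
# views.py (illustrative) - the frontend polls this via JavaScript until received is true
from django.http import JsonResponse
from .models import ExampleRequest

def answer_status(request, token):
    try:
        req = ExampleRequest.objects.get(token=token)
    except ExampleRequest.DoesNotExist:
        return JsonResponse({'received': False, 'answer': None}, status=404)
    return JsonResponse({'received': req.received, 'answer': req.answer})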
This might not be the complete answer, but it gives you a way forward.
def index(request):
    requests.post('https://example.com', data='My data')
    return redirect('page_2')
Change it to the following:
import httpx

async def index(request):
    async with httpx.AsyncClient() as client:
        response = await client.post('https://example.com', data='My data')
        print(response.json())
    return redirect('page_2')
Related
I have a Flask app that generates video stream links. It connects to a server using login credentials and grabs a one time use link (that expires when a new link is generated using the same credentials). Using a list of credentials I am able to stream to as many devices as I like, so long as I have enough accounts.
The issue I am having is that one of the clients doesn't like the way the stream is returned.
#app.route("/play", methods=["GET"])
def play():
def streamData():
try:
useAccount(<credentials>)
with requests.get(link, stream=True) as r:
for chunk in r.iter_content(chunk_size=1024):
yield chunk
except:
pass
finally:
freeAccount(<credentials>)
...
# return redirect(link)
return Response(streamData())
If I return a redirect then there are no playback issues at all. The problem with a redirect is I don't have a way of marking the credentials as in use, then freeing them after.
The problem client is TVHeadend. I am able to get it to work by enabling the additional avlib inside of TVHeadend... But I shouldn't have to do that. I don't have to when I return a redirect.
What could be the cause of this?
Is it possible to make my app respond in the same way as the links server does?
My guess is that TVHeadend is very strict about whether the stream complies with whatever standards it expects, and I am guessing my app doesn't?
I am working on a Flutter app for creating schedules for teachers. I created a Django project to generate a schedule based on the data in the POST request from the user. This POST request is sent from the Flutter app. The Django project doesn't use a database or anything; it simply receives the input data, creates the schedule, and returns the output data back to the user.
The problem is that the process of creating the schedule only works once after starting the Django server. So when I want another user to send a request and receive a schedule I have to restart the server... Maybe the server remembers part of the data from the previous request? I don't know. Is there some kind of way to make it forget everything after a request is done?
When I repeatedly run the scheduler outside of a Django project it works flawlessly. The scheduler is based on the Google cp_model SAT solver (from ortools.sat.python import cp_model). The error I get when running the scheduler the second time in a Django project is 'TypeError: LonelyDuoLearner_is_present_Ss9QV7qFVvXBzTe3R6lmHkMBEWn1_0 is not a boolean variable'.
Is there some kind of way to fix this or mimic the effect of restarting the server?
The django view looks like this:
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from .scheduler.planning import Planning
from .scheduler.planning import print_json
import json
# Converts the data and creates the schedule.
@csrf_exempt
def generate_schedule(request):
    if request.method == 'POST':
        try:
            data = json.loads(request.body)
            planning = Planning()
            planning.from_json(data)
            output_json = planning.output_to_json()
            print_json(output_json)
            response = json.dumps(output_json)
        except Exception as e:
            print(e)
            print("The data provided doesn't have the right structure.")
            response = json.dumps([{'Error': "The data provided doesn't have the right structure."}])
    else:
        response = json.dumps([{'Error': 'Nothing to see here, please leave.'}])
    return HttpResponse(response, content_type='text/json')
There is no beautiful way to restart the server (aside from just killing it by force, which is hardly beautiful).
You're probably using some global state somewhere in the code you're not showing, and it gets screwed up.
You should fix that instead, or if you can't do so, run the solving in a subprocess (using e.g. subprocess.check_call(), or multiprocessing.Process()).
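A minimal sketch of the process-isolation idea, reusing the Planning class from the question; run_planning and solve_in_subprocess are hypothetical helpers, and the result is assumed to be picklable (plain JSON-like data):
# Sketch: run the solver in a child process so any leftover solver state dies with it.
import multiprocessing
from .scheduler.planning import Planning

def run_planning(data, queue):
    planning = Planning()
    planning.from_json(data)
    queue.put(planning.output_to_json())

def solve_in_subprocess(data, timeout=60):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=run_planning, args=(data, queue))
    proc.start()
    try:
        result = queue.get(timeout=timeout)  # wait for the child to produce output
    finally:
        proc.join(timeout=5)
        if proc.is_alive():
            proc.terminate()
    return result
The view would then call solve_in_subprocess(data) instead of building Planning directly.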
The CP-SAT solver is stateless. The only persistent/shared object is the Ctrl-C handler, which can be disabled with a sat parameter (catch_sigint, if my memory is correct).
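For reference, a sketch of turning that handler off before solving; the exact field name varies by OR-Tools version (catch_sigint_signal is an assumption, check your version's sat_parameters):
# Sketch: disable the Ctrl-C handler on the CP-SAT solver.
# The parameter name is an assumption; verify it against your OR-Tools version.
from ortools.sat.python import cp_model

solver = cp_model.CpSolver()
solver.parameters.catch_sigint_signal = False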
I've struggled for two days to understand how a REST API gateway should respond to GET requests from browsers when the backend service runs on AMQP (without using WebSockets or polling).
I have successfully done RPC between AMQP services (with RabbitMQ's reply_to & correlation_id), but when it comes to making a Flask HTTP request wait for the reply I'm still lost.
gateway.py - Response Handler Inside The HTTP Handler, Times out
def products_get():
    def handler(ch=None, method=None, properties=None, body=None):
        if body:
            return body
        return False

    return_queue = 'products.get.return'
    broker.channel.queue_declare(return_queue)
    broker.channel.basic_consume(handler, return_queue)
    broker.publish(exchange='', routing_key='products.get', body='Request data',
                   properties=pika.BasicProperties(reply_to=return_queue))

    now = time.time()  # for timeout. Not having this returns 'no content' immediately
    while time.time() < now + 1:
        if handler():
            return handler()
    return 'Time out'
POST/PUT can simply send the AMQP message, return 200/201/202 immediately, and let the service work at its own pace. A separate REST interface just for GET requests seems implausible, but I don't know the other options.
Regards
I think what you're asking is "how do I perform asynchronous GET requests?", and I reckon the answer is: you can't, and you should not. It's bad practice or bad design, and it does not scale.
Why are you trying to get your GET response payload from AMQP?
If the payload (the content of the response) can be pulled from some DB, just pull it from there; that's a synchronous request.
If the payload must be processed by some backend, send it away and don't have the requester wait for a response. You could assign some ID and have the requester ask again later (or collect a callback URL from the requester and have your backend POST the response once it's ready; a less common design).
EDIT:
So, given that you have to work with an AMQP-backed backend, I would do something a little more elaborate: spawn a thread or a process in your frontend that constantly consumes from AMQP and stores the results locally or in some DB, and serve GET results from the data you stored locally. If the data isn't available yet, just return 404. Ideally you'll want to reshape your API: split it into "post" requests (which trigger work at the backend) and "get" requests (which return the results if they're available).
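A minimal sketch of that idea with Flask and pika, assuming a results queue whose messages carry a correlation_id; the queue name, in-memory store, and message format are illustrative assumptions:
# Sketch: a background thread consumes results from AMQP into an in-memory store;
# GET serves from the store and returns 404 until the result has arrived.
import json
import threading

import pika
from flask import Flask, abort, jsonify

app = Flask(__name__)
results = {}  # correlation_id -> payload (use Redis or a DB in production)

def consume_results():
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='products.get.return')

    def on_message(ch, method, properties, body):
        # assume the backend echoes back the correlation_id it was given
        results[properties.correlation_id] = json.loads(body)

    channel.basic_consume(queue='products.get.return',
                          on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()

threading.Thread(target=consume_results, daemon=True).start()

@app.route('/products/<request_id>', methods=['GET'])
def get_products(request_id):
    if request_id not in results:
        abort(404)  # not ready yet; the client asks again later
    return jsonify(results[request_id])
In a real deployment the results would go into Redis or a database so that every web worker can see them.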
I'm working on my first Flask app (version 0.10.1), and also my first Python (version 3.5) app. One of its pieces needs to work like this:
Submit a form
Run a Celery task (which makes some third-party API calls)
When the Celery task's API calls complete, send a JSON post to another URL in the app
Get that JSON data and update a database record with it
Here's the relevant part of the Celery task:
if not response['errors']:  # response comes from the Salesforce API call
    # do something to notify that the task was finished successfully
    message = {'flask_id': flask_id, 'sf_id': response['id']}
    message = json.dumps(message)
    print('call endpoint now and update it')
    res = requests.post('http://0.0.0.0:5000/transaction_result/', json=message)
And here's the endpoint it calls:
@app.route('/transaction_result/', methods=['POST'])
def transaction_result():
    result = jsonify(request.get_json(force=True))
    print(result.flask_id)
    return result.flask_id
So far I'm just trying to get the data and print the ID, and I'll worry about the database after that.
The error I get though is this: requests.exceptions.ConnectionError: None: Max retries exceeded with url: /transaction_result/ (Caused by None)
My reading indicates that my data might not be coming over as JSON, hence the force=True on the get_json call, but even this doesn't seem to work. I've also tried making the same request in CocoaRestClient, with a Content-Type header of application/json, and I get the same result.
Because both of these attempts break, I can't tell if my issue is in the request or in the attempt to parse the response.
First of all, request.get_json(force=True) returns a Python object (or None if silent=True), while jsonify wraps an object into a JSON response, so trying to access flask_id as an attribute on the result cannot work. However, even after removing the redundant jsonify call, you'll have to change result.flask_id to result['flask_id'].
So, eventually the code should look like this:
@app.route('/transaction_result/', methods=['POST'])
def transaction_result():
    result = request.get_json()
    return result['flask_id']
And you are absolutely right to use a REST client to test the route; it greatly simplifies testing by reducing the number of moving parts. One well-known problem when sending requests from a Flask app to the same app is running that app under the development server with only one thread. In that case the internal request is always blocked, because the only thread is busy serving the outermost request and cannot handle the internal one. However, since you are sending the request from a Celery task, that is probably not your scenario.
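If you ever do hit that single-thread case, the development server can handle requests in threads; a minimal sketch, assuming you control how the dev server is started:
# Sketch: run the Flask development server with threaded request handling,
# so an internal request is not deadlocked behind the outer one.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, threaded=True)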
UPD: In the end, the remaining cause was the IP address 0.0.0.0; changing it to the real one solved the problem.
Goal: Using App Engine's basic webapp framework, I want to create a new request, with POST data, to send to another RequestHandler. Something like pageGenerator.post({'message':'the message','datum1':datum1,...})...
Problem Description: One request handler, call it pageGenerator, creates a page with a form on it. When the user submits the form, the post goes to a different handler: dataProcessor. If dataProcessor finds some problem with the submitted data, it should send the submitted data plus an error message to pageGenerator's post method, and pageGenerator would serve up the page with the error message.
How do I pass data (and control) back and forth like this? I would like pageGenerator to be able to get the data with self.request.get('message').
Sounds like you're over-complicating things. Consider just having a common method to show the form that can be invoked in different circumstances:
class FormHandler(webapp.RequestHandler):
    def get(self):
        self.show_form()

    def post(self):
        if form_is_valid():
            handle_success()
        else:
            self.show_form({'feedback': 'Validation failed'})

    def show_form(self, vals={}):
        vals['field1'] = self.request.get('field1')
        vals['field2'] = self.request.get('field2')
        html = template.render('form.html', vals)
        self.response.out.write(html)
If you really need "display form" and "process form" to be in different handler classes, you can accomplish the same thing by defining show_form() in a common parent class.
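A minimal sketch of that variant, reusing the question's pageGenerator/dataProcessor naming and the same hypothetical form_is_valid/handle_success helpers and template module as above:
# Sketch: shared form rendering lives in a base handler; separate handlers
# display the form and process the submission.
class BaseFormHandler(webapp.RequestHandler):
    def show_form(self, vals=None):
        vals = vals or {}
        vals['field1'] = self.request.get('field1')
        vals['field2'] = self.request.get('field2')
        self.response.out.write(template.render('form.html', vals))

class PageGenerator(BaseFormHandler):
    def get(self):
        self.show_form()

class DataProcessor(BaseFormHandler):
    def post(self):
        if form_is_valid():
            handle_success()
        else:
            self.show_form({'feedback': 'Validation failed'})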