I'm working on my first Flask app (version 0.10.1), which is also my first Python (version 3.5) app. One of its pieces needs to work like this:
Submit a form
Run a Celery task (which makes some third-party API calls)
When the Celery task's API calls complete, send a JSON post to another URL in the app
Get that JSON data and update a database record with it
Here's the relevant part of the Celery task:
if not response['errors']:  # response comes from the Salesforce API call
    # do something to notify that the task was finished successfully
    message = {'flask_id': flask_id, 'sf_id': response['id']}
    message = json.dumps(message)
    print('call endpoint now and update it')
    res = requests.post('http://0.0.0.0:5000/transaction_result/', json=message)
And here's the endpoint it calls:
@app.route('/transaction_result/', methods=['POST'])
def transaction_result():
    result = jsonify(request.get_json(force=True))
    print(result.flask_id)
    return result.flask_id
So far I'm just trying to get the data and print the ID, and I'll worry about the database after that.
The error I get, though, is this: requests.exceptions.ConnectionError: None: Max retries exceeded with url: /transaction_result/ (Caused by None)
My reading indicates that my data might not be coming over as JSON, hence the force=True on the result, but even that doesn't seem to work. I've also tried making the same request from CocoaRestClient, with a Content-Type header of application/json, and I get the same result.
Because both of these attempts break, I can't tell if my issue is in the request or in the attempt to parse the response.
First of all, request.get_json(force=True) returns a parsed object (or None if silent=True), while jsonify wraps an object into a JSON response. Neither gives you attribute access, so result.flask_id can't work. However, even after removing the redundant jsonify call, you'll have to change result.flask_id to result['flask_id'].
So, eventually the code should look like this:
@app.route('/transaction_result/', methods=['POST'])
def transaction_result():
    result = request.get_json()
    return result['flask_id']
And you are absolutely right to use a REST client to test the route: it greatly simplifies testing by reducing the number of moving parts. One well-known pitfall when sending requests from a Flask app to itself is running the app under the development server with only one thread. In that case the internal request is always blocked, because the single thread is busy serving the outermost request and cannot handle the nested one. However, since you are sending the request from a Celery task, that is probably not your scenario.
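If you ever do need the app to call back into itself under the development server, the usual workaround is to run it with more than one thread. A minimal sketch, assuming the standard app.run() entry point:

if __name__ == '__main__':
    # threaded=True lets a second thread serve the internal request
    # while the first one is still busy with the outer request.
    app.run(host='0.0.0.0', port=5000, threaded=True)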
UPD: In the end, the actual culprit was the IP address 0.0.0.0. Changing it to a real, routable address solved the problem.
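For illustration, the corrected call might look like this (127.0.0.1 is the usual loopback address for a local dev server; adjust to your real host):

# 0.0.0.0 is a "listen on all interfaces" bind address, not a destination.
# Also note: requests' json= parameter serialises the dict itself, so there
# is no need to json.dumps() it first (doing so double-encodes the body).
res = requests.post('http://127.0.0.1:5000/transaction_result/',
                    json={'flask_id': flask_id, 'sf_id': response['id']})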
I have encountered an issue: I have to create a cookie in the backend, which the frontend will later send along with its requests. Both apps are on the same domain. This is the general idea behind it: https://levelup.gitconnected.com/secure-frontend-authorization-67ae11953723.
Frontend - Sending GET request to Backend
@app.get('/')
async def homepage(request: Request, response_class=HTMLResponse):
    keycloak_code = 'sksdkssdk'
    data = {'code': keycloak_code}
    url_post = 'http://127.0.0.1:8002/keycloak_code'
    post_token = requests.get(url=url_post, json=data)
    return 'Sent'

if __name__ == '__main__':
    uvicorn.run(app, host='local.me.me', port=7999, debug=True)
Backend
#app.get("/keycloak_code")
def get_tokens(response: Response, data: dict):
code = data['code']
print(code)
....
requests.get(url='http://local.me.me:8002/set')
return True
#app.get("/set")
async def createcookie(response: Response):
r=response.set_cookie(key='tokic3', value='helloworld', httponly=True)
return True
if __name__ == '__main__':
uvicorn.run(app, host='local.me.me', port=8002, log_level="debug")
When I open the browser and access http://local.me.me:8002/set, I can see that the cookie is created.
But when I make a GET request from my frontend to the backend at the same URL, the request is received (I can see it in the terminal), but the backend does not create the cookie. Does anyone know what I might be doing wrong?
I have tried different implementations from the FastAPI docs, but none covers a similar use case.
127.0.0.1 and localhost (or local.me.me in your case) are two different domains (and origins). Hence, when making a request, you need to use the same domain you used for creating the cookie. For example, if the cookie was created for the local.me.me domain, then you should use that domain when sending the request. See related posts here, as well as here and here.
You also seem to have a second FastAPI app (listening on a different port) acting as your frontend (as you say). If that's what you are trying to do, you would need to use Session Objects in Python's requests module, or preferably, a Client instance from the httpx library, in order to persist cookies across requests. The advantage of httpx is that it offers an asynchronous API as well, using httpx.AsyncClient(). You can find more details and examples in this answer, as well as here and here.
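A minimal sketch of that with httpx, reusing one client so cookies persist across requests (URLs taken from the question; the flow is illustrative):

import httpx

# One client instance keeps a cookie jar across requests.
with httpx.Client(base_url='http://local.me.me:8002') as client:
    client.get('/set')                   # response stores the 'tokic3' cookie in the jar
    print(client.cookies.get('tokic3'))  # cookie persisted client-side
    # any further client.get(...) calls now send the cookie automatically

Note that the client keeps talking to the same domain the cookie was set for, which also addresses the origin mismatch described above.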
I am working on a Flutter app for creating schedules for teachers. I created a Django project that generates a schedule based on the data in a POST request from the user. This POST request is sent from the Flutter app. The Django project doesn't use a database or anything; it simply receives the input data, creates the schedule, and returns the output data to the user.
The problem is that the process of creating the schedule only works one time after starting the Django server. So when I want another user to send a request and receive a schedule, I have to restart the server... Maybe the server remembers part of the data from the previous request? I don't know. Is there some kind of way to make it forget everything after a request is done?
When I repeatedly run the scheduler outside of a Django project, it works flawlessly. The scheduler is based on Google's CP-SAT solver (from ortools.sat.python import cp_model). The error I get when running the scheduler a second time inside the Django project is: TypeError: LonelyDuoLearner_is_present_Ss9QV7qFVvXBzTe3R6lmHkMBEWn1_0 is not a boolean variable.
Is there some kind of way to fix this or mimic the effect of restarting the server?
The Django view looks like this:
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from .scheduler.planning import Planning
from .scheduler.planning import print_json
import json

# Converts the data and creates the schedule.
@csrf_exempt
def generate_schedule(request):
    if request.method == 'POST':
        try:
            data = json.loads(request.body)
            planning = Planning()
            planning.from_json(data)
            output_json = planning.output_to_json()
            print_json(output_json)
            response = json.dumps(output_json)
        except Exception as e:
            print(e)
            print("The data provided doesn't have the right structure.")
            response = json.dumps([{'Error': "The data provided doesn't have the right structure."}])
    else:
        response = json.dumps([{'Error': 'Nothing to see here, please leave.'}])
    return HttpResponse(response, content_type='text/json')
There is no beautiful way to restart the server (aside from just killing it by force, which is hardly beautiful).
You're probably using some global state somewhere in the code you're not showing, and it gets screwed up.
You should fix that instead, or if you can't, run the solving in a subprocess (using e.g. subprocess.check_call() or multiprocessing.Process()).
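A minimal sketch of the multiprocessing route, assuming the Planning class and its JSON input/output from the question:

import multiprocessing
from .scheduler.planning import Planning  # the class from the question

def _solve(data, queue):
    # Runs in a fresh interpreter, so any leftover solver state
    # dies with the child process.
    planning = Planning()
    planning.from_json(data)
    queue.put(planning.output_to_json())

def solve_in_subprocess(data):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_solve, args=(data, queue))
    proc.start()
    output = queue.get()  # blocks until the child publishes its result
    proc.join()
    return output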
The CP-SAT solver is stateless; the only persistent/shared object is the Ctrl-C handler, which can be disabled with a sat parameter (catch_sigint, if my memory is correct).
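If that recollection is right, disabling it would look roughly like this (the exact parameter name should be checked against sat_parameters.proto; treat this as an unverified sketch):

from ortools.sat.python import cp_model

solver = cp_model.CpSolver()
# Parameter name as recalled in the answer above; verify before relying on it.
solver.parameters.catch_sigint_signal = False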
I have a couple of different needs for asynchrony in my Python 3.6 Flask RESTful web service running under Gunicorn.
1) I'd like for one of my service's routes to be able to send an HTTP request to another HTTP service and, without waiting for the response, send a response back to the client that called my service.
Some example code:
@route
def fire_and_forget():
    # Send request to other server without waiting
    # for it to send a response.
    # Return my own response.
    pass
2) I'd like for another one of my service's routes to be able to send 2 or more asynchronous HTTP requests to other HTTP services and wait for them all to reply before my service sends a response.
Some example code:
@route
def combine_results():
    # Send request to service A.
    # Send request to service B.
    # Wait for both to return.
    # Do something with both responses.
    # Return my own response.
    pass
Thanks in advance.
EDIT: I am trying to avoid the additional complexity of using a queue (e.g. Celery).
You can use eventlet for the second use case. It's pretty easy to do:
import eventlet

providers = [EventfulPump(), MeetupPump()]
try:
    pool = eventlet.GreenPool()
    pile = eventlet.GreenPile(pool)
    for each in providers:
        pile.spawn(each.get, [], 5, loc)  # call the common interface method
except (PumpFailure, PumpOverride):
    return abort(503)

results = []
for res in pile:
    results += res
You can wrap each of your API endpoints in a class that implements a "common interface" (in the above it is the get method), and then make the calls in parallel; I just place the providers in a list.
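For illustration, such a provider might look like this (the class name, endpoint URL, and signature details are stand-ins for the answerer's own EventfulPump/MeetupPump code, not a real library):

import requests

class ExampleProvider:
    """Stand-in provider: one upstream API per class."""
    def get(self, args, timeout, loc):
        # Same positional signature for every provider, so GreenPile
        # can spawn them all uniformly as pile.spawn(each.get, [], 5, loc).
        resp = requests.get('http://api.example.com/events',
                            params={'near': loc}, timeout=timeout)
        resp.raise_for_status()
        return resp.json()

Keep in mind that for the green threads to actually run concurrently, eventlet must monkey-patch the socket layer (eventlet.monkey_patch()) before the HTTP library is imported.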
Your other use case is harder to accomplish in plain Python. At least a few years ago, you would have been forced to introduce some sort of worker process like Celery to get something like that done. This question seems to cover all the issues:
Making an asynchronous task in Flask
Perhaps things have changed in flask land?
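For what it's worth, the first (fire-and-forget) case can often be covered without a queue by handing the outbound call to a daemon thread. A minimal sketch (the URL and payload are placeholders, and a recycled worker can kill an in-flight thread, which is the trade-off against a real queue):

import threading
import requests
from flask import Flask

app = Flask(__name__)

def _notify(url, payload):
    try:
        requests.post(url, json=payload, timeout=10)
    except requests.RequestException:
        pass  # fire-and-forget: failures are deliberately ignored

@app.route('/fire-and-forget', methods=['POST'])
def fire_and_forget():
    # Respond immediately; the daemon thread outlives this request.
    threading.Thread(target=_notify,
                     args=('http://other-service.example/endpoint', {'ping': 1}),
                     daemon=True).start()
    return 'accepted', 202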
I've struggled for two days to understand how a REST API gateway should respond to GET requests from browsers when the backend service runs on AMQP (without using WebSockets or polling).
I have successfully done RPC between AMQP services (with RabbitMQ's reply_to & correlation_id), but with a Flask HTTP request waiting on the result I'm still lost.
gateway.py - Response Handler Inside The HTTP Handler, Times out
def products_get():
    def handler(ch=None, method=None, properties=None, body=None):
        if body:
            return body
        return False

    return_queue = 'products.get.return'
    broker.channel.queue_declare(return_queue)
    broker.channel.basic_consume(handler, return_queue)
    broker.publish(exchange='', routing_key='products.get', body='Request data',
                   properties=pika.BasicProperties(reply_to=return_queue))

    now = time.time()  # for the timeout; without this it returns 'no content' immediately
    while time.time() < now + 1:
        if handler():
            return handler()
    return 'Time out'
POST/PUT can simply send the AMQP message, return 200/201/202 immediately, and let the service work at its own pace. A separate REST interface just for GET requests seems implausible, but I don't know the other options.
Regards
I think what you're asking is "how do I perform asynchronous GET requests?", and I reckon that the answer is: you can't, and you shouldn't. It's bad practice or bad design, and it does not scale.
Why are you trying to get your GET response payload from AMQP?
If the payload (the content of the response) can be pulled from some DB, just pull it from there; that's called a synchronous request.
If the payload must be processed in some backend, send it away and don't have the requester wait for a response. You could assign some ID and have the requester ask again later (or collect a callback URL from the requester and have your backend POST the response once it's ready, which is a less common design). A sketch of the first shape follows below.
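Sketched in Flask, the "assign an ID and ask again later" shape might look like this (the in-memory dict and route names are illustrative; use a real store in production):

import uuid
from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # illustrative only; swap for a real store

@app.route('/products', methods=['POST'])
def start_job():
    job_id = str(uuid.uuid4())
    # ... publish the AMQP message here, tagged with job_id ...
    return jsonify({'id': job_id}), 202  # accepted; result comes later

@app.route('/products/<job_id>', methods=['GET'])
def poll_job(job_id):
    if job_id not in results:
        return '', 404  # not ready yet; the requester retries later
    return jsonify(results[job_id])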
EDIT:
So, given that you have to work with an AMQP-backed backend, I would do something a little more elaborate: spawn a thread or a process in your frontend that constantly consumes from AMQP and stores the results locally or in some DB, and serve GET results from the data you stored locally. If the data isn't yet available, just return 404. Ideally you'll want to reshape your API: split it into "post" requests (that trigger work at the backend) and "get" requests (that return the results if they're available).
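A sketch of that consumer thread, filling the in-memory store that a GET route like the one sketched above would read from (queue name taken from the question; pika 0.x basic_consume signature to match the gateway code; assumes the backend tags each reply with the job id):

import json
import threading
import pika

results = {}  # shared with the GET route; illustrative only

def consume_results():
    # Long-lived consumer that copies AMQP replies into local storage.
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = conn.channel()
    channel.queue_declare(queue='products.get.return')

    def on_message(ch, method, properties, body):
        payload = json.loads(body)
        results[payload['id']] = payload  # assumes replies carry the job id

    channel.basic_consume(on_message, 'products.get.return')
    channel.start_consuming()

threading.Thread(target=consume_results, daemon=True).start()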
I'm trying to use Twilio with Google App Engine, and I'm currently trying to validate incoming Twilio requests for SMS messages. I have a custom handler with the two methods below on it.
from twilio.util import RequestValidator

class TwilioRequestHandler(BaseRequestHandler):
    def twilio_request_validator(self):
        return RequestValidator(AUTH_TOKEN)

    def validate_request(self):
        if 'X-Twilio-Signature' not in self.request.headers:
            logging.error("X-Twilio-Signature was not in the request headers")
            return False
        return self.twilio_request_validator().validate(self.request.url,
                                                        self.request.POST,
                                                        self.request.headers['X-Twilio-Signature'])
When a request comes in on one of my TwiML endpoints, I call self.validate_request() from my request handler. This always seems to return false. As you can see from my code above, this should be the equivalent of calling Twilio's RequestValidator(AuthToken).validate(self.request.url, self.request.POST, self.request.headers['X-Twilio-Signature'])
I figured it's possible that some of the request arguments I received aren't supposed to be included when computing the signature, so I even went so far as to take the arguments for one request, write a simple script that checked every possible combination, and compare each against the signature for that request. None of them matched, so I have to wonder what I'm doing wrong, or whether this is possibly something on the Twilio side.
Have you checked the protocol on the request URL against the Twilio endpoint?
Heroku, for example, apparently proxies HTTPS traffic to HTTP if the app is configured for HTTP only. The framework's request.url will then still begin with http:// even though Twilio is pointing at a URL that begins with https://. This will throw off the hash.
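One way to compensate, sketched against the handler from the question (X-Forwarded-Proto is the de-facto header set by most TLS-terminating proxies, including Heroku's router; verify it for your platform):

def validate_request(self):
    if 'X-Twilio-Signature' not in self.request.headers:
        logging.error("X-Twilio-Signature was not in the request headers")
        return False
    url = self.request.url
    # Behind a TLS-terminating proxy the app sees http:// even though
    # Twilio signed an https:// URL; restore the scheme before validating.
    if (self.request.headers.get('X-Forwarded-Proto') == 'https'
            and url.startswith('http://')):
        url = 'https://' + url[len('http://'):]
    return self.twilio_request_validator().validate(
        url, self.request.POST, self.request.headers['X-Twilio-Signature'])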