I'm developing a react web app which communicates with a DRF backend via axios. While developing locally, I handled CORS by installing django-cors-headers and adding localhost:3000 to CORS_ORIGIN_WHITELIST (3000 is the default port for react w/ create-react-app):
CORS_ORIGIN_WHITELIST = (
'localhost:3000',
)
This worked fine until I deployed to a remote server, when I suddenly started seeing CORS errors again:
Access to XMLHttpRequest at 'http://localhost:8000/api/path/' from
origin 'http://example.com:3000' has been blocked by CORS policy...
which was baffling to me, since it already worked when I was developing locally.
This had me stumped for hours until sheer frustration led me to change the react request from
axios.post('http://localhost:8000/api/path/', {
  key1: val1,
  key2: val2,
  ...
})
.then(response => {
  doSomeStuff(response);
});
to
axios.post('http://example.com:8000/api/path/', {
  key1: val1,
  key2: val2,
  ...
})
.then(response => {
  doSomeStuff(response);
});
and the whitelist from
CORS_ORIGIN_WHITELIST = (
'localhost:3000',
)
to
CORS_ORIGIN_WHITELIST = (
'example.com:3000',
)
At which point the CORS errors stopped.
My question is: why did this happen? My understanding was that localhost and example.com were two names for the same server, but every other combination of whitelisting localhost/example.com and requesting localhost/example.com results in an error. What is the difference from a CORS perspective?
localhost and example.com are not two names for the same server. localhost is resolved by the browser of whoever loads the page: it points to 127.0.0.1, i.e. that visitor's own machine, not your server. example.com resolves to your remote server.
CORS compares exact origins, so allowing localhost allows requests from pages served from localhost, and allowing example.com allows requests from pages served from example.com. They are different origins; this is not a bug.
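As a side note, one way to avoid hard-coding the origin in two places is to read it from an environment variable. Below is only a minimal sketch following the CORS_ORIGIN_WHITELIST setting used above; FRONTEND_ORIGIN is an assumed variable name, and newer django-cors-headers releases rename the setting to CORS_ALLOWED_ORIGINS and require the scheme:
import os

# settings.py: read the allowed frontend origin from the environment so local
# and deployed configurations stay in sync (FRONTEND_ORIGIN is an assumption).
CORS_ORIGIN_WHITELIST = (
    os.environ.get('FRONTEND_ORIGIN', 'localhost:3000'),
)
Locally you leave FRONTEND_ORIGIN unset; on the server you export FRONTEND_ORIGIN=example.com:3000, and the React build points its requests at the matching backend host.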
I don't want to use the getUpdates method to retrieve updates from Telegram; I want to use a webhook instead.
Error from getWebhookInfo is:
has_custom_certificate: false,
pending_update_count: 20,
last_error_date: 1591888018,
last_error_message: "SSL error {error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed}"
My code is:
from flask import Flask
from flask import request
from flask import Response

app = Flask(__name__)

@app.route('/', methods=['POST', 'GET'])
def bot():
    if request.method == 'POST':
        return Response('Ok', status=200)
    else:
        return f'--- GET request ----'

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8443, debug=True, ssl_context=('./contract.crt', '.private.key'))
When I hit https://www.mydomain.ext:8443/ I can see GET requests coming in, but no POST requests arrive when I write something in my telegram-bot chat.
Also, this is how I set the webhook for Telegram:
https://api.telegram.org/botNUMBER:TELEGRAM_KEY/setWebhook?url=https://www.mydomain.ext:8443
result:
{
ok: true,
result: true,
description: "Webhook was set"
}
Any suggestions, or is there something I've done wrong?
https://core.telegram.org/bots/api#setwebhook
I'm wondering if the problem is caused by my use of 0.0.0.0; the reason I use it is that if I use 127.0.0.1, the URL www.mydomain.ext cannot be reached.
Update
import requests

ca_certificate = {'certificate': open('./folder/ca.ca-bundle', 'rb')}
r = requests.post(url, files=ca_certificate)  # url is the setWebhook endpoint shown above
print(r.text)
that print gives me:
{
"ok": false,
"error_code": 400,
"description": "Bad Request: bad webhook: Failed to set custom certificate file"
}
I deployed a Telegram chatbot without Flask a while ago.
I remember that the POST and GET requests required /getUpdates and /sendMessage added to the bot url. Maybe it will help.
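For illustration, a minimal sketch of that URL pattern in Python (the token matches the placeholder from the question and the chat_id is made up):
import requests

TOKEN = "NUMBER:TELEGRAM_KEY"  # placeholder token, as written in the question

# The Bot API method name is simply appended to the bot URL.
print(requests.get(f"https://api.telegram.org/bot{TOKEN}/getUpdates").json())

# Sending a message works the same way (chat_id is a made-up example value).
resp = requests.post(
    f"https://api.telegram.org/bot{TOKEN}/sendMessage",
    data={"chat_id": 123456789, "text": "hello"},
)
print(resp.json())
Note that Telegram refuses getUpdates while a webhook is active, so the webhook would have to be removed before trying that call.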
Telegram bots only work with a full certificate chain. And the error in your getWebhookInfo:
"last_error_message": "SSL error {337047686, error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed}"
is Telegram saying that it needs the whole certificate chain (also called a CA bundle or full chained certificate).
If you validate your certificate using SSL Labs you will see that your domain has chain issues:
https://www.ssllabs.com/ssltest/analyze.html?d=www.vallotta-party-bot.com&hideResults=on
To solve this you need to supply the CA certificate as well; find the CA bundle file provided by your certificate authority.
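If you keep terminating TLS in Flask itself, as in the question, one possible approach is to concatenate the domain certificate and the CA bundle into a single full-chain file and pass that to ssl_context. This is only a sketch; the input file names are taken from the paths in the question and may differ on your server:
# Build a full-chain certificate: server certificate first, then the CA bundle.
with open('fullchain.crt', 'wb') as out:
    for part in ('./contract.crt', './folder/ca.ca-bundle'):
        with open(part, 'rb') as src:
            out.write(src.read())
            out.write(b'\n')

# Then start Flask with the full chain instead of the bare certificate, e.g.
# app.run(host='0.0.0.0', port=8443, ssl_context=('fullchain.crt', '<your private key>'))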
Also, the best option for production sites is to use gunicorn instead of the Flask development server.
If you are using gunicorn, you can do this with command line arguments:
$ gunicorn --certfile cert.pem --keyfile key.pem --ca-certs cert.ca-bundle -b 0.0.0.0:443 hello:app
Or create a gunicorn.py with the following content:
import multiprocessing
bind = "0.0.0.0:443"
workers = multiprocessing.cpu_count() * 2 + 1
timeout = 120
certfile = "cert/certfile.crt"
keyfile = "cert/service-key.pem"
ca_certs = "cert/cert.ca-bundle"
loglevel = 'info'
and run as follows:
gunicorn --config=gunicorn.py hello:app
If you use Nginx as a reverse proxy, then you can configure the certificate with Nginx, and then Nginx can "terminate" the encrypted connection, meaning that it will accept encrypted connections from the outside, but then use regular unencrypted connections to talk to your Flask backend. This is a very useful setup, as it frees your application from having to deal with certificates and encryption. The configuration items for Nginx are as follows:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    # ...
}
Another important item you need to consider is how clients that connect over regular HTTP are going to be handled. The best solution, in my opinion, is to respond to unencrypted requests with a redirect to the same URL on HTTPS. For a Flask application, you can achieve that using the Flask-SSLify extension (see the sketch after the Nginx block below). With Nginx, you can include another server block in your configuration:
server {
    listen 80;
    server_name example.com;

    location / {
        return 301 https://$host$request_uri;
    }
}
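For the Flask-SSLify route mentioned above, a minimal sketch (assuming the flask-sslify package is installed) looks like this:
from flask import Flask
from flask_sslify import SSLify

app = Flask(__name__)
# Redirect any plain-HTTP request to its HTTPS counterpart from inside the app.
sslify = SSLify(app)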
A good tutorial on how to set up your application with HTTPS can be found here: Running Your Flask Application Over HTTPS
I had a similar case. I was developing a bot on localhost (without SSL yet) and tunneled it to the web through ngrok. In the beginning everything was OK, but at some point I found no POST requests were coming in. It turned out the tunnel had expired. I laughed and restarted the tunnel, but the requests still weren't coming. It turned out I had forgotten to change the webhook address (it changes with every ngrok session). Don't repeat my mistakes.
I have a very minimal API (let's call it api.py) using Flask:
from flask import Flask, request
from flask_restx import Resource, Api
from flask_cors import CORS

app = Flask(__name__)
cors = CORS(app)
api = Api(app)

@api.route('/hello')
class HelloWorld(Resource):
    def get(self):
        return {"hello": "world"}

if __name__ == '__main__':
    app.run(port=5000)
I then run python3 api.py, with no errors.
On another command line, I then query the API:
curl http://localhost:5000/hello
which gives me the right answer: {"hello": "world"}
On its side, the Flask app says: 127.0.0.1 - - [21/May/2020 22:55:38] "GET /hello HTTP/1.1" 200 -
Which seems OK to me.
I then build a JS/Ajax query to call the API from a web page:
$.ajax({
  url: "http://localhost:5000/hello",
  type: "GET",
  contentType: "application/json",
  dataType: "json",
  success: function(data) {
    console.log(data)
  }
})
When I access the web page that fires the Ajax call, I get the following error message:
GET http://localhost:5000/hello net::ERR_CONNECTION_REFUSED
I understand that this is a CORS issue. The problem is that I have tried ALL the tricks from SO and other help forums with no success...
I did try:
from flask_cors import CORS
app.run(host="0.0.0.0", debug=True)
app.run(host="0.0.0.0", debug=False)
@cross_origin()
...
Nothing works, I still have this ERR_CONNECTION_REFUSED.
Thanks for any help on this subject, as I am losing my head over this problem...
Your ajax call shouldn't be pointing to localhost. Replace it with the URL of the EC2 instance.
No need to configure Nginx to listen on your app's port for this task, or to expose this port externally on the remote host.
But yes, for a remote server setup the ajax call shouldn't point to localhost. You need to update it to the URL you see in the browser, so either the machine IP or a DNS name.
For local machine debugging you could try 127.0.0.1 instead of localhost.
The second part of making this work is to deal with CORS (see the sketch after this list):
for the remote server, CORS should be enabled both in the app (as middleware) and for the preflight request (see how to allow CORS for OPTIONS requests);
for the local machine, I would recommend disabling CORS by running the browser in a disabled-security mode (e.g. the disable-web-security flag for Chrome, etc.).
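As a sketch of the "CORS as middleware" point above (assuming flask-cors is installed; the frontend origin is a placeholder), flask-cors also answers preflight OPTIONS requests for you:
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
# Allow only the frontend origin; preflight OPTIONS requests are handled automatically.
CORS(app, origins=["http://my-frontend.example.com"])

@app.route("/hello")
def hello():
    return jsonify({"hello": "world"})

if __name__ == "__main__":
    # Bind to all interfaces so a remote frontend can reach the API.
    app.run(host="0.0.0.0", port=5000)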
I am creating an app with stack React => Flask => MongoDB.
I want to have an easy to use development environment, so I host everything locally.
I work in Ubuntu 16
Running flask app from PyCharm on localhost:5000.
Writing React app with VS Code and running it with console npm start command, hosting it on localhost:3000.
I want to make a GET call from the React app to the Flask web API to retrieve some data from the db into the frontend.
Flask code:
from flask import Flask, jsonify
from flask_cors import CORS, cross_origin
from pymongo import MongoClient

app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'

client = MongoClient('localhost', 27017)
db = client['test-database']
collection = db['test-collection']

@app.route("/orders/")
@cross_origin()
def GetAllOrders():
    all_docs = list(collection.find({}, {'_id': False}))
    print(all_docs)
    return jsonify(all_docs)
React code:
componentDidMount() {
  console.log(4);
  fetch("http://localhost:5000/orders/", { mode: "no-cors" })
    .then(results => {
      console.log(5);
      console.log(results);
      console.log(results.body);
    });
}
So whether I set mode "no-cors" or not the Chrome Developer Tools Network tab shows the GET call as successful and I can see the orders. Meanwhile, in the Console tab
when I send the GET with the mode: "no-cors" option, I get a Response object with bodyUsed: false and body: null, so I cannot display the orders;
when I send the GET without the mode: "no-cors" option, I get this error:
Failed to load http://localhost:5000/orders/: The 'Access-Control-Allow-Origin' header contains multiple values 'http://localhost:3000, *', but only one is allowed. Origin 'http://localhost:3000' is therefore not allowed access. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
And inspecting the Network tab shows only one value for the 'Access-Control-Allow-Origin' header: 'http://localhost:3000'.
What am I missing? How do I get those orders into my React application?
PS. I have CORS Chrome plugin installed and enabled.
Just had to disable the CORS chrome plugin. That plugin was the thing that was duplicating Access-Control-Allow-Origin header values.
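For completeness, if the duplicate value had been coming from the server side rather than a browser extension, pinning flask-cors to a single explicit origin per resource keeps the header to one value. A sketch, reusing the route and origin from the question:
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
# Emit exactly one Access-Control-Allow-Origin value for the /orders/ endpoint
# instead of the wildcard '*'.
CORS(app, resources={r"/orders/*": {"origins": "http://localhost:3000"}})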
I have Kibana (part of elasticsearch stack) running on xx.xxx.xxx.xxx:5601. Since Kibana does not have authentication of its own, I am trying to wrap it under my flask login setup. In other words, if someone tries to visit xx.xxx.xxx.xxx:5601, I need the page to be redirected to my flask login page. I can use the #login_required decorator on the URL to achieve this...but I don't know how to setup the flask route URL to handle the port 5601 since it needs to begin with a leading slash.
@app.route("/")
@login_required
Any suggestions?
EDIT
@senaps: App 1 is Flask, which runs on 0.0.0.0, port 9500. App 2 is the Node.js based Kibana, which I can choose to either run on localhost port 5601 and expose via nginx, or make directly public on IP:5601. Either way, it is running as a "service" on startup and listening on 5601 at all times.
Problem statement - App 2 to be wrapped under App 1 login. I do not want to use nginx for authentication of App 2 but rather the App 1 flask login setup.
I'm currently using gunicorn to serve the Flask app and have an nginx reverse proxy set up to route to it. The guide I followed is from DigitalOcean.
Option 1 - Node.js Kibana application exposed to public on IP:5601.
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }
}
If I visit the IP, it goes to my Flask app, great. What I can't figure out is how to handle a Flask view for someone visiting IP:5601; instead of taking them to Kibana, it should redirect them to my Flask app for authentication.
I tried adding another server block listening on 5601 with a proxy_pass to the Flask sock file, but I get an nginx error saying it cannot bind to 5601 and asking me to kill the process listening on 5601. But I need Kibana running at 5601 at all times (unless I can figure out a way to launch this service via Python/Flask).
Option 2 - Kibana application runs on localhost port 5601 mounted at "/kibana" in order to not conflict with "/" needed for flask. Then it is exposed via nginx reverse proxy.
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }

    location /kibana/ {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        rewrite /kibana/(.*)$ /$1 break;
    }
}
With this setup, one can access Kibana by going to IP/kibana. The problem with Option 2 is that even if I have a /kibana view in my Flask app to catch it, it never takes effect: the redirection to Kibana happens at nginx, so Flask never gets involved.
I couldn't find much info on Stack Overflow etc., since most solutions deal with using nginx to authenticate Kibana rather than another Python application.
Given this, how would I incorporate your solution? Many thanks in advance for looking into this.
So you have 2 separate apps, right?
You want the second app to only work if the user is authenticated with the first app.
The simplest way would be to use the same db; this way, flask-login would check the user's authentication against the same db. With that being said, you may not be able to handle sessions perfectly.
The trick is in uWSGI and nginx. You should use the Emperor mode of uWSGI so both apps are deployed.
@app.route("/")
@login_required
def function():
    ...
Now, the question might be: how would we reach the second app's / route if the first app has that route too? Well, this will not be a problem since the URL is different, but you need your nginx configured to relay requests for xx.x.x.x to the first app and x.x.x.x:y to the second app.
server {
    listen 80;
    server_name example.org www.example.org;
    root /var/www/port80/;
}

server {
    listen 5601;
    server_name example.org www.example.org;
    root /var/www/port81/;
}
Since you asked for suggestions on how to do it, I haven't included full code, so you can figure it out based on your setup. Or tell us how you set up and serve the two apps, and we can provide more code.
One approach is to proxy all traffic to the Kibana server through the Flask application. You can use a catch-all route to handle forwarding of the different paths. You would disallow access to Kibana from sources other than from the Flask application.
import requests  # may require `pip install requests`
from flask import Response, stream_with_context

kibana_server_baseurl = 'https://xxx.xxx.xxx.xxx:5601/'

@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
@login_required
def proxy_kibana_requests(path):
    # ref http://flask.pocoo.org/snippets/118/
    url = kibana_server_baseurl + path
    req = requests.get(url, stream=True)
    return Response(stream_with_context(req.iter_content()),
                    content_type=req.headers['content-type'])
Another option is to use Nginx as a reverse proxy and let Nginx handle authentication. The simplest, if it meets your needs, is to use basic auth: https://www.nginx.com/resources/admin-guide/restricting-access-auth-basic/.
Alternatively you could check for a custom header in the Nginx config on access to the Kibana application and redirect to the Flask application if it were missing.
Another option is to use an existing Kibana authentication proxy. A popular commercial choice is Elastic X-Pack. Another OSS option is https://github.com/fangli/kibana-authentication-proxy. I have not personally used either.
So I have a simple web page that runs on Nginx and has Rest API calls to a Python Flask app.
I'd like to put them through 2 wormholes on Dataplicity. One for the web page and the other one for the backend app.
At the moment I can only expose one or the other. Is there a way to make both work?
Thanks!
Yep, put nginx in front and park the apps under different locations.
Let's say your app is listening on port 8080 and the other app on port 8081.
Then your nginx config might look like this:
server {
    listen ...;
    ...

    location /app1/ {
        proxy_pass http://127.0.0.1:8080;
    }

    location /app2/ {
        proxy_pass http://127.0.0.1:8081;
    }
    ...
}
Which means your apps will be accessible locally as:
http localhost/app1/
http localhost/app2/
This will appear as this on Dataplicity wormhole:
https 123123123.dataplicity.io/app1/
https 123123123.dataplicity.io/app2/
Hope that helps :)
M.