I am trying to run my Django server on an Ubuntu instance on AWS EC2. I am using gunicorn to run the server like this:
gunicorn --workers 4 --bind 127.0.0.1:8000 woc.wsgi:application --name woc-server --log-level=info --worker-class=tornado --timeout=90 --graceful-timeout=10
When I make a request I get a 502 Bad Gateway in the browser. Here is the server log: http://pastebin.com/Ej5KWrWs
Here are the sections of settings.py where behaviour changes based on hostname (iUbuntu is the hostname of my laptop):
if socket.gethostname() == 'iUbuntu':
    '''
    Development mode
    "iUbuntu" is the hostname of Ishan's PC
    '''
    DEBUG = TEMPLATE_DEBUG = True
else:
    '''
    Production mode
    Anywhere else than Ishan's PC is considered as production
    '''
    DEBUG = TEMPLATE_DEBUG = False
if socket.gethostname() == 'iUbuntu':
    '''Development'''
    ALLOWED_HOSTS = ['*', ]
else:
    '''Production Won't let anyone pretend as us'''
    ALLOWED_HOSTS = ['domain.com', 'www.domain.com',
                     'api.domain.com', 'analytics.domain.com',
                     'ops.domain.com', 'localhost', '127.0.0.1']
(I don't get the purpose of this next section. Since I inherited the code from someone and the server was working, I didn't bother removing it without understanding what it does.)
if socket.gethostname() == 'iUbuntu':
    MAIN_SERVER = 'http://localhost'
else:
    MAIN_SERVER = 'http://domain.com'
I can't figure out what the problem is. The same code runs fine with gunicorn on my laptop.
I also wrote a small hello-world Node.js server on port 8000 to test the nginx configuration, and it runs fine, so there are no nginx errors.
UPDATE:
I set DEBUG to True and copied the traceback: http://pastebin.com/ggFuCmYW
UPDATE:
Thanks to the reply by @ARJMP. This is indeed a problem with the celery consumer not being able to connect to the broker.
I am configuring celery like this: app.config_from_object('woc.celeryconfig'), and the contents of celeryconfig.py are:
BROKER_URL = 'amqp://celeryuser:celerypassword@localhost:5672/MyVHost'
CELERY_RESULT_BACKEND = 'rpc://'
I am running the worker like this: celery worker -A woc.async -l info --autoreload --include=woc.async -n woc_celery.%h
And the error that I am getting is:
consumer: Cannot connect to amqp://celeryuser:**@127.0.0.1:5672/MyVHost: [Errno 104] Connection reset by peer.
OK, so your problem, as far as I can tell, is that your celery worker can't connect to the broker. You have some middleware trying to call a celery task, so it will fail on every request (unless that analyse_urls.delay(**kw) call is conditional).
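If you want requests to keep working while the broker is down, one option is to guard the enqueue. A minimal sketch, assuming the middleware calls analyse_urls.delay(**kw) as above (the wrapper function and logger are my additions):

import logging

logger = logging.getLogger(__name__)

def enqueue_analyse_urls(**kw):
    # analyse_urls is the task from your middleware; import it from wherever it lives.
    # Catching Exception is deliberately broad: the exact connection error class
    # varies across celery/kombu versions.
    try:
        analyse_urls.delay(**kw)
    except Exception:
        logger.exception("Could not enqueue analyse_urls; is the broker reachable?")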
I found a similar issue that was solved by upgrading their version of celery.
Another cause could be that the EC2 instance can't reach the message queue server because the EC2 security group won't allow it. If the message queue runs on a separate server, make sure you've allowed the connection between the EC2 instance and the message queue in your AWS EC2 security groups.
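If the broker does live on a separate instance, the rule usually needs to open RabbitMQ's default AMQP port (5672) to the web server. A hedged sketch with the AWS CLI (both group IDs are placeholders):

aws ec2 authorize-security-group-ingress \
    --group-id sg-broker1234 \
    --protocol tcp \
    --port 5672 \
    --source-group sg-webserver5678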
Try setting the RabbitMQ connection timeout to 30 seconds; this often clears up an inability to connect to a server.
You can add connection_timeout to your connection string:
BROKER_URL = 'amqp://celeryuser:celerypassword@server.example.com:5672/MyVHost?connection_timeout=30'
Note the format with the question mark: ?connection_timeout=30 is a query string parameter on the RabbitMQ connection string.
Also, make sure the URL points to your production server name, not localhost, in your production environment.
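One way to keep that straight is to build the broker URL from the environment instead of hard-coding localhost. A sketch (the BROKER_HOST variable name is my invention; the credentials and vhost are from the question):

import os

# Falls back to localhost in development; set BROKER_HOST on the production box.
broker_host = os.environ.get('BROKER_HOST', 'localhost')
BROKER_URL = ('amqp://celeryuser:celerypassword@%s:5672/MyVHost'
              '?connection_timeout=30' % broker_host)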
I keep getting this error despite trying everything I could find on the internet.
I'm trying to run my Flask application on Heroku.
Below is my Procfile:
web gunicorn -b 127.0.0.1:8000 geeni:app
Below is my geeni.py file.
# Imports and app setup inferred from the snippet (Flask + Flask-RESTful + stripe)
from flask import Flask, request, jsonify
from flask_restful import Api, Resource
import stripe

app = Flask(__name__)
api = Api(app)

class ChargeUser(Resource):
    def post(self):
        jsonData = request.get_json(force=True)
        stripeid = jsonData['stripeid_customer']
        currency = jsonData['currency']
        amount = jsonData['amount']
        apiKey = jsonData['api_key']
        try:
            stripe.Charge.create(amount=amount, source=stripeid, currency=currency)
            return jsonify({'Msg': 'Charged!'})
        except:
            raise

api.add_resource(ChargeUser, '/')

if __name__ == '__main__':
    app.run()
I've set up my Heroku push/login and everything, and have thoroughly followed tutorials. No luck.
Your Procfile should be web: gunicorn -b 0.0.0.0:$PORT geeni:app. As currently written, Heroku would never see that your application is ready to receive inbound connections:
The 127.0.0.1 interface would not receive any external network traffic; binding to 0.0.0.0 listens on all interfaces.
Heroku passes the required port via the $PORT environment variable; its value is assigned dynamically, so don't assume a fixed port.
Remember, Heroku manages the "routing mesh", which receives inbound HTTP traffic and forwards it to your application. It assigns the address and port, which can't be hard-coded in your Procfile.
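If you also want to run the app directly (python geeni.py) outside Heroku, a common pattern is to read $PORT with a local fallback. A sketch, assuming the app object from the question:

import os

if __name__ == '__main__':
    # Heroku injects PORT; default to 5000 for local runs.
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)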
I have my Python Flask web app running behind nginx. When I execute a request, a timeout error appears in the nginx error log, as shown below:
[error] 2084#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.2.224, server: 192.168.2.131, request: "POST /execute HTTP/1.1", upstream: "uwsgi://unix:/home/jay/PythonFlaskApp/app.sock", host: "192.168.2.131:9000", referrer: "http://192.168.2.131:9000/"
If I run the app locally it works fine and responds fine.
Anyone have any idea what might be wrong?
The error shown in the browser is:
Gateway Time-out
Here is the nginx config file:
server {
    listen 9000;
    server_name 192.168.2.131;

    location / {
        include uwsgi_params;
        # note: proxy_read_timeout applies to proxy_pass; with uwsgi_pass the
        # matching directive is uwsgi_read_timeout
        proxy_read_timeout 300;
        uwsgi_pass unix:/home/jay/PythonFlaskApp/app.sock;
    }
}
And here is the Python Fabric code that I am trying to execute. I'm not sure if this is causing the issue, but anyway, here is the code:
from fabric.api import *
from flask import request, jsonify  # inferred imports; `application` is the Flask app defined elsewhere

@application.route("/execute", methods=['POST'])
def execute():
    try:
        machineInfo = request.json['info']
        ip = machineInfo['ip']
        username = machineInfo['username']
        password = machineInfo['password']
        command = machineInfo['command']
        isRoot = machineInfo['isRoot']

        env.host_string = username + '@' + ip
        env.password = password
        resp = ''
        with settings(warn_only=True):
            if isRoot:
                resp = sudo(command)
            else:
                resp = run(command)
        return jsonify(status='OK', message=resp)
    except Exception, e:
        print 'Error is ' + str(e)
        return jsonify(status='ERROR', message=str(e))
I have a uWSGI config file for the web app and started it with an upstart script. Here is the uWSGI config file:
[uwsgi]
module = wsgi
master = true
processes = 5
socket = app.sock
chmod-socket = 660
vacuum = true
die-on-term = true
And here is the upstart script:
description "uWSGI server instance configured to serve Python Flask App"
start on runlevel [2345]
stop on runlevel [!2345]
setuid jay
setgid www-data
chdir /home/jay/PythonFlaskApp
exec uwsgi --ini app.ini
I followed a tutorial on running a Flask app behind nginx.
This is likely a problem with the Fabric task, not with Flask. Have you tried isolating or removing Fabric from the application, just for troubleshooting? You could stub out a value for resp rather than actually executing the run/sudo commands in your function. I would bet the app works just fine if you do that.
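A minimal sketch of what I mean, reusing the route from your snippet (the canned response string is arbitrary):

@application.route("/execute", methods=['POST'])
def execute():
    # Troubleshooting stub: no Fabric at all. If this returns promptly through
    # nginx, the timeout is coming from the Fabric run/sudo call.
    return jsonify(status='OK', message='stubbed response')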
If it does, that means you've got a problem with Fabric executing the command in question. The first thing to do is verify this by mocking up an example fabfile on the production server, using the info you expect in one of your requests, and running it with fab -f <mock_fabfile.py>.
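Something like this, as a sketch (the host, user, password, and test command are placeholders for the values you expect in a request):

# mock_fabfile.py
from fabric.api import env, run, settings

env.host_string = 'someuser@192.168.2.224'  # placeholder user/host
env.password = 'somepassword'               # placeholder password

def try_command():
    with settings(warn_only=True):
        print(run('uname -a'))  # placeholder for the command from your request

Then run fab -f mock_fabfile.py try_command on the production server and see whether it hangs or errors the way the web app does.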
It's also worth noting that using with settings(warn_only=True): can suppress error messages. I think you should remove it while you're troubleshooting. From the docs on Managing Output:
warnings: Warning messages. These are often turned off when one expects a given operation to fail, such as when using grep to test existence of text in a file. If paired with setting env.warn_only to True, this can result in fully silent warnings when remote programs fail. As with aborts, this setting does not control actual warning behavior, only whether warning messages are printed or hidden.
As a third suggestion, you can get more info out of Fabric by using the show('debug') context manager, as well as enabling Paramiko's logging:
from fabric.api import env, run  # run was used below but not imported
from fabric.context_managers import show

# You can also enable Paramiko's logging like so:
import logging
logging.basicConfig(level=logging.DEBUG)

def my_task():
    with show('debug'):
        run('my command...')
The Fabric docs have some additional suggestions for troubleshooting: http://docs.fabfile.org/en/1.6/troubleshooting.html. (1.6 is an older/outdated version, but the concepts still apply.)
I have a Flask app that I want to deploy using CherryPy's built-in server. I chose CherryPy so that the app can be deployed without a reverse proxy (i.e. no nginx in front).
I'm having trouble getting CherryPy to listen for requests on just a single hostname.
Say I'm serving 2 sites: test1.com and test2.com (and have them set in my hosts file to point back to localhost).
My /etc/hosts file:
127.0.0.1 test1.com test2.com
CherryPy serves test1.com; nothing is set up to serve test2.com.
My cherrypy file is as follows:
import cherrypy
from my_test_flask_app import app

if __name__ == '__main__':
    cherrypy.tree.graft(app, "/")
    cherrypy.server.unsubscribe()

    server = cherrypy._cpserver.Server()
    server.socket_host = "test1.com"
    server.socket_port = 8030
    server.thread_pool = 30
    server.subscribe()

    cherrypy.engine.start()
    cherrypy.engine.block()
Set up this way, I go to test1.com:8030 in my browser and it works as expected.
But when I go to test2.com:8030, the same app is served. I expected it to serve nothing, since CherryPy isn't set up to listen for test2.com.
To me, it seems that CherryPy is just listening for everything on the given port (8030) and treating the socket_host part as if it were 0.0.0.0.
Am I missing something here? I've looked through lots of docs and tutorials, but they all suggest this snippet should work as I expected.
Thanks
Here's how you can set up what you want:
# Root and Secure stand in for your app classes (this mirrors the example in the CherryPy docs)
root = Root()
RootApp = cherrypy.Application(root)
Domain2App = cherrypy.Application(root)
SecureApp = cherrypy.Application(Secure())

vhost = cherrypy._cpwsgi.VirtualHost(RootApp,
    domains={'www.domain2.example': Domain2App,
             'www.domain2.example:443': SecureApp,
             })

cherrypy.tree.graft(vhost)
https://cherrypy.readthedocs.org/en/3.3.0/refman/_cpwsgi.html#classes
Hope this helps!
You're misunderstanding the socket listen address: it takes an IP address, not a DNS name. Set this way, "test1.com" resolves to 127.0.0.1, so CherryPy listens on localhost only; try using your Ethernet/WLAN local address instead and you should get connection refused.
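For example (a sketch; substitute your machine's actual LAN address):

server = cherrypy._cpserver.Server()
server.socket_host = '192.168.1.50'  # an IP address; 'test1.com' merely resolves to 127.0.0.1
server.socket_port = 8030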
Also, you can wrap your application in a WSGI middleware that checks the Host header for the proper domain, or use CherryPy's virtual host facility to check the Host header.
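A rough sketch of such a middleware (the class name and the 404 behaviour are my choices, not a CherryPy API):

class HostFilter(object):
    '''WSGI middleware that refuses requests whose Host header doesn't match.'''
    def __init__(self, app, allowed_host):
        self.app = app
        self.allowed_host = allowed_host

    def __call__(self, environ, start_response):
        host = environ.get('HTTP_HOST', '').split(':')[0]
        if host != self.allowed_host:
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return ['Not Found']
        return self.app(environ, start_response)

cherrypy.tree.graft(HostFilter(app, 'test1.com'), '/')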
I was trying to use Celery to write a backend asynchronous process for my Django project.
I use RabbitMQ as my task queue, via CloudAMQP on Heroku.
The problem: the whole project works perfectly on my own laptop (testing against localhost), but it doesn't work on the production server.
This is the error message I got: [Errno 111] Connection refused
Then I did some research. I might be wrong, but it seems I've hit the limit on the number of connections/workers allowed, since I'm on a free plan.
I read this: "But remember to tweak the BROKER_POOL_LIMIT if you're using the free plan. Set it to 1 and you should be good. If you have connection problems, try reducing the concurrency of both your web workers and the celery worker." But I am not sure how to do that.
Here is my settings.py:
BROKER_URL = "amqp://paswzaog:0TwC3i7cBdTAKA9JE57EMm1xUzovFbry@turtle.rmq.cloudamqp.com/paswzaog"
BROKER_POOL_LIMIT = 1

CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERY_BEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND = 'amqp'  # note: this second assignment overrides the DatabaseBackend above
CELERY_TASK_RESULT_EXPIRES = 18000  # 5 hours
Here is the error message (followed by the sys.path dump that came with it):

[Errno 111] Connection refused

'/app/.heroku/python/bin',
'/app/.heroku/python/lib/python2.7/site-packages/setuptools-3.6-py2.7.egg',
'/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg',
'/app',
'/app/.heroku/python/lib/python27.zip',
'/app/.heroku/python/lib/python2.7',
'/app/.heroku/python/lib/python2.7/plat-linux2',
'/app/.heroku/python/lib/python2.7/lib-tk',
'/app/.heroku/python/lib/python2.7/lib-old',
'/app/.heroku/python/lib/python2.7/lib-dynload',
'/app/.heroku/python/lib/python2.7/site-packages',
'/app'
Has anybody had a similar problem before? Or can someone recommend a thorough tutorial on setting up Celery with Heroku? Thanks in advance!
It looks like your credentials might not be valid. Either that or you have more than 1 concurrent connection.
A good way to test this would be to scale your web servers to 0 (heroku scale web=0), then create a single worker (heroku scale worker=1) -- then tail your Heroku logs (heroku logs --tail) to see if your single worker can connect.
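To rule the credentials in or out, you could also test the connection by hand from a one-off dyno (heroku run python). A sketch using kombu, which celery already depends on (substitute your real CloudAMQP URL):

from kombu import Connection

# Same URL as BROKER_URL in settings.py
url = 'amqp://user:password@turtle.rmq.cloudamqp.com/vhost'
with Connection(url, connect_timeout=10) as conn:
    conn.ensure_connection(max_retries=2)  # raises if the broker refuses the connection
    print('Broker reachable')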
I have an issue with route_url and my setup. On the server I have a paster server listening on 127.0.0.1, port 6543, and an nginx server reverse-proxying from port 80 to port 6543.
I'm also using paste's prefix middleware to retrieve the real client IP, with this setup in my ini file:
[filter:paste_prefix]
use = egg:PasteDeploy#prefix

[pipeline:main]
pipeline =
    paste_prefix
    myapp
The server is on a private LAN, and I'm connecting to it through an SSH tunnel set up like this:
ssh me#sshgateway -L 8080:nginx_server_ip:80
And I connect to the web page on my client at this URL: http://localhost:8080
The main page is displayed correctly, but all links generated with request.route_url redirect to localhost/url (without the :8080).
I guess this has something to do with either nginx or paste prefix (or both).
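If it is the nginx side: as far as I understand, the prefix middleware rebuilds URLs from the X-Forwarded-* headers, so nginx has to pass the original Host header (with its port) through. A sketch of what I believe the relevant location block should look like:

location / {
    proxy_pass http://127.0.0.1:6543;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}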
I expect that replacing route_url with route_path will solve this without fixing the underlying nginx/ini setup issue.
Is there ever any reason to call route_url instead of route_path?
route_url is useful in situations like generating a redirect from HTTP to HTTPS, or to a different subdomain. Other than that, route_path is probably preferable.
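For example (the route name 'home' is hypothetical):

request.route_url('home')   # 'http://localhost/some/path' - absolute, host taken from the request
request.route_path('home')  # '/some/path' - path only, so the browser keeps localhost:8080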