I am using Bottle and uWSGI.
uWSGI config:
[uwsgi]
http-socket = :8087
processes = 4
workers = 4
master = true
file = app.py
app.py:
import bottle
import os

application = bottle.app()

@bottle.route('/test')
def test():
    os.system('./restart_ssdb.sh')

if __name__ == '__main__':
    bottle.run()
restart_ssdb.sh (it just restarts a service; what the service is doesn't matter):
./ssdb-server -d ssdb.conf -s restart
Then I start uWSGI and it works well.
Then I access the URL 127.0.0.1:8087/test.
The image shows that one of the uWSGI processes has become the ssdb server.
Then I stop uWSGI:
Port 8087 now belongs to ssdb, which means the uWSGI server cannot be restarted because the port is already in use.
What causes the problem shown in Figure 2?
I just want to execute the shell script (restart the ssdb server), but it must be guaranteed not to affect the uWSGI server. What can I do?
http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
I solved it by setting the close-on-exec option in my uWSGI config.
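For reference, this is roughly what the resulting ini looks like (a sketch; only the close-on-exec line is new, everything else is from the config above):

[uwsgi]
http-socket = :8087
processes = 4
workers = 4
master = true
file = app.py
close-on-exec = true

With the socket marked close-on-exec, the child process spawned by os.system() no longer inherits the listening socket, so port 8087 stays with uWSGI and it can be stopped and restarted normally.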
I have an app written in Python with Flask, deployed with uWSGI + nginx. Here is my uWSGI config:
[uwsgi]
master = true
socket = :8223
chdir = /SWS/swdzweb
wsgi-file = manage.py
callable = app
processes = 4
threads = 2
My app responds to a request that starts or stops a daemon process, also written in Python, as below.
In the request function I do:
os.system("python /SWS/webservice.py %s" % cmd)
where cmd is start|stop. My daemon process is single-process and single-threaded; it catches SIGTERM and then exits, like this:
signal.signal(signal.SIGTERM, lambda signo, frame: sys.exit(0))
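Spelled out with its imports, that handler registration would look roughly like this (a sketch; the handler name is made up for illustration):

import signal
import sys

def _on_sigterm(signo, frame):
    # exit cleanly when the daemon receives SIGTERM (kill -15)
    sys.exit(0)

signal.signal(signal.SIGTERM, _on_sigterm)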
But when I start this daemon process through uWSGI in my request function, I can't stop it. For example:
kill -15 pid or python /SWS/webservice.py stop
It is as if the SIGTERM signal is never delivered to my daemon process.
However, when I configure uWSGI with 4 processes and 1 thread, this works fine. The config is:
[uwsgi]
master = true
socket = :8223
chdir = /SWS/swdzweb
wsgi-file = manage.py
callable = app
processes = 4
threads = 1
I cannot figure out the reason, so I have to ask for help.
Thanks!
I have my Python Flask web app hosted on nginx. While trying to execute a request, it shows a timeout error in the nginx error log, as shown below:
[error] 2084#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.2.224, server: 192.168.2.131, request: "POST /execute HTTP/1.1", upstream: "uwsgi://unix:/home/jay/PythonFlaskApp/app.sock", host: "192.168.2.131:9000", referrer: "http://192.168.2.131:9000/"
If I run the app locally it works fine and responds fine.
Does anyone have any idea what might be wrong?
The error shown in the browser console is:
Gateway Time-out
Here is the nginx config file:
server {
    listen 9000;
    server_name 192.168.2.131;

    location / {
        include uwsgi_params;
        proxy_read_timeout 300;
        uwsgi_pass unix:/home/jay/PythonFlaskApp/app.sock;
    }
}
And here is the Python Fabric code that I am trying to execute. I'm not sure if this is causing the issue, but anyway, here is the code:
from fabric.api import *
from flask import request, jsonify

# 'application' is the Flask app object defined elsewhere in the module
@application.route("/execute", methods=['POST'])
def execute():
    try:
        machineInfo = request.json['info']
        ip = machineInfo['ip']
        username = machineInfo['username']
        password = machineInfo['password']
        command = machineInfo['command']
        isRoot = machineInfo['isRoot']

        env.host_string = username + '@' + ip
        env.password = password

        resp = ''
        with settings(warn_only=True):
            if isRoot:
                resp = sudo(command)
            else:
                resp = run(command)
        return jsonify(status='OK', message=resp)
    except Exception, e:
        print 'Error is ' + str(e)
        return jsonify(status='ERROR', message=str(e))
I have a uWSGI config file for the web app and started it using an upstart script. Here is the uWSGI conf file:
[uwsgi]
module = wsgi
master = true
processes = 5
socket = app.sock
chmod-socket = 660
vacuum = true
die-on-term = true
And here is the upstart script:
description "uWSGI server instance configured to serve Python Flask App"
start on runlevel [2345]
stop on runlevel [!2345]
setuid jay
setgid www-data
chdir /home/jay/PythonFlaskApp
exec uwsgi --ini app.ini
I have followed the tutorial below on running a Flask app on nginx.
This is likely a problem with the Fabric task, not with Flask. Have you tried isolating / removing Fabric from the application, just for troubleshooting purposes? You could try stubbing out a value for resp, rather than actually executing the run/sudo commands in your function. I would bet that the app works just fine if you do that.
And so that would mean that you've got a problem with Fabric executing the command in question. First thing you should do is verify this by mocking up an example Fabfile on the production server using the info you're expecting in one of your requests, and then running it with fab -f <mock_fabfile.py>.
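A mock fabfile for that test might look roughly like this (the host, credentials, and command are placeholders to be replaced with the values from one of your failing requests):

# mock_fabfile.py -- stand-in for one /execute request
from fabric.api import env, run, settings

env.host_string = 'someuser@192.168.2.50'  # placeholder user/host
env.password = 'somepassword'              # placeholder password

def mock_task():
    # run the same command the web request would have run
    with settings(warn_only=True):
        run('uptime')

Run it with fab -f mock_fabfile.py mock_task. If that hangs or fails in the same way, the problem is in the Fabric/SSH layer rather than in Flask or nginx.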
It's also worth noting that using with settings(warn_only=True): can result in suppression of error messages. I think that you should remove this, since you are in a troubleshooting scenario. From the docs on Managing Output:
warnings: Warning messages. These are often turned off when one expects a given operation to fail, such as when using grep to test existence of text in a file. If paired with setting env.warn_only to True, this can result in fully silent warnings when remote programs fail. As with aborts, this setting does not control actual warning behavior, only whether warning messages are printed or hidden.
As a third suggestion, you can get more info out of Fabric by using the show('debug') context manager, as well as enabling Paramiko's logging:
from fabric.api import env, run
from fabric.context_managers import show

# You can also enable Paramiko's logging like so:
import logging
logging.basicConfig(level=logging.DEBUG)

def my_task():
    with show('debug'):
        run('my command...')
The Fabric docs have some additional suggestions for troubleshooting: http://docs.fabfile.org/en/1.6/troubleshooting.html. (1.6 is an older/outdated version, but the concepts still apply.)
I am trying to run my Django server on an Ubuntu instance on AWS EC2. I am using gunicorn to run the server like this:
gunicorn --workers 4 --bind 127.0.0.1:8000 woc.wsgi:application --name woc-server --log-level=info --worker-class=tornado --timeout=90 --graceful-timeout=10
When I make a request I get 502 Bad Gateway in the browser. Here is the server log: http://pastebin.com/Ej5KWrWs
Some sections of the settings.py file where behaviour changes based on the hostname are shown below.
(iUbuntu is the hostname of my laptop.)
if socket.gethostname() == 'iUbuntu':
    '''
    Development mode
    "iUbuntu" is the hostname of Ishan's PC
    '''
    DEBUG = TEMPLATE_DEBUG = True
else:
    '''
    Production mode
    Anywhere else than Ishan's PC is considered as production
    '''
    DEBUG = TEMPLATE_DEBUG = False

if socket.gethostname() == 'iUbuntu':
    '''Development'''
    ALLOWED_HOSTS = ['*', ]
else:
    '''Production: won't let anyone pretend to be us'''
    ALLOWED_HOSTS = ['domain.com', 'www.domain.com',
                     'api.domain.com', 'analytics.domain.com',
                     'ops.domain.com', 'localhost', '127.0.0.1']
(I don't get the purpose of this section of the code. Since I inherited the code from someone and the server was working, I didn't bother removing it without understanding what it does.)
if socket.gethostname() == 'iUbuntu':
    MAIN_SERVER = 'http://localhost'
else:
    MAIN_SERVER = 'http://domain.com'
I can't figure out what the problem is here. The same code runs fine with gunicorn on my laptop.
I have also made a small hello-world node.js app serving on port 8000 to test the nginx configuration, and it runs fine. So there are no nginx errors.
UPDATE:
I set DEBUG to True and copied the traceback: http://pastebin.com/ggFuCmYW
UPDATE:
Thanks to the reply by @ARJMP. This is indeed a problem with the celery consumer not being able to connect to the broker.
I am configuring celery like this: app.config_from_object('woc.celeryconfig'), and the contents of celeryconfig.py are:
BROKER_URL = 'amqp://celeryuser:celerypassword@localhost:5672/MyVHost'
CELERY_RESULT_BACKEND = 'rpc://'
I am running the worker like this: celery worker -A woc.async -l info --autoreload --include=woc.async -n woc_celery.%h
And the error that I am getting is:
consumer: Cannot connect to amqp://celeryuser:**@127.0.0.1:5672/MyVHost: [Errno 104] Connection reset by peer.
OK, so your problem, as far as I can tell, is that your celery worker can't connect to the broker. You have some middleware trying to call a celery task, so it will fail on every request (unless that analyse_urls.delay(**kw) call is conditional).
I found a similar issue that was solved by upgrading their version of celery.
Another cause could be that the EC2 instance can't connect to the message queue server because the EC2 security group won't allow it. If the message queue is running on a separate server, you may have to make sure you've allowed the connection between the EC2 instance and the message queue through AWS EC2 Security Groups
Try setting the RabbitMQ connection timeout to 30 seconds. This usually clears up the problem of being unable to connect to a server.
You can add connection_timeout to your connection string:
BROKER_URL = 'amqp://celeryuser:celerypassword@server.example.com:5672/MyVHost?connection_timeout=30'
Note the format with the question mark: ?connection_timeout=30
This is a query string parameter for the RabbitMQ connection string.
Also, make sure the URL points to your production server name/URL, and not localhost, in your production environment.
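Applied to the celeryconfig.py from the question, that change would look roughly like this (a sketch; the hostname is a placeholder, so keep your real broker host and credentials):

# woc/celeryconfig.py -- sketch with the connection timeout added
BROKER_URL = ('amqp://celeryuser:celerypassword'
              '@broker.example.com:5672/MyVHost?connection_timeout=30')
CELERY_RESULT_BACKEND = 'rpc://'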
I'm having some trouble getting simple multi-threading functionality up and running in my web application.
I'm using Flask, uWSGI and nginx on Ubuntu 12.04.
Each time I start a new thread, it will not execute before I shut down the uWSGI server. It's very odd!
If I'm doing a simple task (e.g. printing), it will execute as expected 9 out of 10 times. If I do a heavy computing job (e.g. OCR on a file), it will always start executing only when the server is restarting (shutting down).
Any idea why my code does not perform as expected?
Code:
import threading

def hello_world(world):
    print "Hello, " + world  # This only gets printed when the uwsgi server restarts

def thread_test():
    x = "World!"
    t = threading.Thread(target=hello_world, args=(x,))
    t.start()

@application.route('/api/test')
def test():
    thread_test()
    return "Hello, World!", 200
EDIT 1:
My uwsgi configuration looks like this:
[uwsgi]
chdir = /Users/vingtoft/Documents/Development/archii/server/archii2/
pythonpath = /Users/vingtoft/Documents/Development/archii/server/archii2/
pythonpath = /Users/vingtoft/Documents/Development/archii/server/ml/
module = app.app:application
master = True
vacuum = True
socket = /tmp/archii.sock
processes = 4
pidfile = /Users/vingtoft/Documents/Development/archii/server/archii2/uwsgi.pid
daemonize = /Users/vingtoft/Documents/Development/archii/server/archii2/uwsgi.log
virtualenv = /Users/vingtoft/Documents/Development/virtualenv/flask/
wsgi-file = /Users/vingtoft/Documents/Development/archii/server/archii2/app/app.py
ssl = True
The uWSGI server disables thread support by default for some performance improvements, but you can enable it again using either:
threads = 2 # or any greater number
or
enable-threads = true
But be warned that the first method will tell uWSGI to create 2 threads for each of your workers, so with 4 workers you will end up with 8 actual threads.
Those threads will work as separate workers, so they are not available for your background jobs, but using any number of threads greater than one enables thread support in the uWSGI server, so you can then create more threads of your own for background tasks.
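In the configuration from the question, that is a one-line addition (a sketch; the remaining lines of the original ini stay unchanged):

[uwsgi]
chdir = /Users/vingtoft/Documents/Development/archii/server/archii2/
module = app.app:application
master = True
socket = /tmp/archii.sock
processes = 4
enable-threads = true

With enable-threads = true each worker still handles requests in a single thread, but threads started from your own code (like the one in thread_test) are actually allowed to run instead of being deferred until shutdown.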
I'm using nd_service_registry to register my Django service with ZooKeeper; the service is launched with uWSGI.
versions:
uWSGI==2.0.10
Django==1.7.5
My question is: what is the correct place for the nd_service_registry.set_node code that registers the service with the ZooKeeper server, so as to avoid duplicate registration or deregistration?
My uwsgi config ini, with processes = 2, enable-threads = true, threads = 2:
[uwsgi]
chdir = /data/www/django-proj/src
module = settings.wsgi:application
env = DJANGO_SETTINGS_MODULE=settings.test
master = true
pidfile = /tmp/uwsgi-proj.pid
socket = /tmp/uwsgi_proj.sock
processes = 2
threads = 2
harakiri = 20
max-requests = 50000
vacuum = true
home = /data/www/django-proj/env
enable-threads = true
buffer-size = 65535
chmod-socket=666
register code:
from nd_service_registry import KazooServiceRegistry
nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})
I've tested the following cases and both worked as expected: the Django service registered at uWSGI master process startup, and only once.
place the code in settings.py
place the code in wsgi.py
Even if I kill a uWSGI worker process (the master then relaunches another worker), or let a worker be killed and restarted by the uWSGI harakiri option, no new register action is triggered.
So my question is whether my registration code is correct for Django + uWSGI with processes and threads enabled, and where it should be placed.
The problem happens when you use uWSGI in master/worker mode. When the uWSGI master process spawns workers, the ZooKeeper connection maintained by a thread in the ZooKeeper client cannot be copied into the workers correctly. So in a uWSGI application you should use the uWSGI decorator uwsgidecorators.postfork to run the registration code: a function decorated with @postfork is called each time a new worker is spawned.
Hope it helps.
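A minimal sketch of that, reusing the registration code from the question (ZOOKEEPER_SERVER_URL and the node path are the question's own placeholders):

# e.g. in wsgi.py, or any module imported by every worker
from uwsgidecorators import postfork
from nd_service_registry import KazooServiceRegistry

@postfork
def register_service():
    # Runs in each worker right after fork, so every worker opens its own
    # ZooKeeper connection instead of reusing the master's connection.
    nd = KazooServiceRegistry(server=ZOOKEEPER_SERVER_URL)
    nd.set_node('/web/test/server0', {'host': 'localhost', 'port': 80})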