web.py + lighttpd + matplotlib not working - python

I'm trying to deploy my web.py app with lighttpd. It doesn't work if I import matplotlib.
This works...
hello.py:
#!/usr/bin/python
import web

# Say hello.
class Index:
    def GET(self): return 'hello web.py'

if __name__ == "__main__":
    app = web.application(('/*', 'Index'), globals())
    app.run()
/etc/init.d/lighttpd restart
I go to my site and see "hello web.py".
But if I add import matplotlib to hello.py and restart the server, then when I go to the site I get a 500 - Internal Server Error.
Here's /var/log/lighttpd/error.log:
2010-12-24 00:17:31: (log.c.166) server started
2010-12-24 00:17:42: (mod_fastcgi.c.1734) connect failed: Connection refused on unix:/tmp/fastcgi.socket-0
2010-12-24 00:17:42: (mod_fastcgi.c.3037) backend died; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 1
2010-12-24 00:17:43: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 4074 socket: unix:/tmp/fastcgi.socket-0
2010-12-24 00:17:43: (mod_fastcgi.c.3320) child exited, pid: 4074 status: 1
2010-12-24 00:17:43: (mod_fastcgi.c.3367) response not received, request sent: 953 on socket: unix:/tmp/fastcgi.socket-0 for /hello.py?, closing connection
2010-12-24 00:20:30: (server.c.1503) server stopped by UID = 0 PID = 4095
2010-12-24 00:20:30: (log.c.166) server started
-- Edit --
Here is my lighttpd.conf: http://pastebin.com/n6sG5z9K
Pretty sure it's just the default (except I set server.document-root = "/var/www/hello/")
Here is my fastcgi.conf:
server.modules += ( "mod_fastcgi" )
server.modules += ( "mod_rewrite" )

fastcgi.server = ( "/hello.py" =>
    (( "socket" => "/tmp/fastcgi.socket",
       "bin-path" => "/usr/bin/python /var/www/hello/hello.py",
       "max-procs" => 1,
       "bin-environment" => (
           "REAL_SCRIPT_NAME" => ""
       ),
       "check-local" => "disable"
    ))
)

url.rewrite-once = (
    "^/favicon.ico$" => "/static/favicon.ico",
    "^/static/(.*)$" => "/static/$1",
    "^/(.*)$" => "/hello.py/$1",
)
Any suggestions?

Stumbled into this today (with Apache, but it's likely to be exactly the same issue). I redirected stdout and stderr from the script to see what was happening, and the issue is that matplotlib is trying to create a file:
Traceback (most recent call last):
  File "/home/ec2-user/dlea/src/dla.py", line 24, in <module>
    import dbm
  File "/home/ec2-user/dlea/src/dbm.py", line 7, in <module>
    import matplotlib
  File "/usr/lib64/python2.6/site-packages/matplotlib/__init__.py", line 709, in <module>
    rcParams = rc_params()
  File "/usr/lib64/python2.6/site-packages/matplotlib/__init__.py", line 627, in rc_params
    fname = matplotlib_fname()
  File "/usr/lib64/python2.6/site-packages/matplotlib/__init__.py", line 565, in matplotlib_fname
    fname = os.path.join(get_configdir(), 'matplotlibrc')
  File "/usr/lib64/python2.6/site-packages/matplotlib/__init__.py", line 240, in wrapper
    ret = func(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/matplotlib/__init__.py", line 439, in _get_configdir
    raise RuntimeError("Failed to create %s/.matplotlib; consider setting MPLCONFIGDIR to a writable directory for matplotlib configuration data"%h)
RuntimeError: Failed to create /var/www/.matplotlib; consider setting MPLCONFIGDIR to a writable directory for matplotlib configuration data
Since it's being run as the Apache user (httpd), it tries to create the file in /var/www/, which is root-owned and not writable by the Apache user.
One valid solution is as simple as setting MPLCONFIGDIR to a temporary directory before importing matplotlib:
import os
import tempfile

# point matplotlib at a writable config directory before it gets imported
os.environ['MPLCONFIGDIR'] = tempfile.mkdtemp()

import matplotlib
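If you'd rather not modify the script, the same environment variable could presumably be set from the web server configuration instead. For the lighttpd setup in the question, that might look like the fragment below inside the fastcgi.server block (the /tmp/matplotlib path is only an example and must be writable by the server user):

"bin-environment" => (
    "REAL_SCRIPT_NAME" => "",
    "MPLCONFIGDIR" => "/tmp/matplotlib"
),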
To track the issue down, this is how I redirected stdout and stderr to a log file to see what was happening:
import sys

sys.stdout = open("/var/log/dla_stdout.txt", 'a')
sys.stderr = open("/var/log/dla_stderr.txt", 'a')
I actually got the solution from this other StackOverflow question: Setting Matplotlib MPLCONFIGDIR: consider setting MPLCONFIGDIR to a writable directory for matplotlib configuration data

I was following this recipe: http://webpy.org/cookbook/fastcgi-lighttpd
I overlooked a link at the top to this thread: http://www.mail-archive.com/webpy@googlegroups.com/msg02800.html
That thread had the solution. I run the python process like so:
/var/www/hello.py fastcgi 9080
and then set my fastcgi.conf like so:
fastcgi.server = ( "/hello.py" =>
    ((
        "host" => "127.0.0.1",
        "port" => 9080,
        "check-local" => "disable"
    ))
)
Then it works. (Still not sure I've got everything configured properly, but things seem to be working.)
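As far as I can tell, hello.py itself can stay as in the question for this setup; web.py's app.run() picks the serving mode from the command-line arguments (here fastcgi 9080, which needs flup installed), so only a comment changes below:

#!/usr/bin/python
import web

# Say hello.
class Index:
    def GET(self): return 'hello web.py'

if __name__ == "__main__":
    # started as: /usr/bin/python /var/www/hello.py fastcgi 9080
    # the 'fastcgi 9080' arguments make app.run() serve FastCGI on that port
    app = web.application(('/*', 'Index'), globals())
    app.run()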

I fixed the issue with:
pip install flup
After that I didn't need to run
/var/www/hello.py fastcgi 9080
My system: Amazon EC2, Ubuntu 10.04, lighttpd 1.4.26.

My first guess is that you're getting an ImportError because matplotlib wasn't installed properly or isn't on the PYTHONPATH or some other crazy thing. The only way to know for sure is to look at the traceback. It shows you're running fastcgi, which means that the python code is being executed in another process. Therefore, you can't find the traceback in the lighttpd logs.
How are you running the fastcgi process? The traceback would have been written to its stderr. You might also consider using supervisord. It has support for redirecting stderr to a log file and various other things that make creating daemon processes easier.
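For example, a supervisord program section roughly like this (the program name and log paths here are made up) would keep the FastCGI process running and capture its stderr, so the traceback ends up in a file you can read:

[program:hello]
command=/usr/bin/python /var/www/hello/hello.py fastcgi 9080
stdout_logfile=/var/log/hello-stdout.log
stderr_logfile=/var/log/hello-stderr.log
autorestart=true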

Related

SSL hostname.bundle error in python script

We have a Python script running on a CentOS Web Panel machine, designed to start up a Git interface.
This was working fine until the hostname SSL certificate expired yesterday.
We replaced hostname.cert, hostname.key and hostname.bundle with a new wildcard DigiCert SSL certificate and restarted all services, but now we get this error:
Traceback (most recent call last):
  File "/home/hosting/scripts/git-web-interface/server.py", line 12, in <module>
    certificate_chain = "/etc/pki/tls/certs/hostname.bundle"
  File "/usr/local/lib/python3.6/site-packages/cheroot/ssl/builtin.py", line 101, in __init__
    self.context.load_cert_chain(certificate, private_key)
FileNotFoundError: [Errno 2] No such file or directory
the file being executed is:
# Our stuff
import customGlobals

# Web.py
import web

if __name__ == "__main__":
    from cheroot.server import HTTPServer
    from cheroot.ssl.builtin import BuiltinSSLAdapter

    HTTPServer.ssl_adapter = BuiltinSSLAdapter(
        certificate = "/etc/pki/tls/certs/hostname.cert",
        private_key = "/etc/pki/tls/private/hostname.key",
        certificate_chain = "/etc/pki/tls/certs/hostname.bundle"
    )

    customGlobals.app.run()
We have triple-checked that the cert, key and chain paths are correct and that the files exist with the right ownership, but it's saying the hostname.bundle is incorrect.
Anyone know what's going on here?
The hostname.bundle appears to be just a copy of the cert file, so I'm guessing that's not right, but I don't know what it should be if not that.
Can anyone advise?
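One way to narrow this down would be a quick check (just a sketch using the paths from the script above, run as the same user the service runs as) of which of the three files the error actually refers to, since the FileNotFoundError above doesn't name the file:

import os

for p in ("/etc/pki/tls/certs/hostname.cert",
          "/etc/pki/tls/private/hostname.key",
          "/etc/pki/tls/certs/hostname.bundle"):
    # print whether the file exists and is readable by the current user
    print(p, os.path.exists(p), os.access(p, os.R_OK))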

ValueError: Unknown type <class 'redis.client.StrictPipeline'>

I develop locally on Win 10, which is a problem for using the RQ task queue, since RQ only works on Linux systems because it requires the ability to fork processes. I'm trying to extend the flask-base project https://github.com/hack4impact/flask-base/tree/master/app which can use RQ. I came across https://github.com/michaelbrooks/rq-win. I love the idea of this repo (if I can get it working it will really simplify my life, since I develop on Win 10 64-bit).
After installing this library, I can queue a job in my views by running something like:
@login_required
@main.route('/selected')
def selected():
    messages = 'abcde'
    j = get_queue().enqueue(render_png, messages, result_ttl=5000)
    return j.get_id()
This returns a job_code correctly.
I changed the code in manage.py to:
# imports this snippet needs
from redis import Redis
from rq import Connection, Queue

from rq_win import WindowsWorker

@manager.command
def run_worker():
    """Initializes a slim rq task queue."""
    listen = ['default']
    REDIS_URL = 'redis://localhost:6379'
    conn = Redis.from_url(REDIS_URL)
    with Connection(conn):
        # worker = Worker(map(Queue, listen))
        worker = WindowsWorker(map(Queue, listen))
        worker.work()
When I try to run it with:
$ python -u manage.py run_worker
09:40:44
09:40:44 *** Listening on ?[32mdefault?[39;49;00m...
09:40:58 ?[32mdefault?[39;49;00m: ?[34mapp.main.views.render_png('{"abcde"}')?[39;49;00m (8c1b6186-39a5-4daf-9c45-f60e4241cd1f)
...\lib\site-packages\rq\job.py:161: DeprecationWarning: job.status is deprecated. Use job.set_status() instead
DeprecationWarning
09:40:58 ?[31mValueError: Unknown type <class 'redis.client.StrictPipeline'>?[39;49;00m
Traceback (most recent call last):
  File "...\lib\site-packages\rq_win\worker.py", line 87, in perform_job
    queue.enqueue_dependents(job, pipeline=pipeline)
  File "...\lib\site-packages\rq\queue.py", line 322, in enqueue_dependents
    for job_id in pipe.smembers(dependents_key)]
  File "...\lib\site-packages\rq\queue.py", line 322, in <listcomp>
    for job_id in pipe.smembers(dependents_key)]
  File "...\lib\site-packages\rq\compat\__init__.py", line 62, in as_text
    raise ValueError('Unknown type %r' % type(v))
ValueError: Unknown type <class 'redis.client.StrictPipeline'>
So in summary, I think the jobs are being queued correctly within Redis. However, when the worker process tries to grab a job off the queue to process it, this error occurs. How can I fix this?
So after some digging, it looks like the root of the error is here, where job_id being sent to the as_text function is, somehow, a StrictPipeline object. However, I have been unable to replicate the error locally; can you post more of your code? Also, I would try re-installing the redis, rq, and rq-win modules, and possibly try importing rq.compat
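As a first step, something like this would confirm that rq.compat imports cleanly and which redis-py version is in play (a quick check, not a fix):

import redis
from rq.compat import as_text

print(redis.__version__)   # redis-py version in use
print(as_text(b'ok'))      # should print 'ok' if as_text handles bytes normally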

How to install packages with python scripts

I want to install a package with a python script. I have read the documentation about the PackageManager API (http://doc.aldebaran.com/2-4/naoqi/core/packagemanager-api.html).
So I have packaged the app with Choregraphe as described in http://doc.aldebaran.com/2-4/naoqi/core/packagemanager.html and I have tried to install it with a python script that looks like this:
import qi
import sys

if __name__ == '__main__':
    ip = "11.1.11.111"
    port = 9559
    session = qi.Session()
    try:
        session.connect("tcp://" + ip + ":" + str(port))
    except RuntimeError:
        print ("Can't connect to Naoqi at ip \"" + ip + "\" on port " + str(port))
        sys.exit(1)

    service = session.service("PackageManager")
    package = "C:\\test_package_handlers_01-835a92-1.0.0.pkg"

    # this is to see if the problem is that python can not locate the file
    with open(package) as f:
        print f

    service.install(package)
And here is what I receive as an error:
# provided package could be opened
<open file 'C:\\test_package_handlers_01-835a92-1.0.0.pkg', mode 'r' at 0x02886288>
Traceback (most recent call last):
  File "C:/test.py", line 24, in <module>
    service.install(package)
RuntimeError: C:\test_package_handlers_01-835a92-1.0.0.pkg: no such file
I guess this is because the package must be uploaded on the robot and the package file path must be the one that is on the robot.
EDITED
I added the package to a blank Choregraphe project and ran that project on the robot. This saved the package on the robot at /home/nao/.local/share/PackageManager/apps/.lastUploadedChoregrapheBehavior/test_package_handlers_01-835a92-1.0.0.pkg, and when I changed the path in my script from "C:\\test_package_handlers_01-835a92-1.0.0.pkg" to that robot-side path, the script worked as intended and the package was installed on the robot.
So is there a way to install packages from my PC without first uploading them to the robot? Otherwise it is easier to just use Choregraphe to upload projects.
Maybe it helps to explain what I want to achieve:
I have a folder on my PC with, say, 20 packages.
I want to install all 20 packages with one python script.
The script should install every package in the folder when it is invoked like this:
python package_installer.py path_to_packages_folder
EDITED_2
import qi
import ftplib
import os

ROBOT_URL = "10.80.129.90"

print "Uploading PKG"
pkg_file = "my-application-0.0.1.pkg"
pkg_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), pkg_file)
ftp = ftplib.FTP(ROBOT_URL)
ftp.login("nao", "nao")
with open(pkg_path) as pkg:
    ftp.storbinary("STOR "+pkg_file, pkg)

print "Connecting NAOqi session"
app = qi.Application(url='tcp://'+ROBOT_URL+':9559')
app.start()
session = app.session

print "Installing app"
packagemgr = session.service("PackageManager")
packagemgr.install("/home/nao/"+pkg_file)

print "Cleaning robot"
ftp.delete(pkg_file)
ftp.quit()

print "End"
app.stop()
This piece of code ftp = ftplib.FTP(ROBOT_URL) throws the following exception:
Traceback (most recent call last):
  File "C:/Stefan/DSK_PEPPER_clode_2/PythonScripts/_local_testing/uploading_and_installing_package.py", line 11, in <module>
    ftp = ftplib.FTP(ROBOT_URL)
  File "C:\Python27\lib\ftplib.py", line 120, in __init__
    self.connect(host)
  File "C:\Python27\lib\ftplib.py", line 135, in connect
    self.sock = socket.create_connection((self.host, self.port), self.timeout)
  File "C:\Python27\lib\socket.py", line 575, in create_connection
    raise err
socket.error: [Errno 10061] No connection could be made because the target machine actively refused it
Also, when I connect to the robot with username 'nao' and password 'nao' as described in http://doc.aldebaran.com/2-5/dev/tools/opennao.html and then try to create a folder in /home/nao/.local/share/PackageManager/apps/ with sudo mkdir, it tells me: Sorry, user nao is not allowed to execute '/bin/mkdir dasdas' as root on Pepper. If I use plain mkdir, I get: mkdir: cannot create directory 'new_folder': Permission denied
Using qibuild, you can also directly install using:
qipkg deploy-package /path/to/my-package.pkg --url nao@10.10.23.45
You do indeed need to upload the file first. You can use scp or sftp to do this. Once the .pkg is on the robot, you can use PackageManager.install.
Imagine something like:
import qi
import paramiko
import os

ROBOT_URL = "10.80.129.90"

print "Uploading PKG"
pkg_file = "my-application-0.0.1.pkg"
pkg_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), pkg_file)
transport = paramiko.Transport((ROBOT_URL, 22))
transport.connect(username="nao", password="nao")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put(pkg_path, pkg_file)

print "Connecting NAOqi session"
app = qi.Application(url='tcp://'+ROBOT_URL+':9559')
app.start()
session = app.session

print "Installing app"
packagemgr = session.service("PackageManager")
packagemgr.install("/home/nao/"+pkg_file)

print "Cleaning robot"
sftp.remove(pkg_file)
sftp.close()
transport.close()

print "End"
app.stop()
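To get to the original goal (python package_installer.py path_to_packages_folder), the same upload/install/cleanup steps could be wrapped in a loop over the folder, reusing the sftp and packagemgr objects from the sketch above; roughly:

import glob
import sys

pkg_dir = sys.argv[1]                    # folder passed on the command line
for pkg_path in glob.glob(os.path.join(pkg_dir, "*.pkg")):
    pkg_file = os.path.basename(pkg_path)
    sftp.put(pkg_path, pkg_file)         # upload into /home/nao
    packagemgr.install("/home/nao/" + pkg_file)
    sftp.remove(pkg_file)                # clean up the uploaded copy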

KeyError with CherryPy WSGIServer serving static files

I'm trying to use CherryPy's WSGI server to serve static files, like in Using Flask with CherryPy to serve static files. Option 2 of the accepted answer there looks exactly like what I'd like to do, but I'm getting a KeyError when I try to use the static directory handler.
What I've tried:
>>> import cherrypy
>>> from cherrypy import wsgiserver
>>> import os
>>> static_handler = cherrypy.tools.staticdir.handler(section='/', dir=os.path.abspath('server_files'))
>>> d = wsgiserver.WSGIPathInfoDispatcher({'/': static_handler})
>>> server = wsgiserver.CherryPyWSGIServer(('localhost', 12345), d)
>>> server.start()
Then, when I try to access the server I'm getting a 500 response and the following error in the console:
KeyError('tools',)
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1353, in communicate
    req.respond()
  File "/Library/Python/2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 868, in respond
    self.server.gateway(self).respond()
  File "/Library/Python/2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 2267, in respond
    response = self.req.server.wsgi_app(self.env, self.start_response)
  File "/Library/Python/2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 2477, in __call__
    return app(environ, start_response)
  File "/Library/Python/2.7/site-packages/cherrypy/_cptools.py", line 175, in handle_func
    handled = self.callable(*args, **self._merged_args(kwargs))
  File "/Library/Python/2.7/site-packages/cherrypy/_cptools.py", line 102, in _merged_args
    tm = cherrypy.serving.request.toolmaps[self.namespace]
KeyError: 'tools'
This is displayed twice for each time I try to hit anything that the server should be able to display. When I hooked up a Flask app to the server the Flask app worked as expected, but the static file serving still gave the same error.
What do I need to do to get the staticdir.handler to work?
I've tried various ways of getting this to work and up until today was also hitting the KeyError you have been seeing (among other issues).
I finally managed to get CherryPy to serve static alongside a Django app by adapting the code from this gist (included below).
import os
import cherrypy
from cherrypy import wsgiserver

from my_wsgi_app import wsgi

PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), 'public'))

class Root(object):
    pass

def make_static_config(static_dir_name):
    """
    All custom static configurations are set here, since most are common, it
    makes sense to generate them just once.
    """
    static_path = os.path.join('/', static_dir_name)
    path = os.path.join(PATH, static_dir_name)
    configuration = {static_path: {
        'tools.staticdir.on': True,
        'tools.staticdir.dir': path}
    }
    print configuration
    return cherrypy.tree.mount(Root(), '/', config=configuration)

# Assuming your app has media on different paths, like 'c', 'i' and 'j'
application = wsgiserver.WSGIPathInfoDispatcher({
    '/': wsgi.application,
    '/c': make_static_config('c'),
    '/j': make_static_config('j'),
    '/i': make_static_config('i')})

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8070), application,
                                       server_name='www.cherrypy.example')

try:
    server.start()
except KeyboardInterrupt:
    print "Terminating server..."
    server.stop()
Hopefully wrapping a Flask app will be fairly similar.
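For Flask specifically, the application object is itself a WSGI callable, so (untested, just a sketch) it should be able to take the place of wsgi.application in the dispatcher above:

from flask import Flask

flask_app = Flask(__name__)

application = wsgiserver.WSGIPathInfoDispatcher({
    '/': flask_app,                    # Flask handles the dynamic routes
    '/c': make_static_config('c')})    # CherryPy serves ./public/c as before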
The key for me was using the cherrypy.tree.mount on a dummy class, rather than trying to use the staticdir.handler directly.
For the curious - I used the code in the gist to customise a version of django-cherrypy's runcpserver management command, although in hindsight it would probably have been easier to create a new command from scratch.
Good luck (and thanks to Alfredo Deza)!

Easy application logging/debugging with nginx, uwsgi, flask?

I'm not looking to turn on the dangerous debugging console, but my application is getting a 500 error and doesn't seem to be writing any output for me to investigate more deeply.
I saw this exchange on the mailing list, which led me to this page on logging errors.
However, I still find this very confusing and have a couple of questions:
(1) In which file should the stuff below go?
ADMINS = ['yourname@example.com']
if not app.debug:
    import logging
    from logging.handlers import SMTPHandler
    mail_handler = SMTPHandler('127.0.0.1',
                               'server-error@example.com',
                               ADMINS, 'YourApplication Failed')
    mail_handler.setLevel(logging.ERROR)
    app.logger.addHandler(mail_handler)
...assuming the "getting bigger" file pattern for larger applications? __init__.py? config.py? run.py?
(2) I am overwhelmed by options there, and can't tell which I should use. Which loggers should I turn on, with what settings, to replicate the local python server debug I get to stdout when I run run.py? I find that default, local output stream very useful, more so than the interactive debugger in the page. Does anyone have a pattern they could share on setting up something replicating this with an nginx deployment, outputting to a log?
(3) Is there anything I need to change, not at the flask level, but in nginx, say in my /etc/nginx/sites-available/appname file, to enable logging?
UPDATE
Specifically, I'm looking for information like I get when python runs locally as to why, say, a package isn't working, or where my syntax error might be, or what variable doesn't exist:
$ python run.py
Traceback (most recent call last):
  File "run.py", line 1, in <module>
    from myappname import app
  File "/home/me/myappname/myappname/__init__.py", line 27, in <module>
    file_handler.setLevel(logging.debug)
  File "/usr/lib/python2.7/logging/__init__.py", line 710, in setLevel
    self.level = _checkLevel(level)
  File "/usr/lib/python2.7/logging/__init__.py", line 190, in _checkLevel
    raise TypeError("Level not an integer or a valid string: %r" % level)
When I run flask on a server, I never see this. I just get a uWSGI error in the browser, and have no idea which code was problematic. I would just like something like the above to be written to a file.
I notice also that the following logging setup didn't really write much to the file, even when I turned the level all the way up to DEBUG:
import logging
from logging import FileHandler

file_handler = FileHandler('mylog.log')
file_handler.setLevel(logging.DEBUG)
app.logger.addHandler(file_handler)
mylog.log is blank, even when my application errors out.
I'll also add that I've tried to set debug = True in the following ways, in __init__.py:
app = Flask(__name__)
app.debug = True
app.config['DEBUG'] = True

from werkzeug.debug import DebuggedApplication
app.wsgi_app = DebuggedApplication(app.wsgi_app, True)

app.config.from_object('config')
app.config.update(DEBUG=True)
app.config['DEBUG'] = True

if __name__ == '__main__':
    app.run(debug=True)
While in my config.py file, I have...
debug = True
Debug = True
DEBUG = True
Yet, no debugging happens, and without logging or debugging, this is rather hard to track down. Errors simply terminate the application with the un-useful browser message:
uWSGI Error
Python application not found
Set config['PROPAGATE_EXCEPTIONS'] to True when running the app in production and you want tracebacks to be logged to log files. (I haven't tried it with the SMTP handler, though.)
The part where you create handlers, add to loggers etc. should be in the if __name__ == '__main__' clause, i.e. your main entry point. I assume that would be run.py.
I'm not sure I can answer this - it depends on what you want. I'd advise looking at the logging tutorial to see the various options available.
I don't believe you need to change anything at the nginx level.
Update: You might want to have an exception clause that covers uncaught exceptions, e.g.
if __name__ == '__main__':
    try:
        app.run(debug=True)
    except Exception:
        app.logger.exception('Failed')
which should write the traceback of any exception which occurred in app.run() to the log.
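Putting those pieces together, run.py might look something like this (just a sketch; the log path is an example, and it usually helps to lower the logger's own level as well as the handler's):

import logging
from logging import FileHandler

from myappname import app

if __name__ == '__main__':
    file_handler = FileHandler('/tmp/myappname.log')
    file_handler.setLevel(logging.DEBUG)   # note: logging.DEBUG, not logging.debug
    app.logger.setLevel(logging.DEBUG)     # let DEBUG records through to the handler
    app.logger.addHandler(file_handler)
    try:
        app.run(debug=True)
    except Exception:
        app.logger.exception('Failed')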
I know that this is a VERY old post, but I ran into the issue now, and it took me a bit to find the solution. Flask sends errors to the server. I was running Gunicorn with an upstart script on Ubuntu 14.04 LTS, and the place where I found the error logs was as follows:
/var/log/upstart/myapp.log
http://docs.gunicorn.org/en/stable/deploy.html#upstart
Just in case some other poor soul ends up in this situation.
