I have a Flask application run by gunicorn. It works fine if I run it directly on my server, but fails if I run it inside Docker with the error:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/app-root/src/backend.py", line 242, in runAlertManager
db = get_db()
File "/opt/app-root/src/backend.py", line 29, in get_db
db = g._database = sqlite3.connect(DATABASE)
sqlite3.OperationalError: unable to open database file
If I get a shell inside the Docker container I can use the sqlite3 client on that file just fine. If I remove the threading part it also works fine; I'm just not sure why it works directly but not in Docker.
Simplified, I have something like this:
import sqlite3
from threading import Thread
from time import sleep

from flask import g, Flask, request, jsonify, current_app

DATABASE = '/db/database.db'  # path to the mounted database file

app = Flask(__name__)


def get_db():
    db = getattr(g, '_database', None)
    if db is None:
        print(DATABASE)
        db = g._database = sqlite3.connect(DATABASE)
    return db


def runAlertManager(app):
    '''
    Runs AlertManager in a separate thread
    '''
    with app:
        db = get_db()
        while True:
            # do something
            sleep(10)


x = Thread(target=runAlertManager, args=(app.app_context(), ))
x.start()
DATABASE is just the path pointing to /db/database.db, which is mounted into the container and exists.
The entire directory an SQLite database exists in needs to be writable; SQLite needs to create some sidecar files (e.g. for database.db, database.db-wal or database.db-journal).
Instead of mounting just the file, mount the directory it's in.
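For example (a sketch based on the paths from the question; the image name and host path are placeholders), mount the host directory that contains the database rather than the single file:
> docker run -v /path/on/host/db:/db myimage
instead of
> docker run -v /path/on/host/db/database.db:/db/database.db myimage
so SQLite can create its -wal/-journal files next to /db/database.db.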
I use the PyMySQL library and Flask in my program. My view function accesses the database every time it is called. After some calls it breaks and raises InterfaceError(0, ''). All subsequent requests also raise InterfaceError (for any db query, specifically).
Traceback (most recent call last):
(several files of mine and Flask)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/connections.py", line 516, in query
self._execute_command(COMMAND.COM_QUERY, sql)
File "/home/maxim/.local/lib/python3.7/site-packages/pymysql/connections.py", line 750, in _execute_command
raise err.InterfaceError("(0, '')")
pymysql.err.InterfaceError: (0, '')
I read the PyMySQL library code and saw that this error occurs if the connection's _sock variable is None (I think that means the connection is closed). But why does that happen?
I use one connection object for all view functions (i.e. it is defined outside the functions). Is that the right approach, or must I make a new connection for every request? Or do I need to do something else to get rid of this error?
My code: https://pastebin.com/sy3xKtgB
Full traceback: https://pastebin.com/iTU75FUi
I solved my problem by creating a new connection to the database on every request.
import pymysql


def get_db():
    # A new connection per request; keyword arguments are clearer and more
    # portable across PyMySQL versions than positional ones.
    return pymysql.connect(
        host='ip',
        user='user',
        password='password',
        database='db_name',
        cursorclass=pymysql.cursors.DictCursor
    )
I call this function on every request.
from flask import Flask, request
from my_utils import get_db

app = Flask(__name__)


@app.route('/get', methods=['POST'])
def get():
    conn = get_db()
    with conn.cursor() as cur:
        pass
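One caveat with this approach (my note, not part of the original answer): if you open a fresh connection on every request, it is worth closing it once the request is done so connections don't pile up on the MySQL server. A minimal sketch reusing the get_db helper from above; the SELECT 1 is just a placeholder query:

from flask import Flask, jsonify
from my_utils import get_db

app = Flask(__name__)


@app.route('/get', methods=['POST'])
def get():
    conn = get_db()
    try:
        with conn.cursor() as cur:
            cur.execute('SELECT 1')  # placeholder query
            rows = cur.fetchall()
    finally:
        conn.close()  # release the connection at the end of every request
    return jsonify(rows=rows)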
I'm trying to write a simple server that disconnects a user after a minute of inactivity.
I've found a simple way of doing it with threading.Timer (restarting the timer every time there is activity).
But I'm getting a RuntimeError when calling disconnect from inside a Timer.
I tried using app.app_context and app.test_request_context, but either I don't know how and where to use them or it simply doesn't work.
server code:
from flask import Flask, request
from flask_socketio import SocketIO, emit, disconnect
from threading import Timer

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
sio = SocketIO(app)

clients = {}


class Client:
    def __init__(self, user, sid, client_time):
        self.user = user
        self.sid = sid
        self.client_time = client_time
        self.activity_timer = Timer(10, self.disc_after_60)
        self.start_timer()

    def disc_after_60(self):
        disconnect(self.sid)
        del clients[self.user]

    def start_timer(self):
        if self.activity_timer.is_alive():
            self.activity_timer.cancel()
            self.activity_timer.start()
        else:
            self.activity_timer.start()


@sio.on('register')
def handle_register(client_user, client_time):
    clients[client_user] = Client(client_user, request.sid, client_time)
    emit('message', ("SERVER", f"{client_user} has joined the server!"), broadcast=True)
On the client side I just connect and register.
The full error message:
Exception in thread Thread-8:
Traceback (most recent call last):
File "C:\Users\idshi\AppData\Local\Programs\Python\Python38-32\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Users\idshi\AppData\Local\Programs\Python\Python38-32\lib\threading.py", line 1254, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\idshi\PycharmProjects\PyChat excersize\Server\fsserver.py", line 24, in disc_after_60
disconnect(self.sid)
File "C:\Users\idshi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\flask_socketio\__init__.py", line 919, in disconnect
socketio = flask.current_app.extensions['socketio']
File "C:\Users\idshi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\werkzeug\local.py", line 348, in __getattr__
return getattr(self._get_current_object(), name)
File "C:\Users\idshi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\werkzeug\local.py", line 307, in _get_current_object
return self.__local()
File "C:\Users\idshi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\flask\globals.py", line 52, in _find_app
raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
I would be glad if someone can help me with this. Thanks in advance.
The disconnect() function needs to be called with an application context installed, as that's the only way for it to know which application instance to use.
Try this:
    def disc_after_60(self):
        with app.app_context():
            disconnect(sid=self.sid, namespace='/')
        del clients[self.user]
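Unrelated to the context error, but relevant to the stated goal of restarting the timer on every activity: a threading.Timer can only be started once, so "restarting" has to mean cancelling the old timer and creating a new one. A rough sketch of such a method on the Client class (the name reset_timer is mine, not from the question):

    def reset_timer(self):
        # Timer threads cannot be restarted, so cancel the old one and
        # replace it with a fresh Timer on every client activity.
        self.activity_timer.cancel()
        self.activity_timer = Timer(10, self.disc_after_60)
        self.activity_timer.start()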
If I import logstash when running locally, I get the following error:
Connected to pydev debugger (build 162.1812.1)
/home/vagrant/.envs/emailservice/lib/python3.4/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
Traceback (most recent call last):
File "/home/vagrant/.pycharm_helpers/pydev/pydevd.py", line 1580, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/vagrant/.pycharm_helpers/pydev/pydevd.py", line 964, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/vagrant/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/emailService/app.py", line 59, in <module>
import logstash
File "/home/vagrant/.envs/emailservice/lib/python3.4/site-packages/logstash/__init__.py", line 2, in <module>
from event import Event
ImportError: No module named 'event'
Process finished with exit code 1
My app.py file looks mostly like this. I run it locally through a Vagrant session. If I remove the import logstash from the local branch of the if statement, the application starts up fine and I get local console log output.
import logging
import os
import sys

from flask import Flask
from flask_restful import Api
from flask_cache import Cache
from flask_sqlalchemy import SQLAlchemy
from opbeat.contrib.flask import Opbeat

from tasks import make_celery

app = Flask(__name__)
app.secret_key = os.environ.get('SECRET_KEY', 'SUCHSECRETSWOW')
app.config.from_object(os.environ.get('APP_SETTINGS', 'config.DevelopmentConfig'))

cache = Cache(app)
db = SQLAlchemy(app)
api = Api(app)
celery = make_celery(app)

if len(app.config['OPBEAT_ORGANIZATION_ID']):
    opbeat = Opbeat(
        app,
        organization_id=app.config['OPBEAT_ORGANIZATION_ID'],
        app_id=app.config['OPBEAT_APP_ID'],
        secret_token=app.config['OPBEAT_SECRET_TOKEN'],
    )


@app.after_request
def after_request(response):
    response.headers.add('Access-Control-Allow-Origin', '*')
    response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization')
    response.headers.add('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE')
    return response


def clear_cache():
    cache.clear()


def start_resources():
    from emailService.api import HealthApi
    api.add_resource(HealthApi, '/health')


def start_tasks():
    from emailService.tasks import KickoffFetchEmailPeriodTask


if __name__ == '__main__':
    if app.config.get('DEVELOPMENT', False):
        # The reason this exists is purely because of my error.
        import logstash
        app.logger.setLevel(logging.DEBUG)
        app.logger.addHandler(logging.StreamHandler())
    else:
        import logstash
        app.logger = logging.getLogger('python-logstash-logger')
        app.logger.setLevel(logging.INFO)
        app.logger.addHandler(logstash.LogstashHandler('myhost.veryhost.suchhost', 5959, version=1))
        app.logger.addHandler(logging.StreamHandler())

    clear_cache()
    start_tasks()
    start_resources()

    app.logger.debug('Starting app')
    app.run(host='0.0.0.0', port=16600, debug=True, use_reloader=False)
All of the google searches result in a great big fat sum total of nothing helpful.
You're probably running into this issue: you have pip-installed logstash instead of python-logstash.
Run this and it should work afterwards:
> pip uninstall logstash
> pip install python-logstash
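If you want to double-check which of the two packages actually ended up installed (my suggestion; output will vary by environment):
> pip show logstash
> pip show python-logstash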
Starting app.py, then killing the database and hitting /api/foo gives me:
peewee.OperationalError: could not connect to server: Connection refused
Bringing the database back up and hitting /api/foo gives me:
peewee.OperationalError: terminating connection due to administrator
command\nSSL connection has been closed unexpectedly\n
And hitting /api/foo again gives me:
peewee.InterfaceError: connection already closed
Test case
test_case/__init__.py
#!/usr/bin/env python

from os import environ

from bottle import Bottle, request, response
from playhouse.db_url import connect

bottle_api = Bottle()
db = connect(environ['RDBMS_URI'])

from test_case.foo.models import Foo

db.connect()  # Not needed, but do want to throw errors ASAP
db.create_tables([Foo], safe=True)  # Create tables (if they don't exist)

from test_case.foo.routes import foo_api

bottle_api.merge(foo_api)
bottle_api.catchall = False


@bottle_api.hook('before_request')
def _connect_db():
    print 'Connecting to db'
    db.connect()


@bottle_api.hook('after_request')
def _close_db():
    print 'Closing db'
    if not db.is_closed():
        db.close()


def error_catcher(environment, start_response):
    try:
        return bottle_api.wsgi(environment, start_response)
    except Exception as e:
        environment['PATH_INFO'] = '/api/error'
        environment['api_error'] = e
        return bottle_api.wsgi(environment, start_response)


@bottle_api.route('/api/error')
def global_error():
    response.status = 500
    return {'error': (lambda res: res[res.find("'") + 1:res.rfind("'")])(
                str(request.environ['api_error'].__class__)),
            'error_message': request.environ['api_error'].message}
test_case/__main__.py
from __init__ import bottle_api
# Or `from __init__ import bottle_api`; `from bottle import run`;
# Then `run(error_catcher, port=5555)`
bottle_api.run(port=5555)
test_case/foo/__init__.py
test_case/foo/models.py
from peewee import Model, CharField

from test_case import db


class Foo(Model):
    id = CharField(primary_key=True)

    class Meta(object):
        database = db
test_case/foo/routes.py
from bottle import Bottle
from playhouse.shortcuts import model_to_dict

from test_case.foo.models import Foo

foo_api = Bottle()


@foo_api.get('/api/foo')
def retrieve_foos():
    return {'foos': tuple(model_to_dict(foo) for foo in Foo.select())}
Github gist for easy cloning.
Update:
I believe the problem lies in how you've structured your imports and the way Python loads modules from sys.path and caches them in sys.modules.
I think that one of your modules is being imported and loaded twice and different parts of the codebase use different instances of the module.
Thus, the views in foo.routes, are using one instance of the database object, while the connection hooks are using another.
Instead of from __init__, what about trying from test_case import bottle_api? That is the one import statement that jumps out at me as a possible culprit.
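A quick way to check whether the module really has been loaded twice (a diagnostic sketch of mine, not part of the original answer) is to inspect sys.modules at runtime, for example from inside one of the route handlers:

import sys

# If the package was imported both as "__init__" and as "test_case",
# two distinct module objects end up cached under different keys.
print(sorted(name for name in sys.modules if 'test_case' in name or name == '__init__'))
print(sys.modules.get('__init__') is sys.modules.get('test_case'))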
I added the following to your code so I could run it from the command-line:
if __name__ == '__main__':
    api.run()
Then I made a request to /api/foo and saw some fake data. I stopped the Postgresql server and got this error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 979, in __call__
return self.wsgi(environ, start_response)
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 954, in wsgi
out = self._cast(self._handle(environ))
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 857, in _handle
self.trigger_hook('before_request')
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/bottle.py", line 640, in trigger_hook
return [hook(*args, **kwargs) for hook in self._hooks[__name][:]]
File "bt.py", line 31, in _connect_db
db.connect()
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 2967, in connect
self.initialize_connection(self.__local.conn)
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 2885, in __exit__
reraise(new_type, new_type(*exc_value.args), traceback)
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 2965, in connect
**self.connect_kwargs)
File "/home/charles/tmp/scrap/bottlez/src/peewee/peewee.py", line 3279, in _connect
conn = psycopg2.connect(database=database, **kwargs)
File "/home/charles/tmp/scrap/bottlez/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
When I restarted the server and made a subsequent request I got a normal response with my test data.
So, in short, I'm not sure what I may be missing but the code seems to be working correctly to me.
Postgresql 9.4, psycopg2 2.6, python 2.7.9, peewee 2.6.0
I am running a program written by someone else whom it would be inconvenient to ask for help. The program is a website; the server side is written in Python with Flask (http://flask.pocoo.org/). The program already runs successfully on the production server. What I need to do is modify something in it. Since the production server cannot be used for testing, I tested it locally with Flask's development server. However, I could not even run the original program. Below is the output from Python.
(venv)kevin#ubuntu:~/python/public_html$ python index.wsgi
Traceback (most recent call last):
File "index.wsgi", line 6, in
from app import app as application
File "/home/kevin/python/public_html/app.py", line 27, in <module>
app = create_app()
File "/home/kevin/python/public_html/app.py", line 12, in create_app
database.init_db()
File "/home/kevin/python/public_html/database.py", line 24, in init_db
Base.metadata.create_all(engine)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 2793, in create_all
tables=tables)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1478, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1471, in _optional_conn_ctx_manager
with self.contextual_connect() as conn:
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1661, in contextual_connect
self.pool.connect(),
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 272, in connect
return _ConnectionFairy(self).checkout()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 425, in __init__
rec = self._connection_record = pool._do_get()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 857, in _do_get
return self._create_connection()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 225, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 318, in __init__
self.connection = self.__connect()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 368, in __connect
connection = self.__pool._creator()
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 283, in connect
return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file None None
In the config.py file
LOGFILE = '/tmp/ate.log'
DEBUG = True
TESTING = True
THREADED = True
DATABASE_URI = 'sqlite:////tmp/ate.db'
SECRET_KEY = os.urandom(24)
Hence, I created a folder called "tmp" under my home directory and an empty file called "ate.db". Then I ran it again. It said:
IOError: [Errno 2] No such file or directory: '/home/kevin/log/ate.log'
Then I created the log folder and the log file. I ran it again, but nothing happened:
(venv)kevin#ubuntu:~/python/public_html$ python index.wsgi
(venv)kevin#ubuntu:~/python/public_html$ python index.wsgi
(venv)kevin#ubuntu:~/python/public_html$
If it were successful, the website should be available at http://127.0.0.1:5000/. However, it did not work. Does anybody know why, and how to solve it? The code itself should be fine, since the site is already running online; the problem must be something local. Thank you so much for your help.
The code where the program gets stuck:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker

engine = None
db_session = None
Base = declarative_base()


def init_engine(uri, **kwards):
    global engine
    engine = create_engine(uri, **kwards)
    return engine


def init_db():
    global db_session
    db_session = scoped_session(sessionmaker(bind=engine))
    # import all modules here that might define models so that
    # they will be registered properly on the metadata. Otherwise
    # you will have to import them first before calling init_db()
    import models
    Base.metadata.create_all(engine)
Replace:
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////dbdir/test.db'
With:
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///dbdir/test.db'
In SQLAlchemy's SQLite URLs the three slashes belong to the sqlite:/// prefix and everything after them is the path, so a fourth slash makes the path absolute: sqlite:////dbdir/test.db points at /dbdir/test.db at the filesystem root, while sqlite:///dbdir/test.db is resolved relative to the working directory.
Finally figured it out (had help though):
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# Build an absolute path to database.db inside the current working directory
file_path = os.path.join(os.path.abspath(os.getcwd()), "database.db")

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + file_path
db = SQLAlchemy(app)
I had this issue with sqlite. The process trying to open the database file needs to have write access to the directory as it creates temporary/lock files.
The following structure worked for me to allow www-data to use the database.
%> ls -l
drwxrwxr-x 2 fmlheureux www-data 4096 Feb 17 13:24 database-dir
%> ls -l database-dir/
-rw-rw-r-- 1 fmlheureux www-data 40960 Feb 17 13:28 database.sqlite
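For reference, one way to arrive at a layout like this (my commands, assuming the web server runs in the www-data group; adjust paths and names to your setup):
%> chgrp www-data database-dir database-dir/database.sqlite
%> chmod 775 database-dir
%> chmod 664 database-dir/database.sqlite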
My database URI started working after adding one dot right after the three slashes. This was on Windows 7; I had the directory and the db file created prior to calling this.
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///./dbdir/test.db'
I think I've seen errors like this where file permissions were wrong for the .db file or its parent directory. You might make sure that the process trying to access the database can do so by appropriate use of chown or chmod.
This is specifically about Django, but maybe still relevant: https://serverfault.com/questions/57596/why-do-i-get-sqlite-error-unable-to-open-database-file
I just met this same problem and found that I had made a stupid circular reference.
./data_model.py
from flask.ext.sqlalchemy import SQLAlchemy
from api.src.app import app
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////database/user.db'
db = SQLAlchemy(app)
./app.py
...
from api.src.data_model import db
db.init_app(app)
Then I removed the db reference from app.py and it worked.
For those looking for a solution to an OperationalError not necessarily caused by the "unable to open database file None None" error above - you might try adding a pool_pre_ping=True argument to create_engine, i.e.
engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
see sqlalchemy documentation:
Pessimistic testing of connections upon checkout is achievable by using the Pool.pre_ping argument, available from create_engine() via the create_engine.pool_pre_ping argument
The “pre ping” feature will normally emit SQL equivalent to “SELECT 1” each time a connection is checked out from the pool; if an error is raised that is detected as a “disconnect” situation, the connection will be immediately recycled, and all other pooled connections older than the current time are invalidated, so that the next time they are checked out, they will also be recycled before use.
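If you are using Flask-SQLAlchemy rather than calling create_engine() yourself, the same option can be forwarded through the engine-options config key (available in Flask-SQLAlchemy 2.4 and later, as far as I know) - a minimal sketch with placeholder credentials:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://user:pw@host/db'
# Passed straight through to create_engine(), so pooled connections get pre-pinged.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True}
db = SQLAlchemy(app)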
You're not managing to find the path to the database from your current directory. What you need to do is the following:
DATABASE_URI = 'sqlite:///../tmp/ate.db'
That means: go up one directory level (..) and then navigate down to the database, i.e. the relative path after sqlite:/// is ../tmp/ate.db.
I had this same issue when trying to start the central scheduler for luigi (python module) with task history enabled.
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
I was attempting to use the following configuration from their documentation:
[task_history]
db_connection = sqlite:////user/local/var/luigi-task-hist.db
However, /user/local/* did not exist on my machine and I had to change the configuration to:
[task_history]
db_connection = sqlite:////usr/local/var/luigi-task-hist.db
Kind of a dumb mistake, but easily overlooked. Might save someone some time. This change got rid of the error in my case and luigid started with no errors.
I am doing a Python course and I had the same problem. Fortunately the course showed the right way to set the path in the database URI, so it still works for me in 2022.
You need to change:
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
to:
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///<name of database>.db'
I hope that it works for someone.
This problem is related to your file path. If you want the file in your project's root directory itself, then write the file name right after the slashes:
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///file_name.db'
I was able to overcome the same error by running sudo python :)