Introduction
I'm developing a Python web app running on Flask. One of the modules I developed uses sqlite3 to access a database file in one of my project directories. Locally it works like a charm, but I'm having trouble getting it to run properly on PythonAnywhere.
Code
Here's the relevant part of my module_database.py (both SQL queries are plain SELECTs):
import sqlite3
import os

PATH_DB = os.path.join(os.path.dirname(__file__), 'res/database.db')

db = sqlite3.connect(PATH_DB)
cursor = db.cursor()

def init():
    cursor.execute(my_sql_query)
    val = cursor.fetchone()

def process():
    cursor.execute(another_sql_query)
    another_val = cursor.fetchone()
I don't know if it's important, but my module is imported like this:
from importlib import import_module
module = import_module(absolute_path_to_module)
module.init() # module init
And afterwards my webapp will regularly call:
module.process()
So I have one access to the DB in my init() and one in my process(). Both work when I run the app locally.
Problem
I pulled my code from GitHub onto PythonAnywhere, restarted the app, and I can see in the log file that the DB access in init() worked (I print a value and it comes out fine).
But then, when my app calls the process() method, I get:
2017-11-06 16:27:55,551: File "/home/account-name/project-name/project_modules/module_database.py", line 71, in my_method
2017-11-06 16:27:55,551: cursor.execute(sql)
2017-11-06 16:27:55,552: sqlite3.DatabaseError: database disk image is malformed
I ran an integrity check from the console:
PRAGMA integrity_check;
and it prints OK
I'd be glad to hear if you have any idea where this could come from.
A small thing, and it may not fix your specific problem, but you should always call os.path.abspath on __file__ before calling os.path.dirname; otherwise you can get unpredictable results depending on how your code is imported/loaded/run:
PATH_DB = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
'res/database.db'
)
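To illustrate why that matters (a hypothetical run, with example paths only, not taken from the original post): if the module is loaded via a relative path, __file__ itself can be relative, so dirname alone resolves against whatever the current working directory happens to be.
import os

# Hypothetical output, for illustration only:
print(__file__)
# e.g. 'project_modules/module_database.py'  (relative, depends on how the module was loaded)
print(os.path.dirname(__file__))
# e.g. 'project_modules'  (still relative to the current working directory)
print(os.path.dirname(os.path.abspath(__file__)))
# e.g. '/home/account-name/project-name/project_modules'  (stable absolute path)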
Related
I am new to Python.
I am creating some programs that do an ETL and save the tables in a PostgreSQL database.
The connection to the database is made in the following way.
I have a settings.ini file where I have the variable:
bdd_link = postgresql://postgres:adrian123@localhost:5432/prueba
In a config.py file I have the variable:
from decouple import AutoConfig

config = AutoConfig()
link_bdd = config("bdd_link")
And in a sql_con.py file I have the following:
from config import link_bdd
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
def acceder_bdd():
    url_bdd = link_bdd
    if not database_exists(url_bdd):
        create_database(url_bdd)
    engine = create_engine(url_bdd, echo=False)
This all works when I run it normally.
But when I run it inside a virtual environment (venv), the create_tables() function, which in turn uses the acceder_bdd() function and which I use to create tables inside PostgreSQL, gives me the following error:
engine.connect().execute(f"DROP TABLE IF EXISTS {table}")
AttributeError: 'NoneType' object has no attribute 'connect'
It's as if the create_engine(url_bdd, echo=False) call isn't working, which makes me think that database "settings" don't work this way inside a virtual environment.
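One observation on the snippets above (an assumption on my part, since the full create_tables() code isn't shown): the AttributeError says the object whose .connect() is being called is None, and acceder_bdd() as posted never returns the engine it creates, so a caller doing engine = acceder_bdd() would get None regardless of the virtual environment. A minimal sketch of the version such a caller would expect:
from config import link_bdd
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database

def acceder_bdd():
    url_bdd = link_bdd
    # Create the database on first use
    if not database_exists(url_bdd):
        create_database(url_bdd)
    # Hand the engine back to the caller; without this the function returns None
    return create_engine(url_bdd, echo=False)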
This code runs perfectly fine if I first change into the folder where this main.py file is located, so in cmd I just type: python main.py. My database "Datalog.db" is also located in this folder.
If I run this Python file from somewhere else, I get a problem with this line of code: cur.execute(sql). So in cmd I type: python C:\Users\ [...] \main.py and get the following error: "sqlite3.OperationalError: no such table: Datalog". Later I want to call this Python file from a pythonshell node in Node-RED, and there I have to give the full path of main.py.
I also tried to build an exe file with it, but then the same error occurs: "sqlite3.OperationalError: no such table: Datalog".
Apparently the connection to the database is not the issue; rather, it's my cur.execute command that fails first.
I found out that I have to "include my SQLite database file in the include_files statement", but I have no idea how to do this.
Can anybody help? I am very sorry for any inconvenience, I just started programming and this is my first post.
import sqlite3 as db

db_name = 'Datalog'
output_number = 'Output1'
output = 'hello'
timestamp = '2019-11-11 09:27:02'

db_name = f'{db_name}.db'
con = db.connect(db_name)

with con:
    cur = con.cursor()
    sql = f"UPDATE Datalog SET {output_number}='{output}' WHERE timestamp ='{timestamp}'"
    cur.execute(sql)
    con.commit()
    print("### DB updated ###")
You can change db_name to the full path of your SQLite database or, even better, dynamically build the path to the DB (the code below assumes the DB is in the same folder/directory as the file that runs it):
import os.path

# Build the path relative to this file instead of the current working directory
db_name = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'Datalog')
Python looks for modules in sys.path, so you want to insert the directory containing your file into that path so Python can find it.
Here is an example:
import sys
sys.path.insert(0, r'C:\Users\Philip\Work\Bin')
where 0 is the position in sys.path: 0 means top, 1 means second from top, etc.
Obviously you would replace my directory with yours.
I am running into the dreaded MySQL "Commands out of sync" error when using a custom DB library and Celery.
The library is as follows:
import pymysql
import pymysql.cursors
from furl import furl
from flask import current_app
class LegacyDB:
    """Db
    Legacy Database connectivity library
    """
    def __init__(self, app):
        with app.app_context():
            self.rc = current_app.config['RAVEN']
            self.logger = current_app.logger
            self.data = {}
            # setup Mysql
            try:
                uri = furl(current_app.config['DBCX'])
                self.dbcx = pymysql.connect(
                    host=uri.host,
                    user=uri.username,
                    passwd=uri.password,
                    db=str(uri.path.segments[0]),
                    port=int(uri.port),
                    cursorclass=pymysql.cursors.DictCursor
                )
            except:
                self.rc.captureException()

    def query(self, sql, params=None, TTL=36):
        # INPUT 1 : SQL query
        # INPUT 2 : Parameters
        # INPUT 3 : Time To Live
        # OUTPUT  : Array of result
        # check that we're still connected to the
        # database before we fire off the query
        try:
            db_cursor = self.dbcx.cursor()
            if params:
                self.logger.debug("%s : %s" % (sql, params))
                db_cursor.execute(sql, params)
                self.dbcx.commit()
            else:
                self.logger.debug("%s" % sql)
                db_cursor.execute(sql)
            self.data = db_cursor.fetchall()
            if self.data == None:
                self.data = {}
            db_cursor.close()
        except Exception as ex:
            if ex[0] == "2006":
                db_cursor.close()
                self.connect()
                db_cursor = self.dbcx.cursor()
                if params:
                    db_cursor.execute(sql, params)
                    self.dbcx.commit()
                else:
                    db_cursor.execute(sql)
                self.data = db_cursor.fetchall()
                db_cursor.close()
            else:
                self.rc.captureException()
        return self.data
The purpose of the library is to work alongside SQLAlchemy while I migrate a legacy database schema from a C++-based system to a Python-based system.
All configuration is done via a Flask application, and the app.config['DBCX'] value reads the same as a SQLAlchemy connection string ("mysql://user:pass@host:port/dbname"), allowing me to switch over easily in future.
I have a number of tasks that run "INSERT" statements via Celery, all of which use this library. As you can imagine, the main reason for running Celery is to increase throughput on this application; however, I seem to be hitting an issue with threading in my library or the application, as after a while (around 500 processed messages) I see the following in the logs:
Stacktrace (most recent call last):
File "legacy/legacydb.py", line 49, in query
self.dbcx.commit()
File "pymysql/connections.py", line 662, in commit
self._read_ok_packet()
File "pymysql/connections.py", line 643, in _read_ok_packet
raise OperationalError(2014, "Command Out of Sync")
I'm obviously doing something wrong to hit this error; however, it doesn't seem to matter whether MySQL has autocommit enabled or disabled, or where I place my connection.commit() call.
If I leave out the connection.commit() then I don't get anything inserted into the database.
I've recently moved from MySQLdb to PyMySQL and the occurrences appear to be less frequent; however, given that these are simple "INSERT" commands and not complicated SELECTs (there aren't even any foreign key constraints on this database!), I'm struggling to work out where the issue is.
As things stand at present, I am unable to use executemany as I cannot prepare the statements in advance (I am pulling data from a "firehose" message queue and storing it locally for later processing).
First of all, make sure that each Celery worker/thread uses its own connection(s), since:
>>> pymysql.threadsafety
1
Which means: "threads may share the module but not connections".
Is the init called once, or per-worker? If only once, you need to move the initialisation.
How about lazily initialising the connection in a thread-local variable the first time query is called?
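A minimal sketch of that idea, reusing the furl-parsed DBCX URI from the question (the _local and _get_connection names are illustrative, not part of the original library):
import threading

import pymysql
import pymysql.cursors

_local = threading.local()

def _get_connection(uri):
    # Lazily create one connection per thread the first time it is needed
    if getattr(_local, "dbcx", None) is None:
        _local.dbcx = pymysql.connect(
            host=uri.host,
            user=uri.username,
            passwd=uri.password,
            db=str(uri.path.segments[0]),
            port=int(uri.port),
            cursorclass=pymysql.cursors.DictCursor,
        )
    return _local.dbcx
query() would then obtain its cursor from _get_connection(uri) instead of a shared self.dbcx, so no two threads ever share a connection.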
I have a question about Celery.
I am calling a task function and I want it to return a list of instances of a specific class.
But when I do this I get an error on my server:
No module named 'modelsgert'
modelsgert is the name of the Python file where my class is defined.
I have imported the same file into the project on my server, but it still isn't recognised there. Probably a reference to the file's location on the Celery server is being sent.
Code on the Celery server:
from celery import Celery
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from modelsgert import (
    Diagnose,
    Procedur,
    DBSession,
    Data
)
import time

celery = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')

@celery.task()
def test_task(data):
    diagnose = DBSession.query(Diagnose)
    listofdiagnoses = []
    listofdiagnoses.append(diagnose[0])
    listofdiagnoses.append(diagnose[1])
    return listofdiagnoses
Code on the Pyramid server:
celery = Celery(backend='amqp', broker='amqp://guest@192.168.1.5:5672//')
celery.conf.update(
    CELERY_RESULT_BACKEND='amqp',
    BROKER_HOST='192.168.1.5',
    BROKER_USER='kristof',
    BROKER_PASSWORD='bob',
    BROKER_VHOST='myvhost',
    BROKER_PORT=5672
)
task = celery.send_task('tasks.test_task', ["kakker"])
TheData = task.get()
Is there a proper way to fix this problem?
Are you certain that modelsgert is available when you see that error?
Celery uses pickle by default, and that module indeed stores the name of the module and class (together with the data contained in the instance); when loading the data again, the module and class are looked up dynamically. This stage fails because modelsgert cannot be imported.
I must note that you are trying to send SQLAlchemy objects here, and that is very rarely a good idea. The objects are tied to a specific session, and when you unpickle the objects that session will no longer be there. Moreover, the objects represent database state, and the database state could easily have changed by the time you load the objects again.
You should instead send object identifiers and query for the objects again on the other side. Instead of a list of Diagnose objects, send the primary keys:
listofdiagnoses = [d.id for d in diagnose]
On the other side, you'd then use those identifiers to load your Diagnose objects again from the database.
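A short sketch of that receiving side, assuming the Pyramid app can import its own DBSession and Diagnose, and that the primary key column is named id (both assumptions mirror the snippets above):
# On the Pyramid side, after the task returns a list of primary keys
ids = task.get()
diagnoses = (
    DBSession.query(Diagnose)
    .filter(Diagnose.id.in_(ids))
    .all()
)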
Hey, I'm trying to work with /remote_api on an App Engine app (running app-engine-patch for Django) that I have in production.
I want to select a few rows from my online production app locally.
I can't seem to manage it: everything authenticates fine and it doesn't break on imports, but when I try to fetch something it just doesn't print anything.
I placed the test script inside my local app directory.
#!/usr/bin/env python
#
import os
import sys

# Hardwire in appengine modules to PYTHONPATH
# or use wrapper to do it more elegantly
appengine_dirs = ['myworkingpath']
sys.path.extend(appengine_dirs)

# Add your models to path
my_root_dir = os.path.abspath(os.path.dirname(__file__))
sys.path.insert(0, my_root_dir)

from google.appengine.ext import db
from google.appengine.ext.remote_api import remote_api_stub
import getpass

APP_NAME = 'Myappname'
os.environ['AUTH_DOMAIN'] = 'gmail.com'
os.environ['USER_EMAIL'] = 'myuser@gmail.com'

def auth_func():
    return (raw_input('Username:'), getpass.getpass('Password:'))

# Use local dev server by passing in as parameter:
#   servername='localhost:8080'
# Otherwise, remote_api assumes you are targeting APP_NAME.appspot.com
remote_api_stub.ConfigureRemoteDatastore(APP_NAME, '/remote_api', auth_func)

# Do stuff like your code was running on App Engine
from channel.models import Channel, Channel2Operator

myresults = mymodel.all().fetch(10)
for result in myresults:
    print result.key()
It doesn't give any error or print anything, and neither does the remote_api console example Google provides. When I print myresults I get [].
App Engine patch monkeypatches the ext.db module, mutilating the kind names. You need to make sure you import App Engine patch from your script, to give it the opportunity to mangle things as per usual, or you won't see any data returned.