from sqlobject import *

class Data(SQLObject):
    ts = TimeCol()
    val = FloatCol()

Data.select().count()
Fails with:
AttributeError: No connection has been defined for this thread or process
How do I get the SQL which would be generated, without declaring a connection?
It's impossible, for two reasons. First, .count() doesn't just generate a query, it also executes it, so it requires not only a connection but also a database and a populated table. Second, different backends generate different queries (especially around string quoting), so a connection is required to render a query object to a string.
To generate the query string for an accumulator function you need to repeat the code that generates the query. So the full solution to your question is:
#! /usr/bin/env python

from sqlobject import *

__connection__ = "sqlite:/:memory:?debug=1"

class Data(SQLObject):
    ts = TimeCol()
    val = FloatCol()

print(Data.select().queryForSelect().newItems("COUNT(*)"))
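If you want the SQL as a string rather than as debug output, a variation of the same idea is to render the query object through a throwaway in-memory SQLite connection (a sketch, assuming SQLObject's connectionForURI and the connection's sqlrepr helper):

from sqlobject import connectionForURI

# Render the query without touching a real table; note the result is the
# SQLite rendering -- another backend may quote differently, which is
# exactly the second reason above.
conn = connectionForURI("sqlite:/:memory:")
query = Data.select().queryForSelect().newItems("COUNT(*)")
print(conn.sqlrepr(query))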
Background on the problem: I've got MSSQL running in a Docker container, and I'm executing SQL scripts from a file within the container using Python/SQLAlchemy. I know this works because I've been executing other scripts without any problem. But one particular SQL script executes without any result, and after careful elimination this seems to be caused by a single block of commented-out code. Again, this doesn't seem to cause an issue elsewhere, and the SQL script runs faultlessly when executed directly in MSSQL.
The code that calls the SQL script:
import re
import sqlalchemy as sa

with engine.connect() as con:
    with open(file_loc) as file:
        a = file.read()
    a = re.split('\nGO|\nGO\n|\ngo|\ngo\n', a)  # split the script into batches at GO
    for q in a:
        con.execute(sa.text(q))
The code that causes the issue:
-- create new PMT() function
create function [financials].[pmt] (
    @r decimal(38,16)
) returns decimal(24,4)
as
begin
    declare @pmt decimal(24,4);
    ...
    return @pmt;
end;
GO
------ present value <<<<ANY COMMENT HERE PREVENTS THE FUNCTIONS FROM BEING CREATED
create function [financials].[pmt_PV] (
    @pmt decimal(38,16) -- <<removed trailing comma here
) returns decimal(24,4)
as
begin
    declare @pmt_pv decimal(24,4);
    ...
    return @pmt_pv;
end;
GO
Any comment in the space marked in the middle prevents the functions from being created.
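As an aside, the split pattern above is fragile: the '\nGO\n' alternative can never match because '\nGO' is tried first, and a mixed-case 'Go' is not matched at all. A more defensive splitter (a sketch, assuming GO always sits on its own line, as the batch separator must) also drops batches that are empty after the split:

import re

def split_batches(script):
    # GO is a client-side batch separator, not T-SQL, and must be alone on a line.
    batches = re.split(r'^\s*GO\s*$', script, flags=re.IGNORECASE | re.MULTILINE)
    # Drop batches that are empty or whitespace-only after splitting.
    return [b for b in batches if b.strip()]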
EDIT: I've now tried pyodbc as well as pymysql, and have the same result (zero rows returned when calling a stored procedure). Forgot to mention before that this is on Ubuntu 16.04.2 LTS using the MySQL ODBC 5.3 Driver (libmyodbc5w.so).
I'm using pymysql (0.7.11) on Python 3.5.2, executing various stored procedures against a MySQL 5.6.10 database. I'm running into a strange and inconsistent issue where I'm occasionally getting zero results returned, though I can immediately re-run the exact same code and get the number of rows I expect.
The code is pretty straightforward...
from collections import OrderedDict
import pymysql
from pymysql.cursors import DictCursorMixin, Cursor

class OrderedDictCursor(DictCursorMixin, Cursor):
    dict_type = OrderedDict

try:
    connection = pymysql.connect(
        host=my_server,
        user=my_user,
        password=my_password,
        db=my_database,
        connect_timeout=60,
        cursorclass=pymysql.cursors.DictCursor
    )

    param1 = '2017-08-23 00:00:00'
    param2 = '2017-08-24 00:00:00'
    proc_args = tuple([param1, param2])
    proc = 'my_proc_name'

    cursor = connection.cursor(OrderedDictCursor)
    cursor.callproc(proc, proc_args)
    result = cursor.fetchall()
except Exception as e:
    print('Error: ', e)
finally:
    if not isinstance(connection, str):
        connection.close()
More often than not, it works just fine. But every once in a while, it completes almost instantly but with zero rows in the result set. No error that I can see or anything, just nothing... Run it again, and no problem.
Turns out that the problem had nothing to do with pymysql, odbc, etc., but rather was a problem with the order in which the parameters were passed to the stored procedure.
On my desktop, I was using Python 3.6 and things worked just fine. I didn't realize, though, that one of the changes between 3.5.2 and 3.6 affected the ordering of items added to a dictionary object via json.loads.
The parameters being passed were coming from a dict object originally populated via json.loads... since they were unordered pre-3.6, running the code would occasionally mean that my starttime and endtime parameters were passed to the MySQL stored procedure backwards. Hence, zero rows returned.
Once I realized that was the issue, fixing it was just a matter of adding object_pairs_hook=OrderedDict to the json.loads part.
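For reference, a minimal sketch of that fix; the JSON payload and key names here are illustrative, not the original data:

import json
from collections import OrderedDict

raw = '{"starttime": "2017-08-23 00:00:00", "endtime": "2017-08-24 00:00:00"}'

# object_pairs_hook=OrderedDict preserves the key order from the JSON text,
# so the positional proc_args tuple is built deterministically even on
# Python 3.5, where plain dicts are unordered.
params = json.loads(raw, object_pairs_hook=OrderedDict)
proc_args = tuple(params.values())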
I have created a stored procedure, usuarios_get. I tested it in the Oracle console and it works fine. This is the code of the stored procedure:
create or replace PROCEDURE USUARIOS_GET(
    text_search in VARCHAR2,
    usuarios_list out sys_refcursor
)
AS
--Variables
BEGIN
    open usuarios_list for select * from USUARIO;
END USUARIOS_GET;
The Python code is this:
with connection.cursor() as cursor:
    listado = cursor.var(cx_Oracle.CURSOR)
    l_query = cursor.callproc('usuarios_get', ('', listado))  # this line raises the error
    l_results = l_query[1]
The error is the following:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I've also tried another stored procedure with an out parameter of NUMBER type, changing the Python code to listado = cursor.var(cx_Oracle.NUMBER), and I get the same error:
NotSupportedError: Variable_TypeByValue(): unhandled data type VariableWrapper
I am working with:

Python 2.7.12
Django 1.10.4
cx_Oracle 5.2.1
Oracle 12c

Can anyone help me with this?
Thanks
The problem is that Django's wrapper is incomplete. As such, you need to make sure you have a "raw" cx_Oracle cursor instead. You can do that using the following code:
django_cursor = connection.cursor()             # Django's wrapper cursor
raw_cursor = django_cursor.connection.cursor()  # underlying "raw" cx_Oracle cursor
out_arg = raw_cursor.var(int)                   # or raw_cursor.var(float)
raw_cursor.callproc("<procedure_name>", (in_arg, out_arg))
out_val = out_arg.getvalue()
Then use the "raw" cursor to create the variable and call the stored procedure.
Looking at the definition of the variable wrapper in Django, it also looks like you can access the "var" property on the wrapper and pass that directly to the stored procedure instead -- but I don't know if that is a better long-term option or not!
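Applied to the usuarios_get procedure from the question, the same approach would look roughly like this (a sketch, not tested against that exact setup):

import cx_Oracle
from django.db import connection

django_cursor = connection.cursor()
raw_cursor = django_cursor.connection.cursor()  # raw cx_Oracle cursor

# Create the out variable on the raw cursor, typed as a ref cursor.
listado = raw_cursor.var(cx_Oracle.CURSOR)
raw_cursor.callproc('usuarios_get', ('', listado))
for row in listado.getvalue():  # iterate the rows of the returned cursor
    print(row)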
Anthony's solution works for me with Django 2.2 and Oracle 12c. Thanks! Couldn't find this solution anywhere else on the web.
import cx_Oracle

dcursor = connection.cursor()
cursor = dcursor.connection.cursor()
out_arg = cursor.var(cx_Oracle.NUMBER)
ret = cursor.callproc("<procedure_name>", (in_arg, out_arg))
I'm building the skeleton of a larger application which relies on SQL Server to do much of the heavy lifting and passes data back to Pandas for consumption by the user/insertion into flat files or Excel. Thus far my code is able to insert a stored procedure and function into a database and execute both without issue. However when I try to run the code a second time, in the same database, the drop commands don't seem to work. Here are the various files and flow of code through them.
First, proc_drop.sql, which is used to store the SQL drop commands.
/*
IF EXISTS(SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'proc_create_tractor_test'))
    DROP PROCEDURE [dbo].[proc_create_tractor_test]
GO
IF EXISTS(SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'fn_mr_parse'))
    DROP FUNCTION [dbo].[fn_mr_parse]
GO
*/
IF OBJECT_ID('[proc_create_tractor_test]') IS NOT NULL DROP PROCEDURE [proc_create_tractor_test]
IF OBJECT_ID('[fn_mr_parse]') IS NOT NULL DROP FUNCTION [fn_mr_parse]
I realize there are two kinds of drop statements in the file. I have tested a number of different iterations, and none of the drop statements seem to work when executed by Python/SQLAlchemy, but all work on their own when executed in SQL Server Management Studio.
Next, the helper.py file, in which I am storing helper functions. My drop SQL was originally fed into the "deploy_procedures" function as a file and executed in the body of that function. I've since isolated the drop-SQL reading/executing into another function, for testing purposes only.
def clean_databases(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
                print code
                engine.execute(code)

def deploy_procedures(engines, procedures):
    for engine in engines:
        for proc in procedures:
            with open(proc, "r") as procfile:
                code = procfile.read()
                engine.execute(code)
Next, proc_create_tractor_test.sql, which is executed by the code and creates an associated stored procedure in the database. For brevity I've added the top portion of that code only:
CREATE PROCEDURE [dbo].[proc_create_tractor_test]
    @meta_risks varchar(256)
AS
BEGIN
Finally, the major file piecing it all together is below. You'll notice that I create SQLAlchemy engines, which are passed to the helper functions after being initialized with connection information. The engines are passed as a list, as are the SQL procedure files I referenced, and each helper function simply iterates through each engine and procedure, executing one at a time. I'm also using pymssql as the driver to connect, which is working fine.
So it is the "deploy_procedures" function which is crashing when trying to create the function or stored procedure the second time the code is run. And as far as I can tell, this is because the drop SQL at the top of my explanation is never run.
Can anyone shed some light on what the issue is or whether I am missing something totally obvious?
run_tractor.py:
import pandas
import pandas.io.sql as pdsql
import sqlalchemy as sqla
import xlwings
import helper as hlp
# Last import contains user-defined functions

# ---------- SERVER VARIABLES ---------- #
server = r'DEV\MSSQLSERVER2K12'
database = 'injection_test'
username = 'username'
password = 'pwd'

# ---------- CONFIGURING [Only change base path if relevant, not file names] ---------- #
base_path = r'C:\Code\code\Tractor Analysis Basic Code'
procedure_drop = base_path + r'\proc_drop.sql'
procedure_create_curves = base_path + r'\proc_create_tractor_test.sql'
procedure_create_mr_function = base_path + r'\create_mr_parse_function.sql'

procedures = [procedure_create_curves, procedure_create_mr_function]
del_procedures = [procedure_drop]

engine_analysis = sqla.create_engine('mssql+pymssql://{2}:{3}@{0}/{1}'.format(server, database, username, password))
engine_analysis.connect()
engines = [engine_analysis]

hlp.clean_databases(engines, del_procedures)
hlp.deploy_procedures(engines, procedures)
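One thing worth ruling out, stated here only as an assumption about the SQLAlchemy version in use: connectionless engine.execute() calls decide whether to autocommit by inspecting the statement text, and a batch that begins with IF rather than DROP or CREATE may never be committed. Wrapping each script in an explicit transaction sidesteps that question entirely; a sketch of a combined helper:

def run_scripts(engines, scripts):
    # Variant of clean_databases/deploy_procedures with explicit commits.
    for engine in engines:
        for script in scripts:
            with open(script, "r") as f:
                code = f.read()
            # engine.begin() opens a transaction that commits on success
            # and rolls back on error, regardless of statement text.
            with engine.begin() as con:
                con.execute(code)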
Good Day.
I have faced the following issue using pymongo==2.1.1 on Python 2.7 with MongoDB 2.4.8.
I have tried to find a solution using Google and Stack Overflow but failed.
What's the issue?
I have the following function:
from bson.code import Code
from pymongo import Connection

def read(groupped_by=None):
    reducer = Code("""
        function(obj, prev){
            prev.count++;
        }
    """)
    client = Connection('localhost', 27017)
    db = client.urlstats_database
    results = db.http_requests.group(key={k: 1 for k in groupped_by},
                                     condition={},
                                     initial={"count": 0},
                                     reduce=reducer)
    groupped_by = list(groupped_by) + ['count']
    result = [tuple(res[col] for col in groupped_by) for res in results]
    return sorted(result)
Then I am trying to write a test for this function:
class UrlstatsViewsTestCase(TestCase):
    test_data = {'data%s' % i: 'data%s' % i for i in range(6)}

    def test_one_criterium(self):
        client = Connection('localhost', 27017)
        db = client.urlstats_database
        for column in self.test_data:
            db.http_requests.remove()
            db.http_requests.insert(self.test_data)
            response = read([column])
            self.assertEqual(response, [(self.test_data[column], 1)])
This test sometimes fails, as I understand it, because of latency. As far as I can see, the response still contains data that should have been removed.
If I add a delay after the remove, the test passes all the time.
Is there any proper way to test such functionality?
Thanks in advance.
A few questions regarding your environment / code:
What version of pymongo are you using?
If you are using any of the newer versions that have MongoClient, is there any specific reason you are using Connection instead of MongoClient?
The reason I ask the second question is that Connection provides fire-and-forget functionality for the operations you are doing, while MongoClient works in safe mode by default and is also the preferred approach since MongoDB 2.2+.
The behaviour that you see is entirely consistent with using Connection instead of MongoClient. With Connection, your remove is sent to the server, and the moment it is sent from the client side, your program execution moves to the next step, which is to add new entries. Depending on latency and how long the remove operation takes to complete, these operations will conflict, as you have already noticed in your test case.
Can you change to use MongoClient and see if that helps you with your test code?
Additional Ref: pymongo: MongoClient or Connection
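For illustration, a minimal sketch of the test setup using MongoClient (assuming pymongo >= 2.4, where MongoClient was introduced):

from pymongo import MongoClient

client = MongoClient('localhost', 27017)  # acknowledged writes by default
db = client.urlstats_database

db.http_requests.remove()           # blocks until the server acknowledges
db.http_requests.insert(test_data)  # so the insert no longer races the remove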
Thanks All.
There is no MongoClient class in the version of pymongo I use, so I was forced to find out what exactly differs.
As soon as I upgrade to 2.2+ I will test whether everything is OK with MongoClient. But as for the Connection class, one can use a write concern to control this latency.
In older versions, one should create the connection with the corresponding arguments.
I have tried these two: journal=True, safe=True (the journal write concern can't be used in non-safe mode).
j or journal: Block until write operations have been committed to the journal. Ignored if the server is running without journaling. Implies safe=True.
I think this makes performance worse, but for automated tests it should be OK.
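In code, the workaround for the old Connection class looks like this (a sketch for pymongo 2.1, using the write-concern keyword arguments quoted above):

from pymongo import Connection

# safe=True makes each write wait for a server acknowledgement;
# journal=True additionally blocks until the write reaches the journal.
connection = Connection('localhost', 27017, safe=True, journal=True)
db = connection.urlstats_database
db.http_requests.remove()           # now acknowledged before returning
db.http_requests.insert(test_data)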