UnicodeDecodeError while executing query or in jsonpickle - python

I post an arbitrary query to the server side, where it is executed and the result set is sent back to the client. A typical query looks like this:
select Наименование from sys_Атрибут where Наименование = 'Район'
As you can see, it contains non-Latin identifiers, and the query fails. However, if I write it like this
select Наименование AS attr from sys_Атрибут where Наименование = 'Район'
then it works. The server-side code looks like this:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
...
import pyodbc   # tried both of them
import pypyodbc #

def resultset(request):
    query = request.POST['query']
    query = u'{}'.format(query)
    cnx = pyodbc.connect("DRIVER=FreeTDS;SERVER=192.168.0.1;PORT=1433;"
                         "DATABASE=mydatabase;UID=sa;PWD=password;"
                         "TDS_Version=7.0;ClientCharset=UTF8;")
    cursor = cnx.cursor()
    cursor.execute(query.encode('utf-8'))
    columns = [desc[0] for desc in cursor.description]  # sometimes the error happens at this point
    data = []
    for row in cursor:
        data.append(dict(zip(columns, row)))
    output = '{items:'
    output += jsonpickle.encode(data)  # sometimes at this point
    output += '}'
    return HttpResponse(output)
The whole problem is with the names of the table fields. I guess that to solve it I should write the data.append(dict(zip(columns, row))) part in a different manner.

To state the obvious, you shouldn't be sending raw queries from the client to the server. Second, combining unicode_literals with explicit u"" strings is redundant. Third, building a unicode string and then encoding it to UTF-8 before execute is also suspect; pass the unicode string and let the driver encode it. I'd suggest [reading up on encodings to start](http://kunststube.net/encoding/).
To solve the actual issue that's likely being presented, the fault probably lies with the pyodbc/FreeTDS layer. What database are you connecting to, and have you considered using a different driver? If the database itself supports the query you're trying to execute (select unicode from table where field = 'value'), then it's likely the driver mangling it.
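Whatever driver you end up on, parameterised queries remove most of the manual encode/decode juggling, because the driver handles the string types for you. A minimal sketch of the pattern, using sqlite3 as a stand-in for the real database (table and column names borrowed from the question; pyodbc uses the same ? placeholder style):

```python
# -*- coding: utf-8 -*-
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "sys_Атрибут" ("Наименование" TEXT)')
conn.execute('INSERT INTO "sys_Атрибут" VALUES (?)', ('Район',))

# Pass the value as a parameter instead of splicing it into the SQL string.
rows = conn.execute(
    'SELECT "Наименование" FROM "sys_Атрибут" WHERE "Наименование" = ?',
    ('Район',),
).fetchall()
print(rows)  # [('Район',)]
```

Non-Latin identifiers still have to appear in the SQL text, but values go through parameters, so no manual .encode() call is needed.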

Related

How to set sqlite TEMP_STORE to 3 with python

I am using python and sqlite3 and would like to use the memory for temp files. According to the docs, https://www.sqlite.org/compile.html, SQLITE_TEMP_STORE=3 means "Always use memory". I can check the current value with:
import sqlite3

conn = sqlite3.connect("test.db")
cur = conn.cursor()
check_db = conn.execute(
    "select * from pragma_compile_options where compile_options like 'TEMP_STORE=%'"
).fetchall()
print("check_db:", check_db)
When I attempt to update:
sq_update = """ update pragma_compile_options set compile_options = 'TEMP_STORE=3' where compile_options like 'TEMP_STORE=1' """
conn.execute(sq_update)
conn.commit()
The following error is returned.
INTERNALERROR> sqlite3.OperationalError: table pragma_compile_options may not be modified
My goal is to tell sqlite to use memory for temp files.
You need to examine the pragma_compile_options output to see the compile-time value of TEMP_STORE; that table is read-only, which is why your UPDATE fails. SQLITE_TEMP_STORE is fixed at compile time, but when it was compiled as 1 or 2 you can still choose where temp files go at run time: use PRAGMA temp_store = 2 to force them into memory. See https://www.sqlite.org/pragma.html#pragma_temp_store.
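To illustrate the difference between the read-only compile-time option and the run-time pragma, here is a small sqlite3 sketch (the printed value assumes a default build where SQLITE_TEMP_STORE is 1 or 2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The compile-time options are read-only; an UPDATE against
# pragma_compile_options raises OperationalError.
opts = conn.execute(
    "SELECT compile_options FROM pragma_compile_options "
    "WHERE compile_options LIKE 'TEMP_STORE=%'"
).fetchall()
print(opts)

# Choose in-memory temp storage at run time (honoured only when the
# library was compiled with SQLITE_TEMP_STORE=1 or 2).
conn.execute("PRAGMA temp_store = 2")
value = conn.execute("PRAGMA temp_store").fetchone()[0]
print(value)  # 2 on a default build
```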

Python SQLite DB query returning nothing for each row

I'm using Python in TestComplete to conduct a db query, but the results seem to be empty strings and do not match the data in the table I queried. The file is an .s3db file; does that matter?
Using:
- TestComplete Version 14
- sqlite3 imported into the Python file
I've:
- Tried running the same query in SQLite; it returned the expected result
- Verified the connection is established with the correct db
import sqlite3

def getInfo():
    conn = sqlite3.connect(db)
    c = conn.cursor()
    try:
        c.execute('SELECT Column_Name FROM Table_Name')
        results = c.fetchall()
    except:
        Log.Error("Query execution failed")
    for x in results:
        Log.Message(x)
# Log.Message() works like a print statement in TestComplete.
Actual Output:
The program runs without errors, but the results come back as 15 lines of blank rows. 15 is the number of records within the table, so I know it's looking in the right place, but it seems like it's not identifying that there's information stored here.
Expected Output:
15 lines of data contained within the Column I specified in the query.
There is no error with sqlite3 and your DB operations. The issue is with Log.Message and what it expects as an argument. Within TestComplete, Log.Message requires variable arguments of type Variant, which can be any of the supported data types within TestComplete: String, Double/Real, Boolean, Date/Time, Object (i.e. TestComplete-recognised UI objects) and Integer.
Log.Message cannot accept arguments of the type returned by cursor.fetchall: each row is a tuple, not a Variant.
So you'd need to convert each row into a String, e.g.

for x in results:
    msg = str(x[0])  # the query selects a single column, so each row is a 1-tuple
    Log.Message(msg)
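The same point can be checked outside TestComplete with a plain sqlite3 session: fetchall returns a list of tuples, so each row has to be unpacked and converted before being handed to a string-only logger (the table and data here are made up for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Column_Name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("alpha",), ("beta",)])

rows = conn.execute("SELECT Column_Name FROM t").fetchall()
print(rows)        # [('alpha',), ('beta',)] - tuples, not strings

messages = [str(row[0]) for row in rows]
print(messages)    # ['alpha', 'beta'] - ready for Log.Message
```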

GET data using requests than insert into DB2

Currently I am trying to retrieve JSON data from an API and store it in a database. I am able to retrieve the JSON data as a list, and I am able to connect to and query the DB2 database. My issue is that I cannot figure out how to generate an INSERT statement for the data retrieved from the API. The application is only for short-term personal use, so SQL injection attacks are not a concern. So overall I need to generate an SQL INSERT statement from a list. My current code is below, with the API URL and info changed.
import ibm_db
import requests

ibm_db_conn = ibm_db.connect(
    "DATABASE=node1;HOSTNAME=100.100.100.100;PORT=50000;"
    "PROTOCOL=TCPIP;UID=username;PWD=password;", "", "")
api_request = requests.get("http://api-url/resource?api_key=123456",
                           auth=('user#api.com', 'password'))
api_code = api_request.status_code
api_data = api_request.json()
print(api_code)
print(api_data)
Depends on the format of the JSON returned, and on what your table looks like. Note that requests has already parsed the body for you: api_request.json() returns a Python object, so there is no need to run json.loads on top of it (use json.loads(api_request.text) only if you want to parse the raw body yourself):

api_data = api_request.json()

Now, you have a Python object you can access like normal:

api_data["key"][2]
for instance. You can iterate, slice, or do whatever else to extract the data you want. Say your JSON represents rows to be inserted:

rows = []
for row in api_data:
    values = ", ".join(repr(v) for v in row)
    rows.append("(%s)" % values)
query = "INSERT INTO <table> VALUES\n" + ",\n".join(rows)
Again, this will vary greatly depending on the format of your table and JSON, but that's the general idea I'd start with.
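Even for short-term personal use, a placeholder-based INSERT is less fragile than string formatting, since ibm_db (like other DB-API-style drivers) accepts ? parameter markers via prepare/execute. A sketch of building such a statement; the table name and row shape here are hypothetical:

```python
def build_insert(table, rows):
    """Build a multi-row INSERT with ? placeholders plus the flat parameter list."""
    placeholders = ",\n".join(
        "(" + ", ".join("?" for _ in row) + ")" for row in rows
    )
    sql = "INSERT INTO %s VALUES\n%s" % (table, placeholders)
    params = [value for row in rows for value in row]
    return sql, params

sql, params = build_insert("api_results", [(1, "a"), (2, "b")])
print(sql)     # INSERT INTO api_results VALUES\n(?, ?),\n(?, ?)
print(params)  # [1, 'a', 2, 'b']
```

The resulting sql and params would then go to the driver's prepare/execute calls, keeping the values out of the SQL text entirely.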

calling GeomFromText and other such functions using sqlalchemy core

I am working in python with a MySQL database. I have a table that uses the MySQL geometry extension, so I need to call the GeomFromText MySQL function during an update statement, something like this:
UPDATE myTable SET Location=GeomFromText('Point(39.0 55.0)') where id=1;
UPDATE myTable SET Location=GeomFromText('Point(39.0 55.0)') where id=2;
Originally, I was using the low-level MySQLdb library. I am switching to using the SQLAlchemy core library (I cannot use the SQLAlchemy ORM for speed and other reasons).
If I were using the lower-level MySQLdb library directly, I would do something like this:
import MySQLdb as mysql

commandTemplate = "UPDATE myTable SET Location=GeomFromText(%s) WHERE id=%s"
connection = mysql.connect(host="myhost", user="user", passwd="password", db="my_schema")
cursor = connection.cursor(mysql.cursors.DictCursor)
data = [
    ("Point(39.0 55.0)", 1),
    ("Point(39.0 55.0)", 2),
]
cursor.executemany(commandTemplate, data)
How do I get the equivalent functionality with SQLAlchemy core?
Without the GeomFromText, I think it would look something like this (thanks to this answer):

from sqlalchemy.sql.expression import bindparam

updateCommand = myTable.update().where(myTable.c.id == bindparam("idToChange"))
data = [
    {'idToChange': 1, 'Location': "Point(39.0 55.0)"},
    {'idToChange': 2, 'Location': "Point(39.0 55.0)"},
]
connection.execute(updateCommand, data)
I can't just textually replace "Point(39.0 55.0)" with "GeomFromText('Point(39.0 55.0)')", or I get:
Cannot get geometry object from data you send to the GEOMETRY field
The easiest way I have found so far involves the use of text (i.e. constructing TextClause objects), which lets you enter SQL syntax (almost) literally.
My example would work something like this:
from sqlalchemy import text
from sqlalchemy.sql.expression import bindparam

updateCommand = myTable.update().where(myTable.c.id == bindparam("idToChange"))
valuesDict = {
    'Location': text("GeomFromText(:_location)"),
}
updateCommand = updateCommand.values(**valuesDict)
data = [
    {'idToChange': 1, '_location': "Point(39.0 55.0)"},
    {'idToChange': 2, '_location': "Point(39.0 55.0)"},
]
# see the MySQL statement as it will be executed (except for the data)
print(updateCommand.compile(bind=connection))
# actually execute the statement
connection.execute(updateCommand, data)
The key points:
- calling updateCommand.values replaces the VALUES part of the SQL clause; only the columns you give as kwargs to this call will actually be put into the final UPDATE statement
- the values of the keyword arguments to updateCommand.values can either be a literal set of data (if you are only updating one row), or a string giving the names of keys in the data dictionary that will eventually be passed with the command to connection.execute. The format to use is ColumnName=":dictionaryKeyName".
- the values of the keyword arguments can also be the result of a text clause, which can itself contain field names in the same ":dictionaryKeyName" format
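If you'd rather avoid raw text(), SQLAlchemy's generic func construct can render the GeomFromText call for you. A sketch with a minimal table definition standing in for the real myTable (not tested against a live MySQL server):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, bindparam, func

metadata = MetaData()
myTable = Table(
    "myTable", metadata,
    Column("id", Integer, primary_key=True),
    Column("Location", String),
)

# func.<anything> renders as a function call in the emitted SQL.
updateCommand = (
    myTable.update()
    .where(myTable.c.id == bindparam("idToChange"))
    .values(Location=func.GeomFromText(bindparam("_location")))
)

# Compiles to something like:
#   UPDATE "myTable" SET "Location"=GeomFromText(:_location)
#   WHERE "myTable".id = :idToChange
print(str(updateCommand))
```

The same data list of {'idToChange': ..., '_location': ...} dictionaries can then be passed to connection.execute as before.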

SQL/Python School Program

I have a school project for a Python script which runs an SQL query and spits out results to a file... I have this so far and wanted some feedback on if this looks right or if I am way off (I'm using adodbapi). Thanks much!
import adodbapi

# Connect to the SQL DB
conn = adodbapi.connect("Provider=SQLDB;SERVER=x.x.x.x;User Id=user;Password=pass;DATABASE=db database;")
curs = conn.cursor()

# Execute SQL query test_file.sql"
query = 'test_file'
curs.execute("SELECT test_file")
rows = curs.fetchall()
for row in rows:
    print test_file | test_file.txt
conn.close()
# Execute SQL query test_file.sql" You are not executing an SQL query from a file. You are executing the SQL query "SELECT test_file".
"SELECT test_file" is not valid SQL syntax for the SELECT query. See this tutorial on the SELECT statement.
rows = curs.fetchall(); for row in rows: ... is not a nice way of iterating through all the results of a query.
If your query returns large number of rows, say a million rows, then all one million rows will have to be transferred from the database to your python program before the loop can start. This could be very slow if the database server is on a remote machine.
Your program will have to allocate memory for the entire data set before work begins. This could be hundreds of megabytes.
The more Pythonic way of doing this is to avoid loading the entire data set into memory unless you have to. Using sqlite3 I would write:
results = curs.execute("SELECT * FROM table_name")
for row in results:
    print(row)
This way only one row is loaded at a time.
print test_file | test_file.txt: the print statement does not support the pipe operator for writing to a file. (Python is not a Linux shell!) See Python File I/O.
Additionally, even if this syntax were correct, you have failed to put the file name in 'quote marks'. Without quotes, Python will interpret test_file.txt as the attribute txt of a variable called test_file. This will get you a NameError because there is no variable called test_file, or possibly an AttributeError.
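For completeness, writing query results to a file is done with ordinary Python file I/O; a minimal sketch, with a made-up rows list standing in for curs.fetchall():

```python
rows = [("alpha",), ("beta",)]  # stand-in for curs.fetchall()

# Open the output file by its (quoted!) name and write one row per line.
with open("test_file.txt", "w") as f:
    for row in rows:
        f.write(str(row) + "\n")

with open("test_file.txt") as f:
    print(f.read())
```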
If you want to test your code without having to connect to a network database, then use the sqlite3 module. This is a built-in Python library that provides a database you can drive through the same DB-API style of interface as adodbapi.
import sqlite3

db_conn = sqlite3.connect(":memory:")  # connect to a temporary in-memory database
db_conn.execute("CREATE TABLE colours ( Name TEXT, Red INT, Green INT, Blue INT )")
db_conn.execute("INSERT INTO colours VALUES (?,?,?,?)", ('gray', 128, 128, 128))
db_conn.execute("INSERT INTO colours VALUES (?,?,?,?)", ('blue', 0, 0, 255))
results = db_conn.execute("SELECT * FROM colours")
for row in results:
    print(row)
In the future, please try running your code, or at least test that individual lines do what you expect. Trying print test_file | test_file.txt in an interpreter would have given you an immediate error to work from.
