I have a problem with data type conversion.
Using Django and the pypyodbc library, I'm trying to receive data from an external Oracle DB and save it into the local app DB.
```
import pypyodbc

def get_data(request):
    conn = pypyodbc.connect("DSN=...")
    cursor = conn.cursor()
    cursor.execute("SELECT value FROM table")
    data = cursor.fetchall()
    for row in data:
        d = External_Data(first_val = row[0])
        d.save()
```
The output of the value column is "0,2", and I received this error message:

```
could not convert string to float: b',02'
```
When I changed the SQL statement to:

```
SELECT cast(value as numeric(10,2) from table)
```
I received this error message:

```
[<class 'decimal.ConversionSyntax'>]
```

How can I convert that data to a float and save it? I use DecimalField(max_digits=10, decimal_places=2) as the model field.
I think this problem comes from an implicit type conversion: the row[0] variable in the for loop of your get_data function appears to be a bytes value.
So first of all, I recommend checking row[0]'s data type with print(type(row[0])).
If the result is bytes, you can do something like this:
```
import pypyodbc

def get_data(request):
    conn = pypyodbc.connect("DSN=...")
    cursor = conn.cursor()
    cursor.execute("SELECT value FROM table")
    data = cursor.fetchall()
    for row in data:
        # decode the bytes value and replace the decimal comma before converting
        value = float(row[0].decode().replace(',', '.'))
        d = External_Data(first_val=value)
        d.save()
```
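Since first_val is a DecimalField(max_digits=10, decimal_places=2), it may also be worth converting to decimal.Decimal instead of float to avoid binary floating-point rounding. Inside the same loop, a minimal sketch would be:

```
from decimal import Decimal

# assumes row[0] is a bytes value with a decimal comma, e.g. b'0,2'
value = Decimal(row[0].decode().replace(',', '.'))   # -> Decimal('0.2')
d = External_Data(first_val=value)
d.save()
```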
Hi, I'm facing a TypeError saying a cx_Oracle object has no len(). Can you please help? (Error screenshot: [![enter image description here][1]][1])
```
query1 = "SELECT B.RMT_SITE_NM, A.CO_APPL_PRFL_ID, A.PRFL_ID FROM MIGRATION_TRACKING A, T_SFT_INIT_PRTCL B WHERE A.PRFL_ID=B.INIT_PRTCL_ID AND A.STATUS='Scheduled' AND A.PHASE='Phase 1' AND A.WAVE='Wave 1'"

cursor = connection()
ans = cursor.execute(query1)
if ans:
    for rows in range(len(ans)):
        name = str(ans[rows][0])
        co_id_table = cursor.execute(query2, (name))
        if co_id_table:
            co_id = co_id_table[0][17]
            data = cursor.execute(query3, (co_id))
            data = data[0]
            rndm_id = generate_id()
```
[1]: https://i.stack.imgur.com/YsnMs.jpg
This is the incorrect way of iterating over rows. You should instead do this:

```
for row in cursor.execute(query1):
    name = str(row[0])
    ...
```
If you prefer to get all of the rows up front (since you are going to use the same cursor to execute other queries), then you can do this:
```
cursor.execute(query1)
rows = cursor.fetchall()
```
When the executed statement is a query, the value returned from cursor.execute() is simply the cursor itself. Since the cursor implements the iteration protocol, you can also do this:
```
cursor.execute(query1)
rows = list(cursor)
```
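Applied to the code in the question, a rough sketch might look like the following (this assumes query2 and query3 each take a single bind variable, as the original snippet suggests; the column positions are kept from the original):

```
cursor.execute(query1)
rows = cursor.fetchall()           # fetch everything up front, since the cursor is reused below

for row in rows:
    name = str(row[0])
    cursor.execute(query2, (name,))
    co_id_table = cursor.fetchall()
    if co_id_table:
        co_id = co_id_table[0][17]
        cursor.execute(query3, (co_id,))
        data = cursor.fetchone()
        rndm_id = generate_id()
```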
I am trying to calculate the mode of each row and store the value in the juiz (judge) column; however, it updates only the first record and then leaves the loop.
PS: Analisador is my table and resultado_2 is my database.
```
import sqlite3
import statistics

conn = sqlite3.connect("resultado_2.db")
cursor = conn.cursor()

data = cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")

for x in data:
    list = [x[0], x[1], x[2], x[3], x[4], x[5], x[6]]
    mode = statistics.mode(list)
    try:
        cursor.execute(f"UPDATE Analisador SET juiz={mode} where id={row[6]}")  # row[6] == id
        conn.commit()
    except:
        print("Error")

conn.close()
```
You have to fetch your records after the SQL is executed:
cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
data = cursor.fetchall()
That type of SQL query (a SELECT) is different from the UPDATE you also use in your code, which doesn't need an additional fetch step after it is executed.
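For illustration, here is a minimal sketch of the whole loop with the rows fetched first. It assumes the id should come from x[6] (the question's code refers to an undefined row[6]) and uses a parameterized UPDATE; the list of columns fed into the mode is kept exactly as in the question:

```
import sqlite3
import statistics

conn = sqlite3.connect("resultado_2.db")
cursor = conn.cursor()

cursor.execute("SELECT Bow, FastText, Glove, Wordvec, Python, juiz, id FROM Analisador")
rows = cursor.fetchall()          # materialise the SELECT result before issuing UPDATEs

for x in rows:
    mode = statistics.mode([x[0], x[1], x[2], x[3], x[4], x[5], x[6]])  # same column list as in the question
    cursor.execute("UPDATE Analisador SET juiz = ? WHERE id = ?", (mode, x[6]))  # x[6] == id

conn.commit()
conn.close()
```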
I am trying to make a simple AWS Lambda function that gets a few rows from Amazon RDS (MySQL) and returns them in JSON format.
If I try to append the object instance, I get an error that an object of type XXX is not JSON serializable. If I do something like the code below, I get only the latest entry from the db. (This is unlike what is shown in https://hackersandslackers.com/create-a-rest-api-endpoint-using-aws-lambda/.)
```
def save_events(event):
    result = []
    conn = pymysql.connect(rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
    with conn:
        cur = conn.cursor()
        cur.execute("select * from tblEmployees")
        rows = cur.fetchall()
        for row in rows:
            employee = Employee(row)
            data['Id'] = employee.id
            data['Name'] = employee.name
            result.append(data)
    return result

def main(event, context):
    data = save_events(event)
    return {
        "StatusCode": 200,
        "Employee": data
    }
```
I understand that the content of the variable 'data' changes at runtime and that this affects result.append(). I have 4 entries in the table tblEmployees. The code above puts 4 entries in the result, but all four entries are the same (and equal to the latest record in the db).
json.dumps() didn't work because the data is in Unicode format. I've already tried .toJSON() and byteify() and they didn't work.
Any help?
You should re-create the data dict on each iteration to avoid overwriting the old values:
```
for row in rows:
    employee = Employee(row)
    data = {"Id": employee.id, "Name": employee.name}
    result.append(data)
```
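For context, a minimal sketch of save_events with that fix applied (the Employee wrapper, connection details, and table name are taken from the question); because each entry is now a plain dict of simple values, the returned structure should also be JSON serializable:

```
def save_events(event):
    result = []
    conn = pymysql.connect(rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
    with conn:
        cur = conn.cursor()
        cur.execute("select * from tblEmployees")
        for row in cur.fetchall():
            employee = Employee(row)
            # build a fresh dict per row so every appended entry stays distinct
            result.append({"Id": employee.id, "Name": employee.name})
    return result
```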
I have a SQL Server database that displays a varbinary(max) column like this: 0x9406920691068F... I want to read it into Python (PyCharm) and get exactly the same kind of value.
However, it shows something like this instead:

```
[b'\x94\x06\x92\x06\x91\x06\x8f\x06\x8d..
```

How do I get the same numbers in Python? I am a beginner in Python, please help.
I copied the code from a previous post and it didn't work:
```
import pyodbc

def hexToString(binaryString):
    try:
        hashString = ["{0:0>2}".format(hex(b)[2:].upper()) for b in binaryString]
        return '0x' + "".join(hashString)
    except:
        return binaryString

query = """ select P from Access.table """

conn_str = (
    **** private database details # I don't copy on the page
)

cnxn = pyodbc.connect(conn_str)
cnxn.add_output_converter(pyodbc.SQL_BINARY, hexToString)
cursor = cnxn.cursor()

try:
    cursor.execute(query)
    row = cursor.fetchone()
except MySQLdb.error as err:
    print(err)
else:
    while row is not None:
        print(row)
        row = cursor.fetchone()
```
If the column's return type is varbinary(max), then you need to add the output converter function to handle SQL_VARBINARY, not SQL_BINARY:
```
cnxn.add_output_converter(pyodbc.SQL_VARBINARY, converter_function_name)
```
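Putting that together with the hexToString converter from the question, a minimal sketch might look like this (the query and connection string are the question's placeholders, not a tested setup):

```
import pyodbc

def hexToString(binaryString):
    if binaryString is None:
        return None
    # format each byte as two upper-case hex digits, prefixed with 0x,
    # to mirror how SQL Server displays varbinary values
    return '0x' + ''.join('{:02X}'.format(b) for b in binaryString)

cnxn = pyodbc.connect(conn_str)   # conn_str as defined in the question
cnxn.add_output_converter(pyodbc.SQL_VARBINARY, hexToString)

cursor = cnxn.cursor()
cursor.execute("select P from Access.table")
for row in cursor:
    print(row[0])                 # e.g. 0x9406920691068F...
```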
I have written the following snippet to import a CSV file into an MS SQL Server database, but it gives me an error. It is based on code written for SQLite in Python and changed for MSSQL.
```
import csv, pyodbc
import logging

def _get_col_datatypes(fin):
    dr = csv.DictReader(fin)    # comma is default delimiter
    fieldTypes = {}
    for entry in dr:
        feildslLeft = [f for f in dr.fieldnames if f not in fieldTypes.keys()]
        if not feildslLeft: break  # We're done
        for field in feildslLeft:
            data = entry[field]

            # Need data to decide
            if len(data) == 0:
                continue

            if data.isdigit():
                fieldTypes[field] = "INTEGER"
            else:
                fieldTypes[field] = "TEXT"
        # TODO: Currently there's no support for DATE in sqllite

    if len(feildslLeft) > 0:
        raise Exception("Failed to find all the columns data types - Maybe some are empty?")

    return fieldTypes

def escapingGenerator(f):
    for line in f:
        yield line.encode("ascii", "xmlcharrefreplace").decode("ascii")

def csvToDb(csvFile, outputToFile = False):
    # TODO: implement output to file

    with open(csvFile, mode='r') as fin:
        dt = _get_col_datatypes(fin)

        fin.seek(0)

        reader = csv.DictReader(fin)

        # Keep the order of the columns name just as in the CSV
        fields = reader.fieldnames

        cols = []

        # Set field and type
        for f in fields:
            cols.append("%s %s" % (f, dt[f]))

        # Generate create table statement:
        stmt = "CREATE TABLE ads (%s)" % ",".join(cols)

        con = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=sd;UID=Test;PWD=11')
        cur = con.cursor()
        cur.execute(stmt)

        fin.seek(0)

        reader = csv.reader(escapingGenerator(fin))

        # Generate insert statement:
        stmt = "INSERT INTO ads VALUES(%s);" % ','.join('?' * len(cols))

        cur.executemany(stmt, reader)

    con.commit()
    return con

csvToDb('Books.csv')
```
The error I am getting is:

```
pyodbc.DataError: ('22018', "[22018] [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting the varchar value 'a' to data type int. (245) (SQLExecDirectW)")
```

Please also suggest any other methods you think could dynamically import CSV or text files into an MSSQL database.
The error message
Conversion failed when converting the varchar value 'a' to data type int.
reveals that your code can be "fooled" into thinking that a column is an integer when it is really text, presumably because it only looks at the first row of data. Testing reveals that both
```
ID,txt1,txt2,int1
1,foo,123,3
2,bar,abc,4
```

and

```
"ID","txt1","txt2","int1"
1,"foo","123",3
2,"bar","abc",4
```
result in your code producing the CREATE TABLE statement:
```
CREATE TABLE ads (ID INTEGER,txt1 TEXT,txt2 INTEGER,int1 INTEGER)
```
which is wrong because the [txt2] column is not really INTEGER.
You could investigate tweaking your code to look at more than the first data row, as sketched below. (Microsoft's own import routines often default to the first eight rows when attempting to auto-detect data types.) You could also just import all columns as text and then convert them later in SQL Server.
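As an illustration of the first suggestion (not the original code, just a sketch), _get_col_datatypes could scan every row and only call a column INTEGER if all of its non-empty values are digits:

```
def _get_col_datatypes(fin):
    dr = csv.DictReader(fin)              # comma is default delimiter
    fieldTypes = {}
    for entry in dr:
        for field in dr.fieldnames:
            data = entry[field]
            if len(data) == 0:
                continue                  # empty cells don't decide anything
            if data.isdigit() and fieldTypes.get(field, "INTEGER") == "INTEGER":
                fieldTypes[field] = "INTEGER"
            else:
                fieldTypes[field] = "TEXT" # one non-numeric value demotes the column for good
    missing = [f for f in dr.fieldnames if f not in fieldTypes]
    if missing:
        raise Exception("Failed to find data types for columns: %s" % ", ".join(missing))
    return fieldTypes
```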
However, given that there must be hundreds – if not thousands – of examples out there for importing CSV data to SQL Server you should also consider doing a more exhaustive search for existing (debugged) code before you continue investing time and effort into "rolling your own solution".