Why am I getting 'function upper(bytea)' error in python 3? - python

I am currently converting my Python 2 project to Python 3, and during the conversion I have run into the error below.
Partner.objects.filter(name__iexact = name_kv).count()
When I run the above query in Python 2 it works perfectly and returns '0', i.e. an empty queryset.
When I run the same query in Python 3 it raises the error below.
django.db.utils.ProgrammingError: function upper(bytea) does not exist
LINE 1: ...alse AND UPPER("partners_partner"."name"::text) = UPPER('\x4...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I have searched a lot online and through other SO questions, but I have not been able to find a solution.
I figured out that it must be a Python version problem; I get the above error even when my ORM query matches no records.

Try converting your variable name_kv to a string using str(name_kv).
Update your query to Partner.objects.filter(name__iexact=str(name_kv)).count().
You are getting the error because the variable contains byte data; under Python 3 a bytes value is sent to PostgreSQL as bytea, which UPPER() cannot operate on, so converting the byte data to a string solves the problem.
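A minimal sketch of the underlying type issue (the value b'John' is illustrative): in Python 3, bytes and str are distinct types, and a bytes parameter reaches PostgreSQL as bytea. Note that if name_kv really is bytes, decode() is safer than str(), because str() on a bytes object produces its repr rather than the text:

```python
name_kv = b"John"  # e.g. read from a file or socket without decoding

assert isinstance(name_kv, bytes)      # bytes, not str, in Python 3

# str() on bytes gives the repr, including the b'' wrapper:
assert str(name_kv) == "b'John'"

# decode() recovers the actual text:
assert name_kv.decode("utf-8") == "John"
```

With the decoded value, the ORM sends a text parameter, and the generated UPPER(...) comparison type-checks on the PostgreSQL side.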

Related

Confused about <type 'DBNull'> and empty strings

I am really confused and hope someone can help.
I am working on a program that retrieves records from a SQL Server database, where some of the fields can come back as null. One of the fields is named LaserName. Looking at the Locals window, the LaserName variable appears to be just an empty string.
But, when I use this code:
LaserName != ''
The code simply bypasses it as if the variable were not empty. When I looked at the type of that variable:
print(type(LaserName))
...the type returned <type 'DBNull'>. How do I check for DBNull in Python? Or is there a more elegant way to look for NULL values returned from a database?
Converting the type to a string seems to have alleviated my problem:
if "DBNull" in str(type(wgtper))
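A slightly sturdier variant of the same idea (the function names are my own, and this assumes the DBNull value comes from a .NET data provider surfaced through a Python bridge such as pythonnet): compare the type name directly instead of substring-matching the whole repr:

```python
def is_dbnull(value):
    # System.DBNull surfaces with the type name 'DBNull'; checking the
    # name avoids importing any .NET namespaces on the Python side.
    return type(value).__name__ == "DBNull"

def as_text(value, default=""):
    # Coerce DBNull (and None) to a default so comparisons like != '' behave.
    return default if value is None or is_dbnull(value) else value
```

With this, a check like as_text(LaserName) != '' treats NULL rows and empty-string rows the same way.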

values passed to the '_id' (mongodb) is not a valid ObjectId Pymongo

I have a python script that receive (from a node.js script) an _id for a mongodb document as an argument. Using that value I'm trying to query the db and retrieve a document.
However, when I try to run the script it throws an error saying
"'xxxxxxxxxx' is not a valid ObjectId, it must be a 12-byte input or a 24-character hex string".
My script, where the error occurs:
result = db.req.find_one({"_id": ObjectId(sys.argv[1])})
When I check the type() of sys.argv[1] it says str. I thought wrapping the string in ObjectId() would do the trick.
Value of sys.argv[1] when printed: '"5902fbdd4d2f430dfe2dded4"'
Does anyone know what's causing the issue?
Thanks in advance.
Resolved:
I removed the JSON.stringify() call from the node.js script that passes the value. All seems fine now.
MongoDB stores the "_id" field in ObjectId format.
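The printed value shows the problem: JSON.stringify() wrapped the id in literal double quotes, so ObjectId() received a 26-character string rather than a 24-character hex string. A small sketch of the mismatch, plus the quote-stripping workaround you could apply on the Python side if the caller cannot be changed (ObjectId itself is omitted here, but it accepts exactly this kind of 24-character hex input):

```python
raw = '"5902fbdd4d2f430dfe2dded4"'  # what JSON.stringify() passed in

assert len(raw) == 26          # quotes included: rejected by ObjectId()

hex_id = raw.strip('"')
assert len(hex_id) == 24       # a valid 24-character hex string
int(hex_id, 16)                # parses as hex, so ObjectId(hex_id) works
```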

Python MySQL connector returns bytearray instead of regular string value

I am loading data from one table into pandas and then inserting that data into a new table. However, instead of a normal string value I am seeing a bytearray.
bytearray(b'TM16B0I8') when it should be TM16B0I8
What am I doing wrong here?
My code:
engine_str = 'mysql+mysqlconnector://user:pass@localhost/db'
engine = sqlalchemy.create_engine(engine_str, echo=False, encoding='utf-8')
connection = engine.connect()
th_df = pd.read_sql('select ticket_id, history_date', con=connection)
for row in th_df.to_dict(orient="records"):
    var_ticket_id = row['ticket_id']
    var_history_date = row['history_date']
    query = 'INSERT INTO new_table(ticket_id, history_date)....'
For some reason the Python MySQL connector only returns bytearrays (more info in How return str from mysql using mysql.connector?), but you can decode them into strings with
var_ticket_id = row['ticket_id'].decode()
var_history_date = row['history_date'].decode()
Make sure you are using the right collation and encoding. I happened to use utf8mb4_bin for one of my website's db tables; changing it to utf8mb4_general_ci did the trick.
Producing a bytearray is now the expected behaviour.
It changed with mysql-connector-python 8.0.24 (2021-04-20). According to the v8.0.24 release notes, the old behaviour, where "Binary columns were returned as strings instead of 'bytes' or 'bytearray'", was a bug that was fixed in that release.
So producing a Python bytearray is the correct behaviour if the database column is a binary type (e.g. binary or varbinary). Previously it produced a Python string, but now it produces a bytearray.
So either change the data type in the database to a non-binary type, or convert the bytearray to a string in your code. If the column is nullable, you'll have to check for that first, since invoking the decode() method on None would raise an error. You'll also have to be sure the bytes represent a valid string in the character encoding being used for the decoding/conversion.
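A small helper implementing the advice above (the function name is my own): decode only when the value is actually bytes-like, and let None and existing strings pass through untouched:

```python
def to_str(value, encoding="utf-8"):
    # Handle the nullable-column case first: decode() on None would raise.
    if value is None:
        return None
    if isinstance(value, (bytes, bytearray)):
        return value.decode(encoding)
    return value  # already a str (or another non-binary type)

assert to_str(bytearray(b"TM16B0I8")) == "TM16B0I8"
assert to_str(None) is None
assert to_str("already text") == "already text"
```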
Much easier: pinning the connector version, as described in How to return str from MySQL using mysql.connector?, worked for me.
Adding mysql-connector-python==8.0.17 to requirements.txt resolved this issue for me:
pip install mysql-connector-python

Commiting objects with long integers to MYSQL with SQLAlchemy

I am trying to add a large integer to a MySQL table with SQLAlchemy. As this answer explains, you cannot pass Integer a length argument like you can String. So following that answer I've defined my column with mysql.INTEGER like so:
from sqlalchemy.dialects import mysql
uniqueid = Column(mysql.INTEGER(20))
When I try to commit an object with a 14-digit uniqueid, however, I get the following error message: DataError: (DataError) (1264, "Out of range value for column 'uniqueid' at row 1"). When I try a shorter integer that is not a long, it commits the same object without a problem. I am running Python 2.7; other discussions of the long type indicate that it should not behave any differently from int except for printing an L at the end of the number. One final piece of information: if I set uniqueid to the same short number but make it a long, as in uniqueid = long(32423), I can still commit the object to the SQL database.
I did not solve the mystery of why the mysql.INTEGER class will not work with numbers that have to be long in Python 2.7 (note that the argument to MySQL's INTEGER is only a display width, so INTEGER(20) is still a 4-byte column that tops out at 2147483647), but the practical solution is to use SQLAlchemy's BigInteger class, which, as the name suggests, can handle big integers, including long.
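A sketch of the fix, assuming a declarative model (the table name, column names, and the in-memory SQLite engine used for a quick local check are illustrative):

```python
from sqlalchemy import BigInteger, Column, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Record(Base):
    __tablename__ = "records"
    id = Column(BigInteger, primary_key=True)
    # On MySQL, BigInteger maps to BIGINT (8 bytes), which easily holds
    # a 14-digit id; mysql.INTEGER(20) is still a 4-byte INT under the hood.
    uniqueid = Column(BigInteger)

engine = create_engine("sqlite://")  # in-memory stand-in for MySQL
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Record(id=1, uniqueid=32423423423423))
    session.commit()
```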

Find which django model field contains a bad value

I'm new to Stack Overflow and to Python/Django. I have already solved my problem, but I hoped I could get help on how to solve it faster next time.
I have a very simple Python function which copies table records from one db to another (SQL Server to SQLite). The table has hundreds of columns. When I save the model object to SQLite, Django throws the following exception:
'utf8' codec can't decode byte ...
I understand that the data in one of the columns is problematic for utf8 conversion. What I wanted to know is which column that is. I tried different approaches, but eventually I had to write the following code to find the bad column:
build = Builds.objects.using('realdb').get(buildid=12524)
n = Builds()
for field in Builds._meta.fields:
    val = getattr(build, field.name)
    try:
        setattr(n, field.name, val)
        n.save(using="default")
    except:
        return HttpResponse(field.name + ": " + val.__str__())
It basically copies column values one by one to the new model object and stops when it encounters an error. Is there a better way to do this next time? I tried breaking on exceptions in PyCharm, but it breaks on the many exceptions thrown within the Django framework itself.
Alon.
I don't think there's any way to determine which specific field is causing the problem without testing each and every field as you're doing here. You can, however, try to repair the problem fields instead of returning the error.
Take a look at this section of the unicode docs. Basically, you can coerce the values by replacing the non-decodable portion with a placeholder or removing it altogether.
Alternatively, if you know what encoding the strings are in, you can decode and then re-encode appropriately using string.decode and string.encode respectively.
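A sketch of the repair options above (the byte string and the Latin-1 source encoding are assumptions for illustration):

```python
raw = b"caf\xe9"  # Latin-1 bytes, invalid as UTF-8

# Replace the undecodable byte with U+FFFD instead of raising:
assert raw.decode("utf-8", errors="replace") == "caf\ufffd"

# Or remove it altogether:
assert raw.decode("utf-8", errors="ignore") == "caf"

# If the source encoding is known, decode and re-encode cleanly:
assert raw.decode("latin-1") == "caf\u00e9"
assert raw.decode("latin-1").encode("utf-8") == b"caf\xc3\xa9"
```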