I am trying to add a large integer to a MySQL table with SQLAlchemy. As this answer explains, you cannot pass Integer a length argument the way you can with String. So, following that answer, I've defined my column with mysql.INTEGER like so:
from sqlalchemy.dialects import mysql
uniqueid = Column(mysql.INTEGER(20))
When I try to commit an object with a 14-digit uniqueid, however, I get the following error message: DataError: (DataError) (1264, "Out of range value for column 'uniqueid' at row 1"). When I try a shorter integer that is not a long, it commits the same object to the SQL database without a problem. I am running Python 2.7, and other discussions of the long type indicate that it should not behave any differently from int, except for printing an L at the end of the number. One final piece of information: if I set uniqueid to the same short number but make it a long, as in uniqueid = long(32423), I can still commit the object to the SQL database.
I did not solve the mystery of why the mysql.INTEGER class will not work with numbers that have to be long in Python 2.7, but the practical solution is to use SQLAlchemy's BigInteger class, which, as the name suggests, can handle big integers, including Python 2's long.
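For illustration, a minimal sketch of a working column definition; the model and table names here are made up, only uniqueid comes from the question:

from sqlalchemy import Column, BigInteger
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Record(Base):  # hypothetical model
    __tablename__ = 'records'
    # BigInteger maps to MySQL's BIGINT, which holds values up to 2**63 - 1,
    # comfortably covering a 14-digit uniqueid
    uniqueid = Column(BigInteger, primary_key=True)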
I have a db that contains a blob column whose binary representation is shown in the screenshot. The value that I'm interested in is encoded as a little-endian unsigned long long (8-byte) value in the marked area. Reading this value works fine like this:
from struct import unpack

p = session.query(Properties).filter((Properties.object_id==1817012) & (Properties.name.like("%OwnerUniqueID"))).one()
id = unpack("<Q", p.value[-8:])[0]  # last 8 bytes as little-endian unsigned long long
id in the above example is 1657266.
Now what I would like to do is the reverse. I have the row object p, I have a number in decimal format (using the same 1657266 for testing purposes), and I want to write that number in little-endian format to those same 8 bytes.
I've been trying to do so via a SQL statement:
UPDATE properties SET value = (SELECT substr(value, 1, length(value)-8) || x'b249190000000000' FROM properties WHERE object_id=1817012 AND name LIKE '%OwnerUniqueID%') WHERE object_id=1817012 AND name LIKE '%OwnerUniqueID%'
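(As an aside, the hex literal x'b249190000000000' above is just 1657266 packed little-endian; on Python 3 you can generate it like this:)

from struct import pack

print(pack("<Q", 1657266).hex())  # prints: b249190000000000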
But when I do it like that, I can't read it anymore afterwards, at least not with SQLAlchemy. When I try the same code as above, I get the error message Could not decode to UTF-8 column 'properties_value' with text '☻', so it looks like it's now stored in a different format.
Interestingly using a normal select statement in DB Browser still works fine and the blob is still displayed exactly as in the screenshot above.
Now ideally I'd like to be able to write just those 8 bytes using the SQLAlchemy ORM but I'd settle for a raw SQL statement if that's what it takes.
I managed to get it to work with SQLAlchemy by basically reversing the process I used to read it. In hindsight, using + to concatenate and [:-8] to slice the correct part seems pretty obvious.
from struct import pack

p = session.query(Properties).filter((Properties.object_id==1817012) & (Properties.name.like("%OwnerUniqueID"))).one()
p.value = p.value[:-8] + pack("<Q", 1657266)  # overwrite the trailing 8 bytes
By turning on ECHO for SQLAlchemy I got the following raw SQL statement:
UPDATE properties SET value=? WHERE properties.object_id = ? AND properties.name = ?
(<memory at 0x000001B93A266A00>, 1817012, 'BP_ThrallComponent_C.OwnerUniqueID')
Which is not particularly helpful if you want to do the same thing manually, I suppose.
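If you do need to do it manually, a rough equivalent outside the ORM is to bind the new blob as a bytes parameter. This is a sketch only, and the database path is made up:

import sqlite3
from struct import pack

con = sqlite3.connect("game.db")  # hypothetical path to the db in question
row = con.execute(
    "SELECT value FROM properties WHERE object_id = ? AND name LIKE ?",
    (1817012, "%OwnerUniqueID"),
).fetchone()
# replace the trailing 8 bytes with the new little-endian value
new_value = row[0][:-8] + pack("<Q", 1657266)
con.execute(
    "UPDATE properties SET value = ? WHERE object_id = ? AND name LIKE ?",
    (new_value, 1817012, "%OwnerUniqueID"),
)
con.commit()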
It's worth noting that the raw SQL statement in my question not only works as far as reading it with DB Browser is concerned, but also with the game client that uses the db in question. It's only SQLAlchemy that seems to have trouble, apparently trying to decode the value as UTF-8.
I am new to this and trying to learn Python. I wrote a select statement in Python where I used a parameter:
Select """cln.customer_uid = """[(num_cuid_number)])
TypeError: string indices must be integers
I agree with the others, this doesn't really look like Python by itself.
Even without seeing the rest of that code, I'll guess the [(num_cuid_number)] value being returned is a string, so you'll want to convert it to an integer for the select statement to process.
num_cuid_number is most likely a string in your code; the "string indices" in the error are the ones in the square brackets. So first check that variable to see what you actually received: num_cuid_number is probably a string when it should be an integer value.
Let me give you an example of Python code to execute (just for reference: I have used SQLAlchemy with Flask):
@app.route('/get_data/')
def get_data():
    base_sql = """
        SELECT cln.customer_uid = '%s' FROM cln
    """ % (num_cuid_number,)
    data = db.session.execute(base_sql).fetchall()
    return data
Pretty sure you are trying to create a select statement with a "where" clause here. There are many ways to do this; for example, using raw SQL, the query should look similar to this:
query = "SELECT * FROM cln WHERE customer_uid = %s"
parameters = (num_cuid_number,)
Separating the parameters from the query like this is what makes it secure. You can then take these two variables and execute them with your db engine, like:
results = db.execute(query, parameters)
This will work. However, especially in Python, it is more common to use a package like SQLAlchemy to make queries more "flexible" (in other words, without manually constructing the query string yourself). You can do the same thing using SQLAlchemy core functionality:
query = cln.select()
query = query.where(cln.c.customer_uid == num_cuid_number)
results = db.execute(query)
Note: I simplified "db" in both examples; you'd actually use a cursor, session, engine, or similar to execute your queries, but that wasn't your question.
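For completeness, here is a minimal, self-contained version of the core approach; the table layout and engine URL are assumptions for the demo, not from the question:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("sqlite:///:memory:")  # any backend works; sqlite for the demo
metadata = MetaData()
cln = Table(
    "cln", metadata,
    Column("customer_uid", Integer, primary_key=True),
    Column("name", String),
)
metadata.create_all(engine)

num_cuid_number = 42  # made-up test value
query = cln.select().where(cln.c.customer_uid == num_cuid_number)
with engine.connect() as conn:
    results = conn.execute(query).fetchall()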
My PostgreSQL database is allocating ids that already exist. From what I read, this can be a problem with the sequence generator.
It seems I get sequence corruption often, with the sequence's next number falling before the last id in the database.
I know I can change the number in pgAdmin, but how can I auto-correct this behavior in production?
I'm using Python/Django. Is it possible to catch the error somehow and reset the sequence?
For sequences it goes something like:
select setval('foo_id_seq', max(id), true) from foo;
for the appropriate values of 'foo_id_seq', foo, and id.
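In Django, a hedged sketch of catching the duplicate-key error and resetting the sequence might look like this (the model/table names are hypothetical; setval with true means the next nextval() returns max(id) + 1):

from django.db import IntegrityError, connection

def reset_sequence(table_name, pk_column="id"):
    # identifiers cannot be bound as query parameters, so they are
    # interpolated here; only use trusted, hard-coded names
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT setval(pg_get_serial_sequence(%s, %s), "
            "(SELECT COALESCE(MAX({pk}), 1) FROM {table}), true)".format(
                pk=pk_column, table=table_name
            ),
            [table_name, pk_column],
        )

# usage sketch:
# try:
#     obj.save()
# except IntegrityError:
#     reset_sequence("myapp_foo")
#     obj.save()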
python 2.7
pyramid 1.3a4
sqlalchemy 7.3
sqlite 3.7.9
From the sqlite prompt I can do:
insert into risk(travel_dt) values ('')
also
insert into risk(travel_dt) values(Null)
Both result in a new row with a null value for risk.travel_dt, but when I try those travel_dt values from Pyramid, SQLAlchemy gives me an error.
In the first case, I get sqlalchemy.exc.StatementError:
SQLite Date type only accepts python date objects as input
In the second case, I get an error that Null is not defined. When I use "Null" as a string, I get the first-case error again.
I apologize for another question on nulls: I have read a lot of material but must have missed something simple. Thanks for any help
Clemens Herschel
While you didn't provide any insight into the table definition you're using or any example code, I am guessing the issue is due to confusing NULL (the database reserved word) with None (the Python reserved word).
The error message is telling you that you need to call your SQLA methods with valid python date objects, rather than strings such as "Null" or ''.
Assuming you have a Table called risk containing a Column called travel_dt, you should be able to create a row in that table with something like:
risk.insert().values(travel_dt=None)
Note that this is just a snippet, you would need to execute such a call within an engine context like that defined in the SA Docs SQL Expression Language Tutorial.
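A minimal sketch of that, assuming a risk table with a nullable Date column travel_dt (the engine URL here is made up):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, Date

engine = create_engine("sqlite:///risk.db")  # hypothetical database file
metadata = MetaData()
risk = Table(
    "risk", metadata,
    Column("id", Integer, primary_key=True),
    Column("travel_dt", Date, nullable=True),
)
metadata.create_all(engine)

with engine.begin() as conn:
    # Python's None becomes SQL NULL; the strings '' and "Null" will not
    conn.execute(risk.insert().values(travel_dt=None))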
Here is the story.
I have a bunch of stored procedures and all have their own argument types.
What I am looking to do is to create a bit of a type-safety layer in Python so I can make sure all values are of the correct type before hitting the database.
Of course I don't want to write up the whole schema again in python, so I thought I could auto generate this info on startup by fetching the argument names and types from the database.
So I proceed to hack up this query just for testing
SELECT proname, proargnames, proargtypes
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p
ON pronamespace = n.oid
WHERE nspname = 'public';
Then I run it from Python, and for proargtypes I get a string like this for each result:
'1043 23 1043'
My keen eye tells me these are the oids of the PostgreSQL types, separated by spaces, and this particular string means the function accepts (varchar, integer, varchar).
So in python speak, this should be
(unicode, int, unicode)
Now how can I get the python types from these numbers?
The ideal end result would be something like this
In [129]: get_python_type(23)
Out[129]: int
I've looked all through psycopg2 and the closest I've found is 'extensions.string_types' but that just maps oids to sql type names.
If you want the Python type classes, like you might get from a SQLAlchemy column object, you'll need to build and maintain your own mapping; psycopg2 doesn't have one, even internally.
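A hand-rolled mapping might start like this; the oids are the standard pg_type oids for the builtin types, and the Python 2 types match the question (this is a sketch you maintain yourself, not a psycopg2 feature):

# standard pg_type oids -> Python 2 types; extend as needed
OID_TO_PYTHON = {
    16: bool,       # bool
    20: long,       # int8 (int on Python 3)
    21: int,        # int2
    23: int,        # int4
    25: unicode,    # text (str on Python 3)
    700: float,     # float4
    701: float,     # float8
    1043: unicode,  # varchar
}

def get_python_type(oid):
    return OID_TO_PYTHON[oid]

# get_python_type(23) -> int, as in the question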
But if what you want is a way to get from an oid to a function that will convert raw values into Python instances, psycopg2.extensions.string_types is actually already what you need. It might look like it's just a mapping from oid to a name, but that's not quite true: its values aren't strings, they're instances of psycopg2._psycopg.type. Time to delve into a little code.
psycopg2 exposes an API for registering new type converters, which we can use to trace back into the C code involved with typecasts; this centers around the typecastObject in typecast.c, which, unsurprisingly, maps to the psycopg2._psycopg.type we find in our old friend string_types. This object contains pointers to two functions, pcast (for Python casting function) and ccast (for C casting function), which would seem to be just what we want: pick whichever one exists and call it, problem solved. Except they're not among the attributes exposed to Python (those are name, which is just a label, and values, which is a list of oids). What the type does expose is __call__, which, it turns out, chooses between pcast and ccast for us. The documentation for this method is singularly unhelpful, but looking further at the C code shows that it takes two arguments: a string containing the raw value, and a cursor object.
>>> import psycopg2.extensions
>>> cur = something_that_gets_a_cursor_object()
>>> psycopg2.extensions.string_types[23]
<psycopg2._psycopg.type 'INTEGER' at 0xDEADBEEF>
>>> psycopg2.extensions.string_types[23]('100', cur)
100
>>> psycopg2.extensions.string_types[23]('10.0', cur)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '10.0'
>>> psycopg2.extensions.string_types[1114]
<psycopg2._psycopg.type 'DATETIME' at 0xDEADBEEF>
>>> psycopg2.extensions.string_types[1114]('2018-11-15 21:35:21', cur)
datetime.datetime(2018, 11, 15, 21, 35, 21)
The need for a cursor is unfortunate, and in fact, a cursor isn't always required:
>>> psycopg2.extensions.string_types[23]('100', None)
100
But anything having to do with converting actual string types (varchar, for example) depends on the PostgreSQL server's locale, at least in a Python 3 build, and passing None to those casters doesn't just fail; it segfaults. The method you mention, cursor.cast(oid, raw), is essentially a wrapper around the casters in psycopg2.extensions.string_types and may be more convenient in some instances.
The only workaround for needing a cursor and connection that I can think of would be to build what is essentially a mock connection object. If it exposed all of the relevant environment information without connecting to an actual database, it could be attached to a cursor object and used with string_types[oid](raw, cur) or with cur.cast(oid, raw), but the mock would have to be built in C and is left as an exercise for the reader.
The mapping of postgres types and python types is given here. Does that help?
Edit:
When you read a record from a table, the postgres (or any database) driver will automatically map the record column types to Python types.
import psycopg2

con = psycopg2.connect("dbname=testdb")  # hypothetical connection string
cur = con.cursor()
cur.execute("SELECT * FROM Writers")
row = cur.fetchone()
for index, val in enumerate(row):
    print("column {0} value {1} of type {2}".format(index, val, type(val)))
Now, you just have to map Python types to MySQL types while writing your MySQL interface code.
But, frankly, this is a roundabout way of mapping types from PostgreSQL to MySQL. I would just refer to one of the numerous type mappings between these two databases, like this one.