I'm trying to get a value out of a Postgres DB that is stored as a bytea.
The values are VLAN IDs (so integers between 1 and 4096).
However, it's stored in the db (as an example) as:
\000\000\001\221 (equal to 401)
I'd like my SQL query to return the integer value to my Python code, if possible.
After some searching, I found I can use get_byte to extract one of those 4 bytes (by specifying the position):
select get_byte(sub.datavlan_bytes,3) AS vlan -> this gives me the value of \221 (145)
However, I can't get the entire value.
Is there a good way to get the data in a select query, or does that need to happen in my script?
Plain SQL by casting:
select ('x' || encode(o, 'hex'))::bit(32)::int
from (values ('\000\000\001\221'::bytea)) s (o)
;
int4
------
401
Postgres 9+ also offers a hex output format for bytea values. If the bytea_output connection setting is hex (which is the default, so it probably already is), you get back a string that can be fed to Python's int(..., 16) function. That would give you 401 directly.
Edit: Postgres docs: https://www.postgresql.org/docs/9.0/static/datatype-binary.html
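If you would rather do the conversion on the Python side, a minimal sketch could look like the following (psycopg2 is assumed, and the table and column names are made up -- adjust them to your schema):

import psycopg2

# Hypothetical connection string and table/column names.
conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

# psycopg2 hands bytea columns back as bytes (or memoryview), so the
# big-endian integer can be recovered directly in Python.
cur.execute("SELECT datavlan_bytes FROM vlans")
for (raw,) in cur.fetchall():
    vlan = int.from_bytes(bytes(raw), byteorder="big")
    print(vlan)   # e.g. 401 for \000\000\001\221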
I have a sqlite table in Python:
CREATE TABLE lookup ( id string PRIMARY KEY );
In this table I inserted a lot of rows and then I created an index:
CREATE INDEX intindex ON lookup(id);
I now want to test if my string is in the table. I tried the following:
checkval = cursor.execute("""SELECT EXISTS (SELECT 1
FROM lookup
WHERE id=?
LIMIT 1)""", (str(mystring), )).fetchone()[0]
But the problem is, this only seems to check if the first 16 or so characters of the string are identical.
For example, I know that my table contains:
'95592037576585500895905906368332492139177507248814568869956683982249018785792'
However, if I check the non-existing value:
'95592037576585501898360696547541541617886282712262516411315613154848368326494'
then I also get TRUE.
Checking:
'95592137576585500895905906368332492139177507248814568869956683982249018785792'
does correctly result in FALSE.
Why is it not comparing the full string?
In SQLite there is no data type STRING.
If you use it as the data type of a column then, as explained in Determination Of Column Affinity, the column will have NUMERIC affinity.
So, values like:
'95592037576585500895905906368332492139177507248814568869956683982249018785792'
and
'95592037576585501898360696547541541617886282712262516411315613154848368326494'
will be treated as floating-point numbers, and because of precision limits their stored value will be:
9.5592037576586E+76
so they are actually equal.
See the demo.
For strings there is only the TEXT data type in SQLite and you should change the column's definition to that.
Also, you do not need LIMIT in your query: EXISTS does not perform a full scan, because it returns as soon as it finds the first row matching the condition.
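Here is a minimal sqlite3 sketch of that demo (the table names are illustrative), showing the false positive with the string/NUMERIC column and the correct result with a TEXT column:

import sqlite3

conn = sqlite3.connect(":memory:")
present = '95592037576585500895905906368332492139177507248814568869956683982249018785792'
absent = '95592037576585501898360696547541541617886282712262516411315613154848368326494'

# "string" is not a recognized SQLite type, so the column gets NUMERIC affinity
# and both long digit strings collapse to the same float.
conn.execute("CREATE TABLE lookup_bad (id string PRIMARY KEY)")
conn.execute("INSERT INTO lookup_bad VALUES (?)", (present,))
print(conn.execute("SELECT EXISTS (SELECT 1 FROM lookup_bad WHERE id=?)",
                   (absent,)).fetchone()[0])    # 1 -- false positive

# With TEXT the full strings are compared.
conn.execute("CREATE TABLE lookup_good (id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO lookup_good VALUES (?)", (present,))
print(conn.execute("SELECT EXISTS (SELECT 1 FROM lookup_good WHERE id=?)",
                   (absent,)).fetchone()[0])    # 0 -- correct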
The problem is that your id is of type string instead of text: "STRING" has an affinity of NUMERIC, not TEXT. When those large values are inserted, sqlite3 stores them as floating-point numbers instead of text, and floating point can only represent a limited number of digits accurately.
CREATE TABLE lookup ( id text PRIMARY KEY not null );
Also note that the PRIMARY KEY already gives you an implicit index, so the separate CREATE INDEX may not be needed.
I am fetching data from SQL databases (both Oracle and MS SQL) from Python code using the pyodbc and cx_Oracle packages. Python automatically converts all datetime fields in SQL to datetime.datetime. Is there any way I can capture the data as-is from SQL into a file? The same happens to NULL and integer columns as well.
1) Date: Value in DB and expected-- 12-AUG-19 12.00.01.000 -- Python Output: 2019-08-12 00:00:01
2) Null becomes a NaN
3) Integer value 1s and 0s becomes True and False.
I tried to google the issue, and it seems to be a common issue across packages like pyodbc, cx_Oracle and pandas.read_sql.
I would like the data to appear exactly as it does in the database.
We are calling an Oracle/SQL Server stored proc, NOT a SQL query, to get this result, and we can't change the stored proc. We cannot use CAST in a SQL query.
The pyodbc fetchall() output is the table as a list of rows. We lose the formatting of the data as soon as it is captured in Python.
Could someone help with this issue?
I'm not sure about Oracle, but on the SQL Server side, you could change the command you use so that you capture the results of the stored proc in a temp table, and then you can CAST() the columns of the temp table.
So if you currently call a stored proc on SQL Server like this: EXEC {YourProcName}
Then you could change your command to something like this:
CREATE TABLE #temp
(
col1 INT
,col2 DATETIME
,col3 VARCHAR(20)
);
INSERT INTO #temp
EXEC [sproc];
SELECT
col1 = CAST(col1 AS VARCHAR(20))
,col2 = CAST(FORMAT(col2,'dd-MMM-yy ') AS VARCHAR) + REPLACE(CAST(CAST(col2 AS TIME(3)) AS VARCHAR),':','.')
,col3
FROM #temp;
DROP TABLE #temp
You'll want to create your temp table using the same column names and datatypes that the proc outputs. Then you can CAST() numeric values to VARCHAR, and with dates/datetimes you can use FORMAT() to define your date string format. The example here should produce the format you want, 12-AUG-19 12.00.01.000. I couldn't find a single format string that gave the correct output, so I break the date and time elements apart, format each in the expected way, and then concatenate the casted values.
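For reference, a rough sketch of running that batch from Python with pyodbc (the connection string and proc name are placeholders) might look like this:

import pyodbc

# Placeholder connection string -- adjust driver, server and database.
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                      "DATABASE=mydb;Trusted_Connection=yes")
cur = conn.cursor()

# SET NOCOUNT ON suppresses the row-count messages from INSERT ... EXEC,
# so the final SELECT is the result set pyodbc returns.
batch = """
SET NOCOUNT ON;
CREATE TABLE #temp (col1 INT, col2 DATETIME, col3 VARCHAR(20));
INSERT INTO #temp EXEC [sproc];
SELECT
    col1 = CAST(col1 AS VARCHAR(20))
    ,col2 = CAST(FORMAT(col2,'dd-MMM-yy ') AS VARCHAR)
            + REPLACE(CAST(CAST(col2 AS TIME(3)) AS VARCHAR), ':', '.')
    ,col3
FROM #temp;
DROP TABLE #temp;
"""
cur.execute(batch)
rows = cur.fetchall()   # every column now arrives in Python as a string (or None)

with open("output.txt", "w") as f:
    for row in rows:
        f.write("|".join("" if v is None else v for v in row) + "\n")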
I have a Python script which selects some rows from a table and inserts them into another table. One field has the date type, and there is a problem when its value is '0000-00-00': Python converts this value to None and so gives an error while inserting it into the second table.
How can I solve this problem? Why does Python convert that value to None?
Thank you in advance.
This is actually a None value in the database, in a way: MySQL treats '0000-00-00' specially.
From MySQL documentation:
MySQL permits you to store a "zero" value of '0000-00-00' as a "dummy date." This is in some cases more convenient than using NULL values, and uses less data and index space. To disallow '0000-00-00', enable the NO_ZERO_DATE mode.
It seems that Python's MySQL library is trying to be nice to you and converts this to None.
When writing, it cannot guess that you wanted '0000-00-00' and uses NULL instead. You should convert it yourself. For example, this might work:
if value_read_from_one_table is not None:
    value_written_to_the_other_table = value_read_from_one_table
else:
    value_written_to_the_other_table = '0000-00-00'
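For context, a rough sketch of the copy loop with that substitution (pymysql here; the table and column names are made up, and this assumes NO_ZERO_DATE is not enabled on the target):

import pymysql

# Placeholder connection details.
conn = pymysql.connect(host="localhost", user="user", password="pw", database="mydb")

with conn.cursor() as cur:
    cur.execute("SELECT id, some_date FROM source_table")
    for row_id, some_date in cur.fetchall():
        # The driver returns None for '0000-00-00'; restore the dummy date
        # before inserting, otherwise NULL would be written instead.
        if some_date is None:
            some_date = '0000-00-00'
        cur.execute("INSERT INTO target_table (id, some_date) VALUES (%s, %s)",
                    (row_id, some_date))
conn.commit()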
I'm trying to update a TIMESTAMP column in a table with a None value in Python code.
It worked perfectly when using an INSERT statement with a null value.
But when using an UPDATE statement, it doesn't work!!
The following is test code for your understanding.
(The reason why I'm updating with a None value is that the new value comes from another database, and I want to update the column with that new value, and some of the values are NULL.)
:1 is a string like '20160418154000' in the Python code,
but when it is a None value it raises an exception.
INSERT INTO TEST_TABLE (ARR_TIME) VALUES(TO_TIMESTAMP(:1, 'YYYYMMDDHH24MISS'))
it works well!!
UPDATE TEST_TABLE SET ARR_TIME = TO_TIMESTAMP(:1, 'YYYYMMDDHH24MISS')
it doesn't work!!
error message : ORA-00932: inconsistent datatypes: expected - got NUMBER
I think cx_Oracle recognizes the None value in Python as a number (0??)
and it cannot be converted to a 'YYYYMMDDHH24MISS' format string.
Is there a way to update a NULL value in a TIMESTAMP column?
Yes, there is. Unless you specify otherwise, nulls are bound as type string. You can override this, though, using the following code:
cursor.setinputsizes(cx_Oracle.TIMESTAMP)
See here for documentation:
http://cx-oracle.readthedocs.org/en/latest/cursor.html#Cursor.setinputsizes
NOTE: you could have also solved this by using this code instead:
update test_table set arr_time = :1
There is no need to convert the data using TO_TIMESTAMP(), as cx_Oracle can bind timestamp values directly (use datetime.datetime), and if you bind None, Oracle will implicitly convert it for you.
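Putting both suggestions together, a rough cx_Oracle sketch (the connection string and table are just examples) could be:

import datetime
import cx_Oracle

# Placeholder connection string.
conn = cx_Oracle.connect("user/password@dsn")
cur = conn.cursor()

# Convert the '20160418154000'-style string in Python (or keep None) and bind
# it directly; no TO_TIMESTAMP() is needed in the statement.
raw = '20160418154000'          # may also be None
arr_time = (datetime.datetime.strptime(raw, '%Y%m%d%H%M%S')
            if raw is not None else None)

# Declaring the bind type keeps the driver from guessing when the value is None.
cur.setinputsizes(cx_Oracle.TIMESTAMP)
cur.execute("UPDATE test_table SET arr_time = :1", [arr_time])
conn.commit()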
I have a dictionary of column name / values, to insert into a table. I have a function that generates the INSERT statement. I'm stuck because the function always puts quotes around the values, and some are integers.
e.g. if column 1 is of type integer then the statement should be INSERT INTO myTable (col1) VALUES 5; vs
INSERT INTO myTable (col1) VALUES '5'; the second one causes an error saying column 5 does not exist.
EDIT: I found the problem (I think). The value was in double quotes, not single, so it was "5".
In Python, given a table and column name, how can I test whether the INSERT statement needs to have '' around the values?
This question was tagged with "psycopg2" -- you can write the statement with %s placeholders and have psycopg2 infer the types for you in many cases.
cur.execute('INSERT INTO myTable (col1, col2) VALUES (%s, %s);', (5, 'abc'))
psycopg2 will deal with it for you, because Python knows that 5 is an integer and 'abc' is a string.
http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
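Since the question mentions a dictionary of column names and values, here is a rough sketch of building the INSERT with psycopg2's sql module (the table name myTable comes from the question; the dictionary and connection string are illustrative):

import psycopg2
from psycopg2 import sql

row = {"col1": 5, "col2": "abc"}    # example column name / value dict

# Identifiers (table and column names) are composed safely; the values are
# passed separately, so psycopg2 quotes strings and leaves integers unquoted.
query = sql.SQL("INSERT INTO {} ({}) VALUES ({})").format(
    sql.Identifier("myTable"),
    sql.SQL(", ").join(map(sql.Identifier, row.keys())),
    sql.SQL(", ").join([sql.Placeholder()] * len(row)),
)

conn = psycopg2.connect("dbname=mydb")   # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(query, list(row.values()))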
You certainly want to use a library function to decide whether or not to quote values you insert. If you are inserting anything input by a user, writing your own quoting function can lead to SQL Injection attacks.
It appears from your tags that you're using psycopg2 - I've found another response that may be able to answer your question, since I'm not familiar with that library. The main gist seems to be that you should use
cursor.execute("query with params %s %s", ("param1", "pa'ram2"))
Which will automatically handle any quoting needed for param1 and param2.
Although I personally don't like the idea, you can use single quotes around integers when you insert in Postgres.
Perhaps your problem is the lack of parentheses:
INSERT INTO myTable(col1)
VALUES('5');
Here is a SQL Fiddle illustrating this code.
As you note in the comments, double quotes do not work in Postgres.
You can always use single quotes (but be careful: if the value contains a quote you must double it: insert into example (value_t) values ('O''Hara');).
You can decide by checking the value that you want to insert, regardless of the type of the destination column.
Or you can decide by checking the type of the target field.
As you can see in http://sqlfiddle.com/#!15/8bfbd/3, there is no problem inserting integers into a text field, or a string that represents an integer into a numeric field.
To check the field type you can use the information_schema:
select data_type from information_schema.columns
where table_schema='public'
and table_name='example'
and column_name='value_i';
http://sqlfiddle.com/#!15/8bfbd/7
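If you go the type-checking route from Python, a small sketch along these lines might help (psycopg2; the DSN and names are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=mydb")   # placeholder DSN
cur = conn.cursor()

def column_type(table, column, schema="public"):
    """Return the declared data type of a column, e.g. 'integer' or 'text'."""
    cur.execute(
        """
        SELECT data_type FROM information_schema.columns
        WHERE table_schema = %s AND table_name = %s AND column_name = %s
        """,
        (schema, table, column),
    )
    row = cur.fetchone()
    return row[0] if row else None

# Quote the value only when the target column is not numeric.
print(column_type("example", "value_i"))   # e.g. 'integer'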