I have a column in MySQL which is intended to take decimal values (e.g. 0.00585431).
In Python, I have a function that gets this value from a webpage and returns it into a variable. When I print this variable I get [u'0.00585431'] (which is strange).
I then try to insert this into the MySQL column, which is set to take a DECIMAL(10,0) value. However, the database stores it as just a 0.
The code to insert is nothing special and works for other things:
cur.execute("""INSERT INTO earnings VALUES (%s)""", (variable))
If I change the column to a string type, then it stores the whole [u'0.00585431']. So I imagine that when I try to store it as a decimal, it's not actually receiving a proper decimal value and stores a 0 instead?
Any thoughts on how to fix this?
DECIMAL(10,0) gives you 0 digits to the right of the decimal point, so the fractional part is discarded.
The declaration syntax for a DECIMAL column remains DECIMAL(M,D),
although the range of values for the arguments has changed somewhat:
M is the maximum number of digits (the precision). It has a range of 1
to 65. This introduces a possible incompatibility for older
applications, because previous versions of MySQL permit a range of 1
to 254. (The precision of 65 digits actually applies as of MySQL
5.0.6. From 5.0.3 to 5.0.5, the precision is 64 digits.)
D is the number of digits to the right of the decimal point (the
scale). It has a range of 0 to 30 and must be no larger than M.
Try changing your column datatype to DECIMAL(10,8).
If your values will always be in the same format as 0.00585431, then DECIMAL(9,8) would suffice.
https://dev.mysql.com/doc/refman/5.0/en/precision-math-decimal-changes.html
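As a minimal sketch of the whole fix, assuming the scraped value really arrives as a one-element list of unicode strings as shown above (the connection details and the earnings table layout are only placeholders):

from decimal import Decimal
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()

# The column must keep its fractional digits, e.g.:
#   ALTER TABLE earnings MODIFY amount DECIMAL(10,8);

variable = [u'0.00585431']      # value as returned by the scraper
amount = Decimal(variable[0])   # unwrap the list and parse it as an exact decimal

cur.execute("INSERT INTO earnings VALUES (%s)", (amount,))   # note the trailing comma: a 1-tuple
conn.commit()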
I'm currently working with a database table that has an ID as its primary key, which can have up to 28 digits.
For my use case I need to manipulate some data points in this table (including the ID) and write it back to the db table.
Now I need to increment the ID by one, and I'm struggling to achieve this with pandas on Windows.
Unfortunately and obviously, I cannot read and save the ID as plain integers in the dataframe.
Converting it to np.float64 beforehand seems to completely mess up the values.
For example:
I'm manipulating the data point with ID 2021051800100770010113340000
If I convert the ID column to np.float64 by explicitly providing the dtype of this column,
the ID becomes 2021051800100769903675441152.0, which seems to be a completely different number to me.
Also I don't know if incrementing the ID column by 1 works, since the result will be the same as the number above.
Is there a way to do this properly? My last option would be to convert it to a string and then change the last substring of that string. But I don't feel this would be a good and clean solution, not to mention that I'm not sure if I can write it back to the db in that form.
edit//
Based on this suggestion (https://stackoverflow.com/a/21591439/3856569)
I edited the ID column the following way:
df["ID"] = df["ID"].apply(int)
and then incrementing the number.
I get the following result:
2021051800100769903675441152
2021051800100769903675441153
So the increment seems to work now, but I still see completely different numbers compared to what I was getting originally.
Please bear with me and look at this problem from another angle. If we can understand how the ID is formed, we may be able to handle it differently. For example, the first 8 digits look like a date, and if that is true, then none of your manipulations should modify those 8 digits unless your intention is to change the date. In this case, you can separate your ID (as a str) into 2 parts.
20210518 / 00100770010113340000
Then we only need to handle the second part, which as a bare number would lose its leading zeros. However, if you find out how it is formed, then perhaps you can further separate it and finally handle pieces that np.int64 (or a plain Python int) can hold comfortably.
For example, would the ID be formed in this way?
20210518 / 001 / 007 / 7001011334 / 0000
If we can split it into meaningful segments, then we know which parts we need to keep fixed when manipulating (adding 1 in your case).
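A rough sketch of that idea, assuming the ID reaches pandas as a string (e.g. by selecting it as text or reading with dtype=str) so it is never coerced to float64, and assuming the 8-digit date prefix guessed above is correct:

import pandas as pd

# Toy frame standing in for the real table; the point is that the ID stays a string.
df = pd.DataFrame({"ID": ["2021051800100770010113340000"]})

def increment_id(raw_id):
    # Keep the 8-digit date prefix untouched and bump only the trailing counter.
    # Python ints have arbitrary precision, so the 20-digit tail is handled exactly.
    date_part, counter_part = raw_id[:8], raw_id[8:]
    bumped = str(int(counter_part) + 1).zfill(len(counter_part))
    return date_part + bumped

df["ID"] = df["ID"].apply(increment_id)
print(df["ID"].iloc[0])   # 2021051800100770010113340001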
I have an Oracle table with columns of type VARCHAR2 (i.e. string) and of type NUMBER (i.e. a numeric value with a fractional part). The numeric columns do indeed contain values with decimal points, not integer values.
However, when I read this table into a pandas dataframe via pandas.read_sql, the numeric columns arrive in the dataframe as int64. How can I avoid this and instead receive float columns with the full decimal values?
I'm using the following versions:
python : 3.7.4.final.0
pandas : 1.0.3
Oracle : 18c Enterprise Edition / Version 18.9.0.0.0
I have encountered the same thing. I am not sure if this is the reason, but I assume that a NUMBER type without any size restriction is too big for pandas and is automatically truncated to int64, or that the type is chosen poorly by pandas: a default NUMBER might be treated as an integer. You can limit the type of the column to e.g. NUMBER(5,4) and pandas should recognise it correctly as a float.
I also found out that using pd.read_sql gives me proper types, in contrast to pd.read_sql_table.
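If changing the column definition is not an option, one workaround that is sometimes used is a cx_Oracle output type handler that forces NUMBER columns to come back as floats before pandas ever infers a dtype. A sketch, with the connect string and table name as placeholders:

import cx_Oracle
import pandas as pd

def number_as_float(cursor, name, default_type, size, precision, scale):
    # Fetch every NUMBER column as a Python float so pandas cannot infer int64 for it.
    if default_type == cx_Oracle.NUMBER:
        return cursor.var(float, arraysize=cursor.arraysize)

conn = cx_Oracle.connect("user/password@host/service")   # placeholder DSN
conn.outputtypehandler = number_as_float

df = pd.read_sql("SELECT * FROM my_table", conn)
print(df.dtypes)   # the NUMBER columns should now show up as float64

Using decimal.Decimal instead of float in cursor.var would preserve even more precision, at the cost of an object-dtype column in pandas.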
I am facing the following issue when working with an SQLite database and Python.
This also happens with an external program such as SQLiteStudio, so it is not Python.
Suppose you have a table containing a column of type string.
If you enter a value into that column such as 43234e4324 (so all entries of the form ####e##### trigger the issue), the value stored in the database is converted to inf!
Of course, since e may be interpreted as an exponent, it would make sense if the column were of type float, but it is a string!
Here is a working example:
import sqlite3
a = sqlite3.connect('test.sqlite')
a.execute('CREATE TABLE test (p STRING)')
a.execute('INSERT INTO test (p) VALUES (03242e4444)')
a.commit()
b = a.execute('SELECT p FROM test')
b.fetchall()
and you get [(inf,)]...
If you insert a different value without the 'e', you get the correct string.
I want to enter a string like 43243e3423 without it being converted. How can I do this?
A number with an e in it is a floating point number in exponential notation. 43234e3423 is the notation for 43234 × 10^3423. Since that's far too big to be stored as a floating point number, you get inf.
To enter strings, you should put them in quotes.
a.execute('INSERT INTO test (p) VALUES ("03242e4444")')
The other problem is that there's no STRING datatype. An unrecognized datatype is treated as NUMERIC, so even if you put quotes around the value, it gets converted as if it were a number. Use TEXT instead.
a.execute('CREATE TABLE test (p TEXT)')
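Putting both fixes together, a minimal sketch (using an in-memory database and parameter binding, which also sidesteps any quoting questions; the names match the example above):

import sqlite3

a = sqlite3.connect(':memory:')                    # in-memory db just for demonstration
a.execute('CREATE TABLE test (p TEXT)')            # TEXT affinity instead of STRING/NUMERIC
a.execute('INSERT INTO test (p) VALUES (?)', ('03242e4444',))   # bound as a string, not parsed
a.commit()

print(a.execute('SELECT p FROM test').fetchall())  # [('03242e4444',)]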
I want to store a time value in a MySQL table:
1345:55
It is 1345 hours and 55 minutes. What type should the column have?
And if I want to pass a time variable from Python to this column using the MySQLdb module, which time type should I use in Python? datetime.timedelta?
Generally speaking, one can use MySQL's TIME datatype to store time values:
MySQL retrieves and displays TIME values in 'HH:MM:SS' format (or 'HHH:MM:SS' format for large hours values). TIME values may range from '-838:59:59' to '838:59:59'.
Obviously, in your case, this is insufficient for the range of values required. I would therefore suggest that you instead convert the value to an integer number of minutes and store the result in a 4-byte INT UNSIGNED column (capable of storing values in the range 0 to 4294967295, representing 0:00 to 71582788:15).
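A small sketch of that conversion on the Python side, accepting either the 'HHHH:MM' string from the question or a datetime.timedelta (the table and column names in the commented insert are only illustrative):

import datetime

def to_minutes(value):
    # Accept either an 'HHHH:MM' string such as '1345:55' or a timedelta.
    if isinstance(value, datetime.timedelta):
        return int(value.total_seconds() // 60)
    hours, minutes = value.split(':')
    return int(hours) * 60 + int(minutes)

minutes = to_minutes('1345:55')   # 80755
# cur.execute("INSERT INTO work_log (minutes_worked) VALUES (%s)", (minutes,))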
I ran into an interesting problem. I have a MySQL database that contains some doubles with very precise decimal values (for example, 0.00895406607247756, 17 decimal places). This is scientific data, so this high level of precision is very important.
I'm using MySQLdb in Python to select data from the database:
cursor.execute("SELECT * FROM " ...etc...)
for n in range(cursor.rowcount):
    row = cursor.fetchone()
    print row
For some reason, when it gets to my very precise decimals, they've been truncated to a maximum of 14 decimal places. (i.e. the previous decimal becomes 0.00895406607248)
Is there any way to get the data from MySQLdb in its original form without truncating?
What MySQL datatype are you using to store the data? Is it DECIMAL(18,17)? DECIMALs have up to 65 digits of precision.
If you set the MySQL data type to use DECIMAL(...), then MySQLdb will convert the data to a Python decimal.Decimal object, which should preserve the precision.
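A short sketch of what that looks like in practice; the connection details, table, and column names are placeholders, and the column is assumed to have been declared as e.g. DECIMAL(18,17) rather than DOUBLE:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="science")
cur = conn.cursor()

cur.execute("SELECT reading FROM measurements WHERE id = %s", (1,))
(reading,) = cur.fetchone()

print(type(reading))   # <class 'decimal.Decimal'>
print(reading)         # 0.00895406607247756, no truncation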