python data types

I wrote a script that takes files of columnar data and plots whichever column the user wants to view. I noticed that the plots look crazy and have all the wrong numbers, because Python is ignoring the exponent.
My numbers are in the format 1.000000E+1 or 1.000000E-1.
What dtype is that? I am importing with numpy.genfromtxt and dtype=float. I know there are all sorts of dtypes you can enter, but I cannot find a comprehensive list of the options, with examples.
Thanks.
Here is an example of my input (those spaces are tabs):
Time Stamp   T1_ModBt     T2_90Bend    T3_InPE      T5_Stg2Rfrg
5:22 AM      2.115800E+2  1.400000E+0  1.400000E+0  3.035100E+1
5:23 AM      2.094300E+2  1.400000E+0  1.400000E+0  3.034800E+1
5:24 AM      2.079300E+2  1.400000E+0  1.400000E+0  3.031300E+1
5:25 AM      2.069500E+2  1.400000E+0  1.400000E+0  3.031400E+1
5:26 AM      2.052600E+2  1.400000E+0  1.400000E+0  3.030400E+1
5:27 AM      2.040700E+2  1.400000E+0  1.400000E+0  3.029100E+1
Update
I figured out at least part of the reason why what I am doing does not work, but I still do not know how to define the dtypes the way I want:
import numpy as np
file = np.genfromtxt('myfile.txt', usecols=(0, 1), dtype=(str, float), delimiter='\t')
That returns an array of strings for each column. How do I tell it I want column 0 to be a str, and all the rest of the columns to be float?
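A sketch of one way to do that, assuming the tab-delimited myfile.txt shown above (the string length 'U10' and the default 'f0', 'f1', ... field names are illustrative):
import numpy as np

# One dtype per column: a string for the time stamp, floats (which
# parse the 2.115800E+2 notation) for the four readings.
data = np.genfromtxt('myfile.txt', delimiter='\t', skip_header=1,
                     dtype=['U10', float, float, float, float],
                     encoding=None)

print(data['f0'])  # time stamps as strings
print(data['f1'])  # T1_ModBt as float64: 211.58, 209.43, ...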

In [55]: type(1.000000E+1)
Out[55]: <type 'float'>
What does your input data look like? It's quite possible that it's in the wrong input format, but it should be fairly easy to convert it to the right one.

Numbers in the form 1.0000E+1 can be parsed by float(), so I'm not sure what the problem is:
>>> float('1.000E+1')
10.0

I think you'll want a text parser to turn the format into a native Python data type: 1.00000E+1 means 1.0 × 10^1, which can be expressed as a float.

Related

python/pandas : Pandas changing the value adding extra digits in values [duplicate]

I have a csv file containing numerical values such as 1524.449677. There are always exactly 6 decimal places.
When I import the csv file (and other columns) via pandas read_csv, the column automatically gets the datatype object. My issue is that the values are shown as 2470.6911370000003 which actually should be 2470.691137. Or the value 2484.30691 is shown as 2484.3069100000002.
This seems to be a datatype issue in some way. I tried to explicitly provide the data type when importing via read_csv by giving the dtype argument as {'columnname': np.float64}. Still the issue did not go away.
How can I get the values imported and shown exactly as they are in the source csv file?
Pandas uses a dedicated decimal-to-binary converter that compromises accuracy in favor of speed.
Passing float_precision='round_trip' to read_csv fixes this.
After processing your data, if you want to save it back to a csv file, you can pass float_format="%.nf" (with n the number of decimal places you want) to the corresponding method.
A full example:
import pandas as pd
df_in = pd.read_csv(source_file, float_precision='round_trip')
df_out = ... # some processing of df_in
df_out.to_csv(target_file, float_format="%.3f") # for 3 decimal places
I realise this is an old question, but maybe this will help someone else:
I had a similar problem, but couldn't quite use the same solution. Unfortunately the float_precision option only exists when using the C engine, not the python engine. So if you have to use the python engine for some other reason (for example because the C engine can't deal with regex delimiters), this little "trick" worked for me:
In the pd.read_csv arguments, pass dtype=str and then convert your dataframe to whatever dtype you want, e.g. df = df.astype('float64').
Bit of a hack, but it seems to work. If anyone has any suggestions on how to solve this in a better way, let me know.
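A minimal sketch of that trick, assuming a hypothetical data.csv read with a regex separator (which is what forces the python engine in the first place):
import pandas as pd

# The regex separator requires the python engine, where float_precision
# is unavailable; dtype=str sidesteps the lossy float parser, and
# astype converts afterwards (assuming every column is numeric).
df = pd.read_csv('data.csv', sep=r';\s*', engine='python', dtype=str)
df = df.astype('float64')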


How to preserve float precision in CSV to JSON conversion (via pandas.read_csv)?

NB: My question is not a duplicate of Format floats with standard json module. In fact, Mark Dickinson provided a good answer to my question in one of his comments, and this answer is all about pandas.read_csv, which is not even mentioned in that earlier post. Although [pandas] was one of the post's tags from the beginning, I have now edited the title to make the connection with pandas explicit.
As a very minimal example, suppose that I have a file foo.csv with the following content:
foo
-482.044
Now, if I read this file in with pandas.read_csv, and dump a transform of these data using simplejson.dumps I get the following:
simplejson.dumps(pandas.read_csv('/tmp/foo.csv')
                 .to_dict(orient='index')
                 .values()[0])
# '{"foo": -482.04400000000004}'
IOW, the original -482.044 became -482.04400000000004.
NB: I understand why this happens.
What I'm looking for is some convenient way to get around it.
IOW, the desired JSON string in this case is something like
'{"foo": -482.044}'
I'm looking for a convenient way to generate this string, starting from the file foo.csv shown earlier.
Needless to say, this example is unrealistically simple. In practice, foo.csv would contain thousands/millions of rows, and tens/hundreds of columns, not all necessarily floats (or even numeric). I'm only interested in solutions that would work for such real-life data.
Of course, I could avoid floating-point issues altogether by passing dtype=str to pandas.read_csv, but this would not produce the desired result:
simplejson.dumps(pandas.read_csv('/tmp/foo.csv', dtype=str)
                 .to_dict(orient='index')
                 .values()[0])
# '{"foo": "-482.044"}'
To put it in different terms: I want the input CSV to serve as the explicit specification of how to serialize whatever floating point values it contains. Is there a simple/convenient way to achieve this?
pandas uses numpy, and numpy stores your -482.044 as a float64. The reals are dense, but the representable floats are not: many decimal numbers share the same nearest representable float, here -482.04400000000004. The decimal -482.044 is rounded to its closest representable value.
https://en.wikipedia.org/wiki/IEEE_floating_point
Here:
>>> import numpy as np
>>> np.float64(-482.044)
-482.04400000000004
>>> float(-482.044)
-482.044
>>> float(-482.044) == np.float64(-482.044)
True
because numpy's float64 does not print with the same shortest round-trip representation as the native Python float (the stored bits are identical, which is why the equality holds).
You can use this:
def truncate(n, n_digits):
    # keep the integer part and only the first n_digits decimal digits
    i, d = str(float(n)).split('.')
    return '.'.join([i, d[:n_digits]])
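For example, with the value from above:
>>> truncate(-482.04400000000004, 3)
'-482.044'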
For your issue:
foo.csv:
foo
-482.044
Python script:
# python3
import simplejson
import pandas
# /!\ with dtype=float here, the values would be numpy float64
df = pandas.read_csv('foo.csv', dtype=str)
# parse each string with the native Python float constructor
df['foo'] = df['foo'].apply(float)
data = simplejson.dumps({'foo': df.values[0][0]})
# data = '{"foo": -482.044}'
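This works because simplejson formats floats with the base float repr, which (since Python 3.1) is the shortest string that round-trips to the same value, so -482.044 comes back out unchanged, even though numpy float64's own repr would show the extra digits.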

Converting long integers to strings in pandas (to avoid scientific notation)

I want the following records (currently displaying as 3.200000e+18 but actually (hopefully) each a different long integer), created using pd.read_excel(), to be interpreted differently:
ipdb> self.after['class_parent_ref']
class_id
3200000000000515954 3.200000e+18
3200000000000515951 NaN
3200000000000515952 NaN
3200000000000515953 NaN
3200000000000515955 3.200000e+18
3200000000000515956 3.200000e+18
Name: class_parent_ref, dtype: float64
Currently, they seem to 'come out' as scientifically notated strings:
ipdb> self.after['class_parent_ref'].iloc[0]
3.2000000000005161e+18
Worse, though, it's not clear to me that the number has been read correctly from my .xlsx file:
ipdb> self.after['class_parent_ref'].iloc[0] - 3.2e+18
516096.0
The number in Excel (the data source) is 3200000000000515952.
This is not about the display, which I know I can change here. It's about keeping the underlying data in the same form it was in when read (so that if/when I write it back to Excel, it'll look the same and so that if I use the data, it'll look like it did in Excel and not Xe+Y). I would definitely accept a string if I could count on it being a string representation of the correct number.
You may notice that the number I want to see is in fact (incidentally) one of the labels. Pandas correctly read those in as strings (perhaps because Excel treated them as strings?) unlike this number which I entered. (Actually though, even when I enter ="3200000000000515952" into the cell in question before redoing the read, I get the same result described above.)
How can I get 3200000000000515952 out of the dataframe? I'm wondering if pandas has a limitation with long integers, but the only thing I've found on it is 1) a little dated, and 2) doesn't look like the same thing I'm facing.
Thank you!
Convert the NaN values in your column to 0, then typecast the column to integer:
df[['class_parent_ref']] = df[['class_parent_ref']].fillna(value = 0)
df['class_parent_ref'] = df['class_parent_ref'].astype(int)
Or, when reading your file, specify keep_default_na=False for pd.read_excel() and na_filter=False for pd.read_csv().
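A sketch of how the fillna/astype route plays out, on a hypothetical frame shaped like the question's (note that the float64 rounding has already happened by the time the column exists, so the resulting integer is float64's nearest value, not necessarily the original digits):
import numpy as np
import pandas as pd

# Hypothetical stand-in for the column read from Excel.
df = pd.DataFrame({'class_parent_ref': [3.200000e+18, np.nan]})
df[['class_parent_ref']] = df[['class_parent_ref']].fillna(value=0)
df['class_parent_ref'] = df['class_parent_ref'].astype('int64')
print(df['class_parent_ref'].iloc[0])  # a full integer, no scientific notation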

numpy array values to be converted from string to float?

I have a dataset like the one shown below
http://i.stack.imgur.com/1uxCK.png
I am able to read them into a numpy array, but the datatype is string once it has been read from the CSV file. I am unable to convert it to float, and without that I cannot proceed further. Mind you, there are blank spaces between the two data columns shown in the first screenshot.
The numpy array structure when printed looks like in the screenshot given below:
http://i.stack.imgur.com/JFfzw.png
Note: observe the single quotation marks at the start and end of each data line in the screenshot, which show that numpy has stored the data as strings rather than floats.
Any help converting the data from string to float would be appreciated. I have tried many things, all in vain!
numpy.loadtxt(filename) should work out of the box: it yields numbers.
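A sketch, assuming a hypothetical whitespace-separated data.txt; astype is the fallback if an array of strings is already in hand:
import numpy as np

# loadtxt parses whitespace-separated numbers straight into float64.
arr = np.loadtxt('data.txt')

# Or convert an existing array of strings element-wise:
str_arr = np.array(['1.5', '2.5', '3.5'])
flt_arr = str_arr.astype(float)  # array([1.5, 2.5, 3.5])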
