I need to convert the data stored in a pandas.DataFrame into a byte string where each column can have a separate data type (integer or floating point). Here is a simple set of data:
df = pd.DataFrame([ 10, 15, 20], dtype='u1', columns=['a'])
df['b'] = np.array([np.iinfo('u8').max, 230498234019, 32094812309], dtype='u8')
df['c'] = np.array([1.324e10, 3.14159, 234.1341], dtype='f8')
and df looks something like this:
a b c
0 10 18446744073709551615 1.324000e+10
1 15 230498234019 3.141590e+00
2 20 32094812309 2.341341e+02
The DataFrame knows about the types of each column df.dtypes so I'd like to do something like this:
data_to_pack = [tuple(record) for _, record in df.iterrows()]
data_array = np.array(data_to_pack, dtype=zip(df.columns, df.dtypes))
data_bytes = data_array.tostring()
This typically works fine, but in this case it does not (due to the maximum value stored in df['b'][0]). The second line above, converting the array of tuples to an np.array with a given set of types, causes the following error:
OverflowError: Python int too large to convert to C long
The error results (I believe) from the first line, which extracts each record as a Series with a single data type (it defaults to float64), and the float64 representation chosen for the maximum uint64 value is not directly convertible back to uint64.
1) Since the DataFrame already knows the types of each column, is there a way to get around creating a row of tuples for input into the typed numpy.array constructor? Or is there a better way than the one outlined above to preserve the type information in such a conversion?
2) Is there a way to go directly from a DataFrame to a byte string representing the data, using the type information for each column?
You can use df.to_records() to convert your dataframe to a numpy recarray, then call .tostring() to convert this to a string of bytes:
rec = df.to_records(index=False)
print(repr(rec))
# rec.array([(10, 18446744073709551615, 13240000000.0), (15, 230498234019, 3.14159),
# (20, 32094812309, 234.1341)],
# dtype=[('a', '|u1'), ('b', '<u8'), ('c', '<f8')])
s = rec.tostring()
rec2 = np.fromstring(s, rec.dtype)
print(np.all(rec2 == rec))
# True
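Note that newer NumPy deprecates the tostring/fromstring pair; a minimal sketch of the same round trip using their replacements, tobytes and frombuffer, looks like this:
rec = df.to_records(index=False)
data_bytes = rec.tobytes()                         # same bytes as rec.tostring()
rec2 = np.frombuffer(data_bytes, dtype=rec.dtype)  # read-only view over the buffer
print(np.all(rec2 == rec))
# True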
import pandas as pd
df = pd.DataFrame([ 10, 15, 20], dtype='u1', columns=['a'])
df_byte = df.to_json().encode()
print(df_byte)
Related
I have the following DataFrame from a SQL query:
(Pdb) pp total_rows
ColumnID RespondentCount
0 -1 2
1 3030096843 1
2 3030096845 1
and I want to pivot it like this:
total_data = total_rows.pivot_table(cols=['ColumnID'])
(Pdb) pp total_data
ColumnID -1 3030096843 3030096845
RespondentCount 2 1 1
[1 rows x 3 columns]
total_rows.pivot_table(cols=['ColumnID']).to_dict('records')[0]
{3030096843: 1, 3030096845: 1, -1: 2}
but I want to make sure the 303 columns are cast as strings instead of integers so that I get this:
{'3030096843': 1, '3030096845': 1, -1: 2}
One way to convert to string is to use astype:
total_rows['ColumnID'] = total_rows['ColumnID'].astype(str)
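Applied to the pivot example from the question, a minimal sketch (assuming the same frame and column names; note that astype(str) also turns -1 into '-1', and that newer pandas spells the keyword columns= rather than cols=):
total_rows['ColumnID'] = total_rows['ColumnID'].astype(str)
total_data = total_rows.pivot_table(columns=['ColumnID'])
print(total_data.to_dict('records')[0])
# roughly {'-1': 2, '3030096843': 1, '3030096845': 1}
# (values may come back as floats, since pivot_table aggregates with mean by default)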
However, perhaps you are looking for the to_json function, which will convert keys to valid json (and therefore your keys to strings):
In [11]: df = pd.DataFrame([['A', 2], ['A', 4], ['B', 6]])
In [12]: df.to_json()
Out[12]: '{"0":{"0":"A","1":"A","2":"B"},"1":{"0":2,"1":4,"2":6}}'
In [13]: df[0].to_json()
Out[13]: '{"0":"A","1":"A","2":"B"}'
Note: you can pass in a buffer/file to save this to, along with some other options...
If you need to convert ALL columns to strings, you can simply use:
df = df.astype(str)
This is useful if you need everything except a few columns to be strings/objects; you can then go back and convert those few columns to whatever you need (integers in this case):
df[["D", "E"]] = df[["D", "E"]].astype(int)
pandas >= 1.0: It's time to stop using astype(str)!
Prior to pandas 1.0 (well, 0.25 actually) this was the de facto way of declaring a Series/column as string:
# pandas <= 0.25
# Note to pedants: specifying the type is unnecessary since pandas will
# automagically infer the type as object
s = pd.Series(['a', 'b', 'c'], dtype=str)
s.dtype
# dtype('O')
From pandas 1.0 onwards, consider using "string" type instead.
# pandas >= 1.0
s = pd.Series(['a', 'b', 'c'], dtype="string")
s.dtype
# StringDtype
Here's why, as quoted by the docs:
You can accidentally store a mixture of strings and non-strings in an object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes(). There isn’t a clear way to select just text
while excluding non-text but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear than 'string'.
See also the section on Behavioral Differences between "string" and object.
Extension types (introduced in 0.24 and formalized in 1.0) are closer to pandas than numpy, which is good because numpy types are not powerful enough. For example, NumPy does not have any way of representing missing data in integer arrays (since type(NaN) == float), but pandas can, using nullable integer columns.
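For illustration, a minimal sketch of a nullable integer column (the "Int64" extension dtype), which keeps missing values without upcasting the whole column to float:
import pandas as pd
s = pd.Series([1, 2, None], dtype="Int64")  # note the capital "I"
s.dtype
# Int64
s
# 0       1
# 1       2
# 2    <NA>
# dtype: Int64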
Why should I stop using it?
Accidentally mixing dtypes
The first reason, as outlined in the docs, is that you can accidentally store non-text data in object columns.
# pandas <= 0.25
pd.Series(['a', 'b', 1.23]) # whoops, this should have been "1.23"
0 a
1 b
2 1.23
dtype: object
pd.Series(['a', 'b', 1.23]).tolist()
# ['a', 'b', 1.23] # oops, the 1.23 came back as a float, not the string '1.23'.
# pandas >= 1.0
pd.Series(['a', 'b', 1.23], dtype="string")
0 a
1 b
2 1.23
dtype: string
pd.Series(['a', 'b', 1.23], dtype="string").tolist()
# ['a', 'b', '1.23'] # it's a string and we just averted some potentially nasty bugs.
Challenging to differentiate strings and other python objects
Another obvious example is that it's harder to distinguish between "strings" and "objects". Objects are essentially the blanket type for any type that does not support vectorizable operations.
Consider,
# Setup
df = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': [{}, [1, 2, 3], 123]})
df
A B
0 a {}
1 b [1, 2, 3]
2 c 123
Up to pandas 0.25, there was virtually no way to distinguish that "A" and "B" do not hold the same type of data.
# pandas <= 0.25
df.dtypes
A object
B object
dtype: object
df.select_dtypes(object)
A B
0 a {}
1 b [1, 2, 3]
2 c 123
From pandas 1.0, this becomes a lot simpler:
# pandas >= 1.0
# Convenience function I call to help illustrate my point.
df = df.convert_dtypes()
df.dtypes
A string
B object
dtype: object
df.select_dtypes("string")
A
0 a
1 b
2 c
Readability
This is self-explanatory ;-)
OK, so should I stop using it right now?
...No. As of writing this answer (version 1.1), there are no performance benefits but the docs expect future enhancements to significantly improve performance and reduce memory usage for "string" columns as opposed to objects. With that said, however, it's never too early to form good habits!
Here's another one, particularly useful for converting multiple columns to string instead of just a single column:
In [76]: import numpy as np
In [77]: import pandas as pd
In [78]: df = pd.DataFrame({
...: 'A': [20, 30.0, np.nan],
...: 'B': ["a45a", "a3", "b1"],
...: 'C': [10, 5, np.nan]})
...:
In [79]: df.dtypes ## Current datatype
Out[79]:
A float64
B object
C float64
dtype: object
## Multiple columns string conversion
In [80]: df[["A", "C"]] = df[["A", "C"]].astype(str)
In [81]: df.dtypes ## Updated datatype after string conversion
Out[81]:
A object
B object
C object
dtype: object
There are four ways to convert columns to string
1. astype(str)
df['column_name'] = df['column_name'].astype(str)
2. values.astype(str)
df['column_name'] = df['column_name'].values.astype(str)
3. map(str)
df['column_name'] = df['column_name'].map(str)
4. apply(str)
df['column_name'] = df['column_name'].apply(str)
Let's see the performance of each approach:
#importing libraries
import numpy as np
import pandas as pd
import time
#creating four sample dataframes using dummy data
df1 = pd.DataFrame(np.random.randint(1, 1000, size =(10000000, 1)), columns =['A'])
df2 = pd.DataFrame(np.random.randint(1, 1000, size =(10000000, 1)), columns =['A'])
df3 = pd.DataFrame(np.random.randint(1, 1000, size =(10000000, 1)), columns =['A'])
df4 = pd.DataFrame(np.random.randint(1, 1000, size =(10000000, 1)), columns =['A'])
#applying astype(str)
time1 = time.time()
df1['A'] = df1['A'].astype(str)
print('time taken for astype(str) : ' + str(time.time()-time1) + ' seconds')
#applying values.astype(str)
time2 = time.time()
df2['A'] = df2['A'].values.astype(str)
print('time taken for values.astype(str) : ' + str(time.time()-time2) + ' seconds')
#applying map(str)
time3 = time.time()
df3['A'] = df3['A'].map(str)
print('time taken for map(str) : ' + str(time.time()-time3) + ' seconds')
#applying apply(str)
time4 = time.time()
df4['A'] = df4['A'].apply(str)
print('time taken for apply(str) : ' + str(time.time()-time4) + ' seconds')
Output
time taken for astype(str): 5.472359895706177 seconds
time taken for values.astype(str): 6.5844292640686035 seconds
time taken for map(str): 2.3686647415161133 seconds
time taken for apply(str): 2.39758563041687 seconds
map(str) and apply(str) take less time compared with the other two techniques.
I usually use this one:
df['Column'].map(str)
pandas version: 1.3.5
Updated answer
df['colname'] = df['colname'].astype(str) => this should work by default. But if you have created a variable named str (e.g. str = "myString") before using astype(str), the built-in str is shadowed and this won't work. In that case, you might want to use the line below.
df['colname'] = df['colname'].astype('str')
===========
(Note: incorrect old explanation)
df['colname'] = df['colname'].astype('str') => converts dataframe column into a string type
df['colname'] = df['colname'].astype(str) => gives an error
Using .apply() with a lambda conversion function also works in this case:
total_rows['ColumnID'] = total_rows['ColumnID'].apply(lambda x: str(x))
For entire dataframes you can use .applymap().
(but in any case probably .astype() is faster)
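A minimal sketch of the whole-DataFrame variant (note that pandas 2.1+ renames DataFrame.applymap to DataFrame.map):
import pandas as pd
df = pd.DataFrame({'ColumnID': [-1, 3030096843], 'RespondentCount': [2, 1]})
df_str = df.applymap(str)   # element-wise str() over every cell
print(df_str.dtypes)        # both columns become object dtype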
I'm looking for a method to add a column of float values to a matrix of string values.
Mymatrix =
[["a","b"],
["c","d"]]
I need to have a matrix like this =
[["a","b",0.4],
["c","d",0.6]]
I would suggest using a pandas DataFrame instead:
import pandas as pd
df = pd.DataFrame([["a","b",0.4],
["c","d",0.6]])
print(df)
0 1 2
0 a b 0.4
1 c d 0.6
You can also specify column (Series) names:
df = pd.DataFrame([["a","b",0.4],
["c","d",0.6]], columns=['A', 'B', 'C'])
df
A B C
0 a b 0.4
1 c d 0.6
As noted, you can't mix data types in an ndarray, but you can do so in a structured or record array. They are similar in that you can mix datatypes as defined by the dtype= argument (it defines the datatypes and field names). Record arrays additionally allow access to the fields of structured arrays by attribute instead of only by index. You don't need for loops when you want to copy the entire contents between arrays. See my example below (using your data):
import numpy as np

Mymatrix = np.array([["a", "b"], ["c", "d"]])
Mycol = np.array([0.4, 0.6])
dt = np.dtype([('col0', 'U1'), ('col1', 'U1'), ('col2', float)])
new_recarr = np.empty((2,), dtype=dt)
new_recarr['col0'] = Mymatrix[:, 0]
new_recarr['col1'] = Mymatrix[:, 1]
new_recarr['col2'] = Mycol[:]
print(new_recarr)
Resulting output looks like this:
[('a', 'b', 0.4) ('c', 'd', 0.6)]
From there, use formatted strings to print.
You can also copy from a recarray to an ndarray if you reverse assignment order in my example.
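For example, a short sketch of the attribute access mentioned above, using a recarray view of the structured array built earlier:
rec_view = new_recarr.view(np.recarray)  # same data, recarray interface
print(rec_view.col2)                     # [0.4 0.6]
print(rec_view['col0'])                  # index access still works: ['a' 'c']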
Note: I discovered there can be a significant performance penalty when using recarrays. See answer in this thread:
is ndarray faster than recarray access?
You need to understand why you want to do that. Numpy is efficient because data are aligned in memory, so mixing types is generally a source of bad performance. But in your case you can preserve alignment, since all your strings have the same length. Since the types are not homogeneous, you can use a structured array:
import numpy as np

raw = [["a", "b", 0.4],
       ["c", "d", 0.6]]
dt = np.dtype([('col0', 'U1'), ('col1', 'U1'), ('col2', float)])
aligned = np.empty(len(raw), dtype=dt)
for i in range(len(raw)):
    for j in range(len(dt)):
        aligned[i][j] = raw[i][j]
You can also use pandas, but you often lose some performance.
How to read string column only using numpy in python?
csv file:
1,2,3,"Hello"
3,3,3,"New"
4,5,6,"York"
How to get array like:
["Hello","york","New"]
without using pandas and sklearn.
I give the column name as a,b,c,d in csv
import numpy as np
ary=np.genfromtxt(r'yourcsv.csv',delimiter=',',dtype=None)
ary.T[-1]
Out[139]:
array([b'd', b'Hello', b'New', b'York'],
dtype='|S5')
import numpy
fname = 'sample.csv'
csv = numpy.genfromtxt(fname, dtype=str, delimiter=",")
names = csv[:,-1]
print(names)
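If the file has a header row (a,b,c,d as in the question), a hedged variation is to skip it so only the data values remain; note that genfromtxt keeps the surrounding quote characters, which you may need to strip:
import numpy as np
ary = np.genfromtxt('yourcsv.csv', delimiter=',', dtype=str, skip_header=1)
names = np.char.strip(ary[:, -1], '"')  # drop the literal quote characters
print(names)
# e.g. ['Hello' 'New' 'York']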
Choosing the data type
The main way to control how the sequences of strings we have read from the file are converted to other types is to set the dtype argument. Acceptable values for this argument are:
a single type, such as dtype=float. The output will be 2D with the given dtype, unless a name has been associated with each column with the use of the names argument (see below). Note that dtype=float is the default for genfromtxt.
a sequence of types, such as dtype=(int, float, float).
a comma-separated string, such as dtype="i4,f8,|U3".
a dictionary with two keys 'names' and 'formats'.
a sequence of tuples (name, type), such as dtype=[('A', int), ('B', float)].
an existing numpy.dtype object.
the special value None. In that case, the type of the columns will be determined from the data itself (see below).
When dtype=None, the type of each column is determined iteratively from its data. We start by checking whether a string can be converted to a boolean (that is, if the string matches true or false in lower cases); then whether it can be converted to an integer, then to a float, then to a complex and eventually to a string. This behavior may be changed by modifying the default mapper of the StringConverter class.
The option dtype=None is provided for convenience. However, it is significantly slower than setting the dtype explicitly.
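As a hedged illustration of setting the dtype explicitly for the sample file above (the field widths here are assumptions):
import numpy as np
dt = [('a', 'i4'), ('b', 'i4'), ('c', 'i4'), ('d', 'U7')]
data = np.genfromtxt('yourcsv.csv', delimiter=',', dtype=dt, skip_header=1)
print(data['d'])  # the string column (quotes included, if present in the file)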
A quick file substitute:
In [275]: txt = b'''
...: 1,2,3,"Hello"
...: 3,3,3,"New"
...: 4,5,6,"York"'''
In [277]: np.genfromtxt(txt.splitlines(), delimiter=',',dtype=None,usecols=3)
Out[277]:
array([b'"Hello"', b'"New"', b'"York"'],
dtype='|S7')
bytestring array in Py3; or a default unicode string dtype:
In [278]: np.genfromtxt(txt.splitlines(), delimiter=',',dtype=str,usecols=3)
Out[278]:
array(['"Hello"', '"New"', '"York"'],
dtype='<U7')
Or the whole thing:
In [279]: data=np.genfromtxt(txt.splitlines(), delimiter=',',dtype=None)
In [280]: data
Out[280]:
array([(1, 2, 3, b'"Hello"'), (3, 3, 3, b'"New"'), (4, 5, 6, b'"York"')],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<i4'), ('f3', 'S7')])
select the f3 field:
In [282]: data['f3']
Out[282]:
array([b'"Hello"', b'"New"', b'"York"'],
dtype='|S7')
Speed should be basically the same
To extract specific values into the numpy array one approach could be:
import csv
import numpy as np

patient = []
with open('Exercise1.csv', 'r') as file:
    file_content = list(csv.reader(file, delimiter=","))

data = np.array(file_content)
print(file_content[1][1], len(file_content))
for i in range(1, len(file_content)):  # start at 1 to skip the header row
    patient.append(file_content[i][0])

first_column_array = np.array(patient, dtype=str)
Here i iterates through the rows of the data (skipping the header), and the second index selects the position of a value within its row, so 0 is the first column.
Question: how can I replace all occurrences of a specific int64 value in the data frame while avoiding wrongly replacing unequal int32 values?
DataFrame.replace wrongly matches int32 values when a large int64 value is supplied.
Below I created a minimal example where I would like to replace all fields holding the large value with -1. Given that all the data is zero, nothing should be updated.
However, column 'a' becomes -1 after the replace:
import pandas
import numpy
dtype = [('a','int32'), ('b','int64'), ('c','float32')]
index = ['x', 'y']
columns = ['a','b','c']
values = numpy.zeros(2, dtype=dtype)
df2 = pandas.DataFrame(values, index=index)
df2.replace(-9223372036854775808, -1)
output is:
a b c
x -1 0 0.0
y -1 0 0.0
Edit:
It looks like numpy narrows the type down, but the question remains: how do I avoid this in the DataFrame replace?
Note: -9223372036854775808 is 0x8000000000000000 in hex.
x = numpy.array(-9223372036854775808, dtype='int64')
print('as int32: ', x.astype(numpy.int32))
#produces
#('as int32: ', array(0, dtype=int32))
You correctly observed that the problem is caused by the type narrowing. Why don't you replace only those columns that have the matching or at least wide enough data type?
df2[['b', 'c']] = df2[['b', 'c']].replace(-9223372036854775808, -1)  # assign back; inplace=True on a column slice would not modify df2
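A hedged variation of the same idea, picking the wide-enough columns automatically by dtype (same column choice as above):
wide_cols = df2.select_dtypes(include=['int64', 'float32']).columns
df2[wide_cols] = df2[wide_cols].replace(-9223372036854775808, -1)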
I would like to save some attributes of a dataframe and given a slice of the underlying numpy array, I would like to rebuild the dataframe as if I had taken a slice of the dataframe. If an object column has a value that can be coerced into a float, I can't figure out any method that would work. In the real dataset, I have millions of observations and several hundred columns.
The actual use case involves custom code where pandas interacts with scikit-learn. I know the latest build of scikit-learn has compatibility with pandas built in, but I am unable to use this version because the RandomizedSearchCV object cannot handle large parameter grids (this will be fixed in a future version).
data = [[2, 4, "Focus"],
[3, 4, "Fiesta",],
[1, 4, "300"],
[7, 3, "Pinto"]]
# This dataframe is exactly as intended
df = pd.DataFrame(data=data)
# Slice a subset of the underlying numpy array
raw_slice = df.values[1:,:]
# Try using the dtype option to force dtypes
df_dtype = pd.DataFrame(data=raw_slice, dtype=df.dtypes)
print "\n Dtype arg doesn't use passed dtypes \n", df_dtype.dtypes
# Try converting objects to numeric after reading into dataframe
df_convert = pd.DataFrame(data=raw_slice).convert_objects(convert_numeric=True)
print "\n Convert objects drops object values that are not numeric \n", df_convert
[Out]
Dtype arg doesn't use passed dtypes
0 object
1 object
2 object
dtype: object
Convert objects drops object values that are not numeric
0 1 2
0 3 4 NaN
1 1 4 300
2 7 3 NaN
EDIT:
Thank you #unutbu for the answer which precisely answered my question. In scikit-learn versions prior to 0.16.0, gridsearch objects stripped the underlying numpy array from the pandas dataframe. This meant that a single object column made the entire array an object and pandas methods could not be wrapped in custom transformers.
The solution, using #unutbu's answer, is to make the first step of the pipeline a custom "DataFrameTransformer" object.
from sklearn.base import BaseEstimator, TransformerMixin
import pandas as pd

class DataFrameTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, X):
        self.columns = list(X.columns)
        self.dtypes = X.dtypes

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        X = pd.DataFrame(X, columns=self.columns)
        for col, dtype in zip(X, self.dtypes):
            X[col] = X[col].astype(dtype)
        return X
In the pipeline, just include your original df in the constructor:
pipeline = Pipeline([("df_converter", DataFrameTransformer(X)),
...,
("rf", RandomForestClassifier())])
If you are trying to save a slice of a DataFrame to disk, then a powerful and
convenient way to do it is to use a pd.HDFStore. Note that this requires
PyTables to be installed.
# To save the slice `df.iloc[1:, :]` to disk:
filename = '/tmp/test.h5'
with pd.HDFStore(filename) as store:
    store['mydata'] = df.iloc[1:, :]

# To load the DataFrame from disk:
with pd.HDFStore(filename) as store:  # pd.get_store was removed in later pandas
    newdf2 = store['mydata']

print(newdf2.dtypes)
print(newdf2)
yields
0 int64
1 int64
2 object
dtype: object
0 1 2
0 3 4 Fiesta
1 1 4 300
2 7 3 Pinto
To reconstruct the sub-DataFrame from a NumPy array (of object dtype!)
and df.dtypes, you could use
import pandas as pd
data = [[2, 4, "Focus"],
[3, 4, "Fiesta",],
[1, 4, "300"],
[7, 3, "Pinto"]]
# This dataframe is exactly as intended
df = pd.DataFrame(data=data)
# Slice a subset of the `values` numpy object array
raw_slice = df.values[1:,:]
newdf = pd.DataFrame(data=raw_slice)
for col, dtype in zip(newdf, df.dtypes):
    newdf[col] = newdf[col].astype(dtype)
print(newdf.dtypes)
print(newdf)
which yields the same result as above. However, if you are not saving
raw_slice to disk, then you could simply keep a
reference to df.iloc[1:, :] instead of converting the data to a NumPy array of
object dtype -- a relatively inefficient data structure (in terms of both memory and
performance).