I am playing around with data and need to look at differences across columns (as well as rows) in a fairly large dataframe.
The easiest way for rows is clearly the diff() method, but I cannot find the equivalent for columns?
My current solution to obtain a dataframe with the columns differenced is via
df.transpose().diff().transpose()
Is there a more efficient alternative? Or is this such an odd use of pandas that it was just never requested/considered useful? :)
Thanks,
Pandas DataFrames are excellent for manipulating table-like data whose columns have different dtypes.
If subtracting across columns and rows both make sense, then it means all the values are the same kind of quantity. That might be an indication that you should be using a NumPy array instead of a Pandas DataFrame.
In any case, you can use arr = df.values to extract a NumPy array of the underlying data from the DataFrame. If all the columns share the same dtype, then the NumPy array will have the same dtype. (When the columns have different dtypes, df.values has object dtype).
Then you can compute the differences along rows or columns using np.diff(arr, axis=...):
import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(12).reshape(3,4), columns=list('ABCD'))
# A B C D
# 0 0 1 2 3
# 1 4 5 6 7
# 2 8 9 10 11
np.diff(df.values, axis=0) # difference of the rows
# array([[4, 4, 4, 4],
# [4, 4, 4, 4]])
np.diff(df.values, axis=1) # difference of the columns
# array([[1, 1, 1],
# [1, 1, 1],
# [1, 1, 1]])
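To illustrate the dtype remark above, here is a small made-up example (not from the original answer):
mixed = pd.DataFrame({'x': [1, 2], 'y': ['a', 'b']})
mixed.values.dtype   # object, because the columns have different dtypes
df.values.dtype      # a single integer dtype (e.g. int64) for the frame above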
Just difference the columns, e.g.
df['new_col'] = df['a'] - df['b']
For multiple columns, I believe unutbu's answer is the best (although it returns a np.ndarray object instead of a dataframe, it is still faster even after converting it back to a dataframe).
# Create a large dataframe.
df = pd.DataFrame(np.random.randn(int(1e6), 100))
%%timeit
np.diff(df.values, axis=1)
1 loops, best of 3: 450 ms per loop
%%timeit
df - df.shift(axis=1)
1 loops, best of 3: 727 ms per loop
%%timeit
df.T.diff().T
1 loops, best of 3: 1.52 s per loop
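If you want the diffed values back in a DataFrame rather than an ndarray, a minimal sketch of the conversion mentioned above (keeping the labels of the columns from the second onwards is my choice, not part of the original answer):
diffed = pd.DataFrame(np.diff(df.values, axis=1),
                      index=df.index, columns=df.columns[1:])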
Use the axis parameter in diff:
df = pd.DataFrame(np.arange(12).reshape(3, 4), columns=list('ABCD'))
# A B C D
# 0 0 1 2 3
# 1 4 5 6 7
# 2 8 9 10 11
df.diff(axis=1)   # subtracting column-wise
#      A    B    C    D
# 0  NaN  1.0  1.0  1.0
# 1  NaN  1.0  1.0  1.0
# 2  NaN  1.0  1.0  1.0
df.diff()         # subtracting row-wise
#      A    B    C    D
# 0  NaN  NaN  NaN  NaN
# 1  4.0  4.0  4.0  4.0
# 2  4.0  4.0  4.0  4.0
I'm trying to get the correlation between a single column and the rest of the numerical columns of the dataframe, but I'm stuck.
I'm trying with this:
corr = IM['imdb_score'].corr(IM)
But I get the error
operands could not be broadcast together with shapes
which I assume is because I'm trying to find a correlation between a vector (my imdb_score column) with the dataframe of several columns.
How can this be fixed?
The most efficient method is to use corrwith.
Example:
df.corrwith(df['A'])
Setup of example data:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(10, size=(5, 5)), columns=list('ABCDE'))
# A B C D E
# 0 7 2 0 0 0
# 1 4 4 1 7 2
# 2 6 2 0 6 6
# 3 9 8 0 2 1
# 4 6 0 9 7 7
output:
A 1.000000
B 0.526317
C -0.209734
D -0.720400
E -0.326986
dtype: float64
I think you can just use .corr, which returns all correlations between all columns, and then select just the column you are interested in.
So, something like
IM.corr()['imdb_score']
should work.
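For example, with a hypothetical stand-in for the IM dataframe (the column names other than imdb_score are made up for illustration):
import numpy as np
import pandas as pd

IM = pd.DataFrame(np.random.rand(10, 4),
                  columns=['imdb_score', 'budget', 'gross', 'duration'])

# Pairwise correlations of all columns, then keep the imdb_score column
# and drop its (trivial) correlation with itself.
IM.corr()['imdb_score'].drop('imdb_score')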
Rather than calculating all correlations and keeping the ones of interest, it can be computationally more efficient to compute the subset of interesting correlations:
import pandas as pd
df = pd.DataFrame()
df['a'] = range(10)
df['b'] = range(10)
df['c'] = range(10)
pd.DataFrame([[c, df['a'].corr(df[c])] for c in df.columns if c!='a'], columns=['var', 'corr'])
I want to select:
2nd column
every odd column from the 3rd column onwards (including the 3rd column itself)
from a Pandas dataframe. Pandas documentation mentions the following:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid output for indexing (one of the above)
My requirement seems to be a combination of an integer and a range object, or an array with a range object, e.g. .iloc[:, [2, 3::2]]. What is the best and easiest way to achieve the above?
We can use numpy.r_[...].
Demo:
In [126]: df = pd.DataFrame(np.random.rand(5, 10), columns=list(range(1, 11)))
In [127]: df
Out[127]:
1 2 3 4 5 6 7 8 9 10
0 0.971111 0.209419 0.266902 0.410897 0.702329 0.199330 0.622634 0.391587 0.357186 0.738886
1 0.195173 0.409414 0.543279 0.090533 0.621940 0.096192 0.050050 0.513417 0.384031 0.191914
2 0.973278 0.825286 0.434370 0.012834 0.694801 0.645579 0.261067 0.240224 0.488762 0.665984
3 0.671826 0.184333 0.773337 0.870569 0.325016 0.871609 0.968624 0.103269 0.347466 0.262120
4 0.268309 0.242649 0.098463 0.979625 0.500496 0.965501 0.544177 0.959747 0.411557 0.979344
In [128]: df.iloc[:, np.r_[1, 2:df.shape[1]:2]]
Out[128]:
2 3 5 7 9
0 0.209419 0.266902 0.702329 0.622634 0.357186
1 0.409414 0.543279 0.621940 0.050050 0.384031
2 0.825286 0.434370 0.694801 0.261067 0.488762
3 0.184333 0.773337 0.325016 0.968624 0.347466
4 0.242649 0.098463 0.500496 0.544177 0.411557
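As a side note, np.r_ simply concatenates its integer and slice arguments into a single index array, which is why the selector above picks the 2nd column plus every other column from the 3rd onwards (0-based positions 1, 2, 4, 6, 8 for a 10-column frame):
import numpy as np
np.r_[1, 2:10:2]   # array([1, 2, 4, 6, 8])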
I have the following two columns in a pandas data frame:
256 Z
0 2 2
1 2 3
2 4 4
3 4 9
There are around 1594 rows. '256' and 'Z' are column headers, whereas 0, 1, 2, 3 are row numbers (the first column above). I want to print the row numbers where the value in column '256' is not equal to the value in column 'Z'. Thus the output in the above case will be 1, 3.
How can this comparison be made in pandas? I will be very grateful for help. Thanks.
Create the data frame:
import pandas as pd
df = pd.DataFrame({"256":[2,2,4,4], "Z": [2,3,4,9]})
output:
256 Z
0 2 2
1 2 3
2 4 4
3 4 9
After subsetting your data frame, use the index to get the id of rows in the subset:
row_ids = df[df["256"] != df.Z].index
gives
Int64Index([1, 3], dtype='int64')
Another way could be to use the .loc method of pandas.DataFrame, which returns the rows selected by the boolean indexing; taking the index of that result gives the qualifying row labels:
df.loc[(df['256'] != df['Z'])].index
with an output of:
Int64Index([1, 3], dtype='int64')
This happens to be the quickest of the listed implementations, as can be seen in an IPython notebook:
import pandas as pd
import numpy as np
df = pd.DataFrame({"256":np.random.randint(0,10,1594), "Z": np.random.randint(0,10,1594)})
%timeit df.loc[(df['256'] != df['Z'])].index
%timeit row_ids = df[df["256"] != df.Z].index
%timeit rows = list(df[df['256'] != df.Z].index)
%timeit df[df['256'] != df['Z']].index
with an output of:
1000 loops, best of 3: 352 µs per loop
1000 loops, best of 3: 358 µs per loop
1000 loops, best of 3: 611 µs per loop
1000 loops, best of 3: 355 µs per loop
However, when the timings differ by only a few microseconds it doesn't make a significant difference, but if in the future you have a very large data set, timing and efficiency may become a much more important issue. For your relatively small data set of 1594 rows, I would go with the solution that looks the most elegant and promotes readability.
You can try this:
# Assuming your DataFrame is named "frame"
rows = list(frame[frame['256'] != frame.Z].index)
rows will now be a list containing the row numbers for which those two column values are not equal. So with your data:
>>> frame
256 Z
0 2 2
1 2 3
2 4 4
3 4 9
[4 rows x 2 columns]
>>> rows = list(frame[frame['256'] != frame.Z].index)
>>> print(rows)
[1, 3]
Assuming df is your dataframe, this should do it:
df[df['256'] != df['Z']].index
yielding:
Int64Index([1, 3], dtype='int64')
Obviously new to Pandas. How can I simply count the number of records in a dataframe?
I would have thought something as simple as this would do it, and I can't seem to even find the answer in searches... probably because it is too simple.
cnt = df.count
print cnt
The above code actually just prints the whole df.
To get the number of rows in a dataframe use:
df.shape[0]
(and df.shape[1] to get the number of columns).
As an alternative you can use
len(df)
or
len(df.index)
(and len(df.columns) for the columns)
shape is more versatile and more convenient than len(), especially for interactive work (it just needs to be appended at the end), but len is a bit faster (see also this answer).
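A quick sketch showing the options side by side (the example data is made up for illustration):
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

df.shape          # (3, 2)
df.shape[0]       # 3 rows
df.shape[1]       # 2 columns
len(df)           # 3
len(df.index)     # 3
len(df.columns)   # 2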
To avoid: count(), because it returns the number of non-NA/null observations over the requested axis.
len(df.index) is faster
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(24).reshape(8, 3),columns=['A', 'B', 'C'])
df.loc[5, 'A'] = np.nan
df
# Out:
#       A   B   C
# 0   0.0   1   2
# 1   3.0   4   5
# 2   6.0   7   8
# 3   9.0  10  11
# 4  12.0  13  14
# 5   NaN  16  17
# 6  18.0  19  20
# 7  21.0  22  23
%timeit df.shape[0]
# 100000 loops, best of 3: 4.22 µs per loop
%timeit len(df)
# 100000 loops, best of 3: 2.26 µs per loop
%timeit len(df.index)
# 1000000 loops, best of 3: 1.46 µs per loop
df.__len__ is just a call to len(df.index)
import inspect
print(inspect.getsource(pd.DataFrame.__len__))
# Out:
# def __len__(self):
# """Returns length of info axis, but here we use the index """
# return len(self.index)
Why you should not use count()
df.count()
# Out:
# A 7
# B 8
# C 8
In regards to your question about counting one field... I decided to make it an answer, and I hope it helps...
Say I have the following DataFrame
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.normal(0, 1, (5, 2)), columns=["A", "B"])
You could count a single column by
df.A.count()
#or
df['A'].count()
both evaluate to 5.
The cool thing (or one of many w.r.t. pandas) is that if you have NA values, count takes that into consideration.
So if I did
df.loc[1::2, 'A'] = np.nan
df.count()
The result would be
A 3
B 5
Simply, row_num = df.shape[0] gives the number of rows; here's an example:
import pandas as pd
import numpy as np
In [322]: df = pd.DataFrame(np.random.randn(5,2), columns=["col_1", "col_2"])
In [323]: df
Out[323]:
col_1 col_2
0 -0.894268 1.309041
1 -0.120667 -0.241292
2 0.076168 -1.071099
3 1.387217 0.622877
4 -0.488452 0.317882
In [324]: df.shape
Out[324]: (5, 2)
In [325]: df.shape[0] ## Gives no. of rows/records
Out[325]: 5
In [326]: df.shape[1] ## Gives no. of columns
Out[326]: 2
The NaN example above misses one piece, which makes it less generic. To do this more generically, use df['column_name'].value_counts().
This will give you the counts of each value in that column.
d = ['A', 'A', 'A', 'B', 'C', 'C', ' ', ' ', ' ', ' ', ' ', '-1']  # for simplicity
df = pd.DataFrame(d)
df.columns = ['col1']
df['col1'].value_counts()
      5
A     3
C     2
-1    1
B     1
dtype: int64
"""len(df) give you 12, so we know the rest must be Nan's of some form, while also having a peek into other invalid entries, especially when you might want to ignore them like -1, 0 , "", also"""
Simple method to get the records count:
df.count()[0]
Note that count() excludes NaN values, so this matches the number of rows only when the first column has no missing values.
I used the pandas library for this. Here is the code:
import pandas as pd
name_of_file = "test.xlsx"
data = pd.read_excel(name_of_file)
required_colum_name = "Post test Number"
print(len(data[required_colum_name]))
# this also works -> data["Post test Number"].count()
I am trying to transform a DataFrame such that some of the rows will be replicated a given number of times. For example:
df = pd.DataFrame({'class': ['A', 'B', 'C'], 'count':[1,0,2]})
class count
0 A 1
1 B 0
2 C 2
should be transformed to:
class
0 A
1 C
2 C
This is the reverse of aggregation with the count function. Is there an easy way to achieve it in pandas (without using for loops or list comprehensions)?
One possibility might be to allow the DataFrame.applymap function to return multiple rows (akin to the apply method of GroupBy). However, I do not think it is possible in pandas now.
You could use groupby:
def f(group):
    row = group.irow(0)
    return DataFrame({'class': [row['class']] * row['count']})
df.groupby('class', group_keys=False).apply(f)
so you get
In [25]: df.groupby('class', group_keys=False).apply(f)
Out[25]:
class
0 A
0 C
1 C
You can fix the index of the result however you like
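Note that irow has since been removed from pandas; the same idea with .iloc might look like this (a sketch, not part of the original answer):
import pandas as pd

df = pd.DataFrame({'class': ['A', 'B', 'C'], 'count': [1, 0, 2]})

def f(group):
    row = group.iloc[0]   # first row of the group
    return pd.DataFrame({'class': [row['class']] * row['count']})

df.groupby('class', group_keys=False).apply(f)
#   class
# 0     A
# 0     C
# 1     C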
I know this is an old question, but I was having trouble getting Wes' answer to work for multiple columns in the dataframe so I made his code a bit more generic. Thought I'd share in case anyone else stumbles on this question with the same problem.
You basically just specify which column has the counts in it, and you get an expanded dataframe in return.
import pandas as pd
df = pd.DataFrame({'class 1': ['A','B','C','A'],
'class 2': [ 1, 2, 3, 1],
'count': [ 3, 3, 3, 1]})
print df,"\n"
def f(group, *args):
    row = group.irow(0)
    Dict = {}
    row_dict = row.to_dict()
    for item in row_dict:
        Dict[item] = [row[item]] * row[args[0]]
    return pd.DataFrame(Dict)

def ExpandRows(df, WeightsColumnName):
    df_expand = df.groupby(df.columns.tolist(), group_keys=False).apply(f, WeightsColumnName).reset_index(drop=True)
    return df_expand
df_expanded = ExpandRows(df,'count')
print df_expanded
Returns:
  class 1  class 2  count
0       A        1      3
1       B        2      3
2       C        3      3
3       A        1      1

  class 1  class 2  count
0       A        1      1
1       A        1      3
2       A        1      3
3       A        1      3
4       B        2      3
5       B        2      3
6       B        2      3
7       C        3      3
8       C        3      3
9       C        3      3
With regards to speed, my base df is 10 columns by ~6k rows, and the expanded result of ~100,000 rows takes ~7 seconds to build. I'm not sure in this case whether grouping is necessary or wise, since it uses all the columns to form the groups, but hey, it's only 7 seconds.
There is an even simpler and significantly more efficient solution.
I had to make a similar modification for a table of about 3.5M rows, and the previously suggested solutions were extremely slow.
A better way is to use numpy's repeat function to generate a new index in which each row index is repeated a number of times according to its given count, and then use iloc to select rows of the original table according to this index:
import pandas as pd
import numpy as np
df = pd.DataFrame({'class': ['A', 'B', 'C'], 'count': [1, 0, 2]})
spread_ixs = np.repeat(range(len(df)), df['count'])
spread_ixs
array([0, 2, 2])
df.iloc[spread_ixs, :].drop(columns='count').reset_index(drop=True)
class
0 A
1 C
2 C
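A closely related variant stays entirely within pandas by repeating the index labels and reindexing; this is a sketch rather than part of the original answer, and it assumes the index is unique:
out = (df.reindex(df.index.repeat(df['count']))   # repeat each row label by its count
         .drop(columns='count')
         .reset_index(drop=True))
out
  class
0     A
1     C
2     C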
This question is very old and the answers do not reflect pandas' modern capabilities. You can use iterrows to loop over every row and then use the DataFrame constructor to create new DataFrames with the correct number of rows. Finally, use pd.concat to concatenate all the rows together.
pd.concat([pd.DataFrame(data=[row], index=range(row['count']))
           for _, row in df.iterrows()], ignore_index=True)
class count
0 A 1
1 C 2
2 C 2
This has the benefit of working with any size DataFrame.