df.value.apply returns NaN - python

I have a dataframe with 2 columns (time and pressure).
timestep value
0 393
1 389
2 402
3 408
4 413
5 463
6 471
7 488
8 422
9 404
10 370
I first need to find the frequency of each pressure value and rank them (df['freq_rank']), which works fine. But when I try to mask the dataframe by comparing the column against the count value and find the interval differences, I get NaN results.
import numpy as np
import pandas as pd
from matplotlib.pylab import *
import re
import pylab
from pylab import *
import datetime
from scipy import stats
import matplotlib.pyplot
df = pd.read_csv('copy.csv')
dataset = np.loadtxt(df, delimiter=";")
df.columns = ["Timestamp", "Pressure"]
## Timestep as int
df = pd.DataFrame({'timestep':np.arange(3284), 'value': df.Pressure})
## Rank of the frequency of each value in the df
vcs = {v: i for i, v in enumerate(df.value.value_counts().index)}
df['freq_rank'] = df.value.apply(vcs.get)
print(df.freq_rank)
>>Output:
>>0 131
>>1 235
>>2 99
>>3 99
>>4 101
>>5 101
>>6 131
>>7 79
>>8 79
## Find most frequent value
count = df['value'].value_counts().sort_values(ascending=[False]).nlargest(10).index.values[0]
## Mask the DF by comparing the column against count value & find interval diff.
x = df.loc[df['value'] == count, 'timestep'].diff()
print(x)
>>Output:
>>50 1.0
>>112 62.0
>>215 103.0
>>265 50.0
>>276 11.0
>>277 1.0
>>278 1.0
>>318 40.0
>>366 48.0
>>367 1.0
>>368 1.0
>>372 4.0
df['freq'] = df.value.apply(x.get)
print(df.freq)
>>Output:
>>0 NaN
>>1 NaN
>>2 NaN
>>3 NaN
>>4 NaN
>>5 NaN
>>6 NaN
>>7 NaN
>>8 NaN
I don't understand why print(x) returns the right output and print(df['freq']) returns NaN.

I think your problem is with the last statement, df['freq'] = df.value.apply(x.get).
If you just want to copy x to the new column df['freq'], you can simply do:
df['freq'] = x
Then print(df.freq) will give you the same results as your print(x) statement.
Update:
Your problem is with the indices. df only has index values 0-10, whereas your x has 50, 112, 215, ...
When assigning to df, only values whose index already exists in df are added.
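A minimal sketch of that alignment behaviour (made-up values):
import pandas as pd
df = pd.DataFrame({'value': [393, 389, 402]})   # index 0, 1, 2
x = pd.Series([1.0, 62.0], index=[50, 112])     # index 50, 112
df['freq'] = x                          # aligned on index; no overlap -> all NaN
print(df['freq'].isna().all())          # True
df['freq2'] = x.reset_index(drop=True)  # assign positionally instead
print(df)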

Related

Pandas Dataframe - How to transpose one value for the row n to the row n-5 [duplicate]

I would like to shift a column in a Pandas DataFrame, but I haven't been able to find a way to do it in the documentation without rewriting the whole DataFrame. Does anyone know how to do it?
DataFrame:
## x1 x2
##0 206 214
##1 226 234
##2 245 253
##3 265 272
##4 283 291
Desired output:
## x1 x2
##0 206 nan
##1 226 214
##2 245 234
##3 265 253
##4 283 272
##5 nan 291
In [18]: a
Out[18]:
x1 x2
0 0 5
1 1 6
2 2 7
3 3 8
4 4 9
In [19]: a['x2'] = a.x2.shift(1)
In [20]: a
Out[20]:
x1 x2
0 0 NaN
1 1 5
2 2 6
3 3 7
4 4 8
You need to use df.shift here.
df.shift(i) shifts the entire dataframe by i units down.
So, for i = 1:
Input:
x1 x2
0 206 214
1 226 234
2 245 253
3 265 272
4 283 291
Output:
x1 x2
0 NaN NaN
1 206 214
2 226 234
3 245 253
4 265 272
So, run this script to get the expected output:
import pandas as pd
df = pd.DataFrame({'x1': [206, 226, 245, 265, 283],
                   'x2': [214, 234, 253, 272, 291]})
print(df)
df['x2'] = df['x2'].shift(1)
print(df)
Let's define the dataframe from your example by
>>> df = pd.DataFrame([[206, 214], [226, 234], [245, 253], [265, 272], [283, 291]],
columns=[1, 2])
>>> df
1 2
0 206 214
1 226 234
2 245 253
3 265 272
4 283 291
Then you could manipulate the index of the second column by
>>> s2 = df[2]
>>> s2.index = s2.index + 1
and finally re-combine the single columns
>>> pd.concat([df[1], s2], axis=1)
1 2
0 206.0 NaN
1 226.0 214.0
2 245.0 234.0
3 265.0 253.0
4 283.0 272.0
5 NaN 291.0
Perhaps not fast but simple to read. Consider setting variables for the column names and the actual shift required.
Edit: Generally, shifting is possible with df[2].shift(1), as already posted; however, that would cut off the carryover value at the end.
If you don't want to lose the values you shift past the end of your dataframe, simply append the required number of rows first:
offset = 5
DF = DF.append([np.nan for x in range(offset)])
DF = DF.shift(periods=offset)
DF = DF.reset_index() #Only works if sequential index
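Note that DataFrame.append was removed in pandas 2.0; here is a minimal sketch of the same extend-then-shift idea using pd.concat instead (example data made up):
import numpy as np
import pandas as pd
df = pd.DataFrame({'x1': [206, 226, 245, 265, 283],
                   'x2': [214, 234, 253, 272, 291]})
offset = 2
padding = pd.DataFrame(np.nan, index=range(offset), columns=df.columns)
df = pd.concat([df, padding], ignore_index=True)  # extend the frame first
df = df.shift(periods=offset)                     # then nothing falls off the end
print(df)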
Assuming the imports:
import pandas as pd
import numpy as np
First, append a new row with NaN, NaN, ... at the end of the DataFrame (df).
s1 = df.iloc[0] # copy 1st row to a new Series s1
s1[:] = np.NaN # set all values to NaN
df2 = df.append(s1, ignore_index=True) # add s1 to the end of df
It will create a new DataFrame df2. Maybe there is a more elegant way, but this works.
Now you can shift it:
df2.x2 = df2.x2.shift(1) # shift what you want
While trying to solve a problem of my own, similar to yours, I found in the pandas docs what I think answers this question:
DataFrame.shift(periods=1, freq=None, axis=0)
Shift index by desired number of periods with an optional time freq
Notes
If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you would like to extend the index when shifting and preserve the original data.
Hope this helps future readers with this matter.
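A small sketch of that freq behaviour (made-up series): the index moves, the data stays with its rows, and no NaN is introduced:
import pandas as pd
s = pd.Series([1, 2, 3], index=pd.date_range('2020-01-01', periods=3, freq='D'))
print(s.shift(1, freq='D'))   # index becomes Jan 02..04; values stay 1, 2, 3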
df3
1 108.210 108.231
2 108.231 108.156
3 108.156 108.196
4 108.196 108.074
... ... ...
2495 108.351 108.279
2496 108.279 108.669
2497 108.669 108.687
2498 108.687 108.915
2499 108.915 108.852
df3['yo'] = df3['yo'].shift(-1)
yo price
0 108.231 108.210
1 108.156 108.231
2 108.196 108.156
3 108.074 108.196
4 108.104 108.074
... ... ...
2495 108.669 108.279
2496 108.687 108.669
2497 108.915 108.687
2498 108.852 108.915
2499 NaN 108.852
This is how I do it:
df_ext = pd.DataFrame(index=pd.date_range(df.index[-1], periods=8, closed='right'))
df2 = pd.concat([df, df_ext], axis=0, sort=True)
df2["forecast"] = df2["some column"].shift(7)
Basically I am generating an empty dataframe with the desired index and then just concatenate them together. But I would really like to see this as a standard feature in pandas so I have proposed an enhancement to pandas.
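For reference, a self-contained sketch of the same idea (column names made up). Newer pandas replaced date_range's closed keyword with inclusive, so the extension start is offset explicitly here to stay version-agnostic:
import pandas as pd
df = pd.DataFrame({'some column': range(10)},
                  index=pd.date_range('2020-01-01', periods=10, freq='D'))
ext = pd.DataFrame(index=pd.date_range(df.index[-1] + pd.Timedelta(days=1),
                                       periods=7, freq='D'))
df2 = pd.concat([df, ext], axis=0, sort=True)
df2['forecast'] = df2['some column'].shift(7)
print(df2.tail(8))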
I'm new to pandas, and I may not be understanding the question, but this solution worked for my problem:
# Shift contents of column 'x2' down 1 row
df['x2'] = df['x2'].shift()
Or, to create a new column with contents of 'x2' shifted down 1 row
# Create new column with contents of 'x2' shifted down 1 row
df['x3'] = df['x2'].shift()
I had a read of the official docs for shift() while trying to figure this out, but it doesn't make much sense to me, and has no examples referencing this specific behavior.
Note that the last row of column 'x2' is effectively pushed off the end of the Dataframe. I expected shift() to have a flag to change this behaviour, but I can't find anything.
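As far as I can tell there is no such flag; a small sketch of one workaround is to extend the index before shifting, so the last value is kept:
import pandas as pd
df = pd.DataFrame({'x1': [206, 226, 245], 'x2': [214, 234, 253]})
df = df.reindex(range(len(df) + 1))   # add one empty row at the bottom
df['x2'] = df['x2'].shift()           # 253 now lands in the new row
print(df)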

Find the duplicate values of first column present in second column and return the corresponding row number of the value in second column

I have two columns in a dataframe with overlapping values. How can I find the duplicate values of the first column that are present in the second column, and return the corresponding row number of the value in the second column in a new column?
import pandas as pd
import csv
from pandas.compat import StringIO
print(pd.__version__)
csvdata = StringIO("""a,b
111,122
122,3
111,9
254,395
265,245
111,395
220,111
395,305
395,8""")
df1 = pd.read_csv(csvdata, sep=",")
# find unique duplicate values in first column
col_a_dups = df1['a'][df1['a'].duplicated()].unique()
corresponding_value = df1['b'][df1['b'].isin(col_a_dups)]
print(df1.join(corresponding_value, lsuffix="_l", rsuffix="_r"))
#print(corresponding_value.index)
Produces
0.24.2
a b_l b_r
0 111 122 NaN
1 122 3 NaN
2 111 9 NaN
3 254 395 395.0
4 265 245 NaN
5 111 395 395.0
6 220 111 111.0
7 395 305 NaN
8 395 8 NaN

iteritems() in dataframe column

I have a dataset of U.S. Education Datasets: Unification Project. I want to find out
Number of rows where enrolment in grade 9 to 12 (column: GRADES_9_12_G) is less than 5000
Number of rows where enrolment in grade 9 to 12 (column: GRADES_9_12_G) is between 10,000 and 20,000.
I am having a problem updating the counts whenever the condition in the if statement is true.
import pandas as pd
import numpy as np
df = pd.read_csv("C:/Users/akash/Downloads/states_all.csv")
df.shape
df = df.iloc[:, -6]
for key, value in df.iteritems():
    count = 0
    count1 = 0
    if value < 5000:
        count += 1
    elif value < 20000 and value > 10000:
        count1 += 1
print(str(count) + str(count1))
df looks like this
0 196386.0
1 30847.0
2 175210.0
3 123113.0
4 1372011.0
5 160299.0
6 126917.0
7 28338.0
8 18173.0
9 511557.0
10 315539.0
11 43882.0
12 66541.0
13 495562.0
14 278161.0
15 138907.0
16 120960.0
17 181786.0
18 196891.0
19 59289.0
20 189795.0
21 230299.0
22 419351.0
23 224426.0
24 129554.0
25 235437.0
26 44449.0
27 79975.0
28 57605.0
29 47999.0
...
1462 NaN
1463 NaN
1464 NaN
1465 NaN
1466 NaN
1467 NaN
1468 NaN
1469 NaN
1470 NaN
1471 NaN
1472 NaN
1473 NaN
1474 NaN
1475 NaN
1476 NaN
1477 NaN
1478 NaN
1479 NaN
1480 NaN
1481 NaN
1482 NaN
1483 NaN
1484 NaN
1485 NaN
1486 NaN
1487 NaN
1488 NaN
1489 NaN
1490 NaN
1491 NaN
Name: GRADES_9_12_G, Length: 1492, dtype: float64
In the output I got
00
With Pandas, using loops is almost always the wrong way to go. You probably want something like this instead:
print(len(df.loc[df['GRADES_9_12_G'] < 5000]))
print(len(df.loc[(10000 < df['GRADES_9_12_G']) & (df['GRADES_9_12_G'] < 20000)]))
I downloaded your data set, and there are multiple ways to go about this. First of all, you do not need to subset your data if you do not want to. Your problem can be solved like this:
import pandas as pd
df = pd.read_csv('states_all.csv')
df.fillna(0, inplace=True) # fill NA with 0, not required but nice looking
print(len(df.loc[df['GRADES_9_12_G'] < 5000])) # 184
print(len(df.loc[(df['GRADES_9_12_G'] > 10000) & (df['GRADES_9_12_G'] < 20000)])) # 52
The line df.loc[df['GRADES_9_12_G'] < 5000] is telling pandas to query the dataframe for all rows where the column df['GRADES_9_12_G'] is less than 5000. I am then calling python's builtin len function to return the length of the returned DataFrame, which outputs 184. This is essentially a boolean masking process which returns all True values for your df that meet the conditions you give it.
The second query df.loc[(df['GRADES_9_12_G'] > 10000) & (df['GRADES_9_12_G'] < 20000)]
uses an & operator which is a bitwise operator that requires both conditions to be met for a row to be returned. We then call the len function on that as well to get an integer value of the number of rows which outputs 52.
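One caveat worth noting (my addition, not from the original answer): & binds tighter than the comparison operators in Python, so the parentheses around each condition are required:
# Without parentheses this fails, because & is evaluated before > and <:
# df.loc[df['GRADES_9_12_G'] > 10000 & df['GRADES_9_12_G'] < 20000]
# Correct:
mask = (df['GRADES_9_12_G'] > 10000) & (df['GRADES_9_12_G'] < 20000)
print(len(df.loc[mask]))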
To go off your method:
import pandas as pd
df = pd.read_csv('states_all.csv')
df.fillna(0, inplace=True) # fill NA with 0, not required but nice looking
df = df.iloc[:, -6] # select all rows for your column -6
print(len(df[df < 5000])) # query your "df" for all values less than 5k and print len
print(len(df[(df > 10000) & (df < 20000)])) # same as above, just for vals in between range
Why did I change the code in my answer instead of using yours?
Simply put, it is more idiomatic pandas. Where we can, it is cleaner to use pandas built-ins than to iterate over dataframes with for loops, as this is what pandas was designed for.
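For completeness, a sketch of the original loop with the bug fixed: the counters must be initialised outside the loop, not reset on every iteration. (Also note that iteritems was removed in pandas 2.0; use items instead.)
count = 0
count1 = 0
for key, value in df.items():   # df here is the Series from df.iloc[:, -6]
    if value < 5000:            # NaN compares False, so NaN rows are skipped
        count += 1
    elif 10000 < value < 20000:
        count1 += 1
print(count, count1)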

Pandas: Iterate over rows and find frequency of occurances

I have a dataframe with 2 columns and 3000 rows.
First column is representing time in time-steps. For example first row is 0, second is 1, ..., last one is 2999.
Second column is representing pressure. The pressure changes as we iterate over the rows, but shows a repetitive behaviour. So every few steps we see that it goes to its minimum value (which is 375), then goes up again, then again at 375 etc.
What I want to do in Python is to iterate over the rows and see:
1) at which time-steps the pressure is at its minimum, and
2) find the frequency between the minimum values.
import numpy as np
import pandas as pd
import numpy.random as rnd
import scipy.linalg as lin
from matplotlib.pylab import *
import re
from pylab import *
import datetime
df = pd.read_csv('test.csv')
row = next(df.iterrows())[0]
dataset = np.loadtxt(df, delimiter=";")
df.columns = ["Timestamp", "Pressure"]
print(df[[0, 1]])
You don't need to iterate row-wise; you can compare the entire column against the min value to mask it, and then use the mask to find the timestep diff:
Data setup:
In [44]:
df = pd.DataFrame({'timestep':np.arange(20), 'value':np.random.randint(375, 400, 20)})
df
Out[44]:
timestep value
0 0 395
1 1 377
2 2 392
3 3 396
4 4 377
5 5 379
6 6 384
7 7 396
8 8 380
9 9 392
10 10 395
11 11 393
12 12 390
13 13 393
14 14 397
15 15 396
16 16 393
17 17 379
18 18 396
19 19 390
mask the df by comparing the column against the min value:
In [45]:
df[df['value']==df['value'].min()]
Out[45]:
timestep value
1 1 377
4 4 377
We can use the mask with loc to find the corresponding 'timestep' value and use diff to find the interval differences:
In [48]:
df.loc[df['value']==df['value'].min(),'timestep'].diff()
Out[48]:
1 NaN
4 3.0
Name: timestep, dtype: float64
You can convert the above intervals into a frequency with respect to 1 minute, or whatever frequency unit you desire.
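For instance, a small sketch assuming each timestep is one second (my assumption; the question doesn't say):
import numpy as np
import pandas as pd
df = pd.DataFrame({'timestep': np.arange(20),
                   'value': np.random.randint(375, 400, 20)})
intervals = df.loc[df['value'] == df['value'].min(), 'timestep'].diff()
per_minute = 60 / intervals   # e.g. an interval of 3 steps -> 20 occurrences/min
print(per_minute)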

Pandas: Find frequency of occurrences in a DF

I have a dataframe with 2 columns, Time and Pressure, with around 3000 rows, like this:
time value
0 393
1 389
2 402
3 408
4 413
5 463
6 471
7 488
8 422
9 404
10 370
I want to find 1) the most frequent value of pressure and 2) after how many time-steps we see this value. My code is this so far:
import numpy as np
import pandas as pd
from matplotlib.pylab import *
import re
from pylab import *
import datetime
from scipy import stats
pd.set_option('display.max_rows', 5000)
df = pd.read_csv('copy.csv')
row = next(df.iterrows())[0]
dataset = np.loadtxt(df, delimiter=";")
df.columns = ["LTimestamp", "LPressure"]
list(df.columns.values)
## Timestep
df = pd.DataFrame({'timestep': df.LTimestamp, 'value': df.LPressure})
df['timestep'] = pd.to_datetime(df['timestep'], unit='ms').dt.time
# print(df)
## Find most seen value in pressure
count = df['value'].value_counts().sort_values(ascending=[False]).nlargest(1).values[0]
print (count)
## Mask the df by comparing the column against the most seen value.
print(df[df['value'] == count])
## Find interval differences
x = df.loc[df['value'] == count, 'timestep'].diff()
print(x)
The output is this, where 101 is the number of times the most frequent value (400) occurs.
>>> 101
>>> Empty DataFrame
>>> Columns: [timestep, value]
>>> Index: []
>>> Series([], Name: timestep, dtype: object)
>>> [Finished in 1.7s]
I don't understand why it returns an empty Index array. If instead of
print(df[df['value'] == count])
I use
print(df[df['value'] == 400])
I can see the masked df with the interval differences, as here:
50 1.0
112 62.0
215 103.0
265 50.0
276 11.0
277 1.0
278 1.0
318 40.0
366 48.0
367 1.0
But later on, I will want to calculate this for the minimum values, or the second largest etc. This is why I want to use count and not a specific number. Can someone help with this?
I'd suggest using
>>> val = df['value'].value_counts().nlargest(1).index[0]
>>> df[df['value'] == val]
time value
2 2 402
3 3 402
7 7 402
8 8 402
A more general solution is to assign a rank of the frequency to each value in df.
import numpy as np
import pandas as pd
df = pd.DataFrame({
    'time': np.arange(20)
})
df['value'] = df.time ** 2 % 7
vcs = {v: i for i, v in enumerate(df.value.value_counts().index)}
df['freq_rank'] = df.value.apply(vcs.get)
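freq_rank is 0 for the most frequent value, 1 for the second most frequent, and so on, so the interval diff from earlier can now be taken for any rank, not just the most frequent value. A short sketch continuing the df just built:
n = 1                                            # 0 = most frequent, 1 = second, ...
x = df.loc[df['freq_rank'] == n, 'time'].diff()
print(x)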
