copy some rows from existing pandas dataframe to a new one - python

The copy has to be done for rows whose 'CITY' value starts with 'BH'.
The copied df.index should be the same as the original.
Eg -
STATE CITY
315 KA BLR
423 WB CCU
554 KA BHU
557 TN BHY
# state_df is new dataframe, df is existing
state_df = pd.DataFrame(columns=['STATE', 'CITY'])
for index, row in df.iterrows():
    city = row['CITY']
    if city.startswith('BH'):
        append row from df to state_df  # pseudocode
Being new to pandas and Python, I need help turning this pseudocode into the most efficient approach.

Solution with startswith and boolean indexing:
print (df['CITY'].str.startswith('BH'))
315 False
423 False
554 True
557 True
state_df = df[df['CITY'].str.startswith('BH')]
print (state_df)
STATE CITY
554 KA BHU
557 TN BHY
If you need to copy only some columns, add loc:
state_df = df.loc[df['CITY'].str.startswith('BH'), ['STATE']]
print (state_df)
STATE
554 KA
557 TN
Timings:
#len (df) = 400k
df = pd.concat([df]*100000).reset_index(drop=True)
In [111]: %timeit (df.CITY.str.startswith('BH'))
10 loops, best of 3: 151 ms per loop
In [112]: %timeit (df.CITY.str.contains('^BH'))
1 loop, best of 3: 254 ms per loop
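Note: if CITY can contain missing values, str.startswith returns NaN for them and boolean indexing then fails; passing na=False keeps the mask boolean. A minimal sketch:
# startswith/contains return NaN for missing values; na=False keeps the mask boolean
state_df = df[df['CITY'].str.startswith('BH', na=False)]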

try this:
In [4]: new = df[df['CITY'].str.contains(r'^BH')].copy()
In [5]: new
Out[5]:
STATE CITY
554 KA BHU
557 TN BHY
What if I need to copy only some columns of the row and not the entire row?
cols_to_copy = ['STATE']
new = df.loc[df.CITY.str.contains(r'^BH'), cols_to_copy].copy()
In [7]: new
Out[7]:
STATE
554 KA
557 TN

I removed the for loop and finally wrote this:
state_df = df.loc[df['CTYNAME'].str.startswith('Washington'), cols_to_copy]
The for loop may be slower, but I still need to check that.

Related

How to transform data frames into one big dataframe?

I have written a program (code below) that gives me a data frame for each file in a folder. The data frame contains the quarters of the year from the file and the counts (how often each quarter occurs in the file). The output for one file in the loop looks, for example, like:
2008Q4 230
2009Q1 186
2009Q2 166
2009Q3 173
2009Q4 246
2010Q1 341
2010Q2 336
2010Q3 200
2010Q4 748
2011Q1 625
2011Q2 690
2011Q3 970
2011Q4 334
2012Q1 573
2012Q2 53
How can I create a big data frame where the counts for the quarters are summed up for all files in the folder?
path = "crisisuser"
os.chdir(path)
result = [i for i in glob.glob('*.{}'.format("csv"))]
os.chdir("..")
for i in result:
df = pd.read_csv("crisisuser/"+i)
df['quarter'] = pd.PeriodIndex(df.time, freq='Q')
df=df['quarter'].value_counts().sort_index()
I think you need to append all the Series to a list, then use concat and sum per index values:
out = []
for i in result:
    df = pd.read_csv("crisisuser/"+i)
    df['quarter'] = pd.PeriodIndex(df.time, freq='Q')
    out.append(df['quarter'].value_counts().sort_index())
s = pd.concat(out).sum(level=0)
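Note: on newer pandas versions, Series.sum(level=0) has been deprecated and later removed; the equivalent is a groupby on the index level:
# same result on recent pandas, where sum(level=0) is no longer available
s = pd.concat(out).groupby(level=0).sum()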

Combine two data frames without a common column

I am adding a column "state" to an existing dataframe that does not share a common column with my other data frame. Therefore, I need to convert zipcodes into states (for example, 00704 would be PR) to load into the dataframe that has the new column state.
reviewers = pd.read_csv('reviewers.txt',
                        sep='|',
                        header=None,
                        names=['user id','age','gender','occupation','zipcode'])
reviewers['state'] = ""
user id age gender occupation zipcode state
0 1 24 M technician 85711
1 2 53 F other 94043
zipcodes = pd.read_csv('zipcodes.txt',
                       usecols=[1,4],
                       converters={'Zipcode':str})
Zipcode State
0 00704 PR
1 00704 PR
2 00704 PR
3 00704 PR
4 00704 PR
zipcodes1 = zipcodes.set_index('Zipcode') ###Setting the index to zipcode
dfzip = zipcodes1
print(dfzip)
State
Zipcode
00704 PR
00704 PR
00704 PR
zips = (pd.Series(dfzip.values.tolist(), index = zipcodes1['State'].index))
states = []
for zipcode in reviewers['Zipcode']:
    if re.search('[a-zA-Z]+', zipcode):
        append.states['canada']
    elif zipcode in zips.index:
        append.states(zips['zipcode'])
    else:
        append.states('unkown')
I am not sure if my loop is correct either. I have to sort the zipcodes into U.S. zipcodes (numerical), Canadian zip codes (alphabetical), and other zip codes, which we define as unknown. Let me know if you need the data file.
Use:
import numpy as np
#remove duplicates and create Series for mapping
zips = zipcodes.drop_duplicates().set_index('Zipcode')['State']
#get mask for Canadian zip codes
#if lowercase letters are possible, change the pattern to [a-zA-Z]+
mask = reviewers['zipcode'].str.match('[A-Z]+')
#new column by mask
reviewers['state'] = np.where(mask, 'canada', reviewers['zipcode'].map(zips))
#NaNs are replaced
reviewers['state'] = reviewers['state'].fillna('unknown')
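To see the whole technique end to end, here is a small self-contained sketch on made-up data (the zipcodes and states below are purely illustrative):
import numpy as np
import pandas as pd
# hypothetical toy data, only to illustrate the technique
reviewers = pd.DataFrame({'zipcode': ['85711', 'M5V2T6', '00704', '99999']})
zips = pd.Series({'85711': 'AZ', '00704': 'PR'})  # Zipcode -> State mapping
# letters at the start mark a Canadian code
mask = reviewers['zipcode'].str.match('[A-Z]+')
reviewers['state'] = np.where(mask, 'canada', reviewers['zipcode'].map(zips))
reviewers['state'] = reviewers['state'].fillna('unknown')
print(reviewers['state'].tolist())   # ['AZ', 'canada', 'PR', 'unknown']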
Loop version with apply:
import re
def f(code):
    res = "unknown"
    #if lowercase letters are possible, change the pattern to [a-zA-Z]+
    if re.match('[A-Z]+', code):
        res = 'canada'
    elif code in zips.index:
        res = zips[code]
    return res
reviewers['State1'] = reviewers['zipcode'].apply(f)
print (reviewers.tail(10))
user id age gender occupation zipcode state State1
933 934 61 M engineer 22902 VA VA
934 935 42 M doctor 66221 KS KS
935 936 24 M other 32789 FL FL
936 937 48 M educator 98072 WA WA
937 938 38 F technician 55038 MN MN
938 939 26 F student 33319 FL FL
939 940 32 M administrator 02215 MA MA
940 941 20 M student 97229 OR OR
941 942 48 F librarian 78209 TX TX
942 943 22 M student 77841 TX TX
#test if same output
print ((reviewers['State1'] == reviewers['state']).all())
True
Timings:
In [56]: %%timeit
...: mask = reviewers['zipcode'].str.match('[A-Z]+')
...: reviewers['state'] = np.where(mask, 'canada', reviewers['zipcode'].map(zips))
...: reviewers['state'] = reviewers['state'].fillna('unknown')
...:
100 loops, best of 3: 2.08 ms per loop
In [57]: %%timeit
...: reviewers['State1'] = reviewers['zipcode'].apply(f)
...:
100 loops, best of 3: 17 ms per loop
Your loop needs to be fixed:
states = []
for zipcode in reviewers['zipcode']:
    if re.match('[A-Za-z]+', zipcode):
        states.append('Canada')
    elif zipcode in zips.index:
        states.append(zips[zipcode])
    else:
        states.append('Unknown')
Also, I am assuming you want the states list to be plugged back into the dataframe. In that case you don't need the for loop; you can use pandas apply on the dataframe to get a new column:
def findState(code):
    res = 'Unknown'
    if re.match('[A-Za-z]+', code):
        res = 'Canada'
    elif code in zips.index:
        res = zips[code]
    return res
reviewers['State'] = reviewers['zipcode'].apply(findState)

optimize a string query with pandas. large data

I have a dataframe data which has close to 4 million rows. It is a list of cities of the world, and I need to query by city name as fast as possible.
The best I found so far is 346 ms, by indexing on the city name:
d2=data.set_index("city",inplace=False)
timeit d2.loc[['PARIS']]
1 loop, best of 3: 346 ms per loop
This is still much too slow. I wonder whether I could achieve a faster query with groupby (and how to write such a query). Each city has around 10 rows in the dataframe (duplicate city names).
I searched for several days and could not find a clear solution on the internet.
Thank you.
Setup
df = pd.DataFrame(data=[['Paris'+str(i),i] for i in range(100000)]*10,columns=['city','value'])
Baseline
df2 = df.set_index('city')
%timeit df2.loc[['Paris9999']]
10 loops, best of 3: 45.6 ms per loop
Solution
Using a lookup dict and then iloc:
idx_dict = df.groupby(by='city').apply(lambda x: x.index.tolist()).to_dict()
%timeit df.iloc[idx_dict['Paris9999']]
1000 loops, best of 3: 432 µs per loop
It seems this approach is almost 100 times faster than the baseline.
Comparing to other approaches:
%timeit df2[df2.index.values=="Paris9999"]
100 loops, best of 3: 16.7 ms per loop
%timeit full_array_based(df2, "Paris9999")
10 loops, best of 3: 19.6 ms per loop
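One caveat about the lookup dict: groupby(...).apply(lambda x: x.index.tolist()) stores index labels, which equal positions here only because the frame has a default RangeIndex. With an arbitrary index, label-based lookup is the safer choice:
idx_dict = df.groupby(by='city').apply(lambda x: x.index.tolist()).to_dict()
# .iloc expects positions; with a non-default index, use label-based .loc instead
result = df.loc[idx_dict['Paris9999']]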
Working with the array data for the index, comparing against the needed index and then using the mask from the comparison might be one option when looking for performance. A sample case might make things clear.
1) Input dataframes :
In [591]: df
Out[591]:
city population
0 Delhi 1000
1 Paris 56
2 NY 89
3 Paris 36
4 Delhi 300
5 Paris 52
6 Paris 34
7 Delhi 40
8 NY 89
9 Delhi 450
In [592]: d2 = df.set_index("city",inplace=False)
In [593]: d2
Out[593]:
population
city
Delhi 1000
Paris 56
NY 89
Paris 36
Delhi 300
Paris 52
Paris 34
Delhi 40
NY 89
Delhi 450
2) Indexing with .loc :
In [594]: d2.loc[['Paris']]
Out[594]:
population
city
Paris 56
Paris 36
Paris 52
Paris 34
3) Use mask based indexing :
In [595]: d2[d2.index.values=="Paris"]
Out[595]:
population
city
Paris 56
Paris 36
Paris 52
Paris 34
4) Finally timings :
In [596]: %timeit d2.loc[['Paris']]
1000 loops, best of 3: 475 µs per loop
In [597]: %timeit d2[d2.index.values=="Paris"]
10000 loops, best of 3: 156 µs per loop
Further boost
Going further with array data, we can extract the entire input dataframe as an array and index into it. An implementation using that philosophy would look something like this -
def full_array_based(d2, indexval):
    df0 = pd.DataFrame(d2.values[d2.index.values==indexval])
    df0.index = [indexval]*df0.shape[0]
    df0.columns = d2.columns
    return df0
Sample run and timings -
In [635]: full_array_based(d2, "Paris")
Out[635]:
population
Paris 56
Paris 36
Paris 52
Paris 34
In [636]: %timeit full_array_based(d2, "Paris")
10000 loops, best of 3: 146 µs per loop
If we are allowed to pre-process to set up a dictionary that can be indexed for extracting city-string-based data from the input dataframe, here's one solution using NumPy to do so -
import numpy as np
def indexed_dict_numpy(df):
    cs = df.city.values.astype(str)
    sidx = cs.argsort()
    scs = cs[sidx]
    idx = np.concatenate(( [0], np.flatnonzero(scs[1:] != scs[:-1])+1, [cs.size]))
    return {n:sidx[i:j] for n,i,j in zip(cs[sidx[idx[:-1]]], idx[:-1], idx[1:])}
Sample run -
In [10]: df
Out[10]:
city population
0 Delhi 1000
1 Paris 56
2 NY 89
3 Paris 36
4 Delhi 300
5 Paris 52
6 Paris 34
7 Delhi 40
8 NY 89
9 Delhi 450
In [11]: dict1 = indexed_dict_numpy(df)
In [12]: df.iloc[dict1['Paris']]
Out[12]:
city population
1 Paris 56
3 Paris 36
5 Paris 52
6 Paris 34
Runtime test against @Allen's solution to set up a similar dictionary with 4 Mil rows -
In [43]: # Setup 4 miliion rows of df
...: df = pd.DataFrame(data=[['Paris'+str(i),i] for i in range(400000)]*10,\
...: columns=['city','value'])
...: np.random.shuffle(df.values)
...:
In [44]: %timeit df.groupby(by='city').apply(lambda x: x.index.tolist()).to_dict()
1 loops, best of 3: 2.01 s per loop
In [45]: %timeit indexed_dict_numpy(df)
1 loops, best of 3: 1.15 s per loop
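Another option that may be worth timing on the real data: sort the index so it is monotonic, which lets .loc locate labels with a binary search rather than a full scan. A minimal sketch:
# with a sorted (monotonic) index, .loc can use binary search for lookups
d2_sorted = df.set_index('city').sort_index()
result = d2_sorted.loc[['Paris9999']]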

Append a Single Line by Elements into a Pandas Dataframe

I have the following (sample) dataframe:
Age height weight haircolor
joe 35 5.5 145 brown
mary 26 5.25 110 blonde
pete 44 6.02 185 red
....
There are no duplicate values in the index.
I am in the unenviable position of having to append to this dataframe using elements from a number of other dataframes. So I'm appending as follows:
names_df = names_df.append({'Age': someage,
                            'height': someheight,
                            'weight': someweight,
                            'haircolor': somehaircolor},
                           ignore_index=True)
My question is: using this method, how do I set the new index value in names_df equal to the person's name?
The only thing I can think of is to reset the df index before I append and then re-set it afterward. Ugly. Has to be a better way.
Thanks in advance.
I am not sure in what format you are getting the data that you are appending to the original df, but one way is as follows:
df.loc['new_name', :] = ['someage', 'someheight', 'someweight', 'somehaircolor']
Age height weight haircolor
joe 35 5.5 145 brown
mary 26 5.25 110 blonde
pete 44 6.02 185 red
new_name someage someheight someweight somehaircolor
Time Testing:
%timeit df.loc['new_name', :] = ['someage', 'someheight', 'someweight', 'somehaircolor']
1000 loops, best of 3: 408 µs per loop
%timeit df.append(pd.DataFrame({'Age': 'someage', 'height': 'someheight','weight':'someweight','haircolor': 'somehaircolor'}, index=['some_person']))
100 loops, best of 3: 2.59 ms per loop
Here's another way using append. Instead of passing a dictionary, pass a dataframe (created from the dictionary) while specifying the index:
names_df = names_df.append(pd.DataFrame({'Age': 'someage',
                                          'height': 'someheight',
                                          'weight': 'someweight',
                                          'haircolor': 'somehaircolor'}, index=['some_person']))
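Note: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current versions the same idea is written with pd.concat:
# equivalent on newer pandas, where DataFrame.append no longer exists
names_df = pd.concat([names_df,
                      pd.DataFrame({'Age': 'someage',
                                    'height': 'someheight',
                                    'weight': 'someweight',
                                    'haircolor': 'somehaircolor'}, index=['some_person'])])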

Python and Pandas - Moving Average Crossover

There is a Pandas DataFrame object with some stock data. The SMAs are moving averages calculated from the previous 45/15 days.
Date Price SMA_45 SMA_15
20150127 102.75 113 106
20150128 103.05 100 106
20150129 105.10 112 105
20150130 105.35 111 105
20150202 107.15 111 105
20150203 111.95 110 105
20150204 111.90 110 106
I want to find all dates when SMA_15 and SMA_45 intersect.
Can it be done efficiently using Pandas or NumPy? How?
EDIT:
What I mean by 'intersection' is the data row where either:
the long SMA(45) value was bigger than the short SMA(15) value for longer than the short SMA period (15) and then became smaller, or
the long SMA(45) value was smaller than the short SMA(15) value for longer than the short SMA period (15) and then became bigger.
I'm taking a crossover to mean when the SMA lines -- as functions of time --
intersect, as depicted on this investopedia
page.
Since the SMAs represent continuous functions, there is a crossing when,
for a given row, (SMA_15 is less than SMA_45) and (the previous SMA_15 is
greater than the previous SMA_45) -- or vice versa.
In code, that could be expressed as
previous_15 = df['SMA_15'].shift(1)
previous_45 = df['SMA_45'].shift(1)
crossing = (((df['SMA_15'] <= df['SMA_45']) & (previous_15 >= previous_45))
| ((df['SMA_15'] >= df['SMA_45']) & (previous_15 <= previous_45)))
Applying this to your data (which already contains crossings),
import pandas as pd
df = pd.read_table('data', sep='\s+')
previous_15 = df['SMA_15'].shift(1)
previous_45 = df['SMA_45'].shift(1)
crossing = (((df['SMA_15'] <= df['SMA_45']) & (previous_15 >= previous_45))
| ((df['SMA_15'] >= df['SMA_45']) & (previous_15 <= previous_45)))
crossing_dates = df.loc[crossing, 'Date']
print(crossing_dates)
yields
1 20150128
2 20150129
Name: Date, dtype: int64
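Since the question distinguishes the two directions of the crossing, the same shifted comparisons can be split into separate masks (a small sketch reusing previous_15 and previous_45 from above; the names cross_up and cross_down are just illustrative):
# direction of each crossing: SMA_15 moving above vs. below SMA_45
cross_up   = (df['SMA_15'] >= df['SMA_45']) & (previous_15 <= previous_45)
cross_down = (df['SMA_15'] <= df['SMA_45']) & (previous_15 >= previous_45)
up_dates   = df.loc[cross_up, 'Date']
down_dates = df.loc[cross_down, 'Date']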
The following method gives similar results, but takes less time than the previous approach:
import numpy as np
df['position'] = df['SMA_15'] > df['SMA_45']
df['pre_position'] = df['position'].shift(1)
df.dropna(inplace=True)  # drop the NaN row introduced by shift
df['crossover'] = np.where(df['position'] == df['pre_position'], False, True)
Time taken for this approach: 2.7 ms ± 310 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Time taken for previous approach: 3.46 ms ± 307 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
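If the crossing dates themselves are needed, to match the output of the previous answer, the boolean crossover column can be used as a mask:
# select the Date values where a crossover occurred
crossing_dates = df.loc[df['crossover'], 'Date']
print(crossing_dates)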
As an alternative to unutbu's answer, something like the following can also be done to find the indices where SMA_15 crosses SMA_45.
diff = df['SMA_15'] < df['SMA_45']
diff_forward = diff.shift(1)
crossing = np.where(abs(diff - diff_forward) == 1)[0]
print(crossing)
>>> [1,2]
print(df.iloc[crossing])
>>>
Date Price SMA_45 SMA_15
1 20150128 103.05 100 106
2 20150129 105.10 112 105
