Extract an array from a numpy array - python

Apologies in advance for my English, my limited Python knowledge (newbie), and a possible duplicate question. I tried and searched a lot but couldn't find any solution to the problem I got stuck on. Here is the problem:
I have an array named array1, loaded with numpy.loadtxt() from a text file with two columns of data, x and y. x ranges from 0.4 to 15; the increment is not a problem.
I also have a second array, array2, whose x' values range from 10 to 12.
Note: the increment of x in each array is different. I will use them later for linear interpolation of the y values.
I want to crop the first array to the second array's x' range, 10 to 12.
I tried this;
new_array = array1[(array1>=np.amin(array2)) * (array1<= np.amax(array2))]
It crops the first array (array1). But I can only extract x values.
[ 10. 10.1 10.2 10.3 10.4 10.5 10.6 10.7 10.8 10.9 11. 11.1
11.2 11.3 11.4 11.5 11.6 11.7 11.8 11.9 12. 12.1 12.2 12.3
12.4 12.5 12.6 12.7 12.8 12.9]
I want to extract the values of x and y from array1 by a given range of x values from another array.
Edit
array1 = [[ 0.3 0.302 0.304 0.306 0.308 0.31 0.312 0.314 0.316
... 13.4 13.5 13.6 13.7 13.8 13.9 14. 14.1
14.2 14.3 14.4 14.5 14.6 14.7 14.8 14.9 15. ]
[ 8.82 9. 9.18 9.35 9.52 9.69 9.85 10.02
10.18 10.35 10.52 10.67 10.82 10.97 11.12 11.25
11.39 11.52 …................... 2.3044 1.7773 2.271 2.721 ]]
array2 = [[ 10. 10.02 10.03 10.04 10.05 10.06 10.07 10.08 10.09 10.1
10.12 10.13 10.14 10.15 10.16 10.17 10.18 10.19 10.2 10.21
10.22 10.23 10.24 10.25 10.26 10.27 10.28 10.29 10.3 10.31
10.33 10.34 10.35 10.36 10.37 10.38 10.39 10.4 10.41 10.42
10.43 10.44 10.45 10.46 10.47 10.48 10.49 10.5 10.51 10.52
10.53 10.54 10.59 10.64 10.7 10.75 10.8 10.85 10.9 10.95 11.
11.05 11.1 11.15 ...... 12.64 12.65 12.66 12.67 12.68 12.69
12.7 12.71 12.72 12.73 12.74 12.75 12.76 12.77 12.78 12.79
12.8 12.81 12.82 12.83 12.84 12.85 12.86 12.87 12.88 12.89
12.9 ][ 0.0058 0.0073 0.0081 0.0088 0.0096 0.0104 0.0112 0.012 0.0128
0.0136 0.0165 0.0018 0.0195 0.021 0.0226 0.0241 0.0256 0.0272
0.0288 0.0334 …. 0.1092 0.0879 0.0667 0.0458 0.0433 0.0409
0.0385 0.0361 0.0337 0.0314 0.0291 0.0268 0.0245 0.0223 0.0209
0.0195 0.0182 0.0168 0.0155 0.0141 0.0128 0.0115 0.0101 0.0088
0.0085 0.0081 0.0078 0.0074 0.0071 0.0068 0.0064 0.0061 0.0058
0.0054]]
Again, sorry for my English; I hope I succeeded in explaining myself.
Thank you a lot for your help :)

Assuming that the first index corresponds to x, this may work:
indices = (array1[0,...] >= np.min(array2[0,...])) & (array1[0,...] <= np.max(array2[0,...]))
xselected = array1[0,indices]
yselected = array1[1,indices]
Notes: np.amin is just an alias of np.min, so either works. Don't combine the masks with *; use the element-wise boolean and, &, which is the idiomatic operator for boolean arrays (on boolean arrays * happens to give the same result).
I've indexed the arrays with array[0,...], but I think you can just use array[0] there as well, since the 0 indexes the first dimension.
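A self-contained sketch of the masking approach, with synthetic arrays standing in for the loaded data (here y is simply x squared, so the result is easy to check):

```python
import numpy as np

# synthetic stand-ins: array1 is 2 x N (x in row 0, y in row 1),
# array2 holds the x' values whose range we crop to
x = np.arange(0.4, 15.0, 0.1)
array1 = np.array([x, x ** 2])
array2 = np.linspace(10.0, 12.0, 50)

# build the boolean mask from the x row only, then apply it to both rows
mask = (array1[0] >= array2.min()) & (array1[0] <= array2.max())
xselected = array1[0, mask]
yselected = array1[1, mask]
```

Because the mask is computed from row 0 but indexes the second axis, the x and y selections stay paired, which is what the later interpolation needs.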


Pandas pivot_table. How to change sorting

I have this dataframe df:
alpha1 week_day calendar_week
0 2.49 Freitag 2022-04-(01/07)
1 1.32 Samstag 2022-04-(01/07)
2 2.70 Sonntag 2022-04-(01/07)
3 3.81 Montag 2022-04-(01/07)
4 3.58 Dienstag 2022-04-(01/07)
5 3.48 Mittwoch 2022-04-(01/07)
6 1.79 Donnerstag 2022-04-(01/07)
7 2.12 Freitag 2022-04-(08/14)
8 2.41 Samstag 2022-04-(08/14)
9 1.78 Sonntag 2022-04-(08/14)
10 3.19 Montag 2022-04-(08/14)
11 3.33 Dienstag 2022-04-(08/14)
12 2.88 Mittwoch 2022-04-(08/14)
13 2.98 Donnerstag 2022-04-(08/14)
14 3.01 Freitag 2022-04-(15/21)
15 3.04 Samstag 2022-04-(15/21)
16 2.72 Sonntag 2022-04-(15/21)
17 4.11 Montag 2022-04-(15/21)
18 3.90 Dienstag 2022-04-(15/21)
19 3.16 Mittwoch 2022-04-(15/21)
and so on, with ascending calendar weeks.
I performed a pivot table to generate a heatmap.
df_pivot = pd.pivot_table(df, values=['alpha1'], index=['week_day'], columns=['calendar_week'])
What I get is:
alpha1 \
calendar_week 2022-(04-29/05-05) 2022-(05-27/06-02) 2022-(07-29/08-04)
week_day
Dienstag 3.32 2.09 4.04
Donnerstag 3.27 2.21 4.65
Freitag 2.83 3.08 4.19
Mittwoch 3.22 3.14 4.97
Montag 2.83 2.86 4.28
Samstag 2.62 3.62 3.88
Sonntag 2.81 3.25 3.77
\
calendar_week 2022-(08-26/09-01) 2022-04-(01/07) 2022-04-(08/14)
week_day
Dienstag 2.92 3.58 3.33
Donnerstag 3.58 1.79 2.98
Freitag 3.96 2.49 2.12
Mittwoch 3.09 3.48 2.88
Montag 3.85 3.81 3.19
Samstag 3.10 1.32 2.41
Sonntag 3.39 2.70 1.78
As you can see, the sorting of the pivot table is messed up. I need the columns (calendar weeks) in the same order as in the original dataframe.
I have been looking all over but couldn't find how to achieve this.
It would also be very nice if the ordering of the rows remained the same.
Any help will be greatly appreciated.
UPDATE
I didn't paste all the data; it would have been too long.
The calendar_week column consists of the following elements:
'2022-04-(01/07)',
'2022-04-(08/14)',
'2022-04-(15/21)',
'2022-04-(22/28)',
'2022-(04-29/05-05)',
'2022-05-(06/12)',
'2022-05-(13/19)',
'2022-05-(20/26)',
'2022-(05-27/06-02)',
'2022-06-(03/09)'
'2022-06-(10/16)'
'2022-06-(17/23)'
'2022-06-(24/30)'
'2022-07-(01/07)'
'2022-07-(08/14)'
'2022-07-(15/21)'
'2022-07-(22/28)'
'2022-(07-29/08-04)'
'2022-08-(05/11)'
etc....
Each occurs 7 times in df; it represents a calendar week.
The sorting is natural time order.
After pivoting the dataframe, the ordering of the columns gets messed up, and I guess it's due to the two different formats: 2022-(07-29/08-04) and 2022-07-(15/21).
Try running this:
df_pivot.sort_values(by = ['calendar_week'], axis = 1, ascending = True)
I got the following output. Is this what you wanted?
calendar_week  2022-04-(01/07)  2022-04-(08/14)  2022-04-(15/21)
week_day
Dienstag                  3.58             3.33             3.90
Donnerstag                1.79             2.98              NaN
Freitag                   2.49             2.12             3.01
Mittwoch                  3.48             2.88             3.16
Montag                    3.81             3.19             4.11
Be sure to handle the NaN values, e.g. by filling them with the fillna() function.
I hope that answers it. :)
You can use an ordered Categorical for your week days and sort the dates after pivoting with sort_index:
# define the desired order of the days
days = ['Montag', 'Dienstag', 'Mittwoch', 'Donnerstag',
        'Freitag', 'Samstag', 'Sonntag']

df_pivot = (df
    .assign(week_day=pd.Categorical(df['week_day'], categories=days,
                                    ordered=True))
    .pivot_table(values='alpha1', index='week_day',
                 columns='calendar_week')
    .sort_index(axis=1)
)
output:
calendar_week 2022-04-(01/07) 2022-04-(08/14) 2022-04-(15/21)
week_day
Montag 3.81 3.19 4.11
Dienstag 3.58 3.33 3.90
Mittwoch 3.48 2.88 3.16
Donnerstag 1.79 2.98 NaN
Freitag 2.49 2.12 3.01
Samstag 1.32 2.41 3.04
Sonntag 2.70 1.78 2.72
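A runnable miniature of the Categorical approach, using the first two weeks of the question's data re-typed by hand for the sketch:

```python
import pandas as pd

# desired row order
days = ['Montag', 'Dienstag', 'Mittwoch', 'Donnerstag',
        'Freitag', 'Samstag', 'Sonntag']

# the question's first two calendar weeks
df = pd.DataFrame({
    'alpha1': [2.49, 1.32, 2.70, 3.81, 3.58, 3.48, 1.79,
               2.12, 2.41, 1.78, 3.19, 3.33, 2.88, 2.98],
    'week_day': ['Freitag', 'Samstag', 'Sonntag', 'Montag',
                 'Dienstag', 'Mittwoch', 'Donnerstag'] * 2,
    'calendar_week': ['2022-04-(01/07)'] * 7 + ['2022-04-(08/14)'] * 7,
})

# the ordered Categorical makes the rows sort Montag..Sonntag,
# and sort_index(axis=1) orders the week columns
df_pivot = (df
    .assign(week_day=pd.Categorical(df['week_day'], categories=days,
                                    ordered=True))
    .pivot_table(values='alpha1', index='week_day',
                 columns='calendar_week')
    .sort_index(axis=1)
)
print(df_pivot)
```

Note that for the mixed week-label formats in the full dataset, a plain lexicographic column sort is not enough; an ordered Categorical on calendar_week (built from the known natural order) would fix the columns the same way it fixes the rows here.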

Opening and Closing Tags are Removed from html When Using BeautifulSoup

I am running into an issue when using BeautifulSoup to scrape data off of www.basketball-reference.com. I've used BeautifulSoup on Bballreference before, so I am a little stumped as to what is happening (granted, I am a pretty huge noob, so please bear with me).
I am trying to scrape team season stats off of https://www.basketball-reference.com/leagues/NBA_2020.html and am running into troubles from the very start:
from bs4 import BeautifulSoup
import requests
web_response = requests.get('https://www.basketball-reference.com/leagues/NBA_2020.html').text
soup = BeautifulSoup(web_response, 'lxml')
table = soup.find('table', id='team-stats-per_game')
print(table)
This prints None, showing that the table in question was not found, even though I can clearly locate that tag when inspecting the web page. Okay... no biggie so far (usually these errors are on my end), so I instead just print out the whole soup:
soup = BeautifulSoup(web_response, 'lxml')
print(soup)
I copy and paste that into https://codebeautify.org/htmlviewer/ to get a better view than in the terminal, and I see that it does not look how I would expect. Essentially the meta tags are fine, but everything else appears to have lost its opening and closing tags, turning my soup into an actual soup...
Again, no biggie (still pretty sure it is something that I am doing), so I go and grab the html from a simple blog site, print it, and paste it into codebeautify and lo and behold it looks normal. Now I have a suspicion that something is occurring on basketball-reference's side that is obscuring my ability to even grab the html.
My question is this: what exactly is going on here? I am assuming there's an 80% chance it is still me, but the 20% is not so sure at this point. Can someone point out what I am doing wrong, or how to grab the html?
The data is stored within the page, but inside an HTML comment.
To parse it, you can do, for example:
import requests
from bs4 import BeautifulSoup, Comment
web_response = requests.get('https://www.basketball-reference.com/leagues/NBA_2020.html').text
soup = BeautifulSoup(web_response, 'lxml')
table = soup.find('table', id='team-stats-per_game')
# find the comment section where the data is stored
for idx, c in enumerate(soup.select_one('div#all_team-stats-per_game').contents):
    if isinstance(c, Comment):
        break

# load the data from the comment:
soup2 = BeautifulSoup(soup.select_one('div#all_team-stats-per_game').contents[idx], 'html.parser')

# print data:
for tr in soup2.select('tr:has(td)'):
    tds = tr.select('td')
    for td in tds:
        print(td.get_text(strip=True), end='\t')
    print()
Prints:
Dallas Mavericks 67 241.5 41.6 90.0 .462 15.3 41.5 .369 26.3 48.5 .542 17.9 23.1 .773 10.6 36.4 47.0 24.5 6.3 5.0 12.8 19.0 116.4
Milwaukee Bucks* 65 240.8 43.5 91.2 .477 13.7 38.6 .356 29.8 52.6 .567 17.8 24.0 .742 9.5 42.2 51.7 25.9 7.4 6.0 14.9 19.2 118.6
Houston Rockets 64 241.2 41.1 90.7 .454 15.4 44.3 .348 25.7 46.4 .554 20.5 26.0 .787 10.4 34.6 44.9 21.5 8.5 5.1 14.7 21.6 118.1
Portland Trail Blazers 66 240.8 41.9 90.9 .461 12.6 33.8 .372 29.3 57.1 .513 17.3 21.7 .798 10.1 35.4 45.5 20.2 6.1 6.2 13.0 21.4 113.6
Atlanta Hawks 67 243.0 40.6 90.6 .449 12.0 36.1 .333 28.6 54.5 .525 18.5 23.4 .790 9.9 33.4 43.3 24.0 7.8 5.1 16.2 23.1 111.8
New Orleans Pelicans 64 242.3 42.6 92.2 .462 14.0 37.6 .372 28.6 54.6 .525 16.9 23.2 .729 11.2 35.8 47.0 27.0 7.6 5.1 16.2 21.0 116.2
Los Angeles Clippers 64 241.2 41.6 89.7 .464 12.2 33.2 .366 29.5 56.5 .522 20.8 26.2 .792 11.0 37.0 48.0 23.8 7.1 5.0 14.8 22.0 116.2
Washington Wizards 64 241.2 41.9 91.0 .461 12.3 33.1 .372 29.6 57.9 .511 19.5 24.8 .787 10.1 31.6 41.7 25.3 8.1 4.3 14.1 22.6 115.6
Memphis Grizzlies 65 240.4 42.8 91.0 .470 10.9 31.1 .352 31.8 59.9 .531 16.2 21.3 .761 10.4 36.3 46.7 27.0 8.0 5.6 15.3 20.8 112.6
Phoenix Suns 65 241.2 40.8 87.8 .464 11.2 31.7 .353 29.6 56.1 .527 19.8 24.0 .826 9.8 33.3 43.1 27.2 7.8 4.0 15.1 22.1 112.6
Miami Heat 65 243.5 39.6 84.4 .470 13.4 34.8 .383 26.3 49.6 .530 19.5 25.1 .778 8.5 36.0 44.5 26.0 7.4 4.5 14.9 20.4 112.2
Minnesota Timberwolves 64 243.1 40.4 91.6 .441 13.3 39.7 .336 27.1 52.0 .521 19.1 25.4 .753 10.5 34.3 44.8 23.8 8.7 5.7 15.3 21.4 113.3
Boston Celtics* 64 242.0 41.2 89.6 .459 12.4 34.2 .363 28.8 55.4 .519 18.3 22.8 .801 10.7 35.3 46.0 22.8 8.3 5.6 13.6 21.4 113.0
Toronto Raptors* 64 241.6 40.6 88.5 .458 13.8 37.0 .371 26.8 51.5 .521 18.1 22.6 .800 9.7 35.5 45.2 25.4 8.8 4.9 14.4 21.5 113.0
Los Angeles Lakers* 63 240.8 42.9 88.6 .485 11.2 31.4 .355 31.8 57.1 .556 17.3 23.7 .730 10.6 35.5 46.1 25.9 8.6 6.8 15.1 20.6 114.3
Denver Nuggets 65 242.3 41.8 88.9 .471 10.9 30.4 .358 31.0 58.5 .529 15.9 20.5 .775 10.8 33.5 44.3 26.5 8.1 4.6 13.7 20.0 110.4
San Antonio Spurs 63 242.8 42.0 89.5 .470 10.7 28.7 .371 31.4 60.8 .517 18.4 22.8 .809 8.8 35.6 44.4 24.5 7.2 5.5 12.3 19.2 113.2
Philadelphia 76ers 65 241.2 40.8 87.7 .465 11.4 31.6 .362 29.4 56.1 .523 16.6 22.1 .752 10.4 35.1 45.5 25.9 8.2 5.4 14.2 20.6 109.6
Indiana Pacers 65 241.5 42.2 88.4 .477 10.0 27.5 .363 32.2 60.9 .529 15.1 19.1 .787 8.8 34.0 42.8 25.9 7.2 5.1 13.1 19.6 109.3
Utah Jazz 64 240.4 40.1 84.6 .475 13.2 34.4 .383 27.0 50.2 .537 17.6 22.8 .772 8.8 36.3 45.1 22.2 5.9 4.0 14.9 20.0 111.0
Oklahoma City Thunder 64 241.6 40.3 85.1 .473 10.4 29.3 .355 29.9 55.8 .536 19.8 24.8 .797 8.1 34.6 42.7 21.9 7.6 5.0 13.5 18.8 110.8
Brooklyn Nets 64 243.1 40.0 90.0 .444 12.9 37.9 .340 27.1 52.2 .519 18.0 24.1 .744 10.8 37.6 48.5 24.0 6.5 4.6 15.5 20.7 110.8
Detroit Pistons 66 241.9 39.3 85.7 .459 12.0 32.7 .367 27.3 53.0 .515 16.6 22.4 .743 9.8 32.0 41.7 24.1 7.4 4.5 15.3 19.7 107.2
New York Knicks 66 241.9 40.0 89.3 .447 9.6 28.4 .337 30.4 61.0 .499 16.3 23.5 .694 12.0 34.5 46.5 22.1 7.6 4.7 14.3 22.2 105.8
Sacramento Kings 64 242.3 40.4 87.8 .459 12.6 34.7 .364 27.7 53.2 .522 15.6 20.3 .769 9.6 32.9 42.5 23.4 7.6 4.2 14.4 21.9 109.0
Cleveland Cavaliers 65 241.9 40.3 87.9 .458 11.2 31.8 .351 29.1 56.1 .519 15.1 19.9 .758 10.8 33.4 44.2 23.1 6.9 3.2 16.5 18.3 106.9
Chicago Bulls 65 241.2 39.6 88.6 .447 12.2 35.1 .348 27.4 53.5 .511 15.5 20.5 .755 10.5 31.4 41.9 23.2 10.0 4.1 15.5 21.8 106.8
Orlando Magic 65 240.4 39.2 88.8 .442 10.9 32.0 .341 28.3 56.8 .498 17.0 22.1 .770 10.4 34.2 44.5 24.0 8.4 5.7 12.6 17.6 106.4
Golden State Warriors 65 241.9 38.6 88.2 .438 10.4 31.3 .334 28.2 56.9 .495 18.7 23.2 .803 10.0 32.9 42.8 25.6 8.2 4.6 14.9 20.1 106.3
Charlotte Hornets 65 242.3 37.3 85.9 .434 12.1 34.3 .352 25.2 51.6 .489 16.2 21.6 .748 11.0 31.8 42.8 23.8 6.6 4.1 14.6 18.8 102.9
League Average 65 241.7 40.8 88.8 .460 12.1 33.9 .357 28.7 54.9 .523 17.7 22.9 .771 10.1 34.7 44.9 24.3 7.7 4.9 14.5 20.6 111.4
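The comment trick can be exercised offline; here is a minimal sketch with invented markup (not the real basketball-reference page), showing how a table hidden inside an HTML comment is invisible to the first parse but recoverable by re-parsing the comment's text:

```python
from bs4 import BeautifulSoup, Comment

# synthetic page: the table sits inside an HTML comment,
# mimicking how the site wraps some of its stats tables
html = """
<div id="all_team-stats-per_game">
<!--
<table id="team-stats-per_game">
  <tr><th>Team</th><th>PTS</th></tr>
  <tr><td>Dallas Mavericks</td><td>116.4</td></tr>
</table>
-->
</div>
"""

soup = BeautifulSoup(html, 'html.parser')
# the table is not a real element yet:
assert soup.find('table', id='team-stats-per_game') is None

# find the Comment node and parse its text as fresh HTML
comment = soup.find(string=lambda s: isinstance(s, Comment))
inner = BeautifulSoup(comment, 'html.parser')
rows = [[td.get_text(strip=True) for td in tr.select('td')]
        for tr in inner.select('tr:has(td)')]
print(rows)
```

The same two-step pattern (locate the Comment node, then feed its contents to a second BeautifulSoup) is what the answer above applies to the live page.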

Select rows from DataReader based on value and transfer to DataFrame

I am doing a project where I read in the historical values of a given stock; I then want to filter the days where the price jumped +5% or -5% into a different dataframe.
But I am struggling with transferring the rows.
import pandas_datareader as web
import pandas as pd
import datetime
start = datetime.datetime(2015, 9, 1)
end = datetime.datetime(2019, 11, 2)
df1 = pd.DataFrame()
df = web.DataReader("amd", 'yahoo', start, end)
df['Close'] = df['Close'].astype(float)
df['Open'] = df['Open'].astype(float)
for row in df:
    df['perchange'] = ((df['Close']-df['Open'])/df['Open'])*100
    df['perchange'] = df['perchange'].astype(float)
    if df['perchange'] >= 5.0:
        df1 += df
    if ['perchange'] <= -5.0:
        df1 += df
df.to_csv('amd_volume_price_history.csv')
df1.to_csv('amd_5_to_5.csv')
You can do this to create a new dataframe with the rows where the percentage change is greater than 5% in absolute value. As you can see, Series.between is used to perform boolean indexing:
not_significant=((df['Close']-df['Open'])/df['Open']).between(-0.05,0.05)
df_filtered=df[~not_significant]
print(df_filtered)
Output
High Low Open Close Volume Adj Close
Date
2015-09-11 2.140000 1.810000 1.880000 2.010000 31010300 2.010000
2015-09-14 2.000000 1.810000 2.000000 1.820000 16458500 1.820000
2015-10-19 2.010000 1.910000 1.910000 2.010000 10670800 2.010000
2015-10-23 2.210000 2.100000 2.100000 2.210000 9564200 2.210000
2015-11-03 2.290000 2.160000 2.160000 2.280000 8705800 2.280000
... ... ... ... ... ... ...
2019-06-06 31.980000 29.840000 29.870001 31.820000 131267800 31.820000
2019-07-31 32.299999 30.299999 32.080002 30.450001 119190000 30.450001
2019-08-08 34.270000 31.480000 31.530001 33.919998 167278800 33.919998
2019-08-12 34.650002 32.080002 34.160000 32.430000 106936000 32.430000
2019-08-23 31.830000 29.400000 31.299999 29.540001 83681100 29.540001
[123 rows x 6 columns]
If you really need the perchange column, you can create it by changing the code:
df['Perchange']=(df['Close']-df['Open'])/df['Open']*100
not_significant=(df['Perchange']).between(-5,5)
df_filtered=df[~not_significant]
print(df_filtered)
You can also use DataFrame.pct_change:
df['Perchange']=df[['Open','Close']].pct_change(axis=1).Close*100
Output
High Low Open Close Volume Adj Close \
Date
2015-09-11 2.140000 1.810000 1.880000 2.010000 31010300 2.010000
2015-09-14 2.000000 1.810000 2.000000 1.820000 16458500 1.820000
2015-10-19 2.010000 1.910000 1.910000 2.010000 10670800 2.010000
2015-10-23 2.210000 2.100000 2.100000 2.210000 9564200 2.210000
2015-11-03 2.290000 2.160000 2.160000 2.280000 8705800 2.280000
... ... ... ... ... ... ...
2019-06-06 31.980000 29.840000 29.870001 31.820000 131267800 31.820000
2019-07-31 32.299999 30.299999 32.080002 30.450001 119190000 30.450001
2019-08-08 34.270000 31.480000 31.530001 33.919998 167278800 33.919998
2019-08-12 34.650002 32.080002 34.160000 32.430000 106936000 32.430000
2019-08-23 31.830000 29.400000 31.299999 29.540001 83681100 29.540001
Perchange
Date
2015-09-11 6.914893
2015-09-14 -8.999997
2015-10-19 5.235603
2015-10-23 5.238102
2015-11-03 5.555550
... ...
2019-06-06 6.528285
2019-07-31 -5.081050
2019-08-08 7.580074
2019-08-12 -5.064401
2019-08-23 -5.622998
[123 rows x 7 columns]
Your code would then look like this:
#Libraries
import pandas_datareader as web
import pandas as pd
import datetime
#Getting data
start = datetime.datetime(2015, 9, 1)
end = datetime.datetime(2019, 11, 2)
df = web.DataReader("amd", 'yahoo', start, end)
#Converting to float to calculate and filter
df['Close'] = df['Close'].astype(float)
df['Open'] = df['Open'].astype(float)
#Creating Perchange column.
df['Perchange']=(df['Close']-df['Open'])/df['Open']*100
#df['Perchange']=df[['Open','Close']].pct_change(axis=1).Close*100
#Filtering
not_significant=(df['Perchange']).between(-5,5)
df_filtered=df[~not_significant]
#Saving data.
df.to_csv('amd_volume_price_history.csv')
df_filtered.to_csv('amd_5_to_5.csv')
EDIT
df['Perchange']=(df['Close']-df['Open'])/df['Open']*100
significant=~(df['Perchange']).between(-5,5)
group_by_jump=significant.cumsum()
jump_and_4=group_by_jump.groupby(group_by_jump,sort=False).cumcount().le(4)&group_by_jump.ne(0)
df_filtered=df[jump_and_4]
print(df_filtered.head(50))
High Low Open Close Volume Adj Close Perchange
Date
2015-09-11 2.14 1.81 1.88 2.01 31010300 2.01 6.914893
2015-09-14 2.00 1.81 2.00 1.82 16458500 1.82 -8.999997
2015-09-15 1.87 1.81 1.84 1.86 6524400 1.86 1.086955
2015-09-16 1.90 1.85 1.87 1.89 4928300 1.89 1.069518
2015-09-17 1.94 1.87 1.90 1.89 5831600 1.89 -0.526315
2015-09-18 1.92 1.85 1.87 1.87 11814000 1.87 0.000000
2015-10-19 2.01 1.91 1.91 2.01 10670800 2.01 5.235603
2015-10-20 2.03 1.97 2.00 2.02 5584200 2.02 0.999999
2015-10-21 2.12 2.01 2.02 2.10 14944100 2.10 3.960392
2015-10-22 2.16 2.09 2.10 2.14 8208400 2.14 1.904772
2015-10-23 2.21 2.10 2.10 2.21 9564200 2.21 5.238102
2015-10-26 2.21 2.12 2.21 2.15 6313500 2.15 -2.714929
2015-10-27 2.16 2.10 2.12 2.15 5755600 2.15 1.415104
2015-10-28 2.20 2.12 2.14 2.18 6950600 2.18 1.869157
2015-10-29 2.18 2.11 2.15 2.13 4500400 2.13 -0.930232
2015-11-03 2.29 2.16 2.16 2.28 8705800 2.28 5.555550
2015-11-04 2.30 2.18 2.27 2.20 8205300 2.20 -3.083698
2015-11-05 2.24 2.17 2.21 2.20 4302200 2.20 -0.452488
2015-11-06 2.21 2.13 2.19 2.15 8997100 2.15 -1.826482
2015-11-09 2.18 2.10 2.15 2.11 6231200 2.11 -1.860474
2015-11-18 2.15 1.98 1.99 2.12 9384700 2.12 6.532657
2015-11-19 2.16 2.09 2.10 2.14 4704300 2.14 1.904772
2015-11-20 2.25 2.13 2.14 2.22 10727100 2.22 3.738314
2015-11-23 2.24 2.18 2.22 2.22 4863200 2.22 0.000000
2015-11-24 2.40 2.17 2.20 2.34 15859700 2.34 6.363630
2015-11-25 2.40 2.31 2.36 2.38 6914800 2.38 0.847467
2015-11-27 2.38 2.32 2.37 2.33 2606600 2.33 -1.687762
2015-11-30 2.37 2.25 2.34 2.36 9924400 2.36 0.854700
2015-12-01 2.37 2.31 2.36 2.34 5646400 2.34 -0.847457
2015-12-16 2.55 2.37 2.39 2.54 19543600 2.54 6.276144
2015-12-17 2.60 2.52 2.52 2.56 11374100 2.56 1.587300
2015-12-18 2.55 2.42 2.51 2.45 17988100 2.45 -2.390436
2015-12-21 2.53 2.43 2.47 2.53 6876600 2.53 2.429147
2015-12-22 2.78 2.54 2.55 2.77 24893200 2.77 8.627452
2015-12-23 2.94 2.75 2.76 2.83 30365300 2.83 2.536229
2015-12-24 3.00 2.86 2.88 2.92 11890900 2.92 1.388888
2015-12-28 3.02 2.86 2.91 3.00 16050500 3.00 3.092780
2015-12-29 3.06 2.97 3.04 3.00 15300900 3.00 -1.315788
2016-01-06 2.71 2.47 2.66 2.51 23759400 2.51 -5.639101
2016-01-07 2.48 2.26 2.43 2.28 22203500 2.28 -6.172843
2016-01-08 2.42 2.10 2.36 2.14 31822400 2.14 -9.322025
2016-01-11 2.36 2.12 2.16 2.34 19629300 2.34 8.333325
2016-01-12 2.46 2.28 2.40 2.39 17986100 2.39 -0.416666
2016-01-13 2.45 2.21 2.40 2.25 12749700 2.25 -6.250004
2016-01-14 2.35 2.21 2.29 2.21 15666600 2.21 -3.493447
2016-01-15 2.13 1.99 2.10 2.03 21199300 2.03 -3.333330
2016-01-19 2.11 1.90 2.08 1.95 18978900 1.95 -6.249994
2016-01-20 1.95 1.75 1.81 1.80 29243600 1.80 -0.552486
2016-01-21 2.18 1.81 1.82 2.09 26387900 2.09 14.835157
2016-01-22 2.17 1.98 2.11 2.02 16245500 2.02 -4.265399
Try integrating these modifications into your code:
1) You probably don't need any loop to calculate the new column:
df['perchange'] = ((df['Close']-df['Open'])/df['Open'])*100
df['perchange'] = df['perchange'].astype(float)
2) Define an empty dataframe:
df1 = pd.DataFrame([])
3) Filter the old df with the loc method (get used to its notation, it is very useful) and append the result to the empty dataframe; this transfers the rows that satisfy the condition. Note that the second comparison must be >= 5.0, not >= -5.0:
df1 = df1.append(df.loc[(df['perchange'] <= -5.0) | (df['perchange'] >= 5.0)])
print(df1)
(DataFrame.append was removed in pandas 2.0; on modern versions use pd.concat instead.)
Hope it helps.
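The filtering logic itself can be checked without pandas_datareader, on a small invented OHLC frame (numbers made up for the sketch):

```python
import pandas as pd

# tiny synthetic price history
df = pd.DataFrame({
    'Open':  [2.00, 1.88, 2.00, 1.84],
    'Close': [2.02, 2.01, 1.82, 1.86],
})

# daily open-to-close change in percent
df['perchange'] = (df['Close'] - df['Open']) / df['Open'] * 100

# keep only the days that moved more than 5% either way
not_significant = df['perchange'].between(-5, 5)
df1 = df[~not_significant]
print(df1)
```

Only the second and third rows (about +6.9% and -9.0%) survive the filter, which is exactly the boolean-indexing behaviour the answers rely on; no row-by-row loop is needed.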

How to group daily time series data into smaller dataframes of weeks

I have a dataframe that looks like this:
open high low close weekday
time
2011-11-29 2.55 2.98 2.54 2.75 1
2011-11-30 2.75 3.09 2.73 2.97 2
2011-12-01 2.97 3.14 2.93 3.06 3
2011-12-02 3.06 3.14 3.03 3.12 4
2011-12-03 3.12 3.13 2.75 2.79 5
2011-12-04 2.79 2.90 2.61 2.83 6
2011-12-05 2.83 2.93 2.78 2.88 0
2011-12-06 2.88 3.05 2.87 3.03 1
2011-12-07 3.03 3.08 2.93 2.99 2
2011-12-08 2.99 3.01 2.88 2.98 3
2011-12-09 2.98 3.04 2.93 2.97 4
2011-12-10 2.97 3.13 2.93 3.05 5
2011-12-11 3.05 3.38 2.99 3.25 6
The weekday column refers to 0 = Monday,...6 = Sunday.
I want to make groups of smaller dataframes only containing the data for Friday, Saturday, Sunday and Monday. So one subset would look like this:
2011-12-02 3.06 3.14 3.03 3.12 4
2011-12-03 3.12 3.13 2.75 2.79 5
2011-12-04 2.79 2.90 2.61 2.83 6
2011-12-05 2.83 2.93 2.78 2.88 0
Filter before drop_duplicates:
df[df.weekday.isin([4,5,6,0])].drop_duplicates('weekday')
Out[10]:
open high low close weekday
2011-12-02 3.06 3.14 3.03 3.12 4
2011-12-03 3.12 3.13 2.75 2.79 5
2011-12-04 2.79 2.90 2.61 2.83 6
2011-12-05 2.83 2.93 2.78 2.88 0
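To get every Friday-through-Monday block as its own frame rather than just the first one, one sketch is to start a new group at each Friday and then split on that group id; shown here on data matching the question's layout (close prices re-typed for the sketch):

```python
import pandas as pd

# weekday 0=Monday ... 6=Sunday, as in the question
df = pd.DataFrame({
    'close':   [2.75, 2.97, 3.06, 3.12, 2.79, 2.83, 2.88,
                3.03, 2.99, 2.98, 2.97, 3.05, 3.25],
    'weekday': [1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6],
})

mask = df['weekday'].isin([4, 5, 6, 0])       # Fri, Sat, Sun, Mon
group_id = (df['weekday'] == 4).cumsum()      # increment at each Friday
# one sub-dataframe per Fri..Mon run (the last may be partial,
# and any Monday before the first Friday would land in group 0)
blocks = [g for _, g in df[mask].groupby(group_id[mask])]
```

The first block holds the full Friday-to-Monday window; the second is cut short because the sample data ends on a Sunday.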

python (pandas) - TypeError: must be str, not list when concatenating lists

I have this dataframe; please note the last column ("Yr_Mo_Dy") on the right:
In[38]: data.head()
Out[38]:
RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL Yr_Mo_Dy
0 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04 61-1-1
1 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83 61-1-2
2 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71 61-1-3
3 10.58 6.63 11.75 4.58 4.54 2.88 8.63 1.79 5.83 5.88 5.46 10.88 61-1-4
4 13.33 13.25 11.42 6.17 10.71 8.21 11.92 6.54 10.92 10.34 12.92 11.83 61-1-5
The type of the "Yr_Mo_Dy" column is object while the others are float64.
I simply want to change the order of the columns so that "Yr_Mo_Dy" is the first column in the dataframe.
I tried the following but I get TypeError. What's wrong?
In[39]: cols = data.columns.tolist()
In[40]: cols
Out[40]:
['RPT',
'VAL',
'ROS',
'KIL',
'SHA',
'BIR',
'DUB',
'CLA',
'MUL',
'CLO',
'BEL',
'MAL',
'Yr_Mo_Dy']
In[41]: cols = cols[-1] + cols[:-1]
TypeError Traceback (most recent call last)
<ipython-input-59-c0130d1863e8> in <module>()
----> 1 cols = cols[-1] + cols[:-1]
TypeError: must be str, not list
You need to add : to get a one-element list, because you need to concatenate two lists:
#string
print (cols[-1])
Yr_Mo_Dy
#one element list
print (cols[-1:])
['Yr_Mo_Dy']
cols = cols[-1:] + cols[:-1]
Or it is possible to add [], but it is less readable:
cols = [cols[-1]] + cols[:-1]
print (cols)
['Yr_Mo_Dy', 'RPT', 'VAL', 'ROS', 'KIL', 'SHA', 'BIR',
'DUB', 'CLA', 'MUL', 'CLO', 'BEL', 'MAL']
Option 1
Use pd.DataFrame.insert and pd.DataFrame.pop to alter the dataframe in place. This is a very generalizable solution as you can swap in any column position for popping or inserting.
c = df.columns[-1]
df.insert(0, c, df.pop(c))
df
Yr_Mo_Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
0 61-1-1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
1 61-1-2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
2 61-1-3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
3 61-1-4 10.58 6.63 11.75 4.58 4.54 2.88 8.63 1.79 5.83 5.88 5.46 10.88
4 61-1-5 13.33 13.25 11.42 6.17 10.71 8.21 11.92 6.54 10.92 10.34 12.92 11.83
Option 2
pd.DataFrame.reindex_axis and np.roll:
df.reindex_axis(np.roll(df.columns, 1), 1)
(reindex_axis has since been removed from pandas; on modern versions use df.reindex(np.roll(df.columns, 1), axis=1).)
Yr_Mo_Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
0 61-1-1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
1 61-1-2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
2 61-1-3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
3 61-1-4 10.58 6.63 11.75 4.58 4.54 2.88 8.63 1.79 5.83 5.88 5.46 10.88
4 61-1-5 13.33 13.25 11.42 6.17 10.71 8.21 11.92 6.54 10.92 10.34 12.92 11.83
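Both fixes reduce to list concatenation followed by column selection; a toy frame (two data columns invented for the sketch) makes the whole move concrete:

```python
import pandas as pd

df = pd.DataFrame({'RPT': [15.04], 'VAL': [14.96], 'Yr_Mo_Dy': ['61-1-1']})

cols = df.columns.tolist()
# cols[-1:] is a one-element LIST, so list + list concatenates;
# cols[-1] would be a plain str and raise the TypeError from the question
cols = cols[-1:] + cols[:-1]
df = df[cols]
print(df.columns.tolist())
```

Selecting `df[cols]` returns a new frame with the columns in the requested order, leaving the data untouched.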
