Python web scraping of FlashScore

I am using the following code to extract the outcome of the matches on FlashScore:
from requests_html import AsyncHTMLSession
from collections import defaultdict
import pandas as pd

url = 'https://www.flashscore.com/football/netherlands/eredivisie/results/'
asession = AsyncHTMLSession()

async def get_scores():
    r = await asession.get(url)
    await r.html.arender()
    return r

results = asession.run(get_scores)
results = results[0]

times = results.html.find("div.event__time")
home_teams = results.html.find("div.event__participant.event__participant--home")
scores = results.html.find("div.event__scores.fontBold")
away_teams = results.html.find("div.event__participant.event__participant--away")
event_part = results.html.find("div.event__part")

dict_res = defaultdict(list)
for ind in range(len(times)):
    dict_res['times'].append(times[ind].text)
    dict_res['home_teams'].append(home_teams[ind].text)
    dict_res['scores'].append(scores[ind].text)
    dict_res['away_teams'].append(away_teams[ind].text)
    dict_res['event_part'].append(event_part[ind].text)

df_res = pd.DataFrame(dict_res)
print(df_res)
This results in the following output:
times home_teams scores away_teams event_part
0 22.01. 20:00 Willem II 1 - 3 Zwolle (1 - 0)
1 17.01. 16:45 Ajax 1 - 0 Feyenoord (1 - 0)
2 17.01. 14:30 Groningen 2 - 2 Twente (0 - 2)
3 17.01. 14:30 Venlo 1 - 1 Heerenveen (0 - 0)
4 17.01. 12:15 Waalwijk 1 - 1 Willem II (1 - 0)
.. ... ... ... ... ...
101 25.10. 20:00 Den Haag 2 - 2 AZ Alkmaar (0 - 1)
102 25.10. 16:45 Waalwijk 2 - 2 Feyenoord (0 - 0)
103 25.10. 14:30 Sparta Rotterdam 1 - 1 Heracles (0 - 0)
104 25.10. 14:30 Vitesse 2 - 1 PSV (1 - 0)
105 25.10. 12:15 Sittard 1 - 3 Groningen (0 - 2)
[106 rows x 5 columns]
However, when you go to https://www.flashscore.com/football/netherlands/eredivisie/results/, there is a 'Show more matches' button at the bottom of the page. The output above contains only the first batch of matches, not the additional matches that appear if you click 'Show more matches'. Is it possible to also extract this additional information?
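One possible approach (a hedged sketch, not a confirmed solution): the extra matches are loaded by JavaScript when the button is clicked, and requests_html's arender() accepts a script argument that is evaluated in the rendered page, so the button can be clicked before the HTML is captured. The a.event__more selector below is an assumption about FlashScore's markup and should be verified in the browser's dev tools.
# Hedged sketch: click 'Show more matches' before reading the rendered HTML.
# The selector 'a.event__more' is assumed; check the live page before relying on it.
click_more = """
() => {
    const btn = document.querySelector('a.event__more');
    if (btn) { btn.click(); }
}
"""

async def get_scores():
    r = await asession.get(url)
    # run the click script, then give the page a few seconds to load the extra rows
    await r.html.arender(script=click_more, sleep=5)
    return r
If one click is not enough, rendering repeatedly with the same script, or switching to Selenium as in the answer further down, are common fallbacks.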

Related

How to create a list of N items with a budget constraint and multiple conditions in Python

I have the following df of Premier League players (ROI_top_players):
player team position cost_2223 total_points ROI
0 Mohamed Salah Liverpool FWD 13.0 259 29.77
1 Trent Alexander Liverpool DEF 8.4 206 24.52
2 Jarrod Bowen West Ham MID 8.5 204 23.56
3 Kevin De Bruyne Man City MID 12.0 190 15.70
4 Virgil van Dijk Liverpool DEF 6.5 183 14.91
..  ...  ...  ...  ...  ...  ...
151 Jamaal Lascelles Newcastle DEF 4.5 45 10.22
152 Ben Godfrey Everton GKP 4.5 45 9.57
153 Aaron Wan-Bissaka Man Utd DEF 4.5 41 8.03
154 Brandon Williams Norwich DEF 4.0 36 7.23
I want to create a list of 15 players (must be 15 - not more, not less), with the highest ROI possible, and it has to fulfill certain conditions:
Position constraints: it must have 2 GKP, 5 DEF, 5 MID, and 3 FWD
Budget constraint: I have a budget of $100, so for each player I add to the list, I must subtract the player's cost (cost_2223) from the budget.
Team constraint: It can't have more than 3 players per club.
Here's my current code:
def get_ideal_team_ROI(budget=100, star_player_limit=3, gk=2, df=5, md=5, fwd=3):
    money_team = []
    budget = budget
    positions = {'GK': gk, 'DEF': df, 'MID': md, 'FWD': fwd}
    for index, row in ROI_top_players.iterrows():
        if (budget >= row['cost_2223'] and positions[row['position']] > 0):
            money_team.append(row['player'])
            budget -= row['cost_2223']
            positions[row['position']] = positions[row['position']] - 1
    return money_team
This code has two problems:
1. It creates the list, but the list does not end up with 15 players.
2. It doesn't fulfill the team constraint (I end up with more than 3 players from the same team).
How should I tackle this? I want my code to make sure that I always have enough budget to buy 15 players and that I never have more than 3 players per team.
I do not need all possible combinations. Just ONE team with the highest possible ROI.
As OP did not provide the data, I went and scraped the first 'Fantasy Football players list' I could find. There is no ROI in that data, however there are 'Points', which we will try to maximize, so I guess OP can apply this to maximize the ROI in his data.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time as t
import pandas as pd
from pulp import *

## get some data approximating OP's data
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
webdriver_service = Service("chromedriver/chromedriver")  ## path to where you saved chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)

big_df = pd.DataFrame()
url = 'https://fantasy.premierleague.com/player-list/'
browser.get(url)
try:
    WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Accept All Cookies']"))).click()
    print('cookies accepted')
except Exception as e:
    print('no cookies for you!')

tables_divs = WebDriverWait(browser, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//table/parent::div/parent::div")))
for t in tables_divs:
    category = t.find_element(By.TAG_NAME, 'h3')
    print(category.text)
    WebDriverWait(t, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//table")))
    dfs = pd.read_html(t.get_attribute('outerHTML'))
    for df in dfs:
        df['Type'] = category.text
        big_df = pd.concat([big_df, df], axis=0, ignore_index=True)
big_df.to_json('f_footie.json')
browser.quit()

footie_df = pd.read_json('f_footie.json')
footie_df.columns = ['Player', 'Team', 'Points', 'Cost', 'Position']
footie_df['Player'] = footie_df.apply(lambda row: row.Player.replace(' ', '_').strip(), axis=1)
footie_df['Cost'] = footie_df.apply(lambda row: row.Cost.split('£')[1], axis=1)
footie_df['Cost'] = footie_df['Cost'].astype('float')
footie_df['Points'] = footie_df['Points'].astype('int')
print(footie_df)

## constraining variables
positions = footie_df.Position.unique()
clubs = footie_df.Team.unique()
budget = 100
available_roles = {
    'Goalkeepers': 2,
    'Defenders': 5,
    'Midfielders': 5,
    'Forwards': 3
}

names = [footie_df.Player[i] for i in footie_df.index]
teams = [footie_df.Team[i] for i in footie_df.index]
roles = [footie_df.Position[i] for i in footie_df.index]
costs = [footie_df.Cost[i] for i in footie_df.index]
points = [footie_df.Points[i] for i in footie_df.index]
players = [LpVariable("player_" + str(i), cat="Binary") for i in footie_df.index]

prob = LpProblem("Secret Fantasy Player Choices", LpMaximize)

## define the objective -> maximize the points
prob += lpSum(players[i] * points[i] for i in range(len(footie_df)))

## define budget constraint
prob += lpSum(players[i] * footie_df.Cost[footie_df.index[i]] for i in range(len(footie_df))) <= budget

for pos in positions:
    prob += lpSum(players[i] for i in range(len(footie_df)) if roles[i] == pos) <= available_roles[pos]

## add max 3 per team constraint
for club in clubs:
    prob += lpSum(players[i] for i in range(len(footie_df)) if teams[i] == club) <= 3

prob.solve()

df_list = []
for variable in prob.variables():
    if variable.varValue != 0:
        name = footie_df.Player[int(variable.name.split("_")[1])]
        club = footie_df.Team[int(variable.name.split("_")[1])]
        role = footie_df.Position[int(variable.name.split("_")[1])]
        points = footie_df.Points[int(variable.name.split("_")[1])]
        cost = footie_df.Cost[int(variable.name.split("_")[1])]
        df_list.append((name, club, role, points, cost))
        # print(name, club, position, points, cost)

result_df = pd.DataFrame(df_list, columns=['Name', 'Club', 'Role', 'Points', 'Cost'])
result_df.to_csv('win_at_fantasy_football.csv')
print(result_df)
This displays some control printouts, the scraped data, the long printout from the PuLP solver, and finally the result dataframe, which looks like this:
                Name         Club         Role  Points  Cost
0            Alisson    Liverpool  Goalkeepers     176   5.5
1             Lloris        Spurs  Goalkeepers     158   5.5
2              Bowen     West Ham  Midfielders     206   8.5
3               Saka      Arsenal  Midfielders     179     8
4           Maddison    Leicester  Midfielders     181     8
5        Ward-Prowse  Southampton  Midfielders     159   6.5
6          Gallagher      Chelsea  Midfielders     140     6
7            Antonio     West Ham     Forwards     140   7.5
8              Toney    Brentford     Forwards     139     7
9             Mbeumo    Brentford     Forwards     119     6
10  Alexander-Arnold    Liverpool    Defenders     208   7.5
11         Robertson    Liverpool    Defenders     186     7
12           Cancelo     Man City    Defenders     201     7
13           Gabriel      Arsenal    Defenders     146     5
14              Cash  Aston Villa    Defenders     147     5
For PuLP documentation, visit https://coin-or.github.io/pulp/
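A small follow-up not in the original answer: after prob.solve() it is worth checking the solver status before trusting the selection.
from pulp import LpStatus

# 'Optimal' means every constraint was satisfied; 'Infeasible' means the constraints conflict
print(LpStatus[prob.status])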

How to speed up this Python script with multiprocessing

I have a script that gets data from a dataframe, uses that data to make requests to a website, uses the fuzzywuzzy module to find the exact href, and then runs a function to scrape odds. I would like to speed up this script with the multiprocessing module. Is that possible?
Date HomeTeam AwayTeam
0 Monday 6 December 2021 20:00 Everton Arsenal
1 Monday 6 December 2021 17:30 Empoli Udinese
2 Monday 6 December 2021 19:45 Cagliari Torino
3 Monday 6 December 2021 20:00 Getafe Athletic Bilbao
4 Monday 6 December 2021 15:00 Real Zaragoza Eibar
5 Monday 6 December 2021 17:15 Cartagena Tenerife
6 Monday 6 December 2021 20:00 Girona Leganes
7 Monday 6 December 2021 19:45 Niort Toulouse
8 Monday 6 December 2021 19:00 Jong Ajax FC Emmen
9 Monday 6 December 2021 19:00 Jong AZ Excelsior
Script
df = pd.read_excel(path)
dates = df.Date
hometeams = df.HomeTeam
awayteams = df.AwayTeam
matches_odds = list()
for i, (a, b, c) in enumerate(zip(dates, hometeams, awayteams)):
    try:
        r = requests.get(f'https://www.betexplorer.com/results/soccer/?year={a.split(" ")[3]}&month={monthToNum(a.split(" ")[2])}&day={a.split(" ")[1]}')
    except requests.exceptions.ConnectionError:
        sleep(10)
        r = requests.get(f'https://www.betexplorer.com/results/soccer/?year={a.split(" ")[3]}&month={monthToNum(a.split(" ")[2])}&day={a.split(" ")[1]}')
    soup = BeautifulSoup(r.text, 'html.parser')
    f = soup.find_all('td', class_="table-main__tt")
    for tag in f:
        match = fuzz.ratio(f'{b} - {c}', tag.find('a').text)
        hour = a.split(" ")[4]
        if hour.split(':')[0] == '23':
            act_hour = '00' + ':' + hour.split(':')[1]
        else:
            act_hour = str(int(hour.split(':')[0]) + 1) + ':' + hour.split(':')[1]
        if match > 70 and act_hour == tag.find('span').text:
            href_id = tag.find('a')['href']
            table = get_odds(href_id)
            matches_odds.append(table)
    print(i, ' of ', len(dates))
PS: the monthToNum function just converts the month name to its number.
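monthToNum itself is not shown in the post; a minimal sketch of what it might look like, assuming it simply maps English month names to month numbers for the URL:
# Hypothetical helper matching the description above; the real implementation may differ.
def monthToNum(month_name):
    months = ['January', 'February', 'March', 'April', 'May', 'June',
              'July', 'August', 'September', 'October', 'November', 'December']
    return months.index(month_name) + 1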
First, turn your loop body into a function that takes i, a, b and c as input and returns what it scraped. Then create a multiprocessing.Pool, submit this function with the proper arguments (i, a, b, c) to the pool, and collect the results that pool.map returns.
import multiprocessing

df = pd.read_excel(path)
dates = df.Date
hometeams = df.HomeTeam
awayteams = df.AwayTeam

def fetch(data):
    i, (a, b, c) = data
    # collect results locally and return them: worker processes do not share
    # a global list with the parent process
    matches_odds = []
    try:
        r = requests.get(f'https://www.betexplorer.com/results/soccer/?year={a.split(" ")[3]}&month={monthToNum(a.split(" ")[2])}&day={a.split(" ")[1]}')
    except requests.exceptions.ConnectionError:
        sleep(10)
        r = requests.get(f'https://www.betexplorer.com/results/soccer/?year={a.split(" ")[3]}&month={monthToNum(a.split(" ")[2])}&day={a.split(" ")[1]}')
    soup = BeautifulSoup(r.text, 'html.parser')
    f = soup.find_all('td', class_="table-main__tt")
    for tag in f:
        match = fuzz.ratio(f'{b} - {c}', tag.find('a').text)
        hour = a.split(" ")[4]
        if hour.split(':')[0] == '23':
            act_hour = '00' + ':' + hour.split(':')[1]
        else:
            act_hour = str(int(hour.split(':')[0]) + 1) + ':' + hour.split(':')[1]
        if match > 70 and act_hour == tag.find('span').text:
            href_id = tag.find('a')['href']
            table = get_odds(href_id)
            matches_odds.append(table)
    print(i, ' of ', len(dates))
    return matches_odds

if __name__ == '__main__':
    num_processes = 20
    with multiprocessing.Pool(num_processes) as pool:
        results = pool.map(fetch, enumerate(zip(dates, hometeams, awayteams)))
    # flatten the per-row lists returned by the workers
    matches_odds = [table for sub in results for table in sub]
Besides, multiprocessing is not the only way to improve the speed: asynchronous programming can be used as well, and is probably a better fit for this I/O-bound scenario, although multiprocessing does the job too. If you read the Python multiprocessing documentation carefully, the details will become clear.
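As a hedged illustration of the asynchronous alternative mentioned above (not part of the original answer), the request side could look roughly like this with asyncio and aiohttp, assuming aiohttp is installed; the parsing and odds-scraping logic would stay the same:
import asyncio
import aiohttp

async def fetch_day(session, year, month, day):
    # one request per fixture date; parsing of the returned HTML is omitted here
    url = f'https://www.betexplorer.com/results/soccer/?year={year}&month={month}&day={day}'
    async with session.get(url) as resp:
        return await resp.text()

async def main(day_tuples):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_day(session, y, m, d) for (y, m, d) in day_tuples]
        return await asyncio.gather(*tasks)

# html_pages = asyncio.run(main([(2021, 12, 6), (2021, 12, 7)]))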

Appending a dictionary to a dataframe as a new column

I'm very new to Python and was hoping to get some help. I am following an online example where the author creates a dictionary, adds some data to it and then appends this to his original dataframe.
When I follow the code the data in the dictionary doesn't get appended to the dataframe and as such I can't continue with the example.
The authors code is as follows:
from collections import defaultdict

won_last = defaultdict(int)

for index, row in data.iterrows():
    home_team = row['HomeTeam']
    visitor_team = row['AwayTeam']
    row['HomeLastWin'] = won_last[home_team]
    row['VisitorLastWin'] = won_last[visitor_team]
    results.ix[index] = row
    won_last[home_team] = row['HomeWin']
    won_last[visitor_team] = not row['HomeWin']
When I run this code I get the error message (note that the name of the dataframe is different but apart from that nothing has changed)
AttributeError Traceback (most recent call last)
<ipython-input-46-d31706a5f745> in <module>
4 row['HomeLastWin'] = won_last[home_team]
5 row['VisitorLastWin'] = won_last[visitor_team]
----> 6 data.ix[index]=row
7 won_last[home_team] = row['HomeWin']
8 won_last[visitor_team] = not row['HomeWin']
~\anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
5137 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5138 return self[name]
-> 5139 return object.__getattribute__(self, name)
5140
5141 def __setattr__(self, name: str, value) -> None:
AttributeError: 'DataFrame' object has no attribute 'ix'
If I change the line data.ix[index]=row to data.loc[index]=row, the code runs OK but nothing happens to my dataframe.
Below is an example of the dataset I am working with
Div Date Time HomeTeam AwayTeam FTHG FTAG FTR HomeWin
E0 12/09/2020 12:30 Fulham Arsenal 0 3 A FALSE
E0 12/09/2020 15:00 Crystal Palace Southampton 1 0 H FALSE
E0 12/09/2020 17:30 Liverpool Leeds 4 3 H TRUE
E0 12/09/2020 20:00 West Ham Newcastle 0 2 A TRUE
E0 13/09/2020 14:00 West Brom Leicester 0 3 A FALSE
and below is the dataset of the example I am working through with the columns added
Date Visitor Team VisitorPts Home Team HomePts HomeWin
20 01/11/2013 Milwaukee 105 Boston 98 FALSE
21 01/11/2013 Miami Heat 100 Brooklyn 101 TRUE
22 01/11/2013 Clevland 84 Charlotte 90 TRUE
23 01/11/2013 Portland 113 Denver 98 FALSE
24 01/11/2013 Dallas 91 Houston 113 TRUE
HomeLastWin VisitorLastWin
FALSE FALSE
FALSE FALSE
FALSE TRUE
FALSE FALSE
TRUE TRUE
Thanks
Jon
Could you please try this? (The data used is saved as dataset_stack.csv.)
from collections import defaultdict
won_last = defaultdict(int)

# Load the Pandas library with alias 'pd'
import pandas as pd

# Read data from file 'dataset_stack.csv'
# (in the same directory that your python process is based)
# Control delimiters, rows, column names with read_csv (see later)
data = pd.read_csv("dataset_stack.csv")
results = pd.DataFrame(data=data)
# print(results)
# Preview the first 5 lines of the loaded data
# data.head()

for index, row in data.iterrows():
    home_team = row['HomeTeam']
    visitor_team = row['VisitorTeam']
    row['HomeLastWin'] = won_last[home_team]
    row['VisitorLastWin'] = won_last[visitor_team]
    # results.ix[index] = row
    # results.loc[index] = row
    # add new column directly to dataframe instead of adding it to row & appending to dataframe
    results['HomeLastWin'] = won_last[home_team]
    results['VisitorLastWin'] = won_last[visitor_team]
    results.append(row, ignore_index=True)
    won_last[home_team] = row['HomeWin']
    won_last[visitor_team] = not row['HomeWin']

print(results)
Output:
Date VisitorTeam VisitorPts HomeTeam HomePts HomeWin \
0 1/11/2013 Milwaukee 105 Boston 98 False
1 1/11/2013 Miami Heat 100 Brooklyn 101 True
2 1/11/2013 Clevland 84 Charlotte 90 True
3 1/11/2013 Portland 113 Denver 98 False
4 1/11/2013 Dallas 91 Houston 113 True
HomeLastWin VisitorLastWin
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
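A note not in the original answer: because the column assignments above happen with a single scalar on every iteration, HomeLastWin and VisitorLastWin end up constant (all zeros in the output). A common alternative, sketched here under the same column names, is to accumulate the per-row values in plain lists during the loop and assign them as columns afterwards:
from collections import defaultdict
import pandas as pd

won_last = defaultdict(int)
home_last_win, visitor_last_win = [], []

for index, row in results.iterrows():
    home_team = row['HomeTeam']
    visitor_team = row['VisitorTeam']
    # record whether each side won its previous match before updating the tracker
    home_last_win.append(won_last[home_team])
    visitor_last_win.append(won_last[visitor_team])
    won_last[home_team] = row['HomeWin']
    won_last[visitor_team] = not row['HomeWin']

# assign the collected values as whole columns after the loop
results['HomeLastWin'] = home_last_win
results['VisitorLastWin'] = visitor_last_win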

My year list doesn't work with BeautifulSoup. Why?

I'm a newbie learning BeautifulSoup. Could someone have a look at the following code? I'm trying to scrape data from a website, without any success. I'd like to create a dataframe with the total of player arrivals per year and a column with the players' average age.
The resulting dataframe just repeats the same values for every year (screenshot of the dataframe error omitted).
my code:
import pandas as pd
import requests
from bs4 import BeautifulSoup

anos_list = list(range(2005, 2018))
anos_lista = []
valor_contratos_lista = []
idade_média_lista = []

for ano_lista in anos_list:
    url = 'https://www.transfermarkt.com/flamengo-rio-de-janeiro/transfers/verein/614/saison_id/' + str(anos_list) + ''
    page = requests.get(url, headers={'User-Agent': 'Custom5'})
    soup = BeautifulSoup(page.text, 'html.parser')
    tag_list = soup.tfoot.find_all('td')
    valor = (tag_list[0].string)
    idade = (tag_list[1].string)
    ano = ano_lista
    valor_contratos_lista.append(valor)
    idade_media_lista.append(idade)
    anos_lista.append(ano)

flamengo_df = pd.DataFrame({'Ano': ano_lista,
                            'Despesa com contratações': valor_contratos_lista,
                            'Média de idade': idade_média_lista
                            })
flamengo_df.to_csv('flamengo.csv', encoding='utf-8')
Here's my approach:
Using Beautiful Soup + Regex:
import requests
from bs4 import BeautifulSoup
import re
import numpy as np

# Set min and max years as variables
min_year = 2005
max_year = 2019
year_range = list(range(min_year, max_year + 1))
base_url = 'https://www.transfermarkt.com/flamengo-rio-de-janeiro/transfers/verein/614/saison_id/'

# Begin iterating
records = []
for year in year_range:
    url = base_url + str(year)
    # get the page
    page = requests.get(url, headers={'User-Agent': 'Custom5'})
    soup = BeautifulSoup(page.text, 'html.parser')
    # I used the class of "responsive table"
    tables = soup.find_all('div', {'class': 'responsive-table'})
    rows = tables[0].find_all('tr')
    cells = [row.find_all('td', {'class': 'zentriert'}) for row in rows]
    # get variable names:
    variables = [x.text for x in rows[0].find_all('th')]
    variables_values = {x: [] for x in variables}
    # get values
    for row in rows:
        values = [' '.join(x.text.split()) for x in row.find_all('td')]
        values = [x for x in values if x != '']
        if len(variables) < len(values):
            values.pop(4)
            values.pop(2)
        for k, v in zip(variables_values.keys(), values):
            variables_values[k].append(v)
    num_pattern = re.compile('[0-9,]+')
    to_float = lambda x: float(x) if x != '' else np.NAN
    get_nums = lambda x: to_float(''.join(num_pattern.findall(x)).replace(',', '.'))
    # Add values to an individual record
    rec = {
        'Url': url,
        'Year': year,
        'Total Transfers': len(variables_values['Player']),
        'Avg Age': np.mean([int(x) for x in variables_values['Age']]),
        'Avg Cost': np.nanmean([get_nums(x) for x in variables_values['Fee'] if ('loan' not in x)]),
        'Total Cost': np.nansum([get_nums(x) for x in variables_values['Fee'] if ('loan' not in x)]),
    }
    # Store record
    records.append(rec)
Thereafter, initialize the dataframe. Of note, some of the numbers represent millions and would need to be adjusted for that (a hedged sketch of such an adjustment follows the output below).
import pandas as pd
# Drop the URL
df = pd.DataFrame(records, columns=['Year','Total Transfers','Avg Age','Avg Cost','Total Cost'])
Year Total Transfers Avg Age Avg Cost Total Cost
0 2005 26 22.038462 2.000000 2.00
1 2006 32 23.906250 240.660000 1203.30
2 2007 37 22.837838 462.750000 1851.00
3 2008 41 22.926829 217.750000 871.00
4 2009 31 23.419355 175.000000 350.00
5 2010 46 23.239130 225.763333 1354.58
6 2011 47 23.042553 340.600000 1703.00
7 2012 45 24.133333 345.820000 1037.46
8 2013 36 24.166667 207.166667 621.50
9 2014 37 24.189189 111.700000 335.10
10 2015 49 23.530612 413.312000 2066.56
11 2016 41 23.341463 241.500000 966.00
12 2017 31 24.000000 101.433333 304.30
13 2018 18 25.388889 123.055000 738.33
14 2019 10 25.300000 NaN 0.00
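Not part of the original answer; following the note above that some figures represent millions, here is a hedged sketch of how the raw fee strings could be normalised into millions, assuming Transfermarkt formats them like '€2.50m' and '€500Th.' (verify against the live page):
import numpy as np

def fee_to_millions(fee: str) -> float:
    # assumed formats: '€2.50m' (millions), '€500Th.' (thousands), '-' or '?' for unknown
    fee = fee.strip()
    if fee in ('', '-', '?'):
        return np.nan
    number = float(''.join(ch for ch in fee if ch.isdigit() or ch == '.'))
    return number / 1000 if 'Th' in fee else number

print(fee_to_millions('€2.50m'))   # 2.5
print(fee_to_millions('€500Th.'))  # 0.5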

Creating a dataframe where one of the arrays has a different length

I am learning to scrape data from websites with Python, extracting weather information about San Francisco from this page. I get stuck when combining the data into a Pandas DataFrame. Is it possible to create a dataframe where the underlying arrays have different lengths?
I have already tried two approaches based on answers here, but they are not exactly what I am looking for: both shift the values of the temps column up.
1st way: https://stackoverflow.com/a/40442094/10179259
2nd way: https://stackoverflow.com/a/19736406/10179259
import requests
from bs4 import BeautifulSoup
import pandas as pd

page = requests.get("http://forecast.weather.gov/MapClick.php?lat=37.7772&lon=-122.4168")
soup = BeautifulSoup(page.content, 'html.parser')
seven_day = soup.find(id="seven-day-forecast")
forecast_items = seven_day.find_all(class_="tombstone-container")

periods = [pt.get_text() for pt in seven_day.select('.tombstone-container .period-name')]
short_descs = [sd.get_text() for sd in seven_day.select('.tombstone-container .short-desc')]
temps = [t.get_text() for t in seven_day.select('.tombstone-container .temp')]
descs = [d['alt'] for d in seven_day.select('.tombstone-container img')]
# print(len(periods), len(short_descs), len(temps), len(descs))

weather = pd.DataFrame({
    "period": periods,          # length is 9
    "short_desc": short_descs,  # length is 9
    "temp": temps,              # problem here, length is 8
    # "desc": descs             # length is 9
})
print(weather)
I expect the first row of the temp column to be NaN. Thank you.
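An aside not in the original post: pandas can combine Series of unequal length directly (it aligns them on the index and fills the gap with NaN), but the NaN then lands at the end of the shorter column rather than in the first row, which is why the per-item approach in the answer below is needed:
import pandas as pd

# Series of different lengths are aligned on their index; missing positions become NaN,
# but the gap appears at the end of the shorter column, not where the value is actually missing.
weather = pd.DataFrame({
    "period": pd.Series(periods),          # length 9
    "short_desc": pd.Series(short_descs),  # length 9
    "temp": pd.Series(temps),              # length 8 -> row 8 becomes NaN
})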
You can loop over each forecast_items value and use iter with next to select the first matching value; if none exists, NaN is assigned to the dictionary instead:
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup

page = requests.get("http://forecast.weather.gov/MapClick.php?lat=37.7772&lon=-122.4168")
soup = BeautifulSoup(page.content, 'html.parser')
seven_day = soup.find(id="seven-day-forecast")
forecast_items = seven_day.find_all(class_="tombstone-container")

out = []
for x in forecast_items:
    periods = next(iter([t.get_text() for t in x.select('.period-name')]), np.nan)
    short_descs = next(iter([t.get_text() for t in x.select('.short-desc')]), np.nan)
    temps = next(iter([t.get_text() for t in x.select('.temp')]), np.nan)
    descs = next(iter([d['alt'] for d in x.select('img')]), np.nan)
    out.append({'period': periods, 'short_desc': short_descs, 'temp': temps, 'descs': descs})

weather = pd.DataFrame(out)
print(weather)
descs period \
0 NOW until4:00pm Sat
1 Today: Showers, with thunderstorms also possib... Today
2 Tonight: Showers likely and possibly a thunder... Tonight
3 Sunday: A chance of showers before 11am, then ... Sunday
4 Sunday Night: Rain before 11pm, then a chance ... SundayNight
5 Monday: A 40 percent chance of showers. Cloud... Monday
6 Monday Night: A 30 percent chance of showers. ... MondayNight
7 Tuesday: A 50 percent chance of rain. Cloudy,... Tuesday
8 Tuesday Night: Rain. Cloudy, with a low aroun... TuesdayNight
short_desc temp
0 Wind Advisory NaN
1 Showers andBreezy High: 56 °F
2 ShowersLikely Low: 49 °F
3 Heavy Rainand Windy High: 56 °F
4 Heavy Rainand Breezythen ChanceShowers Low: 52 °F
5 ChanceShowers High: 58 °F
6 ChanceShowers Low: 53 °F
7 Chance Rain High: 59 °F
8 Rain Low: 53 °F
