I have a ranking of cities across the world in a variable called rank_2000 that looks like this:
Seoul
Tokyo
Paris
New_York_Greater
Shizuoka
Chicago
Minneapolis
Boston
Austin
Munich
Salt_Lake
Greater_Sydney
Houston
Dallas
London
San_Francisco_Greater
Berlin
Seattle
Toronto
Stockholm
Atlanta
Indianapolis
Fukuoka
San_Diego
Phoenix
Frankfurt_am_Main
Stuttgart
Grenoble
Albany
Singapore
Washington_Greater
Helsinki
Nuremberg
Detroit_Greater
TelAviv
Zurich
Hamburg
Pittsburgh
Philadelphia_Greater
Taipei
Los_Angeles_Greater
Miami_Greater
MannheimLudwigshafen
Brussels
Milan
Montreal
Dublin
Sacramento
Ottawa
Vancouver
Malmo
Karlsruhe
Columbus
Dusseldorf
Shenzen
Copenhagen
Milwaukee
Marseille
Greater_Melbourne
Toulouse
Beijing
Dresden
Manchester
Lyon
Vienna
Shanghai
Guangzhou
San_Antonio
Utrecht
New_Delhi
Basel
Oslo
Rome
Barcelona
Madrid
Geneva
Hong_Kong
Valencia
Edinburgh
Amsterdam
Taichung
The_Hague
Bucharest
Muenster
Greater_Adelaide
Chengdu
Greater_Brisbane
Budapest
Manila
Bologna
Quebec
Dubai
Monterrey
Wellington
Shenyang
Tunis
Johannesburg
Auckland
Hangzhou
Athens
Wuhan
Bangalore
Chennai
Istanbul
Cape_Town
Lima
Xian
Bangkok
Penang
Luxembourg
Buenos_Aires
Warsaw
Greater_Perth
Kuala_Lumpur
Santiago
Lisbon
Dalian
Zhengzhou
Prague
Changsha
Chongqing
Ankara
Fuzhou
Jinan
Xiamen
Sao_Paulo
Kunming
Jakarta
Cairo
Curitiba
Riyadh
Rio_de_Janeiro
Mexico_City
Hefei
Almaty
Beirut
Belgrade
Belo_Horizonte
Bogota_DC
Bratislava
Dhaka
Durban
Hanoi
Ho_Chi_Minh_City
Kampala
Karachi
Kuwait_City
Manama
Montevideo
Panama_City
Quito
San_Juan
What I would like to do is a map of the world where those cities are colored according to their position in the ranking above. I am open to other forms of representation as well (such as bubbles whose size grows with the city's position in the ranking or, if necessary, representing only a sample of cities taken from the top, the middle and the bottom of the ranking).
Thank you,
Federico
Your question has two parts: finding the location of each city, and then drawing the cities on the map. Assuming you have the latitude and longitude of each city, here's how you'd tackle the latter part.
I like Folium (https://pypi.org/project/folium/) for drawing maps. Here's an example of how you might draw a circle for each city, with its position in the list used to determine the size of that circle.
import folium

cities = [
    {'name': 'Seoul', 'coords': [37.5639715, 126.9040468]},
    {'name': 'Tokyo', 'coords': [35.5090627, 139.2094007]},
    {'name': 'Paris', 'coords': [48.8588787, 2.2035149]},
    {'name': 'New York', 'coords': [40.6976637, -74.1197631]},
    # etc. etc.
]

m = folium.Map(zoom_start=15)

# Draw one circle per city; cities later in the list get larger circles.
for counter, city in enumerate(cities):
    circle_size = 5 + counter
    folium.CircleMarker(
        location=city['coords'],
        radius=circle_size,
        popup=city['name'],
        color="crimson",
        fill=True,
        fill_color="crimson",
    ).add_to(m)

m.save('map.html')
Output: an interactive map saved to map.html, with one circle per city (screenshot omitted).
You may need to adjust the circle_size calculation a little to work with the number of cities you want to include.
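If you would rather encode the ranking as colour than as circle size (one of the options suggested in the question), a small helper can interpolate a hex colour from a city's position in the list. This is a minimal sketch, assuming a simple green-to-red gradient is acceptable; the result can be passed as the color= and fill_color= arguments of folium.CircleMarker in place of "crimson":

```python
def rank_to_color(position, total):
    """Interpolate a hex colour from green (rank 0) to red (last rank)."""
    fraction = position / max(total - 1, 1)  # 0.0 for the first city, 1.0 for the last
    red = int(255 * fraction)
    green = int(255 * (1 - fraction))
    return f"#{red:02x}{green:02x}00"

# First, middle, and last of the 152 ranked cities:
print(rank_to_color(0, 152))
print(rank_to_color(75, 152))
print(rank_to_color(151, 152))
```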
I have a dataframe A that looks like this:
ID  SOME_CODE  TITLE
1   024df3     Large garden in New York, New York
2   0ffw34     Small house in dark Detroit, Michigan
3   93na09     Red carpet in beautiful Miami
4   8339ct     Skyscraper in Los Angeles, California
5   84p3k9     Big shop in northern Boston, Massachusetts
I have also another dataframe B:
City         Shortcut
Los Angeles  LA
New York     NYC
Miami        MI
Boston       BO
Detroit      DTW
I would like to add a new "SHORTCUT" column to dataframe A, based on whether the "TITLE" column in A contains a city from the "City" column in dataframe B.
I have tried to use dataframe B as a dictionary and map it onto dataframe A, but I can't get around the fact that the city names are in the middle of the sentence.
The desired output is:
ID  SOME_CODE  TITLE                                        SHORTCUT
1   024df3     Large garden in New York, New York           NYC
2   0ffw34     Small house in dark Detroit, Michigan        DTW
3   93na09     Red carpet in beautiful Miami, Florida       MI
4   8339ct     Skyscraper in Los Angeles, California        LA
5   84p3k9     Big shop in northern Boston, Massachusetts   BO
I will appreciate your help.
You can leverage the pandas.apply function. See if this helps:
import numpy as np
import pandas as pd

data1 = {'id': range(5),
         'some_code': ["024df3", "0ffw34", "93na09", "8339ct", "84p3k9"],
         'title': ["Large garden in New York, New York",
                   "Small house in dark Detroit, Michigan",
                   "Red carpet in beautiful Miami",
                   "Skyscraper in Los Angeles, California",
                   "Big shop in northern Boston, Massachusetts"]}
df1 = pd.DataFrame(data=data1)

data2 = {'city': ["Los Angeles", "New York", "Miami", "Boston", "Detroit"],
         'shortcut': ["LA", "NYC", "MI", "BO", "DTW"]}
df2 = pd.DataFrame(data=data2)

# Creating a list of cities.
cities = list(df2['city'].values)

def matcher(x):
    # Return the shortcut of the first city found in the title, else NaN.
    for index, city in enumerate(cities):
        if x.lower().find(city.lower()) != -1:
            return df2.iloc[index]["shortcut"]
    return np.nan

df1['shortcut'] = df1['title'].apply(matcher)
print(df1.head())
This generates the following output:
id some_code title shortcut
0 0 024df3 Large garden in New York, New York NYC
1 1 0ffw34 Small house in dark Detroit, Michigan DTW
2 2 93na09 Red carpet in beautiful Miami MI
3 3 8339ct Skyscraper in Los Angeles, California LA
4 4 84p3k9 Big shop in northern Boston, Massachusetts BO
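As an alternative to apply, the same lookup can be vectorized with pandas' str.extract, building one regex alternation from the city names and mapping the matched city back to its shortcut. This is a sketch with the question's data, assuming case-sensitive matching is acceptable and no city name is a substring of another:

```python
import re
import pandas as pd

df1 = pd.DataFrame({'title': [
    "Large garden in New York, New York",
    "Small house in dark Detroit, Michigan",
]})
df2 = pd.DataFrame({'city': ["Los Angeles", "New York", "Miami", "Boston", "Detroit"],
                    'shortcut': ["LA", "NYC", "MI", "BO", "DTW"]})

# One alternation pattern, escaping any regex metacharacters in the names.
pattern = '(' + '|'.join(re.escape(c) for c in df2['city']) + ')'

# Extract the first matching city from each title, then map it to its shortcut.
lookup = dict(zip(df2['city'], df2['shortcut']))
df1['shortcut'] = df1['title'].str.extract(pattern)[0].map(lookup)
print(df1)
```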
Good morning, my df (df_part3) is below:
Postal Code Borough Neighbourhood Latitude Longitude
0 M5A Downtown Toronto Regent Park, Harbourfront 43.654260 -79.360636
1 M7A Downtown Toronto Queen's Park, Ontario Provincial Government 43.662301 -79.389494
2 M5B Downtown Toronto Garden District, Ryerson 43.657162 -79.378937
3 M5C Downtown Toronto St. James Town 43.651494 -79.375418
4 M4E East Toronto The Beaches 43.676357 -79.293031
... ... ... ... ... ...
34 M5W Downtown Toronto Stn A PO Boxes 43.646435 -79.374846
35 M4X Downtown Toronto St. James Town, Cabbagetown 43.667967 -79.367675
36 M5X Downtown Toronto First Canadian Place, Underground city 43.648429 -79.382280
37 M4Y Downtown Toronto Church and Wellesley 43.665860 -79.383160
38 M7Y East Toronto Business reply mail Processing Centre, South C... 43.662744 -79.321558
And My Code is Here:
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=11)

# add markers to map
for lat, lng, label in zip(df_part3['Latitude'], df_part3['Longitude'], df_part3['Neighbourhood']):
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_toronto)

map_toronto
But when I run it I get:
TypeError: 'DataFrame' object is not callable
----> 5 for lat, lng, label in zip(df_part3['Latitude'], df_part3['Longitude'], df_part3['Neighbourhood']):
Does anyone know how to help me?
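The traceback points at the zip(...) line, and "'DataFrame' object is not callable" usually means a DataFrame is being called with parentheses instead of indexed with brackets, e.g. because a name used on that line (such as zip) was reassigned to a DataFrame earlier in the notebook. This is an assumption, since the full notebook isn't shown; a minimal reproduction of the error and the bracket/parenthesis distinction:

```python
import pandas as pd

df_part3 = pd.DataFrame({'Latitude': [43.65], 'Longitude': [-79.36]})

# Calling the DataFrame like a function reproduces the TypeError:
try:
    df_part3('Latitude')        # wrong: parentheses call the object
except TypeError as exc:
    print(exc)                  # 'DataFrame' object is not callable

values = df_part3['Latitude']   # right: brackets index the column
```

In a notebook, restarting the kernel and re-running the cells in order often clears this kind of accidental shadowing.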
For each letter in the alphabet, the code should go to website.com/a and grab a table. Then it should check for a "Next" button, grab its link, make soup from it, grab the next table, and repeat until there is no valid next link; then it should move to website.com/b (the next letter in the alphabet) and repeat. But I can only get as far as two pages for each letter: the first for loop grabs page 1 and the second grabs page 2 for each letter. I know I could write a loop for as many pages as needed, but that is not scalable. How can I fix this?
from nfl_fun import make_soup
import urllib.request
import os
from string import ascii_lowercase
import requests

letter = ascii_lowercase
link = "https://www.nfl.com"

for letter in ascii_lowercase:
    soup = make_soup(f"https://www.nfl.com/players/active/{letter}")
    for tbody in soup.findAll("tbody"):
        for tr in tbody.findAll("a"):
            if tr.has_attr("href"):
                print(tr.attrs["href"])

for letter in ascii_lowercase:
    soup = make_soup(f"https://www.nfl.com/players/active/{letter}")
    for page in soup.footer.findAll("a", {"nfl-o-table-pagination__next"}):
        pagelink = ""
        footer = ""
        footer = page.attrs["href"]
        pagelink = f"{link}{footer}"
        print(footer)
        getpage = requests.get(pagelink)
        if getpage.status_code == 200:
            next_soup = make_soup(pagelink)
            for next_page in next_soup.footer.findAll("a", {"nfl-o-table-pagination__next"}):
                print(getpage)
            for tbody in next_soup.findAll("tbody"):
                for tr in tbody.findAll("a"):
                    if tr.has_attr("href"):
                        print(tr.attrs["href"])
        soup = next_soup
Thank You again,
There is an element in there that says when the "Next" button is inactive, which tells you that you are on the last page. So you can use a while loop that keeps going to the next page until it reaches the last page (i.e. "Next" is inactive), then stop the loop and move on to the next letter:
from bs4 import BeautifulSoup
from string import ascii_lowercase
import requests
import pandas as pd
import re

letters = ascii_lowercase
link = "https://www.nfl.com"
results = pd.DataFrame()

for letter in letters:
    continueToNextPage = True
    after = ''
    page = 1
    while continueToNextPage:
        # Get the table
        url = f"https://www.nfl.com/players/active/{letter}?query={letter}&after={after}"
        response = requests.get(url)
        soup = BeautifulSoup(response.text, 'html.parser')
        temp_df = pd.read_html(response.text)[0]
        results = pd.concat([results, temp_df], sort=False).reset_index(drop=True)
        print("{letter}: Page: {page}".format(letter=letter.upper(), page=page))

        # Check if the next page is inactive
        buttons = soup.find('div', {'class': 'nfl-o-table-pagination__buttons'})
        regex = re.compile('.*pagination__next.*is-inactive.*')
        if buttons.find('span', {'class': regex}):
            continueToNextPage = False
        else:
            after = buttons.find('a', {'title': 'Next'})['href'].split('after=')[-1]
            page += 1
Output:
print (results)
Player Current Team Position Status
0 Chidobe Awuzie Dallas Cowboys CB ACT
1 Josh Avery Seattle Seahawks DT ACT
2 Genard Avery Philadelphia Eagles DE ACT
3 Anthony Averett Baltimore Ravens CB ACT
4 Lee Autry Chicago Bears DT ACT
5 Denico Autry Indianapolis Colts DT ACT
6 Tavon Austin Dallas Cowboys WR UFA
7 Blessuan Austin New York Jets CB ACT
8 Antony Auclair Tampa Bay Buccaneers TE ACT
9 Jeremiah Attaochu Denver Broncos LB ACT
10 Hunter Atkinson Atlanta Falcons OT ACT
11 John Atkins Detroit Lions DE ACT
12 Geno Atkins Cincinnati Bengals DT ACT
13 Marcell Ateman Las Vegas Raiders WR ACT
14 George Aston New York Giants RB ACT
15 Dravon Askew-Henry New York Giants DB ACT
16 Devin Asiasi New England Patriots TE ACT
17 George Asafo-Adjei New York Giants OT ACT
18 Ade Aruna Las Vegas Raiders DE ACT
19 Grayland Arnold Philadelphia Eagles SAF ACT
20 Dan Arnold Arizona Cardinals TE ACT
21 Damon Arnette Las Vegas Raiders CB UDF
22 Ray-Ray Armstrong Dallas Cowboys LB UFA
23 Ka'John Armstrong Denver Broncos OT ACT
24 Dorance Armstrong Dallas Cowboys DE ACT
25 Cornell Armstrong Houston Texans CB ACT
26 Terron Armstead New Orleans Saints OT ACT
27 Ryquell Armstead Jacksonville Jaguars RB ACT
28 Arik Armstead San Francisco 49ers DE ACT
29 Alex Armah Carolina Panthers FB ACT
... ... ... ...
3180 Clive Walford Miami Dolphins TE UFA
3181 Cameron Wake Tennessee Titans DE UFA
3182 Corliss Waitman Pittsburgh Steelers P ACT
3183 Rick Wagner Green Bay Packers OT ACT
3184 Bobby Wagner Seattle Seahawks MLB ACT
3185 Ahmad Wagner Chicago Bears WR ACT
3186 Colby Wadman Denver Broncos P ACT
3187 Christian Wade Buffalo Bills RB ACT
3188 LaAdrian Waddle Buffalo Bills OT UFA
3189 Oshane Ximines New York Giants LB ACT
3190 Trevon Young Cleveland Browns DE ACT
3191 Sam Young Las Vegas Raiders OT ACT
3192 Kenny Young Los Angeles Rams ILB ACT
3193 Chase Young Washington Redskins DE UDF
3194 Bryson Young Atlanta Falcons DE ACT
3195 Isaac Yiadom Denver Broncos CB ACT
3196 T.J. Yeldon Buffalo Bills RB ACT
3197 Deon Yelder Kansas City Chiefs TE ACT
3198 Rock Ya-Sin Indianapolis Colts CB ACT
3199 Eddie Yarbrough Minnesota Vikings DE ACT
3200 Marshal Yanda Baltimore Ravens OG ACT
3201 Tavon Young Baltimore Ravens CB ACT
3202 Brandon Zylstra Carolina Panthers WR ACT
3203 Jabari Zuniga New York Jets DE UDF
3204 Greg Zuerlein Dallas Cowboys K ACT
3205 Isaiah Zuber New England Patriots WR ACT
3206 Justin Zimmer Cleveland Browns DT ACT
3207 Anthony Zettel Minnesota Vikings DE ACT
3208 Kevin Zeitler New York Giants OG ACT
3209 Olamide Zaccheaus Atlanta Falcons WR ACT
[3210 rows x 4 columns]
I am trying to read an Excel file whose cells contain multi-line text. I am using xlrd 1.2.0, but when I print the cell text, or even write it to a .txt file, line breaks and tabs (i.e. \n or \t) are not preserved.
Input: the Excel file linked in the original post.
Code:
import xlrd
filenamedotxlsx = '16.xlsx'
gall_artists = xlrd.open_workbook(filenamedotxlsx)
sheet = gall_artists.sheet_by_index(0)
bio = sheet.cell_value(0,1)
print(bio)
Output:
"Biography 2018-2019 Manoeuvre Textiles Atelier, Gent, Belgium 2017-2018 Thalielab, Brussels, Belgium 2017 Laboratoires d'Aubervilliers, Paris 2014-2015 Galveston Artist Residency (GAR), Texas 2014 MACBA, Barcelona & L'appartment 22, Morocco - Residency 2013 International Residence Recollets, Paris 2007 Gulbenkian & RSA Residency, BBC Natural History Dept, UK 2004-2006 Delfina Studios, UK Studio Award, London 1998-2000 De Ateliers, Post-grad Residency, Amsterdam 1995-1998 BA (Hons) Textile Art, Winchester School of Art UK "
Expected Output:
1975 Born in Hangzhou, Zhejiang, China
1980 Started to learn Chinese ink painting
2000 BA, Major in Oil Painting, China Academy of Art, Hangzhou, China
Curator, Hangzhou group exhibition for 6 female artists Untitled, 2000 Present
2007 MA, New Media, China Academy of Art, Hangzhou, China, studied under Jiao Jian
Lecturer, Department of Art, Zhejiang University, Hangzhou, China
2015 PhD, Calligraphy, China Academy of Art, Hangzhou, China, studied under Wang Dongling
Jury, 25th National Photographic Art Exhibition, China Millennium Monument, Beijing, China
2016 Guest professor, Faculty of Humanities, Zhejiang University, Hangzhou, China
Associate professor, Research Centre of Modern Calligraphy, China Academy of Art, Hangzhou, China
Researcher, Lanting Calligraphy Commune, Zhejiang, China
2017 Christie's produced a video about Chu Chu's art
2018 Featured by Poetry Calligraphy Painting Quarterly No.2, Beijing, China
Present Vice Secretary, Lanting Calligraphy Society, Hangzhou, China
Vice President, Zhejiang Female Calligraphers Association, Hangzhou, China
I have also used repr() to see if there are \n characters or not, but there aren't any.
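One thing worth checking (an assumption, since the file itself isn't reproduced here) is whether a different reader preserves the breaks: openpyxl, for instance, returns cell strings with their embedded \n characters intact, and unlike xlrd 1.2.0 it still supports .xlsx files. A minimal round-trip sketch:

```python
from io import BytesIO
from openpyxl import Workbook, load_workbook

# Write a cell containing explicit line breaks, then read it back.
wb = Workbook()
wb.active['A1'] = "1975 Born in Hangzhou\n1980 Started to learn Chinese ink painting"
buffer = BytesIO()
wb.save(buffer)

cell_text = load_workbook(buffer).active['A1'].value
print(repr(cell_text))  # the \n survives the round trip
```

If repr() shows no \n even with a reader like this, the breaks may not actually be stored in the cell value (e.g. the text may only be soft-wrapped visually in Excel).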
I know this should be easy but it's driving me mad...
I am trying to turn a dataframe into a grouped dataframe.
df outputs:
Postcode Borough Neighbourhood
0 M3A North York Parkwoods
1 M4A North York Victoria Village
2 M5A Downtown Toronto Harbourfront
3 M5A Downtown Toronto Regent Park
4 M6A North York Lawrence Heights
5 M6A North York Lawrence Manor
6 M7A Queen's Park Not assigned
7 M9A Etobicoke Islington Avenue
8 M1B Scarborough Rouge
9 M1B Scarborough Malvern
10 M3B North York Don Mills North
...
I want to make a grouped dataframe where the Neighbourhood is grouped by Postcode and all neighborhoods then become a concatenated string of Neighbourhoods as grouped by Postcode...
something like:
Postcode Borough Neighbourhood
0 M3A North York Parkwoods
1 M4A North York Victoria Village
2 M5A Downtown Toronto Harbourfront, Regent Park
...
I am trying to use:
df.groupby(['Postcode'])['Neighbourhood'].apply(lambda strs: ', '.join(strs))
But this does not return a new dataframe; df is unchanged when I inspect it afterwards.
if I use:
df = df.groupby(['Postcode'])['Neighbourhood'].apply(lambda strs: ', '.join(strs))
it turns df into an object?
Use this code:
new_df = df.groupby(['Postcode', 'Borough']).agg({'Neighbourhood':lambda x:', '.join(x)}).reset_index()
reset_index() takes your group-by columns out of the index, returns them as regular columns of the dataframe, and creates a new integer index.
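A runnable sketch with the sample rows shown in the question:

```python
import pandas as pd

df = pd.DataFrame({
    'Postcode': ['M3A', 'M4A', 'M5A', 'M5A'],
    'Borough': ['North York', 'North York', 'Downtown Toronto', 'Downtown Toronto'],
    'Neighbourhood': ['Parkwoods', 'Victoria Village', 'Harbourfront', 'Regent Park'],
})

# Group by both key columns, join the neighbourhoods, and restore a flat index.
new_df = (df.groupby(['Postcode', 'Borough'])
            .agg({'Neighbourhood': lambda x: ', '.join(x)})
            .reset_index())
print(new_df)
```

This also explains the "turns df into an object" symptom: grouping on one column and applying to a single column, as in the question, returns a Series rather than a DataFrame, while agg(...) followed by reset_index() keeps the result a DataFrame.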