Python chunks write to excel

I am new to Python and I'm learning by doing.
At the moment my code runs quite slowly, and it seems to take longer each time I run it.
The idea is to download an employee list as a CSV, then check the location of each Employee ID by running it through a specific page, and write the result to an Excel file.
We have around 600 associates on site each day, and I need to find their locations and keep refreshing them every 2-4 minutes.
EDIT:
For everyone to have a better understanding: I have a CSV file (TOT.csv) that contains the Employee IDs, names, and other information for the associates I have on site.
In order to get their locations, I need to run each Employee ID from that CSV file through https://guided-coaching-dub.corp.amazon.com/api/employee-location-svc/GetLastSeenLocationOfEmployee?employeeId= one by one, while at the same time writing the result to another CSV file (Location.csv). Right now it takes about 10 minutes, and I want to understand whether the way I did it is the best possible way, or if there is something else that I could try.
My code looks like this:
# imports assumed by this snippet (not shown in the original)
import csv
import time

from pandas import read_csv
from selenium.webdriver.common.by import By

# "driver" is an already-initialised Selenium WebDriver (setup not shown)

# GET EMPLOYEE ID FROM THE CSV
data = read_csv("Z:\\_Tracker\\Dump\\attendance\\TOT.csv")

# converting column data to a list
TOT_employeeID = data['Employee ID'].tolist()

# clean the Location sheet
with open("Z:\\_Tracker\\Dump\\attendance\\Location.csv", "w") as f:
    pass
print("Previous Location data cleared ... ")

# go through EACH employee ID to find out its location
for x in TOT_employeeID:
    driver.get(
        "https://guided-coaching-dub.corp.amazon.com/api/employee-location-svc/GetLastSeenLocationOfEmployee?employeeId=" + x)
    print("Getting Location data for EmployeeID: " + x)
    locData = driver.find_element(By.TAG_NAME, 'body').text
    aaData = str(locData)
    realLoc = aaData.split('"')
    # write to the CSV
    with open("Z:\\_Tracker\\Dump\\attendance\\Location.csv", "a") as f:
        writer = csv.writer(f)
        writer.writerow(realLoc)
    time.sleep(5)

print("Employee Location data downloaded...")
Is there a way I can do this faster?
Thank you in advance!
Regards,
Alex

Something like this:
import concurrent.futures
import csv

import pandas as pd
from pandas import read_csv

# "driver" and By are assumed to be set up as in the question.

def process_data(data: pd.DataFrame) -> None:
    associates = data['Employee ID'].unique()
    with concurrent.futures.ProcessPoolExecutor() as executor:
        executor.map(get_location, associates)

def get_location(associate: str) -> None:
    driver.get(
        "https://guided-coaching-dub.corp.amazon.com/api/employee-location-svc/GetLastSeenLocationOfEmployee?"
        f"employeeId={associate}")
    print(f"Getting Location data for EmployeeID: {associate}")
    realLoc = str(driver.find_element(By.TAG_NAME, 'body').text).split('"')
    with open("Z:\\_Tracker\\Dump\\attendance\\Location.csv", "a") as f:
        writer = csv.writer(f)
        writer.writerow(realLoc)

if __name__ == "__main__":
    data = read_csv("Z:\\_Tracker\\Dump\\attendance\\TOT.csv")
    process_data(data)
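One caveat with the sketch above: a WebDriver created in the parent process is not available inside ProcessPoolExecutor workers, and parallel appends to a single CSV can interleave rows. A hedged variant, assuming Chrome and a working chromedriver, is to use threads instead, give each thread its own lazily created driver via threading.local, and do all the writing once in the main thread:
import concurrent.futures
import csv
import threading

from pandas import read_csv
from selenium import webdriver
from selenium.webdriver.common.by import By

thread_local = threading.local()

def get_driver():
    # One WebDriver per thread, created lazily on first use; WebDriver
    # instances are not safe to share across threads or processes.
    if not hasattr(thread_local, "driver"):
        thread_local.driver = webdriver.Chrome()
    return thread_local.driver

def get_location_row(associate):
    driver = get_driver()
    driver.get(
        "https://guided-coaching-dub.corp.amazon.com/api/employee-location-svc/"
        "GetLastSeenLocationOfEmployee?employeeId=" + associate)
    return driver.find_element(By.TAG_NAME, 'body').text.split('"')

def main():
    data = read_csv("Z:\\_Tracker\\Dump\\attendance\\TOT.csv")
    associates = data['Employee ID'].unique()
    # Threads are enough here because the work is I/O-bound.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
        rows = list(executor.map(get_location_row, associates))
    # One writer, opened once: no interleaved rows from concurrent appends.
    # (The per-thread drivers are left for the OS to clean up in this sketch.)
    with open("Z:\\_Tracker\\Dump\\attendance\\Location.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

if __name__ == "__main__":
    main()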

You could try separating the step of reading the information from the step of writing it to your CSV file, like below:
# Get employee location information
# Create a list for employee information, to be used below
employee_Locations = []
for x in TOT_employeeID:
    driver.get("https://guided-coaching-dub.corp.amazon.com/api/employee-location-svc/GetLastSeenLocationOfEmployee?employeeId=" + x)
    print("Getting Location data for EmployeeID: " + x)
    locData = driver.find_element(By.TAG_NAME, 'body').text
    aaData = str(locData)
    realLoc = [aaData.split('"')]
    employee_Locations.extend(realLoc)

# Write to the CSV - try this as a separate step
with open("Z:\\_Tracker\\Dump\\attendance\\Location.csv", "a") as f:
    writer = csv.writer(f)
    writer.writerows(employee_Locations)  # one call writes every collected row
print("Employee Location data downloaded...")
You may see some performance gains by collecting all your information first and only then writing it to your CSV file.
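Since the endpoint appears to return its payload directly in the response body, it may also be worth testing whether a plain HTTP client can replace the browser round-trip entirely. A minimal sketch, assuming the endpoint honours ordinary session cookies (for example, ones copied across from the Selenium driver; that assumption needs verifying against your auth setup):
import csv
import requests

BASE = ("https://guided-coaching-dub.corp.amazon.com/api/"
        "employee-location-svc/GetLastSeenLocationOfEmployee?employeeId=")

def fetch_all(employee_ids, cookies=None):
    rows = []
    with requests.Session() as session:
        if cookies:
            # e.g. {c['name']: c['value'] for c in driver.get_cookies()}
            session.cookies.update(cookies)
        for emp_id in employee_ids:
            resp = session.get(BASE + emp_id, timeout=10)
            rows.append(resp.text.split('"'))  # same parsing as the original
    return rows

# usage sketch:
# rows = fetch_all(TOT_employeeID)
# with open("Z:\\_Tracker\\Dump\\attendance\\Location.csv", "w", newline="") as f:
#     csv.writer(f).writerows(rows)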


How to create a for loop from a input dependent function in Python?

I am finally getting the hang of Python and have started using it on a daily basis at work. However, the learning curve is still steep, and I have hit a roadblock trying something new with code I found here for scraping members from Telegram channels.
Currently, in lines 38-44, we can select a group from the list and it will scrape the user data into members.csv.
EDIT: Resolved the CSV naming issue:
print('Saving In file...')
print(target_group.title)
filename = target_group.title
with open("{}.csv".format(filename), "w", encoding='UTF-8') as f:
Instead of relying on input, I would like to create a for loop which would iterate through every group in the list.
print('Choose a group to scrape members from:')
i = 0
for g in groups:
    print(str(i) + '- ' + g.title)
    i += 1
g_index = input("Enter a Number: ")
target_group = groups[int(g_index)]
The problem is that I am not sure exactly how to replace this part of the code with a for loop.
Also, just changing it into a for loop would merely overwrite the same members.csv file with each iteration, so I plan on changing that so that it outputs to unique files.
So, circling back to my question: how do I make a single run of the program loop through all of the groups, or just select all of them?
Thanks for the help!
Couldn't test this, but maybe something like this? It creates a new .csv file for each group.
for chat in chats:
    try:
        if chat.megagroup:
            groups.append(chat)
    except:
        continue

for current_group in groups:
    print(f"Fetching members for group \"{current_group.title}\"...")
    all_participants = client.get_participants(current_group, aggressive=True)
    current_file_name = f"members_{current_group.title}.csv"
    print(f"Saving in file \"{current_file_name}\"...")
    with open(current_file_name, "w+", encoding="UTF-8") as file:
        writer = csv.writer(file, delimiter=",", lineterminator="\n")
        writer.writerow(["username", "user id", "access hash", "name", "group", "group id"])
        for user in all_participants:
            username = user.username if user.username else ""
            first_name = user.first_name.strip() if user.first_name else ""
            last_name = user.last_name.strip() if user.last_name else ""
            name = f"{first_name} {last_name}"
            row = [username, user.id, user.access_hash, name, current_group.title, current_group.id]
            writer.writerow(row)
    print(f"Finished writing to file \"{current_file_name}\".")

print("Members scraped successfully.")
Ended up figuring out the issue:
On naming the CSV file: used the title attribute to name the file, with string formatting for the replacement.
g_index = chat_num
target_group = groups[int(g_index)]
filename = target_group.title
print('Fetching Members from {} ...'.format(filename))
all_participants = []
all_participants = client.get_participants(target_group, aggressive=True)
print('Saving In file...')
with open("{}.csv".format(filename), "w", encoding='UTF-8') as f:
On creating a for loop for the sequence: the original code (posted in the question) did not include a for loop. My workaround was to wrap everything in a function and then iterate through an indexed list equal to the number of chats detected. In the end it looks like this:
chat_list_index = list(range(len(chats)))
for x in chat_list_index:
    try:
        get(x)
    except:
        print("No more groups.", end=" ")
print("Done")
Overall, this might not be the best way to accomplish what I set out to do, but it's good enough for me for now, and I have learned a lot. Maybe someone in the future will find this beneficial. Full code available here: https://github.com/ivanstruk/telegram-member-scraper/
Cheers!

Function that uses information from a CSV file to display more information

The question I have to answer is: write a function displayTime that receives the list of runners and the number of a runner, and displays the name and the time. If the number is not found on the list, the function displays an error message.
So far I have created a function which loads all the data from the CSV file and stores it under separate categories. The code is as follows:
import csv

def loadResults():
    with open('marathon.csv') as csvfile:
        readCSV = csv.reader(csvfile, delimiter=',')
        s = {}
        runners = []
        number = []
        time = []
        name = []
        surname = []
        for row in readCSV:
            num = row[0]
            times = row[1]
            firstname = row[2]
            surnam = row[3]
            number.append(num)
            time.append(times)
            name.append(firstname)
            surname.append(surnam)
However, for the question, I have to display the name and the time of a runner when their number is entered. So far I have:
def displayTime(runners, number):
    for s in runners:
        if s['time'] == number:
            print(s['name'])
Any help would be greatly appreciated.
Try doing this. I am not 100% sure, but I think it would work:
Variable_Name = open("marathon.csv", "r")
and then separate the parts of the file you want with
Variable_Name.read().split(",")
Tell me if it works or not.
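For reference, here is a minimal sketch of the pair of functions the assignment describes, assuming the CSV columns are number, time, first name, surname, in that order, as in the loader above:
import csv

def loadResults():
    # Build one dictionary per runner so lookups by number are straightforward.
    runners = []
    with open('marathon.csv') as csvfile:
        for row in csv.reader(csvfile, delimiter=','):
            runners.append({'number': row[0], 'time': row[1],
                            'name': row[2], 'surname': row[3]})
    return runners

def displayTime(runners, number):
    for runner in runners:
        if runner['number'] == number:
            print(runner['name'], runner['surname'], runner['time'])
            return
    print("Error: runner number {} not found.".format(number))
Note that the lookup compares against runner['number'], not runner['time'], which was the main bug in the attempt above.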

EOFError: Ran out of input

When I run the code below I get the error message "EOFError: Ran out of input".
What does it mean? How can it be corrected? And how do I output the record details on the screen?
import pickle  # this library is required to create binary files

class CarRecord:
    def __init__(self):
        self.VehicleID = " "
        self.Registration = " "
        self.DateOfRegistration = " "
        self.EngineSize = 0
        self.PurchasePrice = 0.00

ThisCar = CarRecord()
Car = [ThisCar for i in range(2)]  # list of 2 car records (note: both entries reference the SAME object)

Car[0].VehicleID = "CD333"
Car[0].Registration = "17888"
Car[0].DateOfRegistration = "18/2/2017"
Car[0].EngineSize = 2500
Car[0].PurchasePrice = 22000.00

Car[1].VehicleID = "AB123"
Car[1].Registration = "16988"
Car[1].DateOfRegistration = "19/2/2017"
Car[1].EngineSize = 2500
Car[1].PurchasePrice = 20000.00

CarFile = open('Cars.TXT', 'wb')  # open file for binary write
for j in range(2):  # loop for each array element
    pickle.dump(Car[j], CarFile)  # write a whole record to the binary file
CarFile.close()  # close file

CarFile = open('Cars.TXT', 'rb')  # open file for binary read
Car = []  # start with an empty list
while True:  # check for end of file -- but pickle.load() raises EOFError before this loop can end
    Car.append(pickle.load(CarFile))  # append record from file to end of list
CarFile.close()
Short answer: the simplest solution is to write the complete list to the file with a single pickle.dump() call. There's no need to write the objects one by one in a loop; pickle is designed to handle lists for you.
Example code and alternative solutions:
Below is a fully working example. Some notes:
I've updated your __init__ function a bit to make the initialization code easier and shorter.
I've also added a __repr__ function, which can be used to print the record details to the screen, as you also asked. (Note that you could implement a __str__ function instead, but I chose __repr__ for this example.)
This code example uses standard Python coding style (PEP 8).
This code uses a context manager to open the file, which is safer and avoids the need to close the file manually.
If you really want to write the objects manually, for whatever reason, there are a few safe alternatives. I'll explain them after this code example:
import pickle

class CarRecord:
    def __init__(self, vehicle_id, registration, registration_date, engine_size, purchase_price):
        self.vehicle_id = vehicle_id
        self.registration = registration
        self.registration_date = registration_date
        self.engine_size = engine_size
        self.purchase_price = purchase_price

    def __repr__(self):
        return "CarRecord(%r, %r, %r, %r, %r)" % (self.vehicle_id, self.registration,
                                                  self.registration_date, self.engine_size,
                                                  self.purchase_price)

def main():
    cars = [
        CarRecord("CD333", "17888", "18/2/2017", 2500, 22000.00),
        CarRecord("AB123", "16988", "19/2/2017", 2500, 20000.00),
    ]

    # Write cars to file.
    with open('Cars.TXT', 'wb') as car_file:
        pickle.dump(cars, car_file)

    # Read cars from file.
    with open('Cars.TXT', 'rb') as car_file:
        cars = pickle.load(car_file)

    # Print cars.
    for car in cars:
        print(car)

if __name__ == '__main__':
    main()
Output:
CarRecord('CD333', '17888', '18/2/2017', 2500, 22000.0)
CarRecord('AB123', '16988', '19/2/2017', 2500, 20000.0)
Instead of dumping the list at once, you could also do it in a loop. The following code snippets are alternative implementations to "Write cars to file" and "Read cars from file".
Alternative 1: write number of objects to file
At the start of the file, write the number of cars. This can then be used to read the same number of cars back from the file.
# Write cars to file.
with open('Cars.TXT', 'wb') as car_file:
    pickle.dump(len(cars), car_file)
    for car in cars:
        pickle.dump(car, car_file)

# Read cars from file.
with open('Cars.TXT', 'rb') as car_file:
    num_cars = pickle.load(car_file)
    cars = [pickle.load(car_file) for _ in range(num_cars)]
Alternative 2: use an "end" marker
At the end of the file, write some recognizable value, for example None. When reading, this object can be used to detect the end of the file.
# Write cars to file.
with open('Cars.TXT', 'wb') as car_file:
    for car in cars:
        pickle.dump(car, car_file)
    pickle.dump(None, car_file)

# Read cars from file.
with open('Cars.TXT', 'rb') as car_file:
    cars = []
    while True:
        car = pickle.load(car_file)
        if car is None:
            break
        cars.append(car)
You can change your while loop to this; it will break out of the loop at the end of the input, when it receives the EOFError:
while True:  # check for end of file
    try:
        Car.append(pickle.load(CarFile))  # append record from file to end of list
    except EOFError:
        break
CarFile.close()
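A variation on the same idea, if it helps readability: wrap the try/except in a small generator so the reading loop stays flat. This is a sketch, not from the original answers:
import pickle

def read_records(path):
    # Yield pickled objects one at a time until the file is exhausted.
    with open(path, 'rb') as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

Car = list(read_records('Cars.TXT'))
for record in Car:
    print(record.VehicleID, record.Registration)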
You also get that error when the file you are trying to load with pickle is empty, so make sure something has actually been written to the .pkl file.
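If an empty file is a real possibility in your case, a guard like the following may help. This is a sketch, assuming a zero-byte file simply means nothing was ever pickled:
import os
import pickle

path = 'Cars.TXT'
if os.path.getsize(path) > 0:
    with open(path, 'rb') as f:
        first_record = pickle.load(f)  # safe: the file is known to be non-empty
else:
    print("File is empty - nothing to load.")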

Previously working script now fails to generate csv file. Why?

The title can be misleading: the Python script WORKS, but it fails to generate a CSV file, which it previously had no problem doing.
Source:
import requests
import unicodecsv as csv
import json

api_url = 'http://api.indeed.com/ads/apisearch?publisher=8710117352111766&v=2&limit=100000&format=json'
number = 0
SearchTerm = 'McKinsey'
countries = set(['us','ar','au','at','bh','be','br','ca','cl','cn','co','cz','dk','fi','fr','de','gr','hk','hu','in','id','ie','il','it','jp','kr','kw','lu','my','mx','nl','nz','no','om','pk','pe','ph','pl','pt','qa','ro','ru','sa','sg','za','es','se','ch','tw','tr','ae','gb','ve'])

with open(SearchTerm + '.csv', 'a') as csvfile:
    fieldnames = ['city','company','country','date','expired','formattedLocation','formattedLocationFull','formattedRelativeTime','indeedApply','jobkey','jobtitle','latitude','longitude','onmousedown','snippet','source','sponsored','state','url']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames, lineterminator='\n')
    writer.writeheader()
    for SCountry in countries:
        Country = SCountry  # this is the variable assigned to the country
        urlfirst = api_url + '&co=' + Country + '&q=' + SearchTerm
        grabforNum = requests.get(urlfirst)
        json_content = json.loads(grabforNum.content)
        print(json_content["totalResults"])
        numresults = json_content["totalResults"]
        # must match the actual number of job results to the lower of the 25 increment,
        # or the last page will repeat over and over
        for number in range(0, numresults, 25):
            url = api_url + '&co=' + Country + '&q=' + SearchTerm + '&latlong=1' + '&start=' + str(number)
            response = requests.get(url)
            grabforclean = json.loads(response.content)
            clean_json = grabforclean['results']
            print 'Complete ' + url
            for job in clean_json:
                writer.writerow(job)
This is the original owner of the script. I was using it three days ago, until I had to reinstall my operating system. Now, for some reason, it fails to store the content it collects into a CSV file. The API key works, and there are no error messages; requests, unicodecsv, and json are all installed.
Stuff like this really drives me up the wall: how do you diagnose something that previously worked? I had multiple versions of the script searching for different keywords, so I know my modifications are not to blame; perhaps something outside the script is broken.
The website has probably recently started returning a new field, so you have two choices:
Add stations to your list of fieldnames.
Add extrasaction='ignore' to your csv.DictWriter parameters to keep all your existing fields and ignore any new ones that are added.
Both of these solutions will allow your script to work again.
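For illustration, a minimal sketch of the second option; only the DictWriter construction changes, and the rest of the script stays as it is:
writer = csv.DictWriter(csvfile,
                        fieldnames=fieldnames,
                        lineterminator='\n',
                        extrasaction='ignore')  # skip unknown fields instead of raising ValueError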

Python - Web Scraping - BeautifulSoup & CSV

I am hoping to extract the change in cost of living for one city against many cities. I plan to list the cities I would like to compare in a CSV file and to use this list to create the web link that takes me to the page with the information I am looking for.
Here is the link to an example: http://www.expatistan.com/cost-of-living/comparison/phoenix/new-york-city
Unfortunately, I am running into several challenges. Any assistance with the following is greatly appreciated!
1. The output only shows the percentage, with no indication of whether it is more expensive or cheaper. For the example listed above, my output based on the current code shows 48%, 129%, 63%, 43%, 42%, and 42%. I tried to correct for this by adding an if-statement that adds a '+' sign if it is more expensive, or a '-' sign if it is cheaper. However, this if-statement is not functioning correctly.
2. When I write the data to a CSV file, each of the percentages is written to a new row. I can't seem to figure out how to write them as a list on one line.
3. (Related to item 2.) When I write the data to a CSV file for the example listed above, the data is written in the format listed below. How can I correct the format and have the data written in the preferred format listed below (also without the percentage sign)?
CURRENT CSV FORMAT (Note: 'if-statement' not functioning correctly):
City,Food,Housing,Clothes,Transportation,Personal Care,Entertainment
n,e,w,-,y,o,r,k,-,c,i,t,y,-,4,8,%
n,e,w,-,y,o,r,k,-,c,i,t,y,-,1,2,9,%
n,e,w,-,y,o,r,k,-,c,i,t,y,-,6,3,%
n,e,w,-,y,o,r,k,-,c,i,t,y,-,4,3,%
n,e,w,-,y,o,r,k,-,c,i,t,y,-,4,2,%
n,e,w,-,y,o,r,k,-,c,i,t,y,-,4,2,%
PREFERRED CSV FORMAT:
City,Food,Housing,Clothes,Transportation,Personal Care,Entertainment
new-york-city, 48,129,63,43,42,42
Here is my current code:
import requests
import csv
from bs4 import BeautifulSoup

# Read text file
Textfile = open("City.txt")
Textfilelist = Textfile.read()
Textfilelistsplit = Textfilelist.split("\n")

HomeCity = 'Phoenix'

i = 0
while i < len(Textfilelistsplit):
    url = "http://www.expatistan.com/cost-of-living/comparison/" + HomeCity + "/" + Textfilelistsplit[i]
    page = requests.get(url).text
    soup_expatistan = BeautifulSoup(page)

    # Prepare CSV writer.
    WriteResultsFile = csv.writer(open("Expatistan.csv", "w"))
    WriteResultsFile.writerow(["City", "Food", "Housing", "Clothes", "Transportation", "Personal Care", "Entertainment"])

    expatistan_table = soup_expatistan.find("table", class_="comparison")
    expatistan_titles = expatistan_table.find_all("tr", class_="expandable")
    for expatistan_title in expatistan_titles:
        percent_difference = expatistan_title.find("th", class_="percent")
        percent_difference_title = percent_difference.span['class']
        if percent_difference_title == "expensiver":
            WriteResultsFile.writerow(Textfilelistsplit[i] + '+' + percent_difference.span.string)
        else:
            WriteResultsFile.writerow(Textfilelistsplit[i] + '-' + percent_difference.span.string)
    i += 1
Answers:
Question 1: the class of the span is a list, so you need to check whether expensiver is inside this list. In other words, replace:
if percent_difference_title == "expensiver"
with:
if "expensiver" in percent_difference.span['class']
Questions 2 and 3: you need to pass a list of column values to writerow(), not a string. And, since you want only one record per city, call writerow() outside of the loop (over the trs).
Other issues:
open the CSV file for writing once, before the loop
use with context managers while working with files
try to follow the PEP 8 style guide
Here's the code with modifications:
import requests
import csv
from bs4 import BeautifulSoup

BASE_URL = 'http://www.expatistan.com/cost-of-living/comparison/{home_city}/{city}'
home_city = 'Phoenix'

with open('City.txt') as input_file:
    with open("Expatistan.csv", "w") as output_file:
        writer = csv.writer(output_file)
        writer.writerow(["City", "Food", "Housing", "Clothes", "Transportation", "Personal Care", "Entertainment"])
        for line in input_file:
            city = line.strip()
            url = BASE_URL.format(home_city=home_city, city=city)
            soup = BeautifulSoup(requests.get(url).text)
            table = soup.find("table", class_="comparison")
            differences = []
            for title in table.find_all("tr", class_="expandable"):
                percent_difference = title.find("th", class_="percent")
                if "expensiver" in percent_difference.span['class']:
                    differences.append('+' + percent_difference.span.string)
                else:
                    differences.append('-' + percent_difference.span.string)
            writer.writerow([city] + differences)
For the City.txt containing just one new-york-city line, it produces Expatistan.csv with the following content:
City,Food,Housing,Clothes,Transportation,Personal Care,Entertainment
new-york-city,+48%,+129%,+63%,+43%,+42%,+42%
Make sure you understand the changes I have made. Let me know if you need further help.
csv.writer.writerow() takes a sequence and makes each element a column; normally you'd give it a list of columns, but you are passing in strings, so each individual character becomes a column.
Just build a list, then write it to the CSV file.
First, open the CSV file once, not once for every city; you are clearing out the file every time you open it.
import requests
import csv
from bs4 import BeautifulSoup

HomeCity = 'Phoenix'

with open("City.txt") as cities, open("Expatistan.csv", "wb") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["City", "Food", "Housing", "Clothes",
                     "Transportation", "Personal Care", "Entertainment"])

    for line in cities:
        city = line.strip()
        url = "http://www.expatistan.com/cost-of-living/comparison/{}/{}".format(
            HomeCity, city)
        resp = requests.get(url)
        soup = BeautifulSoup(resp.content, from_encoding=resp.encoding)
        titles = soup.select("table.comparison tr.expandable")
        row = [city]
        for title in titles:
            percent_difference = title.find("th", class_="percent")
            changeclass = percent_difference.span['class']
            change = percent_difference.span.string
            if "expensiver" in changeclass:
                change = '+' + change
            else:
                change = '-' + change
            row.append(change)
        writer.writerow(row)
So, first of all, one passes the writerow method an iterable, and each object in that iterable gets written with commas separating them. So if you give it a string, then each character gets separated:
WriteResultsFile.writerow('hello there')
writes
h,e,l,l,o, ,t,h,e,r,e
But
WriteResultsFile.writerow(['hello', 'there'])
writes
hello,there
That's why you are getting results like
n,e,w,-,y,o,r,k,-,c,i,t,y,-,4,8,%
The rest of your problems are errors in your web scraping. First of all, when I scrape the site, searching for tables with the CSS class "comparison" gives me None, so I had to use
expatistan_table = soup_expatistan.find("table", "comparison")
Now, the reason your if-statement is "broken" is that
percent_difference.span['class']
returns a list. If we modify that to
percent_difference.span['class'][0]
things will work the way you expect.
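To see why, here is a quick illustration of what BeautifulSoup returns for multi-valued attributes (an illustrative example with made-up HTML, since the live page may differ):
from bs4 import BeautifulSoup

soup = BeautifulSoup('<span class="expensiver big">48%</span>', 'html.parser')
print(soup.span['class'])     # ['expensiver', 'big'] - class is multi-valued, so you get a list
print(soup.span['class'][0])  # 'expensiver'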
Now, your real issue is that inside the innermost loop you are finding the % change in price for the individual items. You want these as items in your row of price differences, not as individual rows. So, declare an empty list items, append percent_difference.span.string to it, and write the row outside the innermost loop, like so:
items = []
for expatistan_title in expatistan_titles:
    percent_difference = expatistan_title.find("th", "percent")
    percent_difference_title = percent_difference.span["class"][0]
    print percent_difference_title
    if percent_difference_title == "expensiver":
        items.append('+' + percent_difference.span.string)
    else:
        items.append('-' + percent_difference.span.string)
row = [Textfilelistsplit[i]]
row.extend(items)
WriteResultsFile.writerow(row)
The final error is that in the while loop you re-open the CSV file and overwrite everything, so you only end up with the final city. Accounting for all these errors (many of which you should have been able to find without help) leaves us with:
# Prepare CSV writer (opened once, header written once).
WriteResultsFile = csv.writer(open("Expatistan.csv", "w"))
WriteResultsFile.writerow(["City", "Food", "Housing", "Clothes", "Transportation", "Personal Care", "Entertainment"])

i = 0
while i < len(Textfilelistsplit):
    url = "http://www.expatistan.com/cost-of-living/comparison/" + HomeCity + "/" + Textfilelistsplit[i]
    page = requests.get(url).text
    print url
    soup_expatistan = BeautifulSoup(page)

    expatistan_table = soup_expatistan.find("table", "comparison")
    expatistan_titles = expatistan_table.find_all("tr", "expandable")
    items = []
    for expatistan_title in expatistan_titles:
        percent_difference = expatistan_title.find("th", "percent")
        percent_difference_title = percent_difference.span["class"][0]
        print percent_difference_title
        if percent_difference_title == "expensiver":
            items.append('+' + percent_difference.span.string)
        else:
            items.append('-' + percent_difference.span.string)
    row = [Textfilelistsplit[i]]
    row.extend(items)
    WriteResultsFile.writerow(row)
    i += 1
YAA - Yet Another Answer.
Unlike the other answers, this one treats the data as a series of key-value pairs, i.e. a list of dictionaries, which are then written to CSV. A list of wanted fields is provided to the CSV writer (DictWriter), which discards additional information (beyond the specified fields) and blanks missing information. Also, should the order of the information on the original page change, this solution is unaffected.
I also assume you are going to open the CSV file in something like Excel. Additional parameters need to be given to the CSV writer for this to happen nicely (see the dialect parameter). Given that we are not sanitising the returned data, we should explicitly delimit it with unconditional quoting (see the quoting parameter).
import csv
import requests
from bs4 import BeautifulSoup

# Read text file
with open("City.txt") as cities_h:
    cities = cities_h.readlines()

home_city = "Phoenix"
city_data = []
for city in cities:
    city = city.strip()  # readlines() keeps the trailing newline; strip it before building the URL
    url = "http://www.expatistan.com/cost-of-living/comparison/%s/%s" % (home_city, city)
    resp = requests.get(url)
    soup = BeautifulSoup(resp.content, from_encoding=resp.encoding)
    titles = soup.select("table.comparison tr.expandable")
    if titles:
        data = {}
        for title in titles:
            name = title.find("th", class_="clickable")
            diff = title.find("th", class_="percent")
            exp = bool(diff.find("span", class_="expensiver"))
            data[name.text] = ("+" if exp else "-") + diff.span.text
        data["City"] = soup.find("strong", class_="city-2").text
        city_data.append(data)

with open("Expatistan.csv", "w") as csv_h:
    fields = [
        "City",
        "Food",
        "Housing",
        "Clothes",
        "Transportation",
        "Personal Care",
        "Entertainment",
    ]
    # Prepare CSV writer.
    writer = csv.DictWriter(
        csv_h,
        fields,
        quoting=csv.QUOTE_ALL,
        extrasaction="ignore",
        dialect="excel",
        lineterminator="\n",
    )
    writer.writeheader()
    writer.writerows(city_data)
