Extracting data with BeautifulSoup and outputting to CSV - Python

As mentioned in previous questions, I am using Beautiful Soup with Python to retrieve weather data from a website.
Here's what the site's response looks like:
<channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
<channel>
I managed to retrieve the information I need using this code:
import requests
from bs4 import BeautifulSoup

# getting the validTime
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?dataset=2hr_nowcast&keyref=781CF461BB6606AD907750DFD1D07667C6E7C5141804F45D')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time

# getting the date
for currentdate in soup.find_all('item'):
    element = currentdate.find('forecastIssue')
    print "date: " + element['date']

# getting the time
for currentdate in soup.find_all('item'):
    element = currentdate.find('forecastIssue')
    print "time: " + element['time']

# getting every area's attributes as a list of dicts
area_attrs_li = [area.attrs for area in soup.find('weatherForecast').find_all('area')]
print area_attrs_li
Here are my results:
{'lat': u'1.34039000', 'lon': u'103.70500000', 'name': u'Jurong West', 'forecast': u'LR'},
{'lat': u'1.31200000', 'lon': u'103.86200000', 'name': u'Kallang', 'forecast': u'LR'},
How do I remove the u' from the result? I tried a method I found while googling, but it doesn't seem to work.
I'm not strong in Python and have been stuck on this for quite a while.
EDIT: I tried doing this:
import csv

f = open("C:\\scripts\\nea.csv", 'wt')
try:
    writer = csv.writer(f)
    for area in area_attrs_li:
        writer.writerow( (time, element['date'], element['time'], area_attrs_li) )
finally:
    f.close()
print open("C:/scripts/nea.csv", 'rt').read()
It worked; however, I would like to split the areas apart, as the records are duplicated in the CSV:
Thank you.

EDIT 1 - Topic:
You're missing escape characters:

C:\scripts>python neaweather.py
  File "neaweather.py", line 30
    writer.writerow( ('time', 'element['date']', 'element['time']', 'area_attrs_li') )
                                        ^
SyntaxError: invalid syntax

The corrected line, with the inner quotes escaped:
writer.writerow( ('time', 'element[\'date\']', 'element[\'time\']', 'area_attrs_li') )
EDIT 2:
If you want to insert the values themselves:
writer.writerow( (time, element['date'], element['time'], area_attrs_li) )
EDIT 3:
To split the result onto different lines:
for area in area_attrs_li:
    writer.writerow( (time, element['date'], element['time'], area) )
EDIT 4:
The splitting is not correct at all, but it should give a better understanding of how to parse and split data so you can change it for your needs.
To split the area element again as you show in your image, you can parse it:
for area in area_attrs_li:
    area = str(area)  # work on the string representation of the dict
    # cut off the characters you don't need
    area = area.replace('[', '')
    area = area.replace(']', '')
    area = area.replace('{', '')
    area = area.replace('}', '')
    # remove other characters
    area = area.replace("u'", "\"").replace("'", "\"")
    # split the string into a list
    areaList = area.split(",")
    # create your own csv-separator
    ownRowElement = ';'.join(areaList)
    writer.writerow( (time, element['date'], element['time'], ownRowElement) )
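A cleaner route for the original goal, since each element of area_attrs_li is already a dictionary: csv.DictWriter writes one row per area, and the u'' markers disappear on their own, because they are only part of Python 2's repr of unicode strings, not of the data itself. A minimal sketch, assuming the same area_attrs_li and Python 2 as in the question:
import csv

with open("C:\\scripts\\nea.csv", "wb") as f:  # binary mode for the csv module on Python 2
    writer = csv.DictWriter(f, fieldnames=['name', 'lat', 'lon', 'forecast'])
    writer.writeheader()  # one header row with the four attribute names
    for area in area_attrs_li:
        writer.writerow(area)  # one row per area, no u'' prefixes in the file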
Offtopic:
This works for me:
import csv
import json

x = """[
{'lat': u'1.34039000', 'lon': u'103.70500000', 'name': u'Jurong West', 'forecast': u'LR'}
]"""
jsontxt = json.loads(x.replace("u'", "\"").replace("'", "\""))
f = csv.writer(open("test.csv", "w+"))
# write the CSV header; if you don't need it, remove this line
f.writerow(['lat', 'lon', 'name', 'forecast'])
for jsontext in jsontxt:
    f.writerow([jsontext["lat"],
                jsontext["lon"],
                jsontext["name"],
                jsontext["forecast"]])
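As a side note, the replace-into-JSON trick above is fragile: it breaks as soon as a value contains an apostrophe. Since x is already a valid Python literal, ast.literal_eval parses it directly; a sketch under the same x as above:
import ast

jsontxt = ast.literal_eval(x)  # understands the u'' prefixes natively, no string surgery needed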

How to extract text and save as excel file using python or JavaScript

How do I extract text from this PDF file, where some data is in the form of a table while some is key-value based data?
e.g.:
https://drive.internxt.com/s/file/78f2d73478b832b2ab55/3edb275967deeca6ad33e7d53f2337c50d5dfb50e0aa525bb7f10d49dff1e2b4
This is what I have tried:
import PyPDF2
import openpyxl
from openpyxl import Workbook
pdfFileObj = open('sample.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
pdfReader.numPages
pageObj = pdfReader.getPage(0)
mytext = pageObj.extractText()
wb = Workbook()
sheet = wb.active
sheet.title = 'MyPDF'
sheet['A1'] = mytext
wb.save('sample.xlsx')
print('Save')
However, I'd like the data to be stored in the format shown in the image in the original post.
This pdf does not have well-defined tables, so we cannot use a tool that extracts the entire data in one table format. What we can do is read the entire pdf as text and process the data fields line by line, using regex to extract the data.
Before you move ahead, please install the pdfplumber package for python
pip install pdfplumber
Assumptions
Here are some assumptions that I made about your pdf; the code is written accordingly.
The first line will always contain the title Account History Report.
The second line will contain the names IMAGE All Notes.
The third line will contain only the data Date Created in the form of key:value.
The fourth line will contain only the data Number of Pages in the form of key:value.
The fifth line will contain only the data Client Code, Client Name.
Starting at line 6, a pdf can have multiple data entities; in this pdf for example there are 2, but there can be any number of entities.
Each data entity will contain the following fields:
The first line in a data entity will contain only the data Our Ref, Name, Ref 1, Ref 2.
The second line will contain only data in the form present in the pdf: Amount, Total Paid, Balance, Date of A/C, Date Received.
The third line in a data entity will contain the data Last Paid, Amt Last Paid, Status, Collector.
The fourth line will contain the column names Date Notes.
The subsequent lines will contain data in the form of a table until the next data entity starts.
I also assume that each data entity will contain the first data with key Our Ref :.
I assume that the data entities are separated by the first line of each entity matching the pattern of key values Our Ref :Value Name: Value Ref 1 :Value Ref 2:value
pattern = r'Our Ref.*?Name.*?Ref 1.*?Ref 2.*?'
Please note that the rectangles I have drawn (thick black) in the image above are what I am calling data entities.
The final data will be stored in a dictionary (JSON) where the data entities have the keys data_entity1, data_entity2, data_entity3, and so on, based on the number of entities in your pdf.
The header details are stored in the JSON as key:value, and I assume that each key is present in the header only once.
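To make that separator concrete before diving into the full code, here is a toy demonstration of the lookahead split on a fabricated two-entity string (illustrative input only, not text from the real pdf; splitting on a zero-width lookahead needs Python 3.7+):
import re

pattern = r'(?=Our Ref.*?Name.*?Ref 1.*?Ref 2)'
text = ('Account History Report header lines...\n'
        'Our Ref :111 Name: A Ref 1 :x Ref 2:y\n'
        'Our Ref :222 Name: B Ref 1 :p Ref 2:q')
parts = re.split(pattern, text)
# parts[0] is the header block; parts[1] and parts[2] each start at "Our Ref"
print(len(parts))  # 3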
CODE
Here is the simple, elegant code that gives you the information from the pdf in the form of JSON. In the output, the first few fields contain information from the header part; the subsequent data entities can be found as data_entity 1 and 2.
In the code below, all you need to change is pdf_path.
import pdfplumber
import re

# regex pattern for keys in line1 of data entity
my_regex_dict_line1 = {
    'Our Ref' : r'Our Ref :(.*?)Name',
    'Name' : r'Name:(.*?)Ref 1',
    'Ref 1' : r'Ref 1 :(.*?)Ref 2',
    'Ref 2' : r'Ref 2:(.*?)$'
}

# regex pattern for keys in line2 of data entity
my_regex_dict_line2 = {
    'Amount' : r'Amount:(.*?)Total Paid',
    'Total Paid' : r'Total Paid:(.*?)Balance',
    'Balance' : r'Balance:(.*?)Date of A/C',
    'Date of A/C' : r'Date of A/C:(.*?)Date Received',
    'Date Received' : r'Date Received:(.*?)$'
}

# regex pattern for keys in line3 of data entity
my_regex_dict_line3 = {
    'Last Paid' : r'Last Paid:(.*?)Amt Last Paid',
    'Amt Last Paid' : r'Amt Last Paid:(.*?)A/C\s+Status',
    'A/C Status': r'A/C\s+Status:(.*?)Collector',
    'Collector' : r'Collector :(.*?)$'
}

def preprocess_data(data):
    return [el.strip() for el in data.splitlines() if el.strip()]

def get_header_data(text, json_data = {}):
    header_data_list = preprocess_data(text)
    # third line in text of header contains Date Created field
    json_data['Date Created'] = re.search(r'Date Created:(.*?)$', header_data_list[2]).group(1).strip()
    # fourth line in text contains Number of Pages
    json_data['Number of Pages'] = re.search(r'Number of Pages:(.*?)$', header_data_list[3]).group(1).strip()
    # fifth line in text contains Client Code and ClientName
    json_data['Client Code'] = re.search(r'Client Code - (.*?)Client Name', header_data_list[4]).group(1).strip()
    json_data['ClientName'] = re.search(r'Client Name - (.*?)$', header_data_list[4]).group(1).strip()

def iterate_through_regex_and_populate_dictionaries(data_dict, regex_dict, text):
    ''' For the given regex_dict, this function iterates through each regex pattern and adds the matched key/value to the data_dict dictionary '''
    for key, regex in regex_dict.items():
        matched_value = re.search(regex, text)
        if matched_value is not None:
            data_dict[key] = matched_value.group(1).strip()

def populate_date_notes(data_dict, text):
    ''' This function populates Date and Notes from the data chunk, as lists, into the data_dict dictionary '''
    data_dict['Date'] = []
    data_dict['Notes'] = []
    iter = 4
    while(iter < len(text)):
        date_match = re.search(r'(\d{2}/\d{2}/\d{4})', text[iter])
        data_dict['Date'].append(date_match.group(1).strip())
        notes_match = re.search(r'\d{2}/\d{2}/\d{4}\s*(.*?)$', text[iter])
        data_dict['Notes'].append(notes_match.group(1).strip())
        iter += 1

data_index = 1
json_data = {}
pdf_path = r'C:\Users\hpoddar\Desktop\Temp\sample3.pdf' # ENTER YOUR PDF PATH HERE
pdf_text = ''
data_entity_sep_pattern = r'(?=Our Ref.*?Name.*?Ref 1.*?Ref 2)'

if __name__ == '__main__':
    with pdfplumber.open(pdf_path) as pdf:
        index = 0
        while(index < len(pdf.pages)):
            page = pdf.pages[index]
            pdf_text += '\n' + page.extract_text()
            index += 1
    split_on_data_entity = re.split(data_entity_sep_pattern, pdf_text.strip())
    # first element of the split_on_data_entity list will contain the header information
    get_header_data(split_on_data_entity[0], json_data)
    while(data_index < len(split_on_data_entity)):
        data_entity = {}
        data_processed = preprocess_data(split_on_data_entity[data_index])
        iterate_through_regex_and_populate_dictionaries(data_entity, my_regex_dict_line1, data_processed[0])
        iterate_through_regex_and_populate_dictionaries(data_entity, my_regex_dict_line2, data_processed[1])
        iterate_through_regex_and_populate_dictionaries(data_entity, my_regex_dict_line3, data_processed[2])
        if(len(data_processed) > 3 and data_processed[3] != None and 'Date' in data_processed[3] and 'Notes' in data_processed[3]):
            populate_date_notes(data_entity, data_processed)
        json_data['data_entity' + str(data_index)] = data_entity
        data_index += 1
    print(json_data)
Output:
{'Date Created': '18/04/2022', 'Number of Pages': '4', 'Client Code': '110203', 'ClientName': 'AWS PTE. LTD.', 'data_entity1': {'Our Ref': '2118881115', 'Name': 'Sky Blue', 'Ref 1': '12-34-56789-2021/2', 'Ref 2': 'F2021004444', 'Amount': '$100.11', 'Total Paid': '$0.00', 'Balance': '$100.11', 'Date of A/C': '01/08/2021', 'Date Received': '10/12/2021', 'Last Paid': '', 'Amt Last Paid': '', 'A/C Status': 'CLOSED', 'Collector': 'Sunny Jane', 'Date': ['04/03/2022'], 'Notes': ['Letter Dated 04 Mar 2022.']}, 'data_entity2': {'Our Ref': '2112221119', 'Name': 'Green Field', 'Ref 1': '98-76-54321-2021/1', 'Ref 2': 'F2021001111', 'Amount': '$233.88', 'Total Paid': '$0.00', 'Balance': '$233.88', 'Date of A/C': '01/08/2021', 'Date Received': '10/12/2021', 'Last Paid': '', 'Amt Last Paid': '', 'A/C Status': 'CURRENT', 'Collector': 'Sam Jason', 'Date': ['11/03/2022', '11/03/2022', '08/03/2022', '08/03/2022', '21/02/2022', '18/02/2022', '18/02/2022'], 'Notes': ['Email for payment', 'Case Status', 'to send a Letter', '845***Ringing, No reply', 'Letter printed - LET: LETTER 2', 'Letter sent - LET: LETTER 2', '845***Line busy']}}
Now once you got the data in the json format, you can load it in a csv file, as a data frame or whatever format you need the data to be in.
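For example, a minimal sketch of the CSV route with pandas, assuming json_data has the shape printed above (repeating the header fields on every entity row is my flattening choice, to get a single table):
import pandas as pd

# split the header fields from the per-entity dictionaries
header = {k: v for k, v in json_data.items() if not k.startswith('data_entity')}
rows = [dict(header, **entity) for key, entity in json_data.items() if key.startswith('data_entity')]
pd.DataFrame(rows).to_csv('sample.csv', index=False)  # one row per data entity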
Save as xlsx
To save the same data in an xlsx file in the format shown in the image in the question above, we can use xlsxwriter.
Please install the package using pip:
pip install xlsxwriter
From the previous code, we have our entire data in the variable json_data; we will iterate through all the data entities and write each value to the appropriate cell, specified by row, col in the code.
import xlsxwriter

workbook = xlsxwriter.Workbook('Sample.xlsx')
worksheet = workbook.add_worksheet("Sheet 1")
row = 0
col = 0

# write columns
columns = ['Account History Report', 'All Notes'] + [ key for key in json_data.keys() if 'data_entity' not in key ] + list(json_data['data_entity1'].keys())
worksheet.write_row(row, col, tuple(columns))
row += 1
column_index_map = {}
for index, col_name in enumerate(columns):
    column_index_map[col_name] = index

# write the header
worksheet.write(row, column_index_map['Date Created'], json_data['Date Created'])
worksheet.write(row, column_index_map['Number of Pages'], json_data['Number of Pages'])
worksheet.write(row, column_index_map['Client Code'], json_data['Client Code'])
worksheet.write(row, column_index_map['ClientName'], json_data['ClientName'])

data_entity_index = 1
# iterate through each data entity and for each key insert the values in the sheet
while True:
    data_entity_key = 'data_entity' + str(data_entity_index)
    row_size = 1
    if(json_data.get(data_entity_key) != None):
        for key, value in json_data.get(data_entity_key).items():
            if(type(value) == list):
                worksheet.write_column(row, column_index_map[key], tuple(value))
                row_size = len(value)
            else:
                worksheet.write(row, column_index_map[key], value)
    else:
        break
    data_entity_index += 1
    row += row_size

workbook.close()
Result:
The above code creates a file Sample.xlsx in the working directory.

XML Parsing Python ElementTree - Nested for loops

I'm using Jupyter Notebook and ElementTree (Python 3) to create a dataframe from an XML file and save it as CSV. Here is the XML format (in Estonian):
<asutused hetk="2020-04-14T03:53:33" ver="2">
<asutus>
<registrikood>10000515</registrikood>
<nimi>Osaühing B.Braun Medical</nimi>
<aadress />
<tegevusload>
<tegevusluba>
<tegevusloa_number>L04647</tegevusloa_number>
<alates>2019-12-10</alates>
<kuni />
<loaliik_kood>1</loaliik_kood>
<loaliik_nimi>Eriarstiabi</loaliik_nimi>
<haiglaliik_kood />
<haiglaliik_nimi />
<tegevuskohad>
<tegevuskoht>
<aadress>Harju maakond, Tallinn, Mustamäe linnaosa, J. Sütiste tee 17/1</aadress>
<teenused>
<teenus>
<kood>T0038</kood>
<nimi>ambulatoorsed üldkirurgiateenused</nimi>
</teenus>
<teenus>
<kood>T0236</kood>
<nimi>õe vastuvõtuteenus</nimi>
</teenus>
</teenused>
</tegevuskoht>
<tegevuskoht>
<aadress>Harju maakond, Tallinn, Mustamäe linnaosa, J. Sütiste tee 17/1</aadress>
<teenused>
<teenus>
<kood>T0038</kood>
<nimi>ambulatoorsed üldkirurgiateenused</nimi>
</teenus>
<teenus>
<kood>T0236</kood>
<nimi>õe vastuvõtuteenus</nimi>
</teenus>
</teenused>
</tegevuskoht>
</tegevuskohad>
</tegevusluba>
<tegevusluba>
<tegevusloa_number>L04651</tegevusloa_number>
<alates>2019-12-11</alates>
<kuni />
<loaliik_kood>2</loaliik_kood>
<loaliik_nimi>Õendusabi</loaliik_nimi>
<haiglaliik_kood />
<haiglaliik_nimi />
<tegevuskohad>
<tegevuskoht>
<aadress>Harju maakond, Tallinn, Mustamäe linnaosa, J. Sütiste tee 17/1</aadress>
<teenused>
<teenus>
<kood>T0038</kood>
<nimi>ambulatoorsed üldkirurgiateenused</nimi>
</teenus>
<teenus>
<kood>T0236</kood>
<nimi>õe vastuvõtuteenus</nimi>
</teenus>
</teenused>
</tegevuskoht>
<tegevuskoht>
<aadress>Harju maakond, Tallinn, Mustamäe linnaosa, J. Sütiste tee 17/1</aadress>
<teenused>
<teenus>
<kood>T0038</kood>
<nimi>ambulatoorsed üldkirurgiateenused</nimi>
</teenus>
<teenus>
<kood>T0236</kood>
<nimi>õe vastuvõtuteenus</nimi>
</teenus>
</teenused>
</tegevuskoht>
</tegevuskohad>
</tegevusluba>
</tegevusload>
<tootajad>
<tootaja>
<kood>D03091</kood>
<eesnimi>Evo</eesnimi>
<perenimi>Kaha</perenimi>
<kutse_kood>11</kutse_kood>
<kutse_nimi>Arst</kutse_nimi>
<erialad>
<eriala>
<kood>E420</kood>
<nimi>üldkirurgia</nimi>
</eriala>
</erialad>
</tootaja>
<tootaja>
<kood>N01146</kood>
<eesnimi>Karmen</eesnimi>
<perenimi>Mežulis</perenimi>
<kutse_kood>15</kutse_kood>
<kutse_nimi>Õde</kutse_nimi>
</tootaja>
<tootaja>
<kood>N01153</kood>
<eesnimi>Nele</eesnimi>
<perenimi>Terras</perenimi>
<kutse_kood>15</kutse_kood>
<kutse_nimi>Õde</kutse_nimi>
</tootaja>
<tootaja>
<kood>N02767</kood>
<eesnimi>Helena</eesnimi>
<perenimi>Tern</perenimi>
<kutse_kood>15</kutse_kood>
<kutse_nimi>Õde</kutse_nimi>
</tootaja>
<tootaja>
<kood>N12882</kood>
<eesnimi>Hanna</eesnimi>
<perenimi>Leemet</perenimi>
<kutse_kood>15</kutse_kood>
<kutse_nimi>Õde</kutse_nimi>
</tootaja>
</tootajad>
</asutus>
</asutused>
Each "asutus" is a hospital and I need some of the information inside. Here is my code:
import xml.etree.ElementTree as ET
import csv
import pandas as pd

tree = ET.parse("od_asutused.xml")
root = tree.getroot()

# open a file for writing
data = open('EE.csv', 'w')

# create the csv writer object
csvwriter = csv.writer(data, delimiter=';')
head = []
count = 0
for member in root.findall('asutus'):
    hospital = []
    if count == 0:
        ident = member.find('registrikood').tag
        head.append(id)
        name = member.find('nimi').tag
        head.append(name)
        address = member.find('aadress').tag
        head.append(address)
        facility_type = member.find('./tegevusload/tegevusluba/haiglaliik_nimi').tag
        head.append(facility_type)
        site_address = member.find('./tegevusload/tegevusluba/tegevuskohad/tegevuskoht/aadress').tag
        head.append(site_address)
        for elem in member.findall('tegevusload'):
            list_specs = elem.find('./tegevusluba/tegevuskohad/tegevuskoht/teenused/teenus/nimi').tag
            head.append(list_specs)
        csvwriter.writerow(head)
        count = count + 1
    ident = member.find('registrikood').text
    hospital.append(ident)
    name = member.find('nimi').text
    hospital.append(name)
    address = member.find('aadress').text
    hospital.append(address)
    facility_type = member.find('./tegevusload/tegevusluba/haiglaliik_nimi').text
    hospital.append(facility_type)
    site_address = member.find('./tegevusload/tegevusluba/tegevuskohad/tegevuskoht/aadress').text
    hospital.append(site_address)
    for spec in elem.findall('tegevusload'):
        list_specs = spec.find('./tegevusluba/tegevuskohad/tegevuskoht/teenused/teenus/nimi').text
        hospital.append(list_specs)
    csvwriter.writerow(hospital)
data.close()

#Upload csv for geocoding
df = pd.read_csv(r'EE.csv', na_filter= False, delimiter=';')

#Rename columns
df.rename(columns = {'<built-in function id>':'id',
                     'nimi':'name',
                     'aadress':'address',
                     'haiglaliik_nimi':'facility_type',
                     'haiglaliik_kood':'facility_type_c',
                     'aadress.1':'site_address',
                     'nimi.1':'list_specs'},
          inplace = True)

#Add columns
df['country'] = 'Estonia'
df['cc'] = 'EE'
df.head(10)
And the result of df.head(10):
[image: result of the dataframe]
The "list_specs" is blank no matter what I do. How can I populate this field with a list of each 'nimi' for each site address? Thank you.
I found the following points to change in your code:
At least on my computer, calling csv.writer causes newline characters to be doubled. The remedy I found is to open the output file with additional parameters:
data = open('EE.csv', 'w', newline='\n', encoding='utf-8')
There is no sense in writing head with Estonian column names and then renaming the columns. Note also that in head.append(id) you use an undeclared variable (id).
But this is not so important, as I changed this whole section to write the target column names directly (see below).
As you write the CSV file to be read by read_csv, it should contain a fixed number of columns. So it is bad practice to use a loop to write one element.
Your instruction list_specs = spec.find(...) inside elem.findall(...) was wrong, because elem is not set in the current loop. Instead you should use member (but I solved this detail another way).
There is no sense in creating a variable only in order to use it once. More concise and readable code is e.g. hospital.append(member.findtext('nimi')).
To avoid long XPath expressions with a repeated initial part, I decided to set a temporary variable "in the middle" of this path, e.g. tgvLb = member.find('tegevusload/tegevusluba'), and then use a relative XPath starting from this node.
Your rename instruction contains one column that is not needed, namely facility_type_c. You read only 6 columns, not 7.
So change the middle part of your code to:
data = open('EE.csv', 'w', newline='\n', encoding='utf-8')
csvwriter = csv.writer(data, delimiter=';')
head = ['id', 'name', 'address', 'facility_type', 'site_address', 'list_specs']
csvwriter.writerow(head)
for member in root.findall('asutus'):
    hospital = []
    hospital.append(member.findtext('registrikood'))
    hospital.append(member.findtext('nimi'))
    hospital.append(member.findtext('aadress'))
    tgvLb = member.find('tegevusload/tegevusluba')
    hospital.append(tgvLb.findtext('haiglaliik_nimi'))
    tgvKoht = tgvLb.find('tegevuskohad/tegevuskoht')
    hospital.append(tgvKoht.findtext('aadress'))
    hospital.append(tgvKoht.findtext('teenused/teenus/nimi'))
    csvwriter.writerow(hospital)
data.close()
df = pd.read_csv(r'EE.csv', na_filter= False, delimiter=';')
df = pd.read_csv(r'EE.csv', na_filter= False, delimiter=';')
and drop df.rename from your code.
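If you later want list_specs to hold every nimi under a site address rather than only the first one, here is a small sketch of that variant (the ', ' joiner is my choice, so the ';' CSV delimiter stays unambiguous):
# inside the loop, instead of hospital.append(tgvKoht.findtext('teenused/teenus/nimi')):
specs = [el.text for el in tgvKoht.findall('teenused/teenus/nimi')]
hospital.append(', '.join(specs))  # e.g. "ambulatoorsed üldkirurgiateenused, õe vastuvõtuteenus"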

Sorting API response with Python to Excel or CSV

I'm trying to sort the UK Police free API response into a readable format, CSV or Excel.
I'm using the Requests library. My initial code gets the response in JSON format:
import requests

r = requests.get('https://data.police.uk/api/crimes-street/all-crime?poly=51.169,-0.633:51.186,-0.5436:51.226,-0.6224&date=2019-12')
r_json = r.json()
for i in r_json:
    for key, value in i.items():
        print(key, ":", value)
The code above produces as follows:
category : anti-social-behaviour location_type : Force location : {'latitude': '51.196818', 'street': {'id': 1147343, 'name': 'On or near Parking Area'}, 'longitude': '-0.605146'} context : outcome_status : None persistent_id : id : 79955592 location_subtype : month : 2019-12
How can I create a table with correct headers for the response I get? Headers would be 'category', 'latitude', 'street', 'name', 'longitude', ' month'.
You need to dig deeper into the dictionary tree to get some data, like latitude. Results are collected into a list of lists, then loaded into a data frame and saved as a CSV file.
import requests
import pandas as pd

r = requests.get('https://data.police.uk/api/crimes-street/all-crime?poly=51.169,-0.633:51.186,-0.5436:51.226,-0.6224&date=2019-12')
r_json = r.json()

# collect data into list of lists
collected_data = []
for data in r_json:
    category = data.get('category')
    month = data.get('month')
    latitude = ''
    longitude = ''
    street = ''
    for key, value in data.items():
        if key == 'location':
            latitude = value.get('latitude')
            longitude = value.get('longitude')
            street = value.get('street').get('name')
    collected_data.append([category, latitude, longitude, street, month])

# load data into data frame
df = pd.DataFrame(collected_data, columns = ['Category', 'Latitude', 'Longitude', 'Street', 'Month'])

# save data frame into csv
df.to_csv('data.csv')
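As an aside, pandas can flatten this kind of nested JSON directly with json_normalize; a sketch assuming the same r_json as above (records missing a location would need extra handling):
import pandas as pd

df = pd.json_normalize(r_json)  # nested keys become dotted column names
df = df[['category', 'location.latitude', 'location.longitude', 'location.street.name', 'month']]
df.columns = ['Category', 'Latitude', 'Longitude', 'Street', 'Month']
df.to_csv('data.csv', index=False)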

unable to extract and read a particular part of link from a file

So basically I was trying to scrape a Reddit page about Game of Thrones. This is the link: https://www.reddit.com/r/gameofthrones/wiki/episode_discussion, which contains many other links! What I was trying to do was scrape all the links into a file, which is done! Now I have to individually scrape every link and print out the data in individual files, either CSV or JSON.
I've tried all possible methods from Google but am still unable to come to a solution! Any help would be appreciated.
import praw
import json
import pandas as pd  # Pandas for scraping and saving it as a csv

# This is PRAW.
reddit = praw.Reddit(client_id='',
                     client_secret='',
                     user_agent='android:com.example.myredditapp:v1.2.3 (by /u/AshKay12)',
                     username='******',
                     password='******')
subreddit = reddit.subreddit("gameofthrones")
Comments = []
submission = reddit.submission("links")
with open('got_reddit_links.json') as json_file:
    data = json.load(json_file)
    for p in data:
        print('season: ' + str(p['season']))
        print('episode: ' + str(p['episode']))
        print('title: ' + str(p['title']))
        print('links: ' + str(p['links']))
        print('')
submission.comments.replace_more(limit=None)
for comment in submission.comments.list():
    print(20*'#')
    print('Parent ID:', comment.parent())
    print('Comment ID:', comment.id)
    print(comment.body)
    Comments.append([comment.body, comment.id])
Comments = pd.DataFrame(Comments, columns=['All_Comments', 'Comment ID'])
Comments.to_csv('Reddit3.csv')
This code prints out the links, title and episode number. It also extracts data when a link is manually entered, but there are over 50 links on the website, so I extracted those and put them in a file.
You can find all episode blocks with the links, and then write a function to scrape the comments for each episode discovered by each link:
from selenium import webdriver
from bs4 import BeautifulSoup as soup  # the code below refers to BeautifulSoup as soup
import requests, itertools, re

d = webdriver.Chrome('/path/to/chromedriver')
d.get('https://www.reddit.com/r/gameofthrones/wiki/episode_discussion')
new_d = soup(d.page_source, 'html.parser').find('div', {'class':'md wiki'}).find_all(re.compile('h2|h4|table'))
g = [(a, list(b)) for a, b in itertools.groupby(new_d, key=lambda x:x.name == 'h2')]
r = {g[i][-1][0].text:{g[i+1][-1][k].text:g[i+1][-1][k+1] for k in range(0, len(g[i+1][-1]), 2)} for i in range(0, len(g), 2)}
final_r = {a:{b:[j['href'] for j in c.find_all('a', {'href':re.compile('redd\.it')})] for b, c in k.items()} for a, k in r.items()}
Now, you have a dictionary with all the links structured according to Season and episode:
{'Season 1 Threads': {'1.01 Winter Is Coming': ['https://redd.it/gsd0t'], '1.02 The Kingsroad': ['https://redd.it/gwlcx'], '1.03 Lord Snow': ['https://redd.it/h1otp/'], '1.04 Cripples, Bastards, & Broken Things': ['https://redd.it/h70vv'].....
To get the comments, you have to use selenium as well, to be able to click on the button that displays the entire comment structure:
import time

d = webdriver.Chrome('/path/to/chromedriver')

def scrape_comments(url):
    d.get(url)
    _b = [i for i in d.find_elements_by_tag_name('button') if 'VIEW ENTIRE DISCUSSION' in i.text][0]
    _b.send_keys('\n')
    time.sleep(1)
    p_obj = soup(d.page_source, 'html.parser').find('div', {'class':'_1YCqQVO-9r-Up6QPB9H6_4 _1YCqQVO-9r-Up6QPB9H6_4'}).contents
    p_obj = [i for i in p_obj if i != '\n']
    c = [{'poster': '[deleted]' if i.a is None else i.a['href'],
          'handle': getattr(i.find('div', {'class':'_2X6EB3ZhEeXCh1eIVA64XM _2hSecp_zkPm_s5ddV2htoj _zMIUk6t-WDI7fxfkvD02'}), 'text', 'N/A'),
          'points': getattr(i.find('span', {'class':'_2ETuFsVzMBxiHia6HfJCTQ _3_GZIIN1xcMEC5AVuv4kfa'}), 'text', 'N/A'),
          'time': getattr(i.find('a', {'class':'_1sA-1jNHouHDpgCp1fCQ_F'}), 'text', 'N/A'),
          'comment': getattr(i.p, 'text', 'N/A')} for i in p_obj]
    return c
Sample output when running scrape_comments on one of the urls:
[{'poster': '/user/BWPhoenix/', 'handle': 'N/A', 'points': 'Score hidden', 'time': '2 years ago', 'comment': 'Week one, so a couple of quick questions:'}, {'poster': '/user/No0neAtAll/', 'handle': 'N/A', 'points': '957 points', 'time': '2 years ago', 'comment': "Davos fans showing their love Dude doesn't say a word the entire episode and gives only 3 glances but still get's 548 votes."}, {'poster': '/user/MairmanChao/', 'handle': 'N/A', 'points': '421 points', 'time': '2 years ago', 'comment': 'Davos always gets votes for being the most honorable man in Westeros'}, {'poster': '/user/BourbonSlut/', 'handle': 'N/A', 'points': '47 points', 'time': '2 years ago', 'comment': 'I was hoping for some Tyrion dialogue too..'}.....
Now, putting it all together:
final_result = {a:{b:[scrape_comments(i) for i in c] for b, c in k.items()} for a, k in final_r.items()}
From here, you can now create a pd.DataFrame from final_result or write the results to the file.
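For instance, a minimal sketch of the file option, dumping everything into a single JSON file (one file per episode would just move the json.dump inside the loops):
import json

with open('got_episode_comments.json', 'w') as f:
    json.dump(final_result, f, indent=2)  # season -> episode -> list of comment dicts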

insert a list 2xn obtained from a json in python

Hi, I'm trying to read a JSON file and save it in a list in order to perform a sort of append and create a PDF in ReportLab. I have the following code, but I have several problems; the first is that I would like a 2xN list, so that it always has two columns while the rows stay dynamic according to the JSON.
If anyone can help me I'd be very grateful.
import json
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph

json_data = []
attributesName = []
testTable = { "attributes": [] }
attributesValue = []
path = "prueba2.pdf"
doc = SimpleDocTemplate(path, pagesize=letter)
styleSheet = getSampleStyleSheet()
text = []
with open("prueba.json") as json_file:
    document = json.load(json_file)
    for item in document:
        for data_item in item['data']:
            attributesName.append([str(data_item['name'])])
            attributesValue.append([data_item['value']])
            testTable[attributesName].extend({data_item['name'], data_item['value']})
print attributesName[0]
print testTable[0]
parts = []
p = Paragraph('''<para align=left fontsize=9>{0}</para>'''.format(text), styleSheet["BodyText"])
parts.append(p)
doc.build(parts)
I implemented the following, but it prints the list:
[[['RFC', 'NOMBRE', 'APELLIDO PATERNO', 'APELLIDO MATERNO', 'FECHA NACIMIENTO', 'CALLE', 'No. EXTERIOR', 'No. INTERIOR', 'C.P.', 'ENTIDAD', 'MUNICIPIO', 'COLONIA', 'DOCUMENTO']], [['MORR910304FL2', 'RJOSE', 'MONTIEL', 'ROBLES', '1992-02-04', 'AMOR', '4', '2', '55064', 'EDO DE MEX', 'ECATEPEC', 'INDUSTRIAL', 'Documento']]]
I want something like this:
[['RFC'], ['22232446']]
[['NOMBRE'], ['22239952']]
[['APELLIDO'], ['22245430']]
If you change your code to the following:
tabla = []  # assumed initialization; tabla is not defined elsewhere in the snippet
with open("prueba.json") as json_file:
    document = json.load(json_file)
    for item in document:
        for data_item in item['data']:
            attributesName.append(str(data_item["name"]))
            attributesValue.append(str(data_item["value"]))
            tabla.append([[attributesName], [attributesValue]])
    print attributesName
    print attributesValue
for Y in tabla:
    print(Y)
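To then get the 2xN structure into the PDF, ReportLab's Table flowable takes exactly a list of rows; a sketch assuming attributesName and attributesValue are flat lists of equal length, as built above:
from reportlab.lib.pagesizes import letter
from reportlab.platypus import SimpleDocTemplate, Table

rows = [[n, v] for n, v in zip(attributesName, attributesValue)]  # N rows x 2 columns
doc = SimpleDocTemplate("prueba2.pdf", pagesize=letter)
doc.build([Table(rows)])  # one table cell per name/value pair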
