I am trying to populate a txt file with the response I get from a mechanize form. Here's the form code:
import mechanize
from bs4 import BeautifulSoup

br = mechanize.Browser()
br.open('https://www.cpsbc.ca/physician_search')
first = raw_input('Enter first name: ')
last = raw_input('Enter last name: ')
br.select_form(nr=0)
br.form['filter[first_name]'] = first
br.form['filter[last_name]'] = last
response = br.submit()
content = response.read()
soup = BeautifulSoup(content, "html.parser")
for row in soup.find_all('tbody'):
    print row
This spits out lines of HTML, the number depending on how many privileges the doc has across locations, but the last line has their specialty of training. Please go ahead and test it with any physician from BC, Canada.
I have a txt file that is listed as such:
lastname1, firstname1
lastname2, firstname2
lastname3, firstname3 middlename3
lastname4, firstname4 middlename4
I hope you get the idea. I would appreciate any help in automating the following: go through the txt of names one by one and record the output text into a new txt file.
So far, I have this working to spit out the row (which is raw HTML), which I don't mind, but I can't get it to write into a txt file...
import mechanize
from bs4 import BeautifulSoup

with open('/Users/s/Downloads/hope.txt', 'w') as file_out:
    with open('/Users/s/Downloads/names.txt', 'r') as file_in:
        for line in file_in:
            a = line
            delim = ", "
            i1 = a.find(delim)
            br = mechanize.Browser()
            br.open('https://www.cpsbc.ca/physician_search')
            br.select_form(nr=0)
            br.form['filter[first_name]'] = a[i1+2:]
            br.form['filter[last_name]'] = a[:i1]
            response = br.submit()
            content = response.read()
            soup = BeautifulSoup(content, "html.parser")
            for row in soup.find_all('tbody'):
                print row
This should not be too complicated. Assuming the file with all the names you want to query is called "names.txt" and the output file you want to create is called "output.txt", the code should look something like:
with open('output.txt', 'w') as file_out:
    with open('names.txt', 'r') as file_in:
        for line in file_in:
            <your parsing logic goes here>
            file_out.write(new_record)
This assumes your parsing logic generates some sort of "record" to be written to the file as a string.
If you get more advanced, you can also look into the csv module to import/export data in CSV.
Also have a look at the Input and Output tutorial.
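Putting the skeleton and the question's mechanize code together, a minimal sketch of the whole loop (assuming names.txt holds "lastname, firstname" lines as described; get_text() is used here instead of the bare print so each result can be written as plain text):

import mechanize
from bs4 import BeautifulSoup

with open('output.txt', 'w') as file_out:
    with open('names.txt', 'r') as file_in:
        for line in file_in:
            # "lastname, firstname [middlename]" -> two fields; strip()
            # drops the trailing newline that slicing the raw line
            # would otherwise leave inside the first name
            last, _, first = line.strip().partition(', ')
            br = mechanize.Browser()
            br.open('https://www.cpsbc.ca/physician_search')
            br.select_form(nr=0)
            br.form['filter[first_name]'] = first
            br.form['filter[last_name]'] = last
            response = br.submit()
            soup = BeautifulSoup(response.read(), 'html.parser')
            for row in soup.find_all('tbody'):
                # get_text() flattens the row to plain text; use
                # str(row) instead if you want to keep the raw HTML
                file_out.write('%s, %s: %s\n'
                               % (last, first, row.get_text(' ', strip=True)))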
Related
Lovely people! I'm totally new to Python. I tried to scrape several URLs and encountered a problem with "print".
I tried to print and write the "shipment status".
I have two URLs, so ideally I get two results.
This is my code:
from bs4 import BeautifulSoup
import re
import urllib.request
import urllib.error
import urllib

# read urls of websites from text file
list_open = open("c:/Users/***/Downloads/web list.txt")
read_list = list_open.read()
line_in_list = read_list.split("\n")

for url in line_in_list:
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html')
    # parse something special in the file
    shipment = soup.find_all('span')
    Preparation = shipment[0]
    Sent = shipment[1]
    InTransit = shipment[2]
    Delivered = shipment[3]
    for p in shipment:
        # extract information
        print(url, ';', "Preparation", Preparation.getText(), ";", "Sent", Sent.getText(), ";", "InTransit", InTransit.getText(), ";", "Delivered", Delivered.getText())

import sys
file_path = 'randomfile.txt'
sys.stdout = open(file_path, "w")
print(url, ';', "Preparation", Preparation.getText(), ";", "Sent", Sent.getText(), ";", "InTransit", InTransit.getText(), ";", "Delivered", Delivered.getText())
I have two problems here:
Problem one: I have only two URLs, but when I print the results, each result is repeated 4 times (as there are four "span"s).
The result in the "output" is as below:
(I deleted the result example to protect privacy.)
Problem two: I tried to write the "print" to a text file, but only one line appeared in the file:
(I deleted the result example to protect privacy.)
I want to know what is wrong with the code; I want to print only the 2 URL results.
Your help is really appreciated!
Thank you in advance!
The first point is caused by iterating over shipment. Just delete the inner for loop and fix the indentation of print():
for url in line_in_list:
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html')
    # parse something special in the file
    shipment = soup.find_all('span')
    Preparation = shipment[0]
    Sent = shipment[1]
    InTransit = shipment[2]
    Delivered = shipment[3]
    print(url, ';', "Preparation", Preparation.getText(), ";", "Sent", Sent.getText(), ";", "InTransit", InTransit.getText(), ";", "Delivered", Delivered.getText())
The second issue is caused by doing the writing outside the loop and not in append mode. You will end up with this as your loop:
# open file in append mode
with open('somefile.txt', 'a') as f:
    # start iterating your urls
    for url in line_in_list:
        soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html')
        # parse something special in the file
        shipment = soup.find_all('span')
        Preparation = shipment[0]
        Sent = shipment[1]
        InTransit = shipment[2]
        Delivered = shipment[3]
        # create output text
        line = f'{url};Preparation{Preparation.getText()};Sent{Sent.getText()};InTransit{InTransit.getText()};Delivered{Delivered.getText()}'
        # print output text
        print(line)
        # append output text to file
        f.write(line + '\n')
And you can delete:
import sys
file_path = 'randomfile.txt'
sys.stdout = open(file_path, "w")
print(url, ';', "Preparation", Preparation.getText(), ";", "Sent", Sent.getText(), ";", "InTransit", InTransit.getText(), ";", "Delivered", Delivered.getText())
Example of a slightly optimized version:
from bs4 import BeautifulSoup
import urllib.request
import urllib.error
import urllib

# read urls of websites from text file
list_open = open("c:/Users/***/Downloads/web list.txt")
read_list = list_open.read()
line_in_list = read_list.split("\n")

file_path = "randomfile.txt"
with open('somefile.txt', 'a', encoding='utf-8') as f:
    for url in line_in_list:
        soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html')
        # parse something special in the file
        shipment = list(soup.select_one('#progress').stripped_strings)
        line = f"{url},{';'.join([':'.join(x) for x in list(zip(shipment[::2], shipment[1::2]))])}"
        print(line)
        f.write(line + '\n')
list_open = open("c:/Users/***/Downloads/web list.txt")
read_list = list_open.read()
line_in_list = read_list.split("\n")
file_path = 'randomfile.txt'
sys.stdout = open(file_path, "w")
There are four spans actually; try this:
for url in line_in_list:
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html')
    # parse something special in the file
    shipments = soup.find_all("span")  # there are four spans actually
    sys.stdout.write('Url ' + url + '; Preparation' + shipments[0].getText() + '; Sent' + shipments[1].getText() + '; InTransit' + shipments[2].getText() + '; Delivered' + shipments[3].getText())
    # change line
    sys.stdout.write("\n")
First question
You have two nested loops:
for url in line_in_list:
    for p in shipment:
        print(...)
The print is nested in the second loop. If you have 4 shipments per url, that will lead to 4 prints per url.
Since you don't use p from for p in shipment, you can get rid of the second loop entirely and move the print one indentation level left, like this:
for url in line_in_list:
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), 'html')
    # parse something special in the file
    shipment = soup.find_all('span')
    Preparation = shipment[0]
    Sent = shipment[1]
    InTransit = shipment[2]
    Delivered = shipment[3]
    print(url, ';', "Preparation", Preparation.getText(), ";", "Sent", Sent.getText(), ";", "InTransit", InTransit.getText(), ";", "Delivered", Delivered.getText())
Second question
sys.stdout = open(file_path, "w")
print(url, ';', "Preparation", Preparation.getText(), ";", "Sent", Sent.getText(), ";", "InTransit", InTransit.getText(), ";", "Delivered", Delivered.getText())
Without a keyword argument, print writes to sys.stdout, which is by default your terminal output. There's only one print after sys.stdout = ..., so there will only be one line written to the file.
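To illustrate, a minimal sketch of what that redirection does and how to undo it (using the question's randomfile.txt):

import sys

orig_stdout = sys.stdout
sys.stdout = open('randomfile.txt', 'w')
print('this line goes to the file')  # every print now writes to randomfile.txt
sys.stdout.close()
sys.stdout = orig_stdout             # restore printing to the terminal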
There's another way to print to a file:
with open('demo.txt', 'a') as f:
    print('Hello world', file=f)
The with keyword ensures the file is closed even if an exception is raised.
Both combined
From what I understood, you want to print two lines to the file. Here's a solution:
from bs4 import BeautifulSoup
import urllib.request
import urllib.error
import urllib

# read urls of websites from text file
list_open = open("c:/Users/***/Downloads/web list.txt")
read_list = list_open.read()
line_in_list = read_list.split("\n")

file_path = "randomfile.txt"
for url in line_in_list:
    soup = BeautifulSoup(urllib.request.urlopen(url).read(), "html")
    # parse something special in the file
    shipment = soup.find_all("span")
    Preparation = shipment[0]
    Sent = shipment[1]
    InTransit = shipment[2]
    Delivered = shipment[3]
    with open(file_path, "a") as f:
        # trailing '\n' so each url ends up on its own line
        f.write(
            f"{url} ; Preparation {Preparation.getText()}; Sent {Sent.getText()}; InTransit {InTransit.getText()}; Delivered {Delivered.getText()}\n"
        )
So I am trying to do a POST request to a website, and this website will display a CSV. However, the CSV is not downloadable; it is only displayed on the page, so it can be copied and pasted.
I am trying to get the HTML from the POST request, extract the CSV, and export it into a CSV file to then run a function on. I have managed to get it into CSV form as a string, but there don't appear to be any new lines, i.e.
>>> print(text1)
"Heading 1","Heading 2""Item 1","Item 2"
not
"Heading 1","Heading 2"
"Item 1","Item 2"
Is this format OK?
If not how do I get it into an OK format?
Secondly, how can I write this string into a CSV file? If I try to convert text1 into bytes, I get _csv.Error: iterable expected, not int; if I don't, I get TypeError: a bytes-like object is required, not 'str'.
My code so far:
with requests.Session() as s:
response = s.post(headers=headers, data=data, url=url)
html = response.content
soup = BeautifulSoup(html, features="html.parser")
# kill all script and style elements
for script in soup(["script", "style"]):
script.extract() # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)
text1 = text.replace(text[:56], '')
print(text1)
I think this will work for you. It finds the element containing the CSV data (could be a body, could be a div, could be a p, etc.) and only extracts text from there, so you don't need to worry about scripts or classes getting into your data:
import csv
from bs4 import BeautifulSoup

# emulate your html format
html_string = '''
<body>
<div class="csv">"Category","Position","Name","Time","Team","avg_power","20min","Male?"<br>"A","1","James ","00:21:31.45","5743","331","5.3","1"<br>"A","2","Da","00:21:31.51","4435","377","5.0","1"<br>"A","3","Timmy ","00:21:31.52","3964","371","4.8","1"<br>"A","4","Timothy ","00:21:31.83","5229","401","5.7","1"<br>"A","5","Stefan ","00:21:31.86","2991","338","","1"<br>"A","6","Josh ","00:21:31.92","","403","5.1","1"<br></div>
</body>
'''

soup = BeautifulSoup(html_string, 'html.parser')

# turn the <br> separators into real line breaks
for br in soup.find_all('br'):
    br.replace_with('\n')

rows = [[i.replace('"', '').strip()  # clean the lines
         for i in item.split(',')]   # split each item by the comma
        # get all the lines inside the div
        # this will get the first item matching the filter
        for item in soup.find('div', class_='csv').text.splitlines()]

# csv writing function; newline='' stops csv from adding blank rows on Windows
def write_csv(path, data):
    with open(path, 'w', newline='') as file:
        writer = csv.writer(file)
        writer.writerows(data)

print(rows)
write_csv('./data.csv', rows)
Output (data.csv):
Category,Position,Name,Time,Team,avg_power,20min,Male?
A,1,James,00:21:31.45,5743,331,5.3,1
A,2,Da,00:21:31.51,4435,377,5.0,1
A,3,Timmy,00:21:31.52,3964,371,4.8,1
A,4,Timothy,00:21:31.83,5229,401,5.7,1
A,5,Stefan,00:21:31.86,2991,338,,1
A,6,Josh,00:21:31.92,,403,5.1,1
soup.find()/find_all() can isolate an html element for you to scrape from so you don't have to worry about parsing other elements.
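If you would rather keep the question's approach of building a single CSV-shaped string (like text1), a short sketch of writing it out with the csv module, sidestepping the bytes errors (the sample string here is made up):

import csv
import io

# stand-in for the scraped string; rows separated by newlines
text1 = '"Heading 1","Heading 2"\n"Item 1","Item 2"'

# csv.reader wants an iterable of lines, not bytes, so wrap the
# string in StringIO instead of converting it to bytes
rows = list(csv.reader(io.StringIO(text1)))

with open('data.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)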
I am new to Python and I am trying to turn scraped data into a CSV file, but without success.
Here is the code:
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup
import os
import random
import re
from itertools import cycle

def cleanhtml(raw_html):
    cleanr = re.compile('<.*?>')  # cleaning the strings from these terms
    cleantext = re.sub(cleanr, '', raw_html)
    return cleantext

def scrape(url, filename, number_id):
    """
    This function scrapes a web page looking for text inside its html structure and saves it in a .txt file.
    So it works only for static content; if you need text in a dynamic part of the web page (e.g. a banner)
    look at the other file. Pay attention that the retrieved text must be filtered out
    in order to keep only the part you need.
    url: url to scrape
    filename: name of file where to store text
    number_id: it is appended to the filename, to distinguish different filenames
    """
    # here there is a list of possible user agents
    user_agent = random.choice(user_agent_list)
    req = Request(url, headers={'User-Agent': user_agent})
    page = urlopen(req).read()
    # parse the html using beautiful soup and store in variable 'soup'
    soup = BeautifulSoup(page, "html.parser")
    row = soup.find_all(class_="row")
    for element in row:
        viaggio = element.find_all(class_="nowrap")
        Partenza = viaggio[0]
        Ritorno = viaggio[1]
        Viaggiatori = viaggio[2]
        Costo = viaggio[3]
        Title = element.find(class_="taglist bold")
        Content = element.find("p")
        Destination = Title.text
        Review = Content.text
        Departure = Partenza.text
        Arrival = Ritorno.text
        Travellers = Viaggiatori.text
        Cost = Costo.text
        TuristiPerCasoList = [Destination, Review, Departure, Arrival, Travellers, Cost]
        print(TuristiPerCasoList)
Up to here, everything works. Now I have to turn it into a CSV file.
I tried with this:
import csv

with open('turistipercaso', 'w') as file:
    writer = csv.writer(file)
    writer.writerows(TuristiPerCasoList)
but it doesn't produce anything in the CSV file.
Can someone help me understand how to turn this into a CSV file?
In each iteration, you are reassigning the TuristiPerCasoList value.
What you actually want is a list of lists of strings, where each string is the value for a specific cell, each inner list holds the values of a row, and the outer list holds all the rows.
To achieve this, you should append a list representing a row to the main list:
# instead of
TuristiPerCasoList = [Destination, Review, Departure, Arrival, Travellers, Cost]
# use
TuristiPerCasoList.append([Destination, Review, Departure, Arrival, Travellers, Cost])
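Putting the fix in context, a sketch of the overall shape (the sample tuples are made-up stand-ins for the scraped values; in the real code the append happens inside the for element in row loop, and the write happens once after scrape() has filled the list):

import csv

# hypothetical stand-ins for the values scraped in the loop
scraped = [
    ("Rome", "Great trip", "01/05", "08/05", "2", "500"),
    ("Paris", "Lovely", "10/06", "17/06", "4", "1200"),
]

TuristiPerCasoList = []  # created once, before the loop
for Destination, Review, Departure, Arrival, Travellers, Cost in scraped:
    # append one row per iteration instead of reassigning the whole list
    TuristiPerCasoList.append([Destination, Review, Departure, Arrival, Travellers, Cost])

# write all rows once, after the loop; newline='' avoids blank lines on Windows
with open('turistipercaso.csv', 'w', newline='') as file:
    csv.writer(file).writerows(TuristiPerCasoList)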
This may end up being a really novice question, because I'm a novice, but here goes.
I have a set of .html pages obtained using wget. I want to iterate through them and extract certain info, putting it in a .csv file.
Using the code below, all the names print when my program runs, but only the info from the next-to-last page (i.e., page 29.html here) prints to the .csv file. I'm trying this with only a handful of files at first; there are about 1,200 that I'd like to get into this format.
The files are based on those here: https://www.cfis.state.nm.us/media/ReportLobbyist.aspx?id=25&el=2014 where the page numbers are the id.
Thanks for any help!
from bs4 import BeautifulSoup
import urllib2
import csv

for i in xrange(22, 30):
    try:
        page = urllib2.urlopen('file:{}.html'.format(i))
    except:
        continue
    else:
        soup = BeautifulSoup(page.read())
        n = soup.find(id='ctl00_ContentPlaceHolder1_lnkBCLobbyist')
        name = n.string
        print name
        table = soup.find('table', 'reportTbl')
        # get the rows
        list_of_rows = []
        for row in table.findAll('tr')[1:]:
            col = row.findAll('td')
            filing = col[0].string
            status = col[1].string
            cont = col[2].string
            exp = col[3].string
            record = (name, filing, status, cont, exp)
            list_of_rows.append(record)
        # write to file
        writer = csv.writer(open('lob.csv', 'wb'))
        writer.writerows(list_of_rows)
You need to append each time, not overwrite: use mode 'a'. open('lob.csv', 'wb') truncates the file each time through your outer loop:
writer = csv.writer(open('lob.csv', 'ab'))
writer.writerows(list_of_rows)
You could also declare list_of_rows = [] outside the for loops and write to the file once at the very end.
If you want page 30 as well, you need to loop over xrange(22, 31).
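A sketch of that alternative, under the same Python 2 / urllib2 assumptions as the question (the element id and file naming are the question's own):

from bs4 import BeautifulSoup
import urllib2
import csv

list_of_rows = []  # collect rows from every page, outside the loop
for i in xrange(22, 31):  # 31 so that page 30 is included
    try:
        page = urllib2.urlopen('file:{}.html'.format(i))
    except IOError:
        continue  # skip missing pages
    soup = BeautifulSoup(page.read())
    name = soup.find(id='ctl00_ContentPlaceHolder1_lnkBCLobbyist').string
    for row in soup.find('table', 'reportTbl').findAll('tr')[1:]:
        col = row.findAll('td')
        list_of_rows.append((name, col[0].string, col[1].string,
                             col[2].string, col[3].string))

# one write, after all pages have been processed
writer = csv.writer(open('lob.csv', 'wb'))
writer.writerows(list_of_rows)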
I am working on a scraper to pull street names and zip codes from a site, and all of that works great; it builds a CSV file just fine for me. But when I open the CSV file in Excel, there is a blank row, then a row with a street name and the zip code in the next column, just like I want, then another blank row, then a row with a street name and zip code beside it, and so on all the way through the file. When imported into the phpMyAdmin database, that gives me a row with a street name and zip code, then the word "none" in the next row. I want to get rid of the blank rows. Here is my code.
from bs4 import BeautifulSoup
import csv
import urllib2

url = "http://www.conakat.com/states/ohio/cities/defiance/road_maps/"
page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read())

f = csv.writer(open("Defiance Steets1.csv", "w"))
f.writerow(["Street", "Zipcode"])  # Write column headers as the first line

links = soup.find_all('a')
for link in links:
    i = link.find_next_sibling('i')
    if getattr(i, 'name', None):
        a, i = link.string, i.string[1:-1]
        f.writerow([a, i])
This worked for me (I added lineterminator="\n"):
from BeautifulSoup import BeautifulSoup
import csv
import urllib2

url = "http://www.conakat.com/states/ohio/cities/defiance/road_maps/"
page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read())

f = csv.writer(open("Defiance Steets1.csv", "w"), lineterminator="\n")
f.writerow(["Street", "Zipcode"])  # Write column headers as the first line

#print soup.
links = soup.findAll('a')
for link in links:
    #i = link.find_next_sibling('i')
    i = link.findNextSibling('i')
    if getattr(i, 'name', None):
        a, i = link.string, i.string[1:-1]
        print [a, i]
        f.writerow([a, i])
This works for me... thanks!
If you have the writer and the open call on different lines,
put lineterminator as a param in the writer function...
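As a side note, on Python 3 the usual fix for these blank rows is to open the file with newline='' instead of passing lineterminator; a minimal sketch keeping the question's filename (the sample row is hypothetical):

import csv

# newline='' lets the csv module control line endings itself,
# so Excel no longer sees an empty row between records
with open("Defiance Steets1.csv", "w", newline='') as handle:
    f = csv.writer(handle)
    f.writerow(["Street", "Zipcode"])
    f.writerow(["Main St", "43512"])  # hypothetical sample row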