Saving and Resuming on scraperwiki - CPU time - python

This is my first time doing this, so I'd better apologize in advance for my rookie mistakes. I'm trying to scrape legacy.com for the first page of results from searching for a first and last name within a state. I'm new to programming and was using scraperwiki to write the code. It worked, but I ran out of CPU time long before the roughly 10,000 queries had time to process. Now I'm trying to save progress, catch when time is running low, and then resume from where it left off.
I can't get the save to work, and any help with the other parts would be appreciated as well. As of now I'm just grabbing links, but if there was a way to save the main content of the linked pages that would be really helpful as well.
Here's my code:
import scraperwiki
from urllib import urlopen
from BeautifulSoup import BeautifulSoup

f = open('/tmp/workfile', 'w')

# read database, find last, start from there
def searchname(fname, lname, id, stateid):
    url = 'http://www.legacy.com/ns/obitfinder/obituary-search.aspx?daterange=Last1Yrs&firstname= %s &lastname= %s &countryid=1&stateid=%s&affiliateid=all' % (fname, lname, stateid)
    obits = urlopen(url)
    soup = BeautifulSoup(obits)
    obits_links = soup.findAll("div", {"class": "obitName"})
    print obits_links
    s = str(obits_links)
    id2 = int(id)
    f.write(s)
    # save the database here
    scraperwiki.sqlite.save(unique_keys=['id2'], data=['id2', 'fname', 'lname', 'state_id', 's'])

# Import Data from CSV
import scraperwiki
data = scraperwiki.scrape("https://dl.dropbox.com/u/14390755/legacy.csv")
import csv
reader = csv.DictReader(data.splitlines())
for row in reader:
    # scraperwiki.sqlite.save(unique_keys=['id'], 'fname', 'lname', 'state_id', data=row)
    FNAME = str(row['fname'])
    LNAME = str(row['lname'])
    ID = str(row['id'])
    STATE = str(row['state_id'])
    print "Person: %s %s" % (FNAME, LNAME)
    searchname(FNAME, LNAME, ID, STATE)

f.close()
f = open('/tmp/workfile', 'r')
data = f.read()
print data

At the bottom of the CSV loop, write each fname+lname+state combination with save_var. Then, right before that loop, add another loop that goes through the rows without processing them until it passes the saved value.
You should be able to write entire web pages into the datastore, but I haven't tested that.
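A rough sketch of that resume pattern, assuming the scraperwiki library's save_var/get_var helpers and the CSV loop from the question (the 'last_done' name is just a placeholder I made up for the checkpoint):

import scraperwiki

# Sketch only: resume the CSV loop from the last saved combination.
last_done = scraperwiki.sqlite.get_var('last_done', None)
skipping = last_done is not None

for row in reader:
    key = '%s|%s|%s' % (row['fname'], row['lname'], row['state_id'])
    if skipping:
        if key == last_done:
            skipping = False  # passed the saved value; resume on the next row
        continue
    searchname(row['fname'], row['lname'], row['id'], row['state_id'])
    scraperwiki.sqlite.save_var('last_done', key)  # checkpoint after each query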

Related

How to scrape data once a day and write it to csv

I'm a total noobie, just starting with web scraping as a hobby.
I want to scrape data from a forum (the total number of posts, the total number of topics, and the number of all users) from https://www.fly4free.pl/forum/
(photo of the data I want to scrape)
Following some tutorials, I've come up with this code:
from bs4 import BeautifulSoup
import requests
import datetime
import csv
source = requests.get('https://www.fly4free.pl/forum/').text
soup = BeautifulSoup(source, 'lxml')
csv_file = open('4fly_forum.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['Data i godzina', 'Wszytskich postów', 'Wszytskich tematów', 'Wszytskich użytkowników'])
czas = datetime.datetime.now()
czas = czas.strftime("%Y-%m-%d %H:%M:%S")
print(czas)
dane = soup.find('p', class_='genmed')
posty = dane.find_all('strong')[0].text
print(posty)
tematy = dane.find_all('strong')[1].text
print(tematy)
user = dane.find_all('strong')[2].text
print(user)
print()
csv_writer.writerow([czas, posty, tematy, user])
csv_file.close()
I don't know how to make it run once a day and how to add data to the file once a day. Sorry if my questions are infantile for you pros ;), it's my first training assignment.
Also, my resulting CSV file doesn't look nice; I would like the data to be nicely formatted into columns.
Any help and insight will be much appreciated.
thx in advance
Dejvciu
You can use the Schedule library in Python to do this.
First install it using
pip install schedule
Then you can modify your code to run at intervals of your choice
import schedule
import time

def scrape():
    # your web scraping code here
    print('web scraping')

schedule.every().day.at("10:30").do(scrape)  # change 10:30 to a time of your choice

while True:
    schedule.run_pending()
    time.sleep(1)
This will run the web scraping script every day at 10:30 and you can easily host it for free to make it run continually.
Here's how you could save the results to a CSV in a nicely formatted way, with field names (czas, posty, tematy, and user) as column names.
import csv
from os import path

# This avoids appending the headers (field names / column names) every time
# the script runs; headers are written to the CSV only once.
file_status = path.isfile('filename.csv')

with open('filename.csv', 'a+', newline='') as csvfile:
    fieldnames = ['czas', 'posty', 'tematy', 'user']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    if not file_status:
        writer.writeheader()
    writer.writerow({'czas': czas, 'posty': posty, 'tematy': tematy, 'user': user})
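Putting the two pieces together, a rough sketch of the scrape() function (reusing the selectors from the question and the DictWriter snippet above; not tested against the live forum) might look like:

import csv
import datetime
import requests
from bs4 import BeautifulSoup
from os import path

def scrape():
    # Fetch and parse the forum statistics (selectors taken from the question).
    source = requests.get('https://www.fly4free.pl/forum/').text
    soup = BeautifulSoup(source, 'lxml')
    dane = soup.find('p', class_='genmed')
    posty, tematy, user = [s.text for s in dane.find_all('strong')[:3]]
    czas = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    # Append one row per run, writing the header only if the file is new.
    file_exists = path.isfile('4fly_forum.csv')
    with open('4fly_forum.csv', 'a+', newline='') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=['czas', 'posty', 'tematy', 'user'])
        if not file_exists:
            writer.writeheader()
        writer.writerow({'czas': czas, 'posty': posty, 'tematy': tematy, 'user': user})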
I'm also not very experienced, but I think that to run it once a day you can use your computer's task scheduler. That will run your script once every day. Maybe this video helps you with the Task Scheduler: https://www.youtube.com/watch?v=s_EMsHlDPnE

Many-record upload to postgres

I have a series of .csv files with some data, and I want a Python script to open them all, do some preprocessing, and upload the processed data to my postgres database.
I have it mostly complete, but my upload step isn't working. I'm sure it's something simple that I'm missing, but I just can't find it. I'd appreciate any help you can provide.
Here's the code:
import psycopg2
import sys
from os import listdir
from os.path import isfile, join
import csv
import re
import io

try:
    con = db_connect("dbname = '[redacted]' user = '[redacted]' password = '[redacted]' host = '[redacted]'")
except:
    print("Can't connect to database.")
    sys.exit(1)

cur = con.cursor()
upload_file = io.StringIO()

file_list = [f for f in listdir(mypath) if isfile(join(mypath, f))]
for file in file_list:
    id_match = re.search(r'.*-(\d+)\.csv', file)
    if id_match:
        id = id_match.group(1)
        file_name = format(id_match.group())
        with open(mypath + file_name) as fh:
            id_reader = csv.reader(fh)
            next(id_reader, None)  # Skip the header row
            for row in id_reader:
                # [stuff goes here to get desired values from file]
                if upload_file.getvalue() != '':
                    upload_file.write('\n')
                upload_file.write('{0}\t{1}\t{2}'.format(id, [val1], [val2]))

print(upload_file.getvalue())  # prints output that looks like I expect it to,
                               # with thousands of rows that seem to have the right values in the right fields
cur.copy_from(upload_file, '[my_table]', sep='\t', columns=('id', 'col_1', 'col_2'))
con.commit()

if con:
    con.close()
This runs without error, but a select query in psql still shows no records in the table. What am I missing?
Edit: I ended up giving up and writing it to a temporary file, and then uploading the file. This worked without any trouble...I'd obviously rather not have the temporary file though, so I'm happy to have suggestions if someone sees the problem.
When you write to an io.StringIO (or any other file) object, the file pointer remains at the position of the last character written. So, when you do
f = io.StringIO()
f.write('1\t2\t3\n')
s = f.readline()
the file pointer stays at the end of the file and s contains an empty string.
To read (not getvalue) the contents, you must reposition the file pointer to the beginning, e.g. use seek(0)
upload_file.seek(0)
cur.copy_from(upload_file, '[my_table]', columns = ('id', 'col_1', 'col_2'))
This allows copy_from to read from the beginning and import all the lines in your upload_file.
Don't forget that you read and keep all of the files in memory, which might work for a single small import but may become a problem for large imports or multiple imports running in parallel.
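If that becomes an issue, one option (a rough sketch only, reusing the placeholder names from the question) is to copy each file's rows separately instead of accumulating everything in one buffer:

import io

for file in file_list:
    buf = io.StringIO()
    # ... write this file's processed rows into buf, as in the question ...
    buf.seek(0)  # rewind so copy_from reads from the beginning
    cur.copy_from(buf, '[my_table]', sep='\t', columns=('id', 'col_1', 'col_2'))
con.commit()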

Read data from api and populate .csv bug

I am trying to write a script (Python 2.7.11, Windows 10) to collect data from an API and append it to a csv file.
The API I want to use returns data in json.
It limits the number of displayed records, though, and pages them.
So there is a maximum number of records you can get with a single query, and then you have to run another query, changing the page number.
The API tells you how many pages a dataset is divided into.
Let's assume that the maximum number of records per page is 100 and the number of pages is 2.
My script:
import json
import urllib2
import csv

url = "https://some_api_address?page="
limit = "&limit=100"
myfile = open('C:\Python27\myscripts\somefile.csv', 'ab')

def api_iterate():
    for i in xrange(1, 2, 1):
        parse_url = url,(i),limit
        json_page = urllib2.urlopen(parse_url)
        data = json.load(json_page)
        for item in data['someobject']:
            print item ['some_item1'], ['some_item2'], ['some_item3']
        f = csv.writer(myfile)
        for row in data:
            f.writerow([str(row)])
This does not seem to work, i.e. it creates a csv file, but the file is not populated. There is obviously something wrong with either the part of the script which builds the address for the query OR the part dealing with reading json OR the part dealing with writing query to csv. Or all of them.
I have tried using other resources and tutorials, but at some point I got stuck and I would appreciate your assistance.
The url you have given provides a link to the next page as one of the objects. You can use this to iterate automatically over all of the pages.
The script below gets each page, extracts two of the entries from the Dataobject array and writes them to an output.csv file:
import json
import urllib2
import csv

def api_iterate(myfile):
    url = "https://api-v3.mojepanstwo.pl/dane/krs_osoby"
    csv_myfile = csv.writer(myfile)
    cols = ['id', 'url']
    csv_myfile.writerow(cols)  # Write a header
    while True:
        print url
        json_page = urllib2.urlopen(url)
        data = json.load(json_page)
        json_page.close()
        for data_object in data['Dataobject']:
            csv_myfile.writerow([data_object[col] for col in cols])
        try:
            url = data['Links']['next']  # Get the next url
        except KeyError as e:
            break

with open(r'e:\python temp\output.csv', 'wb') as myfile:
    api_iterate(myfile)
This will give you an output file looking something like:
id,url
1347854,https://api-v3.mojepanstwo.pl/dane/krs_osoby/1347854
1296239,https://api-v3.mojepanstwo.pl/dane/krs_osoby/1296239
705217,https://api-v3.mojepanstwo.pl/dane/krs_osoby/705217
802970,https://api-v3.mojepanstwo.pl/dane/krs_osoby/802970

Python scraping and outputting to excel

I am trying to create a web crawler. I am currently just testing it on Youtube, but I intend to expand it to do more later. For now, I am still learning.
Currently I am trying to export the information to a CSV. The code below is what I have at the moment, and it seemed to be working great when I was running it to pull title descriptions. However, when I added code to get the "views" and "likes", it messed up the output file because those values have commas in them.
Does anyone know what I can do to get around this?
import urllib2
import __builtin__
from selenium import webdriver
from selenium.common.exceptions import NoSuchAttributeException
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
from time import sleep
from random import randint
from lxml import etree
browser = webdriver.Firefox()
time.sleep(2)
browser.get("https://www.youtube.com/results?search_query=funny")
time.sleep(2)
browser.find_element_by_xpath("//*[@id='section-list']/li/ol/li[1]/div/div/div[2]/h3/a").click()
time.sleep(2)
url = browser.current_url
title = browser.find_element_by_xpath("//*[@id='eow-title']").text
views = browser.find_element_by_xpath("//*[@id='watch7-views-info']/div[1]").text
likes = browser.find_element_by_xpath("//*[@id='watch-like']/span").text
dislikes = browser.find_element_by_xpath("//*[@id='watch-dislike']/span").text
tf = 'textfile.csv'
f2 = open(tf, 'a+')
f2.write(', '.join([data.encode('utf-8') for data in [url]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [title]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [views]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [likes]]) + ',')
f2.write(', '.join([data.encode('utf-8') for data in [dislikes]]) + '\n')
f2.close()
First, the fact that you see those numbers with commas rather than a point is dependent on the language and regional settings that YouTube detects for your browser.
Once you have your views, likes and dislikes as strings, you could perform an operation like the following to get rid of the commas:
likes = "3,141,592"
likes = likes.replace(',', '') # likes is now: "3141592"
likes = int(likes) # likes is now an actual integer, not just a string
This works because those 3 parameters are all integers, so you don't have to start thinking of commas or points that are actually important to indicate the start of the non-integer part.
Finally, good examples on how to use the csv module are everywhere on the internet. I could suggest the one from Python Module of the Week. If you understand the examples, you'll be able to change your code to use this highly efficient module.
You needn't write raw csv format yourself. Use https://docs.python.org/2/library/csv.html.
A code sample:
import StringIO
import csv

stringio = StringIO.StringIO()
csv_writer = csv.writer(stringio)
csv_writer.writerow([data.encode('utf-8') for data in [url]])
csv_writer.writerow([data.encode('utf-8') for data in [title]])
csv_writer.writerow([data.encode('utf-8') for data in [views]])
csv_writer.writerow([data.encode('utf-8') for data in [likes]])
csv_writer.writerow([data.encode('utf-8') for data in [dislikes]])

with open('textfile.csv', 'a+') as fp:
    fp.write(stringio.getvalue())
I can't understand the purpose of [data.encode('utf-8') for data in [url]]; or did you mean:
csv_writer.writerow([data.encode('utf-8') for data in [url, title, views, likes, dislikes]])
you can also try csv.writer(open('textfile.csv', 'a+')) without writing to a string buffer.
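For example, a minimal sketch of that direct approach (assuming the url, title, views, likes and dislikes variables from the question) could be:

import csv

# csv.writer quotes fields that contain commas, so values like
# "1,234,567 views" stay in a single column.
with open('textfile.csv', 'a+') as f2:
    writer = csv.writer(f2)
    writer.writerow([data.encode('utf-8')
                     for data in [url, title, views, likes, dislikes]])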

Great CSV module for python?

I'm automating a long task that involves vulnerabilities within a spreadsheet. However, I'm noticing that the "recommendation" for these vulnerabilities are sometimes pretty long.
The CSV module for python seems to be truncating some of this text when writing new rows. Is there any way to prevent this from happening? I simply see "NOTE: THIS FIELD WAS TRUNCATED" in places where the recommendation (which is a lot of text) would be.
The whole objective is to do this:
1. Import a master spreadsheet which has confirmation statuses and everything up-to-date.
2. Take a new spreadsheet containing vulnerabilities which doesn't have conf status/severity up-to-date.
3. Compare the second spreadsheet to the first. It'll update the severity levels from the second spreadsheet, and then write to a new file.
4. The newly created CSV file can be copied and pasted into the master spreadsheet. All vulnerabilities which match the first spreadsheet now have the same severity level/confirmation status.
What I'm noticing, though (even in Ruby for some reason), is that some of the recommendations in these vulnerabilities have long text, and it gets truncated when the CSV file is created. Here's a sample piece of code that I've quickly written for demonstration:
#!/usr/bin/python
from sys import argv
import getopt, csv

master_vulns = {}
criticality = {}

############################ Extracting unique vulnerabilities from master file
contents = csv.reader(open(argv[1], 'rb'), delimiter=',')
for row in contents:
    if "Confirmation_Status" in row:
        continue
    try:
        if row[7] in master_vulns:
            continue
        if row[7] in master_vulns:
            continue
        master_vulns[row[7]] = row[3]
        criticality[rows[7]] = row[2]
    except Exception:
        pass

############################ Updating confirmation status of newly created file
new_contents = csv.reader(open(argv[1], 'rb'), delimiter=',')
new_data = []
results = open('results.csv', 'wb')
writer = csv.writer(results, delimiter=',')
for nrow in new_contents:
    if "Confirmation_Status" in nrow:
        continue
    try:
        if nrow[1] == "DELETE":
            continue
        vuln_name = nrow[7]
        vuln_status = nrow[3]
        criticality = criticality[vuln_name]
        vuln_status = master_vulns[vuln_name]
        nrow[3] = vuln_status
        nrow[2] = criticality
        writer.writerow(nrow)
    except Exception:
        writer.writerow(nrow)
        pass
results.close()
