I've used this community a number of times, and the answers to the questions I've searched for have been great. I have searched around for a solution to this one, but I am having problems; I think it has to do with my lack of knowledge about HTML code and structure. Right now I am trying to use urllib.urlencode to fill out a form on a website. Unfortunately, no matter what combination of values I add to the dictionary, the HTML returned as 'soup' is the same webpage with the list of search options. I'm guessing that means the search data is not being passed properly by urllib.urlencode.
An example of the webpage is:
http://www.mci.mndm.gov.on.ca/Claims/Cf_Claims/clm_cls.cfm?Div=80
which is the url I will go to; the trailing Div=80 (or Div=70, etc.) is built in the first two lines with a call to another function, urlData(division). After these lines is where the problem is happening. I've tried to include a value for each input field under the search form, but I am definitely missing something.
Code:
def searchHolder(Name, division):
    url = ('http://www.mci.mndm.gov.on.ca/Claims/Cf_Claims/clm_cls.cfm' +
           '?Div=' + str(urlData(division)))  # creates the url given above
    print url  # checked: it is the same url as given above for the case I am having problems with
    values = {'HolderName': Name, 'action': 'clm_clr.cfm', 'txtDiv': 80,
              'submit': 'Start Search'}
    data = urllib.urlencode(values)
    html = urllib.urlopen(url, data)
    soup = bs4.BeautifulSoup(html)
    soup.unicode
    print soup.text
    return soup
The form "action" isn't a parameter you pass. Rather, it's the URL you need to send your request to in order to get results. Give this a try:
def searchHolder(Name, division):
    url = 'http://www.mci.mndm.gov.on.ca/Claims/Cf_Claims/clm_clr.cfm'
    values = {'HolderName': Name, 'txtDiv': 80}
    data = urllib.urlencode(values)
    html = urllib.urlopen(url, data)
    soup = bs4.BeautifulSoup(html)
    soup.unicode
    print soup.text
    return soup
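For reference, on Python 3 (where urlencode lives in urllib.parse and urlopen in urllib.request), a minimal sketch of the same POST, assuming the same endpoint and form fields as above, would be:

import urllib.parse
import urllib.request
import bs4

def search_holder(name):
    # assumed endpoint and fields, taken from the form's action and inputs above
    url = 'http://www.mci.mndm.gov.on.ca/Claims/Cf_Claims/clm_clr.cfm'
    data = urllib.parse.urlencode({'HolderName': name, 'txtDiv': 80}).encode('utf-8')
    with urllib.request.urlopen(url, data) as response:  # passing data makes this a POST
        return bs4.BeautifulSoup(response.read(), 'html.parser')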
Related
I have a list of web ids that I want to scrape from the WikiData website. Here are two links as an example.
https://www.wikidata.org/wiki/Special:EntityData/Q317521.jsonld
https://www.wikidata.org/wiki/Special:EntityData/Q478214.jsonld
I only need the first set of "P31" from each URL. For the first URL the information I need is "wd:Q5", and for the second URL it is ["wd:Q786820", "wd:Q167037", "wd:Q6881511", "wd:Q4830453", "wd:Q431289", "wd:Q43229", "wd:Q891723"]; I want to store these in a list.
When I search for "P31", I only need the first result out of all the results.
The output will look like this.
info = ['wd:Q5',
        ["wd:Q786820", "wd:Q167037", "wd:Q6881511", "wd:Q4830453", "wd:Q431289", "wd:Q43229", "wd:Q891723"],
       ]
import requests
from bs4 import BeautifulSoup

lst = ["Q317521", "Q478214"]
for q in lst:
    link = f'https://www.wikidata.org/wiki/Special:EntityData/{q}.jsonld'
    page = requests.get(link)
    soup = BeautifulSoup(page.text, 'html.parser')
After that, I do not know how to extract the information from the first set of "P31". I am using the requests, BeautifulSoup, and Selenium libraries, but I am wondering whether there are better ways to scrape/extract that information from the URL besides using XPath or a class selector.
Thank you so much!
You only need requests as you are getting a JSON response.
You can use a function that loops over the relevant nested JSON object and exits at the first occurrence of the target key, appending the associated value to your list.
The loop variable should be the id to add into the url for the request.
import requests

lst = ["Q317521", "Q478214"]
info = []

def get_first_p31(data):
    # stop at the first node that carries a P31 key
    for i in data['#graph']:
        if 'P31' in i:
            info.append(i['P31'])
            break

with requests.Session() as s:
    s.headers = {"User-Agent": "Safari/537.36"}
    for q in lst:
        link = f'https://www.wikidata.org/wiki/Special:EntityData/{q}.jsonld'
        try:
            r = s.get(link).json()
            get_first_p31(r)
        except Exception:
            print('failed with link: ', link)
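Once the loop finishes, info should hold one entry per id, matching the shape shown in the question:

print(info)
# e.g. ['wd:Q5', ['wd:Q786820', 'wd:Q167037', ...]]  (the exact shape depends on the entity)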
I am trying to scrape some data from the WebMD message board. Initially I constructed a loop to get the page numbers for each category and stored them in a dataframe. When I try to run the loop, I do get the proper number of posts for each subcategory, but only for the first page. Any ideas what might be going wrong?
lists2 = []
df1 = pd.DataFrame(columns=['page'], data=page_links)
for j in range(len(df1)):
    pages = (df1.page.iloc[j])
    print(pages)
    req1 = urllib.request.Request(pages, headers=headers)
    resp1 = urllib.request.urlopen(req1)
    soup1 = bs.BeautifulSoup(resp1, 'lxml')
    for body_links in soup1.find_all('div', class_="thread-detail"):
        body = body_links.a.get('href')
        lists2.append(body)
I am getting the proper page in the print function, but then it seems to iterate only over the first page and get the links of the posts from it. Also, when I copy and paste the link for any page besides the first one, it momentarily loads the first page and then goes to the proper page number. I tried adding time.sleep(1), but that does not work. Another thing I tried was adding headers = {'Cookie': 'PHPSESSID=notimportant'}.
Replace this line:
pages = (df1.page.iloc[j])
With this:
pages = df1.iloc[j, 0]
You will now iterate through the values of your DataFrame
If page_links is a list with urls like
page_links = ["http://...", "http://...", "http://...", ]
then you could use it directly
for url in page_links:
    req1 = urllib.request.Request(url, headers=headers)

If you need it in a DataFrame, then

for url in df1['page']:
    req1 = urllib.request.Request(url, headers=headers)
But if your current code displays all the urls and you only get results for one page, then the problem is not in the DataFrame but in the HTML and find_all.
It seems only the first page has <div class="thread-detail">, so find_all can't find it on the other pages and can't add anything to the list. You should check it again; for the other pages you may need different arguments in find_all. But without the urls to these pages we can't check it and can't help more.
It can be another common problem: the page may use JavaScript to add these elements, but BeautifulSoup can't run JavaScript, and then you would need [Selenium](https://selenium-python.readthedocs.io/) to control a web browser which can run JavaScript. You could turn off JavaScript in your browser and open the urls to check whether you can still see the elements on the page and in the HTML in DevTools in Chrome/Firefox.
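If it does turn out to be JavaScript, a minimal Selenium sketch for the same loop (assuming Chrome and a matching chromedriver are installed) might look like this:

from selenium import webdriver
import bs4 as bs

driver = webdriver.Chrome()  # the browser executes the page's JavaScript
for url in page_links:
    driver.get(url)
    soup1 = bs.BeautifulSoup(driver.page_source, 'lxml')
    for body_links in soup1.find_all('div', class_="thread-detail"):
        lists2.append(body_links.a.get('href'))
driver.quit()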
As for PHPSESSID: with requests you could use a Session to get fresh cookies (including PHPSESSID) from the server and automatically add them to the other requests.
import requests

s = requests.Session()

# get any page to get fresh cookies from the server
r = s.get('http://your-domain/main-page.html')

# the session now reuses those cookies automatically
for url in page_links:
    r = s.get(url)
I am trying to programmatically download (open) data from a website using BeautifulSoup.
The website uses a php form where you need to submit input data; the resulting links are then apparently output within this same form.
My approach was as follows
Step 1: post the form data via requests
Step 2: parse resulting links via BeautifulSoup
However, this does not seem to work, or I am doing something wrong: the POST appears to have no effect, and Step 2 is not even possible because no results are returned.
Here is my code:
from bs4 import BeautifulSoup
import requests

def get_text_link(soup):
    'Returns list of links to individual legal texts'
    ergebnisse = soup.findAll(attrs={"class": "einErgebnis"})
    if ergebnisse:
        links = [el.find("a", href=True).get("href") for el in ergebnisse]
    else:
        links = []
    return links

url = "https://www.justiz.nrw.de/BS/nrwe2/index.php#solrNrwe"

# Post a specific date range to get one year of data
params = {'von': '01.01.2018',
          'bis': '31.12.2018',
          "absenden": "Suchen"}

response = requests.post(url, data=params)
content = response.content

soup = BeautifulSoup(content, "lxml")
resultlinks_to_parse = get_text_link(soup)  # is always an empty list

# proceed from here....
Can someone tell me what I am doing wrong? I am not really familiar with requests.post. The form field for "bis", for example, looks as follows:
<input id="bis" type="text" name="bis" size="10" value="">
If my approach is flawed, I would appreciate any hint on how to deal with this kind of site.
Thanks!
I've found the issue with your request.
My investigation shows that the following params are available:
gerichtstyp:
gerichtsbarkeit:
gerichtsort:
entscheidungsart:
date:
von: 01.01.2018
bis: 31.12.2018
validFrom:
von2:
bis2:
aktenzeichen:
schlagwoerter:
q:
method: stem
qSize: 10
sortieren_nach: relevanz
absenden: Suchen
advanced_search: true
I think the qSize param is mandatory for your POST request.
So, you have to replace your params by:
params = {
    'von': '01.01.2018',
    'bis': '31.12.2018',
    'absenden': 'Suchen',
    'qSize': 10
}
Doing this, here are my results when I print resultlinks_to_parse
print(resultlinks_to_parse)
OUTPUT:
[
'http://www.justiz.nrw.de/nrwe/lgs/detmold/lg_detmold/j2018/03_S_69_18_Urteil_20181031.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/10_Sa_1122_17_Urteil_20180126.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/13_TaBV_10_18_Beschluss_20181123.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/10_Sa_1810_17_Urteil_20180629.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/10_Sa_1811_17_Urteil_20180629.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/11_Sa_1196_17_Urteil_20180118.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/11_Sa_1775_17_Urteil_20180614.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/11_SaGa_9_18_Urteil_20180712.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/12_Sa_748_18_Urteil_20181009.html',
'http://www.justiz.nrw.de/nrwe/arbgs/hamm/lag_hamm/j2018/12_Sa_755_18_Urteil_20181106.html'
]
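Putting it together with the get_text_link helper from the question, a sketch of the full request would be:

import requests
from bs4 import BeautifulSoup

url = "https://www.justiz.nrw.de/BS/nrwe2/index.php#solrNrwe"
params = {
    'von': '01.01.2018',
    'bis': '31.12.2018',
    'absenden': 'Suchen',
    'qSize': 10
}

response = requests.post(url, data=params)
soup = BeautifulSoup(response.content, "lxml")
resultlinks_to_parse = get_text_link(soup)  # get_text_link as defined in the question
print(resultlinks_to_parse)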
I am working on a blog and learning web development at the same time. I want to learn more about JSON so I am trying to implement a way to export the entire contents of my blog to JSON and later XML. I am hitting a lot of problems on the way, the biggest one being getting the url of the page which I want to render as JSON/XML dynamically. The code for my website can be found here. I still need to comment more and I have to implement a lot of functionalities. The main class which is responsible for exporting the contents to JSON is as follows :
class JSONHandler(BaseHandler):
    #TODO: find a way to get the url from the request
    def get(self):
        self.response.headers['Content-Type'] = 'application/json'
        url = "http://www.bigb-myapp.appspot.com/blog"
        #url = self.request.path_url
        logging.info(url)
        page = urllib2.urlopen(url).read()
        soup = BeautifulSoup(page)

        subject_list = []
        day_list = []
        content_list = []

        subjects = soup.findAll('div', {'class': 'subject-title'})
        days = soup.findAll('div', {'class': 'day'})
        contents = soup.findAll('div', {'class': 'post'})

        for subject in subjects:
            subject_list.append(subject.findAll(text=True))
        for day in days:
            day_list.append(day.findAll(text=True))
        for content in contents:
            content_list.append(content.findAll(text=True))

        i = 0
        for s, d, c in subject_list, day_list, content_list:
            json_text = json.dumps({'subject': s[i][i], 'day': d[i][i], 'content': c[i][i]})
            i += 1
            self.write(json_text)
I am also sure that the printing function is erroneous, but that is the easy part. As I said getting the url is proving to be a major difficulty.
I have tried to get the url from the environment variables, and I have also tried webapp2's request attributes such as self.request.path_url, to no avail.
I am working with Google App engine and use the jinja2 template engine.
Thanks.
self.request.url or self.request.path should do the trick.
However, the better way to do this is something similar to what you used in the permalink section: just parse the post id from the request. That means you should separate JSONHandler into handling two things - a) return the entire blog, and b) return an individual post.
I'd also suggest not using the method you're currently using to get the blog posts... In the Mainpage class you do it so elegantly with GQL, so why do it here with urllib2 and BeautifulSoup?
And as for the last question about the response: the correct way is self.response.out.write("something").
EDITED TO ADD:
I meant to split the JSONHandler into two parts, such that there'd be two handlers: ('/blog/(\d+).json',PermalinkJSONHandler),
('/blog.json',FullJSONHandler),...
Both should be about the same (even using the same function for dumping the json) just with different GQLs to get the correct information.
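For illustration, a minimal sketch of the full-blog handler along those lines (the Post model and its subject/content/created fields are assumptions here; reuse whatever your Mainpage GQL query already returns):

import json
import webapp2
from google.appengine.ext import db

class FullJSONHandler(webapp2.RequestHandler):
    def get(self):
        # hypothetical model/field names -- substitute the ones from your Mainpage query
        posts = db.GqlQuery("SELECT * FROM Post ORDER BY created DESC")
        items = [{'subject': p.subject,
                  'day': p.created.strftime('%d %b %Y'),
                  'content': p.content} for p in posts]
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(json.dumps(items))

PermalinkJSONHandler would do the same for a single post, using the id parsed from the URL.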
I've been working on a script and I thought I would ask for help. I'm looking to search a series of websites and check whether each site is valid. The next step would be to check for specific content on the site; if the site holds that content, place the URL in a list.
import urllib2

National = []
Local = []
Sports = []
Culture = []

def getPage():
    url = "http://readingeagle.com/section.aspx?id=2"
    for i in range(0, 100, 1):
        req = urllib2.Request("http://readingeagle.com/section.aspx?id=" + str(i))
        response = urllib2.urlopen(req).read()
        if "national" in response:
            return response

    for g in range(0, 100, 1):
        if "national" in response:
            National.append("http://readingeagle.com/section.aspx?id=" + str(g))

# I would like to set up an iteration to check the entry id from 1-100.
# If the term is found on the page, place the url in the list.

if __name__ == "__main__":
    namesPage = getPage()
    print(namesPage)
Here's my answer to the question of how to validate a given web site.
python check html valid
For checking the content of the page, the tools range from basic string methods and regex to more sophisticated parsers like lxml or BeautifulSoup.
matchingSites = []
matchingSites.append(url) #Since you asked. :-p
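A rough sketch of that idea for the pages in the question (the id range and the "national" test are just the ones from your snippet):

import urllib2

matchingSites = []
for entry_id in range(1, 101):
    url = "http://readingeagle.com/section.aspx?id=%d" % entry_id
    try:
        html = urllib2.urlopen(url).read()  # an invalid id may raise or return an error page
    except urllib2.URLError:
        continue
    if "national" in html:  # simple substring check; regex, lxml, or BeautifulSoup also work
        matchingSites.append(url)

print(matchingSites)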