regex python simple findall start and end points known - python

import bs4
from urllib.request import urlopen
import re
import os
html=urlopen('https://www.flickr.com/search/?text=dog')
soup=bs4.BeautifulSoup(html,'html.parser')
print(soup.title)
x=soup.text
y=[]
for i in re.findall('c1.staticflickr.com\.jpg',x):
    print(i)
I know the images start with c1.staticflickr.com and end with .jpg. How can I print each image link? (I'm a bit rusty on regex; I tried adding some things but it didn't work.)

You have two ways to gather what you want, but regex seems the better fit here because the URLs have a canonical format. Extracting the URLs with bs4 is a bit more involved, since they sit inside the style attribute.
import bs4
import requests
import re
resp = requests.get('https://www.flickr.com/search/?text=dog')
html = resp.text
result = re.findall(r'c1\.staticflickr\.com/.*?\.jpg',html)
print(len(result))
print(result[:5])
soup = bs4.BeautifulSoup(html, 'html.parser')
result2 = [re.findall(r'c1\.staticflickr\.com/.*?\.jpg', ele.get("style"))[0]
           for ele in soup.find_all("div", class_="view photo-list-photo-view requiredToShowOnServer awake")]
print(len(result2))
print(result2[:5])
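If you also want to save the images rather than just print the links, here is a minimal sketch building on the result list above (the output filenames are just an illustration; the matches carry no scheme, so https:// is added back):
import requests

for i, link in enumerate(result[:5]):
    resp = requests.get('https://' + link)         # the regex match starts at the host, so add the scheme back
    with open('dog_{}.jpg'.format(i), 'wb') as f:  # hypothetical output filenames
        f.write(resp.content)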
Edit: you can also gain extra information through this special URL instead of using Selenium. (I did not check whether it can get the information that is on page one.)
import requests
url = "https://api.flickr.com/services/rest?sort=relevance&parse_tags=1&content_type=7&extras=can_comment,count_comments,count_faves,description,isfavorite,license,media,needs_interstitial,owner_name,path_alias,realname,rotation,url_c,url_l,url_m,url_n,url_q,url_s,url_sq,url_t,url_z&per_page={per_page}&page={page}&lang=en-US&text=dog&viewerNSID=&method=flickr.photos.search&csrf=&api_key=352afce50294ba9bab904b586b1b4bbd&format=json&hermes=1&hermesClient=1&reqId=c1148a88&nojsoncallback=1"
with requests.Session() as s:
    #resp = s.get(url.format(per_page=100,page=1))
    resp2 = s.get(url.format(per_page=100,page=2))
    for each in resp2.json().get("photos").get("photo")[:5]:
        print(each.get("url_n_cdn"))
        print(each.get("url_m"))  # there are more url types in the JSON: url_q, url_s, url_sq, url_t, url_z

Related

Web scraping using Python and Beautiful Soup for /post-sitemap.xml/

I am trying to scrape a page, website/post-sitemap.xml, which contains all the URLs posted for a WordPress website. As a first step, I need to make a list of all the URLs present in the post sitemap. When I use requests.get and check the output, it opens all of the internal URLs as well, which is weird. My intention is to make a list of all the URLs first, and then, using a loop, scrape the individual URLs in the next function. Below is the code I have so far. I need all the URLs as a list as my final output.
I have tried using requests.get and urlopen, but nothing seems to open only the base URL for /post-sitemap.xml.
import pandas as pd
import numpy as np
from urllib.request import urlopen
from bs4 import BeautifulSoup
import requests
import re
class wordpress_ext_url_cleanup(object):

    def __init__(self, wp_url):
        self.wp_url_raw = wp_url
        self.wp_url = wp_url + '/post-sitemap.xml/'

    def identify_ext_url(self):
        html = requests.get(self.wp_url)
        print(self.wp_url)
        print(html.text)
        soup = BeautifulSoup(html.text, 'lxml')
        #print(soup.get_text())
        raw_data = soup.find_all('tr')
        print(raw_data)
        #for link in raw_data:
        #    print(link.get("href"))

def main():
    print("Inside Main Function")
    url = "http://punefirst dot com"  #(knowingly removed the . so it doesnt look spammy)
    first_call = wordpress_ext_url_cleanup(url)
    first_call.identify_ext_url()

if __name__ == '__main__':
    main()
I need all 548 URLs present in the post sitemap as a list, which I will use in the next function for further scraping.
The document returned from the server is XML, transformed into HTML form with XSLT (more info here). To parse all the links from this XML, you can use this script:
import requests
from bs4 import BeautifulSoup
url = 'http://punefirst.com/post-sitemap.xml/'
soup = BeautifulSoup(requests.get(url).text, 'lxml')
for loc in soup.select('url > loc'):
    print(loc.text)
Prints:
http://punefirst.com
http://punefirst.com/hospitals/pcmc-hospitals/aditya-birla-memorial-hospital-chinchwad-pune
http://punefirst.com/hospitals/pcmc-hospitals/saijyoti-hospital-and-icu-chinchwad-pune
http://punefirst.com/hospitals/pcmc-hospitals/niramaya-hospital-chinchwad-pune
http://punefirst.com/hospitals/pcmc-hospitals/chetna-hospital-chinchwad-pune
http://punefirst.com/hospitals/hadapsar-hospitals/pbmas-h-v-desai-eye-hospital
http://punefirst.com/hospitals/punecentral-hospitals/shree-sai-prasad-hospital
http://punefirst.com/hospitals/punecentral-hospitals/sadhu-vaswani-missions-medical-complex
http://punefirst.com/hospitals/katraj-kondhwa-hospitals/shivneri-hospital
http://punefirst.com/hospitals/punecentral-hospitals/kelkar-nursing-home
http://punefirst.com/hospitals/pcmc-hospitals/shrinam-hospital
http://punefirst.com/hospitals/pcmc-hospitals/dhanwantari-hospital-nigdi
http://punefirst.com/hospitals/punecentral-hospitals/dr-tarabai-limaye-hospital
http://punefirst.com/hospitals/katraj-kondhwa-hospitals/satyanand-hospital-kondhwa-pune
...and so on.
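Since the question asks for the URLs as a list to feed into the next function, the same selector can fill one directly; a minimal variation of the script above:
import requests
from bs4 import BeautifulSoup

url = 'http://punefirst.com/post-sitemap.xml/'
soup = BeautifulSoup(requests.get(url).text, 'lxml')

# collect every <loc> value into a plain Python list
all_urls = [loc.text for loc in soup.select('url > loc')]
print(len(all_urls))  # 548 at the time the question was asked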

How to save a file from BeautifulSoup?

I am trying to scrape a website. So far I am able to scrape it, but I want to write the output to a text file and then delete some strings from it.
from urllib.request import urlopen
from bs4 import BeautifulSoup
delete = ['https://', 'http://', 'b\'http://', 'b\'https://']
url = urlopen('https://openphish.com/feed.txt')
bs = BeautifulSoup(url.read(), 'html.parser' )
print(bs.encode('utf_8'))
The result is a lot of links; here is a sample.
"b'https://certain-wrench.000webhostapp.com/auth/signin/details.html\nhttps://sweer-adherence.000webhostapp.com/auth/signin/details.html\n"
UPDATED
import requests
from bs4 import BeautifulSoup
url = "https://openphish.com/feed.txt"
url_get = requests.get(url)
soup = BeautifulSoup(url_get.content, 'lxml')
with open('url.txt', 'w', encoding='utf-8') as f_out:
    f_out.write(soup.prettify())

delete = ["</p>", "</body>", "</html>", "<body>", "<p>", "<html>", "www.",
          "https://", "http://", " ", " ", " "]

with open(r'C:\Users\v-morisv\Desktop\scripts\url.txt', 'r') as file:
    with open(r'C:\Users\v-morisv\Desktop\scripts\url1.txt', 'w') as file1:
        for line in file:
            for word in delete:
                line = line.replace(word, "")
            print(line, end='')
            file1.write(line)
The code above works, but I have a problem: I am not getting only the domain, I am getting everything after the forward slash as well, so it looks like bofawebplus.webcindario.com/index4.html. I want to remove the "/" and everything after it.
This seems like a proper situation for a regular expression.
from urllib.request import urlopen
from bs4 import BeautifulSoup
url = urlopen('https://openphish.com/feed.txt')
bs = BeautifulSoup(url.read(), 'html.parser' )
import re
domain_list = re.findall(re.compile('http[s]?://([^/]*)/'), bs.text)
print('\n'.join(domain_list))
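To see what the capture group keeps, here is the pattern run against the example URL from the question (interactive session, for illustration only):
>>> import re
>>> re.findall(r'http[s]?://([^/]*)/', 'https://bofawebplus.webcindario.com/index4.html')
['bofawebplus.webcindario.com']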
There's no reason to use BeautifulSoup here; it is meant for parsing HTML, but the URL being opened returns plain text.
Here's a solution that should do what you need. It uses Python's urlparse as an easier and more reliable way of extracting the domain name.
This also uses a python set to remove duplicate entries, since there were quite a few.
from urllib.request import urlopen
from urllib.parse import urlparse
feed_list = urlopen('https://openphish.com/feed.txt')
domains = set()
for line in feed_list:
    url = urlparse(line)
    domain = url.netloc.decode('utf-8')  # decode from utf-8 to string
    domains.add(domain)  # keep all the domains in the set to remove duplicates

for domain in domains:
    print(domain)
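If, as in the original code, you also want the result in a text file, a small extension would be (the filename is just an example):
with open('domains.txt', 'w', encoding='utf-8') as f_out:
    for domain in sorted(domains):
        f_out.write(domain + '\n')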

Scraping in Python with BeautifulSoup

I've read quite a few posts here about this, but I'm very new to Python in general so I was hoping for some more info.
Essentially, I'm trying to write something that will pull word definitions from a site and write them to a file. I've been using BeautifulSoup, and I've made quite some progress, but here's my issue -
from __future__ import print_function
import requests
import urllib2, urllib
from BeautifulSoup import BeautifulSoup
wordlist = open('test.txt', 'a')
word = raw_input('Paste your word ')
url = 'http://services.aonaware.com/DictService/Default.aspx?action=define&dict=wn&query=%s' % word
# print url
html = urllib.urlopen(url).read()
# print html
soup = BeautifulSoup(html)
visible_text = soup.find('pre')(text=True)
print(visible_text, file=wordlist)
This seems to pull what I need, but it puts it in this format:
[u'passable\n adj 1: able to be passed or traversed or crossed; "the road is\n passable"
but I need it to be in plain text. I've tried using a sanitizer (I was running it through bleach), but that didn't work. I've read some of the other answers here, but they don't explain HOW the code works, and I don't want to add something if I don't understand how it works.
Is there any way to just pull the plaintext?
Edit: I ended up doing this:
from __future__ import print_function
import requests
import urllib2, urllib
from bs4 import BeautifulSoup
wordlist = open('test.txt', 'a')
word = raw_input('Paste your word ')
url = 'http://services.aonaware.com/DictService/Default.aspx?action=define&dict=wn&query=%s' % word
# print url
html = urllib.urlopen(url).read()
# print html
soup = BeautifulSoup(html)
visible_text = soup.find('pre')(text=True)[0]
print(visible_text, file=wordlist)
The code is already giving you plain text; it just happens to have some characters encoded as entity references. In this case, special characters that form part of the XML/HTML syntax are encoded to prevent them from breaking the structure of the text.
To decode them, use the HTMLParser module:
>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.unescape('&quot;the road is passable&quot;')
u'"the road is passable"'
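For reference, on Python 3 the same unescaping lives in the standard html module:
>>> import html
>>> html.unescape('&quot;the road is passable&quot;')
'"the road is passable"'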

search a specific word in BeautifulSoup python

I am trying to make a Python script that reads a Crunchyroll page and gives me the ssid of the subtitle.
For example: http://www.crunchyroll.com/i-cant-understand-what-my-husband-is-saying/episode-1-wriggling-memories-678035
Go to the source code and look for ssid; I want to extract the number after ssid in the element whose link text is "English (US)".
I want to extract "154757", but I can't seem to get my script working.
This is my current script:
import feedparser
import re
import urllib2
from urllib2 import urlopen
from bs4 import BeautifulSoup
feed = feedparser.parse('http://www.crunchyroll.com/rss/anime')
url1 = feed['entries'][0]['link']
soup = BeautifulSoup(urlopen(url1), 'html.parser')
How can I modify my code to search and extract that particular number?
This should get you started with extracting the ssid for each entry. Note that some of those links don't have any ssid, so you'll have to account for that with some error catching. There is no need for the re or urllib2 modules here.
import feedparser
import requests
from bs4 import BeautifulSoup
d = feedparser.parse('http://www.crunchyroll.com/rss/anime')
for url in d.entries:
    #print url.link
    r = requests.get(url.link)
    soup = BeautifulSoup(r.text)
    #print soup
    subtitles = soup.find_all('span', {'class': 'showmedia-subtitle-text'})
    for ssid in subtitles:
        x = ssid.findAll('a')
        for a in x:
            print a['href']
Output:
--snip--
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=166035
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=165817
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=165819
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=166783
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=165839
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=165989
/i-cant-understand-what-my-husband-is-saying/episode-12-baby-skip-beat-678057?ssid=166051
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=166011
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=165995
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=165997
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=166033
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=165825
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=166013
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=166009
/urawa-no-usagi-chan/episode-11-if-i-retort-i-lose-678873?ssid=166003
/etotama/episode-11-catrat-shuffle-678659?ssid=166007
/etotama/episode-11-catrat-shuffle-678659?ssid=165969
/etotama/episode-11-catrat-shuffle-678659?ssid=166489
/etotama/episode-11-catrat-shuffle-678659?ssid=166023
/etotama/episode-11-catrat-shuffle-678659?ssid=166015
/etotama/episode-11-catrat-shuffle-678659?ssid=166049
/etotama/episode-11-catrat-shuffle-678659?ssid=165993
/etotama/episode-11-catrat-shuffle-678659?ssid=165981
--snip--
There are more, but I left them out for brevity. From these results you should be able to easily parse out the ssid with some slicing, since it looks like the ssids are all 6 digits long. Doing something like:
print a['href'][-6:]
would do the trick and get you just the ssid.
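If the ssid ever stops being exactly six digits, parsing the query string is more robust than slicing; here is a sketch using the standard library (shown with the Python 3 urllib.parse names; on Python 2 both functions live in the urlparse module):
from urllib.parse import urlparse, parse_qs

href = '/etotama/episode-11-catrat-shuffle-678659?ssid=166007'
query = parse_qs(urlparse(href).query)
print(query.get('ssid', ['no ssid'])[0])  # 166007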

Reading data from a website

I'm trying to read data from a website that contains only text. I'd like to read only the data that follows "&values". I've been able to open the entire website, but I don't know how to get rid of the extraneous data and I don't know any HTML. Any help would be much appreciated.
The contents of that URL look like URL parameters. You could use urlparse.parse_qs to parse them into a dict:
import urllib2
import urlparse
url = 'http://www.tip.it/runescape/gec/price_graph.php?avg=1&start=1327715574&mainitem=10350&item=10350'
response = urllib2.urlopen(url)
content = response.read()
params = urlparse.parse_qs(content)
print(params['values'])
You may want to look into the re module (although if you do eventually move to HTML, regex is not the best solution). Here is a basic example that grabs the text after &values and returns the following number/comma/space combinations:
>>> import re
>>> import urllib2
>>> url = 'http://www.tip.it/runescape/gec/price_graph.php?avg=1&start=1327715574&mainitem=10350&item=10350'
>>> contents = urllib2.urlopen(url).read()
>>> values = re.findall(r'&values=([\d,\s]*)', contents)
>>> values[0].split(',')
['33900000', '33900000', '33900000', #continues....]
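If you need the values as numbers rather than strings, a small follow-up on the same result (assuming every entry is an integer, as the sample suggests):
>>> prices = [int(v) for v in values[0].split(',') if v.strip()]
>>> prices[:3]
[33900000, 33900000, 33900000]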
