getting specific images from page - python
I am pretty new to BeautifulSoup. I am trying to print image links from http://www.bing.com/images?q=owl:
import urllib2
from bs4 import BeautifulSoup

redditFile = urllib2.urlopen("http://www.bing.com/images?q=owl")
redditHtml = redditFile.read()
redditFile.close()

soup = BeautifulSoup(redditHtml)
productDivs = soup.findAll('div', attrs={'class': 'dg_u'})
for div in productDivs:
    print div.find('a')['t1']    # works fine
    print div.find('img')['src'] # raises KeyError: 'src'
But this gives only the title, not the image source.
Is there anything wrong?
Edit:
I have edited my source, but I still could not get the image URL.
Bing uses some techniques to block automated scrapers. I tried to print
div.find('img')
and found that they send the image source in an attribute named src2, so the following should work:
div.find('img')['src2']
This is working for me. Hope it helps.
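A minimal sketch of the same idea applied to the question's loop, assuming the dg_u class from the original code; whether Bing serves src or src2 can vary, so img.get() is used to fall back between the two without raising KeyError.
import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen("http://www.bing.com/images?q=owl").read()
soup = BeautifulSoup(html)
for div in soup.findAll('div', attrs={'class': 'dg_u'}):
    img = div.find('img')
    if img is not None:
        # .get() returns None instead of raising KeyError for a missing attribute
        print(img.get('src') or img.get('src2'))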
If you open up the browser developer tools, you'll see that there is an additional async XHR request issued to the http://www.bing.com/images/async endpoint which contains the image search results.
This leads to the 3 main options you have:
simulate that XHR request in your code (see the sketch after this list). You might want to use something more suitable for humans than urllib2; see the requests module. This is the so-called "low-level" approach, going down to the bare metal and the website-specific implementation, which makes this option unreliable, difficult, "heavy", error-prone and fragile
automate a real browser using selenium - stay at the high level. In other words, you don't care how the results are retrieved, what requests are made, or what javascript needs to be executed. You just wait for the search results to appear and extract them.
use Bing Search API (this should probably be option #1)
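Here is a minimal sketch of option 1, assuming the requests and bs4 packages are available; the query parameters below are assumptions, and the real ones (plus any headers) should be copied from the async request you see in the browser's Network tab.
import requests
from bs4 import BeautifulSoup

# Hypothetical parameters -- copy the real ones from the Network tab
params = {"q": "owl", "first": 0}
headers = {"User-Agent": "Mozilla/5.0"}  # many sites reject the default client UA

response = requests.get("http://www.bing.com/images/async",
                        params=params, headers=headers)
soup = BeautifulSoup(response.text, "html.parser")
for img in soup.find_all("img"):
    print(img.get("src") or img.get("src2"))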
Related
Python - Scrapy ecommerce website
I'm trying to scrape the price of this product: http://www.asos.com/au/fila/fila-vintage-plus-ringer-t-shirt-with-small-logo-in-green/prd/9065343?clr=green&SearchQuery=&cid=7616&gridcolumn=2&gridrow=1&gridsize=4&pge=1&pgesize=72&totalstyles=4699
I use the following code, but it returns an empty array:
response.xpath('//*[@id="product-price"]/div/span[2]/text()').extract()
Any help is appreciated, thanks.
Because the site is dynamic (this is what I got when I used the view(response) command in the Scrapy shell; as you can see, the price info doesn't come out).
Solutions:
1. Splash
2. Selenium + PhantomJS
It might also help to check this answer: Empty List From Scrapy When Using Xpath to Extract Values
The price is added later by the browser, which renders the page using javascript code found in the html. If you disable javascript in your browser, you will notice that the page looks a bit different. Also, take a look at the page source (usually that's unaltered) to see that the tag you're looking for doesn't exist (yet). Scrapy doesn't execute any javascript code. It receives the plain html and that's what you have to work with.
If you want to extract data from pages which look the same as in the browser, I recommend using a headless browser like Splash (if you're already using Scrapy): https://github.com/scrapinghub/splash
You can programmatically tell it to download your page, render it and select the data points you're interested in.
The other way is to check for the request made to the Asos API which asks for the product data. In your case, for this product: http://www.asos.com/api/product/catalogue/v2/stockprice?productIds=9065343&currency=AUD&keyStoreDataversion=0ggz8b-4.1&store=AU
I got this url by taking a look at all the XMLHttpRequest (XHR) requests sent in the Network tab found in Developer Tools (on Google Chrome).
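A minimal sketch of that second approach, assuming the requests package; the structure of the JSON response isn't documented here, so print it first and then drill down to the price field you find in it.
import json
import requests

url = ("http://www.asos.com/api/product/catalogue/v2/stockprice"
       "?productIds=9065343&currency=AUD&keyStoreDataversion=0ggz8b-4.1&store=AU")
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
# Inspect the structure first, then pick out the current-price field
print(json.dumps(resp.json(), indent=2))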
You can try to find the JSON inside the HTML (using a regular expression) and parse it:
json_string = response.xpath('//script[contains(., "function (view) {")]/text()').re_first(r'view\(\'([^\']+)')
data = json.loads(json_string)
price = data["price"]["current"]
Python - Beautiful Soup to grab emails from website
I've been trying to figure out a simple way to run through a set of URLs that lead to pages that all have the same layout. We figured out that one issue is that in the original list the URLs are http but then they redirect to https. I am not sure if that then causes a problem in trying to pull the information from the page. I can see the structure of the page when I use Inspector in Chrome, but when I try to set up the code to grab relevant links I come up empty (literally). The most general code I have been using is:
soup = BeautifulSoup(urllib2.urlopen('https://ngcproject.org/program/algirls').read())
links = SoupStrainer('a')
print links
which yields:
a|{}
Given that I'm new to this I've been trying to work with anything that I think might work. I also tried:
mail = soup.find(attrs={'class':'tc-connect-details_send-email'}).a['href']
and
spans = soup.find_all('span', {'class' : 'tc-connect-details_send-email'})
lines = [span.get_text() for span in spans]
print lines
but these don't yield anything either. I am assuming that it's an issue with my code and not one that the data are hidden from being scraped. Ideally I want to have the data passed to a CSV file for each URL I scrape, but right now I need to be able to confirm that the code is actually grabbing the right information. Any suggestions welcome!
If you press CTRL+U in Google Chrome, or right-click > View Source, you'll see that the page is rendered using javascript or similar. urllib is not going to be able to display/download what you're looking for. You'll have to use an automated browser (Selenium is the most popular), which you can drive with Google Chrome / Firefox or a headless browser (PhantomJS). You can then get the information from Selenium, store it, and manipulate it in any way you see fit.
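A minimal Selenium sketch along those lines, assuming chromedriver is installed; the class name tc-connect-details_send-email is taken from the question, and whether the href lives on the element itself or on a child anchor should be checked in the rendered page.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://ngcproject.org/program/algirls")

# Class name taken from the question; adjust after inspecting the rendered DOM
for elem in driver.find_elements_by_class_name("tc-connect-details_send-email"):
    print(elem.get_attribute("href") or elem.text)

driver.quit()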
Python- Downloading a file from a webpage by clicking on a link
I've looked around the internet for a solution to this but none have really seemed applicable here. I'm writing a Python program to predict the next day's stock price using historical data. I don't need all the historical data since inception, as Yahoo Finance provides, but only the last 60 days or so. The NASDAQ website provides just the right amount of historical data and I wanted to use that website. What I want to do is go to a particular stock's profile on NASDAQ, for example (www.nasdaq.com/symbol/amd/historical), and click on the "Download this File in Excel Format" link at the very bottom. I inspected the page's HTML to see if there was an actual link I could just use with urllib to get the file, but all I got was:
<a id="lnkDownLoad" href="javascript:getQuotes(true);"> Download this file in Excel Format </a>
No link. So my question is, how can I write a Python script that goes to a given stock's NASDAQ page, clicks on the "Download this file in Excel Format" link and actually downloads the file from it? Most solutions online require you to know the url where the file is stored, but in this case I don't have access to that. So how do I go about doing this?
Using Chrome, go to View > Developer > Developer Tools.
In this new developer tools UI, change to the Network tab.
Navigate to the place where you would need to click, and click the ⃠ symbol to clear all recent activity.
Click the link, and see if there were any requests made to the server.
If there were, click one, and see if you can reverse engineer the API of its endpoint.
Please be aware that this may be against the website's Terms of Service!
It appears that BeautifulSoup might be the easiest way to do this. I've made a cursory check that the results of the following script are the same as those that appear on the page. You would just have to write the results to a file, rather than print them. However, the columns are ordered differently.
import requests
from bs4 import BeautifulSoup

URL = 'http://www.nasdaq.com/symbol/amd/historical'
page = requests.get(URL).text
soup = BeautifulSoup(page, 'lxml')
tableDiv = soup.find_all('div', id="historicalContainer")
tableRows = tableDiv[0].findAll('tr')
for tableRow in tableRows[2:]:
    row = tuple(tableRow.getText().split())
    print ('"%s",%s,%s,%s,%s,"%s"' % row)
Output:
"03/24/2017",14.16,14.18,13.54,13.7,"50,022,400"
"03/23/2017",13.96,14.115,13.77,13.79,"44,402,540"
"03/22/2017",13.7,14.145,13.55,14.1,"61,120,500"
"03/21/2017",14.4,14.49,13.78,13.82,"72,373,080"
"03/20/2017",13.68,14.5,13.54,14.4,"91,009,110"
"03/17/2017",13.62,13.74,13.36,13.49,"224,761,700"
"03/16/2017",13.79,13.88,13.65,13.65,"44,356,700"
"03/15/2017",14.03,14.06,13.62,13.98,"55,070,770"
"03/14/2017",14,14.15,13.6401,14.1,"52,355,490"
"03/13/2017",14.475,14.68,14.18,14.28,"72,917,550"
"03/10/2017",13.5,13.93,13.45,13.91,"62,426,240"
"03/09/2017",13.45,13.45,13.11,13.33,"45,122,590"
"03/08/2017",13.25,13.55,13.1,13.22,"71,231,410"
"03/07/2017",13.07,13.37,12.79,13.05,"76,518,390"
"03/06/2017",13,13.34,12.38,13.04,"117,044,000"
"03/03/2017",13.55,13.58,12.79,13.03,"163,489,100"
"03/02/2017",14.59,14.78,13.87,13.9,"103,970,100"
"03/01/2017",15.08,15.09,14.52,14.96,"73,311,380"
"02/28/2017",15.45,15.55,14.35,14.46,"141,638,700"
"02/27/2017",14.27,15.35,14.27,15.2,"95,126,330"
"02/24/2017",14,14.32,13.86,14.12,"46,130,900"
"02/23/2017",14.2,14.45,13.82,14.32,"79,900,450"
"02/22/2017",14.3,14.5,14.04,14.28,"71,394,390"
"02/21/2017",13.41,14.1,13.4,14,"66,250,920"
"02/17/2017",12.79,13.14,12.6,13.13,"40,831,730"
"02/16/2017",13.25,13.35,12.84,12.97,"52,403,840"
"02/15/2017",13.2,13.44,13.15,13.3,"33,655,580"
"02/14/2017",13.43,13.49,13.19,13.26,"40,436,710"
"02/13/2017",13.7,13.95,13.38,13.49,"57,231,080"
"02/10/2017",13.86,13.86,13.25,13.58,"54,522,240"
"02/09/2017",13.78,13.89,13.4,13.42,"72,826,820"
"02/08/2017",13.21,13.75,13.08,13.56,"75,894,880"
"02/07/2017",14.05,14.27,13.06,13.29,"158,507,200"
"02/06/2017",12.46,13.7,12.38,13.63,"139,921,700"
"02/03/2017",12.37,12.5,12.04,12.24,"59,981,710"
"02/02/2017",11.98,12.66,11.95,12.28,"116,246,800"
"02/01/2017",10.9,12.14,10.81,12.06,"165,784,500"
"01/31/2017",10.6,10.67,10.22,10.37,"51,993,490"
"01/30/2017",10.62,10.68,10.3,10.61,"37,648,430"
"01/27/2017",10.6,10.73,10.52,10.67,"32,563,480"
"01/26/2017",10.35,10.66,10.3,10.52,"35,779,140"
"01/25/2017",10.74,10.975,10.15,10.35,"61,800,440"
"01/24/2017",9.95,10.49,9.95,10.44,"43,858,900"
"01/23/2017",9.68,10.06,9.68,9.91,"27,848,180"
"01/20/2017",9.88,9.96,9.67,9.75,"27,936,610"
"01/19/2017",9.92,10.25,9.75,9.77,"46,087,250"
"01/18/2017",9.54,10.1,9.42,9.88,"51,705,580"
"01/17/2017",10.17,10.23,9.78,9.82,"70,388,000"
"01/13/2017",10.79,10.87,10.56,10.58,"38,344,340"
"01/12/2017",10.98,11.0376,10.33,10.76,"75,178,900"
"01/11/2017",11.39,11.41,11.15,11.2,"39,337,330"
"01/10/2017",11.55,11.63,11.33,11.44,"29,122,540"
"01/09/2017",11.37,11.64,11.31,11.49,"37,215,840"
"01/06/2017",11.29,11.49,11.11,11.32,"34,437,560"
"01/05/2017",11.43,11.69,11.23,11.24,"38,777,380"
"01/04/2017",11.45,11.5204,11.235,11.43,"40,742,680"
"01/03/2017",11.42,11.65,11.02,11.43,"55,114,820"
"12/30/2016",11.7,11.78,11.25,11.34,"44,033,460"
"12/29/2016",11.24,11.62,11.01,11.59,"50,180,310"
"12/28/2016",12.28,12.42,11.46,11.55,"71,072,640"
"12/27/2016",11.65,12.08,11.6,12.07,"44,168,130"
The script quotes dates and thousands-separated numbers so that each remains a single CSV field.
Dig a little bit deeper and find out what the js function getQuotes() does. You should get a good clue from that. If it all seems too complicated, then you can always use Selenium. It is used to simulate the browser. However, it is much slower than making native network calls. You can find the official documentation here.
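A minimal Selenium sketch of that fallback, assuming Firefox/geckodriver is set up; the element id lnkDownLoad comes from the anchor shown in the question, and the browser profile may need to be configured to save the download without a prompt.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.nasdaq.com/symbol/amd/historical")

# id taken from the anchor shown in the question
driver.find_element_by_id("lnkDownLoad").click()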
Web scraping for divs inserted by scripts
Sorry if this is a silly question. I am trying to use BeautifulSoup and urllib2 in Python to look at a url and extract all divs with a particular class. However, the result is always empty even though I can see the divs when I "inspect element" in Chrome's developer tools. I looked at the page source and those divs were not there, which means they were inserted by a script. So my question is, how can I look for those divs (using their class name) using BeautifulSoup? I want to eventually read and follow hrefs under those divs. Thanks. [Edit] I am currently looking at the H&M website: http://www.hm.com/sg/products/ladies and I am interested in getting all the divs with class 'product-list-item'
Try using selenium to run the javascript:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.python.org")
html = driver.page_source
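Building on that, a sketch of feeding the rendered page to BeautifulSoup for the site in the question; the class name 'product-list-item' is the one the asker mentioned, and the exact markup should be re-checked in the developer tools.
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get("http://www.hm.com/sg/products/ladies")
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

# Class name taken from the question; follow the hrefs found under each div
for div in soup.find_all("div", class_="product-list-item"):
    a = div.find("a")
    if a is not None and a.get("href"):
        print(a["href"])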
Check this link: enter link description here. You can get all the info by changing the url; this link can be found in Chrome dev tools > Network.
The reason why you got nothing from that specific url is simply that the info you need is not there.
So first let me explain a little bit about how that page is loaded in a browser: when you request that page (http://www.hm.com/sg/products/ladies), the literal content is returned in the very first phase (which is what you got from your urllib2 request). Then the browser starts to read/parse the content; basically it tells the browser where to find all the information it needs to render the whole page (e.g. CSS to control layout, additional javascript/urls/pages to populate certain areas, etc.), and the browser does all of that behind the scenes.
When you "inspect element" in Chrome, the page is already fully loaded, and the info you want is not in the original url, so you need to find out which url is used to populate those areas and go after that specific url instead.
So now we need to find out what happens behind the scenes, and a tool is needed to capture all traffic when that page loads (I would recommend Fiddler). As you can see, lots of things happen when you open that page in a browser! (And that's only part of the whole page-loading process.)
So by educated guess, the info you need should be in one of those three "api.hm.com" requests, and the best part is they are already JSON formatted, which means you might not even need BeautifulSoup; the built-in json module can do the job!
OK, now what? Use urllib2 to simulate those requests and get what you want.
P.S. requests is a great tool for this kind of job, you can get it here.
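A minimal sketch of that last step, assuming you have already captured one of the "api.hm.com" URLs with Fiddler or the browser's Network tab; the url below is a placeholder, not the real endpoint.
import json
import requests

api_url = "https://api.hm.com/..."  # placeholder: paste the URL captured from the traffic
resp = requests.get(api_url, headers={"User-Agent": "Mozilla/5.0"})
data = resp.json()  # the response is already JSON, so no HTML parsing is needed
print(json.dumps(data, indent=2))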
Try this one:
from bs4 import BeautifulSoup
import urllib2

page = urllib2.urlopen("http://www.hm.com/sg/products/ladies")
soup = BeautifulSoup(page.read(), 'lxml')

scrapdiv = open('scrapdiv.txt', 'w')
product_lists = soup.findAll("div", {"class": "o-product-list"})
print product_lists
for product_list in product_lists:
    print product_list
    scrapdiv.write(str(product_list))
    scrapdiv.write("\n\n")
scrapdiv.close()
Using BeautifulSoup to parse facebook
So I'm trying to parse public Facebook pages using BeautifulSoup. I've managed to successfully scrape LinkedIn, but I've spent hours trying to get it to work on Facebook with no luck. The code I'm trying to use looks like this:
for urls in my_urls:
    try:
        page = urllib2.urlopen(urls)
        soup = BeautifulSoup(page)
        info = soup.find_all("div", class_="fsl fwb fcb")
        info2 = info.findall('a')
The part that's frustrating me is that I can get the title element out, and I can even get pretty far down the document, but I can't get to the part I need. This line successfully grabs the pageTitle:
info = soup.find_all("title", attrs={"id": "pageTitle"})
This line can get pretty far down the list of elements, but can't go any farther:
info = soup.find_all(id="pagelet_timeline_main_column")
Here's a sample page that I'm trying to parse; I want the current city from it: https://www.facebook.com/100004210542493 and here's a quick screenshot of what the part I want looks like: http://prntscr.com/1t8xx6
I feel like I'm really close, but I just can't figure it out. Thanks in advance for any help! EDIT 2: I should also mention that I can successfully print the whole soup and visually find the part I need, but for whatever reason the parsing just won't work the way it should.
Try looking at the content returned by curl or wget. What you are seeing in the browser is what has been rendered after the javascript has been executed.
wget https://www.facebook.com/100004210542493
You might want to use mechanize or selenium, since you want to simulate a client browser (instead of handling the raw content). Another issue related to it might be: Beautiful Soup cannot find a CSS class if the object has other classes, too
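A minimal Selenium sketch of that suggestion; the class names "fsl fwb fcb" come from the question, and Facebook's markup (and its scraping restrictions) may well make this unreliable. It also works around the multiple-class matching issue linked above by matching on one class and checking the rest.
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get("https://www.facebook.com/100004210542493")
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

# Match elements carrying all three classes from the question,
# rather than the single space-joined string "fsl fwb fcb"
for div in soup.find_all("div", class_="fsl"):
    classes = div.get("class", [])
    if "fwb" in classes and "fcb" in classes:
        for a in div.find_all("a"):
            print(a.get_text())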