How can I get data from this link into a JSON? - python

I am trying to extract the search results from this link into a JSON file with Python, but the usual requests approach doesn't seem to work in this case. How can I extract all the results?
url= https://apps.usp.org/app/worldwide/medQualityDatabase/reportResults.html?country=Ethiopia%2BGhana%2BKenya%2BMozambique%2BNigeria%2BCambodia%2BLao+PDR%2BPhilippines%2BThailand%2BViet+Nam%2BBolivia%2BColombia%2BEcuador%2BGuatemala%2BGuyana%2BPeru&period=2017%2B2016%2B2015%2B2014%2B2013%2B2012%2B2011%2B2010%2B2009%2B2008%2B2007%2B2006%2B2005%2B2004%2B2003&conclusion=Both&testType=Both&counterfeit=Both&recordstart=50
My code:
import requests
from bs4 import BeautifulSoup

r = requests.get(url)
results_page = BeautifulSoup(r.content, 'lxml')
Why am I not getting the full source code of the page?
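If the results table is filled in by JavaScript after the page loads, requests will only ever see the initial HTML skeleton. One way around that, also suggested for the related Twitter question below, is to let a real browser render the page first with Selenium. A minimal sketch, assuming Selenium and a matching chromedriver are installed:
import time
from selenium import webdriver
from bs4 import BeautifulSoup

url = 'https://apps.usp.org/app/worldwide/medQualityDatabase/reportResults.html?...'  # full query string as above

driver = webdriver.Chrome()
driver.get(url)
time.sleep(5)  # crude wait for the JavaScript-rendered results to appear
results_page = BeautifulSoup(driver.page_source, 'lxml')
driver.quit()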

Related

Why won't BeautifulSoup extract all the HTML from a public twitter page?

I am trying to write some code to extract tweets from a public Twitter page (the Nike store) using the Python bs4 module. When I print the page HTML to the console, only some of the HTML is printed: when I search the console output (Ctrl+F) for the specific class values of a tag, I get zero results. Why is this happening?
Here is a code snippet:
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen

if __name__ == '__main__':
    # Read the webpage into page_html and close the connection
    first_page = 'https://twitter.com/nikestore'
    url_client = urlopen(first_page)
    page_html = url_client.read()
    url_client.close()
    print(page_html)
I came across the accepted answer in the following link. The answer also suggests using Selenium to circumvent the problem.
Problem while scraping twitter using beautiful soup

How to get the url from extracted information from a website

So basically I am stuck on the problem where I don't know how to get the url from the extracted data from a website.
Here is my code:
import requests
from bs4 import BeautifulSoup
req = requests.get('https://api.randomtube.xyz/video.get?chan=2ch.hk&board=b&page=1')
soup = BeautifulSoup(req.content, "html.parser")
print(soup.prettify())
I get a lot of information on output, but the only thing I need is the url, I hope someone can help me.
P.S:
It gives me this information:
{"response":{"items":[{"url":"https:\/\/2ch.hk\/b\/src\/262671212\/16440825183970.webm","type":"video\/webm","filesize":"20259","width":1280,"height":720,"name":"1521967932778.webm","board":"b","thread":"262671212"},{"url":"https:\/\/2ch.hk\/b\/src\/261549765\/16424501976450.webm","type":"video\/webm","filesize":"12055","width":1280,"height":720,"name":"1526793203110.webm","board":"b","thread":"261549765"}...
But I only need this part out of all of it:
https:\/\/2ch.hk\/b\/src\/261549765\/16424501976450.webm (Not exactly this url, but just as an example)
You can do it this way:
data = req.json()  # parse the JSON body instead of treating it as HTML
url_array = []
for item in data['response']['items']:
    url_array.append(item['url'])
I guess if the API returns JSON data, it's better to just parse it directly.
The url returns JSON data. BeautifulSoup can't parse JSON; to grab the data, you can follow the next example.
import requests

data = requests.get('https://api.randomtube.xyz/video.get?chan=2ch.hk&board=b&page=1').json()
url = data['response']['items'][0]['url']
if url:
    url = url.replace('.webm', '.mp4')
print(url)
Output:
https://2ch.hk/b/src/263361969/16451225633240.mp4
The problem is that you are telling BeautifulSoup to parse JSON data as HTML. You can get the URL you need more directly with the following code:
import json
import requests

req = requests.get('https://api.randomtube.xyz/video.get?chan=2ch.hk&board=b&page=1')
data = json.loads(req.content)
my_url = data['response']['items'][0]['url']

How do I decode a webpage using the Requests and BeautifulSoup libraries in Python?

I tried writing some code for a project I am doing. First, I'll show you my code.
import requests
from bs4 import BeautifulSoup
url = 'http://github.com'
r = requests.get(url)
r_html = r.text
soup = BeautifulSoup(r_html, "html.parser")
title = soup.find('span', 'articletitle')
The project is to be able to decode a webpage. Basically, you put any url into the variable url and use Python to return the page's basic HTML code in text format. I am using the requests and BeautifulSoup libraries for Python.
I tried running this code; it should be right, but when it runs, it doesn't return anything. Can you help me?
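One likely reason nothing shows up is that the snippet never prints its result, and github.com has no <span> with the class articletitle, so find() returns None. A minimal sketch that prints the page title instead, just to confirm the request and the parsing work:
import requests
from bs4 import BeautifulSoup

url = 'http://github.com'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')

# find() returns None when nothing matches, so guard before using the result
title = soup.find('title')
print(title.get_text() if title else 'no <title> element found')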

Accessing a website in python

I am trying to get all the urls on a website using Python. At the moment I am just copying the website's html into the python program and then using code to extract all the urls. Is there a way I could do this straight from the web without having to copy the entire html?
In Python 2, you can use urllib2.urlopen:
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
In Python 3, you can use urllib.request.urlopen:
import urllib.request

with urllib.request.urlopen('http://python.org/') as response:
    html = response.read()
If you have to perform more complicated tasks like authentication or passing parameters, I suggest having a look at the requests library.
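For instance, a small requests sketch with query parameters and HTTP basic auth (the URL and credentials are placeholders):
import requests

# requests builds the query string from the params dict and adds the auth header
response = requests.get(
    'http://example.com/search',
    params={'q': 'python'},
    auth=('user', 'password'),
)
html = response.text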
The most straightforward would probably be urllib.urlopen if you're using Python 2, or urllib.request.urlopen if you're using Python 3 (you have to do import urllib or import urllib.request first, of course). That way you get a file-like object from which you can read (i.e. f.read()) the html document.
Example for python 2:
import urllib

f = urllib.urlopen("http://stackoverflow.com")
http_document = f.read()
f.close()
The good news is that you seem to have done the hard part which is analyzing the html document for links.
You might want to use the bs4 (BeautifulSoup) library.
Beautiful Soup is a Python library for pulling data out of HTML and XML files.
You can install bs4 with the following command at the cmd line: pip install beautifulsoup4
import urllib2
import urlparse
from bs4 import BeautifulSoup

url = "http://www.google.com"
response = urllib2.urlopen(url)
content = response.read()
soup = BeautifulSoup(content, "html.parser")

for link in soup.find_all('a', href=True):
    print urlparse.urljoin(url, link['href'])
You can simply use the combination of requests and BeautifulSoup.
First, make an HTTP request using requests to get the HTML content. You will get it as a Python string, which you can manipulate as you like.
Take that HTML string and feed it to BeautifulSoup, which parses the DOM for you, then collect all the URLs, i.e. <a> elements.
Here is an example of how to fetch all links from StackOverflow:
import requests
from bs4 import BeautifulSoup, SoupStrainer

response = requests.get('http://stackoverflow.com')
html_str = response.text

# SoupStrainer restricts parsing to <a> elements only
bs = BeautifulSoup(html_str, 'html.parser', parse_only=SoupStrainer('a'))
for a_element in bs:
    if a_element.has_attr('href'):
        print(a_element['href'])
Sample output:
/questions/tagged/facebook-javascript-sdk
/questions/31743507/facebook-app-request-dialog-keep-loading-on-mobile-after-fb-login-called
/users/3545752/user3545752
/questions/31743506/get-nuspec-file-for-existing-nuget-package
/questions/tagged/nuget
...

urllib keeps freezing while trying to pull HTML data from a website - is my code correct?

I'm trying to build a simple Python script on Mac OS X that has four parts to it:
go to a defined website and grab all the HTML using urllib
parse the HTML data to find a table of numbers (using beautifulsoup)
with those numbers do a simple calculation
print out the results in a table in numerical order
I'm having trouble with step 1. I can grab the data with urllib using this code:
import urllib.request
y=urllib.request.urlopen('my target website url')
x=y.read()
print(x)
But it keeps freezing once it has returned the HTML and the Python shell is non-responsive.
Since you mentioned requests, I think it's a great solution.
import requests
from bs4 import BeautifulSoup

r = requests.get('http://example.com')
html = r.content
soup = BeautifulSoup(html, 'html.parser')
table = soup.find("table", {"id": "targettable"})
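From there, steps 2 to 4 might look something like this sketch (the doubling is just a stand-in for whatever calculation you need, and it assumes every cell of the hypothetical targettable holds a number):
# Hypothetical continuation: extract the numbers, apply the calculation,
# and print the results in numerical order
if table is not None:
    values = [float(td.get_text(strip=True)) for td in table.find_all('td')]
    for result in sorted(value * 2 for value in values):
        print(result)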
As suggested by jonrsharpe, if you're concerned about the size of the response returned by that url, you can check the size first before printing or parsing.
With requests:
r = requests.get('http://example.com')
print(r.headers['content-length'])
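Note that a plain GET still downloads the whole body before you inspect the header; if size is the concern, a HEAD request fetches only the headers:
import requests

# HEAD returns the response headers without transferring the body
r = requests.head('http://example.com')
print(r.headers.get('content-length'))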
