Extract full HTML of a website using pyppeteer in Python [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed last year.
I'm using the below code to extract full HTML:
cont = await page1.content()
The website I intend to extract from is:
https://www.mohmal.com/en
which is a website for making temporary email accounts. What I actually want to do is read the content of received emails, but with the above code I could not extract the HTML of the inner frame where the received emails' content is placed. How can I do so?

Did you try using urllib?
You can use the urllib module to read HTML pages, though note it only fetches the raw HTML and will not execute JavaScript or load frames.
from urllib.request import urlopen
f = urlopen("https://www.google.com")
print(f.read())
f.close()

Related

How to extract URL of Facebook image [closed]

Closed 2 years ago.
Given a FB url like:
https://www.facebook.com/photo.php?fbid[LONGUSERID]&set=a.313002535549859&type=3&theater
How could I extract the real photo URL using PHP or Python?
Normally the actual URL looks like this ( as seen in Chrome Network tab)
https://scontent.fbru1-1.fna.fbcdn.net/v/t31.0-1/cp0/p32x32/11942095_139657766378816_623531952343456734_o.jpg?_nc_cat=106&_nc_sid=0081f9&_nc_ohc=VpijQtyWbUQAX-fsPMj&_nc_ht=scontent.fbru1-1.fna&oh=eb4435eed183716c807b405d0d57c3a4&oe=5F674BAB
But is there a way to automate this extraction with a script? Any example would be appreciated.
The simplest example:
I just fetched the HTML page and split the text into fragments on double quotes. Then I checked whether the .jpg extension appears in each fragment.
import requests
from html import unescape
from urllib.parse import unquote
url = "https://www.facebook.com/photo.php?fbid=445552432123146"
response = requests.get(url)
if response:
    lines = response.text.split('"')
    for line in lines:
        if ".jpg" in line:
            print(unquote(unescape(line)))
else:
    print("fail!")
With the help of Selenium, you could instead locate the image element in the rendered HTML properly.
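The quote-splitting trick above can also be packaged as a plain function, which makes it easy to test without hitting the network (extract_jpg_urls is a hypothetical name):

```python
from html import unescape
from urllib.parse import unquote

def extract_jpg_urls(html_text):
    """Split the document on double quotes and keep fragments containing .jpg."""
    urls = []
    for fragment in html_text.split('"'):
        if ".jpg" in fragment:
            urls.append(unquote(unescape(fragment)))
    return urls
```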

Simple and performant way to save public list of IPs into python list [closed]

Closed 2 years ago.
What's a simple and performant way to load an online published list of IP addresses like this one into a standard Python list? Example:
ip_list = ['109.70.100.20','185.165.168.229','51.79.86.174']
The HTML parsing library BeautifulSoup seems way too sophisticated for this simple structure.
It's not that BeautifulSoup is too sophisticated; it's that the content type is plain text, not HTML. There are several libraries for downloading content, and requests is popular. If you use its text property, it will perform any decoding and decompression needed:
import requests
resp = requests.get("https://www.dan.me.uk/torlist/")
ip_list = resp.text.split()
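If you want to be defensive about stray tokens in the response, the standard library's ipaddress module can validate each entry. A small sketch (parse_ip_list is a hypothetical helper name):

```python
import ipaddress

def parse_ip_list(text):
    """Keep only whitespace-separated tokens that parse as valid IP addresses."""
    ips = []
    for token in text.split():
        try:
            ips.append(str(ipaddress.ip_address(token)))
        except ValueError:
            pass  # skip anything that is not an IPv4/IPv6 address
    return ips
```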

Extracting HTML code from a list of URLs [closed]

Closed 5 years ago.
I want to extract an HTML value from many URLs on the same domain. For example, the value is a person's name and the domain is Facebook. Given a URL like
https://www.facebook.com/mohamed.nazem2
if you open it you will see the name Mohamed Nazem, which appears in the HTML as:
‏‎Mohamed Nazem‎‏ ‏(ناظِم)‏
Likewise, for this Facebook URL
https://www.facebook.com/zuck
the value is Mark Zuckerberg.
So the value at the first URL is >Mohamed Nazem< and at the second URL it's Mark Zuckerberg. Hopefully you get what I mean.
To fetch the HTML page for each URL you will need to use something like the requests library. To install it, use pip install requests, and then use it in your code like so:
import requests
response = requests.get('https://facebook.com/zuck')
print(response.text)
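Once you have the HTML, the name typically appears in the page's <title> tag. A minimal sketch of pulling it out with a regular expression (page_title is a hypothetical helper name; note that Facebook serves heavily scripted pages and may require login, so this only works when the title is present in the raw HTML):

```python
import re

def page_title(html_text):
    """Return the contents of the <title> tag, or None if there is none."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html_text, re.S | re.I)
    return match.group(1).strip() if match else None
```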

How to read the top 10 headlines from the front page of Reddit using its API? [closed]

Closed 8 years ago.
I am new to Python and I want to use the Reddit API to retrieve the top 10 headlines on the front page of Reddit. I tried to read the API documentation but I am not able to understand how to proceed.
It would be great if someone could give me an example.
Thanks
Here's a quick example of how to download the JSON data you want. Basically, open the URL, download the data in JSON format, and use json.loads() to load it into a dictionary:
try:
    from urllib.request import urlopen
except ImportError:  # Python 2
    from urllib2 import urlopen
import json

url = 'http://www.reddit.com/r/python/.json?limit=10'
jsonDownload = urlopen(url)
jsonData = json.loads(jsonDownload.read())
From there, you can print out 'jsonData', write it to a file, parse it, whatever.
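Reddit's listing JSON nests the posts under data.children, each with its own data.title. A small sketch of pulling the headlines out of the parsed dictionary (top_headlines is a hypothetical helper name):

```python
def top_headlines(listing, count=10):
    """Return the first `count` post titles from a parsed Reddit listing dict."""
    children = listing["data"]["children"]
    return [child["data"]["title"] for child in children[:count]]
```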

How do I get the HTML of a web page if I have the URL in Python [closed]

Closed 8 years ago.
So let's say I have this URL: https://www.python.org/
and I want to download the page's source into a .txt file named python_source.txt.
How would I do that?
Use urllib2. Here's how it's done:
import urllib2
response = urllib2.urlopen('https://www.python.org/')
content = response.read()
Now you can save the content in any text file.
The Python package urllib does just this. The documentation gives a very clear example of what you want to do:
import urllib.request
local_filename, headers = urllib.request.urlretrieve('https://www.python.org/')
with open(local_filename) as f:
    html = f.read()
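Putting the question's goal together in Python 3: download the page source and write it to python_source.txt. A sketch using only the standard library (save_page_source is a hypothetical name; the charset is read from the response headers, falling back to UTF-8):

```python
from urllib.request import urlopen

def save_page_source(url, path):
    """Download a page and write its decoded source to a text file."""
    with urlopen(url) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        text = response.read().decode(charset, errors="replace")
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

# Example: save_page_source('https://www.python.org/', 'python_source.txt')
```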
