I would like to get store info from this website: http://www.hilife.com.tw/storeInquiry_street.aspx
The request method I found with Chrome's developer tools is POST.
Using the code below, I still cannot access the data.
Could someone give me a hint?
import requests
from bs4 import BeautifulSoup
head = {
'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'
}
payload = {
'__EVENTTARGET':'AREA',
'__EVENTARGUMENT':'',
'__LASTFOCUS':'',
'__VIEWSTATE':'/wEPDwULLTE0NjI2MjI3MjMPZBYCAgcPZBYMAgEPZBYCAgEPFgIeBFRleHQFLiQoJyNzdG9yZUlucXVpcnlfc3RyZWV0JykuYXR0cignY2xhc3MnLCdzZWwnKTtkAgMPEA8WBh4NRGF0YVRleHRGaWVsZAUJY2l0eV9uYW1lHg5EYXRhVmFsdWVGaWVsZAUJY2l0eV9uYW1lHgtfIURhdGFCb3VuZGdkEBUSCeWPsOWMl+W4ggnln7rpmobluIIJ5paw5YyX5biCCeWunOiYree4ownmlrDnq7nnuKMJ5qGD5ZyS5biCCeiLl+agl+e4ownlj7DkuK3luIIJ5b2w5YyW57ijCeWNl+aKlee4ownlmInnvqnnuKMJ6Zuy5p6X57ijCeWPsOWNl+W4ggnpq5jpm4TluIIJ5bGP5p2x57ijCemHkemWgOe4ownmlrDnq7nluIIJ5ZiJ576p5biCFRIJ5Y+w5YyX5biCCeWfuumahuW4ggnmlrDljJfluIIJ5a6c6Jit57ijCeaWsOeruee4ownmoYPlnJLluIIJ6IuX5qCX57ijCeWPsOS4reW4ggnlvbDljJbnuKMJ5Y2X5oqV57ijCeWYiee+qee4ownpm7LmnpfnuKMJ5Y+w5Y2X5biCCemrmOmbhOW4ggnlsY/mnbHnuKMJ6YeR6ZaA57ijCeaWsOerueW4ggnlmInnvqnluIIUKwMSZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnFgECB2QCBQ8QDxYGHwEFCXRvd25fbmFtZR8CBQl0b3duX25hbWUfA2dkEBUWBuS4reWNgAbmnbHljYAG5Y2X5Y2ABuilv+WNgAbljJfljYAJ5YyX5bGv5Y2ACeilv+Wxr+WNgAnljZflsa/ljYAJ5aSq5bmz5Y2ACeWkp+mHjOWNgAnpnKfls7DljYAJ54OP5pel5Y2ACeixkOWOn+WNgAnlkI7ph4zljYAJ5r2t5a2Q5Y2ACeWkp+mbheWNgAnnpZ7lsqHljYAJ5aSn6IKa5Y2ACeaymem5v+WNgAnmoqfmo7LljYAJ5riF5rC05Y2ACeWkp+eUsuWNgBUWBuS4reWNgAbmnbHljYAG5Y2X5Y2ABuilv+WNgAbljJfljYAJ5YyX5bGv5Y2ACeilv+Wxr+WNgAnljZflsa/ljYAJ5aSq5bmz5Y2ACeWkp+mHjOWNgAnpnKfls7DljYAJ54OP5pel5Y2ACeixkOWOn+WNgAnlkI7ph4zljYAJ5r2t5a2Q5Y2ACeWkp+mbheWNgAnnpZ7lsqHljYAJ5aSn6IKa5Y2ACeaymem5v+WNgAnmoqfmo7LljYAJ5riF5rC05Y2ACeWkp+eUsuWNgBQrAxZnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnFgECBGQCBw8PFgIfAAUJ5Y+w5Lit5biCZGQCCQ8PFgIfAAUG5YyX5Y2AZGQCCw8WAh4LXyFJdGVtQ291bnQCAhYEZg9kFgJmDxUFBEg2NDYP5Y+w5Lit5aSq5bmz5bqXIOWPsOS4reW4guWMl+WNgDQwNOWkquW5s+i3rzcy6JmfCzA0LTIyMjkwOTI4Azg3MGQCAQ9kFgJmDxUFBDQzMTgP5Y+w5Lit5rC45aSq5bqXM+WPsOS4reW4guWMl+WNgDQwNOWkquWOn+i3r+S6jOautTI0MOiZn+S4gOaok+WFqOmDqAswNC0yMzY5MDA1NwM4NzFkZFHxmtQaBu2Yr9cvskfEZMWn57JLRfjPYBFYDy+tHr6X',
'__VIEWSTATEGENERATOR':'B77476FC',
'__EVENTVALIDATION':'/wEdACtWrrgS52/ojbuYEYvRDXHZ2ryV+Ed5kWYedGp5zjHs3Neeeo/9TTvNTdElW+hiVA25mZnLEQUYPOZFLnuVu9jOT+Zq1/xceVgC7GxWRM+A8tOS3xZBjlhgzlx5UN3H3D0UrdtoyeScvRqxFL8L3gGKRyCJu029oItLX7X6c7SW7C7IVzuAeZ6t9kFMeOQus7MtrV7YeOXrlOP8inI96UkaJEU7Ro3FtK29+B+NamR2j4qInKVwJ4+JD3cjWm5buZdnOhT/ISzrljaf+F9GnVjm4dGchVglf1PxMMHl7EEoLjs20TZ856RDCGXvzK/6J+tEFp7zDvFTYGoeHtuHy+YF/IoR/CRFBAaEkys48FIAUCSUKnxACPyW6Ar2guIADjOqYue7v4fhV1jIq65P/lwanoaJpIsboCbjakbTYnqK8BLngMayrRehyT58dmj3SbzY1mOtzSNnakdpUxaC0EpOJ7rhB52A2FKsxy5EbP0PwHHuHNMa9dit0AxPMfYUP1/LWuYPWMX0W8tyEMKxoUcYsCb+qJLF9yXPgM6c8sIQTRxcBokm1PGzFN4M6vnSF8OfFSC+c0frLZ4GH6l497B/5oDIjq7Bz4/cPeGCavvh9NUqPcmzJIr8Abx9vjtMGpZSwBdVY3bR/ARswIDrmWLt1qMD4jcRvGPxBa8nsRR8HNdVINbR+iOSFLwVhBCg+s+mV5NeTdOKvAeggfOsJHmJKL0ApQSCyjY5kEiOvo2JAI07C08ENIFF7HpDTaGCi93i2WnmdDrYoaoLZi96dRTlk4xoWV9tc7rd9X/wE6QoKHxFtADSz9WkgtbUn88lAhY2++OiqWCaQZobh7K26ndH1z34JXVB7C/AiOEV+CCb97oVyooxWullV44iFQ0isVBjYC1XWS3eGf1PwMS++A+EjQTkl9VJhIRDoS6sg2mD7mikimBjQGvZX/lcYtKSrjY=',
'CITY':'台中市',
'AREA':'北區'
}
res = requests.post('http://www.hilife.com.tw/storeInquiry_street.aspx', data=payload, headers = head)
res.encoding = 'utf-8'
print(res.text)
I see that you are missing the Content-Type: application/x-www-form-urlencoded header; you have to send that header and submit the data in x-www-form-urlencoded format. I recommend testing with Postman before writing the code. Also make sure you have the relevant permission before crawling a third-party website. Happy crawling.
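In addition, the hard-coded __VIEWSTATE and __EVENTVALIDATION values may already be stale, since ASP.NET regenerates them on each page load. Here is a minimal sketch (the field names come from the question; scraping the hidden inputs is an assumption about how the page is built) that GETs the page first to pick up fresh hidden fields and then POSTs with the Content-Type header set explicitly:

import requests
from bs4 import BeautifulSoup

url = 'http://www.hilife.com.tw/storeInquiry_street.aspx'
head = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36',
    # requests would set this automatically for a dict payload, but it is added explicitly here as suggested above.
    'Content-Type': 'application/x-www-form-urlencoded',
}

with requests.Session() as s:
    # GET the page once so the session holds fresh ASP.NET hidden fields.
    first = s.get(url, headers=head)
    soup = BeautifulSoup(first.text, 'html.parser')

    payload = {
        '__EVENTTARGET': 'AREA',
        '__EVENTARGUMENT': '',
        '__LASTFOCUS': '',
        'CITY': '台中市',
        'AREA': '北區',
    }
    # Copy the current value of every hidden input (__VIEWSTATE, __EVENTVALIDATION, ...).
    for hidden in soup.find_all('input', type='hidden'):
        name = hidden.get('name')
        if name:
            payload[name] = hidden.get('value', '')

    res = s.post(url, data=payload, headers=head)
    res.encoding = 'utf-8'
    print(res.text)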
I tried to write a little app to parse this page: https://apps.microsoft.com/store/category/Business
I cannot get the full HTML code; the body tag is not complete.
import requests
def get_data(url):
    headers = {
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"
    }
    req = requests.get(url, headers=headers)
    with open("index.html", "w") as file:
        file.write(req.text)

get_data("https://apps.microsoft.com/store/category/Business")
You cannot just parse this page with requests, because it is rendered client-side through JavaScript.
You need to use a tool like:
pyppeteer
Selenium
Or try to reverse engineer the page and call its APIs directly.
(Or see if Microsoft has a public API you can call to get the info you want.)
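For example, here is a minimal Selenium sketch (assuming a recent Chrome and a matching chromedriver are installed; the URL is the one from the question) that lets the JavaScript render before saving the HTML:

import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def get_rendered_html(url):
    options = Options()
    options.add_argument("--headless=new")  # run without opening a browser window
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        time.sleep(5)  # crude wait for the client-side rendering to finish
        return driver.page_source
    finally:
        driver.quit()

html = get_rendered_html("https://apps.microsoft.com/store/category/Business")
with open("index.html", "w", encoding="utf-8") as file:
    file.write(html)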
Like the title says, I'm trying to send a request to a URL using requests with headers, but when I try to print the status code nothing appears in the terminal. I checked my internet connection and switched to another one to test, but nothing changed.
Here's my code:
import requests
from bs4 import BeautifulSoup
from requests.exceptions import ReadTimeout
link = "https://www.exampleurl.com"
header={
"accept-language": "tr,en;q=0.9,en-GB;q=0.8,en-US;q=0.7",
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36 Edg/99.0.1150.36'
}
r = requests.get(link)
print(r.status_code)
When I execute this script, nothing appears, and I don't know why. If someone can help me, I would be very glad.
You can use requests.head(link) like below:
r = requests.head(link)
print(r.status_code)
I ran into the same problem; the get() never returns.
Since you have created a header variable, I thought about using it:
r = requests.get(link, headers=header)
Now I get status 200 returned.
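If the call still hangs even with the headers, adding a timeout at least makes the failure visible instead of blocking forever. A small sketch, reusing link and header from the question (the 10-second value is arbitrary):

import requests
from requests.exceptions import Timeout

try:
    # Give up after 10 seconds instead of waiting indefinitely.
    r = requests.get(link, headers=header, timeout=10)
    print(r.status_code)
except Timeout:
    print("The request timed out; the server never answered.")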
I'm trying to make a web scraper with Python. I made it with Selenium, but it is really slow. Then I saw that I could speed up the project by replicating the POST request that one of the page's buttons makes.
import requests
from bs4 import BeautifulSoup
url = "http://vidtome.host/tnoz00am9j8p"
myobj = {
'op': 'download1',
'code':'tnoz00am9j8p',
'hash': 'the hash',
'imhuman': 'Proceed to video'
}
x = requests.post(url, data=myobj)
print(x.text)
That's the code, and it works, but only the first time.
When I ran it the first time it showed no error and printed out the page with the right changes, but when I ran it again later it still gave no error yet printed the page with no changes, as if it had done nothing.
How can that be possible?
requests is faster, but you cannot extract dynamically rendered content with it. However, this is probably not the issue here.
The problem is that you do not have access to the website.
If it is a basic human-check system, you could try adding a User-Agent header to your request:
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36 Edg/88.0.705.68',
}
r = requests.get(url, headers=headers)
If this does not work, I would recommend looking into the data that you are passing. Maybe the site validates it and it contains expired values (the hash, for example) or something similar.
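If the hash is the culprit, one way around it is to GET the page first within a session, read the current form values, and POST them back. This is only a sketch under the assumption that the hash is served as a hidden input in the download form; the field names op, code, hash and imhuman come from the question:

import requests
from bs4 import BeautifulSoup

url = "http://vidtome.host/tnoz00am9j8p"

with requests.Session() as s:
    page = s.get(url)
    soup = BeautifulSoup(page.text, "html.parser")

    # Start from the fields the form is expected to contain, then overwrite
    # them with whatever the page currently serves (a fresh hash included).
    myobj = {"op": "download1", "code": "tnoz00am9j8p", "imhuman": "Proceed to video"}
    for hidden in soup.find_all("input", type="hidden"):
        name = hidden.get("name")
        if name:
            myobj[name] = hidden.get("value", "")

    x = s.post(url, data=myobj)
    print(x.text)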
I am trying to get the number of followers of a Facebook page, i.e. https://web.facebook.com/marlenaband, using the Python requests library. When I view the page source in the browser, the text "142 people follow this" appears to be inside a commented-out section of the page. But I am not seeing it in the response text I get with requests and BeautifulSoup. Would someone please help me figure out how to get this? Thanks
Here is the code I am using:
import requests
from bs4 import BeautifulSoup as bs
url = 'https://web.facebook.com/marlenaband'
headers = {
'user-agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 UBrowser/7.0.185.1002 Safari/537.36',
}
res = requests.get(url, headers=headers)
print(res.content)
I actually got it using requests by modifying the headers to this:
headers = {
'accept-language':'en-US,en;q=0.8',
}
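Even with the right headers, Facebook often ships that markup inside HTML comments, so BeautifulSoup will not see it as regular tags. Here is a sketch (assuming the follower text really is inside a comment node, as the question's view of the page source suggests) that re-parses each comment and searches it:

import requests
from bs4 import BeautifulSoup as bs
from bs4 import Comment

url = 'https://web.facebook.com/marlenaband'
headers = {'accept-language': 'en-US,en;q=0.8'}

res = requests.get(url, headers=headers)
soup = bs(res.text, 'html.parser')

# Collect every HTML comment and parse its contents as markup again.
for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    inner = bs(comment, 'html.parser')
    hit = inner.find(string=lambda text: 'people follow this' in text)
    if hit:
        print(hit.strip())
        break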
I have taken a look at the login form on the forum I'm trying to log in to, but I'm still confused about what information needs to be passed along through POST. Is it the HTML id fields? The name fields? The type?
Also, when you successfully log on, it shows a "Login Successful!" quick redirection page and then redirects you to index.php. The result I'm getting is index.php, but not as a logged-in user. I believe I'm passing the cookies through correctly; I just think the wrong 'data' is being passed at login.
import requests
from bs4 import BeautifulSoup
import json
import string
headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"}
payload = {"action": "login",
"username": "myusername",
"password": "**********"}
with requests.Session() as s:
    s0 = s.post("http://minewind.com/forums/ucp.php?mode=login", data=payload, headers=headers)
    print(s.cookies)
    s1 = s.get("http://minewind.com/forums/index.php", cookies=s0.cookies, headers=headers)
    perty = BeautifulSoup(s1.content, "html.parser")
    perty.prettify()
    for links in perty.find_all('a'):
        print(links.get('href'))
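For experimenting: phpBB login forms usually carry extra hidden inputs (a session id and similar tokens) plus the submit button value, and the POST generally needs all of them, not just username and password. The following is a hedged sketch; the form id "login" and the idea of copying every input are assumptions, not verified details of minewind.com:

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"}
login_url = "http://minewind.com/forums/ucp.php?mode=login"

with requests.Session() as s:
    # Load the login page so the session gets its cookies and we can read
    # the form's input names exactly as the server expects them.
    login_page = s.get(login_url, headers=headers)
    soup = BeautifulSoup(login_page.content, "html.parser")
    form = soup.find("form", id="login")  # assumption: phpBB marks the login form with id="login"

    payload = {}
    for field in form.find_all("input"):
        name = field.get("name")
        if name:
            payload[name] = field.get("value", "")
    payload["username"] = "myusername"
    payload["password"] = "**********"

    s.post(login_url, data=payload, headers=headers)
    index = s.get("http://minewind.com/forums/index.php", headers=headers)
    # Heuristic check: a logged-in phpBB page normally shows a logout link.
    print("logged in" if "Logout" in index.text else "still anonymous")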