I'm completely new to Python and trying to get stuck in, but I'm struggling with requests. I run a node for a small cryptocurrency project and am trying to write a Python script that scrapes my wallet balance and Telegrams it to me once a day. I've managed the Telegram bot, and I've practiced pulling values out of page source with BeautifulSoup just fine; it's getting a response that contains my balance that's frustrating me.
Here's the URL with my balance on: https://www.hpbscan.org/address/0x7EC332476fCA4Bcd20176eE06F16960b5D49333e/
The value obviously changes, so I don't think I can just do a GET request for the page above and parse it with BeautifulSoup. I loaded up Developer Tools and saw that there was a POST request:
METHOD: POST
URL: https://www.hpbscan.org/HpbScan/addrs/getAddressDetailInfo
Request Headers:
Host: www.hpbscan.org
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0
Accept: */*
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate, br
X-Requested-With: XMLHttpRequest
Content-Type: application/json;charset=utf-8
Content-Length: 46
DNT: 1
Connection: keep-alive
Referer: https://www.hpbscan.org/address/0x7EC332476fCA4Bcd20176eE06F16960b5D49333e/
Pragma: no-cache
Cache-Control: no-cache
Request Body:
["0x7EC332476fCA4Bcd20176eE06F16960b5D49333e"]
The response (at least in a browser) is JSON-formatted data that does indeed contain the balance I need.
Here's where I've got to so far in trying to recreate the above request:
import requests
import json
url = "https://www.hpbscan.org/HpbScan/addrs/getAddressDetailInfo"
payload = '["0x7EC332476fCA4Bcd20176eE06F16960b5D49333e"]'
headers = """
'Host': 'www.hpbscan.org'
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0'
'Accept': '*/*'
'Accept-Language': 'en-GB,en;q=0.5'
'Accept-Encoding': 'gzip, deflate, br'
'X-Requested-With': 'XMLHttpRequest'
'Content-Type': 'application/json;charset=utf-8'
'Content-Length': '46'
'DNT': '1'
'Connection': 'keep-alive'
'Referer': 'https://www.hpbscan.org/address/0x7EC332476fCA4Bcd20176eE06F16960b5D49333e/'
'Pragma': 'no-cache'
'Cache-Control': 'no-cache'
"""
data = requests.post(url, data=payload, headers=headers)
print(data.text)
I've never used requests before, so I'm a bit in the dark. I've tried fiddling with things based on what I can see other people doing, but it's no use; currently I'm getting "AttributeError: 'str' object has no attribute 'items'".
I'd imagine it's something along the lines of me not specifying the request headers and body correctly, or maybe the response being in JSON format, which my code can't understand?
Any help would be massively appreciated :)
You should change headers from a string to a dict. Here's your final code:
import requests
import json
url = "https://www.hpbscan.org/HpbScan/addrs/getAddressDetailInfo"
payload = '["0x7EC332476fCA4Bcd20176eE06F16960b5D49333e"]'
headers = {
'Host': 'www.hpbscan.org',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
'Accept': '*/*',
'Accept-Language': 'en-GB,en;q=0.5',
'Accept-Encoding': 'gzip, deflate, br',
'X-Requested-With': 'XMLHttpRequest',
'Content-Type': 'application/json;charset=utf-8',
'Content-Length': '46',
'DNT': '1',
'Connection': 'keep-alive',
'Referer': 'https://www.hpbscan.org/address/0x7EC332476fCA4Bcd20176eE06F16960b5D49333e/',
'Pragma': 'no-cache',
'Cache-Control': 'no-cache'}
data = requests.post(url, data=payload, headers=headers)
print(data.text)
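Since the response body is JSON, you can also call data.json() to parse it straight into a Python dict instead of printing the raw text.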
The headers should be a dictionary:
import requests
import json
url = "https://www.hpbscan.org/HpbScan/addrs/getAddressDetailInfo"
payload = '["0x7EC332476fCA4Bcd20176eE06F16960b5D49333e"]'
headers = {
'Host': 'www.hpbscan.org',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
'Accept': '*/*',
'Accept-Language': 'en-GB,en;q=0.5',
'Accept-Encoding': 'gzip, deflate, br',
'X-Requested-With': 'XMLHttpRequest',
'Content-Type': 'application/json;charset=utf-8',
'Content-Length': '46',
'DNT': '1',
'Connection': 'keep-alive',
'Referer': 'https://www.hpbscan.org/address/0x7EC332476fCA4Bcd20176eE06F16960b5D49333e/',
'Pragma': 'no-cache',
'Cache-Control': 'no-cache'}
data = requests.post(url, data=payload, headers=headers)
print(json.loads(data.text))
The final bit converts the JSON response from the server into a Python dictionary so you can continue to make use of it within your code.
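From there you can pull out just the balance for the Telegram message. The exact key depends on the shape of the JSON this endpoint returns, so the field name below is a placeholder; print the parsed dict once and adjust:
info = json.loads(data.text)  # equivalent to data.json()
balance = info.get('balance')  # 'balance' is a placeholder key; use the real key path from the actual response
print(balance)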
Related
I'm trying to produce the JSON data so I can search for available camp rentals. The only way seems to be a request with headers; otherwise I get a Not Authorized message when just using the URL. Unfortunately I'm having no luck this way either, since I keep getting a Session has expired message. I'm not a web developer, so I'm not sure what the cause is. Any help would be greatly appreciated. Thank you.
import time
import sys
import requests
url = "https://reservations.piratecoveresort.com/irmdata/api/irm?sessionID=_rdpirm01&arrival=2021-10-26&departure=2021-10-28&people1=1&people2=0&people3=0&people4=0&promocode=&groupnum=&rateplan=RACK&changeResNum=&roomtype=&roomnum=&propertycode=&locationcode=&preferences=&preferences=&preferences=&preferences=&preferences=WTF&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&masterType=&page=&start=0&limit=12&multiRoom=false"
payload={}
headers = {
'authority': 'reservations.piratecoveresort.com',
'method': 'GET',
'path': '/irmdata/api/irm?sessionID=_rdpirm01&arrival=2021-10-26&departure=2021-10-28&people1=1&people2=0&people3=0&people4=0&promocode=&groupnum=&rateplan=RACK&changeResNum=&roomtype=&roomnum=&propertycode=&locationcode=&preferences=&preferences=&preferences=&preferences=&preferences=WTF&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&preferences=&masterType=&page=&start=0&limit=12&multiRoom=false',
'scheme': 'https',
'accept': 'application/json, text/plain, */*',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9',
'authentication': '',
'content-type': 'application/json; charset=utf-8',
'cookie': 'rdpirm01=',
'dnt': '0',
'referer': 'https://reservations.piratecoveresort.com/irmng/',
#'sec-ch-ua': "Chromium";v="94", "Google Chrome";v="94", ";Not A Brand";v="99",
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': "Windows",
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36',
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
Result
Session Expired
You're getting "Session Expired" because the session cookie (and possibly the authentication token too) has expired. You can fix this by using a requests Session, which will manage those session cookies for you. Read more here:
https://docs.python-requests.org/en/master/user/advanced/
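A minimal sketch of that approach, assuming the booking front end at /irmng/ is what issues the session cookie (worth confirming in the network tab):
import requests

with requests.Session() as s:
    # Visiting the booking front end first lets the server set fresh
    # session cookies on s.cookies, instead of reusing expired ones
    # copied from the browser.
    s.get('https://reservations.piratecoveresort.com/irmng/')
    # Reuse the full irmdata/api/irm URL from the question here.
    api_url = 'https://reservations.piratecoveresort.com/irmdata/api/irm?sessionID=_rdpirm01&arrival=2021-10-26&departure=2021-10-28'
    response = s.get(api_url, headers={'accept': 'application/json, text/plain, */*'})
    print(response.status_code, response.text)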
I want to scrape Facebook company pages for their data (if they have any).
The problem is that when I try to retrieve the HTML, I get what looks like the Hebrew version of it (I'm located in Israel).
This is part of the result:
�1u�9X�/.������~�O+$B\^����y�����e�;�+
Code:
import requests
from bs4 import BeautifulSoup
headers = {'accept': '*/*',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-GB,en;q=0.9,en-US;q=0.8,hi;q=0.7,la;q=0.6',
'cache-control': 'no-cache',
'dnt': '1',
'pragma': 'no-cache',
'referer': 'https',
'sec-fetch-mode': 'no-cors',
'sec-fetch-site': 'cross-site',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
}
url = 'https://www.facebook.com/pg/google/about/'
def fetch(URL):
    try:
        response = requests.get(url=URL, headers=headers).text
        print(response)
    except:
        print('Could not retrieve data, or connect')
fetch(url)
Is there a way to get the EN version of the site? Some subdomain? Or should I use a proxy in the request?
What you're seeing isn't the Hebrew version of the site but a compressed response from the server: the request advertises accept-encoding: gzip, deflate, br, and while requests decodes gzip and deflate transparently, it can't decode br (Brotli) without an extra package. As a quick fix, you can remove the accept-encoding header from the request:
import requests
from bs4 import BeautifulSoup
headers = {
'accept': '*/*',
# 'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-GB,en;q=0.9,en-US;q=0.8,hi;q=0.7,la;q=0.6',
'cache-control': 'no-cache',
'dnt': '1',
'pragma': 'no-cache',
'referer': 'https',
'sec-fetch-mode': 'no-cors',
'sec-fetch-site': 'cross-site',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
}
url = 'https://www.facebook.com/pg/google/about/'
def fetch(URL):
    try:
        response = requests.get(url=URL, headers=headers).text
        print(response)
    except:
        print('Could not retrieve data, or connect')
fetch(url)
Prints the uncompressed page:
<!DOCTYPE html>
<html lang="en" id="facebook" class="no_js">
<head><meta charset="utf-8" /><meta name="referrer" content="origin-when-crossorigin" id="meta_referrer" /><script>window._cstart=+new Date();</script><script>function envFlush(a){function b(b){for(var c in a)b[
...and so on.
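If you'd rather keep the accept-encoding header, another option is to install a Brotli decoder (for example the brotli package) so requests can decompress br responses too; out of the box it only handles gzip and deflate.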
I am trying to log in to a site using Python (requests) and keep getting a 400 Bad Request error.
I have tried different header formats, and even copied the headers from different browsers (Chrome, Edge, Firefox), but I always get the 400 error.
I've tried browsing around but can't find anything that helps.
import requests
with requests.Session() as c:
    url = 'https://developer.clashofclans.com/api/login'
    e = 'xxx#xxx.xxx'
    p = 'yyyyy'
    header = {'authority': 'developer.clashofclans.com',
              'method': 'POST',
              'path': '/api/login',
              'scheme': 'https',
              'accept': '*/*',
              'accept-encoding': 'gzip, deflate, br',
              'accept-language': 'en-IN,en-US;q=0.9,en;q=0.8',
              'content-length': '57',
              'content-type': 'application/json',
              'cookie': 'cookieconsent_status=dismiss',
              'origin': 'https://developer.clashofclans.com',
              'referer': 'https://developer.clashofclans.com/',
              'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
              'x-requested-with': 'XMLHttpRequest'}
    login_data = dict(email=e, password=p)
    x = c.post(url, data=login_data, headers=header)
    print(x)
Some websites expect the data in JSON format. In requests you can easily do this by using the json parameter, so your code will be something like this:
x = c.post(url, json=login_data, headers=header)
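Putting it together, a minimal sketch of the whole request with that change. The hand-copied browser headers can usually be dropped entirely here; requests computes content-length itself, and a stale hard-coded one can produce malformed requests:
import requests

url = 'https://developer.clashofclans.com/api/login'
login_data = dict(email='xxx#xxx.xxx', password='yyyyy')

with requests.Session() as c:
    # json= serializes login_data and sets Content-Type: application/json,
    # so none of the copied browser headers are needed for that.
    x = c.post(url, json=login_data)
    print(x.status_code, x.text)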
I have the simplest POST request code:
import requests
headers = {
'origin': 'https://jet.com',
'accept-encoding': 'gzip, deflate, br',
'x-csrf-token': 'IzaENk9W-Xzv9I5NcCJtIf9h_nT24p5fU-Tk',
'jet-referer': '/product/detail/87e89b3ce17f4742ab6d72aeaaa5480d?gclid=CPzS982CgdMCFcS1wAodABwIOQ',
'x-requested-with': 'XMLHttpRequest',
'accept-language': 'en-US,en;q=0.8',
'cookie': 'akacd_phased_release=3673158615~rv=53~id=041cdc832c1ee67c7be18df3f637ad43; jet.csrf=_JKKPyR5fKD-cPDGmGv8AJk5; jid=7292a61d-af8f-4d6f-a339-7f62afead9a0; jet-phaser=%7B%22experiments%22%3A%5B%7B%22variant%22%3A%22a%22%2C%22version%22%3A1%2C%22id%22%3A%22a_a_test16%22%7D%2C%7B%22variant%22%3A%22slp_categories%22%2C%22version%22%3A1%2C%22id%22%3A%22slp_categories%22%7D%2C%7B%22variant%22%3A%22on_cat_nav_clicked%22%2C%22version%22%3A1%2C%22id%22%3A%22catnav_load%22%7D%2C%7B%22variant%22%3A%22zipcode_table%22%2C%22version%22%3A1%2C%22id%22%3A%22zipcode_table%22%7D%5D%2C%22id%22%3A%222982c0e7-287e-42bb-8858-564332ada868%22%7D; ak_bmsc=746D16A88CE3AE7088B0CD38DB850B694F8C5E56B1650000DAA82659A1D56252~plJIR8hXtAZjTSjYEr3IIpW0tW+u0nQ9IrXdfV5GjSfmXed7+tD65YJOVp5Vg0vdSqkzseD0yUZUQkGErBjGxwmozzj5VjhJks1AYDABrb2mFO6QqZyObX99GucJA834gIYo6/8QDIhWMK1uFvgOZrFa3SogxRuT5MBtC8QBA1YPOlK37Ecu1WRsE2nh55E24F0mFDx5hXcfBAhWdMne6NrQ88JE9ZDxjW5n8qsh+QAHo=; _sdsat_landing_page=https://jet.com/product/detail/87e89b3ce17f4742ab6d72aeaaa5480d?gclid=CPzS982CgdMCFcS1wAodABwIOQ|1495705823651; _sdsat_session_count=1; AMCVS_A7EE579F557F617B7F000101%40AdobeOrg=1; AMCV_A7EE579F557F617B7F000101%40AdobeOrg=-227196251%7CMCIDTS%7C17312%7CMCMID%7C11996417004070294145733272597342763775%7CMCAID%7CNONE%7CMCAAMLH-1496310624%7C3%7CMCAAMB-1496310625%7Chmk_Lq6TPIBMW925SPhw3Q%7CMCOPTOUT-1495713041s%7CNONE; __qca=P0-949691368-1495705852397; mm_gens=Rollout%20SO123%20-%20PDP%20Grid%20Image%7Ctitle%7Chide%7Cattr%7Chide%7Cprice%7Chide~SO19712%20HP%20Rec%20View%7Clast_viewed%7Cimage-only~SO17648%20-%20PLA%20PDP%7Cdesc%7CDefault%7Cbuybox%7Cmodal%7Cexp_cart%7Chide-cart%7Ctop_caro%7CDefault; jcmp_productSku=882b1010309d48048b8f3151ddccb3cf; _sdsat_all_pages_canary_variants=a_a_test16:a|slp_categories:slp_categories|catnav_load:on_cat_nav_clicked|zipcode_table:zipcode_table; _sdsat_all_pages_native_pay_eligible=No; _uetsid=_uet6ed8c6ab; _tq_id.TV-098163-1.3372=ef52068e069c26b9.1495705843.0.1495705884..; _ga=GA1.2.789964406.1495705830; _gid=GA1.2.1682210002.1495705884; s_cc=true; __pr.NaN=6jvgorz8tb; mm-so17648=gen; __pr.11xw=xqez1m3cvl; _sdsat_all_pages_login_status=logged-out; _sdsat_jid_cookie=7292a61d-af8f-4d6f-a339-7f62afead9a0; _sdsat_phaser_id=2982c0e7-287e-42bb-8858-564332ada868; _sdsat_all_pages_jet_platform=desktop; _sdsat_all_pages_site_version=3.860.1495036770896|2017-05-16 20:35:36 UTC; _sdsat_all_pages_canary_variants_2=a_a_test16:a~slp_categories:slp_categories~catnav_load:on_cat_nav_clicked~zipcode_table:zipcode_table; jet=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6WyJmMmUwMjI1NS1iODFkLTRlOTktOGU1Yi0yZGI1MjU0ZTdjNzUiXSwiamNtcEhpc3RvcnkiOltbXV0sImlwWmlwY29kZSI6WyIyMTA2MSJdLCJjbGllbnRUaWNrZXQiOlsiZXlKMGVYQWlPaUpLVjFRaUxDSmhiR2NpT2lKSVV6STFOaUo5LmV5SmpiR2xsYm5SZmFXUWlPaUl3Tm1JMlkyTTNaVGRtTnpVME16TmhPREU0T0RjelpUWmpZMkV4WTJRelppSXNJbWx6Y3lJNkltcGxkQzVqYjIwaUxDSmhkV1FpT2lKM1pXSmpiR2xsYm5RaWZRLnlKMXdoYklDVml4TE1iblliV0xQY1RvdF9EWUo3MjFYQkdFMzBpUktpdTQiXSwicHJvbW9jb2RlIjpbIlNQUklORzE1Il0sInBsYSI6W3RydWVdLCJmcmVlU2hpcHBpbmciOltmYWxzZV0sImpjbXAiOlt7ImpjbXAiOiJwbGE6Z2dsOm5qX2R1cl9nZW5fcGF0aW9fX2dhcmRlbl9hMjpwYXRpb19fZ2FyZGVuX2dyaWxsc19fb3V0ZG9vcl9jb29raW5nX2dyaWxsX2NvdmVyc19hMjpuYTpwbGFfNzg0NzQ0NTQyXzQwNTY4Mzg3NzA2X3BsYS0yOTM2MjcyMDMzNDE6bmE6bmE6bmE6Mjo4ODJiMTAxMDMwOWQ0ODA0OGI4ZjMxNTFkZGNjYjNjZiIsImNvZGUiOiJQTEExNSIsInNrdSI6Ijg4MmIxMDEwMzA5ZDQ4MDQ4YjhmMzE1MWRkY2NiM2NmIn1dLCJpYXQiOjE0OTU3MDU4OTh9.6OEM9e9fTyUZdFGju19da4rEnFh8kPyg8wENmKyhYgc; '
'bm_sv=360FA6B793BB42A17F395D08A2D90484~BLAlpOUET7ALPzcGziB9dbZNvjFjG3XLQPFGCRTk+2bnO/ivK7G+kOe1WXpHgIFmyZhniWIzp2MpGel1xHNmiYg0QOLNqourdIffulr2J9tzacGPmXXhD6ieNGp9PAeTqVMi+2kSccO1+JzO+CaGFw==; s_tps=30; s_pvs=173; mmapi.p.pd=%221759837076%7CDwAAAApVAgDxP2Qu1Q4AARAAAUJz0Q1JAQAmoW6kU6PUSKeaIXVTo9RIAAAAAP%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FAAZEaXJlY3QB1Q4BAAAAAAAAAAAAt8wAAH0vAQC3zAAABQDZlQAAAmpAfP%2FVDgD%2F%2F%2F%2F%2FAdUO1Q7%2F%2FwYAAAEAAAAAAc9dAQCNFgIAADyXAABuDVKACdUOAP%2F%2F%2F%2F8B1Q7VDv%2F%2FCgAAAQAAAAABs2ABANweAgAAiY0AAMCzlXtx1Q4A%2F%2F%2F%2F%2FwHVDtUO%2F%2F8GAAABAAAAAAORSwEATPoBAJJLAQBO%2BgEAk0sBAFD6AQABt8wAAAYAAADYlQAAHMPK3ZbVDgD%2F%2F%2F%2F%2FAdUO1Q7%2F%2FwYAAAEAAAAAAc5dAQCJFgIAAbfMAAAGAAAAmpgAAFAf9YUU1Q4A%2F%2F%2F%2F%2FwHVDtUO%2F%2F8EAAABAAAAAAR0YwEA1R4CAHVjAQDWHgIAdmMBANgeAgB3YwEA2x4CAAG3zAAABAAAAAAAAAAAAUU%3D%22; mmapi.p.srv=%22fravwcgus04%22; mmapi.e.PLA=%22true%22; mmapi.p.uat=%7B%22PLATraffic%22%3A%22true%22%7D; _sdsat_lt_pages_viewed=6; _sdsat_pages_viewed=6; _sdsat_traffic_source=',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
'content-type': 'application/json',
'accept': 'application/json, text/javascript, */*; q=0.01',
'referer': 'https://jet.com/product/detail/87e89b3ce17f4742ab6d72aeaaa5480d?gclid=CPzS982CgdMCFcS1wAodABwIOQ',
'authority': 'jet.com',
'dnt': '1',
}
data = '{"zipcode":"21061","sku":"87e89b3ce17f4742ab6d72aeaaa5480d","origination":"PDP"}'
r=requests.post('https://jet.com/api/product/v2', headers=headers, data=data)
print(r)
It returns 200.
Now I want to convert this simple request to a Scrapy Request:
body = '{"zipcode":"21061","sku":"87e89b3ce17f4742ab6d72aeaaa5480d","origination":"PDP"}'
yield Request(url = 'https://jet.com/api/product/v2', callback=self.parse_jet_page, meta={'data':data}, method="POST", body=body, headers=self.jet_headers)
It returns 400; it looks like the headers are being overwritten or something. Or is there a bug?
I guess the error is caused by cookies.
By default, the "cookie" entry in your HTTP headers is overridden by a built-in downloader middleware, CookiesMiddleware. Scrapy expects you to use Request.cookies for passing cookies.
If you do need to pass cookies directly in Request.headers (instead of using Request.cookies), you'll need to disable the built-in CookiesMiddleware. You may simply set COOKIES_ENABLED=False in settings.
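A sketch of both options, meant to live inside your spider; headers_without_cookie and the cookie value below are placeholders (split your browser cookie string into name/value pairs):
# Option 1: let CookiesMiddleware manage cookies -- drop 'cookie' from
# the headers and pass the pairs via Request.cookies instead.
yield Request(
    url='https://jet.com/api/product/v2',
    method='POST',
    body=body,
    headers=headers_without_cookie,  # self.jet_headers minus the 'cookie' key
    cookies={'jid': '7292a61d-af8f-4d6f-a339-7f62afead9a0'},
    callback=self.parse_jet_page,
)

# Option 2: keep the raw 'cookie' header as-is and disable the
# middleware by adding this line to settings.py:
# COOKIES_ENABLED = False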
I'm trying to use the requests module from Python to make a post on http://hastebin.com/,
but I've been failing and don't know what to do anymore. Is there any way I can actually make a post on the site? Here's my current code:
import requests
payload = "s2345"
headers = {
'Host': 'hastebin.com',
'Connection': 'keep-alive',
'Content-Length': '5',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'Origin': 'http://hastebin.com',
'X-Requested-With': 'XMLHttpRequest',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.130 Safari/537.36',
'Content-Type': 'application/json; charset=UTF-8',
'Referer': 'http://hastebin.com/',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'en-US,en;q=0.8'
}
req = requests.post('http://hastebin.com/',headers = headers, params=payload)
print (req.json())
Looking over the provided haste client code, the server expects a raw POST of the file contents, without a specific content type. The client also posts to the /documents path, not the root URL.
It's also not picky about headers, so just leave those for requests to set; the following works for me and creates a new document on the site:
import requests
payload = "s2345"
response = requests.post('http://hastebin.com/documents', data=payload)
if response.status_code == 200:
    print(response.json()['key'])
Note that I used data here, not the params option, which sets URL query parameters.
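As a follow-up, the returned key identifies the new document; assuming hastebin's usual URL scheme, you can build a shareable link from it:
print('http://hastebin.com/' + response.json()['key'])  # link to the new paste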