Python 3.4 HTTP POST request with cookies

I'm having trouble constructing a method that will do an HTTP POST request with headers and data (username and password) and then retrieve the resulting cookies.
Here's my latest attempt:
import http.cookiejar
import urllib.request

def do_login(username, password):
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
               "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"}
    cj = http.cookiejar.CookieJar()
    req = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    data = {"Username": username, "Password": password}
    req.open("http://example.com/login.php", data)
But I keep getting exceptions whenever I try to change the method. Also, will the response cookies be stored in the CookieJar cj, or is that used only for sending request cookies?

After some research, it turns out that the data cannot be passed directly as an argument to req.open: it has to be URL-encoded and then encoded to bytes. Here's the solution that worked for me:
import http.cookiejar
import urllib.parse
import urllib.request

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
           "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"}
cj = http.cookiejar.CookieJar()
req = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
req.addheaders = list(headers.items())
# The data has to be URL-encoded and then encoded to bytes;
# in Python 3, urllib only accepts bytes as the POST body
data = urllib.parse.urlencode({"Username": username, "Password": password}).encode("UTF-8")
res = req.open("http://example.com/login.php", data)
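To answer the cookie question: HTTPCookieProcessor works in both directions. It extracts every Set-Cookie value from responses into the jar and replays the matching cookies on later requests made through the same opener. A quick check, assuming the login above succeeded (the follow-up URL is just a placeholder):

# Any cookies the login response set are now in the jar:
for cookie in cj:
    print(cookie.name, "=", cookie.value)

# Reusing the same opener sends them back automatically:
res2 = req.open("http://example.com/account.php")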

Related

Can't fetch tabular content from a webpage using requests

I would like to scrape tabular content from the landing page of this website. There are 100 rows on its first page. When I observe the network activity in dev tools, I can see that a GET request is issued to this url https://io6.dexscreener.io/u/ws3/screener3/ with appropriate parameters, which ends up producing JSON content.
However, when I try to mimic that request with the following:
import requests

url = 'https://io6.dexscreener.io/u/ws3/screener3/'

params = {
    'EIO': '4',
    'transport': 'polling',
    't': 'NwYSrFK',
    'sid': 'ztAOHWOb-1ulTq-0AQwi',
}
headers = {
    'accept': '*/*',
    'referer': 'https://dexscreener.com/',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36'
}

with requests.Session() as s:
    s.headers.update(headers)
    res = s.get(url, params=params)
    print(res.content)
I get this response:
`{"code":3,"message":"Bad request"}`
How can I get a response containing the tabular content from that webpage?
Here is a very quick and dirty piece of Python code that does the initial handshake, sets up the websocket connection, and downloads the data in JSON format indefinitely. I haven't tested this code extensively, and I'm not sure exactly which of the handshake steps are necessary, but I have mimicked the browser's behaviour and it seems to work fine:
import requests
from websocket import create_connection
import json

s = requests.Session()
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'}

# Load the landing page first so the session picks up its cookies
url = 'https://dexscreener.com/ethereum'
resp = s.get(url, headers=headers)
print(resp)

# Engine.IO polling handshake; the second response contains the session id
step1 = s.get('https://io3.dexscreener.io/u/ws3/screener3/?EIO=4&transport=polling&t=Nwof-Os')
step2 = s.get('https://io4.dexscreener.io/u/ws3/screener3/?EIO=4&transport=polling&t=Nwof-S5')
obj = json.loads(step2.text[1:])
code = obj['sid']

# Subscribe to the screener channel, then poll once for the initial data
payload = '40/u/ws/screener/consolidated/platform/ethereum/h1/top/1,'
step3 = s.post(f'https://io4.dexscreener.io/u/ws3/screener3/?EIO=4&transport=polling&t=Nwof-Xt&sid={code}', data=payload)
step4 = s.get(f'https://io4.dexscreener.io/u/ws3/screener3/?EIO=4&transport=polling&t=Nwof-Xu&sid={code}')

# Strip the framing; '\x1e' is the non-printable record separator that
# Engine.IO v4 puts between packets in a polling response
d = step4.text.replace('\x1e', '').replace('42/u/ws/screener/consolidated/platform/ethereum/h1/top/1,', '').replace(payload, '')
start = '["screener",'
end = ']["latestBlock",'
dirty = d[d.find(start) + len(start):d.rfind(end)].strip()
clean = json.loads(dirty)
print(clean)

# Initialize the headers needed for the websocket connection
headers = json.dumps({
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-ZA,en;q=0.9,en-GB;q=0.8,en-US;q=0.7,de;q=0.6',
    'Cache-Control': 'no-cache',
    'Connection': 'Upgrade',
    'Host': 'io3.dexscreener.io',
    'Origin': 'https://dexscreener.com',
    'Pragma': 'no-cache',
    'Sec-WebSocket-Extensions': 'permessage-deflate; client_max_window_bits',
    'Sec-WebSocket-Key': 'ssklBDKxAOUt3D47SoEttQ==',
    'Sec-WebSocket-Version': '13',
    'Upgrade': 'websocket',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36'
})
# Then create a connection to the tunnel
ws = create_connection(f"wss://io4.dexscreener.io/u/ws3/screener3/?EIO=4&transport=websocket&sid={code}", headers=headers)
# Then send the initial messages through the tunnel
ws.send('2probe')
ws.send('5')
# Here you will see the messages returned through the tunnel
while True:
    try:
        json_data = json.loads(ws.recv().replace('42/u/ws/screener/consolidated/platform/ethereum/h1/top/1,', ''))
        print(json_data)
    except:
        pass
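One caveat on the read loop above: the bare except swallows every error, including a dropped connection, so the loop spins forever once the socket dies. A slightly safer sketch, assuming the same ws and json objects (in Engine.IO v4 the server sends '2' as a heartbeat ping and expects '3' back):

from websocket import WebSocketConnectionClosedException

prefix = '42/u/ws/screener/consolidated/platform/ethereum/h1/top/1,'
while True:
    try:
        raw = ws.recv()
    except WebSocketConnectionClosedException:
        break                 # server closed the tunnel; stop reading
    if raw == '2':            # Engine.IO heartbeat ping
        ws.send('3')          # reply with a pong so the server keeps us connected
    elif raw.startswith(prefix):
        print(json.loads(raw[len(prefix):]))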

How do you grab tokens in request headers using python requests

In the request headers when logging in, there's a header called "cookie" that changes every time. How would I grab that each time and put it in the headers using python requests?
[screenshot of the network tab in Chrome]
Here's my code:
import requests
import time

proxies = {
    "http": "http://us.proxiware.com:2000"
}
login_data = {'op': 'login-main', 'user': 'UpbeatPark', 'passwd': 'Testingreddit123', 'api_type': 'json'}
comment_data = {'thing_id': 't3_gluktj', 'text': 'epical. redditor', 'id': '#form-t3_gluktjbx2', 'r': 'gaming', 'renderstyle': 'html'}

s = requests.Session()
s.headers.update({'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/82.0.4085.6 Safari/537.36'})
r = s.get('https://old.reddit.com/', proxies=proxies)
time.sleep(2)
r = s.post('https://old.reddit.com/api/login/UpbeatPark', proxies=proxies, data=login_data)
print(r.text)
Here's the output (I know for a fact the password is correct):
{"json": {"errors": [["WRONG_PASSWORD", "wrong password", "passwd"]]}}
This worked for me:
import requests

login_data = {
    "op": "login-main",
    "user": "USER",
    "passwd": "PASS",
    "api_type": "json",
}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/82.0.4085.6 Safari/537.36",
}

s = requests.Session()
r = s.post("https://old.reddit.com/api/login/USER", headers=headers, data=login_data)
print(r.text)
It's essentially the same code you are using, but without the proxy. Can you try turning the proxy off? It might be blocking the cookies.
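As for the original question about grabbing the ever-changing cookie header: a requests.Session does that for you. It stores every Set-Cookie value it receives and sends the matching Cookie header on each subsequent request, so there is normally nothing to copy by hand. To inspect what the session is holding after the login, something like:

for name, value in s.cookies.items():
    print(name, "=", value)   # the cookie(s) the login response set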

python requests not able to make connection to NSE india, Connection error

import requests

x = requests.get('https://www1.nseindia.com/live_market/dynaContent/live_watch/equities_stock_watch.htm')
print(x.status_code)
print(x.content)

This gives a connection error. How can I fix it?
Try this:
import requests

url = "https://www1.nseindia.com/live_market/dynaContent/live_watch/equities_stock_watch.htm"
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36',
           'accept-language': 'en,gu;q=0.9,hi;q=0.8',
           'accept-encoding': 'gzip, deflate, br'}

session = requests.Session()
# First request: pick up the cookies NSE sets on the landing page
request = session.get(url, headers=headers, timeout=5)
cookies = dict(request.cookies)
# Second request: send those cookies back to get the actual content
response = session.get(url, headers=headers, timeout=5, cookies=cookies)
print(response.status_code)
print(response.content)
This is for the first time you access the website in your program. If you're accessing the site multiple times, reuse response = session.get(url, headers=headers, timeout=5, cookies=cookies) for every subsequent access, as sketched below.
Tell me if this works.
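Putting that advice into a hypothetical helper (fetch is just an illustrative name, not part of any library): refresh the cookie set once when a request stops returning 200, then retry:

def fetch(session, url, headers):
    # Get url, refreshing NSE's cookies once if the first attempt fails.
    response = session.get(url, headers=headers, timeout=5)
    if response.status_code != 200:
        # Cookies likely expired: hit the page again for a fresh set,
        # then retry with them attached explicitly.
        cookies = dict(session.get(url, headers=headers, timeout=5).cookies)
        response = session.get(url, headers=headers, timeout=5, cookies=cookies)
    return response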
Try adding a user agent to the headers:
import requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.162 Safari/537.36'}
r = requests.get('https://www1.nseindia.com/live_market/dynaContent/live_watch/equities_stock_watch.htm', headers=headers)
print(r.content)

Login to douban.com with Python displays b''

I want to log in to douban.com with a python requests session:
import requests

url = 'https://www.douban.com/'
logurl = 'https://accounts.douban.com/passport/login_popup'
data = {'username': 'abc#gmail.com',
        'password': 'abcdef'}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'}

se = requests.session()
request = se.post(logurl, data=data, headers=headers)
request1 = se.get(url)
print(request1.content)
This displays b'', and I can't tell whether the login worked or not!
You're getting an empty response, meaning the request is not working properly. You'll want to debug further by looking into the response itself; check the Requests library documentation. request1.status_code and request1.headers might interest you.
The b'' is just the bytes literal prefix; see the Python 3 documentation.
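A minimal debugging sketch along those lines, reusing the variables from the question:

print(request.status_code)    # did the login POST itself succeed?
print(request1.status_code)   # status of the follow-up GET
print(request1.url)           # reveals whether you were redirected, e.g. back to a login page
print(request1.headers.get('Content-Type'))
request1.raise_for_status()   # raises an HTTPError for any 4xx/5xx status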

error in $ failed reading not a valid json value when i try to send requests via python

I've been trying to figure out how to do the 'follow' action on imvu.com using Python, but it always returns the message: "invalid arguments" error in $: failed reading: not a valid json value
import requests

headers = {
    "Origin": "https://secure.imvu.com/",
    "Referer": "https://secure.imvu.com/next/av/Sammy165/",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36",
    "X-IMVU-SAUCE": ""  # removed sauce for account safety
}
url = "https://api.imvu.com/profile/profile-user-696969696/subscriptions"
data = {"id": "https://api.imvu.com/profile/profile-user-175389029"}
req = requests.post(url=url, headers=headers, data=data)
print(req.text)
Have you tried requests.post(url=url, headers=headers, json=data)?
You have to send json.dumps(data) as the body. See the code below:
import requests
import json

headers = {
    "Origin": "https://secure.imvu.com/",
    "Referer": "https://secure.imvu.com/next/av/Sammy165/",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36",
    "X-IMVU-SAUCE": ""  # removed sauce for account safety
}
url = "https://api.imvu.com/profile/profile-user-696969696/subscriptions"
data = {"id": "https://api.imvu.com/profile/profile-user-175389029"}
req = requests.post(url=url, headers=headers, data=json.dumps(data))
print(req.text)
Output:
{"status":"failure","error":"ERROR-GENERIC-001","message":"Permission Denied: You are not allowed to modify this subscription set."}
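For reference, the difference between the two suggestions: json=data makes requests serialize the dict and also set the Content-Type: application/json header for you, while data=json.dumps(data) sends the same body but leaves the Content-Type header up to you. Roughly:

# These send the same JSON body; only the header handling differs:
requests.post(url, headers=headers, json=data)
requests.post(url, headers={**headers, "Content-Type": "application/json"},
              data=json.dumps(data))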
