I have a parser for products from the STEPN marketplace. To receive a JSON response, you need to send the session cookie of an authorized account along with the request.
# how the parser works
import requests

# cookie taken from the browser's developer tools
cookies = {'SESSIONIDD2': '7951767220820838781:1658220355588:1400231'}
r = requests.get(
    'https://api.stepn.com/run/orderlist?order=2001&chain=103&refresh=true&page=0&type=600&gType=&quality=&level=0&bread=0',
    cookies=cookies,
)
# get a JSON response with the necessary data
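Assuming the request succeeds, the payload can then be decoded with requests' built-in JSON helper:

data = r.json()  # parse the JSON body into Python objects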
But after some time the session in the cookies expires, and I have to open the browser, log in again, and copy fresh cookies.
I tried to log in via requests.Session (passing all the headers and cookies), but received 'Incorrect username/password' in response:
with requests.Session() as session:
    r = session.get('https://m.stepn.com/')
    # I also got the string for this request from the developer tools
    r = session.get(f'https://api.stepn.com/run/login?account={email}&password={password}&type=3')
    # get {"code":201003,"msg":"Incorrect username/password"}
I've recently reverse-engineered STEPN's web authentication (the email and password encryption). Here is my solution in Rust: https://github.com/Numenorean/stepn-password. You can rebuild it as a library and call the needed function from Python (or C) via FFI; after that you only need to send a correct auth request.
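As a rough illustration of that FFI route (the library name and the encrypt_password symbol below are assumptions for the sketch, not the repo's actual exports): if the Rust crate were rebuilt as a cdylib exposing a C-ABI function, it could be called from Python with ctypes:

import ctypes

# Hypothetical sketch: assumes the crate is built with crate-type = ["cdylib"]
# and exports something like
#   #[no_mangle] pub extern "C" fn encrypt_password(pw: *const c_char) -> *mut c_char
lib = ctypes.CDLL('./libstepn_password.so')
lib.encrypt_password.argtypes = [ctypes.c_char_p]
lib.encrypt_password.restype = ctypes.c_char_p

encrypted = lib.encrypt_password(b'my_plain_password')
# send the encrypted value in the login request instead of the raw password
print(encrypted.decode())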
Related
I am trying to write an Azure Function to manage SSO between two services. The first one will host the link to the HTTP-triggered Azure Function, which should then respond with the formatted SAML Response, which in turn gets sent to the consumption URL as a POST. But I can only produce GET-style responses with the azure.functions.HttpResponse method needed to return outputs from Azure Functions (unless I'm wrong).
Alternatively, I've tried to set the cookie that I get back from sending the SAML Response with the Python requests method, but the consumption URL doesn't seem to care that the cookie is there and just brings me back to the login page.
The SP in this situation is Absorb LMS, and I can confirm that the SAML Response is formatted correctly, because submitting it from an HTML form works fine (I've also tried returning that form as the body of the azure.functions.HttpResponse, but I just get HTTP errors which I can't make heads or tails of).
import requests
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    body = {"SAMLResponse": "<b64 encoded saml response and assertion>"}
    response = requests.post(url="<acs url>", headers=headers, data=body)

    # forward the SP's response headers and redirect the browser to the ACS URL
    out_headers = dict(response.headers)
    out_headers['Location'] = "<acs url>"
    return func.HttpResponse(headers=out_headers, status_code=302)
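Since submitting the SAML Response from an HTML form works, one approach worth trying (a sketch under that assumption, not a verified fix) is to return the auto-submitting form to the browser instead of POSTing server-side, so the session cookie ends up in the user's browser rather than in the function's requests session:

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # placeholders as in the question; fill in the real values
    acs_url = "<acs url>"
    saml_response = "<b64 encoded saml response and assertion>"
    html = f"""<html><body onload="document.forms[0].submit()">
      <form method="post" action="{acs_url}">
        <input type="hidden" name="SAMLResponse" value="{saml_response}"/>
      </form>
    </body></html>"""
    return func.HttpResponse(html, mimetype='text/html', status_code=200)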
My goal is to log in to my application through Python requests. I was able to get a token, as expected, but passing it by GET isn't enough. So I want to store the token in a cookie, pass it along, and maybe the browser can log in.
So, let me summarize what I did (this is pseudocode):
session = requests.Session()
session.get('<url>salt')
r = session.get('<url>login', params={'username': username, 'password': password})
token = r.headers['token']
I discovered this by watching the requests during login. The token is passed to the application afterwards. So, how can I store the token from "r" as a cookie?
You can simply access your session cookies using:
client = requests.session()
cook = client.cookies
extracted_token_value = client.cookies['token']
# this will print your cookies and the token
print(cook)
print(extracted_token_value)
# updating your header now:
client.headers.update({'New-Header': extracted_token_value})
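If you actually want to store the token back as a cookie (as the question asks) rather than in a header, the session's cookie jar has a set() method:

client.cookies.set('token', extracted_token_value)  # sent automatically on later requests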
I have a quick question regarding HTTP Basic Authentication after a redirect.
I am trying to log in to a website which, for operational reasons, immediately redirects me to a central login site using an HTTP 302 response. In my testing, it appears that the Requests module does not send my credentials to the central login site after the redirect. As seen in the code snippet below, I am forced to extract the redirect URL from the response object and attempt the login again.
My question is simply this:
is there a way to force Requests to re-send login credentials after a redirect off-host?
For portability reasons, I would prefer not to use a .netrc file. Also, the provider of the website has made url_login static but has made no such claim about url_redirect.
Thanks for your time!
CODE SNIPPET
import requests
url_login = '<url_login>'
myauth = ('<username>', '<password>')
login1 = requests.request('get', url_login, auth=myauth)
# this login fails; response object contains the login form information
url_redirect = login1.url
login2 = requests.request('get', url_redirect, auth=myauth)
# this login succeeds; response object contains a welcome message
UPDATE
Here is a more specific version of the general code above.
The first request() returns an HTTP 200 response and has the form information in its text field.
The second request() returns an HTTP 401 response with 'HTTP Basic: Access denied.' in its text field.
(Of course, the login succeeds when provided with valid credentials.)
Again, I am wondering whether I can achieve my desired login with only one call to requests.request().
import requests
url_login = 'http://cddis-basin.gsfc.nasa.gov/CDDIS_FileUpload/login'
myauth = ('<username>', '<password>')
with requests.session() as s:
    login1 = s.request('get', url_login, auth=myauth)
    url_earthdata = login1.url
    login2 = s.request('get', url_earthdata, auth=myauth)
My solution to this would be to use a Session. Here is how you can implement it:
import requests
s = requests.session()
url_login = "<loginUrl>"
payload = {
"username": "<user>",
"password": "<pass>"
}
req1 = s.post(url_login, data=payload)
# Reuse the same session for the next request to avoid the "Access denied":
req2 = s.get(url_earthdata)  # url_earthdata as defined in the question
This should solve your problem.
This isn't possible with Requests, by design. Re-sending credentials automatically after a redirect would be a security vulnerability: if an attacker can control the redirect URL, the credentials would be sent straight to the attacker and compromised. So credentials are stripped from cross-host redirect calls.
There's a thread about this on GitHub:
https://github.com/psf/requests/issues/2949
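That said, if you fully trust the redirect target, you can opt out of this protection by overriding the Session method that strips credentials on a host change (rebuild_auth); a minimal sketch using the placeholders from the question:

import requests

class KeepAuthSession(requests.Session):
    """Re-send the Authorization header after off-host redirects.
    Only do this if you fully trust wherever url_login redirects."""
    def rebuild_auth(self, prepared_request, response):
        # the default implementation strips auth when the host changes;
        # overriding with a no-op keeps the credentials attached
        pass

s = KeepAuthSession()
login1 = s.get('<url_login>', auth=('<username>', '<password>'))
# the credentials now survive the 302, so a single call suffices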
I have a problem with a simple authorization-and-upload API script.
When authorized, the client receives several cookies, including a PHPSESSID cookie (in the browser).
I use requests.post method with form data for authorization:
r = requests.post(url, headers=self.headers, data=formData)
self.cookies = requests.utils.dict_from_cookiejar(r.cookies)
Headers are used for custom User-Agent only.
Authorization is 100% fine (there is a logout link on the page).
Later, I try to upload data using the authorized session cookies:
r = requests.post(url, files=files, data=formData, headers=self.headers, cookies=self.cookies)
But the site rejects the request. If I compare the requests from the script and Google Chrome (using Wireshark), there are no differences in the request body.
The only difference is that the script sends 2 cookies, while Google Chrome sends 7.
Update: I double-checked; the first request does receive 7 cookies. The post method just ignores half of them...
My mistake was that I was assigning the cookies from each subsequent API request to the session cookie dictionary. On every request after logging in, the stored cookies were 'reset' by the incoming response cookies; since the auth cookies are set only by the login request, they were lost on the next request.
After each authorized request I now use update() instead of assignment:
self.cookies.update(requests.utils.dict_from_cookiejar(r.cookies))
This solves my issue; the upload works fine!
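In other words, using the attribute names from the snippets above:

# wrong: assignment replaces the stored dict, dropping the auth cookies set at login
self.cookies = requests.utils.dict_from_cookiejar(r.cookies)
# right: update() merges the new cookies in, keeping the auth cookies
self.cookies.update(requests.utils.dict_from_cookiejar(r.cookies))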
I have access to an admin page by using a basic HTTP authentication system.
This page loads its data using JavaScript, retrieving JSON from another URL that I can see in the Firefox web dev tools (Ctrl+Shift+I, then going to the Network tab and reloading the page).
If I copy and paste this URL in the same instance of my browser, I retrieve the JSON data I need.
So:
1. Using Firefox, I connect to the admin page and provide the username/passwd.
2. Using the Firefox Webdev toolbox, I retrieve the URL used to retrieve the JSON data I want.
3. I copy and paste this URL and get the JSON data I need, ready to be parsed.
Now, I would like to do the same automatically using Python 3.
I use Requests to make it easier. However, if I try to retrieve the URL found in step 3 directly, I get a 401 authentication error:
import requests
url = "http://xxx/services/users?from=0&to=50"
r = requests.get(url, auth=('user', 'passwd'))
r.status_code
>>> 401
I can do an authenticated request on the admin URL (something like http://xxx/admin-ui/) and retrieve the content of the web page, but it doesn't contain anything interesting, since everything is loaded by JavaScript from the JSON data coming from the URL in step 3...
Any help would be more than welcome!
I needed to use form-based authentication, not HTTP Basic Auth as I originally thought.
So first I needed to log in via the first URL in order to retrieve an auth cookie:
url = "http://xxx/admin-ui/"
credentials = {'j_username':'my_username','j_password':'my_passwd'}
s = requests.session()
s.post(url, data=credentials)
s.cookies
>>> <<class 'requests.cookies.RequestsCookieJar'>[Cookie(version=0, name='JSESSIONID', value='...>
Then I could connect to the second URL using this cookie and retrieve the data I needed:
url2 = "http://xxx/services/users?from=0&to=50"
r = requests.get(url2, cookies=s.cookies)
r.content
>>> (a lot of JSON data! \o/)
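Note that the session already stores the JSESSIONID cookie, so passing cookies=s.cookies explicitly is redundant; reusing the session gives the same result:

r = s.get(url2)  # the session sends its stored cookies automatically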