How to comment on ClickUp with Python?

I want to comment on a specific task in ClickUp, but it returns a 401 error.
url = "https://api.clickup.com/api/v2/task/861m8wtw3/comment"
headers = {
"Authorization": "Bearer <my api key>",
"Content-Type": "application/json"
}
# comment = input('Type your comment text: \n')
comment = 'test comment'
data = {
"content": f"{comment}"
}
response = requests.post(url, headers=headers, json=data)
and the output is:
<Response [401]>
What is the problem?
I tried adding a Mozilla User-Agent header:
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'
but I still get the 401 error!

Seems to me like you're getting detected.
Try using Selenium-Profiles

It looks like the issue is with the Authorization header. Make sure the header contains only the API token string, without 'Bearer' in front of it, like so:
headers = {
    "Authorization": "<your api token>",
    "Content-Type": "application/json"
}

Related

Getting 401 response while making get request with bearer token (Python)

Here is my code:
import requests
from requests.structures import CaseInsensitiveDict
from assertpy import assert_that  # assuming assertpy for the chained assert_that style

token = APIUtilities()
auth_token = token.get_token(end_point=App_Config.BASE_URL + "/connect/token",
                             username="username", password="Password#", client_id="test")

headers = CaseInsensitiveDict()
headers["Accept"] = "*/*"
headers["Authorization"] = "Bearer " + auth_token
headers["User-Agent"] = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36"

response = requests.get(App_Config.BASE_URL + "/api/customers", headers=headers)
assert_that(response.status_code).is_equal_to(requests.codes.ok)
I am getting the token, but I'm not sure why I keep getting the 401 error. Please help me resolve this.
You are not sending the auth_token in
requests.get(App_Config.BASE_URL+"/api/customers", headers=headers)
It should be something like:
requests.get(App_Config.BASE_URL + "/api/customers", headers={'Authorization': 'Bearer ' + auth_token})
Issue is resolved. The code snippet above is fine; the problem was with the token generation method. It was returning something wrong, which I corrected, and now the code works.
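As a general debugging aid for this kind of 401, one sketch (placeholder URL and token) is to print the Authorization header that requests actually sent; response.request is the prepared request, so a malformed or empty token is immediately visible:

import requests

auth_token = "<token returned by get_token>"   # placeholder
headers = {"Accept": "*/*", "Authorization": "Bearer " + auth_token}

response = requests.get("https://example.com/api/customers", headers=headers)
print(response.status_code)
# Inspect exactly what was sent on the wire
print(repr(response.request.headers.get("Authorization")))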

Python POST works only if I GET CSRF token before I POST, why?

I have the following code:
import requests
from lxml import html

# login_url, host and origin are defined elsewhere in the script
def main():
    session_requests = requests.session()
    result = session_requests.get(login_url)
    tree = html.fromstring(result.text)
    veri_token = tree.xpath("/html/body/div[1]/div/div/form/input[1]/@value")[0]
    print(veri_token)
    headerpayload = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36',
        'X-Requested-With': 'XMLHttpRequest',
        'Host': host,
        'Origin': origin,
        'Referer': login_url,
        '__RequestVerificationToken': veri_token
    }
    payload = {
        "__RequestVerificationToken": veri_token,
        "Q_b9849eb2-813d-4d0a-a1ce-643f1c8af986_0": "name",
        "Q_62ebd8c8-7d45-481d-a8b1-ad54a390a029_0": "name@utoronto.ca",
        "FormId": "276caa59-80a5-4ced-9b9d-025e1d753b4a",
        "_ACTION": "Continue",
        "PageIndex": "1"
    }
    result = session_requests.post(
        login_url,
        data=payload,
        headers=headerpayload
    )
    print(result.status_code)
If I run this code, everything works fine. But if I have the script print out the token and then hard-code that token instead of fetching it right before sending the POST, it fails with a CSRF error. Any idea why?
EDIT: Just to clarify, I have to run the GET request and the POST together for this to work; if I separate them into two scripts and manually enter the veri_token, it fails.
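The behaviour described is typical of anti-forgery tokens: the hidden form token is validated against a cookie set by the same GET that served the form, so a token copied into a fresh session has no matching cookie and is rejected. A minimal sketch of the pattern, with a placeholder URL:

import requests
from lxml import html

login_url = "https://example.com/login"   # placeholder

session = requests.session()
page = session.get(login_url)             # this GET also sets the anti-forgery cookie
tree = html.fromstring(page.text)
token = tree.xpath("//input[@name='__RequestVerificationToken']/@value")[0]

# POSTing from the same session keeps the cookie/token pair together;
# a hard-coded token in a new session is rejected because the cookie is missing.
resp = session.post(login_url, data={"__RequestVerificationToken": token})
print(resp.status_code)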

Getting the search page result, login with JWT authentication (Python)

I am trying to get the HTML page to parse. The site itself has a login form. I am using the following code to get through the login form:
import json
import requests

headers = {
    "Content-Type": "application/json",
    "referer": "https://somesite/"
}
payload = {
    "email": us,
    "password": ps,
    "web": "true"
}
session_requests = requests.session()
response = session_requests.post(
    site,
    data=json.dumps(payload),
    headers=headers
)
result = response
resultContent = response.content
resultCookies = response.cookies
resultContentJson = json.loads(resultContent)
resultJwtToken = resultContentJson['jwtToken']
That works just fine; I am able to get a 200 OK status and the jwtToken.
NOW, when I actually try to get the page (the search result), the site returns '401 - not authorized'. So the question is: what am I doing wrong? Any suggestion/hint/idea is appreciated!
Here is the request that gets the 401 response:
siteSearch = "somesite/filters/search"
headersSearch = {
    "content-type": "application/json",
    "referer": "https://somesite",
    "origin": "https://somesite",
    "authorization": "Bearer {}".format(resultJwtToken),
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
}
payloadSearch = {
    "userId": 50432,
    "filters": [],
    "savedSearchIds": [],
    "size": 24
}
responseSearch = session_requests.post(
    siteSearch,
    data=json.dumps(payloadSearch),
    headers=headers
)
searchResult = response
Looking at Postman and the Chrome developer tools, it seems to me I am sending an identical request to the one the actual browser sends (it works via the browser).. but nope - 401 response.
Maybe it has something to do with the cookies? The first login response returns a bunch of cookies as well, but I thought session_requests takes care of that?
In any case, any help is appreciated. Thanks
Typo.. in responseSearch I used the headers defined for the initial login; it should be headers = headersSearch. All the rest works as expected. Thanks!
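For completeness, a sketch of the corrected call from that resolution (same payload, just the right headers dict):

responseSearch = session_requests.post(
    siteSearch,
    data=json.dumps(payloadSearch),
    headers=headersSearch,   # the search headers carrying the Bearer token
)
print(responseSearch.status_code)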

Getting an Authorization Code on Spotify API (POST request)

This is the Spotify documentation I'm following. Out of the 3 'Authorization Flow' options, I'm trying the 'Authorization Code Flow'.
Finished step 1. Have your application request authorization.
Stuck at step 2. Have your application request refresh and access tokens
It asks to make a POST request that contains the parameters encoded in application/x-www-form-urlencoded, as defined in the OAuth 2.0 specification. Here is what I've done so far with my limited knowledge and Google searching.
import requests
import base64
from html import unescape

url = "https://accounts.spotify.com/api/token"
params = {
    "grant_type": "authorization_code",
    "code": <authorization code I got from step 1>,
    "redirect_uri": "http://127.0.0.1:5000/",
}
headers = {
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded",
    "Authorization": base64.b64encode("{}:{}".format(CLIENT_ID, CLIENT_SECRET).encode('UTF-8')).decode('ascii')
}
html = requests.request('post', url, headers=headers, params=params, data=None)
print(html.text)
Result, with response code 400:
{"error":"invalid_client"}
What should I do to make it work? I thought I got all the params right.
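With the Authorization Code Flow, invalid_client usually points at the client-credentials header: the base64 string has to carry a 'Basic ' prefix, and the token parameters belong in the form body rather than the query string. A hedged sketch of how that request is commonly built (placeholder credentials and code):

import base64
import requests

CLIENT_ID = "<client id>"
CLIENT_SECRET = "<client secret>"

url = "https://accounts.spotify.com/api/token"
data = {
    "grant_type": "authorization_code",
    "code": "<authorization code from step 1>",
    "redirect_uri": "http://127.0.0.1:5000/",
}
creds = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
headers = {
    "Authorization": "Basic " + creds,                     # the "Basic " prefix is required
    "Content-Type": "application/x-www-form-urlencoded",
}

response = requests.post(url, headers=headers, data=data)  # form body, not query params
print(response.status_code, response.json())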

Using Requests Post to login to this site not working

I know there are tons of threads and videos on how to do this, I've gone through them all and am in need of a little advanced guidance.
I am trying to log into this webpage where I have an account so I can send a request to download a report.
First I send a GET request to the login page, then the POST request, but when I print(resp.content) I get the HTML for the login page back. I do get a 200 status code, but I can't get to the index page; no matter what page I request after the POST, it keeps redirecting me back to the login page.
Here are a couple of things I'm not sure I did correctly:
For the headers, I just put in everything that was listed when I inspected the page
Not sure if I need to do something with the cookies?
Below is my code:
import requests
import urllib.parse

url = 'https://myurl.com/login.php'
next_url = 'https://myurl.com/index.php'

username = 'myuser'
password = 'mypw'

headers = {
    'Host': 'url.myurl.com',
    'Connection': 'keep-alive',
    'Content-Length': '127',
    'Cache-Control': 'max-age=0',
    'Origin': 'https://url.myurl.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Referer': 'https://url.myurl.com/login.php?redirect=1',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.8',
    'Cookie': 'PHPSESSID=3rgtou3h0tpjfts77kuho4nnm3'
}

login_payload = {
    'XXX_login_name': username,
    'XXX_login_password': password,
}
login_payload = urllib.parse.urlencode(login_payload)

r = requests.Session()
r.get(url, headers=headers)
r.post(url, headers=headers, data=login_payload)

resp = r.get(next_url, headers=headers)
print(resp.content)
You don't need to send separate requests for authorization and file download. You need to send a single POST specifying the credentials. Also, in most cases you don't need to send headers. In general, your code should look like the following:
import requests
from requests.auth import HTTPBasicAuth

url_to_download = "http://some_site/download?id=100500"
response = requests.post(url_to_download, auth=HTTPBasicAuth('your_login', 'your_password'))

# response.content is bytes, so open the file in binary mode
with open('C:\\path\\to\\save\\file', 'wb') as my_file:
    my_file.write(response.content)
There are a few more fields in the form data to post:
import requests

data = {"redirect": "1",
        "XXX_login_name": "your_username",
        "XXX_login_password": "your_password",
        "XXX_actionSUBMITLOGIN": "Login",
        "XXX_login_php": "1"}

with requests.Session() as s:
    s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"})
    r1 = s.get("https://eym.sicomasp.com/login.php")
    s.headers["cookie"] = r1.headers["Set-Cookie"]
    pst = s.post("https://eym.sicomasp.com/login.php", data=data)
    print(pst.history)
You may get redirected to index.php automatically after the post; you can check pst.history and pst.content to see exactly what is happening.
So I figured out what my problem was, just in case anyone in the future has the same issue: I am sure different websites have different requirements, but in this case the Cookie I was sending in the request headers was blocking it. What I did was grab the cookie from the headers AFTER I logged in, update my headers, and then send the request. This is what ended up working:
(also, the form data needs to be URL-encoded)
import requests
import urllib.parse

headers = {
    'Host': 'eym.sicomasp.com',
    'Content-Length': '62',
    'Origin': 'https://eym.sicomasp.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
    'Referer': 'https://eym.sicomasp.com/login.php?redirect=1',
    'Cookie': 'PHPSESSID=vdn4er761ash4sb765ud7jakl0; SICOMUSER=31+147234553'
}  # Additional cookie information after logging in ^^^^

data = {
    'XXX_login_name': 'myuser',
    'XXX_login_password': 'mypw',
}
data = urllib.parse.urlencode(data)

with requests.Session() as s:
    s.headers.update(headers)
    resp = s.post('https://eym.sicomasp.com/index.php', data=data)  # was data=data2, a typo
    print(resp.content)
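As a side note on the design: a requests.Session already carries cookies between requests, so an alternative sketch (same form fields assumed) is to GET login.php first and let the session pick up PHPSESSID on its own, rather than hard-coding the Cookie header:

import requests

data = {
    'XXX_login_name': 'myuser',
    'XXX_login_password': 'mypw',
}

with requests.Session() as s:
    s.headers.update({'User-Agent': 'Mozilla/5.0'})
    s.get('https://eym.sicomasp.com/login.php')        # the session stores PHPSESSID automatically
    resp = s.post('https://eym.sicomasp.com/login.php', data=data)
    print(resp.status_code)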
