I am currently trying to update a Sharepoint 2013 list.
This is the module that I am using to accomplish that task. However, when I run the POST request I receive the following error:
"b'{"error":{"code":"-1, Microsoft.SharePoint.Client.InvalidClientQueryException","message":{"lang":"en-US","value":"Invalid JSON. A token was not recognized in the JSON content."}}}'"
Any idea of what I am doing wrong?
import requests
from requests_ntlm import HttpNtlmAuth

def update_item(sharepoint_user, sharepoint_password, ad_domain, site_url, sharepoint_listname):
    login_user = ad_domain + '\\' + sharepoint_user
    auth = HttpNtlmAuth(login_user, sharepoint_password)
    sharepoint_url = site_url + '/_api/web/'
    sharepoint_contextinfo_url = site_url + '/_api/contextinfo'
    headers = {
        'accept': 'application/json;odata=verbose',
        'content-type': 'application/json;odata=verbose',
        'odata': 'verbose',
        'X-RequestForceAuthentication': 'true'
    }
    r = requests.post(sharepoint_contextinfo_url, auth=auth, headers=headers, verify=False)
    form_digest_value = r.json()['d']['GetContextWebInformation']['FormDigestValue']
    item_id = 4991  # This id is one of the Ids returned by the code above
    api_page = sharepoint_url + "lists/GetByTitle('%s')/items(%d)" % (sharepoint_listname, item_id)
    update_headers = {
        "Accept": "application/json; odata=verbose",
        "Content-Type": "application/json; odata=verbose",
        "odata": "verbose",
        "X-RequestForceAuthentication": "true",
        "X-RequestDigest": form_digest_value,
        "IF-MATCH": "*",
        "X-HTTP-Method": "MERGE"
    }
    r = requests.post(api_page, {'__metadata': {'type': 'SP.Data.TestListItem'}, 'Title': 'TestUpdated'}, auth=auth, headers=update_headers, verify=False)
    if r.status_code == 204:
        print(str('Updated'))
    else:
        print(str(r.status_code))
I used your code for my scenario and fixed the problem.
I also faced the same problem. I think the way the data is passed for the update is not correct.
Pass it like below:
json_data = {
    "__metadata": { "type": "SP.Data.TasksListItem" },
    "Title": "updated title from Python"
}
and pass json_data to requests like below:
r = requests.post(api_page, json.dumps(json_data), auth=auth, headers=update_headers, verify=False).text
Note: I used SP.Data.TasksListItem because that is my list's item type. Use http://SharePointurl/_api/web/lists/getbytitle('name')/ListItemEntityTypeFullName to find yours.
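In case it helps, here is a minimal sketch of looking that type up from Python before building the __metadata block; the helper name is mine, and the auth and headers are assumed to be the same NTLM auth object and verbose-JSON headers used in the question:

import requests

def get_item_type(site_url, listname, auth, headers):
    # Hypothetical helper: ask SharePoint for the list's item entity type name
    url = site_url + "/_api/web/lists/getbytitle('%s')/ListItemEntityTypeFullName" % listname
    r = requests.get(url, auth=auth, headers=headers, verify=False)
    return r.json()['d']['ListItemEntityTypeFullName']

# item_type = get_item_type(site_url, sharepoint_listname, auth, headers)
# json_data = {"__metadata": {"type": item_type}, "Title": "updated title from Python"}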
Using Python, how do I make a request to the Shopee API to get a list of products on offer with my affiliate link?
I've written several scripts, but they all fail with a signature error or an unsupported authentication attempt. Does anyone have a working example of how to do this?
Below are two code examples I made, but they don't work.
Query: productOfferV2 and shopeeOfferV2
code1:
import requests
import time
import hashlib
appID = '18341090114'
secret = 'XMAEHHWQD3OEGQX5P33AFRREJEDSQX76'
# Set the API endpoint URL
url = "https://open-api.affiliate.shopee.com.my/graphql"
payload = """
{
"query": "query Fetch($page:2){
productOfferV2(
listType: 0,
sortType: 2,
page: $page,
limit: 50
) {
nodes {
commissionRate
commission
price
productLink
offerLink
}
}
}",
"operationName": null,
"variables":{
"page":0
}
}
"""
payload = payload.replace('\n', '').replace(':0', f':{2}')
timestamp = int(time.time())
factor = appID+str(timestamp)+payload+secret
signature = hashlib.sha256(factor.encode()).hexdigest()
# Set the request headers
headers = {
    'Content-type': 'application/json',
    'Authorization': f'SHA256 Credential={appID},Timestamp={timestamp},Signature={factor}'
}
# Send the POST request
response = requests.post(url, payload, headers=headers)
data = response.json()
print(data)
This returns an invalid-signature error.
code2:
import requests
import json
import hmac
import hashlib
appID = '18341090114'
secret = 'XMAEHHWQD3OEGQX5P33AFRREJEDSQX76'
query = '''query { productOfferV2(item_id: ALL) {
offers {
shop_id
item_price
discount_price
offer_id
shop_location
shop_name
}
}
}'''
def generate_signature(query, secret):
    signature = hmac.new(secret.encode(), query.encode(), hashlib.sha256).hexdigest()
    return signature
signature = generate_signature(query, secret)
url = 'https://open-api.affiliate.shopee.com.my/graphql'
headers = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Authorization': 'Bearer ' + appID + ':' + signature
}
payload = {
    'query': query
}
response = requests.post(url, data=json.dumps(payload), headers=headers)
data = response.json()
print(data)
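For what it's worth, here is a minimal sketch of the signing flow that code1 already follows (SHA256 over appId + timestamp + body + secret). The two likely fixes are to put the hex digest, not the concatenated factor, into the Signature field, and to sign the exact JSON body that is sent. The credentials are the placeholders from the question and the query is illustrative:

import hashlib
import json
import time
import requests

appID = '18341090114'                         # placeholder from the question
secret = 'XMAEHHWQD3OEGQX5P33AFRREJEDSQX76'   # placeholder from the question
url = 'https://open-api.affiliate.shopee.com.my/graphql'

query = '''query Fetch($page: Int) {
  productOfferV2(listType: 0, sortType: 2, page: $page, limit: 50) {
    nodes { commissionRate commission price productLink offerLink }
  }
}'''

# The signed string must be byte-for-byte identical to the request body
body = json.dumps({'query': query, 'operationName': 'Fetch', 'variables': {'page': 2}})

timestamp = int(time.time())
factor = appID + str(timestamp) + body + secret
signature = hashlib.sha256(factor.encode()).hexdigest()   # hex digest, not `factor`

headers = {
    'Content-Type': 'application/json',
    'Authorization': f'SHA256 Credential={appID},Timestamp={timestamp},Signature={signature}',
}

response = requests.post(url, data=body, headers=headers)
print(response.json())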
I have Python code that successfully downloads a Nessus scan report in CSV format, but I need to add some additional fields to the downloaded report. I include parameters in the request payload to request those fields, but the downloaded scan does not include them.
I've tried changing the values of the reportContents params to actual Boolean types with the True keyword.
I also changed the format to pdf, and it exports a PDF file that is just a title page and a page with a blank table of contents.
The downloaded csv file has data in it, but only includes the default headers (i.e.):
Plugin ID,CVE,CVSS v2.0 Base Score,Risk,Host,Protocol,Port,Name,Synopsis,Description,Solution,See Also,Plugin Output
The raw output of the POST request looks like:
POST https://localhost:8834/scans/<scan_id>/export
X-ApiKeys: accessKey=accessKey;secretKey=secretKey
Content-Type: application/x-www-form-urlencoded
Content-Length: 122
format=csv&reportContents.vulnerabilitySections.exploitable_with=true&reportContents.vulnerabilitySections.references=true
import json
import requests

# url, scan_id and the API keys are defined elsewhere in the original script

def download_scan(scan_num):
    # Post an export request
    headers = {
        'X-ApiKeys': 'accessKey=accessKey;secretKey=secretKey',
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    data = {
        'format': 'csv',
        'reportContents.vulnerabilitySections.exploitable_with': 'true',
        'reportContents.vulnerabilitySections.references': 'true'
    }
    res = requests.post(url + '/scans/{id_num}/export'.format(id_num = scan_num), data=data, verify=False, headers=headers)
    if res.status_code == 200:
        export = json.loads(res.text)
        file_id = export.get('file')
    # Continually check the scan status until the status is ready
    while True:
        # Check file status
        res = requests.get(url + '/scans/{id_num}/export/{file_num}/status'.format(id_num = scan_num, file_num = file_id), verify=False, headers=headers)
        if res.status_code == 200:
            status = json.loads(res.text)['status']
            if status == 'ready':
                break
    # Download the scan
    res = requests.get(url + '/scans/{scan_num}/export/{file_num}/download'.format(scan_num = scan_num, file_num = file_id), verify=False, headers=headers)
    # If the scan is successfully downloaded, get the attachment file
    if res.status_code == 200:
        attachment = res.content
        print("Scan downloaded!!!")
    else:
        raise Exception("Download request failed with status code: " + str(res))
    return attachment

def main():
    # Download the scan based on the scan_id. I have a helper function that returns the id that I am omitting here
    try:
        scan = download_scan(scan_id)
    except Exception as e:
        print(e)
        quit()
    with open("scan.csv", "wb") as f:
        f.write(scan)

if __name__ == "__main__":
    main()
I'm having the exact same issue but with PowerShell. Neither my additional columns nor filters appear to be working. Was wondering if you'd had any joy getting this to work?
If I change the scan_id I get the correct different results, which suggests it is receiving the JSON but ignoring the columns and filters.
My JSON is as follows...
{
    "scan_id": 3416,
    "format": "csv",
    "reportContents.vulnerabilitySections.cvss3_base_score": true,
    "filters": {
        "filter.0.quality": "gt",
        "filter.0.filter": "cvss2_base_score",
        "filter.0.value": "6.9",
        "filter.1.quality": "neq",
        "filter.1.filter": "cvss2_base_score",
        "filter.1.value": ""
    }
}
I managed to fix it. My problem was that I was using Python's requests module with its data={} keyword, which defaults to the header content-type: application/x-www-form-urlencoded; that generates reports with exactly the 13 default fields regardless of your payload.
To make the API actually consider your payload, set the header "content-type": "application/json" explicitly in your code and pass your payload with json={} instead of data={}.
WILL NOT WORK:
requests.post(
    nessus_url + f"/scans/{scan_id}/export",
    data={
        "format": "csv",
        "template_id": "",
        "reportContents": {
            "csvColumns": {
                "id": True,
                "cve": True,
                "cvss": True,
                **other_columns,
            }
        }
    },
    verify=False,
    headers={
        "X-ApiKeys": f"accessKey={credentials['access_key']}; secretKey={credentials['secret_key']}",
    },
)
WILL WORK:
requests.post(
    nessus_url + f"/scans/{scan_id}/export",
    json={
        "format": "csv",
        "template_id": "",
        "reportContents": {
            "csvColumns": {
                "id": True,
                "cve": True,
                "cvss": True,
                **other_columns
            }
        }
    },
    verify=False,
    headers={
        "X-ApiKeys": f"accessKey={credentials['access_key']}; secretKey={credentials['secret_key']}",
        "content-type": "application/json",
    },
)
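Combining that with the polling logic from the question, a rough end-to-end sketch looks like this; nessus_url, scan_id and the credentials dict are placeholders, and the csvColumns shown are just examples, not the full list:

import time
import requests

headers = {
    'X-ApiKeys': f"accessKey={credentials['access_key']}; secretKey={credentials['secret_key']}",
    'content-type': 'application/json',
}

# Request the export with a JSON body so reportContents is honoured
res = requests.post(
    nessus_url + f'/scans/{scan_id}/export',
    json={
        'format': 'csv',
        'reportContents': {'csvColumns': {'id': True, 'cve': True, 'cvss': True}},
    },
    verify=False,
    headers=headers,
)
file_id = res.json()['file']

# Poll the export status until the file is ready
while True:
    status = requests.get(
        nessus_url + f'/scans/{scan_id}/export/{file_id}/status',
        verify=False, headers=headers,
    ).json()['status']
    if status == 'ready':
        break
    time.sleep(2)

# Download the finished report
report = requests.get(
    nessus_url + f'/scans/{scan_id}/export/{file_id}/download',
    verify=False, headers=headers,
)
with open('scan.csv', 'wb') as f:
    f.write(report.content)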
Hey guys, I am trying to remove (unlist) a number from Truecaller using the following link:
https://www.truecaller.com/unlisting
I want to automate this process, but because of Google's reCAPTCHA the requests are limited and it isn't possible. Is there any way to do this using a library, such as unofficial Truecaller libraries like Trunofficial?
https://www.truecaller.com/unlisting
Unlisting your phone number requires reCAPTCHA; you can bypass it.
This may be helpful:
import requests
from requests.structures import CaseInsensitiveDict
url = "https://asia-south1-truecaller-web.cloudfunctions.net/api/noneu/unlist/v1"
headers = CaseInsensitiveDict()
headers["Authorization"] = "Bearer null"
headers["Content-Type"] = "application/json"
headers["Content-Length"] = "612"
headers["accept"] = "*/*"
headers["sec-gpc"] = "1"
headers["origin"] = "https://www.truecaller.com"
headers["sec-fetch-site"] = "cross-site"
headers["sec-fetch-mode"] = "cors"
headers["sec-fetch-dest"] = "empty"
headers["referer"] = "https://www.truecaller.com/"
headers["accept-encoding"] = "gzip, deflate, br"
headers["accept-language"] = "en-IN,en-GB;q=0.9,en-US;q=0.8,en;q=0.7"
data = {
    "phoneNumber": "+919912345678",
    "recaptcha": "03ANYolqtbEiFqaQ8wBrDF3kKqkCzIaH4r79oA2hCNd80gZGENvff9fPKocccytf6QXpPvQfQ12WMvgfdP1IKggff6lTY_0ucZxFB7r6A_dbNjfp_NSYtrkU4NX1h_LBQgnCO0ALkWS8CMjaIEjhxclfeClFv4EmFNEQis1OvrSVgvB8nJipuUxGakpa0eB8yWrEQCUfy0Gs7VA2hO4VaeLRTwr6BaxYsJsCP_3-vaMP2crZDDrIm8on_0H0vqh-1S44y69b0rSM6_ornuVZxeNQkpe_3NvPjQQxqQtdyQld55OQkK67PH7OH_A7s3GVgMa0VCOuX_UdBsPkd8mKf708GgutggfggvVrbe3DrBsUnpXMYchsv_revkhknej0G_SxAtqtwQoGPtt5iKSKHRmlelDJpYuQs6Lwi-4Umn_EclRPT2iaohxZ3r8O_4jaGP9yhRiMyVkgTm6mutJn50nPFbyabjSqgC2ShlMEI7IoOqWp9g90b2bl4qw4h6k9vP4AVy36sCx2z_gksBEgxT1zsM3P77PQ_guo12k7rtFlUAmvdqhqgwowaKQFMMBfjWDo40"
}

# Send the body as JSON so it matches the Content-Type header
resp = requests.post(url, headers=headers, json=data)
print(resp.json())
# {
# "lastUpdated": "2022-07-11T03:27:30.306713Z",
# "phoneNumber": "919912345678",
# "status": "unlisted"
# }
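If you want to reuse that, a small wrapper might look like the sketch below; the endpoint and headers are the ones from the snippet above, and the reCAPTCHA token still has to be obtained manually for every call (there is no supported way to generate it programmatically):

import requests

UNLIST_URL = 'https://asia-south1-truecaller-web.cloudfunctions.net/api/noneu/unlist/v1'

def unlist_number(phone_number, recaptcha_token):
    # Hypothetical helper wrapping the request shown above
    headers = {
        'Authorization': 'Bearer null',
        'Content-Type': 'application/json',
        'origin': 'https://www.truecaller.com',
        'referer': 'https://www.truecaller.com/',
    }
    payload = {'phoneNumber': phone_number, 'recaptcha': recaptcha_token}
    resp = requests.post(UNLIST_URL, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()

# print(unlist_number('+919912345678', '<fresh recaptcha token>'))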
Looks like I'm doing just about everything correctly, but I keep receiving this error:
Response text error:
response.text: {"name":"INVALID_TRACKING_NUMBER","message":"The requested resource ID was not found","debug_id":"12345","details":[{"field":"tracker_id","value":"1234-567890","location":"path","issue":"INVALID_TRACKING_INFO"}],"links":[]}
Response status: <Response [404]>
I'm using a real transaction and a real tracking number.
I'm doing this through python and this is my code:
def paypal_oauth():
    url = 'https://api-m.paypal.com/v1/oauth2/token'
    headers = {
        "Content-Type": "application/json",
        "Accept-Language": "en_US",
    }
    auth = "1234-1234","0987"
    data = {"grant_type":"client_credentials"}
    response = requests.post(url, headers=headers, data=data, auth=(auth))
    return response

def paypal_tracking(paypal_transaction_token, tracking_number, status, carrier):
    try:
        _paypal_oauth = paypal_oauth()
        _paypal_oauth_response = _paypal_oauth.json()
    except Exception as e:
        print(e)
        pass
    access_token = _paypal_oauth_response['access_token']
    url = 'https://api-m.paypal.com/v1/shipping/trackers/%s-%s/' % (paypal_transaction_token, tracking_number)
    # https://api-m.paypal.com/v1/shipping/trackers/1234-567890/
    carrier = carrier_code(carrier)
    # This grabs carrier from a method and gets back: 'DHL'
    headers = {
        'Content-Type' : 'application/json',
        'Authorization' : 'Bearer %s' % access_token,
    }
    # {'Content-Type': 'application/json', 'Authorization': 'Bearer 1234'}
    data = {
        "transaction_id":"%s" % paypal_transaction_token,
        "tracking_number":"%s" % tracking_number,
        "status": "%s" % status,
        "carrier": "%s" % carrier
    }
    # {'transaction_id': '1234', 'tracking_number': '567890', 'status': 'SHIPPED', 'carrier': 'DHL'}
    response = requests.put(url, headers=headers, data=json.dumps(data))
    return HttpResponse(status=200)
Anyone with experience with PayPal or its APIs see my issue?
To add a tracking number (not update), use an HTTP POST request, as documented.
The URL to POST to is https://api-m.sandbox.paypal.com/v1/shipping/trackers-batch, with no additional URL parameters.
The body format is:
{
    "trackers": [
        {
            "transaction_id": "8MC585209K746392H",
            "tracking_number": "443844607820",
            "status": "SHIPPED",
            "carrier": "FEDEX"
        },
        {
            "transaction_id": "53Y56775AE587553X",
            "tracking_number": "443844607821",
            "status": "SHIPPED",
            "carrier": "FEDEX"
        }
    ]
}
Note that trackers is an array of JSON object(s).
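In Python with requests, the call might look roughly like this; access_token is obtained the same way as in the question's paypal_oauth(), and the transaction and tracking values are the sample ones from the body above:

import requests

url = 'https://api-m.sandbox.paypal.com/v1/shipping/trackers-batch'
headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer %s' % access_token,
}
payload = {
    "trackers": [
        {
            "transaction_id": "8MC585209K746392H",
            "tracking_number": "443844607820",
            "status": "SHIPPED",
            "carrier": "FEDEX"
        }
    ]
}
# POST (not PUT) to the batch endpoint; no extra URL parameters are needed
response = requests.post(url, headers=headers, json=payload)
print(response.status_code, response.json())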
The example code for Google's YouTube Data API is a piece of junk. It's so complicated and tied to the OAuth redirect flow that I can't use it. Trying to go raw with the requests package and not getting too far.
I've followed the instructions exactly (as far as I can tell), with the following code:
import json
import os
import sys
import urllib
import requests

payload_file = None
payload = None

print 'Loading Config'
# Get the directory path of this file. When using any relative file paths make
# sure they are relative to current_dir so that the script can be run from any CWD.
current_dir = os.path.dirname(os.path.abspath(__file__))

# Reads in the config.json file then parses it
config = json.loads(open(os.path.join(current_dir, '..', 'config.json')).read())

print 'Parsing Payload'
for i in range(len(sys.argv)):
    if sys.argv[i] == "--json" and (i + 1) < len(sys.argv):
        payload = json.loads(sys.argv[i + 1])
    elif sys.argv[i] == "-payload" and (i + 1) < len(sys.argv):
        payload_file = sys.argv[i + 1]
        with open(payload_file,'r') as f:
            payload = json.loads(f.read())
        break

print 'Configuring youtube with token {0}'.format(payload['token'])

print 'Downloading video...'
# See how big it is
f = urllib.urlopen(payload['url'])
content_length = int(f.headers["Content-Length"])
# Download it
# urllib.urlretrieve(payload['url'], "video.mp4")

metadata = {
    'snippet' : {
        'title': payload['title'],
        "categoryId": 22
    },
    'status' : {
        "privacyStatus": "public",
        "embeddable": True,
        "license": "youtube"
    }
}
if 'tags' in payload:
    metadata['snippet']['tags'] = payload['tags']
if 'description' in payload:
    metadata['snippet']['description'] = payload['description']

headers = {
    'Authorization' : 'Bearer {0}'.format(payload['token']),
    'Content-Type' : 'application/json; charset=UTF-8',
    'Content-Length' : json.dumps(metadata).__len__(),
    'X-Upload-Content-Length' : content_length,
    'X-Upload-Content-Type' : 'video/*',
}

print 'Attempting to upload video'
print headers

# upload video file
r = requests.post('https://www.googleapis.com/upload/youtube/v3/videos?uploadType=resumable&part=snippet,status', data=metadata, headers=headers);
print "RESPONSE!"
print r.text

# files = {
#     'file': video_file,
# }
# r = requests.post('https://www.googleapis.com/upload/youtube/v3/videos', data={ "video" : video }, headers=headers);
Obviously it's not finished, but it's dying on the metadata upload request with the following output:
Loading Config
Parsing Payload
Configuring youtube with token <access-token>
Downloading video...
Attempting to upload video
{'X-Upload-Content-Length': 51998563, 'Content-Length': 578, 'Content-Type': 'application/json; charset=UTF-8', 'X-Upload-Content-Type': 'video/*', 'Authorization': 'Bearer <access-token>'}
RESPONSE!
{
"error": {
"errors": [
{
"domain": "global",
"reason": "parseError",
"message": "Parse Error"
}
],
"code": 400,
"message": "Parse Error"
}
}
This error is not even listed in their "Errors" docs.
What is wrong with my code?
Here is an example in Python that works. It assumes you've already done the OAuth part, though.
import requests
from os import fstat
import json

fi = open('myvideo.mp4', 'rb')  # open in binary mode for the upload

base_headers = {
    'Authorization': '%s %s' % (auth_data['token_type'],
                                auth_data['access_token']),
    'content-type': 'application/json'
}

initial_headers = base_headers.copy()
initial_headers.update({
    'x-upload-content-length': str(fstat(fi.fileno()).st_size),
    'x-upload-content-type': 'video/mp4'
})

# Start a resumable upload session with the video metadata
initial_resp = requests.post(
    'https://www.googleapis.com/upload/youtube/v3/videos?uploadType=resumable&part=snippet,status,contentDetails',
    headers=initial_headers,
    data=json.dumps({
        'snippet': {
            'title': 'my title',
        },
        'status': {
            'privacyStatus': 'unlisted',
            'embeddable': True
        }
    })
)

# The resumable session URI comes back in the Location header
upload_url = initial_resp.headers['location']

# Upload the video bytes to the session URI
resp = requests.put(
    upload_url,
    headers=base_headers,
    data=fi
)
fi.close()
The above is great; just adding that you can also get the YouTube id from the response (for future use):
cont = json.loads(resp.content)
youtube_id = cont['id']
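With current versions of requests this is equivalent to reading the parsed body directly, e.g.:

youtube_id = resp.json()['id']  # same as json.loads(resp.content)['id']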