Right now I am writing a snippet that allows submitting code to http://www.spoj.com/ from the command line, i.e. something like python spoj_cl.py test.cpp
I understand that I need to send a POST request to the corresponding URL with the specified parameters and cookies, but I'm still confused about which parameters to include. Right now I'm finding them by trial and error, which doesn't seem very effective. My questions are:
How can I systematically check which parameters to include when sending a POST request?
How can I check immediately if the POST request I send is successful? One way I can think of is to fetch the content of the status page http://www.spoj.com/status/, but checking the response directly would be preferable.
Below is the snippet I'm working on; hopefully it's readable.
import requests, sys

# if __name__ == "__main__":
base_url = "http://spoj.com"
autologin_hash = "*************" # Your user hash, taken from cookie
autologin_login = "************" # Your user name
session_id = "************" # Login session, can be retrieved when logged in

cookies_info = {
    "autologin_login": autologin_login,
    "autologin_hash": autologin_hash
}
ext_id = {
    "cpp": "1"
}

filename = "test.cpp"
problem_name = filename.split(".")[0].upper()
extension = filename.split(".")[1]

submit_url = base_url + "/submit/"
parts = {
    "PHPSESSID": session_id,
    "action": "/submit/complete",
    "file": open(filename, "rb"),
    "subm_file": "",
    "lang": ext_id[extension],
    "problemcode": problem_name
}

requests.post(submit_url,
              params={"PHPSESSID": session_id},
              files=parts,
              cookies=cookies_info)

print "Submission sent!"
How can I systematically check which parameters to include when sending a POST request?
I am not a member of the site spoj.com, but what you are asking about is basic web scraping: find the HTML form for submitting code on the website, then use Firebug or the Chrome developer console to locate the input elements with name attributes inside it. Once you have found them, you can write a Python script that checks for these elements systematically. If one day the elements are missing, the page has probably changed.
Example Code:
import requests
from bs4 import BeautifulSoup

# form_url: the page that contains the submit form
webpage = requests.get(form_url, params={"PHPSESSID": session_id}, cookies=cookies_info)
html = BeautifulSoup(webpage.text, "html.parser")
form = html.find('form')
inputs = form.find_all('input')

# Collect the name attribute of every input field in the form
names = []
for i in inputs:
    if i.has_attr('name'):  # some inputs (e.g. buttons) may lack a name
        names.append(i['name'])
How can I check immediately if the POST request I send is successful?
Check the status code of the response. It should be 200 for successful requests.
# Make the request
r = requests.post(submit_url,
                  params={"PHPSESSID": session_id},
                  files=parts,
                  cookies=cookies_info)

# Check the response code (status_code is an int, not a string)
if r.status_code == 200:
    print "Submission successful!"
else:
    print "Submission met a status code of: %s" % r.status_code
I'm new to Python and REST APIs.
The screenshot above shows a sample POST request that runs fine via a third-party sandbox, and I want to be able to run this POST request via Python so I can customize it further (e.g. run multiple taskIds in one shot, output pre/post results, etc.):
1- Issue description:
I'm using this script to make a POST request to an API. In this case, I want to pass the plain-text value (taskId = 125918) as a variable so I can use it in the script, but I can't reproduce the POST request with the plain-text value highlighted in the sample screenshot above (taskId = 451).
taskId = 0
task = {(taskId, 'text/plain') : 125918}
r2 = requests.post(DisableSchedule, data = task, headers=headers)
2- What has been tried:
#1: Tried the Python script below, which gave the error: File "<string>", line unknown ParseError: no element found: line 1, column 0
taskId = 0
task = {taskId : 125918}
r2 = requests.post(DisableSchedule, data = task, headers=headers)
or
r2 = requests.post(DisableSchedule, files = dict(taskId = 125918), headers=headers)
3- How do I make this request using the plain-text value as a variable in Python?
The variable is defined before the script runs, and I want to use this plain-text variable in the POST request:
taskId = 125918
Using 'payload' as the request body works now:
payload = "taskId=125918"
r2 = requests.request("POST", DisableSchedule, headers=headers, data=payload)
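Since the goal was to use the taskId as a variable, note that data= accepts either a raw string body or a dict that requests form-encodes for you. A minimal sketch, reusing the DisableSchedule URL and headers from the question (whether this API accepts form encoding instead of a plain-text body is an assumption to verify):

import requests

task_id = 125918  # the value to parameterize

# Option 1: build the raw body string from the variable
payload = "taskId=%s" % task_id
r2 = requests.post(DisableSchedule, headers=headers, data=payload)

# Option 2: let requests form-encode a dict (it sets the Content-Type
# header automatically unless `headers` already overrides it)
r2 = requests.post(DisableSchedule, headers=headers, data={"taskId": task_id})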
I use software that only lets me fetch information on all users at once, while my project requires getting one specific user. I only need his ID, and I'm going to find it by his e-mail address (because it's unique).
The program creates a user (via POST) and stores his data (like the email) in variables, then reads a provided list of devices that will be assigned to said user. To do this, the program has to:
Fetch all users (why the software's author didn't allow fetching a single user is beyond me)
Filter users so it finds my newly created user <- here's my issue
Fetch all devices...
Filter devices...
Finally create a permission relationship between found user and device IDs
This is what I came up with so far:
inputText = self.imeiInput.GetValue().splitlines() # reads input and creates a list
url = "https://example.com/api/users/"
userList = requests.get(url, auth=(login, password)).content
foundUser = re.findall(givenMail, str(userList)) # givenMail provided by function argument
print(userList)
print(foundUser) # prints only searched mail
for imei in inputText:
    self.TrackerAssign(imei, foundUser) # do your stuff
But it only confirms that my email is indeed in the userbase; there's no other info (like the ID I'm interested in). Here's a sample userList output (except that originally it's all on one line):
b'[{
"id":1,
"attributes": {
...
},
"name":"username",
"login":"",
"email":"test#example.com",
"phone":"999999999",...
},
{
"id":2,
"attributes": {
...
},
"name":"username2",
"login":"",
"email":"test2#exmaple.com",
"phone":"888888888",...
},
...]'
Then there's also the question of how to read only the desired ID. Sorry if this is confusing; I barely know what I'm doing myself.
From your example it seems like you get a JSON response from the endpoint. This is great, because that JSON can be parsed into a list of dictionaries!
In broad terms, a possible strategy looks like this:
Get all users from the users endpoint.
Parse the response JSON to a list of dictionaries.
Loop over all users, breaking the loop when you find your user.
Do something with the ID of the user that you found in your search.
response = requests.get(url, auth=(login, password))  # receive a Response instance
user_list = response.json()  # parse the body to a list of dictionaries

for user in user_list:
    # Compare the email of this user with the target, using get() to
    # handle users with no email specified.
    if user.get("email") == given_mail:
        desired_user = user
        break  # if we find the correct user, we exit the loop
else:
    # If we never find the desired user, we raise an exception.
    raise ValueError("There is no user with email %s" % given_mail)

print(f"The ID of the user I am looking for is {desired_user['id']}.")
I am trying to update a site (just change its name) that I created with the Share script in Alfresco, but I am getting a 401 response. I'm sure my login and password are correct.
Code:
import requests
from json import JSONEncoder

s = requests.Session()
data = {'username': "admin", 'password': "admin"}
url = "http://127.0.0.1:8080/share/page/dologin"
r = s.post(url, data=data)
if r.status_code != 200:
    print "Incorrect login or password"

url1 = "http://127.0.0.1:8080/alfresco/service/api/sites/OdooSite50"
print url1
jsonString = JSONEncoder().encode({
    "title": name  # name: the new site title, defined elsewhere
})
headers = {'content-type': 'application/json', "Accept": "application/json"}
site = s.put(url1, headers=headers, data=jsonString)  # send the encoded JSON, not the login data
if site.status_code != 200:
    print "Error while updating site"
    print site.status_code
I am getting the error on the second part. The login part works without any problems.
Can you tell me what I am doing wrong?
This is because you are using different contexts to make your queries.
The Alfresco stack is made of multiple parts:
alfresco.war
share.war
solr.war
If we forget the solr part and focus on your problem, you have:
a content repository (alfresco) which contains the core services of Alfresco
a web application (share) which contains the web UI of your application and communicates with the content repository to perform actions
They don't share the same context and have independent lifecycles; one can run on one server while the other runs on a different one.
So this means that when you authenticate, you are doing it on the share context:
http://127.0.0.1:8080/share/page/dologin
and when you try to update your site, you are doing it on the alfresco context (on which you are not authenticated yet):
http://127.0.0.1:8080/alfresco/service/api/sites/OdooSite50
I see two solutions, then:
Authenticate on the alfresco context (the alfresco/s/api/login webscript); you will then be authenticated for calling the alfresco site services, as sketched below.
Go through the share proxy: /alfresco/service/api/sites becomes /share/proxy/alfresco/api/sites.
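A minimal sketch of the first option, assuming the standard Alfresco login webscript (POST /alfresco/service/api/login returning a ticket under data/ticket; verify the endpoint and response shape against your Alfresco version):

import requests
from json import JSONEncoder

s = requests.Session()

# Authenticate on the alfresco context; the login webscript returns a ticket.
login = s.post("http://127.0.0.1:8080/alfresco/service/api/login",
               data=JSONEncoder().encode({"username": "admin", "password": "admin"}),
               headers={"content-type": "application/json"})
ticket = login.json()["data"]["ticket"]

# Pass the ticket when calling the sites service on the alfresco context.
site = s.put("http://127.0.0.1:8080/alfresco/service/api/sites/OdooSite50",
             params={"alf_ticket": ticket},
             headers={"content-type": "application/json", "Accept": "application/json"},
             data=JSONEncoder().encode({"title": name}))  # name: new title, as in the question
print(site.status_code)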
I'm looking for a way to get the followers and following lists in JSON format via a web request (the same way the Instagram website does it).
For example, I can log in via requests and get user info:
def get_user_info(self, user_name):
    url = "https://www.instagram.com/" + user_name + "/?__a=1"
    try:
        r = requests.get(url)
    except requests.exceptions.ConnectionError:
        print 'Seems like dns lookup failed..'
        time.sleep(60)
        return None
    if r.status_code != 200:
        print 'User: ' + user_name + ' status code: ' + str(r.status_code)
        print r
        return None
    info = json.loads(r.text)
    return info['user']
I tried to see what requests Chrome sends to the server, but was unsuccessful.
The question is: how do I prepare a similar GET or POST request to retrieve the followers list without the Instagram API?
GraphQL queries with query_hash = "58712303d941c6855d4e888c5f0cd22f" (followees) and "37479f2b8209594dde7facb0d904896a" (followers) return this information. While logged in, send a GET request to instagram.com/graphql/query with the parameters query_hash and variables, where variables is a JSON-formatted set of the variables id (the user id, as in the return value of your get_user_info() function), first (a page length; the current maximum seems to be 50) and, in subsequent requests, after, set to the end_cursor of the previous response dictionary.
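A minimal sketch of that pagination loop, assuming a logged-in requests session; the dictionary keys (edge_followed_by, edges, node, page_info) reflect the layout Instagram has used for this query and may change:

import json
import requests

def get_followers(session, user_id):
    """Page through the followers GraphQL query with a logged-in session."""
    query_hash = "37479f2b8209594dde7facb0d904896a"  # followers, as above
    followers, after = [], None
    while True:
        variables = {"id": user_id, "first": 50}
        if after is not None:
            variables["after"] = after
        r = session.get("https://www.instagram.com/graphql/query/",
                        params={"query_hash": query_hash,
                                "variables": json.dumps(variables)})
        data = r.json()["data"]["user"]["edge_followed_by"]
        followers.extend(edge["node"]["username"] for edge in data["edges"])
        page_info = data["page_info"]
        if not page_info["has_next_page"]:
            return followers
        after = page_info["end_cursor"]  # resume where this page ended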
Alternatively, the Instaloader library provides a convenient way to log in and then programmatically access a profile's followers and followees lists.
import instaloader

# Get instance
L = instaloader.Instaloader()

# Login or load session
L.login(USER, PASSWORD)        # (login)
L.interactive_login(USER)      # (ask password on terminal)
L.load_session_from_file(USER) # (load session created with `instaloader -l USERNAME`)

# Obtain profile metadata
profile = instaloader.Profile.from_username(L.context, PROFILE)

# Print list of followees
for followee in profile.get_followees():
    print(followee.username)
# (likewise with profile.get_followers())
Besides username, the attributes full_name, userid, followed_by_viewer and many more are defined in the Profile instance that is returned for each followee.
Easy:
'https://www.instagram.com/'+user_name+'/followers/?__a=1'
'https://www.instagram.com/'+user_name+'/following/?__a=1'
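A quick sketch of calling one of these endpoints, assuming a logged-in requests session and that user_name is defined; these endpoints are undocumented, so inspect the response shape before relying on it:

import requests

session = requests.Session()
# ... log in with `session` first; anonymous requests may be rejected ...
r = session.get("https://www.instagram.com/" + user_name + "/followers/?__a=1")
if r.status_code == 200:
    followers = r.json()  # undocumented JSON structure; inspect before use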
I am extremely new to Python, scripting, and APIs; I am just learning. I came across a very cool piece of code which uses the Facebook API to reply to birthday wishes.
I will add my questions, numbered so that it will be easier for someone else later too. I hope this question will clear up a lot of newbie doubts.
1) Talking about APIs, what format are they usually in? Is it a library file which we need to download and later import? For instance, for the Twitter API, do we need to import twitter?
Here is the code:
import requests
import json

AFTER = 1353233754
TOKEN = ' <insert token here> '

def get_posts():
    """Returns dictionary of id, first names of people who posted on my wall
    between start and end time"""
    query = ("SELECT post_id, actor_id, message FROM stream WHERE "
             "filter_key = 'others' AND source_id = me() AND "
             "created_time > 1353233754 LIMIT 200")
    payload = {'q': query, 'access_token': TOKEN}
    r = requests.get('https://graph.facebook.com/fql', params=payload)
    result = json.loads(r.text)
    return result['data']

def commentall(wallposts):
    """Comments thank you on all posts"""
    #TODO convert to batch request later
    for wallpost in wallposts:
        r = requests.get('https://graph.facebook.com/%s' %
                         wallpost['actor_id'])
        url = 'https://graph.facebook.com/%s/comments' % wallpost['post_id']
        user = json.loads(r.text)
        message = 'Thanks %s :)' % user['first_name']
        payload = {'access_token': TOKEN, 'message': message}
        s = requests.post(url, data=payload)
        print "Wall post %s done" % wallpost['post_id']

if __name__ == '__main__':
    commentall(get_posts())
Questions:
Importing json --> why is json imported here? To give a structured reply?
What are the 'AFTER' variable and the 'TOKEN' placeholder here?
What are the 'query' and 'payload' variables inside the get_posts() function?
Explain roughly what each method and function does.
I know I am extremely naive, but this could be a good start. With a little hint, I can carry on.
If you're not going to explain the code, which I understand is pretty boring, please tell me how a script communicates with the desired API once it is written.
This is not my code; I copied it from a source.
json is imported here to parse the JSON data that the web service sends back over HTTP into Python dictionaries and lists.
The 'AFTER' variable holds a timestamp; the idea is that all posts created after it are assumed to be birthday wishes.
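Note that get_posts() hardcodes the timestamp in the query instead of referencing AFTER; presumably it was meant to be interpolated, something like:

query = ("SELECT post_id, actor_id, message FROM stream WHERE "
         "filter_key = 'others' AND source_id = me() AND "
         "created_time > %d LIMIT 200") % AFTER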
To make the program work, you need a token, which you can obtain from the Graph API Explorer with the appropriate permissions.