I am very new to Python (and relatively new to programming) and would appreciate any help.
How would I use the following example link to download the supplied information using Python?
https://url.com/InfoService/GetFlightByFlightNum?board={BOARD}&flightNum={FLIGHTNUM}
Method: GET
Thanks
The easiest way to access APIs from Python is with the requests library.
You can quickly install it with pip install requests.
Then you can do this for your example:
import requests

# supply your own values for board and flight number
payload = {'board': BOARD, 'flightNum': FLIGHTNUM}
r = requests.get('https://url.com/InfoService/GetFlightByFlightNum', params=payload)

# print the response result
print(r.text)
You can use the built-in library urllib2 (known as urllib.request in Python 3).
Python 2 documentation: http://docs.python.org/2/library/urllib2.html
Python 3 documentation: http://docs.python.org/3.1/howto/urllib2.html
Here are some examples: How do I download a file over HTTP using Python?
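For reference, here is a minimal sketch using only the standard library in Python 3 (where urllib2 became urllib.request); the URL and parameter values are the placeholders from the question:
from urllib.parse import urlencode
from urllib.request import urlopen

# placeholder values taken from the question's template URL
params = urlencode({'board': 'BOARD', 'flightNum': 'FLIGHTNUM'})
url = 'https://url.com/InfoService/GetFlightByFlightNum?' + params

with urlopen(url) as response:
    print(response.read().decode('utf-8'))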
I want to test my different environments, like DEV, TEST, STAGE, and PRODUCTION. The API calls for the environments differ; for instance, http://dev.myclient.com, http://stage.myclient.com, etc.
So I want to write test cases that go to a specific URL, search for a specific thing, and capture whatever response comes back. For example, I search for apples and get 500 related results; I want to print that result and save it to text or JSON. The same applies to all the different environments.
Then I will compare all the environments against one another once I have the raw response data.
Any ideas how I can do that, in Python specifically?
Thanks in advance!
You could use the requests library for sending HTTP requests. This isn't a built-in library, so you should run pip install requests to install it.
Here's an example:
import requests
url = "http://dev.myclient.com/"
response = requests.get(url).json()
print(response)
The console should now print the JSON response of the request you just sent. For this to work the URL provided should return a JSON string response.
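If you then need one raw response per environment for later comparison, here is a hedged sketch along those lines (the hostnames come from the question; the /search path and q parameter are assumptions for illustration):
import json
import requests

# base URLs from the question; add TEST and PRODUCTION the same way
environments = {
    'dev': 'http://dev.myclient.com',
    'stage': 'http://stage.myclient.com',
}

for name, base_url in environments.items():
    # the endpoint and parameter name are assumed; adjust to your API
    response = requests.get(base_url + '/search', params={'q': 'apples'})
    # save each environment's raw JSON to its own file for comparison
    with open(name + '_response.json', 'w') as f:
        json.dump(response.json(), f, indent=2)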
The API for the website Urban Dictionary is a URL that takes you to a page that dumps out the JSON; see the example here: http://api.urbandictionary.com/v0/define?term=test
Is there a simple way to grab all the text on that page? Do I still need to use some type of HTML parser?
You could use a command-line tool such as curl:
curl http://api.urbandictionary.com/v0/define?term=test
For a Python-specific solution you could try a library such as requests; no HTML parser is needed, since the endpoint returns plain JSON rather than an HTML page.
pip install requests
import requests

# the URL must be a quoted string; the body is JSON, so parse it directly
response = requests.get('http://api.urbandictionary.com/v0/define?term=test')
data = response.json()
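The parsed result is a plain Python dict; judging by the JSON the example URL returns, the definitions sit under a list key, so you can walk them directly:
for entry in data['list']:
    print(entry['definition'])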
I'm trying to read JSON from an intranet site that uses Windows authentication into a pandas DataFrame using the read_json function, but I'm getting a 401 error.
A bit of googling showed that a similar issue with Postman reading Windows-authenticated JSON was solved using Fiddler's "Automatically Authorize" function, but that doesn't seem to work with pandas under Anaconda.
import pandas as pd

# the real Windows-authenticated URL is omitted here
df = pd.read_json(url)
By the way, the URL works just fine: it returns perfectly formatted JSON in the browser.
Thanks
Is the URL on your corporate intranet?
Do you normally enter it in the browser, have it pause for about ten seconds, and then get the results without any password prompt?
If the above is true, it probably uses Kerberos authentication. You can certainly fetch it using Python; here is the package that will help you with this: https://github.com/requests/requests-kerberos
Note that some language environments (Java, for instance) maintain their own HTTP/Kerberos stack, so there you need to log in to the Active Directory domain separately from your OS login.
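To make that concrete, here is a minimal sketch using requests-kerberos and then handing the text to pandas (the URL is a placeholder; OPTIONAL mutual authentication is an assumption that suits many IIS setups):
from io import StringIO

import pandas as pd
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# placeholder for your Windows-authenticated intranet endpoint
url = 'http://intranet.example.com/data.json'

r = requests.get(url, auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL))
df = pd.read_json(StringIO(r.text))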
I would suggest doing an HTTP GET request first using the requests package. The package provides a get() method that allows authentication, as well as a json() method that returns the JSON-encoded content of a response. A code example could look like this:
import requests
r = requests.get('https://intranet.jsondata.com/xy.json', auth=('user', 'pass'))
json_content = r.json()
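If the end goal is still a DataFrame, and assuming the endpoint returns a list of records, you can hand the parsed content straight to pandas:
import pandas as pd

df = pd.DataFrame(json_content)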
My goal for this Python code is to collect job information into a folder. The first step is being unsuccessful: when running the code I want the URL to print https://www.indeed.com/, but instead the code returns https://secure.indeed.com/account/login. I am open to using urllib or cookielib to resolve this ongoing issue.
import requests
import urllib
data = {
    'action': 'Login',
    '__email': 'email#gmail.com',
    '__password': 'password',
    'remember': '1',
    'hl': 'en',
    'continue': '/account/view?hl=en',
}
response = requests.get('https://secure.indeed.com/account/login', data=data)
print(response.url)
If you're trying to scrape information from Indeed, you should use the Selenium library for Python.
https://pypi.python.org/pypi/selenium
You can then write your program within the context of a real user browsing the site normally.
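A minimal sketch of that approach (Selenium 4 syntax; the driver setup and the q field name are assumptions, since Indeed's live markup may differ):
from selenium import webdriver
from selenium.webdriver.common.by import By

# assumes ChromeDriver is installed and available on your PATH
driver = webdriver.Chrome()
driver.get('https://www.indeed.com/')

# the field name is illustrative; inspect the live page to confirm it
search_box = driver.find_element(By.NAME, 'q')
search_box.send_keys('python developer')
search_box.submit()

print(driver.current_url)
driver.quit()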
I am trying to create a python script that can deploy an artifact to Artifactory.
I am using Python 3.4, and I want to put the resulting script through py2exe, so external libraries might create issues.
Through all my research, I found that one way is this, but I don't know how to "translate" it to Python:
curl -X PUT -u user:password --data-binary @/absolute/path/my-utils-2.3.jar "http://localhost/artifactory/my-repo/my/utils/2.3/"
How can I achieve that in Python? Or is there another way to deploy?
Been trying the whole day, and I've had some successful testing using the requests library.
import requests

# placeholders: use the full Artifactory URL of the target path
# and your real credentials
url = "repo/path/test.txt"
file_name = "test.txt"
auth = (USERNAME, PASSWORD)

with open(file_name, 'rb') as fobj:
    res = requests.put(url, auth=auth, data=fobj)

print(res.text)
print(res.status_code)
And py2exe had no issues with it.
You might want to take a look at Party; either look at how they do it, or just use it directly.