So my Python program needs to be able to ping a website to see if it is up or not. I have made the ping program, and then found out that this site only works with httping. After doing some googling about it, I found almost nothing on the subject. Has anybody done httping in Python before? If so, how did you do it? Thanks for the help.
Here is my code for the normal ping (which works, but not for the site I need it to work for):
import os
hostname = "sitename"
response = os.system("ping -c 1 " + hostname)
if response == 0:
    print "good"
else:
    print "bad"
Use requests to make an HTTP HEAD request.
import requests
response = requests.head("http://www.example.com/")
if response.status_code == 200:
    print(response.elapsed)
else:
    print("did not return 200 OK")
Output:
0:00:00.238418
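Note that if the site is down entirely, requests.head() raises an exception rather than returning a response, so for an up/down check you will probably want a timeout and a try/except. A minimal sketch, with the URL and the 5-second timeout as placeholder values:
import requests

hostname = "http://www.example.com/"  # placeholder URL

try:
    # HEAD request with a timeout so a dead host does not hang the check
    response = requests.head(hostname, timeout=5)
    if response.status_code == 200:
        print("up, responded in %s" % response.elapsed)
    else:
        print("responded, but not with 200 OK: %s" % response.status_code)
except requests.exceptions.RequestException as err:
    # Covers connection errors, timeouts, DNS failures, etc.
    print("down: %s" % err)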
To check the accessibility of an HTTP server you can make a GET request to the URL, which is as easy as:
import urllib2
response = urllib2.urlopen("http://example.com/foo/bar")
print response.getcode()
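Keep in mind that urllib2.urlopen() raises an exception when the server is unreachable or returns an error status, so for a simple up/down check you may want to catch it. A rough sketch along the same lines (the URL and timeout are placeholders):
import urllib2

try:
    response = urllib2.urlopen("http://example.com/foo/bar", timeout=5)
    print response.getcode()  # 200 means the page is reachable
except urllib2.URLError as err:
    # Raised for HTTP error statuses, connection failures and timeouts
    print "request failed:", err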
I'm trying to get the following curl command to work in Python 2.7:
curl --header "Accept: application/vnd.github.korra-preview" --user [username]:[password] https://api.github.com/orgs/myorg/outside_collaborators?per_page=100\&page=1
(basically just gets a list of collaborators from a GitHub organization)
This is utilising the GitHub API v3 - https://developer.github.com/v3/orgs/outside_collaborators/
I've read through the Requests library documentation and can't figure out how to pass BOTH authorisation and custom headers. Does anyone know how to do this?
This is the code I've written so far. How do I include both the auth and varHeaders in the get request?
import requests
varUsername = raw_input("GitHub username:\n")
varPassword = raw_input("GitHub password:\n")
varHeaders = {'Accept':'application/vnd.github.korra-preview'}
#req = requests.get('https://api.github.com/user/repos',auth=(varUsername,varPassword))
req = requests.get('https://api.github.com/orgs/myorg/outside_collaborators?per_page=100&page=1',auth=(varUsername,varPassword))
print req.status_code
print req.headers
print req.encoding
print req.text
print req.json()
You can pass a third keyword argument, headers=varHeaders:
req = requests.get('https://api.github.com/orgs/myorg/outside_collaborators?per_page=100&page=1',auth=(varUsername,varPassword),headers=varHeaders)
Thanks Brendan for pointing this out!
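As a side note, requests can also build the query string for you through the params argument, which avoids any manual escaping in the URL. A sketch of the same request written that way (myorg is still a placeholder organisation name):
import requests

varUsername = raw_input("GitHub username:\n")
varPassword = raw_input("GitHub password:\n")
varHeaders = {'Accept': 'application/vnd.github.korra-preview'}
# requests builds and encodes the query string from this dict
varParams = {'per_page': 100, 'page': 1}

req = requests.get('https://api.github.com/orgs/myorg/outside_collaborators',
                   auth=(varUsername, varPassword),
                   headers=varHeaders,
                   params=varParams)
print req.status_code
print req.json()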
I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL.
If it matters, I am running this on a Raspberry Pi.
(Please excuse my simplicity, I am very new to all this)
Many folks turn to the requests library for this sort of thing.
For something lower level, you might use urllib2
I've been using the requests package as well. Here's an example POST from the requests documentation.
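For illustration, a file upload with requests might look roughly like this; the URL, the local file path and the 'file' form-field name are all assumptions that depend on what the receiving server expects:
import requests

url = 'http://example.com/upload'  # placeholder upload endpoint

# Open the image in binary mode and send it as a multipart/form-data upload.
# The 'file' form-field name is an assumption; match it to your server.
with open('/home/pi/picture.jpg', 'rb') as image:
    response = requests.post(url, files={'file': image})

print response.status_code
print response.text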
If you feel you want to use curl, try pycurl.
Install it using:
sudo pip install pycurl
Here is an example of how to send data using it:
import pycurl
import json
import urllib
import cStringIO
url = 'your_url'
first_param = '12'
dArrayData = [{'data' : 'first'}, {'data':'second'}]
json_to_send = json.dumps(dArrayData, separators=(',',':'), sort_keys=False)
curlClient = pycurl.Curl()
curlClient.setopt(curlClient.USERAGENT, 'curl-user-agent')
# Sets the url of the service
curlClient.setopt(curlClient.URL, url)
# Sets the request to be of the type POST
curlClient.setopt(curlClient.POST, True)
# Sets the params of the post request
send_params = 'first_param=' + first_param + '&data=' + urllib.quote(json_to_send)
curlClient.setopt(curlClient.POSTFIELDS, send_params)
# Setting the buffer for the response to be written to
bufResponse = cStringIO.StringIO()
curlClient.setopt(curlClient.WRITEFUNCTION, bufResponse.write)
# Setting to fail on error
curlClient.setopt(curlClient.FAILONERROR, True)
# Sets the time out for the connections
curlClient.setopt(pycurl.CONNECTTIMEOUT, 25)
curlClient.setopt(pycurl.TIMEOUT, 25)
response = ''
try:
    # Performs the operation
    curlClient.perform()
except pycurl.error as err:
    errno, errString = err.args
    print '========'
    print 'ERROR sending the data:'
    print '========'
    print 'CURL error code:', errno
    print 'CURL error message:', errString
else:
    response = bufResponse.getvalue()
    # Do whatever you want with the response, e.g. parse it as JSON
finally:
    curlClient.close()
    bufResponse.close()
The requests library is the most supported and advanced way to do this.
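For comparison, the same POST with requests might look roughly like this, reusing the placeholder URL and fields from the pycurl example above:
import json
import requests

url = 'your_url'  # placeholder, as in the pycurl example
dArrayData = [{'data': 'first'}, {'data': 'second'}]

payload = {
    'first_param': '12',
    # requests URL-encodes the form fields, so no urllib.quote() is needed
    'data': json.dumps(dArrayData, separators=(',', ':')),
}

try:
    response = requests.post(url, data=payload, timeout=25)
    response.raise_for_status()  # raises on 4xx/5xx, similar to FAILONERROR
except requests.exceptions.RequestException as err:
    print 'ERROR sending the data:', err
else:
    print response.text  # do whatever you want with the response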
I have a python program that periodically checks the weather from weather.yahooapis.com, but it always throws the error: urllib.HTTPError: HTTP Error 404: Not Found on Accelerator. I have tried on two different computers with no luck, as well as changing my DNS settings. I continue to get the error. Here is my code:
#!/usr/bin/python
import time
#from Adafruit_CharLCDPlate import Adafruit_CharLCDPlate
from xml.dom import minidom
import urllib2
#towns, as woeids
towns = [2365345,2366030,2452373]
val = 1
while val == 1:
    time.sleep(2)
    for i in towns:
        mdata = urllib2.urlopen('http://206.190.43.214/forecastrss?w='+str(i)+'&u=f')
        sdata = minidom.parseString(mdata)
        atm = sdata.getElementsByTagName('yweather:atmosphere')[0]
        current = sdata.getElementsByTagName('yweather:condition')[0]
        humid = atm.attributes['humidity'].value
        tempf = current.attributes['temp'].value
        print(tempf)
        time.sleep(8)
I can successfully access the output of the API through a web browser on the same computers that give me the error.
The problem is that you're using the IP address 206.190.43.214 rather than the hostname weather.yahooapis.com.
Even though they resolve to the same host (206.190.43.214, obviously), the name that's actually in the URL ends up as the Host: header in the HTTP request. And you can tell that this makes the difference here:
$ curl 'http://206.190.43.214/forecastrss?w=2365345&u=f'
<404 error>
$ curl 'http://weather.yahooapis.com/forecastrss?w=2365345&u=f'
<correct rss>
$ curl 'http://206.190.43.214/forecastrss?w=2365345&u=f' -H 'Host: weather.yahooapis.com'
<correct rss>
If you test the two URLs in your browser, you will see the same thing.
So, in your code, you have two choices. You can use the DNS name instead of the IP address:
mdata = urllib2.urlopen('http://weather.yahooapis.com/forecastrss?w='+str(i)+'&u=f')
… or you can use the IP address and add the Host header manually:
req = urllib2.Request('http://206.190.43.214/forecastrss?w='+str(i)+'&u=f')
req.add_header('Host', 'weather.yahooapis.com')
mdata = urllib2.urlopen(req)
There's at least one other problem in your code once you fix this. You can't call minidom.parseString(mdata) when mdata is the file-like object returned by urlopen; you either need to call read() on it first, or use parse instead of parseString.
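In other words, something along these lines, using the first woeid from the question as an example:
import urllib2
from xml.dom import minidom

mdata = urllib2.urlopen('http://weather.yahooapis.com/forecastrss?w=2365345&u=f')
# Either read the response body and parse it as a string...
sdata = minidom.parseString(mdata.read())
# ...or hand the file-like response object straight to parse():
# sdata = minidom.parse(mdata)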
I'm writing my own directory buster in python, and I'm testing it against a web server of mine in a safe and secure environment. This script basically tries to retrieve common directories from a given website and, looking at the HTTP status code of the response, it is able to determine if a page is accessible or not.
As a start, the script reads a file containing all the interesting directories to be looked up, and then requests are made, in the following way:
for dir in fileinput.input('utils/Directories_Common.wordlist'):
    try:
        conn = httplib.HTTPConnection(url)
        conn.request("GET", "/"+str(dir))
        toturl = 'http://'+url+'/'+str(dir)[:-1]
        print ' Trying to get: '+toturl
        r1 = conn.getresponse()
        response = r1.read()
        print ' ', r1.status, r1.reason
        conn.close()
Then, the response is parsed and if a status code equal to "200" is returned, then the page is accessible. I've implemented all this in the following way:
if(r1.status == 200):
    print '\n[!] Got it! The subdirectory '+str(dir)+' could be interesting..\n\n\n'
All seems fine to me, except that the script marks as accessible pages that actually aren't. In fact, the algorithm collects only the pages that return a "200 OK", but when I manually browse to those pages I find they have been moved permanently or have restricted access. Something is going wrong, but I cannot spot exactly where I should fix the code. Any help is appreciated.
I did not find any problems with your code, except that it is almost unreadable. I have rewritten it into this working snippet:
import httplib

host = 'www.google.com'
directories = ['aosicdjqwe0cd9qwe0d9q2we', 'reader', 'news']

for directory in directories:
    conn = httplib.HTTPConnection(host)
    conn.request('HEAD', '/' + directory)
    url = 'http://{0}/{1}'.format(host, directory)
    print ' Trying: {0}'.format(url)
    response = conn.getresponse()
    print ' Got: ', response.status, response.reason
    conn.close()
    if response.status == 200:
        print ("[!] The subdirectory '{0}' "
               "could be interesting.").format(directory)
Outputs:
$ python snippet.py
Trying: http://www.google.com/aosicdjqwe0cd9qwe0d9q2we
Got: 404 Not Found
Trying: http://www.google.com/reader
Got: 302 Moved Temporarily
Trying: http://www.google.com/news
Got: 200 OK
[!] The subdirectory 'news' could be interesting.
Also, I used a HEAD HTTP request instead of GET, as it is more efficient when you do not need the contents and are only interested in the status code.
I would also advise you to use requests (http://docs.python-requests.org/en/latest/) for HTTP.
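For illustration, a rough sketch of the same check with requests, using the same host and word list as the snippet above; passing allow_redirects=False makes sure a moved page reports its 301/302 status instead of the 200 of whatever it redirects to:
import requests

host = 'www.google.com'
directories = ['aosicdjqwe0cd9qwe0d9q2we', 'reader', 'news']

for directory in directories:
    url = 'http://{0}/{1}'.format(host, directory)
    # HEAD request without following redirects, so a moved page reports
    # its real 301/302 status instead of the 200 of the redirect target
    response = requests.head(url, allow_redirects=False)
    print ' Trying: {0}'.format(url)
    print ' Got: ', response.status_code, response.reason
    if response.status_code == 200:
        print "[!] The subdirectory '{0}' could be interesting.".format(directory)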
I'm trying to write a small program that will simply display the header information of a website. Here is the code:
import urllib2
url = 'http://some.ip.add.ress/'
request = urllib2.Request(url)
try:
    html = urllib2.urlopen(request)
except urllib2.URLError, e:
    print e.code
else:
    print html.info()
If 'some.ip.add.ress' is google.com, then the header information is returned without a problem. However, if it's an IP address that requires basic authentication before access, it returns a 401. Is there a way to get the header (or any other) information without authenticating?
I've worked it out.
After the try block has failed due to unauthorized access, the following modification will print the header information:
print e.info()
instead of:
print e.code
Thanks for looking :)
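Put together, a minimal sketch of the whole flow might look like this (the URL is still a placeholder):
import urllib2

url = 'http://some.ip.add.ress/'
request = urllib2.Request(url)

try:
    html = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    # The server answered (for example with 401), so its headers are available
    print e.code
    print e.info()
except urllib2.URLError, e:
    # The server could not be reached at all
    print e.reason
else:
    print html.info()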
If you want just the headers, instead of using urllib2, you should go lower level and use httplib
import httplib

host = 'some.ip.add.ress'  # placeholder hostname or IP
path = '/'

conn = httplib.HTTPConnection(host)
conn.request("HEAD", path)
print conn.getresponse().getheaders()
If all you want are the HTTP headers, then you should make a HEAD request, not a GET request. You can see how to do this by reading Python - HEAD request with urllib2.
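For reference, one common urllib2 approach, and roughly what that question describes, is to subclass Request and override get_method(); a minimal sketch:
import urllib2

class HeadRequest(urllib2.Request):
    # urllib2 picks the HTTP method via get_method(), so overriding it
    # is enough to turn this into a HEAD request
    def get_method(self):
        return 'HEAD'

response = urllib2.urlopen(HeadRequest('http://www.example.com/'))
print response.info()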