Python 3 - Router Access

I have been using the following script to grab the PIN from our router. The PIN changes often, so a script is easier than having to access the router from the browser.
The script is as follows:
from requests.auth import HTTPBasicAuth
import requests
import re

while True:
    try:
        response = requests.get('http://192.168.2.1/settings.html',
                                auth=HTTPBasicAuth('username', 'password'))
        html = response.content
        m = re.findall(rb'var routerpin\s+=\s+(.*)', html)
        break
    except:
        m = None
print(m)
The trouble I am having is that the first time the script runs, the variable 'm' is an empty list. No exception is raised, so the try/except block never fires; I had thought the None/empty result would be caught as an exception.
On the first run the script returns m = [], and after that it returns the correct data. I know this is down to the first run not authenticating with the router, but I am not sure how to handle it so the script runs twice and grabs the data.
Probably a really simple answer but any help much appreciated.

Try using a session object to manage your authenticated session:
s = requests.Session()
s.auth = ('username', 'password')
# an initial request lets the session complete the auth handshake
auth = s.post('http://192.168.2.1')
response = s.get('http://192.168.2.1/settings.html')
html = response.content
# etc
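Putting it together, a minimal sketch (the bounded retry loop is my assumption about handling the router's first-request behaviour, not something from the thread):
import re
import requests

s = requests.Session()
s.auth = ('username', 'password')

m = []
for _ in range(5):  # a few attempts, so a bad password cannot loop forever
    response = s.get('http://192.168.2.1/settings.html')
    m = re.findall(rb'var routerpin\s+=\s+(.*)', response.content)
    if m:
        break
print(m)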

Related

How can I go on from here? Trying to build a weather fetcher

import requests
API_KEY = '...'
API_LINK = 'https://api.openweathermap.org/data/2.5/weather'
city = input('enter your city: ')
request_url = f"{API_LINK}?appid={API_KEY}&q={city}"
I'm trying to build a weather fetcher and I'm so lost at the moment. Can you help? Don't write the code for me, just tell me what to do.
Also, on the 4th line of the code, I'm not supposed to just know these (appid, &q), right? Are they in the module?
I'm not familiar with the API you're using, but you can probably get the response by doing something like this:
import requests

# make a session
session = requests.Session()

# call the API
resp = session.get('your url')

# print the response
print(resp.text)
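To the follow-up about appid and q: those are query parameters defined by the OpenWeatherMap API itself (documented on its site), not by the requests module. A minimal sketch of passing them via params, which builds the same URL as the f-string in the question:
import requests

API_KEY = '...'  # your OpenWeatherMap key
API_LINK = 'https://api.openweathermap.org/data/2.5/weather'

city = input('enter your city: ')
# requests encodes the query string for you; this is equivalent to
# appending ?appid=...&q=... by hand
resp = requests.get(API_LINK, params={'appid': API_KEY, 'q': city})
print(resp.json())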

How to continuously pull data from a URL in Python?

I have a link, e.g. www.someurl.com/api/getdata?password=..., and when I open it in a web browser it sends a constantly updating document of text. I'd like to make an identical connection in Python, and dump this data to a file live as it's received. I've tried using requests.Session(), but since the stream of data never ends (and dropping it would lose data), the get request also never ends.
import requests

s = requests.Session()
x = s.get("https://www.someurl.com/api/getdata?password=...")  # never terminates
What's the proper way to do this?
I found the answer I was looking for here: Python Requests Stream Data from API
Full implementation:
import requests

url = "https://www.someurl.com/api/getdata?password=..."

s = requests.Session()
with open('file.txt', 'a') as fp:
    with s.get(url, stream=True) as resp:
        for line in resp.iter_lines(chunk_size=1):
            # iter_lines yields bytes; decode before writing to a text-mode file
            fp.write(line.decode('utf-8', 'replace') + '\n')
Note that chunk_size=1 is necessary for lines to be yielded as soon as each complete message arrives, rather than waiting for an internal buffer to fill before iterating over the lines. I believe chunk_size=None is meant to do this, but it doesn't work for me.
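For reference, a variant sketch that lets requests do the decoding (decode_unicode and the explicit flush are my additions, assuming the stream is text):
import requests

url = "https://www.someurl.com/api/getdata?password=..."

s = requests.Session()
with open('file.txt', 'a') as fp:
    with s.get(url, stream=True) as resp:
        for line in resp.iter_lines(chunk_size=1, decode_unicode=True):
            if line:  # skip keep-alive blank lines
                fp.write(line + '\n')
                fp.flush()  # push each message to disk as it arrives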
You can keep making GET requests to the URL:
import requests
import time

url = "https://www.someurl.com/api/getdata?password=..."
sess = requests.Session()

while True:
    req = sess.get(url)
    time.sleep(10)
This will terminate the request after 1 second:
import multiprocessing
import time
import requests

def get_from_url(queue, url):
    s = requests.Session()
    resp = s.get(url)
    # a global would not be visible across processes, so hand the
    # result back through a queue instead
    queue.put(resp.text)

if __name__ == '__main__':
    url = "https://www.someurl.com/api/getdata?password=..."
    while True:
        queue = multiprocessing.Queue()
        p = multiprocessing.Process(target=get_from_url, name="get_from_url",
                                    args=(queue, url))
        p.start()
        # Wait 1 second for the get request
        time.sleep(1)
        p.terminate()
        p.join()
        # do something with the data, if the request finished in time
        if not queue.empty():
            print(queue.get())  # or smth else

How to send a body with a POST request

I am using an API which takes HTML code as input. Let's say it is accessible at http://10.21.2.80:8000/Application/validate_content.php
validate_content.php
$html_data = trim(urldecode($_POST['html'])); // html is key
validate($html_data)
access.py
I am sending a request to this API using Python requests like this:
import requests

openfile = open('file.txt')
html_data = openfile.read()
openfile.close()

url = 'http://10.21.2.80:8000/Application/validate_content.php?id=12&offset=10'
response = requests.post(url, data={'html': html_data})
validate() checks whether the HTML code follows Section 508 compliance rules. If it follows the rules it returns PASS; otherwise it returns the errors in the code.
When I make the request using POSTMAN, the API gives the right response (validating and returning errors). But with the Python code it always returns PASS.
I don't know what went wrong. Can anyone suggest the right way to do it?
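One way to debug this (a sketch, not an answer from the thread): print exactly what requests sends and compare it with the POSTMAN request, since requests keeps the prepared request on the response object:
import requests

url = 'http://10.21.2.80:8000/Application/validate_content.php?id=12&offset=10'

with open('file.txt') as f:
    html_data = f.read()

response = requests.post(url, data={'html': html_data})
# compare these against what POSTMAN shows it sent
print(response.request.headers)
print(response.request.body[:200])
print(response.text)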

Making a POST call instead of GET using urllib2

There's a lot of stuff out there on urllib2 and POST calls, but I'm stuck on a problem.
I'm trying to do a simple POST call to a service:
import urllib
import urllib2

url = 'http://myserver/post_service'
data = urllib.urlencode({'name': 'joe',
                         'age': '10'})
content = urllib2.urlopen(url=url, data=data).read()
print content
I can see in the server logs that I'm doing GET calls, even though I'm sending the data argument to urlopen.
The library is raising a 404 error (not found), which is correct for a GET call; POST calls are processed fine (I've also tried with a POST from an HTML form).
Do it in stages, and modify the object, like this:
# make a string with the request type in it:
method = "POST"

# create a handler. you can specify different handlers here (file uploads etc)
# but we go for the default
handler = urllib2.HTTPHandler()

# create an openerdirector instance
opener = urllib2.build_opener(handler)

# build a request
data = urllib.urlencode(dictionary_of_POST_fields_or_None)
request = urllib2.Request(url, data=data)

# add any other information you want
request.add_header("Content-Type", 'application/json')

# overload the get method function with a small anonymous function...
request.get_method = lambda: method

# try it; don't forget to catch the result
try:
    connection = opener.open(request)
except urllib2.HTTPError as e:
    connection = e

# check. Substitute with appropriate HTTP code.
if connection.code == 200:
    data = connection.read()
else:
    # handle the error case. connection.read() will still contain data
    # if any was returned, but it probably won't be of any use
    pass
This way allows you to extend to making PUT, DELETE, HEAD and OPTIONS requests too, simply by substituting the value of method or even wrapping it up in a function. Depending on what you're trying to do, you may also need a different HTTP handler, e.g. for multi file upload.
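As a sketch of the "wrapping it up in a function" idea (the helper name and its defaults are illustrative, not from the original answer):
import urllib2

def open_with_method(url, method, data=None, headers=None):
    # same pattern as above, with the HTTP verb as a parameter
    opener = urllib2.build_opener(urllib2.HTTPHandler())
    request = urllib2.Request(url, data=data)
    for key, value in (headers or {}).items():
        request.add_header(key, value)
    request.get_method = lambda: method
    try:
        return opener.open(request)
    except urllib2.HTTPError as e:
        return e

# e.g. a DELETE request:
# connection = open_with_method('http://myserver/post_service/1', 'DELETE')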
This may have been answered before: Python URLLib / URLLib2 POST.
Your server is likely performing a 302 redirect from http://myserver/post_service to http://myserver/post_service/. When the 302 redirect is performed, the request changes from POST to GET (see Issue 1401). Try changing url to http://myserver/post_service/.
Have a read of the urllib Missing Manual. Pulled from there is the following simple example of a POST request.
url = 'http://myserver/post_service'
data = urllib.urlencode({'name' : 'joe', 'age' : '10'})
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.read()
As suggested by @Michael Kent, do consider requests; it's great.
EDIT: This said, I do not know why passing data to urlopen() does not result in a POST request; it should. I suspect your server is redirecting, or misbehaving.
The requests module may ease your pain.
import requests

url = 'http://myserver/post_service'
data = dict(name='joe', age='10')
r = requests.post(url, data=data, allow_redirects=True)
print r.content
It should be sending a POST if you provide a data parameter (like you are doing).
From the docs:
"the HTTP request will be a POST instead of a GET when the data parameter is provided"
So, add some debug output to see what's happening on the client side.
You can modify your code to this and try again:
import urllib
import urllib2

url = 'http://myserver/post_service'
opener = urllib2.build_opener(urllib2.HTTPHandler(debuglevel=1))
data = urllib.urlencode({'name': 'joe',
                         'age': '10'})
content = opener.open(url, data=data).read()
Try this instead:
url = 'http://myserver/post_service'
data = urllib.urlencode({'name': 'joe',
                         'age': '10'})
req = urllib2.Request(url=url, data=data)
content = urllib2.urlopen(req).read()
print content
url="https://myserver/post_service"
data["name"] = "joe"
data["age"] = "20"
data_encoded = urllib2.urlencode(data)
print urllib2.urlopen(url + "?" + data_encoded).read()
May be this can help

A little bit of Python help

I need a little help here. I am new to Python and am trying to build a small app that can tell me if my website is down or not, then send the result to Twitter.
class Tweet(webapp.RequestHandler):
    def get(self):
        import oauth
        client = oauth.TwitterClient(TWITTER_CONSUMER_KEY,
                                     TWITTER_CONSUMER_SECRET,
                                     None)
        webstatus = {"status": "this is where the site status needs to be",
                     "lat": 44.42765100,
                     "long": 26.103172}
        client.make_request('http://twitter.com/statuses/update.json',
                            token=TWITTER_ACCESS_TOKEN,
                            secret=TWITTER_ACCESS_TOKEN_SECRET,
                            additional_params=webstatus,
                            protected=True,
                            method='POST')
        self.response.out.write(webstatus)

def main():
    application = webapp.WSGIApplication([('/', Tweet)])
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
Now, the website-check part is missing. I am extremely new to Python and need a little help.
Any idea of a function/class that can check a specific URL, so the answer/error code can be sent to Twitter using the script above?
This is my first time interacting with Python, so I need help implementing the URL check in the script above.
If you are wondering, the class above uses the https://github.com/mikeknapp/AppEngine-OAuth-Library lib.
Cheers.
PS: the URL-check functionality needs to be based on the urlfetch class, which is safer for Google App Engine.
You could use Google App Engine URL Fetch API.
The fetch() function returns a Response object containing the HTTP status_code.
Just fetch the url and check the status with something like this:
from google.appengine.api import urlfetch

def is_down(url):
    result = urlfetch.fetch(url, method=urlfetch.HEAD)
    return result.status_code != 200
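A sketch of wiring this into the handler from the question (the checked URL is a placeholder):
url_to_check = 'http://www.example.com'  # placeholder for your site
status_text = 'site is DOWN' if is_down(url_to_check) else 'site is up'
webstatus = {"status": status_text,
             "lat": 44.42765100,
             "long": 26.103172}
# then pass webstatus to client.make_request() exactly as in the question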
Checking if a website exists:
import httplib
from httplib import HTTP
from urlparse import urlparse

def checkUrl(url):
    p = urlparse(url)
    h = HTTP(p[1])
    h.putrequest('HEAD', p[2])
    h.endheaders()
    return h.getreply()[0] == httplib.OK
We only get the header of a given URL and check the response code of the web server.
Update: The last line is modified according to the remark of Daenyth.
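Example usage (the URL is a placeholder):
print checkUrl('http://www.example.com')  # True when the server answers 200 OK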
