Set port in requests - python

I'm attempting to make use of cgminer's API from Python, and I'm particularly interested in using the requests library.
I understand how to do basic things in requests, but cgminer wants to be a little more specific. I'd like to replace
import socket
import json
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 4028))
sock.send(json.dumps({'command': 'summary'}).encode())  # send() needs bytes on Python 3
with requests instead.
How does one specify the port using that library, and how does one send such a JSON request, wait for the response, and store it in a variable?

Requests is an HTTP library.
You can specify the port in the URL: http://example.com:4028/....
But from a quick read here, cgminer provides an RPC API (JSON over a raw TCP socket), not an HTTP interface, so requests is the wrong tool for it.
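For the raw RPC interface, a minimal sketch using only the standard library (assuming, as the question does, that cgminer is listening on localhost:4028, and that the server closes the connection after sending its reply):
import socket
import json

def cgminer_command(command, host='localhost', port=4028):
    # Send one command to cgminer's TCP API and return the parsed reply.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps({'command': command}).encode())
        chunks = []
        while True:                      # read until the server closes the connection
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    # cgminer replies are reported to end with a NUL byte; rstrip is harmless either way
    return json.loads(b''.join(chunks).rstrip(b'\x00').decode())

print(cgminer_command('summary'))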

As someone who has learned some of the common pitfalls of Python networking the hard way, I'm adding this answer to emphasize an important but easy-to-get-wrong point about the first argument of requests.get():
localhost is an alias that your computer resolves to 127.0.0.1, the IP address of its own loopback adapter. foo.com is also an alias, just one that gets resolved further away from the host.
requests.get('foo.com:4028')         # <-- fails: no URL scheme
requests.get('http://foo.com:4028')  # <-- works, provided foo.com resolves and the port is open
& for loopbacks:
requests.get('http://127.0.0.1:4028') #<--works
requests.get('http://localhost:4028') #<--works
This one requires import socket and gives you the local IP of your host (that is, your address within your own LAN); it goes a little farther out from the host than localhost, but not all the way out to the open internet:
requests.get('http://{}:4028'.format(socket.gethostbyname(socket.gethostname()))) #<--works
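To see the first failure concretely: requests rejects a schemeless URL before any network I/O happens, so you can catch it as an exception. A quick sketch:
import requests

try:
    requests.get('foo.com:4028')   # no scheme: rejected before anything is sent
except requests.exceptions.MissingSchema as exc:
    print(exc)                     # the message points you at http://foo.com:4028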

You can specify the port for the request with a colon just as you would in a browser, such as
r = requests.get('http://localhost:4028'). This will block until a response is received, or until the request times out, so you don't need to worry about awaiting a response.
You can send JSON data as a POST request using the requests.post method with the data parameter, such as
import json, requests
payload = {'command': 'summary'}
r = requests.post('http://localhost:4028', data=json.dumps(payload))
Accessing the response is then possible with r.text or r.json().
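As an aside, recent versions of requests (2.4.2+) can serialize the payload themselves via the json keyword, which also sets the Content-Type: application/json header:
import requests

payload = {'command': 'summary'}
r = requests.post('http://localhost:4028', json=payload)  # serializes payload and sets the header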
Note that requests is an HTTP library; if the service you're talking to isn't speaking HTTP, then requests can't be used for it.

Related

Python - Requests Library - How to ensure HTTPS requests

This is probably a dumb question, but I just want to make sure with the below.
I am currently using the requests library in python. I am using this to call an external API hosted on Azure cloud.
If I use the requests library from a virtual machine and it sends to the URL https://api-management-example/run, does that mean my communication to this API, as well as the entire payload I send through, is secure? I have seen that in my Python site-packages in my virtual environment there is a cacert.pem file. Do I need to update that at all? Do I need to do anything else on my end to ensure the communication is secure, or does the fact that I am calling the HTTPS URL mean it is secure?
Any information/guidance would be much appreciated.
Thanks,
HTTPS is secure when the server presents a valid, signed certificate; some people use a self-signed certificate to provide HTTPS. In the requests library you can control certificate verification explicitly: if the server uses a self-signed certificate, you need to pass it in so requests can verify against your local copy of it.
Verification on (the default, verify=True):
import requests
response = requests.get("https://api-management-example/run", verify=True)
Self-signed certificate:
import requests
response = requests.get("https://api-management-example/run", verify="/path/to/local/certificate/file/")
A note on GET vs. POST: with plain HTTP, GET parameters are appended to the URL, where they are also visible in the browser history, while POST requests carry their data in the message body, so POST is the preference for security when you are not using HTTPS or SSL/TLS. Over SSL/TLS and HTTPS connections, the GET parameters are encrypted in transit as well, although full URLs can still end up in history and server logs.
A dictionary of key-value pairs can be used to send the data, passed via the data parameter of the post method.
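A quick illustration of where the data ends up in each case, using the public httpbin.org echo service:
import requests

r = requests.get('https://httpbin.org/get', params={'q': 'secret'})
print(r.url)    # the parameter is visible in the URL: ...?q=secret

r = requests.post('https://httpbin.org/post', data={'q': 'secret'})
print(r.url)    # no query string; the data travelled in the request body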
The HTTPS protocol is safe provided you have a valid SSL certificate on your API. If you want to be extra safe, you can implement end-to-end encryption on top of it: converting your plaintext into scrambled text, called ciphertext, before it ever leaves your application.
You can explicitly enable verification in requests library:
import requests
session = requests.Session()
session.verify = True
session.post(url='https://api-management-example/run', data={'bar':'baz'})
This is enabled by default. You can also verify the certificate per request:
requests.get('https://github.com', verify='/path/to/certfile')
Or per session:
s = requests.Session()
s.verify = '/path/to/certfile'
Read the docs.

python socket, HTTPS request load full html code

I'm learning how to use sockets to make HTTPS requests, and my problem is that the request succeeds (status 200) but I only get part of the webpage content (I can't understand why it's split this way).
I receive my HTTP headers together with part of the HTML code. I've tried at least 3 different websites (including GitHub), and I always get the same result.
I'm able to log in to a website with my account, obtain the cookies for my session, load a new page with those cookies and get a status 200, and still only receive part of the page... like just the site's navbars.
Does anyone have any clue?
import socket
import ssl
HOST = 'www.python.org'
PORT = 443
MySock = socket.socket()
MySock = ssl.wrap_socket(MySock, ssl_version=ssl.PROTOCOL_SSLv23)  # note: ssl.wrap_socket is deprecated; prefer ssl.create_default_context().wrap_socket(...)
MySock.connect((HOST,PORT))
MySock.send("""GET / HTTP/1.1
Host: {}
""".format(HOST).encode())
#Create file to check reponse content
with open('PythonOrg.html', 'w') as File:
    print(MySock.recv(50000).decode(), file=File)
1) It seems I can't load the whole content with one large buffer such as MySock.recv(50000); I need to loop with a smaller buffer, like 4096, and concatenate the chunks into a variable.
2) The full response takes time to arrive, and I used the time.sleep function to manage the waiting; I'm not sure that's the best way to wait for the server on an SSL socket. If anyone has a nicer way to get the entire response when it's big, feel free.
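For what it's worth, the usual pattern needs no sleeps at all: send a Connection: close header so the server ends the connection once the response is complete, then loop on recv() until it returns empty bytes. A sketch along those lines (using ssl.create_default_context() in place of the deprecated wrap_socket):
import socket
import ssl

HOST = 'www.python.org'
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        tls_sock.sendall('GET / HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n'.format(HOST).encode())
        chunks = []
        while True:
            chunk = tls_sock.recv(4096)
            if not chunk:            # empty bytes: the server closed the connection
                break
            chunks.append(chunk)
with open('PythonOrg.html', 'wb') as out:
    out.write(b''.join(chunks))      # note: still includes the raw HTTP headers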

Python - Using Windows hosts file when using Python Requests / Use predefined IP Address without making a DNS request

I am trying to use Python requests to make an HTTP GET request to a domain without urllib3/httplib.HTTPConnection performing a DNS lookup for that domain. I set the domain in the Windows hosts file, but Python requests appears to override this, so I need to define the DNS resolution for the domain in the script.
I want the script to bypass the DNS request so I can set the IP address myself. In the example below I've set it to 45.22.67.8, and I will change it to my public IP address later.
I tried using this 'monkey patching' technique but it doesn't work. Requests doesn't generate a DNS request in Wireshark, but it also doesn't connect to the HTTP server.
import socket
import requests
from requests.packages.urllib3.connection import HTTPConnection
socket.getaddrinfo = '45.22.67.8'  # bug: this replaces the function with a string
url = "http://www.randomdomain.com"
requests.get(url, timeout=10)
Error:
TypeError: 'str' object is not callable
Thanks!
Edit: I've just updated the code in my example. All I want to do is override future HTTP connections so the packets are tricked into going to a different destination IP.
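For reference, a monkey-patch along these lines does work when getaddrinfo stays a function and only the host argument is rewritten (the domain and IP below are the question's placeholders):
import socket
import requests

_orig_getaddrinfo = socket.getaddrinfo

def patched_getaddrinfo(host, port, *args, **kwargs):
    if host == 'www.randomdomain.com':
        host = '45.22.67.8'          # resolve this domain to a fixed IP, skipping DNS
    return _orig_getaddrinfo(host, port, *args, **kwargs)

socket.getaddrinfo = patched_getaddrinfo
requests.get('http://www.randomdomain.com', timeout=10)  # Host header still carries the domain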

pysimplesoap web service return connection refused

I've created some web services using pysimplesoap, following this documentation:
https://code.google.com/p/pysimplesoap/wiki/SoapServer
When I tested it, I called it like this:
from SOAPpy import SOAPProxy
from SOAPpy import Types
namespace = "http://localhost:8008"
url = "http://localhost:8008"
proxy = SOAPProxy(url, namespace)
response = proxy.dummy(times=5, name="test")
print(response)
And it worked for all of my web services, but when I tried to call them using a library that requires a WSDL, it returned "Could not connect to host".
To solve my problem, I used the .wsdl() method to generate the correct WSDL and saved it to a file; the WSDL generated by default wasn't correct, as it was missing variable types and the correct server address...
The server name localhost is only meaningful on your own computer. From outside, other computers won't be able to see it.
1) Find out your external IP, with http://www.whatismyip.com/ or another service. Note that IPs change over time.
2) Plug that IP into http://www.soapclient.com/soaptest.html
If your local service answers requests to that IP as well as to localhost, you're done!
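You can also run the same check from Python, assuming the service listens on port 8008 as in the question (203.0.113.5 is a placeholder; substitute your external IP, and make sure the port is forwarded through your router/firewall):
import requests

resp = requests.get('http://203.0.113.5:8008', timeout=5)  # placeholder external IP
print(resp.status_code)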

Http proxy works with urllib.urlopen, but not with requests.get [duplicate]

I am trying to do a simple get request through a proxy server:
import requests
test=requests.get("http://google.com", proxies={"http": "112.5.254.30:80"})
print(test.text)
The address of the proxy server in the code is just from one of the freely available proxy lists on the internet. The point is that this same proxy server works when I use it from a browser, but not from this program. And I've tried many different proxy servers and none of them works through the above code.
Here is what I get for this proxy server:
The requested URL could not be retrieved While trying to retrieve the URL: http:/// The following error was encountered:
Unable to determine IP address from host name for
The dnsserver returned: Invalid hostname
This means that: The cache was not able to resolve the
hostname presented in the URL. Check if the address is correct.
I know it's an old question, but the proxy URLs need an explicit scheme; it should be
import requests
test=requests.get("http://google.com", proxies={"http":"http://112.5.254.30:80","https": "http://112.5.254.30:80"})
print(test.text)
