How do I get the remote IP address in Python?
I tried searching Google but couldn't find any useful results. os.environ['REMOTE_ADDR'] is giving KeyError: 'REMOTE_ADDR'
You're accessing the operating system's environment (os.environ), not the request's.
The WSGI callable should be passed two variables, environ and start_response, and that environ variable will have the variables you're looking for.
Those variables would only be present in the actual os.environ if you were running a CGI app.
Depending on the web framework you're using, you might not have access to this. If you're passed a request object, this will likely end up in request.META or something similar.
If you're not using any framework, it will be in the environ dict that is passed to your WSGI callable.
As noted in another answer, REMOTE_ADDR doesn't have to be there per the spec, but if you're using Apache's mod_wsgi, it should be.
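For illustration, here's a minimal, framework-free WSGI callable (a sketch, not anyone's production code) that reads the address from the per-request environ:

def application(environ, start_response):
    # REMOTE_ADDR is set by the WSGI server; it isn't guaranteed by the spec,
    # so use .get() rather than indexing.
    remote_addr = environ.get('REMOTE_ADDR', 'unknown')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [remote_addr.encode('utf-8')]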
If you have access to the TCP socket itself, you can use socket.getpeername() to get the remote address; see the socket module docs for details.
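And a bare-socket sketch (hypothetical host/port, no web server involved) showing where getpeername() fits:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('0.0.0.0', 8080))
srv.listen(1)
conn, addr = srv.accept()      # addr is the same (host, port) tuple
print(conn.getpeername())      # e.g. ('203.0.113.7', 54321)
conn.close()
srv.close()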
I want to bypass proxy for domains like:
http://server-1:5000
http://server-2:5000
NO_PROXY=server-1, server-2
server-1 and server-2 are basically services attached to kube pods, so they can change dynamically during runtime.
I want to bypass proxy for any domains in this format. For example, at any point it can even reach to server-124.
It would have been easier if the domain name were in a format like subdomain.domain.com,
e.g. 1.server.com, 2.server.com.
I believe NO_PROXY=.server.com would have worked in that case.
But my current scenario is a little different. So, can it be done?
Yes, it can be done: NO_PROXY=*.server.*
This wildcard pattern matches any hostname containing "server", so it covers server-124 as well, as long as the client library honors wildcards in NO_PROXY.
In some cases, setting no_proxy to * effectively disables proxies altogether, but this is not a universal rule.
No implementation performs a DNS lookup to resolve a hostname to an IP address when deciding if a proxy should be used. Do not specify IP addresses in the no_proxy variable unless you expect that the IPs are explicitly used by the client.
The same holds true for CIDR blocks, such as 18.240.0.1/24. CIDR blocks only work when the request is directly made to an IP address.
The libraries that support the http_proxy environment variable generally also support a matching no_proxy that names things that shouldn't be proxied. The exact syntax seems to vary across languages and libraries but it does seem to be universal that setting no_proxy=example.com causes anything.example.com to not be proxied either.
This is relevant because the Kubernetes DNS system creates its names in a domain based on the cluster name, by default cluster.local. The canonical form of a Service DNS name, for example, is service-name.namespace-name.svc.cluster.local., where service-name and namespace-name are the names of the corresponding Kubernetes objects.
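To make the suffix matching concrete, here's a small stdlib sketch (the hostnames are illustrative and it assumes the default cluster.local cluster domain):

import os
import urllib.request

# Exclude the whole Kubernetes DNS suffix instead of enumerating server-1, server-2, ...
os.environ['no_proxy'] = '.svc.cluster.local'

# proxy_bypass() consults no_proxy on most platforms; both calls should return a truthy value.
print(urllib.request.proxy_bypass('server-1.default.svc.cluster.local'))
print(urllib.request.proxy_bypass('server-124.default.svc.cluster.local'))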
How can I set the remote_addr property in a flask test_request_context? This question: Get IP Address when testing flask application through nosetests (and other, similar questions) explains how to do it using the get/post/whatever test client calls, but I'm not using those (in this case). Instead, I get a test_request_context and then call a function, thereby allowing me to test the functions that are called by my view functions individually.
Edit: to clarify, my testing code looks something like this:
with app.test_request_context():
    result = my_function_which_expects_to_be_called_from_a_request_context()
    # <check result however>
So at no point am I using a test client call.
Pass the same arguments to test_request_context as you would to client.get. Both set up the WSGI environment the same way internally.
with app.test_request_context(environ_base={'REMOTE_ADDR': '10.1.2.3'}):
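Putting that together, a minimal self-contained sketch (a bare Flask app; the 10.1.2.3 address is arbitrary):

from flask import Flask, request

app = Flask(__name__)

with app.test_request_context(environ_base={'REMOTE_ADDR': '10.1.2.3'}):
    # Inside the context, request.remote_addr reflects the environ we provided.
    assert request.remote_addr == '10.1.2.3'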
I'm sending a callback URL to a widely used remote API over which I have no control.
I've written my callback view and it's properly named (say, myapp_callback) in my urls.py, so all that I have to do is to call reverse('myapp_callback'), right? That's what it says in the manual.
Well, not so much. The result is /myapp/callback. Where are my protocol and hostname? The remote service I'm sending these API calls to has no idea. How can I determine them, possibly while running behind an Apache reverse proxy?
I'm working around this problem by putting the full URL into the settings file, but I'd love to provide a more "turnkey" solution.
Try request.build_absolute_uri(reverse('myapp_callback')). From the Django docs:
Returns the absolute URI form of location. If no location is provided, the location will be set to request.get_full_path().
If the location is already an absolute URI, it will not be altered. Otherwise the absolute URI is built using the server variables available in this request.
Example: "http://example.com/music/bands/the_beatles/?print=true"
I'm trying to use the salesforce-python-toolkit to make web services calls to the Salesforce API, however I'm having trouble getting the client to go through a proxy. Since the toolkit is based on top of suds, I tried going down to use just suds itself to see if I could get it to respect the proxy setting there, but it didn't work either.
This is tested on suds 0.3.9 on both OS X 10.7 (python 2.7) and ubuntu 12.04.
An example request I've made that did not end up going through the proxy (just Burp or Charles proxy running locally):
import suds
ws = suds.client.Client('file://sandbox.xml', proxy={'http': 'http://localhost:8888'})
ws.service.login('user', 'pass')
I've tried various things with the proxy - dropping http://, using an IP, using a FQDN. I've stepped through the code in pdb and see it setting the proxy option. I've also tried instantiating the client without the proxy and then setting it with:
ws.set_options(proxy={'http':'http://localhost:8888'})
Is proxy not used by suds any longer? I don't see it listed directly here http://jortel.fedorapeople.org/suds/doc/suds.options.Options-class.html, but I do see it under transport. Do I need to set it differently through a transport? When I stepped through in pdb it did look like it was using a transport, but I'm not sure how.
Thank you!
I went into #suds on freenode and Xelnor/rbarrois provided a great answer! Apparently the custom proxy mapping in suds overrides urllib2's default behavior of reading the proxy configuration from the environment. This solution therefore relies on having the http_proxy/https_proxy/no_proxy environment variables set accordingly.
I hope this helps anyone else running into issues with proxies and suds (or other libraries that use suds). https://gist.github.com/3721801
import suds.client
from suds.transport.http import HttpTransport as SudsHttpTransport

class WellBehavedHttpTransport(SudsHttpTransport):
    """HttpTransport which properly obeys the ``*_proxy`` environment variables."""

    def u2handlers(self):
        """Return a list of specific handlers to add.

        The urllib2 logic regarding ``build_opener(*handlers)`` is:
        - It has a list of default handlers to use
        - If a subclass or an instance of one of those default handlers is given
          in ``*handlers``, it overrides the default one.

        Suds uses a custom {'protocol': 'proxy'} mapping in self.proxy, and adds
        a ProxyHandler(self.proxy) to that list of handlers.
        This overrides the default behaviour of urllib2, which would otherwise
        use the system configuration (environment variables on Linux, System
        Configuration on Mac OS, ...) to determine which proxies to use for
        the current protocol, and when not to use a proxy (no_proxy).

        Thus, passing an empty list will use the default ProxyHandler which
        behaves correctly.
        """
        return []

client = suds.client.Client(my_wsdl, transport=WellBehavedHttpTransport())
I think you can do it by using a urllib2 opener, like below.
import urllib2

import suds.client
import suds.transport.http

t = suds.transport.http.HttpTransport()
proxy = urllib2.ProxyHandler({'http': 'http://localhost:8888'})
opener = urllib2.build_opener(proxy)
t.urlopener = opener  # hand the pre-built opener to the transport
ws = suds.client.Client('file://sandbox.xml', transport=t)
I was actually able to get it working by doing two things:
1. making sure there were keys in the proxy dict for http as well as https, and
2. setting the proxy using set_options AFTER creating the client.
So, my relevant code looks like this:
self.suds_client = suds.client.Client(wsdl)
self.suds_client.set_options(proxy={'http': 'http://localhost:8888', 'https': 'http://localhost:8888'})
I had multiple issues using Suds; even though my proxy was configured properly, I could not connect to the endpoint WSDL. After spending significant time attempting to formulate a workaround, I decided to give soap2py a shot - https://code.google.com/p/pysimplesoap/wiki/SoapClient
Worked straight off the bat.
For anyone who's attempting cji's solution over HTTPS, you actually need to keep one of the handlers for the basic authentication. I'm also using Python 3.7, so urllib2 has been replaced with urllib.request.
from suds.transport.https import HttpAuthenticated as SudsHttpsTransport
from urllib.request import HTTPBasicAuthHandler

class WellBehavedHttpsTransport(SudsHttpsTransport):
    """HttpsTransport which properly obeys the ``*_proxy`` environment variables."""

    def u2handlers(self):
        """Return a list of specific handlers to add.

        The urllib2 logic regarding ``build_opener(*handlers)`` is:
        - It has a list of default handlers to use
        - If a subclass or an instance of one of those default handlers is given
          in ``*handlers``, it overrides the default one.

        Suds uses a custom {'protocol': 'proxy'} mapping in self.proxy, and adds
        a ProxyHandler(self.proxy) to that list of handlers.
        This overrides the default behaviour of urllib2, which would otherwise
        use the system configuration (environment variables on Linux, System
        Configuration on Mac OS, ...) to determine which proxies to use for
        the current protocol, and when not to use a proxy (no_proxy).

        Thus, passing an empty list (aside from the BasicAuthHandler)
        will use the default ProxyHandler which behaves correctly.
        """
        return [HTTPBasicAuthHandler(self.pm)]
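Hypothetical usage (fill in your own WSDL URL and credentials; HttpAuthenticated accepts username/password keyword options):

import suds.client

client = suds.client.Client(
    'https://example.com/service?wsdl',  # placeholder URL
    transport=WellBehavedHttpsTransport(username='user', password='secret'),
)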
Is there any way to specify which DNS server should be used by socket.gethostbyaddr()?
Please correct me if I'm wrong, but isn't this the operating system's responsibility? gethostbyaddr is just a part of libc, and according to the man page:
The gethostbyname(), gethostbyname2() and gethostbyaddr() functions each return a pointer to an object with the following structure describing an internet host referenced by name or by address, respectively. This structure contains either the information obtained from the name server, named(8), or broken-out fields from a line in /etc/hosts. If the local name server is not running these routines do a lookup in /etc/hosts.
So I would say there's no way of simply telling Python (from the code's point of view) to use a particular DNS server, since that's part of the system's configuration.
Take a look at PyDNS.
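If you do reach for a third-party resolver, here is a rough sketch using dnspython (a different library from PyDNS, shown only because it lets you point queries at a specific nameserver):

import dns.resolver
import dns.reversename

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['8.8.8.8']  # query this server instead of the system default

rev_name = dns.reversename.from_address('8.8.4.4')  # -> 4.4.8.8.in-addr.arpa.
answer = resolver.resolve(rev_name, 'PTR')
print(answer[0].target)  # e.g. dns.google.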