python swagger client set host and port

How should you specify the host and port in the Python client library auto-generated by swagger-codegen?
The only documentation that I found is here.

Bit of an old one, but since this is where I landed first when searching, I'll provide my solution:
configuration = swagger_client.Configuration()
configuration.host = 'http://127.0.0.1:8000'
api_client = swagger_client.ApiClient(configuration=configuration)
api_instance = swagger_client.DefaultApi(api_client=api_client)
But since the host gets hard-coded in the configuration, for our project I'll likely end up building a differently configured client for each environment (staging, prod, etc.), e.g. as sketched below.
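A minimal sketch of per-environment host selection; the APP_ENV variable name and the staging/prod hosts are illustrative, not part of the generated client:
import os
import swagger_client

# Illustrative mapping; APP_ENV is an assumed environment variable.
HOSTS = {
    'dev': 'http://127.0.0.1:8000',
    'staging': 'https://staging.example.com',
    'prod': 'https://api.example.com',
}

configuration = swagger_client.Configuration()
configuration.host = HOSTS[os.environ.get('APP_ENV', 'dev')]
api_client = swagger_client.ApiClient(configuration=configuration)
api_instance = swagger_client.DefaultApi(api_client=api_client)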

The target host gets hard-coded in the client code, e.g.:
https://github.com/swagger-api/swagger-codegen/blob/master/samples/client/petstore/python/petstore_api/configuration.py#L50
# configuration.py
...
def __init__(self):
    """Constructor"""
    # Default Base url
    self.host = "http://petstore.swagger.io:80/v2"


gRPC-python: switching Servicer while gRPC server is running ? (simulation - real mode switching)

For our new, open lab device automation standard (https://gitlab.com/SiLA2/sila_python) we would like to run the devices (= gRPC servers) in two modes: a simulation mode and a real mode, with the same set of remote calls. In simulation mode the server shall just return simulated responses; in real mode it should communicate with the hardware.
My first idea was to create two almost identical gRPC servicer python classes in separated modules, like in this example:
in hello_sim.py:
class SayHello(SayHello_pb2_grpc.SayHelloServicer):
    # ... implementation of the simulation servicer
    def SayHello(self, request, context):
        # simulation code ...
        return SayHello_pb2.SayHello_response("simulation")
in hello_real.py:
class SayHello(SayHello_pb2_grpc.SayHelloServicer):
    # ... implementation of the real servicer
    def SayHello(self, request, context):
        # real hardware code
        return SayHello_pb2.SayHello_response("real")
and then, after creating the gRPC server in server.py, I could switch between simulation and real mode by re-registering the servicer at the gRPC server, e.g.:
server.py
# imports, init ...
grpc_server = GRPCServer(ThreadPoolExecutor(max_workers=10))
sh_sim = SayHello_sim.SayHello()
sh_real = SayHello_real.SayHello()
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_sim, grpc_server)
grpc_server.run()
# ..... and later, still while the same grpc server is running, re-register, like
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_real, grpc_server)
to be able to call the real hardware code;
or by exchanging the reference to the servicer object, like, e.g.:
# imports, init ...
grpc_server = GRPCServer(ThreadPoolExecutor(max_workers=10))
sh_sim = SayHello_sim.SayHello()
sh_real = SayHello_real.SayHello()
sh_current = sh_sim
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_current, grpc_server)
grpc_server.run()
# ..... and then later, still while the same grpc server is running, re-register the real Servicer, like
sh_current = sh_real
# so that the server would just use the other servicer object for the next calls ...
but neither strategy works :(
When calling the server in simulation mode from a gRPC client, I would expect that it should reply (according to the example): "simulation"
gRPC_client.py
# imports, init ....
response = self.SayHello_stub.SayHello()
print(response)
>'simulation'
and after switching to real-mode (by any mechanism) "real":
# after switching to real mode ...
response = self.SayHello_stub.SayHello()
print(response)
>'real'
What is the cleanest and most elegant solution to achieve this mode switching without completely shutting down the gRPC server (and thereby losing the connection to the client)?
Thanks a lot for your help in advance!
PS: (Shutting down the gRPC server and re-registering would of course work, but this is not what we want.)
A kind colleague of mine provided me with a good suggestion:
One could use the dependency injection concept (see https://en.wikipedia.org/wiki/Dependency_injection), like:
class SayHello(pb2_grpc.SayHelloServicer):
    def inject_implementation(self, implementation):
        self.implementation = implementation
    def SayHello(self, request, context):
        return self.implementation.sayHello(request, context)
for a full example, see
https://gitlab.com/SiLA2/sila_python/tree/master/examples/simulation_real_mode_demo
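As a hedged sketch of how the wiring could look (SimulationImpl and RealImpl are illustrative class names, each assumed to provide a sayHello(request, context) method; they are not taken from the linked example):
# Register ONE delegating servicer; only the injected object changes later.
servicer = SayHello()
servicer.inject_implementation(SimulationImpl())  # illustrative class name
SayHello_pb2_grpc.add_SayHelloServicer_to_server(servicer, grpc_server)
grpc_server.run()

# Later, while the same server is still running, switch to the hardware:
servicer.inject_implementation(RealImpl())  # illustrative class name
This avoids re-registration entirely: the server keeps its reference to the one registered servicer, and swapping the inner implementation changes the behaviour of every subsequent call.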

django-websocket-redis redis connection using unix socket

I'm using django-websocket-redis and have this in my settings.py:
WS4REDIS_CONNECTION = {
    'host': 'unix://var/run/redis/redis.sock',
    'db': 0,
    # 'port': 16379,
    # 'password': 'verysecret',
}
I tried all possible combinations of the 'host' parameter and can't get it to connect using a unix socket instead of TCP. I always get this message:
ConnectionError: Error -2 connecting to /var/run/redis/redis.sock:6379. Name or service not known.
Is there a way to connect ws4redis from django to redis using unix socket? If there is, how?
TL;DR: you have to override the default Redis connection settings of ws4redis.
I bumped into this when I was implementing a custom Django command which was supposed to send a websocket message on certain server-side events.
If you look at the source code of RedisPublisher class, you will notice this line at the very top:
redis_connection_pool = ConnectionPool(**settings.WS4REDIS_CONNECTION)
while the comments for ConnectionPool.__init__() state the following:
By default, TCP connections are created unless connection_class is specified.
Use redis.UnixDomainSocketConnection for unix sockets.
So when you're instantiating RedisPublisher, it uses a ConnectionPool which, by default, does not know anything about sockets. Therefore two approaches are possible:
Switch the default Connection to UnixDomainSocketConnection when instantiating the ConnectionPool (sketched right below), or
Substitute the ConnectionPool with a StrictRedis connection, which has a built-in capability to use a unix socket (the named argument unix_socket_path).
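For the first approach, a minimal sketch (the socket path is illustrative; UnixDomainSocketConnection takes the socket file via its path argument):
from redis import ConnectionPool, UnixDomainSocketConnection

# Build the pool with a unix-socket connection class instead of the TCP default.
pool = ConnectionPool(
    connection_class=UnixDomainSocketConnection,
    path='/var/run/redis/redis.sock',  # illustrative socket path
    db=0,
)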
This is how I solved it using the 2nd approach (it appears cleaner to me):
from redis import StrictRedis
from django.conf import settings
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage
r = StrictRedis(**settings.WS4REDIS_CONNECTION)
publisher = RedisPublisher(facility='foobar', broadcast=True)
publisher._connection = r
msg = RedisMessage('ping')
publisher.publish_message(msg)
The whole magic is in the publisher._connection line, which ultimately switches the connection used by the RedisPublisher class.
Since _connection assumes protected access, this looks a bit dirty, but it is a working solution.
You also need to specify the following WS4REDIS_CONNECTION settings:
WS4REDIS_CONNECTION = {
    'unix_socket_path': '/tmp/redis.sock'
}
This is required for wsgi, since it appears to use the built-in redis-py capability to connect to a unix socket, as stated in the docs.

DNS server using Python and Twisted - issues with asynchronous operation

I'm attempting to create a database-driven DNS server (specifically to handle MX records only, and pass everything else upstream) in Twisted using Python 2.7. The code below works (in terms of getting a result), but is not operating asynchronously. Instead, any DNS request coming in blocks the entire program from taking any other requests until the first one has been replied to. We need this to scale, and at the moment we can't figure out where we've gone wrong. If anyone has a working example to share, or sees the issue, we'd be eternally grateful.
import settings
import db

from twisted.names import dns, server, client, cache
from twisted.application import service, internet
from twisted.internet import defer


class DNSResolver(client.Resolver):
    def __init__(self, servers):
        client.Resolver.__init__(self, servers=servers)

    @defer.inlineCallbacks
    def _lookup_mx_records(self, hostname, timeout):
        # Check the DB to see if we handle this domain.
        mx_results = yield db.get_domain_mx_record_list(hostname)
        if mx_results:
            defer.returnValue(
                [([dns.RRHeader(hostname, dns.MX, dns.IN, settings.DNS_TTL,
                                dns.Record_MX(priority, forward, settings.DNS_TTL))
                   for forward, priority in mx_results]),
                 (), ()])
        else:
            # If the hostname isn't in the DB, we forward
            # to our upstream DNS provider (8.8.8.8).
            i = yield self._lookup(hostname, dns.IN, dns.MX, timeout)
            defer.returnValue(i)

    def lookupMailExchange(self, name, timeout=None):
        """
        The twisted function which is called when an MX record lookup is requested.
        :param name: The domain name being queried for (e.g. example.org).
        :param timeout: Time in seconds to wait for the query response. (optional, default: None)
        :return: A DNS response for the record query.
        """
        return self._lookup_mx_records(name, timeout)


# App name, UID, GID to run as. (root/root for port 53 bind)
application = service.Application('db_driven_dns', 1, 1)

# Set the secondary resolver
db_dns_resolver = DNSResolver(settings.DNS_NAMESERVERS)

# Create the protocol handlers
f = server.DNSServerFactory(caches=[cache.CacheResolver()], clients=[db_dns_resolver])
p = dns.DNSDatagramProtocol(f)
f.noisy = p.noisy = False

# Register as a tcp and udp service
ret = service.MultiService()
PORT = 53
for (klass, arg) in [(internet.TCPServer, f), (internet.UDPServer, p)]:
    s = klass(PORT, arg)
    s.setServiceParent(ret)

# Run all of the above as a twistd application
ret.setServiceParent(service.IServiceCollection(application))
EDIT #1
blakev suggested that I might not be using the generator correctly (which is certainly possible). But if I simplify this down a little bit to not even use the DB, I still cannot process more than one DNS request at a time. To test this, I have stripped the class down. What follows is my entire, runnable test file. Even in this highly stripped-down version of my server, Twisted does not accept any more requests until the first one has been answered.
import sys
import logging

from twisted.names import dns, server, client, cache
from twisted.application import service, internet
from twisted.internet import defer


class DNSResolver(client.Resolver):
    def __init__(self, servers):
        client.Resolver.__init__(self, servers=servers)

    def lookupMailExchange(self, name, timeout=None):
        """
        The twisted function which is called when an MX record lookup is requested.
        :param name: The domain name being queried for (e.g. example.org).
        :param timeout: Time in seconds to wait for the query response. (optional, default: None)
        :return: A DNS response for the record query.
        """
        logging.critical("Query for " + name)
        return defer.succeed([
            (dns.RRHeader(name, dns.MX, dns.IN, 600,
                          dns.Record_MX(1, "10.0.0.9", 600)),), (), ()
        ])


# App name, UID, GID to run as. (root/root for port 53 bind)
application = service.Application('db_driven_dns', 1, 1)

# Set the secondary resolver
db_dns_resolver = DNSResolver([("8.8.8.8", 53), ("8.8.4.4", 53)])

# Create the protocol handlers
f = server.DNSServerFactory(caches=[cache.CacheResolver()], clients=[db_dns_resolver])
p = dns.DNSDatagramProtocol(f)
f.noisy = p.noisy = False

# Register as a tcp and udp service
ret = service.MultiService()
PORT = 53
for (klass, arg) in [(internet.TCPServer, f), (internet.UDPServer, p)]:
    s = klass(PORT, arg)
    s.setServiceParent(ret)

# Run all of the above as a twistd application
ret.setServiceParent(service.IServiceCollection(application))

# If called directly, instruct the user to run it through twistd
if __name__ == '__main__':
    print "Usage: sudo twistd -y %s (background) OR sudo twistd -noy %s (foreground)" % (sys.argv[0], sys.argv[0])
Matt,
I tried your latest example and it works fine. I think you may be testing it wrong.
In your later comments you talk about using time.sleep(5) in the lookup method to simulate a slow response.
You can't do that. It will block the reactor. If you want to simulate a delay, use reactor.callLater to fire the deferred
https://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.IReactorTime.html#callLater
eg
def lookupMailExchange(self, name, timeout=None):
    d = defer.Deferred()
    self._reactor.callLater(
        5, d.callback,
        [(dns.RRHeader(name, dns.MX, dns.IN, 600,
                       dns.Record_MX(1, "mail.example.com", 600)),), (), ()]
    )
    return d
Here's how I tested:
time bash -c 'for n in "google.com" "yahoo.com"; do dig -p 10053 @127.0.0.1 "$n" MX +short +tries=1 +notcp +time=10 & done; wait'
And the output shows that both responses came back after 5 seconds
1 10.0.0.9.
1 10.0.0.9.
real 0m5.019s
user 0m0.015s
sys 0m0.013s
Similarly, you need to make sure that calls to your database don't block:
https://twistedmatrix.com/documents/current/core/howto/rdbms.html
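For example, a minimal sketch of a non-blocking version of the db.get_domain_mx_record_list call from the question, using Twisted's adbapi thread-pool wrapper (the database file and the table/column names are illustrative):
from twisted.enterprise import adbapi

# Queries run in a thread pool; runQuery returns a Deferred that fires with
# the result rows, so the reactor is never blocked while the query executes.
dbpool = adbapi.ConnectionPool("sqlite3", "mx.db", check_same_thread=False)

def get_domain_mx_record_list(hostname):
    # Illustrative schema: mx_records(domain, forward, priority)
    return dbpool.runQuery(
        "SELECT forward, priority FROM mx_records WHERE domain = ?",
        (hostname,))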
Some other points:
MX records should contain hostnames, not IP addresses - https://www.rfc-editor.org/rfc/rfc1035#section-3.3.9
Don't use nslookup -
https://jdebp.eu./FGA/nslookup-flaws.html and http://cr.yp.to/djbdns/nslookup.html
Be careful that stdlib logging doesn't block -
https://twistedmatrix.com/documents/current/api/twisted.python.log.PythonLoggingObserver.html
Configure your test client to only issue one UDP query, otherwise retries and followup TCP queries may confuse your tests.

Python - Twisted, Proxy and modifying content

So I've looked around at a few things involving writing an HTTP proxy using Python and the Twisted framework.
Essentially, like some other questions, I'd like to be able to modify the data that will be sent back to the browser. That is, the browser requests a resource and the proxy will fetch it. Before the resource is returned to the browser, I'd like to be able to modify ANY content (HTTP headers AND body).
This (Need help writing a twisted proxy) was what I initially found. I tried it out, but it didn't work for me. I also found this (Python Twisted proxy - how to intercept packets), which I thought would work; however, I can only see the HTTP requests from the browser.
I am looking for any advice. Some thoughts I have are to use the ProxyClient and ProxyRequest classes and override the functions, but I read that the Proxy class itself is a combination of both.
For those who may ask to see some code, it should be noted that I have worked with only the above two examples. Any help is great.
Thanks.
To create a ProxyFactory that can modify server response headers and content, you could override the ProxyClient.handle*() methods:
from twisted.python import log
from twisted.web import http, proxy


class ProxyClient(proxy.ProxyClient):
    """Mangle returned header, content here.

    Use `self.father` methods to modify request directly.
    """
    def handleHeader(self, key, value):
        # change response header here
        log.msg("Header: %s: %s" % (key, value))
        proxy.ProxyClient.handleHeader(self, key, value)

    def handleResponsePart(self, buffer):
        # change response part here
        log.msg("Content: %s" % (buffer[:50],))
        # make all content upper case
        proxy.ProxyClient.handleResponsePart(self, buffer.upper())


class ProxyClientFactory(proxy.ProxyClientFactory):
    protocol = ProxyClient


class ProxyRequest(proxy.ProxyRequest):
    protocols = dict(http=ProxyClientFactory)


class Proxy(proxy.Proxy):
    requestFactory = ProxyRequest


class ProxyFactory(http.HTTPFactory):
    protocol = Proxy
I've got this solution by looking at the source of twisted.web.proxy. I don't know how idiomatic it is.
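As a small illustration of an actual header modification (the replacement value is arbitrary), handleHeader could rewrite a header before forwarding it:
class ServerHeaderRewritingProxyClient(proxy.ProxyClient):
    def handleHeader(self, key, value):
        # Illustrative: replace the Server header before it reaches the browser.
        if key.lower() == "server":
            value = "modified-by-proxy"
        proxy.ProxyClient.handleHeader(self, key, value)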
To run it as a script or via twistd, add at the end:
portstr = "tcp:8080:interface=localhost"  # serve on localhost:8080

if __name__ == '__main__':  # $ python proxy_modify_request.py
    import sys
    from twisted.internet import endpoints, reactor

    def shutdown(reason, reactor, stopping=[]):
        """Stop the reactor."""
        if stopping:
            return
        stopping.append(True)
        if reason:
            log.msg(reason.value)
        reactor.callWhenRunning(reactor.stop)

    log.startLogging(sys.stdout)
    endpoint = endpoints.serverFromString(reactor, portstr)
    d = endpoint.listen(ProxyFactory())
    d.addErrback(shutdown, reactor)
    reactor.run()
else:  # $ twistd -ny proxy_modify_request.py
    from twisted.application import service, strports
    application = service.Application("proxy_modify_request")
    strports.service(portstr, ProxyFactory()).setServiceParent(application)
Usage
$ twistd -ny proxy_modify_request.py
In another terminal:
$ curl -x localhost:8080 http://example.com
For two-way proxy using twisted see the article:
http://sujitpal.blogspot.com/2010/03/http-debug-proxy-with-twisted.html

Creating a program to be broadcasted by avahi

I'm trying to write a program that outputs data that can be served over a network with avahi. The documentation I've looked at seems to say I have to register the service with dbus and then connect it to avahi, but the documentation to do this is pretty sparse. Does anyone know of good documentation for it? I've been looking at these:
python-dbus:
http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html#exporting-objects
python-avahi:
http://www.amk.ca/diary/2007/04/rough_notes_python_and_dbus.html
I'm really unfamiliar with how avahi works at all, so any pointers would be helpful.
I realise this answer is pretty late, considering your question was asked four years ago. However, it might help others.
The following announces a service using avahi/dbus:
import avahi
import dbus
from time import sleep


class ServiceAnnouncer:
    def __init__(self, name, service, port, txt):
        bus = dbus.SystemBus()
        server = dbus.Interface(bus.get_object(avahi.DBUS_NAME, avahi.DBUS_PATH_SERVER),
                                avahi.DBUS_INTERFACE_SERVER)
        group = dbus.Interface(bus.get_object(avahi.DBUS_NAME, server.EntryGroupNew()),
                               avahi.DBUS_INTERFACE_ENTRY_GROUP)

        self._service_name = name
        index = 1
        while True:
            try:
                group.AddService(avahi.IF_UNSPEC, avahi.PROTO_INET, 0, self._service_name,
                                 service, '', '', port, avahi.string_array_to_txt_array(txt))
            except dbus.DBusException:  # name collision -> rename
                index += 1
                self._service_name = '%s #%s' % (name, str(index))
            else:
                break
        group.Commit()

    def get_service_name(self):
        return self._service_name


if __name__ == '__main__':
    announcer = ServiceAnnouncer('Test Service', '_test._tcp', 12345, ['foo=bar', '42=true'])
    print announcer.get_service_name()
    sleep(42)
Using avahi-browse to verify it is indeed published:
micke@els-mifr-03:~$ avahi-browse -a -v -t -r
Server version: avahi 0.6.30; Host name: els-mifr-03.local
E Ifce Prot Name         Type       Domain
+ eth0 IPv4 Test Service _test._tcp local
= eth0 IPv4 Test Service _test._tcp local
   hostname = [els-mifr-03.local]
   address = [10.9.0.153]
   port = [12345]
   txt = ["42=true" "foo=bar"]
Avahi is "just" a client implementation of Zeroconf, which is basically a multicast-DNS-based protocol. You can use Avahi to publish the availability of your "data" through end-points. The actual data must be retrieved through some other means, but you would normally register an end-point that can be "invoked" through a method of your liking, e.g. as sketched below.
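For instance, a hedged sketch of a trivial TCP endpoint serving the actual data on the port announced above (the payload is illustrative):
import socket

# Avahi only advertises the endpoint; this socket is what clients connect to.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('', 12345))  # the same port passed to ServiceAnnouncer
srv.listen(1)
while True:
    conn, addr = srv.accept()
    conn.sendall('hello from the announced service\n')
    conn.close()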
