I am learning Python (coming from a .NET background) and developing an app which interacts with a web service.
The web service is flat, in that it has numerous calls, some of which are related to sessions (e.g. logging on), whereas other calls are related to retrieving/setting business data.
To accompany the web service, there are a couple of Python classes which wrap all the calls. I am looking to develop a client on top of that class but give the client more OO structure.
The design of my own app was to have a Session-type class which would be responsible for logging on/maintaining the connection etc., and which would itself be injected into a Business-type class responsible for making all the business calls.
So the stack is something like
WebService (Soap)
WebServiceWrapper (Python)
Session (Python)
Business (Python)
Here's a sample of my code (I've renamed some methods to try and make stuff more explicit)
from webServiceWrapper import webServiceAPI

class Session():
    def __init__(self, user, password):
        self._api = webServiceAPI()
        self.login = self._api.login(user, password)

    def webServiceCalls(self):
        return self._api()

class Business():
    def __init__(self, service):
        self._service = service

    def getBusinessData(self):
        return self._service.get_business_data()
and my unit test
import unittest

class exchange(unittest.TestCase):
    def setUp(self):
        self.service = Session("username", "password")
        self._business = Business(self.service.webServiceCalls())

    def testBusinessReturnsData(self):
        self.assertFalse(self._business.getBusinessData() == None)
The unit test fails on
return self._api()
saying that the underlying class is not callable
TypeError: 'webServiceAPI' is not callable
My first q is: is that the Python way? Is the OO thinking which underpins app development with static languages good for dynamic languages as well? (That's probably quite a big q!)
My second q is: if this kind of architecture is OK, what am I doing wrong (I guess in terms of passing references to objects in this way)?
Many thx
S
If webServiceAPI is an object, just remove the parentheses, like this:
return self._api
You already created an instance of the object in the constructor.
Maybe add the definition of webServiceAPI to the question; I can only guess at the moment.
I don't see anything that is wrong or un-Pythonic here. Pythonistas often point out the differences from static languages like Java or C#, but many real-world applications mostly use a simple static class design in Python, too.
I guess that webServiceAPI is not a class, thus it can't be called.
If you are using Python 2.x, always inherit from the object type, otherwise you'll get a "classic class" (a relic from ancient times that is kept for backward compatibility).
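Putting the two answers together, a corrected Session could look like this (a sketch based only on the code shown in the question):

from webServiceWrapper import webServiceAPI

class Session(object):  # inherit from object for a new-style class on 2.x
    def __init__(self, user, password):
        self._api = webServiceAPI()
        self.login = self._api.login(user, password)

    def webServiceCalls(self):
        # self._api is already an instance; return it instead of calling it
        return self._api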
In my system I'm thinking about migrating an RPC service to gRPC. (Previously I was using messagepack-rpc, but that doesn't matter here, except maybe for the fact that it didn't require a schema.)
gRPC has many advantages, it's well supported and documented, so it seems like an obvious choice. One thing I'm trying to get a better understanding of is, how do people write their server code without unnecessary duplication?
Let me whip up an example. First, we have a class that acts as a system controller. It contains all the domain knowledge of what my system needs to do, and as such, we want to be good developers and keep implementation details of the RPC technology out of it. So, it looks something like this:
class SystemController:
    def __init__(self, args):
        # Do some system initializing
        ...

    def do_action1(self, str_arg, int_arg):
        """Does action #1. Requires a string argument and
        an integer argument."""
        run_some_code()

    def query_sensor17(self, sensor_name):
        """Queries sensors, requires a sensor name."""
        return get_sensor_value(sensor_name)
In essence, this class will have several dozen API calls that I can use to query or control my system.
Now, in order to use gRPC I'm going to have to describe all of these APIs again for the protobufs. Maybe it'll look like this:
syntax = "proto3";

package mysystemcontroller;

service SystemControllerServer {
    rpc do_action1(Action1Request) returns (Action1Reply) {}
    rpc query_sensor17(QuerySensor17Request) returns (QuerySensor17Reply) {}
}

message Action1Request {
    string str_arg = 1;
    int32 int_arg = 2;
}

message Action1Reply {
}

message QuerySensor17Request {
    string sensor_name = 1;
}

message QuerySensor17Reply {
    string sensor_value = 1;
}
Part of me is a little unhappy about having to describe all of the APIs from the system controller class a second time, but to be fair, I don't have all the type information in my Python, and this buys the ability to do type-safe RPC calls.
But now, I have to write a third file containing the actual server code:
class SystemControllerServer(mysystemcontroller_grpc.SystemControllerServerServicer):
    def __init__(self):
        self._sc = SystemController()

    def do_action1(self, request, context):
        self._sc.do_action1(request.str_arg, request.int_arg)
        return mysystemcontroller.Action1Reply()

    def query_sensor17(self, request, context):
        return mysystemcontroller.QuerySensor17Reply(
            sensor_value=self._sc.query_sensor17(request.sensor_name))
At this point, I have to ask: Is this what everyone does, or do people use custom generation code for the server, or is there some neat Python-introspection that can save some of the trouble?
I do realize that not necessarily all of the server methods are going to be identical, but likely, for the most part, there will be plenty of similarity between server methods.
Put differently, there is so much overlap in information between the three files (domain knowledge implementation, proto file, and server implementation) that I'm curious if people have smarter ways than having to manually keep three separate files in sync when something changes.
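One way to cut the duplication is to generate the servicer methods by introspecting the descriptors that protoc already emits. The sketch below is only a hypothetical illustration and makes strong assumptions: every RPC foo maps 1:1 to a SystemController method foo, the request's field names match that method's parameter names, and each reply message has at most one field, which receives the return value.

import mysystemcontroller        # generated *_pb2 module for the .proto above
import mysystemcontroller_grpc   # generated *_pb2_grpc module


def _make_handler(method_name, reply_cls, controller):
    def handler(self, request, context):
        # Unpack the request's fields into keyword arguments by name.
        kwargs = {f.name: getattr(request, f.name)
                  for f in request.DESCRIPTOR.fields}
        result = getattr(controller, method_name)(**kwargs)
        fields = reply_cls.DESCRIPTOR.fields
        if fields:
            # Assumes a single-field reply that holds the return value.
            return reply_cls(**{fields[0].name: result})
        return reply_cls()
    return handler


def make_servicer(controller):
    # Walk the service descriptor and build one handler per RPC.
    service = mysystemcontroller.DESCRIPTOR.services_by_name['SystemControllerServer']
    handlers = {m.name: _make_handler(m.name,
                                      getattr(mysystemcontroller, m.output_type.name),
                                      controller)
                for m in service.methods}
    cls = type('GeneratedServicer',
               (mysystemcontroller_grpc.SystemControllerServerServicer,),
               handlers)
    return cls()

With something like this, the .proto file stays the single source of truth for types, the returned instance can be registered with the generated add_SystemControllerServerServicer_to_server function, and any RPC that needs custom marshalling can still be written by hand and dropped into the handlers dict.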
Context: For my work, I'm running a script populate.py where we populate a database. It fails at a certain advanced stage (let's just say step 9) as I'm trying to add a new many-to-many association table.
I made some changes to correspond to a similar entity that works. Specifically, I added some @property and @x.setter methods because I thought that would solve it, or at least be good practice. Now I get a failure at an earlier stage that was working before (let's say step 4).
I'm not looking for a solution to either of these issues, but an understanding of how there could be such an error as logged, with respect to how Python is designed to work.
The error is:
`sqlalchemy.exc.InvalidRequestError: Mapper 'mapped class User->user' has no property 'member_groups'`
I did change a class attribute from
member_groups
to
_member_groups
But I also added the following code
@property
def member_groups(self) -> List[MemberGroup]:
    return self._member_groups

@member_groups.setter
def member_groups(self, new_member_groups):
    '''
    Now, what do we do with you?
    '''
    self._member_groups = new_member_groups
I thought the whole point of the @property decorator in Python was for this very use case: that I could name the class attribute _foobar as long as I had the decorator getting and setting it correctly...

@property
def member_groups(self):
    return self._foobar

...and that it shouldn't make any difference what the class attribute is called, as long as we give the user access to it through the property descriptor. I thought that was the whole purpose, or at least a major point, of this Pythonic API: specifically, to create pseudo-private variables without making a difference to, or breaking, existing code. Yet it reports that the mapped class has no property.
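For contrast, here is a minimal plain-Python sketch (no SQLAlchemy involved, names made up) of the behaviour being described; in plain Python the rename really is invisible to callers:

class Plain:
    def __init__(self):
        self._member_groups = []

    @property
    def member_groups(self):
        # Callers never see the underlying attribute name.
        return self._member_groups

    @member_groups.setter
    def member_groups(self, new_member_groups):
        self._member_groups = new_member_groups


p = Plain()
p.member_groups = ['admins']
assert p.member_groups == ['admins']  # the rename makes no difference here

The error in the question comes from SQLAlchemy rather than from the property mechanism: the mapper keeps its own registry of mapped attributes (columns, relationships), and an ordinary Python property is not part of that registry, so mapper-level code that looks up 'member_groups' no longer finds it.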
I just want the theory, not a code solution per se, although I'll take ideas if you have them. I just want to understand Python better.
I'm writing a CLI to interact with elasticsearch using the elasticsearch-py library. I'm trying to mock elasticsearch-py functions in order to test my functions without calling my real cluster.
I read this question and this one but I still don't understand.
main.py
Escli inherits from cliff's App class
class Escli(App):
    _es = elasticsearch5.Elasticsearch()
settings.py
from escli.main import Escli

class Settings:
    def get(self, sections):
        raise NotImplementedError()

class ClusterSettings(Settings):
    def get(self, setting, persistency='transient'):
        settings = Escli._es.cluster\
            .get_settings(include_defaults=True, flat_settings=True)\
            .get(persistency)\
            .get(setting)
        return settings
settings_test.py
from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('elasticsearch5.Elasticsearch')
        self.MockClass = self.patcher.start()

    def test_get(self):
        # Note this is an empty dict to show my point;
        # it will contain child dicts to allow my .get(persistency).get(setting)
        self.MockClass.return_value.cluster.get_settings.return_value = {}
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get('cluster.routing.allocation.node_concurrent_recoveries', persistency='transient')
        # ret should contain a subset of my dict defined above
I want Escli._es.cluster.get_settings() to return what I want (a dict object) so that it doesn't make the real HTTP call, but it keeps doing it.
What I know:
In order to mock an instance method I have to do something like
MagicMockObject.return_value.InstanceMethodName.return_value = ...
I cannot patch Escli._es.cluster.get_settings because Python tries to import Escli as a module, which cannot work. So I'm patching the whole lib.
I desperately tried to put some return_value everywhere but I cannot understand why I can't mock that thing properly.
You should be mocking with respect to where you are testing. Based on the example provided, this means that the Escli class you are using in the settings.py module needs to be mocked with respect to settings.py. So, more practically, your patch call would look like this inside setUp instead:
self.patcher = patch('escli.settings.Escli')
With this, you are now mocking what you want in the right place based on how your tests are running.
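To make this concrete, here is a hedged sketch of the adjusted test, keeping the module layout from the question (the setting value is made up). Because Escli itself is patched and ClusterSettings accesses Escli._es directly on the class, the call chain is configured on the mock's _es attribute rather than on return_value:

from unittest import TestCase
from unittest.mock import patch

import escli.settings


class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('escli.settings.Escli')
        self.MockEscli = self.patcher.start()
        self.addCleanup(self.patcher.stop)

    def test_get(self):
        setting = 'cluster.routing.allocation.node_concurrent_recoveries'
        # The nested dict mirrors the .get(persistency).get(setting) chain.
        self.MockEscli._es.cluster.get_settings.return_value = {
            'transient': {setting: '2'},  # value made up for the test
        }
        cluster_settings = escli.settings.ClusterSettings()
        self.assertEqual(cluster_settings.get(setting), '2')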
Furthermore, to add more robustness to your testing, you might want to consider speccing for the Elasticsearch instance you are creating in order to validate that you are in fact calling valid methods that correlate to Elasticsearch. With that in mind, you can do something like this, instead:
self.patcher = patch('escli.settings.Escli', Mock(Elasticsearch))
To read a bit more about what exactly is meant by spec, check the patch section in the documentation.
As a final note, if you are interested in exploring the great world of pytest, there is a pytest-elasticsearch plugin created to assist with this.
I have recently been trying to learn about WSGI and, moreover, how the web works in regards to Python. So I've been reading through Werkzeug and PEP 333 to learn.
However, I've run up against a small question that I think I understand but probably don't, so I would appreciate you steering me in the right direction.
PEP333 states:
The application object is simply a callable object that accepts two arguments. The term "object" should not be misconstrued as requiring an actual object instance: a function, method, class, or instance with a __call__ method are all acceptable for use as an application object. Application objects must be able to be invoked more than once, as virtually all servers/gateways (other than CGI) will make such repeated requests.
The implementation:
class AppClass:
    """Produce the same output, but using a class

    (Note: 'AppClass' is the "application" here, so calling it
    returns an instance of 'AppClass', which is then the iterable
    return value of the "application callable" as required by
    the spec.

    If we wanted to use *instances* of 'AppClass' as application
    objects instead, we would have to implement a '__call__'
    method, which would be invoked to execute the application,
    and we would need to create an instance for use by the
    server or gateway.)
    """

    def __init__(self, environ, start_response):
        self.environ = environ
        self.start = start_response

    def __iter__(self):
        status = '200 OK'
        response_headers = [('Content-type', 'text/plain')]
        self.start(status, response_headers)
        yield "Hello world!\n"
My question here is just to clarify if I have understood it correctly.
It states that AppClass is the application, and when we call it, it returns an instance of AppClass. But then further down it states 'if we wanted to use instances of AppClass as application objects instead'. Is this saying that when the server side of WSGI calls the AppClass object, there is only one instance running?
For example: the server can issue multiple requests (200 OKs) to the application for more responses, hence why __iter__ is placed into the class. But does each request run through the same single AppClass instance, i.e. does each request to the server basically not instantiate more than one instance of AppClass?
Sorry if this is long winded, and apologies again if I haven't made much sense. I'm trying to improve atm.
Appreciate your inputs as always.
Thanks.
The server technology will call your app (in this case the class AppClass, causing an object construction) for each request. This is because each request will have a potentially unique environ.
The neat thing about this is it doesn't mean your app has to be a class; I often find it useful to define my WSGI app (or middleware) as a function returning a function:
# I'd strongly suggest using a web framework instead to define your application
def my_application(environ, start_response):
    start_response(str('200 OK'), [(str('Content-Type'), str('text/plain'))])
    return [b'hello world!\n']

def my_middleware(app):
    def middleware_func(environ, start_response):
        # do something or call the inner app
        return app(environ, start_response)
    return middleware_func

# expose `app` for whatever server tech you're using (such as uwsgi)
app = my_application
app = my_middleware(app)
Another common pattern involves defining an object to store some application state which is constructed once:
class MyApplication(object):
    def __init__(self):
        # potentially some expensive initialization
        self.routes = ...

    def __call__(self, environ, start_response):
        # Called once per request, must call `start_response` and then
        # return something iterable -- could even be `return self` if
        # this class were to define `__iter__`
        ...
        return [...]

app = MyApplication(...)
As for PEP333, I'd suggest reading PEP3333 instead -- it contains largely the same information, but clarifies the datatypes used throughout.
For background on various ways that WSGI application objects can be implemented read this blog post on the topic.
http://blog.dscpl.com.au/2011/01/implementing-wsgi-application-objects.html
I would also suggest reading the following, which talks about how Python web servers in general work.
https://ruslanspivak.com/lsbaws-part1/
Unless you really have a need you probably just want to use a framework. Avoid trying to write anything from scratch with WSGI.
I'm studying the code of jinja2.ext.InternationalizationExtension provided with Jinja2.
I know that tags can be added via the tags attribute; the Jinja2 template parser will relinquish control and call user code when one of those strings is the first token in a {% %} block.
class InternationalizationExtension(Extension):
    """This extension adds gettext support to Jinja2."""
    tags = set(['trans'])
I learned from the code that an extension can add attributes to the environment by calling Environment.extend; for jinja2.ext.InternationalizationExtension this is done in the __init__ method:
def __init__(self, environment):
    Extension.__init__(self, environment)
    environment.globals['_'] = _gettext_alias
    environment.extend(
        install_gettext_translations=self._install,
        install_null_translations=self._install_null,
        install_gettext_callables=self._install_callables,
        uninstall_gettext_translations=self._uninstall,
        extract_translations=self._extract,
        newstyle_gettext=False
    )
I know that custom filters are added by registering functions into Environment.filters:
def datetimeformat(value, format='%H:%M / %d-%m-%Y'):
    return value.strftime(format)

environment.filters['datetimeformat'] = datetimeformat
The questions are:
Is it recommended that an extension adds new filters, and not only tags and attributes, to the environment? (The documentation suggests that this should be common practice.)
Where in the extension subclass should this be done? In __init__ a reference to the environment is available, so in principle the above code could be put in the __init__ method.
Is it conceptually OK to do such a thing in __init__? I personally don't like to alter objects' states from within other objects' constructors, but in Jinja2 it seems idiomatic enough to have made it into an official extension (I'm talking about altering Environment.globals and calling Environment.extend from InternationalizationExtension.__init__).
Edit
Here is a pattern that at least packages filters nicely into a Python module. However, this install function cannot be invoked from within a template (say, via a custom CallBlock created using an extension), because the environment should not be edited after the template has been instantiated.
def greet(value):
    return "Hello %s!" % value

def curse(value):
    return "Curse you, %s!" % value

def ohno(value):
    return "Oh, No! %s!" % value

def install(env):
    env.filters['greet'] = greet
    env.filters['curse'] = curse
    env.filters['ohno'] = ohno
Is it recommended that an extension adds new filters, and not only tags and attributes, to the environment?
Only if you need them; otherwise, why overcomplicate your code? That said, filters are a very common use case for extensions, and the author most likely built this in because they expected it to happen.
Where in the extension subclass should this be done?
It has to be done at call time, so if you don't put it directly in __init__, you'll need to put it in a helper method called from __init__.
Is it conceptually OK to do such a thing in __init__?
It's perfectly fine, so long as other users of your code can understand what it's doing. The simplest solution is usually the best.
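As a minimal sketch of the pattern under discussion (the extension and filter below are made up for illustration), registering a filter from __init__ mirrors what InternationalizationExtension does with globals:

from jinja2 import Environment
from jinja2.ext import Extension


def shout(value):
    # Hypothetical filter, purely for illustration.
    return str(value).upper() + '!'


class ShoutExtension(Extension):
    tags = set()  # no new tags; this extension only registers a filter

    def __init__(self, environment):
        Extension.__init__(self, environment)
        environment.filters['shout'] = shout


env = Environment(extensions=[ShoutExtension])
print(env.from_string('{{ "hello" | shout }}').render())  # HELLO!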