Implementing site-specific plugins in a Python application - python

I'm writing a Python application that will be installed at multiple sites. At each site it needs to interface to other software: the API is different at each site, but they all logically do the same thing.
My solution is to create a base Class that encapsulates the common logic and provides a single interface back to the main app, and then separate subclasses for each different site-specific api. The base class and subclasses would all be defined in an "interfaces" package deployed to each site alongside the main app. To ensure a common code base (for ease of deployment and maintenance) all sites would have identical code, both the main app package and the interfaces package.
My question is, how best to ensure the main app uses the correct subclass from the interfaces package?
My current thoughts are to have the main app call a "get_interface" function in the interfaces package which reads an ini file to identify which interface subclass is in use at that site and returns that subclass to the main app. Obviously this requires the ini file to be site-specific, but that is the only thing that would be.
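A minimal sketch of what I have in mind (the file name, section/option names and subclass names below are purely illustrative):

# interfaces/__init__.py  (sketch)
import configparser

from .site_a import SiteAInterface   # site-specific subclasses, illustrative names
from .site_b import SiteBInterface

_SUBCLASSES = {
    "site_a": SiteAInterface,
    "site_b": SiteBInterface,
}

def get_interface(config_path="interface.ini"):
    """Return the interface subclass configured for this site."""
    config = configparser.ConfigParser()
    config.read(config_path)
    name = config["interface"]["subclass"]   # the only site-specific piece
    return _SUBCLASSES[name]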
Is that the best approach?
To add some more concrete info as requested:
The application is astronomy-related. It aims to automate a pipeline from target identification, through scheduling of telescope imaging sessions, to processing of the resultant images. It makes extensive use of Astropy and affiliated packages.
There are several areas where it needs to interface to other software. Eg:
Target Identification, where it gets target info from various astronomical databases through web services.
Platesolving, where it uploads image data to either local (Windows or Linux-based) applications or remote web services such as astrometry.net.
Imaging, where it needs to interface to commercial telescope/observatory control packages (eg ACP or ASA Sequence) to actually perform imaging sessions.
The logical interfaces are all pretty much identical within each of these areas, so in the above cases I envisage 3 classes providing the abstractions, each implemented through subclasses specific to the target system I'm interfacing with.
I’m really just exploring the best design pattern for how to implement this in a way that keeps the core of the application totally isolated from the implementation of the interfaces to the other systems (as these change fairly frequently).
In a way it's a bit like using device drivers. The core application needs to call for a service to be performed (not caring how that is achieved), with that call being routed to the "driver" (my appropriate interface subclass), which could be different at each site.
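To make the "driver" analogy concrete, this is roughly the shape I'm picturing for one of the areas (plate solving); the class and method names are just placeholders:

from abc import ABC, abstractmethod

class PlateSolver(ABC):
    """Single interface the core application codes against."""

    @abstractmethod
    def solve(self, image_path):
        """Return a plate solution for the given image."""

class LocalSolverDriver(PlateSolver):
    def solve(self, image_path):
        ...  # invoke the locally installed solver

class AstrometryNetDriver(PlateSolver):
    def solve(self, image_path):
        ...  # upload to astrometry.net and poll for the result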

Related

Azure Function App split between consumption plan and service app plan

I've recently started working on a project in Azure Functions that has two components.
Both components use the same shared code, however the plans these components use seem to differ.
An API endpoint for web users for controlling objects saved in a CosmosDB - would benefit from an App Service plan because of the cold/warm issues with the Consumption plan.
A backend scheduled process that uses these objects - Would benefit from a Consumption plan, seeing as it could use automatic scaling, and exact execution time is not so important.
I thought of using the premium plan to solve both issues, but it seems pretty expensive (my workload is pretty low, and from the calculator on Azure's page it looks like around $150 a month by default - correct me if I'm wrong).
I was wondering if there is a way to split a function app into two plans, or have two function apps share code.
Thanks!
I was wondering if there is a way to split a function app into two plans, or have two function apps share code.
The first option is not something that can be done. The second option is the one to go for here. You can have the shared code in a separate Class Library project and reference it from both Function Apps.
A class library defines types and methods that are called by an application. If the library targets .NET Standard 2.0, it can be called by any .NET implementation (including .NET Framework) that supports .NET Standard 2.0. If the library targets .NET 5, it can be called by any application that targets .NET 5.
When you create a class library, you can distribute it as a NuGet package or as a component bundled with the application that uses it.
More information: Tutorial: Create a .NET class library using Visual Studio
EDIT:
For Python, you can create modules.
Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).
More info: 6. Modules
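For example, both Function Apps could import the same shared module (the layout and names below are only illustrative; the shared code could equally be packaged and installed into each app):

# shared_code/objects.py -- the one copy of the shared logic
def load_objects(limit=None):
    """Common logic for reading the saved objects from Cosmos DB."""
    ...

# In the API Function App:
from shared_code.objects import load_objects

# In the scheduled Function App:
from shared_code.objects import load_objects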

Are there reactive state libraries like Mobx for Python?

I'm looking for reactive state libraries like Mobx for Python, i.e. on server-side rather than client-side of a web application.
Mobx is similar to classic reactive libraries like RxPY, but has a different focus: it is not so much about low-level event dispatching as about reacting to data changes and recalculating derived values (but only those affected, and lazily for non-observed dependent values). Mobx also determines the dependencies of calculated values automatically.
Also, the Vue framework has such functionality built in, with an even better syntax, with the upside (as well as downside) of being closely tied to the framework.
Alas, both are JavaScript and targeted at client-side / user interface.
So my specific questions are:
Are there similar reactive state libraries for Python?
Do these provide integration for storing/observing data in files?
(This would essentially be an inotify-based build system, but more fine-grained and more flexible.)
Do these provide integration with relational databases?
(Yes, there is a conceptual gap to be bridged, and it probably works only as long as a single server instance accesses the database. It would still be very useful for a wide range of applications.)
Do these provide integration with webserver frameworks?
(i.e. received HTTP requests trigger state changes and recalculations, and some calculated values are JSON structures which the client observes through web sockets, long polling or messaging systems.)
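To make the requirement concrete, this is the kind of behaviour I'm after, hand-rolled without any library (Mobx would additionally track the dependencies automatically):

class Observable:
    """Tiny stand-in for what a reactive state library provides."""
    def __init__(self, value):
        self._value = value
        self._watchers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        for recompute in self._watchers:
            recompute()

    def watch(self, recompute):
        self._watchers.append(recompute)

price = Observable(10)
quantity = Observable(3)
derived = {}

def recompute_total():
    derived["total"] = price.value * quantity.value   # derived value

for obs in (price, quantity):
    obs.watch(recompute_total)
recompute_total()

quantity.value = 5        # changing a dependency...
print(derived["total"])   # ...recalculates the derived value: 50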
I did one. It's called MoPyX. It's toolkit independent, so you can just observe objects, but it is geared towards UIs.
See: https://github.com/germaniumhq/mopyx
PySide2 demo: https://github.com/germaniumhq/mopyx-sample

Python Interprocess w/REST Interface options

Is there any tool/library for Python that will aid in interprocess communication, while keeping the API/client code easy to maintain?
I'm writing an application wherein I own both the server and the client portion, but I'd like to make it expandable to others via a REST interface (or something similarly accessible). Is my only option to write the boilerplate connective tissue for REST communication?
The REST interface should be implemented with small functions that call the actual Python API, which you will implement anyway.
If you search here, on SO, the most frequent recommendation will be to use Flask to expose the REST interface.
There are libraries around that will try to turn the methods of a class into REST paths, and such; those may save you a couple of hours at the outset, but cost you many hours down the road.
This morning I coded a backend service that way. The Requests calls to the external service are hidden by a module so the business logic doesn't know where the objects come from (ORM?), and the business logic produces objects that a simple Flask layer consumes to produce the JSON required by each matched URL.
from flask import request  # 'app', 'api' and 'json_response' are defined elsewhere in the author's code

@app.route("/api/users")
def users():
    # Thin Flask layer: read the query parameter, delegate to the Python API, serialize to JSON
    return json_response(
        api.users(limit=request.args.get('limit', None)),
    )
A one-liner.
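The json_response helper there is the author's own; a plausible version, purely as an assumption, would be:

import json
from flask import Response

def json_response(data, status=200):
    # Serialize whatever the API layer returns and set the JSON content type.
    return Response(json.dumps(data), status=status, mimetype="application/json")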

Durable architecture in Python distributed application

Wondering about durable architectures for distributed Python applications.
This question I asked before should provide a little guidance about the sort of application it is. We would like to have the ability to have several code servers and several database servers, and ideally some method of deployment that is manageable and not too much of a pain.
The question I mentioned provides an answer that I like, but I wonder how it could be made more durable, or if doing so requires using other technologies. In particular:
I would have my frontend endpoints be the WSGI (because you already have that written) and write the backend to be distributed via messages. Then you would have a pool of backend nodes that would pull messages off of the Celery queue and complete the required work. It would look sort of like:
Apache -> WSGI Containers -> Celery Message Queue -> Celery Workers.
The apache nodes would be behind a load balancer of some kind. This would be a fairly simple architecture to scale and is, if done correctly, fairly reliable. Code for failure in a system like this and you will be fine.
What is the best way to make durable applications? Any suggestions on how to either "code for failure" or design it differently so that we don't necessarily have to? If you think Python might not be suited for this, that is also a valid solution.
Well, to continue on from the previous answer I gave.
In my projects I code for failure, because I use AWS for a lot of my hosting needs.
I have implemented database backends that will make sure the database region is accessible and, if it is not, will choose another region from a specified list. This happens transparently to the rest of the system on that node. So, if the east-1a region goes down, I have a few other regions that I also host in that it will fail over into, such as the west coast. I keep track of in-flight database transactions, send them over to the west coast, and dump them to a file so I can import them into the old database region once it becomes available.
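A rough sketch of that failover pattern (the region names and the connect callable are illustrative):

REGION_PREFERENCE = ["us-east-1", "us-west-1", "us-west-2"]   # illustrative

def get_connection(connect, regions=REGION_PREFERENCE):
    """Try each region in order; callers never see which one answered."""
    last_error = None
    for region in regions:
        try:
            return connect(region)
        except ConnectionError as exc:   # or whatever your driver raises
            last_error = exc
    raise last_error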
My front end servers sit behind an Elastic Load Balancer that is distributed across multiple regions, which allows for durable recovery if a region fails. But it cannot be relied upon, so I am looking into solutions such as running HAProxy and switching my DNS in case my ELB goes down. This is a work in progress and I cannot give specifics on my own solutions.
To make your data processing durable, look into Celery and store the data in a distributed Mongo server to keep your results safe. Using a durable data store to keep your results allows you to get them back in the event of a node crash. It comes at the cost of some performance, but it shouldn't be too terrible if you only have soft real-time constraints.
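A minimal Celery setup along those lines might look like this (the broker and backend URLs are placeholders; acks_late means a task is re-queued if its worker dies mid-run):

from celery import Celery

app = Celery(
    "pipeline",
    broker="amqp://broker-host//",                  # placeholder broker URL
    backend="mongodb://mongo-host/celery_results",  # durable result store
)

@app.task(acks_late=True)   # re-delivered to another worker on node crash
def process(job_id):
    ...  # the actual data processing
    return {"job_id": job_id, "status": "done"}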
http://www.mnxsolutions.com/amazon/designing-for-failure-with-amazon-web-services.html
The above article talks mostly about AWS, but the ideas apply to any system where you need high availability and durability. Just remember that downtime is OK as long as you minimize it for a subset of users.

DB Connectivity from multiple modules

I am working on a system where a bunch of modules connect to an MS SQL Server DB to read/write data. Each of these modules is written in a different language (C#, Java, C++), as each language serves the purpose of that module best.
My question, however, is about the DB connectivity. As of now, all these modules use the language-specific SQL connectivity API to connect to the DB. Is this a good way of doing it?
Or alternatively, is it better to have a Python (or some other scripting lang) script take over the responsibility of connecting to the DB? The modules would then send in input parameters and the name of a stored procedure to the Python Script and the script would run it on the database and send the output back to the respective module.
Are there any advantages of the second method over the first ?
Thanks for helping out!
If we assume that each language you use will have an optimized set of classes to interact with databases, then there shouldn't be a real need to pass all database calls through a centralized module.
Using a "middle-ware" for database manipulation does offer a very significant advantage. You can control, monitor and manipulate your database calls from a central and single location. So, for example, if one day you wake up and decide that you want to log certain elements of the database calls, you'll need to apply the logical/code change only in a single piece of code (the middle-ware). You can also implement different caching techniques using middle-ware, so if the different systems share certain pieces of data, you'd be able to keep that data in the middle-ware and serve it as needed to the different modules.
The above is a very advanced edge-case and it's not commonly used in small applications, so please evaluate the need for the above in your specific application and decide if that's the best approach.
Doing things the way you do them now is fine (if we follow the above assumption) :)
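That said, if you do go the middle-ware route, a minimal sketch of the single choke-point idea might be (the stored-procedure call and DB-API cursor are illustrative):

import logging

logger = logging.getLogger("db_middleware")
_cache = {}

def run_procedure(cursor, proc_sql, params, cacheable=False):
    """Single choke point for every database call: log, optionally cache, then execute."""
    key = (proc_sql, tuple(params))
    if cacheable and key in _cache:
        return _cache[key]
    logger.info("executing %s with %r", proc_sql, params)
    cursor.execute(proc_sql, params)   # e.g. "EXEC dbo.GetOrders @CustomerId=?"
    rows = cursor.fetchall()
    if cacheable:
        _cache[key] = rows
    return rows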
