Are there reactive state libraries like Mobx for Python? - python

I'm looking for reactive state libraries like Mobx for Python, i.e. on the server side rather than the client side of a web application.
Mobx is similar to classic reactive libraries like RxPY, but has a different focus: it is not so much about low-level event dispatching as about reacting to data changes, recalculating derived values (but only those affected, and staying lazy about dependent values nobody observes). And Mobx determines the dependencies of calculated values automatically.
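To make concrete what I mean by this, here is a tiny, purely illustrative sketch (not an existing library) of automatic dependency tracking with lazy recomputation in plain Python:

# Illustration only: computed values record which observables they read,
# and are recalculated lazily, only when a dependency actually changed.

class Observable:
    def __init__(self, value):
        self._value = value
        self._dependents = set()

    def get(self):
        if Computed._active is not None:   # dependency recorded automatically
            self._dependents.add(Computed._active)
        return self._value

    def set(self, value):
        self._value = value
        for dep in self._dependents:
            dep.invalidate()


class Computed:
    _active = None                         # the computed value currently evaluating

    def __init__(self, fn):
        self._fn = fn
        self._cache = None
        self._dirty = True

    def invalidate(self):
        self._dirty = True

    def get(self):
        if self._dirty:                    # lazy: recompute only when stale and asked for
            Computed._active, previous = self, Computed._active
            try:
                self._cache = self._fn()
            finally:
                Computed._active = previous
            self._dirty = False
        return self._cache


price = Observable(10)
quantity = Observable(3)
total = Computed(lambda: price.get() * quantity.get())
print(total.get())   # 30 -- dependencies on price and quantity recorded
quantity.set(4)      # marks total dirty, nothing is recomputed yet
print(total.get())   # 40 -- recomputed lazily on access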
Also, the Vue framework has such functionality built in, with an even better syntax, with the upside (as well as downside) of being closely tied to the framework.
Alas, both are JavaScript and targeted at client-side / user interface.
So my specific questions are:
Are there similar reactive state libraries for Python?
Do these provide integration for storing/observing data in files?
(This would essentially be an inotify-based build system, but more fine-grained and more flexible.)
Do these provide integration with relational databases?
(Yes, there is a conceptual gap to be bridged, and it probably works only as long as a single server instance accesses the database. It would still be very useful for a wide range of applications.)
Do these provide integration with webserver frameworks?
(i.e. received HTTP requests trigger state changes and recalculations, and some calculated values are JSON structures which are observed by the client through web sockets, long polling or messaging systems.)

I wrote one. It's called MoPyX. It's toolkit-independent, so you can just observe objects, but it is geared towards UIs.
See: https://github.com/germaniumhq/mopyx
PySide2 demo: https://github.com/germaniumhq/mopyx-sample

Related

API Design Questions using Django for OS tasks (REST vs RPC)

Background:
I have an application that is supposed to automate some infrastructure & OS-heavy tasks that happen on a network file system (for example: mounting volumes, shutting down / bringing up servers, creating directories, moving data around, ssh-ing etc). Ultimately there are a lot of OS-level commands that need to be run in a sequence for each action. Our consumer/client likely does not know this sequence, but knows "I want to do X task".
Tech stack: Python/Django
I have been tasked with setting this application up but am unsure of the best approach for modularity, both from the API standpoint and in the overall design. Currently, we have a similar application that is SOAP-style (RPC), but the way it is written is not very modular. For example, one function will have a ton of random hardcoded subprocess commands - not the approach I want to emulate here.
Initially I was leaning more towards a REST API since Django has a nice django rest framework plug-in, but I am having trouble modelling these very action-oriented tasks. The more I read online, the more I come to believe I really need to think of every little action as a resource, with the client having to GET/POST/PUT to each of these to keep things very modular. But when I boil that down further, it looks like I may need to set up 15+ endpoints for each situation, and the client likely isn't going to want to call all 15 endpoints to get the single behaviour they want. That being said, moving to RPC so users can have one endpoint that 'moves the moon on a single call' might not be the best approach either.
I think one of the issues I see is our application is doing a lot of work on a file system, not all contained within our application's database. I reckon that's kind of a central point of this application, but I have trouble modelling things that require file system actions outside our application's database.
Question 1:
One example action that our client might want to call would be responsible for ssh-ing to a remote server and running a command. How might you model this in REST?
Question 2:
How do you all model file system actions in your applications?
Question 3:
After reviewing the above, does RPC seem like the better option?
Other:
Any other help or feedback (even in general) is much appreciated.
REST is similar to SOAP in the sense that you call operations in SOAP, and REST just maps those operations to web resources and HTTP methods.
For example
z = DoSSHStuffOnARemoteServer(x=1, y=2)
vs
POST /RemoteServer/SSHStuff {x:1,y:2}
If it times out because it takes a lot of time, then you can return
202 accepted
{type: "transaction": href: "/RemoteServer/SSHStuff/123", status: "pending"}
and poll it every 5-10 minutes or use websockets to update the status. After it is done:
200 ok
{type: "transaction": href: "/RemoteServer/SSHStuff/123", status: "done", result: {z:3}}
So there is no magic. Just keep in mind that REST is in the presentation layer of your application: it returns view models, and the entire structure is connected to the application services if you do DDD. It should not reflect the database structure unless you have an anemic domain model (what others call a thick client).
Normally I would not say anything about RemoteServer/SSHStuff; I would just tell the client what will be done and stay silent about how it will be done. They don't need to know anything about how you store data or how many servers you have with what protocols and applications. It should not be their concern. The only things they need to know are what will be done, how long it takes to respond and what the response will be. The other part is irrelevant to them, and it is a security risk if you share too much of it. When we design an interface, be it a REST service or just an OOP interface, we always do it to hide implementation details. I hope that helps, have a nice day!

Implementing site-specific plugins in a Python application

I'm writing a Python application that will be installed at multiple sites. At each of those sites it needs to interface to other software, with a different API at each site, but all logically doing the same thing.
My solution is to create a base Class that encapsulates the common logic and provides a single interface back to the main app, and then separate subclasses for each different site-specific api. The base class and subclasses would all be defined in an "interfaces" package deployed to each site alongside the main app. To ensure a common code base (for ease of deployment and maintenance) all sites would have identical code, both the main app package and the interfaces package.
My question is, how best to ensure the main app uses the correct subclass from the interfaces package?
My current thoughts are to have the main app call a "get_interface" function in the interfaces package which reads an ini file to identify which interface subclass is in use at that site and returns that subclass to the main app. Obviously this requires the ini file to be site-specific, but that is the only thing that would be.
Is that the best approach?
To add some more concrete info as requested:
The application is astronomy-related. It aims to automate a pipeline from target identification, through scheduling of telescope imaging sessions, to processing of the resultant images. It makes extensive use of Astropy and affiliated packages.
There are several areas where it needs to interface to other software. Eg:
Target Identification, where it gets target info from various astronomical databases through web services.
Platesolving, where it uploads image data to either local (Windows or Linux-based) applications or remote web services such as astrometry.net.
Imaging, where it needs to interface to commercial telescope/observatory control packages (eg ACP or ASA Sequence) to actually perform imaging sessions.
The logical interfaces are all pretty much identical within each of these areas so, in the above cases, I envisage 3 classes to provide abstractions, which would be implemented through subclasses specific to the target system I’m interfacing with.
I’m really just exploring the best design pattern for how to implement this in a way that keeps the core of the application totally isolated from the implementation of the interfaces to the other systems (as these change fairly frequently).
In a way it’s a bit like using device drivers. The core application needs to call for a service to be performed (not caring how that is achieved) with that call being routed to the “driver” (my appropriate interface sub class) which would be potentially different for each site implementation.
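To make the idea concrete, here is a minimal sketch of the get_interface() function mentioned above; the ini section, key and class names are purely illustrative, and the same thing could also be done with entry points or an explicit registry:

import configparser
import importlib

def get_interface(config_path="site.ini", section="imaging"):
    """Return the site-specific interface subclass named in the site's ini file."""
    # Example site.ini contents (illustrative only):
    #   [imaging]
    #   class = interfaces.acp.ACPImagingInterface
    config = configparser.ConfigParser()
    config.read(config_path)
    dotted_path = config[section]["class"]
    module_name, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)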

Python Interprocess w/REST Interface options

Is there any tool/library for Python that will aid in interprocess communication, while keeping the API/client code easy to maintain?
I'm writing an application wherein I own both the server and the client portion, but I'd like to make it expandable to others via a REST interface (or something similarly accessible). Is my only option to write the boilerplate connective tissue for REST communication?
The REST interface should be implemented with small functions that call the actual Python API, which you will implement anyway.
If you search here, on SO, the most frequent recommendation will be to use Flask to expose the REST interface.
There are libraries around that will try to turn the methods of a class into REST paths, and such, and those may save you a couple of hours at the onset, but cost you many hours down the road.
This morning I coded a backend service that way. The Requests calls to the external service are hidden by a module so the business logic doesn't know where the objects come from (ORM?), and the business logic produces objects that a simple Flask layer consumes to produce the JSON required by each matched URL.
from flask import Flask, request   # json_response and api are the service's own helpers

app = Flask(__name__)

@app.route("/api/users")
def users():
    return json_response(
        api.users(limit=request.args.get('limit', None)),
    )
A one-liner.
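json_response is not shown; one possible shape for it (an assumption, not part of the original service) would be:

import json
from flask import Response

def json_response(payload, status=200):
    # Assumed helper: serialise whatever the API layer returns into a JSON response.
    return Response(json.dumps(payload, default=str),
                    status=status,
                    mimetype="application/json")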

Durable architecture in Python distributed application

Wondering about durable architectures for distributed Python applications.
This question I asked before should provide a little guidance about the sort of application it is. We would like to be able to run several code servers and several database servers, and ideally have some method of deployment that is manageable and not too much of a pain.
The question I mentioned provides an answer that I like, but I wonder how it could be made more durable, or if doing so requires using other technologies. In particular:
I would have my frontend endpoints be the WSGI (because you already have that written) and write the backend to be distributed via messages. Then you would have a pool of backend nodes that would pull messages off of the Celery queue and complete the required work. It would look sort of like:
Apache -> WSGI Containers -> Celery Message Queue -> Celery Workers.
The apache nodes would be behind a load balancer of some kind. This would be a fairly simple architecture to scale and is, if done correctly, fairly reliable. Code for failure in a system like this and you will be fine.
What is the best way to make durable applications? Any suggestions on how to either "code for failure" or design it differently so that we don't necessarily have to? If you think Python might not be suited for this, that is also a valid solution.
Well, to continue from the previous answer I gave:
In my projects I code for failure, because I use AWS for a lot of my hosting needs.
I have implemented database backends that make sure the database region is accessible and, if not, choose another region from a specified list. This happens transparently to the rest of the system on that node. So, if the east-1a region goes down, I have a few other regions that I also host in which it will fail over to, such as the west coast. I keep track of in-flight database transactions, send them over to the west coast and dump them to a file so I can import them into the old database region once it becomes available again.
My front end servers sit behind an elastic load balancer that is distributed across multiple regions, and this allows for durable recovery if a region fails. But it cannot be relied upon alone, so I am looking into solutions such as running HAProxy and switching my DNS in case my ELB goes down. This is a work in progress and I cannot give specifics on my own solutions.
To make your data processing durable, look into Celery and store the data in a distributed MongoDB cluster to keep your results safe. Using a durable data store to keep your results allows you to get them back in the event of a node crash. It comes at the cost of some performance, but it shouldn't be too terrible if you only rely on soft real-time constraints.
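As a rough illustration of that combination (the broker and backend URLs and the task body are placeholders; the MongoDB result backend needs pymongo installed):

from celery import Celery

# Placeholder broker/backend URLs; results go to MongoDB so they survive a worker crash.
celery_app = Celery("tasks",
                    broker="amqp://guest@localhost//",
                    backend="mongodb://localhost:27017/celery_results")

@celery_app.task(bind=True, max_retries=3)
def process_job(self, payload):
    try:
        # ... the actual backend work goes here ...
        return {"status": "ok", "payload": payload}
    except Exception as exc:
        # "Code for failure": retry with a delay instead of losing the job.
        raise self.retry(exc=exc, countdown=30)

# The WSGI frontend only enqueues and returns immediately:
#     process_job.delay(request_data)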
http://www.mnxsolutions.com/amazon/designing-for-failure-with-amazon-web-services.html
The above article talks mostly about AWS, but the ideas apply to any system in which you need high availability and durability. Just remember that downtime is OK as long as you minimize it for a subset of users.

DB Connectivity from multiple modules

I am working on a system where a bunch of modules connect to a MS SQL Server DB to read/write data. Each of these modules is written in a different language (C#, Java, C++), as each language serves the purpose of the module best.
My question however is about the DB connectivity. As of now, all these modules use the language-specific SQL connectivity API to connect to the DB. Is this a good way of doing it?
Or alternatively, is it better to have a Python (or some other scripting lang) script take over the responsibility of connecting to the DB? The modules would then send in input parameters and the name of a stored procedure to the Python Script and the script would run it on the database and send the output back to the respective module.
Are there any advantages of the second method over the first ?
Thanks for helping out!
If we assume that each language you use will have an optimized set of classes to interact with databases, then there shouldn't be a real need to pass all database calls through a centralized module.
Using a "middle-ware" for database manipulation does offer a very significant advantage. You can control, monitor and manipulate your database calls from a central and single location. So, for example, if one day you wake up and decide that you want to log certain elements of the database calls, you'll need to apply the logical/code change only in a single piece of code (the middle-ware). You can also implement different caching techniques using middle-ware, so if the different systems share certain pieces of data, you'd be able to keep that data in the middle-ware and serve it as needed to the different modules.
The above is a very advanced edge-case and it's not commonly used in small applications, so please evaluate the need for the above in your specific application and decide if that's the best approach.
Doing things the way you do them now is fine (if we follow the above assumption) :)
