How can two microservices communicate every minute with a shared database? [closed] - python

Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
I want to create two microservices in Python: one posts data to the database every minute, and the other processes the data once it has been posted. What would be an ideal architecture for this, and how can it be done in Python?

This sounds a lot like something that should be solved with the CQRS pattern: one service is responsible for updating the database, and the other for consuming the data. This way you separate the update and read operations, making it very scalable.
I'm a big fan of an event-driven architecture when it makes sense, and since you are talking about RabbitMQ in your first solution, I would probably continue down that path.
I would use two different topics: one for commands and one for events. Commands would be things like "update entity", or whatever makes sense in your case; events are things that happened, like "entity updated". Your first service should subscribe to the relevant commands and send out an event after the operation is complete. The second service would subscribe to that event and do the processing it is supposed to do.
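A minimal sketch of that command/event flow with RabbitMQ via the pika library; the exchange, queue, and routing-key names here are illustrative assumptions, not anything prescribed above:

```python
# Sketch of service 1: consume "update entity" commands, write to the DB,
# then publish an "entity updated" event for service 2 to pick up.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="commands", exchange_type="topic")
channel.exchange_declare(exchange="events", exchange_type="topic")

def handle_command(ch, method, properties, body):
    entity = json.loads(body)
    # ... write `entity` to the shared database here ...
    ch.basic_publish(exchange="events",
                     routing_key="entity.updated",
                     body=json.dumps(entity))
    ch.basic_ack(delivery_tag=method.delivery_tag)

queue = channel.queue_declare(queue="update-entity").method.queue
channel.queue_bind(queue=queue, exchange="commands", routing_key="entity.update")
channel.basic_consume(queue=queue, on_message_callback=handle_command)
channel.start_consuming()
```

The second service would bind its own queue to the events exchange with the entity.updated routing key and do its processing in the callback.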
Also, a quick note on message queues: there are a lot of different ones out there. RabbitMQ is a solid but old choice, so you might benefit from one of the other options. I personally like Kafka a lot, but things like Redis, the managed queues provided by cloud services like Azure or AWS, and many others are also worth a look.

Related

First-time web design, already know Python; any advice on which direction to go? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 days ago.
I am a somewhat experienced Python programmer. I will quickly describe my situation. As a hobby, programming was always nice. I then started working at a company that did lots of manual Excel processing, and one day I mentioned that I could probably automate this with Python.
One thing led to another, and now Python does the Excel work multiple times a day, running from an Intel NUC I deployed as a small server. It has been some work figuring everything out, but the money has been good as well, no complaints.
They are quite happy with me and have lots of different plans.
They want me to design a website where the employees can fill out a form daily and the data can be used elsewhere. I've done some HTML and CSS in high school, but I know there needs to be a back end to at least save the data that gets filled in.
I don't know where to start. I know SQL is the #1 language in data processing and PHP in back-end work, but I already know Python, which can also do back-end operations.
I have two direct questions but also looking for advice on the whole situation. Feel free to just point anything out; I will read every comment.
My questions:
Could I run the web server from my Intel NUC, or is this generally seen as bad practice? Also, is it true that I would only need a domain if I run the web server myself?
Is it worth it to learn SQL and PHP, or should I stick to Python?
I have tried looking online but found countless resources. I would like to create a large database with lots of data I can use anytime. I think SQL is good for this, but I'm not looking to waste time.
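For context, a minimal sketch of the kind of Python-only back end being described here, using Flask and the standard-library sqlite3 module; the route, table, and field names are illustrative assumptions:

```python
# Sketch: a Flask route that saves a daily form submission into SQLite.
import sqlite3

from flask import Flask, request

app = Flask(__name__)

def save_entry(employee, value):
    # The connection context manager commits the transaction on success.
    with sqlite3.connect("entries.db") as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS entries (employee TEXT, value TEXT)"
        )
        conn.execute(
            "INSERT INTO entries (employee, value) VALUES (?, ?)",
            (employee, value),
        )

@app.route("/submit", methods=["POST"])
def submit():
    save_entry(request.form["employee"], request.form["value"])
    return "saved"
```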

Delay a python function call on a different pod [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 1 year ago.
I have a piece of software deployed to Kubernetes, and in it I have a scenario where I want a function called at a later point in time, but I am not sure my current pod will be the one executing it (it may be scaled down, for example).
So I need a mechanism that will enable me to schedule a function for later, on a pod of my software that may or may not be the one that scheduled it, and also a way to cancel the execution if some condition is met ahead of time.
Also, I need this to work for thousands of such calls at any given point in time; this is very fast execution software using Twisted Python, working on millions of tasks a day. But given the scaling up and down, I cannot just put the call on the reactor for later.
Using almost any known module, or an external Redis/DB, is fine.
So - I need this community's help...
Thanks in advance!
Roughly speaking, you are describing any worker queue system, with Celery as the most common one in Python. With RabbitMQ as the broker it can easily scale to whatever traffic you throw at it. Also check out Dask, though I personally dislike it, so I mention it only for completeness.
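As a rough sketch of how this looks with Celery (the broker URL and task name are assumptions): a task scheduled with a countdown runs on whichever worker pod is alive when it fires, and it can be revoked if your condition is met ahead of time.

```python
# Sketch: delayed, cancellable execution with Celery and a RabbitMQ broker.
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def process_later(entity_id):
    # Any worker pod alive when the countdown expires may run this.
    print(f"processing {entity_id}")

# Schedule the call for 10 minutes from now, on whichever worker is available.
result = process_later.apply_async(args=["entity-42"], countdown=600)

# If the condition is met ahead of time, cancel the pending call;
# the task is discarded before any worker executes it.
result.revoke()
```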

Running Python on Server and sending results to Client and vice versa [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
So basically, what I want to do is have a user input some data in an HTML form or something (on the client end), have that data carried over to a server where it is put through some Python code, and have the processed result sent back to the client. I know I could use JavaScript to do this on the user side itself, but I want to experiment a bit and make use of some libraries like TensorFlow, Matplotlib, and so on.
Also, is there some way, like WebAssembly, to run Python code on the client side? Maybe data could be sent from the server, or fed in by the user, and processed in some kind of virtual-environment setup?
Note: I know Flask exists and I've tried it, but I can't see the same flexibility in it as in regular Python code.
Thanks in advance 😊
There won't be a definitive answer to your question because it is too broad, but maybe this will give you some starting points.
I see you have two questions:
How can I use Python server-side?
How can I use Python client-side?
Question 1:
First of all, you might know that it makes sense to perform some operations on a server and not on a client, for example interacting with a central database.
Flask is already lightweight compared to Django, which also uses Python. If you really want to do a lot on your own, you could take a look at WebSockets or the Common Gateway Interface (CGI).
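For Question 1, a minimal sketch of the flow with Flask: an HTML form posts data to an endpoint, the Python code processes it, and the result goes back to the client. The route and field names are assumptions:

```python
# Sketch: a Flask endpoint receiving form data and returning a result.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    value = request.form.get("value", "")
    # ... run the data through TensorFlow, Matplotlib, etc. here ...
    result = value.upper()  # stand-in for the real processing
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(port=5000)
```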
Question 2: This is really not recommended, but if you want to play around with WebAssembly and Python, a good starting point is PyPyJs: https://github.com/pypyjs/pypyjs
You can use Brython in the browser; it's pretty spiffy. Full DOM manipulation from Python, fully compatible with libraries written in pure Python. Really neat stuff.
As for the server side, if you want to keep it full-Python, you'll need to use something like Flask, Bottle, CherryPy, aiohttp, ...
If you find yourself struggling, maybe try starting out by writing a simple socket-based microservice, as in the sketch below. You'll then be able to either farm requests out to it from any other server, or incorporate the code into your (Python) server code.
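A minimal sketch of such a socket-based microservice, using only the standard library; the port and the trivial line-based protocol are assumptions:

```python
# Sketch: a tiny TCP microservice that reads one line, processes it,
# and writes the result back.
import socketserver

class UpperCaseHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode().strip()
        self.wfile.write((line.upper() + "\n").encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 9000), UpperCaseHandler) as srv:
        srv.serve_forever()
```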
Good luck!

Is Django suited to simple webapps? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I'm diving into Django to create a webapp.
The thing is, I'm not sure if my app is too simple for what Django offers.
My app will download the latest CPI figures and convert your (monetary) dataset into inflation-adjusted figures going back decades. The user pastes their data in via a textbox. It certainly won't need SQL.
I may want to expand the project with more features in future.
Is it advisable to go with a more lightweight framework for something as simple as I've described?
Every framework has its pros and cons, and there are many different ones. Personally I prefer Flask, but it is all personal preference. Here are some articles that help describe the differences:
https://www.airpair.com/python/posts/django-flask-pyramid
https://www.reddit.com/r/Python/comments/1yr8v5/django_vs_flask/
https://www.hakkalabs.co/articles/django-and-flask
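To give a sense of how small such an app can be in a lightweight framework, here is a minimal sketch in Flask: a textbox the user pastes figures into, and a route that applies an inflation adjustment. The CPI factor and field names are illustrative stand-ins:

```python
# Sketch: a single-route Flask app for the paste-and-convert use case.
from flask import Flask, request

app = Flask(__name__)
FORM = '<form method="post"><textarea name="data"></textarea><button>Adjust</button></form>'

@app.route("/", methods=["GET", "POST"])
def adjust():
    if request.method == "POST":
        cpi_factor = 1.35  # stand-in for the latest downloaded CPI figure
        values = [float(v) for v in request.form["data"].split()]
        return {"adjusted": [round(v * cpi_factor, 2) for v in values]}
    return FORM
```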
For a webapp like the one you describe, it sounds like most of the work can happen on the client side, without sending the data back to the server. From what it sounds like, you simply need to make a few calculations and present the data in a new way.
For this I don't recommend Django, which is ideal for serving pages and managing relational DB content, but not really useful for client-side work.
I'd recommend AngularJS.

Easiest way to manage/monitor a Flask app? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I have a small Flask app I want to deploy on my server, and I'd like to be able to monitor it via an HTTP web interface. I don't need something complicated, and I definitely don't want something that's difficult to set up. Previously I've used Google App Engine, and the functionality in its Logs tab is completely fine.
The app is served through nginx & gunicorn and uses Redis (w/ py-redis) and SQLite (w/ peewee). Ideally I'd like to be able to check the logs for all parts of the system from one place. Is this possible? What's the easiest way?
There is no definitive answer to this predicament; it comes down to whatever way you are most comfortable with.
You could change all your logging to write to a central database, then create a small program to scrape this data for you. This method also involves configuring a central syslog server:
http://www.linuxjournal.com/content/creating-centralized-syslog-server
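A minimal sketch of pointing the Flask app's logging at such a central syslog server, using only the standard library; the server address is an assumption:

```python
# Sketch: ship the Flask app's log records to a central syslog server.
import logging
from logging.handlers import SysLogHandler

from flask import Flask

app = Flask(__name__)

handler = SysLogHandler(address=("logs.example.com", 514))  # central server
handler.setFormatter(logging.Formatter("flask-app: %(levelname)s %(message)s"))
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)

@app.route("/")
def index():
    app.logger.info("request served")  # shows up on the central server
    return "ok"
```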
Whatever way you want to read these files is fine; it all depends on how much control you want. You could simply name all the logs based on hostname and rsync them to a central server, from where you could parse them.
There are also free tools out there which will aid you in choosing your method.
There are also some proprietary systems you could use, such as Splunk:
http://www.splunk.com/
This is by no means a definitive list, but it should point you in the right direction.
