Delay a python function call on a different pod [closed] - python

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I have a piece of software deployed to Kubernetes, and in that I have a scenario when I want to have one function called at a later point in time, but I am not sure my current pod will be the one executing it (it may be scaled down, for example).
So I need a mechanism that lets me schedule a function for later execution on some pod of my deployment, which may or may not be the pod that scheduled it, plus a way to skip the execution if some condition was met ahead of time.
Also, this needs to work for thousands of such calls at any given point in time: this is a high-throughput system built on Twisted Python that works through millions of tasks a day. Given the scaling up and down, I cannot just put the call on the reactor for later.
Almost anything is fine: a well-known module, an external Redis/DB, etc.
So - I need this community's help...
Thanks in advance!

You are, roughly speaking, describing any worker queue system, with Celery being the most common one in Python. With RabbitMQ as the broker it can easily scale to whatever traffic you throw at it. Also check out Dask, though I'm not a fan of it and mention it only for completeness.
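With Celery you would typically schedule the call with `apply_async(countdown=...)` or `eta=...` and either revoke the task by id while it is still queued, or have the task re-check a cancellation flag in a shared store (e.g. Redis) right before running, so that any pod can both execute and cancel. Here is a minimal in-process sketch of that check-before-run pattern; the function names are illustrative and a plain dict stands in for Redis:

```python
import uuid

# A plain dict stands in for Redis here; in production you'd use a shared
# store so that ANY pod can set or read the cancellation flag.
cancel_flags = {}

def schedule():
    """Register a task and return its id (roughly what apply_async gives you)."""
    task_id = str(uuid.uuid4())
    cancel_flags[task_id] = False
    return task_id

def cancel(task_id):
    """Flag a scheduled task so it becomes a no-op when a worker picks it up."""
    cancel_flags[task_id] = True

def run_if_still_wanted(task_id, func, *args):
    """What the worker does at execution time: re-check the flag, then run."""
    if cancel_flags.get(task_id):
        return None  # the condition was met ahead of time; skip execution
    return func(*args)

task = schedule()
cancel(task)                                   # some condition was met early
assert run_if_still_wanted(task, lambda x: x * 2, 21) is None

task2 = schedule()
assert run_if_still_wanted(task2, lambda x: x * 2, 21) == 42
```

Because the flag lives in shared storage rather than in a pod's memory, it survives the scheduling pod being scaled down.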

Related

How can two microservices communicate every minute via a shared database? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I want to create two microservices in Python: one posts data into the database every minute, and the other processes the data once it's posted. What would be an ideal architecture for this, and how can it be done in Python?
This sounds a lot like something that should be solved using the CQRS pattern. One service is responsible for updating the database and the other one is responsible for utilizing the data. This way you are separating the update and read operations making it very scalable.
I'm a big fan of an event-driven architecture when it makes sense, and since you are talking about RabbitMQ in your first solution, then I would probably continue down that path.
I would use two different topic types. One for commands and one for events. Commands would be things like "update entity" or whatever makes sense in your case. The events are things that happened like "entity updated". Your first service should subscribe to the relevant commands and send out an event after the operation is complete. The second service would subscribe to that event and do the processing that it is supposed to do.
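The command/event split described above can be sketched in-process. In production the two topics would be RabbitMQ topic exchanges (or Kafka topics) and each handler would live in its own service; all names below are illustrative:

```python
from collections import defaultdict

# In-process stand-in for a broker with two topics: "commands" and "events".
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    for handler in subscribers[topic]:
        handler(message)

# Service 1: owns writes. Handles commands, then emits an event when done.
database = {}

def handle_update_entity(cmd):
    database[cmd["id"]] = cmd["data"]
    publish("events", {"type": "entity_updated", "id": cmd["id"]})

# Service 2: owns reads/processing. Reacts to events only, never to commands.
processed = []

def handle_entity_updated(evt):
    processed.append(database[evt["id"]])

subscribe("commands", handle_update_entity)
subscribe("events", handle_entity_updated)

publish("commands", {"type": "update_entity", "id": 1, "data": "hello"})
assert processed == ["hello"]
```

The point of the split is that the second service never touches the write path: it only learns about changes through "entity updated" events, so either side can be scaled or replaced independently.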
Also, a quick note on message queues: there are a lot of different ones out there. RabbitMQ is a solid but old choice, so you might benefit from one of the other options. I personally like Kafka a lot, but Redis, the managed queues offered by cloud providers like Azure and AWS, and many others are also worth a look.

How to increase speed of google cloud functions? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
What is the best way to see where the hiccups are in function speeds?
I know cloud functions deal with cold starts, however, as far as debugging goes, what is the best way to see where the lag is at?
Currently coding in python so any tips in that language would be greatly appreciated.
Please take a look at the tips & tricks guide that describes best practices for designing, implementing, testing and deploying Cloud Functions. To summarize, you should consider the following:
Write idempotent functions
Ensure HTTP functions send an HTTP response
Do not start background activities
Always delete temporary files
The performance section describes best practices for optimizing performance. In your case you should:
Use dependencies wisely
Use global variables to reuse objects in future invocations
Do lazy initialization of global variables
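The last two points above can be combined into one pattern: keep expensive objects in global scope so warm invocations reuse them, but build them lazily so cold starts of code paths that never need them stay fast. A minimal sketch, where `make_expensive_client` stands in for constructing a real DB or API client:

```python
# Global scope survives across warm invocations of the same function instance,
# so expensive objects should be created once and then reused.
_client = None
construction_count = []

def make_expensive_client():
    """Stand-in for building a costly client (DB connection, API client...)."""
    construction_count.append(1)   # count constructions to demonstrate reuse
    return object()

def get_client():
    """Lazily build the expensive client only on first use."""
    global _client
    if _client is None:
        _client = make_expensive_client()
    return _client

def handler(request):
    """The cloud function entry point: first call builds, later calls reuse."""
    get_client()
    return "ok"

handler("request-1")
handler("request-2")
assert len(construction_count) == 1   # constructed exactly once
```

On a cold start the first invocation pays the construction cost; every subsequent invocation on that instance gets the cached client for free.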
Also, here's an additional resource to understand cold boot time.

How to design architecture for scalable algo trading application? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I was using the Django framework for my web app. When I build a strategy in the UI it gets converted into a .py script file, and every 5 minutes (the interval can vary based on my candle interval) the file gets executed. I was using Celery beat to invoke the file execution, and the execution happens on the same machine using Celery.
The problem here is scalability: if I have more strategies, my CPU and memory usage go above 90%. How do I design the server architecture so that it can scale? Thank you.
When one Celery worker is no longer enough, you create more. This is quite easy if you are on a cloud platform where you can easily create more virtual machines.
If you can't create more, then you have to live with the current situation and try to spread the execution of your strategies across a longer period of time (throughout the day, I suppose).
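One common way to spread the load once you do have more machines is to route strategy tasks to dedicated queues, so each extra worker consumes only a subset. A Celery configuration sketch (all task, queue, and schedule names below are illustrative, not from the question):

```python
# Celery configuration sketch: beat fires the task every 5 minutes and the
# routing table sends it to a named queue that a specific worker consumes.
beat_schedule = {
    "run-strategy-42": {
        "task": "strategies.execute",
        "schedule": 300.0,                      # every 5 minutes, in seconds
        "args": ("strategy_42.py",),
        "options": {"queue": "strategies.batch1"},
    },
}

task_routes = {
    "strategies.execute": {"queue": "strategies.batch1"},
}
```

You would then start one worker per queue on each machine, e.g. `celery -A app worker -Q strategies.batch1` on one box and `-Q strategies.batch2` on another, so adding a machine only requires adding a queue and pointing new strategies at it.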

Running a python script using several computers (grid/cluster) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Is there a way to run my python script using several computers communicating through my web server? Willing to do a lot more research if someone can point me in the right direction, but I can't seem to find any useful info on this.
A simple script can't be automatically distributed; you need to break it into components that can run independently when given a part of the problem. These components run based on commands received from a library like PyMPI, or pulled from a queuing system like Amazon SQS (http://aws.amazon.com/sqs/).
This also means you can't rely on having shared local memory. Any data that needs to be exchanged must be sent as part of the command, stored on a shared file system, or placed in a database (AWS DynamoDB, Redis, etc.).
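The shape of such a component is the same whether the queue is local or remote: a worker loops, pulls a self-contained message, and pushes its result back. Here is a single-machine sketch using `queue.Queue` and threads; in a real cluster the two queues would be something like SQS or Redis and each worker would run on its own machine:

```python
import queue
import threading

# Stand-ins for remote queues (e.g. SQS): every piece of data a worker needs
# travels inside the message itself, never via shared local memory.
tasks = queue.Queue()
results = queue.Queue()

def worker():
    """One independent component; in a real cluster this is its own process
    on its own machine, pulling from the shared queue."""
    while True:
        msg = tasks.get()
        if msg is None:                      # sentinel: no more work
            break
        results.put(msg["value"] ** 2)       # process the self-contained command

# Split the problem into independent messages and fan them out to two workers.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for v in range(5):
    tasks.put({"value": v})
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()

out = sorted(results.get() for _ in range(5))
assert out == [0, 1, 4, 9, 16]
```

Note that the workers share nothing except the queues; that is exactly the property that lets you move them onto separate machines later.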
There are a large number of links to more resources available at https://wiki.python.org/moin/ParallelProcessing under the Cluster Computing heading.

Python: how to generate a time-limited license using the time module [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I'm searching for a way to generate a limited-time license for a .so file. When a user starts the program, it has to check the license date before the program runs.
The problem is this: I tried a couple of solutions, one of them being Python's time.ctime (to check the time and see whether it really falls within the license period), but it returns the time of the machine, so whenever a user wants to use the software without a license he'll just change the machine's time.
I hope the idea is clear enough. Any better ideas? Please let me know if you want more explanation.
Regardless of whether this hassle is really worth the effort, you can check the access times of ubiquitous files (e.g. /etc/passwd on Linux) and compare them to the current date. If you see that the files have been accessed/modified in the future, you know that there is a problem. Again, at least on *nix, a user may substitute the system's stat so that it "massages" the info you are looking at.
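A sketch of that check, assuming a small tolerance for normal clock skew (the function name and 60-second margin are my own choices, not a standard API):

```python
import os
import tempfile
import time

def clock_rolled_back(paths, now=None):
    """Return True if any of the given files was modified 'in the future',
    which suggests the machine clock has been set backwards."""
    now = time.time() if now is None else now
    for path in paths:
        try:
            if os.stat(path).st_mtime > now + 60:   # 60 s tolerance for skew
                return True
        except OSError:
            continue                                # missing/unreadable: skip
    return False

# Demo with a temporary file whose mtime is pushed one day into the future.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
future = time.time() + 86400
os.utime(path, (future, future))
assert clock_rolled_back([path]) is True
os.remove(path)
```

In a real check you would pass paths like /etc/passwd or your own application's log files, and remember the caveat above: a determined user can forge these timestamps too.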
You could get the time from an external source via Internet: Python Getting date online?
Of course, this will only work if the user doesn't block your program from accessing the internet. And what should your program do when it can't access the internet? Refuse to run? I doubt that this is a good idea.
Nearly every standard function will return the machine time that can be adjusted by the user.
One possibility is to call a web service that returns the "correct" time. But this is only possible if you can assume internet access.
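If you do take the web-service route, it helps to keep the license check separate from the time fetch, so the untrusted local clock never enters the comparison. A minimal sketch, where the trusted timestamp is passed in (in practice it would come from an NTP query or a time API over the internet):

```python
from datetime import datetime, timezone

def license_valid(expiry, trusted_now):
    """Check the license against a timestamp from a trusted external source,
    never against the local clock. `trusted_now` would come from a time
    web service or NTP; here it is passed in directly for illustration."""
    return trusted_now <= expiry

# Hypothetical license expiring at the start of 2030 (UTC).
expiry = datetime(2030, 1, 1, tzinfo=timezone.utc)

# Pretend these timestamps came back from the trusted service:
assert license_valid(expiry, datetime(2025, 6, 1, tzinfo=timezone.utc)) is True
assert license_valid(expiry, datetime(2031, 6, 1, tzinfo=timezone.utc)) is False
```

The design question from the answer above still stands: you must decide what `license_valid` should do when the trusted source is unreachable, and refusing to run on every network hiccup is rarely acceptable.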
And maybe you should ask yourself whether that hassle is really worth the effort.
