How to design architecture for a scalable algo trading application? [closed]

I am using the Django (Python) framework for my web app. When I build a strategy in the UI it gets converted into a .py script file, and every 5 minutes (variable, based on my candle interval) the file gets executed. I use Celery Beat to trigger the execution, and the execution happens on the same machine via a Celery worker.
The problem is scalability: as I add more strategies, my CPU and memory usage go above 90%. How do I design the server architecture so that it can scale? Thank you.

When one Celery worker is no longer enough, you create more. This is quite easy if you are on a cloud platform where you can spin up more virtual machines.
If you can't create more, then you have to live with the current situation and spread the execution of your strategies across a longer period of time (throughout the day, I suppose).
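For example, here is a minimal sketch of pushing the strategy runs through a shared broker so that any number of worker machines can pick them up. The Redis URL, the 5-minute schedule, and the strategy script name are illustrative assumptions, not details from the question:

    # celeryconfig.py -- a minimal sketch; the broker URL and the
    # 5-minute schedule mirror the candle interval in the question.
    import subprocess
    from celery import Celery
    from celery.schedules import crontab

    app = Celery("strategies", broker="redis://localhost:6379/0")

    app.conf.beat_schedule = {
        "run-strategies-every-5-min": {
            "task": "tasks.run_strategy",
            "schedule": crontab(minute="*/5"),
            "args": ("strategy_42.py",),  # hypothetical generated script
        },
    }

    @app.task(name="tasks.run_strategy")
    def run_strategy(path):
        # Run the generated strategy script in a subprocess so a heavy or
        # misbehaving strategy cannot take down the worker process itself.
        subprocess.run(["python", path], check=True, timeout=240)

You then run exactly one beat process (celery -A celeryconfig beat) and start celery -A celeryconfig worker on as many machines as you need; all workers consume from the same broker, so adding a VM adds capacity without code changes.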

Related

adding index ability to a MySQL database (I'm not a programmer, take it easy) [closed]

I gave my project to a novice programmer, and I've been told that "he has not used database indexing" and that this is using massive CPU on my server.
This is correct: my MySQL CPU consumption is massive for such a small project, with a maximum of 300 requests/s.
Is it possible to somehow convert the database to a more efficient version?
If so, could you help me with this?
(The project is a Python Telegram bot, and I already know the basics of Python.)
I used to load everything from the database into a Python variable, process things much faster and with less CPU usage, and update the variable every time the database changed.
But I need more efficiency, and this approach is very hard to apply to all parts of the bot (I've only done it for a few of the bot's commands).
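For reference, a minimal sketch of what adding an index looks like from Python. The connection details and the users/telegram_id table and column are hypothetical placeholders; index whichever columns your bot actually filters on:

    import pymysql

    # A minimal sketch: the credentials and the table/column names below
    # (users, telegram_id) are hypothetical placeholders.
    conn = pymysql.connect(host="localhost", user="bot",
                           password="secret", database="botdb")
    with conn.cursor() as cur:
        # An index on the column used in WHERE clauses lets MySQL find
        # rows directly instead of scanning the whole table per request.
        cur.execute("CREATE INDEX idx_users_telegram_id ON users (telegram_id)")
        # EXPLAIN shows whether the index is actually being used.
        cur.execute("EXPLAIN SELECT * FROM users WHERE telegram_id = %s", (12345,))
        print(cur.fetchall())
    conn.close()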

Delay a Python function call on a different pod [closed]

I have a piece of software deployed to Kubernetes, and I have a scenario where I want a function to be called at a later point in time, but I am not sure my current pod will be the one executing it (it may be scaled down, for example).
So I need a mechanism that will let me schedule a function for later, on a pod of my software that may or may not be the one that scheduled it, and also a way to cancel the execution if some condition is met ahead of time.
Also, I need this to work for thousands of such calls at any given point in time; this is very fast software using Twisted Python that works on millions of tasks a day. But given the scaling up and down, I cannot just put the call on the reactor for later.
Almost anything is fine: a well-known module, an external Redis/DB, etc.
So - I need this community's help...
Thanks in advance!
You are, roughly speaking, describing a worker queue system, with Celery being the most common one in Python. With RabbitMQ as the broker it can easily scale to whatever traffic you throw at it. Also check out Dask, though I think it is a poor fit here, so I mention it only for completeness.
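A minimal sketch of the scheduling-and-cancelling part with Celery. The Redis URLs, the task body, and the payload format are illustrative assumptions:

    # tasks.py -- a minimal sketch; broker/backend URLs and the payload
    # format are illustrative assumptions.
    from celery import Celery

    app = Celery("scheduler",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    @app.task
    def delayed_call(payload):
        # Re-check the skip condition at execution time as a safety net,
        # in case the revoke did not reach every worker.
        if payload.get("cancelled"):
            return "skipped"
        return "processed %s" % payload["id"]

    if __name__ == "__main__":
        # Schedule the call for one hour from now; whichever worker pod
        # is alive then will execute it, not necessarily this one.
        result = delayed_call.apply_async(args=[{"id": 42}], countdown=3600)
        # If the condition is met ahead of time, revoke the task; workers
        # that receive the broadcast discard it before it starts.
        result.revoke()

Since the queue lives in the broker rather than in any pod, scale-downs don't lose scheduled work. One caveat: very long countdowns keep messages held by workers, so for long delays you may prefer persisting the deadline in Redis and enqueueing the task closer to its due time.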

Does anybody know how to run a Python application in parallel? [closed]

I would like to use Python for my project, but I need to distribute the computation across a set of resources.
You can try pyCOMPSs, which is the Python version of COMP Superscalar.
More info here
With this programming model you define which methods are candidates for remote execution by declaring the direction of the method parameters (IN, OUT, or INOUT). The main code is then written in a sequential fashion, and the runtime analyses the dependencies between these methods, detecting which can be executed in parallel. The runtime also spawns the execution transparently on the remote hosts and handles all data transfers.
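A minimal sketch of what this looks like, assuming a working pyCOMPSs installation; the task itself is a toy example:

    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=int)
    def square(x):
        # Candidate for remote execution; x is an IN parameter by default.
        return x * x

    if __name__ == "__main__":
        # The main code is written sequentially; the runtime detects that
        # these calls are independent and runs them in parallel on the
        # available resources, handling data transfers transparently.
        futures = [square(i) for i in range(8)]
        results = compss_wait_on(futures)  # synchronize and fetch values
        print(results)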

How to run a Python web-crawler effectively [closed]

I have a Python web-crawler that gets info and puts it into SQL. I also have a PHP page that reads this info from SQL and presents it. The problem is: for the crawler to work, my computer has to stay on 24/7. I have a simple home computer, so that's a problem. Is there a different way to run the web-crawler, or do I have to run it on my PC?
If you are monitoring pages that are updated constantly, then yes, you will need to run it on some computer that is on 24/7. You can either use a cron job or a continuous loop (with some monitoring). If you don't want to run it on your home machine, I would recommend getting a free AWS account. An EC2 micro instance will let you run your Python script, and you can store the information in an RDS database instance (or in S3, if flat files are enough).
Here is a link to AWS: https://aws.amazon.com/
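If you go the continuous-loop route, a minimal sketch; the crawl body and the 5-minute interval are placeholders:

    import time
    import logging

    logging.basicConfig(level=logging.INFO)

    def crawl():
        # Placeholder for the actual crawling and SQL-insert logic.
        logging.info("crawling...")

    if __name__ == "__main__":
        while True:
            try:
                crawl()
            except Exception:
                # Log and keep going so one bad page cannot kill the crawler.
                logging.exception("crawl failed")
            time.sleep(300)  # wait 5 minutes between runs

With cron instead, you would drop the loop and add a crontab entry such as */5 * * * * python /path/to/crawler.py.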

Running a Python script using several computers (grid/cluster) [closed]

Is there a way to run my Python script using several computers communicating through my web server? I'm willing to do a lot more research if someone can point me in the right direction, but I can't seem to find any useful info on this.
A simple script can't be automatically distributed; you need to break it into components that can run independently when given a part of the problem. These components run based on commands received via a library like pyMPI, or pulled from a queuing system like http://aws.amazon.com/sqs/
This also means you can't rely on shared local memory. Any data that needs to be exchanged must be passed as part of the command, stored on a shared file system, or placed in a database such as AWS DynamoDB or Redis.
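A minimal sketch of such a queue-driven worker using boto3 and SQS; the region, queue URL, and message format are hypothetical:

    import json
    import boto3

    # A minimal sketch: the region, queue URL, and message format are
    # hypothetical placeholders.
    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"

    def handle(command):
        # Placeholder: process one independent piece of the computation.
        print("processing", command)

    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            handle(json.loads(msg["Body"]))
            # Delete only after successful processing so that a crashed
            # worker's message becomes visible again and is retried.
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])

Running the same script on several machines gives you the distribution: each worker pulls the next available command from the shared queue.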
There are a large number of links to more resources available at https://wiki.python.org/moin/ParallelProcessing under the Cluster Computing heading.
