Python System Design. Master Slave? Or Multiple Masters?

Originally this was a small project: just 150 accounts, for which I wrote a Selenium program in Python to perform a small task on each of them. It ran on one computer and took about 5 hours. Now, however, I am looking to scale to 1000 accounts. For obvious reasons I do not want to do this on one computer; the task needs to run once per day and would take roughly 30-35 hours to complete on a single machine. I want to use more than one machine, but I also want the option to easily scale to three, four, or more.
I have moved the data for all the accounts into an Amazon cloud database, and my Python program can easily connect to it. However, as I mentioned earlier, I want this project to be easily scalable. I do not want to hardcode values, i.e. have one computer do accounts 1-500 and the other do 501-1000 (what if I added 500 more accounts and 2 more machines? I would want each machine to do 1500/4). I'm thinking of a master-slave approach, where each machine runs a program that can be called with an array of accounts, and a master program on my machine sends out, once per 24 hours, a command with the accounts each machine is supposed to work on.
Then I want the data returned to me: when each slave is finished, the master program combines the data sent back by every slave and updates the table accordingly. Or each slave could update the table independently, but I am not sure that is possible because of table locks (if anyone could comment on this, it would be helpful as well).
Thanks for reading!
Edit: If you think this is too broad, I'm not looking for an exact answer. I'm just trying to find someone who has done anything like this before. Even just naming a technology or method I can research would help me a lot.

I've done a similar thing before and ended up using a master-slave design.
I had a master with the database of "jobs" and the slaves queried it to get their tasks.
In my case the process was something like this:
Slave queries the master for jobs
Master sends 50 jobs and changes their status in the DB to the slave's name
Slave finishes the jobs and tells the master
Master changes the status in the DB to complete and sends new tasks
Repeat until the queue is completed
This way I could add more slaves as the job queue grew, and they could have different performance; some of my slaves did 3 times more work than the slowest ones, depending on internet connection and page loading times.
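
For illustration, here is a minimal slave-side sketch of that claim/work/report loop. It assumes a MySQL-style jobs table with columns (account_id, status, worker) and a process_account() function standing in for the actual Selenium task; both names are made up.

# slave.py - polls the shared jobs table, claims a batch, works it, repeats.
# The jobs table (account_id, status, worker) and process_account() are
# hypothetical stand-ins for your own schema and Selenium task code.
import socket
import time

import mysql.connector  # pip install mysql-connector-python

WORKER = socket.gethostname()
BATCH = 50

def process_account(account_id):
    """Placeholder for the real Selenium work on one account."""

def claim_batch(conn):
    cur = conn.cursor()
    # Atomically mark a batch as ours so no other slave picks the same rows.
    cur.execute(
        "UPDATE jobs SET status = 'claimed', worker = %s "
        "WHERE status = 'pending' LIMIT %s",
        (WORKER, BATCH),
    )
    conn.commit()
    cur.execute(
        "SELECT account_id FROM jobs WHERE status = 'claimed' AND worker = %s",
        (WORKER,),
    )
    return [row[0] for row in cur.fetchall()]

def main():
    conn = mysql.connector.connect(host="db-host", user="user",
                                   password="password", database="accounts")
    while True:
        batch = claim_batch(conn)
        if not batch:
            time.sleep(300)      # queue drained; wait for the next day's jobs
            continue
        for account_id in batch:
            process_account(account_id)
            cur = conn.cursor()
            cur.execute("UPDATE jobs SET status = 'done' WHERE account_id = %s",
                        (account_id,))
            conn.commit()

if __name__ == "__main__":
    main()

Because each slave claims its own batch, adding a third or fourth machine is just a matter of starting the same script there; nothing about the account ranges is hardcoded.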

Related

MongoDB Python parallel operation for a long time

I am developing an automation tool that is supposed to upgrade IP network devices.
I developed 2 totally separate scripts for the sake of simplicity (I am not an expert developer): one for the core and aggregation nodes, and one for the access nodes.
The tool executes a software upgrade on the routers and verifies the result by running a set of post-check commands. The device role implies the "size" of the router: bigger routers take much longer to finish the upgrade. Although the smaller ones come back up much earlier, their post-checks cannot start until the bigger ones finish upgrading, because the devices are connected to each other.
I want to implement reliable signaling between the 2 scripts. That is, the slower script (core devices) flips a switch when the core devices are up, while the other script keeps checking this value and then starts the checks for the access devices.
Both scripts run 200+ concurrent sessions; moreover, each access device (session) needs individual signaling, so all the sessions keep checking the same value in the DB.
First I used the keyring library, but noticed that the keys sometimes disappear. Now I am using a txt file to hold the signal values, which looks pretty amateur, so I would like to use MongoDB.
Would it cause any performance issues or unexpected exception?
The script will be running for 90+ minutes. Is it OK to connect to the DB once at the beginning of the script, set the signal to False, and then, 20-30 minutes later, keep checking it for an additional 20 minutes? Or is it advisable to establish a new connection for each and every parallel session that reads the value?
The server runs on the same VM as the script. What exceptions should I expect?
Thank you!
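
For what it's worth, a minimal sketch of the flag-polling idea with pymongo, assuming a single shared MongoClient per script (pymongo keeps its own connection pool, so the parallel sessions can reuse it instead of reconnecting); the database, collection and field names here are made up.

# signal_flag.py - signaling between the two scripts via one MongoDB document.
# The names upgrade_tool / upgrade_signals / core_done are hypothetical.
import time

from pymongo import MongoClient

# One client per process; it is thread-safe and pools connections internally.
client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=5000)
signals = client["upgrade_tool"]["upgrade_signals"]

def reset_signal():
    """Called once by the core script at start-up."""
    signals.update_one({"_id": "core_done"}, {"$set": {"value": False}}, upsert=True)

def set_signal():
    """Called by the core script when all core devices are back up."""
    signals.update_one({"_id": "core_done"}, {"$set": {"value": True}}, upsert=True)

def wait_for_signal(poll_seconds=30, timeout_minutes=20):
    """Called by each access-device session; returns True once the flag flips."""
    deadline = time.time() + timeout_minutes * 60
    while time.time() < deadline:
        doc = signals.find_one({"_id": "core_done"})
        if doc and doc.get("value"):
            return True
        time.sleep(poll_seconds)
    return False

A couple of hundred sessions polling one tiny document every 30 seconds should be negligible load for a local mongod; the main exceptions to handle are pymongo's ConnectionFailure / ServerSelectionTimeoutError if the server goes away.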

How to serve a continuously running python script to multiple users (Social Media Bot)

I hope you are all having an amazing day. I am working on a project in Python. The script's job is to automate actions and tasks on a social media platform via HTTP requests. As of now, one instance of this script accesses one user account. Now I want to create a website where users can register, enter their credentials for the social media platform, and have an instance of this script run the automation tasks for them. I've thought about creating a new process of this script every time a new user registers, but this doesn't seem efficient. I also thought about using threads, but that doesn't seem reasonable either, especially if there are 10,000 users registering. What is the best way to do this? How can I scale? Thank you guys so much in advance.
What is the nature of the tasks that you're running?
Are the tasks simply jobs that run at a scheduled time of day, or every X minutes? If so, you could have your web application register cronjobs or similar, and each cronjob can spawn an instance of your script, which I assume is short-running, to carry out the automated task one user at a time. If the exact timing of the script doesn't matter, you could scatter these runs throughout the day, on separate machines if need be.
The above approach probably won't scale well to 10,000 users, and you will need something more robust, especially if the script needs to run continuously (e.g. you are polling some data from Facebook and need to react to its changes). If there is a lot of communication per user, you could consider a producer-consumer model, where a bunch of producer scripts (which run continuously) push work requests into a global queue that a bunch of consumer scripts poll and carry out. You could also load-balance such consumers and producers across multiple machines.
Of course, you would want to squeeze out some parallelism from the extra cores of your machines by carrying out this work in multiple threads or processes. You can do this quite easily in Python with the multiprocessing module.
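
A minimal single-machine sketch of that producer-consumer shape with multiprocessing; handle_task() and the task payloads below are placeholders for the real per-user automation.

# producer_consumer.py - one producer feeding a pool of consumer processes.
import multiprocessing as mp

def handle_task(task):
    """Stand-in for the real HTTP automation for one user."""
    print("processing", task)

def producer(queue, users):
    # In a real system this would poll the platform and enqueue work as it appears.
    for user in users:
        queue.put({"user": user, "action": "sync"})
    for _ in range(mp.cpu_count()):
        queue.put(None)                 # one poison pill per consumer

def consumer(queue):
    while True:
        task = queue.get()
        if task is None:                # poison pill: shut down cleanly
            break
        handle_task(task)

if __name__ == "__main__":
    queue = mp.Queue()
    consumers = [mp.Process(target=consumer, args=(queue,))
                 for _ in range(mp.cpu_count())]
    for c in consumers:
        c.start()
    producer(queue, users=["alice", "bob", "carol"])
    for c in consumers:
        c.join()

To spread this across machines you would swap the in-process Queue for a broker such as RabbitMQ or Redis (for example via Celery), keeping the producer and consumer roles the same.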

How to ensure several Python processes access the database one by one?

I have a lot of scripts running: scrapers, checkers, cleaners, etc. They have some things in common:
they run forever;
they have no time constraint to finish their job;
they all access the same MySQL DB, writing and reading.
Together they are starting to slow down the website, which runs on the same system but depends on these scripts.
I can use queues with Kombu to serialize all the writes.
But do you know a way to do the same with reads?
E.g.: if one script needs to read from the DB, its request is sent to a blocking queue, and it resumes when it gets the answer? That way everybody sends requests to one process, and that process is the only one talking to the DB, making one request at a time.
I have no idea how to do this.
Of course, in the end I may have to add more servers to the mix, but before that, is there something I can do at the software level ?
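
For reference, a minimal sketch of that "only one process talks to the DB" idea using multiprocessing queues; run_query(), the table name and the worker names are placeholders, and a real version would use Kombu or similar rather than in-process queues.

# db_gateway.py - a single gateway process owns the DB; the others send it requests.
import multiprocessing as mp

def run_query(sql):
    """Stand-in for executing `sql` against the real MySQL connection."""
    return "rows for: " + sql

def gateway(requests):
    # The only process that ever touches the DB: one request at a time.
    while True:
        item = requests.get()
        if item is None:
            break
        sql, reply = item
        reply.put(run_query(sql))

def worker(name, requests, reply):
    # A scraper/checker/cleaner: sends its read through the gateway and blocks.
    requests.put(("SELECT * FROM pages WHERE owner = '%s'" % name, reply))
    rows = reply.get()                  # resumes once the gateway has answered
    print(name, "got", rows)

if __name__ == "__main__":
    manager = mp.Manager()
    requests = manager.Queue()
    gw = mp.Process(target=gateway, args=(requests,))
    gw.start()

    workers = []
    for name in ("scraper", "checker", "cleaner"):
        reply = manager.Queue()         # one private reply queue per worker
        p = mp.Process(target=worker, args=(name, requests, reply))
        p.start()
        workers.append(p)

    for p in workers:
        p.join()
    requests.put(None)                  # tell the gateway to shut down
    gw.join()

Each worker gets its own reply queue so answers never get mixed up between processes.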
You could use a connection pooler and make the connections from the scripts go through it. It would limit the number of real connections hitting your DB while being transparent to your scripts (their connections would be held in a "wait" state until a real connection is freed).
I don't know what DB you use, but for Postgres I'm using PgBouncer for similar reasons, see http://pgfoundry.org/projects/pgbouncer/
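
If the scripts happen to use SQLAlchemy, a capped client-side pool gives a similar throttling effect without a separate pooler; a sketch, with a made-up DSN:

# Cap how many real MySQL connections this script can hold open at once.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://user:password@localhost/mydb",  # placeholder DSN
    pool_size=2,        # at most 2 live connections from this script
    max_overflow=0,     # never open extra connections beyond the pool
    pool_timeout=30,    # wait up to 30 s for a free connection before raising
)

with engine.connect() as conn:
    rows = conn.execute(text("SELECT 1")).fetchall()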
You say that your dataset is <1GB, so the problem is CPU-bound.
Now start analyzing what is eating CPU cycles:
Which queries are really slow and executed often? MySQL can log those queries.
What about the slow queries? Can they be accelerated by using an index?
Are there unused indices? Drop them!
Nothing helps? Can you solve it by denormalizing/precomputing stuff?
You could create a function that each process must call in order to talk to the DB, and rewrite the scripts so they go through that function rather than talking to the DB directly. Within that function, you could take a scope-based lock so that only one process talks to the DB at a time.
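
A minimal sketch of that idea using an OS-level file lock, since the scripts are independent processes; the lock path and run_query() are placeholders, and fcntl is POSIX-only.

# db_guard.py - serialize DB access across unrelated processes with a file lock.
import fcntl
from contextlib import contextmanager

LOCK_PATH = "/tmp/db.lock"   # placeholder path, shared by all the scripts

@contextmanager
def db_lock():
    with open(LOCK_PATH, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)   # blocks until no other process holds it
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

def run_query(sql):
    """Stand-in for the real MySQL call."""
    return []

def query_db(sql):
    """The one function every script calls instead of hitting MySQL directly."""
    with db_lock():                      # scope-based: released on exit or exception
        return run_query(sql)

Keep in mind this serializes everything, including reads MySQL could have served concurrently, so it trades throughput for simplicity.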

Splitting a Django project

I have a Django project with various apps that are completely independent. I'd like to run each of them in its own process, as some of them spawn background threads to periodically precalculate some data, and now they are competing for the CPU (the machine has loads of cores, but you know, the GIL and such...).
So, is there an easy way to automatically split the project into different ones, or at least to make each app live in its own process?
You can always have different settings files, but that would be like having multiple projects and even multiple endpoints. With some effort you could configure a reverse proxy to forward to the right Django server, based on the request's path and so on, but I don't think that's what you want and it would be an ugly solution to your problem.
The solution to this is to move the heavy processing to a jobs queue. A lot of people and projects prefer Celery for this.
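
A minimal Celery sketch of moving one of those periodic precalculations out of the web process; the broker URL, task name and schedule are made up.

# tasks.py - the heavy work becomes a Celery task run by separate worker processes.
from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")  # placeholder broker

@app.task
def precalculate(app_name):
    """Stand-in for the data an app used to precalculate in a background thread."""

# Run it every 15 minutes with celery beat instead of an in-process thread.
app.conf.beat_schedule = {
    "precalc-reports": {
        "task": "tasks.precalculate",
        "schedule": 15 * 60,
        "args": ("reports",),
    },
}

The workers run in their own OS processes (e.g. celery -A tasks worker -B), so the apps no longer fight over one interpreter's GIL.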
If that seems like overkill for some reason, you can always implement your own based on simple cron jobs. You can take a look at my small project that does this.
The simplest of the simple is probably to write a custom management command that observes a given model (database table) for new entries and processes them. The model is written to by e.g. a Django view, and the management command is launched periodically from cron (e.g. every 5 minutes).
Example: a user registers on the site, but account creation is an expensive operation (allocating some space, pinging remote services, etc.). So you just write a new record to an AccountRequest table (AccountRequest.objects.create(...)). Then cron periodically launches your management script (./manage.py account_creator), which checks for new AccountRequests (AccountRequest.objects.filter(unprocessed=True)), does its job and marks those requests as processed.
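
A minimal sketch of such a management command for the AccountRequest example; the unprocessed field and provision_account() are assumptions.

# myapp/management/commands/account_creator.py
# Launched from cron, e.g.: */5 * * * * /path/to/manage.py account_creator
from django.core.management.base import BaseCommand

from myapp.models import AccountRequest   # assumed model with an `unprocessed` flag

def provision_account(request):
    """Stand-in for the expensive work (allocate space, ping remote services...)."""

class Command(BaseCommand):
    help = "Process pending account requests created by the registration view."

    def handle(self, *args, **options):
        for request in AccountRequest.objects.filter(unprocessed=True):
            provision_account(request)
            request.unprocessed = False
            request.save(update_fields=["unprocessed"])
            self.stdout.write("processed request %s" % request.pk)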

Run a repeating task for a web app

This seems like a simple question, but I am having trouble finding the answer.
I am making a web app which would require the constant running of a task.
I'll use sites like Pingdom or Twitterfeed as an analogy. As you may know, Pingdom checks uptime, so it is constantly checking websites to see if they are up, and Twitterfeed checks RSS feeds to see if they've changed and then tweets the changes. I too need to run a simple script that cycles through URLs in a database and performs an action on them.
My question is: how should I implement this? I am familiar with cron, currently using it to do my server backups. Would this be the way to go?
I know how to make a Python script that runs indefinitely, starting back at the beginning with the first URL in the database when it's done. Should I just run that on the server? How will I know it is always running and hasn't crashed or something?
I hope this question makes sense and I hope I am not repeating someone else or anything.
Thank you,
Sam
Edit: To be clear, I need the task to run constantly. As in, check URL 1 in the database, check URL 2 in the database, check URL 3 and, when it reaches the last one, go right back to the beginning. Thanks!
If you need a repeatable run of the task that can be launched from the command line, that's what cron is ideal for.
I don't see any downsides to this approach.
Update:
Okay, I see the issue somewhat differently now. I see several solutions:
a) run the cron task at set intervals, letting it process one chunk of the data per run and pick up where it left off on the next run; use PIDs/the database/semaphores to avoid parallel processes (a PID-file sketch follows this list);
b) update the processes that insert/update data in the database, so the information is processed as soon as it is inserted/updated;
c) write a daemon process which resides in memory and checks the data in real time.
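
A minimal sketch of option a): a cron-launched run that refuses to start while a previous run is still going, using a PID file. The paths, URL list and check_url() are placeholders.

# check_urls.py - launched by cron, e.g.: */5 * * * * /usr/bin/python /path/check_urls.py
import os
import sys

PID_FILE = "/tmp/check_urls.pid"   # placeholder path

def already_running():
    if not os.path.exists(PID_FILE):
        return False
    with open(PID_FILE) as fh:
        content = fh.read().strip()
    if not content:
        return False
    try:
        os.kill(int(content), 0)   # signal 0 only tests whether the process exists
        return True
    except OSError:
        return False               # stale PID file left behind by a crashed run

def check_url(url):
    """Stand-in for the real per-URL work."""

def main():
    if already_running():
        sys.exit(0)                # previous run still busy; cron will try again later
    with open(PID_FILE, "w") as fh:
        fh.write(str(os.getpid()))
    try:
        for url in ("http://example.com", "http://example.org"):  # your DB query here
            check_url(url)
    finally:
        os.remove(PID_FILE)

if __name__ == "__main__":
    main()

Because cron restarts it on schedule, a crash in one run only delays the work until the next interval, which addresses the "how will I know it hasn't crashed" concern.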
cron would definitely be a way to go with this, as well as any other task scheduler you may prefer.
The main point is found in the title to your question:
Run a repeating task for a web app
The background task and the web application should be kept separate. They can share code, they can share access to a database, but they should be separate and discrete application contexts. (Consider them as separate UIs accessing the same back-end logic.)
The main reason for this is that web applications and background processes are architecturally very different and aren't meant to be mixed. Consider the structure of a web application held within a web server (Apache, IIS, etc.). When is the application "running"? When is it "on"? It's not really a running task. It's a service waiting for input (requests) to handle, generating output (responses), and then going back to waiting.
Web applications are for responding to requests. Scheduled tasks or daemon jobs are for running repeated processes in the background. Keeping the two separate will make your management of the two a lot easier.
