So I am trying to run, in parallel, a GUI that accesses certain files and a program that creates those files. Once the files are created, they won't be touched by the latter anymore.
My question is, how do I best implement this? Do I want multiprocessing? If so, how do I multiprocess just two functions?
I looked into it, and the multiprocessing module seems to be what I want, but I haven't quite understood how to make it run two pools, or how to run two specific functions with it. It just seems to take in one function and split the work up arbitrarily.
My idea would be to just call two .bat files in parallel, each starting its own Python process. But that's hardly an elegant solution.
So, in short: how do I start two specific, separate functions with multiprocessing? Or is there some other, more elegant solution, like os.fork or something like that, that works the way I want?
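For two specific functions you don't need a Pool at all: multiprocessing.Process takes one target function per process. A minimal sketch, where run_gui and create_files are hypothetical placeholders for the two tasks in the question:

```python
from multiprocessing import Process
import time

# Hypothetical placeholder: start the GUI and read the finished files.
def run_gui():
    time.sleep(0.5)  # stand-in for the GUI event loop

# Hypothetical placeholder: create the files; once written, this
# process never touches them again.
def create_files():
    time.sleep(0.5)  # stand-in for the file-creation work

if __name__ == "__main__":
    gui = Process(target=run_gui)
    creator = Process(target=create_files)
    gui.start()       # both processes now run in parallel
    creator.start()
    creator.join()    # wait for file creation to finish
    gui.join()        # wait for the GUI to be closed
```

The `if __name__ == "__main__":` guard matters on Windows, where multiprocessing re-imports the main module in each child process.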
I have been searching for an answer for a while. The answers suggest using ipyparallel, but there are no clear instructions on how to apply it to two cells in general. Most of the examples just compute some value with some functions in a distributed way; at least to me, they don't make it clear how to solve this problem. I wish someone could provide some code or instructions for running two independent cells in general on Colab. Or, if there are other methods, that is also fine, as long as it works. Thank you.
It's not exactly straightforward, because Python is synchronous by default. Maybe encapsulating your cells in functions and using asyncio to execute the two functions asynchronously will do the job. Or, if the cells involve something like heavy processing, the threading or multiprocessing modules may be the way to go.
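As a minimal sketch of the function-wrapping idea with threading, where cell_one and cell_two are hypothetical stand-ins for whatever your two cells actually do:

```python
import threading
import time

# Hypothetical stand-ins for the work done by each Colab cell.
def cell_one():
    time.sleep(1)  # pretend this cell takes a second
    print("cell one finished")

def cell_two():
    time.sleep(1)  # pretend this cell takes a second too
    print("cell two finished")

t1 = threading.Thread(target=cell_one)
t2 = threading.Thread(target=cell_two)
t1.start(); t2.start()   # both "cells" now run concurrently
t1.join(); t2.join()     # total wall time ~1s instead of ~2s
```

Note that threads share one interpreter, so this helps when the cells wait on I/O; for CPU-heavy cells, swapping threading.Thread for multiprocessing.Process sidesteps the GIL.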
I have written a single, monolithic project (let's call it Monolith.py) over the course of about 3 years in Python. During that time I have created a lot of large and small utilities that would be useful in other projects.
Here are a couple simple examples:
Color.py: A small script that lets me easily colorize text. I use this a lot with print().
OExplorer.py: A larger script, a CLI object explorer that I can use to casually browse classes and objects interactively.
Stuff like that. There's probably 20 of them, mostly modules. They are all under constant development.
My question is: what is the best way to use these in another project while keeping everything up to date?
I am using Visual Studio Code, and I use it to handle all my git work. I'm guessing there's a way to nest git repositories somehow? I'm worried about screwing up my main repo.
I also do not want to separate out these projects into their own vscode workspace. It is better if they are left where they are. I also do not want to pull in all the code from the monolith for another project, that would be silly.
I think there is a simple git solution here. I would appreciate it if someone could give me some direction and hand-hold a bit; git is clearly not my strong suit.
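The standard git answer for nesting one repo inside another is a submodule: the utilities get their own repository, and each project mounts it at a fixed path. A minimal, self-contained sketch using throwaway stand-in repos (in practice "utils" would be your real utilities repo, likely hosted remotely, and the paths are just examples):

```shell
set -e
work=$(mktemp -d)   # stand-in for wherever your repos live
cd "$work"

# Stand-in for the repo you would split the utilities into:
git init -q utils
git -C utils -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "utils: initial commit"

# Another project mounts it as a submodule under libs/utils.
# (-c protocol.file.allow=always is only needed because this demo
# clones from a local path; a normal https URL does not need it.)
git init -q project
cd project
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "project: initial commit"
git -c protocol.file.allow=always submodule add -q "$work/utils" libs/utils
git -c user.email=you@example.com -c user.name=you \
    commit -q -m "Add utils as a submodule"

# Later, to pull in the latest utils commits and record the bump:
git -c protocol.file.allow=always submodule update --remote libs/utils
```

The parent repo records only a pointer to a specific utils commit, so other projects can pin different versions, and `git clone --recurse-submodules` fetches everything in one go.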
I am just wondering, is there any way to process multiple videos in one go, maybe using a cluster? Specifically using Python? For example, I have 50+ videos in a folder and have to analyze each one for movement-related activity. Assume I have code written in Python and have to run that one particular script on each video. What I want is, instead of analyzing the videos one by one (i.e., in a loop), to analyze them in parallel. Is there any way I can implement this?
You can do this with multiprocessing or threading. The details can be a bit involved, and since you didn't give any in your question, pointing you at the links above is about all the help I can give.
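As a minimal sketch of the multiprocessing route: map your per-video analysis function over the folder with a Pool. Here analyze is a hypothetical placeholder for your real movement-detection code, and the "videos" folder layout is assumed:

```python
from multiprocessing import Pool
from pathlib import Path

# Hypothetical analysis function; swap in your real movement detection.
def analyze(video_path):
    # ... open the video, look for movement, write results somewhere ...
    return str(video_path), "done"

if __name__ == "__main__":
    videos = sorted(Path("videos").glob("*.mp4"))  # assumed folder layout
    with Pool(processes=4) as pool:                # e.g. one worker per core
        # imap_unordered yields results as each video finishes.
        for path, status in pool.imap_unordered(analyze, videos):
            print(path, status)
```

With 50+ videos and 4 workers, the pool keeps 4 analyses running at all times; scaling beyond one machine to a real cluster would need something like ipyparallel or Dask instead.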
(I am using Python and ArchLinux)
I am writing a simple AI in Python as a school project. Because it is a school project, and I would like to visibly demonstrate what it is doing, my intention is to have a different terminal window displaying the printed output from each subprocess: one terminal showing how sentences are being parsed, one showing what pyDatalog is doing, one for the actual input-output chat, etc., possibly across two monitors.
From what I know, which is not much, a couple of feasible ways to go about this are threading each subprocess and figuring out the display from there, or writing/using a library that lets me create and configure my own windows.
My question, then, is: are those the best ways, or is there an easy way to output to multiple terminals simultaneously? Also, if making my own windows is the best option (and I'm sorry if my terminology is off when I say "making my own windows"; I mean building my own output areas in Python), which library should I use for that?
So you could go in multiple directions with this. You could create a Tkinter (or GUI library of your choice) output text box and write to that. This option would give you the most control over how to display your data.
Another option is to access multiple terminals via named pipes. This involves spawning an xterm with a pipe and pumping output to it to be written on screen. See this question for an example:
Using Python's Subprocess to Display Output in New Xterm Window
I like @ebarr's answer, but a quick and dirty way to do it is to write to several files. You can then open multiple terminals and tail the files.
I have to automate multiple webpages using Selenium. The preferred method is WebDriver with Python on Windows. Since the number of webpages to test is very large, I am trying to figure out if I can parallelize this process. E.g., from the command line, I execute
python script1.py
Say I have 100 such scripts and I want to execute them in batches of 5. One additional requirement is that when 1 of the 5 scripts completes, the master starts a 6th script, so that 5 scripts are always running in parallel.
I have searched the docs and some forums, but I could not find any help with this. I have done a similar thing in the past, but that involved firing up multiple browsers from code, so it was somewhat different. This involves Python and WebDriver.
Any help is appreciated.
Thanks and Regards.
I wanted to do something similar, where I wanted to run multiple test cases at once. I guess this can be achieved by using Selenium Grid.
I have no idea why this was downvoted. Anyway, I found a way to do this.
It can be done by importing the subprocess module and then passing arguments to the call function:
subprocess.call(["python", "d:/pysel/hello.py"])
subprocess.call(["python", "d:/pysel/goodbye.py"])
It is not exactly parallel (each call blocks until the script finishes), but it may work for my situation.
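To actually keep 5 scripts running at once, as the question asks, one option is a multiprocessing.Pool of 5 workers whose job is just to launch the scripts with subprocess; as each script exits, its worker picks up the next one from the list. The script paths below are hypothetical placeholders:

```python
import subprocess
import sys
from multiprocessing import Pool

# Hypothetical list of the 100 Selenium scripts to run.
scripts = [f"d:/pysel/script{i}.py" for i in range(1, 101)]

def run(script):
    # Each worker blocks on one script; the pool keeps 5 going at a time.
    return script, subprocess.call([sys.executable, script])

if __name__ == "__main__":
    with Pool(processes=5) as pool:
        # imap_unordered reports each script as soon as it finishes.
        for script, code in pool.imap_unordered(run, scripts):
            print(script, "exited with", code)
```

Using sys.executable instead of the bare string "python" ensures the children run under the same interpreter as the master script.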