I'm using Python's subprocess module to spawn new processes. The processes are independent of each other and output some data related to the account creation.
import subprocess
from time import sleep

for token in userToken:
    p = subprocess.Popen(['python3', 'create_account.py', token])
    sleep(1)
I'm trying to find a way to have the output of each Python script appear in a separate VS Code terminal, so I can clearly see how the processes are running.
For example, in VS Code you can split the terminals as in the screenshot below. It would be great if each of the processes had its own terminal window.
I've also checked that you can run tasks in VS Code in separate terminals, as described here. Is there a way to launch multiple subprocesses in separate terminals like that?
If that's not possible, is there another way I can run subprocess in multiple terminals in VS Code?
Currently, by default, VS Code runs Python code in a single terminal.
If you want to run Python code in two or more VS Code terminals separately, rather than sequentially, you can manually enter the run command in each terminal, for example:
The command to run the Python file 'c.py': "..:/.../python.exe ..:/.../c.py".
Apart from manually entering the execution commands in two or more newly created terminals to make the scripts run concurrently, VS Code currently has no built-in support for this.
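As a workaround outside VS Code's own terminals, the loop from the question can at least give each process its own OS console window; a minimal sketch, assuming Windows (subprocess.CREATE_NEW_CONSOLE is Windows-only, and the token list here is a placeholder):

# Sketch only: each create_account.py run opens in its own console window.
import subprocess
from time import sleep

userToken = ['tok1', 'tok2', 'tok3']            # placeholder tokens

for token in userToken:
    subprocess.Popen(
        ['python', 'create_account.py', token],
        creationflags=subprocess.CREATE_NEW_CONSOLE,   # Windows-only flag
    )
    sleep(1)

On Linux or macOS there is no equivalent flag, so the same effect usually requires launching an explicit terminal emulator command instead.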
I have submitted a feature request for this on GitHub and look forward to it being implemented:
GitHub link: Can VSCode automatically run python scripts in two or more terminals at the same time?
I am trying to create a bunch of automations on my Windows PC, and I have encountered some obstacles while trying to automate a PowerShell script with Windows Task Scheduler.
Right now, I have managed to set up a scheduled task to run the script, but although the whole script works as intended when I start it manually, it doesn't work correctly when invoked via Task Scheduler.
The PowerShell script is very simple; it is meant to invoke a certain Python script. I also added a log transcript for testing purposes.
Start-Transcript -Path "<path>\transcript0.txt"
python dataset_creator.py --to_import yes --to_export yes
When invoked manually, the Python script works, but via Task Scheduler the only part that works is the transcript creation (i.e., no Python script is actually run). Task Scheduler itself reports that the task executed properly. This is how I set up the action:
Program: %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe
Arguments: -NoProfile -NoLogo -NonInteractive -ExecutionPolicy Bypass -File "C:\<path>\powershell_script.ps1"
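For completeness, a small probe like the sketch below could be dropped into the scheduled task to show which interpreter, working directory and PATH the scheduled run actually gets (the file name and the printed fields are illustrative only, not part of the existing scripts):

# env_probe.py - hypothetical debugging helper: print which interpreter,
# working directory and PATH the scheduled run actually uses.
import os
import sys

print('executable:', sys.executable)
print('cwd       :', os.getcwd())
print('PATH      :', os.environ.get('PATH', ''))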
For now, I have set it to run under the SYSTEM account; I have also tried my admin user account, but that doesn't work either.
Could you suggest potential issues? I have scrolled through several articles and nothing works. I have also tried to skip the PowerShell part (i.e., set up the task to invoke the Python script directly), but that hasn't worked so far either; I also initially wanted to put a number of Python script invocations into a single PowerShell file.
I am also open to suggestions for alternatives to Windows Task Scheduler.
I have looked at this question, but I am not sure whether I understood it correctly.
I have opened PyCharm and one Python script, and it is running (it does topic modeling).
I also have another Python script, which I opened in another PyCharm instance on the same server, and I ran it as well.
Now these two programs are running on the same server. I should mention that I have not changed any configuration, neither on the server nor in PyCharm.
Do you think it's OK this way, or will one script technically not run (I mean in terms of progress: it shows as running but practically makes no progress) until the other script has finished?
Edit Configurations -> Allow parallel run. Done
First, PyCharm will create independent processes on the server, so both scripts will run. You can check it with something like htop - search for processes and verify that they're running.
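If htop isn't available, a quick check along the same lines can be done from Python; a minimal sketch (the 'python' filter is just an assumption about how the processes show up in the process list):

# list_python_procs.py - hypothetical check: print every process whose command line mentions python.
import subprocess

out = subprocess.run(['ps', '-eo', 'pid,etime,cmd'],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if 'python' in line:
        print(line)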
Second, you don't have to open a second PyCharm window to run the second script. You can run both of them from a single one. There are at least two ways: with run configurations or by spawning multiple terminal windows and running the scripts from there.
From the Run/Debug Configurations window you can add a Compound configuration that contains multiple configurations that will run in parallel. The Allow parallel run option for the child configurations makes no difference in this case.
The default behaviour was changed starting from version 2018.3. You can allow multiple runs by selecting Allow parallel run within the Edit Configurations menu.
Normally, I would use "blender -P script.py" to run a python script. In this case, a new blender process is started to execute the script. What I am trying to do now is to run a script using a blender process that is already running, instead of starting a new one.
I have not seen any source on this issue so far, which makes me concerned about the actual feasibility of this approach.
Any help would be appreciated.
Blender isn't designed to be started from the CLI and then keep receiving more commands from the CLI while it is running. It does, however, include a text editor that can open text files and run a text block as a Python script, and it also includes a Python console that can be used to type in commands interactively while Blender is running. You may also find this addon useful, as it lets you run a text block in the Python console; this leaves you with an interactive session that contains the variables as they exist at the end of the script's execution.
There is a CLI option to run Blender as a Python console: blender --python-console. The GUI does not get updated while this console is running, so you could open and exec several scripts, and when you exit the console Blender will update its GUI and allow interactive use. If you start in background mode (-b), Blender will quit when you exit the console.
My solution was to launch Blender via console with a python script (blender --python script.py) that contains a while loop and creates a server socket to receive requests to process some specific code. The loop will prevent blender from opening the GUI, and the socket will handle the multiple requests inside the same blender process.
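A minimal sketch of that idea, assuming a plain TCP socket and an exec()-based handler (the port, the 'quit' convention and the file name command_server.py are my assumptions, not the original code):

# command_server.py - hypothetical sketch; run with: blender --python command_server.py
# The accept() loop blocks before Blender opens its GUI and keeps the process alive.
import socket

HOST, PORT = '127.0.0.1', 5005                # assumed address and port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

while True:
    conn, _addr = server.accept()
    data = conn.recv(65536).decode('utf-8')
    if data.strip() == 'quit':                # simple shutdown convention
        conn.close()
        break
    try:
        exec(data)                            # run the received code inside this Blender process
        conn.sendall(b'ok')
    except Exception as exc:
        conn.sendall(str(exc).encode('utf-8'))
    conn.close()

server.close()

Any client that connects to the port and sends Python source as text then has its code executed inside the already-running Blender process.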
I launch a Python script in bash:
python /dumpgenerator.py --index=http://*website* --xml --curonly --images --path=/*website*
In order to perform many tasks, the script has to be launched in new windows with a different parameter each time (the website).
Can I launch the Python script several times at once so that it "catches" its parameters, via bash commands, from a text file containing website links? It's important that the sessions are launched in new console windows (each session gets its own bash and Python processes).
There's also the problem of converting a website link into a valid filesystem format when setting --path=/website. What regular expression should I use?
Example: the script is developed by https://code.google.com/p/wikiteam/. It doesn't let you archive more than one wiki at a time. If you want more wikis to be archived, you have to copy and paste the command (changing just one parameter, the website) into a new bash session.
It seems rather tedious to open a new terminal window and bash session 50 times. That's why I'm wondering how I can simplify this task.
I've found a solution. I just set up tmux following this article and ran copies of python /dumpgenerator.py
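For reference, a small driver along these lines can read the links from a text file, turn each one into a filesystem-friendly name, and start one detached tmux session per wiki (the file name websites.txt, the session naming and the regular expressions are my assumptions, not part of the original setup):

# run_dumps.py - hypothetical driver: one detached tmux session per wiki.
import re
import subprocess

def to_fs_name(url):
    # drop the scheme, then replace anything that is not filename-safe
    name = re.sub(r'^https?://', '', url.strip())
    return re.sub(r'[^A-Za-z0-9._-]', '_', name)

with open('websites.txt') as links:           # one URL per line (assumed)
    for url in links:
        url = url.strip()
        if not url:
            continue
        path = to_fs_name(url)
        cmd = ('python /dumpgenerator.py --index=%s --xml --curonly '
               '--images --path=/%s' % (url, path))
        # tmux session names may not contain '.' or ':'
        session = re.sub(r'[^A-Za-z0-9_-]', '_', path)[:30]
        # detached session; attach later with: tmux attach -t <name>
        subprocess.run(['tmux', 'new-session', '-d', '-s', session, cmd])

Each dump then runs in its own tmux session that can be attached to and detached from as needed.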
I am attempting to launch a couple of external applications from a Jenkins build step in Windows 7 64-bit. They are essentially programs designed to interact with each other and perform a series of regression tests on some software. Jenkins is run as Windows service as a user with admin privileges on the machine. I think that's full disclosure on any weirdness with my Jenkins installation.
I have written a Python3 script that successfully does what I want it to when run from the Windows command line. However, when I run this script as a Jenkins build step, I can see that the applications have been spawned via the Task Manager, but there is no CPU activity associated with them, and no other evidence that they are actually doing anything (they produce log files, etc., but none of these appear). One of the applications typically runs at 25% CPU during the course of the regression tests.
The Python script itself runs to completion as if everything is OK. Jenkins is correctly monitoring the output of the script, which I can watch from the job's console output. I'm using os.spawnv(os.P_NOWAIT, ...) for each of the external applications. The subprocess module doesn't do what I want here; I just want these programs to run externally.
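Roughly, each launch looks like the sketch below (the executable path and the argument are placeholders, not the actual regression tools):

# Hypothetical launch step: fire-and-forget an external program with os.spawnv.
import os

APP = r'C:\tools\regression_app.exe'          # placeholder path
# P_NOWAIT returns immediately instead of waiting for the program to exit.
handle = os.spawnv(os.P_NOWAIT, APP, [APP, '--run-tests'])
print('spawned:', handle)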
I've even run a bash script via Cygwin that functionally does the same thing as the Python script with the same results. Any idea why these applications spawn but don't execute?
Thanks!