Opening third-party console and running commands - python

I am trying to write a script to open a third-party console application and run commands on Windows. The third-party console requires two commands: 'connect' and 'run'. A typical input/output session is shown below. It connects to the host server, then runs a process indexed by three parameters (p1, p2, p3).
>connect server
Successfully connected to service.
>run p1 p2 p3
Successfully started.
FINISHED
The app does not allow me to execute both commands in one line using '&' as cmd does.
Despite reading the subprocess documentation, I can't figure out how to pass my two commands to the executable.
I am using Python 3.5, so I believe subprocess.run should be suitable for this task. The snippet below simply opens the third-party console. I have tried other code, linked at the bottom of the post, but I am unsure how to adapt it for my purpose.
import subprocess
exe = r'C:\...\third_party_app.exe'
subprocess.run(exe)
Below are some SO resources that looked helpful, but which I have tried, and failed, to interpret.
https://stackoverflow.com/tags/subprocess/info
Python - How do I pass a string into subprocess.Popen (using the stdin argument)?
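Building on that last link, here is a minimal sketch of how the two commands could be fed to the app's stdin, assuming the app actually reads its commands from standard input rather than from the console directly (the executable path and the parameters are the placeholders from above; if the app reads the keyboard directly, this will not work):
import subprocess

exe = r'C:\...\third_party_app.exe'
# Both commands, separated by newlines, are fed to the app's stdin in one go.
commands = 'connect server\nrun p1 p2 p3\n'

# universal_newlines=True makes input/stdout text (str) on Python 3.5.
result = subprocess.run(exe, input=commands, universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout)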

Related

Python Subprocess in Multiple Terminals in VSCode

I'm using Python's subprocess to spawn new processes. The processes are independent of each other and output some data related to account creation.
import subprocess
from time import sleep

for token in userToken:
    p = subprocess.Popen(['python3', 'create_account.py', token])
    sleep(1)
I'm trying to find a way to get the output of each of the Python scripts to run in separate VS Code terminals, to clearly see how the processes are running.
For example, in VSCode you can split the terminals as in the screenshot below. It would be great if each of the processes would have its own terminal window.
I've also seen that you can run tasks in VS Code in separate terminals, as described here. Is there a way to launch multiple subprocesses in separate terminals like that?
If that's not possible, is there another way I can run subprocess in multiple terminals in VSCode?
Currently, VS Code only runs Python code in a single terminal by default.
If you want to run Python code in two or more VS Code terminals separately, rather than sequentially, you could manually enter the run command in each terminal, for example:
The command to run the python file 'c.py': "..:/.../python.exe ..:/.../c.py".
Apart from manually entering the run command in two or more newly created terminals so that the scripts run at the same time, VS Code currently has no built-in support for this.
I have submitted a feature request for this on GitHub, and we look forward to it being implemented:
GitHub link: Can VSCode automatically run python scripts in two or more terminals at the same time?
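If separate VS Code terminals turn out not to be possible, one workaround is to give each script its own console window at the operating-system level. A minimal sketch, assuming Windows (CREATE_NEW_CONSOLE is Windows-only; create_account.py and the loop come from the question, while the token values are placeholders):
import subprocess
import sys
from time import sleep

userToken = ['tokenA', 'tokenB']  # placeholder tokens

for token in userToken:
    # Each child process gets its own console window instead of sharing the parent's.
    subprocess.Popen([sys.executable, 'create_account.py', token],
                     creationflags=subprocess.CREATE_NEW_CONSOLE)
    sleep(1)
These are ordinary console windows rather than VS Code terminals, but they do let you watch each process's output separately.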

Is it possible to use the CMD to send commands to the active Maya window?

I am a beginner in Maya, and I want to run commands from a cmd window against the active Maya session to modify things in the scene, like adding a sphere, for example, whether in python
from pymel.all import *
sphere()
or MEL
polySphere -r 1 -sx 20 -sy 20 -ax 0 1 0 -cuv 2 -ch 1;
I found the mayapi, and several pieces of content related to "headless" use (without the GUI), but nothing yet about running commands in the already-open window, maybe because I don't know the right terms. I would like it to be in python, but if you know any solution in MEL you can put it here too!
Is there any way to do this without specifying the open document path?
You have four basic options for programmatic control of maya.
Running scripts inside the script editor in Maya. This requires that you have an open GUI maya instance running and you'd either type the commands yourself, or initiate scripts by loading and executing them in the script editor. This works fine for automating repetitive tasks but requires manual intervention on your part.
You can send individual commands to Maya over a TCP connection using the Maya command port (a minimal sketch follows below). This is basically like connecting to another computer over telnet: you can control the Maya session, but you'll be communicating entirely via text. It's commonly used, for example, by people who are writing scripts in Sublime Text to test them out in Maya without switching windows.
You can run a commandline-only copy of Maya using the MayaPy python interpreter that ships with Maya and the maya.standalone module, which hosts a non-GUI maya session. That lets you execute python commands in Maya without needing the GUI at all -- it's a common tool for automation tasks.
You can pass a script argument to Maya at startup with the '-c' (for "command") flag. Maya will open and run that script. For legacy reasons the commands are only MEL, not python, but you can get around that by using the MEL command "python" along with a Python command in quotes.
All of these are useful; the right one really depends on what you need to do. For long-running tasks, however, #3 is probably the most reliable method because it's easy to iterate on and test.
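To illustrate option #2 against the original question, here is a minimal sketch. It assumes a command port is opened from inside the running Maya session; the port number 7002 is arbitrary, and polySphere is just an example command:
# Run this once inside the open Maya session (Script Editor) to accept Python over a socket.
import maya.cmds as cmds
cmds.commandPort(name=":7002", sourceType="python")

# Then, from an ordinary Python interpreter launched in cmd, send a command to Maya:
import socket

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn.connect(("127.0.0.1", 7002))
conn.sendall(b"import maya.cmds as cmds; cmds.polySphere(radius=1)\n")
conn.close()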

Pipe ssh session into and out of python

The company I work for uses an archaic information system (Copyright 1991-2001). The system is a CentOS machine running an ssh server. There's no access to the back-end or its data in any way. All data needs to be retrieved through text reports, or input with manual keystrokes. Here's an example of the view you get when you log in.
I'm trying to write a python script that will simulate keystrokes to run reports and do trivial tasks. I've already successfully done this with a .cmd file on Windows that connects and simulates keystrokes. The problem is that some processes have unpredictable branches (a message sometimes pops up and asks for some information, or for a key press to verify that you've seen it). I can predict where a branch might occur, but can't detect whether it actually has, because my .cmd file is blind to output from the ssh session. (I'm working in Windows, by the way.)
What I'm trying to do is use a python script that uses stdin and makes decisions based on what it sees, but I'm new to how piping works. Piping into my script works, but I'm unsure how to send keystrokes back to the ssh session from the python script. Here's an example of my test script:
import sys
import time

buff = ''
try:
    while True:
        buff += sys.stdin.read(1)
        if buff[-5:] == 'press':
            print('found word "press"!')
            # Send a keystroke back to the ssh session here
            buff = ''
except KeyboardInterrupt:
    sys.stdout.flush()
    pass
And here's how I call it:
ssh MyUsername@###.###.###.### | python -u pipe_test.py
While it's running, I can't see anything, but I've verified that I can send keystrokes through the terminal with my regular keyboard.
Any ideas on how to output keystrokes to the ssh session?
Should I be doing some completely different, much simpler thing?
FYI: The data sent by the server to the terminal has ASCII escape characters flying all over the place. It's not a nice bash interface or anything like that. Also, I've installed a bunch of Unix command line tools so that I can, for example, ssh from windows.
tl;dr How do I pipe from an ssh session into python, and send keystrokes back to the ssh session from that same python script?
You definitely don't do this with pipes. A Unix pipe is a unidirectional inter-process communications mechanism. You can send data to it, or you can read data from it. But not both (through the same pipe).
It is possible to use pairs of pipes to create co-processes. This is even supported directly in some Unix shells, such as Korn shell and Bash as of version 4 (https://www.gnu.org/software/bash/manual/html_node/Coprocesses.html). However, this mechanism is somewhat fragile and prone to deadlock. It works so long as the processes on both sides of the pair of pipes are rigorous in their handling of the pipes and the associated buffering (actually it's possible for just one end to be rigorous, but even that's tricky). This is not likely to be the case for the programs that you're trying to run remotely.
Someone suggested pexpect, which is an excellent choice for controlling a locally spawned terminal or curses application. It's possible to manage a remote process with it, by spawning a local ssh client and controlling that.
However, a better choice for accessing ssh protocols and APIs from Python would be Paramiko. It implements the ssh protocol so that you access the remote sshd process as an API rather than through a client (command line utility).
An advantage of this approach is that you can programmatically manage port redirections, transfer and manage files (as you would with sftp) (including setting permissions and such), and you can execute programs and separately access their standard input, output, and error streams or their pseudo-terminals (pty) as well as fetch the exit codes of remote processes as distinct from the exit code of your local ssh client.
There's even a package, paramiko-expect, which adds an extension to Paramiko to enable more straightforward use of these remote pty objects. (Pexpect provides similar features on the pty when controlling local processes.) [Caveat: I haven't used Fotis Gimian's package yet, but I've used Paramiko fairly extensively and have sometimes wished I had something like it.]
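Applied to the question, a minimal sketch using Paramiko's interactive shell might look like the following (the host, credentials, and the 'press' prompt are placeholders taken from the question, not a tested recipe):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("###.###.###.###", username="MyUsername", password="...")

chan = client.invoke_shell()  # an interactive session with a pseudo-terminal
buff = ""
while True:
    buff += chan.recv(1024).decode("utf-8", errors="replace")
    if buff.endswith("press"):
        chan.send(b"\n")      # answer the prompt with a keystroke
        buff = ""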
As you may have figured out from this answer, the complexity of programmatically dealing with an interactive terminal/text program under Unix (Linux or any of its variants) has to do with the details of how that program is written.
Some programs, such as shells, can be completely driven by line-oriented input and output on their standard file descriptors (stdin, stdout, and stderr). Others must be controlled through their terminal interfaces. That is accomplished by launching them in a pseudo-terminal environment, such as the one provided by sshd when it starts a normal interactive session, the ones supplied by expect (and pexpect and the various other modules and utilities inspired by the old TCL expect), and the ones supplied by xterm or other terminal windowing programs under any modern OS (including Cygwin and "Bash for Windows"/WSL, the Windows Subsystem for Linux, under the latest versions of Microsoft Windows).
In general, your attempts will need to use one or the other approach, and pipes are only very crudely useful for the approach using the standard file descriptors. The decision of which approach to use will mostly be driven by the program you're trying to (remotely) control.

How do I run a python script using an already running blender?

Normally, I would use "blender -P script.py" to run a python script. In this case, a new blender process is started to execute the script. What I am trying to do now is to run a script using a blender process that is already running, instead of starting a new one.
I have not seen any source on this issue so far, which makes me concerned about the actual feasibility of this approach.
Any help would be appreciated.
Blender isn't designed to be started from the cli and then keep receiving more commands from the cli as it is running. It does, however, include a text editor that can open text files and run a text block as a python script, and it also includes a python console that can be used to interactively type in commands while blender is running. You may also find this addon useful, as it lets you run a text block in the python console, which leaves you with an interactive session that contains the variables as they exist at the end of the script's execution.
There is a cli option to run blender as a python console: blender --python-console. The GUI does not get updated while this console is running, so you could open and exec several scripts, and when you exit the console blender will update its GUI and allow interactive use; if you start in background mode (-b) it will quit when you exit the console.
My solution was to launch Blender from the console with a python script (blender --python script.py) that contains a while loop and creates a server socket to receive requests to run specific code. The loop prevents blender from opening the GUI, and the socket handles the multiple requests inside the same blender process.
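As a concrete illustration of that approach, here is a minimal sketch of what such a script.py could look like; the port number, the "quit" convention, and the sphere example are assumptions, not part of the original answer:
# script.py -- start with:  blender --python script.py
import socket
import bpy  # available because this script runs inside Blender's own interpreter

HOST, PORT = "127.0.0.1", 5005  # arbitrary local address/port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

while True:  # this loop is what keeps Blender from reaching the GUI
    conn, _ = server.accept()
    code = conn.recv(4096).decode()
    if code.strip() == "quit":
        conn.close()
        break
    try:
        exec(code)  # e.g. "bpy.ops.mesh.primitive_uv_sphere_add()"
        conn.sendall(b"ok")
    except Exception as exc:
        conn.sendall(str(exc).encode())
    conn.close()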

Blackbox test with python of a ncurses python app via Paramiko

I'm in the strange position of being both the developer of a python utility for our project, and the tester of it.
The app is ready and now I want to write a couple of blackbox tests that connect to the server where it resides (the server itself is the product that we commercialize), and launch the python application.
The python app allows minimal command-line scripting (some parameters automatically launch functions that would otherwise require user interaction at the main menu). For the residual user interactions, I usually use bash syntax like this:
./app -f <<< $'parameter1\nparameter2\n\n'
And finally I redirect everything to >/dev/null.
If I do manual checks at the command line on the server (where I connect via SSH), everything works smoothly. The app launch lasts 30 seconds, and after 30 seconds I'm correctly returned to the prompt.
Now to the blackbox testing part. Here I'm also using python (Py.test framework), but the test code resides on another machine.
The test runner machine will connect to the Server under test via Paramiko libraries. I've already used this a lot in scripting other functionalities of the product, and it works quite well.
But the problem in this case is that the app under test that I wrote in python uses the NCurses library in its normal behaviour ("import curses"), and apparently when trying to launch this in the Py.test script:
import paramiko
...
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(myhost, username=myuser, password=mypass, timeout=mytimeout)
client.exec_command("./app -f <<< $'parameter1\nparameter2\n\n' >/dev/null")
Regardless of the redirection to /dev/null, .exec_command() prints this to standard error, with a message about the curses initialization:
...
File "/my/path/to/app", line xxx, in curses_screen
scr = curses.initscr()
...
_curses.error: setupterm: could not find terminal
and finally the py.test script fails because the app execution crashed.
Is there some conflict between curses (used by the app under test) and paramiko (used by the test script)? As I said, if I connect manually via SSH to the server where the app resides and launch the command line manually with the silent redirection to /dev/null, it works as I would expect.
ncurses really would like to do input/output to a terminal. /dev/null is not a terminal, and some terminal I/O mode changes will fail in that case. Occasionally someone connects the I/O to a socket, and ncurses will (usually) work in that situation.
In your environment, besides the lack of a terminal, it is possible that TERM is unset. That will make setupterm fail.
Setupterm could not find terminal, in Python program using curses
wrong error from curses.wrapper if curses initialization fails
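One way to act on that diagnosis in the test script is to ask Paramiko for a pseudo-terminal and set TERM explicitly. A sketch, reusing the client and command from the question (get_pty is an existing exec_command parameter; the choice of TERM=xterm is an assumption):
# Request a pty so curses.initscr() has a terminal to talk to, and set TERM explicitly.
stdin, stdout, stderr = client.exec_command(
    "TERM=xterm ./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null",
    get_pty=True,
)
print(stdout.channel.recv_exit_status())  # block until the app finishes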
