I have a Docker container (based on RHEL 6) in which I am running the mongoexport command-line tool from a Python script using the subprocess module.
mongoexport fails with exit code 1 and the error:
no reachable servers
The mongoexport command has the required connection info, such as host, port, db.
When I run the same mongoexport command in the same container using docker run, it succeeds.
Any idea what goes wrong when I run using Python?
The problem was related to Linux/Windows compatibility and the way the mongoexport command line is built.
The script was developed on Windows, where subprocess.Popen will join a list of strings into a single command line. On Linux, when the command is run through a shell (shell=True), it must instead be passed as a single string; with a list, only the first element is taken as the shell command and the connection arguments are lost.
Changing the argument passed to subprocess.Popen to a single string fixed the problem.
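The difference is easy to demonstrate on Linux (a minimal sketch, not from the original answer, using echo in place of mongoexport):

```python
import subprocess

# On POSIX with shell=True, only the first list element becomes the shell
# command; the remaining elements become the shell's positional parameters.
r1 = subprocess.run(["echo", "hello"], shell=True, capture_output=True, text=True)
print(repr(r1.stdout))  # '\n' -- "hello" was silently dropped

# With shell=True, the whole command should be one string:
r2 = subprocess.run("echo hello", shell=True, capture_output=True, text=True)
print(repr(r2.stdout))  # 'hello\n'

# Without a shell, a list of arguments is the portable form on both platforms:
r3 = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(repr(r3.stdout))  # 'hello\n'
```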
I am executing my Robot Framework Selenium tests on a remote machine. (It's a Docker container, but it also needs to work with Podman, so I suspect Docker-specific commands won't help me.) On this remote machine there is an automatic process running in the background, which produces terminal logs.
I can read this output when I run docker logs <container_id> in my terminal, but I need to capture it with Python and extract some information from these logs to show in the Robot Framework test log file.
Any ideas how to do it?
I have found several ways to execute a command on the remote machine and get the output, but here I am not executing any command; I just need to read what is being produced automatically.
Thank you for your advice
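One possible approach (a sketch, not from the original thread): since podman's logs subcommand is CLI-compatible with docker's, a small helper can shell out to either engine. The function names and the engine parameter below are made up for illustration:

```python
import subprocess

def container_logs_cmd(container_id, engine="podman", tail=None):
    # `podman logs` takes the same arguments as `docker logs`, so the
    # engine can be swapped without changing anything else.
    cmd = [engine, "logs"]
    if tail is not None:
        cmd += ["--tail", str(tail)]
    cmd.append(container_id)
    return cmd

def get_container_logs(container_id, engine="podman", tail=None):
    # `logs` prints the container's captured stdout/stderr; collect both.
    result = subprocess.run(
        container_logs_cmd(container_id, engine, tail),
        capture_output=True, text=True, check=True,
    )
    return result.stdout + result.stderr
```

The returned text can then be parsed and written into the Robot Framework test log, for example via robot.api.logger.info(...).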
I have written a Python 3 script to test an API.
The API can only be accessed from a server, so I have a bash script that runs wget on that server via ssh and moves the result to my local machine for analysis. This script works fine on its own.
Now I want to call this bash script from Python a few times; however, the ssh commands in the bash script seem to break when I use subprocess.call().
This line should save the API response to a chosen location on my server so that I can copy it to my computer later:
ssh USER@SERVER.com "wget 'https://API.com/?${options}' -O '${file_path}'"
but instead I get the error bash: -O: command not found and the response is saved to a default file generated by wget:
*Using options: 'filter[id]=1234'
*Sending request to API
bash: -O: command not found
--2021-02-19 12:02:52-- https://API.com/?filter[id]=1234
...
Saving to: ‘index.html?filter[id]=1234’
0K .......... .......... ........ 100% 9.48M=0.003s
2021-02-19 12:02:52 (9.48 MB/s) - ‘index.html?filter[id]=1234.’ saved [29477/29477]
So it seems to me that the command being executed via ssh was somehow split into multiple commands?
The weird thing is that when I use os.system() to execute the script (or call it directly from the terminal) it works flawlessly. Here is the python code that calls the bash script:
# Failing subprocess.call line
subprocess.call(["./get_api.sh", save_file_name, f"'filter[id]={id}'"])
# Succeeding os.system line
system(f"./get_api.sh {save_file_name} 'filter[id]={id}'")
I am wondering if anyone can tell me what might be going on here?
(I edited the included code quite a bit to remove sensitive information; also, this is my first Stack Overflow question, so I hope it contains enough information and context.)
The single quotes you used with system aren't part of the query; they are just shell syntax that protects filter[id]={id} from shell expansion. They should be omitted in the subprocess.call version, which doesn't use a shell.
subprocess.call(["./get_api.sh", save_file_name, f"filter[id]={id}"])
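The effect is easy to demonstrate (a minimal sketch using printf in place of the original script):

```python
import subprocess

# Through a shell, the single quotes are consumed by the shell itself:
with_shell = subprocess.run(
    ["bash", "-c", "printf '%s' 'filter[id]=1234'"],
    capture_output=True, text=True,
)
print(with_shell.stdout)  # filter[id]=1234

# Without a shell, quotes embedded in an argument are passed through verbatim:
without_shell = subprocess.run(
    ["printf", "%s", "'filter[id]=1234'"],
    capture_output=True, text=True,
)
print(without_shell.stdout)  # 'filter[id]=1234' -- quotes included
```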
I have a strange scenario going on at the moment. When I issue an svn info TXN REPO command on our build server (separate from the SVN server), it works as expected and displays the relevant information to the console.
However when I script it using Python, and specifically, Popen from the subprocess module, it prints a message svn: E230001: Server SSL certificate untrusted to the standard error (console).
What I've tried:
Using --non-interactive and --trust-server-cert flags within the script call.
Passing a username/password within the svn info call via the script.
Neither of the above seems to take effect; the same error is printed. However, manually running the same command from the command prompt succeeds with no problems. I assume it might be something to do with Python opening a new session to the SVN server, and that session not being a "trusted" connection, but I can't be sure.
Our SVN server is a Windows machine running SVN 1.8.0.
Our build server is a Windows machine running Jenkins 2.84. Jenkins executes a batch script which kicks off the Python script that performs the above task.
Command: svn_session = Popen("svn info --non-interactive --trust-server-cert --no-auth-cache -r %s %s" % (TXN, REPOS), stdout=PIPE, stderr=PIPE, shell=True)
Edit:
When I copy and paste the Python line from the script into an interactive Python shell on the same server, the command also works as expected. So the issue is in how the script executes the command, rather than the command itself or how Python runs it.
Has anyone come across this before?
In case anyone is looking at this in the future: Panda Pajama has given a detailed answer to this:
SVN command line in jenkins fails due to server certificate mismatch
I want to write a script that executes a command on a remote Windows machine and gets the output from that command. The problem is that I want to use only software built into Windows, so unfortunately I can't use SSH, for example. I found the wmi library, and I can log in to the machine and execute a command, but I don't know how to receive the output from that command.
import wmi
c = wmi.WMI("10.0.0.2", user="administrator", password="admin")
process_startup = c.Win32_ProcessStartup.new()
process_id, result = c.Win32_Process.Create(r'cmd /c ping 10.0.0.1 -n 1 > c:\temp\temp.txt')
if result == 0:
    print("Process started successfully: %d" % process_id)
print(result)
I tried to redirect output to file, but I can't find any way to get text file content either.
Is there any possible way to get output or text file content using wmi, or other python libraries?
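Not an answer from the original thread, but one common workaround, assuming the controlling machine can reach the remote C$ administrative share with those credentials: wait for the spawned process to exit (for example, by polling c.Win32_Process(ProcessId=process_id) until it comes back empty), then read the redirected file over UNC. The helper name below is made up for illustration:

```python
def admin_share_path(host, local_path):
    # Map a path on the remote machine, e.g. r'c:\temp\temp.txt', to the
    # UNC administrative-share path readable from the controlling machine,
    # e.g. r'\\10.0.0.2\c$\temp\temp.txt'.
    drive, rest = local_path.split(":", 1)
    return "\\\\%s\\%s$%s" % (host, drive.lower(), rest)

# After the remote process finishes:
#     with open(admin_share_path("10.0.0.2", r"c:\temp\temp.txt")) as f:
#         output = f.read()
```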
For the application you're describing, you may find RPyC a good fit. It's a Python remoting server you can connect to and issue commands to; it can do anything that Python can do on the target machine. So you could, for example, use the subprocess module to run a Windows command, capture the output, and return the result.
The safe approach is to expose your remote functionality as a service: basically, a Python script that RPyC runs for you. However, you can also use RPyC's classic mode, which is much like running a Python session directly on the remote machine. Use it only when you are not worried about security, since classic mode can do anything on the remote machine, but it's useful for prototyping.
I'm trying to use Python's cmd library to create a shell with limited commands. One requirement I have is to be able to run a command that executes an existing shell script, which opens an ssh session on a remote machine and from there lets the user interact with the remote shell as if it were a regular ssh session.
Simply using subprocess.Popen(['/path/to/connect.sh']) works well, at least as a starting point, except for one issue: you can interact with the remote shell, but the input you type is not shown on stdout. So, for example, you see the prompt on your stdout, but when you type 'ls' you don't see it being typed; when you hit return, it works as expected.
I'm trying to wrap my head around how to print the input to stdout and still send it along to the remote ssh session.
EDIT:
The actual code, without using cmd, was just this one line:
ssh_session = subprocess.Popen(['connect.sh'])
It was fired from a do_* method in a class that extends cmd.Cmd. I think I may end up using paramiko, but I would still be interested in anyone's input on this.
Assuming you are using a Unix-like system: SSH detects whether it is attached to a terminal. When it detects that it is not, as happens when it is run under subprocess, it does not echo the characters typed. Instead you might want to use a pseudo-terminal; see pexpect, or the standard-library pty module. This way you can get the output from SSH as if it were running on a true terminal.
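With the pty module, the idea looks roughly like this. run_in_pty is a hypothetical helper that only captures output; a real interactive session would also need to forward your keystrokes to the master side (which pty.spawn does for you):

```python
import os
import pty
import subprocess

def run_in_pty(argv):
    # Attach the child's stdio to a pseudo-terminal, so it believes it is
    # on a real terminal (and, in ssh's case, echoes what you type).
    master, slave = pty.openpty()
    proc = subprocess.Popen(argv, stdin=slave, stdout=slave, stderr=slave,
                            close_fds=True)
    os.close(slave)
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:        # the pty raises EIO once the child closes it
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks)

# Unlike under a plain subprocess.Popen with pipes, the child sees a terminal:
# run_in_pty(["python3", "-c", "import sys; print(sys.stdout.isatty())"])
```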