I have a strange scenario going on at the moment. When I issue an svn info TXN REPO command on our build server (separate from the SVN server), it works as expected and displays the relevant information to the console.
However, when I script it using Python (specifically Popen from the subprocess module), it prints the message svn: E230001: Server SSL certificate untrusted to standard error (the console).
What I've tried:
- Using the --non-interactive and --trust-server-cert flags within the scripted call.
- Passing a username/password within the svn info call via the script.
Neither seems to take effect, and the same error is spat out. However, manually running the same command from the command prompt succeeds with no problems. I assume it might be something to do with Python opening a new session to the SVN server, and that session not being a "trusted" connection? But I can't be sure.
Our SVN server is on a Windows machine and runs version 1.8.0.
Our build server is a Windows machine running Jenkins version 2.84. Jenkins executes a batch script which kicks off the Python script that performs the above task.
Command: svn_session = Popen("svn info --non-interactive --trust-server-cert --no-auth-cache -r %s %s" % (TXN, REPOS), stdout=PIPE, stderr=PIPE, shell=True)
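For what it's worth, a minimal sketch of the same call built as an argument list, which avoids shell parsing entirely and captures the full stderr for diagnosis (TXN and REPOS are the values the script already has):

from subprocess import Popen, PIPE

# Same svn call, but as an argument list: no shell involved, and stderr
# is captured so the full E230001 detail can be inspected.
cmd = ["svn", "info", "--non-interactive", "--trust-server-cert",
       "--no-auth-cache", "-r", TXN, REPOS]
svn_session = Popen(cmd, stdout=PIPE, stderr=PIPE)
out, err = svn_session.communicate()
if svn_session.returncode != 0:
    print(err.decode())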
Edit:
When I copy and paste the Python line from the script into the interactive Python shell on the same server, the command also works as expected. So the issue is how the script is executing the command, rather than the command itself or how Python runs that command.
Has anyone come across this before?
In case anyone is looking at this in the future: Panda Pajama has given a detailed answer to this here:
SVN command line in jenkins fails due to server certificate mismatch
I have written a Python 3 script to test an API.
The API can only be accessed from a particular server, so I have a bash script that executes wget on that server via ssh and moves the result to my local machine for analysis. The script works fine on its own.
Now I want to call this bash script from Python a few times; however, the ssh commands in the bash script seem to break when I use subprocess.call().
This line should save the API response to a chosen location on my server so that I can copy it to my computer later:
ssh USER@SERVER.com "wget 'https://API.com/?${options}' -O '${file_path}'"
but instead I get the error bash: -O: command not found and the response is saved to a default file generated by wget:
*Using options: 'filter[id]=1234'
*Sending request to API
bash: -O: command not found
--2021-02-19 12:02:52-- https://API.com/?filter[id]=1234
...
Saving to: ‘index.html?filter[id]=1234’
0K .......... .......... ........ 100% 9.48M=0.003s
2021-02-19 12:02:52 (9.48 MB/s) - ‘index.html?filter[id]=1234’ saved [29477/29477]
So it seems to me that the command being executed via ssh was somehow split into multiple commands?
The weird thing is that when I use os.system() to execute the script (or call it directly from the terminal) it works flawlessly. Here is the Python code that calls the bash script:
# Failing subprocess.call line
subprocess.call(["./get_api.sh", save_file_name, f"'filter[id]={id}'"])
# Succeeding os.system line
system(f"./get_api.sh {save_file_name} 'filter[id]={id}'")
I am wondering if anyone can tell me what might be going on here?
(I edited the included code quite a bit to remove sensitive information; also, this is my first Stack Overflow question, so I hope it contains enough information/context.)
The single quotes you used in system aren't part of the query; they are just part of the shell command that protects filter[id]={id} from shell expansion. They should be omitted from the use of subprocess.call, which doesn't use a shell.
subprocess.call(["./get_api.sh", save_file_name, f"filter[id]={id}"])
I have a SQL Server Agent job that executes some python scripts using CmdExec. Everything is set up with a proxy account as expected.
When I run the job I get:
Message
Executed as user: domain\proxyaccount. 'python' is not recognized as an internal or external command, operable program or batch file. Process Exit Code 1. The step failed.
I'm using Anaconda, and Python is in the system PATH variable. When I run python from the command line, it works. When I cut and paste the specific command from the job into a command prompt, it works. When I use runas to mimic the proxy account, it works. The only place Python doesn't run is from inside the job.
What else do I need to look at to troubleshoot this issue?
You should restart SQL Server Agent after installing Python on the server.
This is necessary for SQL Server Agent to pick up the new environment variables, including the updated PATH with Python in it.
There are also suggestions to restart SQL Server itself, but I believe restarting SQL Server Agent will be enough.
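If restarting the Agent doesn't resolve it, a quick (hypothetical) diagnostic is a CmdExec job step that prints what the Agent process actually sees:

where python
echo %PATH%

Alternatively, sidestep PATH entirely and invoke the interpreter by its full path in the job step, e.g. C:\Anaconda3\python.exe your_script.py (the install path and script name here are placeholders).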
I have a docker container (based on RHEL6) where I am running the mongoexport command line tool from a python script using the subprocess module.
mongoexport fails with exit code 1 and the error:
no reachable servers
The mongoexport command has the required connection info, such as host, port, db.
When I run the same mongoexport command in the same container using docker run, it succeeds.
Any idea what goes wrong when I run it using Python?
The problem was related to Linux/Windows compatibility and the way the mongoexport command line was built.
The script was developed on Windows, where subprocess.Popen accepts the command line as either a single string or a list of strings. On Linux with shell=True, however, it must be a single string: if a list is passed, only the first element reaches the shell as the command, and the remaining elements become arguments to the shell itself.
Changing the argument passed to subprocess.Popen to a single string fixed the problem.
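A minimal sketch of the fix (the mongoexport connection details below are placeholders):

from subprocess import Popen, PIPE

# Placeholder connection info; substitute the real host/port/db.
cmd = ("mongoexport --host mongo-host --port 27017 "
       "--db mydb --collection items --out /tmp/items.json")

# On Linux with shell=True, pass the whole command line as ONE string.
proc = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
    print(err.decode())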
I want to write a script that executes a command on a remote Windows machine and gets the output from that command. The problem is that I want to use only software built into Windows, so unfortunately I can't use, for example, SSH. I found the "wmi" library, and I can log in to the machine and execute a command, but I don't know how to receive the output from that command.
import wmi

c = wmi.WMI("10.0.0.2", user="administrator", password="admin")
process_startup = c.Win32_ProcessStartup.new()
process_id, result = c.Win32_Process.Create(r'cmd /c ping 10.0.0.1 -n 1 > c:\temp\temp.txt')
if result == 0:
    print("Process started successfully: %d" % process_id)
print(result)
I tried redirecting the output to a file, but I can't find any way to get the text file's contents either.
Is there any possible way to get the output, or the text file's contents, using wmi or other Python libraries?
For the application you're describing, you may find RPyC is a good fit. It's a Python remoting server you can connect to and issue commands to; it will do anything that Python can do on the target machine. So you could, for example, use the subprocess module to run a Windows command, capture the output, and return the result.
The safe thing to do is to expose your remote functionality as a service -- basically just a Python script that RPyC runs for you. However, you can also use RPyC classic mode, which is pretty much like running a Python session directly on the remote machine. You should use it only when you are not worried about security, since classic mode can do anything on the remote machine -- but it's useful for prototyping.
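For illustration, a sketch of the classic-mode approach (it assumes an RPyC classic server, e.g. the bundled rpyc_classic.py script, is already running on the remote machine; the address is a placeholder):

import rpyc

# Connect in classic mode; the remote side must be running an RPyC
# classic server.
conn = rpyc.classic.connect("10.0.0.2")

# conn.modules exposes the *remote* machine's modules, so this
# subprocess call runs there and the output comes back over the wire.
output = conn.modules.subprocess.check_output("ping 10.0.0.1 -n 1",
                                              shell=True)
print(output.decode())
conn.close()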
I'm in the strange position of being both the developer of a python utility for our project, and the tester of it.
The app is ready, and now I want to write a couple of black-box tests that connect to the server where it resides (the server itself is the product that we commercialize) and launch the Python application.
The Python app allows minimal command-line scripting (some parameters automatically launch functions that would otherwise require user interaction at the main menu). For the residual user interaction, I usually use bash syntax like this:
./app -f <<< $'parameter1\nparameter2\n\n'
And finally I redirect everything to /dev/null.
If I do manual checks at the command line on the server (where I connect via SSH), everything works smoothly. The app launch lasts 30 seconds, and after 30 seconds I'm correctly returned to the prompt.
Now to the black-box testing part. Here I'm also using Python (the py.test framework), but the test code resides on another machine.
The test runner machine connects to the server under test via the Paramiko library. I've already used this a lot when scripting other functionality of the product, and it works quite well.
But the problem in this case is that the app under test, which I wrote in Python, uses the ncurses library in its normal operation ("import curses"), and this apparently fails when launched from the py.test script:
import paramiko
...
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(myhost, username=myuser, password=mypass, timeout=mytimeout)
client.exec_command("./app -f <<< $'parameter1\nparameter2\n\n' >/dev/null")
Regardless of the redirection to /dev/null, .exec_command() prints this to standard error, with a message about the curses initialization:
...
File "/my/path/to/app", line xxx, in curses_screen
scr = curses.initscr()
...
_curses.error: setupterm: could not find terminal
and finally the py.test script fails because the app execution crashed.
Is there some conflict between curses (used by the app under test) and Paramiko (used by the test script)? As I said, if I connect manually via SSH to the server where the app resides and run the command line manually with the silent redirection to /dev/null, it works as I would expect.
ncurses really would like to do input/output to a terminal. /dev/null is not a terminal, and some terminal I/O mode changes will fail in that case. Occasionally someone connects the I/O to a socket, and ncurses will (usually) work in that situation.
In your environment, besides the lack of a terminal, it is possible that TERM is unset. That will make setupterm fail.
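Two hedged workarounds, assuming the test can tolerate a pseudo-terminal or a fixed TERM value (variable names follow the question's snippet; the \\n escapes keep the backslash-n sequences literal so that bash's $'...' expands them):

# Option 1: request a pseudo-terminal, so the remote session gets a real
# terminal and a TERM value (get_pty is a standard exec_command flag).
client.exec_command(
    "./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null",
    get_pty=True)

# Option 2: set TERM explicitly for the non-interactive session.
client.exec_command(
    "TERM=xterm ./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null")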
Setupterm could not find terminal, in Python program using curses
wrong error from curses.wrapper if curses initialization fails