Pytest/Fabric not using SSH agent when debugging in PyCharm - python

I am writing some infrastructure tests using pytest and fabric. Generally this is working. I run a test from the command line that executes a fabric task on a remote server and asserts something about the result of that task. The task is executed using my running SSH agent.
However, when I try to debug my tests in PyCharm, the Fabric tasks fail with the following exception:
Fatal error: Needed to prompt for a connection or sudo password, but input would be ambiguous in parallel mode
This difference in behavior leads me to believe something isn't configured properly in my debug configuration. I have made sure that the environment variables in the debug configuration include the SSH agent PID.
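For reference, here is a minimal sanity check that can be dropped into the test session (the assertions are my own guess at what to verify; SSH_AUTH_SOCK and SSH_AGENT_PID are simply the variables that eval $(ssh-agent) exports):

import os

def test_ssh_agent_is_visible():
    # A PyCharm run/debug configuration does not automatically inherit these
    # from the terminal session where ssh-agent was started, so Fabric may
    # fall back to prompting for a password, which triggers the
    # "ambiguous in parallel mode" error.
    assert os.environ.get("SSH_AUTH_SOCK"), "SSH_AUTH_SOCK is not set"
    assert os.environ.get("SSH_AGENT_PID"), "SSH_AGENT_PID is not set"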

Related

SVN SSL Certificate Untrusted via Python Script

I have a strange scenario going on at the moment. When I issue an svn info TXN REPO command on our build server (separate from the SVN server), it works as expected and displays the relevant information to the console.
However when I script it using Python, and specifically, Popen from the subprocess module, it prints a message svn: E230001: Server SSL certificate untrusted to the standard error (console).
What I've tried:
Using --non-interactive and --trust-server-cert flags within the script call.
Passing a username/password within the svn info call via the script.
The above two don't seem to take effect, and the same error as above is spat out. However, manually running the same command from the command prompt succeeds with no problems. I assume it might be something to do with Python opening a new session to the SVN server, and that session not being a "trusted" connection, but I can't be sure.
Our SVN server is on a Windows machine and runs version 1.8.0.
Our build server is a Windows machine running Jenkins version 2.84. Jenkins executes a batch script which kicks off the Python script that performs the above task.
Command: svn_session = Popen("svn info --non-interactive --trust-server-cert --no-auth-cache -r %s %s" % (TXN, REPOS), stdout=PIPE, stderr=PIPE, shell=True)
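For completeness, here is the same call as a self-contained snippet (imports added; the TXN and REPOS values below are placeholders, in the real script they come from the Jenkins setup):

from subprocess import Popen, PIPE

# Placeholders; in the real script these are supplied by the Jenkins job.
TXN, REPOS = "123", "https://svn.example.com/repo"

svn_session = Popen(
    "svn info --non-interactive --trust-server-cert --no-auth-cache -r %s %s" % (TXN, REPOS),
    stdout=PIPE, stderr=PIPE, shell=True,
)
out, err = svn_session.communicate()  # err is where the E230001 certificate message shows up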
Edit: When I copy and paste the Python line from the script into the interactive Python shell on the same server, the command also works as expected. So the issue lies in how the script is executing the command, rather than in the command itself or in how Python runs it.
Has anyone come across this before?
In case anyone is looking at this in the future: Panda Pajama has given a detailed answer to this here:
SVN command line in jenkins fails due to server certificate mismatch

WLST disconnect command issue

I ran wlst.cmd on my local system after starting my WebLogic Admin instance. But as WLST is stateful, I am getting connected to my IT environment, which is my Integration Testing environment (a UNIX machine for my project). I tried disconnect() to go to offline mode, however it failed.
wls:/beaProjDir/serverConfig> disconnect()
You will need to be connected to a running server to execute this command
Please help me get to offline mode in WLST, as I need to get some work done on my local system.
You can try getting out of the WLST shell with Ctrl+D, which is similar to exiting any other shell.
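If Ctrl+D is not convenient (for example in a wrapped console), the exit() command also leaves the WLST shell; a minimal session sketch, reusing the prompt from the question:

wls:/beaProjDir/serverConfig> exit()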

Blackbox test with python of a ncurses python app via Paramiko

I'm in the strange position of being both the developer of a python utility for our project, and the tester of it.
The app is ready and now I want to write a couple of blackbox tests that connect to the server where it resides (the server itself is the product that we commercialize), and launch the python application.
The Python app allows minimal command-line scripting (some parameters automatically launch functions that would otherwise require user interaction at the main menu). For the remaining user interactions, I usually use bash syntax like this:
./app -f <<< $'parameter1\nparameter2\n\n'
And finally I redirect everything to >/dev/null.
If I do manual checks at the command line on the server (where I connect via SSH), everything works smoothly. The app launch lasts 30 seconds, and after 30 seconds I'm correctly returned to the prompt.
Now to the blackbox testing part. Here I'm also using python (Py.test framework), but the test code resides on another machine.
The test runner machine will connect to the Server under test via Paramiko libraries. I've already used this a lot in scripting other functionalities of the product, and it works quite well.
But the problem in this case is that the app under test, which I wrote in Python, uses the ncurses library in its normal behaviour (import curses), and apparently something goes wrong when trying to launch it from the py.test script:
import paramiko
...
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(myhost, username=myuser, password=mypass, timeout=mytimeout)
client.exec_command("./app -f <<< $'parameter1\nparameter2\n\n' >/dev/null")
Regardless of the redirection to /dev/null, exec_command() prints the following to standard error, with a message about the curses initialization:
...
File "/my/path/to/app", line xxx, in curses_screen
scr = curses.initscr()
...
_curses.error: setupterm: could not find terminal
and finally the py.test script fails because the app execution crashed.
Is there some conflict between curses (used by the app under test) and Paramiko (used by the test script)? As I said, if I connect manually via SSH to the server where the app resides and launch the command line by hand with the silent redirection to /dev/null, it works as I would expect.
ncurses really would like to do input/output to a terminal. /dev/null is not a terminal, and some terminal I/O mode changes will fail in that case. Occasionally someone connects the I/O to a socket, and ncurses will (usually) work in that situation.
In your environment, besides the lack of a terminal, it is possible that TERM is unset. That will make setupterm fail.
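Building on that, one workaround to try from the test-runner side (a sketch only: it assumes that allocating a pseudo-terminal and exporting a TERM value is enough for setupterm; get_pty is a standard parameter of Paramiko's exec_command):

import paramiko

# Placeholder connection details, mirroring the snippet in the question.
myhost, myuser, mypass, mytimeout = "server.example.com", "tester", "secret", 30

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(myhost, username=myuser, password=mypass, timeout=mytimeout)

# get_pty=True allocates a pseudo-terminal for the command, and the TERM prefix
# gives curses.setupterm() a terminal description to look up. The doubled
# backslashes keep the \n sequences literal so the shell's $'...' expands them.
stdin, stdout, stderr = client.exec_command(
    "TERM=xterm ./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null",
    get_pty=True,
)
exit_status = stdout.channel.recv_exit_status()  # block until the remote app exits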
Setupterm could not find terminal, in Python program using curses
wrong error from curses.wrapper if curses initialization fails

How to remote debug in PyCharm

The issue I'm facing right now:
I deploy Python code on a remote host via SSH
the scripts are passed some arguments and must be run by a specific user
the PyCharm run/debug configuration that I create connects through SSH via a different user (can't connect with the user that actually runs the scripts)
I want to remote debug this code via PyCharm. I managed to do all the configuration; I just get permission errors.
Is there any way I can run/debug the scripts as a specific user (like sudo su - user)?
I've read about specifying some Python Interpreter options in PyCharm's remote/debug configuration, but didn't manage to get a working solution.
If you want an easy and more flexible way to get into the PyCharm debugger, rather than necessarily having a one-click "play" button in PyCharm, you can use the debug server functionality. I've used this in situations where running some Python code isn't as simple as running python ....
See the Remote debug with a Python Debug Server docs for more details, but here's a rough summary of how it works:
Upload and install the remote debugging helper egg on your server (on OS X, these are found under /Applications/PyCharm.app/Contents/debug-eggs).
Set up a remote debug server run configuration: click on the drop-down run configuration menu, select Edit configurations..., hit the + button, and choose Python remote debug.
The details entered here (somewhat confusingly) tell the remote server running the Python script how to connect to your laptop's PyCharm instance.
set Local host name to your laptop's IP address
set port to any free port that you can use on your laptop (e.g. 8888)
Now follow the remaining instructions in that dialog box: copy-paste the import and pydevd.settrace(...) statements into your code, specifically where you want your code to "hit a breakpoint" (see the sketch after these steps). This is basically the PyCharm equivalent of import pdb; pdb.set_trace(). Make sure the changed code is synced to your server.
Hit the bug button (next to play; this starts the PyCharm debug server), and run your Python script just like you'd normally do, under whatever user, environment etc. When the breakpoint is hit, PyCharm should drop into debug mode.
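As a concrete illustration, the pasted lines typically look something like this (the host and port are placeholders that should match the Local host name and port you entered in the run configuration):

import pydevd

# 192.0.2.10 stands in for your laptop's IP address; 8888 is the port from the
# run configuration. Execution pauses here until PyCharm accepts the connection.
pydevd.settrace('192.0.2.10', port=8888, stdoutToServer=True, stderrToServer=True, suspend=True)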
I have this (finally) working with ssh RemoteForward open, like so:
ssh -R 5678:localhost:5678 user@<remotehost>
Then start the script in this SSH session. The Python script must connect to localhost:5678, and of course your local PyCharm debugger must listen on port 5678 (or whatever port you choose).
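With that tunnel in place, the settrace call on the remote host points at the local end of the forward (again just a sketch; the port must match the tunnel):

import pydevd

# The -R 5678:localhost:5678 forward carries this connection back to the
# PyCharm debug server listening on your own machine.
pydevd.settrace('localhost', port=5678, stdoutToServer=True, stderrToServer=True)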

different behavior in Python shell and program

I'm using subprocess.Popen to instantiate an ssh-agent, add a key and push a git repository to a remote. To do this I string them together with &&. The code I'm using is
import subprocess
subprocess.Popen("eval $(ssh-agent) && ssh-add /root/.ssh/test_rsa && git push target HEAD", shell=True)
When I run this as a .py file, I am prompted for the key's password. This seems to work, as I get:
Identity added: /root/.ssh/test_rsa (/root/.ssh/test_rsa).
But when it tries to push the repository to the remote, an error occurs.
ssh: connect to host ***.***.***.*** port 22: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
However, if I simply run the same command in the interactive shell, it works. What causes this difference in behaviour, and what can I do to fix this?
The git server was on an AWS instance that was being started earlier in the script. There was a check to make sure it was running, but AWS seems to report an instance as running as soon as boot has begun. This means there is a brief window in which the instance is running but no SSH daemon exists yet. Because the script moved very quickly into trying to push, it fell within that window and the server refused its connection attempt. By the time I tried anything in the interactive shell, the instance had been up long enough that it worked.
In short, AWS reports instances as running before the OS has finished starting its services.
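A sketch of one way to guard against this race (the timeout and polling interval are illustrative, not from the original script): poll port 22 until the SSH daemon actually accepts connections before attempting the push.

import socket
import time

def wait_for_ssh(host, port=22, timeout=300, interval=5):
    # Block until the SSH daemon accepts TCP connections or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=interval)
            sock.close()
            return True
        except OSError:
            time.sleep(interval)
    raise RuntimeError("SSH on %s:%d never became reachable" % (host, port))

# e.g. call wait_for_ssh("***.***.***.***") before running the git push command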
