I have written a Python 3 script to test an API.
The API can only be accessed via a server, so I have a bash script that executes wget on that server via ssh and moves the result to my local machine for analysis. This script works fine on its own.
Now I want to call this bash script from Python a few times; however, the ssh commands in the bash script seem to break when I use subprocess.call().
This line should save the API response to a chosen location on my server so that I can copy it to my computer later:
ssh USER@SERVER.com "wget 'https://API.com/?${options}' -O '${file_path}'"
but instead I get the error bash: -O: command not found, and the response is saved to a default file name generated by wget:
*Using options: 'filter[id]=1234'
*Sending request to API
bash: -O: command not found
--2021-02-19 12:02:52-- https://API.com/?filter[id]=1234
...
Saving to: ‘index.html?filter[id]=1234’
0K .......... .......... ........ 100% 9.48M=0.003s
2021-02-19 12:02:52 (9.48 MB/s) - ‘index.html?filter[id]=1234.’ saved [29477/29477]
So it seems to me that the command being executed via ssh was somehow split into multiple commands?
The weird thing is that when I use os.system() to execute the script (or call it directly from the terminal), it works flawlessly. Here is the Python code that calls the bash script:
# Failing subprocess.call line
subprocess.call(["./get_api.sh", save_file_name, f"'filter[id]={id}'"])
# Succeeding os.system line
system(f"./get_api.sh {save_file_name} 'filter[id]={id}'")
I am wondering if anyone can tell me what might be going on here?
(I edited the included code quite a bit to remove sensitive information; also, this is my first Stack Overflow question, so I hope it contains enough information/context.)
The single quotes you used in system aren't part of the query; they are just part of the shell command that protects filter[id]={id} from shell expansion. They should be omitted from the use of subprocess.call, which doesn't use a shell.
subprocess.call(["./get_api.sh", save_file_name, f"filter[id]={id}"])
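To make the difference concrete, here is a minimal sketch (the file name and id value are placeholders) of what each variant actually hands to the script:

import subprocess, os

# With a list and no shell, each element is passed verbatim as one argument;
# the literal single quotes become part of $2 inside the script:
subprocess.call(["./get_api.sh", "out.json", "'filter[id]=1234'"])
# -> the script sees $2 == 'filter[id]=1234' (quotes included), and those
#    stray quotes later break the quoting of the remote wget command line.

# os.system hands the whole string to /bin/sh, which strips the quotes
# during word splitting before the script ever runs:
os.system("./get_api.sh out.json 'filter[id]=1234'")
# -> the script sees $2 == filter[id]=1234 (no quotes).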
I'm at a loss for what to think here. There must be something I'm missing, since I'm able to execute this perfectly fine in another, similar context, but not here. I am trying to run the following console command via Robot Framework for the sake of testing my system's reaction to a network outage: ssh it@128.0.0.1 sudo ifconfig ens192 down (the user and host IP are modified for posting here.)
To accomplish this, I'm using the following line of code:
Run Process    'ssh it@128.0.0.1 sudo ifconfig ens192 down'    shell=True    stdout=stdout    stderr=stderr
Now, I've executed this myself in the console and saw that the result was an inability to ping the IP, which is what I want. So I know for certain that the command itself isn't a problem. What is a problem is the fact that I get the following result: /bin/sh: ssh it@128.0.0.1 sudo ifconfig ens192 down: command not found
That's fine, though, because I believe this is a simple issue of the ssh instance not having the it user's $PATH. For reasons described below, I'm going to guess its $PATH variable is empty. Thus, I modified the keyword call as such:
Run Process    'ssh it@128.0.0.1 PATH\=/usr/sbin sudo ifconfig ens192 down'    shell=True    stdout=stdout    stderr=stderr
The result of this command is as follows: /bin/sh: ssh it@128.0.0.1 PATH=/usr/sbin sudo ifconfig ens192 down: No such file or directory, which I believe refers to the items in the PATH assignment. I've tried several different things to determine what the problem could be, such as why it would be unable to find these files when, executed manually in the console, these directories are indeed present.
Given that I have less than a year's worth of experience with Robot Framework, I'm certain that there's just something I'm not understanding about the Process library. Could I have an explanation as to what I am missing for this execution and, if possible, an explanation as to how I should be executing an ssh command via Robot Framework?
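For what it's worth, Robot Framework's Process library is built on Python's subprocess, and the literal single quotes may be the whole story: by the time /bin/sh parses the string, the quotes make it one single command name. A minimal sketch reproducing this in plain Python (host and user are the question's placeholders):

import subprocess

# The surrounding quotes are ordinary characters here, so the shell treats the
# entire string as one quoted command name and fails to find it:
subprocess.call("'ssh it@128.0.0.1 sudo ifconfig ens192 down'", shell=True)
# -> /bin/sh: ssh it@128.0.0.1 sudo ifconfig ens192 down: command not found

# Without the extra quotes, the shell splits the command into words normally:
subprocess.call("ssh it@128.0.0.1 sudo ifconfig ens192 down", shell=True)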
A few extra things worth pointing out:
The it user is brand new, having been created via useradd it and copying the home directory contents into its new space.
I have already added an entry into the device's sudoers file to allow the it user to execute ifconfig without requiring password entry.
On that note, I've sent the executing user's ssh keys over to the device so that it does not require a password to log in as that user.
I have attempted to execute several basic commands. I have intermittently gotten ls to work but have been unable to get a greater sense of what $PATH the Robot Framework ssh instance has because echo doesn't seem to work.
This is a long bash script (400+ lines) that is originally invoked from a Django app like so -
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command at approximately the same location.
SSHing to the machine that runs the Django app and running sudo ./bash_script.sh asks for a password and then runs all the way through.
I can't see the message it presents when it hangs in the log file; I couldn't make it redirect there. I assume it's a sudo password request.
Tried -
sudo -v in the script - didn't help.
ssh to the machine and manually extend the sudo timeout in /etc/sudoers - didn't help; I think because the Django app is already running and uses the previous timeout.
splitting the script in two, and running one part in a separate thread, like so -
import os
from subprocess import Popen
from threading import Thread

def basher(command, log_path):
    # open the log for writing (opening it read-only would fail on write)
    with open(log_path, 'w') as log:
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log')  # I know it's discouraged, not sure if maybe it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought to edit /etc/sudoers from inside the Python code and then re-login via su - user. There are snippets around showing how to pass the password using a pty, but I don't understand the mechanics of it and could not get it to work.
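For reference, a minimal sketch of the pty approach using the pexpect library (the prompt pattern and password handling are assumptions; don't hardcode a real password):

import pexpect

# spawn the script under a pseudo-terminal so sudo will prompt normally
child = pexpect.spawn('sudo ./bash_script.sh')
child.expect('password for .*:')         # wait for sudo's password prompt
child.sendline('SUDO_PASSWORD_HERE')     # hypothetical; load from a safe source
child.expect(pexpect.EOF, timeout=None)  # block until the script finishes
print(child.before.decode())             # everything the script printed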
I also noted that ps aux | grep bash_script.sh shows that the script is being run twice: once as
/bin/bash bash_script.sh
and once as
sh -c bash_script.sh
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
My guess is that the Django app has different and more limited permissions than the script itself does, and the script is inheriting those restrictions because it is being executed by the app.
You need to find out what permissions the script has when you run it just from bash, and what it has when you run it via django, and then figure out what the difference is.
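One hedged way to compare the two contexts: log the identity, environment, and limits from inside the script's process both when Django launches it and when you launch it from a terminal, then diff the output (the probes here are standard Linux commands):

import subprocess

# run each probe through a shell (ulimit is a shell builtin) and print it
for cmd in ('id', 'env', 'ulimit -a'):
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    print(f'$ {cmd}\n{out.stdout}')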
The only solution I found was this:
cat MyScript.py | ssh username@ip_addr python -
The problem with this is that it won't show the output of that script live; it waits until the program is finished and then displays the output.
I'm using scapy in this script to sniff packets, and I have to run it on that remote server (and no, I can't copy it there or write it there).
So what is the solution? How can I view the output of a script live in my command line?
I'm using Windows 10.
Important note: I also think ssh is doing some sort of buffering and sends the output only once the amount of printed data exceeds the buffer, since when the output is very large part of it does suddenly show up. I want the output of that function to reach my computer as soon as possible, not after it fills a buffer or something.
You should first send the file to your remote machine using
scp MyScript.py username@ip_addr:/path/to/script
then SSH to your remote machine using
ssh username@ip_addr
And finally, you run your script normally:
python path/to/MyScript.py
EDIT
To execute your script directly without copying it to the remote machine, use this command:
ssh user@ip_addr 'python -s' < script.py
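On the buffering concern from the question: the delay is usually Python block-buffering its own stdout when it is not attached to a terminal, rather than ssh itself. A variant worth trying (python -u is a standard CPython flag; untested on this particular setup):

ssh user@ip_addr 'python -u -' < MyScript.py

With -u, Python's output is unbuffered, so each print should come back over the ssh channel as soon as it happens.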
I have a strange scenario going on at the moment. When I issue an svn info TXN REPO command on our build server (separate from the SVN server), it works as expected and displays the relevant information to the console.
However when I script it using Python, and specifically, Popen from the subprocess module, it prints a message svn: E230001: Server SSL certificate untrusted to the standard error (console).
What I've tried:
Using --non-interactive and --trust-server-cert flags within the script call.
Passing a username/password within the svn info call via the script.
The above two don't seem to take effect, and the same error as above is spat out. However, manually running the same command from the command prompt succeeds with no problems. I assume it might be something to do with Python opening up a new session to the SVN server, and that session isn't a "trusted" connection? But I can't be sure.
Our SVN server is on a Windows machine and is version 1.8.0.
Our build server is a Windows machine running Jenkins version 2.84. Jenkins executes a batch script which kicks off the Python script, performing the above task.
Command: svn_session = Popen("svn info --non-interactive --trust-server-cert --no-auth-cache -r %s %s" % (TXN, REPOS), stdout=PIPE, stderr=PIPE, shell=True)
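As an aside, the same call can be written with an argument list and no shell, which rules out shell quoting as a variable while debugging; a sketch with placeholder values, not a fix for the certificate error itself:

from subprocess import Popen, PIPE

TXN, REPOS = "1234-1", "D:/Repositories/MyRepo"  # placeholders for the real values

svn_session = Popen(
    ["svn", "info", "--non-interactive", "--trust-server-cert",
     "--no-auth-cache", "-r", TXN, REPOS],
    stdout=PIPE, stderr=PIPE)
out, err = svn_session.communicate()
print(err.decode())  # surfaces the E230001 message in the Jenkins log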
Edit:
When I copy and paste the Python line from the script into the interactive Python shell on the same server, the command works as expected also. So the issue is how the script is executing the command, rather than the command itself or how Python runs that command.
Has anyone come across this before?
In case anyone is looking at this in the future: Panda Pajama has given a detailed answer to this:
SVN command line in jenkins fails due to server certificate mismatch
I have set up ssh-keygen etc. so I can just ssh remotehost without using a password; the shell is bash.
I am using a for loop to ssh to multiple nodes (remote hosts) and wish to execute a script, but it doesn't seem to work:
for i in {1..10};
do
ssh -f node$i "python script.py $i"
done
The terminal hangs and nothing happens.
Also, I can manually ssh in and use Python; the PYTHONPATH etc. are configured for the nodes.
The nodes run csh, so I used .cshrc with exec /bin/bash, which at least gives me a bash shell when I log in manually, so the problem doesn't seem to be there.
Instead of wrapping the python script in a shell script, you should probably have a Python script that connects to all the remote hosts via ssh and executes stuff.
Paramiko is a very good framework for this kind of use case. It will be much easier to maintain your script this way in the long run.
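A minimal sketch with Paramiko (the host names and script path come from the question; the username and key-based auth are assumptions, and error handling is omitted):

import paramiko

for i in range(1, 11):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(f'node{i}', username='user')  # key-based auth, as set up with ssh-keygen
    stdin, stdout, stderr = client.exec_command(f'python script.py {i}')
    print(stdout.read().decode())  # blocks until the remote command exits
    client.close()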
Use -o BatchMode=yes, and maybe you need to force pseudo-tty allocation (-t) or pass -n to prevent ssh from reading from stdin.
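Applied to the loop from the question, that might look like this (an untested sketch; add -t only if the remote command actually needs a tty):

for i in {1..10}; do
    ssh -n -o BatchMode=yes node$i "python script.py $i"
done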