I'm developing an application in which I interact with docker containers.
I want to execute this command in a "docker exec name_of_container command" fashion:
command = "/usr/bin/balance -b " + ip_address + " 5001 " + servers_list
The idea is to echo that command and append it to /etc/supervisor/conf.d/supervisord.conf inside the container.
I tried as follows:
p = subprocess.Popen(['docker', 'exec', 'supervisor', 'echo', 'command'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
but it does not work.
This is the error:
exec: "'/usr/bin/balance -b 195.154.12.1 5001 192.186.13.1' >> /etc/supervisor/conf.d/supervisord.conf": stat '/usr/bin/balance -b 195.154.12.1 5001 192.186.13.1' >> /etc/supervisor/conf.d/supervisord.conf: no such file or directory
I made a script that connects to an SSH server using paramiko; some bash commands stored inside a bash script then copy and collect some data files, and after that I use another command on the command line to copy the data from the SSH (SVN) server locally. It all works when done by hand, but when I run the py script it says permission denied.
The error I'm receiving when using the script is a "permission denied".
The error is not about the bash script's "permission denied", because I added an echo to verify that the commands inside the bash script run.
The py script:
import paramiko
import os
hostname = "LOGIN AND CONECTION WORKS!"
username = ""
password = ""
# initialize the SSH client
client = paramiko.SSHClient()
# add to known hosts
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(hostname=hostname, username=username, password=password)
    print("Connection was established!")
except:
    print("[!] Cannot connect to the SSH Server")
    exit()
# read the BASH script content from the file
bash_script = open("Collect-Usage-SVN.sh").read()
# execute the BASH script
stdin, stdout, stderr = client.exec_command(bash_script)
# read the standard output and print it
print(stdout.read().decode())
# print errors if there are any
err = stderr.read().decode()
if err:
    print(err)
# execute cmd command to copy files locally
os.chdir(r'D:\GIT-files\Automate-Stats\SVN_sample_files\svnrawdatas')  # raw string so the backslashes are kept literally
os.system("start cmd /K scp root@atpcnd6c:/data/audit/2022-07-08-* .")
client.close()
This is the bash script (Collect-Usage-SVN.sh):
#!/usr/bin/env python
echo "Bash script its running"
CurrentDate=$(date +"%F")
RawDataFolder="/data/audit"
svnLOGFILE="/data/audit/log-svn-usage-data-collection.log"
#echo "Timestamp when starting the work $(date +"%D %T")" >> $svnLOGFILE
echo "Timestamp when starting the work $(date +"%F %T")" >> $svnLOGFILE
# Collect raw data
echo "Generating raw SVN usage data" >> $svnLOGFILE
cp -v /data/conf/mod_authrewrite.map $RawDataFolder/$CurrentDate-svnRawData-mod_authrewrite.map.txt >> $svnLOGFILE;
cp -v /data/conf/svn_authorization.conf $RawDataFolder/$CurrentDate-svnRawData-authorization.conf.txt >> $svnLOGFILE;
cut -d: -f1 /data/conf/localauthfile.htpasswd > $RawDataFolder/$CurrentDate-svnRawData-localauthfile.htpasswd.txt
cd /data/svn; ls -ltr /data/svn | du -h --max-depth=1 > /data/audit/2022-05-06-svnRawData-repositoriesSize.csv;
for repo in /data/svn/*; do echo $repo; svnlook date $repo; done > $RawDataFolder/$CurrentDate-svnRawData-repositoriesLastChangeDate.csv;
echo "Finished generating raw data" >> $svnLOGFILE
echo "Timestamp when work is finished $(date +"%D %T")" >> $svnLOGFILE
echo "Happy data analysis !" >> $svnLOGFILE
echo "***********************************************************************************" >> $svnLOGFILE
echo "/n" >> svnLOGFILE
On the first line I also tried: #!/usr/bin/env
This is how I run the script on the server, directly from the bash shell:
Run the SVN script like this:
/data/audit/Collect-Usage-SVN.sh > /dev/null 2>&1
The command below is used in a CMD terminal to copy the files from the server to the local disk.
Copy SVN and GIT raw files from the SVN server to local Windows:
scp root@user:/data/audit/2022-07-08-* .
(see example in Copy_from_SVN_server.jpeg)
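As a minimal sketch, assuming the connection opened in the py script above is still available, both of those manual steps could also be driven from the same paramiko client instead of a separate CMD window; the paths come from the snippets above and the date prefix is hard-coded here purely as an illustration:

# run the script that already lives on the server, explicitly through bash,
# then pull the generated files over SFTP instead of scp
stdin, stdout, stderr = client.exec_command("bash /data/audit/Collect-Usage-SVN.sh")
print(stdout.read().decode())

local_dir = r'D:\GIT-files\Automate-Stats\SVN_sample_files\svnrawdatas'
sftp = client.open_sftp()
for name in sftp.listdir("/data/audit"):
    if name.startswith("2022-07-08-"):   # illustrative date prefix
        sftp.get("/data/audit/" + name, os.path.join(local_dir, name))
sftp.close()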
Say I have a python file
example.py:
import os
containerId = "XXX"
command = "docker exec -ti " + containerId + "sh"
os.system(command)
When I execute this file using "python example.py", I can enter the docker container, but I want to execute some other commands inside the container as well.
I tried this:
import os
containerId = "XXX"
command = "docker exec -ti " + containerId + "sh"
os.system(command)
os.system("ps")
but ps is only executed outside docker, after I exit the container; it is not executed inside the container.
So my question is: how can I execute commands inside a docker container using Python?
By the way, I am using python2.7. Thanks a lot.
If the commands you would like to execute can be defined in advance easily, then you can attach them to a docker run command like this:
docker run --rm ubuntu:18.04 /bin/sh -c "ps"
Now, if you already have a running container, e.g.
docker run -it --rm ubuntu:18.04 /bin/bash
Then you can do the same thing with docker exec:
docker exec ${CONTAINER_ID} /bin/sh -c "ps"
Now, in python this would probably look something like this:
import os
containerId = "XXX"
in_docker_command = "ps"
command = 'docker exec ' + containerId + ' /bin/sh -c "' + in_docker_command + '"'
os.system(command)
This solution is useful if you do not want to install an external dependency such as docker-py, as suggested by @Szczad.
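For completeness, a rough sketch of what the docker-py route could look like, assuming the docker Python package is installed and the container is already running (the container id and the command are placeholders):

import docker

client = docker.from_env()                     # connect to the local docker daemon
container = client.containers.get("XXX")       # placeholder container id or name
exit_code, output = container.exec_run("ps")   # run the command inside the container
print(output.decode())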
I have a python file "run.py" like below on my remote server.
import subprocess
subprocess.Popen(["nohup", "python", "/home/admin/Packet/application.py", "&"])
I want to run that file from my local computer over SSH. I'm trying it like below; however, my local terminal gets stuck there. It seems it isn't being run in the background.
ssh -n -f -i /Users/aws/aws.pem admin@hello_world.com 'python /home/admin/run.py'
After running that command, my terminal got stuck.
The following is an example I'm using; you can try something like this, customizing the ssh_options:
import subprocess
ssh_options = '-o ConnectTimeout=10 -o PasswordAuthentication=no -o PreferredAuthentications=publickey -o StrictHostKeyChecking=no'
server_name = 'remote_server.domain'
cmd = 'ssh ' + ssh_options + ' ' + server_name + ' "/usr/bin/nohup /usr/bin/python /home/admin/run.py 2>&1 &"'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Later you can redirect the output to a flat file, changing:
2>&1 &
for:
>> /path/to/log_file.txt 2>&1 &
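Put together, the command string with the log redirection would look roughly like this (the log path is just a placeholder):

cmd = 'ssh ' + ssh_options + ' ' + server_name + ' "/usr/bin/nohup /usr/bin/python /home/admin/run.py >> /path/to/log_file.txt 2>&1 &"'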
I'm using the Python subprocess module to call the "iperf" command. Then I parse the output and get the source port of the iperf client, e.g. 4321, but when I monitor the network, 4321 is missing and I can only see UDP ports 12851 and 0. It is strange that when I call the iperf command directly from the Ubuntu terminal, I can see the source port that iperf reports (4321) on the network.
Can anybody explain why this change of port is happening, and how I can make subprocess send the data on the original port that iperf reports?
This is how I call iperf and obtain the source port:
import subprocess, sys, os
cmd = "iperf -c %s -p %s -u -b %sm -t 10 -l 1500" %(self.ip,self.port,self.bw)
print cmd
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
(output, err) = p.communicate()
o_list = output.split(']')
o_list = o_list[1].split(' ')
for i in range(len(o_list)):
    if o_list[i] == "port":
        self.my_port = int(o_list[i+1])
        break
    #endIf
And I use the same command in a terminal and get a different output:
iperf -c 10.1.1.2 -p 5001 -u -b 10m -t 10 -l 1500
I'm doing a project in the Software-Defined Networking area and using POX as the network controller, so I can easily monitor the desired packets (here UDP packets) and their source and destination ports. This is the code that I added to forwarding.l2_learning to monitor UDP ports:
if msg.match.dl_type == 0x0800:
    if msg.match.nw_proto == 17:
        log.warning("FOUND UDP" + str(msg.match.tp_src))
Thank you in advance
I can't seem to get Fabric to play nice with backgrounding a process that I've used nohup on... It should be possible, given various pieces of information, including here and here.
from fabric.api import env, run, settings  # Fabric 1.x

def test():
    h = 'xxxxx.compute-1.amazonaws.com'
    ports = [16646, 9090, 6666]
    with settings(host_string=h):
        tun_s = "ssh -o StrictHostKeyChecking=no -i ~/.ssh/kp.pem %s@%s " % (env.user, h)
        for port in ports:
            p_forward = "-L %d:localhost:%d" % (port, port)
            tun_s = "%s %s" % (tun_s, p_forward)
        tun_s = "%s -N" % tun_s
        # create the tunnel...
        print "creating tunnel %s" % tun_s
        run("nohup '%s' >& /dev/null < /dev/null &" % tun_s)
        print "fin"
Abbreviated output:
ubuntu#domU-xxx:~/deploy$ fab test
executing on tunnel ssh -o StrictHostKeyChecking=no -i ~/.ssh/kp.pem ubuntu@xxx -L 16646:localhost:16646 -L 9090:localhost:9090 -L 6666:localhost:6666 -N
[xxx.compute-1.amazonaws.com] run: nohup 'ssh -o StrictHostKeyChecking=no -i ~/.ssh/kp.pem ubuntu@xxx.compute-1.amazonaws.com -L 16646:localhost:16646 -L 9090:localhost:9090 -L 6666:localhost:6666 -N' >& /dev/null < /dev/null &
fin
Done.
Disconnecting from xxxx
I know there is no problem with the tunnel command per se, because if I strip away the nohup stuff it works fine (but obviously Fabric hangs). I'm pretty sure that it's not properly getting detached, and when the run function returns, the tunnel process dies immediately.
But why?
This also happens with a python command in another part of my code.
So, after much wrangling, it seems that this is not possible for whatever reason with my setup (default Ubuntu installs on EC2 instances). I have no idea why, as it seems possible according to various sources.
I fixed my particular problem by using Paramiko in place of Fabric, for calls that need to be left running in the background. The following achieves this:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
privkey = paramiko.RSAKey.from_private_key_file('xxx.pem')
ssh.connect('xxx.compute-1.amazonaws.com', username='ubuntu', pkey=privkey)
stdin, stdout, stderr = ssh.exec_command("nohup ssh -f -o StrictHostKeyChecking=no -i ~/.ssh/xxx.pem ubuntu@xxx.compute-1.amazonaws.com -L 16646:localhost:16646 -L 9090:localhost:9090 -L 6666:localhost:6666 -N >& /dev/null < /dev/null &")
ssh.close()