Popen can't find the existing tool - python

I'm trying to run the following:
def conn(ad_group):
    result = Popen(["sudo -S /opt/quest/bin/vastool", "-u host/ attrs 'AD_GROUP_NAME' | grep member"], stdout=PIPE)
    return result.stdout
on a RedHat machine in a python script, but I'm getting FileNotFoundError: [Errno 2] No such file or directory: 'sudo -S /opt/quest/bin/vastool'
I can run the command (sudo -S /opt/quest/bin/vastool -u host/ attrs 'AD_GROUP_NAME' | grep member) at the command line without a problem.
I'm sure I've messed up something in the function, but I need another set of eyes.
Thank you
Thank you

You need to make the entire command a single string, and use the shell=True option because you're using a shell pipeline.
result = Popen("sudo -S /opt/quest/bin/vastool -u host/ attrs 'AD_GROUP_NAME' | grep member", stdout=PIPE, shell=True)


How to override ansible.cfg variable from a subprocess.call from a python script?

When I run the following command from the command line
ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook -i my_inventory.yaml myplaybook.yaml --tag my_tag
then everything works fine. However, if I try to do so from a python script using subprocess.call, it fails with "No such file or directory: 'ANSIBLE_DISPLAY_OK_HOSTS=true'".
What is the difference, and how can I fix it?
From within the python script I tried calling it by following ways:
1)
command = f"ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook -i {inventory_path} {absolute_playbook_path} --tag {ansible_tag}" subprocess.run(command)
2)
command = ["ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook", "-i", inventory_path, absolute_playbook_path, "--tag", ansible_tag] subprocess.run(command)
with no success.
You are trying to use shell syntax, but you're not executing your command with a shell. Use the env keyword of subprocess.run to provide environment variables to your command:
env = {"ANSIBLE_DISPLAY_OK_HOSTS": "true"}
command = [
"ansible-playbook",
"-i", inventory_path,
absolute_playbook_path,
"--tag", ansible_tag
]
subprocess.run(command, env=env)
You could make version 1 of your command work by specifying shell=True, like this:
command = f"ANSIBLE_DISPLAY_OK_HOSTS=true ansible-playbook -i {inventory_path} {absolute_playbook_path} --tag {ansible_tag}"
subprocess.run(command, shell=True)
But there's really no reason to involve a shell in this invocation.
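If the script should also notice when the playbook fails, here is a small sketch of the list-based call with check=True, which raises CalledProcessError on a non-zero ansible-playbook exit status (inventory_path, absolute_playbook_path and ansible_tag are the variables from the question):
import os
import subprocess

env = {**os.environ, "ANSIBLE_DISPLAY_OK_HOSTS": "true"}
try:
    subprocess.run(
        ["ansible-playbook", "-i", inventory_path,
         absolute_playbook_path, "--tag", ansible_tag],
        env=env,
        check=True)  # raise if ansible-playbook exits non-zero
except subprocess.CalledProcessError as err:
    print(f"ansible-playbook failed with exit code {err.returncode}")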

Editing a file with root privileges from python

I am trying to make a python program write to a root-protected file. This is using the python notify module, and I am trying to get the program to write the registered endpoint.
On the console these both work and write sometext to the file /root/.config/notify-run:
sudo sh -c 'echo sometext >> /root/.config/notify-run'
echo sometext | sudo tee /root/.config/notify-run
Now in python I tried:
link = 'the endpoint'
command = ['sudo sh -c "echo', link, ' >>/root/.config/notify-run"']
subprocess.call(command, shell=True)
This returns:
syntax error unterminated quoted string
And trying:
link = 'the endpoint'
command = ['echo', link, '| sudo tee -a /root/.config/notify-run']
subprocess.call(command, shell=True)
Returns no error but does not write the endpoint to the file.
Does anyone know how to fix this, using this or other code that does what I am trying to do here?
Use a string command rather than an array. This works for me:
link = 'the endpoint'
command = 'echo ' + link + ' | sudo tee -a /root/.config/notify-run'
subprocess.call(command, shell=True)
However, I advise you to edit the notify-run file directly from your Python script and run the whole Python script with root privileges so you don't have to run sudo, unless your script does much more than writing to that file.
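If the endpoint text could ever contain shell metacharacters, a quoting-free variant is to pass it to sudo tee on stdin instead of building a shell string. A minimal sketch, using the file path from the question:
import subprocess

link = 'the endpoint'
subprocess.run(
    ["sudo", "tee", "-a", "/root/.config/notify-run"],
    input=link + "\n",
    text=True,
    stdout=subprocess.DEVNULL,  # tee echoes what it writes; discard it
    check=True)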
Typing sudo in your command would show the password prompt, and it would probably fail there.
The above answer asks you to run your script as root directly; however, if that's not possible, you can move the command sudo tee -a /root/.config/notify-run into a separate script.
Give that script sudo access in the /etc/sudoers file and execute it.

How to run complicated query in subprocess.run in python

I would like to run the following command using python subprocess.
docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep 'cex4queue": []'
If I run it using subprocess.call(), it works, but I am not able to check the return value:
s1="docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep \'cex4queue\": []\'"
p1 = subprocess.call(s1,shell=True)
The same command with subprocess.run is not working.
I want to check whether that string is present or not. How can I check?
I would recommend the use of subprocess.Popen. Because the command uses a pipe and $(pwd), it has to go through a shell, so keep it as a single string with shell=True (splitting it into words would break the quoted grep pattern and pass the | to docker as an argument):
import subprocess as sb

cmd = "docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep 'cex4queue\": []'"
process = sb.Popen(cmd, shell=True, stdout=sb.PIPE, stderr=sb.PIPE)
output, errors = process.communicate()
print('The output is: {}\n\nThe errors were: {}'.format(output, errors))
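Since grep only exits with status 0 when it finds a match, and a shell pipeline reports the last command's status, the pipeline's return code answers the "is the string present?" question directly. A small sketch building on the snippet above:
if process.returncode == 0:  # communicate() has already waited for the pipeline
    print("cex4queue entry found")
else:
    print("cex4queue entry not found")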

Docker run inside a container

I have a Python method that uses docker, and I am trying to understand it. The method is here:
def exec(self, container_target, command, additional_options=""):
    """ execute docker exec command and return the stdout or None on error """
    cmd = """docker exec -i "%s" sh -c '%s' %s""" % (
        container_target, command, additional_options)
    if self.verbose:
        print(cmd)
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        return cp.stdout.decode("utf-8").strip()
    except Exception as e:
        print(f"Docker exec failed command {e}")
        return None
While debugging, the cmd value comes out as:
'docker exec -i "craft_p2-2" sh -c \'cd craft && composer show --name-only | grep nerds-and-company/schematic | wc -l\' '
My understanding is that the code uses the shell of the container named craft_p2-2, enters a folder named craft, and then checks whether the Schematic plugin is installed. Is that correct?
This might be obvious to some, but I don't have a wealth of container knowledge and need to be sure of what's going on.

Python and nfqueue error

I'm trying a python script, downloaded from a blog, that sends fake echo-replies in response to pings from a machine.
The problem is that when I run the script, it gives me this error:
File "/usr/lib/python2.7/dist-packages/nfqueue.py", line 96, in
create_queue def create_queue(self,*args): return
_nfqueue.queue_create_queue(self, *args) RuntimeError: error during nfq_create_queue()
This is the part where it binds the queue:
import nfqueue
q = None
q = nfqueue.queue()
q.open()
q.bind(socket.AF_INET)
q.set_callback(cb)
q.create_queue(0)
try:
    q.try_run()
except KeyboardInterrupt:
    print "Exiting..."
q.unbind(socket.AF_INET)
q.close()
The error is on q.create_queue(0), but I don't know what to do!
The error message may come from an already-running instance of your python script.
Assuming your script file is pyscriptname.py, run the following command to check if another instance of your script is already running:
ps -aux | grep "pyscriptname.py" | grep -v grep | wc -l
If the returned value is greater than 0, you can solve the issue by killing the running instance:
kill -9 `ps aux | grep "pyscriptname.py" | grep -v grep | awk '{print $2}'`
Then you can run your python script again:
python pyscriptname.py
nfqueue needs root privileges, so run the script as root or run it under sudo.
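A minimal sketch of a guard you could add near the top of the script so it fails early with a clear message instead of dying inside nfq_create_queue (the file name pyscriptname.py is the placeholder used above):
import os
import sys

if os.geteuid() != 0:
    sys.exit("This script needs root privileges; re-run it as: sudo python pyscriptname.py")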
