I'm using Python 2.7 with the latest plumbum package from MacPorts.
In general, plumbum works great, though I'm having a heck of a time getting a sudo'd command to work. I've set up my /etc/sudoers so the commands I want to run don't prompt for a password, so that's fine. I can run the commands manually without issue.
However, when I try the same from python using this:
sudo["/usr/local/some-magic-command here"]
sudo("-u " + sudoUser) # sudo user is userfoo
I receive the following error:
plumbum.commands.processes.ProcessExecutionError: Command line: ['/usr/bin/sudo', '-u userfoo']
Exit code: 1
Stderr: | sudo: unknown user: userfoo
The user does exist, so not exactly sure what the deal is here.
Comments?
There is no "-u userfoo" user; there is probably just "userfoo". Note: no "-u " prefix. Each argument needs to be a separate list element. Try:
from plumbum.cmd import sudo
as_userfoo = sudo["-u", sudo_user]
print(as_userfoo("whoami"))
Related
I want to execute the following command via a python script:
sudo cat </dev/tcp/time.nist.gov/13
I can execute this command via the command line completely fine. However, when I execute it using subprocess, I get an error:
Command ['sudo','cat','</dev/tcp/time.nist.gov/13'] returned non-zero exit status 1
My code is as follows:
import subprocess
subprocess.check_output(['sudo','cat','</dev/tcp/time.nist.gov/13'])
As I mentioned above, executing the command via the command line gives the desired output without any error. I am using the Raspbian Jessie OS. Can someone point me in the right direction?
You don't want to use subprocess for this at all.
What does this command really do? It uses a bash extension to open a network socket, feeds it through cat(1) to reroute it to standard output, and decides to run cat as root. You don't really need the bash extension, or /bin/cat, or root privileges to do any of this in Python; you're looking for the socket library.
Here's an all-Python equivalent:
#!/usr/bin/env python3
import socket
s = socket.create_connection(('time.nist.gov', 13))
try:
print(s.recv(4096))
finally:
s.close()
(Note that all of my experimentation suggests that this connection works but the daytime server responds by closing immediately. For instance, the simpler shell invocation nc time.nist.gov 13 also returns empty string.)
Give this a try. Note that </dev/tcp/... is a bash feature, and shell=True uses /bin/sh (which is dash on Raspbian), so you need to point subprocess at bash explicitly:
import subprocess
com = "sudo cat </dev/tcp/time.nist.gov/13"
p = subprocess.Popen(com, stdout=subprocess.PIPE, shell=True, executable="/bin/bash")
print(p.communicate()[0])
I can use the "nginx -s reload" command to restart nginx from the shell.
But when I use os.system("nginx -s reload"), I get this error:
/usr/local/bin/nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
I have already installed pcre, so I don't understand this error. Is there some magic problem here?
For running such commands from Python scripts, it's better to use the subprocess library.
Try this instead of your code:
import subprocess
subprocess.call('whatever command you want to run in the terminal', shell=True)
Good luck.
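As a minimal, portable sketch of that pattern (echo stands in for "nginx -s reload", which needs nginx installed and root privileges, so you would substitute the real command string):

```python
import subprocess

# Portable stand-in for 'nginx -s reload'; the command string is the
# only thing you would change. call() returns the command's exit status.
ret = subprocess.call("echo reload", shell=True)
print("exit status:", ret)  # 0 on success
```

Checking the return value here is how you detect failures like the missing-library error above, instead of letting them scroll by.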
Hello, I recommend validating the configuration first, before sending the reload, so you avoid headaches:
reinicioNGINX = subprocess.getoutput('nginx -t')
if 'nginx: the configuration file /etc/nginx/nginx.conf syntax is ok' in reinicioNGINX:
    _command_restart
else:
    _command_avoid_restart
Note that subprocess.getoutput takes a command string, not a list, and that nginx -t prints its report to stderr, which getoutput captures as well.
I got a strange behavior in my Python code. It runs fine in my Windows console
For example,
#cmd.exe : python file.py
Content of my file.py file
print("-------------------------- RANDOM STRING HERE! --------------------------------")
email = input()
print("-------------------------- RANDOM STRING HERE! --------------------------------")
name = input()
print("-------------------------- RANDOM STRING HERE! --------------------------------")
address = input()
print("-------------------------- RANDOM STRING HERE! --------------------------------")
print(email+name+address)
This same code doesn't work when I do:
curl ://filepath/file.py | sudo python3
in a console over SSH. I already tried with PuTTY and Git Bash, but I am still getting the same error.
EOFError in the SSH console:
I already tried using sys.stdin, but it doesn't work as expected.
No, really, you can't do it this way. Running
... | sudo python3
feeds the script itself to stdin, so the script cannot read anything else from stdin.
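You can see the exhaustion directly from Python. This sketch pipes a small script into python3 via "-" and shows that, inside it, stdin is already at end-of-file:

```python
import subprocess
import sys

# The child script tries to read stdin again -- but python has already
# consumed the whole pipe as the program text, so read() returns ''.
script = 'import sys\nprint(repr(sys.stdin.read()))'
p = subprocess.run([sys.executable, "-"], input=script,
                   capture_output=True, text=True)
print(p.stdout)  # "''" -- nothing left to read
```

This is exactly why input() raises EOFError in the piped script: the stream it would read from has already been drained.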
But you can do it the other way round without a pipe using a temporary file:
curl ://filepath/file.py -o /tmp/script
sudo python3 /tmp/script
Or using process substitution (in Bash):
python3 <(curl ://filepath/file.py)
Okay, so this question is very project-specific, but it's a problem for me nonetheless.
I have a Python/django website, hosted on localhost from an Ubuntu VM set up by Vagrant. From this website I want to paste in C code and compile it via a series of Python functions. In one of these functions I call make like this:
arg2 = os.path.join(Default_SDK_PATH, "examples/peripheral/blinky")
arg4 = os.path.join(Default_SDK_PATH, "examples/peripheral/blinky/makefile")
args = ' '.join(['make', '-C', arg2, '-f', arg4])
p = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
shell=True
)
output, errors = p.communicate()
p.wait()
I specify arg2 and arg4 more for testing than anything else - It's just to be 100% sure that the correct makefile is used.
### ### ###
OK!
So my problem comes when the subprocess runs. The makefile is called with make, but fails. When I check the build log I can see the error message arm-none-eabi-gcc: error: nano.specs: No such file or directory.
When I call vagrant up for the first time a file named bootstrap.sh is called. I've tried adding new commands to this file
sudo apt-get remove binutils-arm-none-eabi gcc-arm-none-eabi
sudo add-apt-repository ppa:terry.guo/gcc-arm-embedded
sudo apt-get update
sudo apt-get install gcc-arm-none-eabi=4.9.3.2015q1-0trusty13
to uninstall Ubuntu's original GCC and install the latest GCC toolchain. No success there either. I've also tried dumping the whole file structure to a file just to check whether the files in question exist, and they do!
Can anyone point me in the right direction here?
Thanks in advance.
Whoop-di-hoo, I solved it!
I don't exactly know why, but sudo apt-get remove binutils-arm-none-eabi gcc-arm-none-eabi doesn't seem to do anything, so the original GCC files still exist when I try to install the new GCC.
Also, the new GCC is installed in /usr/bin, while the old GCC has its own dedicated folder.
So I edited my Makefile to get arm-none-eabi-gcc-4.9.3 from /usr/bin instead of the old arm-none-eabi-gcc. nano.specs is now included, and life is great!
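A quick way to check which compiler binary a build will actually pick up is shutil.which, which searches PATH the same way the shell does (the binary names below are the ones from this question):

```python
import shutil

# Prints the first match on PATH for each name, or None if that
# toolchain binary isn't installed / isn't on PATH.
for name in ("arm-none-eabi-gcc", "arm-none-eabi-gcc-4.9.3"):
    print(name, "->", shutil.which(name))
```

Running this before and after the apt-get steps would have shown that the old binary was still shadowing the new one.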
Assume I have a file at http://mysite.com/myscript.sh that contains:
#!/bin/bash
echo "Hello $1"
From the command line, I can execute my script (without downloading it) using the following command:
bash <(curl -s http://mysite.com/myscript.sh) World
Now, instead of executing the above command from the command line, I want to execute it from a python script. I tried doing the following:
import os
os.system('bash <(curl -s http://mysite.com/myscript.sh) World')
...but I get the following error:
sh: -c: line 0: syntax error near unexpected token `('
How do I make this execute correctly in python?
Evidently, os.system runs its command through /bin/sh, which on many systems is not bash (or is bash running in a POSIX compatibility mode), so the <(...) process substitution isn't available. You can get around it by either storing the result in a temporary file or using another level of shell. Ugly, but it works.
os.system('bash -c "bash <(curl -s http://mysite.com/myscript.sh) World"')
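A subprocess variant of the same fix names bash explicitly via the executable parameter. To keep the sketch self-contained, printf generates the same one-line script in place of the curl download:

```python
import subprocess

# printf stands in for the curl download; the generated script is
# exactly: echo "Hello $1"
# executable="/bin/bash" makes shell=True use bash, so <(...) works.
cmd = '''bash <(printf 'echo "Hello $1"') World'''
out = subprocess.check_output(cmd, shell=True, executable="/bin/bash")
print(out.decode())  # Hello World
```

With the real URL you would put the curl invocation back inside the <(...).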
There is a libcurl binding for Python (pycurl), so you don't have to go through command-line behaviour at all. Its documentation lists the functions that should do it, though I have never run remote scripts this way myself. If you need to install the Python binding, the instructions are in the pycurl docs.
import pycurl