Privileged subprocess starts in the wrong working directory - python

I'm trying to start a few Python scripts from another, unprivileged one, using subprocess.Popen. This works fine for the most part, but the scripts that need root permissions don't. I've narrowed the problem down to pkexec, which neither stays in the working directory nor accepts it as a parameter. That's because
This:
subprocess.Popen(['kdesudo', 'pwd'], cwd=sys.path[0])
# also works with sudo
effectively prints the cwd, whereas:
subprocess.Popen(['pkexec', 'pwd'], cwd=sys.path[0])
always stays in /root. (I also tried passing env=os.environ, to no avail.)
I need to prompt the user with a GUI, and I want the portability that pkexec offers over kdesudo/gksu. Any ideas?
Edit: Since it's not possible to change pkexec's working directory, the following can be used to prompt the user for the root password across GTK and KDE environments:
from subprocess import check_call, CalledProcessError, Popen

try:
    check_call('which gksu', shell=True)
    sudo = 'gksu'
except CalledProcessError:
    print "gksu frontend not found, using kdesudo instead"
    sudo = 'kdesudo'
prompt = Popen([sudo, '<privileged command/script to run>'])

There's no way to get pkexec to keep the old working directory when invoking its command, because it always changes to the target user's home directory.
It changes to pw->pw_dir, which is the home directory of the target user and there's no override for this.
I can't see a documented reason for this, but it could simply be a matter of ensuring that the user executing the program can access their current working directory. The behavior has been there since the creation of pkexec, and I don't see any bugs filed about it.
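One workaround (my own sketch, not a pkexec feature): since pkexec will always cd to the target user's home, pass the desired directory along as an argument and cd back inside the privileged command:
import os
import subprocess

# pkexec still starts in /root, but the privileged shell changes back to
# the directory we pass as $1; 'pwd' stands in for the real command.
subprocess.Popen(['pkexec', 'sh', '-c', 'cd "$1" && pwd', '_', os.getcwd()])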

I run the pcmanfm file manager as a privileged user from the current directory using pkexec like this:
pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY pcmanfm $PWD
The user must belong to the sudo group.

Just use --keep-cwd, available in the newest versions of pkexec.
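Assuming a polkit release new enough to have that flag, the original example then behaves as expected:
import subprocess
import sys

# --keep-cwd tells pkexec not to cd to the target user's home directory;
# older polkit versions will reject the flag.
subprocess.Popen(['pkexec', '--keep-cwd', 'pwd'], cwd=sys.path[0])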

Run external applications using Python and call their methods inside

Wish you all a beautiful sunny day! :D
I have a question for you guys. I have the following Python script:
import os
os.system('ubuntu.exe')
This opens the Ubuntu distribution running on my WSL. Now, once the Ubuntu terminal appears, I would like to execute the following commands automatically from my Python script: sudo /etc/init.d/dbus start and sudo /etc/init.d/xrdp start. However, when I run either of those commands, the terminal requests my password, so the script should also be able to enter the password.
Is there any way to do this?
Kind regards,
D.
While the questions/answers linked in the comments are a good read (the one on sudoers in particular), there's a better method for WSL. Instead of using ubuntu.exe, use the newer wsl.exe replacement. The wsl command offers more control over startup, including the ability to change the user:
import os
os.system('wsl ~ -u root -e sh -c "nohup service xrdp start"')
os.system('wsl -u root service dbus start')
The nohup is needed because of what seems to be a timing issue. When starting up via the wsl command, the shell (the owning process) terminates before xrdp gets a chance to fork. nohup just makes sure that the full xrdp init script gets a chance to run before that happens. This really isn't a WSL issue, per se; it can also be replicated if you do something similar with exec sh -c "sudo service xrdp start".
A couple of other notes. First, this does not require a password, since WSL doesn't have the concept of a "login." The /init process (WSL's PID 1 and initialization) is responsible for setting the owning user for each session. This is not considered a security risk, since even the root WSL user runs with no greater permissions than the Windows user.
Also note that, in my experience, it's not necessary to start dbus for xrdp access, even though I've seen instructions that say it is. Ultimately it will depend on what you want to run within the xrdp session, of course.
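If you want your script to notice failures, the same commands can be run through subprocess instead of os.system (a sketch; check=True makes subprocess.run raise CalledProcessError on a non-zero exit):
import subprocess

# Same commands as above, but failures raise instead of passing silently.
subprocess.run(['wsl', '~', '-u', 'root', '-e', 'sh', '-c',
                'nohup service xrdp start'], check=True)
subprocess.run(['wsl', '-u', 'root', 'service', 'dbus', 'start'], check=True)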

How to open a "root" file without typing your password every time?

I read this very interesting post, which is very close to what I need but not quite: they use it for shell scripting, while I need the same solution mainly for a built-in Python function.
Long story short:
open("/dev/input/event3", "rb")
This doesn't work out of the box, because opening event3 requires elevated privileges, so I have to type my password every single time I execute my Python script. What can I do so that I don't have to type my password every time, without writing my password in plain text in my script? I like the solution offered in the post I linked above, but it doesn't work here, because the handle (so to speak) I get on this open file would live in another scope/Python script.
Any solutions?
Thanks
EDIT
I tried modifying the privileges of my entire Python script so that I don't need to type a password, but that didn't work either. What I tried:
1) modify access rights
sudo chown root:root /home/username/myscript.py
sudo chmod 700 /home/username/myscript.py
2) modify visudo
myusername ALL=(ALL) NOPASSWD: /home/username/myscript.py
3) trying to execute my Python script now fails, although it is clearly there:
myusername$ ./myscript
bash: ./myscript: No such file or directory
It occurs to me that you may be approaching this problem backward. Currently, you're asking how you can elevate the permissions of a python script without entering a password every time when you should be asking, "why do I need to enter a password at all?"
As long as the file isn't a security concern, both the script and the file in question should be owned by non-root users in the input group. The user who owns the Python script can then execute it without root privileges, because accessing the file doesn't require them.
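A quick sanity check along those lines (a sketch; it assumes the device's group is input, as udev typically sets up, and that your user has been added to that group, e.g. with usermod -aG input):
import os

DEVICE = '/dev/input/event3'

# If the current user is in the device's group, this succeeds without
# any elevated privileges.
if os.access(DEVICE, os.R_OK):
    with open(DEVICE, 'rb') as dev:
        data = dev.read(24)  # one input_event struct on 64-bit Linux
else:
    print('no read access - add your user to the input group')
Note that group changes only take effect after logging out and back in.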
Read more about setuid. It is tricky (so I won't even try to explain it here), and is the basis of authentication related programs like sudo, su, login etc. See also setreuid(2), setuid(2), execve(2), credentials(7), chmod(1).
A good Unix programming book (such as ALP, or something newer) should explain setuid in terms of system calls (listed in syscalls(2)).
setuid executables cannot be scripts (with a shebang); they should be binary ELF executables (see elf(5)). However, you could write some setuid wrapper program in C (or most other compiled languages, e.g. Rust, OCaml, C++, Go, ...) which runs your Python script. Be careful, since a mistake could open a huge security hole. But with such a setuid executable, you won't have to type any password.
You could also have some specific user or group own /dev/input/event3 (configure your system appropriately for that, through udev or systemd...) and use a setuid or setgid program.
BTW, you could configure sudo (see sudoers(5) and this) to avoid typing any password. Of course, that weakens the security of your entire system (but the choice is yours).
You can pass the sudo password in the same line with the command, using this syntax:
echo password | sudo -S your_command
This way you won't be prompted for the sudo password before the command executes - it sounds like that's what you are looking for.
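Since the question is about Python, here is the same idea rendered with subprocess (a sketch; as noted above, hard-coding the password in your source has obvious security implications):
from subprocess import Popen, PIPE

# Equivalent of `echo password | sudo -S your_command`;
# -S makes sudo read the password from stdin.
p = Popen(['sudo', '-S', 'your_command'], stdin=PIPE)
p.communicate(b'password\n')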

python symlink in windows 10 creators update

Since the Windows 10 Creators Update, you can enable Developer Mode to circumvent administrator privileges when creating a symlink. Now, I was able to create a symlink using mklink like this:
os.system('mklink %s %s' %(dst, src))
Hopefully it's obvious that dst is the destination symlink path and src is the source file for the symlink. While it seems to work OK, it doesn't error when it fails, which makes it a little more difficult to ensure each symlink is successful. I can check whether the path exists after each symlink, but that's less efficient than a try/except clause. There's also what looks like a command shell window(?) that pops up and closes quickly every time - and that's really annoying when you're symlinking a lot of files...
So I've been trying other options I've found on Stack Overflow, like this one: How to create symlinks in windows using Python? Unfortunately, the CreateSymbolicLinkW command doesn't seem to work for me... I also found this: OS.symlink support in windows, where it appears you need to adjust the Group Policy Editor; however, it apparently still requires users in the administrator group to run the process as an administrator, even if you explicitly grant that user symlink privileges.
With the Windows 10 Creators Update, there's mention of a new dwFlags value in the CreateSymbolicLink API (SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE); you can see the reference for that here: symlinks windows 10
Using the ctypes stuff is a bit over my head, so I'm wondering if anyone knows: can I actually use that new dwFlags value? How do I use it? Will it work without running the process as administrator?
I use Autodesk Maya, so I'm stuck with Python 2.7 options... I have not tried launching Maya as an administrator, so I don't know whether that will work, but it seems like a rather annoying hoop to jump through even if it does... I appreciate any assistance you can give.
it doesn't error if it fails
os.system will return the exit status of the call. It does not raise an exception.
If you look at the docs for os.system, they recommend using the subprocess module. In fact, subprocess.check_call does what you describe (raise an exception on a non-zero exit status). Perhaps that would work better.
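For example (a sketch; mklink is a cmd built-in, so it needs shell=True):
import subprocess

dst, src = 'link.txt', 'target.txt'  # example paths

# Raises CalledProcessError instead of failing silently like os.system.
subprocess.check_call('mklink "%s" "%s"' % (dst, src), shell=True)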
On the other hand, the mklink command will return a zero exit status even if the source does not exist (it will create a link to a non-existent file and return 0). You might want to validate the actual link as you mentioned, depending on what errors you are trying to catch.
As far as hiding the console window, see this.
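For the ctypes route the question asks about, a minimal sketch (the flag value comes from the Windows SDK headers; it assumes Developer Mode is enabled):
import ctypes

SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE = 0x2  # Creators Update and later

def symlink(src, dst, flags=SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE):
    # CreateSymbolicLinkW takes (link_name, target, flags) and returns 0 on
    # failure; OR in SYMBOLIC_LINK_FLAG_DIRECTORY (0x1) for directory targets.
    if not ctypes.windll.kernel32.CreateSymbolicLinkW(dst, src, flags):
        raise ctypes.WinError()

symlink(u'target.txt', u'link.txt')  # u'' strings so this also works on Python 2.7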
os.symlink works out of the box since Python 3.8 on Windows, as long as Developer Mode is turned on.
Not sure whether this will help with Maya; they seem to have committed to Python 3 though.

Enthought Canopy: Where do os.environ variables come from?

I have the following problem. I wanted to use the matplotlib animation package to save an mp4 video file. The save function has an external dependency for generating the mp4 file, the ffmpeg library. So I installed ffmpeg on Mac OS X 10.8 via MacPorts, and it got installed in /opt/local/bin.
But now, running the script in Canopy, the interpreter (IPython shell) cannot see ffmpeg. I added the path to my .bash_profile, and I can run the program from my terminal, but when I inspect os.environ['PATH'], the actual PATH of my shell was not picked up, and /opt/local/bin is not there.
If I try to run the script, I get this error:
/Users/alejandrodelacallenegro/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/animation.py:695: UserWarning: MovieWriter ffmpeg unavailable
warnings.warn("MovieWriter %s unavailable" % writer)
Any ideas on how to fix the problem? What do I have to do to change an environment variable that Python sees at startup? Has anyone had the same problem?
Thanks.
The problem here has nothing to do with Enthought; it's that OS X doesn't run bash when you launch things from Finder, LaunchDaemons, etc., and therefore doesn't access your .bash_profile. Instead, it runs them from launchd.
If you want to add some environment variables to affect anything run by launchd for the current user, that's easy:
launchctl setenv PATH $PATH:/opt/local/bin
If you want this to happen every time you log in, if you create a file ~/.launchd.conf, the subcommands in that file will be run through launchctl every time launchd starts (which is the first step in logging in a new user session).
If you want it to be system-wide, rather than just for your user, you can sudo launchctl and/or create/edit /etc/launchd.conf. However, you almost certainly don't want to change the environment used by root services, etc., unless you really know what you're doing.
If it helps: Using launchctl manually, editing ~/.launchd.conf, and editing /etc/launchd.conf are roughly equivalent to export, ~/.bash_profile, and /etc/profile (except of course that they affect launchd rather than bash/sh).
See the launchctl(1) man page for details, or just type launchctl to start an interactive session and use the built-in help. (The pages launchd(8) and launchd.conf(5) also have useful info.)
You can also use the deprecated environment.plist file to affect even things that aren't run by launchd, but… that's deprecated, and there really isn't anything for it to affect that you care about, except in (much) older versions of OS X.
People coming from other Unix systems are often caught out by this. Most file managers ask the shell to run programs for them; Finder.app (and the command-line tool open, and the AppleScript environment, and so on) ask launchd to do it. Plus, on most X11 systems, if you look up the process tree from your file manager, it was ultimately launched by a user shell too, whereas on OS X, Finder.app was launched by launchd, which was launched by the system-wide launchd; no shell in sight.
This also means that other shell-specific stuff like changing resource limits or default umask won't affect programs started outside the shell on a Mac. launchctl is again the answer.
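If you only need the fix inside one script, rather than for everything launchd runs, a narrower workaround (a sketch) is to extend PATH from Python itself before matplotlib goes looking for ffmpeg:
import os

# Prepend the MacPorts bin directory for this process only.
os.environ['PATH'] = '/opt/local/bin' + os.pathsep + os.environ.get('PATH', '')

import matplotlib.animation  # the ffmpeg writer can now be found on PATH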

Zsh/Bash: Path isn't what it should be

I stumbled upon something I just can't figure out. The situation: I downloaded the Python front end for controlling Dropbox via the command line (dropbox.py). I put this file in the folder:
/home/username1/.dropbox-dist/dropbox.py
I made a simple bash script in /usr/bin called "dropbox":
#!/bin/bash
python /home/username1/.dropbox-dist/dropbox.py
Now when I run it, the following happens:
The whereis for the file:
root@linux_remote /home/username1 # whereis dropbox
dropbox: /usr/bin/dropbox
When i run it:
root@linux_remote /home/username1 # dropbox
zsh: no such file or directory: /home/username2/.dropbox-dist/dropboxd
Yeah. It tells me another username. To be specific: I'm logged in via SSH on this Linux box. On the remote shell, byobu is running; inside byobu runs zsh. username2 equals the user I'm currently logged in with on my local Linux box, from which I connected:
username2@linux_local /home/username2 # ssh username1@linux_remote
That's how I am connected.
So there must be a variable that was passed to my remote shell from my local shell, and Python seems to read it, but I can't figure out which one it would be.
Now... look at that: when I manually type in the command that I wrote into the bash script:
username2@linux_remote /home/username2 # python /home/username1/.dropbox-dist/dropbox.py
Dropbox command-line interface
So it runs if I do it manually.
Another thing: if I run it with the whole path, it works too:
root@linux_remote /home/username1 # /usr/bin/dropbox
Dropbox command-line interface
And it does work if I run it via a login shell, for example using "bash -l" and then running "dropbox".
It doesn't work either if I change the shebang to "#!/usr/bin/zsh".
Any ideas on this?
whereis doesn't do what you think: it searches a specific set of directories, not $PATH. which searches $PATH, so you need to use which to find out which executable will be executed for a given name.
Edit: which as an external program (for shells that do not have a builtin which command, such as bash) will not give the right answer in some cases, e.g. for shell aliases. The type builtin should be used instead (it should also be more widely available, since it's mandated by POSIX, though not necessarily as a builtin).
