Change caller's current working directory in Python

I'm writing a simple script which ideally will help me conveniently change directories around my system.
The details of the implementation don't matter, but let's say ideally I will place this script in /usr/bin and call it with an argument denoting where I want to go to on the system: goto project1
I would expect that when the script exits, my terminal's current working directory would have changed to that of Project 1.
In order to accomplish this, I tried:
import os, subprocess
os.chdir('/')
subprocess.call('cd /', shell=True)
Neither of these works. The first changes the working directory of the Python process, and the second spawns a shell that changes to / and then exits.
Then I realized how naive I was being. When a program is run, the terminal is just forking a process and reading from its stdout, which my program is writing to. Whatever the child does, it can't affect the state of the terminal.
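You can watch this happen from inside Python itself (a minimal sketch):
import os, subprocess

# the child shell changes its own directory, prints it, and exits...
subprocess.call('cd / && pwd', shell=True)   # prints: /
# ...while the parent Python process is untouched
print(os.getcwd())                           # still the original directory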
But then I thought "I've been using cd for years, surely someone wrote code for that", thinking there might be something to go off of (system call or something?).
But cd is not even in coreutils. Instead, the source of cd is this:
builtin `echo ${0##*/} | tr \[:upper:] \[:lower:]` ${1+"$@"}
So, a couple of questions come to mind:
What's actually going on behind the scenes when a user calls cd? (Meaning, how do the terminal and the system actually interact?)
Is it possible to have something like a Python script alter the terminal location?
Thanks so much for your help!

You could do it as a pair of scripts:
directory.py
#!/usr/bin/python
import sys
directory = sys.argv[1]
# do something interesting to manipulate directory...
print(directory + "tmp")
directory.csh
#!/bin/csh -f
cd `python directory.py $1`
Result:
> pwd
/Users/cdl
> source directory.csh "/"
> pwd
/tmp
>
Substitute your favorite shell and variant of Python as desired. Turn on execute permission for the Python script to simplify things further.
Clearly the shell is changing the directory but Python can do all the clever logic you want to figure out where to send the shell.
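The same pattern works in bash; a sketch of the shell half as a function for your .bashrc (the name goto is just the one from the question, and directory.py is assumed to sit somewhere findable):
goto() {
    # the cd sticks because goto is a function running in the
    # current shell, not a child process
    cd "$(python directory.py "$1")"
}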

Related

Applying source on Python script to export environment variables?

Context:
I'm working with a few developers, some on OSX and some on Linux. We use a collection of bash scripts to load environment variables, build docker images, etc. The OSX users use zsh instead of bash, so some tricks (specifically, how we get the running script's directory) aren't compatible. What works on Linux (but not OSX) is:
$ cat ./scripts/env_script.sh
THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# ... more variables ...
$ source ./scripts/env_script.sh
$ echo $THIS_DIR
# the absolute path to the scripts directory
Some cursory testing shows this doesn't work on OSX, presumably because BASH_SOURCE doesn't work the same with zsh. Fine.
Goal:
We're all working with Python, so we would like to replace our bash scripts with Python.
The same pattern is very easy in Python:
from pathlib import Path
# .resolve() makes the path absolute even when the script is
# invoked via a relative path
THIS_DIR = str(Path(__file__).resolve().parent)
Not the cleanest, but it works anywhere. Ideally we could run this and share THIS_DIR with the shell that called the script. Something like
$ <answer to my question> ./scripts/env_script.py
$ echo $THIS_DIR
# absolute path to the scripts directory
Question:
Suppose we rewrote our ./scripts/env_script.sh into ./scripts/env_script.py and collected a list of variables we would like to share with the calling environment.
That is, we wrote the Python script above.
Is it possible to do something akin to source ./scripts/env_script.py so that we can export the variables from the Python script into the calling shell?
What I've tried:
I found a potential duplicate here, though the answer is basically "Python runs in a sub-process; source would only work if your shell is Python", which makes sense but doesn't answer the question.
If there isn't a nice way to exchange these variables, then another option would basically be to have the Python script print the variables to stdout and then eval them within my shell, i.e. eval $(./scripts/env_script.py). I guess this could work, but it feels wrong, at least because I was always taught that eval is evil.
I spent a little more time on the second solution since I could imagine it working (although I'm not happy with it). Basically, I rewrite ./scripts/env_script.py to be
#!/usr/bin/env python3
from pathlib import Path

FOO = 'whatever'
THIS_DIR = str(Path(__file__).resolve().parent)
# and so on...

# print all "normal" variables to stdout as shell assignments
def_vars = list(locals().items())
for name, val in def_vars:
    # note: imported names like Path also land in locals(), so a
    # robust version needs a stricter filter than the underscore check
    if name.startswith("_"):
        continue
    print(f'{name}="{val}"')
So running it normally would output
$ ./scripts/env_script.py
FOO="whatever"
THIS_DIR="..."
...
$ echo $FOO
# nothing
Then, to jam those variables into the calling shell, we wrap that with eval and it does the job
$ eval $(./scripts/env_script.py)
$ echo $FOO
whatever
I guess with some further tweaking, the footer at the end of env_script.py could be made a little more robust, and then moved outside of the script. Some sort of alias could be defined to make it syntactically nicer, so that calling it could look like pysource ./scripts/env_script.py. Still, I'm not satisfied with this answer.
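For what it's worth, the wrapper itself can be tiny; a sketch of that pysource idea as a shell function (the name is just the one made up above):
pysource() {
    # run the Python script and eval its stdout in the current shell
    eval "$(python3 "$1")"
}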
Edit: I did the tweaking and wrote a downloadable script here.

Crontab won't run scripts

I have a light show on my Raspberry Pi. I also have crontab running to automate it. When I run the script in a terminal it acts normally and works flawlessly. When cron runs it, it won't work correctly. I have it set to turn the lights off 15 seconds before the show, run the show, then 15 seconds later turn them back on. When the job runs, it only plays the show and the relays never trigger to turn the lights on or off. I have 3 scripts: one is the show and 2 are for turning the lights on and off. I'm really confused here.
Run the show
/home/pi/lightshowpi/./lightsoff.sh
sleep 15
$SYNCHRONIZED_LIGHTS_HOME/bin/start_playlist_once $SYNCHRONIZED_LIGHTS_HOME/mus$
sleep 15
/home/pi/lightshowpi/./lightson.sh
Lights on
#!/bin/bash
export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi
python py/hardware_controller.py --config=overmech.cfg --state=on
Lights off
#!/bin/bash
export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi
python py/hardware_controller.py --config=overmech.cfg --state=off
All these are .sh files with chmod +x. Thanks
I don't know what PATH cronjobs run with, but it may be that it's not finding the python executable. Try putting in the full path:
#!/bin/bash
export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi
/usr/bin/python py/hardware_controller.py --config=overmech.cfg --state=on
Assuming python is in /usr/bin, of course.
I'd also put in an absolute path to py/hardware_controller.py but I don't know where you put it :)
There are a couple of things you should fix regardless of which one is the culprit; by changing these, your crontab should work.
First and foremost, like everybody else mentioned, make sure your shell scripts start with the #!/bin/bash line (or its equivalent on your system). You can also redirect stdout and stderr in your crontab by using something similar to this:
*/15 * * * * /bin/bash /home/pi/lightshowpi/lightson.sh >> lights_on_log.txt 2>&1
The trailing 2>&1 appends both stdout and stderr to that file in the home directory of the user running the cron job, so you can review the output and errors and figure out exactly what the problem is.
The most likely cause of the problem is that you're not using the full path to the python binary i.e.
/usr/bin/python
Instead of just
python
One other thing you should look into is the fact that the command line arguments for both scripts use relative paths, and depending on what user the cron job runs as, this may pose a problem. Either cd into whatever directory the py directory and overmech.cfg file are in, or use fully qualified paths for those as well; a sketch combining both fixes follows.
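Putting those fixes together, lightson.sh might look like this (a sketch; it assumes py/hardware_controller.py and overmech.cfg live under /home/pi/lightshowpi, which matches the paths in the question):
#!/bin/bash
export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi
# cd first so the relative py/ and overmech.cfg paths resolve no
# matter which directory cron starts us in
cd "$SYNCHRONIZED_LIGHTS_HOME"
/usr/bin/python py/hardware_controller.py --config=overmech.cfg --state=on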
The last bit I'd point out (I saved it for last because it doesn't really mean much) is that one of your examples shows
/home/pi/lightshowpi/./lightsoff.sh
The ./ in the middle is redundant. Dot-slash abbreviates the path to a file in the current directory, since . expands to the current working directory and / joins the last bit of the path. In the above example you'll do just fine without it, using the complete path:
/home/pi/lightshowpi/lightsoff.sh

Create a process that runs in background using python

I want to write a python program that runs in the background.
I mean, like when we install a Python package: afterwards, we can run any script by putting python in front of the script name. This suggests that some Python process is running in the background which can take inputs and perform actions.
And in the case of Linux, you can call grep from anywhere, so grep must also be running in the background somehow.
I want to write something like that in Python: when I call a certain function with a name and arguments at any time, it should perform the intended action without caring about the original code. But I am not able to find out how to achieve that.
Can anyone please help me here?
Thanks in advance.
Clarification: the fact that you can run python or grep in a console just by typing its name does not mean that it runs in the background. It means that there exists an executable file in some location, and this location is listed in the environment variable PATH.
For example, on my system I can run Python by typing python. The python executable is installed at /usr/local/bin/python, and has the execute permission bit on.
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
and yes, /usr/local/bin is contained in PATH.
You can do the same with python scripts:
ensure that the very first line of your script contains #!/usr/bin/python or #!/usr/bin/env python
give your script execute permissions: chmod a+x yourScript
either move your script to one of the directories contained in $PATH, or add the directory where your script is located to PATH: export PATH=$PATH:/home/you/scripts
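For example, a minimal script (the name ~/scripts/greet and its contents are made up for illustration) that becomes a command once the steps above are applied:
#!/usr/bin/env python
# after `chmod a+x ~/scripts/greet` and adding ~/scripts to PATH,
# this runs from anywhere as: greet world
import sys
print("hello " + " ".join(sys.argv[1:]))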
Have a look at
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
you can roll your own daemon by inheriting from the Daemon class and overriding the run method:
import subprocess
import sys

from daemon import Daemon

class run_daemon(Daemon):
    def run(self):
        run_daemon.execute_shell_command(sys.argv[1])

    @staticmethod
    def execute_shell_command(ShellCommand):
        # run the command in a shell and wait for it to finish
        process = subprocess.Popen(ShellCommand, shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        process.communicate()
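Starting it could then look something like this (a sketch; the Daemon class from the linked article takes a pidfile path and provides start() and stop() methods):
if __name__ == "__main__":
    # the pidfile location is an arbitrary choice for this example
    daemon = run_daemon('/tmp/run_daemon.pid')
    daemon.start()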

Change parent shell's environment from a subprocess

If I have a program written in a language other than bash (say Python), how can I change environment variables or the current working directory inside it such that the change is reflected in the calling shell?
I want to use this to write a 'command line helper' that simplifies common operations. For example, a smart cd: when I simply type the name of a directory into my prompt, it should cd into it.
[~/]$ Downloads
[~/Downloads]$
or even
[~/]$ project5
[~/projects/project5]$
I then found How to change current working directory inside command_not_found_handle (which is exactly one of the things I wanted to do), which introduced me to shopt -s autocd. However, this still doesn't handle the case where the supplied directory is not in ./.
In addition, if I want to do things like setting the http_proxy variable from a python script, or even update the PATH variable, what are my options?
P. S. I understand that there probably isn't an obvious way to write a magical command inside a python script that automatically updates environment variables in the calling shell. I'm looking for a working solution, not necessarily one that's elegant.
This can only be done with the parent shell's involvement and assistance. For a real-world example of a program that does this, you can look at how ssh-agent is supposed to be used:
eval "$(ssh-agent -s)"
...reads the output from ssh-agent and runs it in the current shell (-s specifies Bourne-compatible output, vs csh).
If you're using Python, be sure to use pipes.quote() (or, for Python 3.x, shlex.quote()) to process your output safely:
import pipes
dirname='/path/to/directory with spaces'
foo_val='value with * wildcards * that need escaping and \t\t tabs!'
print 'cd %s; export FOO=%s;' % (pipes.quote(dirname), pipes.quote(foo_val))
...as careless use can otherwise lead to shell injection attacks.
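For reference, the same example under Python 3, where shlex.quote() takes the place of pipes.quote():
import shlex

dirname = '/path/to/directory with spaces'
foo_val = 'value with * wildcards * that need escaping and \t\t tabs!'
print('cd %s; export FOO=%s;' % (shlex.quote(dirname), shlex.quote(foo_val)))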
By contrast, if you're writing this as an external script in bash, be sure to use printf %q for safe escaping (though note that its output is targeted for other bash shells, not for POSIX sh compliance):
#!/bin/bash
dirname='/path/to/directory with spaces'
foo_val='value with * wildcards * that need escaping and \t\t tabs!'
printf 'cd %q; export FOO=%q;' "$dirname" "$foo_val"
If, as it appears from your question, you want your command to appear to be written as a native shell function, I would suggest wrapping it in one (this practice can also be used with command_not_found_handle). For instance, installation can involve putting something like the following in one's .bashrc:
my_command() {
    eval "$(command /path/to/my_command.py "$@")"
}
...that way users aren't required to type eval.
Essentially, Charles Duffy hit the nail on the head; I present here another spin on the issue.
What you're basically asking about is interprocess communication: You have a process, which may or may not be a subprocess of the shell (I don't think that matters too much), and you want that process to communicate information to the original shell (just another process, btw), and have it change its state.
One possibility is to use signals. For example, in your shell you could have:
trap 'cd /tmp; pwd;' SIGUSR2
Now:
Type echo $$ in your shell, this will give you a number, PID
cd to a directory in your shell (any directory other than /tmp)
Go to another shell (in another window or what have you), and type: kill -s SIGUSR2 PID
You will find that you are in /tmp in your original shell.
So that's an example of the communication channel. The devil of course is in the details. There are two halves to your problem: How to get the shell to communicate to your program (the command_not_found_handle would do that nicely if that would work for you), and how to get your program to communicate to the shell. Below, I cover the latter issue:
You could, for example, have a trap statement in the original shell:
trap 'eval $(/path/to/my/fancy/command $(pwd) $$)' SIGUSR2
...your fancy command will be given the current working directory of the original shell as the first argument, and the process id of the shell (so it knows who to signal), and it can act upon it. If your command sends an executable shell command string to the eval command, it will be executed in the environment of the original shell.
For example:
trap 'eval $(/tmp/doit $$ $(pwd)); pwd;' SIGUSR2
/tmp/doit is the fancy command. It could be any executable (Python, C, Perl, etc.); the key is that it spits out a string that the shell can evaluate. In /tmp/doit, I have provided a bash script:
#!/bin/bash
echo "echo PID: $1 original directory: $2; cd /tmp"
(I make sure the file is executable with: chmod 755 /tmp/doit). Now if I type:
cd; echo $$
Then, in another shell, take the number ("NNNNN") output by the above echo and do:
kill -s SIGUSR2 NNNNN
...then suddenly I will see something like this pop up in the original shell:
PID: NNNNN original directory: /home/myhomepath
/tmp
and if I type "pwd" in my original shell, I will see that I'm in /tmp.
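Since /tmp/doit only has to print shell code, a Python version is a drop-in replacement (a sketch honoring the same contract: the PID as the first argument, the original directory as the second):
#!/usr/bin/env python
# prints shell code for the trap's eval; argv[1] is the shell's PID,
# argv[2] its working directory
import sys
print('echo PID: %s original directory: %s; cd /tmp' % (sys.argv[1], sys.argv[2]))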
The guy who wanted command_not_found_handle to do something in the current shell environment could have used signals to get the effect he wanted. Here I was running the kill manually but there's no reason why a shell function couldn't do it.
Doing fancy work on the frontend, whereby you re-interpret or pre-interpret the user's input to the shell, may require that the user run a frontend program that could be pretty complicated, depending on what you want to do. The old-school expect program is ideal for something like this, but not too many youngsters pick up Tcl these days :-).

Changing prompt working directory via Python script

Is it possible to change the Windows command prompt working directory via Python script?
e.g.
>> cd
c:\windows\system32
>> make_decision_change_dir.py
>> cd
c:\windows
I have tried a few things which don't work:
import os
os.chdir(path)
import os, subprocess
subprocess.Popen("chdir /D \"%s\"" %path, shell=True)
import os, subprocess
subprocess.Popen("cd \"%s\"" %path, shell=True)
import os, subprocess
subprocess.Popen("CD=\"%s\"" %path, shell=True)
As I understand it and observe, these operations change the current process's working directory, which is the Python process, not the prompt it's executing from.
Thanks.
UPDATE
The path I would want to change to is dynamic (based on what project I am working on, the full path to a build location changes) hence I wanted to code a solution in Python rather than hack around with a Windows batch file.
UPDATE
I ended up hacking a batch file together to do this ;(
Thanks everyone.
I'm not clear what you want to do here. Do you want a Python script which you can run from a Windows command prompt and which will change the working directory of the Windows command session?
If so, I'm 99.9% sure that's impossible. As you said yourself, the python.exe process is a separate process from the Windows cmd.exe, and anything you do in Python won't affect the command prompt.
There may be something you can do via the Windows API by sending keystrokes to the window or something, but it would be pretty brittle.
The only two practical options I can think of involve wrapping your Python script in a Batch file:
Output your desired directory from the Python script, read the output in your Batch file and CD to it.
Start your Python script from a batch file, allow your Python script to start a new cmd.exe Window and get the Batch file to close the original Command window.
I have a Python script to make moving around a file tree easier: xdir.py
Briefly, I have an xdir.py file, which writes Windows commands to stdout:
# Obviously, this should be more interesting..
import sys
print "cd", sys.argv[1]
Then an xdir.cmd file:
@echo off
python xdir.py %* >%TEMP%\__xdir.cmd
call %TEMP%\__xdir.cmd
Then I create a doskey alias:
doskey x=xdir.cmd $*
The end result is that I can type
$ x subdir
and change into subdir.
The script I linked to above does much more, including remembering history, maintaining a stack of directories, accepting shorthand for directories, and so on.
One common solution is a two-part script.
Part 1 is Python, which creates a temporary .BAT file that contains the appropriate CD command.
Part 2 is the temporary .BAT file.
fancycd.bat
python figurethepath.py >temp.bat
temp.bat
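figurethepath.py holds whatever logic you need; a minimal sketch (the project-to-path mapping is invented for illustration):
# figurethepath.py -- prints the CD command that temp.bat will run
import sys

# hypothetical mapping from project name to build location
paths = {"project1": r"C:\build\project1"}
name = sys.argv[1] if len(sys.argv) > 1 else ""
print('cd /d "%s"' % paths.get(name, r"C:\windows"))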
As people mentioned, child processes (i.e. your program) can't change the current working directory of a parent process (i.e. the terminal). This is why you need the two steps that everybody is describing. In most shells there's a way to make a macro or function to perform this two-step functionality.
For example, in bash, you can make a single alias to compute the path and change the current working directory, similar to what @S.Lott describes for Windows:
alias my_cd='TMP=`compute_path.py`; cd $TMP;'
Note that the cd command is still being interpreted in the parent process (the terminal), which has the ability to change its own current working directory.
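A function form avoids the temporary variable and quotes the result, in case the computed path contains spaces (compute_path.py again stands in for your own logic):
my_cd() {
    # quoting the command substitution keeps paths with spaces intact
    cd "$(compute_path.py "$1")"
}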
The subprocess.Popen() doc page says a child process is created to run the sub-process, so any working directory changes will be local to that subprocess:
If cwd is not None, the child’s current directory will be changed to cwd before it is executed. Note that this directory is not considered when searching the executable, so you can’t specify the program’s path relative to cwd.
The same is true for any changes made explicitly inside the subprocess, such as the commands that appear in the question.
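A quick illustration of the cwd parameter (a sketch; on Windows, cd with no arguments prints the child's current directory):
import os
import subprocess

# the child cmd.exe starts in C:\windows and prints that path...
subprocess.call("cd", shell=True, cwd=r"C:\windows")
# ...while the Python process's own directory is unchanged
print(os.getcwd())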
import os
# open a new cmd.exe window that is already cd'd into the target
# directory, then run the script there
os.system("start cmd.exe /k \"cd /d c:\\windows\\system32 & python make_decision_change_dir.py\"")
