I'm trying to customize my zsh prompt. The function below calls a Python script and should return the entire working directory path minus the current directory; e.g., ~/research would become ~. This is in a .zsh-theme file.
function collapse_pwd {
echo $(python ~/.oh-my-zsh/themes/truncatecwd.py '%~' '%c')
}
This is the python script, truncatecwd.py.
#!/usr/bin/env python
import sys
cwd = sys.argv[1]
current_dir_end = sys.argv[2]
sys.stdout.write(cwd[0: cwd.index(current_dir_end)])
Weird things happen here. I keep getting errors saying that current_dir_end can't be found in cwd. I think it has something to do with string formatting. I printed out cwd, and it seems to be correct: '~/.oh-my-zsh/themes'. However, when I check its length, I get 2. The same goes for current_dir_end: I get length 2. In fact, even cwd = '~' gives a length of 2. Clearly, something subtle (but probably simple) is going on.
Thanks for your help.
I don't really understand what you're trying to do here, but wouldn't the following suffice, with no Python involved at all?
collapse_pwd() {
  local result=${1:-$PWD}
  # strip the last path component
  if [[ $result = */* ]]; then
    result="${result%/*}"
  fi
  # abbreviate $HOME as ~ (including the case where the parent is $HOME itself)
  if [[ $result = "$HOME" ]]; then
    result="~"
  elif [[ $result = "$HOME"/* ]]; then
    result="~/${result#$HOME/}"
  fi
  echo "$result"
}
Could you do something like this?
import os
import sys

# take the current working directory and drop its last path component
cwd = os.getcwd()
ret = os.path.sep.join(cwd.split(os.path.sep)[:-1])
sys.stdout.write(ret)
Also, just an observation, since I'm not too familiar with zsh: you may need to call python with the -u option to ensure unbuffered output; otherwise a trailing newline may be written, and that wouldn't be good in a command prompt.
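If the ~ abbreviation from the question (~/research becoming ~) matters, a small variant along the same lines would also collapse the $HOME prefix. This is just a sketch, not part of the answer above; the file name truncatecwd.py is taken from the question, everything else is illustrative.
#!/usr/bin/env python
# hypothetical variant of truncatecwd.py: print the parent of the
# current working directory, abbreviating $HOME as ~
import os
import sys

parent = os.path.dirname(os.getcwd())
home = os.path.expanduser('~')
if parent == home or parent.startswith(home + os.sep):
    parent = '~' + parent[len(home):]
sys.stdout.write(parent)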
I'm trying to create a loop inside a shell script, and I want to break out of the loop and finish the script when I find an integer different from 0 in a specific string (using Python). The problem is that even after the first occurrence of such an integer, the shell script keeps executing. I tried to debug it by echoing the value of GET_OUT_OF_LOOP, but it just keeps echoing 0 even after finding the kind of integer I was looking for. I already looked on the web for a way to do this, but I still haven't figured it out...
Here's my shell script:
#!/bin/sh
export GET_OUT_OF_LOOP=0
while [ $GET_OUT_OF_LOOP -ne 1 ]; do
    python3 provas.py provas.txt
    ./provas < provas.txt >> data.txt
    python3 test.py data.txt
    sh clear_data.sh
done
And here is my Python code (test.py), where I'm trying to change the value of the GET_OUT_OF_LOOP variable using os.environ:
#!/usr/bin/env python3
import sys
import os
import re

script, filename = sys.argv
os.environ['GET_OUT_OF_LOOP'] = '0'
fin = open(filename, 'r')
for line in fin:
    if "A percentagem de aprovação foi de" in line:
        if int(re.search(r'\d+', line).group()) != 0:
            print(line)
            os.environ['GET_OUT_OF_LOOP'] = '1'
The Python process is a subprocess of the shell process, and it cannot modify the environment variables of its parent process.
For your case, you can use the exit code to pass the message back, i.e.
shell script:
python3 test.py data.txt || GET_OUT_OF_LOOP=1
python:
#!/usr/bin/env python3
import sys
import re

script, filename = sys.argv
fin = open(filename, 'r')
for line in fin:
    if "A percentagem de aprovação foi de" in line:
        if int(re.search(r'\d+', line).group()) != 0:
            print(line)
            sys.exit(1)
sys.exit(0)
That is just the way environment variables work: a sub-process can't change variables in the environment of the process that called it.
(And in a shell script, almost every line of code, apart from the control structures, runs as an external sub-process.)
What you can have is a simple unsigned-byte return value from your sub-process, which the shell script can read through the implicit $? variable.
In Python's case, you terminate the program with this return value by calling sys.exit()
So, in your shell script you can do this to assign the variable:
python3 test.py data.txt
GET_OUT_OF_LOOP=$?
And in the Python script, change:
os.environ['GET_OUT_OF_LOOP'] = '1'
for
sys.exit(1)
Of course, it would be much saner and more maintainable if you just used Python all the way from the top - the shutil module in the stdlib makes it easy to copy files around, and, above all, you get a consistent syntax across the whole script, with much easier-to-use comparison operators and variables.
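For example, here is a rough sketch of that all-Python approach, assuming provas.py, the ./provas binary, clear_data.sh and the data.txt wording are exactly as in the question:
import re
import subprocess

def found_nonzero_percentage(path):
    # the same check test.py performs: a non-zero number on the percentage line
    with open(path) as fin:
        for line in fin:
            if "A percentagem de aprovação foi de" in line:
                if int(re.search(r'\d+', line).group()) != 0:
                    print(line, end='')
                    return True
    return False

while True:
    subprocess.run(["python3", "provas.py", "provas.txt"], check=True)
    with open("provas.txt") as fin, open("data.txt", "a") as fout:
        subprocess.run(["./provas"], stdin=fin, stdout=fout, check=True)
    done = found_nonzero_percentage("data.txt")
    subprocess.run(["sh", "clear_data.sh"], check=True)
    if done:
        break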
Here are two similar stackoverflow questions that might explain yours:
how-do-i-make-environment-variable-changes-stick-in-python
environment-variables-in-python-on-linux
The real reason for this issue is that when we run a process, any environment variables that process changes are only visible during that process's own runtime; they don't change the variables of the outer shell. Here is a simplified version of your scripts to prove it:
#test.py
import os
os.environ['test_env_var'] = '1'
#test.sh
export test_env_var=0
while [ $test_env_var -ne 1 ]; do
    python test.py
    echo $test_env_var
done
As you might have already seen coming, the loop will echo $test_env_var as 0 forever.
Hence, to my understanding, the solution would be to push the change out into external files, if that's really necessary: append the change to the relevant system's configuration file. For this example, you could append "export test_env_var=1" to ~/.bashrc if you are a Linux bash user.
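The same parent/child behaviour can be demonstrated without any shell at all; here is a small pure-Python sketch (it only assumes a python executable on the PATH):
import os
import subprocess

os.environ["test_env_var"] = "0"
# the child process changes test_env_var only in its own copy of the environment
subprocess.run(["python", "-c", "import os; os.environ['test_env_var'] = '1'"], check=True)
print(os.environ["test_env_var"])  # still prints 0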
I was using the Python project pick to select an option from a list. The code below returns the option and index.
option, index = pick(options, title)
Pick uses the curses library from Python. I want to pass the output of my Python script to a shell script:
variable output = $(pythonfile.py)
but it gets stuck on the curses screen. It cannot draw anything. What can be the reason for this?
pick gets stuck because when you use $(pythonfile.py), the shell redirects the output of pythonfile.py as if it were a pipe. Also, the output of pick contains characters for updating the screen (not what you want). You can work around those problems by
redirecting the output of pythonfile.py to /dev/tty
ensuring that your pythonfile.py writes its result to the standard error, and
directing the standard error in the shell script to the output of the $(...) construct.
For example:
#!/bin/bash
foo=$(python pythonfile.py 2>&1 >/dev/tty)
echo "result '$foo'"
and in pythonfile.py, doing
import sys
print(option, index, file=sys.stderr)
rather than
print(option, index)
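Putting it together, pythonfile.py might look something like this. The two-value return of pick(options, title) is as shown in the question; the options list and title here are just placeholders.
import sys
from pick import pick

options = ["option 1", "option 2", "option 3"]  # placeholder choices
title = "Please choose an option:"

# pick draws its menu with curses on stdout, which the shell redirects
# to /dev/tty; the result goes to stderr, where $(...) can capture it
option, index = pick(options, title)
print(option, index, file=sys.stderr)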
To pass the output of a Python script to a Bash variable, you need to specify the command that runs the Python file inside the variable's declaration.
Like so:
variable_output=$(python pythonfile.py)
Furthermore, if you'd like to pass a variable from Python to Bash, you could use Python's sys module and then redirect stderr.
Like so:
test.py
import sys
test_var = str(3 + 3)
sys.exit(test_var)
test.sh
test_var=$(python3 test.py 2>&1)
echo $test_var
Now, if we run test.sh we get the output 6.
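For what it's worth, the more common pattern (a sketch, not from the answer above) is to write the value to stdout and capture it with test_var=$(python3 test.py), leaving the exit status free to signal success or failure:
import sys

# write the result to stdout so that $(...) can capture it;
# the exit status stays 0 on success
test_var = str(3 + 3)
sys.stdout.write(test_var)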
I've seen some shell scripts in which they pass a file to a command by writing the contents of the file inline in the same shell script (a here-document). For instance:
if [[ $uponly -eq 1 ]]; then
    mysql $tabular -h database1 -u usertrack -pabulafia usertrack << END
select host from host_status where host like 'ld%' and status = 'up';
END
    exit 0
fi
I've been able to do something similar in which I do:
python << END
print 'hello world'
END
If the script is called, say, myscript.sh, then I can run it by executing sh myscript.sh, and I obtain my desired output.
The question is, can we do something similar in a Makefile? I've been looking around, and they all say that I have to do something like:
target:
	python @<<
print 'hello world'
<<
But that doesn't work.
Here are the links where I've been looking:
http://www.opussoftware.com/tutorial/TutMakefile.htm#Response%20Files
http://www.scribd.com/doc/2369245/Makefile-Memo
You can do something like this:
define TMP_PYTHON_PROG
print 'hello'
print 'world'
endef
export TMP_PYTHON_PROG
target:
	@python -c "$$TMP_PYTHON_PROG"
First, you're defining a multi-line variable with define and endef. Then you need to export it to the shell, otherwise each new line would be treated as a new command. Then you reinsert the shell variable using $$.
The reason your @<< thing didn't work is that it appears to be a feature of a non-standard make variant. Similarly, the define command that mVChr mentions is specific (as far as I'm aware) to GNU Make. While GNU Make is very widely distributed, this trick won't work in a BSD make, nor in a POSIX-only make.
I feel it's good, as a general principle, to keep makefiles as portable as possible; and if you're writing a Makefile in the context of an autoconf-ed system, it's more important still.
A fully portable technique for doing what you're looking for is:
target:
	{ echo "print 'First line'"; echo "print 'second'"; } | python
or equivalently, if you want to lay things out a bit more tidily:
target:
	{ echo "print 'First line'"; \
	  echo "print 'second'"; \
	} | python
I have a script (a.py) that reads in 2 parameters like this:
#!/usr/bin/env python
import sys
username = sys.argv[1]
password = sys.argv[2]
The problem is, when I call the script with some special characters:
a.py "Lionel" "my*password"
It gives me this error:
/swdev/tools/python/current/linux64/bin/python: No match.
Any workaround for this?
Update:
It has been suspected that this might be a shell issue rather than a script issue.
I thought the same, until I tried it out with a Perl script (a.pl), which works perfectly without any issue:
#!/usr/bin/env perl
$username = $ARGV[1];
$password = $ARGV[2];
print "$username $password\n";
%a.pl "lionel" "asd*123"
==> lionel asd*123
No problem.
So I guess this looks to me more like a Python issue.
Geezzz ........
The problem is in the commands you're actually using, which are not the same as the commands you've shown us. Evidence: in Perl, the first two command-line arguments are $ARGV[0] and $ARGV[1] (the command name is $0). The Perl script you showed us wouldn't produce the output you showed us.
"No match" is a shell error message.
Copy-and-paste (don't re-type) the exact contents of your Python script, the exact command line you used to invoke it, and the exact output you got.
Some more things to watch out for:
You're invoking the script as a.py, which implies either that you're copying it to some directory in your $PATH, or that . is in your $PATH. If the latter, that's a bad idea; consider what happens if you cd into a directory that contains a (possibly malicious) command called ls. Putting . at the end of your $PATH is safer than putting it at the beginning, but I still recommend leaving it out altogether and using ./command to invoke commands in the current directory. In any case, for purposes of this exercise, please use ./a.py rather than a.py, just so we can be sure you're not picking up another a.py from elsewhere in your $PATH.
This is a long shot, but check whether you have any files in your current directory with a * character in their names. some_command asd*123 (without quotation marks) will fail if there are no matching files, but not if there happens to be a file whose name is literally "asd*123".
Another thing to try: change your Python script as follows:
#!/usr/bin/env python
print "before import sys"
import sys
print "after import sys"
username = sys.argv[1]
password = sys.argv[2]
This will tell you whether the shell is invoking your script at all.
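As a further check (my suggestion, not part of the answer above), printing sys.argv shows exactly which arguments survived the shell's processing:
#!/usr/bin/env python
import sys
print repr(sys.argv)   # shows exactly what the shell passed in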
That error comes from your shell, not from Python. Do you have shopt -s failglob set in your .bashrc or somewhere?
/swdev/tools/python/current/linux64/bin/python: No match.
I think the problem is that the Python environment is not set up correctly.
Does python run at all on your machine?
What I'd like to do is something like
$ echo $PATH | python --remain-interactive "x = raw_input().split(':')"
>>>
>>> print x
['/usr/local/bin', '/usr/bin', '/bin']
I suppose an IPython solution would be best. If this isn't achievable, what would be your solution for situations where I want to process output from various other commands? I've used subprocess before to do it when I was desperate, but it is not ideal.
UPDATE: So this is getting closer to the end result:
echo $PATH > /tmp/stdout.txt; ipython -i -c 'stdout = open("/tmp/stdout.txt").read()'
Now how can we go about bending this into a form
echo $PATH | pyout
where pyout is the "magic solution to all my problems". It could be a shell script that writes the piped output to a file and then runs IPython. Everything I've tried fails for the same reasons bp gives.
In IPython you can do this
x = !echo $$$$PATH
The double escape of $ is a pain though
You could do this I guess
PATH="$PATH"
x = !echo $PATH
x[0].split(":")
The --remain-interactive switch you are looking for is -i. You can also use the -c switch to specify the command to execute, such as __import__("sys").stdin.read().split(":"). So what you would try is (mind the quoting!):
echo $PATH | python -i -c "x = __import__('sys').stdin.read().split(':')"
However, this is all that will be displayed:
>>>
So why doesn't it work? Because you are piping. The Python interpreter is trying to interactively read commands from the same sys.stdin that your code is reading its input from. Since echo is done executing, sys.stdin is closed and no further input can happen.
For the same reason, something like:
echo $PATH > spam
python -i -c "x = __import__('sys').stdin.read().split(':')" < spam
...will fail.
What I would do is:
echo $PATH > spam.bar
python -i my_app.py spam.bar
After all, open("spam.bar") is a file object just like sys.stdin is :)
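my_app.py could then be as small as this sketch (the file names spam.bar and my_app.py come from the lines above; the split on ':' matches the PATH example):
# my_app.py -- read the data from the file named on the command line,
# leaving sys.stdin free for the interactive prompt
import sys

with open(sys.argv[1]) as f:
    x = f.read().strip().split(":")
Running python -i my_app.py spam.bar then drops you at the >>> prompt with x already defined.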
Due to the Python axiom of "There should be one - and preferably only one - obvious way to do it" I'm reasonably sure that there won't be a better way to interact with other processes than the subprocess module.
It might help if you could say why something like the following "is not ideal":
>>> process = subprocess.Popen(['cmd', '/c', 'echo %PATH%'], stdout=subprocess.PIPE)
>>> print process.communicate()[0].split(';')
(In your specific example you could use os.environ but I realise that's not really what you're asking.)
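For the PATH example specifically, that boils down to a one-liner; os.pathsep keeps it portable across platforms:
import os
print(os.environ['PATH'].split(os.pathsep))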