I placed a script in /etc/profile.d/
# default_dba.sh
if groups | grep -qw "dba"; then
    if [ "$USER" != "oracle" ]; then
        . /u00/scripts/oracle_alias
    fi
fi
The script sets aliases if the LDAP user is a member of the dba group.
This works.
The LDAP user starts a Python script.
As a last step the Python script spawns a new interactive bash shell:
subprocess.call(['/bin/bash', '-i'])
In that shell session the special aliases (created by the /u00/scripts/oracle_alias script) are missing; only the default OS aliases are there.
Can I fix this without creating home directories for LDAP users?
The startup files (/etc/profile and the scripts under /etc/profile.d/) are read only when the shell is invoked as a login shell, e.g. bash -l.
See the INVOCATION section of man bash for more details.
Snippet from the man page:
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option,
it first reads and executes commands from the file /etc/profile, if that file exists.
After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order,
and reads and executes commands from the first one that exists and is readable.
The --noprofile option may be used when the shell is started to inhibit this behavior.
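So the fix can be as simple as starting the subshell as a login shell. A minimal sketch of the adjusted call (assuming bash lives at /bin/bash, as in the question):
import subprocess
# --login (-l) makes bash read /etc/profile and hence /etc/profile.d/*,
# where default_dba.sh lives; -i keeps the session interactive.
# No per-user home directories are required for this to work.
subprocess.call(['/bin/bash', '-l', '-i'])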
Is it possible to enable bash alias expansion in Snakemake?
I'm writing a workflow that takes a config file for execution parameters.
Let's assume that in this config file the location of a program executable must be defined, and that it is passed to the shell of a rule.
config.yaml:
myProgram: /very/long/path/to/executable/build/myprogram
Snakefile:
rule runMyProgram:
    input: "inputfile.txt"
    output: "outputfile.txt"
    shell: "{config[myProgram]} -i {input} -o {output}"
But I would also like to allow calling the program directly, with the config.yaml:
myProgram: myprogram
In this case, if the user has set an alias instead of adding myprogram to the $PATH, the shell does not recognize the alias and the rule ends with an error. When testing shopt expand_aliases within the Snakemake shell, I see it is turned off, but adding shopt -s expand_aliases to the shell directive of the rule doesn't do the trick either.
I also tried adding the shopt command to the shell.prefix(), with no success, obviously, as it just adds the prefix to the shell.
While most would agree that in this minimal example the user should just add the executable's location to $PATH, there are circumstances where a user would, e.g., use different program versions under different aliases.
Or, phrased differently: I would like the workflow not to crash if a user configures an alias instead of adding the program to $PATH.
Hence I was wondering whether there is another way to turn on expand_aliases globally for Snakemake.
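Since a Snakefile is plain Python, one hedged workaround sketch is to resolve the configured name against $PATH up front instead of relying on alias expansion (which the non-interactive shells that rules run in skip anyway); the error message below is illustrative, not Snakemake API:
import shutil
# Fail early with a clear message if the configured program is neither an
# existing executable path nor resolvable via $PATH; shell aliases are
# invisible to the non-interactive shell that runs the rule.
prog = config["myProgram"]
if shutil.which(prog) is None:
    raise ValueError(f"myProgram '{prog}' not found on $PATH; "
                     "use a full path instead of a shell alias")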
I want to run a Python script on boot of Ubuntu 14.04 LTS.
My rc.local file is as follows:
sudo /home/hduser/morey/zookeeper-3.3.6/bin/zkServer.sh start
echo "test" > /home/hduser/test3
sudo /home/hduser/morey/kafka/bin/kafka-server-start.sh /home/hduser/morey/kafka/config/server.properties &
echo "test" > /home/hduser/test1
/usr/bin/python /home/hduser/morey/kafka/automate.py &
echo "test" > /home/hduser/test2
exit 0
Everything except my Python script is working fine, even the echo statement after the Python script, but the Python script itself doesn't seem to run.
My Python script is as follows:
from subprocess import Popen, PIPE, STDOUT

# Start sbt and answer its first interactive prompt with "1";
# communicate() also closes stdin and waits for the process to exit.
cmd = ["sudo", "./sbt", "project java-examples", "run"]
proc = Popen(cmd, shell=False, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
proc.communicate(input='1\n')
which works perfectly fine if executed individually.
I went through the following questions: link
I did a lot of research but couldn't find a solution.
Edit: the echo statements are for testing purposes only. The second actual command (not counting the echo statements) starts a server which keeps running, and the Python script likewise starts a listener that runs in an infinite loop, if this is any help.
The Python script tries to launch ./sbt. Are you sure of what the current directory is when rc.local runs? The rule: always use absolute paths in system scripts.
Do not run the Python script in the background; run it in the foreground, and do not exit from its parent script. Better: call another script from rc.local that does all the work of the echo statements and the script launching.
Run that script from rc.local, not in the background (no &).
You do not need sudo, as rc.local is run as root.
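A minimal sketch of the Python side with those points applied (the project directory below is an assumption; substitute the real location of sbt):
from subprocess import Popen, PIPE, STDOUT

# rc.local runs as root from an unpredictable working directory, so use
# an absolute path and set cwd explicitly; sudo is unnecessary here.
cmd = ["/home/hduser/project/sbt", "project java-examples", "run"]
proc = Popen(cmd, cwd="/home/hduser/project",
             stdout=PIPE, stdin=PIPE, stderr=STDOUT)
proc.communicate(input='1\n')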
If you want to run a Python script at system boot, here is an alternate solution which I have used.
1: Create an sh file, e.g. sample.sh, and paste in the following content:
#!/bin/bash
clear
python yourscript.py
2: Now add a cron job that runs at reboot. On Linux:
a: Run crontab -e (install cron first if needed: sudo apt-get install cron)
b: Add the line @reboot /full/path/to/sample.sh > /home/path/error.log 2>&1
Then restart your machine.
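For example, a complete crontab entry might look like this (the paths are illustrative):
@reboot /home/hduser/sample.sh > /home/hduser/error.log 2>&1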
I'm trying to run a Python script from Python using the subprocess module, executing commands sequentially.
I'm doing this on UNIX, but before I launch Python in a new shell I need to execute a command (ppack_gnu) that sets up the environment for Python (and prints some lines to the console).
The problem is that when I run this command from Python subprocess, the process hangs and waits for the command to finish, whereas when I run it in the UNIX console it jumps to the next line automatically.
Examples below:
From UNIX:
[user1#1:~]$ ppack_gnu; echo 1
You appear to be in prefix already (SHELL=/opt/soft/cdtng/tools/ppack_gnu/3.2/bin/bash)
1
[user1#1:~]$
From PYTHON:
processes.append(Popen("ppack_gnu; echo 1", shell=True, stdin=subprocess.PIPE))
This prints
Entering Gentoo Prefix /opt/soft/cdtng/tools/ppack_gnu/3.2 - run 'bash -l' to source full bash profiles
in the Python console and then hangs...
Popen() does not hang: it returns immediately while ppack_gnu may be still running in the background.
The fact that you see the shell prompt does not mean that the command has returned:
⟫ echo $$
9302 # current shell
⟫ bash
⟫ echo $$
12131 # child shell
⟫ exit
⟫ echo $$
9302 # current shell
($$ -- PID of the current shell)
Even in bash, you can't change environment variables of the parent shell (without gdb or similar hacks); that is why the source command exists.
stdin=PIPE suggests that you want to pass commands to the shell started by ppack_gnu. Perhaps you need to add process.stdin.flush() after the corresponding process.stdin.write(b'command\n').
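A hedged sketch of that pattern (the command fed to the child shell is illustrative):
from subprocess import Popen, PIPE

proc = Popen("ppack_gnu", shell=True, stdin=PIPE)
# Write commands for the shell that ppack_gnu starts, and flush so the
# child actually sees them instead of them sitting in Python's buffer.
proc.stdin.write(b'echo 1\n')
proc.stdin.flush()
proc.stdin.close()  # EOF lets the child shell terminate
proc.wait()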
I have a Python script named sudoserver.py that I start in a Cygwin shell by doing:
python sudoserver.py
I am planning to create a shell script (I don't know yet whether it will be a Windows batch script or a Cygwin script) that needs to know if this sudoserver.py Python script is running.
But if I do this in Cygwin (while sudoserver.py is running):
$ ps -e | grep "python" -i
11020 10112 11020 7160 cons0 1000 00:09:53 /usr/bin/python2.7
and in Windows shell:
C:\>tasklist | find "python" /i
python2.7.exe 4344 Console 1 13.172 KB
So it seems I have no info about the .py file being executed. All I know is that Python is running something.
The -l (long) option of ps on Cygwin does not show my .py file, and neither does the /v (verbose) switch of tasklist.
What would be the appropriate way (a Windows or a Cygwin shell would be enough; both if possible would be fine) to programmatically find out whether a specific Python script is executing right now?
NOTE: the Python process could be started by another user, even one not logged in to a GUI session, and even by the privileged "SYSTEM" Windows account.
It is a limitation of the platform.
You probably need to use some low level API to retrieve the process info. You can take a look at this one: Getting the command line arguments of another process in Windows
You can probably use the win32api module to access these APIs.
(Sorry, I'm away from a Windows PC, so I can't try it out.)
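As an aside, a hedged cross-platform sketch using the third-party psutil package (assumptions: psutil is installed, and the caller has enough privilege to see other users' processes, which may require elevation):
import psutil

# Scan all processes for a command line that mentions sudoserver.py.
for p in psutil.process_iter(['pid', 'cmdline']):
    cmdline = p.info['cmdline'] or []
    if any('sudoserver.py' in arg for arg in cmdline):
        print('sudoserver.py is running, pid', p.info['pid'])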
Since sudoserver.py is your script, you could modify it to create a file in an accessible location when it starts and to delete the file when it finishes. Your shell script can then check for the existence of that file to find out if sudoserver.py is running.
(EDIT)
Thanks to the commenters who suggested that while the presence or absence of the file is an unreliable indicator, a file's lock status is not.
I wrote the following Python script testlock.py:
f = open("lockfile.lck", "w")
for i in range(10000000):
    print(i)
f.close()
... and ran it in a Cygwin console window on my Windows PC. At the same time, I had another Cygwin console window open in the same directory.
First, after I started testlock.py:
Simon@Simon-PC ~/test/python
$ ls
lockfile.lck testlock.py
Simon@Simon-PC ~/test/python
$ rm lockfile.lck
rm: cannot remove `lockfile.lck': Device or resource busy
... then after I had shut down testlock.py by using Ctrl-C:
Simon@Simon-PC ~/test/python
$ rm lockfile.lck
Simon@Simon-PC ~/test/python
$ ls
testlock.py
Simon@Simon-PC ~/test/python
$
Thus, it appears that Windows locks the file while the testlock.py script is running and unlocks it when the script is stopped with Ctrl-C. The equivalent test can be carried out in Python with the following script:
import os
try:
    os.remove("lockfile.lck")
except OSError:
    print("lockfile.lck in use")
... which correctly reports:
$ python testaccess.py
lockfile.lck in use
... when testlock.py is running but successfully removes the locked file when testlock.py has been stopped with a Ctrl-C.
Note that this approach works in Windows but it won't work in Unix because, according to the Python documentation:
On Windows, attempting to remove a file that is in use causes
an exception to be raised; on Unix, the directory entry is removed
but the storage allocated to the file is not made available until
the original file is no longer in use.
A platform-independent solution using an additional Python module FileLock is described in Locking a file in Python.
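For completeness, a hedged sketch of that idea (assuming the py-filelock package; note that both sudoserver.py and the checker would have to take the lock through FileLock for this to work):
from filelock import FileLock, Timeout

lock = FileLock("lockfile.lck")
try:
    # Non-blocking probe: either we get the lock immediately, or someone
    # (e.g. a running sudoserver.py) already holds it.
    with lock.acquire(timeout=0):
        print("not running")
except Timeout:
    print("lockfile.lck is held; the script appears to be running")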
(FURTHER EDIT)
It appears that the OP didn't necessarily want a solution in Python. An alternative would be to do this in bash. Here is testlock.sh:
#!/bin/bash
flock lockfile.lck sequence.sh
The script sequence.sh just runs a time-consuming operation:
#!/bin/bash
for i in `seq 1 1000000`;
do
echo $i
done
Now, while testlock.sh is running, we can test the lock status using another variant of the flock command:
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Could not acquire lock
$ flock -n lockfile.lck echo "Lock acquired" || echo "Could not acquire lock"
Lock acquired
$
The first two attempts to lock the file failed because testlock.sh was still running and so the file was locked. The last attempt succeeded because testlock.sh had finished running.
I have a Python script at /usr/share/myscript.py
I want to execute this script from a cron job which, if the script produces any errors, emails those errors to a specific user (and does not inform the root account).
I do not want to override any of the cron settings: other cron jobs should still notify root.
Currently I am using a shell wrapper, which should pipe errors to a log file and then email it to me. The cron job executes this .sh file rather than the Python script directly.
#!/bin/sh
python /usr/share/scripts/myscript.py 2>&1 > /home/me/logs/myscript.log
test -s /home/me/logs/myscript.log && cat /home/me/logs/myscript.log | mail -s "myscript errors" bob@myplace.com
In production, if nothing goes wrong, the script executes correctly and nobody is emailed. However, if there is an error in the execution of the Python script, it is still emailed to the root user by cron.
How should I change the .sh script to suppress this and report to me instead?
This command performs the stderr redirection in the wrong order: 2>&1 points stderr at wherever stdout is aimed at that moment (still cron's mail pipe), and only afterwards redirects stdout to the log file:
python /usr/share/scripts/myscript.py 2>&1 > /home/me/logs/myscript.log
Instead you need to redirect stdout first and stderr second, like so:
python /usr/share/scripts/myscript.py > /home/me/logs/myscript.log 2>&1
Also, have you appended >/dev/null 2>&1 to the end of the wrapped script call in crontab?
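That is, a crontab entry along these lines (the schedule and wrapper path are illustrative):
# the wrapper mails errors itself, so silence cron's own notification
0 * * * * /home/me/bin/myscript_wrapper.sh >/dev/null 2>&1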