So I am having trouble with gmsh.
Direct execution works fine:
!gmsh -3 -algo meshadapt tmp_0.geo -o SFM.msh
While execution from code fails:
import subprocess

try:
    out = subprocess.check_output(
        ["gmsh", "gmsh -3 -algo meshadapt tmp_0.geo -o SFM.msh"],
        stderr=subprocess.STDOUT
    ).strip().decode('utf8')
except subprocess.CalledProcessError as e:
    out = e.output
print(out)
with:
b"--------------------------------------------------------------------------\n[[23419,1],0]: A high-performance Open MPI point-to-point messaging module\nwas
unable to find any relevant network interfaces:\n\nModule: OpenFabrics
(openib)\n Host: 931136e3f6fe\n\nAnother transport will be used
instead, although this may result in\nlower
performance.\n--------------------------------------------------------------------------\n\x1b[1m\x1b[31mFatal : Can't open display: (FLTK internal
error)\x1b[0m\n--------------------------------------------------------------------------\nMPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD \nwith errorcode
1.\n\nNOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.\nYou may or may not see output from other processes,
depending on\nexactly when Open MPI kills
them.\n--------------------------------------------------------------------------\n"
So how do I emulate Jupyter's ! execution from Python 3 code?
@Hristo:
_=/opt/conda/bin/jupyter
SHLVL=1
PATH=/opt/conda/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=931136e3f6fe
HOME=/root
LC_ALL=C.UTF-8
PWD=/
JPY_PARENT_PID=1
LANG=C.UTF-8
TERM=xterm-color
CLICOLOR=1
PAGER=cat
GIT_PAGER=cat
MPLBACKEND=module://ipykernel.pylab.backend_inline

env DISPLAY=:0 gmsh -3 -algo meshadapt tmp_0.geo -o SFM.msh
@Gilles:
Same result.
It seems the root cause is that the $DISPLAY environment variable is not set.
First, make sure $DISPLAY is set when your Jupyter notebook starts.
You might also have to direct mpirun to export it to all the MPI tasks.
Starting with Open MPI 3.0.0, you can achieve this with
export OMPI_MCA_mca_base_env_list=DISPLAY
before starting your Jupyter notebook.
By the way, does your application really need to open the X display?
If it does not do any graphics, it could be adjusted to work correctly when no display is available.
[ADDENDUM]
Another possibility is that gmsh thinks a display is available because DISPLAY is set, so it tries to open it and fails. You can try unsetting this environment variable and see how things go, both from the command line (e.g. interactive mode) and via the notebook (e.g. batch mode).
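For reference, a minimal sketch of that batch-mode test from Python, with the command split into separate list items (rather than the single string shown in the question) and DISPLAY removed from the child's environment; whether gmsh then runs fully headless is an assumption to verify:

import os
import subprocess

# Pass each argument as its own list item so gmsh sees them individually.
cmd = ["gmsh", "-3", "-algo", "meshadapt", "tmp_0.geo", "-o", "SFM.msh"]

# Copy the environment and drop DISPLAY to force no-display behaviour.
env = os.environ.copy()
env.pop("DISPLAY", None)

try:
    out = subprocess.check_output(cmd, stderr=subprocess.STDOUT, env=env)
except subprocess.CalledProcessError as e:
    out = e.output
print(out.decode('utf8'))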
Related
I am using an anaconda environment both for the python code and the terminal.
When I execute the program in the shell (Windows CMD) with the environment activated, ogr2ogr returns the correct output for the given parameters. The tool ogr2ogr was installed via a conda package.
But when I execute my python code, ogr2ogr returns an error output. I thought it might be due to different installations being used because of different environments (without my knowledge), but this is only a guess.
The python code goes as follows:
from pathlib import Path
from subprocess import check_call, STDOUT
...
file_path = Path(file_name)
destination = str(file_path.with_suffix(".gpkg"))
command = f"ogr2ogr -f GPKG -s_srs EPSG:25833 -t_srs EPSG:25833 {destination} GMLAS:{file_name} -oo REMOVE_UNUSED_LAYERS=YES"
check_call(command, stderr=STDOUT, shell=True)
ogr2ogr translates a file into another format. That does happen here, but when I open the resulting file I can see it was not done 100% correctly.
When I copy the value of the string command into the shell and execute it there, the conversion is done correctly!
How can I correct the behaviour of using subprocess.check_call?
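One way to test the guess about different installations, as a sketch: shutil.which shows which ogr2ogr the Python process would resolve through its PATH, which you can compare against the output of "where ogr2ogr" in the activated shell (this is only a diagnostic suggestion, not a confirmed cause):

import os
import shutil

# Which ogr2ogr binary will subprocess resolve from this interpreter's PATH?
print(shutil.which("ogr2ogr"))
# The PATH itself, for comparison with the activated shell's PATH.
print(os.environ.get("PATH"))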
I have a strange issue that comes and goes randomly and I really can't figure out when and why.
I am running a snakemake pipeline like this:
conda activate $myEnv
snakemake -s $snakefile --configfile test.conf.yml --cluster "python $qsub_script" --latency-wait 60 --use-conda -p -j 10 --jobscript "$job_script"
I installed snakemake 5.9.1 (also tried downgrading to 5.5.4) within a conda environment.
This works fine if I just run this command, but when I qsub this command to the PBS cluster I'm using, I get an error. My qsub script looks like this:
#PBS stuff...
source ~/.bashrc
hostname
conda activate PGC_de_novo
cd $workDir
snakefile="..."
qsub_script="pbs_qsub_snakemake_wrapper.py"
job_script="..."
snakemake -s $snakefile --configfile test.conf.yml --cluster "python $qsub_script" --latency-wait 60 --use-conda -p -j 10 --jobscript "$job_script" >out 2>err
And the error message I get is:
...
Traceback (most recent call last):
File "/path/to/pbs_qsub_snakemake_wrapper.py", line 6, in <module>
from snakemake.utils import read_job_properties
ImportError: No module named snakemake.utils
Error submitting jobscript (exit code 1):
...
So it looks like for some reason my cluster script doesn't find snakemake, although snakemake is clearly installed. As I said, this problem keeps coming and going. It'll stay for a few hours, then go away for no apparent reason. I guess this indicates an environment problem, but I really can't figure out what it is, and I've run out of ideas. I've tried:
different conda versions
different snakemake versions
different nodes on the cluster
ssh to the node it just failed on and try to reproduce the error
but nothing. Any ideas where to look? Thanks!
Following @Manavalan Gajapathy's advice, I added print(sys.version) commands both to the snakefile and to the cluster script, and in both cases got a python version (2.7.5) different from the one in the activated environment (3.7.5).
To cut a long story short: for some reason, when I activate the environment within a PBS job, the environment path is added to $PATH only after /usr/bin, which results in /usr/bin/python being used (and it does not have the snakemake package). When the env is activated locally, the env path is added to the beginning of $PATH, so the right python is used.
I still don't understand this behavior, but at least I could work around it by changing $PATH. I guess this is not a very elegant solution, but it works for me.
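For reference, a small sketch of that diagnostic, extended to print the PATH order as well (nothing here is specific to snakemake; it just shows which interpreter runs and why):

import os
import sys

# Which interpreter is actually executing this script, and its version.
print(sys.executable)
print(sys.version)
# The PATH entries in resolution order; inside the PBS job the conda env
# directory appeared after /usr/bin, which explains the wrong python.
for entry in os.environ["PATH"].split(os.pathsep):
    print(entry)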
A possibility could be that some cluster nodes don't find the path to the snakemake package, so when a job is submitted to those nodes you get the error.
I don't know if/how that could happen, but if that is the case you could find the incriminated nodes with something like:
for node in $(pbsnodes -a | grep -v '^ ')   # one hostname per line; exact invocation may vary
do
    echo $node
    ssh $node 'python -c "from snakemake.utils import read_job_properties"'
done
(The loop iterates through the available nodes - I don't have the exact pbsnodes syntax at hand, but hopefully you get the idea.) This at least would narrow down the problem a bit...
Currently using tox to test a python package, and using a python library (chromedriver-binary) to install chromedriver.
This library creates a script (chromedriver-path) which, when called, outputs the directory where chromedriver is installed. The usual way to use it is to run:
export PATH=$PATH:`chromedriver-path`
I've tried the following without success in tox.ini
setenv=
PATH = {env:PATH}{:}`chromedriver-path`
This errors as expected:
FileNotFoundError: [Errno 2] No such file or directory: 'chromedriver': 'chromedriver'
This implies that the command substitution in setenv is never actually run by a shell.
commands=
export PATH=$PATH:`chromedriver-path`
This fails with:
ERROR: InvocationError for command could not find executable export
How do I make this work?
Commands can't change their parent processes' environment variables, and thus can't change the environment variables of subsequent commands launched by forking that parent; they can only set environment variables for themselves or their own children.
If you were able to collect the output of chromedriver-path before starting tox, this would be moot. If it's only available in an environment tox itself creates, then things get a bit more interesting.
One approach you can follow is to wrap the commands that need this path entry in a shim that adds it. Consider changing:
commands=
py test ...
to:
commands=
sh -c 'PATH=$PATH:$(chromedriver-path); exec "$@"' _ py test ...
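Here the _ fills the shell's $0 slot, and exec "$@" replaces the wrapper shell with the remaining arguments (py test ...), which then run with the extended PATH already in place.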
I can use the "nginx -s reload" command to reload nginx from the shell.
But when I run os.system("nginx -s reload") from Python, it produces an error:
/usr/local/bin/nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
I have already installed pcre, so what is causing this error? Is there some magic problem?
For running such commands in python scripts it's better to use the subprocess library.
Try this code instead of yours:
import subprocess
subprocess.call('whatever command you want to run in the terminal', shell=True)
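For the command in question, that would be, for example:
subprocess.call('nginx -s reload', shell=True)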
Good luck!
Hello, I recommend that you first run this validation before sending the reload, so you avoid headaches:
import subprocess
# nginx -t validates the configuration; getoutput captures stdout and stderr.
reinicioNGINX = subprocess.getoutput('nginx -t')
if 'nginx: the configuration file /etc/nginx/nginx.conf syntax is ok' in reinicioNGINX:
    _command_restart  # placeholder: safe to reload nginx here
else:
    _command_avoid_restart  # placeholder: skip the reload, the config is broken
I'm trying to call a self-defined command line function in Python. I defined my function using AppleScript in ~/.bash_profile as follows:
function vpn-connect {
/usr/bin/env osascript <<-EOF
tell application "System Events"
tell current location of network preferences
set VPN to service "YESVPN" -- your VPN name here
if exists VPN then connect VPN
repeat while (current configuration of VPN is not connected)
delay 1
end repeat
end tell
end tell
EOF
}
And when I tested $ vpn-connect in bash, it worked fine. My VPN connection is good.
So I created vpn.py which has following code:
import os
os.system("echo 'It is running.'")
os.system("vpn-connect")
I ran it with python vpn.py and got the following output:
vpn Choushishi$ python vpn.py
It is running.
sh: vpn-connect: command not found
This proves that calling a self-defined function is somehow different from calling the ones predefined by the system. I have looked into pydoc os but couldn't find useful information.
One way would be to read ~/.bash_profile first. As @anishsane pointed out, you can do this:
import subprocess
# In Python 3, communicate() expects bytes on a binary stdin pipe.
vpn = subprocess.Popen(["bash"], stdin=subprocess.PIPE)
vpn.communicate(b"source /Users/YOUR_USER_NAME/.bash_profile; vpn-connect")
or with os.system
os.system('bash -c "source /Users/YOUR_USER_NAME/.bash_profile;vpn-connect"')
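The explicit source is needed in both variants because bash -c starts a non-interactive shell, which does not read ~/.bash_profile on its own, so the vpn-connect function would otherwise be undefined.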
Or try
import subprocess
subprocess.call(['vpn-connect'], shell = True)
and try
import os
os.system('bash -c vpn-connect')
according to http://linux.die.net/man/1/bash