I am deploying a snakemake workflow on a PBS cluster (PBSpro). I'm running into a problem where shell commands run on cluster nodes are failing due to missing arguments/operands to the shell command. Below is a minimal example that can reproduce the behavior I'm seeing:
rule all:
    input: 'foo.txt'

rule run_foo:
    output: 'foo.txt'
    shell: 'touch {output}'
Run from the command line as:
snakemake all
The workflow runs to completion without any errors. However, run from the command line as:
snakemake all --jobs 1 --cluster "qsub -l select=1:ncpus=1 -l walltime=00:05:00 -A $PROJECT -q share -j oe"
The workflow fails and produces a cluster log such as this:
Error: Image not found
Error in job run_foo while creating output file foo.txt.
RuleException:
CalledProcessError in line 7 of /glade2/scratch2/jhamman/Snakefile:
Command 'touch foo.txt' returned non-zero exit status 1.
File "/glade2/scratch2/jhamman/Snakefile", line 7, in __rule_run_foo
File "/glade/u/home/jhamman/anaconda/envs/storylines/lib/python3.6/concurrent/futures/thread.py", line 56, in run
Exiting because a job execution failed. Look above for error message
What appears to be happening is that the arguments to the command (in this case touch) are not applied, despite being listed in the traceback.
Is there a trick to passing shell commands to a cluster via snakemake that I am missing?
As it turns out, I was using a fairly old version of snakemake. For some reason, conda had pinned my version. In any event, a manual upgrade to the latest stable version of snakemake seems to have resolved this issue.
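For anyone hitting the same thing, here is a small sketch of how you might check whether conda has pinned a package; the env path used below is a guess based on the traceback above, so adjust it to your own environment:

```python
import os

def find_pins(pinned_path, package="snakemake"):
    """Return the lines in a conda `conda-meta/pinned` file that mention `package`."""
    if not os.path.exists(pinned_path):
        return []
    with open(pinned_path) as f:
        return [line.strip() for line in f if package in line]

# The env path is an assumption taken from the traceback; change it to yours.
print(find_pins(os.path.expanduser(
    "~/anaconda/envs/storylines/conda-meta/pinned")))
```

If this prints a pin for snakemake, removing that line (or editing the pin) lets conda upgrade to the latest stable version.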
Whenever I try to run an object detection program in PyCharm, the following error occurs (see the trace). Could you please help me figure out how to fix it?
C:\Users\Dell\venv\Scripts\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.4\helpers\pydev\pydevd.py" --multiproc --qt-support=auto --client 127.0.0.1 --port 11093 --file C:/Users/Dell/Desktop/image.py
pydev debugger: process 2648 is connecting
Connected to pydev debugger (build 183.5429.31)
usage: image.py [-h] -i IMAGE -p PROTOTXT -m MODEL [-c CONFIDENCE]
image.py: error: the following arguments are required: -i/--image, -p/--prototxt, -m/--model
Process finished with exit code 2
Actually I am trying to run the code from the following page:
https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/
My question is: where should I copy and paste the following?
python deep_learning_object_detection.py \
--prototxt MobileNetSSD_deploy.prototxt.txt \
--model MobileNetSSD_deploy.caffemodel --image images/example_01.jpg
In PyCharm, in the Run menu, look for Edit configurations...
Each time you run a new script, a run configuration is created for it and it is here that you can provide command line parameters in the Parameters: box.
You probably only want to paste the parameters section there, not the script name (and without the line-continuation backslashes), so:
--prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel --image images/example_01.jpg
Another issue you may run into is what working directory your script needs to run in. You can change it from the same dialog, under Working directory:. You'll find that you'll rarely need to change any of the other fields in this dialog, although I would recommend giving it a name that's sensible to you under Name: - by default it's named after the script file it is running.
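To see why exactly those parameters are needed, here is a minimal sketch of the argument parsing the tutorial script performs. The flag names are taken from the usage line in the error above; the 0.2 confidence default is an assumption:

```python
import argparse

# Rebuild the parser implied by:
#   usage: image.py [-h] -i IMAGE -p PROTOTXT -m MODEL [-c CONFIDENCE]
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--image", required=True)
parser.add_argument("-p", "--prototxt", required=True)
parser.add_argument("-m", "--model", required=True)
parser.add_argument("-c", "--confidence", type=float, default=0.2)  # default assumed

# This list is exactly what goes in PyCharm's Parameters: box.
args = parser.parse_args([
    "--prototxt", "MobileNetSSD_deploy.prototxt.txt",
    "--model", "MobileNetSSD_deploy.caffemodel",
    "--image", "images/example_01.jpg",
])
print(args.image)  # images/example_01.jpg
```

If any of the three required flags is missing, argparse exits with code 2 and prints exactly the error shown in the trace.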
I am trying to set up Elasticsearch with Python, and as part of that I am trying to do a search on YouTube data (as a sample).
I am using Windows 10 X64 machine with elasticsearch 6.5.4.
When I run the following command, I am getting an error:
PS C:\Users\XXXXX\elasticsearch-6.5.4\bin> .\elasticsearch cluster.name=youtube node.name=video
starts elasticsearch
- Option Description
------ -----------
-E <KeyValuePair> Configure a setting
-V, --version Prints elasticsearch version information and exits
-d, --daemonize Starts Elasticsearch in the background
-h, --help show help
-p, --pidfile <Path> Creates a pid file in the specified path on start
-q, --quiet Turns off standard output/error streams logging in console
-s, --silent show minimal output
-v, --verbose show verbose output
ERROR: Positional arguments not allowed, found [cluster.name=youtube, node.name=video]
It is mentioned in the usage/help info that you need to pass -E to set configurations:
-E Configure a setting
It is also mentioned in their Getting Started / Installation steps:
As mentioned previously, we can override either the cluster or node
name. This can be done from the command line when starting
Elasticsearch as follows:
./elasticsearch -Ecluster.name=my_cluster_name -Enode.name=my_node_name
I have generated a Python script that opens a deployment config_file.yaml, modifies some parameters, and saves it again, using PyYAML. This Python script will be executed on the master node of a Kubernetes cluster.
Once the new file is generated, my intention is to execute
kubectl apply -f config_file.yaml
in the python script to apply the modifications to the deployment.
I have been reading how to do it using the Kubernetes Python client, but it seems it is not prepared to execute kubectl apply.
So the other option is to create a bash script and execute it from the Python script.
Bash script:
#!/bin/bash
sudo kubectl apply -f config_file.yaml
I gave it execute permission:
chmod +x shell_scipt.sh
Python script:
import subprocess
subprocess.call(['./shell_script.sh'])
But an error appears:
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
I don't know how to resolve this error. I have tried giving permissions to the bash script, but nothing worked.
I do not know anything about Kubernetes, but I think I might be able to help.
I am basically suggesting that you run the command directly from the Python script, instead of having Python run a bash script which runs a command.
import os

command = 'kubectl apply -f config_file.yaml'
password = 'yourpassword'
# -S makes sudo read the password from stdin
p = os.system('echo %s | sudo -S %s' % (password, command))
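If you do not need sudo (for example, kubectl is already configured for your user), a Python 3 subprocess sketch avoids the shell and its quoting issues entirely; apply_manifest and its binary parameter are hypothetical names used here for illustration:

```python
import subprocess

def apply_manifest(path, binary="kubectl"):
    """Run `<binary> apply -f <path>` directly; the list form of the command
    avoids invoking a shell, so no quoting or injection issues arise."""
    result = subprocess.run(
        [binary, "apply", "-f", path],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```

You could then call apply_manifest("config_file.yaml") right after your script writes the modified YAML, with no intermediate bash script to chmod.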
If I understand correctly, you are using Python to dynamically modify static yaml files. If this is the case, I would recommend using helm, which is perfect for making static yaml files dynamic :-)
How are you running the Python script?
I think you are running the Python script as a non-sudo user. Try running the Python script as a sudo user; that way your subprocess will have access to that file.
If this fixes your issue, let me know.
I am trying to kill a process in Robot Framework. Although the log says that the process is killed, I am still able to see the command prompt invoked by the Process library.
Is there any way to kill the invoked command prompt in Suite Teardown?
*** Settings ***
Library    Process
Suite Setup    Generic Suite Setup
Suite Teardown    Terminate All Processes    kill=True

*** Test Cases ***
login

*** Keywords ***
Generic Suite Setup
    # This is invoking cmd
    # When I run this, I got the error mentioned below
    Run Process    appium -p 4723
    Run Process    appium -p 4750
    # I tried to include cmd; no error, but I can't see the cmd getting invoked
    Run Process    cmd appium -p 4750
My Python version: 2.7.14
pybot version: 3.0.2
After removing "start" and "cmd" I get the error:
Parent suite setup failed:
WindowsError: [Error 2] The system cannot find the file specified
Appium path is set in environment variables
When you use Start Process, each argument that you would use on a command line needs to be an argument in robot. For example, if you would type appium -p 4723 on the command line, then in robot you would do:
Start process  appium  -p  4723
(note: there are two spaces between "process", "appium", "-p", and "4723")
When you do this, robot will look through the folders in your PATH environment variable in order to find a program named "appium" (or "appium.exe" on windows). If you get the error "cannot find the file specified" that usually means that the program you're trying to run isn't in a folder in your PATH. It could also mean that the program isn't installed, or that you misspelled the app name, but I'm assuming neither of those are true in this case.
The simplest solution is to find where the appium executable is, and then use the full and complete path as the first argument to Run Process (e.g.: Run Process  C:/the/path/to/appium.exe  -p  4723).
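As a quick way to find that full path, Python's standard library can search the same PATH folders the shell does; note shutil.which requires Python 3.3+, so run it under a Python 3 interpreter even if your robot suite uses 2.7:

```python
import shutil

# shutil.which walks the folders in your PATH environment variable,
# exactly like the shell (and Robot Framework) do when locating a program.
appium_path = shutil.which("appium")
print(appium_path)  # None if appium is not on PATH
```

If this prints None, that confirms the "cannot find the file specified" error: the appium executable's folder is not on PATH for the environment robot runs in.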
Is it possible to have a context manager that just keeps the state of the previous run execution? In code:
EDIT: Not a working solution, something I expected
with sudo('. myapp'):  # this runs a few things and sets many env variables
    run('echo $ENV1')  # $ENV1 isn't set because the sudo command ran independently
I am trying to run several commands; how can I keep state between each command?
I tried using the prefix context manager, but it doesn't work with the shell_env context manager. When running this code:
with shell_env(ENV1="TEST"):
    with prefix(". myapp"):
        run("echo $ENV2")
I expected my env variable to be set, then my application to run (which should set ENV2), but the prefix runs before the shell_env?
I don't really understand the question asked here; could you give a little more detail on what you are trying to accomplish? However, I tried the same thing you did (with sudo('. myapp')), which threw an AttributeError: __exit__ exception.
Finally, I tried using prefix to source the bash file and executing a sudo command line within this context, which works just fine.
import fabric.api as fab

@fab.task
def trythis():
    with fab.prefix('. testenv'):
        fab.sudo('echo $ENV1')
When executing the task I get the following output.
[host] Executing task 'trythis'
[host] sudo: echo $ENV1
[host] out: sudo password:
[host] out: testing
[host] out:
Done.
Disconnecting from host... done.
with shell_env(ENV1="TEST"):
    with prefix(". myapp"):
        run("echo $ENV2")
I expected my env variable to be set, then my application to run (which should set ENV2), but the prefix runs before the shell_env?
Given fabric's documentation, the code you've written will generate:
export ENV1="TEST" && . myapp && echo $ENV2
Given that myapp creates ENV2, your code should work the way you want it to, though not all shells interpret the dot operator the same way; using source is always a better idea:
with shell_env(ENV1="TEST"):
    with prefix("source myapp"):
        run("echo $ENV2")
You may want to consider a bug in myapp though, and/or double-check that all paths and working directories are correctly set.
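To see why chaining commands with && inside a single shell invocation preserves state (which is what fabric's prefix/shell_env composition does under the hood), here is a small standalone sketch using subprocess; the variable names are invented for illustration:

```python
import subprocess

# The whole chain runs in ONE shell process, so a variable exported by an
# earlier command is visible to later ones; running each command in its own
# shell (as separate run() calls would) loses that state.
out = subprocess.run(
    ["bash", "-c", 'export ENV1="TEST" && ENV2="got-$ENV1" && echo $ENV2'],
    stdout=subprocess.PIPE, universal_newlines=True,
).stdout
print(out)  # got-TEST
```

This mirrors the export ENV1="TEST" && . myapp && echo $ENV2 line above: as long as myapp really exports ENV2, the final echo sees it.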