I have input files in this format:
dataset1/file1.bam
dataset1/file1_rmd.bam
dataset1/file2.bam
dataset1/file2_rmd.bam
I would like to run a command on each file and merge the results into a csv file.
The command returns an integer for a given filename:
$ samtools view -c -F1 dataset1/file1.bam
200
I would like to run the command and merge the output for each file into the following csv file:
file1,200,100
file2,400,300
I can do this without using expand in the input and appending with >>, but to avoid the possible file corruption that appending can cause, I would like to use >.
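For context, the append version I mean looks roughly like this (a sketch, not my exact rule; the touch() marker is just one way to give each append job its own target):

rule append_rc_results:
    input:
        in1="{param1}/{param2}.bam",
        in2="{param1}/{param2}_rmd.bam"
    output:
        touch("{param1}/{param2}.done")  # marker file, since the csv is shared
    shell:
        """
        RCT=$(samtools view -c -F1 {input.in1})
        RCD=$(samtools view -c -F1 {input.in2})
        # concurrent jobs all append to the same file, hence the corruption worry
        printf "{wildcards.param2},${{RCT}},${{RCD}}\n" >> {wildcards.param1}_merged.csv
        """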
I tried something like this, which did not work because of the wildcards.param2 part:
rule collect_rc_results:
    input:
        in1=expand("{param1}/{param2}.bam", param1=PARS1, param2=PARS2),
        in2=expand("{param1}/{param2}_rmd.bam", param1=PARS1, param2=PARS2)
    output: "{param1}_merged.csv"
    shell:
        """
        RCT=$(samtools view -c -F1 {input.in1})
        RCD=$(samtools view -c -F1 {input.in2})
        printf "{wildcards.param2},${{RCT}},${{RCD}}\n" > {output}
        """
I am aware that the input is no longer a single file but a list of files created by expand. Therefore I defined a function to work on a list input, but it is still not quite right:
import os

def get_read_count(infiles):
    return [os.popen("samtools view -c -F1 " + infile).read() for infile in infiles]

rule collect_rc_results:
    input:
        in1=expand("{param1}/{param2}.bam", param1=PARS1, param2=PARS2),
        in2=expand("{param1}/{param2}_rmd.bam", param1=PARS1, param2=PARS2)
    output: "{param1}_merged.csv"
    params:
        rc1=get_read_count("{param1}/{param2}.bam"),
        rc2=get_read_count("{param1}/{param2}_rmd.bam")
    shell:
        """
        printf "{wildcards.param2},{params.rc1},{params.rc2}\n" > {output}
        """
What is the best practice to use the wildcards inside the input file IDs when the input file list is defined with expand?
Edit:
I can get the expected result with expand if I use an external bash script, such as script.sh:

for INF in "$@"; do
    IN1=${INF}
    IN2=${IN1%.bam}_rmd.bam
    LIB=$(basename ${IN1%.*} | cut -d_ -f1)
    RCT=$(samtools view -c -F1 ${IN1})
    RCD=$(samtools view -c -F1 ${IN2})
    printf "${LIB},${RCT},${RCD}\n"
done
with

    params: script="script.sh"
    shell:
        """
        bash {params.script} {input.in1} > {output}
        """
but I am interested in learning if there is an easier way to get the same output only using snakemake.
Edit2:
I can also do it in the shell directive instead of a separate script:

rule collect_rc_results:
    input:
        in1=expand("{param1}/{param2}.bam", param1=PARS1, param2=PARS2),
        in2=expand("{param1}/{param2}_rmd.bam", param1=PARS1, param2=PARS2)
    output: "{param1}_merged.csv"
    shell:
        """
        for INF in {input.in1}; do
            IN1=${{INF}}
            IN2=${{IN1%.bam}}_rmd.bam
            LIB=$(basename ${{IN1%.*}} | cut -d_ -f1)
            RCT=$(samtools view -c -F1 ${{IN1}})
            RCD=$(samtools view -c -F1 ${{IN2}})
            printf "${{LIB}},${{RCT}},${{RCD}}\n"
        done > {output}
        """
and thus get the expected files. However, I would be interested to hear if anyone has a more elegant or "best practice" solution.
I don't think there is anything really wrong with your current solution, but I would be more inclined to use a run directive with Snakemake's shell function to perform your loop.
@bli's suggestion to use temp files is also good, especially if the intermediate step (samtools in this case) is long-running; you can get tremendous wall-clock gains from parallelizing those computations. The downside is that you will be creating lots of tiny files.
I noticed that your inputs are fully qualified through expand, but based on your example I think you want to leave param1 as a wildcard. Assuming PARS2 is a list, it should be safe to zip in1, in2 and PARS2 together. Here's my take (written but not tested).
rule collect_rc_results:
    input:
        in1=expand("{param1}/{param2}.bam", param2=PARS2, allow_missing=True),
        in2=expand("{param1}/{param2}_rmd.bam", param2=PARS2, allow_missing=True)
    output: "{param1}_merged.csv"
    run:
        with open(output[0], 'w') as outfile:
            for infile1, infile2, parameter in zip(input.in1, input.in2, PARS2):
                # read=True makes shell() return the command's stdout;
                # strip the trailing newline before writing
                RCT = shell(f'samtools view -c -F1 {infile1}', read=True).strip()
                RCD = shell(f'samtools view -c -F1 {infile2}', read=True).strip()
                outfile.write(f'{parameter},{RCT},{RCD}\n')
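For completeness, here is a rough sketch of the temp-file approach @bli suggested (untested; the rule and file names are my own inventions). Each samtools call becomes its own job that Snakemake can run in parallel, and the merge rule just concatenates the tiny intermediates:

rule count_one:
    input:
        in1="{param1}/{param2}.bam",
        in2="{param1}/{param2}_rmd.bam"
    output:
        temp("{param1}/{param2}.counts")  # tiny intermediate, deleted after the merge
    shell:
        """
        RCT=$(samtools view -c -F1 {input.in1})
        RCD=$(samtools view -c -F1 {input.in2})
        printf "{wildcards.param2},${{RCT}},${{RCD}}\n" > {output}
        """

rule collect_rc_results:
    input:
        expand("{param1}/{param2}.counts", param2=PARS2, allow_missing=True)
    output: "{param1}_merged.csv"
    shell:
        "cat {input} > {output}"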
I'm trying to figure out how to extract the read-group lane information from a fastq file and then use that string within my GATK AddOrReplaceReadGroups Snakemake rule (below). I've written a short Python function (at the top of the rule) to do this operation, but I can't quite figure out how to actually use it such that I can access the function's string output (e.g., "ABCHGSX35:2") for a given fastq file in my GATK shell command. Basically, I want to feed {params.PathToReadR1} into the extract_lane_info function, but can't figure out how to integrate them correctly.
I'm open to any solution to the problem as posed, or, if there's an entirely different and more efficient way to achieve the same result (getting the read-group lane info as a param value), that'd be great too. Thanks for any help.
import subprocess

def extract_lane_info(file_path):
    # Read the first header line of the gzipped fastq and keep fields 3 and 4
    # (flowcell ID and lane) of the colon-separated header
    elements = subprocess.run(["zcat", file_path], stdout=subprocess.PIPE).stdout.split(b"\n")[0].split(b":")[2:4]
    read_group = elements[0].decode().strip("'")
    string_after = elements[1].decode().strip("'")
    return read_group + ":" + string_after
rule AddOrReplaceReadGroups:
    input:
        "results/{species}/bwa/merged/{read}_pese.bam",
    output:
        "results/{species}/GATK/AddOrReplace/{read}.pese.bwa_mem.addRG.bam",
    threads: config["trimmmomatic"]["cpu"]
    log:
        "results/{species}/GATK/AddOrReplace/log/{read}_AddOrReplace.stderrs"
    message:
        "Running AddOrReplaceReadGroups on {wildcards.read}"
    conda:
        config["CondaEnvs"]
    params:
        ReadGroupID=lambda wildcards: final_Prefix_ReadGroup[wildcards.read],
        PathToReadR1=lambda wildcards: final_Prefix_RawReadPathR1[wildcards.read],
        LIBRARY=lambda wildcards: final_Prefix_ReadGroupLibrary[wildcards.read],
    shell:
        "gatk AddOrReplaceReadGroups -I {input} -O {output} -ID {params.ReadGroupID}.1 "
        "-LB {params.LIBRARY} -PL illumina -PU {input.lane_info}:{params.ReadGroupID} "
        "-SM {params.ReadGroupID} --SORT_ORDER 'coordinate' --CREATE_INDEX true 2>> {log}"
Basically, {params.PathToReadR1} would be "path/to/file.fastq.gz", and I want this file to be fed into the extract_lane_info function, with the function's output then used in the -PU section of the shell command (e.g., {params.lane_info}). I keep getting all kinds of errors as I've messed around with it, and am unsure how to move forward.
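For what it's worth, one plausible way to wire this together (an untested sketch, not a confirmed answer; it assumes extract_lane_info and final_Prefix_RawReadPathR1 are defined as above) is to call the function from a params lambda, reusing the same lookup that PathToReadR1 uses, and then reference {params.lane_info} in the shell command:

    params:
        ReadGroupID=lambda wildcards: final_Prefix_ReadGroup[wildcards.read],
        LIBRARY=lambda wildcards: final_Prefix_ReadGroupLibrary[wildcards.read],
        # params lambdas are evaluated before the job runs, so the helper
        # can inspect the fastq path looked up for this read
        lane_info=lambda wildcards: extract_lane_info(final_Prefix_RawReadPathR1[wildcards.read]),
    shell:
        "gatk AddOrReplaceReadGroups ... -PU {params.lane_info}:{params.ReadGroupID} ... 2>> {log}"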
I want to create a very simple pipeline for molecular dynamics simulation. The program (Amber) wants just 3 files as input and produces a lot of files, some of which I will need in the future. So my pipeline is extremely simple:
Check that *.in, *.prmtop and *.rst are in the folder (I guarantee there is only one file for each of these extensions) and warn if these files are not present
Run the shell command (based on the names of all input files)
Check that *.out, mden, mdinfo and *.nc were produced
That's all. It's the standard approach for the program I deal with: one folder, one task, short and simple file names based on file purpose, not on content.
I wrote a simple pipeline:
rule all:
    input: '{inp}.out'

rule amber:
    input:
        '{inp}.in',
        '{top}.prmtop',
        '{coord}.rst'
    output:
        '{inp}.out',
        'mden',
        'mdinfo',
        '{inp}.nc'
    shell:
        'pmemd.cuda'
        ' -O'
        ' -i {inp}.in'
        ' -o {inp}.out'
        ' -p {top}.prmtop'
        ' -c {coord}.rst'
        ' -r {inp}.rst'
        ' -x {inp}.nc'
        ' -ref {coord}.rst'
And it doesn't work.
All inputs in the all rule must be explicit. (Why? Why can't it be a regex or a wildcard expression? If I see *.out in the folder and the status code of the shell script was 0, that's all, the work is done.)
I must use all wildcards from the input in the output, but I want to use some of them only in the shell command or in other rules.
I must not expect files like mden with potentially "non-unique" names, because they could be changed by another task; but I know there will be only one task, and this is simply how my MD program works (yes, I know about Amber's -e and -inf flags, but that is over-complicating a simple task).
So I would like to decide whether it is worth using snakemake for this or not. It is a very simple task, but I have already spent several hours on it; I have seen a lot of documentation and a lot of examples that I cannot apply to my case. snakemake looks like exactly what I need, but I cannot express this simple task in general terms with the framework. I don't want to specify explicit filenames, because I would lose flexibility; I want to run hundreds of simple tasks automatically, where only the input files differ. I'm sure I just haven't figured out how to handle this framework yet. Maybe you can show me how? Thank you!
Hopefully this will put you in the right direction.
If I understand correctly, the input to snakemake is a folder containing the input files to amber. You know that this folder contains one .in file, one .prmtop file, and one .rst file but you don't know the full names of these files.
If you want snakemake to run on a single input folder, then you don't need wildcards at all and the script below should do.
import glob
import os

input_folder = config['amber_folder']

# We don't know the name of the input file. We only know it ends in '.in'
inp = glob.glob(os.path.join(input_folder, '*.in'))
assert len(inp) == 1
inp = inp[0]

name = os.path.splitext(os.path.basename(inp))[0]
output_folder = name + '_results'
out = os.path.join(output_folder, name + '.out')

rule all:
    input:
        out

rule amber:
    input:
        inp=inp,
        top=glob.glob(os.path.join(input_folder, '*.prmtop')),
        rst=glob.glob(os.path.join(input_folder, '*.rst')),
    output:
        out=out,
        nc=os.path.join(output_folder, name + '.nc'),
        rst=os.path.join(output_folder, name + '.rst'),  # -r writes a new restart; don't clobber the input
        mden=os.path.join(output_folder, 'mden'),
        mdinfo=os.path.join(output_folder, 'mdinfo'),
    shell:
        r"""
        pmemd.cuda \
            -O \
            -i {input.inp} \
            -o {output.out} \
            -p {input.top} \
            -c {input.rst} \
            -r {output.rst} \
            -x {output.nc} \
            -ref {input.rst}
        """
Execute with:
snakemake -j 1 -C amber_folder='your-input-folder'
If you have many input folders you could write a for-loop to execute the command above but probably better is to pass the list of inputs to snakemake and let it handle them.
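For example, a rough sketch of that second option (untested; it assumes a config key amber_folders holding a list of folders and reuses the globbing logic above per folder):

import glob
import os

# Map each run name to its input folder
names = {}
for folder in config['amber_folders']:
    hits = glob.glob(os.path.join(folder, '*.in'))
    assert len(hits) == 1, folder
    names[os.path.splitext(os.path.basename(hits[0]))[0]] = folder

def amber_input(wildcards):
    folder = names[wildcards.name]
    return {
        'inp': glob.glob(os.path.join(folder, '*.in'))[0],
        'top': glob.glob(os.path.join(folder, '*.prmtop'))[0],
        'rst': glob.glob(os.path.join(folder, '*.rst'))[0],
    }

rule all:
    input:
        expand('{name}_results/{name}.out', name=names)

rule amber:
    input:
        unpack(amber_input)  # unpack() turns the returned dict into named inputs
    output:
        out='{name}_results/{name}.out',
        nc='{name}_results/{name}.nc',
        rst='{name}_results/{name}.rst',
        mden='{name}_results/mden',
        mdinfo='{name}_results/mdinfo',
    shell:
        r"""
        pmemd.cuda -O -i {input.inp} -o {output.out} -p {input.top} \
            -c {input.rst} -r {output.rst} -x {output.nc} -ref {input.rst}
        """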
I have a bash script that calls a python script with parameters.
In the bash script, I read a file that contains one row of parameters separated by ", and then call the python script with the line I read.
My problem is that the Python script gets the parameters split on spaces.
The line looks like this: "param_a" "Param B" "Param C"
Code Example:
Bash Script:
LINE=`cat $tmp_file`
id=`python /full_path/script.py $LINE`
Python Script:
import sys

print(sys.argv[1])
print(sys.argv[2])
print(sys.argv[3])
Received output:
"param_a"
"Param
B"
Wanted output:
param_a
Param B
Param C
How can I send the parameters to the Python script the way I need?
Thanks!
What about
id=`python /full_path/script.py $tmp_file`
and
import sys

for line in open(sys.argv[1]):
    print(line)
?
The issue is in how bash passes the arguments; Python has nothing to do with it.
So you have to sort all of this out before handing it to Python. I decided to use awk and xargs for this (but xargs is the actual MVP here).
LINE=$(cat $tmp_file)
awk -v ORS="\0" -v FPAT='"[^"]+"' '{for (i=1;i<=NF;i++){print substr($i,2,length($i)-2)}}' <<<$LINE |
xargs -0 python ./script.py
First, $(..) is preferred over backticks because it is more readable; you are creating a variable, after all.
awk only reads from stdin or a file, but you can force it to read from a variable with <<<, also called a "here string".
With awk I loop over all fields (as defined by the regex in the FPAT variable), and print them without the "".
The output record separator I chose is the NULL character (-v ORS='\0'); xargs will split on this character.
xargs will now parse the piped input by separating the arguments on NULL characters (set with -0) and execute the command given with the parsed arguments.
Note that while awk is found on most UNIX systems, I make use of FPAT, which is a GNU awk extension; GNU awk may not be your system's default (for example on Ubuntu), but it is usually just an install of gawk away.
Also, the following command would be a quick and easy solution, but it is generally considered unsafe, since eval will execute everything it receives:
eval "python ./script.py "$LINE
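If changing the Python side is an option, you can skip the shell parsing entirely: pass the file name (as the other answer suggests) and split the quoted fields in Python with the standard-library shlex module. A minimal sketch (the file and script names are placeholders):

import shlex
import sys

# Read the single line of quoted parameters and split it shell-style, so
# '"param_a" "Param B" "Param C"' becomes ['param_a', 'Param B', 'Param C']
with open(sys.argv[1]) as f:
    params = shlex.split(f.read())

for i, p in enumerate(params, 1):
    print(i, p)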
This can be done using bash arrays:
tmp_file='gash.txt'
# Set IFS to " which splits on double quotes and removes them
# Using read is preferable to using the external program cat
# read -a reads into the array called "line"
# UPPERCASE variable names are discouraged because of collisions with bash variables
IFS=\" read -ra line < "$tmp_file"
# That leaves blank and space elements in "line",
# we create a new array called "params" without those elements
declare -a params
for ((i=0; i < ${#line[@]}; i++))
do
    p="${line[i]}"
    if [[ -n "$p" && "$p" != " " ]]
    then
        params+=("$p")
    fi
done
# `backticks` are frowned upon because of poor readability
# I've called the python script "gash.py"
id=$(python ./gash.py "${params[@]}")
echo "$id"
gash.py:
import sys
print "1",sys.argv[1]
print "2",sys.argv[2]
print "3",sys.argv[3]
Gives:
1 param_a
2 Param B
3 Param C
I am creating my first snakemake file, and I got to the point where I need to perform a simple string operation on the value of my output, so that my shell command works as expected:
rule sketch:
    input:
        'out/genomes.txt'
    output:
        'out/genomes.msh'
    shell:
        'mash sketch -l {input} -k 31 -s 100000 -o {output}'
I need to apply the split function to {output} so that only the name of the file up to the extension is used. I couldn't find anything in the docs or in related questions.
You could use the params field:
rule sketch:
    input:
        'out/genomes.txt'
    output:
        'out/genomes.msh'
    params:
        dir='out/genomes'
    shell:
        'mash sketch -l {input} -k 31 -s 100000 -o {params.dir}'
Alternative solution using wildcards:
rule all:
    input: 'out/genomes.msh'

rule sketch:
    input:
        '{file}.txt'
    output:
        '{file}.msh'
    shell:
        'mash sketch -l {input} -k 31 -s 100000 -o {wildcards.file}'
Untested, but I think this should work.
The advantage over the params solution is that it generalizes better.
Best is to use params:
rule sketch:
    input:
        'out/genomes.txt'
    output:
        'out/genomes.msh'
    params:
        prefix=lambda wildcards, output: os.path.splitext(output[0])[0]
    shell:
        'mash sketch -l {input} -k 31 -s 100000 -o {params.prefix}'
It is always preferable to use params instead of using the run directive, because the run directive cannot be combined with conda environments.
Avoid duplicating text. Don't use params unless you convert your inputs/outputs to wildcards + extensions; otherwise you're left with a rule that is hard to maintain:

input:
    "{pathDIR}/{genome}.txt"
output:
    "{pathDIR}/{genome}.msh"
params:
    dir="{pathDIR}/{genome}"
Otherwise, use Python's slice notation.
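For reference, a sketch of what slice notation could look like inside a params lambda, combining this with the lambda form from the earlier answer (untested):

rule sketch:
    input:
        "{pathDIR}/{genome}.txt"
    output:
        "{pathDIR}/{genome}.msh"
    params:
        # drop the 4-character ".msh" suffix from the first output
        prefix=lambda wildcards, output: str(output[0])[:-4]
    shell:
        'mash sketch -l {input} -k 31 -s 100000 -o {params.prefix}'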
I couldn't seem to get slice notation to work in the params using the output wildcard. Here it is in the run directive.
from subprocess import call

rule sketch:
    input:
        'out/genomes.txt'
    output:
        'out/genomes.msh'
    run:
        callString = "mash sketch -l " + str(input) + " -k 31 -s 100000 -o " + str(output)[:-4]
        print(callString)
        call(callString, shell=True)
Python underlies Snakemake. I prefer the "run" directive over the "shell" directive because I find it really unlocks a lot of that beautiful Python functionality. Accessing params and various things is slightly different than with the "shell" directive. For example:
callString=config["mpileup_samtoolsProg"] + ' view -bh -F ' + str(config["bitFlag"]) + ' ' + str(input.inputBAM) + ' ' + wildcards.chrB2M[1:]
A snippet from J.K. using the run directive.
All of the rules in my modules pretty much use the run directive
You could remove the extension within the shell command:

rule sketch:
    input:
        'out/genomes.txt'
    output:
        'out/genomes.msh'
    shell:
        'mash sketch -l {input} -k 31 -s 100000 -o $(echo "{output}" | sed -e "s/\.msh$//")'
I'm trying to write a shell script that will make several targets into several different paths. I'll pass in a space-separated list of paths and a space-separated list of targets, and the script will run make DESTDIR=$path $target for each pair of paths and targets. In Python, my script would look something like this:
import subprocess

for path, target in zip(paths, targets):
    subprocess.run(['make', f'DESTDIR={path}', target], check=True)
However, this is my current shell script:
#!/bin/bash

packages=$1
targets=$2
target=

set_target_number () {
    number=$1
    counter=0
    for temp_target in $targets; do
        if [[ $counter -eq $number ]]; then
            target=$temp_target
        fi
        counter=`expr $counter + 1`
    done
}

package_num=0
for package in $packages; do
    package_fs="debian/tmp/$package"
    set_target_number $package_num
    echo "mkdir -p $package_fs"
    echo "make DESTDIR=$package_fs $target"
    package_num=`expr $package_num + 1`
done
Is there a Unix tool equivalent to Python's zip function or an easier way to retrieve an element from a space-separated list by its index? Thanks.
Use an array:
#!/bin/bash

packages=($1)
targets=($2)

if (( ${#packages[@]} != ${#targets[@]} ))
then
    echo 'Number of packages and number of targets differ' >&2
    exit 1
fi

for index in "${!packages[@]}"
do
    package="${packages[$index]}"
    target="${targets[$index]}"
    package_fs="debian/tmp/$package"
    mkdir -p "$package_fs"
    make "DESTDIR=$package_fs" "$target"
done
Here is a solution using paste:
paste -d ' ' paths targets | sed 's/^/make DESTDIR=/' | sh
paste is the shell's equivalent of zip; here paths and targets are files with one entry per line. sed is used to prepend the make command (via a regex substitution), and the result is piped to sh for execution.
Bash has no built-in zip. You'll need to create two arrays from the input and then iterate through a counter, taking the value at each index from both arrays.