uwsgi - remove "address space usage" message from the log file - python

I'm using uwsgi version 2.0.8 and it's configured as following:
/home/user/env/bin/uwsgi
--chdir=/home/user/code
--module=XXXXXXX.wsgi:application
--master
--pidfile=/tmp/XXXX.pid
--socket=/tmp/uwsgi.sock
--chmod-socket
--processes=9
--harakiri=2000
--max-requests=50000
--vacuum
--home=/home/user/env
--stats /tmp/stats.socket
--stats-minified
--harakiri-verbose
--listen=2048
--log-date
--log-zero
--log-slow 2000
--log-5xx
--log-4xx
--gevent 5
-m
-b 25000
-L
--wsgi-file /home/user/code/wsgi.py
--touch-chain-reload /home/user/uwsgi.reload
And I'm getting a lot of messages like:
{address space usage: 1444040704 bytes/1377MB} {rss usage: 759623680 bytes/724MB} [pid: 4655|app: 0|req: 2897724/26075292] 184.172.192.42 () {44 vars in 563 bytes} [Tue Jul 7 18:45:17 2015] POST /some/url/ => generated 0 bytes in 34 msecs (HTTP/1.1 204) 1 headers in 38 bytes (3 switches on core 4)
What is the purpose of these messages and how can I remove them?

You told uWSGI to log any response with zero size (--log-zero). If instead you mean the memory report, just drop the -m flag (it is a shortcut for --memory-report).
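For example, with -m removed the tail of the invocation above would simply read (same options as yours, nothing else changed; a sketch, not a tested config):
--log-slow 2000
--log-5xx
--log-4xx
--gevent 5
-b 25000
-L
--wsgi-file /home/user/code/wsgi.py
--touch-chain-reload /home/user/uwsgi.reload
The {address space usage ...} {rss usage ...} prefix comes from the memory report, so removing -m is enough to get rid of it.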

Related

How to use singularity and conda wrappers in Snakemake

TLDR I'm getting the following error:
The 'conda' command is not available inside your singularity container image. Snakemake mounts your conda installation into singularity. Sometimes, this can fail because of shell restrictions. It has been tested to work with docker://ubuntu, but it e.g. fails with docker://bash
I had created a Snakemake workflow and converted the shell: commands to rule-based package management via Snakemake wrappers.
However, I ran into issues running this on HPC, and one of the HPC support staff strongly recommended against using conda on any HPC system:
"if the builder [of wrapper] is not super careful, dynamic libraries present in the conda environment that relies on the host libs (there are always a couple present because builder are most of the time carefree) will break. I think that relying on Singularity for your pipeline would make for a more robust system." - Anon
I did some reading over the weekend and, according to this document, it's possible to combine containers with conda-based package management by defining a global conda docker container and per-rule yaml files.
Note: In contrast to the example in the link above (Figure 5.4), which uses a predefined yaml and a shell: command, here I've used conda wrappers, which download these yaml files into the Singularity container (if I'm thinking correctly), so I thought it should function the same - see the Note: at the end though...
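For reference, the pattern described in that document looks roughly like the sketch below (my own minimal example; the rule name, file paths and envs/fastqc.yaml are hypothetical): a global container plus a per-rule conda environment, activated with --use-singularity --use-conda.

# global container in which the per-rule conda envs are built
singularity: "docker://continuumio/miniconda3:4.5.11"

rule qc_example:
    input:
        "reads/sample_R1.fastq.gz"
    output:
        "qc/sample_R1_fastqc.html"
    conda:
        "envs/fastqc.yaml"   # hypothetical per-rule environment definition
    shell:
        "fastqc {input} --outdir qc"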
Snakefile, config.yaml and samples.txt
Snakefile
# Directories------------------------------------------------------------------
configfile: "config.yaml"
# Setting the names of all directories
dir_list = ["REF_DIR", "LOG_DIR", "BENCHMARK_DIR", "QC_DIR", "TRIM_DIR", "ALIGN_DIR", "MARKDUP_DIR", "CALLING_DIR", "ANNOT_DIR"]
dir_names = ["refs", "logs", "benchmarks", "qc", "trimming", "alignment", "mark_duplicates", "variant_calling", "annotation"]
dirs_dict = dict(zip(dir_list, dir_names))
import os
import pandas as pd
# getting the samples information (names, path to r1 & r2) from samples.txt
samples_information = pd.read_csv("samples.txt", sep='\t', index_col=False)
# get a list of the sample names
sample_names = list(samples_information['sample'])
sample_locations = list(samples_information['location'])
samples_dict = dict(zip(sample_names, sample_locations))
# get number of samples
len_samples = len(sample_names)
# Singularity with conda wrappers
singularity: "docker://continuumio/miniconda3:4.5.11"
# Rules -----------------------------------------------------------------------
rule all:
    input:
        "resources/vep/plugins",
        "resources/vep/cache"

rule download_vep_plugins:
    output:
        directory("resources/vep/plugins")
    params:
        release=100
    resources:
        mem=1000,
        time=30
    wrapper:
        "0.66.0/bio/vep/plugins"

rule get_vep_cache:
    output:
        directory("resources/vep/cache")
    params:
        species="caenorhabditis_elegans",
        build="WBcel235",
        release="100"
    resources:
        mem=1000,
        time=30
    log:
        "logs/vep/cache.log"
    cache: True # save space and time with between workflow caching (see docs)
    wrapper:
        "0.66.0/bio/vep/cache"
config.yaml
# Files
REF_GENOME: "c_elegans.PRJNA13758.WS265.genomic.fa"
GENOME_ANNOTATION: "c_elegans.PRJNA13758.WS265.annotations.gff3"
# Tools
QC_TOOL: "fastQC"
TRIM_TOOL: "trimmomatic"
ALIGN_TOOL: "bwa"
MARKDUP_TOOL: "picard"
CALLING_TOOL: "varscan"
ANNOT_TOOL: "vep"
samples.txt
sample location
MTG324 /home/moldach/wrappers/SUBSET/MTG324_SUBSET
Submission
snakemake --profile slurm --use-singularity --use-conda --jobs 2
Logs
Workflow defines that rule get_vep_cache is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 get_vep_cache
1
[Mon Sep 21 15:35:50 2020]
rule get_vep_cache:
output: resources/vep/cache
log: logs/vep/cache.log
jobid: 0
resources: mem=1000, time=30
Activating singularity image /home/moldach/wrappers/SUBSET/VEP/.snakemake/singularity/d7617773b315c3abcb29e0484085ed06.simg
Activating conda environment: /home/moldach/wrappers/SUBSET/VEP/.snakemake/conda/774ea575
[Mon Sep 21 15:36:38 2020]
Finished job 0.
1 of 1 steps (100%) done
Note: Leaving --use-conda out of the submission of the workflow will cause an error for get_vep_cache: /bin/bash: vep_install: command not found
Workflow defines that rule get_vep_cache is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 download_vep_plugins
1
[Mon Sep 21 15:35:50 2020]
rule download_vep_plugins:
output: resources/vep/plugins
jobid: 0
resources: mem=1000, time=30
Activating singularity image /home/moldach/wrappers/SUBSET/VEP/.snakemake/singularity/d7617773b315c3abcb29e0484085ed06.simg
Activating conda environment: /home/moldach/wrappers/SUBSET/VEP/.snakemake/conda/9f602d9a
[Mon Sep 21 15:35:56 2020]
Finished job 0.
1 of 1 steps (100%) done
The problem occurs when adding the FastQC rules (qc_before_trim_r1/r2):
Updated Snakefile
# Directories------------------------------------------------------------------
configfile: "config.yaml"
# Setting the names of all directories
dir_list = ["REF_DIR", "LOG_DIR", "BENCHMARK_DIR", "QC_DIR", "TRIM_DIR", "ALIGN_DIR", "MARKDUP_DIR", "CALLING_DIR", "ANNOT_DIR"]
dir_names = ["refs", "logs", "benchmarks", "qc", "trimming", "alignment", "mark_duplicates", "variant_calling", "annotation"]
dirs_dict = dict(zip(dir_list, dir_names))
import os
import pandas as pd
# getting the samples information (names, path to r1 & r2) from samples.txt
samples_information = pd.read_csv("samples.txt", sep='\t', index_col=False)
# get a list of the sample names
sample_names = list(samples_information['sample'])
sample_locations = list(samples_information['location'])
samples_dict = dict(zip(sample_names, sample_locations))
# get number of samples
len_samples = len(sample_names)
# Singularity with conda wrappers
singularity: "docker://continuumio/miniconda3:4.5.11"
# Rules -----------------------------------------------------------------------
rule all:
    input:
        "resources/vep/plugins",
        "resources/vep/cache",
        expand('{QC_DIR}/{QC_TOOL}/before_trim/{sample}_{pair}_fastqc.{ext}', QC_DIR=dirs_dict["QC_DIR"], QC_TOOL=config["QC_TOOL"], sample=sample_names, pair=['R1', 'R2'], ext=['html', 'zip'])

rule download_vep_plugins:
    output:
        directory("resources/vep/plugins")
    params:
        release=100
    resources:
        mem=1000,
        time=30
    wrapper:
        "0.66.0/bio/vep/plugins"

rule get_vep_cache:
    output:
        directory("resources/vep/cache")
    params:
        species="caenorhabditis_elegans",
        build="WBcel235",
        release="100"
    resources:
        mem=1000,
        time=30
    log:
        "logs/vep/cache.log"
    cache: True # save space and time with between workflow caching (see docs)
    wrapper:
        "0.66.0/bio/vep/cache"

def getHome(sample):
    return(list(os.path.join(samples_dict[sample],"{0}_{1}.fastq.gz".format(sample,pair)) for pair in ['R1','R2']))

rule qc_before_trim_r1:
    input:
        r1=lambda wildcards: getHome(wildcards.sample)[0]
    output:
        html=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R1_fastqc.html"),
        zip=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R1_fastqc.zip"),
    params:
        dir=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim")
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["QC_TOOL"],"{sample}_R1.log")
    resources:
        mem=1000,
        time=30
    singularity:
        "https://depot.galaxyproject.org/singularity/fastqc:0.11.9--0"
    threads: 1
    message: """--- Quality check of raw data with FastQC before trimming."""
    wrapper:
        "0.66.0/bio/fastqc"

rule qc_before_trim_r2:
    input:
        r1=lambda wildcards: getHome(wildcards.sample)[1]
    output:
        html=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R2_fastqc.html"),
        zip=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R2_fastqc.zip"),
    params:
        dir=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim")
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["QC_TOOL"],"{sample}_R2.log")
    resources:
        mem=1000,
        time=30
    singularity:
        "https://depot.galaxyproject.org/singularity/fastqc:0.11.9--0"
    threads: 1
    message: """--- Quality check of raw data with FastQC before trimming."""
    wrapper:
        "0.66.0/bio/fastqc"
Error reported in nohup.out
Building DAG of jobs...
Pulling singularity image https://depot.galaxyproject.org/singularity/fastqc:0.11.9--0.
CreateCondaEnvironmentException:
The 'conda' command is not available inside your singularity container image. Snakemake mounts your conda installation into singularity. Sometimes, this can fail because of shell restrictions. It has been tested to work with docker://ubuntu, but it e.g. fails with docker://bash
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/deployment/conda.py", line 247, in create
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/deployment/conda.py", line 381, in __new__
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/deployment/conda.py", line 394, in __init__
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/deployment/conda.py", line 417, in _check
using shell: instead of wrapper:
I changed the wrapper back into the shell command:
and this is the error I get when submitting with ``:
Workflow defines that rule get_vep_cache is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 qc_before_trim_r2
1
[Mon Sep 21 16:32:54 2020]
Job 0: --- Quality check of raw data with FastQC before trimming.
Activating singularity image /home/moldach/wrappers/SUBSET/VEP/.snakemake/singularity/6740cb07e67eae01644839c9767bdca5.simg
WARNING: Skipping mount /var/singularity/mnt/session/etc/resolv.conf [files]: /etc/resolv.conf doesn't exist in container
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en_CA.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Skipping '/home/moldach/wrappers/SUBSET/MTG324_SUBSET/MTG324_R2.fastq.gz' which didn't exist, or couldn't be read
Waiting at most 60 seconds for missing files.
MissingOutputException in line 84 of /home/moldach/wrappers/SUBSET/VEP/Snakefile:
Job completed successfully, but some output files are missing. Missing files after 60 seconds:
qc/fastQC/before_trim/MTG324_R2_fastqc.html
qc/fastQC/before_trim/MTG324_R2_fastqc.zip
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 544, in handle_job_success
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 231, in handle_job_success
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
The error Skipping '/home/moldach/wrappers/SUBSET/MTG324_SUBSET/MTG324_R2.fastq.gz' which didn't exist, or couldn't be read is misleading, because the file does exist...
Update 2
Following the advice of Manavalan Gajapathy, I've eliminated defining singularity at two different levels (global + per-rule).
Now I'm using a singularity container only at the global level, and using wrappers via --use-conda, which creates the conda environment inside of the container:
# Directories------------------------------------------------------------------
configfile: "config.yaml"
# Setting the names of all directories
dir_list = ["REF_DIR", "LOG_DIR", "BENCHMARK_DIR", "QC_DIR", "TRIM_DIR", "ALIGN_DIR", "MARKDUP_DIR", "CALLING_DIR", "ANNOT_DIR"]
dir_names = ["refs", "logs", "benchmarks", "qc", "trimming", "alignment", "mark_duplicates", "variant_calling", "annotation"]
dirs_dict = dict(zip(dir_list, dir_names))
import os
import pandas as pd
# getting the samples information (names, path to r1 & r2) from samples.txt
samples_information = pd.read_csv("samples.txt", sep='\t', index_col=False)
# get a list of the sample names
sample_names = list(samples_information['sample'])
sample_locations = list(samples_information['location'])
samples_dict = dict(zip(sample_names, sample_locations))
# get number of samples
len_samples = len(sample_names)
# Singularity with conda wrappers
singularity: "docker://continuumio/miniconda3:4.5.11"
# Rules -----------------------------------------------------------------------
rule all:
    input:
        "resources/vep/plugins",
        "resources/vep/cache",
        expand('{QC_DIR}/{QC_TOOL}/before_trim/{sample}_{pair}_fastqc.{ext}', QC_DIR=dirs_dict["QC_DIR"], QC_TOOL=config["QC_TOOL"], sample=sample_names, pair=['R1', 'R2'], ext=['html', 'zip'])

rule download_vep_plugins:
    output:
        directory("resources/vep/plugins")
    params:
        release=100
    resources:
        mem=1000,
        time=30
    wrapper:
        "0.66.0/bio/vep/plugins"

rule get_vep_cache:
    output:
        directory("resources/vep/cache")
    params:
        species="caenorhabditis_elegans",
        build="WBcel235",
        release="100"
    resources:
        mem=1000,
        time=30
    log:
        "logs/vep/cache.log"
    cache: True # save space and time with between workflow caching (see docs)
    wrapper:
        "0.66.0/bio/vep/cache"

def getHome(sample):
    return(list(os.path.join(samples_dict[sample],"{0}_{1}.fastq.gz".format(sample,pair)) for pair in ['R1','R2']))

rule qc_before_trim_r1:
    input:
        r1=lambda wildcards: getHome(wildcards.sample)[0]
    output:
        html=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R1_fastqc.html"),
        zip=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R1_fastqc.zip"),
    params:
        dir=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim")
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["QC_TOOL"],"{sample}_R1.log")
    resources:
        mem=1000,
        time=30
    threads: 1
    message: """--- Quality check of raw data with FastQC before trimming."""
    wrapper:
        "0.66.0/bio/fastqc"

rule qc_before_trim_r2:
    input:
        r1=lambda wildcards: getHome(wildcards.sample)[1]
    output:
        html=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R2_fastqc.html"),
        zip=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R2_fastqc.zip"),
    params:
        dir=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim")
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["QC_TOOL"],"{sample}_R2.log")
    resources:
        mem=1000,
        time=30
    threads: 1
    message: """--- Quality check of raw data with FastQC before trimming."""
    wrapper:
        "0.66.0/bio/fastqc"
and submit via:
However, I'm still getting an error:
Workflow defines that rule get_vep_cache is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 qc_before_trim_r2
1
[Tue Sep 22 12:44:03 2020]
Job 0: --- Quality check of raw data with FastQC before trimming.
Activating singularity image /home/moldach/wrappers/SUBSET/OMG/.snakemake/singularity/d7617773b315c3abcb29e0484085ed06.simg
Activating conda environment: /home/moldach/wrappers/SUBSET/OMG/.snakemake/conda/c591f288
Skipping '/work/mtgraovac_lab/MATTS_SCRATCH/rep1_R2.fastq.gz' which didn't exist, or couldn't be read
Skipping ' 2> logs/fastQC/rep1_R2.log' which didn't exist, or couldn't be read
Failed to process qc/fastQC/before_trim
java.io.FileNotFoundException: qc/fastQC/before_trim (Is a directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
at uk.ac.babraham.FastQC.Sequence.FastQFile.<init>(FastQFile.java:73)
at uk.ac.babraham.FastQC.Sequence.SequenceFactory.getSequenceFile(SequenceFactory.java:106)
at uk.ac.babraham.FastQC.Sequence.SequenceFactory.getSequenceFile(SequenceFactory.java:62)
at uk.ac.babraham.FastQC.Analysis.OfflineRunner.processFile(OfflineRunner.java:159)
at uk.ac.babraham.FastQC.Analysis.OfflineRunner.<init>(OfflineRunner.java:121)
at uk.ac.babraham.FastQC.FastQCApplication.main(FastQCApplication.java:316)
Traceback (most recent call last):
File "/home/moldach/wrappers/SUBSET/OMG/.snakemake/scripts/tmpiwwprg5m.wrapper.py", line 35, in <module>
shell(
File "/mnt/snakemake/snakemake/shell.py", line 205, in __new__
raise sp.CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'set -euo pipefail; fastqc qc/fastQC/before_trim --quiet -t 1 --outdir /tmp/tmps93snag8 /work/mtgraovac_lab/MATTS_SCRATCH/rep1_R2.fastq.gz ' 2> logs/fastQC/rep1_R$
[Tue Sep 22 12:44:16 2020]
Error in rule qc_before_trim_r2:
jobid: 0
output: qc/fastQC/before_trim/rep1_R2_fastqc.html, qc/fastQC/before_trim/rep1_R2_fastqc.zip
log: logs/fastQC/rep1_R2.log (check log file(s) for error message)
conda-env: /home/moldach/wrappers/SUBSET/OMG/.snakemake/conda/c591f288
RuleException:
CalledProcessError in line 97 of /home/moldach/wrappers/SUBSET/OMG/Snakefile:
Command ' singularity exec --home /home/moldach/wrappers/SUBSET/OMG --bind /home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages:/mnt/snakemake /home/moldach/wrappers/SUBSET/OMG/.snakemake$
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 2189, in run_wrapper
File "/home/moldach/wrappers/SUBSET/OMG/Snakefile", line 97, in __rule_qc_before_trim_r2
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 529, in _callback
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/concurrent/futures/thread.py", line 57, in run
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 515, in cached_or_run
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/executors/__init__.py", line 2201, in run_wrapper
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Reproducibility
To replicate this you can download this small dataset:
git clone https://github.com/CRG-CNAG/CalliNGS-NF.git
cp CalliNGS-NF/data/reads/rep1_*.fq.gz .
mv rep1_1.fq.gz rep1_R1.fastq.gz
mv rep1_2.fq.gz rep1_R2.fastq.gz
UPDATE 3: Bind Mounts
According to the link shared on mounting:
"By default Singularity bind mounts /home/$USER, /tmp, and $PWD into your container at runtime."
Thus, for simplicity (and also because I got errors using --singularity-args), I've moved the required files into /home/$USER and tried to run from there.
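(For reference, the more usual route would be to keep the data where it was and bind-mount that directory explicitly, e.g. snakemake --profile slurm --use-singularity --use-conda --jobs 4 --singularity-args "--bind /work/mtgraovac_lab", since --singularity-args is passed straight through to singularity exec - but as noted above, --singularity-args gave me errors, so I went with the simpler /home/$USER approach.)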
(snakemake) [~]$ pwd
/home/moldach
(snakemake) [~]$ ll
total 3656
drwx------ 26 moldach moldach 4096 Aug 27 17:36 anaconda3
drwx------ 2 moldach moldach 4096 Sep 22 10:11 bin
-rw------- 1 moldach moldach 265 Sep 22 14:29 config.yaml
-rw------- 1 moldach moldach 1817903 Sep 22 14:29 rep1_R1.fastq.gz
-rw------- 1 moldach moldach 1870497 Sep 22 14:29 rep1_R2.fastq.gz
-rw------- 1 moldach moldach 55 Sep 22 14:29 samples.txt
-rw------- 1 moldach moldach 3420 Sep 22 14:29 Snakefile
and ran with bash -c "nohup snakemake --profile slurm --use-singularity --use-conda --jobs 4 &"
However, I still get this odd error:
Activating conda environment: /home/moldach/.snakemake/conda/fdae4f0d
Skipping ' 2> logs/fastQC/rep1_R2.log' which didn't exist, or couldn't be read
Failed to process qc/fastQC/before_trim
java.io.FileNotFoundException: qc/fastQC/before_trim (Is a directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
at uk.ac.babraham.FastQC.Sequence.FastQFile.<init>(FastQFile.java:73)
at uk.ac.babraham.FastQC.Sequence.SequenceFactory.getSequenceFile(SequenceFactory.java:106)
at uk.ac.babraham.FastQC.Sequence.SequenceFactory.getSequenceFile(SequenceFactory.java:62)
at uk.ac.babraham.FastQC.Analysis.OfflineRunner.processFile(OfflineRunner.java:159)
at uk.ac.babraham.FastQC.Analysis.OfflineRunner.<init>(OfflineRunner.java:121)
at uk.ac.babraham.FastQC.FastQCApplication.main(FastQCApplication.java:316)
Traceback (most recent call last):
Why does it think it's being given a directory?
Note: If you submit only with --use-conda, e.g. bash -c "nohup snakemake --profile slurm --use-conda --jobs 4 &", there is no error from the fastqc rules. However, the --use-conda param alone is not 100% reproducible; case in point, it doesn't work on another HPC I tested it on.
The full log in nohup.out when using --printshellcmds can be found at this gist
TLDR:
The fastqc singularity container used in the qc rule likely doesn't have conda available in it, and this doesn't satisfy what Snakemake's --use-conda expects.
Explanation:
You have singularity containers defined at two different levels: 1. a global level that will be used for all rules, unless it is overridden at the rule level; 2. a per-rule level that will be used for that specific rule.
# global singularity container to use
singularity: "docker://continuumio/miniconda3:4.5.11"
# singularity container defined at rule level
rule qc_before_trim_r1:
    ....
    ....
    singularity:
        "https://depot.galaxyproject.org/singularity/fastqc:0.11.9--0"
When you use --use-singularity and --use-conda together, jobs will be run in a conda environment inside the singularity container, so the conda command needs to be available inside the singularity container for this to be possible. While this requirement is clearly satisfied for your global-level container, I am quite certain (haven't tested, though) that this is not the case for your fastqc container.
The way Snakemake works, if the --use-conda flag is supplied it will create the conda environment either locally or inside the container, depending on whether --use-singularity is also supplied. Since you are using a snakemake-wrapper for the qc rule, and it comes with a conda env recipe pre-defined, the easiest solution here is to just use the globally defined miniconda container for all rules. That is, there is no need to use a fastqc-specific container for the qc rule.
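In practice that just means the qc rules drop their own singularity: directive, something like this sketch (trimmed from the Snakefile above; not a tested drop-in):

# global container, defined once at the top of the Snakefile
singularity: "docker://continuumio/miniconda3:4.5.11"

rule qc_before_trim_r1:
    input:
        r1=lambda wildcards: getHome(wildcards.sample)[0]
    output:
        html=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R1_fastqc.html"),
        zip=os.path.join(dirs_dict["QC_DIR"],config["QC_TOOL"],"before_trim","{sample}_R1_fastqc.zip"),
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["QC_TOOL"],"{sample}_R1.log")
    threads: 1
    # no rule-level singularity: here - with --use-singularity --use-conda the
    # wrapper's conda env is built inside the global miniconda container
    wrapper:
        "0.66.0/bio/fastqc"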
If you really want to use the fastqc container, then you shouldn't be using the --use-conda flag, but of course this means that all necessary tools must be available from the container(s) defined globally or per rule.

SysLogHandler messages grouped on one line on remote server

I am trying to use the python logging module to log messages to a remote rsyslog server. The messages are received, but multiple messages get concatenated together onto one line. Here is an example of my code:
to_syslog_priority: dict = {
    Level.EMERGENCY: 'emerg',
    Level.ALERT: 'alert',
    Level.CRITICAL: 'crit',
    Level.ERROR: 'err',
    Level.NOTICE: 'notice',
    Level.WARNING: 'warning',
    Level.INFO: 'info',
    Level.DEBUG: 'debug',
    Level.PERF: 'info',
    Level.AUDIT: 'info'
}

@staticmethod
def make_logger(*, name: str, log_level: Level, rsyslog_address: Tuple[str, int], syslog_facility: int) -> Logger:
    """Initialize the logger with the given attributes"""
    logger = logging.getLogger(name)
    num_handlers = len(logger.handlers)
    for i in range(0, num_handlers):
        logger.removeHandler(logger.handlers[0])
    logger.setLevel(log_level.value)
    syslog_priority = Log.to_syslog_priority[log_level]
    with Timeout(seconds=RSYSLOG_TIMEOUT, timeout_message="Cannot reach {}".format(rsyslog_address)):
        sys_log_handler = handlers.SysLogHandler(rsyslog_address, syslog_facility, socket.SOCK_STREAM)
    # There is a bug in the python implementation that prevents custom log levels
    # See /usr/lib/python3.6/logging/handlers.SysLogHandler.priority_map on line 789. It can only map
    # to 5 log levels instead of the 8 we've implemented.
    sys_log_handler.mapPriority = lambda *args: syslog_priority
    logger.addHandler(sys_log_handler)
    stdout_handler = logging.StreamHandler(sys.stdout)
    logger.addHandler(stdout_handler)
    return logger

if __name__ == '__main__':
    app_logger = Log.make_logger(name='APP', log_level=Log.Level.INFO, rsyslog_address=('localhost', 514),
                                 syslog_facility=SysLogHandler.LOG_USER)
    audit_logger = Log.make_logger(name='PERF', log_level=Log.Level.INFO, rsyslog_address=('localhost', 514),
                                   syslog_facility=SysLogHandler.LOG_LOCAL0)
    perf_logger = Log.make_logger(name='AUDIT', log_level=Log.Level.INFO, rsyslog_address=('localhost', 514),
                                  syslog_facility=SysLogHandler.LOG_LOCAL1)
    log = Log(log_level=Log.Level.WARNING, component='testing', worker='tester', version='1.0', rsyslog_srv='localhost',
              rsyslog_port=30514)
    app_logger.warning("Testing warning logging")
    perf_logger.info("Testing performance logging1")
    audit_logger.info("Testing aduit logging1")
    audit_logger.info("Testing audit logging2")
    app_logger.critical("Testing critical logging")
    perf_logger.info("Testing performance logging2")
    audit_logger.info("Testing audit logging3")
    app_logger.error("Testing error logging")
On the server side, I added the following line to /etc/rsyslog.d/50-default.conf to disable /var/log/syslog logging for the USER, LOCAL0 and LOCAL1 facilities (which I use for app, perf, and audit logging):
*.*;user,local0,local1,auth,authpriv.none -/var/log/syslog
And I updated /etc/rsyslog.conf:
# /etc/rsyslog.conf Configuration file for rsyslog.
#
# For more information see
# /usr/share/doc/rsyslog-doc/html/rsyslog_conf.html
#
# Default logging rules can be found in /etc/rsyslog.d/50-default.conf
#################
#### MODULES ####
#################
module(load="imuxsock") # provides support for local system logging
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")
###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# Filter duplicated messages
$RepeatedMsgReduction on
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
user.* -/log/app.log
local0.* -/log/audit.log
local1.* -/log/perf.log
So after doing all that, when I run the python code (listed above) these are the messages I'm seeing on the remote server:
for log in /log/*.log; do echo "${log} >>>"; cat ${log}; echo "<<< ${log}"; echo; done
/log/app.log >>>
Oct 23 13:00:23 de4bba6ac1dd rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
Oct 23 13:01:34 Testing warning logging#000<14>Testing critical logging#000<14>Testing error logging
<<< /log/app.log
/log/audit.log >>>
Oct 23 13:01:34 Testing aduit logging1#000<134>Testing audit logging2#000<134>Testing audit logging3
<<< /log/audit.log
/log/perf.log >>>
Oct 23 13:01:34 Testing performance logging1#000<142>Testing performance logging2
<<< /log/perf.log
As you can see, the messages are being filtered into the proper log files, but they're being concatenated onto one line. I'm guessing it's doing this because they arrive at the same time, but I'd like the messages to be split onto separate lines.
In addition, I've tried adding a formatter to the SysLogHandler so that it appends a line break to the message, like this:
sys_log_handler.setFormatter(logging.Formatter('%(message)s\n'))
However, this really screws it up:
for log in /log/*.log; do echo "${log} >>>"; cat ${log}; echo "<<< ${log}"; echo; done
/log/app.log >>>
Oct 23 13:00:23 de4bba6ac1dd rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
Oct 23 13:01:34 Testing warning logging#000<14>Testing critical logging#000<14>Testing error logging
Oct 23 13:12:00 Testing warning logging
Oct 23 13:12:00 172.17.0.1 #000<134>Testing audit logging2
Oct 23 13:12:00 172.17.0.1 #000<14>Testing critical logging
Oct 23 13:12:00 172.17.0.1 #000<142>Testing performance logging2
Oct 23 13:12:00 172.17.0.1 #000<134>Testing audit logging3
Oct 23 13:12:00 172.17.0.1 #000<14>Testing error logging
Oct 23 13:12:00 172.17.0.1
<<< /log/app.log
/log/audit.log >>>
Oct 23 13:01:34 Testing aduit logging1#000<134>Testing audit logging2#000<134>Testing audit logging3
Oct 23 13:12:00 Testing aduit logging1
<<< /log/audit.log
/log/perf.log >>>
Oct 23 13:01:34 Testing performance logging1#000<142>Testing performance logging2
Oct 23 13:12:00 Testing performance logging1
<<< /log/perf.log
As you can see the first message gets put into the right file for the audit and performance logs, but then all the other messages get put into the application log file. However, there is definitely a line break now.
My question is: I want to filter the messages based on facility, but with each message on a separate line. How can I do this using the python logging library? I guess I could take a look at the syslog library?
So I came across this python bug:
https://bugs.python.org/issue28404
So I took a look at the source code (nice thing about python), specifically the SysLogHandler.emit() method:
def emit(self, record):
    """
    Emit a record.

    The record is formatted, and then sent to the syslog server. If
    exception information is present, it is NOT sent to the server.
    """
    try:
        msg = self.format(record)
        if self.ident:
            msg = self.ident + msg
        if self.append_nul:
            # Next line is always added by default
            msg += '\000'

        # We need to convert record level to lowercase, maybe this will
        # change in the future.
        prio = '<%d>' % self.encodePriority(self.facility,
                                            self.mapPriority(record.levelname))
        prio = prio.encode('utf-8')
        # Message is a string. Convert to bytes as required by RFC 5424
        msg = msg.encode('utf-8')
        msg = prio + msg
        if self.unixsocket:
            try:
                self.socket.send(msg)
            except OSError:
                self.socket.close()
                self._connect_unixsocket(self.address)
                self.socket.send(msg)
        elif self.socktype == socket.SOCK_DGRAM:
            self.socket.sendto(msg, self.address)
        else:
            self.socket.sendall(msg)
    except Exception:
        self.handleError(record)
As you can see, it adds a '\000' to the end of the message by default, so if I set append_nul to False and then set a Formatter that adds a line break, things work the way I expect. Like this:
sys_log_handler.mapPriority = lambda *args: syslog_priority
# This will add a line break to the message before it is 'emitted' which ensures that the messages are
# split up over multiple lines, see https://bugs.python.org/issue28404
sys_log_handler.setFormatter(logging.Formatter('%(message)s\n'))
# In order for the above to work, then we need to ensure that the null terminator is not included
sys_log_handler.append_nul = False
Thanks for your help @Sraw - I tried to use UDP but never got the message. After applying these changes, this is what I see in my log files:
$ for log in /tmp/logging_test/*.log; do echo "${log} >>>"; cat ${log}; echo "<<< ${log}"; echo; done
/tmp/logging_test/app.log >>>
Oct 23 21:06:40 083c9501574d rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
Oct 23 21:06:45 Testing warning logging
Oct 23 21:06:45 Testing critical logging
Oct 23 21:06:45 Testing error logging
<<< /tmp/logging_test/app.log
/tmp/logging_test/audit.log >>>
Oct 23 21:06:45 Testing audit logging1
Oct 23 21:06:45 Testing audit logging2
Oct 23 21:06:45 Testing audit logging3
<<< /tmp/logging_test/audit.log
/tmp/logging_test/perf.log >>>
Oct 23 21:06:45 Testing performance logging1
Oct 23 21:06:45 Testing performance logging2
<<< /tmp/logging_test/perf.log
I believe the TCP stream makes it more complicated. When you are using a TCP stream, rsyslog won't split the messages for you; it is all on your own.
Why not use the UDP protocol? In that case, every single message is treated as a single log entry, so you don't need to add \n manually. Manually adding \n also makes it impossible to log multi-line messages correctly.
So my suggestion is to change to the UDP protocol and:
# Disable escaping to accept multiple line log
$EscapeControlCharactersOnReceive off
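For illustration, a minimal UDP sketch (the logger and handler names here are just examples; SysLogHandler uses SOCK_DGRAM by default, and the imudp module, which is commented out in the rsyslog.conf above, would need to be enabled):

import logging
import logging.handlers

logger = logging.getLogger("udp-example")
logger.setLevel(logging.INFO)

# No socktype argument: SysLogHandler defaults to UDP (SOCK_DGRAM)
udp_handler = logging.handlers.SysLogHandler(
    address=("localhost", 514),
    facility=logging.handlers.SysLogHandler.LOG_USER,
)
logger.addHandler(udp_handler)

# Each call is sent as its own datagram, so rsyslog records one line per message
logger.info("Testing UDP logging1")
logger.info("Testing UDP logging2")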
I came across the same issue.
capture multiline events with rsyslog and storing them to file
The cause is related to the question above: the NULL character is not handled as a delimiter in imtcp.
My solution was setting AddtlFrameDelimiter="0", something like below:
module(load="imtcp" AddtlFrameDelimiter="0")
input(type="imtcp" port="514")
reference:
AddtlFrameDelimiter
Syslog Message Format
related rsyslog code
EDITED 2022/10/04
The previous example used the input parameter, but AddtlFrameDelimiter as an input parameter has only been supported since v8.2106.0. Therefore, I changed the example to the old-style module global parameter.
# module global parameter case, supported since v6.XXXX
module(load="imtcp" AddtlFrameDelimiter="0")
input(type="imtcp" port="514")
# input parameter case, supported since v8.2106.0
module(load="imtcp")
input(type="imtcp" port="514" AddtlFrameDelimiter="0")

Shell Script on iMac no longer working with High Sierra

I recently upgraded my iMac 27” (mid-2011) from Yosemite to High Sierra and I am struggling to get back some functionality that I had working previously!
To briefly explain… First of all, I grab local weather data from weather underground using some Python scripts on a Raspberry pi3. These scripts also massage the data and then create and store an XML file on the pi. I also, on the pi, run a http server that looks for calls.
On an iPad, using iRule, I have a button that is called ‘Weather Forecast’. When this button is pressed it triggers a network resource on my ISY994i (Insteon) controller that, in turn, makes a call to the http server on the pi sending it a parameter. When the pi receives the call and validates the parameter, it runs another Python script (on the pi) that takes the data in the previously created XML file and puts it into a proper format for the next step. Finally, that script sends GET requests to the iMac, through Apache2, to read the weather forecast out loud.
This was working very well on Yosemite but now that I have upgraded the saying part is not working!
I have 3 shell scripts on the iMac that are called, from the pi, in this process…
saysomethinghttp9a.sh - This is the first script called. It reads the current volume level and stores it in a local file (on the iMac); then it changes the volume level to an acceptable volume (I use 18);
#!/bin/bash
echo -e "Content-type: text/html\n"
PHRASE=`echo "$QUERY_STRING" | sed -n 's/^.*phrase=\([^&]*\).*$/\1/p' | sed "s/+/ /g" | sed "s/%20/ /g"`
cat << junk
<html>
<head>
<title>
saying
</title>
</head>
<body>
junk
currVol=$(osascript -e "get volume settings")
echo "Current Volume Setting = $currVol"
var1=$( echo $currVol | cut -d":" -f1 )
var2=$( echo $currVol | cut -d":" -f2 )
origVol=$( echo $var2 | cut -d"," -f1 )
echo $origVol
parm="set volume output volume $origVol"
echo $parm
destfile="/Users/Sarah/Sound_Volume/Volume1.txt"
echo $parm > $destfile
osascript -e "set volume output volume 18"
cat << junk
</body>
</html>
junk
saysomethinghttp9.sh - After the volume level has been set, this script does the 'say' part based upon what is sent from the pi. The pi calls this script and sends a parameter, which is the words I want said. This call is repeated several times for the intro, date, time, weather forecast and closing; and
#!/bin/bash
echo -e "Content-type: text/html\n"
PHRASE=`echo "$QUERY_STRING" | sed -n 's/^.*phrase=\([^&]*\).*$/\1/p' | sed "s/+/ /g" | sed "s/%20/ /g"`
cat << junk
<html>
<head>
<title>
saying
</title>
</head>
<body>
junk
say "Hey There"
cat << junk
</body>
</html>
junk
saysomethinghttp9b.sh - Finally, this last script is called; it reads the original volume from the file created in the first step and then resets the volume to that level.
#!/bin/bash
echo -e "Content-type: text/html\n"
cat << junk
<html>
<head>
<title>
saying
</title>
</head>
<body>
junk
file="/Users/Sarah/Sound_Volume/Volume1.txt"
echo $file
read -d $'\x04' parm < "$file"
echo $parm
osascript -e "$parm"
cat << junk
</body>
</html>
junk
(note that I go through the steps to adjust the volume because the volume for music, from iTunes, is much too loud for the ‘say’ commands)
In trying to figure out what is wrong I have tried numerous things:
I edited the script saysomethinghttp9.sh to eliminate the ‘say’ of a parameter passed to it and simply put in the line say “Hey there” (note that the code above is the edited version)
I then opened up a terminal session on the iMac and issued the commands from there...
./saysomethinghttp9a.sh
./saysomethinghttp9.sh
./saysomethinghttp9b.sh
All 3 scripts worked when called from the terminal so that wasn’t the problem.
To debug the calls to the iMac, I simplified things by eliminating the iPad, the pi and the ISY994i from the process. Instead, I have been making the calls to the iMac from a PC on the same network using a browser.
http://10.0.1.11/cgi-bin/saysomethinghttp9a.sh
http://10.0.1.11/cgi-bin/saysomethinghttp9.sh
http://10.0.1.11/cgi-bin/saysomethinghttp9b.sh
The result of running the scripts directly from the browser on the PC was that saysomethinghttp9a.sh and saysomethinghttp9b.sh worked but saysomethinghttp9.sh did not!
Here are the Access and Error log entries from the iMac after trying the calls from the browser on the PC…
Access Log
10.0.1.195 - - [18/Dec/2017:21:33:30 -0500] "GET /cgi-bin/saysomethinghttp9a.sh HTTP/1.1" 200 197
10.0.1.195 - - [18/Dec/2017:21:34:04 -0500] "-" 408 -
10.0.1.195 - - [18/Dec/2017:21:33:44 -0500] "GET /cgi-bin/saysomethinghttp9.sh HTTP/1.1" 200 53
10.0.1.195 - - [18/Dec/2017:21:33:49 -0500] "GET /cgi-bin/saysomethinghttp9.sh HTTP/1.1" 200 53
10.0.1.195 - - [18/Dec/2017:21:35:05 -0500] "GET /cgi-bin/saysomethinghttp9b.sh HTTP/1.1" 200 135
Error Log
[Mon Dec 18 21:34:44.356130 2017] [cgi:warn] [pid 29997] [client 10.0.1.195:60109] AH01220: Timeout waiting for output from CGI script /Library/WebServer/CGI-Executables/saysomethinghttp9.sh
[Mon Dec 18 21:34:44.356519 2017] [core:error] [pid 29997] (70007)The timeout specified has expired: [client 10.0.1.195:60109] AH00574: ap_content_length_filter: apr_bucket_read() failed
[Mon Dec 18 21:34:49.949284 2017] [cgi:warn] [pid 29575] [client 10.0.1.195:60107] AH01220: Timeout waiting for output from CGI script /Library/WebServer/CGI-Executables/saysomethinghttp9.sh
[Mon Dec 18 21:34:49.949652 2017] [core:error] [pid 29575] (70007)The timeout specified has expired: [client 10.0.1.195:60107] AH00574: ap_content_length_filter: apr_bucket_read() failed
For full disclosure, my programming experience is relatively limited. I often piece things together using examples that I find online.
I do not know how to interpret the errors noted above! The only information I could find related to "The timeout specified has expired" was related to situations where a lot of data was being dealt with! In my case, there is very little data being processed!
I would appreciate some help or direction on how to proceed.
Edit:
After reading the comments from Mark Setchell, I added '/usr/bin/id' into my script and ran the script first in the terminal, which showed that the user name was correct. Then I ran the same script from the other PC and saw that the user name was '_www'! So I then edited the httpd.conf (apache2) file and changed the relevant section to include User Sarah and Group staff. However, this did not correct the problem!
Next I read up on how to 'use su to become that user and try the script'. Through the readings I kept finding suggestions to use sudo instead, and finally found a suggestion to edit the sudoers file. So I did this using the command sudo visudo. Then I added in the following line:
Sarah ALL=(ALL) NOPASSWD: ALL
Then I tried running the script from the PC once again, and this time the script ran and is saying again!

configuring nginx to work with python3.2

I'm having some trouble configuring nginx to work with Python3.2. I'm also struggling to find anything resembling a decent tutorial on the subject. I did however find a decent tutorial on getting nginx to play nice with Python2.7. My thought process was that since uwsgi works with plugins it should be a relatively simple exercise to follow the Python2.7 tutorial and just swap out the python plugin.
Here is the tutorial I followed to get a basic Hello World site working: https://library.linode.com/web-servers/nginx/python-uwsgi/ubuntu-12.04-precise-pangolin
/etc/uwsgi/apps_available/my_site_url.xml looks like:
<uwsgi>
<plugin>python</plugin>
<socket>/run/uwsgi/app/my_site_urlmy_site_url.socket</socket>
<pythonpath>/srv/www/my_site_url/application/</pythonpath>
<app mountpoint="/">
<script>wsgi_configuration_module</script>
</app>
<master/>
<processes>4</processes>
<harakiri>60</harakiri>
<reload-mercy>8</reload-mercy>
<cpu-affinity>1</cpu-affinity>
<stats>/tmp/stats.socket</stats>
<max-requests>2000</max-requests>
<limit-as>512</limit-as>
<reload-on-as>256</reload-on-as>
<reload-on-rss>192</reload-on-rss>
<no-orphans/>
<vacuum/>
</uwsgi>
Once everything was working, I installed uwsgi-plugin-python3 via apt-get. ls -l /usr/lib/uwsgi/plugins/ now outputs:
-rw-r--r-- 1 root root 142936 Jul 17 2012 python27_plugin.so
-rw-r--r-- 1 root root 147192 Jul 17 2012 python32_plugin.so
lrwxrwxrwx 1 root root 38 May 17 11:44 python3_plugin.so -> /etc/alternatives/uwsgi-plugin-python3
lrwxrwxrwx 1 root root 37 May 18 12:14 python_plugin.so -> /etc/alternatives/uwsgi-plugin-python
Changing python to python3 or python32 in my_site_url.xml has the same effect, i.e.:
The hello world page takes ages to load (it was effectively instantaneous before) and then comes up blank
My site's access log records the access
My site's error log records no new error
/var/log/uwsgi/app/my_site_url.log records the following:
[pid: 4503|app: 0|req: 1/2] 192.168.1.5 () {42 vars in 630 bytes} [Sun May 19 10:49:12 2013] GET / => generated 0 bytes in 0 msecs (HTTP/1.1 200) 2 headers in 65 bytes (1 switches on core 0)
So my question is: how can I correctly configure this app to work on Python 3.2?
The listed tutorial has the following application code:
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
This is incompatible with Python 3.2 because WSGI expects a bytes object, not a str. Replacing the application function with the following fixes things:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    # return an iterable of bytes, as the WSGI spec expects
    return [b"Hello World"]

django/celery - celery status: Error: No nodes replied within time constraint

I'm trying to deploy a simple example of celery on my production server. I've followed the tutorial on the celery website about running celery as a daemon (http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#daemonizing), and I got this config file in /etc/default/celeryd:
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"

# Where to chdir at start.
CELERYD_CHDIR="/home/audiwime/cidec_sw"

# Python interpreter from environment.
ENV_PYTHON="/usr/bin/python26"

# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"

# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"

# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"

# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
CELERYD_USER="audiwime"
CELERYD_GROUP="audiwime"

export DJANGO_SETTINGS_MODULE="cidec_sw.settings"
but if I run
celery status
in the terminal, I get this response:
Error: No nodes replied within time constraint
I can restart celery via the celeryd script provided in https://github.com/celery/celery/tree/3.0/extra/generic-init.d/
/etc/init.d/celeryd restart
celeryd-multi v3.0.12 (Chiastic Slide)
> w1.one.cloudwime.com: DOWN
> Restarting node w1.one.cloudwime.com: OK
I can run python26 manage.py celeryd -l info and my tasks in django run fine, but if I let the daemon do its work I don't get any results, not even errors in /var/log/celery/w1.log
I know that my task has been registered because I did this
from celery import current_app

def call_celery_delay(request):
    print current_app.tasks
    run.delay(request.GET['age'])
    return HttpResponse(content="celery task set", content_type="text/html")
and I get a dictionary in which my task (tareas.tasks.run) appears:
{'celery.chain': <@task: celery.chain>, 'celery.chunks': <@task: celery.chunks>, 'celery.chord': <@task: celery.chord>, 'tasks.add2': <@task: tasks.add2>, 'celery.chord_unlock': <@task: celery.chord_unlock>, 'tareas.tasks.run': <@task: tareas.tasks.run>, 'tareas.tasks.add': <@task: tareas.tasks.add>, 'tareas.tasks.test_two_minute': <@task: tareas.tasks.test_two_minute>, 'celery.backend_cleanup': <@task: celery.backend_cleanup>, 'celery.map': <@task: celery.map>, 'celery.group': <@task: celery.group>, 'tareas.tasks.test_one_minute': <@task: tareas.tasks.test_one_minute>, 'celery.starmap': <@task: celery.starmap>}
but besides that I get nothing else, no result from my task, no error in the logs, nothing.
What can be wrong?
Use the following command to find the problem:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
This usually happens because there are problems in your source project (permission issues, syntax errors, etc.).
As mentioned in the celery docs:
If the worker starts with “OK” but exits almost immediately afterwards
and there is nothing in the log file, then there is probably an error
but as the daemons standard outputs are already closed you’ll not be
able to see them anywhere. For this situation you can use the
C_FAKEFORK environment variable to skip the daemonization step
Good Luck
Source: Celery Docs
It's probably because the celery daemon is not started; this is one possible reason. So kindly restart it using python manage.py celeryd --loglevel=INFO
I solved my problem, it was a very simple solution, but it was also a weird one:
What I did was:
$ /etc/init.d/celerybeat restart
$ /etc/init.d/celeryd restart
$ service celeryd restart
I had to do this in that order; otherwise I'd get an ugly Error: No nodes replied within time constraint.
