Going through the answers on Super User, I'm trying to modify this to listen for multiple strings and echo custom messages such as 'Your server started successfully', etc.
I'm also trying to attach it to another command, i.e. pip.
wait_str() {
    local file="$1"; shift
    local search_term="Successfully installed"
    local search_term2='Exception'
    local wait_time="${1:-5m}"; shift # 5 minutes as default timeout
    (timeout $wait_time tail -F -n0 "$file" &) | grep -q "$search_term" && echo 'Custom success message' && return 0 || grep -q "$search_term2" && echo 'Custom exception message' && return 0
    echo "Timeout of $wait_time reached. Unable to find '$search_term' or '$search_term2' in '$file'"
    return 1
}
The usage I have in mind is:
pip install -r requirements.txt > /var/log/pip/dump.log && wait_str /var/log/pip/dump.log
To clarify, I'd like to get wait_str to stop tailing when pip exits, whether successfully or not.
The following is a general answer; tail can be replaced by any command that produces a stream of lines.
If different strings need different actions, use the following:
tail -f /var/log/pip/dump.log | awk '/condition-1/ {action for condition-1} /condition-2/ {action for condition-2} ...'
If multiple conditions need the same action, separate them with the OR operator:
tail -f /var/log/pip/dump.log | awk '/condition-1/ || /condition-2/ || /condition-n/ {take this action}'
Based on the comments: a single awk can do this.
tail -f /path/to/file | awk '/Exception/ { print "worked" } /compiler/ { print "worked" }'
or
tail -f /path/to/file | awk '/Exception/ || /compiler/ { print "worked" }'
Or exit when a match is found:
tail -f logfile | awk '/Exception/ || /compiler/ { print "worked"; exit }'
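Putting it together with the original wait_str goal, here is a minimal sketch (assuming GNU tail and timeout; the messages, the 5-minute default and the --pid idea are illustrative choices, not part of the answer above) that starts pip in the background, dispatches on both strings with a single awk, and stops tailing as soon as pip exits:
pip install -r requirements.txt > /var/log/pip/dump.log 2>&1 &
pip_pid=$!

# GNU tail's --pid option makes tail exit when pip does, so the watcher
# cannot outlive the install. Note the small race: -n0 starts at the end
# of the file, so lines written before tail starts are not seen.
timeout 5m tail -F -n0 --pid="$pip_pid" /var/log/pip/dump.log | awk '
    /Successfully installed/ { print "Your server started successfully"; exit }
    /Exception/              { print "pip raised an exception";          exit }
'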
I am stuck converting my script, which uses ssh to activate nodes, to pbsdsh. I am using Ray for node communication. My script with ssh is:
#!/bin/bash
#PBS -N Experiment_1
#PBS -l select=2:ncpus=24:mpiprocs=24
#PBS -P CSCIxxxx
#PBS -q normal
#PBS -l walltime=01:30:00
#PBS -m abe
#PBS -M xxxxx@gmail.com
ln -s $PWD $PBS_O_WORKDIR/$PBS_JOBID
cd $PBS_O_WORKDIR
jobnodes=`uniq -c ${PBS_NODEFILE} | awk -F. '{print $1 }' | awk '{print $2}' | paste -s -d " "`
thishost=`uname -n | awk -F. '{print $1}'`
thishostip=`hostname -i`
rayport=6379
thishostNport="${thishostip}:${rayport}"
echo "Allocate Nodes = <$jobnodes>"
export thishostNport
echo "set up ray cluster..."
for n in `echo ${jobnodes}`
do
    if [[ ${n} == "${thishost}" ]]
    then
        echo "first allocate node - use as headnode ..."
        module load chpc/python/anaconda/3-2019.10
        source /apps/chpc/chem/anaconda3-2019.10/etc/profile.d/conda.sh
        conda activate /home/mnasir/env1
        ray start --head
        sleep 5
    else
        ssh ${n} $PBS_O_WORKDIR/startWorkerNode.pbs ${thishostNport}
        sleep 10
    fi
done
python -u example_trainer.py
rm $PBS_O_WORKDIR/$PBS_JOBID
where startWorkerNode.pbs is:
#!/bin/bash -l
source $HOME/.bashrc
cd $PBS_O_WORKDIR
param1=$1
destnode=`uname -n`
echo "destnode is = [$destnode]"
module load chpc/python/anaconda/3-2019.10
source /apps/chpc/chem/anaconda3-2019.10/etc/profile.d/conda.sh
conda activate /home/mnasir/poet
ray start --address="${param1}" --redis-password='5241590000000000'
and the example_trainer.py is:
from collections import Counter
import os
import socket
import sys
import time

import ray

num_cpus = int(sys.argv[1])

ray.init(address=os.environ["thishostNport"])

print("Nodes in the Ray cluster:")
print(ray.nodes())  # This should print all N nodes we are trying to access

@ray.remote
def f():
    time.sleep(1)
    return socket.gethostbyname(socket.gethostname()) + "--" + str(socket.gethostname())

# The following takes one second (assuming that
# ray was able to access all of the allocated nodes).
for i in range(60):
    start = time.time()
    ip_addresses = ray.get([f.remote() for _ in range(num_cpus)])
    print("GOT IPs", ip_addresses)
    print(Counter(ip_addresses))
    end = time.time()
    print(end - start)
This works perfectly and communicates across all nodes, but when I try to change the command to pbsdsh it returns:
pbsdsh: task 0x00000000 exit status 254
pbsdsh: task 0x00000001 exit status 254
when mpiprocs=1; if it is set to 24, the message repeats 48 times.
To the best of my knowledge, Ray needs a head node to which the worker nodes connect, hence the for loop and the if statement in it.
I have tried directly replacing ssh with pbsdsh in the script, with and without identifying the nodes. I have added pbsdsh outside the loop and tried a whole lot of possible combinations.
I have followed these questions but could not get my code to communicate throughout nodes:
PBS/TORQUE: how do I submit a parallel job on multiple nodes?
How to execute a script on every allocated node with PBS
Handle multiple nodes in one pbs job
I believe the missing piece is something small that I have not managed to implement. Your help and guidance will be highly appreciated!
There are a few main things that needed to change to solve this problem:
#PBS -l select=2:ncpus=24:mpiprocs=1 should be used as the selector line; specifically, change mpiprocs from 24 to 1, so that pbsdsh only launches one process per node instead of 24.
Inside jobscript.sh, inside the else, you can use pbsdsh -n $J -- $PBS_O_WORKDIR/startWorkerNode.pbs ${thishostNport} & to run pbsdsh on only one node, and in the background. J is kept as a node index and is incremented at each iteration of the for loop. This results in ray start being run once on each node.
Inside startWorkerNode.pbs, add this code at the end:
# Here, sleep for the duration of the job, so ray does not stop
WALLTIME=$(qstat -f $PBS_JOBID | sed -rn 's/.*Resource_List.walltime = (.*)/\1/p')
# Use a plain variable name here: SECONDS is special in bash and keeps counting up.
SLEEP_SECONDS=`echo $WALLTIME | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }'`
echo "SLEEPING FOR $SLEEP_SECONDS s"
sleep $SLEEP_SECONDS
This ensures that ray start does not exit as soon as the pbsdsh command returns, and is kept alive for the duration of the job. The & in the previous point is also necessary here, as pbsdsh would never return without it.
Here are the files for reference:
startWorkerNode.pbs
#!/bin/bash -l
source $HOME/.bashrc
cd $PBS_O_WORKDIR
param1=$1
destnode=`uname -n`
echo "destnode is = [$destnode]"
module load chpc/python/anaconda/3-2019.10
source /apps/chpc/chem/anaconda3-2019.10/etc/profile.d/conda.sh
conda activate /home/mnasir/poet
ray start --address="${param1}" --redis-password='5241590000000000'
# Here, sleep for the duration of the job, so ray does not stop
WALLTIME=$(qstat -f $PBS_JOBID | sed -rn 's/.*Resource_List.walltime = (.*)/\1/p')
# Use a plain variable name here: SECONDS is special in bash and keeps counting up.
SLEEP_SECONDS=`echo $WALLTIME | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }'`
echo "SLEEPING FOR $SLEEP_SECONDS s"
sleep $SLEEP_SECONDS
jobscript.sh
#!/bin/bash
#PBS -N Experiment_1
#PBS -l select=2:ncpus=24:mpiprocs=1
#PBS -P CSCIxxxx
#PBS -q normal
#PBS -l walltime=01:30:00
#PBS -m abe
#PBS -M xxxxx@gmail.com
ln -s $PWD $PBS_O_WORKDIR/$PBS_JOBID
cd $PBS_O_WORKDIR
jobnodes=`uniq -c ${PBS_NODEFILE} | awk -F. '{print $1 }' | awk '{print $2}' | paste -s -d " "`
thishost=`uname -n | awk -F. '{print $1}'`
thishostip=`hostname -i`
rayport=6379
thishostNport="${thishostip}:${rayport}"
echo "Allocate Nodes = <$jobnodes>"
export thishostNport
echo "set up ray cluster..."
J=0
for n in `echo ${jobnodes}`
do
    if [[ ${n} == "${thishost}" ]]
    then
        echo "first allocate node - use as headnode ..."
        module load chpc/python/anaconda/3-2019.10
        source /apps/chpc/chem/anaconda3-2019.10/etc/profile.d/conda.sh
        conda activate /home/mnasir/env1
        ray start --head
        sleep 5
    else
        # Run pbsdsh on the J'th node, and do it in the background.
        pbsdsh -n $J -- $PBS_O_WORKDIR/startWorkerNode.pbs ${thishostNport} &
        sleep 10
    fi
    J=$((J+1))
done
python -u example_trainer.py 48
rm $PBS_O_WORKDIR/$PBS_JOBID
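For completeness, a hypothetical submission (assuming a standard PBS installation) is simply:
qsub jobscript.sh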
The output of my first command, bcftools query -l {input.invcf} | head -n 1, prints the name of the first individual in the VCF file (i.e. IND1). I want to use that output in GATK SelectVariants, in its -sn IND1 option. How is it possible to integrate the first command in Snakemake so that its output can be used in the next one?
rule selectvar:
    input:
        invcf="{family}_my.vcf"
    params:
        ind= ???
        ref="ref.fasta"
    output:
        out="{family}.dn.vcf"
    shell:
        """
        bcftools query -l {input.invcf} | head -n 1 > {params.ind}
        gatk --java-options "-Xms2G -Xmx2g -XX:ParallelGCThreads=2" SelectVariants -R {params.ref} -V {input.invcf} -sn {params.ind} -O {output.out}
        """
There are several options, but the easiest one is to store the result in a temporary bash variable:
rule selectvar:
    ...
    shell:
        """
        myparam=$(bcftools query -l {input.invcf} | head -n 1)
        gatk -sn "$myparam" ...
        """
As noted by @dariober, if one modifies pipefail behaviour, there could be unexpected results; see the example in this answer.
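To illustrate that caveat with a sketch (in.vcf stands in for the rule's input; recent Snakemake versions run shell blocks with set -euo pipefail): head exits after printing the first line, bcftools can then be killed by SIGPIPE on a long sample list, and with pipefail that status (141) fails the whole pipeline and hence the rule:
myparam=$(bcftools query -l in.vcf | head -n 1)    # may fail with status 141 under pipefail
myparam=$(bcftools query -l in.vcf | awk 'NR==1')  # awk reads to EOF, so no SIGPIPE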
When I have to do these things I prefer to use run instead of shell, and then shell out only at the end.
The reason is that this makes it possible for Snakemake to lint the run statement, and to exit early if something goes wrong instead of following through with a broken shell command.
rule selectvar:
    input:
        invcf="{family}_my.vcf"
    params:
        ref="ref.fasta",
        gatk_opts='--java-options "-Xms2G -Xmx2g -XX:ParallelGCThreads=2" SelectVariants'
    output:
        out="{family}.dn.vcf"
    run:
        opts = "{params.gatk_opts} -R {params.ref} -V {input.invcf} -O {output.out}"
        # read=True makes shell() return the command's stdout; strip the trailing newline.
        sn_parameter = shell("bcftools query -l {input.invcf} | head -n 1", read=True).strip()
        # we could add a sanity check here if necessary, before shelling out
        shell("gatk " + opts + " -sn " + sn_parameter)
I think I found a solution:
rule selectvar:
    input:
        invcf="{family}_my.vcf"
    params:
        ref="ref.fasta"
    output:
        out="{family}.dn.vcf"
    shell:
        """
        gatk --java-options "-Xms2G -Xmx2g -XX:ParallelGCThreads=2" SelectVariants -R {params.ref} -V {input.invcf} -sn `bcftools query -l {input.invcf} | head -n 1` -O {output.out}
        """
I am using the Python script below to filter data from the /var/log/messages file of a target machine by date and time, but I am getting a syntax error.
I am using Python 2.7 and will not be able to upgrade.
#!/usr/bin/python
import cgi, cgitb
import os
from subprocess import PIPE, Popen

def cmdline(command):
    process = Popen(
        args=command,
        stdout=PIPE,
        shell=True
    )
    return process.communicate()[0]

out4 = cmdline('sshpass -p redhat ssh -o ConnectTimeout=6 -o NumberOfPasswordPrompts=2 -o StrictHostKeyChecking=no -tt ricky@192.168.0.50 "echo redhat | sudo -S zless /var/log/messages* | grep \'^Sep 9\' | awk \' \$3 > "09:30" && \$3 < "23:50" \' "')
print(out4)
I get the following output when executing this script:
Connection to 192.168.0.50 closed.
awk: $3 > 09:30 && $3 < 23:50
awk: ^ syntax error
awk: $3 > 09:30 && $3 < 23:50
awk: ^ syntax error
[sudo] password for ricky:
Can anyone please help me correct it?
You are falling into quoting hell due to multiple levels of special-character handling by the multiple shells you're running one inside the other. You can see it in the error output: by the time awk receives the program, the double quotes around "09:30" and "23:50" have been eaten by one of the intermediate shells, so awk sees the bare 09:30 and chokes on the colon.
You can reduce your doom by avoiding shell=True, so there is one less shell to escape for:
# subprocess.run only exists on Python 3; since you are stuck on 2.7,
# Popen/communicate does the same job here.
p = Popen([
    'sshpass', '-p', 'redhat',
    'ssh',
    '-o', 'ConnectTimeout=6',
    '-o', 'NumberOfPasswordPrompts=2',
    '-o', 'StrictHostKeyChecking=no',
    '-tt', 'ricky@192.168.0.50',
    # Only one level of quoting is left: the remote shell. Single quotes
    # protect the awk program, so $3 no longer needs any backslash escaping.
    """sudo -S zless /var/log/messages* | grep '^Sep 9' | awk '$3 > "09:30" && $3 < "23:50"'"""
], stdin=PIPE, stdout=PIPE)
out4 = p.communicate('redhat\n')[0]
Well, I want to check 100,000+ URLs on Linux.
Those links are OTA [zip] updates for my Android device.
Among those links there is only one valid link; the rest give a 404 error.
How can I check all the links in a short time on a Linux server or web server [Apache]?
structure of urls:
http://link.com/updateOTA_1.zip
http://link.com/updateOTA_2.zip
http://link.com/updateOTA_999999999.zip
Okay, here is what I tried.
I made this script, but it is really slow: http://pastebin.com/KVxnzttA. I also increased the threads up to 500 and then my server crashed :[
#!/bin/bash
for a in {1487054155500..1487055000000}
do
    if [ $((a%50)) = 0 ]
    then
        curl -s -I http://link.com/updateOTA_$((a)).zip | head -n1 &
        curl -s -I http://link.com/updateOTA_$((a+1)).zip | head -n1 &
        curl -s -I http://link.com/updateOTA_$((a+2)).zip | head -n1 &
        curl -s -I http://link.com/updateOTA_$((a+3)).zip | head -n1 &
        curl -s -I http://link.com/updateOTA_$((a+4)).zip | head -n1 &
        ...
        curl -s -I http://link.com/updateOTA_$((a+49)).zip | head -n1 &
        curl -s -I http://link.com/updateOTA_$((a+50)).zip | head -n1
        wait
        echo "$((a))"
    fi
done
I tried aria2, but the highest number of connections in aria2 is 16, so that failed again.
I tried some online tools, but they gave me a 100-URL restriction.
Running curl 100,000+ times is going to be slow. Instead, write batches of URLs to a single instance of curl to reduce the overhead of starting curl.
# This loop doesn't require pre-generating a list of a million integers
for ((a=1487054155500; a<=1487055000000; a+=50)); do
    for ((k=0; k<50; k++)); do
        printf 'url = %s\n' "http://link.com/updateOTA_$((a+k)).zip"
    done | curl -I -K - -w 'result: %{http_code} %{url_effective}\n' | grep -F 'result:' > batch-$a.txt
done
The -w option is used to produce output associating each URL with its result, should you want that.
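If you also want parallelism on top of the batching, here is a hedged sketch (assuming GNU xargs and curl; the URL pattern is from the question, while the batch size of 50 and the 8 workers are arbitrary choices) that distributes batches across several curl processes and keeps only the hits:
seq 1487054155500 1487055000000 \
    | xargs -n 50 -P 8 sh -c '
        for id in "$@"; do
            printf "url = http://link.com/updateOTA_%s.zip\n" "$id"
        done | curl -s -I -K - -o /dev/null -w "result: %{http_code} %{url_effective}\n"
      ' _ \
    | grep "^result: 200"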
However, I found a solution using aria2c.
It now scans 7k URLs per minute.
Thanks to all.
aria2c -i url -s16 -x16 --max-concurrent-downloads=1000
(Here -i url reads the URL list from a file named url, -s16/-x16 allow up to 16 connections per server for each download, and --max-concurrent-downloads lets many downloads run at once.)
I am using Python to execute an external program by simply using Python's "os" library:
os.system("./script1") # script1 generates several log files and the most important for me is "info.log" !
os.system("./script2") # script2 uses the output from script1
The problem is that those scripts sit inside a 50,000-element for loop, and script1 needs at least 2 minutes to finish its job (it has a fixed duration).
In the first 1-2 seconds I can find out whether I need the output data or not by looking into the info.log file. However, as script1 is an already-compiled program that I can't modify, I have to wait until it finishes.
I was thinking about a method in Bash that allows me to run two processes at the same time:
one to start ./script1 and another to monitor any changes to the info.log file.
If info.log has been updated or changed in size, then the second script should terminate both processes.
Something like:
os.system("./mercury6 1 > ./energy.log 2>/dev/null & sleep 2 & if [ $(stat -c %s ./info.log) -ge 1371]; then kill %1; else echo 'OK'; fi")
which does not work...
Please, if someone knows a method, let me know!
I would suggest using subprocess, combining this answer describing how to kill a bash script with some method of monitoring the file, such as watchdog or a simple polling loop.
import os
import time

os.system("./script1 &")  # start script1 in the background so we can poll while it runs
while True:  # now this runs in 10-second cycles and checks for the log
    try:  # so you don't have to wait the whole 2 min for script1 to be ready
        o = open("info.log", "r")  # or add os.system("./script2") here
        o.close()
        # Add more logs here
        break
    except IOError:
        print "waiting for script1 to be ready.."
        time.sleep(10)  # adjust (seconds) to give script1 enough time
You can try this one. Tell me if script1 pauses at runtime so we can try to configure it further.
#!/bin/bash
[ -n "$BASH_VERSION" ] || {
    echo "You need Bash to run this script."
    exit 1
}
set +o monitor ## Disable job control.
INFO_LOG='./info.log'
if [[ ! -f $INFO_LOG ]]; then
    # We must create the file.
    : > "$INFO_LOG" || {
        echo "Unable to create $INFO_LOG."
        exit 1
    }
fi
# stat -c '%s' yields the file's size in bytes; we watch it for changes.
read FIRSTSIZE < <(stat -c '%s' "$INFO_LOG") && [[ $FIRSTSIZE == +([[:digit:]]) ]] || {
    echo "Unable to get the size of $INFO_LOG."
    exit 1
}
# Run the first process with the script.
./script1 &
SCRIPT1_PID=$!
disown "$SCRIPT1_PID"
# Run the second process.
(
    function kill_processes {
        echo "Killing both processes."
        kill "$SCRIPT1_PID"
        exit 1
    }
    # Check the file every 4 seconds until it changes or disappears.
    while :; do
        [[ -f $INFO_LOG ]] || {
            echo "File has been deleted."
            kill_processes
        }
        read NEWSIZE < <(stat -c '%s' "$INFO_LOG") && [[ $NEWSIZE == +([[:digit:]]) ]] || {
            echo "Unable to get the new size of $INFO_LOG."
            kill_processes
        }
        [[ NEWSIZE -eq FIRSTSIZE ]] || {
            echo "$INFO_LOG has changed."
            kill_processes
        }
        sleep 4s
    done
) &
disown "$!"
You might also want to check some other similar solutions here: linux script with netcat stops working after x hours
To react to the log's content instead of its size, the second process can tail the file and match specific lines:
(
    function kill_processes {
        echo "Killing both processes."
        kill "$mercury6_PID"
        exit 1
    }
    while IFS= read -r LINE; do
        [[ "${LINE}" == "ejected" ]] && { ## Perhaps it should be == *ejected* ?
            echo "$INFO_LOG has changed."
            kill_processes
        }
    done < <(tail -f "$INFO_LOG")
) &
disown "$!"
It has been done! Not so elegant, but it works! This is the code:
#!/bin/bash
[ -n "$BASH_VERSION" ] || {
    echo "You need Bash to run this script."
    exit 1
}
# set +o monitor ## Disable job control.
INFO_LOG="./info.out"
# Run the first process with the script.
./mercury6 1 > ./energy.log 2>/dev/null &
mercury6_PID=$!
disown "$mercury6_PID"
# Run the second process.
(
    sleep 5
    function kill_processes {
        echo "Killing both processes."
        kill "$mercury6_PID"
        exit 1
    }
    while IFS= read -r LINE; do
        [[ "${LINE}" == "ejected" ]] || [[ "${LINE}" == "complete." ]] && {
            echo "$INFO_LOG has changed."
            killall tail
            kill_processes
        }
    done < <(tail -f "$INFO_LOG")
)
#&
#disown "$!"
If you have any recommendations, you are very welcome! BTW, I will try to do it with os.fork() in Python as well.
Thanks again!
See here (Google is your friend): you can make the processes background tasks with the & sign, then use wait to have the parent process wait for its children.
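A minimal sketch of that idea, reusing the file and size check from the question (the 1371-byte threshold is the asker's own number):
./script1 &                 # run script1 as a background task
pid=$!
sleep 2                     # give it a moment to write info.log
if [ "$(stat -c %s ./info.log)" -ge 1371 ]; then
    kill "$pid"             # the log already grew; stop the run early
else
    echo 'OK'
fi
wait "$pid"                 # the parent waits for its child either way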