I know this is probably a very silly OpenFace question, but I am new to OpenFace and I am not able to figure this out.
align = openface.AlignDlib()
What do we need to pass as the argument? In the OpenFace documentation it is written
align = openface.AlignDlib(args.dlibFacePredictor)
and its documentation says the argument is a string which is the path to dlib's face predictor.
I don't know what that means; this is my first program with OpenFace.
The string is expected to point to a pretrained model residing on your hard disk. Some dlib models are listed here (official developer).
In terms of OpenFace, the model-download code shows:
mkdir -p dlib
if [ ! -f dlib/shape_predictor_68_face_landmarks.dat ]; then
  printf "\n\n====================================================\n"
  printf "Downloading dlib's public domain face landmarks model.\n"
  printf "Reference: https://github.com/davisking/dlib-models\n\n"
  printf "This will incur about 60MB of network traffic for the compressed\n"
  printf "models that will decompress to about 100MB on disk.\n"
  printf "====================================================\n\n"
  wget -nv \
       http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 \
       -O dlib/shape_predictor_68_face_landmarks.dat.bz2
  [ $? -eq 0 ] || die "+ Error in wget."
  bunzip2 dlib/shape_predictor_68_face_landmarks.dat.bz2
  [ $? -eq 0 ] || die "+ Error using bunzip2."
fi
Meaning: it's shape_predictor_68_face_landmarks.dat (part of the list above).
You can look into the code for additional understanding, e.g. here:
def __init__(self, inputDir, outputDir, verbose):
    self.inputDir = inputDir
    self.dlibFacePredictor = os.path.join(
        dlibModelDir, "shape_predictor_68_face_landmarks.dat")
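Putting it together, a minimal sketch of constructing AlignDlib (assuming you have downloaded and unpacked shape_predictor_68_face_landmarks.dat into a local models/dlib/ directory; the exact path is up to you):

import os
import openface

# Assumed location -- point this at wherever the .dat file actually lives on disk.
dlibFacePredictor = os.path.join("models", "dlib",
                                 "shape_predictor_68_face_landmarks.dat")

# AlignDlib only needs the path to dlib's pretrained landmark model.
align = openface.AlignDlib(dlibFacePredictor)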
Related
So, I don't know Python, but I am trying to work out this script so that it searches for all m3u8 files instead of just one with the same name in both folders.
For example, I would like to use all_streams.m3u8 in one folder,
and then use pluto.m3u8, Freeview.m3u8, yt.m3u8, and any other m3u8 files that are in the second folder.
So it would merge all_streams.m3u8 with pluto.m3u8, freeview.m3u8, yt.m3u8 and any other m3u8 files.
These two scripts are meant to merge m3u8 files and remove all the excess junk from them so that the main file can be read by any player.
Here are the two scripts, the bash and the Python:
merge_m3u8_files_in_dirs.sh
#!/bin/bash
### Helper script to merge m3u8 files
#########################################
##### START OF FUNCTION DEFINITIONS #####
#########################################
merge_m3u8_files () {
  local m3u8_merge_into_file=$1
  local m3u8_merge_from_file=$2
  local json_check_file=$3

  echo "### START Processing - '${m3u8_merge_into_file}' ###"
  ls -alt "${m3u8_merge_into_file}"
  ls -alt "${json_check_file}"

  python "${SCRIPT_PATH}/merge_m3u8_files.py" --merge_into_file "${m3u8_merge_into_file}" --merge_from_file "${m3u8_merge_from_file}"
  return_code=$?   # capture the exit status of the python script
  if [[ $return_code -eq 0 ]];
  then
    echo "  No Issues"
  else
    echo "  Issues Found"
  fi
  echo "### FINISH Processing - '${m3u8_merge_into_file}' ###"
}
#######################################
##### END OF FUNCTION DEFINITIONS #####
#######################################
echo '##### Calling: '`basename "$0"` '('$0')'
### Verify the parsed variables
echo Verifying passed arguments
m3u_merge_into_dir=$1
m3u_merge_from_dir=$2

if [[ -z ${m3u_merge_into_dir} ]];
then
  echo "arg1 - M3u Merge into directory is not set"
  exit 1
fi
if [[ ! -d ${m3u_merge_into_dir} ]];
then
  echo "Directory '${m3u_merge_into_dir}' DOES NOT exist."
  exit 1
fi
if [[ -z ${m3u_merge_from_dir} ]];
then
  echo "arg2 - M3u Merge from directory is not set"
  exit 1
fi
if [[ ! -d ${m3u_merge_from_dir} ]];
then
  echo "Directory '${m3u_merge_from_dir}' DOES NOT exist."
  exit 1
fi
### Action ###
SCRIPT_PATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# find "$m3u_merge_into_dir/" "$m3u_merge_from_dir/" -printf '%P\n' | sort | uniq -d
for file in $m3u_merge_into_dir/*.m3u8; do
  name=${file##*/}
  if [[ -f $m3u_merge_from_dir/*.m3u8 ]]; then
    echo "$name exists in both directories, process"
    merge_m3u8_files "${m3u_merge_into_dir}/${name}" "${m3u_merge_from_dir}/${name}"
  fi
done
merge_m3u8_files.py
"""Script to convert iptvcat json files to m3u8."""
import argparse
import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from ez_m3u8_creator import m3u8
def main():
    """Run the main function."""
    parser = argparse.ArgumentParser()
    parser.add_argument('-mi', '--merge_into_file', help='The file to merge into.', required=True)
    parser.add_argument('-mf', '--merge_from_file', help='The file to merge from.', required=True)
    args = parser.parse_args()

    print('merge_into_file', args.merge_into_file)
    print('merge_from_file', args.merge_from_file)

    if (not os.path.exists(args.merge_into_file)) or not os.path.isfile(args.merge_into_file):
        raise ValueError(F'"{args.merge_into_file}" is not a valid file')
    if (not os.path.exists(args.merge_from_file)) or not os.path.isfile(args.merge_from_file):
        raise ValueError(F'"{args.merge_from_file}" is not a valid file')

    m3u8.merge_m3u8_files(merge_into_path=args.merge_into_file, merge_from_path=args.merge_from_file)


if __name__ == '__main__':
    main()
I've tried to find the answer, but can't seem to find one that works.
For example, the plain *.m3u8 doesn't work, and sadly I don't know enough Python to get it working.
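Not a definitive fix, but here is a rough Python sketch of the "merge every *.m3u8 from the second folder into one target file" idea, reusing the existing merge_m3u8_files.py. The all_streams.m3u8 target and the directory arguments are only illustrative assumptions:

import glob
import os
import subprocess
import sys

merge_into_file = sys.argv[1]  # e.g. folder1/all_streams.m3u8 (illustrative)
merge_from_dir = sys.argv[2]   # e.g. folder2, holding pluto.m3u8, freeview.m3u8, ...

# Call the existing merge script once for every .m3u8 found in the second folder.
for merge_from_file in sorted(glob.glob(os.path.join(merge_from_dir, "*.m3u8"))):
    subprocess.run(
        [sys.executable, "merge_m3u8_files.py",
         "--merge_into_file", merge_into_file,
         "--merge_from_file", merge_from_file],
        check=True,
    )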
Going through the answers at Super User,
I'm trying to modify this to listen for multiple strings and echo custom messages such as 'Your server started successfully', etc.
I'm also trying to tack it onto another command, i.e. pip.
wait_str() {
  local file="$1"; shift
  local search_term="Successfully installed"; shift
  local search_term2='Exception'
  local wait_time="${1:-5m}"; shift # 5 minutes as default timeout

  (timeout $wait_time tail -F -n0 "$file" &) | grep -q "$search_term" && echo 'Custom success message' && return 0 || || grep -q "$search_term2" && echo 'Custom success message' && return 0

  echo "Timeout of $wait_time reached. Unable to find '$search_term' or '$search_term2' in '$file'"
  return 1
}
The usage I have in mind is:
pip install -r requirements.txt > /var/log/pip/dump.log && wait_str /var/log/pip/dump.log
To clarify, I'd like to get wait_str to stop tailing when pip exits, whether successfully or not.
The following is a general answer; tail can be replaced by any command that produces a stream of lines.
If different strings need different actions, then use the following:
tail -f /var/log/pip/dump.log | awk '/condition-1/ {action for condition-1} /condition-2/ {action for condition-2} ...'
If multiple conditions need the same action, then separate them using the OR operator:
tail -f /var/log/pip/dump.log | awk '/condition-1/ || /condition-2/ || /condition-n/ {take this action}'
Based on the comments: a single awk can do this.
tail -f /path/to/file | awk '/Exception/{ print "Worked"} /compiler/{ print "worked"}'
or
tail -f /path/to/file | awk '/Exception/||/compiler/{ print "worked"}'
Or, exit when a match is found:
tail -f logfile | awk '/Exception/||/compiler/{ print "worked"; exit}'
I'm working on an application that will eventually graph the GPG signature connections between a predefined set of email addresses. I need it to programmatically collect the public keys from a key server. I have a working model that uses the --search-keys option to gpg. However, when run with the --batch flag, I get the error "gpg: Sorry, we are in batchmode - can't get input". When I run without the --batch flag, gpg expects input.
I'm hoping there is some flag to gpg that I've missed. Alternatively, a library (preferably python) that will interact with a key server would do.
Use
gpg --batch --keyserver hkp://pool.sks-keyservers.net --search-keys ...
and parse the output to get key IDs.
After that
gpg --batch --keyserver hkp://pool.sks-keyservers.net --recv-keys key-id key-id ..
should work.
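If you would rather drive those two steps from Python than from the shell, here is a rough sketch wrapping the same gpg calls with subprocess. The key-ID regular expression is an assumption modelled on gpg's human-readable --search-keys output and may need adjusting for your gpg version:

import re
import subprocess

KEYSERVER = "hkp://keyserver.ubuntu.com"  # any reachable HKP keyserver

def search_key_ids(term):
    """Run gpg --search-keys in batch mode and pull the key IDs out of its output."""
    out = subprocess.run(
        ["gpg", "--batch", "--keyserver", KEYSERVER, "--search-keys", term],
        capture_output=True, text=True,
    )
    # Assumed line shape: "... 2048 bit RSA key ABCDEF0123456789, created ..."
    return re.findall(r"\d+\s*bit\s*\S+\s*key\s*([^,]+)", out.stdout + out.stderr)

def recv_keys(key_ids):
    """Import the given key IDs into the local keyring."""
    subprocess.run(
        ["gpg", "--batch", "--keyserver", KEYSERVER, "--recv-keys", *key_ids],
        check=True,
    )

ids = search_key_ids("someone@example.org")
if ids:
    recv_keys(ids)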
GnuPG does not perform very well anyway when you import very large portions of the web of trust, especially during the import phase.
I'd go for setting up a local keyserver, dumping all the keys into it (less than 10 GB of download size in 2014) and directly querying your own, local keyserver.
Hockeypuck is rather easy to set up and, especially, to query, as it stores the data in a PostgreSQL database.
Use --recv-keys to get the keys without prompting.
In the case of an hkps server, the following would work:
gpg --keyserver hkps://***HKPSDOMAIN*** --recv-keys \
$(curl -s "https://***HKPSDOMAIN***/?op=index&options=mr&search=***SEARCHSTRING***"\
|grep pub|awk -F ":" '{print $2}')
We can store the stdout and stderr output of the gpg --search-keys command in variables by specifying 2>&1, then work on those variables. For example, get the public key IDs for those with *.amazon.com email addresses:
pubkeyids=$(gpg --batch --keyserver hkp://keyserver.ubuntu.com --search-keys amazon.com 2>&1 | grep -Po '\d+\s*bit\s*\S+\s*key\s*[^,]+' | cut -d' ' -f5)
The regular expression is fully explained on regex101.com. We can automate searching for keys by their IDs and add them to the keyring using bash by parsing that output. As an illustration, I created the following GitHub gist to host the code below.
Example address list example.csv:
First Name,Last Name,Email Address
Hi,Bye,hi@bye.com
Yes,No,yes@no.com
Why,Not,why@not.com
Then we can pass the csv path to a bash script which will add all keys belonging to the email addresses in the csv:
$ getPubKeysFromCSV.sh ~/example.csv
Here is an implementation of the above idea, getPubKeysFromCSV.sh:
# CSV of email addresses
csv=$1

# Get headers from CSV
headers=$(head -1 $csv)

# Find the column number of the email address
emailCol=$(echo $headers | tr ',' '\n' | grep -n "Email Address" | cut -d':' -f1)

# Content of the CSV at emailCol column, skipping the first line
emailAddrs=($(tail -n +2 $csv | cut -d',' -f$emailCol))

gpgListPatrn='(?<entropy>\d+)\s*bit\s*(?<algo>\S+)\s*key\s*(?<pubkeyid>[^,]+)'

# Loop through the array and get the public keys
for email in "${emailAddrs[@]}"
do
    # Get the public key ids for the email address by matching the regex gpgListPatrn
    pubkeyids=$(gpg --batch --keyserver hkp://keyserver.ubuntu.com --search-keys $email 2>&1 | grep -Po "$gpgListPatrn" | cut -d' ' -f5)
    # For each public key id, get the public key
    for pubkeyid in $pubkeyids
    do
        # Add the public key to the local keyring
        recvr=$(gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys $pubkeyid 2>&1)
        # Check exit code to see if the key was added
        if [ $? -eq 0 ]; then
            # If the public key is added, do some extra work with it
            # [do stuff]
            :
        fi
    done
done
If we wanted, we could make getPubKeysFromCSV.sh more complex by verifying a file signature in the body of the loop, after successfully adding the public key. In addition to the CSV path, we will pass the signature path and file path as arguments two and three respectively:
$ getPubKeysFromCSV.sh ~/example.csv ./example.file.sig ./example.file
Here is the updated script difference as a diff:
--- original.sh
+++ updated.sh
@@ -1,6 +1,12 @@
# CSV of email address
csv=$1
+# signature file
+sig=$2
+
+# file to verify
+file=$3
+
# Get headers from CSV
headers=$(head -1 $csv)
@@ -22,5 +28,17 @@
recvr=$(gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys $pubkeyid 2>&1)
# Check exit code to see if the key was added
+ if [ $? -eq 0 ]; then
+ verify=$(gpg --batch --verify $sig $file 2>&1)
+ # If the signature is verified, announce it was verified
+ # else, print error not verified and exit
+ if [[ $verify =~ "gpg: Good signature from" ]]; then
+ echo "$file was verified by $email using $pubkeyid"
+ else
+ printf '%s\n' "$file was unable to be verified" >&2
+ exit 1
+ fi
+ fi
done
done
I am using Python to execute an external program by simply using Python's "os" library:
os.system("./script1") # script1 generates several log files and the most important for me is "info.log" !
os.system("./script2") # script2 uses the output from script1
The problem is that those scripts are inside a 50,000-element for loop, and script1 needs at least 2 minutes to finish its job (it has a fixed duration).
In the first 1-2 seconds I can find out whether I need the output data or not by looking into the info.log file. However, as script1 is an already compiled program and I can't modify it, I have to wait until it finishes.
I was thinking about a method in Bash that allows me to run two processes at the same time:
one is to start ./script1 and the other is to monitor any change in the "info.log" file...
If "info.log" has been updated or changed in size, then the second script should terminate both processes.
Something like:
os.system("./mercury6 1 > ./energy.log 2>/dev/null & sleep 2 & if [ $(stat -c %s ./info.log) -ge 1371]; then kill %1; else echo 'OK'; fi")
– which does not work...
Please, if someone knows a method, let me know!
I would suggest using subprocess, with a combination of this answer describing how to kill a bash script along with some method of monitoring the file, something like watchdog, or a simple polling loop.
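For instance, a rough sketch along those lines using subprocess.Popen plus a simple polling loop; the info.log name and the 1371-byte threshold are taken from the question and are only illustrative:

import os
import subprocess
import time

proc = subprocess.Popen(["./script1"])  # start script1 in the background

log_file = "info.log"
size_threshold = 1371  # bytes, as in the question's stat check

# Poll the log file; if it reaches the threshold before script1 finishes,
# we already know whether we need the output, so kill script1 early.
while proc.poll() is None:
    if os.path.exists(log_file) and os.path.getsize(log_file) >= size_threshold:
        proc.terminate()  # or proc.kill() if script1 ignores SIGTERM
        break
    time.sleep(1)

proc.wait()
# script2 uses the output from script1
subprocess.call(["./script2"])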
import os
import time

os.system("script1")

while True:  # Now this runs in 10-second cycles and checks the output of the logs,
    try:     # so you don't have to wait the whole 2 minutes for script1 to be ready
        o = open("info.log", "r")  # or add os.system("script2") here
        o.close()
        # Add more logs here
        break
    except IOError:
        print("waiting for script1 to be ready..")
        time.sleep(10)  # Modify (seconds) to be sufficient for script1 to be ready
        continue
You can try this one. Tell me if script1 pauses at runtime so we can try to configure it further.
#!/bin/bash

[ -n "$BASH_VERSION" ] || {
    echo "You need Bash to run this script."
    exit 1
}

set +o monitor  ## Disable job control.

INFO_LOG='./info.log'

if [[ ! -f $INFO_LOG ]]; then
    # We must create the file.
    : > "$INFO_LOG" || {
        echo "Unable to create $INFO_LOG."
        exit 1
    }
fi

read FIRSTTIMESTAMP < <(stat -c '%s' "$INFO_LOG") && [[ $FIRSTTIMESTAMP == +([[:digit:]]) ]] || {
    echo "Unable to get stat of $INFO_LOG."
    exit 1
}

# Run the first process with the script.
./script1 &
SCRIPT1_PID=$!
disown "$SCRIPT1_PID"

# Run the second process.
(
    function kill_processes {
        echo "Killing both processes."
        kill "$SCRIPT1_PID"
        exit 1
    }

    [[ -f $INFO_LOG ]] || {
        echo "File has been deleted."
        kill_processes
    }

    read NEWTIMESTAMP < <(stat -c '%s' "$INFO_LOG") && [[ $NEWTIMESTAMP == +([[:digit:]]) ]] || {
        echo "Unable to get new timestamp of $INFO_LOG."
        kill_processes
    }

    [[ $NEWTIMESTAMP -ne $FIRSTTIMESTAMP ]] && {
        echo "$INFO_LOG has changed."
        kill_processes
    }

    sleep 4s
) &

disown "$!"
You might also want to check some other similar solutions here: linux script with netcat stops working after x hours
(
    function kill_processes {
        echo "Killing both processes."
        kill "$mercury6_PID"
        exit 1
    }

    while IFS= read -r LINE; do
        [[ "${LINE}" == "ejected" ]] && {  ## Perhaps it should be == *ejected* ?
            echo "$INFO_LOG has changed."
            kill_processes
        }
    done < <(tail -f "$INFO_LOG")
) &

disown "$!"
It has been done! Not so elegant, but it works! This is the code:
#!/bin/bash

[ -n "$BASH_VERSION" ] || {
    echo "You need Bash to run this script."
    exit 1
}

# set +o monitor ## Disable job control.

INFO_LOG="./info.out"

# Run the first process with the script.
./mercury6 1 > ./energy.log 2>/dev/null &
mercury6_PID=$!
disown "$mercury6_PID"

# Run the second process.
(
    sleep 5
    function kill_processes {
        echo "Killing both processes."
        kill "$mercury6_PID"
        exit 1
    }

    while IFS= read -r LINE; do
        [[ "${LINE}" == "ejected" ]] || [[ "${LINE}" == "complete." ]] && {
            echo "$INFO_LOG has changed."
            killall tail
            kill_processes
        }
    done < <(tail -f "$INFO_LOG")
)
# &
# disown "$!"
If you have any recommendations, you are very welcome! BTW, I will try to do it with os.fork() in Python as well.
Thanks again!
See here (Google is your friend). You can make the processes background tasks with the & sign, then use wait to have the parent process wait for its children.
I am sorry for my bad English.
I have been searching for a solution to my question for some days, but have not found one.
This is my question:
I have some management shell scripts on server-A.
When I use
ssh username@other_server_ip < shell_script.sh
it runs OK.
I want to do the same thing from Python.
I have tested:
1. paramiko: exec_command(str) only runs ONE command, and feeding stdin to invoke_shell did not work either.
2. pexpect: sendline() also only sends ONE command.
Please help me, thanks!
(Some AIX machines do not support sftp, so I do not want to use an sftp script to copy it to the other server.)
The shell script looks like this:
#!/bin/sh

if [ $# -lt 1 ]

os=`uname`
if [ "$os" = "linux" ] || [ "$os" = "Linux" ]
then
    temp=`df -P $diskname | tail -1`
    if [ "$temp" = "" ]
    then
        echo "error!t=$diskname not found"
        exit 0
    fi
    # diskutil=`echo $temp|awk '{printf("%s",$5)}'|awk '{gsub("%",""); print $0}'`
    disk_size=`echo $temp | awk '{print $2}'`
    disk_size_mb=`expr $disk_size / 1024`
    disk_size=`echo | awk '{ printf("%.2f",(c1/1024.0)) }' c1=$disk_size_mb`
    disk_size="${disk_size}"
elif [ "$os" = "SunOS" ]
then
    temp=`df -k $diskname | tail -1`
    ....
elif [ "$os" = "AIX" ] || [ "$os" = "aix" ]
then
    temp=`df -k $diskname | tail -1 | sed -e "s/%//g"`
    ....
else
    echo "error!!=Unsupported platform: $os"
    exit
fi
echo "Total Size=$disk_size_mb"
If just triggering the script from Python is enough, use subprocess.call from the standard library. From looking at your script, I guess it's being run on a number of remote hosts, probably in parallel. You might want to have a look at the excellent fabric module. It's a wrapper around paramiko which greatly facilitates running commands both locally and remotely and transferring files up and down.
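If all you need is to reproduce ssh username@other_server_ip < shell_script.sh from Python without paramiko, a minimal sketch with the standard library follows; the host and script path are placeholders taken from the question:

import subprocess

host = "username@other_server_ip"  # placeholder, as in the question

# Feed the local script to the remote shell over ssh, exactly like
# `ssh username@other_server_ip < shell_script.sh` on the command line.
with open("shell_script.sh", "rb") as script:
    return_code = subprocess.call(["ssh", host], stdin=script)

print("remote script exited with", return_code)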