I'm working on an application that will eventually graph the GPG signature connections between a predefined set of email addresses. I need it to programmatically collect the public keys from a key server. I have a working model that will use the --search-keys option to gpg. However, when run with the --batch flag, I get the error "gpg: Sorry, we are in batchmode - can't get input". When I run without the --batch flag, gpg expects input.
I'm hoping there is some flag to gpg that I've missed. Alternatively, a library (preferably python) that will interact with a key server would do.
Use
gpg --batch --keyserver hkp://pool.sks-keyservers.net --search-keys ...
and parse the output to get key IDs.
After that,
gpg --batch --keyserver hkp://pool.sks-keyservers.net --recv-keys key-id key-id ..
should work.
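If you'd rather stay in Python, as the question suggests, the python-gnupg package (pip install python-gnupg) wraps these same gpg calls. A minimal sketch, assuming gpg is installed; the keyserver and address below are placeholders:
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg home directory

# Search the keyserver, then fetch every key ID the search returned
found = gpg.search_keys("someone@example.com", keyserver="hkp://keyserver.ubuntu.com")
key_ids = [k["keyid"] for k in found]
if key_ids:
    result = gpg.recv_keys("hkp://keyserver.ubuntu.com", *key_ids)
    print("imported %d key(s)" % result.count)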
GnuPG does not perform very well anyway when you import very large portions of the web of trust, especially during the import phase.
I'd go for setting up a local keyserver, dumping all the keys in there (less than 10 GB of download size in 2014), and directly querying your own, local keyserver.
Hockeypuck is rather easy to set up and especially easy to query, as it stores the data in a PostgreSQL database.
Use --recv-keys to get the keys without prompting.
In the case of an HKPS server, the following would work:
gpg --keyserver hkps://***HKPSDOMAIN*** --recv-keys \
  $(curl -s "https://***HKPSDOMAIN***/?op=index&options=mr&search=***SEARCHSTRING***" \
    | grep pub | awk -F ":" '{print $2}')
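A rough Python equivalent of that curl pipeline, assuming the server exposes the standard HKP lookup endpoint (/pks/lookup) with machine-readable output; the host and search string are placeholders, as in the curl example:
from urllib.parse import urlencode
from urllib.request import urlopen

def search_key_ids(host, query):
    params = urlencode({"op": "index", "options": "mr", "search": query})
    with urlopen("https://%s/pks/lookup?%s" % (host, params)) as resp:
        lines = resp.read().decode("utf-8", "replace").splitlines()
    # Machine-readable "pub" lines look like:
    # pub:<keyid>:<algo>:<keylen>:<creationdate>:<expirationdate>:<flags>
    return [line.split(":")[1] for line in lines if line.startswith("pub:")]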
We can store the stdout and stderr output of the gpg --search-keys command in a variable by specifying 2>&1, then work on that variable, for example, to get the public key IDs for those with *.amazon.com email addresses:
pubkeyids=$(gpg --batch --keyserver hkp://keyserver.ubuntu.com --search-keys amazon.com 2>&1 | grep -Po '\d+\s*bit\s*\S+\s*key\s*[^,]+' | cut -d' ' -f5)
The regular expression is fully explained on regex101.com. By parsing that output in bash, we can automate searching for keys and adding them to the keyring. As an illustration, I created a GitHub gist to host the code below.
Example address list example.csv:
First Name,Last Name,Email Address
Hi,Bye,hi@bye.com
Yes,No,yes@no.com
Why,Not,why@not.com
Then we can pass the csv path to a bash script which will add all keys belonging to the email addresses in the csv:
$ getPubKeysFromCSV.sh ~/example.csv
Here is an implementation of the above idea, getPubKeysFromCSV.sh:
# CSV of email addresses
csv=$1
# Get headers from CSV
headers=$(head -1 "$csv")
# Find the column number of the email address
emailCol=$(echo "$headers" | tr ',' '\n' | grep -n "Email Address" | cut -d':' -f1)
# Content of the CSV at the emailCol column, skipping the first line
emailAddrs=$(tail -n +2 "$csv" | cut -d',' -f"$emailCol")
gpgListPatrn='(?<entropy>\d+)\s*bit\s*(?<algo>\S+)\s*key\s*(?<pubkeyid>[^,]+)'
# Loop through the addresses and get the public keys
for email in $emailAddrs
do
    # Get the public key ids for the email address by matching the regex gpgListPatrn
    pubkeyids=$(gpg --batch --keyserver hkp://keyserver.ubuntu.com --search-keys "$email" 2>&1 | grep -Po "$gpgListPatrn" | cut -d' ' -f5)
    # For each public key id, get the public key
    for pubkeyid in $pubkeyids
    do
        # Add the public key to the local keyring
        recvr=$(gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys "$pubkeyid" 2>&1)
        # Check exit code to see if the key was added
        if [ $? -eq 0 ]; then
            # If the public key was added, do some extra work with it
            : # [do stuff]
        fi
    done
done
If we wanted, we could make getPubKeysFromCSV.sh more complex by verifying a file signature in the body of the loop, after successfully adding the public key. In addition to the CSV path, we will pass the signature path and file path as arguments two and three respectively:
$ getPubKeysFromCSV.sh ~/example.csv ./example.file.sig ./example.file
Here is the update as a diff:
--- original.sh
+++ updated.sh
@@ -1,4 +1,10 @@
 # CSV of email addresses
 csv=$1
+# signature file
+sig=$2
+
+# file to verify
+file=$3
+
 # Get headers from CSV
 headers=$(head -1 "$csv")
@@ -19,8 +25,15 @@
         recvr=$(gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys "$pubkeyid" 2>&1)
         # Check exit code to see if the key was added
         if [ $? -eq 0 ]; then
-            # If the public key was added, do some extra work with it
-            : # [do stuff]
+            verify=$(gpg --batch --verify "$sig" "$file" 2>&1)
+            # If the signature is verified, announce it was verified;
+            # else, print an error and exit
+            if [[ $verify =~ "gpg: Good signature from" ]]; then
+                echo "$file was verified by $email using $pubkeyid"
+            else
+                printf '%s\n' "$file was unable to be verified" >&2
+                exit 1
+            fi
         fi
     done
 done
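The same verify step can also be done from Python with python-gnupg; a hedged sketch, with paths mirroring the example arguments above:
import gnupg

gpg = gnupg.GPG()
# Verify a detached signature against the file it signs
with open("example.file.sig", "rb") as sig:
    verified = gpg.verify_file(sig, "example.file")
if verified:
    print("example.file was verified using key %s" % verified.key_id)
else:
    raise SystemExit("example.file was unable to be verified")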
I'm trying to edit the following YAML file
db:
  host: 'x.x.x.x.x'
  main:
    password: 'password_main'
  admin:
    password: 'password_admin'
To edit the host part, I got it working with
sed -i "/^\([[:space:]]*host: \).*/s//\1'$DNS_ENDPOINT'/" config.yml
But I can't find a way to update the password for main and admin (which are different values).
I tried to play around with \n and [[:space:]] and got different flavours of:
sed -i "/^\([[:space:]]*main:\n*[[:space:]]*password: \).*/s//\1'$DNS_ENDPOINT'/" config.yml
But never got it to work.
Any help greatly appreciated!
Edit - Requirement: no external binaries/tools. Just good ol' bash.
Since you don't want to install yq, you could use Python, which you most probably already have installed.
Here are the fundamentals:
#!/usr/bin/python
import yaml

with open("config.yml") as f:
    y = yaml.safe_load(f)

y['db']['admin']['password'] = 'new_admin_pass'
print(yaml.dump(y, default_flow_style=False, sort_keys=False))
Output:
db:
  host: x.x.x.x.x
  main:
    password: password_main
  admin:
    password: new_admin_pass
A similar piece of python code as a one-liner that you can put in a bash script would look something like this (and produce the same output):
python -c 'import yaml;f=open("config.yml");y=yaml.safe_load(f);y["db"]["admin"]["password"] = "new_admin_pass"; print(yaml.dump(y, default_flow_style=False, sort_keys=False))'
If you'd like to save the output to a file, you can provide an output stream as the second argument to dump():
#!/usr/bin/python
import yaml

with open("config.yml") as istream:
    ymldoc = yaml.safe_load(istream)

ymldoc['db']['admin']['password'] = 'new_admin_pass'

with open("modified.yml", "w") as ostream:
    yaml.dump(ymldoc, ostream, default_flow_style=False, sort_keys=False)
If you'd like to overwrite the original file, I recommend writing to a temporary file first and only if that succeeds, use os.rename to move that file in place of the original one. That's to minimize the risk of creating a corrupt config.yml in case of problems.
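A minimal sketch of that write-then-rename pattern, assuming Python 3 (os.replace is the atomic variant of os.rename on POSIX):
import os
import tempfile
import yaml

with open("config.yml") as istream:
    ymldoc = yaml.safe_load(istream)

ymldoc['db']['admin']['password'] = 'new_admin_pass'

# Write to a temporary file in the same directory, then atomically
# swap it in only if the dump succeeded.
fd, tmppath = tempfile.mkstemp(dir=".", suffix=".yml")
try:
    with os.fdopen(fd, "w") as ostream:
        yaml.dump(ymldoc, ostream, default_flow_style=False, sort_keys=False)
    os.replace(tmppath, "config.yml")
except BaseException:
    os.remove(tmppath)
    raise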
Note: Using a YAML parser like yq (either of the two tools by that name) will be a way more reliable solution.
However, I've used the following 'technique' to alter a 'pre-defined' line through the help of grep and sed, like so:
/tmp/config.yml
db:
  host: 'x.x.x.x.x'
  main:
    password: 'password_main'
  admin:
    password: 'password_admin'
Get the line number where your 'old-password' is located:
grep -n 'password_admin' /tmp/config.yml | cut -d ':' -f1
6
Then, use sed to override that line with your new password:
sed -i "6s/.*/    password: 'new_admin_pass'/" /tmp/config.yml
The new file now looks like this:
db:
  host: 'x.x.x.x.x'
  main:
    password: 'password_main'
  admin:
    password: 'new_admin_pass'
Note
Keep in mind that any special chars (&, \, /) in the password will cause sed to misbehave!
This could fail if the indent changes, since YAML cares about indentation. Just like I mentioned above, using a YAML parser will be a much more reliable solution!
$ awk -v new="'sumthin'" 'prev=="main:"{sub(/\047.*/,""); $0=$0 new} {prev=$1} 1' file
db:
  host: 'x.x.x.x.x'
  main:
    password: 'sumthin'
  admin:
    password: 'password_admin'
or if your new text can contain escape sequences that you don't want expanded (e.g. \t or \n), as seems likely when setting a password, then:
new="'sumthin'" awk 'prev=="main:"{sub(/\047.*/,""); $0=$0 ENVIRON["new"]} {prev=$1} 1' file
See How do I use shell variables in an awk script? for why/how I use ENVIRON[] to access a shell variable rather than setting an awk variable in that second script.
This is by no means as reliable as yq, but you can use this awk if your YAML file's structure matches what is shown in the question:
pw='new_&pass'
awk -v pw="${pw//&/\\\\&}" '/^[[:blank:]]*main:/ {
    print
    if (getline > 0 && $1 == "password:")
        sub(/\047[^\047]*\047/, "\047" pw "\047")
} 1' file
db:
  host: 'x.x.x.x.x'
  main:
    password: 'new_&pass'
  admin:
    password: 'password_admin'
As mentioned in other answers too, yq would be the proper way, but in case someone doesn't have it, one could try the following.
awk -v s1="'" -v new_pass="new_value_here" '
/main:/{
    main_found=1
    print
    next
}
main_found && /password/{
    next
}
/admin:/ && main_found{
    print "    password: " s1 new_pass s1 ORS $0
    main_found=""
    next
}
1
' Input_file
NOTE: In case you want to save the output into Input_file itself, append > temp && mv temp Input_file to the above solution.
Hi guys, I would like to ask for some help with my bash script.
I am running two Python scripts inside my bash script. It works when I run it manually, but with cron only the commands in the .sh file run, not the .py scripts.
Please take note that I have already installed the necessary utils and packages for python3.
This is the script:
#!/usr/bin/env bash
# list.tmp path directory
fileLoc="/home/ec2-user/PushNotification/Incoming34days/list34days.tmp"
# URL to POST request
refLink='http link'
# Title of Push Notification
title='34th day: Grace Period is about to end'
# curl type
type='Notification'
# curl action_type
actionType='NotificationActivity'
# Get the current date and time
now=$(date '+%b %d %Y %H:%M:%S')
# Message to the user
body="Subscribe to the Philippine Mobile Number plan now to continue receiving calls and texts and sending text messages to the Philippines."
# Logs location
logsLoc="/home/ec2-user/PushNotification/Incoming34days/logs.tmp"
# current number
currentNumLoc="/home/ec2-user/PushNotification/Incoming34days/currentNum.tmp"
echo "[$now] Sending notifications to mobile numbers advising today is the last day of grace period..." > $logsLoc
# Python file to SELECT all id who has 34 days counter
python3 select34days.py
# psql -d $database -t -c "SELECT id FROM svn WHERE current_date - expiry_date::DATE = 4"
# psql must be set up using .pgpass for PostgreSQL authentication; please indicate the
# database name and the query list directory. Deleting the last line from list.txt.
# This reads the text file list.txt line by line
while IFS='' read -r list;
# for list in `cat list.txt`;
do
# curl POST request
response=$(curl --location --request POST $refLink \
--header 'Authorization: Basic YXBwdm5vdXNlcjphcHB2bm9wYXNz' \
--header 'Content-Type: application/json' \
--data-raw '{
"title":"'"$title"'",
"body":"'"$body"'",
"min" :[{"mobileNumber" : "'"$list"'"}],
"type" : "'"$type"'",
"action_type" : "'"$actionType"'"}')
# Echo mobile number
echo "[$now] Mobile Number: $list" >> $logsLoc
# Echo response from curl
echo "Response: '$response'"
echo "[$now] Result: '$response'" >> $logsLoc
# Update the current number of the list
echo $list > $currentNumLoc
echo "[$now] Updating $list status into EXPIRED" >> $logsLoc
# Updating status into EXPIRED
python3 updateQuery34.py
done < "$fileLoc"
# end of script
The select34days.py and updateQuery34.py scripts are not running.
I have a logs.tmp file to check this situation, and it only shows the commands from inside my .sh file.
Inside my crontab I have:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/bin:/usr/bin
MAILTO=root
Your PATH looks wrong:
PATH=/sbin:/bin:/usr/bin:/usr/bin
This includes /usr/bin twice which isn't necessary, but hints that something else should have been there.
Depending on how you've installed it, python might be in /usr/bin/ or /usr/local/bin or even somewhere in /opt.
At the commandline you can find python's directory using:
dirname $(which python3)
This directory needs to be added to your path in your crontab.
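For example, if dirname $(which python3) prints /usr/local/bin (an assumption; yours may differ), the crontab header would become:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/bin:/usr/local/bin
MAILTO=root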
Just declare the full path to the script (e.g. /etc/test.sh) every time you use bash scripts in a cron job, since cron doesn't know where the specific script lives on the server.
Where I work, we have servers that are pre-configured for the use of the bash mail command to send attachments and messages. I'm working on a notification script that will monitor server activity and generate an email if it detects an issue. I'm using the subprocess.call function in order to send a bash command.
I am successful in sending messages, but in the body portion of the email each notification line is strung together rather than appearing on its own line. I have tried appending each line within the string with "\n" and with "\r\n". I have to use double backslashes, as Python will otherwise interpret them as literal newlines when it sends the echo command. I also ran "shopt -s xpg_echo" before using echo piped to mail with the double backslashes, but this had no effect. I also tried using echo without the "-e" option, and this had no effect either.
The trick is that I need Python to send the newline to bash and then somehow get bash to interpret it as a newline using echo piped through to mail. Here is a sample of the code:
import os
import shutil
import sys
import time
import re
import subprocess
import smtplib

serviceports = {}
serviceports["SCP Test"] = ["22"]
serviceports["Webtier"] = ["9282"]

bashCommand = "netstat -an | grep LISTEN | grep -v LISTENING"
netstat_results = subprocess.check_output(bashCommand, shell=True)
netstat_results = str(netstat_results)

# Iterate through all ports for each service and assign down ports to variable
report = []
for servicename, ports in serviceports.items():
    for ind_port in ports:
        ind_port_chk = ":" + ind_port
        count = sum(1 for _ in re.finditer(r'\b%s\b' % re.escape(ind_port_chk), netstat_results))
        if count == 0:
            warning = servicename + " on port " + ind_port + " is currently down!"
            report.append(warning)

message = ""
for warning in report:
    message = message + warning + "\\n"

fromaddr = serveridsimp + "@xxxxx.com"
toaddr = 'email@xxxxx.com'
subject = "Testing..."
body = message
cmd = 'echo -e ' + body + ' | mail -s ' + subject + ' -r ' + fromaddr + ' ' + toaddr
send = subprocess.call(cmd, shell=True)
The code runs a netstat command and assigns the output to a string. It then iterates through the specified ports and searches for each port in the netstat string (netstat_results). It builds a list (report) of warnings for every port not found, and appends each one, plus \n, to a string called "message". It then sends an echo piped to the mail command to generate an email containing all the ports not found. What happens currently is that I get an email saying something like this:
SCP Test on port 22 is currently down!nOHS Webtier on port 9282 is currently down!n etc...
I want it to put each message on a new line like so:
SCP Test on port 22 is currently down!
Webtier on port 9282 is currently down!
I am trying to avoid writing the output to a file and then using bash to read it back into the mail command. Is this possible without having to create a file?
I was finally able to fix the issue by changing the appended character and the command sent to bash to the following:
message = message + warning + "\n"
cmd = 'echo -e ' + '"' + body + '"' + '|awk \'{ print $0" " }\'' + ' | mail -s ' + '"' + subject + '"' + ' -r ' + fromaddr + ' ' + toaddr
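An alternative that sidesteps echo and shell quoting altogether is to hand the body to mail on stdin. A hedged sketch using subprocess.run (Python 3; variable names mirror the script above):
import subprocess

# mail reads the message body from stdin, so the newlines in the
# string arrive intact and no shell escaping is involved.
subprocess.run(
    ["mail", "-s", subject, "-r", fromaddr, toaddr],
    input=message,
    text=True,
    check=True,
)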
Going through the answers at Super User.
I'm trying to modify this to listen for multiple strings and echo custom messages such as 'Your server started successfully', etc.
I'm also trying to tack it onto another command, i.e. pip:
wait_str() {
    local file="$1"; shift
    local search_term="Successfully installed"; shift
    local search_term2='Exception'
    local wait_time="${1:-5m}"; shift # 5 minutes as default timeout

    (timeout $wait_time tail -F -n0 "$file" &) | grep -q "$search_term" && echo 'Custom success message' && return 0 || grep -q "$search_term2" && echo 'Custom success message' && return 0

    echo "Timeout of $wait_time reached. Unable to find '$search_term' or '$search_term2' in '$file'"
    return 1
}
The usage I have in mind is:
pip install -r requirements.txt > /var/log/pip/dump.log && wait_str /var/log/pip/dump.log
To clarify, I'd like to get wait_str to stop tailing when pip exits, whether successfully or not.
The following is a general answer; tail could be replaced by any command that results in a stream of lines.
If different strings need different actions, then use the following:
tail -f /var/log/pip/dump.log | awk '/condition-1/ {action for condition-1} /condition-2/ {action for condition-2} ...'
If multiple conditions need the same action, separate them using the OR operator:
tail -f /var/log/pip/dump.log | awk '/condition-1/ || /condition-2/ || /condition-n/ {take this action}'
Based on comments: a single awk can do this.
tail -f /path/to/file |awk '/Exception/{ print "Worked"} /compiler/{ print "worked"}'
or
tail -f /path/to/file | awk '/Exception/||/compiler/{ print "worked"}'
OR Exit if match is found
tail -f logfile |awk '/Exception/||/compiler/{ print "worked";exit}'
Google Cloud Storage represents the MD5 hash of objects using base64-encoded values.
How can I convert those values to the hexadecimal versions reported by md5sum?
You can use binascii.hexlify (or binascii.b2a_hex):
import base64
import binascii
print(binascii.hexlify(base64.urlsafe_b64decode(md5_base64)).decode())
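For example, using the md5 hash that appears in the gsutil listing later on this page:
md5_base64 = "2xXYMp7aacmOZ+M57KHEbA=="
print(binascii.hexlify(base64.urlsafe_b64decode(md5_base64)).decode())
# db15d8329eda69c98e67e339eca1c46c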
Here's a simple Python recipe using numpy:
import base64
import numpy as np

b = np.frombuffer(base64.urlsafe_b64decode(md5_base64), dtype=np.uint8)
print("".join(["%02x" % v for v in b]))
Here md5_base64 is the value reported by Google Cloud Storage. You can get that value by running the command
gsutil stat gs://PATH/TO/FILE
The output should include Hash (md5) (assuming it's a non-composite object).
md5sum-compatible hash listings from GCS
For anyone else looking for a native command-line/coreutils way of checking your cloud store's integrity: for example, you're just visiting Storage through the browser, or the files you're trying to verify are stored elsewhere, and you just want to generate an .md5 hash listing to run md5sum -c against later.
For any GCS bucket directory listing with gsutil ls -L, piping the output through this block (either through an alias or a bash script)
awk 'BEGIN { \
decodehash = "base64 -d | xxd -p | tr -d \"\\n\""; \
truncname = "sed \"s/gs:\/\/[a-z0-9_.\-]*\///\" | sed \"s/:$//\"" } \
/Hash \(md5\)/ { print $3 | decodehash; close(decodehash); \
printf " %s\n",fname | truncname; close(truncname) } \
/^gs:\/\// { fname = $0 }'
should generate an output that is compatible with verification using md5sum -c locally. Basically it's an awk block that looks for gs:// lines and Hash (md5): lines, swap their order (so that the hash gets printed before the filename), and use base64 -d and xxd to convert the hash to hex string.
Example
analogist@project-1111$ gsutil ls -L gs://bucket-name/directory
gs://bucket-name/directory/file1.tar.gz:
        Creation time:    Wed, 10 Aug 2016 23:17:06 GMT
        [...]
        Hash (crc32c):    a4X4cQ==
        Hash (md5):       2xXYMp7aacmOZ+M57KHEbA==
        [..]
gs://bucket-name/directory/file2.tar.gz:
        Creation time:    Wed, 10 Aug 2016 23:26:16 GMT
        [...]
        Hash (crc32c):    JVo9EA==
        Hash (md5):       XdrBIyCaADR9arH67ucdEA==
        [..]
I can save the above awk code block to the file md5convert, and:
analogist@project-1111$ gsutil ls -L gs://bucket-name/directory | bash md5convert
db15d8329eda69c98e67e339eca1c46c directory/file1.tar.gz
5ddac123209a00347d6ab1faeee71d10 directory/file2.tar.gz
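From there, assuming local copies of the files live under ./directory, a hypothetical verification session would look like:
analogist@project-1111$ gsutil ls -L gs://bucket-name/directory | bash md5convert > gcs.md5
analogist@project-1111$ md5sum -c gcs.md5
directory/file1.tar.gz: OK
directory/file2.tar.gz: OK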