Normally I work with Python, but I have a project in Perl. So: how do I capture the output of an snmpwalk in a string? I would like to search the string to see if it contains a smaller string.
Here is what I have so far:
foreach (@list) {
    chomp($_);
    system("snmpwalk -v 2c -c community-string $_ oid-hidden");
    if (index($string, $substring) != -1) {
        print "'$string' contains '$substring'\n";
    }
}
The system function doesn't return the command's output. Use qx// or backticks instead, so your snmpwalk call will look like this:
my $output = qx/snmpwalk -v 2c -c community-string $_ oid-hidden/;
Then do whatever you need with the output variable. For more info, I'd refer you to http://perldoc.perl.org/perlop.html#Quote-Like-Operators
However, in more general terms I'd follow the advice in @ThisSuitIsBlackNot's comment...
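For comparison, since the question mentions coming from Python: the same capture-then-search pattern looks like this in Python. This is only a sketch, using echo as a hypothetical stand-in for the actual snmpwalk command:

```python
import subprocess

# 'echo' is a stand-in here; in the real script this would be the
# snmpwalk command with its community string and OID.
output = subprocess.run(
    ['echo', 'SNMPv2-MIB::sysName.0 = STRING: router1'],
    capture_output=True, text=True,
).stdout

substring = 'router1'
if substring in output:
    print("output contains '%s'" % substring)
```

Unlike Perl's system, subprocess.run with capture_output=True hands the command's stdout back as a string you can search directly.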
Related
I have a python list as a string with the following structure:
var='["127.0.0.1:14550","127.0.0.1:14551"]'
I would like to turn the string into a bash array to be able to loop through it with bash:
for ip in "${var[@]}"; do
something
done
Use Perl to parse the Python output, like so (note single quotes around the string, which contains double quotes inside):
array=( $( echo '["127.0.0.1:14550","127.0.0.1:14551"]' | perl -F'[^\d.:]' -lane 'print for grep /./, @F;' ) )
echo ${array[*]}
Output:
127.0.0.1:14550 127.0.0.1:14551
Alternatively, use jq as in the answer by 0stone0, or pipe its output through xargs, which removes quotes, like so:
array=( $( echo '["127.0.0.1:14550","127.0.0.1:14551"]' | jq -c '.[]' | xargs ) )
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in the -F option.
-F'[^\d.:]' : Split into @F on any chars other than digit, period, or colon, rather than on whitespace.
print for grep /./, @F; : take the line split into the array of strings @F, select only the non-empty strings with grep, and print them one per line.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
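Since the string originally came from Python, the same split-and-filter logic can also be sketched there with re.split, using the identical character class as the -F option:

```python
import re

s = '["127.0.0.1:14550","127.0.0.1:14551"]'
# split on any char that is not a digit, period, or colon,
# then drop the empty fields, mirroring: -F'[^\d.:]' ... grep /./, @F
parts = [t for t in re.split(r'[^\d.:]', s) if t]
print(' '.join(parts))
```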
One option is to treat the string as json, and use jq to parse it:
jq -rc '.[]' <<< '["127.0.0.1:14550","127.0.0.1:14551"]' | while read i; do
echo $i
done
127.0.0.1:14550
127.0.0.1:14551
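Both answers treat the string as JSON, and that idea can be shown directly in Python too, which is where the list came from. A minimal sketch:

```python
import json

# the string is valid JSON, so json.loads gives back a real list
ips = json.loads('["127.0.0.1:14550","127.0.0.1:14551"]')
for ip in ips:
    print(ip)
```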
I've got a log file similar to below:
/* BUG: axiom too complex: SubClassOf(ObjectOneOf([NamedIndividual(http://www.sem.org/sina/onto/2015/7/TSB-GCL#t_Xi_xi)]),DataHasValue(DataProperty(http://www.code.org/onto/ont.owl#XoX_type),^^(periodic,http://www.mdos.org/1956/21/2-rdf-syntax-ns#PlainLiteral))) */
/* BUG: axiom too complex: SubClassOf(ObjectOneOf([NamedIndividual(http://www.sem.org/sina/onto/2015/7/TSB-GCL#t_Ziz)]),DataHasValue(DataProperty(http://www.co-ode.org/ontologies/ont.owl#YoY_type),^^(latency,http://www.w3.org/1956/01/11-rdf-syntax-ns#PlainLiteral))) */
....
I want to extract the fields t_Xi_xi, t_Ziz, XoX_type and YoY_type, and also the values after ^^(, which in this case are periodic and latency.
Note: There are different alphabetic values for each X and Y in the file (e.g. X="sina" Y="Boom" so --> t_Xi_xi ~ t_Sina_sina) so I guess using the regex would be a better choice.
So the final result must be something like below:
t_Xi_xi XoX_type periodic
t_Ziz YoY_type latency
I've tried the regex below to extract them, hoping to then replace everything else in the file with " " with the help of sed in the shell, but I failed.
([a-zA-Z]_[a-zA-Z]*_[a-zA-Z]*)|(\#[a-zA-Z]*_[a-zA-Z]*)|(\^\([a-zA-Z]*)+
Any kind of help is appreciated on how to do this in Python (or even shell itself).
$ awk -F'#|\\^\\^\\(' '{for (i=2; i<NF; i++) printf "%s%s", gensub(/[^[:alnum:]_].*/,"",1,$i), (i<(NF-1) ? OFS : ORS) }' file
t_Xi_xi XoX_type periodic
t_Ziz YoY_type latency
The above uses GNU awk for gensub(), with other awks you'd use sub() and a separate printf statement.
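Since the question also asks how to do this in Python, here is a regex sketch. It assumes the tokens follow the t_* and *_type naming shown in the sample lines; the URLs are just the anonymized ones from the question:

```python
import re

line = ('/* BUG: axiom too complex: SubClassOf(ObjectOneOf([NamedIndividual('
        'http://www.sem.org/sina/onto/2015/7/TSB-GCL#t_Xi_xi)]),DataHasValue('
        'DataProperty(http://www.code.org/onto/ont.owl#XoX_type),'
        '^^(periodic,http://www.mdos.org/1956/21/2-rdf-syntax-ns#PlainLiteral))) */')

# one alternative per kind of token: the individual after '#t_', the
# property ending in '_type', and the value right after '^^('
pattern = re.compile(r'#(t_\w+)\)|#(\w+_type)\)|\^\^\((\w+),')
tokens = [g for groups in pattern.findall(line) for g in groups if g]
print(' '.join(tokens))
```

Run per line of the log file, this yields the same three columns as the awk answer (the trailing #PlainLiteral field matches none of the alternatives, so it is skipped automatically).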
Hello all, I was wondering what the option is in the Python-based version of youtube-dl for this terminal argument: --restrict-filenames? What needs to be added to the options tuple in Python?
Thanks in advance, Ondeckshooting
Per the documentation, that option does not require an argument. So a command
such as this will suffice:
youtube-dl --restrict-filenames 73VCKpU9ZnA
Here is the option detail:
Restrict filenames to only ASCII characters, and avoid "&" and spaces in
filenames
As for what ASCII covers, this script will reveal it:
#!/usr/bin/awk -f
BEGIN {
    while (z++ < 0x7e) {
        $0 = sprintf("%c", z)
        if (/[[:graph:]]/) printf "%s", $0
    }
}
Result
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
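The same table can be produced in Python, walking the identical 0x21 through 0x7e range the awk loop covers:

```python
# printable (graph) ASCII runs from 0x21 ('!') through 0x7e ('~')
table = ''.join(chr(c) for c in range(0x21, 0x7f))
print(table)
```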
I have confirmed my Linux command works in the terminal, however when I try to call it from Python it breaks.
The command is quite long and has lots of single quotes, so I wrapped it in triple double-quotes (""") so Python would interpret it as a raw string (or so I thought). However, when I run it I am getting
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
but I have double- and triple-checked my single and double quotes and I have no idea where to go from here.
See the test script below
import os
os.system("""awk -F ' *[[:alnum:]_]*: *' 'BEGIN {h="insert_job;box_name;command;owner;permission;condition;description;std_out_file;std_err_file;alarm_if_fail"; print h; n=split(h,F,/;/)} function pr() {if(F[1] in A) {for(i=1;i<=n;i++)printf "%s%s",A[F[i]],(i<n)?";":RS}} /insert_job/ {pr(); delete A} {for(i in F){if($0~"^"F[i])A[F[i]]=$2}} END {pr()}' ./output/JILS/B7443_dev_jil_20140306104313.csv > /trvapps/autosys/admin/EPS/output/JILS/testout.txt""")
FYI I am using Python 2.4.3, hence why I am using os instead of subprocess.
For your own sanity, if you have to put this in Python, try using pipes.quote (or something similar if that doesn't exist in 2.4), ' '.join(words) and '\n'.join(lines) to build up the command, rather than using a single complex string. A better solution would be to call a script, like @kojiro suggested.
It looks like you are doing some simple CSV munging. How about checking SO for tips on doing that in Python?
In any case, 400+ characters of awk on a single line is enough to make anyone squirm, and doing it in Python, which already has excellent string-handling features, just passes the pain to the next developer, who will be very angry.
Cramming the awk script into one huge line is awful, and makes it nearly impossible to read and maintain. Don't do that -- if you really must use awk (a dubious claim), write it out on multiple lines, with proper indentation, like you would any other script.
To fix the bug with sh -c interpreting things wrong, use the subprocess module (passing an argument array and not setting shell=True) instead of os.system().
import subprocess

awk_script = r'''
BEGIN {
    h="insert_job;box_name;command;owner;permission;condition;description;std_out_file;std_err_file;alarm_if_fail";
    print h;
    n=split(h,F,/;/)
}
function pr() {
    if(F[1] in A) {
        for(i=1;i<=n;i++)
            printf "%s%s", A[F[i]], (i<n) ? ";" : RS
    }
}
/insert_job/ {
    pr();
    delete A;
}
{
    for(i in F) {
        if($0~"^"F[i])
            A[F[i]]=$2
    }
}
END {pr()}
'''

# the field separator is passed as the argument to -F, and the script
# itself as a separate argument, so no shell quoting is involved at all
exit_status = subprocess.call(
    ['awk', '-F', ' *[[:alnum:]_]*: *', awk_script],
    stdin=open('./output/JILS/B7443_dev_jil_20140306104313.csv', 'r'),
    stdout=open('/trvapps/autosys/admin/EPS/output/JILS/testout.txt', 'w'))
if exit_status != 0:
    raise RuntimeError('awk failed')
I have a series of XML files produced from a data playback utility. The utility produces correctly formed XML tags. Unfortunately, the utility isn't perfect. Some of the Java objects it attempts to serialize fail and they are simply inserted (as binary blobs) in between these other, valid XML tags.
For example...
<track>
<cto>Valid_XML_HERE</cto>#Binary_Blob_of_Junk#<cto>(...)</cto>
</track>
Environment is RHEL-5, which means Python 2.4, Perl, or SED/AWK solutions are usable.
Any suggestions on how to remove the junk?
I built off of Birei's suggestion to inspect tree elements, but came up with a SED-only solution. As shown in the OP, the <cto> tags happen to be on one continuous line. The solution, then, was to split the lines such that each <cto> tag was on a new line -- thus, also isolating the junk binary data on new lines -- and then simply select lines starting with a <cto> tag.
The <tracks> and </tracks> tags can simply be added to the new file via cat.
Here are the SED commands that I've tested and confirm to work...
Step 1. Isolate the <cto> tags to be on new lines.
sed -i "s/<cto/\n<cto/g;s/<\/cto>/<\/cto>\n/g" ${FILE}
Step 2. Select only the lines starting with a <cto> tag.
sed -n -i "/<cto/p" ${FILE}
Step 3. Format the new XML document.
xmllint --format "${FILE}" > foo.xml
Thanks for all of your respective inputs.
Another way to remove the text of the track tags is to use the XML::Twig parser:
#!/usr/bin/env perl
use strict;
use warnings;
use XML::Twig;
my $twig = XML::Twig->new(
    twig_handlers => {
        track => sub {
            for my $t ( $_->children() ) {
                if ( $t->is_text ) {
                    $t->set_text( '' );
                }
            }
        }
    },
    pretty_print => 'indented',
)->parsefile( shift )->print;
Run it with your file as the first (and only) argument:
perl script.pl xmlfile
Here's a quick Perl solution for you.
#!/usr/bin/perl -Tw
use strict;
use warnings;
use English qw( -no_match_vars $INPUT_RECORD_SEPARATOR );
my $text = do { local $INPUT_RECORD_SEPARATOR = undef; <>; };
my @ctos = $text =~ m{<cto>( .+? )</cto>}xmsg;
if ( @ctos ) {
    printf '<track><cto>%s</cto></track>', join '</cto><cto>', @ctos;
}
print "\n";
You can pipe your track text through it like so:
$: cat track.txt | ./clean_track.pl
<track><cto>Valid_XML_HERE</cto><cto>(...)</cto></track>
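The same strip-and-rejoin idea translates almost line for line into Python's re module, which may also suit the RHEL-5 constraint since only the standard library is used. A sketch against the sample <track> block from the question:

```python
import re

track = ('<track>\n'
         '<cto>Valid_XML_HERE</cto>#Binary_Blob_of_Junk#<cto>(...)</cto>\n'
         '</track>')

# grab every well-formed <cto> element, ignoring whatever sits between them
ctos = re.findall(r'<cto>(.+?)</cto>', track, re.S)
cleaned = '<track>' + ''.join('<cto>%s</cto>' % c for c in ctos) + '</track>'
print(cleaned)
```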