I have the following piece of code to get the name and launch time for each instance:
for instance in instances:
    instance_name = instance.name
    launch_time = instance.launch_time
As a result I want to get a variable with a list like:
instance_name: launch_time
instance_name: launch_time
...
i.e.
Name: server1; Launch time: 2 days, 7:33:46.319073
Name: server2: Launch time: 4 days, 6:33:46.319073
...
and then use it for an email notification in another multi-line string.
I've done it with:
result = []
result.append({'name':instance_name, 'launch_uptime':uptime })
but I don't like those quotes; I want nicely formatted text.
You can do it like this:
multi_line = ''
for instance in instances:
    instance_name = instance.name
    launch_time = instance.launch_time
    # launch_time is typically a datetime/timedelta, so convert it to str before concatenating
    multi_line += 'Name: ' + instance_name + '; Launch time: ' + str(launch_time) + '\n'
print(multi_line)
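A slightly tidier variant builds the lines in a list and joins them at the end; this is just a sketch, assuming instance.name is a string and instance.launch_time is a datetime/timedelta as in your example output:

lines = []
for instance in instances:
    # collect one formatted line per instance
    lines.append('Name: {}; Launch time: {}'.format(instance.name, instance.launch_time))
multi_line = '\n'.join(lines)
print(multi_line)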
I am using the following line of code for executing and printing data from my SQL database. For some reason it is the only command that works for me.
json_string = json.dumps(location_query_1)
My question is that when I print json_string, it shows the data in the following format:
Actions.py code:
class FindByLocation(Action):
    def name(self) -> Text:
        return "action_find_by_location"

    def run(self, dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        global flag
        location = tracker.get_slot("location")
        price = tracker.get_slot("price")
        cuisine = tracker.get_slot("cuisine")
        print("In find by Location")
        print(location)
        location_query = "SELECT Name FROM Restaurant WHERE Location = '%s' LIMIT 5" % location
        location_count_query = "SELECT COUNT(Name) FROM Restaurant WHERE Location = '%s'" % location
        location_query_1 = getData(location_query)
        location_count_query_1 = getData(location_count_query)
        if not location_query_1:
            flag = 1
            sublocation_view_query = "CREATE VIEW SublocationView AS SELECT RestaurantID, Name, PhoneNumber, Rating, PriceRange, Location, Sublocation FROM Restaurant WHERE Sublocation = '%s'" % (location)
            sublocation_view = getData(sublocation_view_query)
            dispatcher.utter_message(text="یہ جگہ کس ایریا میں ہے")  # "Which area is this place in?"
        else:
            flag = 0
            if cuisine is None and price is None:
                json_string = json.dumps(location_query_1)
                print(isinstance(json_string, str))
                print("Check here")
                list_a = json_string.split(',')
                remove = ["'", '"', '[', ']']
                for i in remove:
                    list_a = [s.replace(i, '') for s in list_a]
                dispatcher.utter_message(text="Restaurants in Location only: ")
                dispatcher.utter_message(list_a)
What should I do so that the data is shown in a vertical list format (one entry per line) and without the brackets and quotation marks? Thank you.
First of all, have you tried reading your data into a pandas object? I have written some programs with a sqlite database and this worked for me:
df = pd.read_sql_query("SELECT * FROM {}".format(self.tablename), conn)
But now to the string formatting part:
# this code should do the work for you
# first of all we have our string a like yours
a = "[['hallo'],['welt'],['kannst'],['du'],['mich'],['hoeren?']]"
# now we split the string into a list on every ,
list_a = a.split(',')
# this is our list with chars we want to remove
remove = ["'", '"', '[', ']']
# now we replace all elements step by step with nothing
for i in remove:
    list_a = [s.replace(i, '') for s in list_a]
print(list_a)
for z in list_a:
    print(z)
The output is then:
['hallo', 'welt', 'kannst', 'du', 'mich', 'hoeren?']
hallo
welt
kannst
du
mich
hoeren?
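To get the same vertical output inside your Rasa action, you can join the cleaned entries with newlines before handing them to the dispatcher. This is just a sketch, assuming list_a already holds the cleaned restaurant names as in your run() method:

# send the cleaned names as a single message, one name per line
dispatcher.utter_message(text="\n".join(list_a))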
I hope I could help.
I have 2 pickle files which hold the IP address along with the ports, aws_tags and region information associated with them. This is basically a port scanner which has a method that prints when a new IP address is found. This is done by subtracting the NEW_pickel_scan from the OLD_pickel_scan as follows:
self.prev_hosts = set()
self.curr_hosts = set()

def new_hosts(self):
    result_new_hosts = self.curr_hosts - self.prev_hosts
This works fine and prints the new IPs added in the pickle report.
Now I need to add the associated tag and region of that IP address too. I already have the needed data in a mapping:
mapping = {i[0]:[i[1],i[2]] for i in data}
i[0] is the IP, i[1] is the tag and i[2] is the region,
so I am trying to print the tag using this mapping.
Just as an example, I have another method which prints when an illegal port is found:
def dump_raw(self, mapping):
    nmap_report = self.report
    for host in nmap_report.hosts:
        #print
        if len(host.hostnames):
            tmp_host = host.hostnames.pop()
        else:
            tmp_host = host.address
        print("Nmap scan report for {0} ({1})".format(tmp_host, host.address))
        print("Host is {0}.".format(host.status))
        #val = config.get('ports', 'scan_range')
        #val_known = config.get('ports','known')
        #safe_port = range(*map(int, val.split(',')))
        #known_ports = map(int, val_known.split(','))
        print(" PORT STATE SERVICE")
        for serv in host.services:
            if serv.state == "open":
                ## print ('Illegal Port open :'+str(serv.port) +'/'+str(serv.protocol)+' '+str(serv.service)+', on host=> '+str(host))
                print('Illegal Port open :' + str(serv.port) + '/' + str(serv.protocol) + ' ' + str(serv.service) + ', on host=> ' + str(host) + ' Tag =' + (mapping[host.address.strip()][0]) + ' Region =' + str(mapping[host.address.strip()][1]))
This is how I used mapping. Can someone help me with new_hosts()?
I tried:
def new_hosts(self, mapping):
    """Return a list of new hosts added in latest scan"""
    result_new_hosts = self.curr_hosts - self.prev_hosts
    print mapping[result_new_hosts]
It says: TypeError: unhashable type: 'set'
Also, if I do something like:
def new_hosts(self, mapping):
    """Return a list of new hosts added in latest scan"""
    result_new_hosts = self.curr_hosts - self.prev_hosts
    print mapping[result_new_hosts]
    nmap_report = self.report
    for host in nmap_report.hosts:
        for serv in host.services:
            print result_new_hosts, mapping[result_new_hosts.address.strip()[0]], mapping[result_new_hosts.address.strip()[1]]
    return (result_new_hosts, mapping[result_new_hosts.address.strip()[0]], mapping[result_new_hosts.address.strip()[1]])
This prints:
AttributeError: 'set' object has no attribute 'address'
result_new_hosts = self.curr_hosts - self.prev_hosts
print mapping[result_new_hosts]
result_new_hosts is a set, and as the error says, sets are unhashable and therefore can't be used as dictionary keys or looked up in a dictionary.
Instead, you should look up each individual element of the set:
result_new_hosts = self.curr_hosts - self.prev_hosts
for result in result_new_hosts:
    print mapping[result]
UPDATE: In case you want to return a list of tuples that contain (ip, (tag, region)):
def new_hosts(self, mapping):
    result_new_hosts = self.curr_hosts - self.prev_hosts
    return [(result, mapping[result]) for result in result_new_hosts]
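The caller can then unpack the tag and region for each new host. A minimal usage sketch, assuming scanner is your scanner instance and mapping maps each IP to [tag, region] as described above:

for ip, (tag, region) in scanner.new_hosts(mapping):
    # one line per newly discovered host, with its tag and region
    print 'New host: %s Tag = %s Region = %s' % (ip, tag, region)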
So I've created a class...
class Dept_member:
    quarterly_budget = 0
    outside_services = 0
    regular_count = 0
    contractor_count = 0
    gds_function = ''
    dept_name = ''

    def __init__(self, quarterly_budget, outside_services, dept_name):
        self.quarterly_budget = quarterly_budget
        self.outside_services = outside_services
        self.dept_name = dept_name

    def regular_cost(self):
        print "%s" % str((self.quarterly_budget - self.outside_services) / self.regular_count)

    def contractor_cost(self):
        print "%s" % str(self.outside_services / self.contractor_count)
Now I want to use variables I collect while iterating over an Excel file to create objects for each row using the class detailed above.
for row in range(6, d_sh.get_highest_row()):
    if f_sh.cell(row=row, column=2).value:
        deptno = f_sh.cell(row=row, column=2).value
        q_budget = f_sh.cell(row=row, column=17).value  # Q3 Actual
        os_budget = f_sh.cell(row=row, column=14).value
        deptnode = f_sh.cell(row=row, column=1).value
        chop = deptnode.split(" ")
        deptname = " ".join(chop[1:])
        Dept = "gds_" + str(deptno)  ### This is what I want my new object to be called!
        Dept = Dept_member(q_budget, os_budget, deptname)
Below is some output from an IDLE interactive session after this runs.
>>>
>>> deptno
u'180024446'
>>> q_budget
59412.00978792818
>>> os_budget
9973.898858075034
>>> deptnode
u'M180024446 GDS Common HW FEP China'
>>> deptname
u'GDS Common HW FEP China'
>>> Dept
<__main__.Dept_member instance at 0x126c32050>
>>> Dept.quarterly_budget
59412.00978792818
What I really wanted was an object named gds_180024446, but instead the loop just rebound the Dept variable.
Is it possible to create a bunch of objects using variables in a loop?
You should probably use Python dictionaries (tutorial page describing dictionaries) instead of creating a bunch of variables with the eval function:
Dept["gds_"+str(deptno)] = Dept_member(q_budget, os_budget, deptname)
After that, you can fetch your object from the dictionary with:
Dept['gds_180024446']
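Keeping the objects in a dictionary also lets you loop over every department later, which dynamically created variable names would not allow. A small sketch, assuming the loop above has filled Dept:

for key, dept in Dept.items():
    # key is e.g. 'gds_180024446', dept is the corresponding Dept_member instance
    print key, dept.dept_name, dept.quarterly_budget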
I think you need to use exec (eval only evaluates expressions, so it can't perform an assignment), like:
exec("gds_" + str(deptno) + " = Dept_member(q_budget, os_budget, deptname)")
I am trying to build a regex with Python which must match these:
STRING
STRING STRING
STRING (STRING) STRING (STRING)
STRING (STRING) STRING (STRING) STRING (STRING) STRING
I tried to do the job using the optional metacharacter ?, but for the second pattern, STRING STRING, it doesn't work: I get just the first character after the first string.
\w+\s+\w+?
gives
STRING S
but should give
STRING STRING
and match on
STRING
STRING STRING
Here is the full code:
import csv
import re
import sys

fname = sys.argv[1]

r = r'(\w+) access = (\w+)\s+Vol ID = (\w+)\s+Snap ID = (\w+)\s+Inode = (\w+)\s+IP = ((\d|\.)+)\s+UID = (\w+)\s+Full Path = (\S+)\s+Handle ID: (\S+)\s+Operation ID: (\S+)\s+Process ID: (\d+)\s+Image File Name: (\w+\s+\w+\s+\w+)\s+Primary User Name: (\S+)\s+Primary Domain: (\S+)\s+Primary Logon ID: (.....\s+......)\s+Client User Name: (\S+)\s+Client Domain: (\S+)\s+Client Logon ID: (\S+)'
regex = re.compile(r)

out = csv.writer(sys.stdout)

f_hdl = open(fname, 'r')
csv_rdr = csv.reader(f_hdl)

header = True
for row in csv_rdr:
    #print row
    if header:
        header = False
    else:
        field = row[-1]
        res = re.search(regex, field)
        if res:
            audit_status = row[3]
            device = row[7]
            date_time = row[0]
            event_id = row[2]
            user = row[6]
            access_source = res.group(1)
            access_type = res.group(2)
            volume = res.group(3)
            snap = res.group(4)
            inode = res.group(5)
            ip = res.group(6)
            uid = res.group(8)
            path = res.group(9)
            handle_id = res.group(10)
            operation_id = res.group(11)
            process_id = res.group(12)
            image_file_name = res.group(13)
            primary_user_name = res.group(14)
            primary_domain = res.group(15)
            primary_logon_id = res.group(16)
            client_user_name = res.group(17)
            client_domain = res.group(18)
            client_logon_id = res.group(19)
            print audit_status, device, date_time, event_id, user, access_source, access_type, volume, snap, inode, ip, uid, path
            out.writerow(
                [audit_status, device, date_time, event_id, user, access_source, access_type, volume, snap, inode, ip, uid, path, handle_id, operation_id, process_id, image_file_name, primary_user_name, primary_domain, primary_logon_id, client_user_name, client_domain, client_logon_id]
            )
        else:
            print 'NOMATCH'
Any suggestions?
Some people, when confronted with a problem, think
“I know, I'll use regular expressions.” Now they have two problems.
If it's a CSV file that uses spaces for separation and parentheses for quoting, use
csv.reader(csvfile, delimiter=' ', quotechar='(')
If it's an even simpler case, use the split method on the string and pad the result with empty strings so all fields are present:
fields = field.split(' ')
fields = [i or j for i, j in map(None, fields, ('',) * 7)]
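For what it's worth, here is that padding idea run on one of your sample lines, just as a sketch. In Python 2, map(None, ...) zips the two sequences and pads the shorter one with None, so the comprehension falls back to '' for the missing fields:

field = 'STRING (STRING) STRING'
fields = field.split(' ')
fields = [i or j for i, j in map(None, fields, ('',) * 7)]
print fields  # ['STRING', '(STRING)', 'STRING', '', '', '', '']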
Try this for your regex string:
r = '(\\w+) access = (\\w+)\\s+Vol ID = (\\w+)\\s+Snap ID = (\\w+)\\s+Inode = (\\w+)\\s+IP = ((\\d|\\.)+)\\s+UID = (\\w+)\\s+Full Path = (\\S+)\\s+Handle ID: (\\S+)\\s+Operation ID: (\\S+)\\s+Process ID: (\\d+)\\s+Image File Name: (\\w+\\s+\\w+\\s+\\w+)\\s+Primary User Name: (\\S+)\\s+Primary Domain: (\\S+)\\s+Primary Logon ID: (.....\\s+......)\\s+Client User Name: (\\S+)\\s+Client Domain: (\\S+)\\s+Client Logon ID: (\\S+)\\s+Accesses: (.*)'
I am using MRjob to run Hadoop Streaming jobs over our HBase instance. For the life of me I cannot figure out how to pass a parameter to my reducer. I have two parameters that I want to pass to my reducer when I run the job: startDate and endDate. Here's what my current reducer looks like:
def reducer(self, groupId, meterList):
    """
    Print bucket.
    """
    sys.stderr.write("Working on group = " + str(groupId) + "\n")
    #print "Opening connection..."
    conn = open_connection(hostname)
    #print "Getting table..."
    table = get_table(conn, tableName)
    compositeDf = DataFrame()
    for meterId in meterList:
        sys.stderr.write("Querying: " + str(meterId) + "\n")
        df = extract_meter_data(table, meterId, startDate, endDate)
I cannot seem to pass startDate and endDate as parameters to my reducer. The only way I can get the job to pick up the parameters is through global variables above the class:
startDate = datetime.datetime(2012, 6, 10)
endDate = datetime.datetime(2012, 6, 11)

class MRDataQuality(MRJob):
    """
    MapReduce job that does a data quality check on the meter data in HBase.
    """
But that is dirty. I want to pass them in when calling the job. I've tried many approaches: setting them as instance variables, setting them as static class variables, creating an overloaded constructor for MRDataQualityJob... nothing seems to work. I am calling it from my top-level script programmatically like so:
if args.hadoop:
    mrdq_job = MRDataQuality(args=['-r', 'hadoop', '--conf-path', 'mrjob.conf', '--jobconf', 'mapred.reduce.tasks=42', meterFile])
else:
    mrdq_job = MRDataQuality(args=[meterFile])

with mrdq_job.make_runner() as runner:
    runner.run()
No matter what I do to the mrdq_job instance, it seems like runner.run() uses a fresh instance of the class which doesn't have the instance or static variables defined. How can I pass my parameters to the reducer? I can do it in regular Hadoop Streaming by passing a string: "--reducer reducer.py arg1 arg2". Is there any equivalent for MRjob?
How about passing your parameters through the job config and then reading them with get_jobconf_value?
Something like this:
from mrjob.compat import get_jobconf_value

class MRDataQuality(MRJob):
    def reducer(self, groupId, meterList):
        ...
        startDate = get_jobconf_value("my.job.settings.startdate")
        endDate = get_jobconf_value("my.job.settings.enddate")
        for meterId in meterList:
            sys.stderr.write("Querying: " + str(meterId) + "\n")
            df = extract_meter_data(table, meterId, startDate, endDate)
And then set the parameters in code like you did above:
mrdq_job = MRDataQuality(args=['-r', 'hadoop', '--conf-path', 'mrjob.conf', '--jobconf', 'mapred.reduce.tasks=42', '--jobconf', 'my.job.settings.startdate=2013-06-10', '--jobconf', 'my.job.settings.enddate=2013-06-11', meterFile])
How about passing your parameters through the job config and then reading them with get_jobconf_value inside of reducer_init? That way you only have to read the parameters in once.
Something like this:
from mrjob.compat import get_jobconf_value

class MRDataQuality(MRJob):
    def reducer_init(self):
        ...
        self.startDate = get_jobconf_value("my.job.settings.startdate")
        self.endDate = get_jobconf_value("my.job.settings.enddate")

    def reducer(self, groupId, meterList):
        for meterId in meterList:
            sys.stderr.write("Querying: " + str(meterId) + "\n")
            df = extract_meter_data(table, meterId, self.startDate, self.endDate)
And then set the parameters in code like you did above:
mrdq_job = MRDataQuality(args=['-r', 'hadoop', '--conf-path', 'mrjob.conf', '--jobconf', 'mapred.reduce.tasks=42', '--jobconf', 'my.job.settings.startdate=2013-06-10', '--jobconf', 'my.job.settings.enddate=2013-06-11', meterFile])
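One caveat worth noting: jobconf values arrive in the task as plain strings, so if extract_meter_data expects datetime objects (as the module-level startDate/endDate were), you would need to parse them, for example in reducer_init. A small sketch to drop into the class above, assuming the YYYY-MM-DD format used in the --jobconf arguments:

import datetime
from mrjob.compat import get_jobconf_value

def reducer_init(self):
    # jobconf values are strings; convert them back to datetime objects once per task
    self.startDate = datetime.datetime.strptime(
        get_jobconf_value("my.job.settings.startdate"), "%Y-%m-%d")
    self.endDate = datetime.datetime.strptime(
        get_jobconf_value("my.job.settings.enddate"), "%Y-%m-%d")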