The problem in my code is that if a field is missing, an AttributeError is raised, and if I catch the error, nothing is shown at all.
import pyshark
from tabulate import tabulate
capture = pyshark.FileCapture('/home/sipl/Downloads/DHCP.cap', display_filter='udp.port eq 67')
# capture2 = pyshark.LiveCapture(interface='wlo2', display_filter='arp')
d = dict()
for packet in capture:
    try:
        d['mac'] = packet.dhcp.hw_mac_addr
        d['hname'] = packet.dhcp.option_hostname
        d['vend'] = packet.dhcp.option_vendor_class_id
    except AttributeError:
        pass
    try:
        d['srvrid'] = packet.dhcp.option_dhcp_server_id
        d['smask'] = packet.dhcp.option_subnet_mask
        d['DNS'] = packet.dhcp.option_domain_name_server
        d['Domain'] = packet.dhcp.option_domain_name
    except AttributeError:
        pass
    try:
        d['ip'] = packet.dhcp.option_requested_ip_address
    except AttributeError:
        pass
    try:
        table = {'Mac': [d['mac']], 'IP': [d['ip']], 'host': [d['hname']],
                 'vendor': [d['vend']], 'Server id': [d['srvrid']],
                 'Sub mask': [d['smask']], 'DNS': [d['DNS']], 'Domain': [d['Domain']]}
        print(tabulate(table, headers='keys'))
    except KeyError:
        continue
What I want: if a field is missing, store whichever fields did arrive in the packet and show them in the table, leaving the missing field's column empty rather than raising an error.
Basically, I want it to store the incoming fields, print them in the table, and not raise an error for the missing ones.
I'm testing this with FileCapture for now, but I need it to work with LiveCapture.
If I understand you correctly, you don't want to get the AttributeError, but instead put an empty value in when a field is missing.
You can do that by reading each value with the getattr function.
I don't know whether packet.dhcp itself is always present, or whether only the fields after it can be missing.
But let's say dhcp always exists and only the actual fields you are pointing at can be missing:
Create a small helper function, get_value(obj, key, default='') -> str, and implement it using getattr. Note that getattr takes its default as a positional argument, not a keyword:
def get_value(obj, key, default='') -> str:
    return getattr(obj, key, default)
Now replace all the corresponding assignments in your code by wrapping each attribute access in the helper:
i.e.: get_value(packet.dhcp, 'option_domain_name')
That's it, it should work.
PS. If the dhcp is not always present, you will have to do the same with it too.
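Putting that together, here is a minimal self-contained sketch of the getattr approach. The attribute names come from the question; FakeDhcp is a stand-in for packet.dhcp used purely for illustration:

```python
def get_value(obj, key, default=''):
    # getattr's default is positional: returned when the attribute is absent
    return getattr(obj, key, default)

class FakeDhcp:  # stand-in for packet.dhcp, for illustration only
    hw_mac_addr = 'aa:bb:cc:dd:ee:ff'
    # option_hostname deliberately missing

dhcp = FakeDhcp()
mac = get_value(dhcp, 'hw_mac_addr')        # -> 'aa:bb:cc:dd:ee:ff'
hname = get_value(dhcp, 'option_hostname')  # missing -> ''
```

A missing field now yields an empty string instead of raising, so every column can always be filled in.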
I did it by using the method:
dictionary.get()
I'm trying to extract values from JSON input using Python. There are many tags that I need to extract, and not all of the JSON files have the same structure since they come from multiple sources. Sometimes a tag might be missing, so a KeyError is bound to happen. If a tag is missing, the corresponding variable should default to None and be returned as part of a list (members) to the main call.
I tried calling a function to pass each tag into an individual try/except, but I got hit by an error on the function call itself where the tag is being passed. So instead I tried the code below, but it skips any subsequent lines even if the tags are present. Is there a better way to do this?
def extract(self):
    try:
        self.data_version = self.data['meta']['data_version']
        self.created = self.data['meta']['created']
        self.revision = self.data['meta']['revision']
        self.gender = self.data['info']['gender']
        self.season = self.data['info']['season']
        self.team_type = self.data['info']['team_type']
        self.venue = self.data['info']['venue']
        status = True
    except KeyError:
        status = False
    members = [attr for attr in dir(self) if
               not callable(getattr(self, attr)) and not attr.startswith("__")
               and getattr(self, attr) is None]
    return status, members
UPDATED:
Thanks Barmar & John! .get() worked really well.
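A sketch of that .get() fix (keys from the question): chaining .get with an empty-dict default makes each assignment independent, so a missing tag yields None instead of aborting the rest of the extraction.

```python
# Example input: 'data_version' and 'venue' are missing
data = {'meta': {'created': '2021-01-01'},
        'info': {'gender': 'female'}}

meta = data.get('meta', {})
info = data.get('info', {})
data_version = meta.get('data_version')  # missing -> None
created = meta.get('created')
venue = info.get('venue')                # missing -> None
```

Every line runs regardless of which tags are present, and the None-valued attributes can then be collected into the members list as before.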
Changes on our LDAP server have changed the case of the attributes returned from a search. For example, "mailroutingaddress" is now "mailRoutingAddress". The searches themselves are case-insensitive, but the Python code processing the returned LDAP object is attempting to reference attributes in all lowercase and failing.
Is there a simple way to specify that the LDAP module should lowercase all returned attributes? Or, is there a straightforward way to change them to lowercase as soon as the results are returned?
We're trying to avoid extensive rewrites to handle this change in our LDAP software.
This code raises a "Key not found" error:
timestamp_filter = ldap.filter.filter_format("a filter that works as intended")
timestamp_result = search_ldap(timestamp_filter, ['modifytimestamp'])
if not timestamp_result:
    log_it('WARN', 'No timestamp returned for group "%s"' % group_cn)
    return False
else:
    mod = timestamp_result[0][1]['modifytimestamp'][0].decode('utf-8').split('Z')[0]
When the last line was changed to this, it worked:
mod = timestamp_result[0][1]['modifyTimestamp'][0].decode('utf-8').split('Z')[0]
I was hoping there was something I could do when the ldap object was first bound:
def bind_to_ldap(dn=auth_dn, pwd=auth_pwd):
    # create connection to ldap and pass bind object back to caller
    try:
        ldc = ldap.initialize(uri, bytes_mode=False)
        bind = ldc.simple_bind_s(dn, pwd)
    except ldap.LDAPError as lerr:
        log_it('EXCEPTION', "Exception raised when attempting to bind to LDAP server %s with dn %s" % (uri, dn))
        graceful_exit("%s" % lerr)
    return ldc
Or, I could iterate over all of the attributes passed back by the search function.
s_filter = ldap.filter.filter_format("a filter that works as intended")
s_results = search_ldap(s_filter)
groups = {}
# okay, lots of processing to do here....
for group in s_results:
    # etc. etc. etc.
You can change a dict's keys to lowercase pretty easily with a dict comprehension:
>>> timestamp_result = {'modifyTimestamp': 'foo'}
>>> timestamp_result = {k.lower(): v for k, v in timestamp_result.items()}
>>> timestamp_result
{'modifytimestamp': 'foo'}
A more robust solution would be to have the search_ldap function normalize the server output -- that would minimize the amount of code that you'd need to update when the server response changes.
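As a sketch of that normalization step, assuming the usual python-ldap result shape of (dn, {attribute: [values]}) tuples, the helper can lowercase every attribute name before anything else touches the results:

```python
def normalize_attrs(results):
    # Lowercase every attribute name so callers can use one consistent case
    return [(dn, {k.lower(): v for k, v in attrs.items()})
            for dn, attrs in results]

# Example result as the server might now return it:
raw = [('cn=group,dc=example,dc=com',
        {'modifyTimestamp': [b'20230101000000Z']})]
normalized = normalize_attrs(raw)
# normalized[0][1]['modifytimestamp'] now works regardless of server casing
```

If search_ldap applies this before returning, none of the downstream code needs to change.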
I am working on a Flask app that uses MongoDB. One endpoint takes a CSV file and inserts its content into MongoDB with insert_many(). Before inserting, I create a unique index to prevent duplication. When there are no duplicates I can reach inserted_ids for that operation, but when a duplicate key error is raised I get None and can't get inserted_ids. I am already using ordered=False. Is there any way to get inserted_ids even with a duplicate key error?
def createBulk():  # in controller
    identity = get_jwt_identity()
    try:
        csv_file = request.files['csv']
        insertedResult = ProductService(identity).create_product_bulk(csv_file)
        print(insertedResult)  # this result is None on a DuplicateKeyError
        threading.Thread(target=ProductService(identity).sendInsertedItemsToEventCollector,
                         args=(insertedResult,)).start()
        return json_response(True, status=200)
    except Exception as e:
        print("insertedResultErr -> ", str(e))
        return json_response({'error': str(e)}, 400)

def create_product_bulk(self, products):  # in service
    data_frame = read_csv(products)
    data_json = data_frame.to_json(orient="records", force_ascii=False)
    try:
        return self.repo_client.create_bulk(loads(data_json))
    except bulkErr as e:
        print(str(e))
    except DuplicateKeyError as e:
        print(str(e))

def create_bulk(self, products):  # in repo
    self.checkCollectionName()
    self.db.get_collection(name=self.collection_name).create_index('barcode', unique=True)
    return self.db.get_collection(name=self.collection_name).insert_many(products, ordered=False)
Unfortunately, not in the way you have done it with the current pymongo drivers. As you have found, if you get errors in your insert_many() it will throw an exception, and the exception detail does not contain the inserted_ids.
It does contain details of the keys that fail (in e.details['writeErrors'][i]['keyValue']), so you could try to work backwards from that against your original products list.
Your other workaround is to use insert_one() in a loop with a try ... except and check each insert. I know this is less efficient but it's a workaround ...
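A sketch of that insert_one() loop follows. DuplicateKeyError stands in for pymongo.errors.DuplicateKeyError, and FakeCollection mimics a pymongo collection with a unique index on 'barcode', so the sketch runs without a MongoDB server; in real code you would import the exception from pymongo and pass a real collection:

```python
class DuplicateKeyError(Exception):  # stand-in for pymongo.errors.DuplicateKeyError
    pass

class FakeCollection:
    """Mimics insert_one() against a unique index on 'barcode'."""
    def __init__(self):
        self._seen = set()
    def insert_one(self, product):
        if product['barcode'] in self._seen:
            raise DuplicateKeyError(product['barcode'])
        self._seen.add(product['barcode'])
        return type('Result', (), {'inserted_id': product['barcode']})()

def insert_individually(collection, products):
    # Insert one document at a time so the ids of the successful inserts
    # survive even when some documents collide with the unique index.
    inserted_ids = []
    for product in products:
        try:
            inserted_ids.append(collection.insert_one(product).inserted_id)
        except DuplicateKeyError:
            pass  # skip the duplicate, keep what did insert
    return inserted_ids

ids = insert_individually(FakeCollection(),
                          [{'barcode': 1}, {'barcode': 1}, {'barcode': 2}])
```

The trade-off is one round trip per document instead of one per batch, which is why it is a workaround rather than a drop-in replacement.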
I have the following in my Django model, which I am using with PostgreSQL:
class Business(models.Model):
    location = models.CharField(max_length=200, default="")
    name = models.CharField(max_length=200, default="", unique=True)
In my view I have:
for b in bs:
    try:
        p = Business(**b)
        p.save()
    except IntegrityError:
        pass
When the app is run and an IntegrityError is triggered I would like to grab the already inserted record and also the object (I assume 'p') that triggered the error and update the location field.
In pseudocode:
for b in bs:
    try:
        p = Business(**b)
        p.save()
    except IntegrityError:
        EXISTING_RECORD.location = EXISTING_RECORD.location + p.location
        EXISTING_RECORD.save()
How is this done in django?
This is the way I got the existing record that you are asking for.
In this case, I had MyModel with
unique_together = (("owner", "hsh"),)
I used regex to get the owner and hsh of the existing record that was causing the issue.
import re
from django.db import IntegrityError

try:
    ...  # do something that might raise an IntegrityError
except IntegrityError as e:
    # example error message (str(e)): 'duplicate key value violates unique constraint "thingi_userfile_owner_id_7031f4ac5e4595e3_uniq"\nDETAIL: Key (owner_id, hsh)=(66819, 4252d2eba0e567e471cb08a8da4611e2) already exists.\n'
    match = re.search(r'Key \(owner_id, hsh\)=\((?P<owner_id>\d+), (?P<hsh>\w+)\) already', str(e))
    existing_record = MyModel.objects.get(owner_id=match.group('owner_id'), hsh=match.group('hsh'))
I tried get_or_create, but that doesn't quite work the way you want: if you call get_or_create with both the name and the location, you still get an integrity error; and if you do what Joran suggested, unless you overload update, it will overwrite location rather than append to it.
This should work the way you want:
for b in bs:
    bobj, new_flag = Business.objects.get_or_create(name=b['name'])
    if new_flag:
        bobj.location = b['location']
    else:
        bobj.location += b['location']  # or possibly something like += ',' + b['location'] if you wanted to separate them
    bobj.save()
It would be nice (and may be possible, though I haven't tried), in the case where you have multiple unique constraints, to be able to inspect the IntegrityError to determine which field(s) were violated (similar to the accepted answer in "IntegrityError: distinguish between unique constraint and not null violations", which also has the downside of appearing to be Postgres-only). Note that if you wanted to follow your original framework, you could do collidedObject = Business.objects.get(name=b['name']) in your exception handler, but that only works when you know for sure it was a name collision.
for b in bs:
    p, created = Business.objects.get_or_create(name=b['name'])
    Business.objects.filter(pk=p.pk).update(**b)
I think anyway
I'm doing some RESTful API calls to an outside department and have written various functions (similar to the snippet below) that handle this based on what info I'm needing (e.g. "enrollment", "person", etc.). Now I'm left wondering if it wouldn't be more pythonic to put this inside of a class, which I believe would then make it easier to do processing such as "has_a_passing_grade", etc. and pass that out as an attribute or something when the class is instantiated.
Is there a standard way of doing this? Is it as easy as creating a class, somehow building the api_url as I'm doing below, call the api, parse and format the data, build a dict or something to return, and be done? And how would the call to such a class look? Does anyone have some example code similar to this that can be shared?
Thanks, in advance, for any help!
from django.utils import simplejson

try:
    api_url = get_api_url(request, 'enrollment', person_id)
    enrollment = call_rest_stop(key, secret, 'GET', api_url)
    enrollment_raw = enrollment.read()
    if not enrollment_raw:
        return 'error encountered', ''
    enrollment_recs = simplejson.loads(enrollment_raw)
    # now put it in a dict
    for enrollment in enrollment_recs:
        coursework_dict = {
            'enrollment_id': enrollment['id'],
            ...,
        }
        coursework_list.append(coursework_dict)
    cola_enrollment.close()
except Exception as exception:
    return 'Error: ' + str(exception), ''
So, let's say you want your API's users to call your API like so:
student_history, error_message = get_student_history(student_id)
You could then just wrap the above in that function:
from django.utils import simplejson

def get_student_history(person_id):
    try:
        api_url = get_api_url(request, 'enrollment', person_id)
        enrollment = call_rest_stop(key, secret, 'GET', api_url)
        enrollment_raw = enrollment.read()
        if not enrollment_raw:
            return [], 'Got empty enrollment response'
        enrollment_recs = simplejson.loads(enrollment_raw)
        # now put it in a dict
        coursework_list = []
        for enrollment in enrollment_recs:
            coursework_dict = {
                'enrollment_id': enrollment['id'],
                ...,
            }
            coursework_list.append(coursework_dict)
        cola_enrollment.close()
        return coursework_list, None
    except Exception as e:
        return [], str(e)
You could also use a class, but keep in mind that you should only do that if there would be methods that those using your API would benefit from having. For example:
class EnrollmentFetcher(object):
    def __init__(self, person_id):
        self.person_id = person_id

    def fetch_data(self):
        self.coursework_list, self.error_message = get_student_history(self.person_id)

    def has_coursework(self):
        return len(self.coursework_list) > 0

fetcher = EnrollmentFetcher(student_id)
fetcher.fetch_data()
if fetcher.has_coursework():
    # Do something
Object-oriented programming is neither a good practice nor a bad one. You should choose to use it if it serves your needs in any particular case. In this case, it could help clarify your code (has_coursework is a bit clearer than checking if a list is empty, for example), but it may very well do the opposite.
Side note: Be careful about catching such a broad exception. Are you really okay with continuing if it's a MemoryError, for example?