Python: suppress warnings while creating an empty pcap file with Scapy

I want to create an empty pcap file. I'm using the wrpcap function of Scapy, but wrpcap takes exactly two arguments: the file name and a packet list, like:
wrpcap("my.pcap", my_pkt_list)
Since I want the file to be empty and I don't have a packet list, I'm writing an empty string to the pcap file. This creates the file, but it also produces a warning as well as errors, since a string is not a packet type.
WARNING: PcapWriter: unknown LL type for str. Using type 1 (Ethernet)
Traceback (most recent call last):
File "test.py", line 35, in <module>
wrpcap("pcap/FU.pcap","")
File "/usr/lib/python2.7/site-packages/scapy/utils.py", line 466, in wrpcap
PcapWriter(filename, *args, **kargs).write(pkt)
File "/usr/lib/python2.7/site-packages/scapy/utils.py", line 646, in write
self._write_packet(pkt)
File "/usr/lib/python2.7/site-packages/scapy/utils.py", line 688, in _write_packet
sec = int(packet.time)
AttributeError: 'str' object has no attribute 'time'
For now, I'm able to suppress the errors with try and except but unable to suppress the warning.
Code
from scapy.all import *

try:
    wrpcap("my.pcap", "")
except:
    pass
and the warning is still there:
WARNING: PcapWriter: unknown LL type for str. Using type 1 (Ethernet)
How can I suppress it from inside the Python code?

Python has a built-in mechanism in the standard library for suppressing warnings:
import warnings
warnings.simplefilter("ignore", Warning)
You can read more about the warnings library in the docs.
EDIT
It doesn't look like Scapy uses the warnings library to generate this message. Instead, it logs it to a logger called scapy.runtime at the WARNING level, and some default logging configuration echoes that to the console. You could add your own logging handler for scapy.runtime to suppress these messages.
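For example, a handler-based sketch (assuming, as in Scapy 2.x, that Scapy's console handler sits on a parent logger, so stopping propagation silences the message):

import logging

runtime_log = logging.getLogger("scapy.runtime")
runtime_log.addHandler(logging.NullHandler())  # avoid "no handlers" complaints
runtime_log.propagate = False  # keep records away from Scapy's console handler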

You can avoid that warning by using Scapy's PcapWriter class.
from scapy.all import *

try:
    writer = PcapWriter("my.pcap")
except:
    pass
This creates your empty pcap file. When you want to write some packets to it, just use the following code:
writer.write(<Your_packets>)
writer.flush()
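For example (a sketch; the Ether()/IP() packet is just a placeholder for whatever you actually want to write):

from scapy.all import Ether, IP

writer.write(Ether() / IP(dst="192.0.2.1"))  # any packet or packet list
writer.flush()
writer.close()  # release the underlying file handle when done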

You can suppress this warning by raising the log level of Scapy's runtime logger before the call to wrpcap:
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
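Putting it together with the original code (the try/except is kept only to swallow the AttributeError that writing an empty string still raises):

import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)

from scapy.all import *

try:
    wrpcap("my.pcap", "")
except AttributeError:
    pass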

The text "WARNING: ..." is not part of the exception and will thus not be suppressed by your try/except block.
An idea I got was to redirect stdout during the call
from scapy.all import *
import os
import sys

old_stdout = sys.stdout
devnull = open(os.devnull, "w")
try:
    # send anything printed during the call to the null device
    sys.stdout = devnull
    wrpcap("my.pcap", "")
except:
    pass
finally:
    sys.stdout = old_stdout
    devnull.close()
That should suppress any and all stdout output during the call.

Related

Python cPickle unable to load an OCR model library

I have just installed the ocropus OCR with all dependencies on my Windows 7 machine (I am using 32-bit Python 2.7). It seems to be working fine, except that I cannot load the default OCR model en-default.pyrnn.gz and receive a traceback instead. I am using the following syntax:
python ocropus-rpred -m en-default.pyrnn.gz book\0001\*.png
here is the error
INFO: #inputs47
# loading object /usr/local/share/ocropus/en-default.pyrnn.gz
Traceback (most recent call last):
File "ocropus-rpred" line 109, in <module>
network = ocrolib.load_object(args.model,verbose=1)
File "C:\anaconda32\lib\site-packages\ocrolib\common.py", line 513, in load_object
return unpickler.load()
EOFError
I have checked that the file is not empty; I also double-checked that the binary mode flags are enabled, i.e. "wb" and "rb", and converted the newlines of common.py using dos2unix. I have been unable to solve this problem. If anyone has experienced similar issues, kindly share.
import cPickle
import gzip
import os
import sys
# note: lstm and ocropus_find_file are defined elsewhere in ocrolib

def save_object(fname, obj, zip=0):
    if zip == 0 and fname.endswith(".gz"):
        zip = 1
    if zip > 0:
        # with gzip.GzipFile(fname,"wb") as stream:
        with os.popen("gzip -9 > '%s'" % fname, "wb") as stream:
            cPickle.dump(obj, stream, 2)
    else:
        with open(fname, "wb") as stream:
            cPickle.dump(obj, stream, 2)

def unpickle_find_global(mname, cname):
    if mname == "lstm.lstm":
        return getattr(lstm, cname)
    if not mname in sys.modules.keys():
        exec "import " + mname
    return getattr(sys.modules[mname], cname)

def load_object(fname, zip=0, nofind=0, verbose=0):
    """Loads an object from disk. By default, this handles zipped files
    and searches in the usual places for OCRopus. It also handles some
    class names that have changed."""
    if not nofind:
        fname = ocropus_find_file(fname)
    if verbose:
        print "# loading object", fname
    if zip == 0 and fname.endswith(".gz"):
        zip = 1
    if zip > 0:
        # with gzip.GzipFile(fname,"rb") as stream:
        with os.popen("gunzip < '%s'" % fname, "rb") as stream:
            unpickler = cPickle.Unpickler(stream)
            unpickler.find_global = unpickle_find_global
            return unpickler.load()
    else:
        with open(fname, "rb") as stream:
            unpickler = cPickle.Unpickler(stream)
            unpickler.find_global = unpickle_find_global
            return unpickler.load()
UPDATE: Please note that I have now used Python's native gzip, and it is working fine. Thank you for pointing that out. Here is the correct syntax that works on Windows: with gzip.GzipFile(fname, "rb") as stream:
Your use of gunzip (in the load_object function) is incorrect. Unless passed the -c argument, gunzip writes the decompressed data to a new file, not to its stdout (which is what you seem to be attempting to do).
As a result, it doesn't write anything to its stdout, and your stream variable contains no data, hence the EOFError.
A quick fix is to change your gunzip command line to give it the -c argument.
More info here: http://linux.die.net/man/1/gzip
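A sketch of that quick fix (note that, as the UPDATE above found, shelling out via os.popen will not behave the same way on Windows anyway):

with os.popen("gunzip -c < '%s'" % fname, "r") as stream:
    unpickler = cPickle.Unpickler(stream)
    unpickler.find_global = unpickle_find_global
    return unpickler.load()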
That said, why are you even shelling out to gunzip to decompress your data? Python's built-in gzip module should handle that without problems.
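For instance (a sketch; load_gz_object is a hypothetical name, and unpickle_find_global is the hook from the question's code):

import gzip
import cPickle

def load_gz_object(fname):  # hypothetical helper, not part of ocrolib
    # decompress in-process with the gzip module instead of shelling out
    with gzip.GzipFile(fname, "rb") as stream:
        unpickler = cPickle.Unpickler(stream)
        unpickler.find_global = unpickle_find_global
        return unpickler.load()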

ConfigParser instance has no attribute '[extension]'

I am learning Python and I am trying to do the simple task of reading information from a config file.
So, using the Python docs and this similar problem as a reference, I created two files.
This is my config file config.ini (also tried config.cfg)
[DEFAULT]
OutDir = path_to_file/
[AUTH]
TestInt = 100
TestStr = blue
TestParse = blua
and this is my python file test.py
import ConfigParser
from ConfigParser import *

config = ConfigParser()
config.read(config.cfg)
for name in config.options('AUTH'):
    print name
out = config.get('DEFAULT', 'OutDir')
print 'Output directory is ' + out
However, when running the command python test.py, I get this error:
Traceback (most recent call last):
File "test.py", line 7, in <module>
config.read(config.cfg)
AttributeError: ConfigParser instance has no attribute 'cfg'
Note: I thought that meant the extension couldn't be read, so I created the .ini file and changed it in the code; I received the same error, but it instead read ...has no attribute 'ini'
I am not sure what I am doing wrong, since I am doing exactly the same thing as the Python docs and the solution someone used to fix this similar issue.
config.read takes a string as its argument. You forgot to quote the file name, and config was coincidentally the name of an existing Python object (your parser instance) that could potentially have had a cfg attribute. You'd get an entirely different error if you had written config.read(foobarbaz.ini).
The correct line is
config.read('config.cfg') # or 'config.ini', if that's the file name
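For completeness, a working version of test.py under the same file layout as the question (assuming the config file is saved as config.ini):

import ConfigParser

config = ConfigParser.ConfigParser()
config.read('config.ini')

for name in config.options('AUTH'):
    print name

out = config.get('DEFAULT', 'OutDir')
print 'Output directory is ' + out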

How to know which file is calling which file, filesystem

How can I know which file is calling which file in the filesystem? For example, file1.exe calls file2.exe,
so file2.exe is modified,
and file1.exe is entered in a log file.
This is on Windows.
I have searched the Internet but was not able to find any samples.
In order to know which file is calling which file, you can use the trace module.
For example, if you have two files:
***file1.py***

import file2

def call1():
    file2.call2()

***file2.py***

def call2():
    print "---------"
You can use it from the console:
$ python -m trace --trackcalls path/to/file1.py
or within a program using a Trace object
***tracefile.py***

import trace, sys
from file1 import call1

# specify what to trace here
tracer = trace.Trace(ignoredirs=[sys.prefix, sys.exec_prefix], trace=0, count=1)
tracer.runfunc(call1)  # call the function call1 in file1
results = tracer.results()
results.write_results(summary=True, coverdir='.')

python os.sys.stdin.buffer.read failed if given buffer length

import os
s = os.sys.stdin.buffer.read(1024*32)
failed with
D:\Projects\pytools>python t1.py
Traceback (most recent call last):
File "t1.py", line 2, in <module>
s = os.sys.stdin.buffer.read(1024*32)
OSError: [Errno 12] Not enough space
But if buflen is given as 1024*32 - 1, then it works correctly:
import os
s = os.sys.stdin.buffer.read(1024*32-1)
If you run python t1.py, the process blocks and waits for input, as expected.
Why does Python 3.3 have this 1024*32 - 1 buffer length limitation? Is it system-dependent, or the same on all systems? How can we remove this limitation?
BTW: I am using 32-bit Python 3.3 on Windows 7.
We start by looking at the source of the os module here, where line 26 reads
import sys, errno
This tells us that os.sys is just a reference to the standard sys module.
Then we head over to the source of the sys module, where in line 1593 we find the following comment (thankfully someone put it there...):
/* stdin/stdout/stderr are now set by pythonrun.c */
Then we go to the pythonrun.c file, where we meet the following code in line 1086:
std = create_stdio(iomod, fd, 0, "<stdin>", encoding, errors);
and this on line 1091:
PySys_SetObject("stdin", std);
Then we look for the definition of the create_stdio() function, which we find in line 910. We look for the return value of this function, which is on line 999 and looks like this:
return stream;
Now we have to find out what stream is. It's the return value of the function _PyObject_CallMethodId() called in line 984.
I hope you see the flow; try to follow it from here.
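If you just need to get past the limit, a practical workaround (a sketch, assuming single reads below the 32 KB boundary succeed, as the question observed) is to read in smaller chunks and join them:

import sys

def read_all(stream, chunk=1024 * 16):
    # collect the input in chunks smaller than the failing size
    parts = []
    while True:
        data = stream.read(chunk)
        if not data:
            break
        parts.append(data)
    return b"".join(parts)

s = read_all(sys.stdin.buffer)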

How can python subprocess.Popen see select.poll and then later not? (select 'module' object has no attribute 'poll')

I'm using the (awesome) mrjob library from Yelp to run my Python programs in Amazon's Elastic MapReduce. It depends on subprocess in the standard Python library. On my Mac running Python 2.7.2, everything works as expected.
However, when I switched to using the exact same code on Ubuntu LTS 11.04, also with Python 2.7.2, I encountered something strange:
mrjob loads the job, and then attempts to communicate with its child processes using subprocess and generates this error:
File "/usr/local/lib/python2.7/dist-packages/mrjob-0.3.1-py2.7.egg/mrjob/emr.py", line 1212, in _build_steps
steps = self._get_steps()
File "/usr/local/lib/python2.7/dist-packages/mrjob-0.3.1-py2.7.egg/mrjob/runner.py", line 1003, in _get_steps
stdout, stderr = steps_proc.communicate()
File "/usr/lib/python2.7/subprocess.py", line 754, in communicate
return self._communicate(input)
File "/usr/lib/python2.7/subprocess.py", line 1302, in _communicate
stdout, stderr = self._communicate_with_poll(input)
File "/usr/lib/python2.7/subprocess.py", line 1332, in _communicate_with_poll
poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'
This appears to be a problem with subprocess and not mrjob.
I dug into /usr/lib/python2.7/subprocess.py and found that during import it runs:
if mswindows:
    ... snip ...
else:
    import select
    _has_poll = hasattr(select, 'poll')
By editing that, I verified that it really does set _has_poll==True. And this is correct; easily verified on the command line.
However, when execution progresses to Popen._communicate_with_poll, somehow the select module has changed! The following was generated by printing dir(select) right before it attempts to use select.poll():
['EPOLLERR', 'EPOLLET', 'EPOLLHUP', 'EPOLLIN', 'EPOLLMSG',
'EPOLLONESHOT', 'EPOLLOUT', 'EPOLLPRI', 'EPOLLRDBAND',
'EPOLLRDNORM', 'EPOLLWRBAND', 'EPOLLWRNORM', 'PIPE_BUF',
'POLLERR', 'POLLHUP', 'POLLIN', 'POLLMSG', 'POLLNVAL',
'POLLOUT', 'POLLPRI', 'POLLRDBAND', 'POLLRDNORM',
'POLLWRBAND', 'POLLWRNORM', '__doc__', '__name__',
'__package__', 'error', 'select']
no attribute called 'poll'!?!? How did it go away?
So, I hardcoded _has_poll=False and then mrjob happily continues with its work, runs my job in AWS EMR, with subprocess using communicate_with_select... and I'm stuck with a hand-modified standard library...
Any advice? :-)
I had a similar problem, and it turns out that gevent replaces the built-in select module with gevent.select, which doesn't provide poll (as it is a blocking method).
However, for some reason, gevent by default doesn't patch subprocess, which uses select.poll.
An easy fix is to replace subprocess with gevent.subprocess:
import gevent.monkey
gevent.monkey.patch_all(subprocess=True)
import sys
import gevent.subprocess
sys.modules['subprocess'] = gevent.subprocess
If you do this before importing the mrjob library, it should work fine.
Sorry for writing a full answer instead of a comment, otherwise I'd lose code indentation.
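Alternatively, if you'd rather not route subprocess through gevent, you can do at runtime what the questioner did by hand-editing the standard library (a sketch; _has_poll is a private subprocess attribute, so this relies on an implementation detail):

import subprocess

# force subprocess to use the select()-based code path instead of poll()
subprocess._has_poll = False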
I cannot help you directly, since the problem seems very tightly tied to your code, but I can help you find out what is going on. Relying on the fact that Python modules can be arbitrary objects, try something like this:
class FakeModule(dict):
    def __init__(self, origmodule):
        self._origmodule = origmodule
        self.__all__ = dir(origmodule)

    def __getattr__(self, attr):
        return getattr(self._origmodule, attr)

    def __delattr__(self, attr):
        if attr == "poll":
            raise RuntimeError, "Trying to delete poll!"
        self._origmodule.__delattr__(attr)

def replaceSelect():
    import sys
    import select
    fakeselect = FakeModule(select)
    sys.modules["select"] = fakeselect

replaceSelect()
import select
del select.poll
and you'll get an output like:
Traceback (most recent call last):
File "domy.py", line 27, in <module>
del select.poll
File "domy.py", line 14, in __delattr__
raise RuntimeError, "Trying to delete poll!"
RuntimeError: Trying to delete poll!
By calling replaceSelect() in your code you should be able to get a traceback of where somebody is deleting poll(), so you can understand why.
I hope my FakeModule implementation is good enough, otherwise you might need to modify it.
