Scapy variable sniff stop - python

I found a similar problem
(Instance variables not being updated Python when using Multiprocessing),
but still do not know the solution for my task.
The task is to stop a Scapy sniff function after a test script completes. The running duration of a single test script can vary greatly (from a few seconds to hours). My sniff function runs in a separate thread. The test script calls an init function at the beginning, which calls the sniff function from another module.
@classmethod
def SaveFullTrafficPcap(self, TestCase, Termination):
    try:
        Full_Traffic = []
        PktList = []
        FullPcapName = Settings['GeneralSettings']['ResultsPath'] + TestCase.TestCaseName + "Full_Traffic_PCAP.pcap"
        #while Term.Termination < 1:
        Full_Traffic = sniff(lfilter=None, iface=str(Settings['GeneralSettings']['EthInterface']), store=True,
                             prn=lambda x: Full_Traffic.append(x), count=0, timeout=Term.Termination)
        print(Full_Traffic)
        wrpcap(FullPcapName, Full_Traffic)
    except Exception:
        SYS.ABS_print("No full traffic PCAP file written!\n")
At the end of the test script an exit function is called. In the exit function I set the Term.Termination parameter to 1 and wait for 5 seconds, but it doesn't work. The sniff function is stopped by the system and I get no "FullPcapName" file.
If count or timeout get a value, the code works without problems and I get my FullPcapName file with the complete traffic on my interface.
Does anybody have hints on how I can stop the sniff function cleanly after the test script finishes?

Use of the stop_filter argument as specified here worked for me. I've duplicated HenningCash's code below for convenience:
import time, threading
from scapy.all import sniff

e = threading.Event()

def _sniff(e):
    a = sniff(filter="tcp port 80", stop_filter=lambda p: e.is_set())
    print("Stopped after %i packets" % len(a))

print("Start capturing thread")
t = threading.Thread(target=_sniff, args=(e,))
t.start()

time.sleep(3)
print("Try to shutdown capturing...")
e.set()

# This will run until you send a HTTP request somewhere
# There is no way to exit clean if no package is received
while True:
    t.join(2)
    if t.is_alive():
        print("Thread is still running...")
    else:
        break

print("Shutdown complete!")
However, you still have to wait for a final packet to be sniffed, which might not be ideal in your scenario.
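One workaround for that limitation (my own sketch, not part of the quoted answer) is to sniff in short chunks via the timeout argument, so the stop event is re-checked even when no packet arrives:

import threading
from scapy.all import sniff

e = threading.Event()
captured = []

def _sniff_chunked(e):
    # each sniff() call returns after at most 1 second,
    # so the event is re-checked even with no traffic
    while not e.is_set():
        captured.extend(sniff(filter="tcp port 80", timeout=1))
    print("Stopped after %i packets" % len(captured))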

I have now solved the problem with global variables. It is not nice, but it works well.
Nevertheless, I am interested in a better solution for stopping sniff via a variable.

stop_var = ""
def stop():
global stop_var
stop_var.stop()
def start():
"""
your code
"""
global stop_var
stop_var = AsyncSniffer(**arg)
stop_var=start()
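For reference, a sketch of the same idea without globals (AsyncSniffer is available in Scapy 2.4.3+; the interface name and file name here are placeholders):

from scapy.all import AsyncSniffer, wrpcap

sniffer = AsyncSniffer(iface="eth0", store=True)  # "eth0" is a placeholder
sniffer.start()

# ... run the test script for however long it takes ...

sniffer.stop()  # ends the capture and joins the sniffing thread
wrpcap("Full_Traffic_PCAP.pcap", sniffer.results)  # .results holds the captured packets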

Related

How to stop continuous_threading in python

I am working on a project where I have to read values from a serial port and display them on a tkinter GUI. I am using the continuous_threading module of Python: a continuous thread reads the data on the serial port every 0.5 s, but now I want to stop this continuous thread. How should I stop it?
This is the function which I am calling when a checkbutton is pressed:
def scan():
    print("in scan")
    btn1_state = var1.get()
    print("Scan: %d" % btn1_state)
    t1 = continuous_threading.PeriodicThread(0.5, readserial)
    if btn1_state == 1:
        t1.start()
    else:
        print("entered else")
        t1.stop()  # I am using stop() but the thread doesn't stop
Please help.
The problem is likely that you are using a blocking read function in your readserial function. It needs a timeout. I can reproduce with this code:
import time
import continuous_threading

time_list = []

def save_time():
    while True:
        time.sleep(1)
        time_list.append(time.time())

th = continuous_threading.PeriodicThread(0.5, save_time)
th.start()
time.sleep(4)
th.join()
print(time_list)
This never exits.
Modified from the examples.
Since continuous_threading expects its event loop to be in control, it never gets to the stop event.
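In this case the blocking call is the serial read, so the fix is to open the port with a timeout so the read returns periodically instead of blocking forever. A minimal sketch using pyserial (the port name, baud rate, and readserial body are assumptions, since the original code is not shown):

import serial

# timeout=0.5 makes read()/readline() return after at most 0.5 s
ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=0.5)

def readserial():
    line = ser.readline()  # returns b'' on timeout instead of blocking
    if line:
        print(line.decode(errors='replace').strip())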

Multiple stdout w/ flush going on in Python threading

I made a small piece of code to test out and hopefully debug the problem without having to modify the code in my main Python applet. This has led me to build this code:
#!/usr/bin/env python
import sys, threading, time

def loop1():
    count = 0
    while True:
        sys.stdout.write('\r thread 1: ' + str(count))
        sys.stdout.flush()
        count = count + 1
        time.sleep(.3)

def loop2():
    count = 0
    print ""
    while True:
        sys.stdout.write('\r thread 2: ' + str(count))
        sys.stdout.flush()
        count = count + 2
        time.sleep(.3)

if __name__ == '__main__':
    try:
        th = threading.Thread(target=loop1)
        th.start()
        th1 = threading.Thread(target=loop2)
        th1.start()
    except KeyboardInterrupt:
        print ""
My goal with this code is to have both of these threads display their output on stdout (with flushing) at the same time, side by side or something similar. The problem, I assume, is that since each thread flushes its own write, it overwrites the other thread's string. I don't quite know how to get this to work, if it is even possible.
If you just run one of the threads, it works fine. However, I want to be able to run both threads, each with its own string, at the same time in the terminal output. Here is a picture displaying what I'm getting:
terminal screenshot
Let me know if you need more info. Thanks in advance.
Instead of allowing each thread to output to stdout, a better solution is to have one thread control stdout exclusively. Then provide a threadsafe channel for the other threads to dispatch data to be output.
One good method to achieve this is to share a Queue between all threads. Ensure that only the output thread is accessing data after it has been added to the queue.
The output thread can store the last message from each other thread and use that data to format stdout nicely. This can include clearing the output to display something like this, and updating it as each thread generates new data.
Threads
#1: 0
#2: 0
Example
Some decisions were made to simplify this example:
There are gotchas to be wary of when passing arguments to threads.
Daemon threads terminate themselves when the main thread exits; they are used here to avoid adding complexity to this answer. Using them in long-running or large applications can pose problems. Other questions discuss how to exit a multithreaded application without leaking memory or locking system resources, and you will need to think about how your program should signal an exit. Consider using asyncio to save yourself these considerations.
No newlines are used because \r carriage returns cannot clear the whole console; they only allow the current line to be rewritten.
import queue, threading
import time, sys

q = queue.Queue()
keepRunning = True

def loop_output():
    thread_outputs = dict()
    while keepRunning:
        try:
            thread_id, data = q.get_nowait()
            thread_outputs[thread_id] = data
        except queue.Empty:
            # because the queue is used to update, there's no need to wait or block.
            pass
        pretty_output = ""
        for thread_id, data in thread_outputs.items():
            pretty_output += '({}:{}) '.format(thread_id, str(data))
        sys.stdout.write('\r' + pretty_output)
        sys.stdout.flush()
        time.sleep(1)

def loop_count(thread_id, increment):
    count = 0
    while keepRunning:
        msg = (thread_id, count)
        try:
            q.put_nowait(msg)
        except queue.Full:
            pass
        count = count + increment
        time.sleep(.3)

if __name__ == '__main__':
    try:
        th_out = threading.Thread(target=loop_output)
        th_out.start()
        # make sure to use args, not pass arguments directly
        th0 = threading.Thread(target=loop_count, args=("Thread0", 1))
        th0.daemon = True
        th0.start()
        th1 = threading.Thread(target=loop_count, args=("Thread1", 3))
        th1.daemon = True
        th1.start()
        # Keep the main thread alive to wait for KeyboardInterrupt
        while True:
            time.sleep(.1)
    except KeyboardInterrupt:
        print("Ended by keyboard stroke")
        keepRunning = False
        for th in [th0, th1]:
            th.join()
Example Output:
(Thread0:110) (Thread1:330)

Dynamic Threading

Hello, I am having an issue with Python threads.
What I am aiming to do is create a function that launches other functions as threads when called. Unfortunately I am running into two issues (besides my lack of knowledge in Python):
1: If I put quotes around "globals()[T_Name[i]]()", it is treated as a string and the code just runs through normally.
2: If I omit the quotes around globals()[T_Name[i]](), it launches the first function immediately and never gets through the rest of the script to launch it as a thread.
If anyone could provide some insight, I'd appreciate it. I apologize for the formatting; I will be bumping it up to PEP 8 standards eventually.
Code:
import threading
import time

T_Name = ("Write_Done", "Write_Pin")
T_Time = []
Tr = []
for i, Nu in enumerate(T_Name):
    Tr.append("T" + str(i))
    T_Time.append("0")

def Write_Done():
    while True:
        print("Done")
        time.sleep(5)

def Write_Pin():
    while True:
        print("Pin")
        time.sleep(15)

def Thread_Checker():
    while True:
        time.sleep(5)
        for i, TH in enumerate(T_Time):
            if (time.time() - int(TH)) < 30:
                # thread is still rocking
                pass
            else:
                # thread has failed. Time to get her done.
                Tr[i] = threading.Thread(target=("globals()[T_Name[i]]()"))
                print("starting" + T_Name[i])
                Tr[i].daemon = True
                Tr[i].start()
                print("Test if alive")
                if Tr[0].is_alive():
                    print("I LIVE!")
                else:
                    print("I ded")

Thread_Checker()
Use a lambda function to create something that is actually callable as the target, but defers the call of what you want until the target is called.
Tr[i] = threading.Thread(target=lambda: globals()[T_Name[i]]())
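One caveat worth adding (my note, not part of the answer above): the lambda looks up i when the thread actually runs, not when the lambda is created, so if the surrounding loop advances first, the wrong function may be started. Binding the value through a default argument avoids this, and passing the function object directly works too:

# bind the current value of T_Name[i] at lambda-creation time
Tr[i] = threading.Thread(target=lambda name=T_Name[i]: globals()[name]())

# or skip the lambda and pass the function object itself
Tr[i] = threading.Thread(target=globals()[T_Name[i]])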

How do I sleep a dedicated python thread for task scheduling?

I'm writing a library that will connect to sockets, manage them, process their data, and do stuff based on that.
My problem lies in sending b"\r\n\x00" to the socket every 20 seconds. I thought that if I started a new thread for the ping function, that would work.
However, time.sleep() seems to pause the whole program instead of, as I expected, just that thread.
Here's my code thus far:
def main(self):
    recvbuf = b""
    self.connect(self.group, self.user, self.password)
    while self.connected:
        # getting already-made socket connections
        rSocket, wSocket, error = select.select([x[self.group] for x in self.conArray],
                                                [x[self.group] for x in self.conArray],
                                                [x[self.group] for x in self.conArray], 0.2)
        for rChSocket in rSocket:
            while not recvbuf.endswith(b"\x00"):  # [-1] doesn't work on empty things... and recvbuf is empty.
                recvbuf += rChSocket.recv(1024)  # need the WHOLE message ;D
            if len(recvbuf) > 0:
                dataManager.manage(self, self.group, recvbuf)
                recvbuf = b""
        for wChSocket in wSocket:
            t = threading.Thread(self.pingTimer(wChSocket))  # here's what I need to be run every 20 seconds.
            t.start()
    for x in self.conArray:
        x[self.group].close()
and here's the pingTimer function:
def pingTimer(self, wChSocket):
    time.sleep(20)
    print(time.strftime("%I:%M:%S %p]") + "Ping test!")  # I don't want to mini-DDoS, testing first.
    #wChSocket.send(b"\r\n\x00")
Thanks :D
This:
t = threading.Thread(self.pingTimer(wChSocket))
Does not do what you expect. It calls self.pingTimer in the same thread and passes the return value to threading.Thread. That's not what you want. You probably want this:
t = threading.Thread(target=self.pingTimer, args=(wChSocket,))
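To get the every-20-seconds behavior without spawning a new thread on each pass through the select loop, one option (a sketch; ping_loop is a made-up helper, and wChSocket comes from the question) is a single long-lived daemon thread:

import threading
import time

def ping_loop(sock, interval=20):
    # hypothetical helper: sends the keepalive forever from its own thread
    while True:
        time.sleep(interval)  # only this thread sleeps, not the whole program
        sock.send(b"\r\n\x00")

# daemon=True so the thread dies when the program exits
t = threading.Thread(target=ping_loop, args=(wChSocket,), daemon=True)
t.start()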

The thread hangs using FTP LIST with Python

I'm using ftplib to connect to an FTP server and get a file list from it.
The problem I have is that the connection hangs from time to time and I don't know why. I'm running the Python script as a daemon, using threads.
See what I mean:
def main():
    signal.signal(signal.SIGINT, signal_handler)
    app.db = MySQLWrapper()
    try:
        app.opener = FTP_Opener()
        mainloop = MainLoop()
        while not app.terminate:
            # suspend main thread until the queue terminates;
            # this lets the queue restart automatically in case of unexpected shutdown
            mainloop.join(10)
            while (not app.terminate) and (not mainloop.isAlive()):
                time.sleep(script_timeout)
                print time.ctime(), "main: trying to restart the queue"
                try:
                    mainloop = MainLoop()
                except Exception:
                    time.sleep(60)
    finally:
        app.db.close()
        app.db = None
        app.opener = None
        mainloop = None
        try:
            os.unlink(PIDFILE)
        except:
            pass
        # give other threads time to terminate
        time.sleep(1)
        print time.ctime(), "main: main thread terminated"
MainLoop() has some functions to connect to the FTP server, download specific files, and disconnect from it.
Here's how I get the file's list:
file_list = app.opener.load_list()
And here's what the FTP_Opener.load_list() function looks like:
def load_list(self):
    attempts = 0
    while attempts <= ftp_config.load_max_attempts:
        attempts += 1
        filelist = []
        try:
            self._connect()
            self._chdir()
            # retrieve file list to 'filelist' var
            self.FTP.retrlines('LIST', lambda s: filelist.append(s))
            filelist = self._filter_filelist(self._parse_filelist(filelist))
            return filelist
        except Exception:
            print sys.exc_info()
            self._disconnect()
            sleep(0.1)
    print time.ctime(), "FTP Opener: can't load file list"
    return []
Why does the FTP connection sometimes hang, and how can I monitor this? If it happens, I would like to terminate the thread somehow and start a new one.
Thanks
If you are building for robustness, I would highly recommend looking into an event-driven approach. One such framework with FTP support is Twisted (API).
The advantage is that you don't block the thread while waiting for I/O, and you can create simple timer functions to monitor your connections if you prefer. It also scales a lot better. Event-driven patterns are slightly more complicated to code with, so if this is just a simple script it may or may not be worth the extra effort, but since you are writing a daemon, it might be worth looking into.
Here is an example of an FTP client: ftpclient.py
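Short of switching frameworks, a lighter mitigation (my addition, using only the standard library) is to pass a timeout to ftplib, so a hung connection raises an exception that the existing retry loop can catch instead of blocking the thread forever. The host and credentials below are placeholders:

import ftplib

filelist = []
try:
    # timeout caps every socket operation (connect, LIST, ...) at 30 seconds
    ftp = ftplib.FTP("ftp.example.com", timeout=30)
    ftp.login("user", "password")
    ftp.retrlines("LIST", filelist.append)
    ftp.quit()
except ftplib.all_errors as e:
    # a hung LIST now surfaces as socket.timeout (a subclass of OSError)
    print("FTP failed:", e)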
