Getting stdin while connected to a server in Python - python

I'm connecting to an IRC server, but while the program sits waiting for data I'd like it to be able to grab input from the terminal and relay it to the server, so that I can type, for example, JOIN #foobar and the program sends JOIN #foobar. The current code looks like:
def receive(self):
    while True:
        raw = self.socket.recv(4096).decode()
        raw_split = raw.splitlines()
        if not raw:
            break
        for line in raw_split:
            #if line.find('MODE {0} :'.format(self.config['nick'])) > -1:
            #    placeholder for perform
            data = line.split()
            if data[0] == 'PING':
                self.send('PONG {0}'.format(data[1]))
            color_print("-> {0}".format(data), 'yellow')
            #self.plugin.run(data)
Any ideas how to do this?

Take a look at the select module. You can use it to wait on multiple file-like objects, including a socket and stdin/stdout/stderr.
There's some example code at this site.
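For example, here is a minimal sketch of how the receive loop could be restructured around select (an illustration only, assuming a blocking, already-connected self.socket and a Unix-style terminal, since select() on sys.stdin does not work on Windows):

import select
import sys

def receive(self):
    while True:
        # Wait until either the IRC socket or the terminal has data ready.
        readable, _, _ = select.select([self.socket, sys.stdin], [], [])
        for source in readable:
            if source is self.socket:
                raw = self.socket.recv(4096).decode()
                if not raw:
                    return
                for line in raw.splitlines():
                    data = line.split()
                    if data and data[0] == 'PING':
                        self.send('PONG {0}'.format(data[1]))
                    color_print("-> {0}".format(data), 'yellow')
            else:
                # A line typed at the terminal, e.g. "JOIN #foobar"; relay it as-is.
                command = sys.stdin.readline().strip()
                if command:
                    self.send(command)

select.select blocks until at least one of the listed objects is readable, so the loop never busy-waits on either the socket or the keyboard.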

Related

Netmiko - Problems Connecting to Multiple Devices at a Time

I made a script that accepts an IP address as input, checks the ARP tables across all of our layer 3 networking devices, searches for a match, and returns the MAC address if one is found. It works great, but I've seen a bunch of scripts here, on GitHub, and on Reddit that can do this on multiple devices at once, significantly cutting the time. I've tried to copy one by making an updated script, with limited success: it seems to execute the commands perfectly on some device types (IOS/XE) and not on others (ASA/HP ProCurve). Specifically, the HP devices seem to get stuck running the terminal width command that Netmiko customarily runs, while the ASA devices appear to run the ARP command but have an issue with TextFSM parsing the output. Even so, I figure it's an issue with something I did, because everything works fine on all of them in my original script, which features the same block of Netmiko code. Here is the first function:
'''
Use Netmiko to execute show arp.
Return the result as a dictionary.
'''
global address
global addressFound
ios_device = {
    'device_type': 'cisco_ios',
    'ip': ip_add,  # function argument
    'username': 'redadmin',
    'password': password,
    'secret': password,
}
# more dictionaries for each device type follow
output_dict = {}
for ip_add in ios_device_list:
    if addressFound == True:
        break
    remote_conn = ConnectHandler(**ios_device)
    remote_conn.enable()
    hostname = remote_conn.find_prompt()[:-1]
    output = remote_conn.send_command("show ip arp", use_textfsm=True)
    remote_conn.disconnect()
    output_dict[hostname] = pd.DataFrame(output)
    if address in output_dict[hostname].values:
        print('A result was found on ' + hostname + ' (' + ip_add + '):')
        print(output_dict[hostname].loc[output_dict[hostname]['address'] == address, 'mac'].item())
        addressFound = True
        break
# more if statements for each device type follow
Here is the main function:
global address
address = input('Enter the address you would like to search for: ')
global addressFound
'''
Use processes and Netmiko to connect to each of the devices. Execute
'show version' on each device. Use concurrent futures built-in queue
to pass the output back to the parent process. Record the amount of
time required to do this.
'''
start_time = datetime.now()
# Start a ThreadPool (or ProcessPool if you change Thread to Process)
# Using 20 workers (threads/processes) simultaneously
with cf.ThreadPoolExecutor(max_workers=20) as executor:
    # Start the Netmiko operation and mark each future with its device dict
    future_to_ios_device = {
        executor.submit(show_version, ip_add): ip_add
        for ip_add in ios_device_list
    }
    # more dictionaries for each device type follow
    future_to_devices = [future_to_ios_device, future_to_asa_device, future_to_hp_device, future_to_xe_device]
    # Do something with the results as they complete. Could be a print,
    # a database write, or write to a CSV to store for later use
    for futures in future_to_devices:
        for future in cf.as_completed(futures):
            device = futures[future]
            try:
                data = future.result()
            except Exception as exc:
                print("{} generated an exception: {}".format(device, exc))
The biggest difference I gather is that my script works from a list of IP addresses, whereas the original (and many others I've seen) works from a list of device dictionaries. I'm pretty new to Python, so any assistance is greatly appreciated.
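For comparison, this is roughly the shape the worker takes in the examples this is modeled on: each call handles a single IP, builds its own device dict, and returns its result, so the executor can submit one future per device. This is only an illustrative sketch based on the code above; `password`, `ConnectHandler`, and the TextFSM parsing are assumed to be set up as in the original script.

def show_arp_single(ip_add):
    '''Connect to one IOS device, run "show ip arp", and return (hostname, parsed output).'''
    ios_device = {
        'device_type': 'cisco_ios',
        'ip': ip_add,
        'username': 'redadmin',
        'password': password,
        'secret': password,
    }
    remote_conn = ConnectHandler(**ios_device)
    remote_conn.enable()
    hostname = remote_conn.find_prompt()[:-1]
    output = remote_conn.send_command("show ip arp", use_textfsm=True)
    remote_conn.disconnect()
    return hostname, output

# In main: executor.submit(show_arp_single, ip_add) for each IP, then check each
# result for the target address as the futures complete.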

Kafka-python producer.send isn't being received in a try-except block, but does send with time.sleep(1)

I'm testing a script that runs binwalk on a file and then sends a Kafka message to let the sender know whether it completed or failed. It looks like this:
if __name__ == "__main__":
    # finds the path of this file
    scriptpath = os.path.dirname(inspect.getfile(inspect.currentframe()))
    print(scriptpath)
    # sets up kafka consumer on the binwalk topic and kafka producer for the bwsignature topic
    consumer = KafkaConsumer('binwalk', bootstrap_servers=['localhost:9092'])
    producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
    # watches the binwalk kafka topic
    for msg in consumer:
        # load the json
        job = json.loads(msg.value)
        # get the filepath of the .bin
        filepath = job["src"]
        print(0)
        try:
            # runs the script
            binwalkthedog(filepath, scriptpath)
            # send a receipt
            producer.send('bwsignature', b'accepted')
        except:
            producer.send('bwsignature', b'failed')
            pass
    producer.close()
    consumer.close()
If I send in a file that doesn't cause any errors in the binwalkthedog function, everything works fine, but if I give it a file that doesn't exist it prints a general error message and moves on to the next input, as it should. For some reason, though, producer.send('bwsignature', b'failed') doesn't actually get sent unless something creates a delay after the binwalkthedog call fails, like time.sleep(1) or a for loop that counts to a million.
Obviously I could keep that in place, but it's really gross and I'm sure there's a better way to do this.
This is the temp script I'm using to send and receive a signal from the binwalkthedog module:
job = {
    'src': '/home/nick/Documents/summer-2021-intern-project/BinwalkModule/bo.bin',
    'id': 1
}
chomp = json.dumps(job).encode('ascii')
receipt = KafkaConsumer('bwsignature', bootstrap_servers=['localhost:9092'])
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
future = producer.send('binwalk', chomp)
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    print("sucks")
    pass
print(record_metadata.topic)
print(record_metadata.partition)
print(record_metadata.offset)
producer.close()
for msg in receipt:
    print(msg.value)
    break
Kafka producers batch many records together to reduce the number of requests made to the server. If you want to force records to send, rather than introducing a blocking sleep call or calling get() on the future, you should use producer.flush().
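Applied to the consumer loop in the question, that would look roughly like this (a sketch; only the send path changes):

try:
    # runs the script
    binwalkthedog(filepath, scriptpath)
    producer.send('bwsignature', b'accepted')
except Exception:
    producer.send('bwsignature', b'failed')
finally:
    # Block until every buffered record has actually been handed to the broker,
    # instead of relying on a sleep to give the background sender thread time.
    producer.flush()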

How to put an iterator into a thread in Python and get its return value, as next() cannot be used in a thread

I am now building two components: a receiver (receiver.py) and a scraper (a separate piece of software). The scraper gets data from a database and sends it to the receiver through a socket. The receiver contains a machine learning model that takes the collected data and returns an order determining what kind of data should be collected next. The socket in the receiver listens until it receives a "terminate" order. To return the collected data and keep the socket open at the same time, I use "yield" to turn the socket loop into a generator, and get data from it with next():
class receiver:
    def __init__(self, orders):
        self.orders = orders  # orders is a list
    def start(self):
        while True:
            client_socket.send(orders)
            collected_data = client_socket.recv(1024)
            yield collected_data
    def add_order(self, new_orders):
        self.orders.append(new_orders)

r = receiver("search field 1")  # ask the scraper to search field 1
r_generator = r.start()
field1_data = next(r_generator)
r.add_order("search field 2")
field2_data = next(r_generator)
The code runs correctly when I run the receiver and the scraper separately. The collected data is sent from the scraper to the receiver through the socket and assigned to variables, and I can add a new order to make the scraper search for new data (the connection between receiver and scraper is closed, but the socket in the receiver is still listening).
Now I need to put them together into one Python file and start the scraper with:
subprocess.check_output('cmd run scaper')
I need to run them simultaneously, hence I use threads:
r_generator = r.start
t1 = Thread(target = next, args=(r_generator)).start()
t2 = Thread(target = subprocess.check_output('cmd run scaper')).start()
But I can no longer call add_order or assign the collected data to a variable, since
field1_data = next(r_generator)
field1_data = next(t1)
are not workable, and
t1 = Thread(target = r_generator).start()
r.add_order("search field 2")
t1.join()
The "search field 2" order would add directly after the thread start, but what I want is get field 1 data first then to decide if doing another search or what field should be searched.

See real-time changes in data using Python Meteor

I am using Python to retrieve data from a Mongo database in order to analyze it.
I change the data through a Meteor app and use a Python client to retrieve it in real time. This is my code:
from MeteorClient import MeteorClient

def call_back_meth():
    print("subscribed")

client = MeteorClient('ws://localhost:3000/websocket')
client.connect()
client.subscribe('tasks', [], call_back_meth)
a = client.find('tasks')
print(a)
When I run this script, it only shows me the current data in 'a' and then the console closes.
I want the console to stay open and print the data whenever it changes.
I have used a while True loop to keep the script running and watch for changes, but I guess that's not a good solution. Is there a better, more optimized one?
To get realtime feedback you need to subscribe to changes, and then monitor those changes. Here is an example of watching tasks:
import time
from MeteorClient import MeteorClient

def call_back_added(collection, id, fields):
    print('* ADDED {} {}'.format(collection, id))
    for key, value in fields.items():
        print('  - FIELD {} {}'.format(key, value))
    # query the data each time something has been added to
    # a collection to see the data `grow`
    all_lists = client.find('lists', selector={})
    print('Lists: {}'.format(all_lists))
    print('Num lists: {}'.format(len(all_lists)))

client = MeteorClient('ws://localhost:3000/websocket')
client.on('added', call_back_added)
client.connect()
client.subscribe('tasks')

# (sort of) hacky way to keep the client alive
# ctrl + c to kill the script
while True:
    try:
        time.sleep(1)
    except KeyboardInterrupt:
        break

client.unsubscribe('tasks')
(Reference) (Docs)

How to unit test modules that work together as subprocesses via pipes?

I'm writing a program that acts as a script manager. It consists of 3 parts:
A client - Receives a script name to run from the server.
Manager - Manages running scripts. Receives them from the client as JSON transferred over a pipe.
The scripts - .py scripts that are found under the addons directory in the project.
The important thing to notice is that all 3 components run simultaneously as processes (because I could have an alarm script running while accepting and executing a play-music script).
Because it consists of 3 separate parts that interact with each other, I don't know how to write proper unit tests for it.
So my questions are:
How can I write good unit tests for this?
Is this a design problem? If so, what am I doing wrong and what should I do to fix it?
Here is most of the code for the above components:
The Client
class MessageReceiver:
    def __init__(self):
        '''
        Connect to the AMQP broker and start listening for messages.
        Creates a Popen object to pass command info to the addon_manager script
        (which is in charge of managing scripts).
        '''
        addon_manager_path = configuration.addon_manager_path()
        addon_manager_path = os.path.join(addon_manager_path, 'addon_manager.py')
        execute = "python " + addon_manager_path
        self.addon_manager = subprocess.Popen(execute, stdin=subprocess.PIPE, shell=True)
        self.component_name = configuration.get_attribute("name")
        if len(sys.argv) > 1:
            host_ip = sys.argv[1]
        else:
            host_ip = 'localhost'
        # Start a connection to the AMQP server
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host=host_ip))
        # Create a channel to the server
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue="example")

        # callback method to be called when data is received
        # It sends the data that is received by the client to the addon_manager
        def data_received(ch, method, properties, body):
            ##TODO: Might want to add some checks. Is body a JSON? etc.
            print("GOT IT")
            self.addon_manager.communicate(body)

        self.channel.basic_consume(data_received, queue='example', no_ack=True)
        self.channel.start_consuming()
The Manager
class AddonManager:
    def __init__(self):
        self.command_analyzer = analyzer.Analyzer(configuration.get_attribute("commands"))
        self.active_addons = {}

    def activate_addon(self, command, json_data):
        child_popen = self.command_analyzer.execute_command(command, json_data)
        self.active_addons[analyzer.intent(json_data)] = child_popen

    def communicate_with_addon(self, command, json_data, json_string):
        child_popen = self.active_addons[analyzer.intent(json_data)]
        # Child process hasn't finished running
        if child_popen.returncode is None:
            # Send data to the child to process if it wants to
            child_popen.stdin.write(json_string)
        else:
            # Process finished running. Can't send it anything. Delete it.
            # (deleting the Popen instance also kills the zombie process)
            del self.active_addons[analyzer.intent(json_data)]
            self.activate_addon(command, json_data)

    def get_input(self):
        """
        Reads a command from stdin, returns the raw string and its JSON form
        """
        json_string = sys.stdin.read()
        json_data = json.loads(json_string)
        print(json_data)
        # return the raw string too so it can be forwarded to an already-running addon
        return json_string, json_data

    def accept_commands(self):
        while True:
            json_string, json_data = self.get_input()
            command = self.command_analyzer.is_command(json_data)  # Check whether the command exists. Return it if it does
            # The command exists
            if command is not None:
                # The addon is not currently active
                if analyzer.intent(json_data) not in self.active_addons:
                    self.activate_addon(command, json_data)
                # The addon is active and so we need to send the data to the subprocess
                else:
                    self.communicate_with_addon(command, json_data, json_string)

manager = AddonManager()
manager.accept_commands()
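One way to start (a sketch, not a full answer): test each piece in isolation by mocking the process boundaries. For example, AddonManager.get_input can be exercised without any real pipe, assuming the manager lives in addon_manager.py, that analyzer and configuration can be patched out, that get_input returns (json_string, json_data) as in the listing above, and that the module-level manager = AddonManager() / manager.accept_commands() lines are moved under an if __name__ == '__main__': guard so importing the module doesn't block:

import io
import json
import unittest
from unittest import mock

# Assumes addon_manager.py can be imported without starting the command loop.
import addon_manager


class GetInputTest(unittest.TestCase):
    @mock.patch('addon_manager.analyzer')
    @mock.patch('addon_manager.configuration')
    def test_get_input_parses_json_from_stdin(self, mock_configuration, mock_analyzer):
        manager = addon_manager.AddonManager()
        payload = {"intent": "alarm", "time": "07:00"}
        # Replace the real stdin (normally the pipe from the client) with an in-memory stream.
        with mock.patch('sys.stdin', io.StringIO(json.dumps(payload))):
            json_string, json_data = manager.get_input()
        self.assertEqual(json_data, payload)
        self.assertEqual(json.loads(json_string), payload)


if __name__ == '__main__':
    unittest.main()

The same idea extends to the other seams: patch subprocess.Popen to check what command the client launches, and patch pika's channel to feed data_received a canned body, so each component's logic can be verified without actually spawning processes or connecting to a broker.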
