I made a script that accepts an IP address as input, checks the ARP tables across all of our layer-three networking devices for a match, and returns the MAC address if one is found. It works great, but I've seen a bunch of scripts here, on GitHub, and on Reddit that can query multiple devices at once, significantly cutting the run time. I've tried to adapt one into an updated script, with limited success: it executes the commands perfectly on some device types (IOS/XE) but not on others (ASA/HP ProCurve). Specifically, the HP devices seem to get stuck running the terminal width command that Netmiko customarily runs, while the ASA devices appear to run the arp command but then hit a TextFSM parsing issue with the output. Even so, I figure it's something I did, because everything works fine in my original script, which uses the same block of Netmiko code. Here is the first function:
    def show_version(ip_add):
        '''
        Use Netmiko to execute show arp.
        Return the result as a dictionary.
        '''
        global address
        global addressFound
        ios_device = {
            'device_type': 'cisco_ios',
            'ip': ip_add,  # function argument
            'username': 'redadmin',
            'password': password,
            'secret': password,
        }
        # more dictionaries for each device type follow
        output_dict = {}
        for ip_add in ios_device_list:
            if addressFound == True:
                break
            remote_conn = ConnectHandler(**ios_device)
            remote_conn.enable()
            hostname = remote_conn.find_prompt()[:-1]
            output = remote_conn.send_command("show ip arp", use_textfsm=True)
            remote_conn.disconnect()
            output_dict[hostname] = pd.DataFrame(output)
            if address in output_dict[hostname].values:
                print('A result was found on ' + hostname + ' (' + ip_add + '):')
                print(output_dict[hostname].loc[output_dict[hostname]['address'] == address, 'mac'].item())
                addressFound = True
                break
        # more if statements for each device type follow
Here is the main function:
    global address
    address = input('Enter the address you would like to search for: ')
    global addressFound
    '''
    Use processes and Netmiko to connect to each of the devices. Execute
    'show version' on each device. Use concurrent futures built-in queue
    to pass the output back to the parent process. Record the amount of
    time required to do this.
    '''
    start_time = datetime.now()
    # Start a ThreadPool (or ProcessPool if you change Thread to Process)
    # Using 20 workers (threads/processes) simultaneously
    with cf.ThreadPoolExecutor(max_workers=20) as executor:
        # Start the Netmiko operation and mark each future with its device dict
        future_to_ios_device = {
            executor.submit(show_version, ip_add): ip_add
            for ip_add in ios_device_list
        }
        # more dictionaries for each device type follow
        future_to_devices = [future_to_ios_device, future_to_asa_device,
                             future_to_hp_device, future_to_xe_device]
        # Do something with the results as they complete. Could be a print,
        # a database write, or write to a CSV to store for later use
        for futures in future_to_devices:
            for future in cf.as_completed(futures):
                device = futures[future]
                try:
                    data = future.result()
                except Exception as exc:
                    print("{} generated an exception: {}".format(device, exc))
The biggest difference I can see is that my script uses a list of IP addresses, whereas the original (and many others I've seen) uses a list of devices. I'm pretty new to Python, so any assistance is greatly appreciated.
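For what it's worth, one plausible cause is that every platform is being connected with the same 'cisco_ios' dict; Netmiko's session setup (including the terminal width command) is driven by device_type, so the HP and ASA boxes need their own types. Below is a minimal sketch of the pattern the multi-device scripts use, not a drop-in fix: build one Netmiko dict per device with the correct device_type, and have the worker handle exactly one device so the executor supplies all of the parallelism. It assumes the parsed TextFSM entries carry the same 'address'/'mac' keys as your original script, and the ARP command itself may need to vary per platform:

    import concurrent.futures as cf
    from netmiko import ConnectHandler

    def arp_lookup(device, address):
        # Search a single device's ARP table for `address`.
        remote_conn = ConnectHandler(**device)
        remote_conn.enable()
        hostname = remote_conn.find_prompt()[:-1]
        # "show ip arp" fits IOS/XE; other platforms may need a different command
        output = remote_conn.send_command("show ip arp", use_textfsm=True)
        remote_conn.disconnect()
        for entry in output:  # with use_textfsm, output is a list of dicts
            if entry.get('address') == address:
                return hostname, entry.get('mac')
        return hostname, None

    def search_all(devices, address):
        # `devices` is a list of ready-made Netmiko dicts, each with the
        # correct device_type ('cisco_ios', 'cisco_asa', 'hp_procurve', ...)
        with cf.ThreadPoolExecutor(max_workers=20) as executor:
            futures = {executor.submit(arp_lookup, dev, address): dev['ip']
                       for dev in devices}
            for future in cf.as_completed(futures):
                try:
                    hostname, mac = future.result()
                except Exception as exc:
                    print('{} generated an exception: {}'.format(futures[future], exc))
                    continue
                if mac is not None:
                    print('A result was found on {} ({}): {}'.format(hostname, futures[future], mac))

With this shape there is no loop over ios_device_list inside the worker and no global flag; each future handles one device, and the result is inspected as futures complete.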
I'm setting up a websocket that receives market data for 33 pairs, processes the data, and inserts it into a local MySQL database.
What I've tried so far:

Setting up the websocket works fine; I then processed the data in each on_message call and inserted it directly into the database. The problem was that with 33 pairs the websocket kept stacking up market data in its buffer, and after a few minutes the database was delayed by at least 10 seconds.

Then I tried processing the data through a thread: the on_message function would start a thread that simply puts the market data into an array, like below:
    from threading import Thread

    datas = []

    def add_queue(symbol, t, a, b, r_n):
        global datas
        datas.append([symbol, t, a, b, r_n])

    # inside on_message:
    if json_msg['ev'] == "C":
        symbol = json_msg['p'].replace("/", "-")
        round_number = pairs_dict_new[symbol]
        t = Thread(target=add_queue, args=(symbol, json_msg['t'], json_msg['a'], json_msg['b'], round_number,))
        t.start()
and then another function, running in a looping thread, would pick items up and insert them into the database:
    def add_db():
        global datas
        try:
            # db = mysql.connector.connect(
            #     host="104.168.157.164",
            #     user="bvnwurux_noe_dev",
            #     password="Tickprofile333",
            #     database="bvnwurux_tick_values"
            # )
            while True:
                for x in datas:
                    database.add_db(x[0], x[1], x[2], x[3], x[4])
                    if x in datas:
                        datas.remove(x)
        except KeyboardInterrupt:
            print("program ending..")

    t2 = Thread(target=add_db)
    t2.start()
This still produced a delay, and the threaded approach wasn't actually using much CPU, but rather RAM, and it just made things even worse.
Instead of a websocket with a thread, I tried plain web requests against the API, with one thread per symbol: each thread would loop through a web request and send the result to the database. My issues here were that MySQL connections don't play well with threads (sometimes two threads would issue a request on the same connection at the same time and crash), and even without the buffer it was still delayed by the time needed to process the code. The code took so long to process each answered request that it couldn't keep the delay under 10 seconds.
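One pattern that might help, sketched below with assumed names (database.add_db is the existing helper from the code above; everything else is illustrative): have the websocket callback do nothing but push rows onto a thread-safe queue.Queue, and let a single dedicated thread own the MySQL connection. No connection is ever shared between threads, and the writer drains the queue in batches instead of scanning and remove()-ing from a shared list:

    import queue
    import threading

    rows = queue.Queue()

    def on_tick(symbol, t, a, b, r_n):
        # called from the websocket thread; enqueueing is cheap and thread-safe
        rows.put((symbol, t, a, b, r_n))

    def db_writer():
        # one MySQL connection, created and used by this thread only
        while True:
            batch = [rows.get()]  # block until at least one row arrives
            while not rows.empty() and len(batch) < 500:
                batch.append(rows.get_nowait())
            for symbol, t, a, b, r_n in batch:
                database.add_db(symbol, t, a, b, r_n)  # existing helper
            # with a raw connection, one executemany() per batch would be faster

    threading.Thread(target=db_writer, daemon=True).start()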
Here is a little example of the basic code I used to get the data.
    import json
    import rel
    import websocket

    pairs = {'AUDCAD': 5, 'AUDCHF': 5, 'AUDJPY': 3, 'AUDNZD': 5, 'AUDSGD': 2,
             'AUDUSD': 5, 'CADCHF': 5, 'CADJPY': 3, 'CHFJPY': 3, 'EURAUD': 5,
             'EURCAD': 5, 'EURCHF': 5, 'EURGBP': 5, 'EURJPY': 3, 'EURNZD': 5,
             'EURSGD': 5, 'EURUSD': 5, 'GBPAUD': 5, 'GBPCAD': 5, 'GBPCHF': 5,
             'GBPJPY': 3, 'GBPNZD': 5, 'GBPSGD': 5, 'GBPUSD': 5, 'NZDCAD': 5,
             'NZDCHF': 5, 'NZDJPY': 3, 'NZDUSD': 5, 'USDCAD': 5, 'USDCHF': 5,
             'USDJPY': 3, 'USDSGD': 5, 'SGDJPY': 3}

    def on_open(ws):
        print("Opened connection")
        ws.send('{"action":"auth","params":"<API KEY>"}')  # connecting with secret api key

    def on_message(ws, message):
        print("msg", message)
        json_msg = json.loads(message)[0]
        if json_msg['status'] == "auth_success":  # successfully authenticated
            r = ws.send('{"action":"subscribe","params":"C.*"}')  # subscribing to currencies
            print("should subscribe to", pairs)
        # once the websocket is connected to all the pairs, process the data
        # --> process json_msg

    if __name__ == "__main__":
        # websocket.enableTrace(True)  # just to show all the requests made (debug mode)
        ws = websocket.WebSocketApp("wss://socket.polygon.io/forex",
                                    on_open=on_open,
                                    on_message=on_message)
        ws.run_forever(dispatcher=rel)  # set dispatcher to automatic reconnection
        rel.signal(2, rel.abort)  # Keyboard Interrupt
        rel.dispatch()
I also tried multiprocessing, but this, on the other hand, was crashing my server because it would use 100% CPU, and then requests made to the Apache server would not get through or would take a long time to load. It's really a balancing problem.

I'm using an Ubuntu server with 32 CPUs, based in London, and the Polygon API is based in NYC. I also tried with 4 CPUs in Seattle talking to NYC, but still no luck. Even with 4 pairs and 32 CPUs, it would eventually reach a 10-second delay. I think this is more of a code-structure problem.
I've been working with the example-minimal.py script from https://github.com/toddmedema/echo and need to alter it so that rather than printing the status changes to the terminal, it executes another script.
I'm a rank amateur but eager to learn and even more eager to get this project done.
Thanks in advance for any help you can provide!!
""" fauxmo_minimal.py - Fabricate.IO
This is a demo python file showing what can be done with the debounce_handler.
The handler prints True when you say "Alexa, device on" and False when you say
"Alexa, device off".
If you have two or more Echos, it only handles the one that hears you more clearly.
You can have an Echo per room and not worry about your handlers triggering for
those other rooms.
The IP of the triggering Echo is also passed into the act() function, so you can
do different things based on which Echo triggered the handler.
"""
import fauxmo
import logging
import time
from debounce_handler import debounce_handler
logging.basicConfig(level=logging.DEBUG)
class device_handler(debounce_handler):
"""Publishes the on/off state requested,
and the IP address of the Echo making the request.
"""
TRIGGERS = {"device": 52000}
def act(self, client_address, state, name):
print "State", state, "on ", name, "from client #", client_address
return True
if __name__ == "__main__":
# Startup the fauxmo server
fauxmo.DEBUG = True
p = fauxmo.poller()
u = fauxmo.upnp_broadcast_responder()
u.init_socket()
p.add(u)
# Register the device callback as a fauxmo handler
d = device_handler()
for trig, port in d.TRIGGERS.items():
fauxmo.fauxmo(trig, u, p, None, port, d)
# Loop and poll for incoming Echo requests
logging.debug("Entering fauxmo polling loop")
while True:
try:
# Allow time for a ctrl-c to stop the process
p.poll(100)
time.sleep(0.1)
except Exception, e:
logging.critical("Critical exception: " + str(e))
break
I'm going to try and be helpful by going through that script and explaining what each bit does. This should help you understand what it's doing, and therefore what you need to do to get it running something else:
    import fauxmo
This is a library that allows whatever device is running the script to pretend to be a Belkin WeMo, a device that can be triggered by the Echo.
    import logging
    import time
    from debounce_handler import debounce_handler
This imports some more libraries that the script will need: logging will be used to log events, which is useful for debugging; time will be used to make the script pause so that you can quit it by typing ctrl-c; and the debounce_handler library will be used to keep multiple Echos from reacting to the same voice command (which would cause a software bounce).
    logging.basicConfig(level=logging.DEBUG)
Configures a logger that will allow events to be logged to assist in debugging.
    class device_handler(debounce_handler):
        """Publishes the on/off state requested,
        and the IP address of the Echo making the request.
        """
        TRIGGERS = {"device": 52000}

        def act(self, client_address, state, name):
            print "State", state, "on ", name, "from client #", client_address
            return True
We've created a class called device_handler which contains a dictionary called TRIGGERS and a function called act.
act takes a number of variables as input: self (any data structures in the class, such as our TRIGGERS dictionary), client_address, state, and name. We don't know what these are yet, but the names are quite self-explanatory, so we can guess that client_address is probably the IP address of the Echo, state is the on/off state requested, and name is the device's name. This is the function you're going to want to edit, since it's the final function triggered by the Echo. You can probably just put whatever function you want after the print statement. The act function returns True when called.
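For the question being asked here (running another script instead of printing), a minimal sketch of an edited handler might look like the following; the script paths are hypothetical, so substitute whatever you want to run:

    import subprocess

    class device_handler(debounce_handler):
        """Runs another script instead of printing the state."""
        TRIGGERS = {"device": 52000}

        def act(self, client_address, state, name):
            # state is True for "Alexa, device on" and False for "device off"
            if state:
                subprocess.call(["python", "/home/pi/on_script.py"])   # hypothetical path
            else:
                subprocess.call(["python", "/home/pi/off_script.py"])  # hypothetical path
            return True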
if __name__ == "__main__":
This will execute everything indented below it if you're running the script directly. More detail about that here if you want it.
        # Startup the fauxmo server
        fauxmo.DEBUG = True
        p = fauxmo.poller()
        u = fauxmo.upnp_broadcast_responder()
        u.init_socket()
        p.add(u)
As the comment suggests, this starts the fake WeMo server. We enable debugging, which just prints any debug messages to the command line, create a poller, p, which can process incoming messages, and create a UPnP broadcast responder, u, which can handle UPnP device registration. We then tell u to initialise a socket, setting itself up on the network listening for UPnP devices, and add u to p so that we can respond when a broadcast is received.
        # Register the device callback as a fauxmo handler
        d = device_handler()
        for trig, port in d.TRIGGERS.items():
            fauxmo.fauxmo(trig, u, p, None, port, d)
As the comment says, this sets up an instance of the device_handler class that we made earlier. We then for-loop through the items in our TRIGGERS dictionary in our device handler d and call fauxmo.fauxmo using the information found in the dictionary. If we look at the dictionary definition in the class we can see that there's only one entry, a "device" trigger on port 52000. This does the bulk of the work, making the actual fake WeMo device talk to the Echo. If we look at the fauxmo.fauxmo function we see that, when it receives a suitable trigger, it calls the act function in the device_handler class we defined before.
        # Loop and poll for incoming Echo requests
        logging.debug("Entering fauxmo polling loop")
        while True:
            try:
                # Allow time for a ctrl-c to stop the process
                p.poll(100)
                time.sleep(0.1)
            except Exception, e:
                logging.critical("Critical exception: " + str(e))
                break
And here we enter the fauxmo polling loop. This loops indefinitely, checking whether we've received a message: it polls for messages to see if anything has arrived, waits a little, then polls again. If it can't do that for some reason, the script breaks out of the loop and the error is logged so you can see what went wrong.
Just to clarify: if the fauxmo loop is running, then the script is fine, right?
I think the OP is not getting any connection between the Echo and the fake WeMo device. It can help if you install the WeMo skill first. You may require an original WeMo device initially, though.
I know these are old threads, but this might still help someone.
I am using Python to retrieve data from a Mongo database so I can analyze it. I change the data using a Meteor app and retrieve it in real time with a Python client. This is my code:
    from MeteorClient import MeteorClient

    def call_back_meth():
        print("subscribed")

    client = MeteorClient('ws://localhost:3000/websocket')
    client.connect()
    client.subscribe('tasks', [], call_back_meth)
    a = client.find('tasks')
    print(a)
When I run this script, it only shows me the current data in a, and then the console closes. I want the console to stay open and print the data whenever it changes. I've used while True to keep the script running and watch for changes, but I guess that's not a good solution. Is there another, more optimized solution?
To get realtime feedback you need to subscribe to changes, and then monitor those changes. Here is an example of watching tasks:
    import time

    from MeteorClient import MeteorClient

    def call_back_added(collection, id, fields):
        print('* ADDED {} {}'.format(collection, id))
        for key, value in fields.items():
            print('  - FIELD {} {}'.format(key, value))
        # query the data each time something has been added to
        # a collection to see the data `grow`
        all_lists = client.find('lists', selector={})
        print('Lists: {}'.format(all_lists))
        print('Num lists: {}'.format(len(all_lists)))

    client = MeteorClient('ws://localhost:3000/websocket')
    client.on('added', call_back_added)
    client.connect()
    client.subscribe('tasks')

    # (sort of) hacky way to keep the client alive
    # ctrl + c to kill the script
    while True:
        try:
            time.sleep(1)
        except KeyboardInterrupt:
            break

    client.unsubscribe('tasks')
I'm connecting to an IRC server, and while the program sits waiting for data I'd like it to be able to grab input from the terminal and relay it to the server; essentially, I type JOIN #foobar and the program sends JOIN #foobar. The current code looks like:
    def receive(self):
        while True:
            raw = self.socket.recv(4096).decode()
            raw_split = raw.splitlines()
            if not raw:
                break
            for line in raw_split:
                #if line.find('MODE {0} :'.format(self.config['nick'])) > -1:
                #    placeholder for perform
                data = line.split()
                if data[0] == 'PING':
                    self.send('PONG {0}'.format(data[1]))
                color_print("-> {0}".format(data), 'yellow')
                #self.plugin.run(data)
Any ideas how to do this?
Take a look at the select module. You can use it to wait on multiple file-like objects including a socket and stdin/stdout/stderr.
There's some example code at this site.
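A minimal sketch of that approach, assuming your existing self.socket and self.send (note it is Unix-only; select() does not work on sys.stdin under Windows):

    import select
    import sys

    def receive(self):
        while True:
            # wait until either the IRC socket or the terminal has data
            readable, _, _ = select.select([self.socket, sys.stdin], [], [])
            for source in readable:
                if source is self.socket:
                    raw = self.socket.recv(4096).decode()
                    if not raw:
                        return
                    for line in raw.splitlines():
                        data = line.split()
                        if data and data[0] == 'PING':
                            self.send('PONG {0}'.format(data[1]))
                else:
                    # a line typed at the terminal, e.g. "JOIN #foobar"
                    self.send(sys.stdin.readline().strip())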
I'm trying to write a program that outputs data that can be served over a network with Avahi. The documentation I've looked at seems to say I have to register the service with D-Bus and then connect it to Avahi, but the documentation for doing this is pretty sparse. Does anyone know of good documentation for it? I've been looking at these:
python-dbus:
http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html#exporting-objects
python-avahi:
http://www.amk.ca/diary/2007/04/rough_notes_python_and_dbus.html
I'm really unfamiliar with how avahi works at all, so any pointers would be helpful.
I realise this answer is pretty late, considering your question was asked four years ago. However, it might help others.
The following announces a service using avahi/dbus:
    import avahi
    import dbus
    from time import sleep

    class ServiceAnnouncer:
        def __init__(self, name, service, port, txt):
            bus = dbus.SystemBus()
            server = dbus.Interface(bus.get_object(avahi.DBUS_NAME, avahi.DBUS_PATH_SERVER),
                                    avahi.DBUS_INTERFACE_SERVER)
            group = dbus.Interface(bus.get_object(avahi.DBUS_NAME, server.EntryGroupNew()),
                                   avahi.DBUS_INTERFACE_ENTRY_GROUP)

            self._service_name = name
            index = 1
            while True:
                try:
                    group.AddService(avahi.IF_UNSPEC, avahi.PROTO_INET, 0, self._service_name,
                                     service, '', '', port, avahi.string_array_to_txt_array(txt))
                except dbus.DBusException:  # name collision -> rename
                    index += 1
                    self._service_name = '%s #%s' % (name, str(index))
                else:
                    break

            group.Commit()

        def get_service_name(self):
            return self._service_name

    if __name__ == '__main__':
        announcer = ServiceAnnouncer('Test Service', '_test._tcp', 12345, ['foo=bar', '42=true'])
        print announcer.get_service_name()
        sleep(42)
Using avahi-browse to verify it is indeed published:
    micke@els-mifr-03:~$ avahi-browse -a -v -t -r
    Server version: avahi 0.6.30; Host name: els-mifr-03.local
    E Ifce Prot Name          Type        Domain
    + eth0 IPv4 Test Service  _test._tcp  local
    = eth0 IPv4 Test Service  _test._tcp  local
       hostname = [els-mifr-03.local]
       address = [10.9.0.153]
       port = [12345]
       txt = ["42=true" "foo=bar"]
Avahi is "just" a client implementation of Zeroconf, which is basically a multicast-DNS protocol. You can use Avahi to publish the availability of your "data" through end-points. The actual data must be retrieved through some other means, but you would normally register an end-point that can be "invoked" through a method of your liking.
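For example, here is a sketch in the same Python 2 style as the answer above (the payload line is made up): the Avahi announcement only advertises the service, while the data itself is served by an ordinary listener on the advertised port:

    import SocketServer  # socketserver in Python 3

    class DataHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # whatever data your program outputs goes here
            self.wfile.write('hello from the announced service\n')

    if __name__ == '__main__':
        announcer = ServiceAnnouncer('Test Service', '_test._tcp', 12345, [])
        server = SocketServer.TCPServer(('', 12345), DataHandler)
        server.serve_forever()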