I'm having issues with a variable in Python that I'm declaring as "global."
I set the variable at the beginning of my script, modify it inside a function, and then print its value at the end of the script, but the value assigned inside the function does not seem to carry out of the function.
Script (a simple port scanner that scans the first 1000 ports of a PC). The reason I'm setting the script up with processes is that I'm scanning multiple port sections at the same time; I have not included all of the functions, as it is a pretty long script.
#!/usr/bin/env python
import socket
import subprocess
import sys
from datetime import datetime
from multiprocessing import Process
import random
import datetime

# Clear the screen
#subprocess.call('clear', shell=True)

# Ask for input
#remoteServer = raw_input("Enter a remote host to scan: ")
remoteServer = sys.argv[1]
remoteServerIP = socket.gethostbyname(remoteServer)

global openports
openports = []

def ports1():
    global openports
    for port in random.sample(range(1,1000), 999):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(.01)
        result = sock.connect_ex((remoteServerIP, port))
        if result == 0:
            openports.append(format(port))
        sock.close()

if __name__ == '__main__':
    p1 = Process(target = ports1)
    p1.start()
    p1.join()
    print openports
I know the variable is getting set, because if I print openports from inside the function it shows the correct data; however, when I print it after calling the function in the __name__ == '__main__' section, it comes back as an empty list.
This happens because you are using a separate process to query the ports. The openports variable in that process is updated, not the one in the parent process!
There are multiple possible solutions.
You can just call the ports1() function in the original process, since you're just waiting for it anyway.
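A minimal sketch of that first option, keeping the question's Python 2 print (since ports1() then runs in the parent process, the parent's openports is the one that gets filled):

if __name__ == '__main__':
    ports1()          # runs in this process, so this openports is the one updated
    print openports   # now shows the open ports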
If you want to parallelize the scanner, you could use a multiprocessing.Pool to do the port scanning. The function called in the pool could return a port number for an open port, or None for a closed port. For example:
from multiprocessing import Pool
import random
import socket
import sys

remoteServer = sys.argv[1]
remoteServerIP = socket.gethostbyname(remoteServer)
ports = random.sample(range(1,1000), 999)

def testport(num):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(.01)
    result = sock.connect_ex((remoteServerIP, num))
    sock.close()
    if result == 0:
        return num
    return None

if __name__ == '__main__':  # guard so spawned workers don't re-run this part
    p = Pool()
    openports = p.map(testport, ports)
    openports = [prt for prt in openports if prt is not None]
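If you would rather keep the explicit Process from the question, another option (not covered in the original answer) is a managed list from multiprocessing.Manager, whose appends are proxied back to the parent. A rough sketch; the append of 80 is just a placeholder for the question's scanning loop:

from multiprocessing import Process, Manager

def ports1(openports):
    # ... same scanning loop as in the question, appending to the proxy list ...
    openports.append(80)  # placeholder: stands in for a detected open port

if __name__ == '__main__':
    manager = Manager()
    openports = manager.list()   # proxy list shared between parent and child
    p1 = Process(target=ports1, args=(openports,))
    p1.start()
    p1.join()
    print(list(openports))       # the parent now sees the appended ports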
Try removing the global keyword from the first place you define it. I believe you can just define it as a list there, and then use the global keyword inside the function before you change it. Does that then give you the correct result when printing it out?
Related
I'm stuck with a threading problem here. I need threads to access a global variable.
I've read a previous answer to a similar question and I understood the "power" of the global keyword in allowing functions and threads to access global variables.
I'm able to make the following code work and it is pretty straightforward to me:
# WORKING CODE !!!
from threading import Thread
import sys, time

a = ""  # global variable

def thread1(threadname):
    global a
    while True:
        a *= 2
        time.sleep(2)

def thread2(threadname):
    global a
    while True:
        a += 1
        time.sleep(1)

if __name__ == "__main__":
    thread1 = Thread(target=thread1, args=("Thread-1",))
    thread2 = Thread(target=thread2, args=("Thread-2",))
    a = 23
    thread1.start()
    thread2.start()
    while True:
        print(a)
Now I would like to have an OSC driven function to modify the global variable a.
I'm using the python-osc module and I'm making the OSC server running on its own thread.
As before I have declared a as a global variable inside the mapped function associated with the "/learn" OSC method.
Strangely, to my understanding, the following code does not behave the same way as the previous one.
edited 2018-10-18, 16:14: "a" is not increasing at all, and what I see is
a: 1
printed continuously, as if there were two different "a" values: one increasing inside the OSC thread, distinct from the global "a" of the main one.
What am I doing wrong?
import threading
from time import sleep
from pythonosc import osc_server, dispatcher

OSCaddress = "192.168.1.68"
OSCport = 13000

a = ""

# OSC functions
def menageLearnButton(unused_addr, args, value):
    global a
    if value == 1:
        a += 1
    else:
        a += 3

if __name__ == "__main__":
    # OSC dispatcher to respond to incoming OSC messages
    dispatcher = dispatcher.Dispatcher()
    dispatcher.map("/learn", menageLearnButton, "learning")
    a = 1
    # better to run the OSC server on its own thread
    # in order not to block the program here
    OSCserver = osc_server.ForkingOSCUDPServer((OSCaddress, OSCport), dispatcher)
    OSCserver_thread = threading.Thread(target=OSCserver.serve_forever)
    OSCserver_thread.start()
    while True:
        print("a: {}".format(a))
        sleep(1)
Thank you very much for your support.
I think what is going on is that ForkingOSCUDPServer creates a new process for each OSC request, so each request modifies a copy of 'a' in a child process and the parent's 'a' is never updated. If I switch your code to use ThreadingOSCUDPServer, it seems to have the desired behavior.
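The change is a one-line swap in the server setup (assuming the rest of the script stays as posted):

# threads share the main process's globals, so the main loop sees changes to "a"
OSCserver = osc_server.ThreadingOSCUDPServer((OSCaddress, OSCport), dispatcher)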
I have a script that reads data from a serial port: an infinite loop continuously appends the data to a global variable. I also schedule a function that runs every X seconds to post to the database, and this function uses the same global variable.
Here's a small example I created to show my situation:
import serial
import schedule
import threading

shared_var = []

def save_to_db():
    print(threading.current_thread())
    global shared_var
    for l in shared_var:
        print(l)
    shared_var.clear()

def run_threaded(job_func):
    job_thread = threading.Thread(target=job_func)
    job_thread.start()

ser = serial.Serial()        # initialize the serial port
ser.baudrate = 115200        # set the baud rate : default 115200
ser.port = "/dev/ttyUSB0"    # set the port to use
ser.timeout = 30
ser.write_timeout = None

if not ser.is_open:
    ser.open()               # open port

ser.write(b'scan=01\r\n')    # stop scan if already started

schedule.every(5).seconds.do(run_threaded, save_to_db)

while 1:
    schedule.run_pending()
    line = ser.readline()
    shared_var.append(line)
    print(threading.current_thread())
Can this code cause a problem? More specifically: what happens if the main thread (the one that reads from the serial port and writes to shared_var) writes to the shared variable at the same moment the other thread reads from it? Is it a problem for the two threads to access the same global variable at the same time, and if so, should I use a mutex mechanism for that?
Yes, you can certainly have a problem if two threads modify the same variable at the same time.
To overcome this, use threading.Lock() (this is the threading module's mutex mechanism).
lock = threading.Lock()

lock.acquire()
try:
    yourVariable += 1
finally:
    lock.release()
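Applied to your example, a minimal sketch would guard both the append in the main loop and the drain in save_to_db with the same lock (the with statement acquires and releases it for you):

lock = threading.Lock()

def save_to_db():
    global shared_var
    with lock:                          # nothing can append while we swap
        pending, shared_var = shared_var, []
    for l in pending:
        print(l)

# and in the reading loop:
#     with lock:
#         shared_var.append(line)

Swapping the list out under the lock and printing afterwards keeps the critical section short, so the serial-reading loop is blocked as briefly as possible.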
I hope this helps.
If I have a python script running (with full Tkinter GUI and everything) and I want to pass the live data it is gathering (stored internally in arrays and such) to another python script, what would be the best way of doing that?
I cannot simply import script A into script B as it will create a new instance of script A, rather than accessing any variables in the already running script A.
The only way I can think of doing it is by having script A write to a file, and then having script B get the data from the file. This is less than ideal, however, as something bad might happen if script B tries to read a file that script A is already writing to. Also, I am looking for much faster communication between the two programs.
EDIT:
Here are the examples as requested. I am aware why this doesn't work, but it is the basic premise of what needs to be achieved. My source code is very long and unfortunately confidential, so it is not going to help here. In summary, script A is running Tkinter and gathering data, while script B is views.py as a part of Django, but I'm hoping this can be achieved as a part of Python.
Script A
import time

i = 0

def return_data():
    return i

if __name__ == "__main__":
    while True:
        i = i + 1
        print i
        time.sleep(.01)
Script B
import time
from scriptA import return_data

if __name__ == '__main__':
    while True:
        print return_data()  # from script A
        time.sleep(1)
You can use the multiprocessing module to implement a Pipe between the two modules. Then you can start one of the modules as a Process and use the Pipe to communicate with it. The best part about using pipes is that you can also pass Python objects like dicts and lists through them.
Ex:
mp2.py:
from multiprocessing import Process, Queue, Pipe
from mp1 import f

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # prints "Hello"
mp1.py:
from multiprocessing import Process, Pipe

def f(child_conn):
    msg = "Hello"
    child_conn.send(msg)
    child_conn.close()
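Running python mp2.py then starts f from mp1.py in a child process and prints "Hello" in the parent.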
If you want to read and modify shared data between two scripts that run separately, a good solution is to take advantage of the Python multiprocessing module and use a Pipe() or a Queue() (see differences here). This way, you keep the scripts in sync and avoid problems regarding concurrency and global variables (like what happens when both scripts want to modify a variable at the same time).
As Akshay Apte said in his answer, the best part about using pipes/queues, is that you can pass python objects through them.
Also, there are methods to avoid blocking while waiting for data if none has been passed yet (queue.empty() and pipeConn.poll()).
See an example using Queue() below:
# main.py
from multiprocessing import Process, Queue
from stage1 import Stage1
from stage2 import Stage2

s1 = Stage1()
s2 = Stage2()

# S1 to S2 communication
queueS1 = Queue()  # s1.stage1() writes to queueS1
# S2 to S1 communication
queueS2 = Queue()  # s2.stage2() writes to queueS2

# start s2 as another process
s2 = Process(target=s2.stage2, args=(queueS1, queueS2))
s2.daemon = True
s2.start()  # launch the stage2 process

s1.stage1(queueS1, queueS2)  # start sending stuff from s1 to s2
s2.join()  # wait till s2 daemon finishes
# stage1.py
import time
import random

class Stage1:
    def stage1(self, queueS1, queueS2):
        print("stage1")
        lala = []
        lis = [1, 2, 3, 4, 5]
        for i in range(len(lis)):
            # to avoid unnecessary waiting
            if not queueS2.empty():
                msg = queueS2.get()  # get msg from s2
                print("! ! ! stage1 RECEIVED from s2:", msg)
                lala = [6, 7, 8]  # now that a msg was received, further msgs will be different
            time.sleep(1)  # work
            random.shuffle(lis)
            queueS1.put(lis + lala)
        queueS1.put('s1 is DONE')
# stage2.py
import time

class Stage2:
    def stage2(self, queueS1, queueS2):
        print("stage2")
        while True:
            msg = queueS1.get()  # wait till there is a msg from s1
            print("- - - stage2 RECEIVED from s1:", msg)
            if msg == 's1 is DONE':  # must match the sent string exactly
                break  # ends loop
            time.sleep(1)  # work
            queueS2.put("update lists")
EDIT: I just found that you can use queue.get(False) to avoid blocking when receiving data. This way there's no need to check first whether the queue is empty. This is not possible if you use pipes.
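A small sketch of that non-blocking call; note that multiprocessing.Queue signals an empty queue by raising the Empty exception from the standard queue module:

from multiprocessing import Queue
from queue import Empty  # multiprocessing.Queue raises queue.Empty

q = Queue()
try:
    msg = q.get(False)  # same as q.get_nowait(): returns immediately
except Empty:
    msg = None          # nothing waiting, carry on without blocking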
You could use the pickle module to pass data between two Python programs.
import pickle

def storeData():
    # initializing data to be stored in db
    employee1 = {'key': 'Engineer', 'name': 'Harrison',
                 'age': 21, 'pay': 40000}
    employee2 = {'key': 'LeadDeveloper', 'name': 'Jack',
                 'age': 50, 'pay': 50000}
    # database
    db = {}
    db['employee1'] = employee1
    db['employee2'] = employee2
    # it's important to use binary mode
    dbfile = open('examplePickle', 'ab')
    # source, destination
    pickle.dump(db, dbfile)
    dbfile.close()

def loadData():
    # for reading, binary mode is also important
    dbfile = open('examplePickle', 'rb')
    db = pickle.load(dbfile)
    for keys in db:
        print(keys, '=>', db[keys])
    dbfile.close()
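Note that opening the file with 'ab' appends a new pickled record every time storeData() runs, while pickle.load() reads only the first one; if each save should replace the previous, open the file with 'wb' instead:

    dbfile = open('examplePickle', 'wb')  # overwrite rather than append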
You can pass data to and from two running scripts over a TCP socket using ZeroMQ (https://zeromq.org/languages/python/); install the required module with pip install zmq.
This pattern is called client-server communication. The server waits for the client to send a request, and the client will not run if the server is not running. It also lets you send a request from one device (the client) to another (the server), as long as both are on the same network: the server binds to all interfaces (marked with *), and you point the client at the server's actual IP instead of localhost. (To find the IP, check your device's network settings; note it may differ from what Google reports as your IP, e.g. if you are behind IPv6 or DDoS protection.)
A question for the OP: does script B have to be always running, or could script B be imported as a module into script A? If the latter, look up how to write Python modules.
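The answer doesn't include code, so here is a minimal REQ/REP sketch with pyzmq, under the assumption that script A acts as the server holding the data and script B requests it (port 5555 and the payload are placeholders):

# server.py (script A): replies to each request with its current data
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")   # clients on other devices connect to this machine's IP

data = [1, 2, 3]              # stands in for the live data being gathered
while True:
    socket.recv_string()      # wait for a request
    socket.send_json(data)    # reply with the data as JSON

# client.py (script B): asks the server for its data
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
socket.send_string("get")
print(socket.recv_json())     # e.g. [1, 2, 3]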
I solved the same problem using the Shared Memory Dict library, a very simple dict implementation built on multiprocessing.shared_memory.
Source1.py
from shared_memory_dict import SharedMemoryDict
from time import sleep

smd_config = SharedMemoryDict(name='config', size=1024)

if __name__ == "__main__":
    smd_config["status"] = True
    while True:
        smd_config["status"] = not smd_config["status"]
        sleep(1)
Source2.py
from shared_memory_dict import SharedMemoryDict
from time import sleep

smd_config = SharedMemoryDict(name='config', size=1024)

if __name__ == "__main__":
    while True:
        print(smd_config["status"])
        sleep(1)
Example: A simple program that prints the value of a list every 10 seconds
import argparse
import time
import sys

myList = []

def parseArguments():
    parser = argparse.ArgumentParser(description="example")
    parser.add_argument('-a', '--addElement', help='adds an element to the list')
    args = parser.parse_args()
    if args.addElement:
        myList.append(args.addElement)

def main():
    parseArguments()
    while(True):
        print(myList)
        time.sleep(10)
The problem is that the program only reads the arguments passed at the start, I want it to read arguments passed at any time while it is running.
I want to run the program in the background like a service, and pass arguments to the program every once in a while.
I understand that what you are asking for looks like a service (or daemon process) able to accept asynchronous commands.
External interface:
prog foo
=> ok repeatedly prints ['foo']
later:
prog bar
=> second instance exits and first instance repeatedly prints ['foo', 'bar']
Internal design
That's far from simple! You need to set up an IPC mechanism to allow the second instance to communicate with the first one, with non-blocking IO (or multithreading) in the first instance. Under Unix, you could use os.mkfifo, but if you want a portable solution, you will have to use IP sockets on localhost.
Structure in high-level pseudocode

get argument via argparse
bind to a fixed port on localhost, in UDP protocol
if success:
    # ok, it is the first prog
    initialize list from argument
    loop:
        get command from UDP socket, with timeout = 10s
        if cmd is add param:
            add parameter to list
        elif cmd is exit:  # not asked in question but should exist
            exit
        print list
else:
    # another prog has taken the socket, pass it the arg
    send the arg to the UDP port with proper protocol
Caveats on this simple design: there is a race condition if a prog already waiting on the socket exits between the first attempt to bind and the send. To deal with that, you should use the TCP protocol, with a select with timeout on the listening socket, and a graceful shutdown to ensure that the message was received on the other side. In case of an error, you retry (up to a maximum number of times), because the first server could have exited in the meantime.
Here is an implementation example:
import socket
import select
import argparse
import time
import sys

TIMEOUT = 10
IFACE = '127.0.0.1'
PORT = 4000
DEBUG = False

myList = []
old = ""

def parseArguments():
    global DEBUG  # DEBUG is module-level, so declare it global before assigning
    parser = argparse.ArgumentParser(description="example")
    parser.add_argument('-a', '--addElement',
                        help='adds an element to the list')
    parser.add_argument('-q', '--quit', action='store_true',
                        help='closes main service')
    parser.add_argument('-d', '--debug', action='store_true',
                        help='display debug information')
    args = parser.parse_args()
    if args.quit:
        senddata("QUIT\n")
        sys.exit(0)
    if args.debug:
        DEBUG = True
    if args.addElement:
        myList.append(args.addElement)

def read(s):
    global old
    data = old
    while True:
        block = s.recv(1024)
        if len(block) == 0:
            return data
        if b'\n' in block:
            block, o = block.split(b'\n', 1)
            old = o.decode()
            data += block.decode()
            return data
        data += block.decode()

def gracefulclose(s, msg):
    s.send(msg.encode())
    s.shutdown(socket.SHUT_WR)
    try:
        read(s)
    finally:
        s.close()

def server(s):
    if DEBUG:
        print("SERVER")
    s.listen(5)
    while True:
        sl = select.select([s], [], [], TIMEOUT)
        if len(sl[0]) > 0:
            s2, peer = s.accept()
            try:
                data = read(s2)
                print(data)
                gracefulclose(s2, "OK")
            finally:
                s2.close()
            if data.startswith("QUIT"):
                return
            elif data.startswith("DATA:"):
                myList.append(data[5:])
        print(myList)

def senddata(data):
    s = socket.socket(socket.AF_INET)
    try:
        s.connect((IFACE, PORT))
        s.send(data.encode())
        data = read(s)
        if data.startswith("OK"):
            return True
    except:
        pass
    finally:
        s.close()
    return False

def client():
    return senddata("DATA:" + myList[0] + "\n")

def main():
    end = False
    MAX = 5
    while not end and MAX > 0:
        s = socket.socket(socket.AF_INET)
        try:
            s.bind((IFACE, PORT))
        except Exception:
            s.close()
            s = None
        if s:
            try:
                server(s)
            finally:
                s.close()
            return
        else:
            if DEBUG:
                print("CLIENT", " ", 6 - MAX)
            end = client()
            MAX -= 1
            time.sleep(1)

if __name__ == "__main__":
    parseArguments()
    main()
import argparse
import time
import sys
import select

myList = []

def parseArguments():
    parser = argparse.ArgumentParser(description="example")
    parser.add_argument('-a', '--addElement', help='adds an element to the list')
    args = parser.parse_args()
    if args.addElement:
        myList.append(args.addElement)

def main():
    parseArguments()
    while(True):
        # check stdin for new lines without blocking
        while select.select([sys.stdin], [], [], 0)[0]:
            myList.append(sys.stdin.readline().strip())
        print(myList)
        time.sleep(10)
If you want to pass more arguments during execution, you must read them from stdin. Using the select module, you can check whether there is a new line available on stdin and then append it to myList.
Basically what you're asking is how to do inter-process communication (IPC).
Why do I say that? Well, ask yourself: how would you like to pass these arguments to your background service? By hand? I don't think so (that way you'd have a simple interactive program which should just wait for user input). You probably want some other script/program which sends these arguments on demand via some kind of command.
Generally there are several ways to make two or more programs communicate, the most popular being:
Shared file - you could simply check contents of a file on your disk. Advantage of this solution is that you could probably edit this file with your favourite text editor, without the need of writing a client application.
Pipes - one program reads its input which is the other program's output. You should simply read sys.stdin.
# receiver
def read_input():
    for l in sys.stdin:
        yield l
Sockets - a data stream sent over a network interface (but it can be sent locally on the same machine). Python docs have very nice introduction to sockets programming.
Shared memory - your programs read/write the same memory block. In Python you can use the mmap module to achieve this; see the sketch below.
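The answer gives no mmap example, so here is a minimal sketch of the shared-memory option, assuming both programs agree on a file name and size (shared.dat and 1024 bytes are placeholders):

import mmap

# writer side: map a file and put a message at the start
with open("shared.dat", "r+b") as f:  # file must already exist and be at least 1024 bytes
    mm = mmap.mmap(f.fileno(), 1024)
    mm[0:5] = b"hello"                # any process mapping this file sees the change
    mm.close()

# reader side: map the same file and read the message back
with open("shared.dat", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 1024)
    print(mm[0:5])                    # b'hello'
    mm.close()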
Whichever way of communicating between your processes you choose, you should establish some kind of interface between them. It can be a very simple text-based interface like this one:
# command syntax
<command> SPACE <parameter> NEWLINE
SPACE := 0x20 # space character
NEWLINE := 0x0A # '\n' character
# a command adding element to receiver's list
ADD SPACE <element> NEWLINE
# a command removing element from receiver's list:
REMOVE SPACE <element> NEWLINE
# examples:
ADD first element\n
REMOVE first element\n
So, for example, if you send a message over a socket (which I recommend), your receiver (server) should read the buffer until a newline character, check whether the first word is "ADD", and then add the remaining characters (minus the newline) to your list. Of course, you should be prepared for some kind of "attacks": for instance, specify that your messages cannot be longer than, say, 4096 bytes. That way you can discard the current buffer once it reaches its limit, so you won't allocate memory indefinitely while waiting for a newline character. That's one very important rule: don't trust user input.
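As a rough sketch of that parsing step (the MAX_LINE cap and handle_line helper are illustrative, not part of the protocol above):

MAX_LINE = 4096  # don't trust user input: discard anything longer

def handle_line(line, items):
    if len(line) > MAX_LINE:
        return  # oversized message, drop it
    command, _, param = line.rstrip("\n").partition(" ")
    if command == "ADD":
        items.append(param)
    elif command == "REMOVE" and param in items:
        items.remove(param)

# handle_line("ADD first element\n", myList) appends "first element"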
Good luck! :)
I've been trying to get my head around multiprocessing. The problem is that all the examples I've come across don't seem to fit my scenario. I'd like to multiprocess or thread work that involves sharing a list passed as an argument; of course I don't want an item from said list being worked on twice, so the work needs to be divided out across the new threads/processes.
Any advice on the approach I should be looking at would be appreciated.
I am aware my code below is not correct by any means; it is only to aid in visualising what I am attempting to explain.
Pseudocode
def work_do(ip_list):
    for ip in ip_list:
        ping -c 4 ip

def mp_handler(ip_range):
    p = multiprocessing.Pool(4)
    p.map(work_do, args=(ip_range))

ip_list = [192.168.1.1-192.168.1.254]
mp_handler(ip_list)
EDITED:
Some Working Code
import multiprocessing
import subprocess

def job(ip_range):
    p = subprocess.check_output(["ping", "-c", "4", ip])
    print p

def mp_handler(ip_range):
    p = multiprocessing.Pool(2)
    p.map(job, ip_list)

ip_list = ("192.168.1.74", "192.168.1.254")

for ip in ip_list:
    mp_handler(ip)
If you run the above code, you'll notice both IPs are pinged twice. How do I manage the processes so that each only works on unique data from the list?
What you are currently doing should pose no problem, but if you want to manually create the processes and then join them later on:
import subprocess
import multiprocessing as mp

# Creating our target function here
def do_work(ip):
    # dummy function: ping the given address
    p = subprocess.check_output(["ping", "-c", "4", ip])
    print(p)

# Your ip list
ip_list = ['8.8.8.8', '8.8.4.4']

procs = []  # Will contain references to our processes

for ip in ip_list:
    # Creating a new process
    p = mp.Process(target=do_work, args=(ip,))
    # Appending to procs
    procs.append(p)
    # Starting the process
    p.start()

# Waiting for all processes to finish
for p in procs:
    p.join()
Pinging multiple IP addresses concurrently is easy using multiprocessing:
#!/usr/bin/env python
from multiprocessing.pool import ThreadPool  # use threads
from subprocess import check_output

def ping(ip, timeout=10):
    cmd = "ping -c4 -n -w {timeout} {ip}".format(**vars())
    try:
        result = check_output(cmd.split())
    except Exception as e:
        return ip, None, str(e)
    else:
        return ip, result, None

pool = ThreadPool(100)  # no more than 100 pings at any single time
for ip, result, error in pool.imap_unordered(ping, ip_list):
    if error is None:  # no error
        print(ip)  # print ips that have returned 4 packets in timeout seconds
Note: I've used ThreadPool here as a convenient way to limit the number of concurrent pings. If you want to do all pings at once then you need neither the threading nor the multiprocessing modules, because each ping is already in its own process. See Multiple ping script in Python.