Multiple threads sending over one socket simultaneously? - python

I have two Python programs. Program 1 displays videos in a grid with multiple controls on it, and Program 2 performs manipulations on the images and sends them back, depending on which control is pressed in Program 1.
Each video in the grid is running in its own thread, and each video has a thread in Program 2 for sending results back.
I'm running this on the same machine, though, and I was unable to get multiple socket connections working to and from the same address (localhost). If there's a way of doing that - please stop reading and tell me how!
I currently have one socket sitting independent of all of my video threads in Program 1, and in Program 2 I have multiple threads sending data to that one socket as an array with a flag for which video the data is for. The problem is that when I have multiple threads sending data at the same time, it seems to scramble things and stop working. Any tips on how I can achieve this?

Regarding "If there's a way of doing that - please stop reading and tell me how!":
There's a way of doing it. Assuming you are on Linux or using WSL on Windows, you can use the hostname -I command, which will output an IP that looks like 192.168.X.X.
You can use that IP in your Python program by binding your server to that IP instead of localhost or 127.0.0.1.
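For illustration, a minimal server-side sketch of that suggestion; the IP address and port below are placeholders, so substitute whatever hostname -I actually prints:

import socket

SERVER_IP = "192.168.1.50"   # placeholder: the address printed by `hostname -I`
PORT = 5000                  # placeholder: any free port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((SERVER_IP, PORT))
server.listen()

while True:
    # each sender thread in Program 2 can open its own connection to this address
    conn, addr = server.accept()
    print("connected:", addr)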

Related

Telemetry data through python socket, without stopping execution of the program

I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That's working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker: let's say its current orientation, measured wind speed, etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this one. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP fsockopen. Python sends the telemetry data, PHP makes it user friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution while waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run socket listening asynchronously? Like, for example, the pigpio callbacks which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remotely accessing a status file that my script maintains, but that proved extremely involved to set up and prone to errors while the file was being written.
I also tried running a second script. The problem is that it then has no access to the relevant data, or it needs to read the aforementioned status file, which leads to the same problems as above.
Relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code into a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes Python bytecode at a time, so you don't really need to add any further locks around the data if you're just reading the values and don't care whether they're also being updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause the thread to exit if any error occurs)
        pass

socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
Python does provide the asyncio module - which may provide the callbacks you're looking for (though I don't have any experience with this).
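For reference, a minimal sketch of what that asyncio route could look like; the port number and the data variable are placeholders, and the rest of the script would need to be restructured around the event loop (or the loop run in its own thread):

import asyncio

data = "telemetry placeholder"   # replace with your real readings

async def handle_client(reader, writer):
    # send the current telemetry and close, mirroring the accept/send/close loop above
    writer.write(data.encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 5000)
    async with server:
        await server.serve_forever()

asyncio.run(main())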
Other options are to run a Flask server in the Python app, which will handle the sockets for you, and you can just code the endpoints that serve the data. Or think about using an MQTT broker: the current data can be written to that, and other apps can subscribe to updates.

Using Python and pyserial to run python scripts on a pico (RP2040)

My ultimate goal is to control a number of peripheral chips on a PCB from a GUI interface on a PC. To do so, my plan was to incorporate a RP2040 (and memory) on the PCB in order to hold all the python scripts and to program/monitor all the peripheral chips. Then, using a PC to interface with the RP2040, send commands over the serial port to execute specific python files on the pico.
I realize that is a bit confusing, so the attached block diagram should help.
Block diagram
Starting on the left of the block diagram, I have a PC running a tkinter GUI. I am currently running the tkinter GUI in Thonny (eventually I would like it to be an executable, but that is beyond the scope of this post). The GUI has a number of buttons to choose which Python scripts to run. The PC is connected to the PCB through the USB cable. The USB data lines are routed to the RP2040's USB inputs (pins 47, 48). The memory on the PCB holds a number of Python scripts that correspond to the buttons in the GUI. Ideally, pressing a button on the PC would execute the corresponding .py file on the PCB.
What I've got working so far:
My real expertise lies in peripheral chips and PCB design, in this case the front end for a 2-18 GHz transceiver, so bear with me if some of my Python questions seem basic or misinformed. I've written and tested all the .py files in the pico's memory. To test those scripts I used Thonny to connect to my pico and simply ran (F5) the scripts with the peripherals connected to the right GPIOs. I was also able to get tkinter working and create functioning buttons that can execute commands. Using the pyserial module, I am also able to connect to the pico through the USB and write... strings. Not very useful, but a start.
import serial
ser = serial.Serial('COM3', 38400, timeout=1, parity=serial.PARITY_EVEN, rtscts=1)
s = ser.read(100) # read up to one hundred bytes, or as much as is in the buffer
print(ser.name) # check which port was really used
ser.write(b'ToggleLED.py') # write a string
ser.close() # close port
Remaining task: The final task, which I have been failing miserably at for the past 2 days, has been actually trying to execute the .py files located in the pico's memory through the serial port. My inexperienced/naïve notion was to simply send a string with the file's name, which is obviously not correct. Any thoughts on how to execute those .py files using the pyserial module?
BTW, if there is a better solution, please feel free to share! Perhaps the files should be located on the PC and I send one command at a time?
I can't say anything about your serial problems until you clarify just what is running on the pi (see my comment: I'll update this answer when you do), but re 'is there a better way': possibly.
Since the Pi is running a full operating system, there are a few options. You are basically creating a network connection to the Pi. Whilst this can be done over serial (with the Pi, presumably, acting as a fake USB serial device), it can also be done more conventionally over wifi or ethernet. Lastly, you could host your interface on the Pi and interact with it in a web browser, cutting the second computer out of the picture. Exactly which option you decide to take is up to you, and is really off topic here (though it might be on topic elsewhere on SE).
Sending commands to the Pi and having it run scripts is a Remote Procedure Call (RPC). You might want to look up some of the protocols (like JSON-RPC) generally used to do that, but the basic approach will have code running on the Pi:
def do_something():
    pass

def do_something_else():
    pass

functions = {"something": do_something, "something_else": do_something_else}

while True:
    cmd = get_input()  # blocks until input comes
    if cmd in functions:
        reply(f"Running {cmd}")
        output = functions[cmd]()
        reply(f"{cmd} returned with output {output}")
    else:
        reply(f"Invalid command {cmd}")
This is schematic: exactly what get_input() is on the Pi will depend on how you end up connecting and what protocol (if any) you end up using. Notice that I have built in confirmation: you want to know whether things worked or failed.
Whilst you could store these commands in separate scripts and invoke them, if they are just Python scripts there is no reason not to call the functions directly from the code running on the Pi.
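On the PC side, sending a command and reading back those confirmations with pyserial might look roughly like this (the COM port, baud rate and command name are assumptions for illustration):

import serial

with serial.Serial("COM3", 115200, timeout=2) as ser:
    ser.write(b"something\n")               # name of the command to run
    print(ser.readline().decode().strip())  # e.g. "Running something"
    print(ser.readline().decode().strip())  # e.g. "something returned with output None"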
I've seen two different solutions to the question:
With the Pi Pico running the REPL (Python shell), text strings can simply be sent from a Python program running on the host (Windows or Raspberry Pi, etc., connected to the USB COM port, with the Thonny window closed) and executed on the Pico as Python commands. This approach is detailed in:
https://blog.rareschool.com/2021/01/controlling-raspberry-pi-pico-using.html
On the pico side, one just needs to define a number of functions that will execute from the REPL. E.g.:
from machine import Pin

# use onboard LED which is controlled by Pin 25
led = Pin(25, Pin.OUT)

# Turn the LED on
def on():
    led.value(1)

# Turn the LED off
def off():
    led.value(0)
Any string sent to the pico, such as
on()
will be executed accordingly.
In the 2nd approach, the pico reads each line from the USB serial port and parses the line into a command and additional parameters, which are compared against a list of predefined commands and executed accordingly. I did that using the Arduino Due, and I'm in the process of porting it to the pico (= no code to show yet :-().
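As a rough MicroPython sketch of that second approach on the pico side, assuming commands arrive over the USB serial link as lines like "led 1" or "led 0" (the command names here are just placeholders):

import sys
from machine import Pin

led = Pin(25, Pin.OUT)

def set_led(args):
    led.value(int(args[0]))

# map command names to handler functions
commands = {"led": set_led}

while True:
    line = sys.stdin.readline().strip()
    if not line:
        continue
    name, *args = line.split()
    if name in commands:
        commands[name](args)
        print("OK", name)
    else:
        print("ERR unknown command:", name)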

Python: Send incoming serial data to MySQL

I am trying to write a simple Python script that will read incoming serial data (USB adapter) at 115200 baud. After each line is received, it must be uploaded to MySQL running on a Synology NAS on the same network.
The problem I see with the MySQL INSERT from within Python is that it can take from 0.5 to 1.3 seconds, and during this time any incoming messages would be lost, probably several of them.
I have tried many threading examples but can't get them to work: example code normally shows you how to run 1, 2 or 3 threads at the same time, but what I need is to create threads as they are required by the incoming data.
BTW: I'm using a Raspberry Pi.
As a reference, some of the examples I have tried:
http://www.tutorialspoint.com/python/python_multithreading.htm
How about just two threads: the first monitoring the serial data and adding each message to a list to be stored; the second monitoring the list of messages and, if it is not empty, storing them to the database and removing them from the list.
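A minimal sketch of that two-thread layout, assuming pyserial, mysql-connector-python, and a table called readings; the port name, credentials and table are placeholders:

import queue
import threading

import serial                # pyserial
import mysql.connector       # mysql-connector-python

messages = queue.Queue()

def reader():
    ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
    while True:
        line = ser.readline().decode(errors="replace").strip()
        if line:
            messages.put(line)    # never blocked by the database

def writer():
    db = mysql.connector.connect(host="nas.local", user="logger",
                                 password="secret", database="telemetry")
    cur = db.cursor()
    while True:
        line = messages.get()     # blocks until a message is queued
        cur.execute("INSERT INTO readings (payload) VALUES (%s)", (line,))
        db.commit()

threading.Thread(target=reader, daemon=True).start()
writer()                          # run the database writer in the main thread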

Is there a way to determine if multiple users are running a particular Python script?

I have a script which can be run by any user who is connected to a server. This script writes to a single log file, but there is no restriction on who can use it at one time, so multiple people could attempt to write to the log and data might be lost. Is there a way for one instance of the code to know if other instances of that code are running? Moreover, is it possible to gather this information dynamically? (i.e. not allow data saving for the second user until the first user has completed his/her task)
I know I could do this with a text file: I could write the user name to the file when they start, then delete it when they finish, but this could lead to errors if either step is missed, such as on an unexpected script termination. So what other reliable ways are there?
Some information on the system: Python 2.7 is installed on a Windows 7 64-bit server via Anaconda. All connected machines are also Windows 7 64-bit. Thanks in advance.
Here is an implementation:
http://www.evanfosmark.com/2009/01/cross-platform-file-locking-support-in-python/
If you are using a lock, be aware that stale locks (ones left behind by hung or crashed processes) can be a pain. Have a process that periodically searches for locks that were created longer than X minutes ago and frees them.
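A sketch of that stale-lock cleanup, assuming the lock is a plain file whose modification time marks when it was taken; the path and age limit are placeholders:

import os
import time

LOCK_PATH = "script.lock"
MAX_AGE = 10 * 60  # ten minutes, in seconds

def break_stale_lock():
    try:
        age = time.time() - os.path.getmtime(LOCK_PATH)
    except OSError:
        return                 # no lock file, nothing to clean up
    if age > MAX_AGE:
        os.remove(LOCK_PATH)   # owner is assumed to have hung or crashed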
It just isn't clean allowing multiple users to write to a single log and hoping things go OK.
Why don't you write a daemon that handles logs? Other processes connect to a "logging port", and in the simplest case they only succeed if no one else has connected.
You can just modify the echo server example given here (keep a timeout in the server for all connections):
http://docs.python.org/release/2.5.2/lib/socket-example.html
If you want to know exactly who logged what, and make sure no one unauthorized gets in, you can use Unix sockets to restrict it to only certain UIDs/GIDs, etc.
Here is a very good example.
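As a very rough sketch of that daemon idea using the standard library's socketserver module (Python 3 spelling; on 2.7 the module is called SocketServer), with the port and log path as placeholders:

import socketserver

LOG_PATH = "shared.log"

class LogHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # each connected client streams lines; append them to the single log
        for line in self.rfile:
            with open(LOG_PATH, "a") as f:
                f.write(line.decode(errors="replace"))

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 5140), LogHandler)
    server.serve_forever()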
NTEventLogHandler is probably the easiest way for logging to a given Windows machine/server, but it might make more sense to use SyslogHandler if you have a syslog sink on a Unix server.
The catch I can think of with SyslogHandler is that you'll likely need to poke holes through the Windows firewall in order to send packets over the syslog protocol, i.e., 514/TCP ("reliable syslog") and 514/UDP (traditional or "unreliable syslog").
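If a syslog sink is available, the SysLogHandler wiring looks roughly like this (the server name and logger name are placeholders):

import logging
import logging.handlers

logger = logging.getLogger("shared-script")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("syslog.example.com", 514)))

logger.info("user %s started the script", "alice")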

Python socket program with shell script?

I have two machines connected by a switch. I have a popular server application which we can call "SXC_SERVER" on machine A and I interrogate the "SXC_SERVER" with the corresponding application from machine B, which I'll call "SXC_CLIENT". What I am trying to do is two-fold:
firstly, capture the traffic flow of the SXC_SERVER and SXC_CLIENT interaction through tcpdump. The interaction between the two is a simple GET and RESPONSE, but I require the traffic traces.
secondly, I want to log the Resident Set Size (RSS) usage of the SXC_SERVER process during each interaction/iteration.
Moreover, I don't just need one traffic trace of the communication and one memory usage log of the SXC_SERVER process; otherwise I wouldn't be writing this, because I could go away and do that in ten minutes... In fact I am aiming to do very many! But let's say here for simplicity I want to do 10.
Since this will be very labor intensive, requiring me to be at both machines stopping and starting the SXC_CLIENT-to-SXC_SERVER interrogation, the tcpdump traffic capture, and the RSS memory logging of the SXC_SERVER process, I want to write an automation script.
But! I am not a programmer, or software guy...(darn)
However, that said, I can imagine a separate client/server program that oversees this automation, which we can call AUTO_SERVER and AUTO_CLIENT. My thoughts are that machine B would run AUTO_CLIENT and machine A would run AUTO_SERVER. The aim of both is to facilitate the automation, i.e. the stopping and starting of tcpdump and of the memory logging of the SXC_SERVER process on machine A before machine B queries SXC_SERVER with SXC_CLIENT (if you follow me!).
Effectively after one run of the SXC_SERVER-to-SXC_CLIENT GET/RESPONSE interaction I'll end up with:
one traffic capture *.pcap file called n1.pcap
and one memory log dump (of the RSS associated with the process) called n1.csv.
I am not a programmer or software guy but I can see a rough method (to the best of my ability) to achieve this, as follows:
Machine A: AUTO_SERVER
BEGIN:
msgReceived = open socket(listen on port *n*)
DO
1. wait for machine B to tell me when to start watch (as in the program) to log RSS memory usage of the SXC_SERVER process using the hardcoded command:
watch -n 0.1 'ps -p $(pgrep -d"," -x snmpd) -o rss= | awk '\''{ i += $1 } END { print i }'\'' >> ~/Desktop/mem_logs/mem_i.csv
UNTIL (msgReceived == "FINISH")
quit
END.
Machine B: AUTO_CLIENT
BEGIN:
open socket(new)
for i in 10, do
1. locally start tcpdump with a hardcoded tcpdump command with a relevant filter to only capture the SXC_SERVER-to-SXC_CLIENT traffic, and set the output flag to capture all traffic to a PCAP file called n*i*.pcap, where *i* is the integer of the current for loop, saving the file in the folder "~/Desktop/test_captures/".
2. Send the GET request to SXC_SERVER
3. wait for RESPONSE reply from SXC_SERVER
4. after receiving the reply, tell machine A to stop the watch command
i++
5. send string "FINISH" to machine A.
END.
As you can see, I would assume that this would be achieved by the use of a separate, small client/server-like program (which here I've called AUTO_SERVER and AUTO_CLIENT) on both machines. The really rough pseudo-code design should be self-explanatory.
I have found a small client/server socket program located here: http://www.velvetcache.org/2010/06/14/python-unix-sockets which I think may be suitable if I edit it, but I am not sure how exactly I can feasibly achieve this, which is where you may be able to provide some assistance.
Can Python do this automation?
Can it be done with a single bash script?
Do you think I am on the right path with this?
Or have you any helpful suggestions?
Regards.
You can use Python for this kind of thing, but I would strongly recommend using SSH for the bulk of the work (rather than coding the connection stuff yourself), and then using either a bash script or Python script to launch the tcpdump etc. processes.
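To make that concrete, a very rough Python sketch run on machine B might look like the following; the host names, the sxc_client invocation and the remote commands are all placeholders, and stopping the remote processes cleanly needs more care than shown here:

import subprocess
import time

for i in range(1, 11):
    # start tcpdump on machine A in the background, writing n<i>.pcap
    capture = subprocess.Popen(
        ["ssh", "machineA",
         f"sudo tcpdump -w ~/Desktop/test_captures/n{i}.pcap host machineB"])
    # start a crude RSS logger for the SXC_SERVER process on machine A
    rss_log = subprocess.Popen(
        ["ssh", "machineA",
         f"while true; do ps -p $(pgrep -x snmpd) -o rss= >> ~/Desktop/mem_logs/mem_{i}.csv; sleep 0.1; done"])

    # run one SXC_CLIENT GET/RESPONSE interaction locally on machine B
    subprocess.run(["sxc_client", "--get", "machineA"])

    time.sleep(1)          # let the last packets reach the capture
    rss_log.terminate()    # note: this only stops the local ssh processes;
    capture.terminate()    # the remote commands may need explicit cleanup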
Your question, however, is a bit too open-ended for stackoverflow - it sounds like you are asking someone to write this program for you, rather than for help with a specific problem.
