I followed a YouTube video to create a remotely viewable camera with a Raspberry Pi (the tutorial's source code is available here). Basically it creates a Flask server that streams a live feed from the Pi Camera, which can be viewed in a browser on other devices. The problem I am having is that I cannot get a feed after shutting down and starting the Pi. If I reboot the Pi, debug the app, or manually start the service, everything works just fine. However, if I actually shut down the Pi, unplug it, plug it back in, and let it boot, the server seems to fail to start and the video feed cannot be accessed from any device, including the Pi itself, even though the service status says it is running. I need this server to start whenever I plug in the Pi, the OS boots, and it connects to a predefined network.
The final portion of the tutorial says that adding sudo python3 /home/pi/pi-camera-stream-flask/main.py at the end of /etc/profile is supposed to start main.py, which starts the Flask server. This did not work, so I created a service to start the app once there is a network connection, which looks like:
[Unit]
Description=Start Camera Flask
After=systemd-networkd-wait-online.service
Wants=systemd-networkd-wait-online.service
[Service]
User=pi
WorkingDirectory=/home/pi/pi-camera-stream-flask/
ExecStart=sudo python3 /home/pi/pi-camera-stream-flask/main.py
Restart=always
[Install]
WantedBy=multi-user.target
Note: I have also tried After=network.target and After=network-online.target.
I also enabled NetworkManager-wait-online.service and systemd-networkd-wait-online.service
My Python app looks like:
#Modified by smartbuilds.io
#Date: 27.09.20
#Desc: This web application serves a motion JPEG stream
# main.py
# import the necessary packages
from flask import Flask, render_template, Response, request
from camera import VideoCamera
import time
import threading
import os
pi_camera = VideoCamera(flip=False) # flip pi camera if upside down.
# App Globals (do not edit)
app = Flask(__name__)
@app.route('/')
def index():
    return render_template('index.html')  # you can customize index.html here

def gen(camera):
    # get camera frame
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(pi_camera),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='192.168.0.14', port=5000, debug=False)  # have also tried app.run(host='0.0.0.0', debug=False)
You can try to auto-start your code every time the Pi is connected to power by setting it up in your .bashrc:
sudo nano /home/pi/.bashrc
Scroll down to the bottom and add these two lines:
echo running flask
sudo python3 /home/pi/pi-camera-stream-flask/main.py
Try to remove your edit in /etc/profile first, and make sure you have some cooldown at the start of the script, maybe 5 seconds:
time.sleep(5)
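For reference, a minimal sketch of where that delay could sit near the top of main.py; the exact placement is an assumption, not something this answer specifies:

# Sketch only: placement of the suggested delay is an assumption about main.py.
import time

time.sleep(5)  # give the network and camera a few seconds to settle after boot

from camera import VideoCamera  # the rest of main.py continues unchanged
pi_camera = VideoCamera(flip=False)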
This problem usually occurs when the Flask server is started before the Raspberry Pi is connected to the network. There are a few ways to solve the problem, but here's my approach.
Add a new function to check connection status.
Create a shell script to execute main.py
Make the shell script executable.
Create a cron job to execute the script after reboot.
Check Connection Status
We will create a function that checks the network connection using the subprocess module.
The function will check the connection periodically until the Pi is properly connected to the network, and then return True. (This assumes the Raspberry Pi has connected to the network before and that the network adapter is always enabled.)
Add the following code snippet to your code and execute it before initializing the Flask server.
from subprocess import check_output
from time import sleep

def initializing_network():
    while True:
        try:
            result = check_output(['hostname', '-I']).decode('utf-8').strip()
            # The conditional below may not be the cleanest solution.
            # Feel free to come up with a better one.
            # It checks whether the shortest possible IP length exists,
            # i.e. "x.x.x.x" = 7 characters,
            # or checks for '192' in the result string if you are sure your
            # private network always begins with '192'.
            if (len(result) > 6) or ('192' in result):
                return True
        except:
            pass
        # If it fails, wait for 10 seconds then recheck the connection again.
        sleep(10)
Considering that initializing_network() either returns True or loops indefinitely, it can be called from the main block without any additional condition; it simply blocks until the network is up. You may want to log the exception or terminate the Python script after a number of attempts to prevent an infinite loop.
if __name__ == '__main__':
    initializing_network()
    app.run(host='192.168.0.14', port=5000, debug=False)
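Building on that note about logging or terminating, a bounded variant might look like the sketch below; max_attempts and wait_seconds are illustrative parameters, not part of the original answer:

# Sketch: bounded variant that logs each failure and eventually gives up.
import logging
import sys
from subprocess import check_output
from time import sleep

def initializing_network(max_attempts=30, wait_seconds=10):
    for attempt in range(max_attempts):
        try:
            result = check_output(['hostname', '-I']).decode('utf-8').strip()
            if len(result) > 6:  # shortest possible IPv4 address, "x.x.x.x", is 7 characters
                return True
        except Exception:
            logging.exception("hostname -I failed on attempt %d", attempt + 1)
        sleep(wait_seconds)
    logging.error("No network after %d attempts, giving up", max_attempts)
    sys.exit(1)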
Create Shell Script
Assuming you are in the main directory, create a shell script there, let's say runner.sh.
Type the following in the terminal:
nano runner.sh
Then add the following code snippet.
#!/bin/sh
cd /
sudo python3 /home/pi/pi-camera-stream-flask/main.py
cd /
When you're done, press Ctrl + X, then Y, then Enter to save the change.
Make Shell Script Executable
Assuming we are still in the same directory, type the following command in the terminal:
chmod 755 runner.sh
Create a Cron Job
Next, let's add a new cron job for the Flask server. Go back to the terminal and execute the following:
sudo crontab -e
Next, select nano as the text editor, but feel free to use any text editor that you like.
At the very bottom of the file, insert a new line and add this:
@reboot sh /home/pi/runner.sh
Similarly, press Ctrl + X, then Y, then Enter to save the change.
Final Test
To ensure the shell script runs properly, execute it and check if everything works.
./runner.sh
If it works, then it is time to test the cron job.
Type sudo reboot in the terminal. After the reboot, wait a while and then check the designated IP address to see whether the server has started. It may take some time, so check periodically. Ideally, it will work without any problems; otherwise, repeat the steps and make sure you didn't miss anything.
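If you want to automate that periodic check from another machine, a small sketch like the one below could poll the server until it answers. The IP and port simply mirror the values used in the question and are assumptions about your network:

# Sketch: poll the Flask server from another machine until it responds.
import time
import urllib.request

URL = "http://192.168.0.14:5000/"  # adjust to your Pi's address and port

while True:
    try:
        with urllib.request.urlopen(URL, timeout=5):
            print("server is up")
            break
    except OSError as exc:  # URLError and timeouts are subclasses of OSError
        print("not reachable yet (%s), retrying in 10 s" % exc)
        time.sleep(10)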
Related
I am trying to launch a Steam game on my computer through an SSH connection (into a Win10 machine). When run locally, the following Python call works.
subprocess.run("start steam://rungameid/[gameid]", shell=True)
However, whenever I run this over an SSH connection, either in an interactive interpreter or by invoking a script on the target machine, my Steam client suddenly exits.
I haven't noticed anything in the Steam logs, except that Steam\logs\connection_log.txt contains a logoff and a new session start each time. This is not the case when I run the command locally on my machine. Why is Steam aware of the different sources of this command, and why is this causing the Steam connection to drop? Can anyone suggest a workaround?
Thanks.
Steam is likely failing to launch the application because Windows services, including OpenSSH server, cannot access the desktop, and, hence, cannot launch GUI applications. Presumably, Steam does not expect to run an application in an environment in which it cannot interact with the desktop, and this is what eventually causes Steam to crash. (Admittedly, this is just a guess—it's hard to be sure exactly what is happening when the crash does not seem to appear in the logs or crash dumps.)
You can see a somewhat more detailed explanation of why starting GUI applications over SSH fails when the server is run as a Windows service in this answer by domih to this question about running GUI applications over SSH on Windows.
domih also suggests some workarounds. If it is an option for you, the simplest one is probably to download and run OpenSSH server manually instead of running the server as a service. You can find the latest release of Win32-OpenSSH/Windows for OpenSSH here.
The other workaround that still seems to work is to use schtasks. The idea is to create a scheduled task that runs your command—the Task Scheduler can access the desktop. Unfortunately, this is only an acceptable solution if you don't mind waiting until the next minute at least; schtasks can only schedule tasks to occur exactly on the minute. Moreover, to be safe to run at any time, code should probably schedule the task for at least one minute into the future, meaning that wait times could be anywhere between 1–2 minutes.
There are also other drawbacks to this approach. For example, it's probably harder to monitor the running process this way. However, it might be an acceptable solution in some circumstances, so I've written some Python code that can be used to run a program with schtasks, along with an example. The code depends on the shortuuid package; you will need to install it before trying the example.
import subprocess
import tempfile
import shortuuid
import datetime

def run_with_schtasks_soon(s, delay=2):
    """
    Run a program with schtasks with a delay of no more than
    delay minutes and no less than delay - 1 minutes.
    """
    # delay needs to be no less than 2 since, at best, we
    # could be calling subprocess at the end of the minute.
    assert delay >= 2
    task_name = shortuuid.uuid()
    temp_file = tempfile.NamedTemporaryFile(mode="w", suffix=".bat", delete=False)
    temp_file.write('{}\nschtasks /delete /tn {} /f\ndel "{}"'.format(s, task_name, temp_file.name))
    temp_file.close()
    run_time = datetime.datetime.now() + datetime.timedelta(minutes=delay)
    time_string = run_time.strftime("%H:%M")
    # This is locale-specific. You will need to change this to
    # match your locale. (locale.setlocale and the "%x" format
    # does not seem to work here)
    date_string = run_time.strftime("%m/%d/%Y")
    return subprocess.run("schtasks /create /tn {} /tr {} /sc once /st {} /sd {}".format(task_name,
                                                                                         temp_file.name,
                                                                                         time_string,
                                                                                         date_string),
                          shell=True)

if __name__ == "__main__":
    # Runs The Witness (if you have it)
    run_with_schtasks_soon("start steam://rungameid/210970")
I've been racking my brain over this, but nothing has come to mind yet.
I want my script to execute a .py file inside another, already running process. I have a Maya process open, and from another application (modo, for example) I want to run the file hello.py (print 'hello!') inside that exact Maya instance.
I already have the PID of that Maya process, but I don't know how to actually send it a command to execute.
Is there some attribute/flag in the subprocess or signal modules that I could be missing? Or is it done another way entirely?
import os

openedMaya = []
r = os.popen('tasklist /v').read().strip().split('\n')
for i in range(len(r)):
    s = r[i]
    if 'maya.exe' in s and ': untitled' in s:
        openedMaya.append(s)
mayaPID = openedMaya[0].split('maya.exe')[1].split('Console')[0]  # take the first matching entry
I need a command that could execute hello.py in that Maya process.
You could use RPyC to act as a bridge so that you can communicate from one software to another. The idea is that you use RPyC to run an idle server in Maya, where the PYTHONPATH is also pointing to your hello.py script. This server stays active in the session, but the user shouldn't notice it exists.
Then in your other software you use RPyC to broadcast a message using the same port as the server so that it triggers it in Maya. This would then run your command.
It's slightly more overhead, but I have been able to use this successfully for stand-alone tools to trigger events in Maya. As far as using subprocess, you can use it to run a command in a new Maya session, but I don't think there's a way to use it for an existing one.
Hope that nudges you in the right direction.
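To make that concrete, here is a minimal sketch of such a bridge, assuming rpyc is importable by both Maya's and the other application's Python and that the default port 18812 is free. The service name, port, and script path are illustrative, not part of the original answer:

# Sketch only: a tiny RPyC bridge. Run the server part inside Maya
# (e.g. from userSetup.py); call the client part from modo or any other app.
import threading
import rpyc
from rpyc.utils.server import ThreadedServer

class MayaBridgeService(rpyc.Service):
    def exposed_run_script(self, path):
        # Execute a .py file inside the Maya session that hosts this server.
        with open(path) as f:
            exec(compile(f.read(), path, "exec"), {"__name__": "__main__"})

def start_bridge(port=18812):
    # The daemon thread keeps the server alive without blocking Maya's UI.
    server = ThreadedServer(MayaBridgeService, port=port)
    thread = threading.Thread(target=server.start)
    thread.daemon = True
    thread.start()

# From the other application:
#   conn = rpyc.connect("127.0.0.1", 18812)
#   conn.root.run_script("/path/to/hello.py")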
Maybe an easier way would be to transfer your mesh by using an intermediate file. One process creates the file, another process (running inside the host app) reads it in.
Thanks for the advice. In the end I found a solution by opening Maya's command port with a MEL command (at startup):
commandPort -n ":<some_port>";
and then connecting to that port from modo through a socket:
import socket

HOST = '127.0.0.1'
PORT = <some_port>
ADDR = (HOST, PORT)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ADDR)
client.send(<message_that_you_want_to_send>)
data = client.recv(1024)
client.close()
And I'm able to do whatever I want inside that open Maya instance, as long as I send MEL commands.
Thanks for the help though!
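For example, since a plain commandPort expects MEL by default, the message could wrap a Python call in MEL. This is an untested sketch; the quoting and the script path are assumptions about your setup:

# Assumes the command port was opened with the default MEL source type; path is illustrative.
mel_command = 'python("execfile(\'/path/to/hello.py\')");'  # execfile on Maya's Python 2; use exec(open(...).read()) on Python 3
client.send(mel_command)  # on Python 3, send bytes: client.send(mel_command.encode())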
I have a Python daemon started from an init.d script. The daemon optionally reads an array of IDs from a server through a REST interface; otherwise it uses an array of pre-defined IDs.
logger.info("BehovsBoBoxen control system: bbb_domoticz.py starting up")
if DOMOTICZ_IN or DOMOTICZ_OUT:
#
# build authenticate string to access Domoticz server
#
p = urllib2.HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, DOMOTICZ_URL, USERNAME, PASSWORD)
handler = urllib2.HTTPBasicAuthHandler(p)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
if DOMOTICZ_IN:
#
# Find all temperature sensors in Domoticz and populate sensors array
#
url= "http://"+DOMOTICZ_URL+"/json.htm?type=devices&filter=temp&used=true&order=Name"
logger.debug('Reading from %s',url)
response=urllib2.urlopen(url)
data=json.loads(response.read())
logger.debug('Response is %s',json.dumps(data, indent=4, sort_keys=True))
for i in range(len(data["result"])):
a=data["result"][i]["Description"]
ini=a.find('%room')
if ini != -1:
ini=ini+6
rIndex=int(a[ini:])
logger.info('Configure room id %s with Domoticz sensor idx: %s', rIndex, data["result"][i]["idx"])
sensors[rIndex]=data["result"][i]["idx"]
The daemon is started from an init.d script at boot. Everything works perfectly if I use the option with predefined IDs, i.e. I don't use the REST interface. The daemon starts at boot, and I can stop and restart the daemon with the command
sudo service start/stop/restart
However, if I use the other option (reading IDs from the server), the daemon does not start at boot. In the log file I find one single line ("...bbb_domoticz.py starting up"). Hence, the daemon exits silently right after this, probably in one of the following urllib2 calls. The subsequent logger.debug('Reading...') does not show up in the log file.
But the strange thing is that if I manually start the daemon with a copy of the init.d script in my home directory, the daemon starts. If I run the init.d script from /etc/init.d, the daemon immediately exits, just as it does at boot. But once I have started the daemon with the script in my home directory, I can continue to start/stop/restart it with the service command.
So my takeaway from this is that something goes wrong in urllib2 unless I have managed to start the daemon once from my home directory. It puzzles me that I don't get any traceback or anything when the daemon exits.
Any idea how to nail down this problem?
Edit: Inspired by the answer to add logging to specific modules, I tried to add logging to urllib2. However, I can't figure out how to let this module use my logging handler. Help on this is appreciated.
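One low-effort way to surface the silent failure, as a sketch rather than a definitive fix, is to wrap the urllib2 call and log the full traceback with the logger, urllib2, and url names already used in the question's code:

# Sketch: log the traceback if urlopen dies silently under init.d.
try:
    response = urllib2.urlopen(url)
except Exception:
    logger.exception("urlopen failed for %s", url)  # writes the traceback to the log file
    raise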
I have this script, and I do not know how to keep it running in the background, because when I close the session it closes too. I tried putting it in crontab, but then it does not find index.html and instead shows the list of files in /.
#! /opt/python3/bin/python3
from http.server import HTTPServer, CGIHTTPRequestHandler
port = 8000
httpd = HTTPServer(('', port), CGIHTTPRequestHandler)
print("Starting simple_httpd on port: " + str(httpd.server_port))
httpd.serve_forever()
Basically you are asking how to detach the program from your shell ... here are a few options:
./scriptname.py >/dev/null 2>&1 & # sends the program to the background
Use gnu-screen (or similar) ... run your program via screen and you can bring it back up when you log back in
Daemonize your program properly
Update:
Recently I have not written a single daemon in Python. The days of forking twice or using a daemon library seem to be well behind us. I currently use supervisord and have heard good things about circus. These are just a few extra options you can use to deploy Python daemons.
I'm attempting to start a server app (in Erlang; it opens ports and listens for HTTP requests) via the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine, logs (via pexpect) to the screen fine, and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it up manually by typing commands on the command line; using subprocess/pexpect somehow stops the app from listening...
When I start it manually, "netstat -tlp" displays the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
thank you
basic example:
note:
"-pz" - just ads ./ebin to the modules path for the erl VM, library search path
"-run" - runs moduleName, without any parameters.
import pexpect

command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact()  # Give control of the child to the user
All of this works correctly, which is strange. I have logging inside my code and all the log messages are output as they should be. The server wouldn't listen even when I started its process via a bash script, so I don't think it's the Python code that's causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem working on sshsplit ( https://launchpad.net/sshsplit )
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
openargs = ["ssh", "-ND", "3000"]
print "Launching %s" %(" ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only allow you to see exactly what command you are launching, but should correctly pass the values to the executable. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory, or configuration file?).
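On the working-directory guess at the end: subprocess.Popen accepts a cwd argument, so you can pin the directory explicitly. A small sketch, using the Erlang command from the question and an illustrative path:

# Sketch: run the Erlang node with an explicit working directory.
import subprocess

openargs = ["erl", "-pz", "./ebin", "-run", "moduleName"]
p = subprocess.Popen(openargs,
                     cwd="/path/to/erlang/app",  # illustrative: the directory that contains ./ebin
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)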