Please help me wrap my brain around handling an exception when using subprocess in the following scenario. I am sure many of you could come up with some really advanced exception traps, but I am really looking for rudimentary knowledge I can build on over time. This code writes a JPEG image to a mounted Windows network share. I have purposely toggled read/write permissions on the share, basically denying the Pi access. I don't want my program to spew its digital guts in the absence of a good storage location, but rather just pass me a sensible message.
snap_pic = 'raspistill -t 1200 -a ' + pic_tag + ' -ae 50,0x00,0x8080FF -o ' + file_path
try:
    subprocess.check_call(snap_pic, shell=True)
except subprocess.CalledProcessError:
    print('Cannot write to network storage')
    sys.exc_clear()
else:
    print('Image number ' + image_no + ' being processed')
Before this evening I did not even know what subprocess was; I was using os.system to call snap_pic. I saw some error-trapping limitations with that, so here I am trying to step up what little game I have.
Should I be using .call or .check_call here?
My "except" clause always gets bypassed, whether connectivity exists or not.
And should I have to clear an error flag for these lines for each iteration of this code segment?
As always, any help is much appreciated.
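For reference, the difference between the two options asked about: call() returns the child's exit status and never raises on failure, while check_call() raises CalledProcessError on a nonzero exit. A minimal sketch using the shell's false command:

```python
import subprocess

# call() just hands back the exit status; it is up to you to inspect it.
rc = subprocess.call('false', shell=True)
print(rc)  # nonzero, because `false` always fails

# check_call() turns a nonzero exit status into an exception.
try:
    subprocess.check_call('false', shell=True)
except subprocess.CalledProcessError as exc:
    print('command failed with status', exc.returncode)
```

One caveat: check_call() only fires when the child actually reports failure. If raspistill exits with status 0 even though the share was unwritable, no exception is raised, which would explain an except branch that never runs.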
Try using the PiCamera Python package. It is simpler and cleaner than shelling out with subprocess.
Here is a basic example from the documentation demonstrating how to take a photo:
import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.start_preview()
    camera.exposure_compensation = 2
    camera.exposure_mode = 'spotlight'
    camera.meter_mode = 'matrix'
    camera.image_effect = 'gpen'
    # Give the camera some time to adjust to conditions
    time.sleep(2)
    camera.capture('foo.jpg')
    camera.stop_preview()
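Whichever capture method you use, the original unwritable-share problem can be caught up front by probing the destination directory before taking the shot. A stdlib-only sketch (the mount point shown is hypothetical):

```python
import os
import tempfile

def is_writable(directory):
    """Return True if we can actually create a file in `directory`."""
    try:
        fd, probe = tempfile.mkstemp(dir=directory)
    except OSError:
        # Covers missing directory, permission denied, read-only mount, etc.
        return False
    os.close(fd)
    os.remove(probe)
    return True

if not is_writable('/mnt/share'):  # hypothetical mount point
    print('Cannot write to network storage')
```

This avoids having to infer the failure from the capture tool's exit status at all.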
I want to write a Python script that checks whether my device has a display and whether that display is turned on or off.
I googled it and found a third-party library named "WMI", but it seems to only expose information like CPU/HDD/process/thread details, so I am not sure it can do this.
I am using Windows 10, in case that matters.
Is it possible to get that kind of low level hardware information via Python, and if it is, how can I do it?
It looks like Windows does not really have a way to tell you whether the monitor is on or off. The WMI Win32_DesktopMonitor class has an 'Availability' property, but this doesn't seem to be affected by changing the monitor state. I tested this with the following Python script:
import wmi  # pip install WMI
import win32gui, win32con

SC_MONITORPOWER = 0xF170
wmic = wmi.WMI()

def powersave():
    # Turn the monitor off.
    win32gui.SendMessage(win32con.HWND_BROADCAST, win32con.WM_SYSCOMMAND,
                         SC_MONITORPOWER, 2)
    # Get the monitor states.
    print([monitor.Availability for monitor in wmic.Win32_DesktopMonitor()])

if __name__ == '__main__':
    powersave()
The SC_MONITORPOWER arguments are documented here.
Unfortunately the result for my monitor is always 3, which means "on", even when it is actually powered down, either in sleep mode or physically off.
Depending on your requirement, you might just want to send the broadcast message to assert the power state you want and not need to check the current state.
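If asserting the state is enough, the broadcast can be sent without the pywin32 dependency, using only ctypes. A sketch (Windows-only; the constant values below come from the Win32 headers):

```python
import ctypes
import sys

# Win32 constants (from WinUser.h)
HWND_BROADCAST = 0xFFFF
WM_SYSCOMMAND = 0x0112
SC_MONITORPOWER = 0xF170
MONITOR_ON, MONITOR_STANDBY, MONITOR_OFF = -1, 1, 2

def set_monitor_power(state):
    """Broadcast a WM_SYSCOMMAND asking all monitors to change power state."""
    if sys.platform != 'win32':
        raise RuntimeError('Windows only')
    ctypes.windll.user32.SendMessageW(
        HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, state)
```

For example, `set_monitor_power(MONITOR_OFF)` mirrors the win32gui call in the script above.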
I was wondering if there is a way of watching for a window to open and, when it does, closing it? I've got a very verbose VPN client on our Mac systems that gets really annoying. There's no configuration to change this, so I'm wondering if I could write a Python script that is always running, watches for the window to open, and closes it.
As far as I know, there's no global notification that gets generated every time a window is opened, and no standard notification that every app uses, and no other way to do this in general, short of (a) injecting code into the VPN client, (b) using deprecated functionality like CGRemoteOperation, or (c) reverse engineering undocumented Window Server functionality.
So, the simplest solution is to periodically poll for windows and close them, probably using UI scripting via ScriptingBridge, NSAppleScript (through pyobjc), or appscript.
For example:
import time
import appscript

se = appscript.app('System Events')
while True:
    try:
        client = se.application_processes['Annoying VPN Client']
        window = client.windows['Annoying Window']
        close = window.buttons[1]
        close.click()
    except Exception as e:
        print('Exception: {}'.format(e))
    time.sleep(1)
If you're interested in the other options—which you won't be able to do from Python—let me know. If you're familiar with system-level C and ObjC programming, creating a SIMBL program that hooks into the ObjC runtime to insert your own delegate in front of the existing one and intercept the relevant messages isn't that hard.
I am making an online webcam for my parents, using a Raspberry Pi. I want it to capture a photo, upload it to a webserver, then upload a copy to a different server for archiving. I use the tool streamer to snap a still from the webcam. It works, but streamer sometimes crashes, looping the error message "v4l2: oops: select timeout". This can happen after a few shots or after 10 minutes of operation; it seems random. I have added a command that kills the streamer process after each snapshot, which made the program a bit more stable, but eventually it still gets stuck in the error loop. I don't know what the problem is or even how to debug it. What can I do?
I am using Raspbian with the included drivers. The webcam is a Logitech C200. I first tried using OpenCV to capture stills, but couldn't get it to work properly. If someone could help with that, maybe it would fix the problem; I don't know.
This is the code; it's Python:
import time
import sys
from subprocess import call
import ftputil

while True:
    call("streamer -q -f jpeg -s 640x480 -o ./current.jpeg", shell=True)
    time.sleep(0.2)
    call("killall -q streamer", shell=True)

    filename = str(time.time()) + ".jpg"

    host = ftputil.FTPHost(*****)
    #host.remove("/domains/***/public_html/webcam.jpg")
    host.upload("./current.jpeg", "/domains/***/public_html/webcam.jpg", mode='b')
    host.close()

    host = ftputil.FTPHost(****)
    #host.remove("/domains/***/public_html/webcam.jpg")
    host.upload("./current.jpeg", "/webcamarchive/" + filename, mode='b')
    host.close()

    time.sleep(10)
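Independent of the capture backend, the loop can be protected against a wedged streamer by giving the call a deadline instead of killing it after the fact. A sketch using subprocess.run's timeout parameter (Python 3.5+), which kills the child when the deadline passes:

```python
import subprocess

def snap(outfile, deadline=10):
    """Run streamer once, but give up if it hangs past `deadline` seconds."""
    cmd = ['streamer', '-q', '-f', 'jpeg', '-s', '640x480', '-o', outfile]
    try:
        subprocess.run(cmd, timeout=deadline, check=True)
        return True
    except subprocess.TimeoutExpired:
        print('streamer hung; killed after', deadline, 'seconds')
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        print('streamer failed:', exc)
    return False
```

The uploads can then be skipped whenever snap() returns False, instead of pushing a stale current.jpeg.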
Never mind, used pygame instead:
import pygame.camera

pygame.camera.init()
cam = pygame.camera.Camera("/dev/video0", (640, 480))
cam.start()
image = cam.get_image()
I'm using pyEDSDK (a python wrapper for the canon sdk) to control a Rebel T1i. It mostly works - I can take pictures and save the images to the hard drive, but it screws up when I try to send the start_bulb command.
Actually, start_bulb works flawlessly. The shutter opens and the camera begins capturing an image. The problem is that I can't get it to stop when I send the bulb_stop command.
For start_bulb to work, I had to manually change the camera to bulb mode. Maybe there's some setting I'm missing? Or some kind of init code for bulb mode?
I updated the firmware from 0.9 to 1.1, but it had no effect.
Some other people have had similar experiences:
http://forums.dpreview.com/forums/thread/2858921#forum-post-36169599
http://tech.dir.groups.yahoo.com/group/CanonSDK/message/921
I found the answer here: http://tech.dir.groups.yahoo.com/group/CanonSDK/message/1711
For some reason the T1i camera works differently than the others. The code below successfully closes the shutter after two seconds.
print("started")
self.SendCommand(kEdsCameraCommand_PressShutterButton,
                 kEdsCameraCommand_ShutterButton_Completely_NonAF)
sleep(2)
self.SendCommand(kEdsCameraCommand_PressShutterButton)
print("finished")
If anyone has a chance to test this on other models, I'm interested in hearing about it. I'm wondering if this method will work for them.
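Wrapped up as a helper, the press/sleep/release sequence looks like this. A sketch: the constant values are taken from the EDSDK headers and may be named or bound differently in your pyEDSDK build, and here the release passes the OFF value explicitly rather than relying on a default argument:

```python
from time import sleep

# Assumed EDSDK constants (from EDSDKTypes.h); verify against your binding.
kEdsCameraCommand_PressShutterButton = 0x00000004
kEdsCameraCommand_ShutterButton_OFF = 0x00000000
kEdsCameraCommand_ShutterButton_Completely_NonAF = 0x00010003

def bulb_exposure(camera, seconds):
    """Hold the shutter open for `seconds`, then release it (T1i-style)."""
    camera.SendCommand(kEdsCameraCommand_PressShutterButton,
                       kEdsCameraCommand_ShutterButton_Completely_NonAF)
    sleep(seconds)
    camera.SendCommand(kEdsCameraCommand_PressShutterButton,
                       kEdsCameraCommand_ShutterButton_OFF)
```

Taking the camera object as a parameter also makes the sequence easy to test against a stub before pointing it at real hardware.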
I'm working on a timer in Python which sounds a chime when the waiting time is over. I use the following code:
from sys import byteorder
from wave import open as wave_open
from ossaudiodev import open as oss_open
import ossaudiodev

def _play_chime():
    """
    Play a sound file once.
    """
    sound_file = wave_open('chime.wav', 'rb')
    (nc, sw, fr, nf, comptype, compname) = sound_file.getparams()
    dsp = oss_open('/dev/dsp', 'w')
    try:
        from ossaudiodev import AFMT_S16_NE
    except ImportError:
        # AFMT_S16_NE is not defined on all platforms; pick the
        # native-endian 16-bit format by hand.
        if byteorder == "little":
            AFMT_S16_NE = ossaudiodev.AFMT_S16_LE
        else:
            AFMT_S16_NE = ossaudiodev.AFMT_S16_BE
    dsp.setparameters(AFMT_S16_NE, nc, fr)
    data = sound_file.readframes(nf)
    sound_file.close()
    dsp.write(data)
    dsp.close()
It works pretty well, unless some other application is already outputting sound.
How could I do basically the same thing (under Linux) without the prerequisite that no other sound is being played?
If you think this requires an API that ensures software mixing, please suggest one :)
Thanks for the support :)
The easy answer is "Switch from OSS to PulseAudio." (Or set up ALSA to use dmix, or get a soundcard with better Linux drivers...)
The more complicated answer is, your code already works the way you want it to... on some soundcards. OSS drivers can expose hardware mixers so that you can have multiple audio streams playing simultaneously, or they can expose a single stream which results in the blocking audio you see on your system. The only correct solution here is to use an API that ensures software mixing.
Modern hardware and drivers support multiple streams, so unless you are running ancient hardware or a poor driver, it should work anyway.
Having said that, ALSA may give you more control than OSS. Most kernels shipped nowadays support both.
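For the "API that ensures software mixing" route, the least-code option on a PulseAudio desktop is to hand the file to the sound server's own player, which mixes in software. A sketch that falls back to ALSA's aplay and degrades gracefully when neither is present (the chime filename is from the question):

```python
import shutil
import subprocess

def play_chime(path='chime.wav'):
    """Play a WAV via paplay (PulseAudio) or aplay (ALSA), if available."""
    player = shutil.which('paplay') or shutil.which('aplay')
    if player is None:
        print('no PulseAudio/ALSA player found; skipping chime')
        return False
    subprocess.call([player, path])
    return True
```

Unlike writing to /dev/dsp directly, this never blocks on exclusive access to the audio device.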