Sensor data with Python does not get written to file - python

I'm currently working on a script for the sensor on my Raspberry Pi. The code below should read the sensor's values and write them into the data.json file. My problem is that if I run the script from the Thonny editor everything works, but if I add the script to my crontab the data does not get written to the data.json file.
The Code:
import time
import board
import adafruit_dht
import psutil
import io
import json
import os
from gpiozero import LED
from datetime import date
from datetime import datetime

# We first check if a libgpiod process is running. If yes, we kill it!
for proc in psutil.process_iter():
    if proc.name() == "libgpiod_pulsein" or proc.name() == "libgpiod_pulsei":
        proc.kill()

sensor = adafruit_dht.DHT11(board.D23)

# init
temp_values = [10]
hum_values = [10]
counter = 0
dataLED = LED(13)
dataList = []

def errSignal():
    for i in range(0, 3):
        dataLED.on()
        time.sleep(0.1)
        dataLED.off()
        time.sleep(0.1)

# on startup
def runSignal():
    for i in range(0, 5):
        dataLED.on()
        time.sleep(0.2)
        dataLED.off()
        time.sleep(0.2)

def getExistingData():
    with open('data.json') as fp:
        dataList = json.load(fp)
    print(dataList)

def startupCheck():
    if os.path.isfile("data.json") and os.access("data.json", os.R_OK):
        # checks if file exists
        print("File exists and is readable.")
        # get json data and push into arr on startup
        getExistingData()
    else:
        print("Either file is missing or is not readable, creating file...")
        # create json file
        with open("data.json", "w") as f:
            print("The json file is created.")

def calc_avgValue(values):
    sum = 0
    for iterator in values:
        sum += iterator
    return sum / len(values)

def onOFF():
    dataLED.on()
    time.sleep(0.7)
    dataLED.off()

# data led blinking on startup
runSignal()
# checks if file exists
startupCheck()

while True:
    try:
        temp_values.insert(counter, sensor.temperature)
        hum_values.insert(counter, sensor.humidity)
        counter += 1
        time.sleep(6)
        if counter >= 10:
            print(
                "Temperature: {}*C Humidity: {}% ".format(
                    round(calc_avgValue(temp_values), 2),
                    round(calc_avgValue(hum_values), 2)
                )
            )
            # get time
            today = date.today()
            now = datetime.now()
            # create json obj
            data = {
                "temperature": round(calc_avgValue(temp_values), 2),
                "humidity": round(calc_avgValue(hum_values), 2),
                "fullDate": str(today),
                "fullDate2": str(today.strftime("%d/%m/%Y")),
                "fullDate3": str(today.strftime("%B %d, %Y")),
                "fullDate4": str(today.strftime("%b-%d-%Y")),
                "date_time": str(now.strftime("%d/%m/%Y %H:%M:%S"))
            }
            # push data into list
            dataList.append(data)
            # writing to data.json
            with open("data.json", "w") as f:
                json.dump(dataList, f, indent=4, separators=(',', ': '))
            # if data is written signal appears
            onOFF()
            print("Data has been written to data.json...")
            counter = 0
    except RuntimeError as error:
        continue
    except Exception as error:
        sensor.exit()
        while True:
            errSignal()
        raise error
    time.sleep(0.2)
Crontab Menu (screenshot): the line in the center is the script.

Investigation areas:
- Do not put & in crontab; it serves no purpose.
- You should capture the output of your scripts to see what is going on. Do this by adding >/tmp/stats.out 2>/tmp/stats.err (and similar for the other two lines). You will then see what output and errors your scripts produce.
- cron does not run your scripts in the same environment, or from the same directory, that you run them from. Load what you require in the script.
- cron might not have permission to write data.json in the directory it is running from. Specify a full path, and ensure cron can write in that directory.
- See https://unix.stackexchange.com/questions/109804/crontabs-reboot-only-works-for-root for usage of @reboot. Things that should occur at startup should be configured through systemd or init.d (I do not know which your Raspberry Pi distro uses). cron is for scheduling jobs, not for running things at startup.
- It could be as simple as python3 not being in the PATH that cron uses.
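One way to address the working-directory point above is to build the data file's path from the script's own location instead of relying on whatever directory cron launches the process from. A minimal sketch (the data.json name comes from the question; everything else is generic):

```python
import os

# Resolve data.json relative to this script's directory, not the
# current working directory cron happens to use.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
DATA_FILE = os.path.join(SCRIPT_DIR, "data.json")

# Create an empty JSON list if the file is missing (same idea as
# the question's startupCheck, but with an absolute path).
if not os.path.isfile(DATA_FILE):
    with open(DATA_FILE, "w") as f:
        f.write("[]")
```

With this, every open("data.json", ...) in the script would become open(DATA_FILE, ...), and the cron entry no longer needs to care about its working directory.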

Related

Keep log file for pushing data from python to Kafka topic

# importing the required libraries
from time import sleep
from json import dumps
from kafka import KafkaProducer

# initializing the Kafka producer
my_producer = KafkaProducer(
    bootstrap_servers=['localhost:9092'],
    value_serializer=lambda x: dumps(x).encode('utf-8')
)

# generating the numbers ranging from 1 to 500
for n in range(10):
    my_data = {'num': n}
    my_producer.send('testnum', value=my_data)
    sleep(1)

for n in range(10):
    if n % 2 == 0:
        json_data = {'num': n}
        my_producer.send('testnum1', value=json_data)
    sleep(1)
Could anyone help me set up a log for this file?
You've not printed anything, but if you want a log file, use the Python logging module, or simple shell redirection:
python producer.py > out.log

python: monitor updates in /proc/mydev file

I wrote a kernel module that writes in /proc/mydev to notify the python program in userspace. I want to trigger a function in the python program whenever there is an update of data in /proc/mydev from the kernel module. What is the best way to listen for an update here? I am thinking about using "watchdog" (https://pythonhosted.org/watchdog/). Is there a better way for this?
This is an easy and efficient way:
import os
from time import sleep
from datetime import datetime

def my_function(_time):
    print("file modified, time: " + datetime.fromtimestamp(_time).strftime("%H:%M:%S"))

if __name__ == "__main__":
    _time = 0
    while True:
        last_modified_time = os.stat("/proc/mydev").st_mtime
        if last_modified_time > _time:
            my_function(last_modified_time)
            _time = last_modified_time
        sleep(1)  # prevent high cpu usage
result:
file modified, time: 11:44:09
file modified, time: 11:46:15
file modified, time: 11:46:24
The while loop guarantees that the program keeps listening to changes forever.
You can set the interval by changing the sleep time. Low sleep time causes high CPU usage.
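If you want the check itself to be reusable, the stat-based test above can be factored into a small function (the name `modified_since` is mine, not from the answer):

```python
import os

def modified_since(path, last_time):
    """Return (changed, mtime): whether `path` was modified after `last_time`."""
    mtime = os.stat(path).st_mtime
    return mtime > last_time, mtime
```

The polling loop then becomes a call like `changed, _time = modified_since("/proc/mydev", _time)` followed by the callback when `changed` is true.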
import os
import select

# get the file descriptor for the proc file
fd = os.open("/proc/mydev", os.O_RDONLY)

# create a polling object to monitor the file for updates
poller = select.poll()
poller.register(fd, select.POLLIN)

# create a loop to monitor the file for updates
while True:
    events = poller.poll(10000)
    if len(events) > 0:
        # rewind, then read the contents of the file if updated
        os.lseek(fd, 0, os.SEEK_SET)
        print(os.read(fd, 1024))
sudo pip install inotify
Example
Code for monitoring a simple, flat path (see “Recursive Watching” for watching a hierarchical structure):
import inotify.adapters

def _main():
    i = inotify.adapters.Inotify()
    i.add_watch('/tmp')
    with open('/tmp/test_file', 'w'):
        pass
    for event in i.event_gen(yield_nones=False):
        (_, type_names, path, filename) = event
        print("PATH=[{}] FILENAME=[{}] EVENT_TYPES={}".format(
            path, filename, type_names))

if __name__ == '__main__':
    _main()
Expected output:
PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_MODIFY']
PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_OPEN']
PATH=[/tmp] FILENAME=[test_file] EVENT_TYPES=['IN_CLOSE_WRITE']
I'm not sure if this would work for your situation, since it seems that you want to watch a folder, but this program watches one file at a time until the main() loop repeats:
import os
import time

def main():
    contents = os.listdir("/proc/mydev")
    for file in contents:
        with open("/proc/mydev/" + file, "r") as f:
            init = f.read()
        different = False
        while not different:
            with open("/proc/mydev/" + file, "r") as f:
                check = f.read()
            different = init != check
            time.sleep(1)
        # Write what you would want to happen if a change occurred here...
    main()  # repeat forever

main()
You could then write whatever you want to happen right before the last call to main(), as it would then repeat.
Also, this may contain errors, since I rushed this.
Hope this at least helps!
You can't do this efficiently without modifying your kernel driver.
Instead of using procfs, have it register a new character device under /dev, and write that driver to make new content available to read from that device only when new content has in fact come in from the underlying hardware, such that the application layer can issue a blocking read and have it return only when new content exists.
A good example to work from (which also has plenty of native Python clients) is the evdev devices in the input core.

Running asynchronous functions throws an error

Screenshot: I am trying to write a script that runs a .exe file using a silent installer with Python. My main problem is that when the installer finishes, it requests user input (see attached screenshot), and the script will not finish until the user hits any key to start the services.
Therefore, I wrote a script that has two functions running asynchronously.
As expected, I am having errors.
My expectation: the click_window function will wait for the run_installer function to finish and then click Enter in the cmd.exe window (I will add a screenshot).
What is happening: the script runs as expected, but when it gets to os.system = (file + '/SILENT'), based on the debugger, it dies with an error message.
import asyncio
import subprocess
import time
import pyautogui
import pywinauto
from pyautogui import getWindowsWithTitle
import pygetwindow

async def click_window():
    # Wait for everything to load in, windows, apps etc.
    # You may need to adjust this timing for your system.
    time.sleep(10)
    # We are waiting for the window to open up and then click ok
    app = pywinauto.Application()
    # Tell Pyautogui what the windows name is.
    Win = "C:\\WINDOWS\\System32\\cmd.exe"
    # wait for the window to appear
    app = pywinauto.Application()
    app.WindowSpecification.wait(Win)
    app.window_(title=Win).SetFocus()
    app.window_(title=Win).TypeKeys("Enter")
    # keyboard shortcut R to hit 'Run Cleaner' button,
    pyautogui.press('Enter')
    # Wait 15 seconds to give time for the cleaning operation.
    # You may need to adjust.
    time.sleep(15)
    # Move mouse to exit button and click.
    # pyautogui.moveTo(1905, 10, 1)
    pyautogui.click()
    print("We did it")

async def run_installer():
    import fnmatch
    import glob
    import os
    import shutil
    import time
    import urllib.request
    import zipfile
    # Download the installer
    url = 'http://build.****.com:8080/job/CI_build_***_grs_master_windows_mysql/lastBuild/artifact/com.****.sdm.****/target/release/*zip*/release.zip'
    file_name = 'release.zip'
    with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
        shutil.copyfileobj(response, out_file)
    time.sleep(120)
    print("Download is done")
    file_name = "release.zip"
    with zipfile.ZipFile(file_name, 'r') as zip:
        # printing all the contents of the zip file
        zip.printdir()
        time.sleep(60)
        # extracting all the files
        print('Extracting all the files now...')
        zip.extractall()
        print('I am done extracting Done!')
    print("Path at terminal when executing this file")
    print(os.getcwd() + "\n")
    os.chdir("C:\\Users\\****\\PycharmProjects\\Phyton-Project\\release")
    print("Path at terminal when executing this file again")
    print(os.getcwd() + "\n")
    # Checking installer directory
    for file in glob.glob("*.exe"):
        print(file + " Here is the file found")
    # file variable contains installer
    time.sleep(120)
    # Adding permissions to installer to run/execute
    os.chmod("C:\\Users\\****\\PycharmProjects\\Phyton-Project\\release", 0o777)  # read/write by everyone
    for file in os.listdir('.'):
        if fnmatch.fnmatch(file, '*.exe'):
            print("here is the file" + file)
    # file variable is the exe installer
    time.sleep(120)
    print("Installation starts")
    os.system = (file + '/SILENT')
    # time.sleep(360)

start = time.time()
loop = asyncio.get_event_loop()
tasks = [
    asyncio.gather(run_installer()),
    asyncio.gather(click_window()),
]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
end = time.time()
print("Total time: {}".format(end - start))
Expected Results:
The run_installer function finishes, the window opens up to start the services, then the click_window function starts, clicks on the window, and after four clicks (one per service) the script finishes.
Actual Results:
The run_installer function runs but dies at os.system = (file + '/SILENT') because the installer never installs, and then it throws the following error message:
Connected to pydev debugger (build 192.5728.105)
I am done waiting
File Name Modified Size
release/pa****_**_5.4.3_20190903_win_x64_db.exe 2019-09-03 22:29:42 757267728
Extracting all the files now...
Done!
Path at terminal when executing this file
C:\Users\j****\PycharmProjects\Phyton-Project
Path at terminal when executing this file again
C:\Users\j*****\PycharmProjects\Phyton-Project\release
pa***_***_5.4.3_20190903_win_x64_db.exe Here is the file found
here is the fileparasoft_dtp_5.4.3_20190903_win_x64_db.exe
Installation starts
Total time: 485.42616963386536
_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=AppNotConnected('Please use start or connect before trying anything else')>
Traceback (most recent call last):
File "C:/Users/****/PycharmProjects/Phyton-Project/DTP-Installation.py", line 27, in click_window
app.WindowSpecification.wait(Win)
File "C:\Users\****\PycharmProjects\Phyton-Project\venv\lib\site-packages\pywinauto\application.py", line 1234, in __getattribute__
return self[attr_name]
File "C:\Users\****\PycharmProjects\Phyton-Project\venv\lib\site-packages\pywinauto\application.py", line 1220, in __getitem__
return self.window(best_match=key)
File "C:\Users\*****\PycharmProjects\Phyton-Project\venv\lib\site-packages\pywinauto\application.py", line 1207, in window
raise AppNotConnected("Please use start or connect before trying "
pywinauto.application.AppNotConnected: Please use start or connect before trying anything else
Process finished with exit code
OK, I decided to create a new function to click on the window. So far I can find the window and set the focus, but I can't press Enter or any other key:
import win32api
import win32con
import win32gui
import pyautogui as pg
from pywinauto.win32structures import HANDLE

toplist = []
winlist = []

def enum_callback(hwnd, results):
    winlist.append((hwnd, win32gui.GetWindowText(hwnd)))

win32gui.EnumWindows(enum_callback, toplist)
services = [(hwnd, title) for hwnd, title in winlist if 'system32' in title.lower()]
# just grab the first window that matches
services = services[0]
# use the window handle to set focus
win32gui.SetForegroundWindow(services[0])
win32gui.GetForegroundWindow()
# print windows title
print("Windows Title")
print(services)
print('FOUND')
win32gui.EnableWindow(1, True)
win32api.SendMessage(services, win32con.WM_KEYDOWN, win32con.VK_F11, 0)
print('Window Close')
Response:
C:\Users\jromero\PycharmProjects\Phyton-Project\venv\Scripts\python.exe C:/Users/jromero/PycharmProjects/Phyton-Project/click-enter.py
Windows Title
Traceback (most recent call last):
(334514, 'C:\\WINDOWS\\system32\\cmd.exe')
File "C:/Users/jromero/PycharmProjects/Phyton-Project/click-enter.py", line 32, in <module>
FOUND
win32api.SendMessage(services, win32con.WM_KEYDOWN, win32con.VK_F11, 0)
TypeError: The object is not a PyHANDLE object
Process finished with exit code 1
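For what it's worth, the line the script dies on assigns a string to os.system instead of calling it, and it also drops the space before the /SILENT switch. A hedged sketch of what the call could look like with subprocess instead (the installer path is a placeholder; /SILENT is the flag from the question; the stand-in command below just demonstrates the call shape):

```python
import subprocess
import sys

def run_silent(installer_path):
    """Run an installer with the /SILENT switch and wait for it to exit."""
    # The switch is a separate list element, so there is no
    # missing-space problem as in (file + '/SILENT').
    result = subprocess.run([installer_path, "/SILENT"])
    return result.returncode

# Harmless cross-platform stand-in for the real .exe:
code = subprocess.run([sys.executable, "-c", "print('ok')"]).returncode
```

subprocess.run blocks until the child exits, which also removes the need for the long time.sleep guesses in run_installer.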

Executing program via pyvmomi creates a process, but nothing happens after that

I'm studying vCenter 6.5 and community samples help a lot, but in this particular situation I can't figure out, what's going on. The script:
from __future__ import with_statement
import atexit
from tools import cli
from pyVim import connect
from pyVmomi import vim, vmodl

def get_args():
    # *Boring args parsing works*
    return args

def main():
    args = get_args()
    try:
        service_instance = connect.SmartConnectNoSSL(host=args.host,
                                                     user=args.user,
                                                     pwd=args.password,
                                                     port=int(args.port))
        atexit.register(connect.Disconnect, service_instance)
        content = service_instance.RetrieveContent()
        vm = content.searchIndex.FindByUuid(None, args.vm_uuid, True)
        creds = vim.vm.guest.NamePasswordAuthentication(
            username=args.vm_user, password=args.vm_pwd
        )
        try:
            pm = content.guestOperationsManager.processManager
            ps = vim.vm.guest.ProcessManager.ProgramSpec(
                programPath=args.path_to_program,
                arguments=args.program_arguments
            )
            res = pm.StartProgramInGuest(vm, creds, ps)
            if res > 0:
                print "Program executed, PID is %d" % res
        except IOError, e:
            print e
    except vmodl.MethodFault as error:
        print "Caught vmodl fault : " + error.msg
        return -1
    return 0

# Start program
if __name__ == "__main__":
    main()
When I execute it in console, it successfully connects to the target virtual machine and prints
Program executed, PID is 2036
In the task manager I see a process with the mentioned PID; it was created by the correct user, but there is no GUI for the process (calc.exe). Right-clicking does not allow me to "Expand" the process.
I suppose that this process was created with special parameters, maybe in a different session.
In addition, I tried to run batch file to check if it actually executes, but the answer is no, batch file does not execute.
Any help, advice, or clues would be awesome.
P.S. I tried other scripts and successfully transferred a file to the VM.
P.P.S. Sorry for my English.
Update: All such processes start in session 0.
Have you tried interactiveSession ?
https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/GuestAuthentication.rst
This boolean argument is passed to NamePasswordAuthentication and means the following:
This is set to true if the client wants an interactive session in the guest.

How much CPU should a Python time.sleep(n) call use

I have a programme running on an old laptop which constantly monitors a Dropbox folder for new files. When it's running, the Python process uses close to 50% of the CPU on a dual-core machine, and about 12% on an 8-core machine (which suggests it's using close to 100% of one core). This is giving off a lot of heat.
The relevant bit of code is:
while True:
    files = dict([(f, None) for f in os.listdir(path_to_watch)])
    if len(files) > 0:
        print "You have %s new file/s!" % len(files)
        time.sleep(20)
In the case that there are no new files, surely most of the time should be spent waiting in the time.sleep() call, which I wouldn't have thought would be CPU-intensive - and the answers here seem to say it shouldn't be.
So two questions:
1) Since time.sleep() shouldn't be so CPU-intensive, what is going on here?
2) Is there another way of monitoring a folder for changes which would run cooler?
1) Your sleep only gets called when there are new files.
This should be much better:
while True:
    files = dict([(f, None) for f in os.listdir(path_to_watch)])
    if len(files) > 0:
        print "You have %s new file/s!" % len(files)
    time.sleep(20)
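As an aside, the loop above reports every file in the folder on each pass, not just new ones. A small stdlib sketch that tracks additions between polls (the function name and the set-difference approach are mine, not from the answer):

```python
import os

def poll_new_files(path, seen):
    """Return files added to `path` since the last poll, updating `seen` in place."""
    current = set(os.listdir(path))
    new_files = current - seen
    seen |= new_files
    return sorted(new_files)
```

Inside the while/sleep loop this gives only the genuinely new names, so the "new file/s" message matches what actually changed.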
2) Yes, especially if you're using Linux. Gamin would be something I'd recommend looking into.
Example:
import gamin
import time

mydir = "/path/to/watch"

def callback(path, event):
    global mydir
    try:
        if event == gamin.GAMCreated:
            print "New file detected: %s" % (path)
            fullname = mydir + "/" + path
            print "Going to read", fullname
            data = open(fullname).read()
            print "Going to upload", fullname
            rez = upload_file(data, path)
            print "Response from uploading was", rez
    except Exception, e:  # Not good practice
        print e
        import pdb
        pdb.set_trace()

mon = gamin.WatchMonitor()
mon.watch_directory(mydir, callback)
time.sleep(1)
while True:
    ret = mon.handle_one_event()
mon.stop_watch(mydir)
del mon
There is also a cross platform API to monitor file system changes: Watchdog
