NSSM windows service shows running but output is not visible - python

I have a Python executable that runs at the 5th and 35th minute of every hour. The script contains the code below to run its payload (itcmodel):
minutetorun = [5, 35]
while True:
    thisminute = datetime.now().minute
    if (thisminute in minutetorun) and (thisminute != last_run):
        itcmodel(loaded_model, configdata)
        print("Windows service running.")
        logging.info("Windows service running.")
        last_run = thisminute
When the current minute matches an entry in minutetorun, itcmodel runs.
This entire script is converted to an exe file. The exe file is triggered by NSSM.
Now my problem is that the service suddenly stops after working fine for an entire day or so. I have attached the log file below:
As you can see, the code was running fine until 00:05, then it stopped producing output. After I restarted the service manually, it started working fine again.
If the network had gone down, I would still see a log entry in this file. I also checked Event Viewer; it showed no system shutdown or sleep events.
What might be the reason for this behavior?
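One plausible cause for the silent stop is that a single unhandled exception (a transient failure inside itcmodel, for example) ends the while True loop without writing anything to the log. A hedged sketch of the same loop with traceback logging and a sleep added; the should_run helper and the stubbed-out model call are my own, not from the question:

```python
import logging
import time
from datetime import datetime

def should_run(minute, last_run, run_minutes=(5, 35)):
    """Return True when the model should fire this minute."""
    return minute in run_minutes and minute != last_run

def main_loop():
    last_run = None
    while True:
        try:
            minute = datetime.now().minute
            if should_run(minute, last_run):
                # itcmodel(loaded_model, configdata)  # the model call from the question
                logging.info("Windows service running.")
                last_run = minute
        except Exception:
            # Without this, one unhandled exception ends the loop and the
            # service goes silent -- exactly the observed symptom.
            logging.exception("Iteration failed")
        time.sleep(1)  # also avoids a busy loop pegging one core
```

The original loop has no sleep at all, so it also spins at 100% CPU between runs; the one-second sleep fixes that without changing the minute-granularity trigger.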

Related

Python/Django Prevent a script from being run twice

I have a big script that retrieves a lot of data via an API (Magento invoices). This script is launched by a cron job every 20 minutes. But sometimes we need to refresh manually to get the latest invoiced orders, and I have a dedicated page for this.
I would like to prevent manual launches of the script while it is already running, by testing whether it is already active, because both the API and the script take a lot of resources and time.
I tried adding a "process" model with an is_active = True/False flag that would be tested to avoid re-launching if the script is already active. At the start of the script I switch the process status to True and set it back to False when the script has finished.
But it seems that the second instance of the script waits for the first to finish before starting. In the end both scripts run, because process.is_active is always False.
I also tried with a request.session variable, but hit the same issue.
I've spent a lot of time on this but haven't found a way to achieve my goal.
Has anyone already faced such a case?
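The database-flag approach fails because checking and setting is_active is not atomic: both requests can read False before either writes True (and the read may also see a stale value inside the second request's transaction). An OS-level atomic operation sidesteps the race. A minimal sketch using exclusive lock-file creation; the lock path and the view wiring in the comment are hypothetical:

```python
import os

def acquire_lock(path):
    """Atomically create the lock file; return its fd, or None if another
    instance already holds the lock. O_CREAT | O_EXCL makes creation fail
    if the file exists, so check-and-claim is a single atomic step."""
    try:
        return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

def release_lock(fd, path):
    os.close(fd)
    os.remove(path)

# Hypothetical usage in the manual-refresh view:
# fd = acquire_lock("/tmp/magento_invoice_sync.lock")
# if fd is None:
#     return HttpResponse("Import already running", status=409)
# try:
#     run_invoice_import()
# finally:
#     release_lock(fd, "/tmp/magento_invoice_sync.lock")
```

One caveat with plain lock files: if the process dies without the finally block running, the lock stays behind and must be cleared manually (or the lock file can store the PID so a stale lock can be detected).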

Python on task scheduler is not working with some libraries

I have done a lot of research on the forum and still can't figure out how to solve my issue. I am running a Python script from the Windows Task Scheduler; it starts but stops at some point. I created a log file to see where the code stops running, and it stops when I do an HTTP GET with the requests library:
r = session.get(urlRequest, allow_redirects=True)
The code runs fine in Spyder. Any suggestions?
I created a bat file as well, with the same issue:
@echo off
"C:\Users\NAME\Anaconda3\python.exe" "C:\Users\NAME\Documents\GTD_scheduledTasks\exchangeRate.py"
pause
In my log file, I printed several parameters:
sys.executable: C:\Users\NAME\Anaconda3\python.exe
sys.path:
['C:\Users\NAME\Documents\GTD_scheduledTasks',
'C:\Users\NAME\Anaconda3\python39.zip',
'C:\Users\NAME\Anaconda3\DLLs',
'C:\Users\NAME\Anaconda3\lib',
'C:\Users\NAME\Anaconda3',
'C:\Users\NAME\Anaconda3\lib\site-packages',
'C:\Users\NAME\Anaconda3\lib\site-packages\win32',
'C:\Users\NAME\Anaconda3\lib\site-packages\win32\lib',
'C:\Users\NAME\Anaconda3\lib\site-packages\Pythonwin']
os.getcwd() C:\WINDOWS\system32
Thanks!
Edit: I also checked from Spyder where my Python executable is, and I used the one from sys.executable (C:\Users\NAME\Anaconda3\python.exe), both with a single \ and with double \\.
When I go to the event history in the Task Scheduler, I see:
successfully completed: I don't get an error return, but some statements are not processed (the requests)
actionName is C:\WINDOWS\SYSTEM32\cmd.exe. Not sure if that's relevant.
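A run under Task Scheduler gets a different working directory (C:\WINDOWS\system32, as the log above shows) and often a different environment (proxy variables, certificate bundles) than an interactive Spyder session, which is a common reason requests.get stalls or fails only there. A hedged diagnostic sketch; the URL and the choice of environment keys to inspect are assumptions, and requests is assumed installed in the same Anaconda environment:

```python
import logging
import os

def scheduler_env_snapshot():
    """Collect the values that most often differ between an interactive
    session (Spyder) and a Task Scheduler run."""
    keys = ("HTTP_PROXY", "HTTPS_PROXY", "REQUESTS_CA_BUNDLE", "USERPROFILE")
    snap = {k: os.environ.get(k) for k in keys}
    snap["cwd"] = os.getcwd()
    return snap

def fetch_with_timeout(url):
    """GET with a hard timeout so a hang under Task Scheduler surfaces as a
    logged exception instead of a silently stalled task."""
    import requests  # third-party; assumed present in the Anaconda env
    try:
        r = requests.get(url, allow_redirects=True, timeout=30)
        logging.info("status=%s", r.status_code)
        return r
    except requests.RequestException:
        logging.exception("GET failed")
        return None
```

Logging the snapshot at the top of the script, then comparing the Spyder run against the scheduled run, usually points straight at the differing setting; setting "Start in" on the scheduled task fixes the working-directory half of the problem.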

Using Python Script to run an Alteryx workflow via the Microsoft Task Scheduler on Remote Desktop

My Python script works perfectly fine as long as I'm logged into the remote desktop while the Task Scheduler initiates it. However, if I'm no longer connected to the remote desktop, it terminates because pyautogui has a fail-safe. When I set the fail-safe to False, the whole workflow just doesn't run at all (no matter where in the workflow I put it).
My goal is for this to run on the remote desktop without me having to be logged onto it.
Current path:
Windows Task Scheduler
Runs Run_Alteryx_Workflow.bat
Runs Run_Alteryx_Workflow.py
Runs Alteryx_Workflow.yxmd
Code for Run_Alteryx_Workflow.py:
import pyautogui as pg
import os
import time
# pg.moveTo(800, 800, 1.5)
# pg.FAILSAFE = False
time.sleep(10)
# OPEN ALTERYX
os.startfile(r'C:\Users\eXXXXXX\Desktop\Alteryx_Workflow.yxmd')
time.sleep(30) #build in enough time for Alteryx to open the workflow
# RUN ALTERYX WORKFLOW
pg.hotkey('ctrl', 'r')
# time.sleep(300)
time.sleep(100)
# CLOSE ALTERYX
pg.hotkey('alt', 'f4')
Code for Batch File (Run_Alteryx_Workflow.bat):
"python" "C:\Users\eXXXXXX\Desktop\Run_Alteryx_Workflow.py"
Thank you for your help! This is my first post to StackOverflow!
Much of my code is commented out from testing things.
My biggest issue is running it while I'm not logged into my remote desktop!
You can't automate Alteryx like this; it just doesn't work.
You need to purchase Desktop Automation.
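Since the answer above rules out GUI automation, the usual headless route is the Alteryx command-line engine. A hedged sketch, assuming AlteryxEngineCmd.exe exists in your Alteryx install (it ships with Designer, and running workflows this way is tied to the Desktop Automation license the answer mentions); both paths are hypothetical:

```python
import subprocess

# Both paths are hypothetical -- adjust to your install and workflow.
ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"
WORKFLOW = r"C:\Users\eXXXXXX\Desktop\Alteryx_Workflow.yxmd"

def run_workflow(engine=ENGINE, workflow=WORKFLOW):
    """Run the workflow headlessly and report success.

    No GUI is involved, so this works without an interactive desktop
    session -- unlike the pyautogui approach above."""
    result = subprocess.run([engine, workflow], capture_output=True, text=True)
    return result.returncode == 0
```

If this works in your environment, the Task Scheduler action can point straight at a bat file invoking it, and pyautogui (along with the fail-safe problem) disappears entirely.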

My Python script containing an infinite loop stops, which I don't want. What to do?

I have a Python script that uses selenium and chromedriver to get some info from a website. I used the Python schedule module to make the script run every 2 minutes. It runs fine and gets the info I want. But whenever another script scheduled in the Windows Task Scheduler gets triggered, this script stops, even though it contains an infinite loop. I don't want another script causing this one to stop. How can I handle that?
This is how my code is structured.
def get_info():
    # code here

schedule.every(2).minutes.until("18:30").do(get_info)

while True:
    schedule.run_pending()
    time.sleep(1)
I want this infinite loop to run for a long time without being interrupted by other tasks and scripts in the Windows Task Scheduler. What should I do? Any suggestion is appreciated.
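Another Task Scheduler job cannot terminate this process directly, so the likelier culprit is an exception raised inside get_info (for example chromedriver being killed by the other task) propagating out of run_pending and ending the loop. A hedged sketch of a loop that logs failures and keeps going; the resilient_loop helper is my own, with a plain callable standing in for the schedule module:

```python
import logging
import time

def resilient_loop(job, iterations=None, delay=0.0):
    """Call job repeatedly, logging failures instead of letting one
    exception end the loop. iterations=None runs forever, matching the
    question's while True; pass a number to make the loop testable."""
    count = 0
    failures = 0
    while iterations is None or count < iterations:
        try:
            job()
        except Exception:
            failures += 1
            logging.exception("Job failed; loop continues")
        count += 1
        time.sleep(delay)
    return failures

# In the question's script this would be:
# resilient_loop(schedule.run_pending, delay=1)
```

Logging the traceback rather than swallowing it also answers the original question of *why* the loop stopped: the next failure will show up in the log with a full stack trace.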

Steam browser protocol failing silently when run over ssh

I am trying to launch a steam game on my computer through an ssh connection (into a Win10 machine). When run locally, the following python call works.
subprocess.run("start steam://rungameid/[gameid]", shell=True)
However, whenever I run this over an ssh connection—either in an interactive interpreter or by invoking a script on the target machine—my steam client suddenly exits.
I haven't noticed anything in the Steam logs, except that Steam\logs\connection_log.txt contains a logoff and a new session start each time. This is not the case when I run the command locally on my machine. Why is Steam aware of the different sources of this command, and why does it cause the Steam connection to drop? Can anyone suggest a workaround?
Thanks.
Steam is likely failing to launch the application because Windows services, including OpenSSH server, cannot access the desktop, and, hence, cannot launch GUI applications. Presumably, Steam does not expect to run an application in an environment in which it cannot interact with the desktop, and this is what eventually causes Steam to crash. (Admittedly, this is just a guess—it's hard to be sure exactly what is happening when the crash does not seem to appear in the logs or crash dumps.)
You can see a somewhat more detailed explanation of why starting GUI applications over SSH fails when the server is run as a Windows service in this answer by domih to this question about running GUI applications over SSH on Windows.
domih also suggests some workarounds. If it is an option for you, the simplest one is probably to download and run OpenSSH server manually instead of running the server as a service. You can find the latest release of Win32-OpenSSH/Windows for OpenSSH here.
The other workaround that still seems to work is to use schtasks. The idea is to create a scheduled task that runs your command—the Task Scheduler can access the desktop. Unfortunately, this is only an acceptable solution if you don't mind waiting until the next minute at least; schtasks can only schedule tasks to occur exactly on the minute. Moreover, to be safe to run at any time, code should probably schedule the task for at least one minute into the future, meaning that wait times could be anywhere between 1–2 minutes.
There are also other drawbacks to this approach. For example, it's probably harder to monitor the running process this way. However, it might be an acceptable solution in some circumstances, so I've written some Python code that can be used to run a program with schtasks, along with an example. The code depends on the shortuuid package; you will need to install it before trying the example.
import subprocess
import tempfile
import datetime

import shortuuid

def run_with_schtasks_soon(s, delay=2):
    """
    Run a program with schtasks with a delay of no more than
    delay minutes and no less than delay - 1 minutes.
    """
    # delay needs to be no less than 2 since, at best, we
    # could be calling subprocess at the end of the minute.
    assert delay >= 2
    task_name = shortuuid.uuid()
    temp_file = tempfile.NamedTemporaryFile(mode="w", suffix=".bat", delete=False)
    temp_file.write('{}\nschtasks /delete /tn {} /f\ndel "{}"'.format(s, task_name, temp_file.name))
    temp_file.close()
    run_time = datetime.datetime.now() + datetime.timedelta(minutes=delay)
    time_string = run_time.strftime("%H:%M")
    # This is locale-specific. You will need to change this to
    # match your locale. (locale.setlocale and the "%x" format
    # do not seem to work here)
    date_string = run_time.strftime("%m/%d/%Y")
    return subprocess.run("schtasks /create /tn {} /tr {} /sc once /st {} /sd {}".format(task_name,
                                                                                        temp_file.name,
                                                                                        time_string,
                                                                                        date_string),
                          shell=True)

if __name__ == "__main__":
    # Runs The Witness (if you have it)
    run_with_schtasks_soon("start steam://rungameid/210970")
