I have done a lot of research on the forum and still can't figure out how to solve my issue. I am running a Python script from the Windows Task Scheduler; it starts, but stops at some point. I created a log file to see where the code stops running, and it stops when I do an HTTP GET with the requests library:
r = session.get(urlRequest, allow_redirects=True)
The code runs fine in Spyder. Any suggestions?
I also created a bat file, with the same issue:
@echo off
"C:\Users\NAME\Anaconda3\python.exe" "C:\Users\NAME\Documents\GTD_scheduledTasks\exchangeRate.py"
pause
In my log file, I printed several parameters:
sys.executable: C:\Users\NAME\Anaconda3\python.exe
sys.path:
['C:\Users\NAME\Documents\GTD_scheduledTasks',
'C:\Users\NAME\Anaconda3\python39.zip',
'C:\Users\NAME\Anaconda3\DLLs',
'C:\Users\NAME\Anaconda3\lib',
'C:\Users\NAME\Anaconda3',
'C:\Users\NAME\Anaconda3\lib\site-packages',
'C:\Users\NAME\Anaconda3\lib\site-packages\win32',
'C:\Users\NAME\Anaconda3\lib\site-packages\win32\lib',
'C:\Users\NAME\Anaconda3\lib\site-packages\Pythonwin']
os.getcwd() C:\WINDOWS\system32
Thanks!
Edit: I also checked from Spyder where my Python executable is, and I used the one from sys.executable (C:\Users\NAME\Anaconda3\python.exe), both with a single \ and with double \\.
When I go to the event history in the Task Scheduler, I see:
"successfully completed": I don't get an error return, but some statements (the requests) are not processed.
The actionName is C:\WINDOWS\SYSTEM32\cmd.exe. Not sure if that's relevant.
I have a big script that retrieves a lot of data via an API (Magento invoices). This script is launched by a cron job every 20 minutes. But sometimes we need to refresh manually to get the latest invoiced orders; I have a dedicated page for this.
I would like to prevent manual launching of the script while it is already running, by testing whether it is active, because both the API and the script take a lot of resources and time.
I tried adding a "process" model with an is_active = True/False flag that is tested to avoid re-launching the script if it is already active. At the start of the script I switch the process status to True, and I set it back to False when the script has finished.
But it seems that the second instance of the script waits for the first to finish before starting. In the end both scripts run, because process.is_active is always False.
I also tried with a request.session variable, but had the same issue.
I have spent a lot of time on this but haven't found a way to achieve my goal.
Has anyone already faced such a case?
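For anyone facing the same problem: the database flag approach tends to fail because both requests read is_active inside their own transactions before either one commits, so each sees False. A common alternative is an exclusive lock file created atomically; the file name and function names below are illustrative, not from the question:

```python
import os
import tempfile

# Illustrative lock-file path; pick one writable by the web server user.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "invoice_sync.lock")

def acquire_lock():
    """Atomically create the lock file; return a fd, or None if it exists."""
    try:
        # O_CREAT | O_EXCL makes creation atomic: exactly one caller wins,
        # even if two requests arrive at the same instant.
        return os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None

def release_lock(fd):
    os.close(fd)
    os.remove(LOCK_PATH)

fd = acquire_lock()
if fd is None:
    print("script already running, refusing to start")
else:
    try:
        pass  # ... fetch the invoices via the API ...
    finally:
        release_lock(fd)
```

The try/finally matters: if the script crashes without releasing, the stale lock file blocks all future runs, so some setups also check the file's age before refusing.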
I have a script which I want to run automatically on a daily basis, at the same time every day. I tried to use the Windows Task Scheduler, but no luck so far. FYI, I can run the same script from the console without any issue.
I ran where python, which returned:
C:\Users\name\Anaconda3\python.exe
C:\Users\name\AppData\Local\Programs\Python\Python38\python.exe
C:\Users\name\AppData\Local\Microsoft\WindowsApps\python.exe
Then in the Task Scheduler I set:
Program/script: "C:\Users\name\AppData\Local\Programs\Python\Python38\python.exe"
Add arguments: Nomura_Daily_PnL_Check.py
Start in: "C:\Users\name\Jobs\scripts_need_to_run_daily"
When the scheduled time comes, literally nothing happens. No error, no output, nothing!
What is wrong with this process?
Try entering the values without double quotes, like:
Program/script: C:\Users\name\AppData\Local\Programs\Python\Python38\python.exe
Add arguments: Nomura_Daily_PnL_Check.py
Start in: C:\Users\name\Jobs\scripts_need_to_run_daily
Also, first try running a simple program, say a Python program that writes to a file, so that you can tell whether the task itself is failing or there is some issue with your program.
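Following that advice, a minimal diagnostic script (file name is just an example) that appends a timestamp each time it runs, so you can tell whether the task fired at all:

```python
import datetime
import os

# Write next to the script itself rather than the current working
# directory: Task Scheduler often starts tasks in C:\WINDOWS\system32,
# where the log would otherwise silently land (or fail to be created).
log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                        "task_heartbeat.log")
with open(log_path, "a") as f:
    f.write("ran at {}\n".format(datetime.datetime.now().isoformat()))
```

If this file gains a line at the scheduled time but your real script produces nothing, the problem is in the script's environment (working directory, paths, proxies), not in the scheduler setup.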
I am trying to launch a steam game on my computer through an ssh connection (into a Win10 machine). When run locally, the following python call works.
subprocess.run("start steam://rungameid/[gameid]", shell=True)
However, whenever I run this over an ssh connection—either in an interactive interpreter or by invoking a script on the target machine—my steam client suddenly exits.
I haven't noticed anything in the steam logs except that Steam\logs\connection_log.txt contains logoff and a new session start each time. This is not the case when I run the command locally on my machine. Why is steam aware of the different sources of this command, and why is this causing the steam connection to drop? Can anyone suggest a workaround?
Thanks.
Steam is likely failing to launch the application because Windows services, including OpenSSH server, cannot access the desktop, and, hence, cannot launch GUI applications. Presumably, Steam does not expect to run an application in an environment in which it cannot interact with the desktop, and this is what eventually causes Steam to crash. (Admittedly, this is just a guess—it's hard to be sure exactly what is happening when the crash does not seem to appear in the logs or crash dumps.)
You can see a somewhat more detailed explanation of why starting GUI applications over SSH fails when the server is run as a Windows service in this answer by domih to this question about running GUI applications over SSH on Windows.
domih also suggests some workarounds. If it is an option for you, the simplest one is probably to download and run OpenSSH server manually instead of running the server as a service. You can find the latest release of Win32-OpenSSH/Windows for OpenSSH here.
The other workaround that still seems to work is to use schtasks. The idea is to create a scheduled task that runs your command—the Task Scheduler can access the desktop. Unfortunately, this is only an acceptable solution if you don't mind waiting until the next minute at least; schtasks can only schedule tasks to occur exactly on the minute. Moreover, to be safe to run at any time, code should probably schedule the task for at least one minute into the future, meaning that wait times could be anywhere between 1–2 minutes.
There are also other drawbacks to this approach. For example, it's probably harder to monitor the running process this way. However, it might be an acceptable solution in some circumstances, so I've written some Python code that can be used to run a program with schtasks, along with an example. The code depends on the shortuuid package; you will need to install it before trying the example.
import subprocess
import tempfile
import shortuuid
import datetime


def run_with_schtasks_soon(s, delay=2):
    """
    Run a program with schtasks with a delay of no more than
    delay minutes and no less than delay - 1 minutes.
    """
    # delay needs to be no less than 2 since, at best, we
    # could be calling subprocess at the end of the minute.
    assert delay >= 2
    task_name = shortuuid.uuid()
    temp_file = tempfile.NamedTemporaryFile(mode="w", suffix=".bat", delete=False)
    # The batch file runs the command, then deletes the scheduled
    # task and finally itself.
    temp_file.write('{}\nschtasks /delete /tn {} /f\ndel "{}"'.format(
        s, task_name, temp_file.name))
    temp_file.close()
    run_time = datetime.datetime.now() + datetime.timedelta(minutes=delay)
    time_string = run_time.strftime("%H:%M")
    # This is locale-specific. You will need to change this to
    # match your locale. (locale.setlocale and the "%x" format
    # do not seem to work here.)
    date_string = run_time.strftime("%m/%d/%Y")
    # Quote the batch file path in case it contains spaces.
    return subprocess.run(
        'schtasks /create /tn {} /tr "{}" /sc once /st {} /sd {}'.format(
            task_name, temp_file.name, time_string, date_string),
        shell=True)


if __name__ == "__main__":
    # Runs The Witness (if you have it)
    run_with_schtasks_soon("start steam://rungameid/210970")
We have been running a script on a partner's computer for 18 hours. We underestimated how long it would take, and now we need to turn in the results. Is it possible to stop the script from running, but still have access to all the lists we are building?
We need to add additional code to the one we are currently running that will use the lists being populated right now. Is there a way to stop the process, but still use (what has been generated of) the lists in the next portion of code?
My partner was using python interactively.
update
We were able to successfully print the results and copy and paste after interrupting the program with control-C.
Well, OP doesn't seem to need an answer anymore. But I'll answer anyway for anyone else coming across this.
While it is true that stopping the program will discard all data from memory, you can still save it first: you can inject a debug session and save whatever you need before you kill the process.
Both PyCharm and PyDev support attaching their debugger to a running python application.
See here for an explanation how it works in PyCharm.
Once you've attached the debugger, you can set a breakpoint in your code and the program will stop when it hits that line the next time. Then you can inspect all variables and run some code via the 'Evaluate' feature. This code may save whatever variable you need.
I've tested this with PyCharm 2018.1.1 Community Edition and Python 3.6.4.
In order to do so I ran this code which I saved as test.py
import collections
import time

data = collections.deque(maxlen=100)
i = 0
while True:
    data.append(i % 1000)
    i += 1
    time.sleep(0.001)
via the command python3 test.py from an external Windows PowerShell instance.
Then I opened that file in PyCharm and attached the debugger. I set a breakpoint at the line i += 1 and it halted right there. Then I evaluated the following code fragment:
import json
with open('data.json', 'w') as ofile:
    json.dump(list(data), ofile)
And found all entries from data in the json file data.json.
Follow-up:
This even works in an interactive session! I ran the very same code in a jupyter notebook cell and then attached the debugger to the kernel. Still having test.py open, I set the breakpoint again on the same line as before and the kernel halted. Then I could see all variables from the interactive notebook session.
I don't think so. Stopping the program should also release all of the memory it was using.
edit: See Swenzel's comment for one way of doing it.
Very specific question: I am writing a (Python) script that should

1. generate a temporary file,
2. launch an image viewer, and
3. upon closing the viewer, delete the tmp file.
In linux, this would work fine because I'd open a subprocess from Python and run the following
eog myimg.png; rm myimg.png
However, the story is different on Mac. The open command launches the viewer in a different process. If I use /Applications/Preview.app/MacOS/Preview, I get a weird permissions issue. This persists if I kill my script (leaving the file behind) and then fire up Terminal.app:
Running open myimg.png works as expected. Running /Applications/Preview.app/MacOS/Preview myimg.png gives the same permissions error. (Meaning to say: it's not actually a file permissions error.) And FWIW, the file is mode 444 anyway.
My guess is that the open command runs applications from a different user, which is allowed to access parent directories that my user is not using, something like that.
Anyway, anyone know exactly what's going on, and what a viable solution would be? Thank you!
EDIT
Current code is:

import subprocess

name = '/var/folders/qy/w9zq1h3d22ndc2d_7hgwj2zm0000gn/T/tmpDHRg2T.png'
viewer_command = 'open'
subprocess.Popen(viewer_command + ' ' + name + ' ; rm ' + name, shell=True)
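One possible workaround, sketched under the assumption that blocking until Preview quits is acceptable: the macOS open command's -W flag waits until the launched application exits, which gives the script a safe point at which to delete the file. The function name here is just illustrative:

```python
import os
import subprocess

def view_then_delete(path):
    """Open `path` with the default viewer, then delete it.

    `open -W` blocks until the launched application quits, not merely
    until the window closes, so this only works cleanly when the
    viewer was not already running.
    """
    subprocess.call(["open", "-W", path])
    os.remove(path)
```

Passing the path as a list element also sidesteps the shell quoting problems that the string-concatenation version above has with paths containing spaces.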
Anyway, anyone know exactly what's going on, and what a viable solution would be?
It's hard to say for certain without more details (like how your script is being run, and what error message you are getting), but it seems likely that your script is either running as a different user from the console user, or is not running within the login context.
Contexts are an unusual feature of Mac OS X, and are documented in Apple's Kernel Programming Guide ("Bootstrap Contexts"). In brief, a process that is not launched from a process descended from the login window (for instance, a process that is launched through SSH) will not have the necessary access to services within that context, such as the WindowServer, to start a desktop application.
#!/bin/sh
TMP=$(mktemp sperry.XXXXXX.jpg)
echo "Made $TMP"
IMG="/Users/sean_perry/Pictures/Photo Booth Library/Pictures/Photo on 6-8-12 at 4.37 PM.jpg"
cp "$IMG" $TMP
open "$TMP"
rm $TMP
This script works just fine on my OSX machine.
So does
#!/usr/bin/python
import subprocess
subprocess.call(["open", "/Users/me/Pictures/Photo Booth Library/Pictures/Photo on 6-8-12 at 4.37 PM.jpg"])
Can you post yours?