I have a Python script which puts magnet links into Transmission. When I run it through the terminal it runs OK: it opens Transmission if it's closed and adds the torrent(s). But when I put it in a cron job, Transmission doesn't open, even though I know the cron job is running because it writes the name of the file being added to a text file.
import os

def download_movie(magnet_link):
    os.system('transmission-gtk ' + magnet_link)
As you can see the code is pretty simple and just invokes transmission and passes the magnet link. Thank you.
Although reasons for this may vary, what solved the issue for me most of the time was logging in as the superuser and then setting up the cron job.
If that does not work, additional information would be needed, so consider also posting the log. It should be in /var/log/syslog.
What might also help is using an absolute path for Python: instead of python, write the full path, normally /usr/bin/python followed by your version number.
If the script is really simple you could write the code in bash, it would be something like this...
magnetlink=`cat file.txt | cut -d ' ' -f1`
echo "magnetlink" | transmission-gtk
Like @frankenapps said, you could try adding the code to:
sudo crontab -e
Related
I made a simple project in Python that pings a server every few seconds, and I want to store the ping data in a .txt file. (It might also be cool to put it in a GUI, but I need it in a txt file for now.) Right now it just shows the ping in the terminal, so I have no idea how to make it go into a txt file, because I'm new at coding.
(here's my code btw)
import os
import time
while 1:
    os.system('ping 1.1.1.1 -n 1')
    time.sleep(5)
I didn't try much because I couldn't figure anything out; I looked stuff up and nothing was what I wanted.
(also I'm a noob at coding anyways)
You'll have to run your code with Popen instead of os.system (which is a bad idea in most cases, anyway, for security reasons).
With Popen (python.org -> documentation is your friend!) you can capture the output of the programs you run. You can then write that to a file object. (That's a built-in type in python. Again, official documentation on this is good and comes with examples!)
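A minimal sketch of that approach (the log file name is just an example, and the ping flags are taken from your snippet; '-n 1' is the Windows form, on Linux it would be '-c 1'):
import subprocess
import time

while True:
    # run one ping and capture its output instead of letting it go to the terminal
    proc = subprocess.Popen(['ping', '1.1.1.1', '-n', '1'],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output, _ = proc.communicate()
    # append the captured text to a file (name is just an example)
    with open('ping_log.txt', 'a') as log_file:
        log_file.write(output.decode())
    time.sleep(5)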
I honestly don't see a reason to write the results of ping to a file. Wouldn't you just care about whether that ping worked and was reasonably fast? Maybe extract that information and just log that instead!
I have a python script I'm successfully executing every night at midnight. It's outputting the log file, however, I want it to also send an email with the log contents.
I've read this is pretty easy to do, but I've had no luck thus far. I've tried this but it does not work. Does anyone else have some other suggestions?
I'm running Ubuntu 14.04, if that makes a difference with the mail smtp.
MAILTO=mcgoga12@wfu.edu
0 0 * * * /usr/bin/python /home/grant/Developer/Projects/StudyBug/Main.py > /home/grant/Desktop/Studybuglog.log 2>&1
Cron will take everything the command sends to its standard output (what would be shown on the screen if you ran the command from the command line) and send it in an email to the address in MAILTO.
Unfortunately for you, you are changing the behaviour of this command using shell redirection. If you ran the command exactly as written above, there would be nothing shown on the screen because everything is written to the file (because you redirect standard output to the file using the '>' operator).
If you want an email, remove the >, and everything after it and then test.
If you also want to write to a log file, you might try the 'tee' command, or changing your script to take a log file as a command line argument, and write to both the log file and the standard output.
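If you go the script-argument route, a rough sketch of what Main.py could do (the argument handling and the message are just assumptions):
import sys

def log(log_path, message):
    # write to the log file and to standard output, so cron can still email it
    with open(log_path, 'a') as log_file:
        log_file.write(message + '\n')
    sys.stdout.write(message + '\n')

if __name__ == '__main__':
    log_path = sys.argv[1]  # e.g. /home/grant/Desktop/Studybuglog.log
    log(log_path, 'StudyBug run finished')  # hypothetical message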
Very specific question: I am writing a (Python) script that should
generate a temporary file,
launch an image viewer,
and, upon closing the viewer, delete the temporary file.
On Linux, this would work fine because I'd open a subprocess from Python and run the following:
eog myimg.png; rm myimg.png
However, the story is different on Mac. The open command launches the application in a different process. If I use /Applications/Preview.app/Contents/MacOS/Preview, I get a weird permissions issue. This persists if I kill my script, leaving the file, then fire up Terminal.app:
Running open myimg.png works as expected. Running /Applications/Preview.app/Contents/MacOS/Preview myimg.png gets the same permissions error. (Meaning to say, it's not actually a file permissions error.) And FWIW, the file is 444 anyway.
My guess is that the open command runs applications from a different user, which is allowed to access parent directories that my user is not using, something like that.
Anyway, anyone know exactly what's going on, and what a viable solution would be? Thank you!
EDIT
Current code is
import subprocess

name = '/var/folders/qy/w9zq1h3d22ndc2d_7hgwj2zm0000gn/T/tmpDHRg2T.png'
viewer_command = 'open'
subprocess.Popen(viewer_command + ' ' + name + ' ; rm ' + name, shell=True)
Anyway, anyone know exactly what's going on, and what a viable solution would be?
It's hard to say for certain without more details (like how your script is being run, and what error message you are getting), but it seems likely that your script is either running as a different user from the console user, or is not running within the login context.
Contexts are an unusual feature of Mac OS X, and are documented in Apple's Kernel Programming Guide ("Bootstrap Contexts"). In brief, a process that is not launched from a process descended from the login window (for instance, a process that is launched through SSH) will not have the necessary access to services within that context, such as the WindowServer, to start a desktop application.
#!/bin/sh
TMP=$(mktemp sperry.XXXXXX.jpg)
echo "Made $TMP"
IMG="/Users/sean_perry/Pictures/Photo Booth Library/Pictures/Photo on 6-8-12 at 4.37 PM.jpg"
cp "$IMG" $TMP
open "$TMP"
rm $TMP
This script works just fine on my OSX machine.
So does
#!/usr/bin/python
import subprocess
subprocess.call(["open", "/Users/me/Pictures/Photo Booth Library/Pictures/Photo on 6-8-12 at 4.37 PM.jpg"])
Can you post yours?
I wrote a simple script using python-daemon which prints to sys.stdout:
#!/usr/bin/env python
#-*- coding: utf-8 -*-
import daemon
import sys
import time
def main():
    with daemon.DaemonContext(stdout=sys.stdout):
        while True:
            print "matt daemon!!"
            time.sleep(3)

if __name__ == '__main__':
    main()
The script works as I would hope, except for one major flaw--it interrupts my input when I'm typing in my shell:
(daemon)modocache $ git clomatt daemon!!
matt daemon!!ne
matt daemon!! https://github.com/schacon/cowsay.git
(daemon)modocache $
Is there any way for the output to be displayed in a non-intrusive way? I'm hoping for something like:
(daemon)modocache $ git clo
matt daemon!! # <- displayed on new line
(daemon)modocache $ git clo # <- what I had typed so far is displayed on a new line
Please excuse me if this is a silly question, I'm not too familiar with how shells work in general.
Edit: Clarification
The reason I would like this script to run daemonized is that I want to provide updates to the shell user from within the shell, such as printing weather updates to the console in a non-intrusive way. If there is a better way to accomplish this, please let me know. But the purpose is to display information from within the terminal (not via, say, Growl notifications), without blocking.
If it doesn't need to be an "instant" notification, and you can wait until the next time the user runs a command, you can bake all kinds of things into your bash shell prompt. Mine tells me the time and the git repository status for the directory I'm in, for example.
The shell variable for "normal user" shell prompts is PS1, so Googling for bash PS1 or bash prompt customisation will get you some interesting examples.
Here are some links:
Some basic customisations
A more complex example: git bash prompt
In general, you can include the output of any arbitrary script in the prompt string. Be aware, however, that high-latency commands will delay printing of the prompt string until they can be evaluated, so it may be a good idea to cache information. (For example, if you want to display the weather from a weather website, don't make your bash prompt go out and retrieve the webpage every time the prompt is displayed!)
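As a rough illustration of the caching idea (the cache path, the refresh interval and the placeholder weather text are all made up), a small Python helper could be called from the prompt with something like PS1='$(python ~/prompt_info.py) \$ ':
import os
import time

CACHE = os.path.expanduser('~/.prompt_cache')  # hypothetical cache file
MAX_AGE = 600  # refresh the cached text at most every ten minutes

def fetch_info():
    # placeholder for the slow part, e.g. fetching a weather summary from a website
    return 'sunny, 21C'

if os.path.exists(CACHE) and time.time() - os.path.getmtime(CACHE) < MAX_AGE:
    with open(CACHE) as cache_file:
        print(cache_file.read().strip())
else:
    info = fetch_info()
    with open(CACHE, 'w') as cache_file:
        cache_file.write(info)
    print(info)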
A daemon process, by definition, should run in the background. Therefore it should write to a log file.
So either you redirect its output to a log file (a shell redirect, or handing it over to some syslog daemon), or you make it write to a log file in the Python code.
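With python-daemon specifically, that could look roughly like this (the log path is just an example; as far as I know DaemonContext accepts an ordinary file object for stdout/stderr):
import daemon
import time

# send the daemon's output to a log file instead of the controlling terminal
log_file = open('/tmp/matt_daemon.log', 'a')  # example path

with daemon.DaemonContext(stdout=log_file, stderr=log_file):
    while True:
        print("matt daemon!!")
        time.sleep(3)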
Update:
man write
man wall
http://linux.die.net/man/1/write, http://linux.die.net/man/1/wall
It probably is best practice to write to a log file for daemons. But could you not write to stderr and have the behavior desired above with inter-woven lines?
Take a look at the logging library (part of the standard library). This can be made to route debug and runtime data either to the console or a file (or anywhere for that matter) depending on the state of the system.
It provides several log facilities, e.g. error, debug, info. Each can be configured differently.
See documentation on logging - link
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
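And if you want the console and the file configured differently, you can attach separate handlers with their own levels, roughly like this:
import logging

logger = logging.getLogger('daemon_example')
logger.setLevel(logging.DEBUG)

# everything from DEBUG upwards goes to the file...
file_handler = logging.FileHandler('example.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# ...but only warnings and errors reach the console
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
logger.addHandler(console_handler)

logger.debug('only in the file')
logger.warning('in the file and on the console')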
I'm debugging a service I'm developing, which basically will open my .app and pass it some data on stdin. But it doesn't seem like it's possible to do something like:
open -a myapp.app < foo_in.txt
Is it possible to pass stuff to an .app's stdin at all?
Edit:
Sorry, I should have posted this on SO and been more clear. What I'm trying to do is this: I have an app made with Python + py2app. I want to handle both the case where a user drops a file on it, and using it as a service. The first case isn't a problem since py2app has argv_emulation; I just check whether the first argument is a path.
But reading from stdin doesn't work at all; it doesn't read any data regardless of whether I do it as in the example above or pipe it. If I pass the stdin data to the actual Python main script, it works. So, to rephrase my question: is it possible to read from stdin with a py2app bundle?
What do you mean with using it as a service?
The example you show won't work: the open command calls LaunchServices to launch the application, and there is no place in the LaunchServices API to pass stdin data or similar to the application.
If you mean adding an item to the OS X Services Menu, you should look at the introductory documentation for developers.
Well,
open -a /Applications/myapp.app < foo_in.txt
will open foo_in.txt in your myapp.app application. You need the full path of the application, be it Applications, bin, or wherever it is...
It depends on what your application does. This may be more appropriate:
cat foo_in.txt | your_command_goes_here
That will read the contents of foo_in.txt (with cat) and pass them to stdin (with the pipe), so then you just follow that with your command / application.
To start Finder as root, one would not use:
sudo open -a /System/Library/CoreServices/Finder.app
The above runs open as root, but still open runs Finder as the normal user. Instead, one would use:
sudo /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder
So, following that, maybe (I am really just guessing) one needs:
myapp.app/Contents/MacOS/myapp < foo_in.txt
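From a script, the same idea would be to run the executable inside the bundle directly and redirect its stdin, for example (the paths are just the ones from the question):
import subprocess

# bypass LaunchServices and run the binary inside the bundle, so stdin can be redirected
with open('foo_in.txt', 'rb') as data:
    subprocess.call(['myapp.app/Contents/MacOS/myapp'], stdin=data)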
You should almost certainly be doing this through Mach ports or Distributed Objects or pretty much any other method of interapplication communication the OS makes available to you.
open creates an entirely new process. Therefore do not use it to redirect stuff into an application from Terminal.
You could try
./Foo.app/Contents/MacOS/Foo < Foo.txt
The already mentioned cat Foo.txt | ./Foo.app/Contents/MacOS/Foo may work, very much depending on whether you have made Foo executable and it's in your path. In your case I'd check the .app package for a Resources folder, which may contain another binary.
A *.app package is a directory. It cannot handle command-line arguments.