How to import another script in the same dir? - python

I am trying to schedule a few Python scripts and run them from main.py. The scripts are in the same folder.
main.py:
import schedule
import time
from test1 import dd

schedule.every(2).seconds.do(dd, fname)

while True:
    schedule.run_pending()
    time.sleep(1)
test1.py:
def dd(fname):
    print('hello' + fname)

dd('Mary')
dd('John')
When I run it, it prints those two names and then raises NameError: name 'fname' is not defined.
How do I define the argument in the main.py file? And if I have more than one def in the script, do I need to import each one separately in main.py?
Also, the script imported at the top of main.py seems to run once before the schedule starts. Does that mean it runs once just by being imported?

You are not defining fname in main.py, so it says name 'fname' is not defined. You are only importing the function into main.py from test1.py.
Here is the modified code:
main.py
import schedule
import time
from test1 import dd

fname = "Mary"
schedule.every(2).seconds.do(dd, fname)

while True:
    schedule.run_pending()
    time.sleep(1)
test1.py
def dd(fname):
    print('hello' + fname)
If you want to pass more than one string, simply use a list! Here is the sample code for test1.py:
def dd(fname: list):
    for n in fname:
        print('hello' + n)
This code was tested with Python 3.7.7.
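On the follow-up questions: one import statement can bring in several names at once, and module-level calls run once at import time. A minimal sketch of both points (a hypothetical second function ee is mentioned only for illustration, and dd here also returns the greeting so it is easy to check):

```python
# If test1.py defined several functions, one statement imports them all:
#     from test1 import dd, ee
# or import the module once and use qualified names:
#     import test1; test1.dd('Mary')

def dd(fname):
    greeting = 'hello' + fname
    print(greeting)
    return greeting

# Module-level calls like these run once, at import time.
# Guarding them means they only run when the file is executed directly,
# not when it is imported by main.py:
if __name__ == "__main__":
    dd('Mary')
    dd('John')
```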

Your problem is that you are trying to use a function argument as its own variable. Importing is not the problem here.
Try this:
import schedule
import time
from test1 import dd

schedule.every(2).seconds.do(dd, "Any String")  # do() forwards extra args to dd

while True:
    schedule.run_pending()
    time.sleep(1)
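For reference, schedule's do() binds any extra arguments to the job function before the scheduler ever calls it, much like functools.partial from the standard library. A stdlib-only sketch of that binding idea (dd here returns the greeting instead of printing it):

```python
import functools

def dd(fname):
    return 'hello' + fname

# do(dd, "Any String") stores roughly this: the argument is bound now,
# and the scheduler later calls job() with no arguments at all.
job = functools.partial(dd, "Any String")

print(job())  # helloAny String
```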

Related

Python, create a function to control the process in another file

I created three files in Python (pool.py, funcA.py, config.py). My intent is that the two scripts interact. Specifically, pool.py has a while loop that simulates an always-active process, which should be controlled when funcA.py is run separately. The config.py file creates global variables used in both files: funcA.py modifies the variables cmd0 and cmd1, while pool.py repeatedly reads them and checks whether they change.
If you want to test this, open two terminals: run pool.py in one, then run funcA.py in the other. You can change the content of the cmd0 and cmd1 variables in funcA.py while pool.py is still running in the first terminal, and you will see that pool.py's output updates immediately.
Now my question is: how can I turn the contents of funcA.py into a function and call it directly, with the same result? That is, by calling the new function inside funcA.py, I want to modify the output of pool.py running in the other terminal.
config.py
def init():
    global cmd0, cmd1
    cmd0 = ''
    cmd1 = ''
    return cmd0, cmd1
pool.py
import os
import time
import sys
import numpy as np
import config
import funcA
from importlib import reload

while True:
    funcA = reload(funcA)
    print(str(config.cmd0))
    print(str(config.cmd1))
    if config.cmd0 == 'start':
        print('stop')
    time.sleep(10)
funcA.py
import os
import sys
import numpy as np
import config

config.cmd0 = 'start'
config.cmd1 = 'u1'
If I understood correctly, the following works:
pool.py
import os
import time
import sys
import numpy as np
import config
import funcA
from importlib import reload

while True:
    funcA = reload(funcA)
    funcA.change_values("cmd0", "cmd1")
    print(str(config.cmd0))
    print(str(config.cmd1))
    if config.cmd0 == 'start':
        print('stop')
    time.sleep(10)
funcA.py
import os
import sys
import numpy as np
import config

config.cmd0 = 'start'
config.cmd1 = 'u1'

def change_values(a, b):
    config.cmd0 = a
    config.cmd1 = b
As you can see, the function change_values is called in pool.py without any check, so cmd0 and cmd1 change immediately.
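The reason this works within one process is that Python modules are singletons: every importer of config sees the same module object, so an attribute set through change_values is immediately visible to every other reader, with no reload needed. A self-contained sketch of that mechanism, using types.ModuleType as a stand-in for the real config.py:

```python
import types

# stand-in for config.py: one shared module object
config = types.ModuleType('config')
config.cmd0 = ''
config.cmd1 = ''

# stand-in for funcA.change_values: mutate the shared module's attributes
def change_values(a, b):
    config.cmd0 = a
    config.cmd1 = b

change_values('start', 'u1')
# any other code holding a reference to the same module sees the update
print(config.cmd0, config.cmd1)  # start u1
```

Across two separate terminals this sharing does not apply, since each process has its own module objects; that is why the original version relied on reload re-reading funcA.py from disk.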

Import update in Python

So I want to import * from a Python file. However, the variables I want to import are connected to an API, so they change.
For instance:
at 3:35, A=5
at 3:36, A=6
So I want the import to be redone every 15 seconds. How do I write this?
Use it this way. I use schedule here just to show how the variables can be changed in two files; the schedule represents the change of the API.
In the file1.py
import schedule
import time

var1 = 5  # module-level, so it can be imported from file2

def job(t):
    global var1
    var1 = 5
    print(var1)

schedule.every().day.at("03:35").do(job, 'It is 3:35')

while True:
    schedule.run_pending()
    time.sleep(15)  # wait 15 seconds
In the file2.py
from file1 import var1
import schedule
import time

def job(t):
    global var1
    var1 = 6
    print(var1)

schedule.every().day.at("03:36").do(job, 'It is 3:36')

while True:
    schedule.run_pending()
    time.sleep(15)  # wait 15 seconds
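One caveat: from file1 import var1 copies the value once at import time, so rebinding var1 in file2 afterwards does not touch file1's variable. If the goal is really "re-import every 15 seconds", Python 3's importlib.reload re-executes a module's top-level code on demand. A small helper sketching that (the timing loop around it is up to the caller):

```python
import importlib

def refresh(module):
    """Re-execute the module's top-level code and return the reloaded module."""
    return importlib.reload(module)

# Intended use (not run here):
#     import file1
#     while True:
#         file1 = refresh(file1)   # re-reads file1.py from disk
#         print(file1.var1)
#         time.sleep(15)
```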

How does a running process reload new lib file code dynamically in Python?

I want to use my project's new lib file. However, I don't want the main process to stop running.
For example, I have b.py:
import a
import time

def main():
    for i in range(1000):
        time.sleep(5)
        print i
        a.abc()

main()
a.py is:
def abc():
    print 'abc'
I want to modify my abc function in a.py to:
def abc():
    print '123'
When I finish modifying the abc function in a.py, I want the change to take effect at once in the main process of b.py.
I removed the a.pyc file, but it still prints abc, not 123. How can I get it to print 123 without stopping the main process?
I can't restart the main process, because it is always running.
You might want to save your module's hash sum and, after every execution, check whether it changed and reload if so:
import hashlib
import time
import a

def main():
    with open('a.py', 'rb') as f:
        module_hashsum = hashlib.md5(f.read()).hexdigest()
    for i in range(1000):
        time.sleep(5)
        print i
        a.abc()
        with open('a.py', 'rb') as f:
            hashsum_temp = hashlib.md5(f.read()).hexdigest()
        if module_hashsum != hashsum_temp:
            module_hashsum = hashsum_temp
            reload(a)

main()
Or just reload it after every execution.
You might as well do some fancy checks on the file's mtime, like it's done in Django. But that will trigger a reload even if you didn't change anything in the file and only did :wq in Vim (just saved the file), for example.
But I don't think that's necessary, since hashing is fast.
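The mtime variant mentioned above can be sketched like this (a hypothetical helper, not Django's actual code). It avoids hashing the whole file but, as noted, fires on any save, even one that changed nothing:

```python
import os

def source_changed(path, last_mtime):
    """Compare the file's current modification time against the last one seen.

    Returns (changed, current_mtime) so the caller can update its record.
    """
    mtime = os.path.getmtime(path)
    return mtime != last_mtime, mtime

# Intended use in the loop above (not run here):
#     changed, module_mtime = source_changed('a.py', module_mtime)
#     if changed:
#         reload(a)
```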

return value from one python script to another

I have two files: script1.py and script2.py. I need to invoke script2.py from script1.py and return the value from script2.py back to script1.py. But the catch is that script1.py actually runs script2.py through os.system.
script1.py:
import os
print(os.system("script2.py 34"))
script2.py
import sys

def main():
    x = "Hello World" + str(sys.argv[1])
    return x

if __name__ == "__main__":
    x = main()
As you can see, I am able to get the value into script2.py, but not back into script1.py. How can I do that? NOTE: script2.py HAS to be called as if it were a command-line execution. That's why I am using os.
Ok, if I understand you correctly, you want to:
pass an argument to another script
retrieve the output from that script back in the original caller
I'd recommend using the subprocess module. The easiest way would be to use the check_output() function:
Run command with arguments and return its output as a byte string.
Sample solution:
script1.py
import sys
import subprocess

s2_out = subprocess.check_output([sys.executable, "script2.py", "34"])
print s2_out
script2.py:
import sys

def main(arg):
    print("Hello World" + arg)

if __name__ == "__main__":
    main(sys.argv[1])
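On Python 3, subprocess.run covers the same ground and can decode the output to str directly. A sketch using an inline child script so it is self-contained; with the real files you would pass [sys.executable, "script2.py", "34"] instead:

```python
import subprocess
import sys

# stand-in for script2.py, passed inline via -c
child = "import sys; print('Hello World' + sys.argv[1])"

result = subprocess.run(
    [sys.executable, "-c", child, "34"],
    capture_output=True,  # collect stdout/stderr (Python 3.7+)
    text=True,            # decode bytes to str
    check=True,           # raise if the child exits non-zero
)
print(result.stdout.strip())  # Hello World34
```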
The recommended way to return a value from one Python "script" to another is to import the script as a Python module and call its functions directly:
import another_module

value = another_module.get_value(34)
where another_module.py is:
#!/usr/bin/env python
def get_value(*args):
    return "Hello World " + ":".join(map(str, args))

def main(argv):
    print(get_value(*argv[1:]))

if __name__ == "__main__":
    import sys
    main(sys.argv)
You could both import another_module and run it as a script from the command line. If you don't need to run it as a command-line script, you could remove the main() function and the if __name__ == "__main__" block.
See also, Call python script with input with in a python script using subprocess.

How to get the caller script name

I'm using Python 2.7.6 and I have two scripts:
outer.py
import sys
import os
print "Outer file launching..."
os.system('inner.py')
which calls inner.py:
import sys
import os
print "[CALLER GOES HERE]"
I want the second script (inner.py) to print the name of the caller script (outer.py).
I can't pass to inner.py a parameter with the name of the first script because I have tons of called/caller scripts and I can't refactor all the code.
Any idea?
One idea is to use psutil.
#!env/bin/python
import psutil
me = psutil.Process()
parent = psutil.Process(me.ppid())
grandparent = psutil.Process(parent.ppid())
print grandparent.cmdline()
This of course depends on how you start outer.py.
This solution is OS-independent.
On Linux you can get the parent process id and then the caller name like so.
p1.py
import os
os.system('python p2.py')
p2.py
import os

pid = os.getppid()
cmd = open('/proc/%d/cmdline' % (pid,)).read()
# entries in /proc/<pid>/cmdline are separated by null bytes, not spaces
caller = ' '.join(cmd.split('\x00')[1:])
print caller
running python p1.py will yield p1.py
I imagine you can do similar things in other OS as well.
Another, slightly shorter version, for Unix only (note that os.system only returns the exit status; readlink prints the parent's executable path to stdout):
import os
os.system('readlink -f /proc/%d/exe' % os.getppid())
If applicable to your situation, you could also simply pass an argument that lets inner.py differentiate:
import sys
import os

print "Outer file launching..."
os.system('inner.py launcher')
inner.py:
import sys
import os

# sys.argv[0] is inner.py's own name; the flag arrives as sys.argv[1]
try:
    if sys.argv[1] == 'launcher':
        print 'outer.py called us'
except IndexError:
    pass
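A related variant (my suggestion, not from the answers above): pass the caller's name through an environment variable, since children launched with os.system inherit the parent's environment. Sketched with an inline child so it is self-contained:

```python
import os
import subprocess
import sys

# outer.py side: record our own name in the environment before launching
env = dict(os.environ, CALLER="outer.py")

# inner.py side, shown inline here: read the variable back
inner_code = "import os; print(os.environ.get('CALLER', 'unknown'))"

out = subprocess.check_output([sys.executable, "-c", inner_code], env=env, text=True)
print(out.strip())  # outer.py
```

This still requires each caller to set the variable, so it shares the last answer's limitation of touching every caller.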
