Using Python 2.7 on Linux.
I have two files: one is called plot10.py and the other is called plot10i.py, which has def main() in it.
With plot10.py being my main file, this is my code:
When you import a module for the first time, it gets executed. So, you need to encapsulate all your execution flow inside functions that will be called from the main program.
In fact, in plot10i.py your main is as trivial as it is useless: it just prints hello.
You don't need to use if __name__ == '__main__' if you don't have anything to put in it. But if you do, and your project is small, you can use it to include some tests. For example, I would add one for things like datetime.strptime(x, '%d/%m/%Y %H:%M'), because they are easy to mess up.
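For instance, such a self-test could look like this (a minimal sketch; the date value is made up, the format string is the one from above):

from datetime import datetime

if __name__ == '__main__':
    # Quick sanity check that the format string parses what we expect
    parsed = datetime.strptime('31/12/2015 23:59', '%d/%m/%Y %H:%M')
    assert parsed.day == 31 and parsed.hour == 23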
I have a python library/package that contains modules with functions & classes I usually import inside of my application's main script. However, these modules each contain their own 'main()' function so that I can run them individually if needed. Here's an example:
# my_module.py
import logging

log = logging.getLogger(__name__)

class SomeModuleClass:
    def __init__(self):
        pass

def some_module_function():
    pass

def main():
    # Some code when running the module as the main script
    pass

if __name__ == "__main__":
    main()
I have two questions related to running this module as the main script (i.e. running python my_module.py):
What's the cleanest way to use the logging module in this case? I usually simply overwrite the global 'log' variable by including the following in the main() function, but I'm not sure if there's a more pythonic way of doing this:
def main():
    # ... some logging setup
    logging.config.dictConfig(log_cfg_dictionary)
    global log
    log = logging.getLogger(__name__)
Is it considered good practice to include import statements inside the main() function for libraries that are only needed when running the script as main? For instance, one might need argparse, json, logging.config, etc. inside the main function, but these libraries are not used anywhere else in my module. I know the most pythonic way is probably to keep all the import statements at the top of the file (especially if there's no significant difference in memory usage or performance), but keeping main-specific imports inside main() remains readable and looks cleaner in my opinion.
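For concreteness, the pattern I am describing looks like this (a hypothetical sketch; the argument name is made up):

def main():
    # These imports are only needed when the module runs as a script
    import argparse
    import logging.config

    parser = argparse.ArgumentParser()
    parser.add_argument('--verbose', action='store_true')
    args = parser.parse_args()
    # ... set up logging, run the script logic, etc.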
I have two Python files. One is main.py, which I execute. The other is test.py, from which I import a class into main.py.
Here is the sample code for main.py:
from test import Test

if __name__ == '__main__':
    print("You're inside main.py")
    test_object = Test()
And, here is the sample code for test.py:
class Test:
    def __init__(self):
        print("you're initializing the class.")

if __name__ == '__main__':
    print('You executed test.py')
else:
    print('You executed main.py')
Finally, here's the output, when you execute main.py:
You executed main.py
You're inside main.py
you're initializing the class.
From the order of the outputs above, you can see that once you import a piece of a file, the whole file gets executed immediately. I am wondering why; what's the logic behind that?
I am coming from Java, where every file contains a single class with the same name, so I am confused about why Python behaves this way.
Any explanation would be appreciated.
What is happening?
When you import the test module, the interpreter runs through it, executing it line by line. Since if __name__ == '__main__' evaluates to false, it executes the else clause. After this, it continues past the from test import Test line in main.py.
Why does Python execute the imported file?
Python is an interpreted language. Being interpreted means that the program is read and evaluated one line at a time. Going through the imported module, the interpreter needs to evaluate each line, as it has no way to discern which lines are useful to the module and which are not. For instance, a module could have variables that need to be initialized.
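To illustrate module-level initialization, consider a made-up module like this; the assignment runs the moment the module is imported:

# config.py
import os

# Evaluated at import time, not at call time
DEBUG = os.environ.get('DEBUG', '0') == '1'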
Python is designed to support multiple paradigms. This behavior is used in some of the paradigms python supports, such as procedural programming.
Execution allows the designer of the module to account for different use cases. The module could be imported or run as a script. To accommodate this, some functions, classes or methods may need to be redefined. As an example, a script could output non-critical errors to the terminal, while an imported module could write them to a log file.
Why specify what to import?
Let's say you are importing two modules, both of which contain a Test class. If everything from those modules is imported, only one version of the Test class can exist in our program. We can resolve this issue using different syntax:
import package1
import package2

package1.Test()
package2.Test()
Alternatively, you can rename them with the as keyword.
from package1 import Test
from package2 import Test as OtherTest
Test()
OtherTest()
Dumping everything into the global namespace (i.e. from test import *) pollutes the namespace of your program with a lot of definitions you might not need and could unintentionally overwrite or use.
where all files included a single class with the same name
There is no such requirement imposed in Python; you can put multiple classes, functions, and values in a single .py file. For example:
class OneClass:
    pass

class AnotherClass:
    pass

def add(x, y):
    return x + y

def diff(x, y):
    return x - y

pi = 22 / 7
is a legal Python file.
According to an interview with Python's creator, the module mechanism in Python was influenced by the Modula-2 and Modula-3 languages. So maybe the right question is why the creators of those languages chose to implement modules that way.
I wrote a module that, when imported, automatically changes the error output of my program. It is quite handy to have it in almost any Python code I write.
Thus I don't want to add the line import my_errorhook to every piece of code I write, but want to have this line added automatically.
I found this answer, stating that changing the behavior of Python directly should be avoided. So I thought about changing the command line, something like
python --importModule my_errorhook main.py
and defining an alias in my bashrc to overwrite the python command so it automatically adds the parameter. Is there any way I could achieve such behavior?
There is no such thing as --importModule in the Python command line. The only way you can inject the code without explicitly importing it is by putting your functions in the builtins module. However, this practice is discouraged because it makes your code hard to maintain without proper design.
Let's assume that your Python file main.py is the entry point of the whole program. Now you can create another file bootstrap.py, and put the code below into the new file.
import main
__builtins__.func = lambda x: x>=0
main.main()
Then the function func() can be called from all modules without being imported. For example in main.py
def main():
    ...
    print(func(1))
    ...
I'm creating a simple text-based adventure game. I have a long piece of code for combat, but I don't want to copy+paste it every time there's a fight. Is there a way for me to put the combat code into another script, and simply run that whenever combat occurs?
Put that code in a function and save it in another file, in the same directory.
Then in the file where you want to use your function, import the file at the top like:
import newFile
where newFile.py is the name of your file (you do not need the .py extension here). When you want to use a function from the imported file, use:
newFile.newFunction()
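Putting it together, a minimal sketch (file and function names are made up):

# newFile.py
def newFunction():
    print('A fight breaks out!')

# game.py
import newFile

newFile.newFunction()  # run the combat code whenever a fight occurs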
To run a module as if it were a script, use the runpy module:

import runpy

runpy.run_module(
    mod_name='combat',    # a module name on sys.path, not a file name
    init_globals=None,
    run_name='__main__',
    alter_sys=False,
)
You are going to do exactly that. Put import theScript at the top of your code; the combat code in that script should live in a dedicated function, say doCombat, so you can call theScript.doCombat() wherever combat occurs.
A great tip that I learned in a computer science class is that if you are ever about to copy and paste code, 90% of the time it should be contained in a method which you should call.
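As a made-up illustration, the repeated combat logic lives in one function that every fight calls:

def do_combat(player_hp, enemy_hp):
    # The shared combat loop is written once
    while player_hp > 0 and enemy_hp > 0:
        player_hp -= 1
        enemy_hp -= 2
    return player_hp > 0

print(do_combat(10, 5))   # first fight
print(do_combat(5, 30))   # a later fight reuses the same code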
I have 3 Python files (first.py, second.py, third.py). I'm executing the 2nd Python file from the 1st Python file, and the 2nd file uses an import statement to make use of the 3rd file. This is my code:
first.py
import os
file_path = "folder\second.py"
os.system(file_path)
second.py
import third
...
(rest of the code)
third.py (which contains ReportLab code for generating a PDF)
....
canvas.drawImage('xyz.jpg',0.2*inch, 7.65*inch, width=w*scale, height=h*scale)
....
When I execute this code, it gives this error:
IOError: Cannot open resource "xyz.jpg"
But when I execute the second.py file directly by running python second.py, everything works fine!
I also tried this code:
file_path = "folder\second.py"
execfile(file_path)
But it gives this error:
ImportError: No module named third
As I stated, everything works fine if I directly execute the second.py file!
Why is this happening? Is there a better way to execute nested Python files like this?
Any ideas or suggestions would be greatly appreciated.
I used these three files just to give the basic idea of my structure. You can consider this flow of execution as a single process; there are many processes like this, and each file contains thousands of lines of code. That's why I can't change the whole codebase to be modularized for use via import statements. :-(
So the question is how to make a single Python file that will take care of executing all the other processes. (If we execute each process individually, everything works fine.)
This should be easy if you do it the right way. There are a couple of steps you can follow to set it up.
Step 1: Set your files up to be run or imported
#!/usr/bin/env python

def main():
    do_stuff()

if __name__ == '__main__':
    main()
The __name__ special variable will contain __main__ when invoked as a script, and the module name if imported. You can use that to provide a file that can be used either way.
Step 2: Make your subdirectory a package
If you add an empty file called __init__.py to folder, it becomes a package that you can import.
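For example, the layout might look like this (assuming all three scripts are moved into folder, as step 3 below implies):

folder/
    __init__.py
    first.py
    second.py
    third.py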
Step 3: Import and run your scripts
from folder import first, second, third
first.main()
second.main()
third.main()
The way you are doing things is invalid.
You should create a main application and import 1, 2 and 3.
In 1, 2 and 3, define your code as functions, then call those functions from the main application.
IMHO: I don't think you have so much code that it must go into separate files; you could also put it all into one file as function definitions and call them properly.
I second S.Lott: You really should rethink your design.
But just to provide an answer to your specific problem:
From what I can guess so far, you have second.py and third.py in folder, along with xyz.jpg. To make this work, you will have to change your working directory first. Try it this way in first.py:
import os
....
os.chdir('folder')
execfile('second.py')
Try reading about the os module.
Future readers:
Pradyumna's answer from here solved Moin Ahmed's second issue for me:
import sys, change sys.path by appending the path at run time, then import the module that will help
[i.e. sys.path.append(execfile's directory)]
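A minimal sketch of that fix, using the file names from the question (Python 2, since execfile is used):

import sys

sys.path.append('folder')     # directory that contains third.py
execfile('folder/second.py')  # now "import third" inside second.py succeeds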