Which is better:
Importing .py files or executing .txt files?
For example, there is a .py file with this written, named "python.py":
def MyFunction():
    print(".py file")
and there is a .txt file named "text.txt":
def MyFunction():
    print(".txt file")
I can use the first one like:
import python
and the second one like:
exec(open("text.txt", "r").read())
Which method is better in terms of speed, reliability, and safety?
I am not really concerned about the length of each code
A big problem with exec can appear if you exec a module that imports another module.
If this imported module uses relative paths, I could imagine that not working.
Apart from that, the only reason I can imagine for using the latter method is dynamic imports, if you have a plugin or module system, for example.
If that is the case, I would recommend you have a look at importlib.
So in terms of speed, reliability, and safety, always use import for normal imports and importlib for dynamic imports.
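As a minimal sketch of the dynamic case (the module name here is just a stand-in; in a plugin system it would come from configuration or user input), importlib lets you import from a runtime string:

```python
import importlib

# Hypothetical plugin name; any importable module works the same way.
module_name = "json"

# Dynamic import: the module name is a runtime string, not a literal.
module = importlib.import_module(module_name)
print(module.dumps({"loaded": module_name}))
```

Unlike exec, this goes through the normal import machinery, so the module is cached in sys.modules and its own imports resolve correctly.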
Definitely the first one, the .py file. You should always save Python scripts in files with a .py extension, for four reasons: portability, readability, usability, and convention.
If you're storing Python code,
then a python_code.py file is preferable.
If you are storing information like text or some other raw data,
then any type of file can be used: .txt, .xls, .csv, or maybe a database.
Note:
You have named your file python, which is strongly discouraged, since the name is easily confused with the interpreter itself.
What I want to achieve: I have a function (object/module/etc) that allows me to run a model. This model requires a few parameters, so I want to have them in a config file rather than pass all through the code. So what I want is to have a config file created automatically when my module is imported. Is it possible in python? Can I have some pointers on where to start please?
All the code in a Python file is run when it's imported. If you have
def f():
    print("ran")
print("imported")
then when you import it, it will print imported.
This is also why you sometimes see if __name__ == "__main__":
in some files. The code in that block is only run when the file is run as the main script, not when it is imported.
However, creating files in a predetermined location may be bad UX, so if you want other people to use your library, I'd think about a better solution.
Generally, you can just put code at the top level of a module and it will run on import.
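Putting that together for the config-file use case (module name, config path, and default values here are all hypothetical), a module can create a default config the first time it is imported:

```python
# mymodel.py -- hypothetical module name; path and defaults are assumptions.
import json
import os

CONFIG_PATH = "model_config.json"

# Top-level code runs once, at import time: create a default config
# file if one does not exist yet.
if not os.path.exists(CONFIG_PATH):
    with open(CONFIG_PATH, "w") as f:
        json.dump({"learning_rate": 0.01, "epochs": 10}, f)
```

Any script that does import mymodel will then find model_config.json next to its working directory, subject to the UX caveat above.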
I have simple question.
I have a python module "dev1.py" that needs a file "dev1_blob"
If I have everything in one directory.
my_app loads the dev1 like
from dev1 import func1
it works fine.
I want to move dev1.py to a directory "./dev_files" with __init__.py in it. I can load dev1.py as
from dev_files.dev1 import func1
However, when func1 runs and tries to access "device_blob", it fails with:
resource not found ..
This is so basic that I believe I am missing something.
I can't figure out why Python resolves relative paths against the current working directory rather than the module's own location (__file__), forcing me to modify dev1.py based on where it's being run from, i.e. to refer to the file in dev1.py as: 'dev_files/device_blob'.
I can make it work this way, but it seems an absurd way to write code.
Is there a simple way to access a file next to the module file or in the tree below it?
Relative pathing is one of Python's larger flaws.
For this use case, you might be able to call open('../dev_files/device_blob') to go up one directory first.
My general solution is to have a "project.py" file containing an absolute path to the project directory. Then I call open(os.path.join(PROJECT_DIR, 'dev_files', 'device_blob')).
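Another common pattern (module and file names below are hypothetical) is to anchor the path to the module's own location via __file__, so it works regardless of the caller's current working directory:

```python
# dev1.py -- hypothetical module living inside the dev_files package
import os

# Directory that contains this module, independent of the caller's cwd.
MODULE_DIR = os.path.dirname(os.path.abspath(__file__))
BLOB_PATH = os.path.join(MODULE_DIR, "device_blob")

def func1():
    # Open the data file that sits next to the module file.
    with open(BLOB_PATH, "rb") as f:
        return f.read()
```

With this, from dev_files.dev1 import func1 works the same no matter where my_app is launched from.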
I have a python script that is becoming rather long. Therefore, the functions defined in the rather large single script were written into individual files for easier maintenance and to easily share them between different main scripts.
In the single script, I import numpy and other modules at the top of the file.
Now, if the function is written into a separate file, I need to import numpy in that separate file. I'd rather avoid that because with multiple functions it will end up importing numpy several times.
Can this be done?
Thanks
Yes, it can be done, as described here: Python: Importing an "import file"
In short, you can put all imports in another file and just import that file when you need it.
Note though that each file needs to have numpy imported, one way or another.
EDIT:
Also read this: Does python optimize modules when they are imported multiple times? to understand how Python handles multiple imports. Thanks to @EdChum.
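The point of that second link can be checked directly: Python caches modules in sys.modules, so importing the same module from several files loads it only once (demonstrated here with the stdlib json module rather than numpy):

```python
import sys

import json          # first import: the module is loaded and cached
import json as j2    # repeat import: served from sys.modules, no reload

assert "json" in sys.modules
# Both names refer to the exact same cached module object.
assert j2 is json
print("json loaded once, referenced twice")
```

So repeating import numpy in every helper file costs essentially nothing after the first import.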
I was wondering what the best practices are on working with paths in the following scenario:
I can either choose to change the current directory to the desired folder and then generate a file using only the file name, or just use the full path directly.
Here is the code where I set the current directory os.chdir():
a=time.clock()
import os
for year in range(start,end):
    os.chdir("C:/CO2/%s" % year)
    with open("Table.csv",'r') as file:
        content=file.read()
b=time.clock()
b-a
Out[55]: 0.002037443263361638
And that is slower than when using the full path directly:
a=time.clock()
for year in range(start,end):
    with open("C:/CO2/%s/Table.csv" % year,'r') as file:
        content=file.read()
b=time.clock()
b-a
Out[56]: 0.0014569102613677387
I still doubt, though, whether using the full path is good practice. Are both methods cross-platform? Should I be using os.path instead of %s formatting?
What's the use case for the code in question? Is it a script invoked on the command line by a user? If so, I would usually take the path as a command-line argument (sys.argv), as a command-line option (argparse), or using some sort of configuration file.
Or is the file path part of a more general-purpose module? In that case, I might think about wrapping the path and related code in a class (class FooBar). Users of the module could pass in the needed file path information when instantiating a FooBar. If users tended to use the same path over and over, I would again lean toward a strategy based on a configuration file.
Either way, the file path would be separate from the code -- at least for real software projects.
If we're talking about a one-off script with very few users and almost zero likelihood of future evolution or code re-use, it does not matter too much what you do.
As @lutz-horn said, a hardcoded path isn't a good idea for any code except single-run scripts.
As for design, choose the method that is more explicit and simpler for further development, and don't optimize your code until run time becomes an issue.
In this particular case, I'd prefer the second way. There is no need to chdir when you're only reading files; an explicit chdir makes more sense if you're writing many files with differing naming schemes.
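On the os.path part of the question: os.path.join assembles paths with the separator appropriate to the platform, which is generally preferred over building them with %s (the C:/CO2 layout below mirrors the question):

```python
import os

year = 2014
# Joins the pieces with the platform's path separator.
path = os.path.join("C:/CO2", str(year), "Table.csv")
print(path)
```

Forward slashes in literals also work on Windows, so either spelling of the drive prefix is fine; os.path.join just keeps the joining itself portable.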
I have a simple web-server written using Python Twisted. Users can log in and use it to generate certain reports (pdf-format), specific to that user. The report is made by having a .tex template file where I replace certain content depending on user, including embedding user-specific graphs (.png or similar), then use the command line program pdflatex to generate the pdf.
Currently the graphs are saved in a tmp folder, and that path is then put into the .tex template before calling pdflatex. But this probably opens up a whole pile of problems when the number of users increases, so I want to use temporary files (tempfile module) instead of a real tmp folder. Is there any way I can make pdflatex see these temporary files? Or am I doing this the wrong way?
Without any code it's hard to tell you how, but:
Is there any way I can make pdflatex see these temporary files?
yes, you can print the path to the temporary file by using a named temporary file:
>>> import tempfile
>>> with tempfile.NamedTemporaryFile() as temp:
...     print(temp.name)
...
/tmp/tmp7gjBHU
As commented, you can use tempfile.NamedTemporaryFile. The problem is that the file will be deleted once it is closed. That means you have to run pdflatex while the file is still open in Python.
As an alternative, you could just save the picture under a randomly generated name. The tempfile module is designed to let you create temporary files on various platforms in a consistent way; that is not what you need here, since you'll always run the script on the same web server, I guess.
You could generate random file names using the uuid module:
import uuid
for i in range(3):
    print(str(uuid.uuid4()))
Then you save the pictures explicitly under the random names and insert them into the .tex file.
After running pdflatex, you have to delete the files explicitly, which is the drawback of that approach.
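A middle ground between the two answers (a sketch; the .png suffix and the cleanup placement are assumptions) is NamedTemporaryFile with delete=False, which gives you a unique name that survives closing, so pdflatex can read the file, and you delete it yourself afterwards:

```python
import os
import tempfile

# Create a uniquely named .png file that is NOT removed on close.
tmp = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
graph_path = tmp.name
tmp.close()

# ... write the graph to graph_path, insert the path into the .tex
# template, and run pdflatex here ...

# Explicit cleanup once pdflatex has finished.
os.remove(graph_path)
```

This keeps tempfile's guarantee of collision-free names without requiring the file to stay open while pdflatex runs.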