What I want to achieve: I have a function (object/module/etc.) that allows me to run a model. The model requires a few parameters, so I want to keep them in a config file rather than pass them all through the code. What I want is for a config file to be created automatically when my module is imported. Is this possible in Python? Can I have some pointers on where to start, please?
All the code at the top level of a Python file is run when it is imported. If you have

```python
def f():
    print("ran")

print("imported")
```

then when you import it, it will print `imported` (the body of `f` only runs when `f()` is called).

This is also why you sometimes see `if __name__ == "__main__":` in some files. The code in that block is only run when the file is executed as the main script, not when it is imported.
However, creating files in a predetermined location may be bad UX, so if you want other people to use your library, I'd think about a better solution.
Generally you can just put code at top level of a module and it will run.
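As a minimal sketch of that idea (the file name and parameter names here are made up for illustration), a module can write a default config file at import time:

```python
# mymodel.py -- hypothetical module; creates its config on first import
import json
import os

CONFIG_PATH = "model_config.json"  # assumed location; adjust as needed
DEFAULTS = {"learning_rate": 0.01, "epochs": 10}  # example parameters

# Top-level code: runs the first time the module is imported.
if not os.path.exists(CONFIG_PATH):
    with open(CONFIG_PATH, "w") as f:
        json.dump(DEFAULTS, f, indent=2)

def load_config():
    """Read the parameters back from the config file."""
    with open(CONFIG_PATH) as f:
        return json.load(f)
```

After `import mymodel`, the file exists and `mymodel.load_config()` returns the parameters.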
I want to access my 'utils' folder that is outside of the directory my code is in. How do I access the files and functions within that folder?
For example, I am trying to access functions in file_to_access.py from 00_label_data.py.
I tried doing this, but it still is not working and says 'no module found'.
What is the correct way to do this?
It's a little hard to tell from the screenshots, as the display isn't quite as descriptive as I'd like. However, the main suspect is

```
... import file_to_access.py
```

There is no such module: the module name is merely `file_to_access`; you supplied the file name. Also, I suspect that you will want

```python
import mbl.code.utils.file_to_access
```

to properly resolve your reference in line 8.
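If the directory layout prevents a plain package import, another common route (a sketch; it assumes `utils/` sits one level above the calling script) is to put the parent directory on `sys.path` and then import by module name, without the `.py` suffix:

```python
# 00_label_data.py -- hypothetical caller script
import os
import sys

# Directory one level above this script, where utils/ is assumed to live.
parent_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
sys.path.append(parent_dir)

# Now the module can be imported by name (note: no ".py"):
# from utils.file_to_access import some_function
```

A package layout with `__init__.py` files is usually cleaner than path manipulation, but this works for quick scripts.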
Setting the root directory in a Python chunk with the following line results in an error, while in an ordinary R chunk it works just fine:

```{r}
knitr::opts_knit$set(root.dir = "..")
```
Optimally, the following options would exist for each knitr chunk:
- directory to find code to be imported / executed
- directory to find files / dependencies that are needed for code execution
- directory to save any code output
Does something similar exist?
What it looks like here is that you have told it to expect Python code:
```{python}
knitr::opts_knit$set(root.dir ="..")
```
When you run this in R studio it will give you an error:
Error: invalid syntax (, line 1)
You fed it Python code instead. This makes sense: the call `knitr::opts_knit$set` means "look in the knitr package for `opts_knit` and set it to…", and that doesn't exist in Python. The Python interpreter does not recognize it as Python code and returns a syntax error, whereas when you run it as an R chunk, it knows to look into the knitr package. Error handling can be a huge issue to deal with; it makes more sense to handle categories of errors than to account for every individual error. If you want to control the settings for a code chunk, you would do so in the chunk header's braces, i.e.:
```{python, args }
codeHere
```
I have not seen chunk arguments for any language other than R, but that does not mean they don't exist; I have just not seen them. I also do not believe this will fix your issue. You could try some of the following ideas:
Write your Python in a separate file and link to it. This lets you take full advantage of the language and use things like the `os` module, since Python has its own ways of navigating the various operating systems. This may be helpful if you are just running quick scripts rather than loading or running full Python programs.
```python
import os

print(os.name)      # your OS name
print(os.getcwd())  # present working directory (PWD)
os.chdir("path")    # change directory ("path" is a placeholder)
```
You could try using the reticulate library within an R chunk and load your Python that way. Another thought is that you could try:

```{r}
library(reticulate)
use_python("path")
```
knitr looks in the same directory as your markdown file to find other files if needed, just like almost any other IDE.
At one point knitr would not accept R's `setwd()` command, so calling `setwd()` may not do you any good.
It may not be the best idea to compute paths relative to what's being executed; if possible, they should be determined relative to where the user calls the code from.
This site may help.
The author of the knitr package is very active and involved. Good luck with what you are doing!
Which is better:
Importing .py files or executing .txt files?
For example, there is a .py file named `python.py` containing:

```python
def MyFunction():
    print(".py file")
```

and a .txt file named `text.txt`:

```
def MyFunction():
    print(".txt file")
```
I can use the first one like:

```python
import python
```

and the second one like:

```python
exec(open("text.txt", "r").read())
```
Which method is better in terms of speed, reliability, and safety? (I am not really concerned about the length of the code.)
A big problem with `exec` can appear if you exec a module that imports another module: if the imported module uses relative paths, I could imagine this not working.
Apart from that, the only reason I could imagine for using the latter method is dynamic imports, for example if you have a plugin/module system.
If that is the case, I would recommend you have a look at `importlib`.
So in terms of speed, reliability, and safety: always use `import` for normal imports and `importlib` for dynamic imports.
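A minimal sketch of a dynamic import with `importlib` (here with the standard-library `json` module standing in for whatever you would load):

```python
import importlib

# Import by dotted name at runtime -- equivalent to "import json".
mod = importlib.import_module("json")
print(mod.dumps({"ok": True}))  # → {"ok": true}

# To load code straight from an arbitrary file path instead, see
# importlib.util.spec_from_file_location / importlib.util.module_from_spec.
```

Unlike `exec`, this gives you a real module object with proper caching in `sys.modules`.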
Definitely the first one, with the .py file. You should always save Python scripts in files with a .py extension, for four reasons: portability, readability, usability, and convention.
If you're storing Python code, then a `python_code.py` file is preferable.
If you are storing information like text or other raw data, then any type of file can be used: .txt, .xls, .csv, maybe a database.
Note: you have named your file `python`, which is not recommended, since it shadows the name of the interpreter itself.
I have a simple question.
I have a Python module `dev1.py` that needs a file `device_blob`.
If I have everything in one directory, my_app loads dev1 like

```python
from dev1 import func1
```

and it works fine.
I want to move dev1.py to a directory `./dev_files` with an `__init__.py` in it. I can then load dev1 as

```python
from dev_files.dev1 import func1
```

However, when func1 runs and tries to access `device_blob`, it barfs with:

```
resource not found ..
```
This is so basic that I believe I am missing something.
I can't figure out why the great minds of Python want everything to be resolved relative to the current working directory, forcing me to modify dev1.py based on where it's being run from, i.e. referring to the file in dev1.py as `'dev_files/device_blob'`.
I can make it work this way, but it's a purely absurd way of writing code.
Is there a simple way to access a file next to the module files or in the tree below?
Relative pathing is one of Python's larger flaws.
For this use case, you might be able to call `open('../dev_files/device_blob')` to go back a directory first.
My general solution is to have a `project.py` file containing an absolute path to the project directory, and then call `open(os.path.join(PROJECT_DIR, 'dev_files', 'device_blob'))`.
So I have two files. `Coinvalues.py` has a bunch of functions in it that I want to use, but it is also a standalone program that does things.
My second file, `GUI.py`, looks up data and displays it in a GUI.
I am trying to pull functions from Coinvalues.py using

```python
from Coinvalues import USDValue, SATValue, BTCValue
```

But when I run GUI.py, it first runs Coinvalues.py in its entirety and only then starts GUI.py. I just want to take the few functions from Coinvalues without it doing this. Is this built into Python, or am I doing something wrong?
Unfortunately, those functions do not exist unless the other file is executed; there is no way around this. You can, however, use a main sentinel to prevent specific blocks of code from executing when the file is imported.
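A sketch of that sentinel applied to `Coinvalues.py` (the function bodies here are placeholders, not your actual code):

```python
# Coinvalues.py -- importable functions plus standalone behaviour
def USDValue():
    return 100.0  # placeholder implementation

def main():
    # Everything the standalone program does goes here.
    print("running Coinvalues standalone")

if __name__ == "__main__":
    # Runs only via "python Coinvalues.py",
    # never on "from Coinvalues import USDValue".
    main()
```

The function definitions are still executed on import (that is what creates `USDValue` etc.), but the standalone behaviour inside the guard is skipped.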