Dynamically running python code on google app engine - python

I am trying to dynamically run Python code, with variables passed through to that code. I was able to do this on my own computer before I moved my project to the Google App Engine environment, because there I could access all the files directly; with Google App Engine I cannot do that.
I am struggling to find a solution to this problem. It does not need to be fancy: send variables in and get HTML out, with scripts also able to be added from the client side (the crucial part) to whatever storage method is used.
Edit:
Basically, what I mean by "dynamically" is that I can import a Python script whose name is stored in a variable (importing is what I did in IDLE when I tested the prototype; the final solution will probably not be called importing), along with an unknown number of variables passed to it. I got this to work in IDLE, but now I need it to work in the Google App Engine environment, and people need to be able to upload scripts as well (which is the main problem, and it cascades into many more problems).
Edit:
When I say that I managed to get this to work on my local machine, I mean that I could manually drop scripts into the same directory as my main script, which would later import and execute them when necessary. I did this with the following code:
# dynamically import actions/<folder>/<FILE>.py and call its Main() function
mod = __import__('actions.' + folder + '.' + FILE)  # returns the top-level 'actions' package
VAR = getattr(getattr(mod, folder), FILE)           # walk down to the submodule itself
response = VAR.Main()
print response
This code worked both on my laptop and in the Google App Engine environment, but things get problematic when I try to add more scripts to the directory. On my laptop I could just move a file over one way or another because I had full access to the file system. On Google App Engine I cannot simply upload a file into the same directory (or a subdirectory) as the rest of my Python scripts. So the problem comes down to designing a way to allow more code to come into the system (in my case, adding more 'plugins').

The answer is the exec statement (exec() is a built-in function in Python 3) or the eval() function. See http://docs.python.org/reference/simple_stmts.html#the-exec-statement and http://docs.python.org/library/functions.html?highlight=eval#eval. Both execute arbitrary Python code held in a string: exec runs statements and you get their side effects, while eval takes a single expression and returns its value. Typically you pass input in as variables in the namespace you hand to them.
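For example, a minimal sketch of passing variables in through a namespace dictionary (the variable names and the snippet itself are illustrative, not from the question):

# variables supplied by the caller
namespace = {'x': 2, 'y': 3}

result = eval('x * y', namespace)     # eval returns the value of the expression, here 6
exec('total = x + y', namespace)      # exec runs statements for their side effects
# afterwards namespace['total'] == 5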

OK, so what I eventually did was use the datastore to store everything about the plugin: its name, description, uploader and code (for now the code is just entered into a textarea box). Then, instead of importing a file located in a folder under the same directory as my code, as I had done when running everything off my desktop, I imported the plaintext code into a module using this little bit of magic:
# initialise the variables used to build the module
module_name = 'mymod'
filename = 'action_file'           # pseudo-filename; it doesn't have to exist,
                                   # it's only used in error messages
source = PossibleMatches[0][1]     # the source code from the best-matched option

import types
module = types.ModuleType(module_name)   # create an empty module object

# compile the plaintext source and execute it in the context of the module
code_object = compile(source, filename, 'exec')
exec code_object in module.__dict__

# call the 'Main' function defined by the uploaded code
return module.Main()

Related

Restrict the Python file to read and write

I'm trying to restrict write and read access to a Python file. Suppose I have the following code:
with open('test.py', 'w+') as file:
    file.write('''
open("document.txt", "w+").write("Hello, World!")
open("document.txt", "r+").read()
''')
Executing this code creates a new file, test.py, that contains two lines of code which write to and read from another file.
I want the file created by this code (test.py) to hit a PermissionError when it runs, so that it cannot create a new file or read one. In other words, test.py should still be executable and normal commands should work in it, but it should not be able to access other files.
If I read you correctly, this is not a python problem, but an environment problem. I understand the question as something like 'how do I prevent python code from executing arbitrary reads or writes?'. There would be a trivial solution (modifying the generated test.py so it throws an error) but presumably that's not what you want.
The easiest way to make python hit a PermissionError... is to make sure it doesn't have permissions. So run your code as a user with extremely limited permissions (specifically, no write permissions anywhere), or perhaps no default permissions at all, and use something like file ACLs to grant read permission to specific files explicitly from a more privileged sentinel process. (This assumes you are running Linux, but there are likely other ways to do this on other OSs.)
Alternatively, look into various sandboxing techniques to get a python interpreter with the relevant modules replaced by modules that throw errors, or an environment where outside modification is impossible.
It would help if you made it clearer why this is important, and why you are writing a python script with another python script (is this just an example of malicious action?).
You could technically change the permissions of the file itself on the filesystem you're trying to access.
Check the previous thread about changing permissions:
os.chmod(path, <permission value>)
where a value of 000 prevents anyone other than root from reading or writing the file on Linux.
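A minimal sketch of that approach (the file names come from the question above; the exact modes are illustrative):

import os
import stat

# strip all permission bits so the generated script can no longer be
# read, written or executed by an ordinary user (root can still override this)
os.chmod('test.py', 0)

# or leave the target file owner-read-only, so attempts to write raise PermissionError
os.chmod('document.txt', stat.S_IRUSR)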

Run a function when importing module in Python

What I want to achieve: I have a function (object/module/etc.) that allows me to run a model. This model requires a few parameters, so I want to have them in a config file rather than pass them all through the code. So what I want is to have a config file created automatically when my module is imported. Is this possible in python? Can I have some pointers on where to start, please?
All the top-level code in a python file is run when it's imported. If you have
def f():
    print("ran")
print("imported")
then importing the module will print "imported" (but not "ran", since f() is never called).
This is also why you sometimes see if __name__ == "__main__": in some files. The code in that block is only run if the file is run as the main program, not imported.
However, creating files in a predetermined location may be bad UX, so if you want other people to use your library I'd think about a better solution.
Generally you can just put code at the top level of a module and it will run when the module is imported.
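A minimal sketch of the idea, assuming the config lives in a JSON file next to the module (the file name and parameter names are made up for illustration):

# model_runner.py
import json
import os

CONFIG_PATH = os.path.join(os.path.dirname(__file__), 'model_config.json')

# top-level code: runs the first time the module is imported
if not os.path.exists(CONFIG_PATH):
    with open(CONFIG_PATH, 'w') as f:
        json.dump({'learning_rate': 0.01, 'epochs': 10}, f, indent=2)

def run_model():
    with open(CONFIG_PATH) as f:
        params = json.load(f)
    # ... use params to drive the model ...
    return params

if __name__ == "__main__":
    # only executed when the file is run directly, not when it is imported
    print(run_model())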

Handling imports in externally-called multi-file script

I have a file structure like the following:
config.py
main.py
some_other_file.py
where config.py contains easily accessible parameters but not much code otherwise. These should be accessible to all other code files. Normally import config would do, but in this case the python script is called externally from another program, and therefore the root calling directory is not the same as the one where the files are located (so a plain import raises an exception because it does not find the files).
Right now, the solution I have is to include into my main.py file (the one that is directly called by the third program) the following:
code_path = "Path\\To\\My\\Project\\"
import sys
sys.path.insert(0, code_path)
import config
import some_other_file
...
However, this means having to modify main.py every time the code is moved around. I could live with that, but I would certainly like having one single, simple file with all necessary configuration, not needing to dig through the others (especially since the code may later be passed onto others who just want it to work as effortlessly as possible).
I tried having the sys.path.insert inside the config file, and having that be the file directly called by the external script (with all other files called from there). However, I run into the problem that when the other files do import config, it results in an import loop, since they are also being imported from config.py. Typically, I believe, this is solved by having the imports in config.py happen only once, through something like if __name__ == "__main__": (see below). This does not work in my case: the script never enters the if statement, possibly because it is being called as a sub-routine by a third program and is not the main program itself. As a result, I have no way of ensuring that a portion of the code in config.py is executed only once.
This is what I meant above for config.py (which does not work for my case):
... # Some parameter definitions
if __name__ == "__main__":
    code_path = "Path\\To\\My\\Project\\"
    import sys
    sys.path.insert(0, code_path)
    import main  # Everything is then executed in main.py (where config.py is also cross-imported)
Is there any way to ensure that the portion of code inside the if above is executed only once, even if cross-imported, without relying on __name__ == "__main__"? Or any other way to handle this at all, while keeping all parameters and configuration data within one single, simple file?
By the way, I am using IronPython for this (not exactly a choice), but since I am sticking to hopefully very simple stuff, I believe it is common to all python versions.
tl;dr: I want a config.py file with a bunch of parameters, including the directory where the program is located, to be accessible to multiple .py files. I want to avoid needing the project directory written in the "main" code file, since that should be a parameter and therefore only in config.py. The whole thing is passed to and executed by a third external program, so the directory where these files are located is not the same as where they are called from (therefore the project directory has to be included to system path at some point to import the different files).
A possible design alternative that is fairly common would be to rely on environment variables configured in a single file. Your multi-program system would then be started with some run script, and your python application would use something along the lines of os.environ to get/set/check the needed variables. Your directory would then look something along the lines of:
.
.
.
.env (environment variables - doesn't have to be called .env)
main.py
run.sh (starts system of programs - doesn't have to be called run.sh)
.
.
.
For the run script, you could then "activate" the environment variables and afterwards start the relevant programs. If using bash as your shell:
source .env # "activate" your environment variables
# Then the command to start whatever you need to; for example
#
# python main.py
# or
# ./myprogram
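On the Python side, a minimal sketch of reading such a variable might look like this (the variable name PROJECT_PATH and its default are made up for illustration):

# main.py
import os
import sys

# read the project directory exported by the run script (.env / run.sh)
code_path = os.environ.get('PROJECT_PATH', '.')
sys.path.insert(0, code_path)

import config            # now resolvable regardless of the caller's working directory
import some_other_file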

How can the root directory in python chunk be specified?

Setting the root directory in a python chunk with the following line of code results in an error, while for an ordinary R chunk it works just fine:
knitr::opts_knit$set(root.dir ="..")
Optimally there should exist the following options for each knitr-chunk:
- directory to find code to be imported / executed
- directory to find files / dependencies that are needed for code execution
- directory to save any code output
Does something similar exist?
What it looks like here is that you have told it to expect python code:
```{python}
knitr::opts_knit$set(root.dir ="..")
```
When you run this in RStudio it will give you an error:
Error: invalid syntax (, line 1)
You fed it R code instead. This makes sense, as the call knitr::opts_knit$set means to look in the knitr package for opts_knit$set and set it to something. That doesn't exist in python… yet. The python interpreter does not recognize it as python code and returns a syntax error, whereas when you run it as an R chunk, it knows to look in the knitr package. Error handling can be a huge issue to deal with; it makes more sense to handle error categories than to account for every type of error. If you want to control the settings for a code chunk, you would do so in the parentheses, i.e.:
```{python, args }
codeHere
```
I have not seen chunk args for any language other than R, but that does not mean they don't exist; I have just not seen them. I also do not believe this will fix your issue. You could try some of the following ideas:
Write your python in a separate file and link to it. This lets you take advantage of the language and use things like the os module, since python has its own ways of navigating the various operating systems. This may be helpful if you are just running quick scripts rather than loading or running full python programs.
# OS module
import os
# Your os name
print(os.name)
# Gets PWD or present working directory
os.getcwd()
# change your directory
os.chdir("path")
You could try using the reticulate library within an R chunk and load your python that way
Another thought is that you could try
library(reticulate)
use_python("path")
Knitr looks in the same directory as your markdown file to find other files if needed, just like most other IDEs.
At one point in time knitr would not accept R’s setwd() command. Trying to call setwd() may not do you any good.
It may not be the best idea to compute paths relative to what's being executed. If possible, they should be determined relative to where the user calls the code from.
This site may help.
The author of the knitr package is very active and involved. Good luck with what you are doing!

Can Not execute Python .py file using RobotFramework like Javascript

Has anyone found a method for executing their .py files from the Robot Framework like you can for JS?
RobotFramework:
Executes the given JavaScript code.
code may contain multiple statements and the return value of last
statement is returned by this keyword.
code may be divided into multiple cells in the test data. In that
case, the parts are catenated together without adding spaces.
If code is an absolute path to an existing file, the JavaScript to
execute will be read from that file. Forward slashes work as a path
separator on all operating systems. The functionality to read the code
from a file was added in SeleniumLibrary 2.5.
Note that, by default, the code will be executed in the context of the
Selenium object itself, so this will refer to the Selenium object. Use
window to refer to the window of your application, e.g.
window.document.getElementById('foo').
Example: Execute JavaScript window.my_js_function('arg1', 'arg2')
Execute JavaScript ${CURDIR}/js_to_execute.js
It's bs that I can't run my .py files this way...
The Execute Javascript keyword isn't part of Robot Framework itself; it's added by the Selenium integration, so it follows that you can't use it to execute a .py file.
That said, RobotFramework is written in Python and can obviously be extended with a Python script.
Can you clear up what you're actually trying to achieve here though?
My concern is that if you're using a .py file in your test state to validate your code, isn't that introducing an uncertainty that means that what you're testing is not the same as the code that gets executed when you release your project?
A bit more detail would help a lot here!
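As a sketch of the "extend it with Python" route: Robot Framework can load a plain .py file as a keyword library, where each function becomes a keyword. The file name and function below are made up for illustration:

# mykeywords.py -- import into a suite with:  Library    ${CURDIR}/mykeywords.py
def add_numbers(a, b):
    """Exposed to test cases as the keyword 'Add Numbers'."""
    return int(a) + int(b)

In the test data you would then call it like any other keyword, e.g. ${result}=    Add Numbers    2    3.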
