Executing an import statement string and using the import [duplicate] - python
How do I execute a string containing Python code in Python?
Do not ever use eval (or exec) on data that could possibly come from outside the program in any form. It is a critical security risk. You allow the author of the data to run arbitrary code on your computer. If you are here because you want to create multiple variables in your Python program following a pattern, you almost certainly have an XY problem. Do not create those variables at all - instead, use a list or dict appropriately.
For statements, use exec(string) (Python 2/3) or exec string (Python 2):
>>> my_code = 'print("hello world")'
>>> exec(my_code)
hello world
When you need the value of an expression, use eval(string):
>>> x = eval("2+2")
>>> x
4
However, the first step should be to ask yourself if you really need to. Executing code should generally be the position of last resort: It's slow, ugly and dangerous if it can contain user-entered code. You should always look at alternatives first, such as higher order functions, to see if these can better meet your needs.
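For example, one common reason people reach for eval, calling a function whose name arrives as a string, is usually better handled with a plain dict of functions. The names below are purely illustrative:

def add(a, b):
    return a + b

def mul(a, b):
    return a * b

operations = {"add": add, "mul": mul}    # dispatch table instead of eval("add(2, 3)")
op_name = "add"                          # e.g. chosen at runtime
print(operations[op_name](2, 3))         # 5, no eval needed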
In the following example a string is executed as code with exec while its output and errors are captured:

import sys
import io

# create file-like strings to capture output
codeOut = io.StringIO()
codeErr = io.StringIO()

code = """
def f(x):
    x = x + 1
    return x

print('This is my output.')
"""

# capture output and errors
sys.stdout = codeOut
sys.stderr = codeErr

exec(code)

# restore stdout and stderr
sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__

print(f(4))

s = codeErr.getvalue()
print("error:\n%s\n" % s)

s = codeOut.getvalue()
print("output:\n%s" % s)

codeOut.close()
codeErr.close()
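A tidier variant of the same capture uses contextlib.redirect_stdout, which puts sys.stdout back automatically even if the executed string raises. This is a small sketch, not part of the original recipe:

import contextlib
import io

snippet = "print('This is my output.')"
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(snippet)                    # whatever the code prints lands in `buffer`
print("output:\n%s" % buffer.getvalue())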
eval and exec are the correct solution, and they can be used in a safer manner.
As discussed in Python's reference manual and clearly explained in this tutorial, the eval and exec functions take two extra parameters that allow a user to specify what global and local functions and variables are available.
For example:
public_variable = 10
private_variable = 2
def public_function():
    return "public information"

def private_function():
    return "super sensitive information"
# make a list of safe functions
safe_list = ['public_variable', 'public_function']
safe_dict = dict([ (k, locals().get(k, None)) for k in safe_list ])
# add any needed builtins back in
safe_dict['len'] = len
>>> eval("public_variable+2", {"__builtins__" : None }, safe_dict)
12
>>> eval("private_variable+2", {"__builtins__" : None }, safe_dict)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'private_variable' is not defined
>>> exec("print \"'%s' has %i characters\" % (public_function(), len(public_function()))", {"__builtins__" : None}, safe_dict)
'public information' has 18 characters
>>> exec("print \"'%s' has %i characters\" % (private_function(), len(private_function()))", {"__builtins__" : None}, safe_dict)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'private_function' is not defined
In essence you are defining the namespace in which the code will be executed.
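In Python 3, where exec is a function and print must be passed in explicitly once __builtins__ is removed, the same namespace trick looks roughly like this. It is a sketch of the idea, not a real sandbox:

safe_dict = {"public_variable": 10, "len": len, "print": print}

print(eval("public_variable + 2", {"__builtins__": None}, safe_dict))          # 12
exec("print(len('public information'))", {"__builtins__": None}, safe_dict)    # prints 18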
Remember that from Python 3, exec is a function, so always write exec(mystring) instead of exec mystring.
Avoid exec and eval
Using exec and eval in Python is highly frowned upon.
There are better alternatives
From the top answer (emphasis mine):
For statements, use exec.
When you need the value of an expression, use eval.
However, the first step should be to ask yourself if you really need to. Executing code should generally be the position of last resort: It's slow, ugly and dangerous if it can contain user-entered code. You should always look at alternatives first, such as higher order functions, to see if these can better meet your needs.
From Alternatives to exec/eval?
set and get values of variables with the names in strings
[while eval] would work, it is generally not advised to use variable names bearing a meaning to the program itself.
Instead, better use a dict.
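For instance, instead of manufacturing variable names like value_0, value_1, ... with exec, keep the values in a dict keyed by those names (the names here are illustrative):

values = {}
for i in range(5):
    values[f"value_{i}"] = i * 10    # no exec("value_%d = %d" % (i, i * 10)) needed

print(values["value_3"])             # 30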
It is not idiomatic
From http://lucumr.pocoo.org/2011/2/1/exec-in-python/ (emphasis mine)
Python is not PHP
Don't try to circumvent Python idioms because some other language does it differently. Namespaces are in Python for a reason and just because it gives you the tool exec it does not mean you should use that tool.
It is dangerous
From http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html (emphasis mine)
So eval is not safe, even if you remove all the globals and the builtins!
The problem with all of these attempts to protect eval() is that they are blacklists. They explicitly remove things that could be dangerous. That is a losing battle because if there's just one item left off the list, you can attack the system.
So, can eval be made safe? Hard to say. At this point, my best guess is that you can't do any harm if you can't use any double underscores, so maybe if you exclude any string with double underscores you are safe. Maybe...
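The point can be seen in one line: even with no builtins and empty namespaces, an expression can climb back to every loaded class through dunder attributes (output abbreviated, exact contents vary by interpreter):

>>> eval("().__class__.__base__.__subclasses__()", {"__builtins__": None}, {})
[<class 'type'>, ...]    # every class in the process, a short hop from file and OS access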
It is hard to read and understand
From http://stupidpythonideas.blogspot.it/2013/05/why-evalexec-is-bad.html (emphasis mine):
First, exec makes it harder for human beings to read your code. In order to figure out what's happening, I don't just have to read your code, I have to read your code, figure out what string it's going to generate, then read that virtual code. So, if you're working on a team, or publishing open source software, or asking for help somewhere like StackOverflow, you're making it harder for other people to help you. And if there's any chance that you're going to be debugging or expanding on this code 6 months from now, you're making it harder for yourself directly.
eval() is just for expressions: while eval('x+1') works, eval('x=1') will not, because an assignment is a statement. In that case it's better to use exec, or better still to look for another solution entirely.
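If all that is needed is to turn a string like "[1, 2, 3]" or "{'a': 1}" back into a Python value, ast.literal_eval is the safe tool for that job:

import ast

print(ast.literal_eval("[1, 2, 3]"))    # [1, 2, 3]
print(ast.literal_eval("{'a': 1}"))     # {'a': 1}
try:
    ast.literal_eval("__import__('os')")
except ValueError:
    print("rejected: only literals are accepted")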
It's worth mentioning that exec has a sibling, execfile (Python 2 only), for running a Python file. That is sometimes handy when you are working inside a third-party package that ships a terrible IDE and you want to keep your code outside of its package.
Example:
execfile('/path/to/source.py')
or, in Python 3:
exec(open("/path/to/source.py").read())
You can execute code into a dictionary with exec. In Python 2 this was written exec "ret = 4" in kw; in Python 3, pass the dict as the globals argument, as in the following session:

>>> kw = {}
>>> exec("ret = 4", kw)
>>> kw['ret']
4
As the others mentioned, it's exec. But if the executed string assigns to a variable that you want to read afterwards, watch the scope: declaring the name global (or running the exec at module level, where the assignment lands in the module globals) prevents the interpreter from raising the following error:

NameError: name 'p_variable' is not defined

exec('p_variable = [1,2,3,4]')
global p_variable
print(p_variable)

Inside a function this is not reliable in Python 3; pass exec an explicit namespace dictionary instead, as the next answer shows.
I tried quite a few things, but the only thing that worked was the following:
temp_dict = {}
exec("temp_dict['val'] = 10")
print(temp_dict['val'])
output:
10
Use eval.
Check out eval:
x = 1
print(eval('x+1'))
# -> 2
The most logical solution would be to use the built-in eval() function. Another solution is to write that string to a temporary Python file and execute it.
Ok, I know this isn't exactly an answer, but possibly a note for people looking at this as I was. I wanted to execute specific code for different users/customers but also wanted to avoid exec/eval. I initially looked at storing the code in a database for each user and doing the above.
I ended up creating the files on the file system within a 'customer_filters' folder and loading them with the 'imp' module; if no filter applies for that customer, it just carries on:
import imp

def get_customer_module(customerName='default', name='filter'):
    lm = None
    try:
        module_name = customerName + "_" + name
        m = imp.find_module(module_name, ['customer_filters'])
        lm = imp.load_module(module_name, m[0], m[1], m[2])
    except ImportError:
        pass  # ignore: no filter module found for this customer
    return lm

m = get_customer_module(customerName, "filter")
if m is not None:
    m.apply_address_filter(myobj)
so customerName = "jj"
would execute apply_address_filter from the customer_filters\jj_filter.py file
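Note that the imp module is deprecated and was removed in Python 3.12; a rough importlib equivalent of the function above, assuming the same customer_filters layout, is sketched here:

import importlib.util
from pathlib import Path

def get_customer_module(customer_name="default", name="filter"):
    path = Path("customer_filters") / f"{customer_name}_{name}.py"
    if not path.is_file():
        return None                                   # no filter for this customer
    spec = importlib.util.spec_from_file_location(f"{customer_name}_{name}", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module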
Related
How to run multiple strings as source code?
I'm solving a text2code problem for a question-answering system, and in the process I had the following question: is it possible to run strings of code as complete code while passing arguments from the original environment? For example, I have this piece of code in a string:

import module
model = module(data, *args)

And from the base environment I want to do:

# for example
args = [**some args**]
data = pd.DataFrame()
exec(string)(data=data, *args)

For obvious reasons, transferring the data object using string.format() will not work. Standard unpacking using * does not work either, although perhaps I am doing something wrong.
exec can take three arguments: the string, a globals dict and a locals dict. So you can do something like

exec(string, globals(), locals())

to pass all variables. It might be faster to only pass a subset of them:

exec(string, None, {k: locals()[k] for k in ('data', 'args')})

For a single expression you can use eval:

>>> i = 50
>>> eval("print(i)")
50

How do I execute a string containing Python code in Python?
Create a keyword in python instead of inheriting from a class [duplicate]
Can you add new statements (like print, raise, with) to Python's syntax? Say, to allow:

mystatement "Something"

Or:

new_if True:
    print "example"

Not so much whether you should, but rather whether it's possible (short of modifying the Python interpreter's code).
You may find this useful - Python internals: adding a new statement to Python, quoted here:

This article is an attempt to better understand how the front-end of Python works. Just reading documentation and source code may be a bit boring, so I'm taking a hands-on approach here: I'm going to add an until statement to Python.

All the coding for this article was done against the cutting-edge Py3k branch in the Python Mercurial repository mirror.

The until statement

Some languages, like Ruby, have an until statement, which is the complement to while (until num == 0 is equivalent to while num != 0). In Ruby, I can write:

num = 3
until num == 0 do
  puts num
  num -= 1
end

And it will print:

3
2
1

So, I want to add a similar capability to Python. That is, being able to write:

num = 3
until num == 0:
  print(num)
  num -= 1

A language-advocacy digression

This article doesn't attempt to suggest the addition of an until statement to Python. Although I think such a statement would make some code clearer, and this article displays how easy it is to add, I completely respect Python's philosophy of minimalism. All I'm trying to do here, really, is gain some insight into the inner workings of Python.

Modifying the grammar

Python uses a custom parser generator named pgen. This is a LL(1) parser that converts Python source code into a parse tree. The input to the parser generator is the file Grammar/Grammar [1]. This is a simple text file that specifies the grammar of Python.

[1]: From here on, references to files in the Python source are given relatively to the root of the source tree, which is the directory where you run configure and make to build Python.

Two modifications have to be made to the grammar file. The first is to add a definition for the until statement. I found where the while statement was defined (while_stmt), and added until_stmt below [2]:

compound_stmt: if_stmt | while_stmt | until_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
until_stmt: 'until' test ':' suite

[2]: This demonstrates a common technique I use when modifying source code I'm not familiar with: work by similarity. This principle won't solve all your problems, but it can definitely ease the process. Since everything that has to be done for while also has to be done for until, it serves as a pretty good guideline.

Note that I've decided to exclude the else clause from my definition of until, just to make it a little bit different (and because frankly I dislike the else clause of loops and don't think it fits well with the Zen of Python).

The second change is to modify the rule for compound_stmt to include until_stmt, as you can see in the snippet above. It's right after while_stmt, again.

When you run make after modifying Grammar/Grammar, notice that the pgen program is run to re-generate Include/graminit.h and Python/graminit.c, and then several files get re-compiled.

Modifying the AST generation code

After the Python parser has created a parse tree, this tree is converted into an AST, since ASTs are much simpler to work with in subsequent stages of the compilation process.
So, we're going to visit Parser/Python.asdl, which defines the structure of Python's ASTs, and add an AST node for our new until statement, again right below the while:

| While(expr test, stmt* body, stmt* orelse)
| Until(expr test, stmt* body)

If you now run make, notice that before compiling a bunch of files, Parser/asdl_c.py is run to generate C code from the AST definition file. This (like Grammar/Grammar) is another example of the Python source-code using a mini-language (in other words, a DSL) to simplify programming. Also note that since Parser/asdl_c.py is a Python script, this is a kind of bootstrapping - to build Python from scratch, Python already has to be available.

While Parser/asdl_c.py generated the code to manage our newly defined AST node (into the files Include/Python-ast.h and Python/Python-ast.c), we still have to write the code that converts a relevant parse-tree node into it by hand. This is done in the file Python/ast.c. There, a function named ast_for_stmt converts parse tree nodes for statements into AST nodes. Again, guided by our old friend while, we jump right into the big switch for handling compound statements and add a clause for until_stmt:

case while_stmt:
    return ast_for_while_stmt(c, ch);
case until_stmt:
    return ast_for_until_stmt(c, ch);

Now we should implement ast_for_until_stmt. Here it is:

static stmt_ty
ast_for_until_stmt(struct compiling *c, const node *n)
{
    /* until_stmt: 'until' test ':' suite */
    REQ(n, until_stmt);

    if (NCH(n) == 4) {
        expr_ty expression;
        asdl_seq *suite_seq;

        expression = ast_for_expr(c, CHILD(n, 1));
        if (!expression)
            return NULL;
        suite_seq = ast_for_suite(c, CHILD(n, 3));
        if (!suite_seq)
            return NULL;
        return Until(expression, suite_seq, LINENO(n), n->n_col_offset, c->c_arena);
    }

    PyErr_Format(PyExc_SystemError,
                 "wrong number of tokens for 'until' statement: %d",
                 NCH(n));
    return NULL;
}

Again, this was coded while closely looking at the equivalent ast_for_while_stmt, with the difference that for until I've decided not to support the else clause. As expected, the AST is created recursively, using other AST creating functions like ast_for_expr for the condition expression and ast_for_suite for the body of the until statement. Finally, a new node named Until is returned.

Note that we access the parse-tree node n using some macros like NCH and CHILD. These are worth understanding - their code is in Include/node.h.

Digression: AST composition

I chose to create a new type of AST for the until statement, but actually this isn't necessary. I could've saved some work and implemented the new functionality using composition of existing AST nodes, since:

until condition:
    # do stuff

Is functionally equivalent to:

while not condition:
    # do stuff

Instead of creating the Until node in ast_for_until_stmt, I could have created a Not node with a While node as a child. Since the AST compiler already knows how to handle these nodes, the next steps of the process could be skipped.

Compiling ASTs into bytecode

The next step is compiling the AST into Python bytecode. The compilation has an intermediate result which is a CFG (Control Flow Graph), but since the same code handles it I will ignore this detail for now and leave it for another article.

The code we will look at next is Python/compile.c. Following the lead of while, we find the function compiler_visit_stmt, which is responsible for compiling statements into bytecode.
We add a clause for Until:

case While_kind:
    return compiler_while(c, s);
case Until_kind:
    return compiler_until(c, s);

If you wonder what Until_kind is, it's a constant (actually a value of the _stmt_kind enumeration) automatically generated from the AST definition file into Include/Python-ast.h. Anyway, we call compiler_until which, of course, still doesn't exist. I'll get to it in a moment.

If you're curious like me, you'll notice that compiler_visit_stmt is peculiar. No amount of grep-ping the source tree reveals where it is called. When this is the case, only one option remains - C macro-fu. Indeed, a short investigation leads us to the VISIT macro defined in Python/compile.c:

#define VISIT(C, TYPE, V) {\
    if (!compiler_visit_ ## TYPE((C), (V))) \
        return 0; \
}

It's used to invoke compiler_visit_stmt in compiler_body. Back to our business, however...

As promised, here's compiler_until:

static int
compiler_until(struct compiler *c, stmt_ty s)
{
    basicblock *loop, *end, *anchor = NULL;
    int constant = expr_constant(s->v.Until.test);

    if (constant == 1) {
        return 1;
    }
    loop = compiler_new_block(c);
    end = compiler_new_block(c);
    if (constant == -1) {
        anchor = compiler_new_block(c);
        if (anchor == NULL)
            return 0;
    }
    if (loop == NULL || end == NULL)
        return 0;

    ADDOP_JREL(c, SETUP_LOOP, end);
    compiler_use_next_block(c, loop);
    if (!compiler_push_fblock(c, LOOP, loop))
        return 0;
    if (constant == -1) {
        VISIT(c, expr, s->v.Until.test);
        ADDOP_JABS(c, POP_JUMP_IF_TRUE, anchor);
    }
    VISIT_SEQ(c, stmt, s->v.Until.body);
    ADDOP_JABS(c, JUMP_ABSOLUTE, loop);

    if (constant == -1) {
        compiler_use_next_block(c, anchor);
        ADDOP(c, POP_BLOCK);
    }
    compiler_pop_fblock(c, LOOP, loop);
    compiler_use_next_block(c, end);

    return 1;
}

I have a confession to make: this code wasn't written based on a deep understanding of Python bytecode. Like the rest of the article, it was done in imitation of the kin compiler_while function. By reading it carefully, however, keeping in mind that the Python VM is stack-based, and glancing into the documentation of the dis module, which has a list of Python bytecodes with descriptions, it's possible to understand what's going on.

That's it, we're done... Aren't we?

After making all the changes and running make, we can run the newly compiled Python and try our new until statement:

>>> until num == 0:
...   print(num)
...   num -= 1
...
3
2
1

Voila, it works! Let's see the bytecode created for the new statement by using the dis module as follows:

import dis

def myfoo(num):
    until num == 0:
        print(num)
        num -= 1

dis.dis(myfoo)

Here's the result:

4           0 SETUP_LOOP              36 (to 39)
      >>    3 LOAD_FAST                0 (num)
            6 LOAD_CONST               1 (0)
            9 COMPARE_OP               2 (==)
           12 POP_JUMP_IF_TRUE        38

5          15 LOAD_NAME                0 (print)
           18 LOAD_FAST                0 (num)
           21 CALL_FUNCTION            1
           24 POP_TOP

6          25 LOAD_FAST                0 (num)
           28 LOAD_CONST               2 (1)
           31 INPLACE_SUBTRACT
           32 STORE_FAST               0 (num)
           35 JUMP_ABSOLUTE            3
      >>   38 POP_BLOCK
      >>   39 LOAD_CONST               0 (None)
           42 RETURN_VALUE

The most interesting operation is number 12: if the condition is true, we jump to after the loop. This is correct semantics for until. If the jump isn't executed, the loop body keeps running until it jumps back to the condition at operation 35.

Feeling good about my change, I then tried running the function (executing myfoo(3)) instead of showing its bytecode. The result was less than encouraging:

Traceback (most recent call last):
  File "zy.py", line 9, in <module>
    myfoo(3)
  File "zy.py", line 5, in myfoo
    print(num)
SystemError: no locals when loading 'print'

Whoa... this can't be good. So what went wrong?
The case of the missing symbol table

One of the steps the Python compiler performs when compiling the AST is create a symbol table for the code it compiles. The call to PySymtable_Build in PyAST_Compile calls into the symbol table module (Python/symtable.c), which walks the AST in a manner similar to the code generation functions. Having a symbol table for each scope helps the compiler figure out some key information, such as which variables are global and which are local to a scope.

To fix the problem, we have to modify the symtable_visit_stmt function in Python/symtable.c, adding code for handling until statements, after the similar code for while statements [3]:

case While_kind:
    VISIT(st, expr, s->v.While.test);
    VISIT_SEQ(st, stmt, s->v.While.body);
    if (s->v.While.orelse)
        VISIT_SEQ(st, stmt, s->v.While.orelse);
    break;
case Until_kind:
    VISIT(st, expr, s->v.Until.test);
    VISIT_SEQ(st, stmt, s->v.Until.body);
    break;

[3]: By the way, without this code there's a compiler warning for Python/symtable.c. The compiler notices that the Until_kind enumeration value isn't handled in the switch statement of symtable_visit_stmt and complains. It's always important to check for compiler warnings!

And now we really are done. Compiling the source after this change makes the execution of myfoo(3) work as expected.

Conclusion

In this article I've demonstrated how to add a new statement to Python. Albeit requiring quite a bit of tinkering in the code of the Python compiler, the change wasn't difficult to implement, because I used a similar and existing statement as a guideline.

The Python compiler is a sophisticated chunk of software, and I don't claim being an expert in it. However, I am really interested in the internals of Python, and particularly its front-end. Therefore, I found this exercise a very useful companion to theoretical study of the compiler's principles and source code. It will serve as a base for future articles that will get deeper into the compiler.

References

I used a few excellent references for the construction of this article. Here they are, in no particular order:

PEP 339: Design of the CPython compiler - probably the most important and comprehensive piece of official documentation for the Python compiler. Being very short, it painfully displays the scarcity of good documentation of the internals of Python.
"Python Compiler Internals" - an article by Thomas Lee
"Python: Design and Implementation" - a presentation by Guido van Rossum
Python (2.5) Virtual Machine, A guided tour - a presentation by Peter Tröger

original source
One way to do things like this is to preprocess the source and modify it, translating your added statement to Python. There are various problems this approach will bring, and I wouldn't recommend it for general usage, but for experimentation with language, or specific-purpose metaprogramming, it can occasionally be useful.

For instance, let's say we want to introduce a "myprint" statement, that instead of printing to the screen instead logs to a specific file. ie:

myprint "This gets logged to file"

would be equivalent to

print >>open('/tmp/logfile.txt','a'), "This gets logged to file"

There are various options as to how to do the replacing, from regex substitution to generating an AST, to writing your own parser depending on how close your syntax matches existing Python. A good intermediate approach is to use the tokenizer module. This should allow you to add new keywords, control structures etc while interpreting the source similarly to the Python interpreter, thus avoiding the breakage crude regex solutions would cause. For the above "myprint", you could write the following transformation code:

import tokenize

LOGFILE = '/tmp/log.txt'

def translate(readline):
    for type, name, _, _, _ in tokenize.generate_tokens(readline):
        if type == tokenize.NAME and name == 'myprint':
            yield tokenize.NAME, 'print'
            yield tokenize.OP, '>>'
            yield tokenize.NAME, "open"
            yield tokenize.OP, "("
            yield tokenize.STRING, repr(LOGFILE)
            yield tokenize.OP, ","
            yield tokenize.STRING, "'a'"
            yield tokenize.OP, ")"
            yield tokenize.OP, ","
        else:
            yield type, name

(This does make myprint effectively a keyword, so use as a variable elsewhere will likely cause problems.)

The problem then is how to use it so that your code is usable from Python. One way would just be to write your own import function, and use it to load code written in your custom language. ie:

import new

def myimport(filename):
    mod = new.module(filename)
    f = open(filename)
    data = tokenize.untokenize(translate(f.readline))
    exec data in mod.__dict__
    return mod

This requires you handle your customised code differently from normal Python modules however, ie "some_mod = myimport("some_mod.py")" rather than "import some_mod".

Another fairly neat (albeit hacky) solution is to create a custom encoding (see PEP 263) as this recipe demonstrates. You could implement this as:

import codecs, cStringIO, encodings
from encodings import utf_8

class StreamReader(utf_8.StreamReader):
    def __init__(self, *args, **kwargs):
        codecs.StreamReader.__init__(self, *args, **kwargs)
        data = tokenize.untokenize(translate(self.stream.readline))
        self.stream = cStringIO.StringIO(data)

def search_function(s):
    if s != 'mylang':
        return None
    utf8 = encodings.search_function('utf8')  # Assume utf8 encoding
    return codecs.CodecInfo(
        name='mylang',
        encode=utf8.encode,
        decode=utf8.decode,
        incrementalencoder=utf8.incrementalencoder,
        incrementaldecoder=utf8.incrementaldecoder,
        streamreader=StreamReader,
        streamwriter=utf8.streamwriter)

codecs.register(search_function)

Now after this code gets run (eg. you could place it in your .pythonrc or site.py) any code starting with the comment "# coding: mylang" will automatically be translated through the above preprocessing step. eg.

# coding: mylang
myprint "this gets logged to file"
for i in range(10):
    myprint "so does this : ", i, "times"
myprint ("works fine" "with arbitrary" + " syntax"
    "and line continuations")

Caveats:

There are problems to the preprocessor approach, as you'll probably be familiar with if you've worked with the C preprocessor. The main one is debugging.
All Python sees is the preprocessed file, which means that the text printed in stack traces etc. will refer to that. If you've performed significant translation, this may be very different from your source text. The example above doesn't change line numbers etc, so won't be too different, but the more you change it, the harder it will be to figure out.
Yes, to some extent it is possible. There is a module out there that uses sys.settrace() to implement goto and comefrom "keywords":

from goto import goto, label
for i in range(1, 10):
    for j in range(1, 20):
        print i, j
        if j == 3:
            goto .end  # breaking out from nested loop
label .end
print "Finished"
Short of changing and recompiling the source code (which is possible with open source), changing the base language is not really possible. Even if you do recompile the source, it wouldn't be Python, just your hacked-up changed version, which you need to be very careful not to introduce bugs into. However, I'm not sure why you'd want to. Python's object-oriented features make it quite simple to achieve similar results with the language as it stands.
General answer: you need to preprocess your source files.

More specific answer: install EasyExtend, and go through the following steps.

i) Create a new langlet (extension language):

import EasyExtend
EasyExtend.new_langlet("mystmts", prompt = "my> ", source_ext = "mypy")

Without additional specification a bunch of files shall be created under EasyExtend/langlets/mystmts/.

ii) Open mystmts/parsedef/Grammar.ext and add the following lines:

small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | exec_stmt | assert_stmt | my_stmt )

my_stmt: 'mystatement' expr

This is sufficient to define the syntax of your new statement. The small_stmt non-terminal is part of the Python grammar and it's the place where the new statement is hooked in. The parser will now recognize the new statement, i.e. a source file containing it will be parsed. The compiler will reject it though, because it still has to be transformed into valid Python.

iii) Now one has to add semantics of the statement. For this one has to edit mystmts/langlet.py and add a my_stmt node visitor:

def call_my_stmt(expression):
    "defines behaviour for my_stmt"
    print "my stmt called with", expression

class LangletTransformer(Transformer):
    @transform
    def my_stmt(self, node):
        _expr = find_node(node, symbol.expr)
        return any_stmt(CST_CallFunc("call_my_stmt", [_expr]))

__publish__ = ["call_my_stmt"]

iv) cd to langlets/mystmts and type:

python run_mystmts.py

Now a session shall be started and the newly defined statement can be used:

__________________________________________________________________________________

 mystmts

 On Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)]
__________________________________________________________________________________

 my> mystatement 40+2
 my stmt called with 42

Quite a few steps to come to a trivial statement, right? There isn't an API yet that lets one define simple things without having to care about grammars. But EE is very reliable modulo some bugs. So it's just a matter of time that an API emerges that lets programmers define convenient stuff like infix operators or small statements using just convenient OO programming. For more complex things like embedding whole languages in Python by means of building a langlet there is no way of going around a full grammar approach.
Here's a very simple but crappy way to add new statements, in interpretive mode only. I'm using it for little 1-letter commands for editing gene annotations using only sys.displayhook, but just so I could answer this question I added sys.excepthook for the syntax errors as well. The latter is really ugly, fetching the raw code from the readline buffer. The benefit is, it's trivially easy to add new statements this way.

jcomeau@intrepid:~/$ cat demo.py; ./demo.py
#!/usr/bin/python -i
'load everything needed under "package", such as package.common.normalize()'
import os, sys, readline, traceback
if __name__ == '__main__':
    class t:
        @staticmethod
        def localfunction(*args):
            print 'this is a test'
            if args:
                print 'ignoring %s' % repr(args)

    def displayhook(whatever):
        if hasattr(whatever, 'localfunction'):
            return whatever.localfunction()
        else:
            print whatever

    def excepthook(exctype, value, tb):
        if exctype is SyntaxError:
            index = readline.get_current_history_length()
            item = readline.get_history_item(index)
            command = item.split()
            print 'command:', command
            if len(command[0]) == 1:
                try:
                    eval(command[0]).localfunction(*command[1:])
                except:
                    traceback.print_exception(exctype, value, tb)
        else:
            traceback.print_exception(exctype, value, tb)

    sys.displayhook = displayhook
    sys.excepthook = excepthook
>>> t
this is a test
>>> t t
command: ['t', 't']
this is a test
ignoring ('t',)
>>> ^D
I've found a guide on adding new statements: https://troeger.eu/files/teaching/pythonvm08lab.pdf
Basically, to add new statements, you must edit Python/ast.c (among other things) and recompile the Python binary. While it's possible, don't. You can achieve almost everything via functions and classes (which won't require people to recompile Python just to run your script).
It's possible to do this using EasyExtend: EasyExtend (EE) is a preprocessor generator and metaprogramming framework written in pure Python and integrated with CPython. The main purpose of EasyExtend is the creation of extension languages i.e. adding custom syntax and semantics to Python.
It's not exactly adding new statements to the language syntax, but macros are a powerful tool: https://github.com/lihaoyi/macropy
Some things can be done with decorators. Let's e.g. assume Python had no with statement. We could then implement a similar behaviour like this:

# ====== Implementation of "mywith" decorator ======
def mywith(stream):
    def decorator(function):
        try:
            function(stream)
        finally:
            stream.close()
    return decorator

# ====== Using the decorator ======
@mywith(open("test.py", "r"))
def _(infile):
    for l in infile.readlines():
        print(">>", l.rstrip())

It is a pretty unclean solution however as done here. Especially the behaviour where the decorator calls the function and sets _ to None is unexpected. For clarification: this decorator is equivalent to writing

def _(infile):
    ...
_ = mywith(open(...))(_)  # mywith returns None.

and decorators are normally expected to modify, not to execute, functions.

I used such a method before in a script where I had to temporarily set the working directory for several functions.
OUTDATED: The Logix project is now deprecated and no longer developed, per the Logix website. There is a language based on python called Logix with which you CAN do such things. It hasn't been under development for a while, but the features that you asked for do work with the latest version.
Not without modifying the interpreter. I know a lot of languages in the past several years have been described as "extensible", but not in the way you're describing. You extend Python by adding functions and classes.
Ten years ago you couldn't, and I doubt that's changed. However, it wasn't that hard to modify the syntax back then if you were prepared to recompile python, and I doubt that's changed, either.
Partial Imports: Import everything up to an error
Is there a way to perform a Python import which is not atomic? For instance, I have a file as follows:

# Filename: a.py
myvariable = 1
mylist = [1, 2, 3]
raise ImportError
donotimportthis = 5

I then have a separate file which does the following:

import a
a.myvariable == 1  # This is okay as it imported it
a.donotimportthis  # <-- raise an exception as this is not imported.

I have a file which contains some Python code; this follows the format of:

...variables...
import X

I do not have X installed nor do I want it, however I do want the variables.

Note: this file is autogenerated, not by me but by a tool whose version is frozen.
Two choices, in descending order of preference:

1. Change the autogeneration process. Instead of invoking proprietary_autogen_process, invoke custom_autogen_wrapper. This wrapper in turn first invokes the proprietary third-party tool, and then modifies the produced module source code by searching for the code that imports module X, and deletes everything after it.
This is relatively straightforward. You just need to take some care to not introduce false positives or false negatives by performing too loose (or too strict) matching of the import code. Ideally you'd use an AST rewriter but that's probably overkill; a regular expression search for import X might work, although it will yield wrong results if this text appears inside a comment, string literal or inside a method which isn't executed.

2. Generate a stub module X in a location where it will be found by the autogenerated module when importing the latter. I don't recommend this because it's tedious: you probably can't just generate an empty module, since the autogenerated module will want to use X. You need to generate meaningful method stubs.
You can do specific imports with

from a import myvariable

EDIT: The above won't work if anything that is flat in the file raises an error. If you have no way to edit the imported file, then I don't know if there is a (reasonable) solution to this. Sorry, didn't realise. (An unreasonable solution would be to read in the file as text, slice it, and then run eval on it.)

Or, as mentioned in the comments, put the stuff you don't want under

if __name__ == "__main__":
    <here>

Then it will only be invoked if you run the file directly.
What you can do is remove the donotimportthis variable at the end of the module, as follows: del donotimportthis. I hope that helps.
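If the goal really is "import everything up to the error", one possible approach (my own sketch, not taken from the answers above) is to parse the file with ast and execute its top-level statements one by one into a fresh module, stopping at the first failure:

import ast
import types

def partial_import(path, name="a_partial"):
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    module = types.ModuleType(name)
    for node in tree.body:
        block = ast.Module(body=[node], type_ignores=[])
        try:
            exec(compile(block, path, "exec"), module.__dict__)
        except Exception:
            break                     # stop at the first statement that fails
    return module

a = partial_import("a.py")
print(a.myvariable)                   # 1; donotimportthis was never executed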
executing python code from string loaded into a module
I found the following code snippet that I can't seem to make work for my scenario (or any scenario at all):

def load(code):
    # Delete all local variables
    globals()['code'] = code
    del locals()['code']

    # Run the code
    exec(globals()['code'])

    # Delete any global variables we've added
    del globals()['load']
    del globals()['code']

    # Copy k so we can use it
    if 'k' in locals():
        globals()['k'] = locals()['k']
        del locals()['k']

    # Copy the rest of the variables
    for k in locals().keys():
        globals()[k] = locals()[k]

I created a file called "dynamic_module" and put this code in it, which I then used to try to execute the following code, which is a placeholder for some dynamically created string I would like to execute:

import random
import datetime

class MyClass(object):
    def main(self, a, b):
        r = random.Random(datetime.datetime.now().microsecond)
        a = r.randint(a, b)
        return a

Then I tried executing the following:

import dynamic_module
dynamic_module.load(code_string)
return_value = dynamic_module.MyClass().main(1,100)

When this runs it should return a random number between 1 and 100. However, I can't seem to get the initial snippet I found to work for even the simplest of code strings. I think part of my confusion in doing this is that I may misunderstand how globals and locals work and therefore how to properly fix the problems I'm encountering. I need the code string to use its own imports and variables and not have access to the ones where it is being run from, which is the reason I am going through this somewhat over-complicated method.
You should not be using the code you found. It has several big problems, not least that most of it doesn't actually do anything (locals() is a proxy, deleting from it has no effect on the actual locals, it puts any code you execute in the same shared globals, etc.).

Use the accepted answer in that post instead; recast as a function, that becomes:

import imp

def load_module_from_string(code, name='dynamic_module'):
    module = imp.new_module(name)
    exec(code, module.__dict__)
    return module

then just use that:

dynamic_module = load_module_from_string(code_string)
return_value = dynamic_module.MyClass().main(1, 100)

The function produces a new, clean module object.
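On current Python versions, where the imp module has been removed, the same function can be written with types.ModuleType:

import types

def load_module_from_string(code, name="dynamic_module"):
    module = types.ModuleType(name)
    exec(code, module.__dict__)    # run the code with the module's dict as its globals
    return module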
In general, this is not how you should dynamically import and use external modules. You should be using __import__ within your function to do this. Here's a simple example that worked for me:

import numpy as np

plt = __import__('matplotlib.pyplot', fromlist=['plt'])
plt.plot(np.arange(5), np.arange(5))
plt.show()

I imagine that for your specific application (loading from a code string) it would be much easier to save the dynamically generated code string to a file (in a folder containing an __init__.py file) and then to call it using __import__. Then you could access all variables and functions of the code as parts of the imported module. Unless I'm missing something?
Dynamically instantiating objects
I'm attempting to instantiate an object from a string. Specifically, I'm trying to change this:

from node.mapper import Mapper

mapper = Mapper(file)
mapper.map(src, dst)

into something like this:

with open('C:.../node/mapper.py', 'r') as f:
    mapping_script = f.read()
eval(mapping_script)

mapper = Mapper(file)
mapper.map(src, dst)

The motivation for this seemingly bizarre task is to be able to store different versions of mapping scripts in a database and then retrieve/use them as needed (with emphasis on the polymorphism of the map() method).

The above does not work. For some reason, eval() throws SyntaxError: invalid syntax. I don't understand this since it's the same file that's being imported in the first case. Is there some reason why eval() cannot be used to define classes?

I should note that I am aware of the security concerns around eval(). I would love to hear of alternative approaches if there are any. The only other thing I can think of is to fetch the script, physically save it into the node package directory, and then import it, but that seems even crazier.
You need to use exec:

exec(mapping_script)

eval() works only for expressions; exec() works for statements, and a typical Python script contains statements. For example:

code = """class Mapper: pass"""
exec(code)
mapper = Mapper()
print(mapper)

Output:

<__main__.Mapper object at 0x10ae326a0>

Make sure you call exec() (a function in Python 3; in Python 2 it was a statement) at the module level. When you call it in a function, you need to add globals(), for example exec(code, globals()), to make the objects available in the global scope and to the rest of the function, as discussed here.
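Alternatively, inside a function you can hand exec its own dictionary, which keeps the script's names out of your globals entirely. A small sketch using a stand-in Mapper defined inline:

def build_mapper(mapping_script):
    namespace = {}
    exec(mapping_script, namespace)    # the script's classes land in `namespace`
    return namespace["Mapper"]

script = "class Mapper:\n    def map(self, src, dst):\n        return (src, dst)"
Mapper = build_mapper(script)
print(Mapper().map("src.csv", "dst.csv"))   # ('src.csv', 'dst.csv')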