I'm facing some problems trying to load a full python script from my pastebin/github pages.
I followed this link, trying to convert the raw code into a temp file and use it like a module: How to load a python script from a raw link (such as Pastebin)?
And this is my test (Using a really simple python script as raw, my main program is not so simple unfortunately): https://trinket.io/python/0e95ba50c8
When I run the script (that now is creating a temp file in the current directory of the .py file) I get this error:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\BOT\\Images\\tempxm4xpwpz.py'
I also tried the exec() function... No better results, unfortunately.
With this code:
import requests as rq
import urllib.request
def main():
    code = "https://pastebin.com/raw/MJmYEKqh"
    response = urllib.request.urlopen(code)
    data = response.read()
    exec(data)
I get this error:
File "<string>", line 10, in <module>
File "<string>", line 5, in hola
NameError: name 'printest' is not defined
Since my program is more complex compared to this simple test, I don't know how to proceed...
Basically, what I want to achieve is to write the full script of my program on GitHub and connect it to a .exe, so that if I update the raw, my program is updated too, avoiding having to generate and share (only with my friends) a new .exe every time...
Do you think it is possible? If so... what am I doing wrong?
PS: I'm also open to other ways to let my friends update the program without downloading the .exe every time, as long as they don't have to install anything (that's why I'm using a .exe).
Disclaimer: it is really not a good idea to run unverified (let alone untrusted) code. That being said, if you really want to do it...
Probably the easiest and "least dirty" way would be to run a whole new process. This can be done directly from Python. Something like this should work (inspired by the answer you linked in your question):
import urllib.request
import tempfile
import subprocess
code = "https://pastebin.com/raw/MJmYEKqh"
response = urllib.request.urlopen(code)
data = response.read()
with tempfile.NamedTemporaryFile(suffix='.py') as source_code_file:
    source_code_file.write(data)
    source_code_file.flush()
    subprocess.run(['python3', source_code_file.name])
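One caveat worth adding: on Windows, a NamedTemporaryFile cannot be opened a second time while it is still open, which is the likely cause of the PermissionError in the question. A sketch of a Windows-friendly variant (using delete=False plus manual cleanup, and sys.executable rather than assuming python3 is on PATH):
import os
import subprocess
import sys
import tempfile
import urllib.request

code = "https://pastebin.com/raw/MJmYEKqh"
data = urllib.request.urlopen(code).read()

# delete=False so the (still existing) file can be reopened by the child
# process; on Windows a NamedTemporaryFile cannot be opened twice while open.
with tempfile.NamedTemporaryFile(suffix='.py', delete=False) as source_code_file:
    source_code_file.write(data)

try:
    subprocess.run([sys.executable, source_code_file.name])
finally:
    os.remove(source_code_file.name)  # manual cleanup since delete=False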
You can also make your code run correctly with exec:
What may work:
exec(data, {}) -- All you need to do is supply {} as the second argument (that is, use exec(data, {})). exec may receive two additional optional arguments -- globals and locals. If you supply just one, it will use the same dictionary for locals. The code within the exec then behaves as if it were at the top level of a sort-of "clean" environment, which is what you are aiming for. (A runnable demonstration of all four variants follows this list.)
exec(data, globals()) -- The second option is to supply the globals from your current scope. This will also work, though you probably have no need to give the executed code access to your globals, given that that code will set everything up inside its own top level anyway.
What does not work:
exec(data, {}, {}) -- In this case the executed code gets two different dictionaries (albeit both empty) for globals and locals. As such, it behaves as if it were inside a function, meaning it adds the printest and hola functions to the local scope instead of the global scope. Regardless, I expected it to work -- I expected hola would simply look printest up in the local scope instead of the global one. However, hola gets compiled in such a way that it expects printest in the global scope, not the local one, where it is missing; I never really figured out why (plausibly because a name that is merely referenced inside a function body compiles to a global lookup, and the locals mapping passed to exec is not an enclosing scope). So this results in the NameError.
exec(data, globals(), locals()) -- This will provide access to the state of the caller's scope. Nevertheless, it will crash for the very same reason as the previous case.
exec(data) -- This is just shorthand for exec(data, globals(), locals()).
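To see all four variants in action, here is a self-contained demonstration; the inline snippet mirrors the structure of the pastebin code implied by the question's traceback (a hola function calling printest at top level), not its exact contents:
# Stand-in for the downloaded source: a helper plus a function that calls it.
data = """
def printest():
    print('test')

def hola():
    printest()

hola()
"""

exec(data, {})           # works: one dict serves as both globals and locals
exec(data, globals())    # works: runs at this module's top level

try:
    exec(data, {}, {})   # fails: hola() looks printest up in globals
except NameError as e:
    print(e)             # name 'printest' is not defined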
Related
I wrote a "compiler" PypTeX that converts an input file a.tex containing Hello #{3+4} to an ouput file a.pyptex containing Hello 7. I evaluate arbitrary Python fragments like #{3+4} using something like eval(compile('3+4','a.tex',mode='eval'),myglobals), where myglobals is some (initially empty) dict. This creates a thin illusion of an embedded interpreter for running code in a.tex, however the call stack when running '3+4' looks pretty weird, because it backs up all the way into the PypTeX interpreter, instead of topping out at the user code '3+4' in a.tex.
Is there a way of doing something like eval but chopping off the top of the call stack?
Motivation: debugging
Imagine an exception is raised by the Python fragment deep inside numpy, and pdb is launched. The user types "up" until they reach the scope of their user code and then types "list". The way I've done it, this displays the a.tex file, which is the right context to show the user and is the reason why I've done it this way. However, if the user types "up" again, they end up in the bowels of the PypTeX compiler.
An analogy would be if the g++ compiler had an error deep in a template, displayed a template "call stack" in its error message, but that template call stack backed all the way out into the bowels of the actual g++ call stack and exposed internal g++ details that would only serve to confuse the user.
Embedding Python in Python
Maybe the problem is that the illusion of the "embedded interpreter" created by eval is slightly too thin. eval lets you specify globals, but it inherits whatever call stack the caller has, so if one could somehow supply eval with a truncated call stack, that would resolve my problem. Alternatively, if pdb could be told "you shall go no further up" past a certain stack frame, that would help too. For example, if I could chop off a part of the stack in the traceback object and then pass it to pdb.post_mortem().
Or if one could do from sys import Interpreter; foo = Interpreter(); foo.eval(...), meaning that foo is a clean embedded interpreter with a distinct call stack, global variables, etc..., that would also be good.
Is there a way of doing this?
A rejected alternative
One way that is not good is to extract all Python fragments from a.tex by regular expression, dump them into a temporary file a.py and then run them by invoking a fresh new Python interpreter at the command line. This causes pdb to eventually top out into a.py. I've tried this and it's a very bad user experience. a.py should be an implementation detail; it is automatically generated and will look very unfamiliar to the user. It is hard for the user to figure out what bits of a.py came from what bits of a.tex. For large documents, I found this to be much too hard to use. See also pythontex.
I think I found a sufficient solution:
import pdb, traceback

def exec_and_catch(cmd, globals):
    try:
        exec(cmd, globals)            # execute a user program
    except Exception as e:
        tb = e.__traceback__.tb_next  # if the user program raises an exception, remove the
        f = e.with_traceback(tb)      # top stack frame and return the exception.
        return f
    return None                       # otherwise, return None, signifying success.

foo = exec_and_catch("import module_that_does_not_exist", {})
if foo is not None:
    traceback.print_exception(value=foo, tb=foo.__traceback__, etype=type(foo))
    pdb.post_mortem(foo.__traceback__)
I have downloaded and installed the stereoscopy library. I know how to execute the program via command line as it shows clearly here: https://pypi.org/project/stereoscopy/#description
However, I looked at its code and I wanted to do it myself. I want to insert the code from https://github.com/2sh/StereoscoPy/blob/master/stereoscopy/__init__.py and see if it works there.
I copied the code and when I run it, nothing happens. No errors or anything, but no picture shows up and no picture is saved.
So I would like to learn how to use this library to make my own anaglyph pictures by coding it myself and not use the command line executable.
Thank you for your help :)
Running StereoscoPy from a command line executes stereoscopy.__main__, which mainly consists of
from . import _main
_main()
_main is imported from __init__.py. The latter defines some constants, classes, and functions, including _main, but does not call any of them. You need to do what __main__.py does: call _main after stuffing arguments into sys.argv. One way to do the latter is sys.argv = shlex.split(cli_string, posix=True), where cli_string is the string that would be typed at a command line.
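A minimal sketch of that approach (the image names and the -a flag here are hypothetical; substitute whatever you would actually type after the StereoscoPy command, per the PyPI description):
import shlex
import sys
from stereoscopy import _main

# Hypothetical command line -- replace the options and file names with
# whatever you would type after "StereoscoPy" at a real command prompt.
cli_string = "StereoscoPy -a left.jpg right.jpg anaglyph.jpg"

sys.argv = shlex.split(cli_string, posix=True)
_main()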
I have a script that uses a config file called config.py. Actually, this is rather a configuration module. Anyway: the configuration module contains a lot of parameters and dictionaries and lists of dictionaries and so on.
In the script today it is used like this:
import config

def main():
    myParameter = config.myParameter
Now I have another application scenario for this script that uses a related config ('config_advanced.py'), where the parameters and dictionaries have different values.
My goal is now to choose the name of the used config module via a passed command-line argument:
myScript.py -configuration config_advanced.py
Since the configuration module is in the same folder as the main script, I guess I have to rename the passed configuration file to 'config.py' first. Afterwards I can perform import config. Otherwise, if I used import config_advanced, I wouldn't be able to use a call like
config.myParameter
in the main script.
Another possibility could be to put the configuration modules in subfolders and keep the name config.py. The passed command-line argument would then have to contain the subfolder.
Either way I won't be able to perform the import at the top of the main file, since I have to do the argument parsing first. This isn't a technical problem, but someone said that this is at least bad practice.
What do you think?
What is a better way to do the trick with not much effort?
Thanks a lot
Edit:
One working solution has been:
import sys
fullpath = "d:\\python\\scripts\\projectA\\configurationFiles\\"
sys.path.append(fullpath)
config = __import__('config_advanced')
Without the sys.path addition it does NOT work, so the following attempts won't work:
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced')
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced.py')
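For reference, the modern spelling of that working solution uses importlib.import_module, which avoids calling __import__ directly (same folder layout and parameter name as above):
import importlib
import sys

fullpath = "d:\\python\\scripts\\projectA\\configurationFiles\\"
sys.path.append(fullpath)

config = importlib.import_module('config_advanced')
print(config.myParameter)  # used exactly like a statically imported config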
Another possibility that's similar to what you suggest in the question, but which doesn't need you to hide things in subfolders, is to put config_advanced.py and config_plain.py in the same folder as the main script and then dynamically make config.py a link to the actual config file you want to use.
However, martineau's suggestion is much simpler.
OTOH, georg brings up a very valid point, especially if this script isn't just for your own personal use. While using Python itself for the config data is flexible and powerful, it's perhaps a little too powerful. Config data should just be data, not live executable code. If you make a minor mistake when modifying config data you could cause havoc if it's in an executable file. And if a malicious user gets to it, there's no limit to the damage they could cause.
Bad data in a plain old data file will at worst cause a ValueError if it does something weird that your config parsing code isn't expecting. But bad data in a live Python file could throw all sorts of nasty errors. Or even worse, it could do something evil in complete silence...
In reply to your comments, here's some code to illustrate the first point:
#! /usr/bin/env python

import os

config_file = "config.py"

def link_config(mode):
    if os.path.exists(config_file):
        os.remove(config_file)
    config_name = "config_%s.py" % mode
    os.symlink(config_name, config_file)

#.... parse command line to determine config_mode string, then do
link_config(config_mode)

# Now import the newly-linked config file
import config
If config_mode == "plain", the above code will cause config_plain.py to be imported as 'config', and if config_mode == "advanced", it will cause config_advanced.py to be imported as 'config'.
But as I said before, martineau's method is much simpler. And IIRC, os.symlink may not work on non-unix systems.
...
As for your second point, check out the docs for the json module
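For example, a data-only config could look like this (the file name and key here are hypothetical):
# config.json contains: {"myParameter": 42}
import json

with open("config.json") as f:
    config = json.load(f)

print(config["myParameter"])  # plain data -- nothing in the file can execute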
How do I execute all the code inside a Python file so I can use its defs in my current code? I have about 100 scripts that were all written like the script below.
For a simple example, I have a python file called:
D:/bt_test.py
Its code looks like this:
def bt_test():
    test = 2
    test += addFive(test)
    return test

def addFive(test):
    return test + 5
Now, from a completely new file, I want to run bt_test().
I've tried doing this:
def openPyFile(script):
    execfile(script)

openPyFile('D:/bt_test.py')
bt_test()
But this doesn't work.
I've tried doing this as well:
sys.path.append('D:/')

def openPyFile(script):
    name = script.split('/')[-1].split('.')[0]
    command = 'from ' + name + ' import *'
    exec command

openPyFile('D:/bt_test.py')
bt_test()
Does anyone know why this isn't working?
Here's a link to a quicktime video that will help explain what's happening.
https://dl.dropbox.com/u/1612489/pythonHelp.mp4
You should put those files somewhere on your Python path, and then import them. That's what the import statement is for. BTW, the same directory as your main program is on the Python path, so that could be a good place to put them.
# Find and execute bt_test.py, and make a module object of it.
import bt_test
# Use the bt_test function in the bt_test module.
bt_test.bt_test()
The reason that execfile doesn't work is that the functions defined inside bt_test.py are limited to the scope of the openPyFile function. One simple test would be to try to run bt_test() from inside openPyFile. Since openPyFile doesn't really do anything other than execfile, you could get rid of it altogether, or you could alias execfile:
openPyFile=execfile
Note putting the file in your python path and importing it is definitely your best bet -- I only post this answer here to hopefully point out why you're not seeing what you want to see.
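If you do want to keep a wrapper, you can pass an explicit namespace so the definitions land somewhere visible (Python 2 only, since execfile was removed in Python 3):
def openPyFile(script):
    # Run the file's code in this module's global namespace,
    # so its defs become available at module level.
    execfile(script, globals())

openPyFile('D:/bt_test.py')
bt_test()  # resolves now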
In addition to Ned's answer, __import__() might be useful if you don't want the file names hardcoded.
http://docs.python.org/library/functions.html#__import__
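A minimal sketch using the path and module name from the question:
import sys
sys.path.append('D:/')

name = 'bt_test'           # module name determined at runtime
module = __import__(name)  # import by string instead of a hardcoded name
module.bt_test()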
Update based on the video.
I don't have access to Maya, but I can try and speculate.
cmds.button(l='print', c='bt_press()') is where the issue seems to lurk. bt_press() is passed as a string object, and whatever way the interpreter uses to resolve that identifier doesn't look in the right namespace.
1) Try passing bt_press() with the module prepended: cmds.button(l='print', c='bt_test.bt_press()')
2) See if you can bind c directly to the function object: cmds.button(l='print', c=bt_press)
Good luck.
>>> from bt_test import bt_test
>>> bt_test()
Imagine a Python script that will take a long time to run. What will happen if I modify it while it's running? Will the result be different?
Nothing, because Python precompiles your script to bytecode and runs that from memory; the file on disk is not read again.
However, if some kind of exception occurs, you may get a slightly misleading explanation, because line X may contain different code than it did before you started the script.
When you run a python program and the interpreter is started up, the first thing that happens is the following:
the sys and builtins modules are initialized
the __main__ module is initialized, which is the file you gave as an argument to the interpreter; this causes your code to execute
When a module is initialized, its code is run, defining classes, variables, and functions in the process. The first step of your module (i.e. the main file) will probably be to import other modules, which will again be initialized in just the same way; their resulting namespaces are then made available for your module to use. The result of the import process is, in part, a module (Python) object in memory. This object does have fields that point to the .py and .pyc contents, but these are not evaluated anymore: module objects are cached and their source is never run twice. Hence, modifying the module afterwards on disk has no effect on the execution. It can have an effect when the source is read for introspective purposes, such as when exceptions are thrown, or via the module inspect.
This is why the if __name__ == "__main__" check is necessary when adding code that is not intended to run when the module is imported. Running the file as main is equivalent to that file being imported, with the exception of __name__ having a different value.
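The standard pattern looks like this:
def main():
    print("running as a script")

if __name__ == "__main__":
    # Runs only when the file is executed directly,
    # not when it is imported as a module.
    main()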
Sources:
What happens when a module is imported: The import system
What happens when the interpreter starts: Top Level Components
What's the __main__ module: __main__ - Top-level code environment
This is a fun question. The answer is that "it depends".
Consider the following code:
"Example script showing bad things you can do with python."
import os
print('this is a good script')
with open(__file__, 'w') as fdesc:
    fdesc.write('print("this is a bad script")')
import bad
Try saving the above as "/tmp/bad.py" then do "cd /tmp" and finally "python3 bad.py" and see what happens.
On my ubuntu 20 system I see the output:
this is a good script
this is a bad script
So again, the answer to your question is "it depends". If you don't do anything funky then the script is in memory and you are fine. But python is a pretty dynamic language so there are a variety of ways to modify your "script" and have it affect the output.
If you aren't trying to do anything funky, then probably one of the things to watch out for are imports inside functions.
Below is another example which illustrates the idea (save as "/tmp/modify.py" and do "cd /tmp" and then "python3 modify.py" to run). The fiddle function defined below simulates you modifying the script while it is running (if desired, you could remove the fiddle function, put in a time.sleep(300) at the second to last line, and modify the file yourself).
The point is that since the show function is doing an import inside the function instead of at the top of the module, the import won't happen until the function is called. If you have modified the script before you call show, then your modified version of the script will be used.
If you are seeing surprising or unexpected behavior from modifying a running script, I would suggest looking for import statements inside functions. There are sometimes good reasons to do that sort of thing so you will see it in people's code as well as some libraries from time to time.
Below is the demonstration of how an import inside a function can cause strange effects. You can try this as is vs commenting out the call to the fiddle function to see the effect of modifying a script while it is running.
"Example showing import in a function"
import time
def yell(msg):
    "Yell a msg"
    return f'#{msg}#'

def show(msg):
    "Print a message nicely"
    import modify
    print(modify.yell(msg))

def fiddle():
    orig = open(__file__).read()
    with open(__file__, 'w') as fdesc:
        modified = orig.replace('{' + 'msg' + '}', '{msg.upper()}')
        fdesc.write(modified)

fiddle()
show('What do you think?')
No, the result will not reflect the changes once saved. The result will not change when running regular python files. You will have to save your changes and re-run your program.
If you run the following script:
from time import sleep

print("Printing hello world in: ")
for i in range(10, 0, -1):
    print(f"{i}...")
    sleep(1)
print("Hello World!")
Then change "Hello World!" to "Hello StackOverflow!" while it's counting down, it will still output "Hello World".
Nothing, as this answer says. Besides, I did an experiment where multiprocessing is involved. Save the script below as x.py:
import multiprocessing
import time

def f(x):
    print(x)
    time.sleep(10)

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        for _ in pool.imap(f, ['hello'] * 5):
            pass
After python3 x.py, and after the first two 'hello's were printed out, I modified ['hello'] to ['world'] and observed what happened. Nothing interesting happened. The result was still:
hello
hello
hello
hello
hello
Nothing happens. Once the script is loaded into memory and running, it will keep going as loaded.
An "auto-reloading" feature can be implemented anyway in your code, as Flask and other frameworks do.
This is slightly different from what you describe in your question, but it works:
my_string = "Hello World!"
line = input(">>> ")
exec(line)
print(my_string)
Test run:
>>> print("Hey")
Hey
Hello World!
>>> my_string = "Goodbye, World"
Goodbye, World
See, you can change the behavior of your "loaded" code dynamically.
It depends. If a Python script imports another file that has been modified, then it will of course load the newer version when that import happens. But if the script doesn't pull in any other file, it will just keep running the code already loaded in memory for as long as it runs; the changes will be visible next time...
As for auto-applying changes when they're made: yes, @pcbacterio was correct. It's possible to do that, but the script which does it just remembers the last thing it was doing and checks when the file is modified, in order to rerun it (so it's almost invisible).
=]