This might be a very simple question. I was running some Python code, and I got an error message like this:
File "/home/mbenchoufi/brisket/../brisket/views.py", line 11, in <module>
from influence.forms import SearchForm
ImportError: No module named forms
The first problem is that I do indeed have a file called views.py in /home/myname/brisket/, but I don't understand the notation /home/myname/brisket/../brisket/views.py.
Do I have a path configuration problem, and what does this notation mean?
Btw, a really weird thing is that I do have a file called forms.py in the influence folder, and in this file I have a class called SearchForm... How can I be getting this error message?
This is not Python-specific notation; it's UNIX filesystem notation. .. in a UNIX path means "back up one directory", so, for example, /home/myname/brisket/.. is equivalent to just /home/myname.
The reason Python displays the filename this way might be that your sys.path actually has /home/myname/brisket/.. in it for some reason. That's not a problem in itself, since Python can follow the ..s in the path just fine.
What this error message is telling you is that, while processing the file /home/myname/brisket/../brisket/views.py (which is the same file as /home/myname/brisket/views.py) there is a line of code
from influence.forms import SearchForm
which caused an error. Specifically, it's an ImportError, meaning that the file influence/forms.py wasn't found (or could not be read) by Python's import mechanism. You should check the value of sys.path in your Python program to make sure that the parent directory of influence/ is in the list, and make sure that the file is readable. (Also make sure that influence/__init__.py exists, though I'm not sure that particular problem would cause the error you're seeing.)
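For example, a quick check along these lines (assuming the project root is /home/myname/brisket; adjust to your layout) shows whether the import machinery can actually see the package:
import os
import sys

# Every directory Python will search when resolving "import influence.forms"
for entry in sys.path:
    print(entry)

# Assumed layout: /home/myname/brisket/influence/forms.py
project_root = "/home/myname/brisket"
print(os.path.isdir(os.path.join(project_root, "influence")))
print(os.path.isfile(os.path.join(project_root, "influence", "__init__.py")))
print(os.path.isfile(os.path.join(project_root, "influence", "forms.py")))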
"/home/myname/brisket/../brisket/views.py"
is equivalent to
"/home/myname/brisket/views.py"
The cause might be an entry in your PYTHONPATH, e.g. something like
export PYTHONPATH="$HOME/../brisket:$PYTHONPATH"
http://docs.python.org/using/cmdline.html#envvar-PYTHONPATH
The form above has the advantage of not hard-coding an absolute path under /home, so it also works for other users. To get simpler paths you can write it as
export PYTHONPATH="/home/brisket:$PYTHONPATH"
instead.
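To double-check which entries actually made it onto the search path at runtime, something like this is enough (nothing here is specific to brisket):
import os
import sys

# PYTHONPATH entries end up near the front of sys.path at interpreter startup.
print(os.environ.get("PYTHONPATH"))
print(sys.path)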
I'm facing some problems trying to load a full Python script from my Pastebin/GitHub pages.
I followed this link, trying to convert the raw into a temp file and use it like a module: How to load a python script from a raw link (such as Pastebin)?
And this is my test (Using a really simple python script as raw, my main program is not so simple unfortunately): https://trinket.io/python/0e95ba50c8
When I run the script (which now creates a temp file in the same directory as the .py file) I get this error:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\BOT\\Images\\tempxm4xpwpz.py'
I also tried the exec() function... no better results, unfortunately.
With this code:
import requests as rq
import urllib.request
def main():
    code = "https://pastebin.com/raw/MJmYEKqh"
    response = urllib.request.urlopen(code)
    data = response.read()
    exec(data)
I get this error:
File "<string>", line 10, in <module>
File "<string>", line 5, in hola
NameError: name 'printest' is not defined
Since my program is more complex compared to this simple test, I don't know how to proceed...
Basically, what I want to achieve is to keep the full script of my program on GitHub and connect it to a .exe, so that if I update the raw file my program is updated too, avoiding having to generate and share (only with my friends) a new .exe every time...
Do you think this is possible? If so, what am I doing wrong?
PS: I'm also open to other ways of letting my friends update the program without downloading the .exe every time, as long as they don't have to install anything (that's why I'm using a .exe).
Disclaimer: it is really not a good idea to run unverified (let alone untrusted) code. That being said, if you really want to do it...
Probably the easiest and "least dirty" way would be to run it in a whole new process. This can be done directly in Python. Something like this should work (inspired by the answer you linked in your question):
import urllib.request
import tempfile
import subprocess
code = "https://pastebin.com/raw/MJmYEKqh"
response = urllib.request.urlopen(code)
data = response.read()
with tempfile.NamedTemporaryFile(suffix='.py') as source_code_file:
    source_code_file.write(data)
    source_code_file.flush()
    subprocess.run(['python3', source_code_file.name])
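Since your PermissionError comes from Windows, be aware that a NamedTemporaryFile usually cannot be opened a second time by name there while it is still open. A variant of the same idea that sidesteps this (a sketch, not a drop-in fix for your program) creates the file with delete=False and removes it afterwards:
import os
import subprocess
import sys
import tempfile
import urllib.request

code = "https://pastebin.com/raw/MJmYEKqh"
data = urllib.request.urlopen(code).read()

# delete=False lets the file be closed (and thus reopened by the child
# process on Windows) before we remove it ourselves at the end.
tmp = tempfile.NamedTemporaryFile(suffix='.py', delete=False)
try:
    tmp.write(data)
    tmp.close()
    subprocess.run([sys.executable, tmp.name])
finally:
    os.remove(tmp.name)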
You can also make your exec-based code run correctly:
What may work:
exec(data, {}) -- All you need to do is supply {} as the second argument, i.e. use exec(data, {}). exec may receive two additional optional arguments -- globals and locals. If you supply just globals, the same dictionary is used for locals as well, so the code inside exec behaves as if it were running at the top level of a clean module, which is what you are aiming for (there is a short sketch of this at the end of this answer).
exec(data, globals()) -- The second option is to supply the globals from your current scope. This will also work, though you probably have no need to give the executed code access to your globals, given that it sets up everything it needs internally anyway.
What does not work:
exec(data, {}, {}) -- In this case the executed code gets two different dictionaries (albeit both empty) for globals and locals, so its top level behaves roughly like a function body: printest and hola are added to the local scope instead of the global scope. I expected that to still work -- that hola would simply look printest up in that local scope -- but functions defined there capture the globals dictionary as their global scope, so when hola is called it looks for printest in the (empty) globals rather than in the locals where it was actually stored. Hence the NameError.
exec(data, globals(), locals()) -- This will provide access to the state of the caller. Nevertheless, it will crash for the very same reason as in the previous case.
exec(data) -- This is just shorthand for exec(data, globals(), locals()), so it fails in the same way.
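Putting the working variant together with the download code from your question, a minimal sketch looks like this:
import urllib.request

code = "https://pastebin.com/raw/MJmYEKqh"
data = urllib.request.urlopen(code).read()

# A single empty dict is used for both globals and locals, so functions
# defined by the downloaded code can find each other at module level.
exec(data, {})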
I have downloaded and installed the stereoscopy library. I know how to execute the program via the command line, as is shown clearly here: https://pypi.org/project/stereoscopy/#description
However, I looked at its code and wanted to do it myself. I want to use the code from https://github.com/2sh/StereoscoPy/blob/master/stereoscopy/init.py and see if it works that way.
I copied the code, and when I run it nothing happens. No errors or anything, but no picture shows up and no picture is saved.
So I would like to learn how to use this library to make my own anaglyph pictures by coding it myself instead of using the command-line executable.
Thank you for your help :)
Running StereoscoPy from a command line executes stereoscopy.__main__, which mainly consists of
from . import _main
_main()
_main is imported from __init__.py. The latter defines some constants, classes, and functions, including _main, but does not call any of them. You need to do what __main__.py does: call _main after stuffing arguments into sys.argv. One way to do the latter is sys.argv = shlex.split(cli_string, posix=True), where cli_string is the string that would be typed at the command line.
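A rough sketch of that, with placeholder arguments (the real option names and file names are whatever you would type after StereoscoPy on the command line):
import shlex
import sys

from stereoscopy import _main

# Pretend this was typed at the command line; everything after the
# program name is a placeholder, use the options from the StereoscoPy docs.
cli_string = "StereoscoPy left.jpg right.jpg anaglyph.jpg"
sys.argv = shlex.split(cli_string, posix=True)

# This is what stereoscopy.__main__ does when run from the command line.
_main()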
I like to use ipdb to debug my code. I know you can stop the code in a file at a specific line with b(reak) file:lineno, which sets a breakpoint in file at line lineno.
Actually, I have inserted import ipdb; ipdb.set_trace() in a specific file. Each time I use the command s(tep), it executes and steps into functions. My problem is that it takes too long before I see what I want to see: the stack trace shows me lines I do not necessarily want to see. So I was thinking of putting a breakpoint on all files in a certain directory, i.e., b mydirectory/**, so that each time I execute c it would show me only the lines I want to see. However, I can't execute such a command (i.e., b mydirectory/**). Does anyone have a solution to this problem?
Thanks!
P.S. The following picture shows a ton of those irrelevant files I don't want to see. In fact, it is normal to see those files, because I am working on a Django project.
Please tell me if the question is unclear
import pdb; pdb.Pdb(skip=['mydirectory.*']).set_trace()
mydirectory has to be a Python module; here is more info from the documentation:
The skip argument, if given, must be an iterable of glob-style module
name patterns. The debugger will not step into frames that originate
in a module that matches one of these patterns. [1]
source: https://docs.python.org/2/library/pdb.html#pdb.Pdb
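For your case the pattern would cover the modules you want skipped rather than your own directory, e.g. Django's own code (the pattern below is just an example):
import pdb

# Do not step into frames coming from django.* modules; "s" and "c"
# will then stay inside your own project code.
pdb.Pdb(skip=['django.*']).set_trace()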
I have a script that uses a config file called config.py. Actually, this is more of a configuration module than a file. Anyway: the configuration module contains a lot of parameters, dictionaries, lists of dictionaries and so on.
In the script it is currently used like this:
import config
def main():
    myParameter = config.myParameter
Now I have another application scenario for this script that uses a related config ('config_advanced.py'), in which the parameters and dictionaries have different values.
My goal now is to choose the config module to use via a command-line argument:
myScript.py -configuration config_advanced.py
Since the configuration module is in the same folder as the main script, I guess I have to rename the passed configuration file to 'config.py' first. Afterwards I can perform import config. Otherwise, if I used import config_advanced, I wouldn't be able to use a call like
config.myParameter
in the main script.
Another possibility could be to put the configuration modules in subfolders and keep the name config.py. The passed command-line argument would then have to contain the subfolder.
Either way I won't be able to perform the import at the top of the main file, since I have to do the argument parsing first. This isn't a technical problem, but someone said it is at least bad practice.
What do you think?
What is a better way to do this without much effort?
Thanks a lot
Edit:
One working solution has been
import sys
fullpath = "d:\\python\\scripts\\projectA\\configurationFiles\\"
sys.path.append(fullpath)
config = __import__('config_advanced')
Without the sys.path entry it does NOT work, so the following attempts do not work either:
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced')
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced.py')
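For reference, the same working idea can also be written with importlib.import_module, which is the usual replacement for calling __import__ directly:
import importlib
import sys

fullpath = "d:\\python\\scripts\\projectA\\configurationFiles\\"
sys.path.append(fullpath)

# The loaded module is bound to the local name "config",
# so config.myParameter works as before.
config = importlib.import_module('config_advanced')
print(config.myParameter)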
Another possibility that's similar to what you suggest in the question, but which doesn't need you to hide things in subfolders, is to put config_advanced.py and config_plain.py in the same folder as the main script and then dynamically make config.py a link to the actual config file you want to use.
However, martineau's suggestion is much simpler.
OTOH, georg brings up a very valid point, especially if this script isn't just for your own personal use. While using Python itself for the config data is flexible and powerful, it's perhaps a little too powerful. Config data should just be data, not live executable code. If you make a minor mistake when modifying config data you could cause havoc if it's in an executable file. And if a malicious user gets to it, there's no limit to the damage they could cause.
Bad data in a plain old data file will at worst cause a ValueError if it does something weird that your config parsing code isn't expecting. But bad data in a live Python file could throw all sorts of nasty errors. Or even worse, it could do something evil in complete silence...
In reply to your comments, here's some code to illustrate the first point:
#! /usr/bin/env python
import os
config_file = "config.py"
def link_config(mode):
    if os.path.exists(config_file):
        os.remove(config_file)
    config_name = "config_%s.py" % mode
    os.symlink(config_name, config_file)
#.... parse command line to determine config_mode string, then do
link_config(config_mode)
#Now import the newly-linked config file
import config
If config_mode == "plain", the above code will cause config_plain.py to be imported as 'config',
and if config_mode == "advanced" it will cause config_advanced.py to be imported as 'config'.
But as I said before, martineau's method is much simpler. And IIRC, os.symlink may not work on non-unix systems.
...
As for your second point, check out the docs for the json module
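For example, the same kind of settings kept as plain data might look like this (file name and keys invented for illustration):
import json

# config_advanced.json could contain plain data such as
# {"myParameter": 42, "servers": [{"host": "a"}, {"host": "b"}]}
with open("config_advanced.json") as f:
    config = json.load(f)

print(config["myParameter"])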
The entire question is pretty much in the title.
The only documentation I can find on the class is the very sparse cgi documentation, and it doesn't mention in the least how the class receives the file, how it's stored, what functions it supports, etc.
I'm very interested in where the uploaded file is stored. Clearly it's not in memory, since Bottle mentions the FileStorage.read() function is dangerous on large files. If it's placed on disk, I would like to move it to a permanent location without having to read through it in Python and copy it bit by bit to a new location.
But I have no clue where to begin due to the poor documentation of the class. Any ideas?
A little late, but looking into this myself:
It all comes down to the 'make_file' method in cgi.py:
def make_file(self, binary=None):
    import tempfile
    return tempfile.TemporaryFile("w+b")
The tempfile docs ( http://docs.python.org/2/library/tempfile.html ) identify that the file is created in a default directory chosen from a platform-dependent list, but that the user can control the directory location by setting one of the environment variables: TMPDIR, TEMP or TMP.
Please also note from the documentation:
Under Unix, the directory entry for the file is removed immediately
after the file is created. Other platforms do not support this; your
code should not rely on a temporary file created using this function
having or not having a visible name in the file system.
Hope this helps.
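If you want the temporary file created somewhere you control, so that it can afterwards be moved into place with os.rename/shutil.move instead of being copied through Python, one option (an untested sketch, directory name made up) is to subclass FieldStorage and override make_file:
import cgi
import tempfile

class DiskFieldStorage(cgi.FieldStorage):
    def make_file(self, binary=None):
        # Keep the upload on disk in a known directory and with a real
        # name (delete=False), so it can be renamed into place later.
        return tempfile.NamedTemporaryFile("w+b", suffix=".upload",
                                           dir="/var/tmp/uploads",
                                           delete=False)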
I hope this could help:
http://epydoc.sourceforge.net/stdlib/cgi.FieldStorage-class.html
cgi.FieldStorage() has
a "filename",
a "value" - the file itself,
a "file" and
a "type"
And some other things... you can read about them in the doc.
Here is my code:
import cgi

f_sd = open(tempfilesd, 'r+b')  # tempfilesd: path to an existing image file
newdata_sd = cgi.FieldStorage()
newdata_sd.filename = 'sdfile.jpg'
newdata_sd.name = 'file'
newdata_sd.file = f_sd
form.vars.file = newdata_sd
The FieldStorage.file attribute is actually not a file but a cStringIO object, which is described as a memory file in the docs: http://docs.python.org/library/stringio.html
Maybe this can help you a bit.
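Either way, if the goal is just to get the upload onto disk without reading it all into memory at once, shutil.copyfileobj works on whatever file-like object FieldStorage gives you (the field name 'file' and target path are assumptions):
import cgi
import shutil

form = cgi.FieldStorage()
upload = form['file']

# Copies in chunks, whether upload.file is a real temporary file
# or an in-memory StringIO/BytesIO object.
with open('/tmp/' + upload.filename, 'wb') as out:
    shutil.copyfileobj(upload.file, out)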