I am working on a script (script A) that needs to open a new Python IDLE shell, automatically run another script (script B) in it, and then close it. The following code is what I use for this purpose:
import sys
sys.argv=['','-n','-t','My New Shell','-c','execfile("VarLoader.py")']
import idlelib.PyShell
idlelib.PyShell.main()
However, I can't get the new shell to close automatically. I have tried adding the following to script B, but either it doesn't close the new shell or a window pops up asking whether I want to kill it.
exit()
or
import sys
sys.exit()
Instead of monkeypatching or modifying the IDLE source code to make your program skip the exit prompt, I'd recommend creating a subclass of PyShell that overrides the close method to work the way you want:
import idlelib.PyShell

class PyShell_NoExitPrompt(idlelib.PyShell.PyShell):
    def close(self):
        "Extend EditorWindow.close(), does not prompt to exit"
        ## if self.executing:
        ##     response = tkMessageBox.askokcancel(
        ##         "Kill?",
        ##         "Your program is still running!\n Do you want to kill it?",
        ##         default="ok",
        ##         parent=self.text)
        ##     if response is False:
        ##         return "cancel"
        self.stop_readline()
        self.canceled = True
        self.closing = True
        return idlelib.PyShell.EditorWindow.close(self)
The original issue with this was that idlelib.PyShell.main would not use your subclass. However, you can create a copy of that function - without modifying the original - by using the FunctionType constructor with an overridden global namespace, so that the copy picks up your modified class:
import functools
from types import FunctionType

def copy_function(f, namespace_override):
    """Create a copy of a function (code, signature, defaults) with a modified global scope."""
    namespace = dict(f.__globals__)
    namespace.update(namespace_override)
    new_f = FunctionType(f.__code__, namespace, f.__name__, f.__defaults__, f.__closure__)
    # copy metadata (__name__, __doc__, etc.) from the original onto the copy
    return functools.update_wrapper(new_f, f)
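To see what the namespace override does in isolation, here is a minimal sketch with toy names (not part of the original answer):

GREETING = "hello"

def greet():
    return GREETING

greet_copy = copy_function(greet, {"GREETING": "goodbye"})

print(greet())       # -> hello   (the original still reads its own globals)
print(greet_copy())  # -> goodbye (the copy sees the overridden name)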
Then you can run your extra IDLE shell like this:
import sys
# there is also a way to prevent the need to override sys.argv, but that isn't as concerning to me.
sys.argv = ['', '-n', '-t', 'My New Shell', '-c', 'execfile("VarLoader.py")']

hacked_main = copy_function(idlelib.PyShell.main,
                            {"PyShell": PyShell_NoExitPrompt})
hacked_main()
Now you can leave IDLE the way it is and have your program work the way you want it to. (It is also compatible with other versions of Python!)
I'm trying to test a Python (2.7) script that works with standard input (read with raw_input() and written with a simple print), but I can't find out how to do this, and I'm sure the issue is very simple.
This is a very condensed version of my script:
def example():
    number = raw_input()
    print number

if __name__ == '__main__':
    example()
I want to write a unittest test to check this, but I can't find out how. I've tried StringIO and other things, but I haven't found a solution to something that should be really simple.
Does somebody have an idea?
PS: Of course, in the real script I use data blocks with several lines and other kinds of data.
Thank you so much.
EDIT:
Thank you so much for the first really specific answer; it works perfectly. My only little problem was importing StringIO: I was doing import StringIO and I needed to do from StringIO import StringIO (I don't really understand why), but be that as it may, it works.
But I've found another problem with this approach. In my project I need to test a script this way (which works perfectly thanks to your support), but I want to do the following:
I have a file with a lot of tests to run against a script, so I open the file and read blocks of input along with their expected result blocks. I would like the code to process one block, check its result, and then do the same with the next one, and so on...
Something like this:
class Test(unittest.TestCase):
    ...
    # open file and process, saving data like datablocks and results
    ...
    allTest = True
    for test in tests:
        stub_stdin(self, test.dataBlock)
        stub_stdouts(self)
        runScript()
        if sys.stdout.getvalue() != test.expectResult:
            allTest = False
    self.assertEqual(allTest, True)
I know that maybe unittest doesn't make much sense used this way, but you get the idea of what I want. Anyway, this approach fails and I don't know why.
Typical techniques involve replacing the standard sys.stdin and sys.stdout with your desired objects. If you do not care about Python 3 compatibility, you can just use the StringIO module; however, if you are forward thinking and willing to restrict yourself to Python 2.7 and 3.3+, supporting both Python 2 and 3 this way becomes possible without too much work through the io module (it requires a bit of modification, but put that thought on hold for now).
Assuming you already have a unittest.TestCase going, you can create a utility function (or a method in the same class) that will replace sys.stdin/sys.stdout as outlined. First the imports:
import sys
import io
import unittest
In one of my recent projects I've done this for stdin; it takes a str for the input that the user (or another program, through a pipe) would feed into yours as stdin:
def stub_stdin(testcase_inst, inputs):
    stdin = sys.stdin

    def cleanup():
        sys.stdin = stdin

    testcase_inst.addCleanup(cleanup)
    sys.stdin = StringIO(inputs)
As for stdout and stderr:
def stub_stdouts(testcase_inst):
    stderr = sys.stderr
    stdout = sys.stdout

    def cleanup():
        sys.stderr = stderr
        sys.stdout = stdout

    testcase_inst.addCleanup(cleanup)
    sys.stderr = StringIO()
    sys.stdout = StringIO()
Note that in both cases the function accepts a test case instance and calls its addCleanup method, which registers a cleanup function that resets the streams back to what they were once the test method has concluded. The effect is that, from the moment this is invoked in the test case until the end of the test, sys.stdout and friends are replaced with the io.StringIO version, meaning you can check their values easily and don't have to worry about leaving a mess behind.
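For instance, here is a minimal sketch (a hypothetical extra test, not from the original code, relying on the stub_stdouts helper above) that shows the replacement in action and leans on addCleanup to put the real stream back afterwards:

class RestoreDemo(unittest.TestCase):
    def test_streams_are_stubbed(self):
        real_stdout = sys.stdout
        stub_stdouts(self)
        # sys.stdout is swapped out for the duration of this test method
        self.assertIsNot(sys.stdout, real_stdout)
        # when the method returns, the registered cleanup restores real_stdout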
Better to show this as an example. To use this, you can simply create a test case like so:
class ExampleTestCase(unittest.TestCase):
    def test_example(self):
        stub_stdin(self, '42')
        stub_stdouts(self)
        example()
        self.assertEqual(sys.stdout.getvalue(), '42\n')
Now, in Python 2, this test will only pass if the StringIO class is from the StringIO module, and in Python 3 no such module exists. What you can do is use the version from the io module with a modification that makes it slightly more lenient in terms of what input it accepts, so that the unicode encoding/decoding is done automatically rather than triggering an exception (for instance, print statements in Python 2 will not work nicely without the following). I typically do this for cross-compatibility between Python 2 and 3:
class StringIO(io.StringIO):
    """
    A "safely" wrapped version
    """

    def __init__(self, value=''):
        value = value.encode('utf8', 'backslashreplace').decode('utf8')
        io.StringIO.__init__(self, value)

    def write(self, msg):
        io.StringIO.write(self, msg.encode(
            'utf8', 'backslashreplace').decode('utf8'))
Now plug your example function plus every code fragment in this answer into one file, and you will get a self-contained unittest that tests against stdio and works in both Python 2 and 3 (although you need to call print as a function in Python 3).
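For completeness, the usual entry point (not shown in the original fragments) makes that single file runnable directly:

if __name__ == '__main__':
    unittest.main()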
One more note: you can always put the stub_ function calls in the setUp method of the TestCase if every single test method requires that.
Of course, if you want to use one of the various mocking libraries out there to stub out stdin/stdout, you are free to do so, but this approach relies on no external dependencies, if that is your goal.
As for your second issue: test cases have to be written in a certain way, namely the test code must be encapsulated within a method and not sit at the class level, so your original example will fail. However, you might want to do something like this:
class Test(unittest.TestCase):
    def helper(self, data, answer, runner):
        stub_stdin(self, data)
        stub_stdouts(self)
        runner()
        self.assertEqual(sys.stdout.getvalue(), answer)
        self.doCleanups()  # optional, see comments below

    def test_various_inputs(self):
        data_and_answers = [
            ('hello', 'HELLOhello'),
            ('goodbye', 'GOODBYEgoodbye'),
        ]
        runScript = upperlower  # the function I want to test
        for data, answer in data_and_answers:
            self.helper(data, answer, runScript)
The reason you might want to call doCleanups is to prevent the cleanup stack from growing as deep as there are data_and_answers pairs. However, it pops everything off the cleanup stack, so if you had other things that need to be cleaned up at the end, this could become problematic. You are also free to leave it out, since all of the stdio-related objects will be restored at the end in the same order, so the real ones will always come back. Now, the function I wanted to test:
def upperlower():
    raw = raw_input()
    print (raw.upper() + raw),
So yes, a bit of explanation for what I did might help. Remember that within a TestCase class, the framework relies strictly on the instance's assertEqual and friends to function. To ensure the testing is done at the right level, you really want to call those asserts on every iteration, so that a helpful error message is shown at the moment the error occurs, with the inputs/answers that didn't come out right - rather than only at the very end, as with the for loop in your example (which will tell you something was wrong, but not exactly which one of the hundreds, and now you are mad). Also, the helper method: you can call it anything you want, as long as the name doesn't start with test, because then the framework would try to run it as a test and it would fail terribly. Just follow this convention and you can basically have templates within your test case for running your tests - you can then use them in a loop with a bunch of inputs/outputs like I did.
As for your other question:
My only little problem was importing StringIO: I was doing import StringIO and I needed to do from StringIO import StringIO (I don't really understand why), but be that as it may, it works.
Well, if you look at my original code, I did show how I imported io and then overrode the io.StringIO class by defining class StringIO(io.StringIO). Plain StringIO works for you because you are doing this strictly from Python 2, whereas I try to target my answers at Python 3 whenever possible, given that Python 2 will (probably definitely this time) not be supported in less than 5 years. Think of the future users who might read this post with a problem similar to yours. Anyway, yes, from StringIO import StringIO works, as that's the StringIO class from the StringIO module. from cStringIO import StringIO should also work, as that imports the C version of the StringIO module. They all offer close enough interfaces, so they will basically work as intended (until, of course, you try to run this under Python 3).
Again, putting all this together along with my code should result in a self-contained working test script. Do remember to look at the documentation and follow the form of the code, rather than inventing your own syntax and hoping things work. (As for exactly why your code didn't work: the "test" code was defined where the class was being constructed, so all of it was executed while Python was importing your module, and since none of the things needed for the test to run were available yet (the class itself didn't even exist), the whole thing just dies in fits of twitching agony.) Asking questions here helps too; even though the issue you face is really common, not having a quick and simple name to search for your exact problem makes it difficult to figure out where you went wrong, I suppose? :) Anyway, good luck, and good on you for taking the effort to test your code.
There are other methods, but given that the other questions/answers I looked at here on SO didn't seem to help, I hope this one does. Others for reference:
How to supply stdin, files and environment variable inputs to Python unit tests?
python mocking raw input in unittests
Naturally, it bears repeating that all of this can be done using unittest.mock, available in Python 3.3+, or the original/rolling backport version (mock) on PyPI, but given that those libraries hide some of the intricacies, they may end up obscuring the details of what actually happens (or needs to happen) and how the redirection is done. If you want, you can read up on unittest.mock.patch and scroll down slightly to the section on patching sys.stdout with StringIO.
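For reference, a minimal sketch of that mock-based alternative (assuming Python 3, where unittest.mock and io.StringIO are available; this is not part of the stubbing approach above):

import io
import unittest
from unittest import mock

class MockPatchDemo(unittest.TestCase):
    @mock.patch('sys.stdout', new_callable=io.StringIO)
    def test_example(self, fake_stdout):
        # print writes to the patched sys.stdout, which is now a StringIO
        print('42')
        self.assertEqual(fake_stdout.getvalue(), '42\n')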
Assuming you have a Python file like so:
#python
#comment
x = raw_input()
exec(x)
How could you get the source of the entire file, including the comments with exec?
This is exactly what the inspect module is there for. See the Retrieving source code section in particular.
If you're trying to get the source of the currently-running module:
import sys
import inspect

thismodule = sys.modules[__name__]
source = inspect.getsource(thismodule)
If you're not totally bound to using exec, this is simple:
print open(__file__).read()
Not sure what you are planning to use this for, but I have been using this to reduce the work required to maintain my command-line scripts. I always use open(__file__, 'r'):
'''
Head comments ...
'''
.
.
.
def getheadcomments():
    """
    This function will make a string from the text between the first and
    second ''' encountered. Its purpose is to make maintenance of the comments
    easier by only requiring one change for the main comments.
    """
    desc_list = []
    start_and_break = "'''"
    read_line_bool = False
    # Get self name and read self line by line.
    for line in open(__file__, 'r'):
        if read_line_bool:
            if not start_and_break in line:
                desc_list.append(line)
            else:
                break
        if (start_and_break in line) and read_line_bool == False:
            read_line_bool = True
    return ''.join(desc_list)
.
.
.
parser = argparse.ArgumentParser(description=getheadcomments())
This way, the comments at the top of the program will be output when you run the program from the command line with the --help option.
I just started with Python and I'm having some problems. I've already written a few scripts for ArcGIS and noticed some recurring code, so I thought it would be smart to put that in modules which I can easily reuse.
So now I have two scripts, script.py and toolbox.py.
My script was working fine, so I copied and pasted the part I needed and edited it a bit. Everything goes well except for the messages created with gp.AddMessage.
script.py will create the message "Hello Stackoverflow", but the messages from toolbox.py don't show up. Why is that? It does load the toolbox, because I can use it later on, so it recognizes the gp object.
I'm kind of stuck here; I would love to be able to print messages from inside the modules to inform the user of the tool about what is happening.
script.py:
import os, sys, arcgisscripting
# Create the Geoprocessor object
gp = arcgisscripting.create()
gp.AddMessage("# Hello Stackoverflow")
import toolbox
toolbox.loadToolbox
toolbox.py:
def loadToolbox():
    try:
        # some code
        gp.AddToolbox(path)
        gp.AddMessage("# Toolbox loaded")
    except:
        gp.AddMessage("# Toolbox not found")
You have two problems with your code:
You never call the loadToolbox function; you only refer to it. Add ():
toolbox.loadToolbox()
Your loadToolbox() function doesn't take gp as an argument. If gp is meant to be a global, then it won't be visible to the toolbox module (globals are only visible in the current module).
Add gp as a parameter and pass it in when calling loadToolbox. In script.py:
toolbox.loadToolbox(gp)
and in toolbox.py:
def loadToolbox(gp):
# rest of function
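Putting both fixes together, a sketch of how the two files could look (the body of the try block is abbreviated as in the question, so path is assumed to be set by that omitted code):

# script.py
import arcgisscripting
import toolbox

gp = arcgisscripting.create()
gp.AddMessage("# Hello Stackoverflow")
toolbox.loadToolbox(gp)  # note the parentheses and the gp argument

# toolbox.py
def loadToolbox(gp):
    try:
        # ... some code that sets up `path` ...
        gp.AddToolbox(path)
        gp.AddMessage("# Toolbox loaded")
    except:
        gp.AddMessage("# Toolbox not found")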
Code is much more precise than English; here's what I'd like to do:
import sys
fileName = sys.argv[1]
className = sys.argv[2]
# open py file here and import the class
# ???
# Instantiate a new object of type "className"
a = eval(className + "()") # I don't know if this is the way to do that.
# I "know" that className will have this method:
a.writeByte(0x0)
EDIT:
Per the request of the answers, here's what I'm trying to do:
I'm writing a virtual processor adhering to the SIC/XE instruction set. It's an educational theoretical processor used to teach the fundamentals of assembly language and systems software to computer science students. There is a notion of a "device" that I'm trying to abstract from the programming of the "processor." Essentially, I want the user of my program to be able to write their own device plugin (limited to "read_byte" and "write_byte" functionality) and then I want them to be able to "hook up" their devices to the processor at command-line time, so that they can write something like:
python3 sicsim -d1 dev1module Dev1Class -d2 ...
They would also supply the memory image, which would know how to interact with their device. I basically want both of us to be able to write our code without it interfering with each other.
Use importlib.import_module and the built-in function getattr. No need for eval.
import sys
import importlib
module_name = sys.argv[1]
class_name = sys.argv[2]
module = importlib.import_module(module_name)
cls = getattr(module, class_name)
obj = cls()
obj.writeByte(0x0)
This will require that the file lives somewhere on your Python path. Most of the time, the current directory is on that path. If this is not sufficient, you'll have to parse the directory out of the argument and prepend it to sys.path (see the sketch after the example input below). I'll be glad to help with that; just give me a sample input for the first command-line argument.
Valid input for this version would be something like:
python3 myscript.py mypackage.mymodule MyClass
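If the file is not on the path, here is a minimal sketch of the directory handling mentioned above (assuming a hypothetical argument format where the first argument is a path to a .py file rather than a dotted module name):

import os
import sys
import importlib

file_path = os.path.abspath(sys.argv[1])   # hypothetical: a path such as some/dir/dev1module.py
class_name = sys.argv[2]

directory, filename = os.path.split(file_path)
module_name = os.path.splitext(filename)[0]

sys.path.insert(0, directory)              # make the containing directory importable
module = importlib.import_module(module_name)
cls = getattr(module, class_name)

obj = cls()
obj.writeByte(0x0)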
As aaronasterling mentions, you can take advantage of the import machinery if the file in question happens to be on the Python path (somewhere under the directories listed in sys.path), but if that's not the case, use the built-in exec() function:
fileVars = {}
exec(open(fileName).read(), fileVars)
Then, to get an instance of the class, you can skip the eval():
a = fileVars[className]()
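To tie this back to the device-plugin idea in the question: a user-supplied module only needs to expose a class with the agreed-upon methods, and either loading approach above will pick it up. The names below (dev1module, Dev1Class, readByte) are hypothetical, mirroring the command-line example in the question:

# dev1module.py -- a hypothetical plugin written by a user of the simulator
class Dev1Class(object):
    def __init__(self):
        self.buffer = []

    def writeByte(self, byte):
        # a real device might write to a file, socket, or terminal instead
        self.buffer.append(byte)

    def readByte(self):
        # return the next buffered byte, or 0x0 if the device has nothing to offer
        return self.buffer.pop(0) if self.buffer else 0x0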