Pylons REPL: re-evaluate code in a running web server - python

I'm programming in Python on a pre-existing Pylons project (the OKFN's CKAN), but I'm a Lisper by trade and used to that way of doing things.
Please correct me if I make false statements:
In Pylons it seems that I should run
$ paster serve --reload
to get a web server that will notice changes.
At that point I can change a function, save the file and then go to my browser to test the change.
If I want to examine variables in a function while it builds a web page, I insert raise "hello", and then when I load the page I get a browser-based debugger in which I can examine the program.
This is all very nice and works swimmingly, and I get the impression that that's how people tend to write Pylons code.
Unfortunately the reload takes several seconds, and it keeps breaking my train of thought.
What I'd like to do is run the web server from Emacs (although a Python REPL on the command line would be almost as good), so that I can change a function in the editor and then send the new code to the running process without having to restart it. (With a command-line REPL I guess I'd have to copy and paste the new definition, but that would also be workable, just slightly less convenient.)
Python seems very dynamic, and much like lisp in many ways, so I can't see in principle any reason why that wouldn't work.
So I guess the question is:
Is anyone familiar with both the Lisp way of doing things and with Pylons? Can you tell me how to program the Lisp way in Pylons? Or is it impossible, or a bad idea, for some reason?
Edit:
I can run the web server from my Python interpreter inside Emacs with:
from paste.script.serve import ServeCommand
ServeCommand("serve").run(["development.ini"])
And I can get the code to stop and show me what it's doing by inserting:
import pdb
pdb.set_trace()
So now all I need is a way to run the web server on a different thread, so that control returns to the REPL and I can redefine functions and variables in the running process:
import threading

def start_server():
    from paste.script.serve import ServeCommand
    ServeCommand("serve").run(["development.ini"])

# start the server on a background thread so the REPL stays responsive
server_thread = threading.Thread(target=start_server)
server_thread.start()
This seems to work, except that if I redefine a function at the REPL the change doesn't get reflected in the webserver. Does anyone know why?

It seems that this way of working is impossible in Python, for the reason given by TokenMacGuy's comment: redefining a class doesn't change the code in existing instances of that class.
That seems a terrible shame, since in many other respects Python seems very flexible, but it does explain why there's no python-swank!
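
A minimal sketch of TokenMacGuy's point: re-evaluating a class statement binds the name to a brand-new class object, while existing instances (and the handlers already wired into the running server) keep referencing the old one.

class Greeter(object):
    def hello(self):
        return "old"

g = Greeter()

# "redefining" the class, as a REPL re-evaluation does, creates a
# completely new class object and rebinds the name Greeter to it
class Greeter(object):
    def hello(self):
        return "new"

print(g.hello())          # "old": g still points at the original class object
print(Greeter().hello())  # "new": only fresh instances see the change

Monkey-patching the original class object instead (e.g. assigning a new function to the old Greeter.hello rather than rebinding the name) would be visible to existing instances, which is roughly how reloading tools work around this.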


Safely run executable in Node

I found myself having to implement the following use case: I need to run a webapp in which users can submit C programs, which need to be run safely on my backend.
I'm trying to get this done using Node. In the past I had to do something similar, but the user-submitted code was JavaScript, and I got away with using Node's vm2 module. Essentially, I would create a VM and call its run method with the user-submitted code as a string argument, then collect the output and do whatever I had to.
I'm trying to understand whether the same module could help me with C code as well. The idea would be to use exec to first call gcc and compile the user code. Afterwards, I would use a VM to call exec again, this time running the generated executable. Would this be safe?
I don't understand vm2 deeply enough to know whether its safety is limited to executing JS code, or if it can be trusted to also run arbitrary shell commands safely.
In case vm2 isn't appropriate, what would be another way to run an executable in a sandboxed fashion in Node? Feel free to also suggest Python-based solutions, if you know any. Please note that the code will be executed in a separate container from the main app regardless, but I want to make extra sure users cannot easily just tear it down at will.
Thank you in advance.
I am currently facing the same challenge as you, trying to safely execute some untrusted code using spawn. What I can tell you is that vm2 only works for JS/TS code; it can't control what happens in a new process created by spawn, fork or exec.
For now I haven't found any good solution, but I'm thinking of trying to run the process as a user with limited rights.
As you have access to the C source code, I would advise you to research how to run untrusted C programs (in plain C), and see if you can manipulate the C code in order to have a safer environment from that point of view.
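
Since the question invites Python-based suggestions, here is a minimal sketch of the "limited rights" direction using only the standard library. The file paths and limit values are illustrative, resource and preexec_fn are POSIX-only, and rlimits are a partial defense at best; for real isolation you would still want a container, seccomp, or a dedicated sandbox on top of this.

import resource
import subprocess

def limit_resources():
    # runs in the child just before exec: cap CPU time, memory, processes
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                    # 2 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (64 * 2**20, 64 * 2**20))   # 64 MiB
    resource.setrlimit(resource.RLIMIT_NPROC, (16, 16))                # no fork bombs

# compile the submitted program (paths are illustrative)
subprocess.run(["gcc", "-O2", "-o", "/tmp/user_prog", "/tmp/user_prog.c"],
               check=True, timeout=30)

# run it with the limits above, no shell involved, and a wall-clock timeout
result = subprocess.run(["/tmp/user_prog"], capture_output=True,
                        timeout=5, preexec_fn=limit_resources)
print(result.stdout)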

What are the potential reasons for my problem with running multiprocessed Python script from elsewhere?

I have a Python script that uses multiprocessing, specifically Pool().map() from the multiprocessing package.
When I run the script locally on my machine, I remember to guard my code with if __name__ == "__main__", and I run it by simply pressing run in my IDE.
It works, does everything I expect it to.
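For reference, here is a minimal sketch of the pattern described above (names are illustrative):

import multiprocessing

def work(item):
    # the function handed to Pool.map; must be defined at module level
    # so that worker processes can import it
    return item * item

if __name__ == "__main__":
    # the guard matters: under the spawn start method each worker
    # re-imports this module, and without the guard every import would
    # try to create another pool
    pool = multiprocessing.Pool()
    print(pool.map(work, range(10)))
    pool.close()
    pool.join()

Incidentally, if a host runs a script in a way where __name__ is never "__main__", or spawns workers without a proper entry point, that can produce exactly the kind of endless session start/stop behaviour described below.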
However, at work we have a server (I believe it uses C#) that takes Python scripts and executes them. When I upload my script to this server, the multiprocessing part of the code fails.
It is hard for me to tell what's going on (I don't have access to the server, so I have limited information to work with, and really this problem is not my responsibility, so I am asking purely out of curiosity). However, it seems that the code does not throw an error, but rather gets stuck in an infinite loop of some sort, creating and ending new sessions over and over. No work happens inside these sessions; they just begin and end.
Moreover, the code never actually enters the multiprocessed part (i.e. the functions that I map onto the Pool), since if it did, it would fail: for debugging, I put a raise Exception at the very start of the code that is meant to run in parallel, just to check whether it ever reaches it … but it never does.
Any clue what is going on, and how it is meant to be fixed?

Save a script's variables inside the code and restore them after a reboot

On my VPS I run 4 Python scripts, and it has been 60 days since I last rebooted. Now I have to reboot, but if I do, my Python variables and data will be lost, because I don't store them in files; they exist only as variables inside the running scripts.
My OS is Ubuntu Server 16.04 LTS, and I started my Python scripts with the nohup command so that they run in the background.
Now I need a way to stop my scripts without losing their variables, and to start them again with the same variables and data after I reboot the VPS.
Is there any way I can do this?
In addition, I'm sorry for any writing mistakes in my question.
Python doesn't provide any way of doing this.
But you might be able to use CRIU, or a similar tool, to freeze and snapshot the interpreter processes. Then, after the reboot, you can resume each snapshot in a new process that picks up exactly where you left off.
It may not work. [1] But there's a good chance it will. This is essentially the same thing as a Live Migration in the CRIU docs, except that you're not migrating to a new computer/container/etc., just to the future of the same computer. So start reading with that page, and follow the links from there.
You should probably test before you commit to it.
* Try it (obviously don't include the system restart; just kill -9 the process) on a Python script that doesn't do anything important, e.g. one that increments a counter, prints it out, sleeps for a second, and repeats (see the sketch after this list).
* Maybe try it on a script that does similar kinds of stuff to what yours are doing.
* If it's safe to have two copies of one of your programs running at the same time (they're not going to stomp all over each other writing to the same file, or fight over the same socket, or whatever), start a second copy and test dump/kill/resume that.
* Try it on one of your real processes, still without restart.
* Try it on all four.
* Cross your fingers, sacrifice a chicken, and do it for real.
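A throwaway script for that first test might look like the following (a minimal sketch; the criu invocations in the comments assume CRIU is installed and run with sufficient privileges, and the exact flags may vary between versions):

import sys
import time

counter = 0
while True:
    counter += 1
    sys.stdout.write("%d\n" % counter)
    sys.stdout.flush()
    time.sleep(1)

# From another shell (flags illustrative):
#   criu dump -t <PID> --shell-job --images-dir ./checkpoint
#   ... reboot, or kill -9 <PID> ...
#   criu restore --shell-job --images-dir ./checkpoint
# After the restore, the loop should continue counting where it stopped.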
If that doesn't pan out, the only option I can think of is to go through your scripts, manually figure out everything that needs to be saved and how it can be accessed from the top-level globals, and do that in the debugger.
Ideally, you'll write a script that will automate accessing and saving all that stuff—plus another one to feed it into a new instance at restart. Then you just pdb the live interpreters and start dumping everything.
This is guaranteed to be a whole lot of work, and not much fun. On the plus side, it is guaranteed to work if you do it right. On the third hand, it's pretty easy to not do it right.
[1] If you rely on open files, pipes, sockets, etc., CRIU does about as much as you could do, which is more than you might expect at first, but still not everything you could possibly want… Also, if you're using almost all of your RAM, it can be hard to wedge things back into exactly the same state. And there are probably other possible issues.

Why python debugger always get this timeout waiting for response on 113 when using Pycharm?

This happens especially when I run code that takes a fairly long time (roughly 10 minutes) and then hit a breakpoint.
The Python debugger always shows me this kind of error: "timeout waiting for response on 113".
I circled the errors in red in the screenshot.
I use PyCharm as my Python IDE. Is this just an issue with the PyCharm IDE, or a Python debugger issue?
And if PyCharm is not recommended, can anyone suggest a better IDE that can debug efficiently?
I had a similar thing happen to me a few months ago; it turned out I had a really slow operation inside the __repr__() of a variable I had on the stack. When PyCharm hits a breakpoint it grabs all of the variables in the current scope and calls __repr__ on them. Here's an amusement that demonstrates this issue:
import time

class Foo(object):
    def __repr__(self):
        # simulate a pathologically slow repr; PyCharm calls repr on
        # every variable in scope the moment a breakpoint is hit
        time.sleep(100)
        return "look at me"

if __name__ == '__main__':
    a = Foo()
    print "set your breakpoint here"
PyCharm will also call __getattribute__('__class__'). If you have a __getattribute__ that's misbehaving, that could trip you up as well.
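In the same spirit, a minimal sketch of a misbehaving __getattribute__ (a hypothetical class, same idea as the Foo example above):

import time

class Sneaky(object):
    def __getattribute__(self, name):
        # every attribute lookup, including the debugger's own
        # __class__ lookup, passes through here and stalls
        time.sleep(100)
        return object.__getattribute__(self, name)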
This may not be what's happening to you but perhaps worth considering.
As you are on Windows: for debugging this kind of thing, and most other things, I use the good old PythonWin IDE.
This IDE + debugger runs in the same process as the code being debugged!
Being in direct touch with the real objects, like pdb in a simple interactive shell but with a usable GUI, is a big advantage most of the time. And this way there are no issues with transferring large objects between processes via repr/pickle, no delays, and no timeout problems.
If a step takes a long time, PythonWin will likewise simply wait and not respond until it finishes (unless you issue a break signal/KeyboardInterrupt via the PythonWin system tray icon).
And the interactive shell of PythonWin is fully usable during debugging, with the namespace of the current frame.
It's an old question, but this reply may be helpful:
Delete the .idea folder from the project's root directory. It cleans up PyCharm's database, and the debugger stops timing out. It works for me on Windows.

Possible to run a delayed code execution?

Is it possible to run a small piece of code automatically after a script has finished running?
I am asking because, for some reason, if I add this code to the main script, it works but displays a list of errors (the thing it refers to is already there, but Maya states that it cannot find it, or something of that sort).
I realized that after running my script, Maya seems to 'load' its own refresh routine, along with some plugins written by my company. If I run the small piece of code after my main script execution and after the Maya/plugin 'refresh', it works with no problem. I'd like to make the process as automated as possible, all within one script if that is possible...
So is this possible? Some sort of delayed execution method?
FYI, the main script's execution time depends on the number of elements in the scene; the more there are, the longer it takes...
Maya has a command, maya.cmds.evalDeferred, that is meant for exactly this purpose. It waits until no more Maya processing is pending and then evaluates itself.
You can also use maya.cmds.scriptJob for the same purpose, for example:
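Here is a minimal sketch of both approaches (the callback name is illustrative):

import maya.cmds as cmds

def after_load():
    # runs once Maya is idle, i.e. after the main script and any
    # plugin refreshes have finished
    print("running the delayed code")

# evaluate the callback once no other Maya processing is pending
cmds.evalDeferred(after_load)

# alternatively, fire once on the next idle event via a script job
cmds.scriptJob(event=["idle", after_load], runOnce=True)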
Note: While eval is considered dangerous and insecure, in the Maya context it's really quite normal, mainly because everything in Maya is inherently insecure: nearly all GUI items are just eval commands that the user may modify. So the second you let anybody use your Maya shell, your security is breached.
