Generating and running Haskell code from Python

We are writing a Python program that attempts to synthesize a (simple) Haskell function given input-output pairs. Throughout the run of the program, we generate Haskell code and check its correctness against the user-supplied examples.
Suppose we get as input "1 2" and expected output "3". We would (eventually)
come up with the plus function. We would then run
(\x y -> x + y) 1 2 in Haskell and check whether it evaluates to 3.
The way we currently do things is by running the following Python code:
from subprocess import Popen, PIPE, STDOUT
proc = Popen(f'ghc -e "{haskell_code}"', shell=True, stdout=PIPE, stderr=STDOUT)
haskell_output = proc.stdout.read().decode('utf-8').strip('\n')
As neither of us is familiar with ghc, Haskell, processes, or really anything to do with any of this, we were hoping someone could help us with performing this task in a (much) more efficient manner, as this is currently very slow.
Additionally, we would like to be able to perform more than a single statement. For example, we would like to import Data.Char so that our function can use “toUpper”. However, the way we are currently doing this is by sending a single lambda function and the inputs appended to it, and we aren't sure how to add an import statement above that (adding "\n" did not seem to work).
To summarize, we would like the fastest (runtime) solution that would allow us to test Haskell functions from Python (where we don't have the code for all Haskell functions in advance or at one point in time, but rather test as we generate the code), while allowing us to use more than a single statement (for example, importing).
Apologies if any of this is trivial or stupid, any help would be highly appreciated.

This seems like an odd thing to be doing... but interesting nonetheless.
Two things come immediately to mind here. The first is to use the ghci REPL instead of spawning a new process for every eval attempt: stream your I/O into one long-lived ghci process rather than starting a new ghc process for each attempt. The overhead of starting a new process for every eval is quite a performance killer. I'd usually go for expect, but since you want Python, I'll call on pexpect:
import pexpect
import sys
from subprocess import Popen, PIPE, STDOUT
import time

REPL_PS = 'Prelude> '
LOOPS = 100

def time_function(func):
    def decorator(*args, **kwargs):
        ts = time.time()
        func(*args, **kwargs)
        te = time.time()
        print("total time", te - ts)
    return decorator

@time_function
def repl_loop():
    repl = pexpect.spawnu('ghci')
    repl.expect(REPL_PS)
    for i in range(LOOPS):
        repl.sendline('''(\\x y -> x + y) 1 2''')
        _, haskell_output = repl.readline(), repl.readline()
        repl.expect(REPL_PS)

@time_function
def subproc_loop():
    for i in range(LOOPS):
        proc = Popen('''ghc -e "(\\x y -> x + y) 1 2"''', shell=True, stdout=PIPE, stderr=STDOUT)
        haskell_output = proc.stdout.read().decode('utf-8').strip('\n')
        # print(haskell_output)

repl_loop()
subproc_loop()
This gave me a very consistent >2x speed boost.
See pexpect doc for more info: https://github.com/pexpect/pexpect/
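This also covers the multi-statement/import part of your question: inside a persistent ghci session, an import is just another line you send before evaluating an expression. A minimal, untested sketch of that idea (it assumes you pin the prompt with :set prompt, since importing a module normally changes the prompt text and would break the expect pattern):
import pexpect

repl = pexpect.spawnu('ghci')
repl.expect('Prelude> ')
repl.sendline(':set prompt "ghci> "')    # pin the prompt so it stays predictable after imports
repl.expect('ghci> ')
repl.sendline('import Data.Char')        # extra "statements" are just extra lines in the session
repl.expect('ghci> ')
repl.sendline('(\\s -> map toUpper s) "abc"')
repl.expect('ghci> ')
print(repl.before)                       # contains the echoed expression and its result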
The second immediate idea would be to use some distributed computing. I don't have the time to build a full-blown demo here, but there are many great examples already living in the land of the internet and SO. The idea is to have multiple "python + ghci" processes reading eval attempts from a common queue and pushing the results to a common eval-attempt checker (a minimal sketch follows the links below). I don't know much about ghc(i), but a quick check shows that ghci is a multithreaded process, so this may require multiple machines to pull off, each machine attempting a different subset of the attempts in parallel.
Some links that may be of interest here:
How to use multiprocessing queue in Python?
https://docs.python.org/2/library/multiprocessing.html
https://eli.thegreenplace.net/2012/01/24/distributed-computing-in-python-with-multiprocessing
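To make that layout concrete, here is a minimal, untested sketch: a pool of worker processes pulls candidate Haskell expressions from a shared task queue and pushes (expression, output) pairs onto a result queue for the checker. For brevity the workers shell out to ghc -e; in practice each worker would drive its own long-lived ghci session as in the pexpect example above.
from multiprocessing import Process, Queue
from subprocess import Popen, PIPE, STDOUT

def worker(tasks, results):
    while True:
        expr = tasks.get()
        if expr is None:          # sentinel: no more work for this worker
            break
        proc = Popen(f'ghc -e "{expr}"', shell=True, stdout=PIPE, stderr=STDOUT)
        out = proc.stdout.read().decode('utf-8').strip('\n')
        results.put((expr, out))

if __name__ == '__main__':
    tasks, results = Queue(), Queue()
    workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    candidates = ['(\\x y -> x + y) 1 2', '(\\x y -> x * y) 1 2']
    for expr in candidates:
        tasks.put(expr)
    for _ in workers:
        tasks.put(None)
    for _ in candidates:          # one result per submitted expression
        print(results.get())
    for w in workers:
        w.join()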

Related

Python multithreading program is giving unexpected output

I know that there is no guarantee regarding the order of execution for the threads. But my question is: when I run the code below,
import threading

def doSomething():
    print("Hello ")

d = threading.Thread(target=doSomething, args=())
d.start()
print("done")
the output I am getting is either
Hello done
or this
Hello
done
Maybe if I try enough times it might give me this as well:
done
Hello
But I am not convinced by the first output. The order can differ, but how come both outputs end up on the same line? Does that mean that one thread is messing with the other thread's work?
This is a classic race condition. I can't personally reproduce it, and it would likely vary by interpreter implementation and the precise configuration applied to stdout. On Python interpreters without a GIL, there is basically no protection against races, and this behavior is expected to a certain extent. Python interpreters do tend to try to protect you from egregious data corruption due to threading, unlike C/C++, but even if they ensure every byte written ends up actually printed, they usually wouldn't try to make explicit guarantees against interleaving; Hdelolnoe would be a possible (if fairly unlikely given likely implementations) output when you're making no effort whatsoever to synchronize access to stdout.
On CPython, the GIL protects you more, and writing a single string to stdout is more likely to be atomic, but you're not writing a single string. Essentially, the implementation of print is to write objects one by one to the output file object as it goes, it doesn't batch up to a single string then call write just once. What this means is that:
print("Hello ") # Implicitly outputs default end argument of '\n' after printing provided args
is roughly equivalent to:
sys.stdout.write("Hello ")
sys.stdout.write("\n")
If the underlying stack of file objects that implements sys.stdout decides to engage in real I/O in response to the first write, they'll release the GIL before performing the actual write, allowing the main thread to catch up and potentially grab the GIL before the worker thread is given a chance to write the newline. The main thread then outputs the done and then the newlines from each print come out in some unspecified (and irrelevant) order based on further potential races.
Assuming you're on CPython, you could probably fix this by changing the code to this equivalent code using single write calls:
import threading
import sys

def doSomething():
    sys.stdout.write("Hello \n")

d = threading.Thread(target=doSomething)  # If it takes no arguments, no need to pass args
d.start()
sys.stdout.write("done\n")
and you'd be back to a race condition that only swaps the order, without interleaving (the language spec wouldn't guarantee a thing, but most reasonable implementations would be atomic for this case). If you want it to work with any guarantees without relying on the quirks of the implementation, you have to synchronize:
import threading

lck = threading.Lock()

def doSomething():
    with lck:
        print("Hello ")

d = threading.Thread(target=doSomething)
d.start()
with lck:
    print("done")

Python: kill the application if subprocess.check_output waits too long to receive the output

I have implemented a cache-oblivious algorithm and have shown with the PAPI library that the L1/L2/L3 misses are very low. However, I would also like to see how the algorithm behaves if I reduce the available RAM and force the algorithm to start using the swap space on disk. Since the algorithm is cache oblivious, I should expect much better scaling to disk compared to other, non-cache-oblivious algorithms for the same problem.
The problem, however, is that it is very hard to predict how badly the algorithms will perform once they spill to disk; a small increase in the input size might dramatically change the time it takes for the algorithm to finish running. So if you have many algorithms that you want to test and one takes forever to finish, the experiment will be useless (I could of course sit and monitor the experiment and kill it with Ctrl+C, but I really need to sleep).
Let's say the algorithms are A, B and C. I use a different Python script, one for each algorithm. For varying input size n I use subprocess.check_output to call the executable of the implementation. This executable returns some statistics that I then process and store in a suitable format that I can use with R, for example, to make some nice plots.
This is an example code for algorithm A:
import subprocess
import sys

f1 = open('data.stats', 'w+', 1)

min_n = 200000
max_n = 2000000
step = 200000
iterations = 10
ns = range(min_n, max_n + 1, step)
incr = 0

f1.write('n\tp\talg\ttime\n')
for n in ns:
    i = 0
    for p in ps:  # ps: the list of p values, defined elsewhere in the real script
        for it in range(0, iterations):
            resA = subprocess.check_output(['/usr/bin/time', '-v', './A', str(n)],
                                           stderr=subprocess.STDOUT)
            # do something with resA
            f1.write(resA.decode() + '\n')
            incr = incr + 1
            print(incr / (len(ns) * iterations) * 100.0, '%', end="\r")
        i = i + 1
My question is, can I somehow kill a script if subprocess.check_output takes too long to receive an answer? The best thing for me would be to define a cutoff, like 10 minutes, so that if subprocess.check_output hasn't received anything by then, the entire script is killed.
If you're using Python 3 (and the format of your call to print suggests you might be), then check_output actually already has a timeout argument that might be useful to you: https://docs.python.org/3.6/library/subprocess.html#subprocess.check_output
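For example, with a ten-minute cutoff (a sketch that slots into the inner loop of your script, reusing its n, f1 and sys; subprocess.TimeoutExpired is raised once the limit is exceeded):
try:
    resA = subprocess.check_output(['/usr/bin/time', '-v', './A', str(n)],
                                   stderr=subprocess.STDOUT,
                                   timeout=600)  # seconds
except subprocess.TimeoutExpired:
    f1.close()
    sys.exit('algorithm A timed out at n = %d, aborting this experiment' % n)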

Python threading.thread.start() doesn't return control to main thread

I'm trying to write a program that executes a piece of code in such a way that the user can stop its execution at any time without stopping the main program. I thought I could do this using threading.Thread, but then I ran the following code in IDLE (Python 3.3):
from threading import *
import math

def f():
    eval("math.factorial(1000000000)")

t = Thread(target=f)
t.start()
The last line doesn't return: I eventually restarted the shell. Is this a consequence of the Global Interpreter Lock, or am I doing something wrong? I didn't see anything specific to this problem in the threading documentation (http://docs.python.org/3/library/threading.html)
I tried to do the same thing using a process:
from multiprocessing import *
import math

def f():
    eval("math.factorial(1000000000)")

p = Process(target=f)
p.start()
p.is_alive()
The last line returns False, even though I ran it only a few seconds after I started the process! Based on my processor usage, I am forced to conclude that the process never started in the first place. Can somebody please explain what I am doing wrong here?
Thread.start() never returns! Could this have something to do with the C implementation of the math library?
As @eryksun pointed out in the comments: math.factorial() is implemented as a C function that doesn't release the GIL, so no other Python code may run until it returns.
Note: the multiprocessing version should work as is: each Python process has its own GIL.
factorial(1000000000) has billions of digits. Try import time; time.sleep(10) as a dummy calculation instead.
If you have issues with multithreaded code in IDLE then try the same code from the command line, to make sure that the error persists.
If p.is_alive() returns False after p.start() has already been called, it might mean that there is an error in the f() function, e.g., a MemoryError.
On my machine, p.is_alive() returns True and one of the CPUs is at 100% if I paste your code from the question into a Python shell.
Unrelated: remove wildcard imports such as from multiprocessing import *. They may shadow other names in your code, so you can't be sure what a given name means; e.g., threading could define an eval function (it doesn't, but it could) with similar but different semantics that might break your code silently.
I want my program to be able to handle ridiculous inputs from the user gracefully
If you pass user input directly to eval() then the user can do anything.
Is there any way to get a process to print, say, an error message without constructing a pipe or other similar structure?
It is ordinary Python code:
print(message) # works
The difference is that if several processes call print() then the output might be garbled. You could use a lock to synchronize the print() calls.
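For example, a minimal sketch with a multiprocessing.Lock shared by the workers:
from multiprocessing import Process, Lock

def f(lock, message):
    with lock:              # only one process prints at a time
        print(message)

if __name__ == '__main__':
    lock = Lock()
    procs = [Process(target=f, args=(lock, 'worker %d' % i)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()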

How do I manage multiple processes in Python?

I have a simple (I hope) question:
My problems started when I wrote a GUI.
I cannot refresh the user interface while executing heavy computations.
- If I use threads there is the GIL (not too slow, but the GUI freezes).
I have tried so many things that my last hope is starting a new process (and here comes the problem).
First of all:
- I have never used processes before (it could be a semantic error).
- I don't know the limitations (and exceptions) of processes.
- I am running CPython 3.1.2 on Mac OS X 10.6.8.
Here is an example (not the real code, but the result is the same) of what I need to solve:
from multiprocessing import *

def bob(q):
    print(q)

A = Process(target=bob, args=("something"))
A.start()
A.is_alive()
A.join()
and the output is:
True
It doesn't print "something", so I guess it doesn't run the process, but A.is_alive() says it is running, and when the interpreter arrives at A.join() it waits more or less forever.
Can someone explain this to me?
You need to add a comma: args=("something",).
The comma creates a tuple; otherwise it is just a string in parentheses.
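A quick way to see the difference:
print(type(("something")))     # <class 'str'>   - just a parenthesized string
print(type(("something",)))    # <class 'tuple'> - the trailing comma makes it a tuple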
You should give a list of arguments, not just the argument. This does the job for me:
from multiprocessing import *

def bob(q):
    print(q)

A = Process(target=bob, args=["something"])
A.start()
A.is_alive()
A.join()
The following uses sleep sort (http://stackoverflow.com/questions/6474318/what-is-the-time-complexity-of-the-sleep-sort) to sort the uppercase characters A-Z:
somestring = "DGAECBF"

from multiprocessing import *

def bob(t):
    import time
    time.sleep(ord(t) - ord("A"))
    print(t)

p = []
for c in somestring:
    p.append(Process(target=bob, args=([c])))
    p[-1].start()
for pp in p:
    pp.join()

Asynchronous subprocess on Windows

First of all, the overall problem I am solving is a bit more complicated than I am showing here, so please do not tell me 'use threads with blocking' as it would not solve my actual situation without a fair, FAIR bit of rewriting and refactoring.
I have several applications, which are not mine to modify, which take data from stdin and poop it out on stdout after doing their magic. My task is to chain several of these programs. The problem is, sometimes they choke, and as such I need to track their progress, which is output on STDERR.
pA = subprocess.Popen(CommandA, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# ... some more processes make up the chain, but that is irrelevant to the problem
pB = subprocess.Popen(CommandB, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=pA.stdout )
Now, reading directly through pA.stdout.readline() and pB.stdout.readline(), or the plain read() functions, is a blocking matter. Since different applications output at different paces and in different formats, blocking is not an option. (And as I wrote above, threading is not an option except as a last, last resort.) pA.communicate() is deadlock-safe, but since I need the information live, that is not an option either.
Thus google brought me to this asynchronous subprocess snippet on ActiveState.
All good at first, until I implement it. Comparing against the cmd.exe output of pA.exe | pB.exe (ignoring the fact that both write to the same window, making a mess), I see practically instantaneous updates there. However, when I implement the same thing using the above snippet and the read_some() function declared there, it takes over 10 seconds to report updates for a single pipe. But when it does, it has updates leading all the way up to 40% progress, for example.
Thus I do some more research, and see numerous subjects concerning PeekNamedPipe, anonymous handles, and returning 0 bytes available even though there is information available in the pipe. As the subject has proven quite a bit beyond my expertise to fix or code around, I come to Stack Overflow to look for guidance. :)
My platform is W7 64-bit with Python 2.6, the applications are 32-bit in case it matters, and compatibility with Unix is not a concern. I can even deal with a full ctypes or pywin32 solution that subverts subprocess entirely if it is the only solution, as long as I can read from every stderr pipe asynchronously with immediate performance and no deadlocks. :)
How bad is it to have to use threads? I encountered much the same problem and eventually decided to use threads to gather up all the data on a sub-process's stdout and stderr and put it onto a thread-safe queue which the main thread can read in a blocking fashion, without having to worry about the threading going on behind the scenes.
It's not clear what trouble you anticipate with a solution based on threads and blocking. Are you worried about having to make the rest of your code thread-safe? That shouldn't be an issue since the IO thread wouldn't need to interact with any of the rest of your code or data. If you have very restrictive memory requirements or your pipeline is particularly long then perhaps you may feel unhappy about spawning so many threads. I don't know enough about your situation so I couldn't say if this is likely to be a problem, but it seems to me that since you're already spawning off extra processes a few threads to interact with them should not be a terrible burden. In my situation I have not found these IO threads to be particularly problematic.
My thread function looked something like this:
import Queue        # the "queue" module on Python 3
import subprocess
import threading

def simple_io_thread(pipe, queue, tag, stop_event):
    """
    Read line-by-line from pipe, writing (tag, line) to the
    queue. Also checks for a stop_event to give up before
    the end of the stream.
    """
    while True:
        line = pipe.readline()
        while True:
            try:
                # Post to the queue with a large timeout in case the
                # queue is full.
                queue.put((tag, line), block=True, timeout=60)
                break
            except Queue.Full:
                if stop_event.isSet():
                    break
                continue
        if stop_event.isSet() or line == "":
            break
    pipe.close()
When I start up the subprocess I do this:
outputqueue = Queue.Queue(50)
stop_event = threading.Event()
process = subprocess.Popen(
    command,
    cwd=workingdir,
    env=env,
    shell=useshell,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
stderr_thread = threading.Thread(
    target=simple_io_thread,
    args=(process.stderr, outputqueue, "STDERR", stop_event)
)
stdout_thread = threading.Thread(
    target=simple_io_thread,
    args=(process.stdout, outputqueue, "STDOUT", stop_event)
)
stderr_thread.daemon = True
stdout_thread.daemon = True
stderr_thread.start()
stdout_thread.start()
Then when I want to read I can just block on outputqueue - each item read from it contains either a string to identify which pipe it came from and a line of text from that pipe. Very little code runs in a separate thread, and it only communicates with the main thread via a thread-safe queue (plus an event in case I need to give up early). Perhaps this approach would be useful and allow you to solve the problem with threads and blocking but without having to rewrite lots of code?
(My solution is made more complicated because I sometimes wish to terminate the subprocesses early, and want to be sure that the threads will all finish. If that's not an issue you can get rid of all the stop_event stuff and it becomes pretty succinct.)
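For completeness, the reading side might look something like the sketch below (the handle_* helpers are hypothetical placeholders for whatever you do with each line):
finished = 0
while finished < 2:                    # one end-of-stream marker per IO thread (STDOUT, STDERR)
    tag, line = outputqueue.get()      # blocks until one of the IO threads posts something
    if line == "":                     # readline() returns "" once that pipe is exhausted
        finished += 1
        continue
    if tag == "STDERR":
        handle_progress_update(line)   # hypothetical: parse the progress info
    else:
        handle_stdout_line(line)       # hypothetical: deal with a line of real output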
I assume that the process pipeline will not deadlock if it only uses stdin and stdout; and the problem you're trying to solve is how to make it not deadlock if they write to stderr (and have to deal with stderr possibly getting backed up).
If you're letting multiple processes write to stderr, you have to watch out for their output being intermingled. I'm guessing you have that sorted somehow; just putting it out there to be sure.
Be aware of the -u flag to python; it is helpful when testing to see if OS buffering is screwing you up.
If you want to emulate select() on file handles in win32, your only choice is to use PeekNamedPipe() and friends. I have a snippet of code that reads line-oriented output from multiple processes at once, which you may even be able to use directly -- try passing the list of proc.stderr handles to it and go.
import os
import time
import msvcrt
import win32pipe, pywintypes   # from the pywin32 package

class NoLineError(Exception): pass
class NoMoreLineError(Exception): pass

class LineReader(object):
    """Helper class for multi_readlines."""
    def __init__(self, f):
        self.fd = f.fileno()
        self.osf = msvcrt.get_osfhandle(self.fd)
        self.buf = ''

    def getline(self):
        """Returns a line of text, or raises NoLineError, or NoMoreLineError."""
        try:
            _, avail, _ = win32pipe.PeekNamedPipe(self.osf, 0)
            bClosed = False
        except pywintypes.error:
            avail = 0
            bClosed = True

        if avail:
            self.buf += os.read(self.fd, avail)

        idx = self.buf.find('\n')
        if idx >= 0:
            ret, self.buf = self.buf[:idx+1], self.buf[idx+1:]
            return ret
        elif bClosed:
            if self.buf:
                ret, self.buf = self.buf, ''  # empty the buffer so the next call raises NoMoreLineError
                return ret
            else:
                raise NoMoreLineError
        else:
            raise NoLineError


def multi_readlines(fs, timeout=0):
    """Read lines from |fs|, a list of file objects.
    The lines come out in arbitrary order, depending on which files
    have output available first."""
    if type(fs) not in (list, tuple):
        raise Exception("argument must be a list.")
    objs = [LineReader(f) for f in fs]
    for i, obj in enumerate(objs): obj._index = i
    while objs:
        yielded = 0
        for i, obj in enumerate(objs):
            try:
                yield (obj._index, obj.getline())
                yielded += 1
            except NoLineError:
                #time.sleep(timeout)
                pass
            except NoMoreLineError:
                del objs[i]
                break  # Because we mutated the array
        if not yielded:
            time.sleep(timeout)
I have never seen the "Peek returns 0 bytes even though data is available" issue myself. If this happens to others, I bet their libc is buffering their stdout/stderr before sending the data to the OS; there is nothing you can do about that from outside. You have to make the app use unbuffered output somehow (-u to python; win32/libc calls to modify the stderr file handle, ...)
The fact that you are seeing nothing, then a ton of updates, makes me think that your problem is buffering on the source end. win32 libc may buffer differently if it writes to a pipe rather than a console. Again, the best you can do from outside those programs is to aggressively drain their output.
What about using Twisted's FD's? http://twistedmatrix.com/documents/8.1.0/api/twisted.internet.fdesc.html
It's not asynchronous but it is non-blocking. For asynchronous stuff, can you port to using Twisted?
