Does Nim compile in docstrings, so we can echo them at runtime?
Something like:
>>> echo help(echo)
"Writes and flushes the parameters to the standard output.[...]"
edited: there is a way to implement a help functionality
It turns out that using macros.getImpl it is possible to implement functionality similar to the help you describe above (playground):
import macros

macro implToStr*(ident: typed): string =
  toStrLit(getImpl(ident))

template help*(ident: typed) =
  echo implToStr(ident)

help(echo)
output:
proc echo(x: varargs[typed, `$`]) {.magic: "Echo", tags: [WriteIOEffect],
gcsafe, locks: 0, sideEffect.}
## Writes and flushes the parameters to the standard output.
##
## Special built-in that takes a variable number of arguments. Each argument
## is converted to a string via ``$``, so it works for user-defined
## types that have an overloaded ``$`` operator.
## It is roughly equivalent to ``writeLine(stdout, x); flushFile(stdout)``, but
## available for the JavaScript target too.
##
## Unlike other IO operations this is guaranteed to be thread-safe as
## ``echo`` is very often used for debugging convenience. If you want to use
## ``echo`` inside a `proc without side effects
## <manual.html#pragmas-nosideeffect-pragma>`_ you can use `debugEcho
## <#debugEcho,varargs[typed,]>`_ instead.
The output is a bit wonky (a very large indent after the first line of the docstring) and the approach has some limitations: it will not work on some symbols (e.g. it works on toStrLit but not on getImpl), and it will not work on overloads. Those limitations might be lifted in the future, either with a better implementation or with fixes to the compiler/stdlib.
previous answer
No, Nim does not compile in docstrings (edit: docstrings are not available at runtime, but they are part of the AST and can be accessed at compile time, see above). With a supported editor you can get help by hovering over identifiers or by going to their definition in the source code.
For example in VS Code (with Nim extension), hovering over echo will give you:
And pressing F12 (on Windows) will go to the definition of echo in system.nim.
Another useful resource for standard library identifiers is the searchable index.
Related
I am faced with the task of writing a simple debug helper for Qt Creator 4.13.1 / Qt 5.12.5 / MSVC 2017 compiler for the C++ JSON implementation nlohmann::basic_json (https://github.com/nlohmann/json).
An object of nlohmann::basic_json can contain the contents of a single JSON data type (null, boolean, number, string, array, object) at a time.
There's a dump() member function which can be used to output the current content formatted as a std::string regardless of the current data type. I always want to use this function.
What I've done so far:
I've looked at https://doc.qt.io/qtcreator/creator-debugging-helpers.html, as well as at the given example files (qttypes.py, stdtypes.py...).
I made a copy of the file personaltypes.py and told Qt Creator about its existence at
Tools / Options / Debugger / Locals & Expressions / Extra Debugging Helpers
The following code works and displays a "Hello World" in the debugger window for nlohmann::basic_json objects.
import dumper

def qdump__nlohmann__basic_json(d, value):
    d.putNumChild(0)
    d.putValue("Hello World")
Unfortunately, despite the documentation, I have no idea how to proceed from here.
I still have absolutely no clue how to correctly call basic_json's dump() function with the dumper from Python (e.g. with d.putCallItem?).
I also have no starting point for how to format the returned std::string so that it is finally displayed in the debugger window.
I imagined something like this, but it doesn't work.
d.putValue("data")
d.putNumChild(1)
d.putCallItem('dump', '#std::string', value, 'dump')
I hope someone can give me a little clue so that I can continue thinking in the right direction.
For example, can I call qdump__std__string from stdtypes.py myself to interpret the std::string?
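In case it is useful as a starting point, the closest I can construct from reading qttypes.py is below (untested; the Children context manager, isExpanded(), and the plain 'std::string' return type are guesses on my part, not a confirmed recipe):

import dumper

def qdump__nlohmann__basic_json(d, value):
    # Show a placeholder value and mark the node as expandable.
    d.putValue("basic_json")
    d.putNumChild(1)
    if d.isExpanded():
        with dumper.Children(d):
            # Ask the debugger to call value.dump() and show the
            # result as a child item named 'dump'.
            d.putCallItem('dump', 'std::string', value, 'dump')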
I am trying to use the built-in R progress bar (txtProgressBar) with %%R magic in Jupyter. While it produces a nice animation when executed in the R console or RStudio, it does not produce the desired output in Jupyter (notebook or lab) with the rpy2 extension; instead, it prints all the steps at once after finishing (which makes the progress bar useless). Two questions:
How could I make it work?
If it is not possible yet, how do I approach implementing this functionality on the rpy2 side (I already know how to make the interactive output/widgets on the Jupyter/IPython side)?
Here is a simple snippet of a progress bar from rfunction.com:
%%R
SEQ <- seq(1,100)
pb <- txtProgressBar(1, 100, style=3)
TIME <- Sys.time()
for(i in SEQ){
    Sys.sleep(0.02)
    setTxtProgressBar(pb, i)
}
For the folks new to rpy2: It needs to be installed with pip install rpy2 and the magic needs to be loaded in Jupyter with %load_ext rpy2.ipython.
Edit: The workaround I use for now is to manually invoke the code via robjects.r:
from rpy2.robjects import r
r("""
SEQ <- seq(1,100)
pb <- txtProgressBar(1, 100, style=3)
TIME <- Sys.time()
for(i in SEQ){
    Sys.sleep(0.02)
    setTxtProgressBar(pb, i)
}
""")
however this is not ideal - I would prefer to keep all the benefits of the rpy2's Rmagic.
There should be a way to achieve this, as the R magic is calling robjects.r() (as you are in your workaround).
In short, the following is happening when you submit an %%R jupyter cell for evaluation.
1. Parameters on the %%R line are evaluated, and any setup prior to the evaluation of the R code is done (e.g., using a local converter, converting input parameters, etc.).
2. The R code in the rest of the %%R cell is evaluated as a string of code in the R "Global Environment".
3. Exit setup is run and results are returned.
The second step is essentially a call to the R C API which, because of the GIL, is the only activity happening in that process at the time. However, rpy2 defines default callbacks that reroute R's printing to the terminal/console through Python's own print(), which is why you see the output printed as the code runs in your call to robjects.r().
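To make that rerouting concrete, the default print callback can be replaced from Python (a minimal sketch, assuming rpy2 3.x where the callbacks live in rpy2.rinterface_lib.callbacks; note it does not bypass the caching issue described next):

import sys
import rpy2.rinterface_lib.callbacks

def consolewrite_immediate(s):
    # R calls this for each chunk it writes to the console; flushing
    # right away lets a progress bar animate instead of appearing at the end.
    sys.stdout.write(s)
    sys.stdout.flush()

rpy2.rinterface_lib.callbacks.consolewrite_print = consolewrite_immediate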
I am seeing that the R magic is caching the R output, and while there is an attribute cache_display_data that should control this, it is not used. This is a bug, both for the reason you are asking about on Stack Overflow and because an R code block that prints a lot would use more memory than needed (and could even exhaust all RAM). I do not know whether it has always been present or whether it was introduced during a code refactoring; it is now tracked here: https://bitbucket.org/rpy2/rpy2/issues/543
Edit: The fix is now in the repository, and will be part of rpy2-3.0.3 (likely released today).
I'm testing a very simple NASM dll (64-bit) called from ctypes. I pass a single int_64 and the function is expected to return a different int_64.
I get the same error every time:
OSError: exception: access violation writing 0x000000000000092A
where the hex value corresponds to the value I am returning (in this case, 2346). If I change that value, the hex value changes to match, so the problem is in the value I am returning in rax. I get the same error if I just mov rax, 2346.
I have tested this repeatedly, trying different things, and I've done a lot of research, but this seemingly simple problem is still not solved.
Here is the Python code:
import ctypes

def lcm_ctypes():
    input_value = ctypes.c_int64(235)
    hDLL = ctypes.WinDLL(r"C:/NASM_Test_Projects/While_Loop_01/While_loops-01.dll")
    CallTest = hDLL.lcm
    CallTest.argtypes = [ctypes.c_int64]
    CallTest.restype = ctypes.c_int64
    retvar = CallTest(input_value)
Here is the NASM code:
[BITS 64]
export lcm

section .data
return_val: dq 2346

section .text
finit
lcm:
    push rdi
    push rbp
    mov rax, qword [return_val]
    pop rbp
    pop rdi
Thanks for any information to help solve this problem.
Your function correctly loads 2346 (0x92a) into RAX. Then execution continues into some following bytes because you didn't jmp or ret.
In this case, we can deduce that the following bytes are probably 00 00, which decodes as add byte [rax], al, hence the access violation writing 0x000000000000092A error message. (i.e. it's not a coincidence that the address it's complaining about is your constant).
As Michael Petch said, using a debugger would have found the problem.
You also don't need to save/restore rdi because you're not touching it. The Windows x86-64 calling convention has many call-clobbered registers, so for a least-common-multiple function you shouldn't need to save/restore anything, just use rax, rcx, rdx, r8, r9, and whatever else Windows lets you clobber (I forget, check the calling convention docs in the x86 tag wiki, especially Agner Fog's guide).
You should definitely use default rel at the top of your file, so the [return_val] load will use a RIP-relative addressing mode instead of absolute.
Also, finit never executes because it's before your function label. But you don't need it either. You had the same finit in your previous asm question: Passing arrays to NASM DLL, pointer value gets reset to zero, where it was also not needed and not executed. The calling convention requires that on function entry (and return), the x87 FPU is already in the state that finit puts it in, more or less. So you don't need it before executing x87 instructions like fmulp and fidivr. But you weren't doing that anyway, you were using SSE FP instructions (which is recommended, especially in 64-bit mode), which don't touch the x87 state at all.
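Putting those fixes together, a corrected version of the function could look like this (a sketch that keeps your export/section layout, adds default rel, drops the unneeded pushes and finit, and actually returns):

default rel
[BITS 64]
export lcm

section .data
return_val: dq 2346

section .text
lcm:
    mov rax, qword [return_val]   ; the 64-bit return value belongs in RAX
    ret                           ; return instead of falling off the end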
Go read a good tutorial and some docs (some links in the x86 tag wiki) so you understand what's going on well enough to debug a problem like this on your own, or you will have a bad time writing anything more complicated. Guessing what might work doesn't work very well for asm.
From a deleted non-answer: https://www.cs.uaf.edu/2017/fall/cs301/reference/nasm_vs/ shows how to set up Visual Studio to build an executable out of C++ and NASM sources, so you can debug it.
I'm using the ANTLRv4 Python3 grammar from here:
https://github.com/antlr/grammars-v4/blob/master/python3/Python3.g4
and running:
java -jar antlr-4.6-complete.jar -Dlanguage=Python2 Python3.g4
to generate Python3Lexer.py + some other files.
However, Python3Lexer.py contains code which is not Python. For example:
def __init__(self, input=None):
    super(Python3Lexer, self).__init__(input)
    self.checkVersion("4.6")
    self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
    self._actions = None
    self._predicates = None

// A queue where extra tokens are pushed on (see the NEWLINE lexer rule).
private java.util.LinkedList<Token> tokens = new java.util.LinkedList<>();

// The stack that keeps track of the indentation level.
private java.util.Stack<Integer> indents = new java.util.Stack<>();
It's unusable because of this. Does anyone know why this is happening and how I can fix it? Thanks!
This grammar is full of action code written in Java to deal with the peculiarities of Python (the indentation handling, in particular). You have to port that code to Python manually to make the grammar usable for you; a rough illustration of such a port follows. This is why grammar writers are encouraged to move action code out into base classes or listener code.
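As a sketch only, the two Java members shown above could be ported into the generated Python lexer like this (the deque and the plain list are my stand-ins for Java's LinkedList and Stack, and the action code in rules like NEWLINE needs the same treatment):

from collections import deque
from antlr4 import Lexer

class Python3Lexer(Lexer):
    def __init__(self, input=None):
        super(Python3Lexer, self).__init__(input)
        # A queue where extra tokens are pushed on (see the NEWLINE lexer rule).
        self.tokens = deque()
        # The stack that keeps track of the indentation level.
        self.indents = []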
I am writing a program that tries to compare two methods. I would like to generate control flow graphs (CFGs) for all matched methods and use a topological sort to compare the two graphs.
RPython, the translation toolchain behind PyPy, offers a way of grabbing the flow graph (in the pypy/rpython/flowspace directory of the PyPy project) for type inference.
This works quite well in most cases but generators are not supported. The result will be in SSA form, which might be good or bad, depending on what you want.
There's a Python package called staticfg which does exactly this: generation of control flow graphs from a piece of Python code.
For instance, putting the first quick sort Python snippet from Rosetta Code in qsort.py, the following code generates its control flow graph.
from staticfg import CFGBuilder
cfg = CFGBuilder().build_from_file('quick sort', 'qsort.py')
cfg.build_visual('qsort', 'png')
Note that it doesn't seem to understand more advanced control flow like comprehensions.
I found that py2cfg has a better representation of the control flow graph (CFG) than the one from staticfg.
https://gitlab.com/classroomcode/py2cfg
https://pypi.org/project/py2cfg/
Let's take this function in Python:
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_gen = fib()
for _ in range(10):
    next(fib_gen)
Image from StaticCFG:
Image from PY2CFG:
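For reference, the py2cfg graph can be produced with code like the staticfg example above (an assumption on my part that py2cfg kept staticfg's CFGBuilder interface; the snippet is saved as fib.py):

from py2cfg import CFGBuilder

# Build the CFG for fib.py and render it to fib_cfg.png.
cfg = CFGBuilder().build_from_file('fib', 'fib.py')
cfg.build_visual('fib_cfg', 'png')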
http://pycallgraph.slowchop.com/ looks like what you need.
The Python trace module also has an option, --trackcalls, that can be an entry point for the call-tracing machinery in the stdlib.
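For example, the same call-tracking can be driven from code (a minimal sketch; main() is a hypothetical stand-in for the code you want to analyze):

import trace

def main():
    # Hypothetical entry point; replace with your own code.
    print(sum(range(5)))

# countcallers=True records caller/callee pairs, which is what
# --trackcalls enables on the command line.
tracer = trace.Trace(count=False, trace=False, countcallers=True)
tracer.run('main()')
tracer.results().write_results()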