I know there are possibilities:
sampleword[::-1]
or
reverse(string)
but I wanted to write it myself. I don't get why my code doesn't work. Could you help me?
h=input('word\n\n');
rev(h)
def rev(h):
    counter=len(h);
    reverse="";
    while counter>0:
        reverse+=h[counter];
        counter=counter-1;
    return reverse
    #print (reverse); ?
input();
There are a couple of issues with your code; I pointed them out in the comments of this adjusted script:
def rev(h):
    counter = len(h) - 1  # indexes of h go from 0 to len(h) - 1
    reverse = ""
    while counter >= 0:  # changed to >= 0
        reverse += h[counter]
        counter -= 1
    return reverse

h = input('word\n\n')
revers = rev(h)  # put rev(h) after the definition of rev!
print(revers)  # actually print the result
# deleted your last line
In addition, you don't need to terminate lines with ; in Python, and you can write counter=counter-1 as counter -= 1.
You have a few issues in your code:
You call rev() before it is defined
Indexing starts at 0, so you need >= 0 instead of > 0
You want counter to start at len(h) - 1 because, again, indexing starts at 0
You do not need semicolons at the end of your lines
Here is a much simpler way using recursion:
def reverse(text):
    if len(text) <= 1:
        return text
    return reverse(text[1:]) + text[0]
As you use Python more you will come across the concept of "Pythonic" code ... code that uses Python features well and shows good programming style.
So, you have a good answer above showing how your code can be corrected to work in Python. But the issue I want to point out is that it's C-style programming. Someone once said you can write C in any language (especially true in C++ and C#), but if you do, you are probably not using the features of the language well. In Python, writing this style of function and ignoring the available built-in methods, which are implemented in C and lightning fast, is not Pythonic.
I guarantee you that ''.join(reversed(string)) and string[::-1] are both faster than your rev() Python code. Python has a ton of built-in functionality and an extensive standard library you can import for extra, already-debugged functionality. See if you can find a good way to time the execution of the built-in reversal and your rev() function. There is a good way, in Python.
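The hint above refers to the standard library's timeit module; a minimal sketch of the comparison might look like this (the test string and repetition count are arbitrary):

```python
import timeit

def rev(h):
    """Hand-written reversal (the corrected loop version from above)."""
    counter = len(h) - 1
    reverse = ""
    while counter >= 0:
        reverse += h[counter]
        counter -= 1
    return reverse

s = "hello world" * 10

# Time 10,000 runs of each approach
print("rev():    ", timeit.timeit(lambda: rev(s), number=10_000))
print("[::-1]:   ", timeit.timeit(lambda: s[::-1], number=10_000))
print("reversed: ", timeit.timeit(lambda: "".join(reversed(s)), number=10_000))
```

The slicing and reversed() versions run in C, so expect them to win by a wide margin.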
Related
I'm trying to understand how the following PyTorch code works. To know how each function works, what it outputs, and each output variable's value and size, I'm using print() after each line of code.
s = pc()
for _ in trange(max_length):
    if self.onnx:
        outputs = torch.tensor(self.decoder_with_lm_head.run(None, {"input_ids": generated.cpu().numpy(),
                               "encoder_hidden_states": encoder_outputs_prompt})[0][0])
        print(f'decoder output -- {outputs}')
    else:
        outputs = self.decoder_with_lm_head(input_ids=generated, encoder_hidden_states=encoder_outputs_prompt)[0]
    next_token_logits = outputs[-1, :] / (temperature if temperature > 0 else 1.0)
    print(f'next_token_logits -- {next_token_logits}')
    if int(next_token_logits.argmax()) == 1:
        print(f'next token logits argmax -- {int(next_token_logits.argmax())}')
        break
    new_logits.append(next_token_logits)
    print(f'new_logits {i} -- {new_logits}')
    print(f'generated -- {generated}')
    print(f'generated view list -- {set(generated.view(-1).tolist())}')
    for _ in set(generated.view(-1).tolist()):
        next_token_logits[_] /= repetition_penalty
        print(f'next_token_logits[_] -- {next_token_logits[_]}')
    if temperature == 0:  # greedy sampling:
        next_token = torch.argmax(next_token_logits).unsqueeze(0)
        print(f'next_token -- {next_token}')
    else:
        filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
        next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1)
    generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)
    print(f'generated end -- {generated}')
    new_tokens = torch.cat((new_tokens, next_token), 0)
    print(f'new_tokens end -- {new_tokens}')
    i += 1
    print(f'--------------------------\n')
e = pc()
ap = e - s
print(ap)
print(timedelta(ap))
return self.tokenizer.decode(new_tokens), new_logits
My question: is there a more efficient way of tracking these values and their shapes, or are there any libraries that handle this task?
After doing a lot of searching I found a library that fits my requirements. The library is pysnooper; instead of using the print() function after each line of code, now I can just use pysnooper's decorator:

@pysnooper.snoop()
def greedy_search(input_text, num_beam, max_length, max_context_length=512):
    ...
and it will print all the variables' values with their corresponding execution times.
for more info refer to its GitHub page.
I feel like debuggers are a bit complicated, and I also want to run this in notebooks; this is the simplest option I found. Any suggestions are welcome.
The standard practice to monitor code is with a debugger. The documentation for the standard library debugger, "pdb", can be found here: https://docs.python.org/3/library/pdb.html
There are other options which might suit your needs a bit better; pdb is general purpose, so while learning it isn't a bad idea, it might be more trouble than your current methods.
Thonny is a great tool for easily seeing what Python code is doing, as well as being a debugger. It will step through each line of code and show you each variable's value and what operations they undergo or functions they feed into.
Apart from those two options, the IDE PyCharm has a built-in visual debugger, and so does Visual Studio.
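For a sense of what this kind of line-by-line tracing does under the hood, here is a minimal sketch built on the standard library's sys.settrace hook (illustrative only; it is not how pysnooper or the debuggers above are actually implemented):

```python
import sys

def snoop(func):
    """Print the line number and local variables for each line func executes."""
    def local_trace(frame, event, arg):
        if event == "line":
            print(f"line {frame.f_lineno}: {frame.f_locals}")
        return local_trace

    def global_trace(frame, event, arg):
        # Only trace frames belonging to the decorated function
        if frame.f_code is func.__code__:
            return local_trace
        return None

    def wrapper(*args, **kwargs):
        old = sys.gettrace()
        sys.settrace(global_trace)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(old)

    return wrapper

@snoop
def add(a, b):
    total = a + b
    return total

add(1, 2)
```

Each executed line of add() is printed together with the current locals, which is essentially the information the print() calls in the question were collecting by hand.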
When I use the in operator to compare strings in my program it works great, but if I call it in a module I created it doesn't seem to work. I know I am making some error but am not certain what it is; can anyone help?
Here is my code:
Module:
def checkRoots(wordChecked, rootList):
    numRoots = len(rootList);
    numTypeRoots = createList(numRoots);
    for z in range(0, numRoots):
        root = rootList[z];
        rootThere = root in word;
        if rootThere == True:
            numTypeRoots[z] = numTypeRoots[z] + 1;
    return numTypeRoots;
code works though when not in module as shown here:
for y in range(0, numRoots):
    root = roots[y];
    rootThere = root in word;
    if rootThere == True:
        numTypeRoots[y] = numTypeRoots[y] + 1;
Basic program takes a list of words in from a file and then looks to see if a particular root is in the word.
Thanks
The function is doing root in word, but I think what you actually want to do is root in wordChecked.
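With that one change the module version should behave like your inline loop. A sketch of the corrected function (I'm assuming createList(n) just builds a zero-filled list of length n, since it isn't shown):

```python
def checkRoots(wordChecked, rootList):
    numRoots = len(rootList)
    numTypeRoots = [0] * numRoots  # stand-in for createList(numRoots)
    for z in range(numRoots):
        root = rootList[z]
        if root in wordChecked:  # use the parameter, not a global 'word'
            numTypeRoots[z] = numTypeRoots[z] + 1
    return numTypeRoots
```

For example, checkRoots("unhappiness", ["un", "ness", "pre"]) returns [1, 1, 0].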
Global variables are the devil; best to avoid them if you can help it.
Suggestion: if writing Python code that other people might look at (hint: probably any/all code), it's a good idea to read through and follow PEP 8.
Sometimes I'm writing small utilities functions and pack them as python package.
How small? 30 - 60 lines of python.
And my question is: do you think writing the tests inside the actual code is bad practice? An abuse?
I can see great benefits, like usage examples inside the code itself without jumping between files (again, for really small projects).
Example:
#!/usr/bin/env python

# Actual code
def increment(number, by=1):
    return number + by

# Tests
def test_increment_positive():
    assert increment(1) == 2

def test_increment_negative():
    assert increment(-5) == -4

def test_increment_zero():
    assert increment(0) == 1
The general idea is taken from the monitoring framework Riemann, which I use; in Riemann you write your tests file along with your code (link).
You can write doctests inside your docstrings to indicate how your function should be used:
def increment(number, by=1):
    """Increments the given number by some other number

    >>> increment(3)
    4
    >>> increment(5, 3)
    8
    """
    return number + by
From the documentation:
To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented.
To perform regression testing by verifying that interactive examples from a test file or a test object work as expected.
To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.
Edit: Really appreciate help in finding the bug - but since it might prove hard to find/reproduce, any general debugging help would be greatly appreciated too! Help me help myself! =)
Edit 2: Narrowing it down, commenting out code.
Edit 3: Seems lxml might not be the culprit, thanks! The full script is here. I need to go over it looking for references. What do they look like?
Edit 4: Actually, the script stops (goes to 100% CPU) in this, the parse_og part of it. So edit 3 is false - it must be lxml somehow.
Edit 5 MAJOR EDIT: As suggested by David Robinson and TankorSmash below, I've found a type of data content that will send lxml.etree.HTML( data ) into a wild loop. (I carelessly disregarded it, but find my sins redeemed as I've paid a price to the tune of an extra two days of debugging! ;) A working crashing script is here. (Also opened a new question.)
Edit 6: Turns out this is a bug in lxml version 2.7.8 and below (at least). Updated to lxml 2.9.0, and the bug is gone. Thanks also to the fine folks over at this follow-up question.
I don't know how to debug this weird problem I'm having.
The below code runs fine for about five minutes, then the RAM suddenly fills up completely (from 200MB to 1700MB during the 100% CPU period - then, when memory is full, it goes into a blue wait state).
It's due to the code below, specifically the first two lines. That's for sure. But what is going on? What could possibly explain this behaviour?
def parse_og(self, data):
    """ lxml parsing to the bone! """
    try:
        tree = etree.HTML( data )  # << break occurs on this line >>
        m = tree.xpath("//meta[@property]")
        #for i in m:
        #    y = i.attrib['property']
        #    x = i.attrib['content']
        #    self.rj[y] = x  # commented out in this example because code fails anyway
        tree = ''
        m = ''
        x = ''
        y = ''
        i = ''
        del tree
        del m
        del x
        del y
        del i
    except Exception:
        print 'lxml error: ', sys.exc_info()[1:3]
        print len(data)
        pass
You can try low-level Python debugging with GDB. There is probably a bug in the Python interpreter or in the lxml library, and it is hard to find it without extra tools.
You can interrupt your script running under gdb when CPU usage goes to 100% and look at the stack trace. It will probably help you understand what's going on inside the script.
It must be due to some references which keep the documents alive. One must always be careful with string results from XPath evaluation. I see you have assigned '' to tree and m, but not to y, x and i.
Can you also assign None to y, x and i?
Tools are also helpful when trying to track down memory problems. I've found guppy to be a very useful Python memory profiling and exploration tool.
It is not the easiest to get started with due to a lack of good tutorials / documentation, but once you get to grips with it you will find it very useful. Features I make use of:
Remote memory profiling (via sockets)
Basic GUI for charting usage, optionally showing live data
Powerful, and consistent, interfaces for exploring data usage in a Python shell
I am writing a program that tries to compare two methods. I would like to generate control flow graphs (CFGs) for all matched methods and use a topological sort to compare the two graphs.
RPython, the translation toolchain behind PyPy, offers a way of grabbing the flow graph (in the pypy/rpython/flowspace directory of the PyPy project) for type inference.
This works quite well in most cases but generators are not supported. The result will be in SSA form, which might be good or bad, depending on what you want.
There's a Python package called staticfg which does exactly this -- generation of control flow graphs from a piece of Python code.
For instance, putting the first quick sort Python snippet from Rosetta Code in qsort.py, the following code generates its control flow graph.
from staticfg import CFGBuilder
cfg = CFGBuilder().build_from_file('quick sort', 'qsort.py')
cfg.build_visual('qsort', 'png')
Note that it doesn't seem to understand more advanced control flow like comprehensions.
I found py2cfg has a better representation of the control flow graph (CFG) than the one from staticfg.
https://gitlab.com/classroomcode/py2cfg
https://pypi.org/project/py2cfg/
Let's take this function in Python:
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_gen = fib()
for _ in range(10):
    next(fib_gen)
Image from StaticCFG:
Image from PY2CFG:
http://pycallgraph.slowchop.com/ looks like what you need.
Python's trace module also has a --trackcalls option that can be an entry point for the call-tracing machinery in the stdlib.
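From the command line that's `python -m trace --trackcalls yourscript.py`; the same machinery is available programmatically. A small sketch with a toy call chain (helper/main are illustrative names):

```python
import trace

def helper():
    return 21

def main():
    return helper() * 2

# countcallers=True records (caller, callee) pairs instead of line counts
tracer = trace.Trace(count=False, trace=False, countcallers=True)
value = tracer.runfunc(main)

# Each key is a pair of (filename, modulename, funcname) tuples
for caller, callee in tracer.results().callers:
    print(f"{caller[2]} -> {callee[2]}")
```

The recorded caller/callee pairs are the raw edges of a call graph, which is a useful starting point before reaching for heavier CFG tooling.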