Why is collections.Counter so slow? - python

I'm trying to solve a basic Rosalind problem: counting the nucleotides in a given sequence and returning the results in a list. For those not familiar with bioinformatics, this just means counting the number of occurrences of four different characters ('A', 'C', 'G', 'T') inside a string.
I expected collections.Counter to be the fastest method (first because it claims to be high-performance, and second because I saw a lot of people using it for this specific problem).
But to my surprise this method is the slowest!
I compared three different methods, using timeit and running two types of experiments:
Running a long sequence a few times
Running a short sequence a lot of times.
Here is my code:
import timeit
from collections import Counter

# Method 1: using count
def method1(seq):
    return [seq.count('A'), seq.count('C'), seq.count('G'), seq.count('T')]

# Method 2: using a loop
def method2(seq):
    r = [0, 0, 0, 0]
    for i in seq:
        if i == 'A':
            r[0] += 1
        elif i == 'C':
            r[1] += 1
        elif i == 'G':
            r[2] += 1
        else:
            r[3] += 1
    return r

# Method 3: using collections.Counter
def method3(seq):
    counter = Counter(seq)
    return [counter['A'], counter['C'], counter['G'], counter['T']]

if __name__ == '__main__':
    # Long dummy sequence
    long_seq = 'ACAGCATGCA' * 10000000
    # Short dummy sequence
    short_seq = 'ACAGCATGCA' * 1000

    # Test 1: Running a long sequence once
    print timeit.timeit("method1(long_seq)", setup='from __main__ import method1, long_seq', number=1)
    print timeit.timeit("method2(long_seq)", setup='from __main__ import method2, long_seq', number=1)
    print timeit.timeit("method3(long_seq)", setup='from __main__ import method3, long_seq', number=1)

    # Test 2: Running a short sequence lots of times
    print timeit.timeit("method1(short_seq)", setup='from __main__ import method1, short_seq', number=10000)
    print timeit.timeit("method2(short_seq)", setup='from __main__ import method2, short_seq', number=10000)
    print timeit.timeit("method3(short_seq)", setup='from __main__ import method3, short_seq', number=10000)
Results:
Test1:
Method1: 0.224009990692
Method2: 13.7929501534
Method3: 18.9483819008
Test2:
Method1: 0.224207878113
Method2: 13.8520510197
Method3: 18.9861831665
Method 1 is way faster than methods 2 and 3 in both experiments!
So I have a set of questions:
Am I doing something wrong, or is Counter indeed slower than the other two approaches? Could someone run the same code and share the results?
In case my results are correct (and maybe this should be another question): is there a faster method to solve this problem than method 1?
If count is faster, then what's the deal with collections.Counter?

It's not that collections.Counter is slow; it's actually quite fast. But it's a general-purpose tool, and counting characters is just one of many applications.
On the other hand, str.count only counts characters in strings, and it's heavily optimized for its one and only task.
That means str.count can work on the underlying C char array and avoid creating (or looking up existing) length-1 Python strings during the iteration, which is what for loops and Counter have to do.
To add some more context to this statement: a string is stored as a C array wrapped in a Python object. str.count knows that the string is a contiguous array, so it converts the character you want to count to a C "character", iterates over the array in native C code, checks for equality, and finally wraps and returns the number of occurrences found.
On the other hand, for loops and Counter use the Python iteration protocol. Each character of your string is wrapped as a Python object, which is then (hashed and) compared within Python.
So the slowdown is because:
Each character has to be converted to a Python object (this is the major reason for the performance loss)
The loop is done in Python (not applicable to Counter in Python 3.x, where the counting loop was rewritten in C)
Each comparison has to be done in Python (instead of just comparing numbers in C - characters are represented by numbers)
The counter needs to hash the values and your loop needs to index your list.
Note that the reason for the slowdown is similar to the one in the question Why are Python's arrays slow?
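To make the first point concrete, here is a minimal timing sketch (my addition, not from the original answer; absolute numbers will vary) comparing a pure-Python per-character scan with str.count on the same string:

import timeit

s = 'ACAGCATGCA' * 100000

def python_level_count():
    # every step yields a new length-1 str object plus a Python-level comparison
    return sum(c == 'A' for c in s)

def c_level_count():
    # the same work done entirely inside str.count's C loop
    return s.count('A')

print(timeit.timeit(python_level_count, number=10))
print(timeit.timeit(c_level_count, number=10))   # expected to be far smaller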
I did some additional benchmarks to find out at which point collections.Counter is to be preferred over str.count. To this end I created random strings containing differing numbers of unique characters and plotted the performance:
from collections import Counter
import random
import string

characters = string.printable  # 100 different printable characters

results_counter = []
results_count = []
nchars = []
for i in range(1, 110, 10):
    chars = characters[:i]
    string = ''.join(random.choice(chars) for _ in range(10000))
    res1 = %timeit -o Counter(string)
    res2 = %timeit -o {char: string.count(char) for char in chars}
    nchars.append(len(chars))
    results_counter.append(res1)
    results_count.append(res2)
and the result was plotted using matplotlib:
import matplotlib.pyplot as plt
plt.figure()
plt.plot(nchars, [i.best * 1000 for i in results_counter], label="Counter", c='black')
plt.plot(nchars, [i.best * 1000 for i in results_count], label="str.count", c='red')
plt.xlabel('number of different characters')
plt.ylabel('time to count the chars in a string of length 10000 [ms]')
plt.legend()
Results for Python 3.5
The results for Python 3.6 are very similar so I didn't list them explicitly.
So if you want to count 80 or more different characters, Counter becomes comparable or faster, because it traverses the string only once rather than multiple times like str.count. This break-even point depends only weakly on the length of the string (testing showed only a very small difference, roughly +/-2%).
Results for Python 2.7
In Python 2.7, collections.Counter was implemented in Python (instead of C) and is much slower. The break-even point between str.count and Counter can only be estimated by extrapolation, because even with 100 different characters str.count is still about 6 times faster.

The time difference here is pretty simple to explain. It all comes down to what runs within Python and what runs as native code. The latter will always be faster since it does not come with lots of evaluation overhead.
Now that’s already the reason why calling str.count() four times is faster than anything else. Although this iterates the string four times, these loops run in native code. str.count is implemented in C, so this has very little overhead, making this very fast. It’s really difficult to beat this, especially when the task is that simple (looking only for simple character equality).
Your second method, which collects the counts in a list, is actually a less performant version of the following:
def method4(seq):
    a, c, g, t = 0, 0, 0, 0
    for i in seq:
        if i == 'A':
            a += 1
        elif i == 'C':
            c += 1
        elif i == 'G':
            g += 1
        else:
            t += 1
    return [a, c, g, t]
Here, all four values are individual variables, so updating them is very fast. This is actually a bit faster than mutating list items.
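As a rough illustration of that last claim (my addition, not from the answer), a micro-benchmark along these lines shows the gap between bumping a local name and bumping a list element:

import timeit

def bump_local(n=10**6):
    a = 0
    for _ in range(n):
        a += 1          # plain local-variable increment
    return a

def bump_list_item(n=10**6):
    r = [0]
    for _ in range(n):
        r[0] += 1       # extra indexing work on every iteration
    return r[0]

print(timeit.timeit(bump_local, number=10))
print(timeit.timeit(bump_list_item, number=10))   # typically a bit slower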
The overall performance “problem” here, however, is that this iterates the string within Python. It creates a string iterator and then produces every character individually as an actual string object. That's a lot of overhead, and the main reason why every solution that iterates the string in Python will be slower.
collections.Counter has the same problem. It's implemented in Python, so even though it's very efficient and flexible, it suffers from the same issue: it never gets anywhere near native speed.

As others have already noted, you are comparing fairly specific code against fairly general code.
Consider that something as trivial as spelling out the loop over the characters you are interested in already costs you a factor of 2:
def char_counter(text, chars='ACGT'):
    return [text.count(char) for char in chars]
%timeit method1(short_seq)
# 100000 loops, best of 3: 18.8 µs per loop
%timeit char_counter(short_seq)
# 10000 loops, best of 3: 40.8 µs per loop
%timeit method1(long_seq)
# 10 loops, best of 3: 172 ms per loop
%timeit char_counter(long_seq)
# 1 loop, best of 3: 374 ms per loop
Your method1() is the fastest but not the most efficient, as the input is traversed in its entirety once for each character you are inspecting; it does not exploit the fact that a single pass could classify each character into one of the character classes as soon as it is seen.
Unfortunately, Python does not offer a fast method to take advantage of the specific conditions of your problem.
However, you could use Cython for this, and you would then be able to outperform your method1():
%%cython -c-O3 -c-march=native -a
#cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True

import numpy as np


cdef void _count_acgt(
        const unsigned char[::1] text,
        unsigned long len_text,
        unsigned long[::1] counts):
    for i in range(len_text):
        if text[i] == b'A':
            counts[0] += 1
        elif text[i] == b'C':
            counts[1] += 1
        elif text[i] == b'G':
            counts[2] += 1
        else:
            counts[3] += 1


cpdef ascii_count_acgt(text):
    counts = np.zeros(4, dtype=np.uint64)
    bin_text = text.encode()
    _count_acgt(bin_text, len(bin_text), counts)
    return counts
%timeit ascii_count_acgt(short_seq)
# 100000 loops, best of 3: 12.6 µs per loop
%timeit ascii_count_acgt(long_seq)
# 10 loops, best of 3: 140 ms per loop

Related

Efficiently check if an element occurs at least n times in a list

How to best write a Python function (check_list) to efficiently test if an element (x) occurs at least n times in a list (l)?
My first thought was:
def check_list(l, x, n):
    return l.count(x) >= n
But this doesn't short-circuit once x has been found n times and always traverses the entire list, i.e. it is always O(len(l)).
A simple approach that does short-circuit would be:
def check_list(l, x, n):
    count = 0
    for item in l:
        if item == x:
            count += 1
            if count == n:
                return True
    return False
I also have a more compact short-circuiting solution with a generator:
def check_list(l, x, n):
    gen = (1 for item in l if item == x)
    return all(next(gen, 0) for i in range(n))
Are there other good solutions? What is the most efficient approach?
Thank you
Instead of incurring the extra overhead of setting up a range object and using all, which has to test the truthiness of each item, you could use itertools.islice to advance the generator n steps ahead, and then return the next item in the slice if it exists, or a default False if not:
from itertools import islice

def check_list(lst, x, n):
    gen = (True for i in lst if i == x)
    return next(islice(gen, n - 1, None), False)
Note that like list.count, itertools.islice also runs at C speed. And this has the extra advantage of handling iterables that are not lists.
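For example (my addition), the same function works unchanged on a generator:

from itertools import islice

def check_list(lst, x, n):
    gen = (True for i in lst if i == x)
    return next(islice(gen, n - 1, None), False)

stream = (i % 7 for i in range(100))   # a lazy, non-list iterable
print(check_list(stream, 3, 5))        # True: the value 3 appears at least 5 times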
Some timing:
In [1]: from itertools import islice
In [2]: from random import randrange
In [3]: lst = [randrange(1,10) for i in range(100000)]
In [5]: %%timeit # using list.index
....: check_list(lst, 5, 1000)
....:
1000 loops, best of 3: 736 µs per loop
In [7]: %%timeit # islice
....: check_list(lst, 5, 1000)
....:
1000 loops, best of 3: 662 µs per loop
In [9]: %%timeit # using list.index
....: check_list(lst, 5, 10000)
....:
100 loops, best of 3: 7.6 ms per loop
In [11]: %%timeit # islice
....: check_list(lst, 5, 10000)
....:
100 loops, best of 3: 6.7 ms per loop
You could use the second argument of index to find the subsequent indices of occurrences:
def check_list(l, x, n):
    i = 0
    try:
        for _ in range(n):
            i = l.index(x, i) + 1
        return True
    except ValueError:
        return False
print( check_list([1,3,2,3,4,0,8,3,7,3,1,1,0], 3, 4) )
About index arguments
The official documentation does not mention the method's second or third argument in its Python Tutorial, section 5, but you can find them in the more comprehensive Python Standard Library reference, section 4.6:
s.index(x[, i[, j]]) index of the first occurrence of x in s (at or after index i and before index j) (8)
(8) index raises ValueError when x is not found in s. When supported, the additional arguments to the index method allow efficient searching of subsections of the sequence. Passing the extra arguments is roughly equivalent to using s[i:j].index(x), only without copying any data and with the returned index being relative to the start of the sequence rather than the start of the slice.
Performance Comparison
In comparing this list.index method with the islice(gen) method, the most important factor is the distance between the occurrences to be found. Once that distance is on average 13 or more, list.index performs better. For shorter distances, the fastest method also depends on the number of occurrences to find: the more occurrences there are to find, the sooner the islice(gen) method outperforms list.index in terms of average distance; this gain fades out when the number of occurrences becomes really large.
The following graph draws the (approximate) border line, at which both methods perform equally well (the X-axis is logarithmic):
Ultimately, short-circuiting is the way to go if you expect a significant number of cases to lead to early termination. Let's explore the possibilities:
Take the case of the list.index method versus the list.count method (these were the two fastest according to my testing, although ymmv).
With list.index, if the list contains n or more occurrences of x, the method is called n times. While execution is inside the list.index method itself it is very fast, allowing much faster iteration than the custom generator. If the occurrences of x are far enough apart, a large speedup is seen from the lower-level execution of index. If instances of x are close together (a shorter list / more common x's), much more of the time is spent executing the slower Python code that mediates the rest of the function (looping over n and incrementing i).
The benefit of list.count is that it does all of the heavy lifting outside of slow Python execution. It is a much easier function to analyse, as it is simply O(n) in the length of the list. Because it spends almost no time in the Python interpreter, it is almost guaranteed to be faster for short lists. A rough benchmark harness comparing the two strategies is sketched after the summary below.
Summary of selection criteria:
shorter lists favor list.count
lists of any length that don't have a high probability to short circuit favor list.count
lists that are long and likely to short circuit favor list.index
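A rough harness for exploring these trade-offs on your own data might look like this (my addition; both check_* variants are adapted from code that appears earlier in this thread, and the crossover points depend on your lists):

import timeit
from random import randrange

def check_count(l, x, n):
    return l.count(x) >= n            # single C-level pass, no short-circuit

def check_index(l, x, n):
    i = 0
    try:
        for _ in range(n):
            i = l.index(x, i) + 1     # jump from occurrence to occurrence
        return True
    except ValueError:
        return False

lst = [randrange(1, 10) for _ in range(100000)]
for n in (10, 1000, 10000):
    t_count = timeit.timeit(lambda: check_count(lst, 5, n), number=100)
    t_index = timeit.timeit(lambda: check_index(lst, 5, n), number=100)
    print(n, t_count, t_index)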
I would recommend using Counter from the collections module.
from collections import Counter
import numpy as np
%%time
[k for k, v in Counter(np.random.randint(0, 10000, 10000000)).items() if v > 1100]
#Output:
Wall time: 2.83 s
[1848, 1996, 2461, 4481, 4522, 5844, 7362, 7892, 9671, 9705]
This shows another way of doing it.
Sort the list.
Find the index of the first occurrence of the item.
Increase the index by one less than the number of times the item must occur. (n - 1)
Find if the element at that index is the same as the item you want to find.
def check_list(l, x, n):
    _l = sorted(l)
    try:
        index_1 = _l.index(x)
        return _l[index_1 + n - 1] == x
    except (ValueError, IndexError):  # x absent, or fewer than n items after it
        return False
c = 0
for i in l:
    if i == k:
        c += 1
if c >= n:
    print("true")
else:
    print("false")
Another possibility might be:
def check_list(l, x, n):
    return sum([1 for i in l if i == x]) >= n

Is it faster to make a variable for the length of a string?

I am implementing a reverse(s) function in Python 2.7, and I wrote code like this:
# iterative version 1
def reverse(s):
    r = ""
    for c in range(len(s) - 1, -1, -1):
        r += s[c]
    return r

print reverse("Be sure to drink your Ovaltine")
But I was worried that it re-evaluates the length of the string on each iteration, even though the length never changes.
So I made another version that
# iterative version 2
def reverse(s):
    r = ""
    l = len(s) - 1
    for c in range(l, -1, -1):
        r += s[c]
    return r

print reverse("Be sure to drink your Ovaltine")
This version remembers the length of the string and doesn't ask for it on every iteration. Is this faster for longer strings (say, a string of length 1024) than the first version, or does it have no effect at all?
In [12]: %timeit reverse("Be sure to drink your Ovaltine")
100000 loops, best of 3: 2.53 µs per loop
In [13]: %timeit reverse1("Be sure to drink your Ovaltine")
100000 loops, best of 3: 2.55 µs per loop
reverse is your first method, reverse1 is the second.
As you can see from timing there is very little difference in the performance.
You can use IPython to time your code with the syntax above: just define your functions, then run %timeit followed by the function call with whatever parameters you like.
In the line
for c in range(len(s)-1, -1, -1):
len(s) is evaluated only once, and the result (minus one) is passed as an argument to range. Therefore the two versions are almost identical; if anything, the latter may be (very) slightly slower, as it binds an extra name to the result of the subtraction.
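A quick way to convince yourself of this (my addition): wrap len in a counting helper and check how many times it actually runs.

calls = []

def counted_len(s):
    calls.append(1)        # record each call
    return len(s)

def reverse(s):
    r = ""
    for c in range(counted_len(s) - 1, -1, -1):
        r += s[c]
    return r

print(reverse("Be sure to drink your Ovaltine"))
print(len(calls))          # 1 -- the length is computed once, before the loop runs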

Higher Order Functions vs loops - running time & memory efficiency?

Does using Higher Order Functions & Lambdas make running time & memory efficiency better or worse?
For example, to multiply all numbers in a list :
nums = [1, 2, 3, 4, 5]
prod = 1
for n in nums:
    prod *= n
vs
prod2 = reduce(lambda x,y:x*y , nums)
Does the HOF version have any advantage over the loop version, other than needing fewer lines of code and using a functional approach?
EDIT:
I am not able to add this as an answer as I don't have the required reputation.
I tried to profile the loop and HOF approaches using timeit, as suggested by @DSM.
import timeit

def test1():
    s = """
nums = [a for a in range(1,1001)]
prod = 1
for n in nums:
    prod *= n
"""
    t = timeit.Timer(stmt=s)
    return t.repeat(repeat=10, number=100)

def test2():
    s = """
nums = [a for a in range(1,1001)]
prod2 = reduce(lambda x,y:x*y , nums)
"""
    t = timeit.Timer(stmt=s)
    return t.repeat(repeat=10, number=100)
And this is my result:
Loop:
[0.08340786340144211, 0.07211491653462579, 0.07162720686361926, 0.06593182661083438, 0.06399049758613146, 0.06605228229559557, 0.06419744588664211, 0.0671893658461038, 0.06477527090075941, 0.06418023793167627]
test1 average: 0.0644778902685
HOF:
[0.0759414223099324, 0.07616920129277016, 0.07570730355421262, 0.07604965128984942, 0.07547092059389193, 0.07544737286604364, 0.075532959799953, 0.0755039779810629, 0.07567424616704144, 0.07542563650187661]
test2 average: 0.0754917512762
On average, the loop approach seems to be faster than using HOFs.
Higher-order functions can be very fast.
For example, map(ord, somebigstring) is much faster than the equivalent list comprehension [ord(c) for c in somebigstring]. The former wins for three reasons:
map() pre-sizes the result list to the length of somebigstring. In contrast, the list comprehension must make many calls to realloc() as it grows.
map() only has to do one lookup for ord, first checking globals, then checking and finding it in builtins. The list comprehension has to repeat this work on every iteration.
The inner loop for map runs at C speed. The loop body for the list comprehension is a series of pure Python steps that each need to be dispatched or handled by the eval-loop.
Here are some timings to confirm the prediction:
>>> from timeit import Timer
>>> print min(Timer('map(ord, s)', 's="x"*10000').repeat(7, 1000))
0.808364152908
>>> print min(Timer('[ord(c) for c in s]', 's="x"*10000').repeat(7, 1000))
1.2946639061
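The same reasoning applies to the reduce example from the question: the lambda forces one Python-level function call per element, while operator.mul keeps that step in C. A small sketch (my addition, assuming functools.reduce, which is the plain built-in reduce in Python 2):

import timeit
import operator
from functools import reduce   # built-in reduce in Python 2

nums = list(range(1, 1001))

def loop_prod():
    prod = 1
    for n in nums:
        prod *= n
    return prod

print(timeit.timeit(loop_prod, number=100))
print(timeit.timeit(lambda: reduce(lambda x, y: x * y, nums), number=100))
print(timeit.timeit(lambda: reduce(operator.mul, nums), number=100))
# reduce(operator.mul, ...) avoids the per-element lambda call, so it is
# usually the fastest of the three here.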
In my experience, loops can be very fast, provided they are not nested too deeply and don't involve complex higher math operations. For simple operations and a single layer of loops, they can be as fast as any other approach, maybe faster, as long as only integers are used as the loop index; it really depends on what you are doing.
Also, it may well be that the higher-order function performs just as many iterations as the loop version and might even be a little slower; you would have to time them both to be sure.

Rationale behind Python's preferred for syntax

What is the rationale behind the advocated use of the for i in xrange(...)-style looping constructs in Python? For simple integer looping, the difference in overheads is substantial. I conducted a simple test using two pieces of code:
File idiomatic.py:
#!/usr/bin/env python
M = 10000
N = 10000

if __name__ == "__main__":
    x, y = 0, 0
    for x in xrange(N):
        for y in xrange(M):
            pass
File cstyle.py:
#!/usr/bin/env python
M = 10000
N = 10000

if __name__ == "__main__":
    x, y = 0, 0
    while x < N:
        while y < M:
            y += 1
        x += 1
Profiling results were as follows:
bash-3.1$ time python cstyle.py
real 0m0.109s
user 0m0.015s
sys 0m0.000s
bash-3.1$ time python idiomatic.py
real 0m4.492s
user 0m0.000s
sys 0m0.031s
I can understand why the Pythonic version is slower -- I imagine it has a lot to do with calling xrange N times; perhaps this could be eliminated if there were a way to rewind a generator. However, with this much of a difference in execution time, why would one prefer to use the Pythonic version?
Edit: I conducted the tests again using the code Mr. Martelli provided, and the results were indeed better now:
I thought I'd enumerate the conclusions from the thread here:
1) Lots of code at the module scope is a bad idea, even if the code is enclosed in an if __name__ == "__main__": block.
2) Curiously enough, modifying the code that belonged to thebadone into my incorrect version (letting y grow without resetting) produced little difference in performance, even for larger values of M and N.
Here's the proper comparison, e.g. in loop.py:
M = 10000
N = 10000

def thegoodone():
    for x in xrange(N):
        for y in xrange(M):
            pass

def thebadone():
    x = 0
    while x < N:
        y = 0
        while y < M:
            y += 1
        x += 1
All substantial code should always be in functions -- putting a hundred million loops at a module's top level shows reckless disregard for performance and makes a mockery of any attempts at measuring said performance.
Once you've done that, you see:
$ python -mtimeit -s'import loop' 'loop.thegoodone()'
10 loops, best of 3: 3.45 sec per loop
$ python -mtimeit -s'import loop' 'loop.thebadone()'
10 loops, best of 3: 10.6 sec per loop
So, properly measured, the bad way that you advocate is about 3 times slower than the good way which Python promotes. I hope this makes you reconsider your erroneous advocacy.
You forgot to reset y to 0 after the inner loop.
#!/usr/bin/env python
M = 10000
N = 10000

if __name__ == "__main__":
    x, y = 0, 0
    while x < N:
        while y < M:
            y += 1
        x += 1
        y = 0
Edit: 20.63s after the fix vs. 6.97s using xrange
good for iterating over data structures
The for i in ... syntax is great for iterating over data structures. In a lower-level language, you would generally be iterating over an array indexed by an int, but with the python syntax you can eliminate the indexing step.
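A tiny sketch of that point (my addition):

data = ['ham', 'spam', 'eggs']

# C-style: manual index bookkeeping
i = 0
while i < len(data):
    print(data[i])
    i += 1

# Pythonic: the for statement drives the iteration itself
for item in data:
    print(item)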
This is not a direct answer to the question, but I want to open the dialog a bit more on xrange(). Two things:
A. There is something wrong with one of the OP's statements that no one has corrected yet (yes, in addition to the bug in the code of not resetting y):
"I imagine it has a lot to do with calling xrange N times...."
Unlike traditional counting for loops, Python's for is more like a shell's foreach: it loops over an iterable. Therefore xrange() is evaluated exactly once, when the loop starts, not once per iteration.
B. xrange() is the name of this function in Python 2; it is renamed to range() in Python 3, so keep this in mind when porting. If you didn't know already, xrange() returns an iterator(-like object) while Python 2's range() returns a list. Since the latter is less memory-efficient, xrange() is preferred for plain looping. For anyone who needs an actual list in Python 3, the workaround is list(range(N)).
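A short Python 2 sketch of that distinction (my addition):

print type(range(5))     # <type 'list'>   -- all values materialized up front
print type(xrange(5))    # <type 'xrange'> -- lazy, constant memory
# In Python 3, range() itself is lazy; wrap it in list(...) if you need a list.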
I've repeated the test from @Alex Martelli's answer. The idiomatic for loop is 5 times faster than the while loop:
python -mtimeit -s'from while_vs_for import while_loop as loop' 'loop(10000)'
10 loops, best of 3: 9.6 sec per loop
python -mtimeit -s'from while_vs_for import for_loop as loop' 'loop(10000)'
10 loops, best of 3: 1.83 sec per loop
while_vs_for.py:
def while_loop(N):
    x = 0
    while x < N:
        y = 0
        while y < N:
            pass
            y += 1
        x += 1

def for_loop(N):
    for x in xrange(N):
        for y in xrange(N):
            pass
At module level:
$ time -p python for.py
real 4.38
user 4.37
sys 0.01
$ time -p python while.py
real 14.28
user 14.28
sys 0.01

Python tabstop-aware len() and padding functions

Python's len() and padding functions like string.ljust() are not tabstop-aware, i.e. they treat '\t' like any other single-width character, and don't round len() up to the nearest multiple of tabstop. Example:
len('Bear\tnecessities\t')
is 17 instead of 24 ( i.e. 4+(8-4)+11+(8-3) )
and say I also want a function pad_with_tabs(s) such that
pad_with_tabs('Bear', 15) = 'Bear\t\t'
Looking for simple implementations of these - compactness and readability first, efficiency second.
This is a basic but irritating question.
@gnibbler - can you show a purely Pythonic solution, even if it's, say, 20x less efficient?
Sure you could convert back and forth using str.expandtabs(TABWIDTH), but that's clunky.
Importing math to get TABWIDTH * int( math.ceil(len(s)*1.0/TABWIDTH) ) also seems like massive overkill.
I couldn't manage anything more elegant than the following:
TABWIDTH = 8

def pad_with_tabs(s, maxlen):
    s_len = len(s)
    while s_len < maxlen:
        s += '\t'
        s_len += TABWIDTH - (s_len % TABWIDTH)
    return s
and since Python strings are immutable, unless we want to monkey-patch our function into the string module to add it as a method, we must also assign the result of the function back:
s = pad_with_tabs(s, ...)
In particular I couldn't get clean approaches using a list comprehension or string.join(...):
''.join([s, '\t' * ntabs])
without special-casing the situations where len(s) is not an exact integer multiple of TABWIDTH, or where len(s) >= maxlen already.
Can anyone show better len() and pad_with_tabs() functions?
TABWIDTH = 8

def my_len(s):
    return len(s.expandtabs(TABWIDTH))

def pad_with_tabs(s, maxlen):
    return s + "\t" * ((maxlen - len(s) - 1) / TABWIDTH + 1)   # integer division in Python 2; use // on Python 3
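A quick check against the examples from the question (my addition, assuming Python 2 division semantics as in the code above):

print my_len('Bear\tnecessities\t')      # 24
print repr(pad_with_tabs('Bear', 15))    # 'Bear\t\t'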
Why did I use expandtabs()?
Well it's fast
$ python -m timeit '"Bear\tnecessities\t".expandtabs()'
1000000 loops, best of 3: 0.602 usec per loop
$ python -m timeit 'for c in "Bear\tnecessities\t":pass'
100000 loops, best of 3: 2.32 usec per loop
$ python -m timeit '[c for c in "Bear\tnecessities\t"]'
100000 loops, best of 3: 4.17 usec per loop
$ python -m timeit 'map(None,"Bear\tnecessities\t")'
100000 loops, best of 3: 2.25 usec per loop
Anything that iterates over your string is going to be slower, because just the iteration is ~4 times slower than expandtabs even when you do nothing in the loop.
$ python -m timeit '"Bear\tnecessities\t".split("\t")'
1000000 loops, best of 3: 0.868 usec per loop
Even just splitting on tabs takes longer, and you'd still need to iterate over the split and pad each item to the tab stop.
I believe gnibbler's is the best for most practical cases. But anyway, here is a naive solution (not accounting for CR, LF, etc.) that computes the length of a string without creating an expanded copy:
def tab_aware_len(s, tabstop=8):
    pos = -1
    extra_length = 0
    while True:
        pos = s.find('\t', pos + 1)
        if pos < 0:
            return len(s) + extra_length
        extra_length += tabstop - (pos + extra_length) % tabstop - 1
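A quick sanity check against the example from the question (my addition):

print(tab_aware_len('Bear\tnecessities\t'))        # 24
print(len('Bear\tnecessities\t'.expandtabs(8)))    # 24, same as gnibbler's my_len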
It could be useful for some huge strings or even memory-mapped files. And here is the padding function, a bit optimized:
def pad_with_tabs(s, max_len, tabstop=8):
    length = tab_aware_len(s, tabstop)
    if length < max_len:
        s += '\t' * ((max_len - 1) // tabstop + 1 - length // tabstop)
    return s
TABWIDTH * int( math.ceil(len(s)*1.0/TABWIDTH) ) is indeed a massive over-kill; you can get the same result much more simply. For positive i and n, use:
def round_up_positive_int(i, n):
    return ((i + n - 1) // n) * n
This procedure works in just about any language I've ever used, after appropriate translation.
Then you can do next_pos = round_up_positive_int(len(s), TABWIDTH)
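For instance (my addition), computing the next tab stop at or after a given column:

TABWIDTH = 8
print(round_up_positive_int(17, TABWIDTH))   # 24
print(round_up_positive_int(24, TABWIDTH))   # 24 -- already on a tab stop
print(round_up_positive_int(25, TABWIDTH))   # 32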
For a slight increase in the elegance of your code, instead of
while(s_len < maxlen):
use this:
while s_len < maxlen:
Unfortunately I was unable to make use of the accepted answer as-is, so here is a slightly modified version, in case someone runs into the same problem and discovers this post via a search:
from decimal import Decimal, ROUND_HALF_UP

TABWIDTH = 4

def pad_with_tabs(src, max_len):
    return src + "\t" * int(
        Decimal((max_len - len(src.expandtabs(TABWIDTH))) / TABWIDTH + 1).quantize(0, ROUND_HALF_UP))

def pad_fields(input):
    result = []
    longest = max(len(x) for x in input)
    for row in input:
        result.append(pad_with_tabs(row, longest))
    return result
The output list contains properly padded rows, with the tab count rounded so that the resulting data has the same indentation level even in the corner 0.5 cases where the original answer would add no tab.
