for each loop python - nested loop - python

lyrics = ["I want to break free", "I want to break free",
          "I want to break free", "yes, I want to break free"]
number_of_lines = 6
I am trying to create a loop that prints as many lines as number_of_lines says. In this specific example I basically need to loop over lyrics 1.5 times: print the whole list (4 lines) and then the first 2 lines again to get to 6 = number_of_lines. How do you do that exactly?
Thanks much in advance. This is what I have so far:
for line in lyrics:
    print(line)

Use itertools.cycle to repeat the contents of the list as often as necessary to obtain the desired number of lines (using itertools.islice):
from itertools import cycle, islice
lyrics = ["I want to break free", "I want to break free",
          "I want to break free", "yes, I want to break free"]
number_of_lines = 6
for line in islice(cycle(lyrics), number_of_lines):
    print(line)
This incurs some additional memory overhead, though. cycle has to cache the values it reads from its iterator in order to repeat them. Just something to keep in mind when using cycle and large lists, but typically the lists will be small.
The issue is that cycle iterates over an iterator it obtains from its iterable argument, and the list behind the iterator could change between calls to next. For example,
>>> from itertools import cycle
>>> x = [1,2,3]
>>> i = cycle(x)
>>> next(i)
1
>>> next(i)
2
>>> x.clear()
>>> next(i) # Not 3
1
>>> next(i)
2
>>> next(i)
1
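If the underlying list might be mutated mid-iteration, one defensive option is to hand cycle a private copy up front, so later changes to the original have no effect (a minimal sketch):

```python
from itertools import cycle, islice

x = [1, 2, 3]
snapshot = cycle(list(x))  # cycle over a copy, not over x itself
x.clear()                  # mutating x no longer matters
first_five = list(islice(snapshot, 5))
print(first_five)  # [1, 2, 3, 1, 2]
```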

Another option (maybe a bit more "Pythonic" than using the modulo operator) is to use the cycle() function from the itertools module that ships with Python, as follows:
lyrics = ["I want to break free", "I want to break free", "I want to break free", "yes, I want to break free"]
number_of_lines = 6
from itertools import cycle
line_in_lyrics_cycle = cycle(lyrics)
for _ in range(number_of_lines):
    print(next(line_in_lyrics_cycle))
Because line_in_lyrics_cycle is an iterator, you get a line out of it with the built-in next() function and don't need the index value provided in the loop by range(). A nice side effect of using next() is that you can read the code the way you would read an explanation of what it does.
For the sake of completeness, here is the code proposed by Kelly Bundy in comments on the other itertools-based answer by chepner:
from itertools import islice, chain, repeat
q, r = divmod(number_of_lines, len(lyrics))
fulls = chain.from_iterable(repeat(lyrics, q))
tail = islice(lyrics, r)
for line in chain(fulls, tail):
    print(line)
which, expressed in terms of Python lists only, becomes:
q, r = divmod(number_of_lines, len(lyrics))
lyrics_total = q*lyrics+lyrics[0:r]
for line in lyrics_total:
    print(line)
Looking especially at the first version of the code, which makes full use of the itertools iterators, the entire BEAUTY of
print( next(line_in_lyrics_cycle) )
is gone, buried so deep under really hard-to-understand code that you probably need three times as many lines of text to explain what, how, and why this code does what it does.
In this context the code provided by Pingu needs much less explanation, because you only have to understand how the modulo operator % works and need not worry about divmod and/or itertools.
It excels through its shortness, getting down to the bare point of cycling, which is exactly what the modulo operator is about:
for i in range(number_of_lines):
    print(lyrics[i % len(lyrics)])

You could use the modulo operator as follows:
lyrics = ["I want to break free", "I want to break free", "I want to break free", "yes, I want to break free"]
number_of_lines = 6
for i in range(number_of_lines):
    print(lyrics[i % len(lyrics)])
Output:
I want to break free
I want to break free
I want to break free
yes, I want to break free
I want to break free
I want to break free

Related

Removing specific index from list while iterating / improving nested loops

I am in a situation where I have 3 nested loops. Every x iterations, I want to restart the 2nd for loop.
If an element in the 3rd for loop meets a certain condition, I want to remove that element from the list.
I'm not sure how to implement this and using a list comprehension or creating a new list wouldn't really work based on the similar questions I read.
Example pseudocode:
items_of_interest = ["apple", "pear"]
while True:  # restart 10,000 iterations (API key only lasts 10,000 requests)
    api_key = generate_new_api_key()
    for i in range(10000):
        html = requests.get(f"http://example.com/{api_key}/items").text
        for item in items_of_interest:
            if item in html:
                items_of_interest.remove(item)
The original code is a lot bigger with a lot of checks, constantly parsing an API for something, and it's a bit messy to organize as you can tell. I'm not sure how to reduce the complexity.
Without knowing the full picture, it's hard to say which approach is optimal. In any case, here's one approach using comprehension.
items_of_interest = ["apple", "pear"]
while True:  # restart 10,000 iterations (API key only lasts 10,000 requests)
    api_key = generate_new_api_key()
    for i in range(10000):
        html = requests.get(f"http://example.com/{api_key}/items").text
        # Split your text blob into separate strings in a set
        haystack = set(html.split(' '))
        # Exclude the found items!
        items_of_interest = list(set(items_of_interest).difference(haystack))
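To illustrate the set difference step in isolation (the html string here is a made-up stand-in for the API response):

```python
items_of_interest = ["apple", "pear", "plum"]
html = "the page mentions apple and banana"

haystack = set(html.split(' '))
# keep only the items that did NOT appear in the response
items_of_interest = list(set(items_of_interest).difference(haystack))
print(sorted(items_of_interest))  # ['pear', 'plum']
```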
It works much like you suggest. The relevant keyword is del, e.g. (note that in Python 3 you need a real list here, so use list(range(5))):
>>> x = list(range(5))
>>> for i in ['a','b','c']:
...     print('i:' + str(i))
...     for j in x:
...         print('j:' + str(j))
...         if j == 3:
...             del x[j]
...
i:a
j:0
j:1
j:2
j:3
i:b
j:0
j:1
j:2
j:4
i:c
j:0
j:1
j:2
j:4
3 has been removed from the list x for the later passes.
See also Python doco https://docs.python.org/3.7/tutorial/datastructures.html and SO answers like Difference between del, remove and pop on lists
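For quick reference, the three removal styles contrasted in that linked answer behave like this:

```python
x = [10, 20, 30, 40]
del x[0]           # delete by index, returns nothing
popped = x.pop(0)  # delete by index, returns the removed element
x.remove(40)       # delete by (first matching) value
print(x, popped)   # [30] 20
```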

Functional approach to file parsing in Python

I have a text file describing an electronic circuit and a few other things done with it. I've built a simple Python code that splits the file into different units which can then be further analyzed if needed.
The syntax of the simulation language defines these units as contained within the following lines:
subckt xxx .....
...
...
ends xxx ...
There is a few of these 'text blocks' and other stuff I'm parsing or leaving out - like comment lines.
To accomplish this, I use the following core:
with open('input') as f:
    for l in iter(f):
        if 'subckt' not in l:
            pass
        else:
            with open('output', 'w') as o:  # needs write mode for o.write()
                o.write(l)
                for l in iter(f):
                    if 'ends' in l:
                        o.write(l)
                        break
                    else:
                        o.write(l)
(can't easily paste the real code, there might be oversights)
The nice thing about it is the fact that iter(f) keeps scanning the file so when I break out of the inner loop as I reached the ends line of a subckt, the outer loop keeps going from that point onward, searching for new occurrences of the token subckt in subsequent lines.
I am looking for suggestions and/or guidance on how to transform the forest of if/then clauses into something more functional, i.e. based on 'pure' functions which just yield values (the file rows or lines) and are then composed so as to arrive at the final result.
Specifically, I am not sure how to approach the fact that the generator/map/filter should actually yield a different row based on whether it has found the subckt token or not.
I can think of a filter of the form:
line = filter(lambda x: 'subckt' in x, iter(f))
but this of course only gives me the lines where that string is present, whereas I would like - from that moment on - yield all lines, until the ends token is found.
Is this something I'd have to handle with recursion? Or maybe itertools.tee?
Seems to me that what I want is to have some form of state, i.e. "you have reached a subckt", but without resorting to a true state variable, which would be against the functional paradigm.
Not sure if this is what you are looking for. blocks(f) is a generator producing the blocks in your file f. Each block is an iterator over the lines between 'subckt' and 'ends'. If you want to include those two lines in the block, you'd have to do some more work in __block. But I hope this gives you an idea:
def __block(f):
    while 'subckt' not in next(f): pass  # raises StopIteration at EOF
    return iter(next(iter([])) if 'ends' in l else l.strip() for l in f)

def blocks(f):
    while 1: yield __block(f)  # StopIteration from __block will stop the generator
f = open('data.txt')
for block in blocks(f):
    # process block
    for line in block:
        # process line
        ...
next(iter([])) is a little hack to terminate a comprehension/generator.
This answer also works, still very keen on hearing comments:
from itertools import takewhile, dropwhile

def start(l): return 'subckt' not in l
def stop(l): return 'ends' not in l

def sub(it):  # avoid shadowing the built-in iter()
    while True:
        a = list(dropwhile(start, takewhile(stop, it)))
        if len(a):
            yield a
        else:
            return

f = open('file.txt')
for b in sub(f):
    ...  # process b
f.close()
Something I couldn't work out yet: enclose the last line (containing ends keyword) in the output.
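One way to get that closing line into the output is to manage the block boundaries explicitly in a small generator function; this is a sketch of the idea, not the original poster's code:

```python
def blocks(lines):
    """Yield lists of lines from 'subckt' through the matching 'ends' line."""
    block = None
    for line in lines:
        if block is None:
            if 'subckt' in line:
                block = [line]      # start a new block, keep the header line
        else:
            block.append(line)
            if 'ends' in line:      # keep the terminating line too
                yield block
                block = None

text = ["* comment", "subckt inv", "m1 ...", "ends inv", "junk"]
result = list(blocks(text))
print(result)  # [['subckt inv', 'm1 ...', 'ends inv']]
```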

Transposition Cipher in Python

I'm currently trying to code a transposition cipher in Python. However, I have reached a point where I'm stuck.
my code:
key = "german"
length = len(key)
plaintext = "if your happy and you know it clap your hands, clap your hands"
formatted = "".join(plaintext.split()).replace(",", "")

def split_text(formatted, length):
    return [formatted[i:i + length] for i in range(0, len(formatted), length)]

split = split_text(formatted, length)

def encrypt():
I use that to count the length of the string; I then use the length to determine how many columns to create within the program. So it would create this:
GERMAN
IFYOUR
HAPPYA
NDYOUK
NOWITC
LAPYOU
RHANDS
CLAPYO
URHAND
S
This is now where I'm stuck, as I want the program to create a string by combining the columns together. So it would combine each column to create:
IHNNLRCUSFADOAHLRYPYWPAAH .....
I know I would need a loop of some sort but am unsure how to tell the program to create such a string.
Thanks
You can use slices of the string to get each letter of the string in steps of 6 (length):
print(formatted[0::length])
#output:
ihnnlrcus
Then just loop through all the possible start indices in range(length) and link them all together:
def encrypt(formatted, length):
    return "".join([formatted[i::length] for i in range(length)])
Note that this doesn't actually use split_text; it takes formatted directly:
print(encrypt(formatted,length))
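For reference, a self-contained version of this approach (the question's snippet had the formatted/Formatted casing mixed up and called split_text before defining it):

```python
def split_text(formatted, length):
    # split the flattened text into rows of `length` characters
    return [formatted[i:i + length] for i in range(0, len(formatted), length)]

def encrypt(formatted, length):
    # read off column i by stepping through the string in strides of `length`
    return "".join(formatted[i::length] for i in range(length))

key = "german"
plaintext = "if your happy and you know it clap your hands, clap your hands"
formatted = "".join(plaintext.split()).replace(",", "")
cipher = encrypt(formatted, len(key))
print(cipher[:9])  # ihnnlrcus
```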
The problem with using split_text is that you then cannot make use of tools like zip, since they stop when the first iterator stops (so because the last group only has one character in it, you would only get one group from zip(*split)):
for i in zip("stuff that is important", "a"):
    print(i)
#output:
('s', 'a')
#nothing else, since one of the iterators finished.
In order to use something like that you would have to redefine the way zip works by allowing some of the iterators to finish and continue until all of them are done:
def myzip(*iterators):
    iterators = tuple(iter(it) for it in iterators)
    while True:  # broken out of when none of the iterators still have items in them
        group = []
        for it in iterators:
            try:
                group.append(next(it))
            except StopIteration:
                pass
        if group:
            yield group
        else:
            return  # none of the iterators still had items in them
then you can use this to process the split up data like this:
encrypted_data = ''.join(''.join(x) for x in myzip(*split))
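As an aside, the standard library's itertools.zip_longest can stand in for a hand-rolled myzip: it runs until the longest input is exhausted, and with an empty-string fill value the padding simply joins away (a sketch with made-up sample groups):

```python
from itertools import zip_longest

split = ["ifyour", "happya", "s"]  # uneven groups, as from split_text
# zip_longest keeps going until the longest group is exhausted
columns = [''.join(group) for group in zip_longest(*split, fillvalue='')]
print(columns)  # ['ihs', 'fa', 'yp', 'op', 'uy', 'ra']
```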

write python code in single line

Can I write the following code in a single line in Python?
t=int(input())
while t:
    t-=1
    n=int(input())
    a=i=0
    while not(n&1<<i):
        i+=1
    while n&1<<i:
        n^=1<<i
        a=a*2+1
        i+=1
    print(n^1<<i)+a/2
If not, how can I write this piece of code in the minimum possible lines? (PS: I could reduce it to 6 lines, can it be any better?)
My solution:
t=int(input())
while t:
    t-=1;n=int(input());a=i=0
    while not(n&1<<i):i+=1
    while n&1<<i:n^=1<<i;a=a*2+1;i+=1
    print(n^1<<i)+a/2
Thanks
Since Python's list comprehensions are Turing complete and require no line breaks, any program can be written as a Python one-liner.
If you enforce arbitrary restrictions (like "order of the statements" - what does that even mean? Execution order? First appearance in the source code?), then the answer is: you can eliminate some line breaks, but not all.
instead of
if x:
    do_stuff()
you can do:
if x: do_stuff()
instead of
x = 23
y = 42
you can do:
x,y = 23, 42
and instead of
do_stuff()
do_more_stuff()
you can do
do_stuff(); do_more_stuff()
And if you really, really have to, you can exec a multi-line python program in one line, so your program becomes something like:
exec('''t=int(input())\nwhile t:\n t-=1;n=int(input());a=i=0\n while not(n&1<<i):i+=1\n while n&1<<i:n^=1<<i;a=a*2+1;i+=1\n print(n^1<<i)+a/2\n''')
But if you do this in "real" code, e.g. not just for fun, kittens die.
It's not recommended to collapse lines in Python very often, because you lose Python's famous simplicity and clarity that way. And you often cannot collapse lines, because indentation levels are used to define block structure / nesting.
But if you really want a condensed version:
print("s0")
while True:
    print("s1"); print("s2")
    while True: print("s3")
    while True: print("s4"); print("s5"); print("s6")
    print("s7")
(Where your expressions have been replaced with True for simplicity.)

What's wrong with my python multiprocessing code?

I am an almost-new programmer who has been learning Python for a few months. For the last two weeks I had been coding a script to search permutations of numbers that make magic squares.
Finally I succeeded in searching all 880 4x4 magic square number sets within 30 seconds. After that I made a different Perimeter Magic Square program. It finds more than 10,000,000 permutations, so I want to store them part by part to files. The problem is that my program doesn't use all my processes: while it is storing some partial data to a file, it stops searching for new number sets. I hope I could make one process of my CPU keep searching while the others store the searched data to files.
The following is of the similar structure to my magic square program.
while True:
    print('How many digits do you want? (more than 20): ', end='')
    ansr = input()
    if ansr.isdigit() and int(ansr) > 20:
        ansr = int(ansr)
        break
    else:
        continue

fileNum = 0
itemCount = 0

def fileMaker():
    global fileNum, itemCount
    tempStr = ''
    for i in permutationList:
        itemCount += 1
        tempStr += str(sum(i[:3])) + ' : ' + str(i) + ' : ' + str(itemCount) + '\n'
    fileNum += 1
    file = open('{0} Permutations {1:03}.txt'.format(ansr, fileNum), 'w')
    file.write(tempStr)
    file.close()

numList = [i for i in range(1, ansr+1)]
permutationList = []
itemCount = 0

def makePermutList(numList, ansr):
    global permutationList
    for i in numList:
        numList1 = numList[:]
        numList1.remove(i)
        for ii in numList1:
            numList2 = numList1[:]
            numList2.remove(ii)
            for iii in numList2:
                numList3 = numList2[:]
                numList3.remove(iii)
                for iiii in numList3:
                    numList4 = numList3[:]
                    numList4.remove(iiii)
                    for v in numList4:
                        permutationList.append([i, ii, iii, iiii, v])
                        if len(permutationList) == 200000:
                            print(permutationList[-1])
                            fileMaker()
                            permutationList = []
    fileMaker()

makePermutList(numList, ansr)
I added from multiprocessing import Pool at the top. And I replaced two 'fileMaker()' parts at the end with the following.
if __name__ == '__main__':
    workers = Pool(processes=2)
    workers.map(fileMaker, ())
The result? Oh no. It just works awkwardly. For now, multiprocessing looks too difficult for me.
Anybody, please, teach me something. How should my code be modified?
Well, addressing some things that are bugging me before getting to your asked question.
numList = [i for i in range(1, ansr+1)]
I know list comprehensions are cool, but please just do list(range(1, ansr+1)) if you need the iterable to be a list (which you probably don't need, but I digress).
def makePermutList(numList, ansr):
...
This is quite the hack. Is there a reason you can't use itertools.permutations(numList,n)? It's certainly going to be faster, and friendlier on memory.
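Indeed, the five nested loops collapse into a single itertools.permutations call; a minimal sketch of the idea (using a small numList for illustration):

```python
from itertools import permutations

numList = list(range(1, 6))
# every ordered selection of 5 distinct elements from numList
perms = [list(p) for p in permutations(numList, 5)]
print(len(perms))   # 120, i.e. 5 * 4 * 3 * 2 * 1
print(perms[0])     # [1, 2, 3, 4, 5]
```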
Lastly, answering your question: if you are looking to improve i/o performance, the last thing you should do is make it multithreaded. I don't mean you shouldn't do it, I mean that it should literally be the last thing you do. Refactor/improve other things first.
You need to take all of that top-level code that uses globals, apply the backspace key to it, and rewrite functions that pass data around properly. Then you can think about using threads. I would personally use from threading import Thread and manually spawn Threads to do each unit of I/O rather than using multiprocessing.
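A hedged sketch of the "one worker keeps searching while another writes" idea, using a queue and a writer thread (all names and the chunk size are made up for illustration; this is not the asker's program):

```python
import queue
import threading

def writer(q, out):
    # consume chunks from the queue until the None sentinel arrives
    while True:
        chunk = q.get()
        if chunk is None:
            break
        out.extend(chunk)  # stand-in for writing a file

q = queue.Queue()
written = []
t = threading.Thread(target=writer, args=(q, written))
t.start()

# producer: keeps generating while the writer thread drains the queue
chunk = []
for n in range(10):
    chunk.append(n)
    if len(chunk) == 4:   # hand off a full chunk and keep producing
        q.put(chunk)
        chunk = []
if chunk:
    q.put(chunk)          # flush the remainder
q.put(None)               # tell the writer to stop
t.join()
print(written)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```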
