How to use a for loop when connecting PyQt buttons to lambda functions [duplicate] - python

This question already has answers here:
Connecting slots and signals in PyQt4 in a loop
(3 answers)
PyQt5 clicked button created in loop [duplicate]
(1 answer)
lambda in for loop only takes last value [duplicate]
(3 answers)
Closed 2 days ago.
I use PyQt5 for GUI programming, and I connect QPushButton clicked signals to slots with lambda functions.
Because I need several of these buttons, I want to use a for loop to shorten the code. But the result is different when I use a for loop.
The code is:
openFile_1_1.clicked.connect(lambda: self.btn_openFile(1, 1))
openFile_1_2.clicked.connect(lambda: self.btn_openFile(1, 2))
openFile_2_1.clicked.connect(lambda: self.btn_openFile(2, 1))
openFile_2_2.clicked.connect(lambda: self.btn_openFile(2, 2))
openFile_3_1.clicked.connect(lambda: self.btn_openFile(3, 1))
openFile_3_2.clicked.connect(lambda: self.btn_openFile(3, 2))
openFile_4_1.clicked.connect(lambda: self.btn_openFile(4, 1))
openFile_4_2.clicked.connect(lambda: self.btn_openFile(4, 2))
openFile_5_1.clicked.connect(lambda: self.btn_openFile(5, 1))
openFile_5_2.clicked.connect(lambda: self.btn_openFile(5, 2))
openFile_6_1.clicked.connect(lambda: self.btn_openFile(6, 1))
openFile_6_2.clicked.connect(lambda: self.btn_openFile(6, 2))
openFile_7_1.clicked.connect(lambda: self.btn_openFile(7, 1))
openFile_7_2.clicked.connect(lambda: self.btn_openFile(7, 2))
-----------------------------------------------------------------------------------
def btn_openFile(self, i, j):
    fname = QFileDialog.getOpenFileName(self, 'Open file', './')
    globals()['file_name_{}_{}'.format(i, j)].setText(fname[0])
To shorten that code, I used a for loop like this:
for i in range(1, 8):
    for j in range(1, 3):
        globals()['openFile_{}_{}'.format(i, j)].clicked.connect(lambda: self.btn_openFile(i, j))
-----------------------------------------------------------------------------------
def btn_openFile(self, i, j):
    fname = QFileDialog.getOpenFileName(self, 'Open file', './')
    globals()['file_name_{}_{}'.format(i, j)].setText(fname[0])
I want to connect each 'openFile_{}_{}' button to its matching 'file_name_{}_{}' variable, but when I use the for loop, all the buttons (openFile_1_1 through openFile_7_2) end up connected to file_name_7_2. How can I fix my code?
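The linked duplicates describe the cause: a lambda looks up i and j only when the button is clicked, by which time the loops have finished at i = 7, j = 2. The usual fix is to bind the current values as default arguments of the lambda; a minimal sketch, reusing the globals() lookup from the question:
for i in range(1, 8):
    for j in range(1, 3):
        button = globals()['openFile_{}_{}'.format(i, j)]
        # Default arguments are evaluated now, so each lambda keeps its own i and j;
        # 'checked' absorbs the bool that the clicked signal may pass to the slot.
        button.clicked.connect(lambda checked=False, i=i, j=j: self.btn_openFile(i, j))
Each connection then calls btn_openFile with the values i and j had at connect time, not the final loop values.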

Related

Checking most used colors in image [duplicate]

This question already has an answer here:
fastest way to find the rgb pixel color count of image
(1 answer)
Closed last month.
I want to know the list of most used colors in this picture:
I tried the following code, but it takes too long:
from PIL import Image

colors = []

class Color:
    def __init__(self, m, c):
        self.col = c
        self.many = m

im = Image.open("~/.../strowberry.jpeg")

def cool():
    for i in im.getdata():
        i = str(i)
        i = i.replace(", ", "")
        i = i.replace("(", "")
        i = i.replace(")", "")
        i = int(i)
        colors.append(Color(1, i))
    for x in colors:
        num = 0
        for j in range(len(colors)):
            if x.col == colors[num].col:
                del colors[num]
                num -= 1
                x.many += 1
            num += 1
    for obj in colors:
        print(obj.many, obj.col)

cool()
Why is the code so slow and how can I improve the performance?
Do not reinvent the wheel. The Python standard library contains a Counter that can do this for you much more efficiently. Using it, you don't need to iterate over the data yourself, define a class, or perform the string operations. The code is very short and simple:
import collections
from PIL import Image

im = Image.open('strawberry.jpg')
counter = collections.Counter(im.getdata())
for color in counter:
    print(f'{counter[color]} times color {color}')
If you really need the Color objects (for whatever you want to do with it later in your program), you can easily create this from the counter object using this one-liner:
colors = [Color(counter[color], color) for color in counter]
...and if you really need it in the same string format as in your original code, use this instead:
colors = [Color(counter[color], int(''.join(map(str, color)))) for color in counter]
Note that the two one-liners make use of list comprehension, which is very Pythonic and in many cases very fast as well.
The code int(''.join(map(str, color))) does the same as your 5 lines of code in the inner loop. This uses the fact that the original data is a tuple of integers, which can be converted to strings using map(str, ...) and then concatenated together using ''.join(...).
All this together took about 0.5 seconds on my machine, without the printing (which is slow anyway).
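If the goal is specifically a ranking of the most used colors, Counter.most_common() sorts the counts for you; a small sketch along the same lines:
import collections
from PIL import Image

im = Image.open('strawberry.jpg')
counter = collections.Counter(im.getdata())

# most_common(10) returns the ten (color, count) pairs with the highest counts,
# in descending order of count.
for color, count in counter.most_common(10):
    print(f'{count} times color {color}')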

How do I iterate functions in Python to create an array?

In Scala, since it is a functional programming language, I can sequentially iterate a function from a starting value to create an array [f(initial), f(f(initial)), f(f(f(initial))), ...].
For example, if I want to predict the future temperature based on the current temperature, I can do something like this in Python:
import random as rnd

def estimateTemp(previousTemp):
    # function to estimate the temperature, for simplicity assume it is as follows:
    return previousTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

Temperature = [0.0 for i in range(100)]
for i in range(1, 100):
    Temperature[i] = estimateTemp(Temperature[i-1])
The problem with the previous code is that it uses a for loop and requires a predefined array for the temperature, and in many languages the for loop can be replaced with an iterator. For example, in Scala you can easily do the previous example by using the iterate method to create a list:
val Temperature = List.iterate(0.0, 100)( n =>
  (n * (scala.util.Random.nextDouble()*0.4+0.8)) +
  (scala.util.Random.nextDouble()*2-1)
)
Such an implementation is easy to follow and clearly written.
Python has the itertools module, which imitates some features of functional programming languages. Are there any methods in itertools that imitate the Scala iterate method?
You could turn your function into an infinite generator and take an appropriate slice:
import random as rnd
from itertools import islice

def estimateTemp(startTemp):
    while 1:
        yield startTemp
        startTemp = startTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

temperature = list(islice(estimateTemp(0.0), 0, 100))
An equivalent program can be produced using itertools.accumulate:
from itertools import accumulate

# x is the running value; estimateTemp here is the original (non-generator) function
temperature = list(accumulate(range(0, 100), lambda x, y: estimateTemp(x)))
So here we have an accumulator x that is updated, while the y parameter (the next element of the iterable) is ignored; the range is only used as a way to iterate 100 times.
Unfortunately, itertools does not have this functionality built-in. Haskell and Scala both have this function, and it bothered me too. An itertools wrapper called Alakazam that I am developing has some additional helper functions, including the aforementioned iterate function.
Runnable example using Alakazam:
import random as rnd
import alakazam as zz

def estimateTemp(previousTemp):
    return previousTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

Temperature = zz.iterate(estimateTemp, 0.0).take(100).list()
print(Temperature)
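If you prefer to stay with the standard library, the iterate pattern itself is only a few lines as a generator; a minimal sketch of the same idea:
import random as rnd
from itertools import islice

def iterate(func, start):
    # Yields start, func(start), func(func(start)), ... indefinitely.
    while True:
        yield start
        start = func(start)

def estimateTemp(previousTemp):
    return previousTemp * rnd.uniform(0.8, 1.2) + rnd.uniform(-1.0, 1.0)

Temperature = list(islice(iterate(estimateTemp, 0.0), 100))
print(Temperature)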

How, in Python, to split up a large result of itertools.product into groups and iterate in parallel [duplicate]

This question already has answers here:
How do I "multi-process" the itertools product module?
(2 answers)
Closed 6 years ago.
In Python I am using itertools.product to iterate over all possible combinations of a list of characters, which produces a very large result.
However, when I look at the Windows 10 Task Manager, the Python process executing this task is only using 13.5% CPU. I looked into multiprocessing in Python and found that with pool.map I can map instances of a function onto a pool and have them run in parallel. This is great, but since I am iterating over a single (very large) list inside one long-running function call, this doesn't help me.
So the way I see it, the only way to speed this up is to split the result of itertools.product into groups and iterate over the groups in parallel. If I can get the length of the result of itertools.product, I can divide it into groups by the number of processor cores available and then, using multiprocessing, iterate over all of those groups in parallel.
So my question is can this be done, and what is the best approach?
Maybe there is a module out there for this sort of thing?
The concept is something like this. (The following actually works, but gives a MemoryError when I try to scale it up to the full character set that is commented out.)
#!/usr/bin/env python3.5
import sys, itertools, multiprocessing, functools

def process_group(iIterationNumber, iGroupSize, sCharacters, iCombinationLength, iCombintationsListLength, iTotalIterations):
    iStartIndex = 0
    if iIterationNumber > 1: iStartIndex = (iIterationNumber - 1) * iGroupSize
    iStopIndex = iGroupSize * iIterationNumber
    if iIterationNumber == iTotalIterations: iStopIndex = iCombintationsListLength

    aCombinations = itertools.product(sCharacters, repeat=iCombinationLength)
    lstCombinations = list(aCombinations)

    print("Iteration#", iIterationNumber, "StartIndex:", iStartIndex, iStopIndex)
    for iIndex in range(iStartIndex, iStopIndex):
        aCombination = lstCombinations[iIndex]
        print("Iteration#", iIterationNumber, ''.join(aCombination))

if __name__ == '__main__':
    #_sCharacters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789~`!##$%^&*()_-+={[}]|\"""':;?/>.<,"
    _sCharacters = "123"
    _iCombinationLength = 4

    aCombinations = itertools.product(_sCharacters, repeat=_iCombinationLength)
    lstCombinations = list(aCombinations)
    _iCombintationsListLength = len(lstCombinations)

    iCPUCores = 4
    _iGroupSize = round(_iCombintationsListLength / iCPUCores)
    print("Length", _iCombintationsListLength)

    pool = multiprocessing.Pool()
    pool.map(functools.partial(process_group, iGroupSize=_iGroupSize, sCharacters=_sCharacters,
                               iCombinationLength=_iCombinationLength,
                               iCombintationsListLength=_iCombintationsListLength,
                               iTotalIterations=iCPUCores),
             range(1, iCPUCores + 1))
Thanks for your time.
You can't share the product() output among subprocesses; there is no good way to break this up into chunks per process. Instead, have each subprocess generate new values but give them a prefix to start from.
Remove outer loops from the product() call and create groups from that. For example, you could create len(sCharacters) groups by decreasing iCombinationLength by one and passing in each element from sCharacters as a prefix:
for prefix in sCharacters:
    # create group for iCombinationLength - 1 results
    # pass in the prefix
Each group then can loop over product(sCharacters, repeat=iCombinationLength - 1) themselves and combine that with the prefix. So group 1 starts with '0', group 2 starts with '1', etc.
You can extend this by using combinations of 2 or 3 or more characters. For your 10 input characters, that'd create 100 or 1000 groups, respectively. The generic version is:
prefix_length = 3
for prefix in product(sCharacters, repeat=prefix_length):
    # create group for iCombinationLength - prefix_length
    # pass in the prefix
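Put together, a runnable sketch of the prefix idea could look like the following (the helper names here are illustrative, not taken from the question):
import itertools
import multiprocessing
from functools import partial

def process_group(prefix, sCharacters, iCombinationLength):
    # Each worker generates only the combinations that start with its prefix,
    # so nothing is materialised up front and no indices need to be shared.
    prefix = ''.join(prefix)
    for rest in itertools.product(sCharacters, repeat=iCombinationLength - len(prefix)):
        print(prefix + ''.join(rest))

if __name__ == '__main__':
    sCharacters = "123"
    iCombinationLength = 4
    prefix_length = 1  # one group per character; use 2 or 3 for more, smaller groups

    pool = multiprocessing.Pool()
    pool.map(partial(process_group, sCharacters=sCharacters, iCombinationLength=iCombinationLength),
             itertools.product(sCharacters, repeat=prefix_length))
    pool.close()
    pool.join()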

Concatenation of 3 columns (total number of possible combinations) using DataNitro

I am new to DataNitro and also a Python beginner, and I am currently exploring the endless possibilities of Excel spreadsheet programming using DataNitro.
I would like to concatenate 3 different columns (A, B & C), generating every possible combination of the three.
A               B               C
172-000072-00   523-000072-00   120-000172-01
172-000072-04   523-000072-01   120-000172-06
172-000072-01   523-000072-02   120-000172-07
172-000072-05   523-000072-03   120-000172-08
172-000072-08   523-000072-04   120-000161-01
172-000072-09   523-000072-05   120-000161-06
                523-000072-06   120-000161-07
                523-000072-07   120-000161-08
One combination could be "172-000072-00 / 523-000072-00 / 120-000172-01"
There would be 6 X 8 X 8 = 384 combinations.
How can I generate this in Excel using DataNitro?
I tried to make my own implementation for this problem -
def conctn():
    CellRange("E1:E384").value =
        [for x in CellRange("A1:A5"):
            for y in CellRange("B1:B8"):
                for z in CellRange("C1:C8"):
                    return CellRange(z).value
                    return CellRange(y).value + CellRange(z).value
                    return CellRange(x).value + CellRange(y).value + CellRange(z).value]
This should work:
to_write = []
for x in CellRange("A1:A5").value:
    for y in CellRange("B1:B8").value:
        for z in CellRange("C1:C8").value:
            to_write.append(' / '.join([x, y, z]))
Cell("E1").vertical = to_write
Here's what's happening:
The first line creates a list used to store all the combinations, and the last line uses the 'vertical' keyword to write this list to Excel. This keyword is a shortcut for writing a column starting from a given cell without figuring out how long it is.
The three 'for' loops iterate through every combination of the variables.
"' / '.join([x, y, z])" takes a list of strings ("[x, y, z]") and joins them into one string, with " / " as the separator.

Changing existing list using matplotlib event in python

I am working on a program to identify significant frequencies in a power density spectrum. I have found a list of significant peaks in an automated way, but now I want to inspect them visually and add/delete peaks on a plot using fig.canvas.mpl_connect('key_press_event', ontype) (http://matplotlib.org/users/event_handling.html).
Since I want to add multiple peaks, I want to update the existing list, but I get an UnboundLocalError: local variable 'frequencyLIST' referenced before assignment.
def interactiveMethod(frequency, PDS, frequencyLIST, figureName):
    # frequency and PDS is the input list of data, frequencyLIST is my found list of
    # frequencies and figureName is the figure I previously made which I want to use the
    # event on.
    def ontype(event):
        if event.key == 'a':
            # Getting the event xdata
            x = event.xdata
            frequencyCut = frequency[np.where((frequency > x - 2) & (frequency < x + 2))]
            PDSCut = PDS[np.where((frequency > x - 2) & (frequency < x + 2))]
            # Find the maximum PDS, as this corresponds to a peak
            PDSMax = np.max(PDSCut)
            frequencyMax = frequencyCut[np.where(PDSCut == PDSMax)][0]
            # Updating the new list using the found frequency
            frequencyLIST = np.append(frequencyLIST, frequencyMax)
    figureName.canvas.mpl_connect('key_press_event', ontype)
I have no idea where I should put this frequencyLIST so I can update it.
python version: 2.7.3 32bit
matplotlib version: 1.3.0
numpy version: 1.7.1
ubuntu 13.1
I also have enthough canopy (not sure which version)
You are passing the frequencyLIST variable to the outer function (interactiveMethod) but not to the inner function (ontype). Because ontype assigns to the name, Python treats frequencyLIST as local to ontype, which is why the reference on the right-hand side of np.append fails. A quick fix is to declare the name in the inner function (note that global only helps if frequencyLIST also exists at module level; for a variable of the enclosing function, nonlocal is the Python 3 equivalent):
def ontype(event):
    global frequencyLIST
    # ... following lines unaltered ...
You can read more on the way Python scopes variables, and a quirk similar to yours, in this Stack Overflow question
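Because the question is on Python 2.7, where nonlocal does not exist and frequencyLIST is a variable of the enclosing interactiveMethod rather than a module-level global, another option is to avoid rebinding the name at all and mutate a container in place; a minimal sketch of that approach:
import numpy as np

def interactiveMethod(frequency, PDS, frequencyLIST, figureName):
    # Keep the peaks in a plain list so ontype can modify it in place;
    # appending mutates the object instead of rebinding the name, so no scope error.
    peaks = list(frequencyLIST)

    def ontype(event):
        if event.key == 'a':
            x = event.xdata
            mask = (frequency > x - 2) & (frequency < x + 2)
            frequencyCut = frequency[mask]
            PDSCut = PDS[mask]
            # np.argmax picks the index of the largest PDS value, i.e. the peak
            peaks.append(frequencyCut[np.argmax(PDSCut)])

    figureName.canvas.mpl_connect('key_press_event', ontype)
    return peaks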
