I'm trying to make a 12x12 binary-puzzle game in Python.
There's no GUI in the game, just the shell.
To get the values of the tiles, I made an import function which reads them from an Excel document using xlrd.
Now I'm making a function to print out the board, but something's not quite right:
Here is my code:
Main:
import Import as imp
import print as prt
data = imp.getInput("C:/Users/jaspe/PyCharmProjects/BinaryPuzzle/INPUT.xls")
print(data)
print(data[6])
prt.printField(data)
Import:
import xlrd
def getInput(loc):
    wb = xlrd.open_workbook(loc)
    sheet = wb.sheet_by_index(0)
    data = [[], [], [], [], [], [], [], [], [], [], [], []]
    i = 0
    while i < 12:
        data[i].append(sheet.cell_value(i, 0))
        data[i].append(sheet.cell_value(i, 1))
        data[i].append(sheet.cell_value(i, 2))
        data[i].append(sheet.cell_value(i, 3))
        data[i].append(sheet.cell_value(i, 4))
        data[i].append(sheet.cell_value(i, 5))
        data[i].append(sheet.cell_value(i, 6))
        data[i].append(sheet.cell_value(i, 7))
        data[i].append(sheet.cell_value(i, 8))
        data[i].append(sheet.cell_value(i, 9))
        data[i].append(sheet.cell_value(i, 10))
        data[i].append(sheet.cell_value(i, 11))
        j = 0
        while j < len(data[i]):
            if data[i][j] != "":
                data[i][j] = int(data[i][j])
            j = j + 1
        i = i + 1
    return data
Print:
def printField(d):
    i = 0
    d_ = []
    while i < len(d):
        d_.append([])
        j = 0
        while j < len(d[i]):
            d_[i].append(d[i][j])
            if d_[i][j] == "":
                d_[i][j] = "_"
            j = j + 1
        i = i + 1
    space = " "
    print(str(d_[0][0]) + space + str(d_[0][1]) + space + str(d_[0][2]) + space + str(d_[0][3]) + space + str(d_[0][4]) + space + str(d_[0][5]) + space + str(d_[0][6]) + space + str(d_[0][7]) + space + str(d_[0][8]) + space + str(d_[0][8]) + space + str(d_[0][9]) + space + str(d_[0][10]) + space + str(d_[0][11]))
    print(str(d_[1][0]) + space + str(d_[1][1]) + space + str(d_[1][2]) + space + str(d_[1][3]) + space + str(d_[1][4]) + space + str(d_[1][5]) + space + str(d_[1][6]) + space + str(d_[1][7]) + space + str(d_[1][8]) + space + str(d_[1][8]) + space + str(d_[1][9]) + space + str(d_[1][10]) + space + str(d_[1][11]))
    print(str(d_[2][0]) + space + str(d_[2][1]) + space + str(d_[2][2]) + space + str(d_[2][3]) + space + str(d_[2][4]) + space + str(d_[2][5]) + space + str(d_[2][6]) + space + str(d_[2][7]) + space + str(d_[2][8]) + space + str(d_[2][8]) + space + str(d_[2][9]) + space + str(d_[2][10]) + space + str(d_[2][11]))
    print(str(d_[3][0]) + space + str(d_[3][1]) + space + str(d_[3][2]) + space + str(d_[3][3]) + space + str(d_[3][4]) + space + str(d_[3][5]) + space + str(d_[3][6]) + space + str(d_[3][7]) + space + str(d_[3][8]) + space + str(d_[3][8]) + space + str(d_[3][9]) + space + str(d_[3][10]) + space + str(d_[3][11]))
    print(str(d_[4][0]) + space + str(d_[4][1]) + space + str(d_[4][2]) + space + str(d_[4][3]) + space + str(d_[4][4]) + space + str(d_[4][5]) + space + str(d_[4][6]) + space + str(d_[4][7]) + space + str(d_[4][8]) + space + str(d_[4][8]) + space + str(d_[4][9]) + space + str(d_[4][10]) + space + str(d_[4][11]))
    print(str(d_[5][0]) + space + str(d_[5][1]) + space + str(d_[5][2]) + space + str(d_[5][3]) + space + str(d_[5][4]) + space + str(d_[5][5]) + space + str(d_[5][6]) + space + str(d_[5][7]) + space + str(d_[5][8]) + space + str(d_[5][8]) + space + str(d_[5][9]) + space + str(d_[5][10]) + space + str(d_[5][11]))
    print(str(d_[6][0]) + space + str(d_[6][1]) + space + str(d_[6][2]) + space + str(d_[6][3]) + space + str(d_[6][4]) + space + str(d_[6][5]) + space + str(d_[6][6]) + space + str(d_[6][7]) + space + str(d_[6][8]) + space + str(d_[6][8]) + space + str(d_[6][9]) + space + str(d_[6][10]) + space + str(d_[6][11]))
    print(str(d_[7][0]) + space + str(d_[7][1]) + space + str(d_[7][2]) + space + str(d_[7][3]) + space + str(d_[7][4]) + space + str(d_[7][5]) + space + str(d_[7][6]) + space + str(d_[7][7]) + space + str(d_[7][8]) + space + str(d_[7][8]) + space + str(d_[7][9]) + space + str(d_[7][10]) + space + str(d_[7][11]))
    print(str(d_[8][0]) + space + str(d_[8][1]) + space + str(d_[8][2]) + space + str(d_[8][3]) + space + str(d_[8][4]) + space + str(d_[8][5]) + space + str(d_[8][6]) + space + str(d_[8][7]) + space + str(d_[8][8]) + space + str(d_[8][8]) + space + str(d_[8][9]) + space + str(d_[8][10]) + space + str(d_[8][11]))
    print(str(d_[9][0]) + space + str(d_[9][1]) + space + str(d_[9][2]) + space + str(d_[9][3]) + space + str(d_[9][4]) + space + str(d_[9][5]) + space + str(d_[9][6]) + space + str(d_[9][7]) + space + str(d_[9][8]) + space + str(d_[9][8]) + space + str(d_[9][9]) + space + str(d_[9][10]) + space + str(d_[9][11]))
    print(str(d_[10][0]) + space + str(d_[10][1]) + space + str(d_[10][2]) + space + str(d_[10][3]) + space + str(d_[10][4]) + space + str(d_[10][5]) + space + str(d_[10][6]) + space + str(d_[10][7]) + space + str(d_[10][8]) + space + str(d_[10][8]) + space + str(d_[10][9]) + space + str(d_[10][10]) + space + str(d_[10][11]))
    print(str(d_[11][0]) + space + str(d_[11][1]) + space + str(d_[11][2]) + space + str(d_[11][3]) + space + str(d_[11][4]) + space + str(d_[11][5]) + space + str(d_[11][6]) + space + str(d_[11][7]) + space + str(d_[11][8]) + space + str(d_[11][8]) + space + str(d_[11][9]) + space + str(d_[11][10]) + space + str(d_[11][11]))
I know it's some inefficient code, but it's not stupid if it works (except it doesn't work, so it's stupid).
Thanks for helping!
EDIT:
Thanks to @blorgon and @T1Berger, the issue is fixed, thanks guys!
You have a copy-and-paste error in every row: str(d_[x][8]) appears twice.
As @blorgon pointed out, you should refactor your code, for example with a loop over the rows of your nested list. A simple loop with str.join is easier to write and to understand (note that the cells have to be converted to str first, since some of them are ints):
for row in data:
    print(' '.join(str(cell) for cell in row))
You have printed str(d_[x][8]) twice for each row.
There are many better options here, but just pointing out the mistake in your method.
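One of those better options: a printField rewritten with a loop and str.join. This is a sketch of the refactor suggested above, keeping the original's "_" substitution for empty cells:

```python
def printField(d):
    # For each row, replace empty cells with "_", convert everything
    # to str, and print the 12 values joined by single spaces.
    for row in d:
        print(" ".join("_" if cell == "" else str(cell) for cell in row))
```

A single call printField(data) then replaces the twelve hand-written print statements, and the duplicated-index bug cannot recur.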
Using the Gensim package (both LDA and Mallet), I noticed that when I create a model with more than 20 topics and use the print_topics function, it prints a maximum of 20 topics (note: not the first 20 topics, but an arbitrary 20), and they are out of order.
So my question is: how do I get all of the topics to print? I am unsure whether this is a bug or an issue on my end. I have looked back at my library of LDA models (over 5000, from different data sources) and this happens in every one with more than 20 topics.
Below is sample code with output. In the output you will see that the topics are not ordered (they should be) and that topics are missing, such as topic 3.
lda_model = gensim.models.ldamodel.LdaModel(corpus=jr_dict_corpus,
                                            id2word=jr_dict,
                                            num_topics=25,
                                            random_state=100,
                                            update_every=1,
                                            chunksize=100,
                                            passes=10,
                                            alpha='auto',
                                            per_word_topics=True)
pprint(lda_model.print_topics())
#note, if the model contained 20 topics, the topics would be listed in order 0-19
[(21,
'0.001*"commitment" + 0.001*"study" + 0.001*"evolve" + 0.001*"outlook" + '
'0.001*"value" + 0.001*"people" + 0.001*"individual" + 0.001*"client" + '
'0.001*"structure" + 0.001*"proposal"'),
(18,
'0.001*"self" + 0.001*"insurance" + 0.001*"need" + 0.001*"trend" + '
'0.001*"statistic" + 0.001*"propose" + 0.001*"analysis" + 0.001*"perform" + '
'0.001*"impact" + 0.001*"awareness"'),
(2,
'0.001*"link" + 0.001*"task" + 0.001*"collegiate" + 0.001*"universitie" + '
'0.001*"banking" + 0.001*"origination" + 0.001*"security" + 0.001*"standard" '
'+ 0.001*"qualifications_bachelor" + 0.001*"greenfield"'),
(11,
'0.024*"collegiate" + 0.016*"interpersonal" + 0.016*"prepare" + '
'0.016*"invite" + 0.016*"aspect" + 0.016*"college" + 0.016*"statistic" + '
'0.016*"continent" + 0.016*"structure" + 0.016*"project"'),
(10,
'0.049*"enjoy" + 0.049*"ambiguity" + 0.017*"accordance" + 0.017*"liberalize" '
'+ 0.017*"developing" + 0.017*"application" + 0.017*"vacancie" + '
'0.017*"service" + 0.017*"initiative" + 0.017*"discontinuing"'),
(20,
'0.028*"negotiation" + 0.028*"desk" + 0.018*"enhance" + 0.018*"engage" + '
'0.018*"discussion" + 0.018*"ability" + 0.018*"depth" + 0.018*"derive" + '
'0.018*"enjoy" + 0.018*"balance"'),
(12,
'0.036*"individual" + 0.024*"validate" + 0.018*"greenfield" + '
'0.018*"capability" + 0.018*"coordinate" + 0.018*"create" + '
'0.018*"programming" + 0.018*"safety" + 0.010*"evaluation" + '
'0.002*"reliability"'),
(1,
'0.028*"negotiation" + 0.021*"responsibility" + 0.014*"master" + '
'0.014*"mind" + 0.014*"experience" + 0.014*"worker" + 0.014*"ability" + '
'0.007*"summary" + 0.007*"proposal" + 0.007*"alert"'),
(23,
'0.043*"banking" + 0.026*"origination" + 0.026*"round" + 0.026*"credibility" '
'+ 0.026*"entity" + 0.018*"standard" + 0.017*"range" + 0.017*"pension" + '
'0.017*"adapt" + 0.017*"information"'),
(13,
'0.034*"priority" + 0.034*"reconciliation" + 0.034*"purchaser" + '
'0.023*"reporting" + 0.023*"offer" + 0.023*"investor" + 0.023*"share" + '
'0.023*"region" + 0.023*"service" + 0.023*"manipulate"'),
(22,
'0.017*"analyst" + 0.017*"modelling" + 0.016*"producer" + 0.016*"return" + '
'0.016*"self" + 0.009*"scope" + 0.008*"mind" + 0.008*"need" + 0.008*"detail" '
'+ 0.008*"statistic"'),
(9,
'0.021*"decision" + 0.014*"invite" + 0.014*"balance" + 0.014*"commercialize" '
'+ 0.014*"transform" + 0.014*"manage" + 0.014*"optionality" + '
'0.014*"problem_solving" + 0.014*"fuel" + 0.014*"stay"'),
(7,
'0.032*"commitment" + 0.032*"study" + 0.016*"impact" + 0.016*"outlook" + '
'0.011*"operation" + 0.011*"expand" + 0.011*"exchange" + 0.011*"management" '
'+ 0.011*"conde" + 0.011*"evolve"'),
(15,
'0.032*"agility" + 0.019*"feasibility" + 0.019*"self" + 0.014*"deploy" + '
'0.014*"define" + 0.013*"investment" + 0.013*"option" + 0.013*"control" + '
'0.013*"action" + 0.013*"incubation"'),
(5,
'0.020*"desk" + 0.018*"agility" + 0.016*"vender" + 0.016*"coordinate" + '
'0.016*"committee" + 0.012*"acquisition" + 0.012*"target" + '
'0.012*"counterparty" + 0.012*"approval" + 0.012*"trend"'),
(17,
'0.022*"option" + 0.017*"working" + 0.017*"niche" + 0.011*"business" + '
'0.011*"constrain" + 0.011*"meeting" + 0.011*"correspond" + 0.011*"exposure" '
'+ 0.011*"element" + 0.011*"face"'),
(0,
'0.025*"expertise" + 0.025*"banking" + 0.021*"universitie" + '
'0.017*"spreadsheet" + 0.013*"negotiation" + 0.013*"shipment" + '
'0.013*"arise" + 0.013*"billing" + 0.013*"assistance" + 0.013*"sector"'),
(4,
'0.024*"provide" + 0.017*"consider" + 0.017*"allow" + 0.015*"outlook" + '
'0.015*"value" + 0.015*"contract" + 0.012*"study" + 0.012*"technology" + '
'0.012*"scenario" + 0.012*"indicator"'),
(6,
'0.058*"impulse" + 0.027*"shall" + 0.027*"shape" + 0.024*"marketer" + '
'0.017*"availability" + 0.014*"determine" + 0.014*"load" + '
'0.014*"constantly_change" + 0.014*"instrument" + 0.014*"interface"'),
(19,
'0.042*"task" + 0.038*"tariff" + 0.038*"recommend" + 0.024*"example" + '
'0.023*"future" + 0.021*"people" + 0.021*"math" + 0.021*"capacity" + '
'0.021*"spirit" + 0.020*"price"')]
Same model as above, but using 20 topics. As you can see, the output is in order by topic # and it contains all the topics.
lda_model = gensim.models.ldamodel.LdaModel(corpus=jr_dict_corpus,
                                            id2word=jr_dict,
                                            num_topics=20,
                                            random_state=100,
                                            update_every=1,
                                            chunksize=100,
                                            passes=10,
                                            alpha='auto',
                                            per_word_topics=True)
pprint(lda_model.print_topics())
[(0,
'0.031*"enjoy" + 0.031*"ambiguity" + 0.028*"accordance" + 0.016*"statistic" '
'+ 0.016*"initiative" + 0.016*"service" + 0.016*"liberalize" + '
'0.016*"application" + 0.011*"community" + 0.011*"identifie"'),
(1,
'0.016*"transformation" + 0.016*"negotiation" + 0.016*"community" + '
'0.016*"clock" + 0.011*"marketer" + 0.011*"desk" + 0.011*"mandate" + '
'0.011*"closing" + 0.011*"initiative" + 0.011*"experience"'),
(2,
'0.026*"priority" + 0.026*"reconciliation" + 0.026*"purchaser" + '
'0.020*"safety" + 0.020*"region" + 0.020*"query" + 0.020*"share" + '
'0.020*"manipulate" + 0.020*"ibex" + 0.020*"investor"'),
(3,
'0.022*"improve" + 0.021*"committee" + 0.021*"affect" + 0.012*"target" + '
'0.012*"acquisition" + 0.011*"basis" + 0.011*"profitability" + '
'0.011*"economic" + 0.011*"natural" + 0.011*"profit"'),
(4,
'0.024*"provide" + 0.019*"value" + 0.017*"consider" + 0.017*"allow" + '
'0.015*"scenario" + 0.015*"outlook" + 0.015*"contract" + 0.014*"forecast" + '
'0.014*"decision" + 0.012*"indicator"'),
(5,
'0.037*"desk" + 0.030*"coordinate" + 0.030*"agility" + 0.030*"vender" + '
'0.023*"counterparty" + 0.023*"immature_emerge" + 0.023*"metric" + '
'0.022*"approval" + 0.015*"maximization" + 0.015*"undergraduate"'),
(6,
'0.053*"impulse" + 0.025*"shall" + 0.025*"shape" + 0.018*"availability" + '
'0.018*"marketer" + 0.012*"determine" + 0.012*"language" + '
'0.012*"monitoring" + 0.012*"integration" + 0.012*"month"'),
(7,
'0.026*"commitment" + 0.026*"study" + 0.013*"impact" + 0.013*"outlook" + '
'0.009*"operation" + 0.009*"management" + 0.009*"expand" + 0.009*"exchange" '
'+ 0.009*"conde" + 0.009*"balance"'),
(8,
'0.057*"insurance" + 0.029*"propose" + 0.028*"rule" + 0.026*"self" + '
'0.023*"product" + 0.023*"asset" + 0.023*"pricing" + 0.023*"amount" + '
'0.023*"result" + 0.020*"liquidity"'),
(9,
'0.012*"universitie" + 0.012*"need" + 0.012*"statistic" + 0.012*"trend" + '
'0.008*"invite" + 0.008*"commercialize" + 0.008*"transform" + 0.008*"manage" '
'+ 0.008*"problem_solving" + 0.008*"optionality"'),
(10,
'0.024*"background" + 0.024*"curve" + 0.020*"allow" + 0.019*"collect" + '
'0.019*"basis" + 0.017*"accordance" + 0.013*"improve" + 0.013*"datum" + '
'0.013*"component" + 0.013*"reliability"'),
(11,
'0.054*"task" + 0.049*"tariff" + 0.049*"recommend" + 0.031*"future" + '
'0.027*"spirit" + 0.027*"capacity" + 0.027*"math" + 0.022*"ensure" + '
'0.022*"profit" + 0.022*"variable_margin"'),
(12,
'0.001*"impulse" + 0.001*"availability" + 0.001*"reliability" + '
'0.001*"shall" + 0.001*"component" + 0.001*"agent" + 0.001*"marketer" + '
'0.001*"shape" + 0.001*"assisting" + 0.001*"supply"'),
(13,
'0.021*"region" + 0.016*"greenfield" + 0.016*"collegiate" + 0.011*"transfer" '
'+ 0.011*"remuneration" + 0.011*"organization" + 0.011*"structure" + '
'0.011*"continent" + 0.011*"project" + 0.011*"prepare"'),
(14,
'0.033*"originator" + 0.025*"vender" + 0.025*"expertise" + 0.025*"banking" + '
'0.019*"evolve" + 0.017*"management" + 0.017*"market" + 0.017*"site" + '
'0.012*"component" + 0.012*"discontinuing"'),
(15,
'0.027*"agility" + 0.022*"mind" + 0.022*"negotiation" + 0.011*"deploy" + '
'0.011*"define" + 0.011*"ecosystem" + 0.011*"control" + 0.011*"lead" + '
'0.011*"industry" + 0.011*"option"'),
(16,
'0.001*"region" + 0.001*"master" + 0.001*"orginiation" + 0.001*"greenfield" '
'+ 0.001*"agent" + 0.001*"identifie" + 0.001*"remuneration" + 0.001*"mark" + '
'0.001*"reviewing" + 0.001*"closing"'),
(17,
'0.030*"banking" + 0.018*"option" + 0.018*"round" + 0.018*"credibility" + '
'0.018*"origination" + 0.018*"entity" + 0.016*"working" + 0.015*"niche" + '
'0.015*"standard" + 0.012*"coordinate"'),
(18,
'0.027*"negotiation" + 0.018*"reporting" + 0.018*"perform" + 0.018*"world" + '
'0.015*"offer" + 0.015*"manipulate" + 0.011*"query" + 0.010*"control" + '
'0.010*"working" + 0.009*"self"'),
(19,
'0.047*"example" + 0.039*"people" + 0.039*"price" + 0.039*"excel" + '
'0.039*"excellent" + 0.038*"base" + 0.031*"office" + 0.031*"optimizing" + '
'0.031*"participate" + 0.031*"package"')]
The default number of topics for print_topics is 20, so you must pass the num_topics argument to include topics beyond the first 20:
print(lda_model.print_topics(num_topics=25, num_words=10))
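print_topics returns a list of (topic_id, topic_string) pairs, so if you also want them in topic-id order you can sort the returned list yourself. A minimal sketch, using made-up stand-in data in place of the real lda_model.print_topics(num_topics=25) result:

```python
# Hypothetical stand-in for the list returned by
# lda_model.print_topics(num_topics=25): (topic_id, topic_string) pairs
# in arbitrary order.
topics = [
    (21, '0.001*"commitment" + 0.001*"study"'),
    (2,  '0.001*"link" + 0.001*"task"'),
    (11, '0.024*"collegiate" + 0.016*"interpersonal"'),
]

# Sort by topic id so the printout runs 0..N instead of arbitrary order.
ordered = sorted(topics, key=lambda t: t[0])
for topic_id, words in ordered:
    print(topic_id, words)
```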
I am trying to use the numpy IRR function inside a PuLP maximisation, but I am getting the following error:
TypeError: float() argument must be a string or a number, not 'LpAffineExpression'
TypeError Traceback (most recent call last)
11 name[6]*rate[6]*ratesList2[2] + name[7]*rate[7]*ratesList2[2] + name[8]*rate[8]*ratesList2[2] + name[9]*rate[9]*ratesList2[2] + name[10]*rate[10]*ratesList2[2] + name[11]*rate[11]*ratesList2[2] +
12 name[12]*rate[12]*ratesList2[2] + name[13]*rate[13]*ratesList2[2] + name[14]*rate[14]*ratesList2[2] + name[15]*rate[15]*ratesList2[2] + name[16]*rate[16]*ratesList2[2] + name[17]*rate[17]*ratesList2[2] +
---> 13 name[18]*rate[18]*ratesList2[2])])
14
15
problem += np.irr([(-19660528.00),
(name[0]*rate[0] + name[1]*rate[1] + name[2]*rate[2] + name[3]*rate[3] + name[4]*rate[4] + name[5]*rate[5] +
name[6]*rate[6] + name[7]*rate[7] + name[8]*rate[8] + name[9]*rate[9] + name[10]*rate[10] + name[11]*rate[11] +
name[12]*rate[12] + name[13]*rate[13] + name[14]*rate[14] + name[15]*rate[15] + name[16]*rate[16] + name[17]*rate[17] +
name[18]*rate[18]),
(name[0]*rate[0]*ratesList1[1] + name[1]*rate[1]*ratesList2[1] + name[2]*rate[2]*ratesList2[1] + name[3]*rate[3]*ratesList2[1] + name[4]*rate[4]*ratesList2[1] + name[5]*rate[5]*ratesList2[1] +
name[6]*rate[6]*ratesList2[1] + name[7]*rate[7]*ratesList2[1] + name[8]*rate[8]*ratesList2[1] + name[9]*rate[9]*ratesList2[1] + name[10]*rate[10]*ratesList2[1] + name[11]*rate[11]*ratesList2[1] +
name[12]*rate[12]*ratesList2[1] + name[13]*rate[13]*ratesList2[1] + name[14]*rate[14]*ratesList2[1] + name[15]*rate[15]*ratesList2[1] + name[16]*rate[16]*ratesList2[1] + name[17]*rate[17]*ratesList2[1] +
name[18]*rate[18]*ratesList2[1]),
(name[0]*rate[0]*ratesList1[2] + name[1]*rate[1]*ratesList2[2] + name[2]*rate[2]*ratesList2[2] + name[3]*rate[3]*ratesList2[2] + name[4]*rate[4]*ratesList2[2] + name[5]*rate[5]*ratesList2[2] +
name[6]*rate[6]*ratesList2[2] + name[7]*rate[7]*ratesList2[2] + name[8]*rate[8]*ratesList2[2] + name[9]*rate[9]*ratesList2[2] + name[10]*rate[10]*ratesList2[2] + name[11]*rate[11]*ratesList2[2] +
name[12]*rate[12]*ratesList2[2] + name[13]*rate[13]*ratesList2[2] + name[14]*rate[14]*ratesList2[2] + name[15]*rate[15]*ratesList2[2] + name[16]*rate[16]*ratesList2[2] + name[17]*rate[17]*ratesList2[2] +
name[18]*rate[18]*ratesList2[2])])
problem += (name[0] + name[1] + name[2] + name[3] + name[4] + name[5] + name[6] + name[7] + name[8] + name[9] + name[10] +
name[11] + name[12] + name[13] + name[14] + name[15] + name[16] + name[17] + name[18]) <= sum(marketMix['GLA']), "1st constraint"
The numpy function irr() takes a list of numbers as its argument. You are instead passing a list of linear expressions containing variables that are still subject to optimization, and irr() is not prepared to handle that: it assumes all of its arguments can be coerced to float. Instead of calling irr(), you will have to state the respective expression explicitly in a form the solver can handle.
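One common workaround (not from the original post) is to fix a discount rate and maximize NPV instead of IRR: the discount weights 1/(1+r)^t are then plain floats computed up front, so multiplying each period's PuLP cash-flow expression by its weight keeps the objective linear. A sketch with an illustrative rate and horizon:

```python
# Precompute NPV discount weights for a fixed, assumed rate r.
# Each period's cash flow (a linear PuLP expression) would be multiplied
# by a constant float, so the resulting objective stays linear.
r = 0.10          # illustrative discount rate
horizon = 4       # illustrative number of periods
weights = [1 / (1 + r) ** t for t in range(horizon)]

# In the PuLP model this would look like (hypothetical names):
# problem += sum(w * cashflow_expr[t] for t, w in enumerate(weights))
```

Optimizing the IRR itself would make the problem nonlinear, which PuLP (an LP/MILP modeler) cannot express.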
I have to simplify a transfer function using sympy. I am used to Maxima, and I am looking for advice on getting similar performance in a Python environment.
Using the following Maxima code:
A:-Avol0/(1+s/(2*pi*fp));
Zph:Rsh/(1+Rsh*Cj*s);
Zf:Rf/(1+Rf*Cf*s);
alpha:Zf*Zph/(Zf+Zph);
beta:Zph/(Zf+Zph);
BetaA:ratsimp(beta*A,s);
H:ratsimp(alpha*A/(1-BetaA),s);
I get the following:
(H)-> -(2*Avol0*Rf*Rsh*fp*pi)/((Cj+Cf)*Rf*Rsh*s^2+((2*Cj+(2*Avol0+2)*Cf)*Rf*Rsh*fp*pi+Rsh+Rf)*s+((2*Avol0+2)*Rsh+2*Rf)*fp*pi)
The same operations in sympy do not lead to such a nice result:
import numpy as np
import sympy as sy
"""
Formulas
"""
s, Rf, Cf, Rsh, Cj, Cd, Ccm, GBP, Avol0, fp, w = \
sy.symbols("s Rf Cf Rsh Cj Cd Ccm GBP Avol0 fp w")
A = -Avol0/(1+s/(2*np.pi*fp))
Zph = Rsh/(1+Rsh*Cj*s)
Zf = Rf/(1+Rf*Cf*s)
alpha = Zf*Zph/(Zf+Zph)
beta = Zph/(Zf+Zph)
Gloop = sy.ratsimp(beta*A)
H = alpha*A/(1-Gloop)
sy.ratsimp(H)
returns an unreadable result:
-1.0*(1.0*Avol0*Cf**2*Cj*Rf**3*Rsh**3*fp**2*s**3 + 0.159154943091895*Avol0*Cf**2*Cj*Rf**3*Rsh**3*fp*s**4 + 1.0*Avol0*Cf**2*Rf**3*Rsh**2*fp**2*s**2 + 0.159154943091895*Avol0*Cf**2*Rf**3*Rsh**2*fp*s**3 + 1.0*Avol0*Cf*Cj**2*Rf**3*Rsh**3*fp**2*s**3 + 0.159154943091895*Avol0*Cf*Cj**2*Rf**3*Rsh**3*fp*s**4 + 2.0*Avol0*Cf*Cj*Rf**3*Rsh**2*fp**2*s**2 + 0.318309886183791*Avol0*Cf*Cj*Rf**3*Rsh**2*fp*s**3 + 2.0*Avol0*Cf*Cj*Rf**2*Rsh**3*fp**2*s**2 + 0.318309886183791*Avol0*Cf*Cj*Rf**2*Rsh**3*fp*s**3 + 1.0*Avol0*Cf*Rf**3*Rsh*fp**2*s + 0.159154943091895*Avol0*Cf*Rf**3*Rsh*fp*s**2 + 2.0*Avol0*Cf*Rf**2*Rsh**2*fp**2*s + 0.318309886183791*Avol0*Cf*Rf**2*Rsh**2*fp*s**2 + 1.0*Avol0*Cj**2*Rf**2*Rsh**3*fp**2*s**2 + 0.159154943091895*Avol0*Cj**2*Rf**2*Rsh**3*fp*s**3 + 2.0*Avol0*Cj*Rf**2*Rsh**2*fp**2*s + 0.318309886183791*Avol0*Cj*Rf**2*Rsh**2*fp*s**2 + 1.0*Avol0*Cj*Rf*Rsh**3*fp**2*s + 0.159154943091895*Avol0*Cj*Rf*Rsh**3*fp*s**2 + 1.0*Avol0*Rf**2*Rsh*fp**2 + 0.159154943091895*Avol0*Rf**2*Rsh*fp*s + 1.0*Avol0*Rf*Rsh**2*fp**2 + 0.159154943091895*Avol0*Rf*Rsh**2*fp*s)/(1.0*Avol0*Cf**3*Cj*Rf**3*Rsh**3*fp**2*s**4 + 0.159154943091895*Avol0*Cf**3*Cj*Rf**3*Rsh**3*fp*s**5 + 1.0*Avol0*Cf**3*Rf**3*Rsh**2*fp**2*s**3 + 0.159154943091895*Avol0*Cf**3*Rf**3*Rsh**2*fp*s**4 + 1.0*Avol0*Cf**2*Cj**2*Rf**3*Rsh**3*fp**2*s**4 + 0.159154943091895*Avol0*Cf**2*Cj**2*Rf**3*Rsh**3*fp*s**5 + 2.0*Avol0*Cf**2*Cj*Rf**3*Rsh**2*fp**2*s**3 + 0.318309886183791*Avol0*Cf**2*Cj*Rf**3*Rsh**2*fp*s**4 + 3.0*Avol0*Cf**2*Cj*Rf**2*Rsh**3*fp**2*s**3 + 0.477464829275686*Avol0*Cf**2*Cj*Rf**2*Rsh**3*fp*s**4 + 1.0*Avol0*Cf**2*Rf**3*Rsh*fp**2*s**2 + 0.159154943091895*Avol0*Cf**2*Rf**3*Rsh*fp*s**3 + 3.0*Avol0*Cf**2*Rf**2*Rsh**2*fp**2*s**2 + 0.477464829275686*Avol0*Cf**2*Rf**2*Rsh**2*fp*s**3 + 2.0*Avol0*Cf*Cj**2*Rf**2*Rsh**3*fp**2*s**3 + 0.318309886183791*Avol0*Cf*Cj**2*Rf**2*Rsh**3*fp*s**4 + 4.0*Avol0*Cf*Cj*Rf**2*Rsh**2*fp**2*s**2 + 0.636619772367581*Avol0*Cf*Cj*Rf**2*Rsh**2*fp*s**3 + 3.0*Avol0*Cf*Cj*Rf*Rsh**3*fp**2*s**2 + 
0.477464829275686*Avol0*Cf*Cj*Rf*Rsh**3*fp*s**3 + 2.0*Avol0*Cf*Rf**2*Rsh*fp**2*s + 0.318309886183791*Avol0*Cf*Rf**2*Rsh*fp*s**2 + 3.0*Avol0*Cf*Rf*Rsh**2*fp**2*s + 0.477464829275686*Avol0*Cf*Rf*Rsh**2*fp*s**2 + 1.0*Avol0*Cj**2*Rf*Rsh**3*fp**2*s**2 + 0.159154943091895*Avol0*Cj**2*Rf*Rsh**3*fp*s**3 + 2.0*Avol0*Cj*Rf*Rsh**2*fp**2*s + 0.318309886183791*Avol0*Cj*Rf*Rsh**2*fp*s**2 + 1.0*Avol0*Cj*Rsh**3*fp**2*s + 0.159154943091895*Avol0*Cj*Rsh**3*fp*s**2 + 1.0*Avol0*Rf*Rsh*fp**2 + 0.159154943091895*Avol0*Rf*Rsh*fp*s + 1.0*Avol0*Rsh**2*fp**2 + 0.159154943091895*Avol0*Rsh**2*fp*s + 1.0*Cf**3*Cj*Rf**3*Rsh**3*fp**2*s**4 + 0.318309886183791*Cf**3*Cj*Rf**3*Rsh**3*fp*s**5 + 0.0253302959105844*Cf**3*Cj*Rf**3*Rsh**3*s**6 + 1.0*Cf**3*Rf**3*Rsh**2*fp**2*s**3 + 0.318309886183791*Cf**3*Rf**3*Rsh**2*fp*s**4 + 0.0253302959105844*Cf**3*Rf**3*Rsh**2*s**5 + 2.0*Cf**2*Cj**2*Rf**3*Rsh**3*fp**2*s**4 + 0.636619772367581*Cf**2*Cj**2*Rf**3*Rsh**3*fp*s**5 + 0.0506605918211689*Cf**2*Cj**2*Rf**3*Rsh**3*s**6 + 4.0*Cf**2*Cj*Rf**3*Rsh**2*fp**2*s**3 + 1.27323954473516*Cf**2*Cj*Rf**3*Rsh**2*fp*s**4 + 0.101321183642338*Cf**2*Cj*Rf**3*Rsh**2*s**5 + 3.0*Cf**2*Cj*Rf**2*Rsh**3*fp**2*s**3 + 0.954929658551372*Cf**2*Cj*Rf**2*Rsh**3*fp*s**4 + 0.0759908877317533*Cf**2*Cj*Rf**2*Rsh**3*s**5 + 2.0*Cf**2*Rf**3*Rsh*fp**2*s**2 + 0.636619772367581*Cf**2*Rf**3*Rsh*fp*s**3 + 0.0506605918211689*Cf**2*Rf**3*Rsh*s**4 + 3.0*Cf**2*Rf**2*Rsh**2*fp**2*s**2 + 0.954929658551372*Cf**2*Rf**2*Rsh**2*fp*s**3 + 0.0759908877317533*Cf**2*Rf**2*Rsh**2*s**4 + 1.0*Cf*Cj**3*Rf**3*Rsh**3*fp**2*s**4 + 0.318309886183791*Cf*Cj**3*Rf**3*Rsh**3*fp*s**5 + 0.0253302959105844*Cf*Cj**3*Rf**3*Rsh**3*s**6 + 3.0*Cf*Cj**2*Rf**3*Rsh**2*fp**2*s**3 + 0.954929658551372*Cf*Cj**2*Rf**3*Rsh**2*fp*s**4 + 0.0759908877317533*Cf*Cj**2*Rf**3*Rsh**2*s**5 + 4.0*Cf*Cj**2*Rf**2*Rsh**3*fp**2*s**3 + 1.27323954473516*Cf*Cj**2*Rf**2*Rsh**3*fp*s**4 + 0.101321183642338*Cf*Cj**2*Rf**2*Rsh**3*s**5 + 3.0*Cf*Cj*Rf**3*Rsh*fp**2*s**2 + 0.954929658551372*Cf*Cj*Rf**3*Rsh*fp*s**3 + 
0.0759908877317533*Cf*Cj*Rf**3*Rsh*s**4 + 8.0*Cf*Cj*Rf**2*Rsh**2*fp**2*s**2 + 2.54647908947033*Cf*Cj*Rf**2*Rsh**2*fp*s**3 + 0.202642367284676*Cf*Cj*Rf**2*Rsh**2*s**4 + 3.0*Cf*Cj*Rf*Rsh**3*fp**2*s**2 + 0.954929658551372*Cf*Cj*Rf*Rsh**3*fp*s**3 + 0.0759908877317533*Cf*Cj*Rf*Rsh**3*s**4 + 1.0*Cf*Rf**3*fp**2*s + 0.318309886183791*Cf*Rf**3*fp*s**2 + 0.0253302959105844*Cf*Rf**3*s**3 + 4.0*Cf*Rf**2*Rsh*fp**2*s + 1.27323954473516*Cf*Rf**2*Rsh*fp*s**2 + 0.101321183642338*Cf*Rf**2*Rsh*s**3 + 3.0*Cf*Rf*Rsh**2*fp**2*s + 0.954929658551372*Cf*Rf*Rsh**2*fp*s**2 + 0.0759908877317533*Cf*Rf*Rsh**2*s**3 + 1.0*Cj**3*Rf**2*Rsh**3*fp**2*s**3 + 0.318309886183791*Cj**3*Rf**2*Rsh**3*fp*s**4 + 0.0253302959105844*Cj**3*Rf**2*Rsh**3*s**5 + 3.0*Cj**2*Rf**2*Rsh**2*fp**2*s**2 + 0.954929658551372*Cj**2*Rf**2*Rsh**2*fp*s**3 + 0.0759908877317533*Cj**2*Rf**2*Rsh**2*s**4 + 2.0*Cj**2*Rf*Rsh**3*fp**2*s**2 + 0.636619772367581*Cj**2*Rf*Rsh**3*fp*s**3 + 0.0506605918211689*Cj**2*Rf*Rsh**3*s**4 + 3.0*Cj*Rf**2*Rsh*fp**2*s + 0.954929658551372*Cj*Rf**2*Rsh*fp*s**2 + 0.0759908877317533*Cj*Rf**2*Rsh*s**3 + 4.0*Cj*Rf*Rsh**2*fp**2*s + 1.27323954473516*Cj*Rf*Rsh**2*fp*s**2 + 0.101321183642338*Cj*Rf*Rsh**2*s**3 + 1.0*Cj*Rsh**3*fp**2*s + 0.318309886183791*Cj*Rsh**3*fp*s**2 + 0.0253302959105844*Cj*Rsh**3*s**3 + 1.0*Rf**2*fp**2 + 0.318309886183791*Rf**2*fp*s + 0.0253302959105844*Rf**2*s**2 + 2.0*Rf*Rsh*fp**2 + 0.636619772367581*Rf*Rsh*fp*s + 0.0506605918211689*Rf*Rsh*s**2 + 1.0*Rsh**2*fp**2 + 0.318309886183791*Rsh**2*fp*s + 0.0253302959105844*Rsh**2*s**2)
There is a difference between the Maxima code and the Python one: the constant pi is kept symbolic in the first case and approximated by a floating-point value in the second. Replacing np.pi with sy.pi solves the problem; still, it is odd how sympy simplifies the expression when pi is numeric.
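The fix described above, shown on just the first expression of the original script: using the symbolic constant sy.pi instead of the float np.pi lets ratsimp cancel terms exactly instead of carrying 0.159154943091895-style coefficients through the result.

```python
import sympy as sy

s, fp, Avol0 = sy.symbols("s fp Avol0")

# sy.pi is an exact symbolic constant, so ratsimp keeps the
# coefficients exact instead of converting them to floats.
A = -Avol0 / (1 + s / (2 * sy.pi * fp))
A_simplified = sy.ratsimp(A)
```

The same one-character change (np.pi -> sy.pi) applies to every formula in the script; the np import then becomes unnecessary.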
I'm trying to write a script in python to make my job easier.
I need to use os.system to call some functions to an external software.
Is there a way to insert a for loop inside this string, without having to write obs_dir[n] every time?
import os
obs_dir = ['18185','18186','18187','19926','19987','19994','19995','20045','20046','20081']
xid = ['src21']
i=0
os.system("pset combine_spectra src_arfs=/"
+ obs_dir[0] + "/" + xid[i] + "_" + obs_dir[0] + "_spectrum.arf,"
+ "/" + obs_dir[1] + "/" + xid[i] + "_" + obs_dir[1] + "_spectrum.arf,"
+ "/" + obs_dir[2] + "/" + xid[i] + "_" + obs_dir[2] + "_spectrum.arf,"
+ "/" + obs_dir[3] + "/" + xid[i] + "_" + obs_dir[3] + "_spectrum.arf,"
+ "/" + obs_dir[4] + "/" + xid[i] + "_" + obs_dir[4] + "_spectrum.arf,"
+ "/" + obs_dir[5] + "/" + xid[i] + "_" + obs_dir[5] + "_spectrum.arf,"
+ "/" + obs_dir[6] + "/" + xid[i] + "_" + obs_dir[6] + "_spectrum.arf,"
+ "/" + obs_dir[7] + "/" + xid[i] + "_" + obs_dir[7] + "_spectrum.arf,"
+ "/" + obs_dir[8] + "/" + xid[i] + "_" + obs_dir[8] + "_spectrum.arf,"
+ "/" + obs_dir[9] + "/" + xid[i] + "_" + obs_dir[9] + "_spectrum.arf")
You can create the required command by first iterating over the list (obs_dir) and building up the string.
Ex:
import os
obs_dir = ['18185','18186','18187','19926','19987','19994','19995','20045','20046','20081']
xid = ['src21']
s = "pset combine_spectra src_arfs="
for i in obs_dir:
    s += "/{0}/{1}_{0}_spectrum.arf,".format(i, xid[0])
s = s.rstrip(',')
print(s)
#os.system(s)
I think this might be what you want:
import os
obs_dir = ['18185','18186','18187','19926','19987','19994','19995','20045','20046','20081']
xid = ['src21']
str_cmd = "pset combine_spectra src_arfs="
separator = ""
for d in obs_dir:
    str_cmd += separator + "/" + d + "/" + xid[0] + "_" + d + "_spectrum.arf"
    separator = ","
os.system(str_cmd)
You have xid[i], but i is never defined, so using xid[0],
"/{}/{}_{}_spectrum.arf".format(obs_dir[1],xid[0],obs_dir[1])
gives
'/18186/src21_18186_spectrum.arf'
So, format helps.
Also, join will help join these into a comma separated string:
",".join(['a', 'b'])
gives
'a,b'
Joining this together you get
s = ",".join(["/{}/{}_{}_spectrum.arf".format(o,xid[0],o) for o in obs_dir])
giving the parameter(s) you want
'/18185/src21_18185_spectrum.arf,/18186/src21_18186_spectrum.arf,/18187/src21_18187_spectrum.arf,/19926/src21_19926_spectrum.arf,/19987/src21_19987_spectrum.arf,/19994/src21_19994_spectrum.arf,/19995/src21_19995_spectrum.arf,/20045/src21_20045_spectrum.arf,/20046/src21_20046_spectrum.arf,/20081/src21_20081_spectrum.arf'
without a spare ',' on the end.
Then use it
os.system("pset combine_spectra src_arfs=" + s)
Not in the string, but we can build the string using features like list comprehension (in this case, a generator expression) and string joining:
obs_dir = ['18185','18186','18187','19926','19987','19994','19995','20045','20046','20081']
xid = ['src21']
i = 0
print("pset combine_spectra src_arfs=" +
      ",".join("/{0}/{1}_{0}_spectrum.arf".format(n, xid[i])
               for n in obs_dir))
I am new to Python and trying to implement topic modelling. I have successfully implemented LDA in Python using gensim, but I am not able to give any label/name to the resulting topics.
How do we name these topics? Please help with the best way to do this in Python.
My LDA output looks like this (please let me know if you need the code):
0.024*research + 0.021*students + 0.019*conference + 0.019*chi + 0.017*field + 0.014*work + 0.013*student + 0.013*hci + 0.013*group + 0.013*researchers
0.047*research + 0.034*students + 0.020*ustars + 0.018*underrepresented + 0.017*participants + 0.012*researchers + 0.012*mathematics + 0.012*graduate + 0.012*mathematical + 0.012*conference
0.027*students + 0.026*research + 0.018*conference + 0.017*field + 0.015*new + 0.014*participants + 0.013*chi + 0.012*robotics + 0.010*researchers + 0.010*student
0.023*students + 0.019*robotics + 0.018*conference + 0.017*international + 0.016*interact + 0.016*new + 0.016*ph.d. + 0.016*meet + 0.016*ieee + 0.015*u.s.
0.033*research + 0.030*flow + 0.028*field + 0.023*visualization + 0.020*challenges + 0.017*students + 0.015*project + 0.013*shape + 0.013*visual + 0.012*data
0.044*research + 0.020*mathematics + 0.017*program + 0.014*june + 0.014*conference + 0.014*- + 0.013*mathematicians + 0.013*conferences + 0.011*field + 0.011*mrc
0.023*research + 0.021*students + 0.015*field + 0.014*hovering + 0.014*mechanisms + 0.014*dpiv + 0.013*aerodynamic + 0.012*unsteady + 0.012*conference + 0.012*hummingbirds
0.031*research + 0.018*mathematics + 0.016*program + 0.014*flow + 0.014*mathematicians + 0.012*conferences + 0.011*field + 0.011*june + 0.010*visualization + 0.010*communities
0.028*students + 0.028*research + 0.018*ustars + 0.018*mathematics + 0.015*underrepresented + 0.010*program + 0.010*encouraging + 0.010*'', + 0.010*participants + 0.010*conference
0.049*research + 0.021*conference + 0.021*program + 0.020*mathematics + 0.014*mathematicians + 0.013*field + 0.013*- + 0.011*conferences + 0.010*areas
Labeling topics is completely distinct from topic modeling. Here's an article that describes using a keyword extraction technique (KERA) to apply meaningful labels to topics: http://arxiv.org/abs/1308.2359