Generate a Bode-form transfer function using ratsimp - python

I have to simplify a transfer function using SymPy. I am used to Maxima, and I am looking for advice on how to get similar results in a Python environment.
Using the following Maxima code:
A:-Avol0/(1+s/(2*pi*fp));
Zph:Rsh/(1+Rsh*Cj*s);
Zf:Rf/(1+Rf*Cf*s);
alpha:Zf*Zph/(Zf+Zph);
beta:Zph/(Zf+Zph);
BetaA:ratsimp(beta*A,s);
H:ratsimp(alpha*A/(1-BetaA),s);
I get the following:
(H)-> -(2*Avol0*Rf*Rsh*fp*pi)/((Cj+Cf)*Rf*Rsh*s^2+((2*Cj+(2*Avol0+2)*Cf)*Rf*Rsh*fp*pi+Rsh+Rf)*s+((2*Avol0+2)*Rsh+2*Rf)*fp*pi)
The same operations in SymPy do not produce such a nice result:
import numpy as np
import sympy as sy
"""
Formulas
"""
s, Rf, Cf, Rsh, Cj, Cd, Ccm, GBP, Avol0, fp, w = \
sy.symbols("s Rf Cf Rsh Cj Cd Ccm GBP Avol0 fp w")
A = -Avol0/(1+s/(2*np.pi*fp))
Zph = Rsh/(1+Rsh*Cj*s)
Zf = Rf/(1+Rf*Cf*s)
alpha = Zf*Zph/(Zf+Zph)
beta = Zph/(Zf+Zph)
Gloop = sy.ratsimp(beta*A)
H = alpha*A/(1-Gloop)
sy.ratsimp(H)
returns an unreadable result:
-1.0*(1.0*Avol0*Cf**2*Cj*Rf**3*Rsh**3*fp**2*s**3 + 0.159154943091895*Avol0*Cf**2*Cj*Rf**3*Rsh**3*fp*s**4 + 1.0*Avol0*Cf**2*Rf**3*Rsh**2*fp**2*s**2 + 0.159154943091895*Avol0*Cf**2*Rf**3*Rsh**2*fp*s**3 + 1.0*Avol0*Cf*Cj**2*Rf**3*Rsh**3*fp**2*s**3 + 0.159154943091895*Avol0*Cf*Cj**2*Rf**3*Rsh**3*fp*s**4 + 2.0*Avol0*Cf*Cj*Rf**3*Rsh**2*fp**2*s**2 + 0.318309886183791*Avol0*Cf*Cj*Rf**3*Rsh**2*fp*s**3 + 2.0*Avol0*Cf*Cj*Rf**2*Rsh**3*fp**2*s**2 + 0.318309886183791*Avol0*Cf*Cj*Rf**2*Rsh**3*fp*s**3 + 1.0*Avol0*Cf*Rf**3*Rsh*fp**2*s + 0.159154943091895*Avol0*Cf*Rf**3*Rsh*fp*s**2 + 2.0*Avol0*Cf*Rf**2*Rsh**2*fp**2*s + 0.318309886183791*Avol0*Cf*Rf**2*Rsh**2*fp*s**2 + 1.0*Avol0*Cj**2*Rf**2*Rsh**3*fp**2*s**2 + 0.159154943091895*Avol0*Cj**2*Rf**2*Rsh**3*fp*s**3 + 2.0*Avol0*Cj*Rf**2*Rsh**2*fp**2*s + 0.318309886183791*Avol0*Cj*Rf**2*Rsh**2*fp*s**2 + 1.0*Avol0*Cj*Rf*Rsh**3*fp**2*s + 0.159154943091895*Avol0*Cj*Rf*Rsh**3*fp*s**2 + 1.0*Avol0*Rf**2*Rsh*fp**2 + 0.159154943091895*Avol0*Rf**2*Rsh*fp*s + 1.0*Avol0*Rf*Rsh**2*fp**2 + 0.159154943091895*Avol0*Rf*Rsh**2*fp*s)/(1.0*Avol0*Cf**3*Cj*Rf**3*Rsh**3*fp**2*s**4 + 0.159154943091895*Avol0*Cf**3*Cj*Rf**3*Rsh**3*fp*s**5 + 1.0*Avol0*Cf**3*Rf**3*Rsh**2*fp**2*s**3 + 0.159154943091895*Avol0*Cf**3*Rf**3*Rsh**2*fp*s**4 + 1.0*Avol0*Cf**2*Cj**2*Rf**3*Rsh**3*fp**2*s**4 + 0.159154943091895*Avol0*Cf**2*Cj**2*Rf**3*Rsh**3*fp*s**5 + 2.0*Avol0*Cf**2*Cj*Rf**3*Rsh**2*fp**2*s**3 + 0.318309886183791*Avol0*Cf**2*Cj*Rf**3*Rsh**2*fp*s**4 + 3.0*Avol0*Cf**2*Cj*Rf**2*Rsh**3*fp**2*s**3 + 0.477464829275686*Avol0*Cf**2*Cj*Rf**2*Rsh**3*fp*s**4 + 1.0*Avol0*Cf**2*Rf**3*Rsh*fp**2*s**2 + 0.159154943091895*Avol0*Cf**2*Rf**3*Rsh*fp*s**3 + 3.0*Avol0*Cf**2*Rf**2*Rsh**2*fp**2*s**2 + 0.477464829275686*Avol0*Cf**2*Rf**2*Rsh**2*fp*s**3 + 2.0*Avol0*Cf*Cj**2*Rf**2*Rsh**3*fp**2*s**3 + 0.318309886183791*Avol0*Cf*Cj**2*Rf**2*Rsh**3*fp*s**4 + 4.0*Avol0*Cf*Cj*Rf**2*Rsh**2*fp**2*s**2 + 0.636619772367581*Avol0*Cf*Cj*Rf**2*Rsh**2*fp*s**3 + 3.0*Avol0*Cf*Cj*Rf*Rsh**3*fp**2*s**2 + 0.477464829275686*Avol0*Cf*Cj*Rf*Rsh**3*fp*s**3 + 2.0*Avol0*Cf*Rf**2*Rsh*fp**2*s + 0.318309886183791*Avol0*Cf*Rf**2*Rsh*fp*s**2 + 3.0*Avol0*Cf*Rf*Rsh**2*fp**2*s + 0.477464829275686*Avol0*Cf*Rf*Rsh**2*fp*s**2 + 1.0*Avol0*Cj**2*Rf*Rsh**3*fp**2*s**2 + 0.159154943091895*Avol0*Cj**2*Rf*Rsh**3*fp*s**3 + 2.0*Avol0*Cj*Rf*Rsh**2*fp**2*s + 0.318309886183791*Avol0*Cj*Rf*Rsh**2*fp*s**2 + 1.0*Avol0*Cj*Rsh**3*fp**2*s + 0.159154943091895*Avol0*Cj*Rsh**3*fp*s**2 + 1.0*Avol0*Rf*Rsh*fp**2 + 0.159154943091895*Avol0*Rf*Rsh*fp*s + 1.0*Avol0*Rsh**2*fp**2 + 0.159154943091895*Avol0*Rsh**2*fp*s + 1.0*Cf**3*Cj*Rf**3*Rsh**3*fp**2*s**4 + 0.318309886183791*Cf**3*Cj*Rf**3*Rsh**3*fp*s**5 + 0.0253302959105844*Cf**3*Cj*Rf**3*Rsh**3*s**6 + 1.0*Cf**3*Rf**3*Rsh**2*fp**2*s**3 + 0.318309886183791*Cf**3*Rf**3*Rsh**2*fp*s**4 + 0.0253302959105844*Cf**3*Rf**3*Rsh**2*s**5 + 2.0*Cf**2*Cj**2*Rf**3*Rsh**3*fp**2*s**4 + 0.636619772367581*Cf**2*Cj**2*Rf**3*Rsh**3*fp*s**5 + 0.0506605918211689*Cf**2*Cj**2*Rf**3*Rsh**3*s**6 + 4.0*Cf**2*Cj*Rf**3*Rsh**2*fp**2*s**3 + 1.27323954473516*Cf**2*Cj*Rf**3*Rsh**2*fp*s**4 + 0.101321183642338*Cf**2*Cj*Rf**3*Rsh**2*s**5 + 3.0*Cf**2*Cj*Rf**2*Rsh**3*fp**2*s**3 + 0.954929658551372*Cf**2*Cj*Rf**2*Rsh**3*fp*s**4 + 0.0759908877317533*Cf**2*Cj*Rf**2*Rsh**3*s**5 + 2.0*Cf**2*Rf**3*Rsh*fp**2*s**2 + 0.636619772367581*Cf**2*Rf**3*Rsh*fp*s**3 + 0.0506605918211689*Cf**2*Rf**3*Rsh*s**4 + 3.0*Cf**2*Rf**2*Rsh**2*fp**2*s**2 + 0.954929658551372*Cf**2*Rf**2*Rsh**2*fp*s**3 + 0.0759908877317533*Cf**2*Rf**2*Rsh**2*s**4 + 1.0*Cf*Cj**3*Rf**3*Rsh**3*fp**2*s**4 + 
0.318309886183791*Cf*Cj**3*Rf**3*Rsh**3*fp*s**5 + 0.0253302959105844*Cf*Cj**3*Rf**3*Rsh**3*s**6 + 3.0*Cf*Cj**2*Rf**3*Rsh**2*fp**2*s**3 + 0.954929658551372*Cf*Cj**2*Rf**3*Rsh**2*fp*s**4 + 0.0759908877317533*Cf*Cj**2*Rf**3*Rsh**2*s**5 + 4.0*Cf*Cj**2*Rf**2*Rsh**3*fp**2*s**3 + 1.27323954473516*Cf*Cj**2*Rf**2*Rsh**3*fp*s**4 + 0.101321183642338*Cf*Cj**2*Rf**2*Rsh**3*s**5 + 3.0*Cf*Cj*Rf**3*Rsh*fp**2*s**2 + 0.954929658551372*Cf*Cj*Rf**3*Rsh*fp*s**3 + 0.0759908877317533*Cf*Cj*Rf**3*Rsh*s**4 + 8.0*Cf*Cj*Rf**2*Rsh**2*fp**2*s**2 + 2.54647908947033*Cf*Cj*Rf**2*Rsh**2*fp*s**3 + 0.202642367284676*Cf*Cj*Rf**2*Rsh**2*s**4 + 3.0*Cf*Cj*Rf*Rsh**3*fp**2*s**2 + 0.954929658551372*Cf*Cj*Rf*Rsh**3*fp*s**3 + 0.0759908877317533*Cf*Cj*Rf*Rsh**3*s**4 + 1.0*Cf*Rf**3*fp**2*s + 0.318309886183791*Cf*Rf**3*fp*s**2 + 0.0253302959105844*Cf*Rf**3*s**3 + 4.0*Cf*Rf**2*Rsh*fp**2*s + 1.27323954473516*Cf*Rf**2*Rsh*fp*s**2 + 0.101321183642338*Cf*Rf**2*Rsh*s**3 + 3.0*Cf*Rf*Rsh**2*fp**2*s + 0.954929658551372*Cf*Rf*Rsh**2*fp*s**2 + 0.0759908877317533*Cf*Rf*Rsh**2*s**3 + 1.0*Cj**3*Rf**2*Rsh**3*fp**2*s**3 + 0.318309886183791*Cj**3*Rf**2*Rsh**3*fp*s**4 + 0.0253302959105844*Cj**3*Rf**2*Rsh**3*s**5 + 3.0*Cj**2*Rf**2*Rsh**2*fp**2*s**2 + 0.954929658551372*Cj**2*Rf**2*Rsh**2*fp*s**3 + 0.0759908877317533*Cj**2*Rf**2*Rsh**2*s**4 + 2.0*Cj**2*Rf*Rsh**3*fp**2*s**2 + 0.636619772367581*Cj**2*Rf*Rsh**3*fp*s**3 + 0.0506605918211689*Cj**2*Rf*Rsh**3*s**4 + 3.0*Cj*Rf**2*Rsh*fp**2*s + 0.954929658551372*Cj*Rf**2*Rsh*fp*s**2 + 0.0759908877317533*Cj*Rf**2*Rsh*s**3 + 4.0*Cj*Rf*Rsh**2*fp**2*s + 1.27323954473516*Cj*Rf*Rsh**2*fp*s**2 + 0.101321183642338*Cj*Rf*Rsh**2*s**3 + 1.0*Cj*Rsh**3*fp**2*s + 0.318309886183791*Cj*Rsh**3*fp*s**2 + 0.0253302959105844*Cj*Rsh**3*s**3 + 1.0*Rf**2*fp**2 + 0.318309886183791*Rf**2*fp*s + 0.0253302959105844*Rf**2*s**2 + 2.0*Rf*Rsh*fp**2 + 0.636619772367581*Rf*Rsh*fp*s + 0.0506605918211689*Rf*Rsh*s**2 + 1.0*Rsh**2*fp**2 + 0.318309886183791*Rsh**2*fp*s + 0.0253302959105844*Rsh**2*s**2)

There is a difference between the Maxima code and the Python one: the constant pi is kept symbolic in the first case and approximated by a floating-point value in the second. Replacing np.pi with sy.pi solves the problem; still, it is odd how SymPy simplifies the expression when pi is numeric.
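For reference, here is a minimal sketch of the corrected script, assuming the only change needed is substituting SymPy's symbolic sy.pi for np.pi:

import sympy as sy

s, Rf, Cf, Rsh, Cj, Avol0, fp = sy.symbols("s Rf Cf Rsh Cj Avol0 fp")

A = -Avol0/(1 + s/(2*sy.pi*fp))      # sy.pi keeps pi symbolic
Zph = Rsh/(1 + Rsh*Cj*s)
Zf = Rf/(1 + Rf*Cf*s)
alpha = Zf*Zph/(Zf + Zph)
beta = Zph/(Zf + Zph)
Gloop = sy.ratsimp(beta*A)
H = sy.ratsimp(alpha*A/(1 - Gloop))  # now collapses to a compact rational form
print(H)

With pi symbolic, ratsimp can cancel common factors exactly instead of carrying floating-point coefficients, which is what produced the sprawling result above.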

Related

Python LDA Gensim model with over 20 topics does not print properly

Using the Gensim package (both LDA and Mallet), I noticed that when I create a model with more than 20 topics and use the print_topics function, it prints a maximum of 20 topics (note: not the first 20 topics, but rather any 20 topics), and they are out of order.
So my question is: how do I get all of the topics to print? I am unsure if this is a bug or an issue on my end. I have looked back at my library of LDA models (over 5000, from different data sources) and have noted this happens in all of them where the topic count is above 20.
Below is sample code with output. In the output, you will see the topics are not ordered (they should be) and topics are missing, such as topic 3.
lda_model = gensim.models.ldamodel.LdaModel(corpus=jr_dict_corpus,
                                            id2word=jr_dict,
                                            num_topics=25,
                                            random_state=100,
                                            update_every=1,
                                            chunksize=100,
                                            passes=10,
                                            alpha='auto',
                                            per_word_topics=True)
pprint(lda_model.print_topics())
#note, if the model contained 20 topics, the topics would be listed in order 0-19
[(21,
'0.001*"commitment" + 0.001*"study" + 0.001*"evolve" + 0.001*"outlook" + '
'0.001*"value" + 0.001*"people" + 0.001*"individual" + 0.001*"client" + '
'0.001*"structure" + 0.001*"proposal"'),
(18,
'0.001*"self" + 0.001*"insurance" + 0.001*"need" + 0.001*"trend" + '
'0.001*"statistic" + 0.001*"propose" + 0.001*"analysis" + 0.001*"perform" + '
'0.001*"impact" + 0.001*"awareness"'),
(2,
'0.001*"link" + 0.001*"task" + 0.001*"collegiate" + 0.001*"universitie" + '
'0.001*"banking" + 0.001*"origination" + 0.001*"security" + 0.001*"standard" '
'+ 0.001*"qualifications_bachelor" + 0.001*"greenfield"'),
(11,
'0.024*"collegiate" + 0.016*"interpersonal" + 0.016*"prepare" + '
'0.016*"invite" + 0.016*"aspect" + 0.016*"college" + 0.016*"statistic" + '
'0.016*"continent" + 0.016*"structure" + 0.016*"project"'),
(10,
'0.049*"enjoy" + 0.049*"ambiguity" + 0.017*"accordance" + 0.017*"liberalize" '
'+ 0.017*"developing" + 0.017*"application" + 0.017*"vacancie" + '
'0.017*"service" + 0.017*"initiative" + 0.017*"discontinuing"'),
(20,
'0.028*"negotiation" + 0.028*"desk" + 0.018*"enhance" + 0.018*"engage" + '
'0.018*"discussion" + 0.018*"ability" + 0.018*"depth" + 0.018*"derive" + '
'0.018*"enjoy" + 0.018*"balance"'),
(12,
'0.036*"individual" + 0.024*"validate" + 0.018*"greenfield" + '
'0.018*"capability" + 0.018*"coordinate" + 0.018*"create" + '
'0.018*"programming" + 0.018*"safety" + 0.010*"evaluation" + '
'0.002*"reliability"'),
(1,
'0.028*"negotiation" + 0.021*"responsibility" + 0.014*"master" + '
'0.014*"mind" + 0.014*"experience" + 0.014*"worker" + 0.014*"ability" + '
'0.007*"summary" + 0.007*"proposal" + 0.007*"alert"'),
(23,
'0.043*"banking" + 0.026*"origination" + 0.026*"round" + 0.026*"credibility" '
'+ 0.026*"entity" + 0.018*"standard" + 0.017*"range" + 0.017*"pension" + '
'0.017*"adapt" + 0.017*"information"'),
(13,
'0.034*"priority" + 0.034*"reconciliation" + 0.034*"purchaser" + '
'0.023*"reporting" + 0.023*"offer" + 0.023*"investor" + 0.023*"share" + '
'0.023*"region" + 0.023*"service" + 0.023*"manipulate"'),
(22,
'0.017*"analyst" + 0.017*"modelling" + 0.016*"producer" + 0.016*"return" + '
'0.016*"self" + 0.009*"scope" + 0.008*"mind" + 0.008*"need" + 0.008*"detail" '
'+ 0.008*"statistic"'),
(9,
'0.021*"decision" + 0.014*"invite" + 0.014*"balance" + 0.014*"commercialize" '
'+ 0.014*"transform" + 0.014*"manage" + 0.014*"optionality" + '
'0.014*"problem_solving" + 0.014*"fuel" + 0.014*"stay"'),
(7,
'0.032*"commitment" + 0.032*"study" + 0.016*"impact" + 0.016*"outlook" + '
'0.011*"operation" + 0.011*"expand" + 0.011*"exchange" + 0.011*"management" '
'+ 0.011*"conde" + 0.011*"evolve"'),
(15,
'0.032*"agility" + 0.019*"feasibility" + 0.019*"self" + 0.014*"deploy" + '
'0.014*"define" + 0.013*"investment" + 0.013*"option" + 0.013*"control" + '
'0.013*"action" + 0.013*"incubation"'),
(5,
'0.020*"desk" + 0.018*"agility" + 0.016*"vender" + 0.016*"coordinate" + '
'0.016*"committee" + 0.012*"acquisition" + 0.012*"target" + '
'0.012*"counterparty" + 0.012*"approval" + 0.012*"trend"'),
(17,
'0.022*"option" + 0.017*"working" + 0.017*"niche" + 0.011*"business" + '
'0.011*"constrain" + 0.011*"meeting" + 0.011*"correspond" + 0.011*"exposure" '
'+ 0.011*"element" + 0.011*"face"'),
(0,
'0.025*"expertise" + 0.025*"banking" + 0.021*"universitie" + '
'0.017*"spreadsheet" + 0.013*"negotiation" + 0.013*"shipment" + '
'0.013*"arise" + 0.013*"billing" + 0.013*"assistance" + 0.013*"sector"'),
(4,
'0.024*"provide" + 0.017*"consider" + 0.017*"allow" + 0.015*"outlook" + '
'0.015*"value" + 0.015*"contract" + 0.012*"study" + 0.012*"technology" + '
'0.012*"scenario" + 0.012*"indicator"'),
(6,
'0.058*"impulse" + 0.027*"shall" + 0.027*"shape" + 0.024*"marketer" + '
'0.017*"availability" + 0.014*"determine" + 0.014*"load" + '
'0.014*"constantly_change" + 0.014*"instrument" + 0.014*"interface"'),
(19,
'0.042*"task" + 0.038*"tariff" + 0.038*"recommend" + 0.024*"example" + '
'0.023*"future" + 0.021*"people" + 0.021*"math" + 0.021*"capacity" + '
'0.021*"spirit" + 0.020*"price"')]
Same model as above, but using 20 topics. As you can see, the output is in order by topic # and it contains all the topics.
lda_model = gensim.models.ldamodel.LdaModel(corpus=jr_dict_corpus,
                                            id2word=jr_dict,
                                            num_topics=20,
                                            random_state=100,
                                            update_every=1,
                                            chunksize=100,
                                            passes=10,
                                            alpha='auto',
                                            per_word_topics=True)
pprint(lda_model.print_topics())
[(0,
'0.031*"enjoy" + 0.031*"ambiguity" + 0.028*"accordance" + 0.016*"statistic" '
'+ 0.016*"initiative" + 0.016*"service" + 0.016*"liberalize" + '
'0.016*"application" + 0.011*"community" + 0.011*"identifie"'),
(1,
'0.016*"transformation" + 0.016*"negotiation" + 0.016*"community" + '
'0.016*"clock" + 0.011*"marketer" + 0.011*"desk" + 0.011*"mandate" + '
'0.011*"closing" + 0.011*"initiative" + 0.011*"experience"'),
(2,
'0.026*"priority" + 0.026*"reconciliation" + 0.026*"purchaser" + '
'0.020*"safety" + 0.020*"region" + 0.020*"query" + 0.020*"share" + '
'0.020*"manipulate" + 0.020*"ibex" + 0.020*"investor"'),
(3,
'0.022*"improve" + 0.021*"committee" + 0.021*"affect" + 0.012*"target" + '
'0.012*"acquisition" + 0.011*"basis" + 0.011*"profitability" + '
'0.011*"economic" + 0.011*"natural" + 0.011*"profit"'),
(4,
'0.024*"provide" + 0.019*"value" + 0.017*"consider" + 0.017*"allow" + '
'0.015*"scenario" + 0.015*"outlook" + 0.015*"contract" + 0.014*"forecast" + '
'0.014*"decision" + 0.012*"indicator"'),
(5,
'0.037*"desk" + 0.030*"coordinate" + 0.030*"agility" + 0.030*"vender" + '
'0.023*"counterparty" + 0.023*"immature_emerge" + 0.023*"metric" + '
'0.022*"approval" + 0.015*"maximization" + 0.015*"undergraduate"'),
(6,
'0.053*"impulse" + 0.025*"shall" + 0.025*"shape" + 0.018*"availability" + '
'0.018*"marketer" + 0.012*"determine" + 0.012*"language" + '
'0.012*"monitoring" + 0.012*"integration" + 0.012*"month"'),
(7,
'0.026*"commitment" + 0.026*"study" + 0.013*"impact" + 0.013*"outlook" + '
'0.009*"operation" + 0.009*"management" + 0.009*"expand" + 0.009*"exchange" '
'+ 0.009*"conde" + 0.009*"balance"'),
(8,
'0.057*"insurance" + 0.029*"propose" + 0.028*"rule" + 0.026*"self" + '
'0.023*"product" + 0.023*"asset" + 0.023*"pricing" + 0.023*"amount" + '
'0.023*"result" + 0.020*"liquidity"'),
(9,
'0.012*"universitie" + 0.012*"need" + 0.012*"statistic" + 0.012*"trend" + '
'0.008*"invite" + 0.008*"commercialize" + 0.008*"transform" + 0.008*"manage" '
'+ 0.008*"problem_solving" + 0.008*"optionality"'),
(10,
'0.024*"background" + 0.024*"curve" + 0.020*"allow" + 0.019*"collect" + '
'0.019*"basis" + 0.017*"accordance" + 0.013*"improve" + 0.013*"datum" + '
'0.013*"component" + 0.013*"reliability"'),
(11,
'0.054*"task" + 0.049*"tariff" + 0.049*"recommend" + 0.031*"future" + '
'0.027*"spirit" + 0.027*"capacity" + 0.027*"math" + 0.022*"ensure" + '
'0.022*"profit" + 0.022*"variable_margin"'),
(12,
'0.001*"impulse" + 0.001*"availability" + 0.001*"reliability" + '
'0.001*"shall" + 0.001*"component" + 0.001*"agent" + 0.001*"marketer" + '
'0.001*"shape" + 0.001*"assisting" + 0.001*"supply"'),
(13,
'0.021*"region" + 0.016*"greenfield" + 0.016*"collegiate" + 0.011*"transfer" '
'+ 0.011*"remuneration" + 0.011*"organization" + 0.011*"structure" + '
'0.011*"continent" + 0.011*"project" + 0.011*"prepare"'),
(14,
'0.033*"originator" + 0.025*"vender" + 0.025*"expertise" + 0.025*"banking" + '
'0.019*"evolve" + 0.017*"management" + 0.017*"market" + 0.017*"site" + '
'0.012*"component" + 0.012*"discontinuing"'),
(15,
'0.027*"agility" + 0.022*"mind" + 0.022*"negotiation" + 0.011*"deploy" + '
'0.011*"define" + 0.011*"ecosystem" + 0.011*"control" + 0.011*"lead" + '
'0.011*"industry" + 0.011*"option"'),
(16,
'0.001*"region" + 0.001*"master" + 0.001*"orginiation" + 0.001*"greenfield" '
'+ 0.001*"agent" + 0.001*"identifie" + 0.001*"remuneration" + 0.001*"mark" + '
'0.001*"reviewing" + 0.001*"closing"'),
(17,
'0.030*"banking" + 0.018*"option" + 0.018*"round" + 0.018*"credibility" + '
'0.018*"origination" + 0.018*"entity" + 0.016*"working" + 0.015*"niche" + '
'0.015*"standard" + 0.012*"coordinate"'),
(18,
'0.027*"negotiation" + 0.018*"reporting" + 0.018*"perform" + 0.018*"world" + '
'0.015*"offer" + 0.015*"manipulate" + 0.011*"query" + 0.010*"control" + '
'0.010*"working" + 0.009*"self"'),
(19,
'0.047*"example" + 0.039*"people" + 0.039*"price" + 0.039*"excel" + '
'0.039*"excellent" + 0.038*"base" + 0.031*"office" + 0.031*"optimizing" + '
'0.031*"participate" + 0.031*"package"')]
The default number of topics for print_topics is 20, and it returns the most significant topics rather than the first 20, which is why the list looks unordered and incomplete. You must pass the num_topics argument to include topics above 20:
print(lda_model.print_topics(num_topics=25, num_words=10))
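Since print_topics returns (topic_id, topic_string) tuples ranked by significance, you can sort them yourself if you want them back in index order (a small sketch):

topics = lda_model.print_topics(num_topics=25, num_words=10)
for topic_id, terms in sorted(topics):  # sort by topic id
    print(topic_id, terms)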

PuLP reports the problem as infeasible, while the problem is not infeasible

I'm trying to solve an assignment problem with PuLP. The basic part of the code is as follows:
set_I = range(1, numberOfPoints)
set_J = range(1, numberOfCentroids)
tau = 0.15
Q = 15
# decision variable
x_vars = LpVariable.dicts(name="x_vars", indexs=(set_I, set_J), lowBound=0, upBound=1, cat=LpInteger)
# model name
prob = LpProblem("MIP_Model", LpMinimize)
# constraints
for i in set_I:
    prob += lpSum(x_vars[i][j] for j in set_J) == 1, ""
for j in set_J:
    prob += lpSum(x_vars[i][j] for i in set_I) >= 1, ""
for j in set_J:
    prob += lpSum(x_vars[i][j] for i in set_I) <= Q*(1-tau), ""
for j in set_J:
    prob += lpSum(x_vars[i][j] for i in set_I) >= Q*(1+tau), ""
# objective
prob += lpSum(d[i, j]*x_vars[i][j] for i in set_I for j in set_J)
prob.solve()
The result is like this:
Problem MODEL has 31 rows, 76 columns and 304 elements
Coin0008I MODEL read with 0 errors
Problem is infeasible - 0.01 seconds
Option for printingOptions changed from normal to all
However, the problem is not infeasible, and results are obtained with other solvers.
I wonder if there is a syntax error, and whether the problem is caused by this.
I have asked a similar question at the following link:
Infeasible solution by pulp
When I run the problem locally, with d a matrix of ones, 20 points, and 3 centroids, it also becomes infeasible for me. Look at the constraints:
_C22: x_vars_10_1 + x_vars_11_1 + x_vars_12_1 + x_vars_13_1 + x_vars_14_1
+ x_vars_15_1 + x_vars_16_1 + x_vars_17_1 + x_vars_18_1 + x_vars_19_1
+ x_vars_1_1 + x_vars_2_1 + x_vars_3_1 + x_vars_4_1 + x_vars_5_1 + x_vars_6_1
+ x_vars_7_1 + x_vars_8_1 + x_vars_9_1 <= 12.75
_C23: x_vars_10_2 + x_vars_11_2 + x_vars_12_2 + x_vars_13_2 + x_vars_14_2
+ x_vars_15_2 + x_vars_16_2 + x_vars_17_2 + x_vars_18_2 + x_vars_19_2
+ x_vars_1_2 + x_vars_2_2 + x_vars_3_2 + x_vars_4_2 + x_vars_5_2 + x_vars_6_2
+ x_vars_7_2 + x_vars_8_2 + x_vars_9_2 <= 12.75
_C24: x_vars_10_1 + x_vars_11_1 + x_vars_12_1 + x_vars_13_1 + x_vars_14_1
+ x_vars_15_1 + x_vars_16_1 + x_vars_17_1 + x_vars_18_1 + x_vars_19_1
+ x_vars_1_1 + x_vars_2_1 + x_vars_3_1 + x_vars_4_1 + x_vars_5_1 + x_vars_6_1
+ x_vars_7_1 + x_vars_8_1 + x_vars_9_1 >= 17.25
_C25: x_vars_10_2 + x_vars_11_2 + x_vars_12_2 + x_vars_13_2 + x_vars_14_2
+ x_vars_15_2 + x_vars_16_2 + x_vars_17_2 + x_vars_18_2 + x_vars_19_2
+ x_vars_1_2 + x_vars_2_2 + x_vars_3_2 + x_vars_4_2 + x_vars_5_2 + x_vars_6_2
+ x_vars_7_2 + x_vars_8_2 + x_vars_9_2 >= 17.25
You require
x_vars_10_2 + x_vars_11_2 + x_vars_12_2 + x_vars_13_2 + x_vars_14_2
+ x_vars_15_2 + x_vars_16_2 + x_vars_17_2 + x_vars_18_2 + x_vars_19_2
+ x_vars_1_2 + x_vars_2_2 + x_vars_3_2 + x_vars_4_2 + x_vars_5_2 + x_vars_6_2
+ x_vars_7_2 + x_vars_8_2 + x_vars_9_2
to be greater than 17.25 and smaller than 12.75 at the same time. That's not possible, of course.
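Presumably the inequality directions of the last two constraint groups were swapped. If the intent was to keep each centroid's load within the band Q*(1-tau) to Q*(1+tau), the constraints would read (a sketch of the assumed intent):

for j in set_J:
    prob += lpSum(x_vars[i][j] for i in set_I) >= Q*(1-tau), ""
for j in set_J:
    prob += lpSum(x_vars[i][j] for i in set_I) <= Q*(1+tau), ""

Even then, since every point is assigned to exactly one centroid, the model is only feasible if the number of points lies between Q*(1-tau) and Q*(1+tau) times the number of centroids. Also note that range(1, numberOfPoints) skips index 0 and yields numberOfPoints - 1 points.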

argument must be a string or a number, not 'LpAffineExpression'

I am trying to use the NumPy IRR function with a PuLP maximisation problem, but I am getting the following error:
TypeError: float() argument must be a string or a number, not 'LpAffineExpression'
TypeError Traceback (most recent call last)
in
11 name[6]*rate[6]*ratesList2[2] + name[7]*rate[7]*ratesList2[2] + name[8]*rate[8]*ratesList2[2] + name[9]*rate[9]*ratesList2[2] + name[10]*rate[10]*ratesList2[2] + name[11]*rate[11]*ratesList2[2] +
12 name[12]*rate[12]*ratesList2[2] + name[13]*rate[13]*ratesList2[2] + name[14]*rate[14]*ratesList2[2] + name[15]*rate[15]*ratesList2[2] + name[16]*rate[16]*ratesList2[2] + name[17]*rate[17]*ratesList2[2] +
---> 13 name[18]*rate[18]*ratesList2[2])])
14
15
problem += np.irr([(-19660528.00),
(name[0]*rate[0] + name[1]*rate[1] + name[2]*rate[2] + name[3]*rate[3] + name[4]*rate[4] + name[5]*rate[5] +
name[6]*rate[6] + name[7]*rate[7] + name[8]*rate[8] + name[9]*rate[9] + name[10]*rate[10] + name[11]*rate[11] +
name[12]*rate[12] + name[13]*rate[13] + name[14]*rate[14] + name[15]*rate[15] + name[16]*rate[16] + name[17]*rate[17] +
name[18]*rate[18]),
(name[0]*rate[0]*ratesList1[1] + name[1]*rate[1]*ratesList2[1] + name[2]*rate[2]*ratesList2[1] + name[3]*rate[3]*ratesList2[1] + name[4]*rate[4]*ratesList2[1] + name[5]*rate[5]*ratesList2[1] +
name[6]*rate[6]*ratesList2[1] + name[7]*rate[7]*ratesList2[1] + name[8]*rate[8]*ratesList2[1] + name[9]*rate[9]*ratesList2[1] + name[10]*rate[10]*ratesList2[1] + name[11]*rate[11]*ratesList2[1] +
name[12]*rate[12]*ratesList2[1] + name[13]*rate[13]*ratesList2[1] + name[14]*rate[14]*ratesList2[1] + name[15]*rate[15]*ratesList2[1] + name[16]*rate[16]*ratesList2[1] + name[17]*rate[17]*ratesList2[1] +
name[18]*rate[18]*ratesList2[1]),
(name[0]*rate[0]*ratesList1[2] + name[1]*rate[1]*ratesList2[2] + name[2]*rate[2]*ratesList2[2] + name[3]*rate[3]*ratesList2[2] + name[4]*rate[4]*ratesList2[2] + name[5]*rate[5]*ratesList2[2] +
name[6]*rate[6]*ratesList2[2] + name[7]*rate[7]*ratesList2[2] + name[8]*rate[8]*ratesList2[2] + name[9]*rate[9]*ratesList2[2] + name[10]*rate[10]*ratesList2[2] + name[11]*rate[11]*ratesList2[2] +
name[12]*rate[12]*ratesList2[2] + name[13]*rate[13]*ratesList2[2] + name[14]*rate[14]*ratesList2[2] + name[15]*rate[15]*ratesList2[2] + name[16]*rate[16]*ratesList2[2] + name[17]*rate[17]*ratesList2[2] +
name[18]*rate[18]*ratesList2[2])])
problem += (name[0] + name[1] + name[2] + name[3] + name[4] + name[5] + name[6] + name[7] + name[8] + name[9] + name[10] +
name[11] + name[12] + name[13] + name[14] + name[15] + name[16] + name[17] + name[18]) <= sum(marketMix['GLA']), "1st constraint"
The numpy function irr() takes a list of numbers as its argument. You are instead passing a list of linear expressions containing variables that are subject to optimization, and irr() is not prepared to handle that: it assumes that every entry can be coerced to a float. Instead of using irr(), you will have to state the respective expression explicitly.
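Note also that IRR is a nonlinear function of the cash flows, so even a symbolic version of it could not serve as a linear objective. One common workaround (a hypothetical sketch, not from the original post; all variable names and coefficients are illustrative) is to maximise NPV at a fixed discount rate, which is linear in the decision variables:

from pulp import LpProblem, LpMaximize, LpVariable, lpSum

prob = LpProblem("npv_model", LpMaximize)
x = LpVariable.dicts("x", range(3), lowBound=0)

r = 0.10  # assumed fixed discount rate
cashflows = [-19660528.00,                        # period-0 outlay from the question
             lpSum(100*x[i] for i in range(3)),   # illustrative period-1 inflow
             lpSum(120*x[i] for i in range(3))]   # illustrative period-2 inflow
# NPV = sum of discounted cash flows; each term remains an LpAffineExpression
prob += lpSum(cf * (1.0/(1 + r)**t) for t, cf in enumerate(cashflows))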

Integration error in SymPy 1.0

I'm trying to rewrite my Mathcad model in Python, but I get an error.
The integration function should look like this:
F(t) = ∫₀ᵗ μ^α / (Γ(α) · x^(α−1) · e^(μ·x)) dx
In Python I wrote the following code:
from __future__ import division
import sympy as sp
import numpy as np
import math
from pylab import *
print(sp.__version__)
s = sp.Symbol('s')
x = sp.symbols('x')
t_start = 11
t_info = 1
t_transf = 2
t_stat_analyze = 3
t_repeat = 3.2
P = 0.1
def M1(s):
    return P/(t_info*t_start*t_stat_analyze*t_transf*(1 - (-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_info)*(s + 1/t_start)*(s + 1/t_stat_analyze)*(s + 1/t_transf)**2) + P/(t_info*t_start*t_stat_analyze*t_transf*(1 - (-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_info)*(s + 1/t_start)*(s + 1/t_stat_analyze)**2*(s + 1/t_transf)) + P/(t_info*t_start*t_stat_analyze*t_transf*(1 - (-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_info)*(s + 1/t_start)**2*(s + 1/t_stat_analyze)*(s + 1/t_transf)) + P/(t_info*t_start*t_stat_analyze*t_transf*(1 - (-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_info)**2*(s + 1/t_start)*(s + 1/t_stat_analyze)*(s + 1/t_transf)) - P*(-(-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)**2) - (-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)**2*(s + 1/t_transf)))/(t_info*t_start*t_stat_analyze*t_transf*(1 - (-P + 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))**2*(s + 1/t_info)*(s + 1/t_start)*(s + 1/t_stat_analyze)*(s + 1/t_transf))
def M2(s):
    return 2*P*((s + 1/t_transf)**(-2) + 1/((s + 1/t_stat_analyze)*(s + 1/t_transf)) + (s + 1/t_stat_analyze)**(-2) + 1/((s + 1/t_start)*(s + 1/t_transf)) + 1/((s + 1/t_start)*(s + 1/t_stat_analyze)) + (s + 1/t_start)**(-2) + 1/((s + 1/t_info)*(s + 1/t_transf)) + 1/((s + 1/t_info)*(s + 1/t_stat_analyze)) + 1/((s + 1/t_info)*(s + 1/t_start)) + (s + 1/t_info)**(-2) - (P - 1)*((s + 1/t_transf)**(-2) + 1/((s + 1/t_repeat)*(s + 1/t_transf)) + (s + 1/t_repeat)**(-2))/(t_repeat*t_transf*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_repeat)*(s + 1/t_transf)) - (P - 1)*(1/(s + 1/t_transf) + 1/(s + 1/t_repeat))/(t_repeat*t_transf*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_repeat)*(s + 1/t_transf)**2) - (P - 1)*(1/(s + 1/t_transf) + 1/(s + 1/t_repeat))/(t_repeat*t_transf*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_repeat)*(s + 1/t_stat_analyze)*(s + 1/t_transf)) - (P - 1)*(1/(s + 1/t_transf) + 1/(s + 1/t_repeat))/(t_repeat*t_transf*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_repeat)*(s + 1/t_start)*(s + 1/t_transf)) - (P - 1)*(1/(s + 1/t_transf) + 1/(s + 1/t_repeat))/(t_repeat*t_transf*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_info)*(s + 1/t_repeat)*(s + 1/t_transf)) + (P - 1)**2*(1/(s + 1/t_transf) + 1/(s + 1/t_repeat))**2/(t_repeat**2*t_transf**2*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))**2*(s + 1/t_repeat)**2*(s + 1/t_transf)**2))/(t_info*t_start*t_stat_analyze*t_transf*(1 + (P - 1)/(t_repeat*t_transf*(s + 1/t_repeat)*(s + 1/t_transf)))*(s + 1/t_info)*(s + 1/t_start)*(s + 1/t_stat_analyze)*(s + 1/t_transf))
T_realyze = M1(0)
D = M2(0)-M1(0)**2
alpha = T_realyze**2/D
myu = T_realyze/D
def F(t):
    if t < 0:
        return 0
    else:
        return sp.integrate((myu**alpha)/(sp.gamma(alpha)*(x**(alpha-1))*sp.exp(myu*x)), (x, 0, t))
t = arange(0, 200, 1)
for i in t:
    print(F(i))
    i = i + 1
When I try to execute it, I get the following error from the sp.integrate call:
$ python2.7 nta.py
1.0
('T_realyze = ', 63.800000000000026)
('D = ', 2696.760000000001)
('alpha = ', 1.5093816283243602)
('myu = ', 0.02365801925273291)
0
('myu*x = ', 0.0236580192527329*x)
('sp.exp(myu*x)', exp(0.0236580192527329*x))
0
1
('myu*x = ', 0.0236580192527329*x)
('sp.exp(myu*x)', exp(0.0236580192527329*x))
Traceback (most recent call last):
File "nta.py", line 48, in <module>
print(F(i))
File "nta.py", line 43, in F
return sp.integrate((myu**alpha)/(sp.gamma(alpha)*(x**(alpha-1))*sp.exp(myu*x)), (x, 0, t))
File "/root/anaconda2/lib/python2.7/site-packages/sympy/integrals/integrals.py", line 1280, in integrate
risch=risch, manual=manual)
File "/root/anaconda2/lib/python2.7/site-packages/sympy/integrals/integrals.py", line 486, in doit
conds=conds)
File "/root/anaconda2/lib/python2.7/site-packages/sympy/integrals/integrals.py", line 887, in _eval_integral
h = heurisch_wrapper(g, x, hints=[])
File "/root/anaconda2/lib/python2.7/site-packages/sympy/integrals/heurisch.py", line 130, in heurisch_wrapper
unnecessary_permutations)
File "/root/anaconda2/lib/python2.7/site-packages/sympy/integrals/heurisch.py", line 657, in heurisch
solution = _integrate('Q')
File "/root/anaconda2/lib/python2.7/site-packages/sympy/integrals/heurisch.py", line 646, in _integrate
numer = ring.from_expr(raw_numer)
File "/root/anaconda2/lib/python2.7/site-packages/sympy/polys/rings.py", line 371, in from_expr
raise ValueError("expected an expression convertible to a polynomial in %s, got %s" % (self, expr))
ValueError: expected an expression convertible to a polynomial in Polynomial ring in _x0, _x1, _x2, _x3 over RR[_A0,_A1,_A2,_A3,_A4,_A5,_A6,_A7,_A8,_A9,_A10,_A11,_A12,_A13,_A14,_A15,_A16,_A17,_A18,_A19,_A20,_A21,_A22,_A23,_A24,_A25,_A26,_A27,_A28,_A29,_A30,_A31,_A32,_A33,_A34] with lex order, got 0.50938162832436*_x3**2.96316463805253*(_A0 + _A10*_x0*_x1 + 2*_A11*_x1*_x3 + _x0**2*_A12 + _A14*_x0*_x2 + _A2*_x0 + 2*_A20*_x0*_x3 + _A24*_x1*_x2 + _x2**2*_A27 + 2*_A28*_x3 + _x1**2*_A30 + 3*_x3**2*_A31 + 2*_A6*_x2*_x3 + _A8*_x2 + _A9*_x1) + 1.50938162832436*_x3**4.92632927610506*(_A10*_x1*_x3 + 2*_A12*_x0*_x3 + _A13*_x1*_x2 + _A14*_x2*_x3 + 2*_A15*_x0 + _A16*_x2 + _x2**2*_A18 + _A2*_x3 + _x3**2*_A20 + _A21 + _x1**2*_A3 + 2*_A33*_x0*_x2 + _A34*_x1 + 3*_x0**2*_A5 + 2*_A7*_x0*_x1) - _A10*_x0*_x3 - _x3**2*_A11 - _A13*_x0*_x2 - _x2**2*_A17 - 2*_A19*_x1*_x2 - _A22 - _A24*_x2*_x3 - 2*_A25*_x1 - 3*_x1**2*_A29 - 2*_A3*_x0*_x1 - 2*_A30*_x1*_x3 - _A34*_x0 - _A4*_x2 - _x0**2*_A7 - _A9*_x3 + _x2*_x3 + 0.0236580192527329*_x2*(_A13*_x0*_x1 + _A14*_x0*_x3 + _A16*_x0 + 2*_A17*_x1*_x2 + 2*_A18*_x0*_x2 + _x1**2*_A19 + 2*_A23*_x2 + _A24*_x1*_x3 + 3*_x2**2*_A26 + 2*_A27*_x2*_x3 + _A32 + _x0**2*_A33 + _A4*_x1 + _x3**2*_A6 + _A8*_x3)
SymPy appears to have difficulty evaluating the integral with floating-point coefficients (in this case). However, it can find the integral in closed form when the constants of the integrand are symbolic.
a, b, c, t = sp.symbols('a,b,c,t', positive=True)
f = sp.Integral(a * sp.exp(-c*x)/(x**b), (x, 0, t)).doit()
print(f)
Output:
-a*(-b*c**b*gamma(-b + 1)*lowergamma(-b + 1, 0)/(c*gamma(-b + 2)) + c**b*gamma(-b + 1)*lowergamma(-b + 1, 0)/(c*gamma(-b + 2))) + a*(-b*c**b*gamma(-b + 1)*lowergamma(-b + 1, c*t)/(c*gamma(-b + 2)) + c**b*gamma(-b + 1)*lowergamma(-b + 1, c*t)/(c*gamma(-b + 2)))
You can substitute the constants in this expression to get numerical results as follows (here, I use an example value of t=4):
f.subs({a:(myu**alpha)/sp.gamma(alpha), b:(alpha-1), c:myu, t:4}).n()
0.0154626407404632
Another option is to use quad from scipy (again using t=4):
from scipy.integrate import quad
quad(lambda x: (myu**alpha)/(sp.gamma(alpha)*(x**(alpha-1))*sp.exp(myu*x)), 0, 4)[0]
0.015462640740458165
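For completeness, this particular integral also has a closed form in terms of the lower incomplete gamma function. A sketch using the alpha and myu computed above (note that scipy's gammainc is the regularized function, gammainc(s, z) = γ(s, z)/Γ(s)):

from scipy.special import gammainc, gamma

# int_0^t x**(1 - alpha) * exp(-myu*x) dx = myu**(alpha - 2) * γ(2 - alpha, myu*t),
# so multiplying by the prefactor myu**alpha / Γ(alpha) gives:
t_val = 4
closed_form = (myu**(2*alpha - 2)
               * gammainc(2 - alpha, myu*t_val) * gamma(2 - alpha)
               / gamma(alpha))
print(closed_form)  # ~0.0154626, matching both results above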

Naming LDA topics in Python

I am new to Python and trying to implement topic modelling. I have successfully implemented LDA in Python using gensim, but I am not able to give any label/name to these topics.
How do we name these topics? Please help with the best way to implement this in Python.
My LDA output is somewhat like this (please let me know if you need the code):
0.024*research + 0.021*students + 0.019*conference + 0.019*chi + 0.017*field + 0.014*work + 0.013*student + 0.013*hci + 0.013*group + 0.013*researchers
0.047*research + 0.034*students + 0.020*ustars + 0.018*underrepresented + 0.017*participants + 0.012*researchers + 0.012*mathematics + 0.012*graduate + 0.012*mathematical + 0.012*conference
0.027*students + 0.026*research + 0.018*conference + 0.017*field + 0.015*new + 0.014*participants + 0.013*chi + 0.012*robotics + 0.010*researchers + 0.010*student
0.023*students + 0.019*robotics + 0.018*conference + 0.017*international + 0.016*interact + 0.016*new + 0.016*ph.d. + 0.016*meet + 0.016*ieee + 0.015*u.s.
0.033*research + 0.030*flow + 0.028*field + 0.023*visualization + 0.020*challenges + 0.017*students + 0.015*project + 0.013*shape + 0.013*visual + 0.012*data
0.044*research + 0.020*mathematics + 0.017*program + 0.014*june + 0.014*conference + 0.014*- + 0.013*mathematicians + 0.013*conferences + 0.011*field + 0.011*mrc
0.023*research + 0.021*students + 0.015*field + 0.014*hovering + 0.014*mechanisms + 0.014*dpiv + 0.013*aerodynamic + 0.012*unsteady + 0.012*conference + 0.012*hummingbirds
0.031*research + 0.018*mathematics + 0.016*program + 0.014*flow + 0.014*mathematicians + 0.012*conferences + 0.011*field + 0.011*june + 0.010*visualization + 0.010*communities
0.028*students + 0.028*research + 0.018*ustars + 0.018*mathematics + 0.015*underrepresented + 0.010*program + 0.010*encouraging + 0.010*'', + 0.010*participants + 0.010*conference
0.049*research + 0.021*conference + 0.021*program + 0.020*mathematics + 0.014*mathematicians + 0.013*field + 0.013*- + 0.011*conferences + 0.010*areas
Labeling topics is completely distinct from topic modeling. Here's an article that describes using a keyword extraction technique (KERA) to apply meaningful labels to topics: http://arxiv.org/abs/1308.2359
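If you just need quick provisional names rather than the full KERA pipeline, a simple heuristic sketch is to label each topic with its top few words (this uses gensim's show_topic, which returns (word, probability) pairs):

labels = {}
for topic_id in range(lda_model.num_topics):
    top_words = [word for word, _ in lda_model.show_topic(topic_id, topn=3)]
    labels[topic_id] = "_".join(top_words)
print(labels)  # e.g. {0: 'research_students_conference', ...}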
