Consider two functions of SymPy symbols e and i:
from sympy import Symbol, expand, Order
i = Symbol('i')
e = Symbol('e')
f = (i**3 + i**2 + i + 1)
g = (e**3 + e**2 + e + 1)
z = expand(f*g)
This will produce
z = e**3*i**3 + e**3*i**2 + e**3*i + e**3 + e**2*i**3 + e**2*i**2 + e**2*i + e**2 + e*i**3 + e*i**2 + e*i + e + i**3 + i**2 + i + 1
However, assume that e and i are both small, so that terms of order three or higher can be neglected. SymPy's series tools, or simply adding an Order term (big-O notation), can handle this:
In : z = expand(f*g + Order(i**3) + Order(e**3))
Out: 1 + i + i**2 + e + e*i + e*i**2 + e**2 + e**2*i + e**2*i**2 + O(i**3) + O(e**3)
Looks great. However, I am still left with mixed terms such as e**2*i**2. Each individual variable in these terms is below the desired cut-off, so SymPy keeps them; mathematically, though, small²·small² = small⁴. Likewise, e·i² = small·small² = small³.
At least for my purposes, I want these mixed terms dropped. Adding a mixed Order does not produce the desired result (it seems to ignore the first two orders).
In : expand(f*g + Order(i**3) + Order(e**3) + Order((i**2)*(e**2)))
Out: 1 + i + i**2 + i**3 + e + e*i + e*i**2 + e*i**3 + e**2 + e**2*i + e**3 + e**3*i + O(e**2*i**2, e, i)
Question: Does SymPy have an easy system to quickly remove the n-th order terms, as well as terms that are (e^a)·(i^b) where a+b > n?
Messy Solution: I have found a way to solve this, but it is messy and potentially not general.
z = expand(f*g + Order((e**2)*i) + Order(e*(i**2)))
zz = expand(z.removeO() + Order(e**3) + Order(i**3))
produces
zz = 1 + i + i**2 + e + e*i + e**2 + O(i**3) + O(e**3)
which is exactly what I want. So to specify my question: Is there a way to do this in one step that can be generalized to any n? Also, my solution loses the big-O notation that indicates mixed-terms were lost. This is not needed but would be nice.
As you have a dual limit, you must specify both infinitesimal variables (e and i) in all Order objects, even if they don’t appear in the first argument.
The reason for this is that Order(expr) only automatically chooses those symbols as infinitesimal that actually appear in the expr and thus, e.g., O(e) is only for the limit e→0.
Now, Order objects with different limits don’t mix well, e.g.:
O(e*i)+O(e) == O(e*i) != O(e)+O(e*i) == O(e) # True
This leads to a mess where results depend on the order of addition, which is a good indicator that this is something to avoid.
This can be avoided by explicitly specifying the infinitesimal symbols as additional arguments of Order, e.g.:
O(e*i)+O(e,e,i) == O(e,e,i)+O(e*i) == O(e,e,i) # True
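For reference, a runnable version of that check:
from sympy import Symbol, Order
e, i = Symbol('e'), Symbol('i')
# with matching infinitesimal symbols, both orders of addition agree
assert Order(e*i) + Order(e, e, i) == Order(e, e, i)
assert Order(e, e, i) + Order(e*i) == Order(e, e, i)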
I haven’t found a way to avoid going through all combinations of e and i manually, but this can be done by a simple iteration:
n = 3  # truncation order
orders = sum( Order(e**a*i**(n-a),e,i) for a in range(n+1) )
expand(f*g+orders)
# 1 + i + i**2 + e + e*i + e**2 + O(e**2*i, e, i) + O(e*i**2, e, i) + O(i**3, e, i) + O(e**3, e, i)
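One way to generalize that iteration to any truncation order n and any number of small symbols (the helper truncate below is my own sketch, not a SymPy function):
from itertools import combinations_with_replacement
from sympy import Mul, Order, expand

def truncate(expr, n, *syms):
    # one Order term per monomial of total degree exactly n
    orders = sum(Order(Mul(*mon), *syms)
                 for mon in combinations_with_replacement(syms, n))
    return expand(expr + orders)

print(truncate(f*g, 3, e, i))  # should reproduce the result above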
Without using Order you might try something simple like this:
>>> from sympy import Dummy, S, degree
>>> eq = expand(f*g)  # as you defined
>>> def total_degree(e):
...     # substitute a single dummy for every free symbol, then take the degree
...     x = Dummy()
...     free = e.free_symbols
...     if not free: return S.Zero
...     for f in free:
...         e = e.subs(f, x)
...     return degree(e)
>>> eq.replace(lambda x: total_degree(x) > 2, lambda x: S.Zero)
e**2 + e*i + e + i**2 + i + 1
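The same idea can be wrapped into a helper parametrized by the cut-off (the name drop_high_order is my own; it reuses total_degree and eq from above):
>>> def drop_high_order(expr, n):
...     # zero out every term whose total degree exceeds n
...     return expr.replace(lambda t: total_degree(t) > n, lambda t: S.Zero)
>>> drop_high_order(eq, 2)
e**2 + e*i + e + i**2 + i + 1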
There is a way to do it using Poly. I have made a function that keeps the O(...) term and another that does not (faster).
from sympy import Symbol, expand, Order, Poly
i = Symbol('i')
e = Symbol('e')
f = (i**3 + i**2 + i + 1)
g = (e**3 + e**2 + e + 1)
z = expand(f*g)
def neglect(expr, order=3):
    z = Poly(expr)
    # extract all terms and keep the lower order ones
    d = z.as_dict()
    d = {t: c for t, c in d.items() if sum(t) < order}
    # Build resulting polynomial
    return Poly(d, z.gens).as_expr()

def neglectO(expr, order=3):
    # This one keeps O terms
    z = Poly(expr)
    # extract terms of higher "order"
    d = z.as_dict()
    large = {t: c for t, c in d.items() if sum(t) >= order}
    for t in large:  # Add each O(large monomial) to the expression
        expr += Order(Poly({t: 1}, z.gens).as_expr(), *z.gens)
    return expr
print(neglect(z))
print(neglectO(z))
This code prints the following:
e**2 + e*i + e + i**2 + i + 1
1 + i + i**2 + e + e*i + e**2 + O(e**2*i, e, i) + O(e*i**2, e, i) + O(i**3, e, i) + O(e**3, e, i)
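As a usage note (my own example), the order parameter lets you tighten the cut-off:
print(neglect(z, order=2))  # keeps only terms of total degree < 2, i.e. e + i + 1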
I would like to use python/sympy to solve simple systems of equations coming from impedance calculations in electronics.
In such calculations, due to the "parallel impedance" formula, one often has to deal with expressions of the form:
par(x,y) := (x*y)/(x+y)
Now I have tried with the following code:
from sympy import *
def par(var1, var2):
    return (var1 * var2)/(var1 + var2)
A = Symbol('A')
B = Symbol('B')
C = Symbol('C')
D = Symbol('D')
E = Symbol('E')
eq1 = A + par(B+50 , (C+ par(D, (E+50)) )) - 50
eq2 = B + par(A+50 , (C+ par(D , (E+50)) )) - 50
eq3 = E + par(D+50, (C+ par(A+50, B+50)) ) - 50
thus defining a system of three equations in five variables {A,B,C,D,E}, but then running
solve([eq1,eq2,eq3], A, B,C,D,E)
the computation just does not terminate.
Do you have any suggestions on how I could approach this type of equation?
Basically, these are ratios of polynomials, with solutions in the complex numbers.
Taking the suggestion of Oscar and focusing on A, B and C, you can get a solution for them in terms of D and E:
>>> solve((eq1, eq2, eq3), A, B, C)[0] # one solution
(
50*(D + E)*(D + E + 50)/(3*D**2 - 2*D*E + 150*D - 3*E**2 - 50*E + 5000),
50*(D + E)*(D + E + 50)/(3*D**2 - 2*D*E + 150*D - 3*E**2 - 50*E + 5000),
(-3*D**3*E + 50*D**3 + 2*D**2*E**2 - 500*D**2*E + 10000*D**2 + 3*D*E**3 + 50*D*E**2 - 25000*D*E + 500000*D + 200*E**3 - 5000*E**2 - 500000*E + 12500000)/(3*D**3 + D**2*E + 150*D**2 - 5*D*E**2 + 100*D*E + 5000*D - 3*E**3 - 50*E**2 + 5000*E))
Notice that the solution for A and B is the same (consistent with the symmetry of the first two equations wrt A and B).
>>> sol = Dict(*zip((A,B,C),_))
>>> sol[A] == sol[B]
True
In this form, you can directly substitute values for D and E:
>>> sol.subs({D:1, E:S.Half})
{A: 1030/1367, B: 1030/1367, C: 2265971/1367}
You can also see what relationships between D and E are forbidden by solving for when any of the denominators are 0:
>>> from sympy.solvers.solvers import denoms
>>> set([j.simplify() for i in denoms(sol) for j in solve(i,D)])
{-E, E/3 - sqrt(10*E**2 - 9375)/3 - 25, E/3 + sqrt(10*E**2 - 9375)/3 - 25}
You could also try the Z3 library:
from z3 import Reals, Solver, sat, set_option
def par(var1, var2):
    return (var1 * var2) / (var1 + var2)
vars = Reals('A B C D E')
A, B, C, D, E = vars
set_option(rational_to_decimal=True, precision=30)
s = Solver()
s.add(A + par(B + 50, (C + par(D, (E + 50)))) == 50)
s.add(B + par(A + 50, (C + par(D, (E + 50)))) == 50)
s.add(E + par(D + 50, (C + par(A + 50, B + 50))) == 50)
if s.check() == sat:
    m = s.model()
    print(m)
I get the following output:
[D = 0.502507819412956050111927878839?,
C = 0.125,
E = 50.188198722936229689352204927638?,
B = -50.625,
A = -50.625]
The question mark at the end of D's and E's values means they have been approximated.
If you then plug the values for A, B and C into the original SymPy code, SymPy gives two exact solutions:
[{A: -405/8,
B: -405/8,
C: 1/8,
D: -31857/1292 + 5*sqrt(676050154)/5168,
E: 443/1304 + 5*sqrt(676050154)/2608},
{A: -405/8,
B: -405/8,
C: 1/8,
D: -5*sqrt(676050154)/5168 - 31857/1292,
E: 443/1304 - 5*sqrt(676050154)/2608}]
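A minimal sketch of that back-substitution, assuming the SymPy symbols and eq1, eq2, eq3 defined in the question (not the Z3 ones):
from sympy import Rational, solve
exact = {A: Rational(-405, 8), B: Rational(-405, 8), C: Rational(1, 8)}
print(solve([eq.subs(exact) for eq in (eq1, eq2, eq3)], (D, E), dict=True))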
I am manipulating multivariate polynomials in SymPy that are all expressed as sums of products of terms of the form (x_i + y_j), where i and j are indices, and I want to keep them that way, i.e. express everything in terms of binomials that are each the sum of one x symbol and one y symbol.
For example, I want
(y_{1} + z_{2})*((0 + 1)*(y_{3} + z_{2}) + y_{1} + z_{1} + 0 + 0)
to become
(y_{1} + z_{2})*(y_{3} + z_{2}) + (y_{1} + z_{2})*(y_{1} + z_{1})
The first thing you can do is replace binomials that fit the pattern with dummy and expand. The problem is that you will have some dangling terms to group together. If you always have two, then it's easy. If you have more it will require more work and a good definition of which subindexed x should go with which y (or whatever letters you want paired up).
So let's start with your unevaluated expression which we will call u
Get the free symbols (assuming there are only x and y of interest):
>>> free = u.free_symbols
Replace existing binomials with a unique dummy variable
>>> reps = {}
>>> u.replace(lambda x:x.is_Add and len(x.args) == 2 and all(
... i in free for i in x.args),
... lambda x: reps.setdefault(x, Dummy()))
_Dummy_45*(_Dummy_47*(0 + 1) + y1 + z1)
Now expand
>>> expand(_)
_Dummy_45*_Dummy_47 + _Dummy_45*y1 + _Dummy_45*z1
and collect products of dummy symbols together
>>> _.replace(lambda x:x.is_Mul and len(x.args) == 2 and all(
... i in reps.values() for i in x.args),
... lambda x: reps.setdefault(x, Dummy()))
_Dummy_45*y1 + _Dummy_45*z1 + _Dummy_51
Collect on dummy symbols to get binomials to appear that were previously dangling
>>> collect(_, reps.values())
_Dummy_45*(y1 + z1) + _Dummy_51
Now replace Dummy symbols with their values (which are the keys in reps so we have to invert that dictionary):
>>> _.xreplace({v:k for k,v in reps.items()})
_Dummy_45*_Dummy_47 + (y1 + z1)*(y1 + z2)
Do it again
>>> _.xreplace({v:k for k,v in reps.items()})
(y1 + z1)*(y1 + z2) + (y1 + z2)*(y3 + z2)
Posting specific expressions that you would like to see re-arranged in some way would help to focus a more robust solution, but these techniques can get you started. Here, too, is a function that pairs up free symbols in an Add and replaces them with Dummy symbols.
from sympy import Add, Dummy, ordered
from sympy.utilities.iterables import sift
from sympy.core.traversal import bottom_up

def collect_pairs(e, X, Y):
    free = e.free_symbols
    xvars, yvars = [[i for i in free if i.name.startswith(j)] for j in (X, Y)]
    reps = {}
    def do(e):
        if not e.is_Add: return e
        x, cy = sift(e.args, lambda x: x in xvars, binary=True)
        y, c = sift(cy, lambda x: x in yvars, binary=True)
        if x and len(x) != len(y): return e
        args = []
        for i, j in zip(ordered(x), ordered(y)):
            args.append(reps.setdefault(i + j, Dummy()))
        return Add(*(c + args))
    # hmm...this destroys the reps and returns {}
    #return {v:k for k,v in reps.items()}, bottom_up(e, do)
    return reps, bottom_up(e, do)
>>> e1
(y1 + z2)*(y1 + y3 + z1 + z2)
>>> r, e = collect_pairs(e1,'y','z')
>>> expand(e).xreplace({v:k for k,v in r.items()})
(y1 + z1)*(y1 + z2) + (y1 + z2)*(y3 + z2)
This works with the fully expanded e1 if you factor it first:
>>> e2 = factor(expand(e1)); e2
(y1 + z2)*(y1 + y3 + z1 + z2)
>>> r, e = collect_pairs(e2, 'y', 'z')
>>> expand(e).xreplace({v:k for k,v in r.items()})
(y1 + z1)*(y1 + z2) + (y1 + z2)*(y3 + z2)
Looking at the code you originally posted, I would suggest keeping the binomials together and only replacing them at the end, like this:
...
def single_variable_diff(perm_dict, k, m):
    ret_dict = {}
    for perm, val in perm_dict.items():
        if len(perm) < k:
            ret_dict[perm] = Add(ret_dict.get(perm, 0),
                reps.setdefault(U(var2[k], var3[m]), Dummy())*val, evaluate=False)
        else:
            ret_dict[perm] = Add(ret_dict.get(perm, 0),
                reps.setdefault(U(var2[perm[k-1]], var3[m]), Dummy())*val, evaluate=False)
...
reps = {}
U = lambda x,y: UnevaluatedExpr(Add(*ordered((x,y))))
ireps = lambda: {v:k for k,v in reps.items()}
perms=[]
curperm = []
...
coeff_perms.sort(key=lambda x: (inv(x),*x))
def touch(e):
    from sympy.core.traversal import bottom_up
    def do(e):
        return e if not e.args else e.func(*e.args)
    return bottom_up(e, do)

undo = ireps()
for perm in coeff_perms:
    val = touch(coeff_dict[perm]).expand().xreplace(undo)
    print(f"{str(perm):>{width}} {str(val)}")
(3, 4, 1, 2) will be given in terms of binomial products, but some elements will not -- they are just sums of binomials. In order to keep them together, you can create them as UnevaluatedExpr, e.g. with the U lambda defined above. I am guessing you don't have to use evaluate=False and will, then, not need the touch function.
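For illustration, here is a small self-contained example (symbols and values are my own) of how the U helper keeps a binomial grouped until you call doit():
from sympy import symbols, Add, UnevaluatedExpr, ordered, expand
y1, z2 = symbols('y1 z2')
U = lambda x, y: UnevaluatedExpr(Add(*ordered((x, y))))
expr = 3*U(y1, z2) + U(y1, z2)*U(y1, z2)
print(expr)           # the (y1 + z2) factors stay grouped
print(expand(expr))   # still grouped; expand does not look inside UnevaluatedExpr
print(expr.doit())    # releases the wrappers so normal evaluation resumes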
I am trying to search for integer solutions to the equation:
y^2 + x^2 = 2n^2
If I search this in wolfram alpha, they are all found almost immediately even for very large n. When I implemented a brute force approach it was very slow:
def psearch(n, count):
    for x in range(0, n):
        for y in range(0, n):
            if x*x + y*y == 2*n**2:
                print(x, y)
                count += 1
    return count
So I assume there is a much faster way to get all of the integer solutions to the equation above. How can I do this in python so that it will have much lower runtime?
Note: I have seen this question; however, it is about finding lattice points within a circle, not the integer solutions to the equation of the circle. Also, I am interested in finding the specific solutions, not just the number of solutions.
Edit: I am still looking for something an order of magnitude faster. Here is an example: n=5 should have 12 integer solutions; to find what those are, search this equation on Wolfram Alpha.
Edit 2: @victor zen gave a phenomenal answer to the problem. Can anyone think of a way to optimize his solution further?
In your algorithm, you're searching for all possible y values. This is unnecessary. The trick here is to realize that
y^2 + x^2 = 2n^2
directly implies that
y^2 = 2n^2-x^2
so that means you only have to check that 2n^2-x^2 is a perfect square. You can do that by
y2 = 2*n*n - x*x
#check for perfect square
y = math.sqrt(y2)
if int(y + 0.5) ** 2 == y2:
#We have a perfect square.
Also, in your algorithm, you are only checking x values up to n. This is incorrect. Since y^2 will always be positive or zero, we can determine the highest x value we need to check by setting y^2 to its lowest value (i.e. 0). Consequently, we need to check all integer x values satisfying
x^2 <= 2n^2
which reduces to
abs(x) <= sqrt(2)*n.
Combine this with the optimization of only checking the top quadrant, and you have an optimized psearch of
import math

def psearch(n):
    count = 0
    top = math.ceil(math.sqrt(2*n*n))
    for x in range(1, top):
        y2 = 2*n*n - x*x
        # check for perfect square
        y = math.sqrt(y2)
        if int(y + 0.5) ** 2 == y2:
            count += 4
    return count
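As a quick sanity check against the n=5 example from the question (my own usage example):
print(psearch(5))  # 12, matching the twelve solutions mentioned above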
It is enough to search inside the first octant 0 < y < x (the four solutions (±n, ±n) are obvious, and by symmetry a solution (x, y) yields 8 copies (±x, ±y), (±y, ±x)).
By monotonicity, for a given y there is at most one x. You can find it by following the circular arc incrementally, decreasing y then adjusting x. If you maintain the condition x²+y²≤2n² as tightly as possible, you get the code below which is optimized to use only elementary integer arithmetic (for efficiency, 2x is used instead of x).
for n in range(1, 101):  # reproduces the table below
    # work in doubled coordinates; d tracks ((x**2 + y**2) - 8*n*n) / 4
    x, y, d = 2 * n, 2 * n, 0
    while y > 0:
        y, d = y - 2, d - y + 1      # step y down, keeping d consistent
        if d < 0:                    # dropped inside the circle: step x up
            x, d = x + 2, d + x + 1
        if d == 0:                   # exactly on the circle: (x/2)**2 + (y/2)**2 == 2*n**2
            print(x >> 1, '² + ', y >> 1, '² = 2.', n, '²', sep='')
Here are all solutions for n between 1 and 100:
7² + 1² = 2.5²
14² + 2² = 2.10²
17² + 7² = 2.13²
21² + 3² = 2.15²
23² + 7² = 2.17²
28² + 4² = 2.20²
31² + 17² = 2.25²
35² + 5² = 2.25²
34² + 14² = 2.26²
41² + 1² = 2.29²
42² + 6² = 2.30²
46² + 14² = 2.34²
49² + 7² = 2.35²
47² + 23² = 2.37²
51² + 21² = 2.39²
56² + 8² = 2.40²
49² + 31² = 2.41²
63² + 9² = 2.45²
62² + 34² = 2.50²
70² + 10² = 2.50²
69² + 21² = 2.51²
68² + 28² = 2.52²
73² + 17² = 2.53²
77² + 11² = 2.55²
82² + 2² = 2.58²
84² + 12² = 2.60²
71² + 49² = 2.61²
79² + 47² = 2.65²
85² + 35² = 2.65²
89² + 23² = 2.65²
91² + 13² = 2.65²
92² + 28² = 2.68²
98² + 14² = 2.70²
103² + 7² = 2.73²
94² + 46² = 2.74²
93² + 51² = 2.75²
105² + 15² = 2.75²
102² + 42² = 2.78²
112² + 16² = 2.80²
98² + 62² = 2.82²
97² + 71² = 2.85²
113² + 41² = 2.85²
115² + 35² = 2.85²
119² + 17² = 2.85²
123² + 3² = 2.87²
119² + 41² = 2.89²
126² + 18² = 2.90²
119² + 49² = 2.91²
133² + 19² = 2.95²
137² + 7² = 2.97²
124² + 68² = 2.100²
140² + 20² = 2.100²
You could maybe optimize this algorithm by considering only one quadrant and multiplying the count by 4.
import math

def psearch(n, count):
    for x in range(0, 2*n + 1):
        ysquare = 2*(n**2) - x*x
        if ysquare < 0:
            break
        y = int(math.sqrt(ysquare))
        if ysquare == y*y:
            print(x, y)
            count += 1
    return count

print(psearch(13241324, 0) * 4)
OUTPUT
(1269716, 18682964)
(1643084, 18653836)
(11027596, 15134644)
(12973876, 13503476)
(13241324, 13241324)
(13503476, 12973876)
(15134644, 11027596)
(18653836, 1643084)
(18682964, 1269716)
36
I am just looking at the Python module SymPy and trying, as a simple (toy) example, to fit a function f(x) by a set of functions g_i(x) on a given interval.
import sympy as sym
def functionFit(f, funcset, interval):
    N = len(funcset) - 1
    A = sym.zeros(N+1, N+1)
    b = sym.zeros(N+1, 1)
    x = sym.Symbol('x')
    for i in range(N+1):
        for j in range(i, N+1):
            A[i,j] = sym.integrate(funcset[i]*funcset[j],
                                   (x, interval[0], interval[1]))
            A[j,i] = A[i,j]
        b[i,0] = sym.integrate(funcset[i]*f, (x, interval[0], interval[1]))
    c = A.LUsolve(b)
    u = 0
    for i in range(len(funcset)):
        u += c[i,0]*funcset[i]
    return u, c
x = sym.Symbol('x')
f = 10*sym.cos(x)+3*sym.sin(x)
fooset=(sym.sin(x), sym.cos(x))
interval = (1,2)
print("function to approximate:", f)
print("Basic functions:")
for foo in fooset:
    print(" - ", foo)
u,c = functionFit(f, fooset, interval)
print()
print("simplified u:")
print(sym.simplify(u))
print()
print("simplified c:")
print(sym.simplify(c))
functionFit returns the fitted function u(x) together with the coefficients c.
In my case
f(x) = 10 * sym.cos(x) + 3 * sym.sin(x)
and I want to fit it as a linear combination of sin(x) and cos(x).
So the coefficients should be 3 and 10.
The result is OK, but for u(x) I get
u(x) = (12*sin(2)**2*sin(4)*sin(x) + 3*sin(8)*sin(x) + 12*sin(2)*sin(x) + 40*sin(2)**2*sin(4)*cos(x) + 10*sin(8)*cos(x) + 40*sin(2)*cos(x))/(2*(sin(4) + 2*sin(2)))
The full output is:
Function to approximate: 3*sin(x) + 10*cos(x)
Basic functions:
- sin(x)
- cos(x)
Simplified u: (12*sin(2)**2*sin(4)*sin(x) + 3*sin(8)*sin(x) + 12*sin(2)*sin(x) + 40*sin(2)**2*sin(4)*cos(x) + 10*sin(8)*cos(x) + 40*sin(2)*cos(x))/(2*(sin(4) + 2*sin(2)))
Simplified c: Matrix([[3], [10]])
which is indeed the same as 10 * cos(x) + 3 * sin(x).
However, I wonder why it is not simplified to that expression. I tried several of the available simplification functions, but none of them gives the expected result.
Is there something wrong in my code, or are my expectations too high?
I don't know if this is a solution for you, but I'd simply use the .evalf method that every SymPy expression has:
In [26]: u.simplify()
Out[26]: (12*sin(2)**2*sin(4)*sin(x) + 3*sin(8)*sin(x) + 12*sin(2)*sin(x) + 40*sin(2)**2*sin(4)*cos(x) + 10*sin(8)*cos(x) + 40*sin(2)*cos(x))/(2*(sin(4) + 2*sin(2)))
In [27]: u.evalf()
Out[27]: 3.0*sin(x) + 10.0*cos(x)
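If you would rather recover exact integer coefficients instead of floats, nsimplify should be able to rationalize the evalf result (a sketch, not part of the original answer):
from sympy import nsimplify
print(nsimplify(u.evalf(), rational=True))  # should give 3*sin(x) + 10*cos(x)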
I have the following, where a, b, c, d are all SymPy symbols:
p = a + b + 2*c + d
q = a + b + c + d
I wanted to do this:
p = p.subs(a + b + c + d, q)
and wanted to get this:
p = q + c
but p remains unchanged.
What should I be doing to get to p = q + c?
The matching performed by subs() seems to be looking for strict ordering, and hence it didn't split up the 2*c term.
Should I be using replace() instead of subs()?
EDIT:
code as follows:
import sympy
a, b, c, d = sympy.symbols('a,b,c,d')
p = a + b + 2*c + d
q = a + b + c + d
r = p.subs(a + b + c + d, q)
print(r)
EDIT #2
https://groups.google.com/forum/#!topic/sympy/b_Yv6s15Y0Q
My problem is similar to the one in the groups.google link, except that in that case subs() does its job.
First, a comment about your code: since you define q to be the sum a+b+c+d, you would never see the change even if the substitution did work (and it doesn't). Something that does work is the following:
>>> p = a + b + 2*c + d
>>> q = var('q')
>>> p.extract_additively(a+b+c+d) + q
c + q
There is also an extract_multiplicatively method.
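Another common workaround (a sketch, not part of the answer above) is to solve the target sum for one of its symbols and substitute that instead; automatic collection of like terms then does the rest:
from sympy import symbols
a, b, c, d, q = symbols('a b c d q')
p = a + b + 2*c + d
# replacing a by q - b - c - d encodes a + b + c + d == q
print(p.subs(a, q - b - c - d))  # c + q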