While solving the Palindrome problem on CodeChef I wrote an algorithm that gave a TLE on test cases larger than 10^6. So, taking a lead from people who had already solved it, I wrote the following code in Python.
################################################
### http://www.codechef.com/problems/TAPALIN ###
################################################
def pow(b, e, m):
    # right-to-left binary (modular) exponentiation: computes (b**e) % m
    r = 1
    while e > 0:
        if e % 2 == 1:
            r = (r*b) % m
        e = e >> 1
        b = (b*b) % m
    return r

def cal(n, m):
    from math import ceil
    c = 280000002                      # modular inverse of 25 mod 10**9 + 7
    a = pow(26, int(ceil(n/2)), m)
    if n % 2 == 0:
        return ((52*(a-1+m) % m)*c) % m
    else:
        return ((52*(((a-1+m)*c) % m)) % m + (a*26) % m) % m

c = int(raw_input())
m = 1000000007
for z in range(c):
    print cal(int(raw_input()), m)
The pow function is the right-to-left binary exponentiation method. What I do not understand is:
Where did the value 280000002 come from?
Why do we need to perform so many mod operations?
Is this some famous algorithm that I am unaware of?
Almost every submitted solution on CodeChef makes use of this very algorithm, but I am unable to decipher its working. Any link to the theory would be appreciated.
I am still unable to figure out exactly what is happening here. Can anyone write pseudocode for this formula/algorithm, and help me understand the time complexity of this code? Another thing that puzzles me is that if I write the code as:
################################################
### http://www.codechef.com/problems/TAPALIN ###
################################################
def modular_pow(base, exponent):
    # same right-to-left binary exponentiation, with the modulus hard-coded
    result = 1
    while exponent > 0:
        if exponent % 2 == 1:
            result = (result * base) % 1000000007
        exponent = exponent >> 1
        base = (base*base) % 1000000007
    return result

c = int(raw_input())
from math import ceil
for z in range(c):
    n = int(raw_input())
    ans = modular_pow(26, int(ceil(n/2)))
    if n % 2 == 0:
        print ((52*(ans - 1 + 1000000007) % 1000000007)*280000002) % 1000000007
    else:
        print ((52*(((ans - 1 + 1000000007)*280000002) % 1000000007)) % 1000000007 + (ans*26) % 1000000007) % 1000000007
this improves the running time from 0.6 s to 0.4 s, although the best submissions run in 0.0 seconds. I am thoroughly confused.
The number 280000002 is the modular multiplicative inverse of 25 mod 10^9 + 7. Because 10^9 + 7 is prime, it can be computed simply as pow(25, 10^9 + 7 - 2, 10^9 + 7) (Fermat's little theorem). Read more here: http://en.wikipedia.org/wiki/Modular_multiplicative_inverse
And we need to perform so many mod operations because we don't want to work with big numbers ;-)
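For instance, you can reproduce the constant directly with Python's built-in three-argument pow (a quick check, not part of the submitted solution):

m = 10**9 + 7
inv25 = pow(25, m - 2, m)   # Fermat's little theorem: 25**(m-2) is the inverse of 25, since m is prime
print(inv25)                # 280000002
print(inv25 * 25 % m)       # 1, confirming it really is the inverse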
I had never seen this algorithm before, but walking through it with some of the easier test cases starts to reveal what is happening. (BTW, my guess is that everyone is using it because it was the top answer on CodeChef and everyone is just copying it; I don't think you have to assume it's the only way to do it.)
To answer your questions:
Where did the value 280000002 come from?
280000002 is the modular multiplicative inverse of 25 mod 1000000007. This means that the following congruence holds:
280000002 * 25 ≡ 1 (mod 1000000007)
Why do we need to perform so many mod operations?
Probably just to avoid dealing with huge numbers along the way. (Although there is some extra math in there that seems to me to make the numbers bigger than they need to be; see my note at the end about that.) Theoretically you could do one big mod at the end and get the same result, but the intermediate values would grow enormous, and arithmetic on numbers that large gets slow.
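As a rough illustration (the exponent here is just an arbitrary example), compare the two approaches:

m = 10**9 + 7
k = 100000
print(pow(26, k, m))            # reduced at every step: intermediates never exceed m*m
print((26 ** k).bit_length())   # unreduced power: roughly 470,000 bits, and it only grows with k

Reducing modulo m after each multiplication keeps every intermediate value small and each multiplication cheap.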
Is this some famous algorithm that I am unaware of?
Again, I doubt it. This isn't really an algorithm so much as a mashed-up math formula.
Speaking of math, there is some stuff in there that is questionable to me. It's been a while since I messed with this, but I'm pretty sure that (52*(a-1+m))%m will always be equivalent to (52*(a-1))%m, since 52*m mod m = 0. I'm not sure why you would add that huge number there; you may see a small performance gain if you drop it.
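If it helps, here is a sketch of where the formula appears to come from, assuming the task is to count palindromic strings of lengths 1 through n over a 26-letter alphabet (which is what the closed form seems to encode). Note that in Python 2 the n/2 inside ceil is already integer division, so the exponent passed to pow is effectively n // 2. A brute-force count agrees with the closed form for small n:

def count_bruteforce(n):
    # a palindrome of length L is determined by its first ceil(L/2) characters,
    # so there are 26**ceil(L/2) of them; sum over all lengths 1..n
    return sum(26 ** ((L + 1) // 2) for L in range(1, n + 1))

def count_closed_form(n, m=10**9 + 7):
    # geometric series: for even n = 2k the sum is 2*(26 + ... + 26**k) = 52*(26**k - 1)/25;
    # for odd n = 2k + 1 there is one extra term, 26**(k+1) = a*26.
    # Division by 25 is done modulo m by multiplying with its inverse, 280000002.
    inv25 = pow(25, m - 2, m)
    a = pow(26, n // 2, m)
    total = 52 * (a - 1) % m * inv25 % m
    if n % 2 == 1:
        total = (total + a * 26) % m
    return total

for n in range(1, 12):
    assert count_bruteforce(n) % (10**9 + 7) == count_closed_form(n)

The per-test-case cost is dominated by the modular exponentiation, i.e. O(log n) multiplications, which is why this approach fits comfortably within the time limit.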
I want to write a program that asks for the values of Q, y, b, x, and S0 and then finds the value of n from the equation in the following image (the same expression that appears in the code below).
I used fsolve to write this code:
from scipy.optimize import fsolve

def f(n, Q=float(input("Q=")), y=float(input("y=")), b=float(input("b=")), x=float(input("x=")), S_0=float(input("S0="))):
    return (1/n)*((y*(b+x*y))**(5/3))/((b+2*y*(1+x**2)**(1/2))**(2/3))*S_0-Q

a = fsolve(f, 1)
print(a)
print(f(a))
But it gives a wrong result for these inputs:
Q=21
y=7.645
b=2
x=1
S0=0.002
/usr/lib/python3/dist-packages/scipy/optimize/minpack.py:236: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
[ 1.]
[-20.68503025]
I ran this in Online Python. I don't know what this warning means, and the output is wrong: the answer should be n = 0.015 for this specific input. How can I fix this code?
I rearranged your equation, and this gets your expected result. I'm really not quite sure what the issue is, sorry!
return (S_0/Q)*((y*(b+x*y))**(5/3)/(b+2*y*(1+x**2)**(1/2))**(2/3))-n
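For reference, here is a self-contained version of that fix with the posted inputs hard-coded instead of read from input(), just so it can be run as-is; it prints roughly 0.015:

from scipy.optimize import fsolve

# the inputs from the question, hard-coded so the snippet runs on its own
Q, y, b, x, S_0 = 21.0, 7.645, 2.0, 1.0, 0.002

def f(n):
    # rearranged form: the expression is linear in n, which fsolve handles without trouble
    return (S_0/Q)*((y*(b+x*y))**(5/3)/(b+2*y*(1+x**2)**(1/2))**(2/3)) - n

print(fsolve(f, 1))   # approximately [0.015]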
I'm trying to solve an equation in Python without using any SciPy features. Here c = 5 and the equation is c = 10 - 20*(exp(-0.15*x) - exp(-0.5*x)).
How do I solve for x with a tolerance of 0.0001?
Pardon my intro level programming here guys. This is the first class I've ever taken.
from math import exp

c = 5
def x(c):
    c = 10 - 20(exp*(-0.15*x) - exp*(-0.5*x))
    return x(5)
You might want to have a look at SymPy. It's a dedicated algebraic symbol manipulation library for Python with a BSD license. If you're looking for a "stock"/standard library solution, then as others have mentioned you're going to have to do some homework and potentially implement your own solver.
As a closing thought, unless this is a class assignment or your boss has a pathological hatred of third-party open source libraries, there's really no good reason not to use one of the SciPy packages. IIRC, they're largely implemented as highly-optimized C binaries wrapped in Python modules, so you get blazingly fast performance and the ease-of-use of a Python API.
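To make that concrete, here is a minimal SymPy sketch, assuming the equation from the question with c = 5 moved to one side (the starting guess of 1 is arbitrary):

import sympy as sp

x = sp.symbols('x')
# 5 = 10 - 20*(exp(-0.15*x) - exp(-0.5*x)), rewritten as f(x) = 0
f = 10 - 20*(sp.exp(-0.15*x) - sp.exp(-0.5*x)) - 5

root = sp.nsolve(f, x, 1)   # numerical root search starting from x = 1
print(root)                 # about 0.98 with this starting point

nsolve is purely numerical; if you truly cannot use any third-party library, the bracketing ideas in the other answers are the way to go.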
It seems like you want to implement this "from scratch." A few hints:
We can simplify this a bit with algebra. What you really want is to find x such that exp(-0.15*x) + exp(-0.5*x) - 0.2 = 0
For a given value of x, you know how much error you have. For example, if x = 1, then c(1) = 1.267, so your error is 1.267. You need to keep "guessing" values until your error is less than 0.0001.
Math tells us that this function is monotonically decreasing; so, there is no point checking answers to the left of 1.
Hopefully you can solve it from these hints. But this is supposed to be an answer, so here is the code:
from math import exp

def theFunction(x):
    return exp(-0.15*x) + exp(-0.5*x) - 0.2

error = 1.267        # value of the function at the starting guess x = 1
x = 1
littleBit = 1        # current step size
while abs(error) > 0.0001:
    if error > 0:
        x += littleBit
    else:
        x -= littleBit
    oldError = error
    error = theFunction(x)
    if error*oldError < 0:   # we stepped past the root, so halve the step
        littleBit *= 0.5
print x
Note, the last three lines in the loop are a little bit 'clever': an easier solution would be to just set littleBit = 0.00001 and keep it constant throughout the program (this will be much slower, but will still do the job). As an exercise, I recommend implementing it that simpler way, then timing how long it takes both ways and seeing if you can figure out where the time savings comes from.
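For comparison, the textbook way to package the same idea is bisection. Here is a minimal sketch using this answer's form of the function, assuming the root is bracketed between 1 and 100 (the function is positive at 1 and negative for large x):

from math import exp

def the_function(x):
    return exp(-0.15*x) + exp(-0.5*x) - 0.2

lo, hi = 1.0, 100.0                  # bracket: the_function(lo) > 0, the_function(hi) < 0
while hi - lo > 0.0001:
    mid = 0.5 * (lo + hi)
    if the_function(mid) > 0:        # still left of the root (the function is decreasing)
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi))               # about 10.9 for this function

Bisection halves the interval on every step, so it reaches the 0.0001 tolerance in about 20 iterations regardless of where the root sits inside the bracket.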
I'm getting this error when I try to compute the logistic function for a data mining method I'm implementing:
RuntimeWarning: overflow encountered in exp
My code:
def logistic_function(x):
    # x = np.float64(x)
    return 1.0 / (1.0 + np.exp(-x))
If I understood correctly from some related questions, the problem is that np.exp() is returning a huge value. I saw suggestions to let NumPy ignore the warning, but the problem is that when I get this warning the results of my method are horrible, whereas when I don't get it they are as expected. So making NumPy ignore the warning is not a solution for me at all. I don't know what is wrong or how to deal with it.
I don't even know if this is a result of a bug because sometimes I get this error and sometimes not! I went through my code many times and everything looks correct!
You should compute the logistic function using either scipy.special.expit, which in recent enough SciPy is more stable than your solution (although earlier versions got it wrong), or by reducing it to tanh:
def logistic_function(x):
    return .5 * (1 + np.tanh(.5 * x))
This version of the function is stable, fast, and fairly accurate.
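For example, both the tanh form above and scipy.special.expit stay finite on inputs that make the naive 1/(1 + exp(-x)) overflow:

import numpy as np
from scipy.special import expit

def logistic_function(x):
    return .5 * (1 + np.tanh(.5 * x))   # the tanh-based form from above

x = np.array([-1000.0, -20.0, 0.0, 20.0, 1000.0])
print(logistic_function(x))   # roughly [0, 2e-09, 0.5, 1, 1], with no overflow warning
print(expit(x))               # the same values from SciPy's implementation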
When I run this program, I get no solution at the end, but there should be a solution (I believe). Any idea what I am doing wrong? If you take away the Q from the e2 equation, it seems to work correctly.
#!/usr/bin/python
from sympy import *
a,b,w,r = symbols('a b w r',real=True,positive=True)
L,K,Q = symbols('L K Q',real=True,positive=True)
e1=K
e2=(K*Q/2)**(a)
print solve(e1-e2,K)
It works if we do the following:
Set Q=1, or
Change e2 to e2=(K**a)*(Q/2)**(a)
I would still like it to work in the original way though, as my equations are more complicated than this.
This is just a deficiency of solve. solve is based mostly on heuristics, so sometimes it isn't able to figure out how to solve an equation when it's given in a particular form. The workaround here is to just call expand_power_base on the expression, since SymPy is able to solve K - K**a*(Q/2)**a:
In [8]: print(solve(expand_power_base(e1-e2),K))
[(2/Q)**(a/(a - 1))]
It's also worth pointing out that the result of [] from solve does not in any way mean that there are no solutions, only that solve was unable to find any. See the first note at http://docs.sympy.org/latest/tutorial/solvers.html.
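As a quick numerical sanity check on that result (the values a = 0.5 and Q = 3 below are arbitrary), substituting the returned solution back into e1 - e2 gives zero:

from sympy import symbols, solve, expand_power_base

a, Q, K = symbols('a Q K', real=True, positive=True)
e1 = K
e2 = (K*Q/2)**a

sol = solve(expand_power_base(e1 - e2), K)[0]          # (2/Q)**(a/(a - 1))
residual = (e1 - e2).subs(K, sol).subs({a: 0.5, Q: 3})
print(residual.evalf())                                # 0, up to floating-point noise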
I am porting code from Matlab to Python and am having trouble finding a replacement for the firls() routine. It is used for least-squares, linear-phase Finite Impulse Response (FIR) filter design.
I looked at scipy.signal and nothing there looked like it would do the trick. Of course I was able to replace my remez and freqz algorithms, so that's good.
On one blog I found an algorithm that implemented this filter without weighting, but I need one with weights.
Thanks, David
The firls equivalent in Python now appears to be implemented as part of the scipy.signal package:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.firls.html#scipy.signal.firls
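A minimal usage sketch (the band edges, desired gains, and weights below are just an arbitrary low-pass example; note that firls requires an odd number of taps):

from scipy.signal import firls

numtaps = 51                      # must be odd for scipy.signal.firls
bands = [0.0, 0.3, 0.4, 1.0]      # band edges, with 1.0 being the Nyquist frequency (default fs=2)
desired = [1.0, 1.0, 0.0, 0.0]    # desired gain at each band edge
weight = [1.0, 10.0]              # one weight per band: penalize stop-band error 10x more
taps = firls(numtaps, bands, desired, weight=weight)
print(taps.shape)                 # (51,)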
Also, I agree with everything that pev hall stated above, especially that firls is optimal in many situations (such as when overall signal-to-noise ratio is being optimized for a given number of taps), and that you should not use the boxcar window as a substitute; they are not equivalent at all! firls generally outperforms all window and frequency-sampling approaches when designing traditional FIR filters.
To my current understanding, scipy.signal and Octave only support odd-length (even-order) least-squares filters. In cases where I need an even length (a Type II or Type IV filter), I resort to the windowed design approach, specifically with a Kaiser window. I have found the Kaiser window solution to come quite close to the optimum least-squares solution.
This blog post contains code detailing how to use scipy.signal to implement FIR filters.
Obviously, this post is somewhat dated, but maybe it is still interesting for some:
I think there are two near-equivalents to firls in Python:
You can try the firwin function with window='boxcar'. This is similar to Matlab, where fir1 with a boxcar window delivers the same (or at least very similar) results as firls.
You could also try the firwin2 method (frequency sampling method, similar to fir2 in Matlab), again using window='boxcar'
I did try one example from the Matlab firls reference and achieved near-identical results for:
Matlab:
F = [0 0.3 0.4 0.6 0.7 0.9];
A = [0 1 0 0 0.5 0.5];
b = firls(24,F,A,'hilbert');
Python:
F = [0, 0.3, 0.4, 0.6, 0.7, 0.9, 1]
A = [0, 1, 0, 0, 0.5, 0.5, 0]
bb = sig.firwin2( 25, F,A, window='boxcar', antisymmetric=True )
I had to pass N = 25 (the number of taps, corresponding to Matlab's order 24), and I also had to add another data point (F = 1, A = 0), which Python insisted upon; the option antisymmetric=True is only necessary for this special case (a Hilbert filter).
This post is really in response to
You can try the firwin function with window='boxcar'...
Don't use boxcar: it means no window at all (it is ideal, but only works "ideally" with an infinite number of multipliers, i.e. a sinc in time). The whole purpose of using a window is to reduce the number of multipliers required to get good stop-band attenuation. See Window function.
When comparing filters please use dB/log scale.
Scipy not having firls (FIR least squares filter) function is a large limitation (as it generates the optimum filter in many situations).
Remez has its place, but the flat roll-off is a real killer when you're trying to get the best results (and not just meeting some manager's spec). (Warning: the scipy remez implementation can give amplification in the stop band; see the plot at the bottom.)
If you are using Python (or need to use some window), I recommend the Kaiser window, which gets very good results and can easily be tweaked for your attenuation vs. transition vs. multipliers requirement (attenuation (in dB) = 2.285 * (multipliers - 1) * pi * width + 7.95). Its performance is not quite as good as firls, but it has the benefit of being fast and easy to calculate (great if you don't store the coefficients).
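For what it's worth, scipy.signal has a helper, kaiserord, that performs this tap-count/beta calculation from Kaiser's formula; the attenuation and transition-width numbers below are just placeholders:

from scipy.signal import kaiserord, firwin

ripple_db = 65.0    # desired stop-band attenuation in dB (placeholder value)
width = 0.05        # transition width as a fraction of the Nyquist rate (placeholder value)

numtaps, beta = kaiserord(ripple_db, width)
taps = firwin(numtaps, cutoff=0.3, window=('kaiser', beta))   # low-pass at 0.3*Nyquist
print(numtaps, beta)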
I found a firls() implementation attached here, in SciPy ticket 648.
Minor changes to get it working:
Swap the following two lines:
bands, desired, weight = array(bands), array(desired), array(weight)
if weight==None : weight = ones(len(bands)/2)
import roots from numpy instead of scipy.signal
Since version 0.18, released in July 2016, SciPy includes an implementation of firls as scipy.signal.firls.
It seems unlikely that you'll find exactly what you seek already written in Python, but perhaps the Matlab function's help page gives or references a description of the algorithm?