I am attempting to find d in Python, such that
(2 ** d) mod n = s2
Where n =
132177882185373774813945506243321607011510930684897434818595314234725602493934515403833460241072842788085178405842019124354553719616350676051289956113618487539608319422698056216887276531560386229271076862408823338669795520077783060068491144890490733649000321192437210365603856143989888494731654785043992278251
and s2 =
18269259493999292542402899855086766469838750310113238685472900147571691729574239379292239589580462883199555239659513821547589498977376834615709314449943085101697266417531578751311966354219681199183298006299399765358783274424349074040973733214578342738572625956971005052398172213596798751992841512724116639637
I am not looking for the solution, but for a reasonably fast way to do this. I've tried using pow and plugging in different values, but this is slow and never gets the solution. How can I find d?
There is no known efficient algorithm that can solve your problem. It is called the discrete logarithm problem, and some cryptosystems depend on its hardness (you can't find the solution quickly unless you know the factorization of n).
Look at the second answer to Is it possible to get RSA private key knowing public key and set of "original data=>encrypted data" entries?. A known-plaintext attack is no easier than known-ciphertext.
The only known discrete logarithm solvers are built around knowing the factors. If you don't have the factors, you need to generate them.
The best reasonable-time algorithm for this is Shor's algorithm. The problem is that you need a quantum computer with enough qubits, and nobody's built one large enough for your sample data yet. And it looks like it'll be quite a few years before anyone does; currently people are still excited about factoring numbers like 15 and 21.
If you want to use classical computing, the best known algorithms are nowhere near "reasonably fast". I believe someone recently showed that the Bonn results on 2^1039-1 should be reproducible in under 4 months with modern PCs. Another 5 years, and maybe it'll be down to a month.
It shouldn't surprise you that there are no known reasonably fast algorithms, because if there were, most public-key encryption would be crackable and therefore worthless. It would be major news if someone gave you the answer you're looking for. (Is there an SO question for "Is P=NP?")
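For a sense of scale, here is a sketch of a generic method, baby-step giant-step, which needs roughly sqrt(n) time and memory: fine for toy parameters, utterly infeasible for a modulus the size of yours (it also assumes gcd(2, n) = 1 and Python 3.8+ for the modular inverse via pow):

from math import isqrt

def bsgs(g, h, n):
    """Baby-step giant-step: return d with pow(g, d, n) == h, or None.
    Needs O(sqrt(n)) time and memory, so it only works for small n."""
    m = isqrt(n) + 1
    baby = {pow(g, j, n): j for j in range(m)}   # baby steps: g**j mod n
    giant = pow(g, -m, n)                        # g**(-m) mod n (needs gcd(g, n) == 1)
    gamma = h % n
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]           # d = i*m + j
        gamma = gamma * giant % n                # giant step
    return None

print(bsgs(2, 9, 23))   # 5, since pow(2, 5, 23) == 9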
I'm looking for an algorithm with the fastest time per query for a problem similar to nearest-neighbor search, but with two differences:
I need to only approximately confirm (tolerating Type I and Type II error) the existence of a neighbor within some distance k or return the approximate distance of the nearest neighbor.
I can query many at once
I'd like better throughput than the approximate nearest neighbor libraries out there (https://github.com/erikbern/ann-benchmarks), which seem better designed for single queries. In particular, the algorithmic relaxation of the first criterion seems like it should leave room for an algorithmic shortcut, but I can't find any solutions in the literature, nor can I figure out how to design one.
Here's my current best solution, which operates at about 10k queries/sec per CPU. I'm looking for something close to an order-of-magnitude speedup if possible.
import numpy as np
import annoy

vector_size = 128  # dimensionality of the vectors (example value; not specified in the original snippet)

sample_vectors = np.random.randint(low=0, high=2, size=(10000, vector_size))
new_vectors = np.random.randint(low=0, high=2, size=(100000, vector_size))

# Build an Annoy index over the sample vectors
ann = annoy.AnnoyIndex(vector_size, metric='hamming')
for i, v in enumerate(sample_vectors):
    ann.add_item(i, v)
ann.build(20)

# Query one vector at a time
for v in new_vectors:
    print(ann.get_nns_by_vector(v, n=1, include_distances=True))
I'm a bit skeptical of benchmarks such as the one you have linked: in my experience, the precise definition of the problem at hand far outweighs the merits of any one algorithm measured across a set of other (possibly similar-looking) problems.
More simply put, an algorithm being a high performer on a given benchmark does not imply it will be a higher performer on the problem you care about. Even small or apparently trivial changes to the formulation of your problem can significantly change the performance of any fixed set of algorithms.
That said, given the specifics of the problem you care about I would recommend the following:
use the cascading approach described in the paper [1]
use SIMD operations (either SSE on Intel chips or GPUs) to accelerate; the nearest-neighbour problem is one where operations closer to the metal and parallelism can really shine
tune the parameters of the algorithm to maximize your objective; in particular, the algorithm of [1] has a few easy-to-tune parameters which dramatically trade performance for accuracy, so make sure you perform a grid search over these parameters to find the sweet spot for your problem
Note: I have recommended the paper [1] because I have tried many of the algorithms listed in the benchmark you linked and found them all inferior (for the task of image reconstruction) to the approach listed in [1] while at the same time being much more complicated than [1], both undesirable properties. YMMV depending on your problem definition.
I appreciate the answers; they gave me some ideas, but I'll answer my own question, as I found a solution that mostly resolved it and maybe it will help someone else in the future.
I used one of the libraries linked in the benchmarks, hnswlib, as it not only has slightly better performance than annoy but also offers a bulk-query option. hnswlib's algorithm also allows a highly flexible performance/accuracy tradeoff in favor of performance, which is well suited to the highly error-tolerant approximate checking I want to do. Further, even though the parallelization improvements are far from linear per core, it's still something. The above factors combined for a ~5x speedup in my particular situation.
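Roughly, the bulk-query path looks like this (the dimensionality, index parameters, and the use of the 'l2' space as a stand-in for Hamming distance are illustrative placeholders, not my tuned settings):

import hnswlib
import numpy as np

dim = 128  # illustrative dimensionality
sample_vectors = np.random.randint(0, 2, size=(10000, dim)).astype(np.float32)
new_vectors = np.random.randint(0, 2, size=(100000, dim)).astype(np.float32)

# hnswlib has no native 'hamming' space, but 'l2' returns squared Euclidean
# distance, which equals Hamming distance on 0/1 vectors.
index = hnswlib.Index(space='l2', dim=dim)
index.init_index(max_elements=len(sample_vectors), ef_construction=100, M=16)
index.add_items(sample_vectors)

index.set_ef(10)  # lower ef trades accuracy for speed, which suits error-tolerant checks

# Bulk query: all vectors at once, multi-threaded inside the library.
labels, distances = index.knn_query(new_vectors, k=1, num_threads=-1)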
As ldog said, your mileage may vary depending on your problem statement.
I'm new to quantum computing. I mean extremely new. I saw that there is some kind of algorithm called Grover's search algorithm. I have read that it searches through a database containing N elements in order to find a specific element. I also read that standard computers would take many, many years to do this while quantum computers would do it in just a few seconds. And that is what confuses me the most. Here is how I understand it:
Let's say we want to search a database containing 50,000 different names and we are looking for the name "Jack". A standard computer wouldn't take years to do that, right? I think it's a matter of seconds or minutes, as searching through a database of names (which is just text) won't take long...
Example in python:
names = ["Mark", "Bob", "Katty", "Susan", "Jack"]
for i in range(len(names)):
if names[i] == "Jack":
print("It's Jack!")
else:
print("It's not Jack :(")
That's how I understand it. So let's imagine this list contains 50,000 names and we want to search for "Jack". I guess it wouldn't take long.
So how does this Grover's algorithm work? I really can't figure it out.
Grover's search is indeed not a good replacement for classical database lookup methods. (Note that classical databases will have classical indices in them that will speed up the lookup way beyond your implementation.) You can see this paper for a discussion of practical applications of Grover search.
It is more correct to think about the oracle as a tool to recognize the answer, not to find it. For example, if you're looking to solve a SAT problem, the oracle circuit will encode the Boolean formula for a specific instance of a problem you're trying to solve.
If you were to use Grover's algorithm for database search, the oracle would have to encode the condition you're searching for, but also the criteria of whether the element is in a database. For example, if you're looking for a name starting with A, the oracle needs to recognize all strings starting with A, but it also needs to recognize which of the strings are present in the database - otherwise the algorithm will yield a random string starting with A, which is probably not what you were looking for.
Grover's algorithm has practical application when generalized to amplitude amplification, which shows up as a component of many other quantum algorithms. Amplitude amplification is a way of improving the success likelihood of a probabilistic quantum algorithm.
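To make the sqrt(N) behaviour concrete, here is a tiny classical state-vector simulation of Grover iterations in plain NumPy (no quantum hardware involved; the "database" is just an index that the oracle recognizes):

import numpy as np

N = 1024        # size of the search space (think: 1024 names)
marked = 42     # the index the oracle recognizes ("Jack")

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition over all indices

iterations = int(np.pi / 4 * np.sqrt(N))  # ~(pi/4) * sqrt(N) = 25 for N = 1024
for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state      # diffusion: inversion about the mean

print(iterations, state[marked] ** 2)     # probability of measuring `marked` is close to 1

So instead of checking the N entries one by one, the marked amplitude gets boosted over about sqrt(N) iterations, which is where the quadratic speedup comes from.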
How do I find the biggest sum of items not exceeding some value? For example, I have 45 values like these: 1.0986122886681098, 1.6094379124341003, 3.970291913552122, 3.1354942159291497, 2.5649493574615367. I need to find the biggest possible combination not exceeding 30.7623.
I can't use brute force to find all combinations, as the number of combinations will be huge. So I need to use some kind of greedy algorithm.
This is an instance of the Knapsack problem. It is NP-hard, so with 45 items you'll have to use some heuristic algorithm (Hill Climbing, for example) to find an acceptable estimate. To find the optimal solution, you have no option other than trying all possibilities, which is infeasible in general. Knowledge of your distribution might change that: if many of the items by themselves exceed the limit, they can be discarded; or if the limit is very close to the sum of all numbers, you might only need to find a combination of up to about 5 items to exclude, and 45 choose 5 is still feasible.
What you're searching for is the knapsack problem, I guess. Does this solution help you?
http://rosettacode.org/wiki/Knapsack_problem/Unbounded/Python_dynamic_programming
Have a look at the solution under "1.2 DP, single size dimension". I think this is not brute force. You should be able to adapt it to your problem.
You can try to find an approximate solution:
Multiply your floats by an appropriate number (for example, 100), and do the same for the limit.
Round the floats to the nearest integers (or ceil them to larger integers), e.g. Round(1.0986122886681098 * 100) = 110.
Solve the subset sum problem for the integers with DP.
Check that the sum of the chosen floats doesn't exceed the limit (due to rounding errors).
If you have a lot of very close float numbers, you might want to refine the solution with optimisation algorithms (like the Hill Climbing mentioned above).
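A minimal sketch of that scaled-integer approach (the scale factor and the reachable-sums bookkeeping are just one way to set up the DP):

import math

def best_subset(values, limit, scale=1000):
    """Largest subset sum <= limit, after scaling the floats to integers.
    Using ceil for the scaled weights keeps the float sum under the limit."""
    weights = [math.ceil(v * scale) for v in values]
    limit_i = math.floor(limit * scale)

    reachable = {0: []}   # scaled sum -> indices of the items used to reach it
    for idx, w in enumerate(weights):
        for s, items in list(reachable.items()):   # snapshot: each item used at most once
            t = s + w
            if t <= limit_i and t not in reachable:
                reachable[t] = items + [idx]

    best = max(reachable)
    picked = [values[i] for i in reachable[best]]
    return picked, sum(picked)

values = [1.0986122886681098, 1.6094379124341003, 3.970291913552122,
          3.1354942159291497, 2.5649493574615367]
print(best_subset(values, 30.7623))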
This problem can be phrased as a zero-one assignment problem, and solved with a linear programming package, such as GLPK, which can handle integer programming problems. The problem is to find binary variables x[i] such that the sum of x[i]*w[i] is as large as possible, and less than the prescribed limit, where w[i] are the values which are added up.
My advice is to use an existing package; combinatorial optimization algorithms are generally very complex. There is probably a Python interface for some package you can use; I don't know if GLPK has such an interface, but probably some package does.
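For illustration, here is what that 0-1 model looks like in PuLP (one Python modelling layer among several; it bundles the CBC solver rather than GLPK, but the formulation is the same):

from pulp import LpProblem, LpMaximize, LpVariable, lpSum

values = [1.0986122886681098, 1.6094379124341003, 3.970291913552122,
          3.1354942159291497, 2.5649493574615367]
limit = 30.7623

prob = LpProblem("max_sum_under_limit", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(values))]  # x[i] = 1 if item i is taken
prob += lpSum(v * xi for v, xi in zip(values, x))            # objective: maximize the sum
prob += lpSum(v * xi for v, xi in zip(values, x)) <= limit   # capacity constraint
prob.solve()

picked = [v for v, xi in zip(values, x) if xi.value() == 1]
print(picked, sum(picked))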
I would like to compare different methods of finding roots of functions in Python (like Newton's method or other simple calculus-based methods). I don't think I will have too much trouble writing the algorithms.
What would be a good way to make the actual comparison? I read up a little bit about Big-O. Would this be the way to go?
The answer from @sarnold is right -- it doesn't make sense to do a Big-O analysis.
The principal differences between root finding algorithms are:
rate of convergence (number of iterations)
computational effort per iteration
what is required as input (e.g. do you need to know the first derivative, do you need to set lo/hi limits for bisection, etc.)
what functions it works well on (e.g. works fine on polynomials but fails on functions with poles)
what assumptions it makes about the function (e.g. a continuous first derivative, being analytic, etc.)
how simple the method is to implement
I think you will find that each of the methods has some good qualities, some bad qualities, and a set of situations where it is the most appropriate choice.
Big O notation is ideal for expressing the asymptotic behavior of algorithms as the inputs to the algorithms "increase". This is probably not a great measure for root finding algorithms.
Instead, I would think the number of iterations required to bring the actual error below some tolerance ε would be a better measure. Another measure would be the number of iterations required to bring the difference between successive iterates below some ε. (The difference between successive iterates is probably the better choice if you don't have exact root values at hand for your inputs. You would use a criterion such as successive differences to know when to terminate your root finders in practice, so you could or should use it here, too.)
While you can characterize the number of iterations required for different algorithms by the ratios between them (one algorithm may take roughly ten times more iterations to reach the same precision as another), there often isn't "growth" in the iterations as inputs change.
Of course, if your algorithms take more iterations with "larger" inputs, then Big O notation makes sense.
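As a concrete sketch of the successive-differences measure (Newton's method on f(x) = x**2 - 2 is just a stand-in example):

def iterations_to_tolerance(step, x0, eps=1e-12, max_iter=100):
    """Count iterations until successive iterates differ by less than eps."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if abs(x_new - x) < eps:
            return k
        x = x_new
    return max_iter

newton_step = lambda x: x - (x**2 - 2) / (2 * x)      # Newton step for f(x) = x**2 - 2
print(iterations_to_tolerance(newton_step, x0=1.0))   # typically 5 or 6 iterations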
Big-O notation is designed to describe how an algorithm behaves in the limit, as n goes to infinity. This is a much easier thing to work with in a theoretical study than in a practical experiment. I would pick things to study that you can easily measure and that people care about, such as accuracy and computer resources (time/memory) consumed.
When you write and run a computer program to compare two algorithms, you are performing a scientific experiment, just like somebody who measures the speed of light, or somebody who compares the death rates of smokers and non-smokers, and many of the same factors apply.
Try to choose an example problem or problems to solve that are representative, or at least interesting to you, because your results may not generalise to situations you have not actually tested. You may be able to increase the range of situations to which your results apply if you sample at random from a large set of possible problems and find that all your random samples behave in much the same way, or at least follow much the same trend. You can get unexpected results even when theoretical studies show that there should be a nice n log n trend, because theoretical studies rarely account for suddenly running out of cache, or out of memory, or usually even for things like integer overflow.
Be alert for sources of error, and try to minimise them, or have them apply to the same extent to all the things you are comparing. Of course you want to use exactly the same input data for all of the algorithms you are testing. Make multiple runs of each algorithm, and check how variable things are - perhaps a few runs are slower because the computer was doing something else at the time. Be aware that caching may make later runs of an algorithm faster, especially if you run them immediately after each other. Which time you want depends on what you decide you are measuring. If you have a lot of I/O to do, remember that modern operating systems and computers cache huge amounts of disk I/O in memory. I once ended up powering the computer off and on again after every run, as that was the only way I could find to be sure that the device I/O cache was flushed.
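For the timing part, the standard library's timeit already handles repeated runs, so run-to-run variance stays visible instead of being averaged away (a minimal sketch; newton_step here is a hypothetical stand-in for whatever you are benchmarking):

import timeit

times = timeit.repeat(
    stmt="newton_step(1.0)",
    setup="newton_step = lambda x: x - (x**2 - 2) / (2 * x)",
    repeat=5,        # several independent runs, so variance is visible
    number=100_000,  # calls per run
)
print(min(times), max(times))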
You can get wildly different answers for the same problem just by changing starting points. Pick an initial guess that's close to the root and Newton's method will give you a result that converges quadratically. Choose another in a different part of the problem space and the root finder will diverge wildly.
What does this say about the algorithm? Good or bad?
I would suggest you have a look at the following Python root-finding demo.
It is simple code, with a few different methods and comparisons between them (in terms of their rates of convergence).
http://www.math-cs.gordon.edu/courses/mat342/python/findroot.py
I just finished a project comparing the bisection, Newton, and secant root-finding methods. Since this is a practical case, I don't think you need to use Big-O notation; Big-O notation is more suitable for an asymptotic view. What you can do is compare them in terms of:
Speed - for example, Newton's method is the fastest when good conditions are met
Number of iterations - for example, bisection takes the most iterations
Accuracy - how often it converges to the right root if there is more than one root, or whether it converges at all
Input - what information it needs to get started; for example, Newton's method needs an x0 near the root in order to converge, and it also needs the first derivative, which is not always easy to find
Other - rounding errors
For the sake of visualization you can store the value of each iteration in arrays and plot them. Use a function whose roots you already know.
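A minimal sketch of that idea (the test function, starting points, and iteration counts are arbitrary choices):

import matplotlib.pyplot as plt

# Test function with a known root: f(x) = x**3 - 2, root = 2**(1/3).
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
true_root = 2 ** (1 / 3)

def bisection(a, b, n=30):
    xs = []
    for _ in range(n):
        m = (a + b) / 2
        xs.append(m)
        if f(a) * f(m) <= 0:   # root lies in [a, m]
            b = m
        else:
            a = m
    return xs

def newton(x0, n=15):
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

def secant(x0, x1, n=15):
    xs = [x0, x1]
    for _ in range(n):
        f0, f1 = f(xs[-2]), f(xs[-1])
        if f1 == f0:           # converged or stalled: avoid division by zero
            break
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

for name, xs in [("bisection", bisection(1, 2)),
                 ("newton", newton(1.5)),
                 ("secant", secant(1, 2))]:
    plt.semilogy([abs(x - true_root) for x in xs], label=name)
plt.xlabel("iteration")
plt.ylabel("absolute error")
plt.legend()
plt.show()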
Although this is a very old post, my 2 cents :)
Once you've decided which algorithmic method to use to compare them (your "evaluation protocol", so to say), then you might be interested in ways to run your challengers on actual datasets.
This tutorial explains how to do it, based on an example (comparing polynomial fitting algorithms on several datasets).
(I'm the author, feel free to provide feedback on the github page!)
Where can I find some real world typo statistics?
I'm trying to match people's input text to internal objects, and people tend to make spelling mistakes.
There are 2 kinds of mistakes:
typos - "Helllo" instead of "Hello" / "Satudray" instead of "Saturday" etc.
Spelling - "Shikago" instead of "Chicago"
I use Damerau-Levenshtein distance for the typos and Double Metaphone for spelling (Python implementations here and here).
I want to focus on the Damerau-Levenshtein (or simply edit) distance. The textbook implementations always use '1' for the weight of deletions, insertions, substitutions and transpositions. While this is simple and allows for nice algorithms, it doesn't match "reality" / "real-world probabilities".
Examples:
I'm sure the likelihood of "Helllo" ("Hello") is greater than "Helzlo", yet they are both 1 edit distance away.
"Gello" is closer than "Qello" to "Hello" on a QWERTY keyboard.
Unicode transliterations: What is the "real" distance between "München" and "Munchen"?
What should the "real world" weights be for deletions, insertions, substitutions, and transpositions?
Even Norvig's very cool spell corrector uses non-weighted edit distance.
BTW - I'm sure the weights need to be functions and not simple floats (per the above examples)...
I can adjust the algorithm, but where can I "learn" these weights? I don't have access to Google-scale data...
Should I just guess them?
EDIT - trying to answer user questions:
My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance).
I'm developing an NLP Travel Search engine, so my dictionary contains ~25K destinations (expected to grow to 100K), Time Expressions ~200 (expected 1K), People expressions ~100 (expected 300), Money Expressions ~100 (expected 500), "glue logic words" ("from", "beautiful", "apartment") ~2K (expected 10K) and so on...
Usage of the edit distance is different for each of the above word-groups. I try to "auto-correct when obvious", e.g. 1 edit distance away from only 1 other word in the dictionary. I have many other hand-tuned rules, e.g. Double Metaphone fix which is not more than 2 edit distance away from a dictionary word with a length > 4... The list of rules continues to grow as I learn from real world input.
"How many pairs of dictionary entries are within your threshold?": well, that depends on the "fancy weighting system" and on real world (future) input, doesn't it? Anyway, I have extensive unit tests so that every change I make to the system only makes it better (based on past inputs, of course). Most sub-6 letter words are within 1 edit distance from a word that is 1 edit distance away from another dictionary entry.
Today when there are 2 dictionary entries at the same distance from the input I try to apply various statistics to better guess which the user meant (e.g. Paris, France is more likely to show up in my search than Pārīz, Iran).
The cost of choosing a wrong word is returning semi-random (often ridiculous) results to the end-user and potentially losing a customer. The cost of not understanding is slightly less expensive: the user will be asked to rephrase.
Is the cost of complexity worth it? Yes, I'm sure it is. You would not believe the amount of typos people throw at the system and expect it to understand, and I could sure use the boost in Precision and Recall.
A possible source for real-world typo statistics would be Wikipedia's complete edit history.
http://download.wikimedia.org/
Also, you might be interested in AWB's RegExTypoFix
http://en.wikipedia.org/wiki/Wikipedia:AWB/T
I would advise you to check out the trigram algorithm. In my opinion it works better for finding typos than the edit distance algorithm. It should work faster as well, and if you keep the dictionary in a Postgres database you can make use of an index.
You may also find the Stack Overflow topic about Google's "Did you mean" useful.
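A minimal sketch of the idea (the padding and the Jaccard-style score here are one common convention; pg_trgm in Postgres works along similar lines):

def trigrams(word):
    padded = "  " + word.lower() + " "   # pad so word boundaries contribute trigrams
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)   # shared trigrams / all trigrams, in [0, 1]

print(similarity("Helllo", "Hello"))   # high: most trigrams are shared
print(similarity("Helzlo", "Hello"))   # noticeably lower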
Probability Scoring for Spelling Correction by Church and Gale might help. In that paper, the authors model typos as a noisy channel between the author and the computer. The appendix has tables for typos seen in a corpus of Associated Press publications. There is a table for each of the following kinds of typos:
deletion
insertion
substitution
transposition
For example, examining the insertion table, we can see that l was incorrectly inserted after l 128 times (the highest number in that column). Using these tables, you can generate the probabilities you're looking for.
If research is your interest, I think continuing with that algorithm and trying to find decent weights would be fruitful.
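A sketch of how per-operation cost functions could replace the uniform weights (plain Levenshtein for brevity, without the transposition case; the costs below are toy stand-ins, not values derived from Church and Gale's tables):

def weighted_edit_distance(s, t, sub_cost, del_cost, ins_cost):
    """Levenshtein distance with per-character cost functions."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost(s[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost(t[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost(s[i - 1]),      # delete s[i-1]
                d[i][j - 1] + ins_cost(t[j - 1]),      # insert t[j-1]
                d[i - 1][j - 1] + (0.0 if s[i - 1] == t[j - 1]
                                   else sub_cost(s[i - 1], t[j - 1])),
            )
    return d[m][n]

# Toy example: adjacent QWERTY keys are cheaper to substitute than distant ones.
adjacent = {("g", "h"), ("h", "g"), ("q", "w"), ("w", "q")}   # tiny illustrative subset
sub = lambda a, b: 0.5 if (a, b) in adjacent else 1.0
one = lambda c: 1.0
print(weighted_edit_distance("gello", "hello", sub, one, one))   # 0.5
print(weighted_edit_distance("qello", "hello", sub, one, one))   # 1.0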
I can't help you with typo stats, but I think you should also play with Python's difflib. Specifically, the ratio() method of SequenceMatcher. It uses an algorithm which the docs (http://docs.python.org/library/difflib.html) claim is well suited to matches that 'look right', and it may be useful to augment or test what you're doing.
For Python programmers just looking for typos it is a good place to start. One of my coworkers has used both Levenshtein edit distance and SequenceMatcher's ratio() and got much better results from ratio().
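For instance, straight from the standard library:

from difflib import SequenceMatcher

print(SequenceMatcher(None, "Helllo", "Hello").ratio())       # ~0.91
print(SequenceMatcher(None, "Satudray", "Saturday").ratio())  # also fairly high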
Some questions for you, to help you determine whether you should be asking your "where do I find real-world weights" question:
Have you actually measured the effectiveness of the uniform weighting implementation? How?
How many different "internal objects" do you have -- i.e. what is the size of your dictionary?
How are you actually using the edit distance e.g. John/Joan, Marmaduke/Marmeduke, Featherstonehaugh/Featherstonhaugh: is that "all 1 error" or is it 25% / 11.1% / 5.9% difference? What threshold are you using?
How many pairs of dictionary entries are within your threshold (e.g. John vs Joan, Joan vs Juan, etc)? If you introduced a fancy weighting system, how many pairs of dictionary entries would migrate (a) from inside the threshold to outside (b) vice versa?
What do you do if both John and Juan are in your dictionary and the user types Joan?
What are the penalties/costs of (1) choosing the wrong dictionary word (not the one that the user meant) (2) failing to recognise the user's input?
Will introducing a complicated weighting system actually reduce the probabilities of the above two error types by sufficient margin to make the complication and slower speed worthwhile?
BTW, how do you know what keyboard the user was using?
Update:
"""My current non-weighted algorithm fails often when faced with typos for the above reasons. "Return on Tursday": every "real person" can easily tell Thursday is more likely than Tuesday, yet they are both 1-edit-distance away! (Yes, I do log and measure my performance)."""
Yes, Thursday -> Tursday by omitting an "h", but Tuesday -> Tursday by substituting "r" for "e". E and R are next to each other on qwERty and azERty keyboards. Every "real person" can easily guess that Thursday is more likely than Tuesday. Even if statistics as well as guesses point to Thursday being more likely than Tuesday (perhaps omitting h will cost 0.5 and e->r will cost 0.75), will the difference (perhaps 0.25) be significant enough to always pick Thursday? Can/will your system ask "Did you mean Tuesday?", or does/will it just plough ahead with Thursday?