Computation time for RSA? [closed] - python

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm quite new to programming and had taken a cryptography course on Coursera before I started learning Python. Recently, as a project, I wanted to write my own code for the RSA algorithm, and I have just finished writing the encryption step.
However, the program is running now and is taking a long time. I noticed it took a long time to compute the keys and moduli because of their sheer size. Because I am new to all of this, I don't know enough to tell whether there is any way to speed up the process.
If my code needs to be posted I can do that, but I would prefer a more general answer on how to speed up the code.
Thanks

I too took the course on Coursera. You should check out the following libraries; they can speed up your calculations tremendously:
1.) http://userpages.umbc.edu/~rcampbel/Computers/Python/lib/numbthy.py (check the powmod function)
2.) gmpy2 (gmpy2.readthedocs.org/en/latest/mpz.html)
3.) mpmath (code.google.com/p/mpmath/)
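For RSA specifically, the single biggest win is usually fast modular exponentiation. Python's built-in three-argument pow(base, exp, mod) already does this, as does gmpy2.powmod. A sketch of the square-and-multiply idea behind both (the helper name powmod here is just for illustration):

```python
# A sketch of square-and-multiply modular exponentiation, the trick
# behind pow(base, exp, mod) and gmpy2.powmod. The point is that the
# full power is never materialised: everything stays reduced mod m.
def powmod(b, e, m):
    result = 1
    b %= m
    while e > 0:
        if e & 1:                    # low bit set: fold this factor in
            result = (result * b) % m
        b = (b * b) % m              # square the base
        e >>= 1                      # shift to the next bit
    return result

# In practice, just use the built-in:
assert powmod(65537, 10**6 + 3, 2**61 - 1) == pow(65537, 10**6 + 3, 2**61 - 1)
```

This runs in O(log e) multiplications instead of e, which is the difference between instant and never finishing for key-sized exponents.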

Related

Can dividing code too much make it inefficient? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
If code is divided into too many segments, can this make the program slow?
For example: creating a separate file for just a single function.
In my case I'm using Python, and suppose there are two functions that I need in the main.py file, but I place each of them in a different file (containing just that function). Suppose also that both functions use the same library, even though they have been split into separate files.
How can this affect efficiency, both machine-performance-wise and team-wise?
It depends on the language, the framework you use, etc. However, dividing the code too much can make it unreadable, which is (most of the time) the bigger problem. Since most of the time you will (or should) be working in a team, you should consider how readable your code will be for them.
Answering this definitively is difficult, though. You could also ask a senior developer on your team for guidelines.
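As far as raw performance goes, splitting functions across files costs almost nothing at run time: Python executes a module once on first import and caches it in sys.modules, so every later import is just a dictionary lookup. A minimal, self-contained sketch (the module name helper_mod is made up for illustration):

```python
import os
import sys
import tempfile

# Simulate "one function per file": write a tiny module to disk and
# import it twice. Only the first import executes the file; the second
# is a cache hit in sys.modules, so the split costs one import, once.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "helper_mod.py"), "w") as f:
    f.write("def add(a, b):\n    return a + b\n")

sys.path.insert(0, tmpdir)

import helper_mod   # first import: the file is executed and cached
import helper_mod   # second import: served straight from sys.modules

assert "helper_mod" in sys.modules
assert helper_mod.add(2, 3) == 5
```

So the real cost of over-splitting is the human one the answer above describes, not machine time.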

Why is list comprehension so prevalent in python? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Often you see a question asked about a better method of doing something, or just generally a looping question, and very often the top answers use some form of convoluted list/dict/tuple comprehension that takes others longer to understand than it would take to write a simple, understandable loop instead.
Since I can't imagine it provides any speed benefit, is there any use for it in Python other than to look smart or be Pythonic?
Thanks.
I believe the goal in this case is to make your code as concise and efficient as possible. At times it can seem convoluted, but a comprehension is compiled to specialised bytecode that avoids the per-iteration overhead of looking up and calling a method like list.append, which across many iterations can add up to a measurable saving in large applications.
Additionally, although it seems harder to understand initially, for an outside individual reading your code it is often quicker to read one simplified expression than pages of loops to get an idea of what you're attempting to accomplish.
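It's easy to check both claims (identical results, modest speed difference) with timeit; a quick sketch, with timings that will of course vary by machine:

```python
from timeit import timeit

# The loop and the comprehension produce identical results; the
# comprehension is usually somewhat faster because appending is done
# by a specialised bytecode rather than a list.append method call.
def with_loop(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def with_comprehension(n):
    return [i * i for i in range(n)]

assert with_loop(1000) == with_comprehension(1000)

# Rough timings; absolute numbers vary by machine and interpreter.
print("loop:         ", timeit(lambda: with_loop(1000), number=1000))
print("comprehension:", timeit(lambda: with_comprehension(1000), number=1000))
```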

How can I give maximum CPU Utilization to this Python Process [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm using the Spyder IDE for data analysis in Python. My dataset is pretty large, and hence I wish to give the process maximum priority. I've set its priority to realtime, however it is only using 13-15% of the CPU. How can I get it to 100% CPU usage? I'm using a Dell Inspiron 15Z ultrabook with two 4 GB RAM modules.
Edit: I'm now running two scripts on two different consoles. Now the CPU usage has increased to 75%. I know this isn't the technically correct way of implementing parallelism however being a beginner in Python, I had no other option. Thanks for the help :)
You're probably not using any multi-threading code or other parallelization methods. Because of this, your code is running in just one thread, which can only run on one CPU core at a time. Since you have eight logical cores, this results in roughly 1/8 of total CPU consumption.
Parallelization of code is not a trivial task, and is highly dependent on the type of work your program is doing.
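The standard way to spread CPU-bound Python work across cores is worker processes rather than threads (the GIL keeps threads from helping with pure computation). A hedged sketch using the standard library's multiprocessing.Pool, with a made-up stand-in for the real analysis function:

```python
import os
from multiprocessing import Pool

# CPU-bound work in a single Python process stays on one core (raising
# the priority doesn't change that). Worker processes sidestep the GIL
# and let the OS schedule one worker per core.
def heavy(n):                        # made-up stand-in for the real analysis step
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [200_000] * 8           # eight independent work items
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(heavy, chunks)   # spread across all cores
    print(results)
```

This only pays off when the work items are genuinely independent and each one is large enough to outweigh the cost of shipping data between processes.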

How can Matlab or Octave be so fast? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am really confused about the computation speed of Matlab or Octave.
How is it possible to give the result of a computation like 5^5^5^5 (= 2.351*10^87, if you want to know) instantly?
I found some results about the speed of matrix computations (this article), but nothing about other operations. And that cannot be the explanation (my (naive) implementation in Python has been running for about 5 minutes now).
5^5^5^5 doesn't require so many operations after all. For example, at each power step, say a^b, you can compute exp(log(a)*b), which gives the same result.
I'm not saying this is necessarily how Matlab does it, and there may be numerical precision issues. But this illustrates that a multiple-power operation is not so hard as its direct computation would suggest.
As for numerical precision:
>> format long
>> 5^5^5^5
ans =
2.350988701644576e+087
>> exp(log(exp(log(exp(log(5)*5))*5))*5)
ans =
2.350988701644561e+087
The relative error is
>> 1 - (5^5^5^5 / exp(log(exp(log(exp(log(5)*5))*5))*5))
ans =
-6.661338147750939e-015
which is not very far from eps.
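Two things are worth adding, with the caveat that I'm inferring the Matlab-side behaviour from its operator rules rather than its internals. First, Matlab's ^ is left-associative, so 5^5^5^5 means ((5^5)^5)^5 = 5^125, a perfectly ordinary double; Python's ** is right-associative, so a literal translation 5**5**5**5 means 5**(5**(5**5)), an exact integer with an astronomical number of digits, which is why a naive Python port appears to hang. Second, the exp(log(a)*b) trick is easy to reproduce in Python:

```python
import math

# Matlab's ^ is left-associative: 5^5^5^5 == ((5^5)^5)^5 == 5^125.
direct = 5.0 ** 125                        # ordinary floating-point power
via_exp_log = math.exp(math.log(5) * 125)  # the exp(log(a)*b) rewrite

print(direct)
print(via_exp_log)

# The two agree to within a few ulps, mirroring the ~eps-sized
# relative error seen in the Matlab transcript above.
assert abs(via_exp_log / direct - 1) < 1e-12
```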

What do I lose if I move from R to Python? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I am intermediate in R and a beginner in Python. However my core abilities lie less in data analysis and more in programming and developing large software systems in teams, and I don't have time to become an expert in both.
Given the advances in the Python world in numpy, scipy, pandas, and its prevalence in data science and in general programming, I think I need to concentrate on Python (even though I enjoy R a lot), and accept that for some tasks I might be 75% as efficient, say, as I would be in R. I'd find this efficiency loss acceptable in order to be a master of one language rather than intermediate at both.
However I don't know enough about either language to really be sure of my facts. I would be very interested in hearing from anyone who is experienced in both R and Python and can say what would be the significant disadvantages, if any, of dropping R in favour of Python?
Edit 5: this question on stats.stackexchange is similar and has some great answers.
(Edits 3-4: reverted content/title to original question, which was closed. The original question attracted a lot of expert comment, my attempt to narrow the question to reopen it failed, and I'd prefer to have these comments below the original text they were commenting on.)
