Long Int literal - Invalid Syntax? - python

The Python tutorial book I'm using is slightly outdated, but I've decided to continue using it with the latest version of Python to practice debugging. Sometimes there are a few things in the book's code that I learn have changed in the updated Python, and I'm not sure if this is one of them.
One exercise modifies a program so that it can print larger factorial values by using a long int. The original code is as follows:
#factorial.py
# Program to compute the factorial of a number
# Illustrates for loop with an accumulator
def main():
    n = input("Please enter a whole number: ")
    fact = 1
    for factor in range(int(n), 0, -1):
        fact = fact * factor
    print("The factorial of ", n, " is ", fact)
main()
The long int version is as follows:
#factorial.py
# Program to compute the factorial of a number
# Illustrates for loop with an accumulator
def main():
    n = input("Please enter a whole number: ")
    fact = 1L
    for factor in range(int(n), 0, -1):
        fact = fact * factor
    print("The factorial of ", n, " is ", fact)
main()
But running the long int version of the program in the Python shell generates the following error:
>>> import factorial2
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in <module>
    import factorial2
  File "C:\Python34\factorial2.py", line 7
    fact = 1L
            ^
SyntaxError: invalid syntax

Just drop the L; all integers in Python 3 are long. What was long in Python 2 is now the standard int type in Python 3.
The original code doesn't have to use a long integer either; Python 2 switches to the long type transparently as needed anyway.
Note that all Python 2 support is ending shortly (no more updates after 2020-01-01), so at this point you'd be much better off switching tutorials and investing your time in learning Python 3. For beginner programmers I recommend Think Python, 2nd edition, as it is fully updated for Python 3 and freely available online. Or pick any of the other books and tutorials recommended in the Stack Overflow Python chatroom.
If you must stick with your current tutorial, you could install a Python 2.7 interpreter instead and work your way through the book without first having to learn how to port Python 2 code to Python 3. However, you'd then still have to learn how to transition from Python 2 to Python 3 later.

You just need to remove the L:
fact = 1
Python 3.x integers support unlimited size, in contrast to Python 2.x, which has a separate type for long integers.
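For example, here is a quick check (mine, not from the book) showing that a plain Python 3 int grows as far as needed:

fact = 1
for factor in range(100, 0, -1):   # 100! has 158 digits
    fact = fact * factor
print(fact)                        # no L suffix needed; int handles it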

Related

How can I run/call a python script while using swift?

I am trying to make an iOS app that will calculate pi to a set amount of decimal places based on what the user wants. I currently have a Python script that takes an integer value from the user and then uses that in the calculation of pi. I also have a Swift/Xcode project that only allows a user to input an integer value. So I am wondering if there is a way for me to pass the value that the user enters in a text box in my Swift project to the Python code, run it, and then output the result? In my Python code I am using a timer to measure how long it takes to calculate pi to the requested number of digits, so I would only need to be able to display this result.
I have thought about rewriting the python script into swift, but I am unsure of how to do that. This is the definition I am using in my python script to calculate pi:
def pi():
    decimal.getcontext().prec += 2
    three = decimal.Decimal(3)
    first, second, third, forth, fifth, sixth, numb = 0, three, 1, 0, 0, 24, 3
    while numb != first:
        first = numb
        third, forth = third + forth, forth + 8
        fifth, sixth = fifth + sixth, sixth + 32
        second = (second * third) / fifth
        numb += second
    decimal.getcontext().prec -= 2
    return +numb
But I was unsure of how to rewrite this into swift language, so I figured getting swift to pass the precision value into python would be easier.
You definitely can run Python code in Swift since Python is designed to be embedded and has a C interface API. Check how Swift for TensorFlow uses Python interoperability (although I couldn't find a quick way to only use that module and not the whole TensorFlow). You can also check PythonKit out.
However, I don't think rewriting that script would be too difficult, and it might be better to avoid more libraries and dependencies in your project.
Edit: As Joakim Danielson pointed out, you'll need the Python runtime, and it doesn't seem to be available in iOS, so you seem to be limited to macOS for this.
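If you end up launching the script as a separate process on macOS rather than embedding it, one possible hand-off (a sketch with assumed file and function names, not the asker's actual project) is to pass the digit count as a command-line argument and print the result and timing to stdout for the Swift side to capture:

# pi_cli.py -- hypothetical command-line wrapper around the pi() routine above
import decimal
import sys
import time

def pi(digits):
    decimal.getcontext().prec = digits + 2
    three = decimal.Decimal(3)
    first, second, third, forth, fifth, sixth, numb = 0, three, 1, 0, 0, 24, 3
    while numb != first:
        first = numb
        third, forth = third + forth, forth + 8
        fifth, sixth = fifth + sixth, sixth + 32
        second = (second * third) / fifth
        numb += second
    decimal.getcontext().prec -= 2
    return +numb

if __name__ == "__main__":
    digits = int(sys.argv[1])              # value typed into the Swift text box
    start = time.perf_counter()
    value = pi(digits)
    elapsed = time.perf_counter() - start
    print(value)
    print("seconds:", elapsed)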

For statement does not repeat strings?

I want the user input to be repeated 10 times, but I get an error.
no = input(' any thing typed there will x10')
for i in range(10):
    print(no)
But if I change the code to use int or something else, it works:
no = int(input(' any thing typed there will x10'))
for i in range(10):
    print(no)
I am probably missing something basic, but thanks in advance.
I used an app called QPython on my Android, which might be the problem.
What you have written is valid Python 3, but not valid Python 2, and given the question I suspect you are running Python 2. Python changed how the input function works between Python 2 and Python 3: in Python 2, input() evaluates whatever you type as a Python expression, while raw_input() returns it as a plain string. Given your issue, this will fix it for Python 2 users (you):
no = raw_input(' any thing typed there will x10')
for i in range(10):
    print(no)
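If you want the same script to run under both interpreters (for example inside QPython), a small workaround sketch is to fall back to raw_input only when it exists:

try:
    read_line = raw_input      # Python 2: returns the typed text as a string
except NameError:
    read_line = input          # Python 3: input already returns a string

no = read_line(' any thing typed there will x10')
for i in range(10):
    print(no)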

Why does Python "preemptively" hang when trying to calculate a very large number?

I've asked this question before about killing a process that uses too much memory, and I've got most of a solution worked out.
However, there is one problem: calculating massive numbers seems to be untouched by the method I'm trying to use. This code below is intended to put a 10 second CPU time limit on the process.
import resource
import os
import signal
def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')
signal.signal(signal.SIGXCPU, timeRanOut)
soft,hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft,hard)
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))
y = 10**(10**10)
What I expect to see when I run this script (on a Unix machine) is this:
-1 -1
ran out of time!
Instead, I get no output. The only way I get output is with Ctrl + C, and I get this if I Ctrl + C after 10 seconds:
^C-1 -1
ran out of time!
CPU time limit exceeded
If I Ctrl + C before 10 seconds, then I have to do it twice, and the console output looks like this:
^C-1 -1
^CTraceback (most recent call last):
File "procLimitTest.py", line 18, in <module>
y = 10**(10**10)
KeyboardInterrupt
In the course of experimenting and trying to figure this out, I've also put time.sleep(2) between the print and the large-number calculation. It doesn't seem to have any effect. If I change y = 10**(10**10) to y = 10**10, then the print and sleep statements work as expected. Adding flush=True to the print statement or calling sys.stdout.flush() after it doesn't help either.
Why can I not limit CPU time for the calculation of a very large number? How can I fix or at least mitigate this?
Additional information:
Python version: 3.3.5 (default, Jul 22 2014, 18:16:02) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]
Linux information: Linux web455.webfaction.com 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
TLDR: Python precomputes constant literals it finds in the code before running anything. If the very large number is instead calculated with at least one non-constant intermediate step, the process is CPU time limited as expected.
It took quite a bit of searching, but I have discovered evidence that Python 3 does precompute constant literals that it finds in the code before evaluating anything. One piece of that evidence is this webpage: A Peephole Optimizer for Python. I've quoted some of it below.
ConstantExpressionEvaluator
This class precomputes a number of constant expressions and stores them in the function's constants list, including obvious binary and unary operations and tuples consisting of just constants. Of particular note is the fact that complex literals are not represented by the compiler as constants but as expressions, so 2+3j appears as
LOAD_CONST n (2)
LOAD_CONST m (3j)
BINARY_ADD
This class converts those to
LOAD_CONST q (2+3j)
which can result in a fairly large performance boost for code that uses complex constants.
The fact that 2+3j is used as an example very strongly suggests that not only small constants are being precomputed and cached, but also any constant literals in the code. I also found this comment on another Stack Overflow question (Are constant computations cached in Python?):
Note that for Python 3, the peephole optimizer does precompute the 1/3 constant. (CPython specific, of course.) – Mark Dickinson Oct 7 at 19:40
These are supported by the fact that replacing
y = 10**(10**10)
with this also hangs, even though I never call the function!
def f():
    y = 10**(10**10)
The good news
Luckily for me, I don't have any such giant literal constants in my code. Any computation of such constants will happen later, which can be and is limited by the CPU time limit. I changed
y = 10**(10**10)
to this,
x = 10
print(x)
y = 10**x
print(y)
z = 10**y
print(z)
and got this output, as desired!
-1 -1
10
10000000000
ran out of time!
The moral of the story: Limiting a process by CPU time or memory consumption (or some other method) will work if there is not a large literal constant in the code that Python tries to precompute.
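If you want to see the constant folding for yourself (a quick check of my own, CPython-specific, not from the linked page), disassemble a small assignment with the dis module; with a modest exponent the folded constant shows up directly in the bytecode:

import dis

# The compiler folds the constant expression 10 ** 5 into a single constant,
# so the generated bytecode loads 100000 directly rather than computing it.
dis.dis(compile("y = 10 ** 5", "<example>", "exec"))
# expect a line similar to: LOAD_CONST ... (100000)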
Use a function.
It does seem that Python tries to precompute integer literals (I only have empirical evidence; if anyone has a source please let me know). This would normally be a helpful optimization, since the vast majority of literals in scripts are probably small enough to not incur noticeable delays when precomputing. To get around this, you need to make your literal be the result of a non-constant computation, like a function call with parameters.
Example:
import resource
import os
import signal
def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')
signal.signal(signal.SIGXCPU, timeRanOut)
soft,hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft,hard)
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))
f = lambda x=10:x**(x**x)
y = f()
This gives the expected result:
xubuntu@xubuntu-VirtualBox:~/Desktop$ time python3 hang.py
-1 -1
ran out of time!
real 0m10.027s
user 0m10.005s
sys 0m0.016s

memory error in python

Traceback (most recent call last):
  File "/run-1341144766-1067082874/solution.py", line 27, in <module>
    main()
  File "/run-1341144766-1067082874/solution.py", line 11, in main
    if len(s[i:j+1]) > 0:
MemoryError
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module>
    from apport.report import Report
MemoryError
Original exception was:
Traceback (most recent call last):
  File "/run-1341144766-1067082874/solution.py", line 27, in <module>
    main()
  File "/run-1341144766-1067082874/solution.py", line 11, in main
    if len(s[i:j+1]) > 0:
MemoryError
The above errors appeared when I tried to run the following program. Can someone explain what a memory error is, and how to overcome this problem? The program takes strings as input and finds all possible substrings, creates a set of them (in lexicographical order), and should print the value at the index the user asks for; otherwise it should print 'INVALID'.
def main():
    no_str = int(raw_input())
    sub_strings = []
    for k in xrange(0, no_str):
        s = raw_input()
        a = len(s)
        for i in xrange(0, a):
            for j in xrange(0, a):
                if j >= i:
                    if len(s[i:j+1]) > 0:
                        sub_strings.append(s[i:j+1])
    sub_strings = list(set(sub_strings))
    sub_strings.sort()
    queries = int(raw_input())
    resul = []
    for i in xrange(0, queries):
        resul.append(int(raw_input()))
    for p in resul:
        try:
            print sub_strings[p-1]
        except IndexError:
            print 'INVALID'

if __name__ == "__main__":
    main()
If you get an unexpected MemoryError and you think you should have plenty of RAM available, it might be because you are using a 32-bit python installation.
The easy solution, if you have a 64-bit operating system, is to switch to a 64-bit installation of python.
The issue is that 32-bit python only has access to ~4GB of RAM. This can shrink even further if your operating system is 32-bit, because of the operating system overhead.
You can learn more about why 32-bit operating systems are limited to ~4GB of RAM here: https://superuser.com/questions/372881/is-there-a-technical-reason-why-32-bit-windows-is-limited-to-4gb-of-ram
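To check which build you are running (a quick diagnostic, not part of the original answer), ask the interpreter for its pointer size:

import struct
import sys

print(struct.calcsize("P") * 8)   # 32 or 64: the bit width of this Python build
print(sys.maxsize > 2**32)        # True on a 64-bit build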
This one here:
s = raw_input()
a = len(s)
for i in xrange(0, a):
    for j in xrange(0, a):
        if j >= i:
            if len(s[i:j+1]) > 0:
                sub_strings.append(s[i:j+1])
seems to be very inefficient and expensive for large strings.
Better do
for i in xrange(0, a):
    for j in xrange(i, a):  # ensures that j >= i, no test required
        part = buffer(s, i, j+1-i)  # don't duplicate data
        if len(part) > 0:
            sub_strings.append(part)
A buffer object keeps a reference to the original string and start and length attributes. This way, no unnecessary duplication of data occurs.
A string of length l has about l*l/2 substrings of average length l/2, so the memory consumption is roughly l*l*l/4 characters (for l = 100000 that is already on the order of 2.5e14). With buffers, it is much smaller.
Note that buffer() only exists in 2.x. 3.x has memoryview(), which is used slightly differently.
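For completeness, a small Python 3 sketch of the same no-copy idea (memoryview works on bytes-like objects rather than str, so the text has to be encoded first):

data = "banana".encode()        # memoryview needs a bytes-like object
part = memoryview(data)[1:4]    # a view into data; no bytes are copied
print(bytes(part))              # b'ana' is only materialized here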
Even better would be to compute the indexes and cut out the substring on demand.
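A rough Python 2 sketch of that index-based approach (the helper name is illustrative, and deduplication of repeated substrings is left out for brevity):

def answer_queries(s, queries):
    n = len(s)
    # store only (start, stop) index pairs instead of substring copies
    spans = [(i, j) for i in xrange(n) for j in xrange(i + 1, n + 1)]
    # compare by slicing inside the comparison, so each copy is short-lived
    spans.sort(cmp=lambda a, b: cmp(s[a[0]:a[1]], s[b[0]:b[1]]))
    for q in queries:
        try:
            i, j = spans[q - 1]
            print s[i:j]
        except IndexError:
            print 'INVALID'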
A memory error means that your program has run out of memory, i.e. that it somehow creates too many (or too large) objects.
In your example, you have to look for parts of your algorithm that could be consuming a lot of memory. I suspect that your program is given very long strings as inputs. Therefore, s[i:j+1] could be the culprit, since it creates a new string (a copy of that part of s). The first time you use it, in the len() check, the copy is not even necessary, because the length follows directly from the indices. You could try to see if the following helps:
if j + 1 - i > 0:   # same check as len(s[i:j+1]) > 0, but without building the slice
    sub_strings.append(s[i:j+1])
To replace the second copy (the one that is actually appended), you should definitely use a buffer object, as suggested by glglgl.
Also note that since you only keep pairs with j >= i, you don't need to start the inner xrange at 0. You can have:
for i in xrange(0, a):
    for j in xrange(i, a):
        # No need for "if j >= i"
A more radical alternative would be to rework your algorithm so that you don't pre-compute all possible substrings. Instead, you could compute only the substrings that are actually asked for.
Either there's an error in your code or you are genuinely out of memory. You can add more RAM, or as a quick workaround try increasing your virtual memory:
Open My Computer
Right click and select Properties
Go to Advanced System Settings
Click on the Advanced tab
Click on Settings under Performance
Click on Change under the Advanced tab
Increase the memory size; that will increase the virtual memory size.
If the problem was a lack of memory space, it should be resolved now!
You could also try splitting the script that produces the error into several smaller scripts and importing them from a package. For example, if hello.py raises a MemoryError, divide it into several scripts h.py, e.py, ll.py and o.py, put all of them in a folder called "hellohello", create an __init__.py in that folder containing import h, e, ll, o, and then in your IDE write import hellohello.
Check the program with the input abc: if you get the expected substrings (a, ab, abc, b, bc, c), the program works correctly and you simply need more RAM; otherwise the program itself is wrong.
Using 64-bit Python solves a lot of these problems.

What can be done about "The command is too long to execute" error in MATLAB?

I am calling a Python program from MATLAB and passing an array to the program. I am writing the following lines in MATLAB workspace:
% Let us assume some random array
num1 = ones(1,100);
% I am forced to pass parameters as string due to the MATLAB-Python interaction.
num2 = num2str(num1);
% The function is saved in a Python program called squared.py
z=python('squared.py',num2);
The program works fine when the size of num1 is small (e.g. 100). However, when it is large, e.g., 500000, MATLAB shows the following error:
??? Error using ==> dos
The command is too long to execute.
Error in ==> python at 68
[status, result] = dos(pythonCmd);
Does anyone know how to fix this error?
On Windows, the command passed to the dos function is limited to 32768 characters. This limitation comes from the Windows limitation on the lpCommandLine parameter to CreateProcess.
I think Fredrik's idea of writing the data to a file and reading it from Python is your best alternative.
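A minimal sketch of that file-based hand-off on the Python side (the file name and the squaring step are placeholders for whatever squared.py actually does); on the MATLAB side you would write num1 to a text file, for example with dlmwrite, and pass only the file name to the script:

# squared.py (sketch): read the numbers from the file named on the command
# line instead of taking them as one enormous argument string.
import sys

def main():
    path = sys.argv[1]                       # e.g. 'nums.txt' written by MATLAB
    with open(path) as fh:
        nums = [float(tok) for tok in fh.read().replace(',', ' ').split()]
    result = [x * x for x in nums]           # placeholder for the real computation
    print(' '.join(str(x) for x in result))  # captured by MATLAB in z

if __name__ == '__main__':
    main()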
