In the question we're asked to remove all even numbers from an array, hence I tried to create a function:
import numpy as np
A = np.array([2,3,4,5])
def remove_even(A):
    if ((A[0]) / 2) != int:  # check if the first value is an integer when divided by 2
        A = A[0:len(A)+1:2]
        return A
    else:
        A = A[1:len(A)+1:2]
However, regardless of whether my array starts with an even number (e.g. 2) or an odd number (e.g. 1), execution only ever reaches the if branch, never the else.
What am I missing? I would appreciate any feedback!
In numpy you can just use a boolean mask:
A[(A % 2).astype(bool)]
returns
array([3, 5])
If you need a solution without NumPy, a list comprehension does the same filtering:
l = [1, 2, 3, 4, 5, 6, 7, 8]
a = [i for i in l if i % 2]  # keep only odd values
## print(a) output
## [1, 3, 5, 7]
Your code never reaches the else clause because the if test is always true.
No number is equal to int, because int is a class; != is an equality test, not a type-membership test.
Moreover, 4/2 is not of type int: the / operator always gives a float result, so the answer is 2.0. That means type(A[0]/2) will always be float irrespective of the value of A[0]. So testing the result of the division for membership of int, even if correctly done, won't do what you want.
Do this instead:
if not (A[0] % 2):
This is true when A[0] is an even number, whether integer or float.
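Putting that fix together, here is a corrected version of the function. Like the original slicing approach, this sketch assumes the array holds consecutive integers, so parity alternates from element to element:

```python
import numpy as np

def remove_even(A):
    # If the first element is even, the odd values sit at odd indices
    if not (A[0] % 2):
        return A[1::2]
    # Otherwise the odd values sit at even indices
    return A[::2]

print(remove_even(np.array([2, 3, 4, 5])))  # [3 5]
print(remove_even(np.array([1, 2, 3, 4])))  # [1 3]
```

For arrays that are not consecutive integers, a boolean mask on parity is the robust choice.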
I have a function within a program that performs calculations upon every element of the inputted list, where the list can be several hundred numbers long. I want to be able to take a list and find the elements where an imaginary component is negative. When this condition is true I then want to multiply both the real and imaginary components of that element by -1.
I've tried using the numpy package as nditer seemed to be a function that would suit my needs however this returns the wrong values and appears to improperly apply the condition.
for value in np.nditer(num_list, op_flags=['readwrite']):
    if value.imag < 0:
        value * -1
return num_list
This condition check is the last operation that the function would do, so if the code could look like this then that would be very helpful. Thank you!
num_list = [4+5j, 3+2j, 7-2j]

def function(num_list):
    # other calculations are here
    # applies condition here
    return num_out

print(num_out)
Output: [ 4+5j, 3+2j, -7+2j]
value * -1 just computes the product and throws it away; it doesn't actually replace the value. Perhaps you meant:
value *= -1
Test:
import numpy as np

num_list = np.array([4+5j, 3+2j, 7-2j])

def function(num_list):
    for value in np.nditer(num_list, op_flags=['readwrite']):
        if value.imag < 0:
            value *= -1
    return num_list

print(function(num_list))
Output:
[ 4.+5.j 3.+2.j -7.+2.j]
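For this particular condition you can also skip nditer entirely; a vectorized sketch using np.where:

```python
import numpy as np

num_list = np.array([4+5j, 3+2j, 7-2j])
# Negate every element whose imaginary part is negative, in one shot
result = np.where(num_list.imag < 0, -num_list, num_list)
print(result)  # [ 4.+5.j  3.+2.j -7.+2.j]
```

This avoids the in-place mutation of the input array, which may or may not be what you want.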
I am working with bitmasks in Python. As far as I know, these are arrays of integers that, when unpacked into binary format, tell you which of the 32 bits are set (=1) for a given element of the array.
I would like to know the fastest way to check whether 4 specific bits are set or not for any element of an array. I do not care about the rest. I have tried the following solution but it is not fast enough for my purpose:
def detect(bitmask, check=(18, 22, 23, 24), bits=32):
    boolmask = np.zeros(len(bitmask), dtype=bool)
    for i, val in enumerate(bitmask):
        bithost = np.zeros(bits, dtype='i1')
        masklist = list(bin(val)[2:])
        bithost[:len(masklist)] = np.flip(masklist, axis=0)
        if len(np.intersect1d(np.nonzero(bithost)[0], check)) != 0:
            boolmask[i] = True
        else:
            boolmask[i] = False
    if any(boolmask):
        print("There are some problems")
    else:
        print("It is clean")
For example, if a given bitmask contains the integer 24453656 (1011101010010001000011000 in binary), the output of detect would be "There are some problems", since bits 18, 22 and 24 are set:
bit: ... 20, 21, 22, 23, 24,...
mask: ... 0, 0, 1, 0, 0,...
Any ideas on how to improve the performance?
Integers are nothing but sequences of bits in the computer.
So if you have an integer, say 333, to the computer it is the bit sequence 101001101. It doesn't need any unpacking into bits; it already is bits.
Therefore, if the mask is also an integer, you don't need any unpacking: just apply bitwise operations to it. Check Wikipedia for details of how these work.
In order to check if ANY of the bits in xyz are set in an integer abc, you do (abc & xyz) > 0. If you absolutely need the check mask to be a tuple of bit places, you do some packing, like this:
def detect(bitmask, check=(18, 22, 23, 24)):
    checkmask = sum(2**x for x in check)
    if (bitmask & checkmask) > 0:
        print("There are some problems")
    else:
        print("Everything OK")
Note that bitmasks start with 0 based bit indices. First bit is bit 0.
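Since bitmask in the question is an array, the same bitwise test vectorizes directly in NumPy, with no Python-level loop over elements. A sketch assuming 0-based bit indices, per the note above:

```python
import numpy as np

def detect(bitmask, check=(18, 22, 23, 24)):
    checkmask = sum(1 << b for b in check)          # combine the bits to test
    boolmask = (np.asarray(bitmask) & checkmask) > 0  # elementwise bitwise AND
    if boolmask.any():
        print("There are some problems")
    else:
        print("It is clean")
    return boolmask

# 24453656 has bits 18, 22 and 24 set, so it matches; 0 matches nothing
detect([24453656, 0])  # prints "There are some problems", returns [ True False]
```

The loop, bin() string conversion and intersect1d from the original all disappear, which is where the speedup comes from.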
I am not sure what's in your bitmask argument. Regardless, you should probably use bitwise operators.
Make a bit mask like this:
def turn_bits_on(bits):
    n = 0
    for k in bits:
        n = (n | (1 << (k - 1))) if k > 0 else n
    return n

bits_to_check = turn_bits_on([18, 22, 23, 24])
Then, for a single number, you can detect with:
def is_ok(value, mask):
    return not (value & mask)
print(is_ok(24453656, bits_to_check))
Finally, depending on what your bitmask value is (a list, a DataFrame, etc), apply the is_ok() function to each value.
Hope this helps!
I'm new to Python, I was reading this page where I saw a weird statement:
if n+1 == n:  # catch a value like 1e300
    raise OverflowError("n too large")
n equals a number greater than itself?! I sense a disturbance in the Force.
I know that in Python 3, integers don't have a fixed byte length, so there's no integer overflow the way C's int overflows. But of course memory can't store infinite data.
I think that's why the result of n+1 can be the same as n: Python can't allocate more memory to perform the summation, so it is skipped, and n == n is true. Is that correct?
If so, this could lead to incorrect program results. Why doesn't Python raise an error when an operation isn't possible, like C++'s std::bad_alloc?
Even if n is not too large and the check evaluates to false, the result would, due to the multiplication, need many more bytes. Could result *= factor fail for the same reason?
I found this in the official Python documentation. Is it really the correct way to check for big integers / possible integer "overflow"?
Python3
Only floats have a hard limit in Python. Integers are implemented as "long" integer objects of arbitrary size in Python 3 and do not normally overflow.
You can test that behavior with the following code
import sys
i = sys.maxsize
print(i)
# 9223372036854775807
print(i == i + 1)
# False
i += 1
print(i)
# 9223372036854775808
f = sys.float_info.max
print(f)
# 1.7976931348623157e+308
print(f == f + 1)
# True
f += 1
print(f)
# 1.7976931348623157e+308
You may also want to take a look at sys.float_info and sys.maxsize
Python2
In Python 2, integers are automatically cast to long integers if too large, as described in the documentation for numeric types
import sys
i = sys.maxsize
print type(i)
# <type 'int'>
i += 1
print type(i)
# <type 'long'>
Could result *= factor fail for the same reason?
Why not try it?
import sys
i = 2
i *= sys.float_info.max
print i
# inf
Python has a special float value for infinity (and negative infinity too) as described in the docs for float
Integers don't work that way in Python.
But float does. That is also why the comment says 1e300, which is a float in scientific notation.
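The comment "catch a value like 1e300" makes sense once you see the float behavior: at that magnitude, the gap between adjacent representable floats is far larger than 1, so adding 1 cannot change the value, and n + 1 == n flags the huge float.

```python
n = 1e300
print(n + 1 == n)  # True: 1 is far below the spacing between adjacent floats here

n = 10
print(n + 1 == n)  # False: small ints and floats behave normally
```

So the check isn't about memory allocation at all; it's a precision test on floats.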
I had a problem with integer overflows in Python 3, but when I inspected the types, I understood the reason:
import numpy as np
a = np.array([3095693933], dtype=int)
s = np.sum(a)
print(s)
# 3095693933
s * s
# -8863423146896543127
print(type(s))
# numpy.int64
py_s = int(s)
py_s * py_s
# 9583320926813008489
Some pandas and numpy functions, such as sum on arrays or Series return an np.int64 so this might be the reason you are seeing int overflows in Python3.
Your assumption is right: dfdx[0] is indeed the first value in that array, so according to your code that would correspond to evaluating the derivative at x=-1.0.
To know the correct index where x is equal to 0, you will have to look for it in the x array.
One way to find it is the following, where we locate the index at which |x-0| is minimal (essentially where x=0, but float arithmetic requires taking some precautions), using argmin:
index0 = np.argmin(np.abs(x - 0))
We then get what we want, dfdx at the index where x is 0:
print(dfdx[index0])
Another way, less robust against float-arithmetic trickery, is the following:
# we make a boolean array that is True where x is zero and False everywhere else
bool_array = (x == 0)
# NumPy allows using a boolean array to index an array.
# Doing so gets you all the values of dfdx where bool_array is True.
# In our case that will hopefully give us dfdx where x=0
print(dfdx[bool_array])
# same thing as a one-liner
print(dfdx[x == 0])
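A runnable sketch of the argmin approach; dfdx here is a hypothetical stand-in array, so substitute your own derivative values:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
dfdx = np.cos(x)                   # hypothetical derivative values for illustration
index0 = np.argmin(np.abs(x - 0))  # index where x is closest to 0
print(index0)                      # 100, the middle of the array
print(dfdx[index0])                # the derivative value at (approximately) x = 0
```

Note that argmin returns the index of the element closest to zero even when no element is exactly zero, which is why it is the more robust of the two approaches.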
You give the answer yourself: x[0] is -1.0, and you want the value at the middle of the array. np.linspace is the right function to build such a series of values:
import math
import numpy as np

def f1(x):
    g = np.sin(math.pi * np.exp(-x))
    return g

n = 1001  # odd, so that x[n//2] is 0
x = np.linspace(-1, 1, n)
f1x = f1(x)
df1 = np.diff(f1x, 1)
dx = np.diff(x)
df1dx = (-math.pi * np.exp(-x) * np.cos(math.pi * np.exp(-x)))[:-1]  # discard the last element

# In [3]: np.allclose(df1/dx, df1dx, atol=dx[0])
# Out[3]: True
As another tip: NumPy arrays are used more efficiently and readably without loops.
While running a unit test to confirm the output type, I am getting an AssertionError: != type 'int' on this function and cannot figure out why.
def averagePix(image):
    totalNumber = image.size
    counter = 0
    it = np.nditer(image)
    for m in it:
        counter = counter + m
    average = counter / totalNumber
    return average
I need to return a type int. If I comment out the np.nditer block, it passes the test with type int. Can someone please help me figure out how this block is screwing things up?
it = np.nditer(image)
for m in it:
    counter = counter + m
Thanks!
Not the neatest solution, but you can also convert it to int, e.g.:
average = counter / totalNumber
return int(average)
If you want average to be an integer you should use integer division //.
E.g.:
average = counter // totalNumber
In Python 2, a / b will give an integer result if both a and b are integers. But in Python 3 a / b will always result in a float.
FWIW, in Python 2 you can get / to behave the Python 3 way by putting
from __future__ import division
at the start of your script.
See In Python, what is the difference between '/' and '//' when used for division?
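A quick illustration of the difference in Python 3:

```python
print(7 / 2)     # 3.5  -- true division always returns a float
print(7 // 2)    # 3    -- floor division returns an int for int operands
print(-7 // 2)   # -4   -- note: // floors toward negative infinity
print(7.0 // 2)  # 3.0  -- with a float operand the result is a float
```

If negative counters are possible and you want truncation toward zero rather than flooring, int(a / b) and a // b give different answers, so pick deliberately.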
See if you're importing division from __future__ at some point.
Also, see if image.size is not float maybe.
Also, use numpy directly for averaging. :P
In an IPython3 session, I reproduce your code with:
In [228]: image=np.ones((4,4),dtype=int)
In [230]: counter=0
In [231]: it=np.nditer(image)
In [232]: for (m) in it:
.....: counter = counter + m
In [233]: counter
Out[233]: 16
In [234]: type(counter)
Out[234]: numpy.int32
In [235]: average = counter/image.size
In [236]: type(average)
Out[236]: numpy.float64
In [237]: average = counter//image.size # force integer division
In [238]: type(average)
Out[238]: numpy.int32
Both of those average values will fail a type() == int test. The numpy.int32 number is a NumPy wrapper around an int. Usually that's not a problem, but to satisfy the test you'd have to test average.item() - take it out of the wrapper.
nditer is actually feeding your counter addition an array
In [240]: type(m)
Out[240]: numpy.ndarray
In [241]: m
Out[241]: array(1)
You could get around that by applying item to m:
for m in it:
    counter = counter + m.item()
Why are you using nditer? To learn how to use it? This isn't the best application for it. Alternatives include
counter = np.sum(image)
counter = sum(image.flat)
average = np.average(image)
These results will still be int32 or float64, so you still have to deal with the assertion test. Does it really have to be int?
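If it really must be a plain Python int, the unwrapping described above can be combined with floor division. A sketch using a hypothetical image array:

```python
import numpy as np

image = np.arange(16).reshape(4, 4)             # hypothetical stand-in image
average = (np.sum(image) // image.size).item()  # .item() unwraps to a plain int
print(type(average))  # <class 'int'>
print(average)        # 7  (sum 0..15 is 120, 120 // 16 is 7)
```

Without the .item() call, the result would be a numpy.int64 (or int32) and the type assertion would still fail.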