List/Array of strings to numpy float array - python

I am new to scikit-learn and numpy. How can I represent my dataset, made of lists/arrays of strings, e.g.
[["aa bb","a","bbb","à"], ["bb cc","c","ddd","à"], ["kkk","a","","a"]]
as a numpy array of dtype float?

I think what you're looking for is a numeric representation of your words. You can use gensim to map each word to a token id and from that create your numpy arrays as follows:
import numpy as np
from gensim import corpora

toconvert = [["aa bb","a","bbb","à"], ["bb", "cc","c","ddd","à"], ["kkk","a","","a"]]
# Map each word to a token id. For example, 'aa bb' could be represented as 2,
# 'a' as 0, and so on.
tdict = corpora.Dictionary(toconvert)
# Given the nested structure, build a list of numpy arrays, one per inner list.
newlist = []
for l in toconvert:
    tmplist = []
    for word in l:
        # append the id for the word under observation to the intermediate list
        tmplist.append(tdict.token2id[word])
    # convert to a numpy array of dtype float and append to the main list
    newlist.append(np.array(tmplist).astype(float))
print(newlist)  # desired output: [array([ 2., 0., 1., 0.]), array([ 5., 3., 4., 6., 0.]), array([ 7., 0., 8., 0.])]
# and to see which string an id represents:
tdict[0]  # 'a'
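
Since the question mentions scikit-learn, here is an alternative sketch (not part of the original answer) using sklearn.preprocessing.LabelEncoder, which assigns each unique string an integer id:

import numpy as np
from sklearn.preprocessing import LabelEncoder

toconvert = [["aa bb","a","bbb","à"], ["bb cc","c","ddd","à"], ["kkk","a","","a"]]
le = LabelEncoder()
# fit on the flattened vocabulary, then encode each row separately
le.fit([word for row in toconvert for word in row])
encoded = [le.transform(row).astype(float) for row in toconvert]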


Python Median Filter for 1D numpy array

I have a numpy.array with a dimension dim_array. I'm looking to obtain a median filter like scipy.signal.medfilt(data, window_len).
This in fact doesn't work with my numpy.array, maybe because its dimension is (dim_array, 1) and not (dim_array, ).
How do I obtain such a filter?
Next, another question: how can I obtain other filters, i.e., min, max, mean?
Based on this post, we could create sliding windows to get a 2D array with those windows set as its rows. These windows would merely be views into the data array, so there is no extra memory consumption, which makes this pretty efficient. Then, we would simply apply the reduction functions along each row (axis=1).
Thus, for example, a sliding median could be computed like so -
np.median(strided_app(data, window_len, 1), axis=1)
For the other reductions, just use the respective function names there: np.min, np.max & np.mean. Please note this is meant as a generic solution covering any function that reduces along an axis.
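Note that strided_app is not defined in this answer; it comes from the post linked above. A minimal sketch of such a helper, built on np.lib.stride_tricks.as_strided and offered here only as an assumption about the referenced code, could look like this:

import numpy as np

def strided_app(a, L, S):
    # a : 1D input array, L : window length, S : stride between window starts.
    # Returns a 2D view whose rows are the sliding windows (no data copy).
    nrows = ((a.size - L) // S) + 1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S * n, n))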
For the best performance, one must still look into specific functions that are built for those purposes. For the four requested functions, we have the builtins, like so -
Median : scipy.signal.medfilt.
Max : scipy.ndimage.filters.maximum_filter1d.
Min : scipy.ndimage.filters.minimum_filter1d.
Mean : scipy.ndimage.filters.uniform_filter1d.
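For instance, the 1d builtins can be called directly on the data (a brief usage sketch with an assumed window length of 3; the imports use the modern scipy.ndimage namespace):

import numpy as np
from scipy.signal import medfilt
from scipy.ndimage import maximum_filter1d, minimum_filter1d, uniform_filter1d

data = np.array([1., 4., 2., 8., 5., 7.])
medfilt(data, kernel_size=3)      # sliding median
maximum_filter1d(data, size=3)    # sliding max
minimum_filter1d(data, size=3)    # sliding min
uniform_filter1d(data, size=3)    # sliding mean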
The fact that applying a median filter with window size 1 does not change the array gives us the freedom to apply the median filter row-wise or column-wise.
For example, this code
from scipy.ndimage import median_filter
import numpy as np
arr = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
median_filter(arr, size=3, cval=0, mode='constant')
# with cval=0, mode='constant' the input array is extended with zeros where the
# window overlaps the edges, just for visibility and ease of calculation
outputs the expected array, filtered with a (3, 3) window:
array([[0., 2., 0.],
       [2., 5., 3.],
       [0., 5., 0.]])
because median_filter automatically extends a scalar size to all dimensions, so we can get the same effect with:
median_filter(arr, size=(3, 3), cval=0, mode='constant')
Now, we can also apply median_filter row-wise by setting the first element of size to 1:
median_filter(arr, size=(1, 3), cval=0, mode='constant')
Output:
array([[1., 2., 2.],
       [4., 5., 5.],
       [7., 8., 8.]])
And column-wise, with the same logic:
median_filter(arr, size=(3, 1), cval=0, mode='constant')
Output:
array([[1., 2., 3.],
       [4., 5., 6.],
       [4., 5., 6.]])

Problems transposing single-column data in python

I created a text file called 'column.txt' containing the following data:
1
2
3
4
9
8
Then I wrote the code below to transpose my data to a single-row text file.
import numpy as np
x=np.loadtxt('column.txt')
z=x.T
y=x.transpose()
np.savetxt('row.txt',y, fmt='%i')
I tried two different ways - the .T attribute and the transpose() method - but the output was exactly the same as the input!
Afterwards, I added another column to the input file, ran the code, and surprisingly this time the output was completely fine (it contained two rows!).
So my question is:
Is there any way to transpose a single-column file to a single-row one? If yes, could you please describe how?
You can use numpy.reshape to change the shape of your array and transpose your data, like the following:
>>> import numpy as np
>>> arr=np.loadtxt('column.txt')
>>> arr
array([ 1., 2., 3., 4., 9., 8.])
>>> arr.shape
(6,)
>>> arr=arr.reshape(6,1)
>>> arr
array([[ 1.],
       [ 2.],
       [ 3.],
       [ 4.],
       [ 9.],
       [ 8.]])
or you can just give the minimum number of dimensions as an input to the numpy.loadtxt function:
>>> np.loadtxt('column.txt', ndmin=2)
array([[ 1.],
       [ 2.],
       [ 3.],
       [ 4.],
       [ 9.],
       [ 8.]])
But if you want to convert the single column to a single row and write it to a file, you just need to do the following:
>>> parr = arr.reshape(1, len(arr))
>>> np.savetxt('row.txt', parr, fmt='%i')
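If you'd rather not hard-code the length, reshape(1, -1) lets numpy infer it (a minor variation, not part of the original answer):
>>> arr.reshape(1, -1)
array([[ 1.,  2.,  3.,  4.,  9.,  8.]])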
If your input data consists of only a single column, np.loadtxt() will return a one-dimensional array. Transposing basically means reversing the order of the axes. For a one-dimensional array with only a single axis, this is a no-op. You can convert the array into a two-dimensional array in many different ways, and transposing will then work as expected for the two-dimensional array, e.g.
x = np.atleast_2d(np.loadtxt('column.txt'))
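For example, the shapes make the difference visible (a quick illustrative check, assuming the six-value file from the question):
>>> x.shape
(1, 6)
>>> x.T.shape
(6, 1)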
It is because the transpose of a 1D array is the same as the array itself, as there is no other dimension to transpose to.
You could try adding a 2nd dimension by doing this:
>>> import numpy as np
>>> x = np.array([[1], [2], [3], [4], [9], [8]])
>>> x.T
array([[1, 2, 3, 4, 9, 8]])

Array of hermite values in numpy

I have a data structure that looks like a list of values and I am trying to compute the (x,y) 2d Hermite functions from them using numpy. I'm trying to use as many numpy arrays as possible, due to the performance boost you get from getting to Fortran as quickly as possible (I'm expecting x to be, in practice, many thousands of 3-arrays). Specifically, my code looks like this:
x = np.array([[1., 2., 3.], [4., 5., 6.]])
coefs = np.array([[[1., 0.],[0., 1.]], [[0., 1.], [1., 0.]]])
z = np.array([0., 0.])
z[:] = hermval2d(x[:,0], x[:,1], coefs[:])
Assigning to z raises an error about mismatched shapes, whereas just running the hermval2d call on its own gives:
In [XX]: hermval2d(x[:,0], x[:,1], coefs[:])
Out[XX]:
array([[  9.,  81.],
       [  6.,  18.]])
I would expect hermval2d to return a scalar for every x, y, and coefficient matrix, which is what you would expect from the documentation. So what am I missing here? What's the score?
It's right there in the docs :)
hermval2d(x, y, c)
[...]
The shape of the result will be c.shape[2:] + x.shape
In your case this means the result contains the Hermite values of your x and y evaluated for each ith 2d coefficient array c[:,:,i].
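If the goal is one value per point, each evaluated with its own coefficient slice, one option (an illustrative sketch, not part of the original answer) is to take the diagonal of that result:

import numpy as np
from numpy.polynomial.hermite import hermval2d

x = np.array([[1., 2., 3.], [4., 5., 6.]])
coefs = np.array([[[1., 0.], [0., 1.]], [[0., 1.], [1., 0.]]])

# Row k of the full result uses coefficient slice coefs[:, :, k] at every
# point, so the diagonal pairs point i with its own slice i.
full = hermval2d(x[:, 0], x[:, 1], coefs)
z = np.diagonal(full)  # array([ 9., 18.])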

logically adding in numpy

I want to do the following operation. It is like a histogram operation.
from numpy import array, zeros

maxIndex = 6
dst = zeros(maxIndex)
a = array([1, 2, 3, 4, 7, 0, 3, 4, 5, 7])
index = array([1, 1, 1, 3, 3, 4, 4, 5, 5, 5])
# a's length == index's length
for i in range(a.size):
    dst[index[i]] = dst[index[i]] + a[i]
How can I do this more pythonically and more efficiently?
If I understand correctly, I think you are looking for numpy.bincount:
dst = numpy.bincount(index, weights=a, minlength=maxIndex)
This gives me array([ 0., 6., 0., 11., 3., 16.]) as the output. If you don't want to calculate maxIndex by hand, you can omit the minlength parameter from the function call and numpy will return an appropriately-sized array for you.
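A pure-numpy alternative that mirrors the original loop directly is np.add.at, which performs unbuffered in-place addition (a sketch, assuming the same maxIndex as above):

import numpy as np

maxIndex = 6
a = np.array([1, 2, 3, 4, 7, 0, 3, 4, 5, 7])
index = np.array([1, 1, 1, 3, 3, 4, 4, 5, 5, 5])
dst = np.zeros(maxIndex)
np.add.at(dst, index, a)  # dst[index[i]] += a[i] for every i
print(dst)                # [ 0.  6.  0. 11.  3. 16.]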

expanding (adding a row or column) a scipy.sparse matrix

Suppose I have an NxN matrix M (lil_matrix or csr_matrix) from scipy.sparse, and I want to make it (N+1)xN, where M_modified[i,j] = M[i,j] for 0 <= i < N (and all j) and M[N,j] = 0 for all j. Basically, I want to add a row of zeros to the bottom of M and preserve the rest of the matrix. Is there a way to do this without copying the data?
Scipy doesn't have a way to do this without copying the data, but you can do it yourself by changing the attributes that define the sparse matrix.
There are 4 attributes that make up the csr_matrix:
data: An array containing the actual values in the matrix
indices: An array containing the column index corresponding to each value in data
indptr: An array that specifies, for each row, where that row's values start in data (row i's values are data[indptr[i]:indptr[i+1]]). If a row is empty, its entry is the same as the previous row's.
shape: A tuple containing the shape of the matrix
If you are simply adding a row of zeros to the bottom all you have to do is change the shape and indptr for your matrix.
import numpy as np
from scipy.sparse import csr_matrix

x = np.ones((3, 5))
x = csr_matrix(x)
x.toarray()
array([[ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.]])
# reshape is not implemented for csr_matrix, but you can cheat and do it yourself.
x._shape = (4, 5)
# Update indptr to let it know we added a row with nothing in it, so just append
# the last value in indptr to the end.
# Note that you are still copying the indptr array.
x.indptr = np.hstack((x.indptr, x.indptr[-1]))
x.toarray()
array([[ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  0.,  0.,  0.]])
Here is a function to handle the more general case of vstacking any two csr_matrices. You still end up copying the underlying numpy arrays, but it is still significantly faster than the scipy vstack method.
def csr_vappend(a, b):
    """Takes in 2 csr_matrices and appends the second one to the bottom of the first one.
    Much faster than scipy.sparse.vstack but assumes the type to be csr and overwrites
    the first matrix instead of copying it. The data, indices, and indptr still get copied."""
    a.data = np.hstack((a.data, b.data))
    a.indices = np.hstack((a.indices, b.indices))
    a.indptr = np.hstack((a.indptr, (b.indptr + a.nnz)[1:]))
    a._shape = (a.shape[0] + b.shape[0], b.shape[1])
    return a
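For instance, a quick usage sketch stacking two small matrices:

import numpy as np
from scipy.sparse import csr_matrix

a = csr_matrix(np.ones((2, 3)))
b = csr_matrix(np.zeros((1, 3)))
csr_vappend(a, b).toarray()
# array([[ 1.,  1.,  1.],
#        [ 1.,  1.,  1.],
#        [ 0.,  0.,  0.]])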
Not sure if you're still looking for a solution, but maybe others can look into hstack and vstack - http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.hstack.html. I think we can define a csr_matrix for the single additional row and then vstack it with the previous matrix.
I don't think that there is any way to really escape from doing the copying. Both of these types of sparse matrices store their data internally as Numpy arrays (in the data and indices attributes for csr and in the data and rows attributes for lil), and Numpy arrays can't be extended.
Update with more information:
LIL does stand for LInked List, but the current implementation doesn't quite live up to the name. The Numpy arrays used for data and rows both have dtype object; each element of these arrays is actually a Python list (an empty list when all values in a row are zero). Python lists aren't exactly linked lists, but they are close, and frankly a better choice due to O(1) look-up. Personally, I don't immediately see the point of using a Numpy array of objects here rather than just a Python list. You could fairly easily change the current lil implementation to use Python lists instead, which would allow you to add a row without copying the whole matrix.
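To see this structure for yourself, here is a quick illustrative peek at the lil internals:

from scipy.sparse import lil_matrix

m = lil_matrix((3, 4))
m[0, 1] = 5.0
m[2, 0] = 7.0
print(m.rows)  # object array holding a Python list of column indices per row
print(m.data)  # object array holding a Python list of stored values per row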
