I'm trying to initialize a NumPy array that contains named tuples. Everything works fine when I initialize the array with empty data and set that data afterwards; when using the numpy.array constructor, however, NumPy doesn't do what I had expected.
The output of
import numpy
data = numpy.random.rand(10, 3)
print data[0]
# Works
a = numpy.empty(
    len(data),
    dtype=numpy.dtype([('nodes', (float, 3))])
)
a['nodes'] = data
print
print a[0]['nodes']
# Doesn't work
b = numpy.array(
    data,
    dtype=numpy.dtype([('nodes', (float, 3))])
)
print
print b[0]['nodes']
is
[ 0.28711363 0.89643579 0.82386232]
[ 0.28711363 0.89643579 0.82386232]
[[ 0.28711363 0.28711363 0.28711363]
[ 0.89643579 0.89643579 0.89643579]
[ 0.82386232 0.82386232 0.82386232]]
This is with NumPy 1.8.1.
Any hints on how to get the numpy.array constructor to produce this layout directly?
This is awful, but:
Starting with your example copied and pasted into an IPython session, try
dtype=numpy.dtype([('nodes', (float, 3))])
c = numpy.array([(aa,) for aa in data], dtype=dtype)
it seems to do the trick.
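As a quick sanity check, continuing the session above, every row should land intact in the 'nodes' field:
numpy.allclose(c['nodes'], data)  # True: each row became one record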
It's instructive to construct a different array:
dt3 = np.dtype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')])
b = np.zeros((10,), dtype=dt3)
b[:] = [tuple(x) for x in data]
b['x'] = data[:, 0]  # alternative: assign one field at a time
np.array([tuple(x) for x in data], dtype=dt3)  # or in one statement
a[:1]
# array([([0.32726803375966484, 0.5845638956708634, 0.894278688117277],)], dtype=[('nodes', '<f8', (3,))])
b[:1]
# array([(0.32726803375966484, 0.5845638956708634, 0.894278688117277)], dtype=[('x', '<f8'), ('y', '<f8'), ('z', '<f8')])
I don't think there's a way of assigning data to all the fields of b without some sort of iteration.
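(One partial exception, sketched under the assumption that all three fields are plain '<f8' with no padding: view b as a flat float array and assign in bulk.)
# valid only while every field shares one dtype and the record has no padding
flat = b.view('<f8').reshape(len(b), -1)  # shape (10, 3); shares memory with b
flat[:] = data                            # bulk copy, no per-row Python loop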
genfromtxt is a common way of generating record arrays like this. Looking at its code I see a pattern like:
data = list(zip(*[...]))
output = np.array(data, dtype)
Which inspired me to try:
dtype=numpy.dtype([('nodes', (float, 3))])
a = np.array(zip(data), dtype=dtype)
(Speed is basically the same as eickenberg's comprehension, so it's doing the same pure-Python list operations.)
And for the 3 fields:
np.array(zip(*data.T), dtype=dt3)
Curiously, explicitly converting to a list first is even faster (almost twice as fast as the zip(data) version):
np.array(zip(*data.T.tolist()), dtype=dt3)
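One caveat if you run these under Python 3: zip returns a lazy iterator there, and np.array will wrap the iterator itself in a 0-d object array, so materialize it first:
a = np.array(list(zip(data)), dtype=dtype)   # Python 3 equivalent of zip(data)
b = np.array(list(zip(*data.T)), dtype=dt3)  # likewise for the 3-field layout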
I am trying to save a pandas DataFrame to a MATLAB .mat file using scipy.io.
I have the following:
array1 = np.array([1,2,3])
array2 = np.array(['a','b','c'])
array3 = np.array([1.01,2.02,3.03])
df = DataFrame({1:array1, 2:array2,3:array3}, index=('array1','array2','array3'))
recarray_ = df.to_records()
## Produces:
# rec.array([('array1', 1, 'a', 1.01), ('array2', 2, 'b', 2.02),
# ('array3', 3, 'c', 3.03)],
# dtype=[('index', 'O'), ('1', '<i4'), ('2', 'O'), ('3', '<f8')])
scipy.io.savemat('test_recarray_struct.mat', {'struct':df.to_records()})
In MATLAB, I would expect this to produce a struct containing three arrays (one int, one char, one float), but what it actually produces is a struct containing three more structs, each containing four variables: 'index', '1', '2', '3'. When I try to select '1', '2' or '3', I get the error 'The variable struct(1, 1).# does not exist.'
Can anyone explain the expected behaviour and how best to save DataFrames to .mat files?
I am using the following workaround in the meantime. Please let me know if you have a better solution:
a_dict = {col_name : df[col_name].values for col_name in df.columns.values}
## optional if you want to save the index as an array as well:
# a_dict[df.index.name] = df.index.values
scipy.io.savemat('test_struct_to_mat.mat', {'struct':a_dict})
I think what you need is to create the dataframe like this:
df = DataFrame({'array1':array1, 'array2':array2,'array3':array3})
and save it like this:
scipy.io.savemat('test_recarray_struct.mat', {'struct':df.to_dict("list")})
So the code should be something like:
# ... import numpy, DataFrame, and scipy.io as appropriate
array1 = np.array([1,2,3])
array2 = np.array(['a','b','c'])
array3 = np.array([1.01,2.02,3.03])
df = DataFrame({'array1':array1, 'array2':array2,'array3':array3})
scipy.io.savemat('test_recarray_struct.mat', {'struct':df.to_dict("list")})
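A quick round-trip check with scipy.io.loadmat (a sketch; the (1, 1) indexing below is just how scipy represents a loaded MATLAB struct array):
import scipy.io
m = scipy.io.loadmat('test_recarray_struct.mat')
s = m['struct']           # 1x1 struct array with fields array1, array2, array3
print(s['array1'][0, 0])  # e.g. [[1 2 3]]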
Say I have a file myfile.txt containing:
1 2.0000 buckle_my_shoe
3 4.0000 margery_door
How do I import data from the file to a numpy array as an int, float and string?
I am aiming to get:
array([[1,2.0000,"buckle_my_shoe"],
[3,4.0000,"margery_door"]])
I've been playing around with the following to no avail:
a = numpy.loadtxt('myfile.txt',dtype=(numpy.int_,numpy.float_,numpy.string_))
EDIT: Another approach might be to use the ndarray type and convert afterwards.
b = numpy.loadtxt('myfile.txt',dtype=numpy.ndarray)
array([['1', '2.0000', 'buckle_my_shoe'],
['3', '4.0000', 'margery_door']], dtype=object)
Use numpy.genfromtxt:
import numpy as np
np.genfromtxt('filename', dtype=None)
# array([(1, 2.0, 'buckle_my_shoe'), (3, 4.0, 'margery_door')],
# dtype=[('f0', '<i4'), ('f1', '<f8'), ('f2', '|S14')])
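Because dtype=None yields a structured array, the columns come back under auto-generated field names (f0, f1, ...), and you can pull each one out by name:
a = np.genfromtxt('myfile.txt', dtype=None)
a['f2']        # array(['buckle_my_shoe', 'margery_door'], dtype='|S14')
a['f1'].sum()  # 6.0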
Pandas can do that for you. The docs for the function you could use are here.
Assuming your columns are tab-separated, this should do the trick (adapted from this question):
df = DataFrame.from_csv('myfile.txt', sep='\t')
array = df.values # the array you are interested in
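For what it's worth, DataFrame.from_csv was later deprecated in favor of read_csv; a rough modern equivalent (a sketch, with sep=r'\s+' to match the whitespace-separated sample and header=None since the file has no header row) is:
import pandas as pd

df = pd.read_csv('myfile.txt', sep=r'\s+', header=None)
array = df.values  # mixed int/float/string columns arrive as dtype=object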
Basically, I have a bunch of data where the first column is a string (label) and the remaining columns are numeric values. I run the following:
data = numpy.genfromtxt('data.txt', delimiter = ',')
This reads most of the data well, but the label column just gets 'nan'. How can I deal with this?
By default, np.genfromtxt uses dtype=float: that's why your string columns are converted to NaN because, after all, they're Not A Number...
You can ask np.genfromtxt to try to guess the actual type of your columns by using dtype=None:
>>> from StringIO import StringIO
>>> test = "a,1,2\nb,3,4"
>>> a = np.genfromtxt(StringIO(test), delimiter=",", dtype=None)
>>> a
array([('a',1,2),('b',3,4)], dtype=[('f0', '|S1'),('f1', '<i8'),('f2', '<i8')])
You can access the columns by using their name, like a['f0']...
Using dtype=None is a good trick if you don't know what your columns should be. If you already know what type they should have, you can give an explicit dtype. For example, in our test, we know that the first column is a string, the second an int, and we want the third to be a float. We would then use
>>> np.genfromtxt(StringIO(test), delimiter=",", dtype=("|S10", int, float))
array([('a', 1, 2.0), ('b', 3, 4.0)],
dtype=[('f0', '|S10'), ('f1', '<i8'), ('f2', '<f8')])
Using an explicit dtype is much more efficient than using dtype=None and is the recommended way.
In both cases (dtype=None or explicit, non-homogeneous dtype), you end up with a structured array.
[Note: With dtype=None, the input is parsed a second time and the type of each column is updated to match the largest type possible: first we try a bool, then an int, then a float, then a complex, and we keep a string if all else fails. The implementation is rather clunky, actually. There have been some attempts to make the type guessing more efficient (using regexps), but nothing that has stuck so far.]
If your data file is structured like this
col1, col2, col3
1, 2, 3
10, 20, 30
100, 200, 300
then numpy.genfromtxt can interpret the first line as column headers using the names=True option. With this you can access the data very conveniently by providing the column header:
data = np.genfromtxt('data.txt', delimiter=',', names=True)
print data['col1'] # array([ 1., 10., 100.])
print data['col2'] # array([ 2., 20., 200.])
print data['col3'] # array([ 3., 30., 300.])
Since in your case the data is formed like this
row1, 1, 10, 100
row2, 2, 20, 200
row3, 3, 30, 300
you can achieve something similar using the following code snippet:
labels = np.genfromtxt('data.txt', delimiter=',', usecols=0, dtype=str)
raw_data = np.genfromtxt('data.txt', delimiter=',')[:,1:]
data = {label: row for label, row in zip(labels, raw_data)}
The first line reads the first column (the labels) into an array of strings.
The second line reads all data from the file but discards the first column.
The third line uses a dictionary comprehension to create a dictionary that can be used much like the structured array which numpy.genfromtxt creates using the names=True option:
print data['row1'] # array([ 1., 10., 100.])
print data['row2'] # array([ 2., 20., 200.])
print data['row3'] # array([ 3., 30., 300.])
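Alternatively, a sketch with an explicit dtype (the 'S8' label width here is a guess you'd adjust to your data) keeps labels and numbers together in one structured array:
dt = np.dtype([('label', 'S8'), ('c1', float), ('c2', float), ('c3', float)])
table = np.genfromtxt('data.txt', delimiter=',', dtype=dt)
table[table['label'] == 'row1']  # the full record for one labeled row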
data = np.genfromtxt(csv_file, delimiter=',', dtype='unicode')
It works fine for me.
For a dataset of this format:
CONFIG000 1080.65 1080.87 1068.76 1083.52 1084.96 1080.31 1081.75 1079.98
CONFIG001 414.6 421.76 418.93 415.53 415.23 416.12 420.54 415.42
CONFIG010 1091.43 1079.2 1086.61 1086.58 1091.14 1080.58 1076.64 1083.67
CONFIG011 391.31 392.96 391.24 392.21 391.94 392.18 391.96 391.66
CONFIG100 1067.08 1062.1 1061.02 1068.24 1066.74 1052.38 1062.31 1064.28
CONFIG101 371.63 378.36 370.36 371.74 370.67 376.24 378.15 371.56
CONFIG110 1060.88 1072.13 1076.01 1069.52 1069.04 1068.72 1064.79 1066.66
CONFIG111 350.08 350.69 352.1 350.19 352.28 353.46 351.83 350.94
This code works for my application:
import numpy

def ShowData(data, names):
    i = 0
    while i < data.shape[0]:
        print(names[i] + ": ")
        j = 0
        while j < data.shape[1]:
            print(data[i][j])
            j += 1
        print("")
        i += 1

def Main():
    print("The sample data is: ")
    fname = 'ANOVA.csv'
    csv = numpy.genfromtxt(fname, dtype=str, delimiter=",")
    num_rows = csv.shape[0]
    num_cols = csv.shape[1]
    names = csv[:, 0]  # first column holds the row labels
    data = numpy.genfromtxt(fname, usecols=range(1, num_cols), delimiter=",")
    print(names)
    print(str(num_rows) + "x" + str(num_cols))
    print(data)
    ShowData(data, names)

Main()
Python-2 output:
The sample data is:
['CONFIG000' 'CONFIG001' 'CONFIG010' 'CONFIG011' 'CONFIG100' 'CONFIG101'
'CONFIG110' 'CONFIG111']
8x9
[[ 1080.65 1080.87 1068.76 1083.52 1084.96 1080.31 1081.75 1079.98]
[ 414.6 421.76 418.93 415.53 415.23 416.12 420.54 415.42]
[ 1091.43 1079.2 1086.61 1086.58 1091.14 1080.58 1076.64 1083.67]
[ 391.31 392.96 391.24 392.21 391.94 392.18 391.96 391.66]
[ 1067.08 1062.1 1061.02 1068.24 1066.74 1052.38 1062.31 1064.28]
[ 371.63 378.36 370.36 371.74 370.67 376.24 378.15 371.56]
[ 1060.88 1072.13 1076.01 1069.52 1069.04 1068.72 1064.79 1066.66]
[ 350.08 350.69 352.1 350.19 352.28 353.46 351.83 350.94]]
CONFIG000:
1080.65
1080.87
1068.76
1083.52
1084.96
1080.31
1081.75
1079.98
CONFIG001:
414.6
421.76
418.93
415.53
415.23
416.12
420.54
415.42
CONFIG010:
1091.43
1079.2
1086.61
1086.58
1091.14
1080.58
1076.64
1083.67
CONFIG011:
391.31
392.96
391.24
392.21
391.94
392.18
391.96
391.66
CONFIG100:
1067.08
1062.1
1061.02
1068.24
1066.74
1052.38
1062.31
1064.28
CONFIG101:
371.63
378.36
370.36
371.74
370.67
376.24
378.15
371.56
CONFIG110:
1060.88
1072.13
1076.01
1069.52
1069.04
1068.72
1064.79
1066.66
CONFIG111:
350.08
350.69
352.1
350.19
352.28
353.46
351.83
350.94
You can use numpy.recfromcsv(filename): the types of each column will be determined automatically (as if you used np.genfromtxt() with dtype=None), and by default delimiter=','. It's basically a shortcut for the np.genfromtxt(filename, delimiter=',', dtype=None) call that Pierre GM pointed to in his answer.
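A minimal sketch, assuming a hypothetical data.csv whose first line holds the column names (recfromcsv reads names from the header by default):
import numpy as np

r = np.recfromcsv('data.csv')  # implies delimiter=',', dtype=None, names=True
r.col1      # fields are attributes on the resulting recarray...
r['col1']   # ...and dict-style keys as well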
Here is a working example, start to finish. Say I want to import numbers from a file, skipping its first line:
I like trains  # this is the first line, a string
1 \t 2 \t 3    # \t signifies that the delimiter (separator) is a tab, not a comma
4 \t 5 \t 6
Then running the following code:
import numpy as np #contains genfromtxt
import matplotlib.pyplot as plt #enables plots
from pathlib import Path # easier using path instead of writing it again and again when you have many files in the same folder
path = r'some_path'  # folder holding your files, like r'C:\my comp\folder\folder2';
                     # the r prefix keeps the Windows path readable in Python ("just text")
fileNames = [r'\I like trains.txt',
             r'\die potato.txt']
data = np.genfromtxt(path + fileNames[0], delimiter='\t', skip_header=1)
Produces this result:
data = [[1. 2. 3.]
        [4. 5. 6.]]
where each number has its own cell and can be reached separately
I'm learning Matplotlib, and trying to implement a simple linear regression by hand.
However, I've run into a problem when importing and then working with my data after using csv2rec.
data= matplotlib.mlab.csv2rec('KC_Filtered01.csv',delimiter=',')
x = data['list_price']
y = data['square_feet']
sumx = x.sum()
sumy = y.sum()
sumxSQ = sum([sq**2 for sq in x])
sumySQ = sum([sq**2 for sq in y])
I'm reading in a list of housing prices and trying to get the sum of the squares. However, when csv2rec reads the prices from the file, it stores the values as int32. Since the sum of the squares of the housing prices is larger than a 32-bit integer can hold, it overflows. I don't see a method of changing the data type that is assigned when csv2rec reads the file. How can I change the data type when the array is read in or assigned?
x = data['list_price'].astype('int64')
and the same with y.
And: csv2rec has a converterd argument (that spelling is the actual parameter name): http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.csv2rec
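Per those docs, converterd maps a column number or (munged) column name to a converter function. A sketch for the question's list_price column; I haven't verified how csv2rec infers the final column dtype from the converter, so treat it as a starting point:
import numpy as np
from matplotlib import mlab

# push list_price through a 64-bit converter as it is parsed
data = mlab.csv2rec('KC_Filtered01.csv', delimiter=',',
                    converterd={'list_price': np.int64})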
Instead of mlab.csv2rec, you can use numpy's equivalent function, numpy.loadtxt (documentation), to read your data. This function has an argument to specify the dtype of your data. Or, if you want to work with column names (as in your example code), use numpy.genfromtxt (documentation). It is like loadtxt but with more options, such as reading the column names from the first line of your file (with names=True).
An example of its usage:
In [9]:
import numpy as np
from StringIO import StringIO
data = StringIO("a, b, c\n 1, 2, 3\n 4, 5, 6")
np.genfromtxt(data, names=True, dtype = 'int64', delimiter = ',')
Out[9]:
array([(1L, 2L, 3L), (4L, 5L, 6L)],
dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')])
Another remark on your code: when using numpy arrays you don't have to use for-loops. To calculate the squares, you can just do:
xSQ = x**2
sumxSQ = xSQ.sum()
or in one line:
sumxSQ = numpy.sum(x**2)
What is the cleanest way to add a field to a structured numpy array? Can it be done destructively, or is it necessary to create a new array and copy over the existing fields? Are the contents of each field stored contiguously in memory so that such copying can be done efficiently?
If you're using numpy 1.3, there's also numpy.lib.recfunctions.append_fields(). For many installations, you'll need to import numpy.lib.recfunctions explicitly to access this; a plain import numpy does not make numpy.lib.recfunctions visible.
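A minimal sketch of append_fields on a toy array (the same shape of data as the add_field example below; usemask=False asks for a plain ndarray instead of a masked array):
import numpy
from numpy.lib import recfunctions

sa = numpy.array([(1, 'Foo'), (2, 'Bar')],
                 dtype=[('id', int), ('name', 'S3')])
sb = recfunctions.append_fields(sa, 'score', numpy.zeros(len(sa)),
                                dtypes=float, usemask=False)
sb.dtype.names  # ('id', 'name', 'score')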
import numpy

def add_field(a, descr):
    """Return a new array that is like "a", but has additional fields.

    Arguments:
      a     -- a structured numpy array
      descr -- a numpy type description of the new fields

    The contents of "a" are copied over to the appropriate fields in
    the new array, whereas the new fields are uninitialized. The
    arguments are not modified.

    >>> sa = numpy.array([(1, 'Foo'), (2, 'Bar')], \
                         dtype=[('id', int), ('name', 'S3')])
    >>> sa.dtype.descr == numpy.dtype([('id', int), ('name', 'S3')])
    True
    >>> sb = add_field(sa, [('score', float)])
    >>> sb.dtype.descr == numpy.dtype([('id', int), ('name', 'S3'), \
                                       ('score', float)])
    True
    >>> numpy.all(sa['id'] == sb['id'])
    True
    >>> numpy.all(sa['name'] == sb['name'])
    True
    """
    if a.dtype.fields is None:
        raise ValueError("`a` must be a structured numpy array")
    b = numpy.empty(a.shape, dtype=a.dtype.descr + descr)
    for name in a.dtype.names:
        b[name] = a[name]
    return b
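A short usage note: the new fields come back uninitialized, so assign them right after the copy:
sa = numpy.array([(1, 'Foo'), (2, 'Bar')],
                 dtype=[('id', int), ('name', 'S3')])
sb = add_field(sa, [('score', float)])
sb['score'] = 0.0  # the new field holds garbage until you assign it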