This should be quick and easy; I'm just trying to convert some VBA into Python and I believe I don't understand how loops work here.
Basically I'm trying to count how many series there are in a chart and then iterate through those series with for iseries in range(1, nseries):
And I end up getting the following error:
Traceback (most recent call last):
  File "xx.py", line 10, in <module>
    for iseries in range(1, nseries):
TypeError: 'method' object cannot be interpreted as an integer
Full script below. The print statements are my attempt to check whether the loop worked correctly and counted the right number of series/points. That doesn't seem to work either, as nothing is printed, so maybe this is the issue:
from pptx import Presentation
prs = Presentation('Test.pptx')
for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_chart:
            continue
        nseries = shape.chart.series.count
        print('Series number:', nseries)
        for iseries in range(1, nseries):
            series = shape.chart.series(iseries)
            npoint = series.points.count
            print('Point number:', npoint)
prs.save('test3.pptx')
This is likely because count is a method here, not a plain attribute, so you are assigning the method object itself rather than a number. Use len() to get the number of series:
nseries = len(shape.chart.series)
A better way to loop through the series, however, is to iterate over them directly instead of using an index (note that Python indexing is zero-based and uses square brackets, so range(1, nseries) and shape.chart.series(iseries) would both be wrong anyway):
for series in shape.chart.series:
    # do something with series
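Putting both together, a minimal sketch of the corrected script (assuming the same Test.pptx, and category charts so that series.points is available) might look like:
from pptx import Presentation

prs = Presentation('Test.pptx')
for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_chart:
            continue
        # len() gives the number of series in the chart
        print('Series number:', len(shape.chart.series))
        for series in shape.chart.series:
            # each series exposes its data points as a sized collection
            print('Point number:', len(series.points))
prs.save('test3.pptx')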
Okay, let me explain things first. I have used a specific module named Biopython in this code; I'll explain the details you need to follow the problem in case you are not familiar with the module.
The code is:
#!/usr/bin/python
from Bio.PDB.PDBParser import PDBParser
import numpy as np
parser=PDBParser(PERMISSIVE=1)
structure_id="mode_7"
filename="mode_7.pdb"
structure=parser.get_structure(structure_id, filename)
model1=structure[0]
s=(124,3)
newc=np.zeros(s,dtype=np.float32)
coord=[]
#for chain1 in model1.get_list():
#    for residue1 in chain1.get_list():
#        ca1=residue1["CA"]
#        coord1=ca1.get_coord()
#        newc.append(coord1)
for i in range(0,29):
    model=structure[i]
    for chain in model.get_list():
        for residue in chain.get_list():
            ca=residue["CA"]
            coord.append(ca.get_coord())
    newc=np.add(newc,coord)
print newc
print "END"
The PDB file is a Protein Data Bank file. The file I'm working with can be downloaded from https://drive.google.com/open?id=0B8oUhqYoEX6YVFJBTGlNZGNBdlk
If you remove the hashes from the first for loop, you'll find that collecting get_coord() for every residue gives a (124,3) array with dtype float32. Likewise, the next for loop is supposed to produce the same.
It gives out a strange error:
Traceback (most recent call last):
File "./average.py", line 27, in <module>
newc=np.add(newc,coord)
ValueError: operands could not be broadcast together with shapes (124,3) (248,3)
I am absolutely clueless how it manages to make a (248,3) array. I just want to keep adding the coord array onto newc. I tried another modification of the code:
#!/usr/bin/python
from Bio.PDB.PDBParser import PDBParser
import numpy as np
parser=PDBParser(PERMISSIVE=1)
structure_id="mode_7"
filename="mode_7.pdb"
structure=parser.get_structure(structure_id, filename)
model1=structure[0]
s=(124,3)
newc=np.zeros(s,dtype=np.float32)
coord=[]
newc2=[]
#for chain1 in model1.get_list():
#    for residue1 in chain1.get_list():
#        ca1=residue1["CA"]
#        coord1=ca1.get_coord()
#        newc.append(coord1)
for i in range(0,29):
    model=structure[i]
    for chain in model.get_list():
        for residue in chain.get_list():
            ca=residue["CA"]
            coord.append(ca.get_coord())
    newc2=np.add(newc,coord)
print newc
print "END"
It gives the same error. Can you help?
I'm not sure I fully understand what you're doing, but it looks like you need to reset the coord list at the start of every iteration:
for i in range(0,29):
    coord = []  # reset the list for every model
    model=structure[i]
    for chain in model.get_list():
        for residue in chain.get_list():
            ca=residue["CA"]
            coord.append(ca.get_coord())
    newc=np.add(newc,coord)
If you keep appending without clearing the list, you add 124 items to coord at every iteration of the outer loop. The exception you see is likely raised during the second iteration, when the list has grown to 248 entries, which matches the (248,3) shape in the error.
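For what it's worth, if the goal is the average structure over all models, a minimal sketch (assuming every model has exactly 124 residues with a CA atom, and 29 models as in your range) could be:
from Bio.PDB.PDBParser import PDBParser
import numpy as np

parser = PDBParser(PERMISSIVE=1)
structure = parser.get_structure("mode_7", "mode_7.pdb")

newc = np.zeros((124, 3), dtype=np.float32)
for i in range(29):
    coord = []  # start a fresh list for every model
    for chain in structure[i].get_list():
        for residue in chain.get_list():
            coord.append(residue["CA"].get_coord())
    newc = np.add(newc, coord)  # shapes now match: (124,3) + (124,3)

average = newc / 29.0  # mean CA coordinates over the 29 models
print(average)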
I am writing a program that will append a list with a single element pulled from a 2 dimensional numpy array. So far, I have:
# For loop to get correlation data of selected (x,y) pixel for all bands
zdata = []
for n in d.bands:
    cor_xy = np.array(d.bands[n])
    zdata.append(cor_xy[y,x])
Every time I run my program, I get the following error:
Traceback (most recent call last):
File "/home/sdelgadi/scr/plot_pixel_data.py", line 36, in <module>
cor_xy = np.array(d.bands[n])
TypeError: only integer arrays with one element can be converted to an index
My method works when I try it from the python interpreter without using a loop, i.e.
>>> zdata = []
>>> a = np.array(d.bands[0])
>>> zdata.append(a[y,x])
>>> a = np.array(d.bands[1])
>>> zdata.append(a[y,x])
>>> print(zdata)
[0.59056658, 0.58640128]
What is different about creating a for loop and doing this manually, and how can I get my loop to stop causing errors?
You're treating n as if it were an index into d.bands, when it's actually an element of d.bands:
zdata = []
for n in d.bands:
    cor_xy = np.array(n)
    zdata.append(cor_xy[y,x])
You say a = np.array(d.bands[0]) works. The first n should be exactly the same thing as d.bands[0], so np.array(n) is all you need.
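To make the difference concrete, here is a small self-contained sketch (a plain list of random 2-D arrays stands in for d.bands, which is an assumption about its structure):
import numpy as np

bands = [np.random.rand(5, 5) for _ in range(3)]  # stand-in for d.bands
x, y = 1, 2

zdata = []
for band in bands:
    # band is already an element of the list, not an index into it
    cor_xy = np.array(band)
    zdata.append(cor_xy[y, x])
print(zdata)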
I have a function inside a class that is called inside a while loop.
This function takes two 6x6 matrices and returns an array whose size is set by the np.linspace call, i.e. 6x91. I want to store these three arrays in a file (using savetxt), concatenated together as float numbers. Below is a sample of the function run separately from the class, which works fine.
The problem comes when I declare the list output before the loop and, while updating EulArr via a Tkinter window, append the arrays to the list. I can then only save the arrays with the %s format, not with %f any more. Why?
LegForces(self, Li, ti)
import numpy as np
from math import pi
M=100. #total estimated mass to move (upper platform+upper legs+payload)
g=9.81
nsteps= 91
Li=np.matrix([[-10.64569774, -93.1416122, 116.35191853],\
[-10.68368329, 93.17236065, 116.35542498],\
[85.985087,37.35196994, 116.20350534],[-75.34703551,-55.83790049, 115.44528196],\
[-75.33938926, 55.78964226, 115.44457613],[86.0307188,-37.33446016, 116.19929305]])
ti=np.matrix([[88.15843159,88.04450508,50.10006323, -138.28903445, -138.26610178,50.2369224],\
[-108.75675186, 108.84408749, 130.72504635, 21.82594871, -21.97569549,-130.6774372],\
[ 119.40585161, 119.40170883, 119.23577854, 118.41560138, 118.41643529,119.24075525]])
ti=ti.T #transpose the ti for compatible format
x_cm= 1.
y_cm= -1.
z_cm=87.752
z=np.linspace(0,90,nsteps)
Fx=-M*g*np.cos(pi*z/180)
Fy= M*g*np.sin(pi*z/180)
Fz=50 # including braking forces [N]
#specify here the centre of mass coordinates retrieved from Inventor
Mx=Fz*y_cm-Fy*z_cm
My=-Fz*x_cm+Fx*z_cm
Mz=Fy*x_cm-Fx*y_cm
mc=np.zeros((6,3),'float')
ex_forces=np.array([Fx,Fy,Fz,Mx,My,Mz])
for i in range(6):
    mc[i,:]=np.cross(ti[i,:],Li[i,:])
r1=[Li[0,0],Li[1,0],Li[2,0],Li[3,0],Li[4,0],Li[5,0]]
r2=[Li[0,1],Li[1,1],Li[2,1],Li[3,1],Li[4,1],Li[5,1]]
r3=[Li[0,2],Li[1,2],Li[2,2],Li[3,2],Li[4,2],Li[5,2]]
r4=[mc[0,0],mc[1,0],mc[2,0],mc[3,0],mc[4,0],mc[5,0]]
r5=[mc[0,1],mc[1,1],mc[2,1],mc[3,1],mc[4,1],mc[5,1]]
r6=[mc[0,2],mc[1,2],mc[2,2],mc[3,2],mc[4,2],mc[5,2]]
DMatrix=np.vstack([r1,r2,r3,r4,r5,r6])
print 'DMatrix form:\n', DMatrix
invD=np.linalg.inv(DMatrix)
print ' inv(DMatrix) is:\n', invD
legF=np.dot(ex_forces,invD)
#slice it!
legF=legF.tolist()
a,b=np.shape(legF)
print 'check identity matrix:\n', np.dot(invD,DMatrix)
print 'leg forces:\n',legF, type(legF)
newlegF=np.reshape(legF,(1,a*b))
strokeValues= np.array([[-0.3595, .1450483, -0.3131,0.4210,-0.0825,.19124]])
print 'strokeValues shape:\n', np.shape(strokeValues)
print 'leg forces vector shape:', np.shape(newlegF)
EulArr=np.array([[0.12,0.2,0,-3.,-1.,15.]])
output=np.concatenate((strokeValues,EulArr,newlegF),axis=1)
np.savetxt('leg_forces.dat', output,fmt=' %f')
print output ,np.shape(output)
The class will look like this:
class Hexapod(self,....):
    output=[]
    while 1:
        ...
        LegAxialF=self.LegForces(Li,ti)
        #create a list for EUl parameters which can be refreshed
        EulArr=np.zeros((6,1),'float')
        EulArr[0]=180/pi*self.EulPhi
        EulArr[1]=180/pi*self.EulTheta
        EulArr[2]=180/pi*self.EulPsi
        EulArr[3]=self.EulX
        EulArr[4]=self.EulY
        EulArr[5]=self.EulZ-self.height
        #print meaningful values to the specified file
        EulArr=np.reshape(EulArr,(1,6))
        EulArrList=EulArr.tolist()
        strokes=np.reshape(strokeValues,(1,6))
        strokeList=strokes.tolist()
        output.append(np.concatenate((strokeList,\
            EulArrList,LegFList),axis=1))
        np.savetxt('act_lengths.dat', output, fmt='%f')

    def LegForces(self, Li, ti):
I get the following error:
np.savetxt('act_lengths.dat', output, fmt='%f')
File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 1047, in savetxt
fh.write(asbytes(format % tuple(row) + newline))
TypeError: float argument required, not numpy.ndarray
Output is a list containing ndarrays. I think that in your code the elements of output must be ndarrays of varying size, so internally NumPy is failing to convert the list into a float array because the data is ragged, instead creating an array of objects*. I can reproduce your error with
>>> output = [np.random.rand(3), np.random.rand(4)]
>>> np.savetxt("test", output, fmt='%f')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/yotam/anaconda/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1083, in savetxt
fh.write(asbytes(format % tuple(row) + newline))
TypeError: float argument required, not numpy.ndarray
>>>
Whereas if I instead use output = [np.random.rand(3), np.random.rand(3)] it works.
(*) i.e. you are getting
>>> np.asarray(output)
array([array([ 0.87346791, 0.10046296, 0.60304887]),
array([ 0.25116526, 0.29174373, 0.26067348, 0.68317986])], dtype=object)
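If all of your rows really do have the same length, one way around this (a sketch, with random data standing in for your stroke, Euler and force values) is to stack them into a single 2-D array before calling savetxt:
import numpy as np

output = []
for _ in range(5):
    # three 1x6 blocks concatenated into one 1x18 row, as in your loop
    row = np.concatenate((np.random.rand(1, 6),
                          np.random.rand(1, 6),
                          np.random.rand(1, 6)), axis=1)
    output.append(row)

table = np.vstack(output)  # shape (5, 18); every row has the same length
np.savetxt('act_lengths.dat', table, fmt='%f')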
I know there are a ton of these threads, but all of them are for very simple cases like 3x3 matrices and the like, and the solutions do not even begin to apply to my situation. I'm trying to graph G versus l1 (that's not an eleven, but an L-one). The data is in a file that I loaded from an Excel file. The Excel file is 14x250, so there are 14 arguments, each with 250 data points. I had another user (shout out to Hugh Bothwell!) help me with an error in my code, but now another error has surfaced.
So here is the code in question:
# format for CSV file:
header = ['l1', 'l2', 'l3', 'l4', 'l5', 'EI',
          'S', 'P_right', 'P1_0', 'P3_0',
          'w_left', 'w_right', 'G_left', 'G_right']

def loadfile(filename, skip=None, *args):
    skip = set(skip or [])
    with open(filename, *args) as f:
        cr = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC)
        return np.array(row for i,row in enumerate(cr) if i not in skip)
#plot data
outputs_l1 = [loadfile('C:\\Users\\Chris\\Desktop\\Work\\Python Stuff\\BPCROOM - Shingles analysis\\ERR analysis\\l_1 analysis//BS(1) ERR analysis - l_1 - P_3 = {}.csv'.format(p)) for p in p3_arr]
col = {name:i for i,name in enumerate(header)}
fig = plt.figure()
for data,color in zip(outputs_l1, colors):
    xs = data[:, col["l1" ]]
    gl = data[:, col["G_left" ]] * 1000.0 # column 12
    gr = data[:, col["G_right"]] * 1000.0 # column 13
    plt.plot(xs, gl, color + "-", gr, color + "--")
for output, col in zip(outputs_l1, colors):
    plt.plot(output[:,0], output[:,11]*1E3, col+'--')
plt.ticklabel_format(axis='both', style='plain', scilimits=(-1,1))
plt.xlabel('$l1 (m)$')
plt.ylabel('G $(J / m^2) * 10^{-3}$')
plt.xlim(xmin=.2)
plt.ylim(ymax=2, ymin=0)
plt.subplots_adjust(top=0.8, bottom=0.15, right=0.7)
After running the entire program, I receive the error message:
Traceback (most recent call last):
File "C:/Users/Chris/Desktop/Work/Python Stuff/New Stuff from Brenday 8 26 2014/CD_ssa_plot(2).py", line 115, in <module>
xs = data[:, col["l1" ]]
IndexError: too many indices for array
and before I ran into that problem, I had another involving the line a few below the one the above error message refers to:
Traceback (most recent call last):
  File "FILE", line 119, in <module>
gl = data[:, col["G_left" ]] * 1000.0 # column 12
IndexError: index 12 is out of bounds for axis 1 with size 12
I understand the first error but am having trouble fixing it. The second error is confusing to me, though. My boss is really breathing down my neck, so any help would be GREATLY appreciated!
I think the problem is given in the error message, although it is not very easy to spot:
IndexError: too many indices for array
xs = data[:, col["l1" ]]
'Too many indices' means you've given too many index values. You've given two values because you're expecting data to be a 2-D array; NumPy is complaining because data is not 2-D (it's probably 1-D, or even empty).
This is a bit of a guess, but I wonder if one of the filenames you pass to loadfile() points to an empty file, or to a badly formatted one. If so, the array you get back might be 1-D or even empty (np.array(None) does not throw an error, so you would never know...). If you want to guard against this failure, you can insert some error checking into your loadfile function.
I highly recommend in your for loop inserting:
print(data)
This will work in Python 2.x or 3.x and might reveal the source of the issue. You might well find it is only one value of your outputs_l1 list (i.e. one file) that is giving the issue.
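As a sketch of that error checking (hypothetical, adapted from your loadfile), you could build the rows into a list first and refuse anything that does not come back as a 2-D array:
import csv
import numpy as np

def loadfile(filename, skip=None, *args):
    skip = set(skip or [])
    with open(filename, *args) as f:
        cr = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC)
        # build a list first: np.array over a bare generator does not
        # consume it and produces a 0-d object array instead
        data = np.array([row for i, row in enumerate(cr) if i not in skip])
    if data.ndim != 2:
        raise ValueError("%s did not load as a 2-D array (shape %s)"
                         % (filename, data.shape))
    return data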
The message you are getting does not come from Python's built-in sequence types:
For a plain Python list, an IndexError is thrown only when an index is out of range (the docs say so as well).
>>> l = []
>>> l[1]
IndexError: list index out of range
If we try passing multiple items to the list, or some other invalid value, we get a TypeError instead:
>>> l[1, 2]
TypeError: list indices must be integers, not tuple
>>> l[float('NaN')]
TypeError: list indices must be integers, not float
However, here you seem to be using matplotlib, which internally uses numpy to handle arrays. Digging deeper through the numpy codebase, we find:
static NPY_INLINE npy_intp
unpack_tuple(PyTupleObject *index, PyObject **result, npy_intp result_n)
{
    npy_intp n, i;
    n = PyTuple_GET_SIZE(index);
    if (n > result_n) {
        PyErr_SetString(PyExc_IndexError,
                        "too many indices for array");
        return -1;
    }
    for (i = 0; i < n; i++) {
        result[i] = PyTuple_GET_ITEM(index, i);
        Py_INCREF(result[i]);
    }
    return n;
}
where the unpack method throws an IndexError if the size of the index tuple is greater than the number of result slots.
So, unlike plain Python lists, which raise a TypeError for an invalid index such as a tuple, NumPy raises an IndexError because it supports multidimensional indexing.
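For completeness, you can reproduce the same check on a 1-D array:
>>> import numpy as np
>>> a = np.arange(3)   # a 1-D array
>>> a[0, 1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: too many indices for array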
Before transforming the data into an array, I transformed it into a list:
data = list(data)
data = np.array(data)