While training py-faster-rcnn on a custom dataset, following the instructions at https://github.com/deboc/py-faster-rcnn/blob/master/help/Readme.md, I encountered errors like
AttributeError: 'numpy.ndarray' object has no attribute 'toarray'
which I managed to bypass by editing https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/roi_data_layer/roidb.py as follows:
gt_overlaps = roidb[i]['gt_overlaps']
gt_overlaps = sp.sparse.csr_matrix(gt_overlaps).toarray()
However, during the training process, I received a warning twice
RuntimeWarning: invalid value encountered in log
  targets_dw = np.log(gt_widths / ex_widths)
in the file https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/fast_rcnn/bbox_transform.py
Are the results going to be affected by this? Do I need to do something differently?
Maybe you should try modifying "lib/datasets/pascal_voc.py",
in the function _load_pascal_annotation().
The lines should read:
x1 = float(bbox.find('xmin').text)
y1 = float(bbox.find('ymin').text)
x2 = float(bbox.find('xmax').text)
y2 = float(bbox.find('ymax').text)
The reason is that in your own data x1 or y1 may be 0 rather than 1-based as in Pascal VOC; the stock code subtracts 1 from each coordinate, so such values become negative, which ultimately produces the invalid value in np.log.
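If you want to check whether your annotations actually trigger this before editing the loader, a quick scan of the XML files for coordinates that would turn negative after the -1 offset might look like the sketch below (the Annotations directory path and the VOC-style XML layout are assumptions; adjust them to your dataset):
import os
import xml.etree.ElementTree as ET

ann_dir = 'data/VOCdevkit2007/VOC2007/Annotations'  # adjust to your dataset layout
for fname in os.listdir(ann_dir):
    if not fname.endswith('.xml'):
        continue
    root = ET.parse(os.path.join(ann_dir, fname)).getroot()
    for obj in root.findall('object'):
        bbox = obj.find('bndbox')
        xmin = float(bbox.find('xmin').text)
        ymin = float(bbox.find('ymin').text)
        # the stock loader subtracts 1 from each coordinate, so values below 1 go negative
        if xmin < 1 or ymin < 1:
            print(fname, xmin, ymin)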
I am trying to run a rolling regression for several x variables using the code below:
# Get data for specified date range
pred = pred[beg_date:end_date]
# Lag and standardize pred
pred[var_list_all] = pred[var_list_all].shift(1)
pred[var_list_all] = preprocessing.StandardScaler().fit_transform(pred[var_list_all])
# Initialize dictionary of lists
d = {}
for i in ['coeff', 't-stat', 'r2']:
    d[i] = []
# Run bi-variate regression for each pred variable
y_var = 'EP'
for x_var in var_list_all:
    formula = y_var + ' ~ ' + x_var
    results = RollingOLS.from_formula(formula, window=60, data=pred)
    d['coeff'].append(results.params[x_var])
    d['t-stat'].append(results.params[x_var] / results.bse[x_var])
    d['r2'].append(results.rsquared * 100)
The aim is to collect the coefficient, t-stat and r2 for each variable and plot the r2 for each on separate graphs.
Whenever I run the regression I keep getting the following message:
AttributeError: 'NoneType' object has no attribute 'f_locals'
I've also tried the example on https://www.statsmodels.org/stable/examples/notebooks/generated/rolling_ls.html to see if I do understand what is going on, but I get the same error message.
I am not quite sure what I'm doing wrong.
Would appreciate any help in solving this issue.
Thanks in advance.
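Note that RollingOLS.from_formula only constructs the model; the params, bse and rsquared attributes live on the results object returned by .fit(). A minimal sketch of the same loop using the plain array interface with an explicit fit() call, assuming pred contains the 'EP' column and the predictors in var_list_all:
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS

y_var = 'EP'
for x_var in var_list_all:
    exog = sm.add_constant(pred[[x_var]])        # intercept plus a single predictor
    res = RollingOLS(pred[y_var], exog, window=60).fit()
    d['coeff'].append(res.params[x_var])         # rolling coefficient path
    d['t-stat'].append(res.params[x_var] / res.bse[x_var])
    d['r2'].append(res.rsquared * 100)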
I am trying to run the KPrototypes clustering algorithm. Both when I fit the model and when I build the cost graph, as shown below, I always get a 'no attribute' error for the labels_ and cost_ attributes. I checked the examples on several websites, but I can't see any difference. What can I do? Thank you for your help.
1)
from kmodes.kmodes import KModes
from kmodes.kprototypes import KPrototypes
kproto1 = KPrototypes(n_clusters=15, init='Cao').fit_predict(data,categorical = [23])
labels= kproto1.labels_
AttributeError: 'numpy.ndarray' object has no attribute 'labels_'
2)
cost = []
range_cluster = [5, 8, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 70, 85, 100]
for num_clusters in range_cluster:
    kproto = KPrototypes(n_clusters=num_clusters, init='Cao').fit_predict(data, categorical=[23])
    cost.append(kproto.cost_)
plt.plot(cost)
According to the source code, there are two ways to achieve this:
fit_predict will return a tuple of labels, cost. So to get your labels, you should do:
kproto1_result = KPrototypes(n_clusters=15, init='Cao').fit_predict(data, categorical=[23])
labels = kproto1_result[0]
The second way is to just use the fit method:
kproto1 = KPrototypes(n_clusters=15, init='Cao').fit(data, categorical=[23])
labels = kproto1.labels_
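The same change applies to the cost graph in the question: with fit() the fitted estimator is returned, so cost_ is available. A minimal sketch based on the question's loop, assuming data and the categorical column index are as above:
cost = []
range_cluster = [5, 8, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 70, 85, 100]
for num_clusters in range_cluster:
    # fit() returns the fitted KPrototypes instance, so cost_ can be read afterwards
    kproto = KPrototypes(n_clusters=num_clusters, init='Cao').fit(data, categorical=[23])
    cost.append(kproto.cost_)
plt.plot(range_cluster, cost)   # cost against the number of clusters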
I am starting to work with the Kymatio library in order to use the scattering transform as a feature extractor for 1D signals. The end goal is to classify 1D signals.
I followed the example available at the link
https://www.kymat.io/gallery_1d/plot_classif_torch.html#sphx-glr-gallery-1d-plot-classif-torch-py
Based on this example, I imported three .mat files that contain the compiled data from the COOLL dataset (https://coolldataset.github.io/). Two variables were imported:
x2 contains the values of the appliance currents. x2 is a matrix with 840 rows and 4 * 8192 columns.
y2 contains the label list. It has 840 positions, one for each appliance.
I'm trying to calculate the coefficients of the Scattering1D transform for each of the signals that x2 contains. For this, I am doing the following:
T = 32768
J = 8
Q = 12
# scattering = Scattering1D(J, T, Q), created earlier as in the linked example
if use_cuda:
    scattering.cuda()
    x2 = x2.cuda()
    y2 = y2.cuda()
Sx_all = scattering.forward(x2)
When I do this, the following error appears:
RuntimeError Traceback (most recent call last)
<ipython-input-62-26c538d90a70> in <module>()
1 #Sx_all = scattering(x2)
----> 2 Sx_all = scattering.forward(x2)
1 frames
/usr/local/lib/python3.6/dist-packages/kymatio/backend/torch_backend.py in input_checks(x)
9
10 if not x.is_contiguous():
---> 11 raise RuntimeError('The input must be contiguous.')
12
13 def _is_complex(x):
RuntimeError: The input must be contiguous.
This error does not appear when I run the original program, from the example available at https://www.kymat.io/gallery_1d/plot_classif_torch.html#sphx-glr-gallery-1d-plot-classif-torch-py.
What exactly does the error message 'The input must be contiguous' mean, and how do you suggest I fix the problem? I tried to read the library documentation but I still haven't solved the problem.
I believe I have found the solution: calling tensor.contiguous() on the imported tensors.
x2 = x_all_import['x_all']
x2 = torch.from_numpy(x2)
x2 = x2.contiguous()
y2 = y_all_import['y_all']
y2 = y2.flatten()
y2 = torch.from_numpy(y2)
y2 = y2.contiguous()
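For context on what the message means: a PyTorch tensor is contiguous when its elements are stored in row-major order without gaps. Arrays loaded from .mat files are typically column-major (Fortran order), so the tensors produced by torch.from_numpy can end up non-contiguous, and .contiguous() copies them into a contiguous layout that Kymatio accepts. A small, self-contained illustration:
import torch

t = torch.arange(12).reshape(3, 4)
print(t.is_contiguous())   # True: row-major layout
v = t.t()                  # transposing only swaps strides, no copy is made
print(v.is_contiguous())   # False
w = v.contiguous()         # copies the data into a row-major layout
print(w.is_contiguous())   # True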
I am trying to model and solve an optimization problem with Python and the Gurobi optimizer. It is my first experience solving a problem with an optimizer. At first I wrote a really big model and added all variables and constraints step by step, but there were problems with it, so I reduced the problem to smaller and smaller versions. Now I have a very simple piece of code:
from gurobipy import *
m = Model('net')
x = m.addVar(name = 'x')
y = m.addVar(name = 'y')
m.addConstr(x >= 0 and x <= 9000, name = 'flow0')
m.addConstr(y >= 0 and y <= 1000, name = 'flow1')
m.addConstr(y + x == 9990, name = 'total_flow')
m.setObjective(x *(4 + 0.6*(x/9000)) + (y * (4 + 0.6*(y/1000))), GRB.MINIMIZE)
solo = m.optimize()
if solo:
    print('find!!!')
It is actually a simple network flow problem (for a graph with two nodes and two edges). I want to calculate the flow on each edge (x and y). Obviously the flow on each edge can't be negative and can't exceed the edge capacity (9000 for x, 1000 for y), and the third constraint expresses the total flow requirement over both edges. Finally, the objective function has to be minimized.
Now I have some questions about this code:
1. Why is 'solo' None?
2. How can I print the solution variables? I used the getAttr() function, but I couldn't work out the role of the variable names (x, y or flow0, flow1).
3. I've got this result (the solver log), but I really can't understand it. For example, what does it calculate in each iteration?
Thanks in advance, and excuse my simple questions.
The optimize() method always returns None; see print(help(m.optimize)). The status of your model after calling this method is stored in m.status, while the solution values are stored in the .X attribute of each variable (assuming the model was solved to optimality). To access them you can use m.getVars():
# your model ...
m.optimize()
if m.status == GRB.OPTIMAL:
    for var in m.getVars():
        print(var.VarName, var.X)
Your posted log shows, for each iteration of the barrier method (also known as the interior point method), the objective value. See here for a detailed overview.
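A further remark on the model itself: writing m.addConstr(x >= 0 and x <= 9000, ...) does not combine the two conditions, because Python evaluates the and itself and only one of the comparisons reaches addConstr. Capacities are more naturally expressed as variable bounds. A minimal sketch of the same model with explicit bounds (it mirrors the data from the question; it is not the author's original code):
from gurobipy import Model, GRB

m = Model('net')
# capacities as variable bounds instead of separate constraints
x = m.addVar(lb=0, ub=9000, name='x')
y = m.addVar(lb=0, ub=1000, name='y')
m.addConstr(x + y == 9990, name='total_flow')
m.setObjective(x * (4 + 0.6 * (x / 9000)) + y * (4 + 0.6 * (y / 1000)), GRB.MINIMIZE)
m.optimize()

if m.status == GRB.OPTIMAL:
    for var in m.getVars():
        print(var.VarName, var.X)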
I want to fit an ARMA(p,q) model to simulated data y and check the effect of different estimation methods on the results. However, fitting the same model object twice, like so,
model = tsa.ARMA(y,(1,1))
results_mle = model.fit(trend='c', method='mle', disp=False)
results_css = model.fit(trend='c', method='css', disp=False)
and printing the results
print results_mle.summary()
print results_css.summary()
generates the following error
File "C:\Anaconda\lib\site-packages\statsmodels\tsa\arima_model.py", line 1572, in summary
smry.add_table_params(self, alpha=alpha, use_t=False)
File "C:\Anaconda\lib\site-packages\statsmodels\iolib\summary.py", line 885, in add_table_params
use_t=use_t)
File "C:\Anaconda\lib\site-packages\statsmodels\iolib\summary.py", line 475, in summary_params
exog_idx]
IndexError: index 3 is out of bounds for axis 0 with size 3
If, instead, I do this
model1 = tsa.ARMA(y,(1,1))
model2 = tsa.ARMA(y,(1,1))
result_mle = model1.fit(trend='c',method='css-mle',disp=False)
print result_mle.summary()
result_css = model2.fit(trend='c',method='css',disp=False)
print result_css.summary()
no error occurs. Is that how it is supposed to be, or is it a bug that should be fixed?
By the way, I generated the ARMA process as follows:
from __future__ import division
import statsmodels.tsa.api as tsa
import numpy as np
# generate arma
a = -0.7
b = -0.7
c = 2
s = 10
y1 = np.random.normal(c/(1-a),s*(1+(a+b)**2/(1-a**2)))
e = np.random.normal(0,s,(100,))
y = [y1]
for t in xrange(e.size-1):
    arma = c + a*y[-1] + e[t+1] + b*e[t]
    y.append(arma)
y = np.array(y)
You could report this as a bug, even though it looks like a consequence of the current design.
Some attributes of the model change when the estimation method is changed, which should in general be avoided. Since both results instances access the same model, the older one is inconsistent with it in this case.
http://www.statsmodels.org/dev/pitfalls.html#repeated-calls-to-fit-with-different-parameters
In general, statsmodels tries to keep all parameters that change the model in model.__init__ rather than as arguments to fit, and to attach the outcome of fit to the Results instance.
However, this is not followed everywhere, especially not in older models that gained new options along the way.
trend is an example of something that is supposed to go into ARMA.__init__, because it is now handled together with exog (which makes it an ARMAX model), but it wasn't in the pure ARMA case. The estimation method belongs in fit and should not cause problems like these.
Aside: There is a helper function to simulate an ARMA process that uses scipy.signal.lfilter and should be much faster than an iteration loop in Python.
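The helper referred to is presumably statsmodels.tsa.arima_process.arma_generate_sample. A minimal sketch of simulating the same ARMA(1,1) process (note the lag-polynomial sign convention, and that the constant has to be added back as the process mean c/(1-a)):
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample

a, b, c, s = -0.7, -0.7, 2, 10
ar = np.array([1, -a])   # AR polynomial 1 - a*L
ma = np.array([1, b])    # MA polynomial 1 + b*L
# arma_generate_sample simulates a zero-mean ARMA; add the mean implied by the constant
y = arma_generate_sample(ar, ma, nsample=100, scale=s) + c / (1 - a)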