I am trying to run a simple example with Pyomo + the GLPK solver (Anaconda2 64-bit, Spyder):
from pyomo.environ import *
model = ConcreteModel()
model.x_1 = Var(within=NonNegativeReals)
model.x_2 = Var(within=NonNegativeReals)
model.obj = Objective(expr=model.x_1 + 2*model.x_2)
model.con1 = Constraint(expr=3*model.x_1 + 4*model.x_2 >= 1)
model.con2 = Constraint(expr=2*model.x_1 + 5*model.x_2 >= 2)
opt = SolverFactory("glpk")
instance = model.create()
#results = opt.solve(instance)
#results.write()
But I get the following error message:
invalid literal for int() with base 10: 'c'
Traceback (most recent call last):
File "<ipython-input-5-e074641da66d>", line 1, in <module>
runfile('D:/..../Exampe.py', wdir='D:.../exercises/pyomo')
File "C:\...\Continuum\Anaconda21\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "C:\....\Continuum\Anaconda21\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "D:/...pyomo/Exampe.py", line 34, in <module>
results = opt.solve(instance)
File "C:\....\Continuum\Anaconda21\lib\site-packages\pyomo\opt\base\solvers.py", line 580, in solve
result = self._postsolve()
File "C:\...Continuum\Anaconda21\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 267, in _postsolve
results = self.process_output(self._rc)
File "C:\...\Continuum\Anaconda21\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 329, in process_output
self.process_soln_file(results)
File "C:\....\Continuum\Anaconda21\lib\site-packages\pyomo\solvers\plugins\solvers\GLPK.py", line 454, in process_soln_file
raise ValueError(msg)
ValueError: Error parsing solution data file, line 1
I downloaded GLPK from http://winglpk.sourceforge.net/, unzipped it, and added the path to the environment variable "path".
Hope someone can help me - thank you!
This is a known problem with GLPK 4.60 (glpsol changed the format of its output, which broke Pyomo 4.3's parser). You can either download an older release of GLPK, or upgrade Pyomo to 4.4.1 (which contains an updated parser).
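For reference, once either fix is in place the model above can be solved as-is; with a ConcreteModel the model.create() step is not needed, since a concrete model is already an instance (a minimal sketch reusing the names from the question):

opt = SolverFactory("glpk")  # SolverFactory is pulled in by the pyomo.environ import above
results = opt.solve(model)   # a ConcreteModel can be passed to the solver directly
results.write()
print(model.x_1.value, model.x_2.value)  # inspect the optimal values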
I'm trying to run a model using tensorflow-probability on an Apple M1 (I've successfully run other tensorflow models on this computer, but I mention the architecture in case it's part of the issue). I have the following model code, where berndfs is a dictionary that contains two pandas DataFrames for each value of s: one with the survey sample ('svy') and one with census data ('census'):
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def tfdmrp_run():
    n_chains = 4
    dtype = tf.float32
    berndfs = load_tfdmrp_datacache()
    shortnames = list(berndfs.keys())
    sample = berndfs[s]['svy']  # 's' is assumed to be set elsewhere (e.g. one of shortnames)
    age_shape = len(berndfs[s]['census'].agecat.unique())
    gender_shape = len(berndfs[s]['census'].sex.unique())
    edu_shape = len(berndfs[s]['census'].educat.unique())
    counts = sample['count'].values.tolist()
    agecatlist = sample.agecat.values.tolist()
    genderlist = sample.gender.values.tolist()
    edulist = sample.educat.values.tolist()
    modlist = [
        tfd.HalfNormal(1),
        lambda sigma_age: tfd.Sample(tfd.Normal(0, sigma_age), sample_shape=age_shape),
        tfd.HalfNormal(1),
        lambda sigma_gender: tfd.Sample(tfd.Normal(0, sigma_gender), sample_shape=gender_shape),
        tfd.HalfNormal(1),
        lambda sigma_edu: tfd.Sample(tfd.Normal(0, sigma_edu), sample_shape=edu_shape),
        tfd.Normal(0, 1),  # intercept
        lambda intercept, coef_edu, a, coef_gender, b, coef_age: tfd.Independent(
            tfd.Binomial(
                total_count=tf.cast(counts, tf.int32),
                logits=intercept
                + tf.squeeze(tf.gather(coef_age, tf.cast(agecatlist, tf.int32), axis=-1))
                + tf.squeeze(tf.gather(coef_gender, tf.cast(genderlist, tf.int32), axis=-1))
                + tf.squeeze(tf.gather(coef_edu, tf.cast(edulist, tf.int32), axis=-1))
            ),
            reinterpreted_batch_ndims=1
        )
    ]
    model = tfd.JointDistributionSequential(modlist)
    model.resolve_graph()
The code returns the following error at the call to resolve_graph():
*** TypeError: Found incompatible dtypes, <class 'numpy.int32'> and <class 'numpy.float32'>. Seen so far: [<class 'numpy.int32'>, <class 'numpy.float32'>, ...]
I get the same error if I try to call sample(). I'm completely stuck, because at this point no inputs to the model are anything other than native Python datatypes. I'm using tensorflow-macos version 2.10.0, tensorflow-probability version 0.18.0, and numpy version 1.23.3. All of the dependency lists I can find suggest that there shouldn't be any compatibility issues. Does anyone know what could be causing this and/or how to fix it?
Full traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jacobtucker/Documents/Projects/repos/rutracker/dmrp/dmrptfp.py", line 124, in tfdmrp_run
model.resolve_graph()
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/joint_distribution_sequential.py", line 460, in resolve_graph
distribution_names = self._flat_resolve_names(
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/joint_distribution_sequential.py", line 473, in _flat_resolve_names
for d in self._get_single_sample_distributions()]
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/joint_distribution.py", line 353, in _get_single_sample_distributions
ds = self._execute_model(
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/joint_distribution.py", line 1045, in _execute_model
d = gen.send(next_value)
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/joint_distribution_sequential.py", line 399, in _model_coroutine
dist = dist_fn(*xs)
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/joint_distribution_sequential.py", line 610, in dist_fn_wrapped
return dist_fn(*reversed(xs[-len(args):]))
File "/Users/jacobtucker/Documents/Projects/repos/rutracker/dmrp/dmrptfp.py", line 112, in <lambda>
tfd.Binomial(
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/distribution.py", line 342, in wrapped_init
default_init(self_, *args, **kwargs)
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/distributions/binomial.py", line 371, in __init__
dtype = dtype_util.common_dtype([total_count, logits, probs], tf.float32)
File "/Users/jacobtucker/miniconda3/envs/tfdmrp/lib/python3.10/site-packages/tensorflow_probability/python/internal/dtype_util.py", line 104, in common_dtype
raise TypeError(
TypeError: Found incompatible dtypes, <class 'numpy.int32'> and <class 'numpy.float32'>. Seen so far: [<class 'numpy.int32'>, <class 'numpy.float32'>, ...]
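No answer is shown here, but the last frames of the traceback point at dtype_util.common_dtype([total_count, logits, probs], tf.float32) inside tfd.Binomial: total_count is cast to int32 while the logits are float32. A minimal sketch of a likely fix (an assumption, not a verified answer; counts and logits below are stand-ins for the model's actual values) is to keep total_count in the same float dtype as the logits:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

counts = [10, 12, 7]                        # stand-in for the per-cell sample counts
logits = tf.zeros(len(counts), tf.float32)  # stand-in for intercept + gathered coefficients

# Casting total_count to float32 means common_dtype() sees a single dtype.
dist = tfd.Binomial(total_count=tf.cast(counts, tf.float32), logits=logits)
print(dist.sample())

Inside the model above, this would amount to replacing total_count=tf.cast(counts, tf.int32) with total_count=tf.cast(counts, dtype).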
I have a deep network using Keras, and I need to apply cropping to the output of one layer and then send it to the next layer. To this end, I wrote the following code as a Lambda layer:
import numpy as np
import tensorflow as tf
from keras import layers

def cropping_fillzero(img, rate=0.6):  # rate = percentage of pixels to zero out
    residual_shape = img.get_shape().as_list()
    h, w = residual_shape[1:3]
    blacked_pixels = int((rate) * (h * w))
    ratio = int(np.floor(np.sqrt(blacked_pixels)))
    # width = int(np.ceil(np.sqrt(blacked_pixels)))
    x = np.random.randint(0, w - ratio)
    y = np.random.randint(0, h - ratio)
    cropingImg = img[y:y+ratio, x:x+ratio].assign(tf.zeros([ratio, ratio]))
    return cropingImg
decoded_noise = layers.Lambda(cropping_fillzero, arguments={'rate':residual_cropRate}, name='residual_cropout_attack')(bncv11)
but it produces the following error, and I do not know why:
Traceback (most recent call last):
File "", line 1, in
runfile('E:/code_v28-7-19/CIFAR_los4x4convolvedw_5_cropAttack_RandomSize_Pad.py',
wdir='E:/code_v28-7-19')
File
"D:\software\Anaconda3\envs\py36\lib\site-packages\spyder_kernels\customize\spydercustomize.py",
line 704, in runfile
execfile(filename, namespace)
File
"D:\software\Anaconda3\envs\py36\lib\site-packages\spyder_kernels\customize\spydercustomize.py",
line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File
"E:/code_v28-7-19/CIFAR_los4x4convolvedw_5_cropAttack_RandomSize_Pad.py",
line 143, in
decoded_noise = layers.Lambda(cropping_fillzero, arguments={'rate':residual_cropRate},
name='residual_cropout_attack')(bncv11)
File
"D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\base_layer.py",
line 457, in call
output = self.call(inputs, **kwargs)
File
"D:\software\Anaconda3\envs\py36\lib\site-packages\keras\layers\core.py",
line 687, in call
return self.function(inputs, **arguments)
File
"E:/code_v28-7-19/CIFAR_los4x4convolvedw_5_cropAttack_RandomSize_Pad.py",
line 48, in cropping_fillzero
cropingImg = img[y:y+ratio, x:x+ratio].assign(tf.zeros([ratio, ratio]))
File
"D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\array_ops.py",
line 700, in assign
raise ValueError("Sliced assignment is only supported for variables")
ValueError: Sliced assignment is only supported for variables
Could you please tell me how I can solve this error? Thank you.
h, w = residual_shape[1:3]
I'm not entirely sure what you're trying to do here, but note that Python interprets this slice as "the elements at indices 1 and 2", i.e. the 2nd and 3rd entries of the shape. For a (batch, height, width, channels) tensor that does give you the height and width, so this line is probably doing what you intend.
cropingImg = img[y:y+ratio, x:x+ratio].assign(tf.zeros([ratio, ratio]))
You are trying to perform a sliced assignment on a tensor. As the error message says, sliced assignment is only supported for variables, and the output of a layer is an ordinary tensor, not a tf.Variable. To get a named handle on the previous layer's output, you can tag it with tf.identity():
self.output = your_layer
self.output = tf.identity(self.output, name='output')
output = graph.get_tensor_by_name('output:0')
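If the goal is simply to zero out a random patch of the layer output, another workaround (my own sketch, not part of the answer above) is to avoid sliced assignment entirely and multiply by a constant mask, which needs no variable:

import numpy as np
import tensorflow as tf

def cropping_fillzero_mask(img, rate=0.6):
    h, w = img.get_shape().as_list()[1:3]
    blacked_pixels = int(rate * h * w)
    ratio = int(np.floor(np.sqrt(blacked_pixels)))
    x = np.random.randint(0, w - ratio)
    y = np.random.randint(0, h - ratio)
    # Mask is 0 inside the cropped square and 1 elsewhere; it broadcasts over batch and channels.
    mask = np.ones((h, w, 1), dtype=np.float32)
    mask[y:y + ratio, x:x + ratio, :] = 0.0
    return img * tf.constant(mask)

As in the original function, the patch location is fixed at graph-construction time; drawing a new patch per batch would require replacing the np.random calls with tf.random ops.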
I am trying to use the XGBoost library with the .train function and DMatrix, but I am a little stuck because of an error:
Traceback (most recent call last):
File "", line 1, in
runfile('E:/CrossValidation.py', wdir='E:/')
File
"C:\Users\users\Anaconda3\envs\Lightgbm\lib\site-packages\spyder\utils\site\sitecustomize.py",
line 705, in runfile
execfile(filename, namespace)
File
"C:\Users\users\Anaconda3\envs\Lightgbm\lib\site-packages\spyder\utils\site\sitecustomize.py",
line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "E:/CrossValidation.py", line 218, in
mainXGB()
File "E:/CrossValidation.py", line 214, in mainXGB
crossval_preds, val_preds = cv.train(X_data=X_train.values, y_data=y_train.values, X_test=X_val.values, params=xgb_params)
File "E:/CrossValidation.py", line 136, in train
early_stopping_rounds=early_stopping_rounds)
File
"C:\Users\users\Anaconda3\envs\Lightgbm\lib\site-packages\xgboost\training.py",
line 204, in train
xgb_model=xgb_model, callbacks=callbacks)
File
"C:\Users\users\Anaconda3\envs\Lightgbm\lib\site-packages\xgboost\training.py",
line 32, in _train_internal
bst = Booster(params, [dtrain] + [d[0] for d in evals])
File
"C:\Users\users\Anaconda3\envs\Lightgbm\lib\site-packages\xgboost\training.py",
line 32, in
bst = Booster(params, [dtrain] + [d[0] for d in evals])
TypeError: 'DMatrix' object does not support indexing
Here is my piece of code:
dtrain = xgb.DMatrix(X_data[train_idx], label=np.log1p(y_data[train_idx])) # datas.slice(train_idx)
dtest = xgb.DMatrix(X_data[val_idx], label=np.log1p(y_data[val_idx]))
print('data created for xgboost')
model = self.model_base.train(params=params, dtrain=dtrain, num_boost_round=number_iteration, evals=[dtest], early_stopping_rounds=early_stopping_rounds)
Does anyone know how to solve the problem?
The problem is with the evals argument. A list of tuples is expected, so change evals=[dtest] to evals=[(dtest, "Test")].
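For a self-contained illustration of the expected format (the toy data and objective below are my own stand-ins; only the shape of the evals argument matters):

import numpy as np
import xgboost as xgb

# Tiny stand-in data just to show the evals format.
X = np.random.rand(20, 3)
y = np.random.rand(20)
dtrain = xgb.DMatrix(X[:15], label=y[:15])
dtest = xgb.DMatrix(X[15:], label=y[15:])

params = {"objective": "reg:squarederror"}
booster = xgb.train(params=params,
                    dtrain=dtrain,
                    num_boost_round=10,
                    evals=[(dtest, "Test")],   # (DMatrix, name) tuples, not bare DMatrix objects
                    early_stopping_rounds=5)

In the code above, that means passing evals=[(dtest, "Test")] to self.model_base.train() (assuming self.model_base is the xgboost module, as the question suggests).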
I want to use gensim to convert a Wikipedia dump to plain text using the python -m gensim.scripts.make_wiki script.
I use it as:
python -m gensim.scripts.make_wiki ./enwiki-latest-pages-articles.xml.bz2 ./results
but it gives me an error at the end:
2016-04-06 20:43:46,471 : INFO : storing corpus in Matrix Market format to ./results/_bow.mm
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/gensim-0.12.3-py2.7-linux-x86_64.egg/gensim/scripts/make_wiki.py", line 88, in <module>
MmCorpus.serialize(outp + '_bow.mm', wiki, progress_cnt=10000) # another ~9h
File "/usr/local/lib/python2.7/dist-packages/gensim-0.12.3-py2.7-linux-x86_64.egg/gensim/corpora/indexedcorpus.py", line 89, in serialize
offsets = serializer.save_corpus(fname, corpus, id2word, progress_cnt=progress_cnt, metadata=metadata)
File "/usr/local/lib/python2.7/dist-packages/gensim-0.12.3-py2.7-linux-x86_64.egg/gensim/corpora/mmcorpus.py", line 49, in save_corpus
return matutils.MmWriter.write_corpus(fname, corpus, num_terms=num_terms, index=True, progress_cnt=progress_cnt, metadata=metadata)
File "/usr/local/lib/python2.7/dist-packages/gensim-0.12.3-py2.7-linux-x86_64.egg/gensim/matutils.py", line 486, in write_corpus
mw = MmWriter(fname)
File "/usr/local/lib/python2.7/dist-packages/gensim-0.12.3-py2.7-linux-x86_64.egg/gensim/matutils.py", line 436, in __init__
self.fout = utils.smart_open(self.fname, 'wb+') # open for both reading and writing
File "build/bdist.linux-x86_64/egg/smart_open/smart_open_lib.py", line 111, in smart_open
NotImplementedError: unknown file mode wb+
Does anybody know what is going on?
I'm not sure about the command-line script, but the following works for me:
from gensim.corpora import WikiCorpus
import logging

logger = logging.getLogger(__name__)

def parse_wiki(wiki_bz_file):
    output = open('./wiki_text_dump.txt', 'w')
    i = 0
    wiki = WikiCorpus(wiki_bz_file, lemmatize=False, dictionary={})  # vocab dict not needed
    for text in wiki.get_texts():
        output.write(' '.join(text) + '\n')  # get_texts() yields one list of tokens per article
        i = i + 1
        if i % 50000 == 0:
            logger.info("Saved " + str(i) + " articles")
    output.close()
    logger.info("Finished - saved " + str(i) + " articles")
    return
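For example, with the dump file from the question:

parse_wiki('./enwiki-latest-pages-articles.xml.bz2')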
I was trying to compute a volume-weighted average price (VWAP) in Python.
Here is my code:
vwap = []
for i in range(window, int(price_times_qty)):
    vol_total = 0
    vol_price = 0
    for j in range(window):
        vol_total = vol_total + lastqty[i-j]
        vol_price = vol_price + lastqty[i-j] * lastprice[i-j]
    vwap.append(vol_price / vol_total)
I believe my idea is right, but when I execute the code, I get this error:
Traceback (most recent call last):
File "<ipython-input-9-9724942aa7be>", line 1, in <module>
runfile('/home/intern2/Desktop/VWAP/vwap-v1.py', wdir='/home/intern2/Desktop/VWAP')
File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 586, in runfile
execfile(filename, namespace)
File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 48, in execfile
exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
File "/home/intern2/Desktop/VWAP/vwap-v1.py", line 25, in <module>
vol_total = vol_total + lastqty[i-j]
IndexError: index out of bounds
I just have no idea why the error is an index out of bounds; when I check my code, I think the indices are well within bounds.
Could anyone please help me with it?
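For what it's worth, the loop bound looks like the likely culprit (an assumption on my part, since price_times_qty is not shown): if int(price_times_qty) is larger than len(lastqty), then i eventually indexes past the end of the list. A minimal sketch that bounds the loop by the length of the data instead:

vwap = []
n = min(len(lastqty), len(lastprice))  # never index past the shorter series
for i in range(window, n):
    vol_total = sum(lastqty[i - j] for j in range(window))
    vol_price = sum(lastqty[i - j] * lastprice[i - j] for j in range(window))
    vwap.append(vol_price / vol_total)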