I'm currently trying to select a layer. In QGIS 2, this was done with
from qgis import processing
lyrConsumer = processing.getObject('contours-iris-2014')
But now the documentation says that I have to use QgsProcessingUtils.mapLayerFromString() in QGIS 3. Apparently I need to pass a second argument now, since I get this error:
Traceback (most recent call last):
File "C:\OSGEO4~1\apps\Python37\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
TypeError: QgsProcessingUtils.mapLayerFromString(): not enough arguments
What is the second argument?
The second parameter is a QgsProcessingContext, which lets the algorithm know the context in which it will run.
You can set it up like this:
from qgis.core import QgsProject, QgsProcessingContext, QgsProcessingUtils

context = QgsProcessingContext()
context.setProject(QgsProject.instance())
layer = QgsProcessingUtils.mapLayerFromString('my_layer', context)
However, since you said you're trying to select a layer: if you're attempting to get a layer from the QGIS layer tree, have a look at Getting layer by name in PyQGIS?.
Visit the QGIS API documentation and you will find the answer.
I've been trying to use the VGG-Face descriptor model (http://www.robots.ox.ac.uk/~vgg/software/vgg_face/) for a project of mine. All I want to do is to simply obtain the outputs of the network from an input image.
I haven't used any of MatConvNet, Caffe or PyTorch before, so I picked PyTorch at random. It turns out that the model (of class torch.legacy.nn.Sequential.Sequential) was saved in an older Torch format, and the syntax is thus slightly different from what's in PyTorch's documentation.
I was able to load the Lua .t7 model like so:
from torch.utils.serialization import load_lua

vgg_net = load_lua('./vgg_face_torch/VGG_FACE.t7', unknown_classes=True)
And loading the input image:
import torch
from scipy.misc import imread  # any imread that returns a numpy array will do

# load image
image = imread('./ak.png')
# convert to a float tensor
input = torch.from_numpy(image).float()
Gleefully, I loaded in the image into the model with much anticipation:
# load into vgg_net
output = vgg_net.forward(input)
However, my hopes of it cooperating were quickly dashed when the code failed at runtime, leaving behind a cryptic error message:
Traceback (most recent call last):
File "~/Documents/python/vgg-face-test/vgg-pytorch.py", line 25, in <module>
output = vgg_net.forward(input)
File "~/.local/lib/python3.6/site-packages/torch/legacy/nn/Module.py", line 33, in forward
return self.updateOutput(input)
File "~/.local/lib/python3.6/site-packages/torch/utils/serialization/read_lua_file.py", line 235, in updateOutput_patch
return obj.updateOutput(*args)
File "~/.local/lib/python3.6/site-packages/torch/legacy/nn/Sequential.py", line 36, in updateOutput
currentOutput = module.updateOutput(currentOutput)
TypeError: 'NoneType' object is not callable
I am absolutely dumbfounded by this.
This is why I sought help on Stack Overflow. I hope someone here can lend me a hand in setting up the model: not necessarily in Torch; in fact, any working model will do, as long as I can simply get the descriptor for any particular image.
Try output = vgg_net(input), without the forward.
This apparently calls a default method defined in the module, but I'm having trouble understanding why this is necessary.
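For intuition, here is a minimal pure-Python sketch (a toy stand-in, not PyTorch itself) of the pattern torch modules follow: calling the instance goes through __call__, which is where the framework can run its bookkeeping (hooks, and for legacy Lua modules the patched update methods) before delegating to the actual computation:

```python
class Module:
    """Toy stand-in for a torch module: __call__ is the public entry
    point and delegates to forward()."""
    def __call__(self, *args):
        # A real framework runs hooks / method patching here before
        # dispatching to the subclass's forward implementation.
        return self.forward(*args)

class Doubler(Module):
    def forward(self, x):
        return 2 * x

net = Doubler()
print(net(21))          # calling the instance routes through __call__
print(net.forward(21))  # calling forward directly skips the hook point
```

This is why vgg_net(input) can work where vgg_net.forward(input) fails: only the call through __call__ goes through the framework's dispatch machinery.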
I have been following the tutorial at https://github.com/kvfrans/twitch/blob/master/main.py to create and train an RNN-based chatbot using TensorFlow. From what I understand, the tutorial was written for an older version of TensorFlow, so some parts are outdated and give me errors like:
Traceback (most recent call last):
File "main.py", line 33, in <module>
outputs, last_state = tf.nn.seq2seq.rnn_decoder(inputs, initialstate, cell, loop_function=None, scope='rnnlm')
AttributeError: 'module' object has no attribute 'seq2seq'
I fixed some of them, but I can't figure out what the alternative to tf.nn.seq2seq.rnn_decoder is, or what the new module's parameters should be. What I have fixed so far:
tf.nn.rnn_cell.BasicLSTMCell(embedsize) changed to
tf.contrib.rnn.BasicLSTMCell(embedsize)
tf.nn.rnn_cell.DropoutWrapper(lstm_cell,keep_prob) changed to tf.contrib.rnn.DropoutWrapper(lstm_cell,keep_prob)
tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * numlayers) changed to
tf.contrib.rnn.MultiRNNCell([lstm_cell] * numlayers)
Can someone please help me figure out what tf.nn.seq2seq.rnn_decoder has become?
I think this is the one you need:
tf.contrib.legacy_seq2seq.rnn_decoder
As far as I can tell, the arguments are the same; only the module path has changed.
I'm trying to execute this code with mpi4py:
from mpi4py import MPI
import numpy
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
inp = numpy.random.rand(size)
senddata = inp[rank]
recvdata=comm.reduce(senddata,None,root=0,op=MPI.MINLOC)
print 'on task',rank,'reduce: ',senddata,recvdata
recvdata=comm.allreduce(senddata,None,op=MPI.MINLOC)
print 'on task',rank,'allreduce: ',senddata,recvdata
With this command:
$ mpirun -np 4 python ./reduce_minlock.py
But instead of the expected result I'm getting this message:
Traceback (most recent call last):
File "./reduce_minlock.py", line 11, in <module>
recvdata=comm.reduce(senddata,None,root=0,op=MPI.MINLOC)
File "MPI/Comm.pyx", line 1298, in mpi4py.MPI.Comm.reduce (src/mpi4py.MPI.c:109386)
TypeError: reduce() got multiple values for keyword argument 'op'
(the same traceback is printed by each of the four ranks, interleaved)
I got this code from this tutorial. What I don't understand is why there is a TypeError for reduce when I'm passing the exact number of parameters. I wonder whether MPI.MINLOC is even supported by mpi4py; I did not find any warning about this operation in the documentation. This is my system configuration:
$ mpirun --version
mpirun (Open MPI) 1.10.3
Report bugs to http://www.open-mpi.org/community/help/
$ python --version
Python 2.7.12
$ cat /etc/fedora-release
Fedora release 24 (Twenty Four)
Any help?
Reading the error messages more carefully and trying to understand them can save a lot of trouble.
TypeError: reduce() got multiple values for keyword argument 'op'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a purely Pythonic run-time error and has nothing to do with MPI per se. It should prompt you to look up the correct signature of MPI.Comm.reduce() first, and only after checking it claim that the number of arguments is correct. And indeed, a look into Comm.pyx reveals that reduce() takes only three arguments (one required and two with defaults) besides the self reference:
def reduce(self, sendobj, op=SUM, int root=0):
You are providing two arguments positionally and two as keyword arguments. The second positional argument (None) and the op keyword both supply a value for op, hence the TypeError. Similarly, one can check that allreduce() takes only two arguments, not three.
The conclusion is that the tutorial is wrong, probably written against an earlier version of mpi4py, and the number of arguments you are passing to reduce() and allreduce() is actually not correct. Drop the None argument from both calls.
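The conflict is easy to reproduce with a plain Python function whose signature has the same shape as Comm.reduce(): one required argument, then op and root with defaults. This is a toy stand-in, not mpi4py itself:

```python
def reduce(sendobj, op="SUM", root=0):
    """Toy stand-in with the same argument layout as mpi4py's Comm.reduce."""
    return (sendobj, op, root)

# The tutorial-style call: the positional None lands in the op slot,
# then op=... tries to fill the same parameter a second time.
try:
    reduce("data", None, root=0, op="MINLOC")
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'op'

# Dropping the stray None resolves the conflict.
print(reduce("data", root=0, op="MINLOC"))
```

The same reasoning applies to allreduce(senddata, None, op=MPI.MINLOC): the None fills the op slot positionally before the keyword does.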
Recently I managed to compile the newest OpenCV 3.1 with CUDA support.
After some tinkering I properly converted most of my Python code from 2.4.x to 3.1.x without any problems.
But when it came time to try out the stereoCalibrate capability, this error occurred:
Exception in thread Thread-5:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "./stereo_compute.py", line 245, in calibrate
flags)
TypeError: an integer is required
Here is how I call the function itself:
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
30, 1e-56)
flags = (cv2.CALIB_FIX_ASPECT_RATIO +
cv2.CALIB_ZERO_TANGENT_DIST +
cv2.CALIB_SAME_FOCAL_LENGTH)
(value,
self.np_calib_data['lmtx'], self.np_calib_data['ldist'],
self.np_calib_data['rmtx'], self.np_calib_data['rdist'],
self.np_calib_data['R'], self.np_calib_data['T'],
self.np_calib_data['E'], self.np_calib_data['F']
) = cv2.stereoCalibrate(
object_points,
l_image_points,
r_image_points,
(image_size[1], image_size[0],),
self.np_calib_data['lmtx'],
self.np_calib_data['ldist'],
self.np_calib_data['rmtx'],
self.np_calib_data['rdist'],
self.np_calib_data['R'],
self.np_calib_data['T'],
self.np_calib_data['E'],
self.np_calib_data['F'],
flags,
criteria)
Everything runs in a thread; that's why it appears in the exception.
I can't figure out the correct set of parameters.
Moreover, the same call with the same data worked for me under the 2.4.x version.
Please help!
I have noticed that with the Python bindings for OpenCV, if a function has a parameter with a default value of, say, None, you often can't explicitly pass that parameter with its default value. This is quite against normal Python conventions and expected behaviour.
For example, the function cv2.goodFeaturesToTrack has a parameter blockSize with default value None, so you would expect that calling
cv2.goodFeaturesToTrack(image=img, maxCorners=10, qualityLevel=0.1, minDistance=10, mask=None, blockSize=None)
would be the same as
cv2.goodFeaturesToTrack(image=img, maxCorners=10, qualityLevel=0.1, minDistance=10, mask=None)
but in fact, the first way of calling this function results in
TypeError: an integer is required
So, with OpenCV you have to either not provide an argument at all, or provide a correct value (and the default shown in the Python function/method signature may not be a correct value).
You will have to check the C++ sources to find the actual default values.
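A rough sketch of why this happens (a toy stand-in, not the actual cv2 wrapper): the binding converts any explicitly supplied value to a C type, and the real default lives on the C++ side, so the None shown in the Python signature is never a usable value:

```python
_OMITTED = object()  # sentinel: distinguishes "not passed" from "passed None"

def good_features_to_track(block_size=_OMITTED):
    """Toy stand-in for a C-extension wrapper (hypothetical, not cv2)."""
    if block_size is _OMITTED:
        block_size = 3        # the real default, defined in the C++ sources
    return int(block_size)    # int(None) raises TypeError, much like
                              # cv2's "an integer is required"

print(good_features_to_track())           # omitted -> uses the real default
# good_features_to_track(block_size=None) # raises TypeError
```

In other words, an omitted argument and an explicit None take different paths through the wrapper, which is why only the former works.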
I read the how-to documentation to install Trigger, but when I test it in a Python environment, I get the error below:
>>> from trigger.netdevices import NetDevices
>>> nd = NetDevices()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 913, in __init__
with_acls=with_acls)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 767, in __init__
production_only=production_only, with_acls=with_acls)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 83, in _populate
device_data = _munge_source_data(data_source=data_source)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 73, in _munge_source_data
return loader.load_metadata(path, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/loader.py", line 163, in load_metadata
raise RuntimeError('No data loaders succeeded. Tried: %r' % tried)
RuntimeError: No data loaders succeeded. Tried: [<trigger.netdevices.loaders.filesystem.XMLLoader object at 0x7f550a1ed350>, <trigger.netdevices.loaders.filesystem.JSONLoader object at 0x7f550a1ed210>, <trigger.netdevices.loaders.filesystem.SQLiteLoader object at 0x7f550a1ed250>, <trigger.netdevices.loaders.filesystem.CSVLoader object at 0x7f550a1ed290>, <trigger.netdevices.loaders.filesystem.RancidLoader object at 0x7f550a1ed550>]
Does anyone have an idea how to fix this?
The NetDevices constructor is apparently trying to find a "metadata source" that isn't there.
First, you need to define the metadata source. Second, your code should handle the exception raised when none is found.
I'm the lead developer of Trigger. Check out the doc Working with NetDevices; it is probably what you were missing. We've done some work recently to improve the quality of the setup/install docs, and I hope that it is clearer now!
If you want to get started super quickly, you can feed Trigger a CSV-formatted NetDevices file, like so:
test1-abc.net.example.com,juniper
test2-abc.net.example.com,cisco
Just put that in a file, e.g. /tmp/netdevices.csv, and then set the NETDEVICES_SOURCE environment variable:
export NETDEVICES_SOURCE=/tmp/netdevices.csv
Then fire up Python, continue with the examples, and you should be good to go!
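The same setup can be scripted. This sketch only writes the CSV and sets the environment variable; it does not import Trigger, and the path is illustrative:

```python
import os
import tempfile

# Two-column CSV (hostname, vendor) in the shape Trigger's CSVLoader expects
rows = [
    "test1-abc.net.example.com,juniper",
    "test2-abc.net.example.com,cisco",
]
path = os.path.join(tempfile.gettempdir(), "netdevices.csv")
with open(path, "w") as f:
    f.write("\n".join(rows) + "\n")

# Must be set before NetDevices() is constructed, since the loaders
# read NETDEVICES_SOURCE at load time
os.environ["NETDEVICES_SOURCE"] = path
print(os.environ["NETDEVICES_SOURCE"])
```

Setting the variable inside the process is equivalent to the export shown above, as long as it happens before Trigger goes looking for its metadata source.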
I found that the default of /etc/trigger/netdevices.xml wasn't listed in the setup instructions. They did indicate copying from the Trigger source folder:
cp conf/netdevices.json /etc/trigger/netdevices.json
But I didn't see how to point at this file instead of the default NETDEVICES_SOURCE on the installation page. As soon as NETDEVICES_SOURCE pointed to a file in my /etc/trigger folder, it worked.
To get the verification examples working right away with minimal fuss, I recommend:
cp conf/netdevices.xml /etc/trigger/netdevices.xml
Using Ubuntu 14.04 with Python 2.7.3