Particle tracking with trackpy gives 'keyframe' error? - python

I have TIFF image stacks of 1000 frames in which I'm trying to track particles diffusing in Brownian motion. So I'm using trackpy for particle tracking: import the TIFF stack, locate the features (particles), and then plot their trajectories. I'm using the following code (as on the trackpy walkthrough page):
import pims
import trackpy as tp
frames = pims.open("image.tif")
f = tp.locate(frames[0], 9, invert=False)
But this last line (tp.locate) gives a traceback:
AttributeError Traceback (most recent call last)
<ipython-input-78-0fbce96715a7> in <module>
----> 1 f = tp.locate(frames[0], 9, invert=False)
~\anaconda3\lib\site-packages\slicerator\__init__.py in __getitem__(self, i)
186 indices, new_length = key_to_indices(i, len(self))
187 if new_length is None:
--> 188 return self._get(indices)
189 else:
190 return cls(self, indices, new_length, propagate_attrs)
~\anaconda3\lib\site-packages\pims\base_frames.py in __getitem__(self, key)
96 """__getitem__ is handled by Slicerator. In all pims readers, the data
97 returning function is get_frame."""
---> 98 return self.get_frame(key)
99
100 def __iter__(self):
~\anaconda3\lib\site-packages\pims\tiff_stack.py in get_frame(self, j)
119 t = self._tiff[j]
120 data = t.asarray()
--> 121 return Frame(data, frame_no=j, metadata=self._read_metadata(t))
122
123 def _read_metadata(self, tiff):
~\anaconda3\lib\site-packages\pims\tiff_stack.py in _read_metadata(self, tiff)
124 """Read metadata for current frame and return as dict"""
125 # tags are only stored as a TiffTags object on the parent TiffPage now
--> 126 tags = tiff.keyframe.tags
127 md = {}
128 for name in ('ImageDescription', 'image_description'):
~\anaconda3\lib\site-packages\skimage\external\tifffile\tifffile.py in __getattr__(self, name)
2752 setattr(self, name, value)
2753 return value
-> 2754 raise AttributeError(name)
2755
2756 def __str__(self):
AttributeError: keyframe
Where am I going wrong?
I also tried using
imageio.imread("image.tif")
to import the image and then used
f = tp.locate(frames[0], 9, invert=False)
to locate the particles. But the output of this is supposed to be data for both x and y coordinates, like so
whereas what I get is just for the x axis:
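For reference, tp.locate returns a pandas DataFrame, so a quick way to inspect what actually came back (continuing from the code above; the column names follow the trackpy walkthrough and can vary by version):
print(type(f))
print(f.columns.tolist())  # should include both 'x' and 'y' (plus mass, size, ...)
print(f.head())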

I just had this same problem and solved it by entering "conda install pims" into my Anaconda prompt. I had done "pip install pims" before, and I think that messed it up.
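Building on that: the traceback shows pims falling back to the old tifffile copy bundled inside scikit-image (skimage/external/tifffile), so installing an up-to-date standalone tifffile, or forcing a specific pims reader, may also help. A minimal sketch, assuming your pims version exposes the TiffStack_tifffile reader:
import pims
import trackpy as tp
# Bypass pims.open()'s reader auto-detection and use the tifffile-backed
# reader directly; requires the standalone 'tifffile' package.
frames = pims.TiffStack_tifffile("image.tif")
print(frames[0].shape)  # confirm a frame loads before tracking
f = tp.locate(frames[0], 9, invert=False)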

Related

Brightway2: LCA scores & calculations

My problem is getting the emissions results of my functional unit from an ecoinvent Excel spreadsheet import.
I managed to get activity/process impacts via ca.annotated_top_processes(lca) or lca.top_activities(), but emissions/biosphere flows can't be displayed except through ca.hinton_matrix(lca, rows=10, cols=10). How can I get specific scores?
Here's the situation:
import brightway2 as bw
from stats_arrays import *
import bw2analyzer as bwa
bw.projects.set_current("excel_import_verif1")
bw.databases
db = bw.Database('IoTBOLLCA')  # the Excel spreadsheet import
CC = [method for method in bw.methods if "('ReCiPe Midpoint (H) V1.13', 'climate change', 'GWP100')" in str(method)][0]
FU = [i for i in db if 'FU' in i['name']][0]
lca = bw.LCA({FU:1},CC)
lca.lci()
lca.lcia()
lca.score
ca = bwa.ContributionAnalysis()
lca.top_emissions()
and I get this error
TypeError Traceback (most recent call last)
File ~\Anaconda3\envs\bw2\lib\site-packages\scipy\sparse\_sputils.py:208, in isintlike(x)
207 try:
--> 208 operator.index(x)
209 except (TypeError, ValueError):
TypeError: 'numpy.float64' object cannot be interpreted as an integer
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Input In [28], in <cell line: 1>()
----> 1 lca.top_emissions()
File ~\Anaconda3\envs\bw2\lib\site-packages\bw2calc\lca.py:575, in LCA.top_emissions(self, **kwargs)
573 except ImportError:
574 raise ImportError("`bw2analyzer` is not installed")
--> 575 return ContributionAnalysis().annotated_top_emissions(self, **kwargs)
File ~\Anaconda3\envs\bw2\lib\site-packages\bw2analyzer\contribution.py:152, in ContributionAnalysis.annotated_top_emissions(self, lca, names, **kwargs)
146 """Get list of most damaging biosphere flows in an LCA, sorted by ``abs(direct impact)``.
147
148 Returns a list of tuples: ``(lca score, inventory amount, activity)``. If ``names`` is False, they returns the process key as the last element.
149
150 """
151 ra, rp, rb = lca.reverse_dict()
--> 152 results = [
153 (score, lca.inventory[index, :].sum(), rb[index])
154 for score, index in self.top_emissions(
155 lca.characterized_inventory, **kwargs
156 )
157 ]
158 if names:
159 results = [(x[0], x[1], get_activity(x[2])) for x in results]
File ~\Anaconda3\envs\bw2\lib\site-packages\bw2analyzer\contribution.py:153, in <listcomp>(.0)
146 """Get list of most damaging biosphere flows in an LCA, sorted by ``abs(direct impact)``.
147
148 Returns a list of tuples: ``(lca score, inventory amount, activity)``. If ``names`` is False, they returns the process key as the last element.
149
150 """
151 ra, rp, rb = lca.reverse_dict()
152 results = [
--> 153 (score, lca.inventory[index, :].sum(), rb[index])
154 for score, index in self.top_emissions(
155 lca.characterized_inventory, **kwargs
156 )
157 ]
158 if names:
159 results = [(x[0], x[1], get_activity(x[2])) for x in results]
File ~\Anaconda3\envs\bw2\lib\site-packages\scipy\sparse\_index.py:47, in IndexMixin.__getitem__(self, key)
46 def __getitem__(self, key):
---> 47 row, col = self._validate_indices(key)
49 # Dispatch to specialized methods.
50 if isinstance(row, INT_TYPES):
File ~\Anaconda3\envs\bw2\lib\site-packages\scipy\sparse\_index.py:152, in IndexMixin._validate_indices(self, key)
149 M, N = self.shape
150 row, col = _unpack_index(key)
--> 152 if isintlike(row):
153 row = int(row)
154 if row < -M or row >= M:
File ~\Anaconda3\envs\bw2\lib\site-packages\scipy\sparse\_sputils.py:216, in isintlike(x)
214 if loose_int:
215 msg = "Inexact indices into sparse matrices are not allowed"
--> 216 raise ValueError(msg)
217 return loose_int
218 return True
ValueError: Inexact indices into sparse matrices are not allowed
This is an error as of SciPy version 1.9; for now, you can force a downgrade to SciPy 1.8.x.
This has been noted as an issue, but the focus for BW development is currently in other areas.
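If downgrading is not an option, the traceback itself suggests a workaround: the failing list comprehension in bw2analyzer indexes the sparse inventory matrix with a numpy.float64, which SciPy >= 1.9 rejects. A sketch that mirrors annotated_top_emissions with the index cast to int (untested; uses the ca and lca objects from the question):
ra, rp, rb = lca.reverse_dict()
# Same logic as ContributionAnalysis.annotated_top_emissions, but with int()
# around the row index before slicing the sparse matrix.
results = [
    (score, lca.inventory[int(index), :].sum(), rb[int(index)])
    for score, index in ca.top_emissions(lca.characterized_inventory)
]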

RuntimeError: Numpy is not available, torch

I'm trying to run the git repository https://github.com/DinoMan/speech-driven-animation
in Jupyter.
I followed the steps indicated, but I encountered an error when I tried to run the example.
Here is my code and the error:
import sda
va = sda.VideoAnimator(gpu=-1, model_path="crema")  # Instantiate the animator
vid, aud = va("example/image.bmp", "example/audio.wav")
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [7], in <cell line: 1>()
----> 1 vid, aud = va("example/image.bmp", "example/audio.wav")
File ~\IVERSE\sda\sda.py:229, in VideoAnimator.__call__(self, img, audio, fs, aligned)
226 frame = img
228 if not aligned:
--> 229 frame = self.preprocess_img(frame)
231 if isinstance(audio, str): # if we have a path then grab the audio clip
232 info = mediainfo(audio)
File ~\IVERSE\sda\sda.py:191, in VideoAnimator.preprocess_img(self, img)
190 def preprocess_img(self, img):
--> 191 src = self.fa.get_landmarks(img)[0][self.stablePntsIDs, :]
192 dst = self.mean_face[self.stablePntsIDs, :]
193 tform = tf.estimate_transform('similarity', src, dst) # find the transformation matrix
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\face_alignment\api.py:110, in FaceAlignment.get_landmarks(self, image_or_path, detected_faces, return_bboxes, return_landmark_score)
98 def get_landmarks(self, image_or_path, detected_faces=None, return_bboxes=False, return_landmark_score=False):
99 """Deprecated, please use get_landmarks_from_image
100
101 Arguments:
(...)
108 return_landmark_score {boolean} -- If True, return the keypoint scores along with the keypoints.
109 """
--> 110 return self.get_landmarks_from_image(image_or_path, detected_faces, return_bboxes, return_landmark_score)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 #functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\face_alignment\api.py:141, in FaceAlignment.get_landmarks_from_image(self, image_or_path, detected_faces, return_bboxes, return_landmark_score)
138 image = get_image(image_or_path)
140 if detected_faces is None:
--> 141 detected_faces = self.face_detector.detect_from_image(image.copy())
143 if len(detected_faces) == 0:
144 warnings.warn("No faces were detected.")
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\face_alignment\detection\sfd\sfd_detector.py:45, in SFDDetector.detect_from_image(self, tensor_or_path)
42 def detect_from_image(self, tensor_or_path):
43 image = self.tensor_or_path_to_ndarray(tensor_or_path)
---> 45 bboxlist = detect(self.face_detector, image, device=self.device)[0]
46 bboxlist = self._filter_bboxes(bboxlist)
48 return bboxlist
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\face_alignment\detection\sfd\detect.py:15, in detect(net, img, device)
12 # Creates a batch of 1
13 img = np.expand_dims(img, 0)
---> 15 img = torch.from_numpy(img.copy()).to(device, dtype=torch.float32)
17 return batch_detect(net, img, device)
RuntimeError: Numpy is not available
I uploaded the models via Google Drive. I tried to uninstall numpy and reinstall a lower version (1.19), but that didn't work either; the pip install failed with:
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
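For reference, the call that fails in the traceback can be reproduced on its own, which helps separate a torch/numpy incompatibility from the repository's code. A minimal sketch:
import numpy as np
import torch
print(np.__version__, torch.__version__)
# If the installed numpy and torch are incompatible (or numpy is broken),
# this single call raises "RuntimeError: Numpy is not available" by itself.
x = torch.from_numpy(np.zeros((1, 3), dtype=np.float32))
print(x.shape)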
Could you help me understand what I can do to resolve the problem, please?
Thanks

Getting "AttributeError: 'NoneType' object has no attribute 'groupdict'" when executing an Excel file in Python using the Formulas library

My goal is to save data into an Excel file, have it run its calculations, and then read the results using Python, without having to open the file. For this I have tried the formulas library. It worked perfectly fine on a test Excel sheet I tried that had a few calculations. However, when I try it on a larger Excel sheet with a lot more, and more complex, calculations I get this error message:
AttributeError: 'NoneType' object has no attribute 'groupdict'
I will also post the entire error message here:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_113/60233469.py in <module>
1 fpath = 'MTOOutput.xlsx'
2 dirname = 'MTOOutput'
----> 3 xl_model = formulas.ExcelModel().loads(fpath).finish(circular=True)
4 xl_model.calculate()
5 xl_model.write(dirpath=dirname)
~/.local/lib/python3.8/site-packages/formulas/excel/__init__.py in loads(self, *file_names)
93 def loads(self, *file_names):
94 for filename in file_names:
---> 95 self.load(filename)
96 return self
97
~/.local/lib/python3.8/site-packages/formulas/excel/__init__.py in load(self, filename)
98 def load(self, filename):
99 book, context = self.add_book(filename)
--> 100 self.pushes(*book.worksheets, context=context)
101 return self
102
~/.local/lib/python3.8/site-packages/formulas/excel/__init__.py in pushes(self, context, *worksheets)
106 def pushes(self, *worksheets, context=None):
107 for ws in worksheets:
--> 108 self.push(ws, context=context)
109 return self
110
~/.local/lib/python3.8/site-packages/formulas/excel/__init__.py in push(self, worksheet, context)
119 for c in row:
120 if hasattr(c, 'value'):
--> 121 self.add_cell(
122 c, context, references=references,
123 formula_references=formula_references,
~/.local/lib/python3.8/site-packages/formulas/excel/__init__.py in add_cell(self, cell, context, references, formula_references, formula_ranges, external_links)
232 val = cell.data_type == 'f' and val[:2] == '==' and val[1:] or val
233 check_formula = cell.data_type != 's'
--> 234 cell = Cell(crd, val, context=ctx, check_formula=check_formula).compile(
235 references=references, context=ctx
236 )
~/.local/lib/python3.8/site-packages/formulas/cell.py in compile(self, references, context)
88 def compile(self, references=None, context=None):
89 if self.builder:
---> 90 func = self.builder.compile(
91 references=references, context=context, **{CELL: self.range}
92 )
~/.local/lib/python3.8/site-packages/formulas/builder.py in compile(self, references, context, **inputs)
127 else:
128 try:
--> 129 i[k] = Ranges().push(k, context=context)
130 except ValueError:
131 i[k] = None
~/.local/lib/python3.8/site-packages/formulas/ranges.py in push(self, ref, value, context)
167
168 def push(self, ref, value=sh.EMPTY, context=None):
--> 169 rng = self.get_range(self.format_range, ref, context)
170 return self.set_value(rng, value)
171
~/.local/lib/python3.8/site-packages/formulas/ranges.py in get_range(format_range, ref, context)
159 def get_range(format_range, ref, context=None):
160 ctx = (context or {}).copy()
--> 161 for k, v in _re_range.match(ref).groupdict().items():
162 if v is not None:
163 if k == 'ref':
AttributeError: 'NoneType' object has no attribute 'groupdict'
I initially had not used the circular=True argument inside finish(). But after a bit of troubleshooting I realized that could be the problem. I say "could" because I did not create the Excel sheet and hence do not know much about what is in there.
I also tried reinstalling the formulas[all] variant after a bit more troubleshooting, but got the same error. I tried including range references in my test file to check whether they were causing the issue; however, the test file worked fine with range references. I was wondering if anyone who has faced an issue like this with formulas has found a solution or an alternative.
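One way to narrow this down without sharing the file: the failing call is formulas' range regex (_re_range.match(ref)) returning None for some cell reference, so scanning the workbook's formulas for unusual references might identify the culprit. A diagnostic sketch, assuming openpyxl (which formulas itself uses to read the workbook); the '[' and '!' tests are crude heuristics for external-workbook and cross-sheet references:
import openpyxl
wb = openpyxl.load_workbook('MTOOutput.xlsx')
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            if cell.data_type == 'f':  # formula cells only
                formula = str(cell.value)
                if '[' in formula or '!' in formula:
                    print(ws.title, cell.coordinate, formula)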
PS: It would not be possible for me to attach the excel file as it is work related.
Thanks and Regards,
Yadhu

pysd library ParseError

I'm using a library called pysd to translate Vensim files to Python, but when I call the library's functions I get a parse error and don't understand what it means.
This is my log:
ParseError Traceback (most recent call last)
<ipython-input-1-9b0f6b9bac1f> in <module>()
1 get_ipython().magic(u'pylab inline')
2 import pysd
----> 3 model = pysd.read_vensim('201520_1A_Volare_Ev.Tecnica.itmx')
/Library/Python/2.7/site-packages/pysd/pysd.pyc in read_vensim(mdl_file)
45 """
46 from .vensim2py import translate_vensim
---> 47 py_model_file = translate_vensim(mdl_file)
48 model = PySD(py_model_file)
49 model.mdl_file = mdl_file
/Library/Python/2.7/site-packages/pysd/vensim2py.pyc in translate_vensim(mdl_file)
651 for section in file_sections:
652 if section['name'] == 'main':
--> 653 model_elements += get_model_elements(section['string'])
654
655 # extract equation components
/Library/Python/2.7/site-packages/pysd/vensim2py.pyc in get_model_elements(model_str)
158 """
159 parser = parsimonious.Grammar(model_structure_grammar)
--> 160 tree = parser.parse(model_str)
161
162 class ModelParser(parsimonious.NodeVisitor):
/Library/Python/2.7/site-packages/parsimonious/grammar.pyc in parse(self, text, pos)
121 """
122 self._check_default_rule()
--> 123 return self.default_rule.parse(text, pos=pos)
124
125 def match(self, text, pos=0):
/Library/Python/2.7/site-packages/parsimonious/expressions.pyc in parse(self, text, pos)
108
109 """
--> 110 node = self.match(text, pos=pos)
111 if node.end < len(text):
112 raise IncompleteParseError(text, node.end, self)
/Library/Python/2.7/site-packages/parsimonious/expressions.pyc in match(self, text, pos)
125 node = self.match_core(text, pos, {}, error)
126 if node is None:
--> 127 raise error
128 return node
129
ParseError: Rule 'escape_group' didn't match at '' (line 1, column 20243).
.itmx is an iThink extension, which unfortunately PySD doesn't support (yet). In the future, we'll work out a conversion pathway that lets you bring these in.
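If you can export or rebuild the model as a plain-text Vensim .mdl file, the usual pysd flow looks like this (a sketch; 'model.mdl' is a placeholder filename):
import pysd
# read_vensim translates the .mdl text file into a Python module and loads it.
model = pysd.read_vensim('model.mdl')
results = model.run()  # returns a pandas DataFrame of the model's time series
print(results.head())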

H2O python rbind error

I have a 2000-row data frame, and I'm trying to slice it into two pieces and combine them back together.
t1 = test[:10, :]
t2 = test[20:, :]
temp = t1.rbind(t2)
temp.show()
Then I got this error:
---------------------------------------------------------------------------
EnvironmentError Traceback (most recent call last)
<ipython-input-37-8daeb3375743> in <module>()
2 t2 = test[20:, :]
3 temp = t1.rbind(t2)
----> 4 temp.show()
5 print len(temp)
6 print len(test)
/usr/local/lib/python2.7/dist-packages/h2o/frame.pyc in show(self, use_pandas)
383 print("This H2OFrame has been removed.")
384 return
--> 385 if not self._ex._cache.is_valid(): self._frame()._ex._cache.fill()
386 if H2ODisplay._in_ipy():
387 import IPython.display
/usr/local/lib/python2.7/dist-packages/h2o/frame.pyc in _frame(self, fill_cache)
423
424 def _frame(self, fill_cache=False):
--> 425 self._ex._eager_frame()
426 if fill_cache:
427 self._ex._cache.fill()
/usr/local/lib/python2.7/dist-packages/h2o/expr.pyc in _eager_frame(self)
67 if not self._cache.is_empty(): return self
68 if self._cache._id is not None: return self # Data already computed under ID, but not cached locally
---> 69 return self._eval_driver(True)
70
71 def _eager_scalar(self): # returns a scalar (or a list of scalars)
/usr/local/lib/python2.7/dist-packages/h2o/expr.pyc in _eval_driver(self, top)
81 def _eval_driver(self, top):
82 exec_str = self._do_it(top)
---> 83 res = ExprNode.rapids(exec_str)
84 if 'scalar' in res:
85 if isinstance(res['scalar'], list): self._cache._data = [float(x) for x in res['scalar']]
/usr/local/lib/python2.7/dist-packages/h2o/expr.pyc in rapids(expr)
163 The JSON response (as a python dictionary) of the Rapids execution
164 """
--> 165 return H2OConnection.post_json("Rapids", ast=expr,session_id=H2OConnection.session_id(), _rest_version=99)
166
167 class ASTId:
/usr/local/lib/python2.7/dist-packages/h2o/connection.pyc in post_json(url_suffix, file_upload_info, **kwargs)
515 if __H2OCONN__ is None:
516 raise ValueError("No h2o connection. Did you run `h2o.init()` ?")
--> 517 return __H2OCONN__._rest_json(url_suffix, "POST", file_upload_info, **kwargs)
518
519 def _rest_json(self, url_suffix, method, file_upload_info, **kwargs):
/usr/local/lib/python2.7/dist-packages/h2o/connection.pyc in _rest_json(self, url_suffix, method, file_upload_info, **kwargs)
518
519 def _rest_json(self, url_suffix, method, file_upload_info, **kwargs):
--> 520 raw_txt = self._do_raw_rest(url_suffix, method, file_upload_info, **kwargs)
521 return self._process_tables(raw_txt.json())
522
/usr/local/lib/python2.7/dist-packages/h2o/connection.pyc in _do_raw_rest(self, url_suffix, method, file_upload_info, **kwargs)
592 raise EnvironmentError(("h2o-py got an unexpected HTTP status code:\n {} {} (method = {}; url = {}). \n"+ \
593 "detailed error messages: {}")
--> 594 .format(http_result.status_code,http_result.reason,method,url,detailed_error_msgs))
595
596
EnvironmentError: h2o-py got an unexpected HTTP status code:
500 Server Error (method = POST; url = http://localhost:54321/99/Rapids).
detailed error messages: []
If I count the rows (len(temp)), it works fine. Also, if I change the slicing index a little bit, it works fine too. For example, if I change it to this, it shows the data frame:
t1 = test[:10, :]
t2 = test[:5, :]
Am I missing something here? Thanks.
It's unclear what happened without more information (the logs would probably say why the rbind didn't take).
What version are you using? I tried your code with iris on the bleeding edge and it all worked as expected.
By the way, rbind is typically going to be expensive, especially since what you're semantically after is a subset:
test[range(10) + range(20,test.nrow),:]
should also give you the desired subset (with the caveat that you build the full list of row indices in Python and pass it over REST to h2o).
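(That snippet is Python 2; in Python 3, range objects no longer concatenate with +, so the equivalent would be:)
rows = list(range(10)) + list(range(20, test.nrow))
temp = test[rows, :]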
