I'm working with image processing, which means I'm doing operations on large matrices. I'm trying to debug, which means I need to explore the elements, but it's a real pain doing it with print statements. Is there some kind of Python plugin that will let me view arrays in a GUI for the purpose of debugging?
Yes, just use the Python debugger and set a breakpoint.
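For instance, a minimal sketch (the array and its shape here are just examples):
import pdb
import numpy as np

my_array = np.arange(1000).reshape(100, 10)
pdb.set_trace()  # execution pauses here; inspect the array at the (Pdb) prompt
# (Pdb) p my_array[5]
# (Pdb) p my_array.shape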
Or use something like q:
$ easy_install q
import numpy
import q

my_array = numpy.arange(1000)
q.d()  # opens an interactive console where you have access to my_array
You will see something like this:
Python console opened by q.d() in <some_module>
>>> print my_array[5]
You can also use PIL to generate an image from the array (not sure it will work right without tweaking):
>>> import Image
>>> img = Image.fromarray(my_array, 'RGB')
>>> img.save('test.png')
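The tweak it likely needs (my assumption): fromarray wants a 2-D uint8 array for grayscale mode 'L' (or an HxWx3 array for 'RGB'), so reshape and cast the flat array first:
>>> import numpy
>>> arr = numpy.arange(1000, dtype=numpy.uint8).reshape(40, 25)
>>> img = Image.fromarray(arr, 'L')  # 'L' = 8-bit grayscale
>>> img.save('test.png')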
However, if you would like to display a numpy array as an image, you can use the OpenCV Image Viewer plugin, which I've just released:
https://plugins.jetbrains.com/plugin/14371-opencv-image-viewer
After building and installing the Python engine shipped with Matlab 2019b in Anaconda for Python 3.7,
(TestEnvironment) PS C:\Program Files\MATLAB\R2019b\extern\engines\python> C:\Users\USER\Anaconda3\envs\TestEnvironment\python.exe .\setup.py build -b C:\Users\USER\MATLAB\build_temp install
I wrote a simple script to test a couple of features I'm interested in:
import matlab.engine as ml_e

# Start Matlab engine
eng = ml_e.start_matlab()

# Load MAT file into engine. The result is a dictionary
mat_file = "samples/lena.mat"
lenaMat = eng.load("samples/lena.mat")

print("Variables found in \"" + mat_file + "\"")
for key in lenaMat.keys():
    print(key)
# print(lenaMat["lena512"])

# Use the variable from the MAT file to display it as an image
eng.imshow(lenaMat["lena512"], [])
I have a problem with imshow() (or any similar function that displays a figure in the Matlab GUI): the figure shows briefly and then disappears, which, I guess, at least confirms that it is possible to use it. The only way I've found to keep it on the screen is to add an infinite loop at the end:
while True:
    continue
For obvious reasons this is not a good solution. I am not looking for a conversion of Matlab data to NumPy or similar and displaying it with matplotlib or another third-party library (I am aware that SciPy can load MAT files, for example). The reason is simple: I would like to use Matlab (including loading whole environments), and for debugging purposes I'd like to be able to show this or that result without having to jump through the hoops of converting the data manually.
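A less CPU-hungry placeholder for that busy loop (my assumption, not a full solution) is to block on input(), which keeps the Python process, and with it the engine and its figure, alive until you press Enter:
import matlab.engine as ml_e

eng = ml_e.start_matlab()
lenaMat = eng.load("samples/lena.mat")
eng.imshow(lenaMat["lena512"], [])
input("Press Enter to close the figure...")  # blocks without spinning the CPU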
I want to produce plots like this, except with many more particles. Matplotlib is woefully inadequate.
Right now I am using Mayavi in Python 3.5, running through a Jupyter notebook. As I need to plot 5x10^5 spheres, it will not be practical, since time is already a limiting factor at 2x10^4 spheres.
Here is my Python code to produce the Mayavi plot. I have a numpy array of values [a, r, x, y, z]; what the first quantity is doesn't matter for this problem.
"""VISUALIZATION WITH MAYAVI"""
#I think this is too slow to be practical.
#view particles with mayavi
import mayavi
from mayavi import mlab
%gui qt
def plot_sphere(p): #feed it p and get back a sphere \n",
t1,R,a,b,c = p
[phi,theta] = np.mgrid[0:2*np.pi:12j,0:np.pi:12j] #increase the numbers before j for better resolution but more time
x = R*np.cos(phi)*np.sin(theta) + a
y = R*np.sin(phi)*np.sin(theta) + b
z = R*np.cos(theta) + c
return mlab.mesh(x, y, z)
#run s over all particles and plot it
def view(particles):
for p in particles:
plot_sphere(p)
view(spheres)
This code produces plots like this:
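As an aside, one way to speed Mayavi up dramatically (my suggestion, not from the original post) is to draw all particles from a single glyph source with points3d instead of one mesh() call per sphere:
import numpy as np
from mayavi import mlab

# spheres is assumed to be the (N, 5) array of [a, r, x, y, z] rows from above
r = spheres[:, 1]
x, y, z = spheres[:, 2], spheres[:, 3], spheres[:, 4]
# one glyph source for all particles; glyph size = scalar * scale_factor
pts = mlab.points3d(x, y, z, r, scale_factor=2.0, resolution=12)
pts.glyph.glyph.clamping = False  # scale each glyph by its own radius
mlab.show()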
I have been told I should look into writing my numpy arrays to .vtk files using evtk, then visualizing these in ParaView. I downloaded ParaView and read this, but perhaps my version of Python is limiting me? First, install pyevtk -- okay:
I tried conda install -c opengeostat pyevtk=1.0.0, but it failed due to an incompatibility with my Python version. I looked for details but could not find any.
Next I downloaded the repository from https://pypi.python.org/pypi/PyEVTK/1.0.0, then used pip to install it successfully.
Next I copied evtk.py, vtk.py, hl.py, and xml.py into my working directory and tried some examples from the repository -- none of them work. Seemingly there is some problem with
from .vtk import *
type commands. I tried replacing all of these in the four .py files with
from evtk import vtk
from vtk import *
and such, but no luck. Long story short, I can't get pyevtk working to export my numpy arrays as .vtk files. I could use some help in this regard, or better yet I would love a different option for getting my numpy arrays rendered by ParaView. Any help is appreciated!
OK, I solved my own problem. This image was made using ParaView, after converting the numpy arrays to a .vtu object using pyevtk.
Out of the box the repository did not work; there was some problem with importing the modules inside the four .py files, so I modified them all. Instead of from .vtk import *, I changed it to from vtk import *, and so on, in every module in the library. evtk.py was not able to import a class from xml.py, so I just copied and pasted the class in, then deleted xml.py. After some tinkering and clueless modifying to make the errors go away, eventually it worked.
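For reference, once pyevtk imports cleanly, exporting point data takes only a few lines with the high-level helper pointsToVTK; the output filename and field name below are my own choices:
import numpy as np
from pyevtk.hl import pointsToVTK

# spheres is assumed to be the (N, 5) array of [a, r, x, y, z] rows
x = np.ascontiguousarray(spheres[:, 2])
y = np.ascontiguousarray(spheres[:, 3])
z = np.ascontiguousarray(spheres[:, 4])
r = np.ascontiguousarray(spheres[:, 1])
# writes points.vtu, which ParaView can open and glyph with spheres
pointsToVTK("./points", x, y, z, data={"radius": r})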
I have started to use Pytesser, which works great with both English and Chinese, but is there a way to have both languages work at the same time? Would I have to make my own traineddata file? My code is:
import Image
from pytesser import *
print image_to_string(Image.open("chinese_and_english.jpg"), lang="eng")
#also want to have chinese be recognized
I'm not sure about Pytesser, but with tesserocr you can specify multiple languages. For example:
import tesserocr

with tesserocr.PyTessBaseAPI(lang='eng+chi_tra') as api:
    api.SetImageFile('eSXSz.jpg')
    print api.GetUTF8Text()

# or simply
print tesserocr.file_to_text('eSXSz.jpg', lang='eng+chi_tra')
Example output for your image:
In [8]: print tesserocr.file_to_text('eSXSz.jpg', lang='eng+chi_tra')
Character, Chmese 動m川爬d
胸肌岫馴伽 H枷﹏ P﹏… …
〔Manda‥﹝ 二 Standard C…爬虯
一
口
X慣ng怕ng
Note that it's more efficient to initialize the API once, as in the first example, and re-use it for multiple images by calling SetImageFile (or SetImage with a PIL.Image object), to avoid re-initializing the API every time.
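A small sketch of that pattern (the filenames here are placeholders):
import tesserocr
from PIL import Image

with tesserocr.PyTessBaseAPI(lang='eng+chi_tra') as api:
    for path in ['page1.jpg', 'page2.jpg']:
        api.SetImage(Image.open(path))  # re-use the already-initialized API
        print(api.GetUTF8Text())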
So my research mates and I are trying to save a pretty big (47104, 5) array into a TTree in a ROOT file. The array works fine on the Python side: we can access everything and run normal commands, but when we run the root_numpy.array2root() command, we get a weird error.
Object of type 'NoneType' has no len()
The code we are running for this portion is as follows:
import root_numpy as rnp
import numpy as np
import scipy
import logging

def save_array(outputArray, outputName):
    outputString = str(outputName)
    logging.info("Creating .Root file")
    rnp.array2root(outputArray, outputString, treename="Training_Variables", mode="recreate")
We added the outputString variable to make sure we were passing the filename in as a string. (In our Python terminal, we append .root to outputName to save it as a .root file.)
(Screenshot: terminal output showing the exact error location inside root_numpy.)
Pretty much, we are confused about why array2root() asks for the len() of an object which we don't think should have a len(); it should just have a shape. Any insight would be greatly appreciated.
The conversion routines from NumPy arrays to ROOT data types work with structured arrays; see the two links below. (Not tested, but this is very likely the problem, as the routines use the arr.dtype.names and arr.dtype.fields attributes.)
http://rootpy.github.io/root_numpy/reference/generated/root_numpy.array2tree.html#root_numpy.array2tree
http://rootpy.github.io/root_numpy/reference/generated/root_numpy.array2root.html#root_numpy.array2root
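A minimal sketch of the likely fix (the field names below are placeholders of my choosing): give the five columns named fields before handing the array to array2root:
import numpy as np
import root_numpy as rnp

flat = np.random.rand(47104, 5)  # stand-in for the real training array
# build a structured array with one named field per column
structured = np.core.records.fromarrays(flat.T, names='v1,v2,v3,v4,v5')
rnp.array2root(structured, 'output.root', treename='Training_Variables', mode='recreate')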
I am trying to load a CCITT T.3 compressed TIFF into Python and get the pixel matrix from it. It should just be a logical matrix.
I have tried using pylibtiff and PIL, but when I load the image with them, the matrix they return is empty. I have read in a lot of places that these two tools support loading CCITT but not accessing the pixels.
I am open to converting the image, as long as I can get the logical matrix from it and do it in Python code. The crazy thing is that if I open one of my images in Paint and save it without altering it, then try to load it with pylibtiff, it works; Paint re-compresses it with LZW compression.
So I guess my real question is: is there a way to either natively load CCITT images into matrices, or convert the images to LZW, using Python?
Thanks,
tylerthemiler
It seems the best way is not to use Python alone but to lean on netpbm:
import Image
import ImageFile
import subprocess

tiff = 'test.tiff'
im = Image.open(tiff)
print 'size', im.size
try:
    print 'extrema', im.getextrema()
except IOError as e:
    print 'help!', e, '\n'
    print 'I Get by with a Little Help from my Friends'
    pbm_proc = subprocess.Popen(['tifftopnm', tiff],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
    (pbm_data, pbm_error) = pbm_proc.communicate()
    ifp = ImageFile.Parser()
    ifp.feed(pbm_data)
    im = ifp.close()
    print 'conversion message', pbm_error,
    print 'extrema', im.getextrema()
    print 'size', im.size

# houston: we have an image
im.show()
Seems to do the trick:
$ python g3fax.py
size (1728, 2156)
extrema help! decoder group3 not available
I Get by with a Little Help from my Friends
conversion message tifftopnm: writing PBM file
extrema (0, 255)
size (1728, 2156)
How about running tiffcp with subprocess to convert to LZW (the -c lzw switch), then processing normally with pylibtiff? There are Windows builds of tiffcp lying around on the web. Not exactly a Python-native solution, but still...
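A minimal sketch of that idea, assuming tiffcp is on the PATH (the filenames are placeholders):
import subprocess
from libtiff import TIFF

# re-compress the CCITT Group 3 fax to LZW with libtiff's tiffcp tool
subprocess.check_call(['tiffcp', '-c', 'lzw', 'fax_g3.tiff', 'fax_lzw.tiff'])

# pylibtiff can read the LZW-compressed copy
tif = TIFF.open('fax_lzw.tiff')
matrix = tif.read_image()  # numpy array of pixel values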