macos: converting dot to png - python

I've studied the solutions outlined here:
Converting dot to png in python
However, none of these solutions work for me. In particular, when I try the check_call method, I get the following error:
File "/Users/anaconda/lib/python2.7/subprocess.py", line 1343, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
When I use pydot, the line (graph,) = pydot.graph_from_dot_data(dotfile.getvalue()) fails with:
TypeError: 'Dot' object is not iterable
Here is some example code I found on one of the above posts that I've been testing:
from sklearn import tree
import pydot
import StringIO
from subprocess import check_call
# Define training and target set for the classifier
train = [[1,2,3],[2,5,1],[2,1,7]]
target = [10,20,30]
# Initialize Classifier. Random values are initialized with always the same random seed of value 0
# (allows reproducible results)
dectree = tree.DecisionTreeClassifier(random_state=0)
dectree.fit(train, target)
# Test classifier with other, unknown feature vector
test = [2,2,3]
predicted = dectree.predict(test)
dotfile = StringIO.StringIO()
tree.export_graphviz(dectree, out_file=dotfile)
check_call(['dot','-Tpng','InputFile.dot','-o','OutputFile.png'])
(graph,)=pydot.graph_from_dot_data(dotfile.getvalue())
graph.write_png("dtree.png")
Thanks in advance.

OSError: [Errno 2] No such file or directory raised from check_call usually means the command could not be started, either because the dot executable is not on your PATH (Graphviz not installed) or because something referenced in the call is missing; note that the posted snippet never actually writes InputFile.dot, so that file may not be present on your filesystem.
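If you want to keep the original flow from the question, a minimal sketch (assuming Graphviz's dot binary is installed and on your PATH) is to write the exported dot text to disk before invoking dot:
from sklearn import tree
import StringIO
from subprocess import check_call

train = [[1, 2, 3], [2, 5, 1], [2, 1, 7]]
target = [10, 20, 30]
dectree = tree.DecisionTreeClassifier(random_state=0)
dectree.fit(train, target)

# Export the tree into an in-memory buffer, then persist it so dot has a real file to read
dotfile = StringIO.StringIO()
tree.export_graphviz(dectree, out_file=dotfile)
with open('InputFile.dot', 'w') as f:
    f.write(dotfile.getvalue())

# InputFile.dot now exists, so check_call has something to convert
check_call(['dot', '-Tpng', 'InputFile.dot', '-o', 'OutputFile.png'])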
In case you're just interested in converting dot to png, here is a simple Python example, sample_tree.py, that generates a PNG from a dot file and works on my Mac:
import pydot
from subprocess import check_call
graph = pydot.Dot(graph_type='graph')
for i in xrange(2):
    edge = pydot.Edge("a", "b%d" % i)
    graph.add_edge(edge)
graph.write_png('sample_tree.png')
# If a dot file needs to be created as well
graph.write_dot('sample_tree.dot')
check_call(['dot','-Tpng','sample_tree.dot','-o','OutputFile.png'])
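As for the TypeError: 'Dot' object is not iterable from the question: the (graph,) = ... unpacking assumes a pydot version whose graph_from_dot_data returns a list of graphs; some older pydot builds (and pydotplus) return a single Dot object instead. A small hedged sketch that works with either return type:
result = pydot.graph_from_dot_data(dotfile.getvalue())
# Newer pydot returns a list of graphs, older variants return a single Dot object
graph = result[0] if isinstance(result, list) else result
graph.write_png('dtree.png')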
By the way, this dtree example has also been used in "Sckit learn with GraphViz exports empty outputs", in case any other similar issues are encountered. Thanks.

Related

"OSError: Failed to interpret file as a pickle" after saving

My code began giving this error after I opened and saved NEweights.npy:
OSError: Failed to interpret file 'D:\\NeuralNetwork\\NEweights.npy' as a pickle
It was working initially before I saved it. Why am I receiving this error only now, and is there any way I can still access the data in NEweights.npy? (Just for context, NEweights.npy is an array of neural network weights trained via Nesterov Accelerated Gradient. I was testing different NN optimizers.)
I have this code to save the numpy arrays in an .npy file:
np.save(f'{path}GDweights.npy', np.array(weights, dtype=object))
I have this to access the numpy arrays:
def getWeights(path):
    return np.load(path, allow_pickle=True)
path = 'D:\\NeuralNetwork\\'
inputs, outputs = grab(f'{path}test.csv')
weightsGD = getWeights(f'{path}GDweights.npy')
weightsM = getWeights(f'{path}Mweights.npy')
weightsNE = getWeights(f'{path}NEweights.npy')
weightsNA = getWeights(f'{path}NAweights.npy')
weightsD = getWeights(f'{path}Dweights.npy')
This error is raised as an IOError; per the numpy.load documentation, it is raised if the input file does not exist or cannot be read. Since the array loaded fine before you opened and re-saved NEweights.npy, the most likely explanation is that the re-save altered the file so that it is no longer a valid .npy file.
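For reference, a minimal round-trip sketch (the path and the dummy weights are placeholders) showing the same save/load pattern, plus a quick check of the .npy header; a file that no longer starts with the NumPy magic string was most likely altered when it was re-saved:
import numpy as np

path = 'D:\\NeuralNetwork\\'
weights = [np.zeros((3, 2)), np.zeros((2, 1))]  # hypothetical layer weights

# Save as an object array, as in the original code
np.save(path + 'NEweights.npy', np.array(weights, dtype=object))

# Reload; allow_pickle=True is required for object arrays
restored = np.load(path + 'NEweights.npy', allow_pickle=True)
print([w.shape for w in restored])

# A valid .npy file begins with the magic string b'\x93NUMPY'; if this prints
# False, the file was changed or re-saved in some other format
with open(path + 'NEweights.npy', 'rb') as f:
    print(f.read(6) == b'\x93NUMPY')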

Augmenting images in a dataset - encountering ValueError: Could not find a format to read the specified file in mode 'i'

I'm in a beginner neural networks class and am really struggling.
I have a dataset of images that isn't big enough to train my network with, so I'm trying to augment them (rotate/noise addition etc.) and add the augmented images onto the original set. I'm following the code found on Medium: https://medium.com/#thimblot/data-augmentation-boost-your-image-dataset-with-few-lines-of-python-155c2dc1baec
However, I'm encountering ValueError: Could not find a format to read the specified file in mode 'i'
Not sure what this error means or how to go about solving it. Any help would be greatly appreciated.
import os
import random
from scipy import ndarray
import skimage as sk
import skimage.io  # ensure the io submodule is loaded for sk.io.imread
from skimage import transform
from skimage import util

path1 = "/Users/.../"
path2 = "/Users/.../"
listing = os.listdir(path1)
num_files_desired = 1000
image = [os.path.join(path2, f) for f in os.listdir(path2) if os.path.isfile(os.path.join(path2, f))]
num_generated_files = 0
while num_generated_files <= num_files_desired:
    image_path = random.choice(image)
    image_to_transform = sk.io.imread(image_path)
137 if format is None:
138 raise ValueError(
--> 139 "Could not find a format to read the specified file " "in mode %r" % mode
140 )
141
ValueError: Could not find a format to read the specified file in mode 'i'
I can see a few possibilities. Before getting to them, let me explain what your error means: it is basically an indicator that your images cannot be read by sk.io.imread(). Here are the possible things to do:
The [os.path.join(path2, f) for f in os.listdir(path2) if os.path.isfile(os.path.join(path2, f))] expression may not be producing valid image paths. If so, point it at the exact folder instead of building the list this way: simply use os.listdir() and read the files manually.
You can also use glob to read only the files that share an image extension such as .jpg (see the sketch below).
Your files may be corrupted. You can filter them out with PIL: open each image with image = Image.open(...) first and call image.verify().
Read up on sk.io.imread(filename, plugin=...); choosing the plugin explicitly may resolve your issue.
Hope it helps.
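A short sketch of the glob + PIL filtering idea (path2 and the .jpg extension are assumptions; adjust them to your folder and formats):
import glob
from PIL import Image
import skimage.io as skio

path2 = "/Users/.../"  # placeholder folder, as in the question

# Collect only files with a known image extension
candidates = glob.glob(path2 + "*.jpg")

readable = []
for image_path in candidates:
    try:
        # verify() raises an exception for truncated or non-image files
        with Image.open(image_path) as img:
            img.verify()
        readable.append(image_path)
    except Exception:
        print("Skipping unreadable file:", image_path)

# Only paths that passed verification are handed to skimage
images = [skio.imread(p) for p in readable]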

Pytorch datasets.ImageFolder flag FileNotFoundError: [Errno 2] No such file or directory: '\u2068/

Trying to load data from a local directory in PyTorch using datasets.ImageFolder, but getting FileNotFoundError...
import torch
from torchvision import datasets, transforms
data_dir = '⁨/Users/Desktop/Udacity/AI for Trading/deep-learning-v2-pytorch/intro-to-pytorch/data⁩/Cat_Dog_data⁩/⁨train⁩'
transform = transforms.Compose([transforms.Resize(255),
    transforms.CenterCrop(224),
    transforms.ToTensor()])
dataset = datasets.ImageFolder(root=data_dir, transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
FileNotFoundError: [Errno 2] No such file or directory: '\u2068/Users/Desktop/Udacity/AI for Trading/deep-learning-v2-pytorch/intro-to-pytorch/data\u2069/Cat_Dog_data\u2069/\u2068train\u2069'
Answer: Retyping the image directory path manually instead of copy-pasting solved the problem.
This error is not related to PyTorch or the datasets library; it appears to be reported by Python's os module.
I tried loading a simple path like
import os
data_dir = '/Users'
os.chdir(data_dir)
Even the above code failed. After some research, it seems the error is caused by the following:
There are some invisible 'LEFT-TO-RIGHT MARK' (u200e) and 'FIRST STRONG ISOLATE' (u2068) characters in the string.
Retyping the path manually (instead of copy-pasting) solved the problem.
This Stack Overflow answer helped me fix the problem. I still thought to post it here, since the problem had not been reported in the context of PyTorch, and hopefully this helps folks connect the dots.
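If retyping long paths is not practical, one workaround (a small sketch; the set of characters to strip is an assumption based on the characters reported above) is to remove the invisible bidirectional-formatting characters from the copied string before using it:
import os

# A path copied from Finder; it may contain invisible bidi-formatting characters
raw_dir = '\u2068/Users/Desktop/data\u2069/Cat_Dog_data\u2069/\u2068train\u2069'  # hypothetical example

# Invisible marks that commonly sneak in when copying paths on macOS
INVISIBLE = {'\u200e', '\u200f', '\u2066', '\u2067', '\u2068', '\u2069'}
data_dir = ''.join(ch for ch in raw_dir if ch not in INVISIBLE)

print(repr(data_dir))
print(os.path.isdir(data_dir))  # True once the cleaned path actually exists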

Loading .mat image dataset in python

I have an image dataset in the .mat format. I want to load this dataset and visualize its images so I can interact with them, for example resize them and save them in a folder in a format that can be displayed, such as .jpg or .png. How can I do that?
What I did is save the dataset in the scipy.io path inside the Python site-packages and write the following code:
import os
import cv2
import scipy.io as sio

dbpath = sio.loadmat('COFW_train_color.mat')
listing = os.listdir(dbpath)
num_samples = size(dbpath)
for file in listing:
    im = (dbpath + '\\' + file)
    imag = cv2.imread(im)
    cv2.imshow(imag)
But this did not give me what I need, and it also returned the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'COFW_train_color.mat'
I also tried to use the full path to the dataset as follows:
dbpath = "C:\\Users\\SONY\\AppData\\Local\\Programs\\Python\\Python35\\Lib\\site-packages\\scipy\\io\\COFW_train_color.mat"
but I received another error message:
NotImplementedError: Please use HDF reader for matlab v7.3 files
How can I open and interact with this type of dataset and visualize its images? Can anyone please help me? I will be thankful.
The NotImplementedError means the file was saved in MATLAB's v7.3 format, which is HDF5-based and cannot be read by scipy.io.loadmat; the mat73 package handles it:
pip install mat73
import mat73
data_dict = mat73.loadmat('COFW_train_color.mat')
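Once the .mat file is loaded, its images can be written out in a viewable format. A rough sketch (the 'IsTr' key and the array layout are assumptions; print data_dict.keys() to see the actual variable names in COFW_train_color.mat):
import cv2
import numpy as np
import mat73

data_dict = mat73.loadmat('COFW_train_color.mat')
print(data_dict.keys())  # inspect the real variable names first

# 'IsTr' is hypothetical; replace it with the key that holds the images
images = data_dict['IsTr']

for i, img in enumerate(images):
    img = np.asarray(img)
    # cv2.imwrite expects uint8; rescale if the data is float
    if img.dtype != np.uint8:
        rng = img.max() - img.min()
        img = (255 * (img - img.min()) / (rng if rng else 1)).astype(np.uint8)
    cv2.imwrite('cofw_%04d.png' % i, img)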

How to load SVM data from file in OpenCV 3.1?

I have a problem with loading a trained SVM from a file. I use Python and OpenCV 3.1.0. I create the svm object with:
svm = cv2.ml.SVM_create()
Next, I train the svm and save it to a file with:
svm.save('data.xml')
Now I want to load this file in another Python script. In the docs I can't find any method to do it.
Is there a trick to load an SVM from a file? Thanks for any responses.
I think it's a little bit confusing that there is no svm.load(filepath) method as a counterpart of svm.save(filepath), but when I read the module help it makes sense that SVM_load is a child of cv2.ml (a sibling of SVM_create).
Be sure that your opencv master branch is up-to-date (currently version 3.1.0-dev)
>>> import cv2
>>> cv2.__version__
'3.1.0-dev'
>>> help(cv2.ml)
returns
SVM_create(...)
SVM_create() -> retval
SVM_load(...)
SVM_load(filepath) -> retval
so you can simply use something like:
if not os.path.isfile('svm.dat'):
    svm = cv2.ml.SVM_create()
    ...
    svm.save('svm.dat')
else:
    svm = cv2.ml.SVM_load('svm.dat')
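For completeness, a small end-to-end sketch (the toy samples, the svm.dat filename, and the SVM parameters are only illustrative assumptions) that trains, saves, reloads, and predicts:
import os
import numpy as np
import cv2

samples = np.array([[0, 0], [1, 1], [4, 4], [5, 5]], dtype=np.float32)
labels = np.array([0, 0, 1, 1], dtype=np.int32)

if not os.path.isfile('svm.dat'):
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
    svm.save('svm.dat')
else:
    svm = cv2.ml.SVM_load('svm.dat')

# predict returns (retval, results); results has one response per input row
_, results = svm.predict(np.array([[4.5, 4.5]], dtype=np.float32))
print(results)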
