I'm trying to create a small tool where I select all the file (texture) nodes in my scene and apply a specific filter to them.
I get the list of the '.fileTextureName' attributes that exist in my scene, which gives me every .exr and .tif I have. But I want to remove the .exr files from my list and apply the filter only to the .tif files.
I haven't found a way to make a list of the attributes or to select just the file type I want.
Here is just the beginning of the script:
import maya.cmds as cmds

allFileNodes = cmds.ls(type="file")

def attrList():
    for eachFile in allFileNodes:
        currentFile = cmds.getAttr(eachFile + '.fileTextureName')
        print(currentFile)

attrList()
Any help is appreciated!!
If you simply want to filter what to operate on based on its file extension, you can use .endswith to only include .tif files:
import maya.cmds as cmds

all_file_nodes = cmds.ls(type="file")

for each_file in all_file_nodes:
    image_path = cmds.getAttr(each_file + ".fileTextureName")  # Get the path of the image the file node is referencing.
    if image_path.lower().endswith(".tif"):  # Only continue if the path ends with `.tif`; `.lower()` handles upper-case extensions.
        print(image_path)  # Only tifs beyond this point, do what you want.
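For the "apply a specific filter" part, here is a minimal sketch assuming the filter you mean is the file node's texture filter; the filterType value (1 here) is an assumption, so swap in whatever attribute edit your tool actually needs:

import maya.cmds as cmds

for each_file in cmds.ls(type="file"):
    image_path = cmds.getAttr(each_file + ".fileTextureName")
    if image_path.lower().endswith(".tif"):
        # Assumption: "filter" means the node's filterType attribute; check the
        # value mapping in the Attribute Editor before relying on it.
        cmds.setAttr(each_file + ".filterType", 1)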
I'm on Windows 10, using Jupyter Notebook, PyTorch 1.0 and Python 3.6.x currently.
At first I confirmed the path of the files using print(os.listdir('./Dataset/images/')), and I could check that this path is correct.
But I get this error:
RuntimeError: Found 0 files in subfolders of: ./Dataset/images/
Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif
What is the matter?
Could you suggest a solution?
I also tried moving the images into a subfolder, like ./Dataset/1/images, but the result was the same...
import os

import torchvision
from torchvision import transforms
from torch.utils import data

img_dir = './Dataset/images/'
img_data = torchvision.datasets.ImageFolder(os.path.join(img_dir),
                                            transforms.Compose([
                                                transforms.Scale(256),  # deprecated alias of transforms.Resize
                                                transforms.RandomResizedCrop(224),
                                                transforms.RandomHorizontalFlip(),
                                                transforms.ToTensor(),
                                            ]))

img_batch = data.DataLoader(img_data, batch_size=batch_size,  # batch_size defined elsewhere
                            shuffle=True, drop_last=True)
I met the same problem when using CelebA, which includes 200,000 images. As we can see, there are many images. But in a small-sample situation (I tried 20 images), I checked that the error is not raised, which means the images can be read successfully.
But when the number grows, we need another method.
I solved the problem according to this website. Thanks to QimingChen.
GitHub solution
Simply adding another folder named 1 (/train/ ---> /train/1/) inside the original folder makes the program work without changing the path, because ImageFolder expects images to be sorted into subfolders, one per class (see the sketch after the quoted answer below).
The original answer on GitHub:
Let's say I am going to use ImageFolder("/train/") to read jpg files in folder train.
The file structure is
/train/
-- 1.jpg
-- 2.jpg
-- 3.jpg
I failed to load them, leading to errors:
RuntimeError: Found 0 images in subfolders of: ./data
Supported image extensions are: .jpg,.JPG,.jpeg,.JPEG,.png,.PNG,.ppm,.PPM,.bmp,.BMP
I read the solution above and tried tens of times. Then I changed the structure to
/train/1/
-- 1.jpg
-- 2.jpg
-- 3.jpg
while keeping the loading code as ImageFolder("/train/") -- and IT WORKS.
It seems the program reads files recursively, which is convenient in some cases.
Hope this would help!!
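In case it's useful, a minimal sketch of that restructuring (the paths are placeholders), moving every image from a flat folder into a single dummy class subfolder named 1:

import os
import shutil

src = './Dataset/images'      # hypothetical flat folder containing the images
dst = os.path.join(src, '1')  # single dummy class subfolder

os.makedirs(dst, exist_ok=True)
for fname in os.listdir(src):
    path = os.path.join(src, fname)
    if os.path.isfile(path):  # skip the '1' folder itself
        shutil.move(path, os.path.join(dst, fname))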
Can you post the structure of your files? In your case, it is supposed to be:
img_dir
|_class1
  |_a.jpg
  |_b.jpg
|_class2
  |_a.jpg
  |_b.jpg
...
According to the rules of torchvision's ImageFolder, you should pass the parent directory of the image folder. That means if your images are located in './Dataset/images/', the path given to the data loader should be './Dataset' instead. I hope it fixes your bug. :)
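A minimal sketch of that change, under the assumption that 'images' then acts as the (single) class subfolder:

import torchvision
from torchvision import transforms

img_data = torchvision.datasets.ImageFolder(
    './Dataset',  # parent folder; './Dataset/images/' becomes the class subfolder
    transforms.Compose([transforms.ToTensor()]))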
You can modify the ImageFolder class to get to the root folder directly (without subfolders):
class ImageFolder(Dataset):
    def __init__(self, root, transform=None):
        # Call make_dataset to collect the files.
        self.samples = make_dataset(root)
        self.imgs = self.samples
        self.transform = transform
        ...
We call the make_dataset method to collect our files:
import os

def make_dataset(dir):
    images = []
    d = os.path.expanduser(dir)
    if not os.path.exists(d):
        print('path does not exist')
        return images
    for root, _, fnames in sorted(os.walk(d)):
        for fname in sorted(fnames):
            path = os.path.join(root, fname)
            images.append(path)
    return images
All the action takes place in the loop containing os.walk. Here, the files are collected from the 'root' directory, which we specify as the directory containing our files.
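For the class to actually work as a Dataset it also needs __len__ and __getitem__; here is a minimal sketch of the elided part (loading with PIL is my assumption, use whatever loader you prefer):

from PIL import Image
from torch.utils.data import Dataset

class ImageFolder(Dataset):
    def __init__(self, root, transform=None):
        self.samples = make_dataset(root)
        self.imgs = self.samples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        img = Image.open(self.samples[index]).convert('RGB')  # assumption: PIL loader
        if self.transform is not None:
            img = self.transform(img)
        return img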
See the documentation of the ImageFolder dataset to see how it expects the images to be organized into subfolders under './Dataset/images' according to image classes. Make sure your images adhere to this layout.
Apparently, the solution is just making the picture names alphanumeric. There may be another solution, but this works.
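A minimal sketch of such a renaming pass (the folder path and naming scheme are placeholders):

import os

folder = './Dataset/images/1'  # hypothetical class subfolder
for i, fname in enumerate(sorted(os.listdir(folder))):
    ext = os.path.splitext(fname)[1]
    # Rename every file to a plain alphanumeric name such as img0000.jpg.
    os.rename(os.path.join(folder, fname),
              os.path.join(folder, 'img{:04d}{}'.format(i, ext)))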
I am trying to create a unit test for a function that reads every image from a folder and saves them in a list.
Here is a simplified version of the function:
import os

import cv2

def read_images(directory):
    image_paths = os.listdir(directory)
    images = []
    for im in image_paths:
        images.append(cv2.imread(os.path.join(directory, im)))
    return images
This other question brought me close to the solution, but in my case I want the fake files created to be images (basically, arrays) so I can read them with cv2.imread.
My idea is not having to create any temporary folder and, of course, not having to connect with any external folder or database. Is this possible?
Edit: to be clear, I'd like to not have to create temporary folders, nor temporary image files. I'd like to know if there is a way of telling the program: "There is a folder here, and inside it there are some images/arrays with this shape", without actually having to create anything on disk.
If you actually need temporary files, you should check out tempfile.
It allows you to create temporary files and directories that provide automatic cleanup, so there are no leftover files after your tests, while still letting you test what you want.
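A minimal sketch of that approach, writing a few throwaway images with cv2.imwrite into a temporary directory (the shapes and count are arbitrary):

import os
import tempfile

import cv2
import numpy as np

def test_read_images_with_tempdir():
    with tempfile.TemporaryDirectory() as tmp_dir:
        for i in range(3):
            img = np.zeros((10, 10, 3), dtype=np.uint8)  # tiny black image
            cv2.imwrite(os.path.join(tmp_dir, 'img{}.png'.format(i)), img)
        images = read_images(tmp_dir)  # the function under test
        assert len(images) == 3
        assert images[0].shape == (10, 10, 3)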
EDIT
If you don't really want to use temp files or temp folders, here is another solution for your problem:
generate an in-memory image for your test.
from io import BytesIO

from PIL import Image

def create_in_memory_image():
    in_memory_file = BytesIO()
    image = Image.new('RGBA',
                      size=(10, 10),  # use a non-empty size; (0, 0) would make a degenerate image
                      color=(155, 0, 0, 255))
    image.save(in_memory_file, 'png')
    in_memory_file.name = 'tmp_testing_name.png'
    in_memory_file.seek(0)
    return in_memory_file
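Note that cv2.imread only accepts file paths, so to turn this in-memory file into an array you would decode its bytes instead; a small sketch:

import cv2
import numpy as np

in_memory_file = create_in_memory_image()
data = np.frombuffer(in_memory_file.getvalue(), dtype=np.uint8)
img = cv2.imdecode(data, cv2.IMREAD_COLOR)  # an ndarray, like cv2.imread would return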
As for how to mock a fake folder with fake images inside:
from unittest import mock

def local_cv2_imread(*args, **kwargs):
    # Use as a side effect: whatever the mocked cv2.imread is called with, return a fake image.
    return 'fakeImg1'

def test_read_images(self):
    with mock.patch('os.listdir') as mock_listdir:
        with mock.patch('package.module.cv2.imread') as mock_imread:
            mock_listdir.return_value = ['fake_path']
            mock_imread.side_effect = local_cv2_imread
            images = read_images('a_real_path')
            self.assertEqual(images, ['fakeImg1'])
I am trying to read several images from an archive with skimage.io.imread_collection, but for some reason it throws an error:
"There is no item named '00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552/masks/*.png' in the archive".
I checked several times: such a directory exists in the archive, and with *.png I just specify that I want all the images in my collection. imread_collection works well when I read the images not from the archive but from the extracted folder.
import zipfile

import skimage.io

# specify folder name
each_img_idx = '00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552'
with zipfile.ZipFile('stage1_train.zip') as archive:
    mask_ = skimage.io.imread_collection(archive.open(each_img_idx + '/masks/*.png')).concatenate()
Could someone explain to me what's going on?
Not all scikit-image plugins support reading from bytes, so I recommend using imageio. You'll also have to tell ImageCollection how to access the images inside the archive, which is done using a customized load_func:
import zipfile

import imageio
from skimage import io

archive = zipfile.ZipFile('foo.zip')
images = [f.filename for f in archive.filelist]

def zip_imread(fn):
    # Read the raw bytes out of the archive and decode them with imageio.
    return imageio.imread(archive.read(fn))

ic = io.ImageCollection(images, load_func=zip_imread)
ImageCollection has some benefits like not loading all images into memory at the same time. But if you simply want a long list of NumPy arrays, you can do:
collection = [imageio.imread(archive.read(f)) for f in archive.filelist]
I want to be able to load a large number of images one by one from a given folder, without knowing the name of each image (only the name of the folder where all the images are located). Currently I can load only one image using its name (pic.jpg):
pixmap = QtGui.QPixmap("pic.jpg")
item = QtGui.QGraphicsPixmapItem(pixmap)
self.scene.addItem(item)
self.scene.update()
Is there any way to do this? Thanks in advance!
The os module contains filesystem access functions.
import os

directory = "dirname"
for filename in os.listdir(directory):
    pixmap = QtGui.QPixmap(os.path.join(directory, filename))
Note: os.path.join is there so you are platform agnostic.
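Putting it together with your snippet, a minimal sketch (assuming it runs in the same class as your code, with a simple extension check so non-image files are skipped):

import os

image_exts = ('.jpg', '.jpeg', '.png', '.bmp')
directory = "dirname"  # placeholder: your image folder

for filename in os.listdir(directory):
    if not filename.lower().endswith(image_exts):
        continue  # skip files that are not images
    pixmap = QtGui.QPixmap(os.path.join(directory, filename))
    item = QtGui.QGraphicsPixmapItem(pixmap)
    self.scene.addItem(item)
self.scene.update()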