I am working with DICOM images. I have 5 scans (folders), and each scan contains multiple images. After doing some preprocessing on the images, I want to save the processed images in a single file using "np.save". The code below saves each folder to a separate file:
data_path = 'E:/jupyter/test/LIDC-IDRI/'
patients_data = os.listdir(data_path)
for pd in range(len(patients_data)):
    full_path = load_scan(data_path + patients_data[pd])
    after_pixel_hu = get_pixels_hu(full_path)
    after_resample, spacing = resample(after_pixel_hu, full_path, [1,1,1])
    np.save(output_path + "images_of_%s_patient.npy" % (patients_data[pd]), after_resample)
load_scan is a function for loading (reading) DICOM files. What I want is to save all of the processed images in a single file rather than in five separate files. Can anyone tell me how to do that, please?
The first thing to notice is that you are using %s with patients_data[pd]. I assume patients_data is a list of the names of the patients, which means you are constructing a different output path for each patient - you are asking numpy to save each of your processed images to a new location.
Secondly, .npy is probably not the file type you want to use for your purposes, since it does not support appending data. You probably want to pick a different file format and then write to the same file path each time.
Edit: Regarding file type, a pdf may be your best option, where you can make each of your images a separate page.
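If the goal is simply to keep everything as NumPy data in one file on disk, one possibility (my own sketch, not part of the answer above) is to collect the processed volume of every patient in a dictionary and write a single .npz archive with np.savez_compressed; load_scan, get_pixels_hu and resample are the question's own functions, and output_path is assumed to be defined elsewhere:

import os
import numpy as np

data_path = 'E:/jupyter/test/LIDC-IDRI/'
patients_data = os.listdir(data_path)

all_patients = {}                                 # patient folder name -> processed volume
for name in patients_data:
    full_path = load_scan(data_path + name)       # question's own helper functions
    after_pixel_hu = get_pixels_hu(full_path)
    after_resample, spacing = resample(after_pixel_hu, full_path, [1, 1, 1])
    all_patients[name] = after_resample

# one file on disk, one named array per patient; read back with np.load(...)
np.savez_compressed(output_path + "all_patients.npz", **all_patients)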
I am new to Python and am following this article https://www.mygreatlearning.com/blog/face-recognition/#whatsopencv to extract features from faces. After trying it, I realised that I loop over the Images directory for each image and then save the result inside face_enc:
datas = {"encodings": knownEncodings, "names": knownNames}
f = open("face_enc", "wb")
f.write(pickle.dumps(datas))
f.close()
What confuses me is this: say I have 50 images inside the Images directory and then add another 100 images (just as an example). I would then have to loop over everything from the start (images 1-150) and save it to face_enc again. Is there a way to update the data inside face_enc without redoing everything from the start, to save time?
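One common pattern (a sketch of my own, not taken from the article) is to load the existing pickle, encode only the images that were added since the last run, extend the two lists, and write the file back. Here encode_face and new_image_paths are hypothetical placeholders for the article's detection/encoding steps and for however you keep track of newly added files:

import pickle

# load the encodings saved so far (assumes the dictionary layout shown above)
with open("face_enc", "rb") as f:
    datas = pickle.load(f)

# new_image_paths: hypothetical list of (image_path, person_name) pairs added since the last run
# encode_face: hypothetical stand-in for the article's face detection + encoding steps
for image_path, person_name in new_image_paths:
    datas["encodings"].append(encode_face(image_path))
    datas["names"].append(person_name)

# write the updated dictionary back to face_enc
with open("face_enc", "wb") as f:
    pickle.dump(datas, f)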
I am looking for help combining multiple images belonging to one user into a single image file using a Python script. For example, user 12568 has 7 images, which I am trying to combine into one file containing them as pages 1-7, stacked vertically. The same needs to be applied to over 100K+ users.
A nice way would be to use imageio.
import imageio
import os

path = "C:/path_to_folder/"
image_path_list = os.listdir(path)

with imageio.get_writer("new_image.tif") as new_image:
    for image_path in image_path_list:
        image = imageio.imread(path + image_path)
        new_image.append_data(image)
This will create a tif file with a page per image.
Note that with this code the folder should only contain images.
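If the folder can also contain other files, one small tweak (my own addition, not part of the answer above) is to filter by extension before the loop:

# keep only files with common image extensions
image_path_list = [p for p in os.listdir(path)
                   if p.lower().endswith((".png", ".jpg", ".jpeg", ".tif"))]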
So I have this bit of code, which clips a tree shapefile out of a LiDAR point cloud. When doing this for a single shapefile it works well.
What I want to do: I have 180 individual tree shapefiles and want to clip every one of them out of the same point cloud and save it as an individual .las file.
So in the end I should have 180 .las files, e.g. Input_shp: Tree11.shp -> Output_las: Tree11.las.
I am sure that there is a way to do all of this at once, I just don't know how to select all the shapefiles and save the output to 180 individual .las files.
I'm really new to Python and any help would be appreciated.
I already tried to get this working with placeholders (.format()) but couldn't really get anywhere.
from WBT.whitebox_tools import WhiteboxTools
wbt = WhiteboxTools()
wbt.work_dir = "/home/david/Documents/Masterarbeit/Pycrown/Individual Trees/"
wbt.clip_lidar_to_polygon(i="Pointcloud_to_clip.las", polygons="tree_11.shp", output="Tree11.las")
I don't have the plugin you are using, but you may be looking for this code snippet:
import os
from WBT.whitebox_tools import WhiteboxTools

wbt = WhiteboxTools()
workDir = "/home/david/Documents/Masterarbeit/Pycrown/Individual Trees/"
wbt.work_dir = workDir

# If you want to select all the files in your work dir you can use the following,
# though you may need to make the path absolute, depending on where you run this:
filesInFolder = os.listdir(workDir)
numberOfShapeFiles = len([_ for _ in filesInFolder if _.endswith('.shp')])

# assume the shapefiles are numbered from 0 to n-1
# loop over all your shapefiles
for fileNumber in range(numberOfShapeFiles):
    wbt.clip_lidar_to_polygon(
        i="Pointcloud_to_clip.las",
        polygons=f"tree_{fileNumber}.shp",
        output=f"Tree{fileNumber}.las"
    )
This makes use of Python format string templates (f-strings), along with the os.listdir function.
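If the numbering is not guaranteed to run contiguously from 0 to n-1, a variation (my own sketch, not part of the answer above) is to loop over the .shp files that actually exist, using glob; note that the output names then follow the shapefile names (tree_11.las) rather than the Tree11.las pattern from the question:

import glob
import os
from WBT.whitebox_tools import WhiteboxTools

wbt = WhiteboxTools()
workDir = "/home/david/Documents/Masterarbeit/Pycrown/Individual Trees/"
wbt.work_dir = workDir

# clip the point cloud once per shapefile that is actually present in the work dir
for shp_path in glob.glob(os.path.join(workDir, "*.shp")):
    name = os.path.splitext(os.path.basename(shp_path))[0]   # e.g. "tree_11"
    wbt.clip_lidar_to_polygon(
        i="Pointcloud_to_clip.las",
        polygons=os.path.basename(shp_path),
        output=f"{name}.las"
    )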
I'm currently working on a project which handles .nii files from neuroimaging. I converted one such file into 80 .png files. Now I need to combine those 80 .png files back into a .nii file.
Please help.
Thanks.
The strict answer is no, you can't do it exactly, because PNG files do not contain the information a NIfTI file needs (orientation, voxel spacing, and so on).
However, if you don't care whether the coordinate and left-right information is correct, you could generate a fake .nii file. You can read your PNG files (I assume they all have the same dimensions) in a for loop:
for i = 1:numberOfPNG_file
    img(:,:,i) = imread(png_Files{i});
end
Then you can use the MATLAB NIfTI tool to create the .nii file:
nii = nii_tool('init', img);
nii_tool('save', nii, 'my_nii.nii');
Hope this helps
% step 1: get the names of the files
files = dir('*.png');
file_names = {files.name}';

% step 2: sort the files
% extract the numbers
% Here the format of the file name should be entered, and %d should replace
% the number, so that the files are loaded in the right order
filenum = cellfun(@(x) sscanf(x, '%d.png'), file_names);
% sort them, and get the sorting order
[~, Sidx] = sort(filenum);
% use this sorting order to sort the filenames
SortedFilenames = file_names(Sidx);

% step 3: combine the images into a single matrix
% get the number of files
num_of_files = numel(SortedFilenames);
for i = 1:num_of_files
    nifti_mat(:,:,i) = imread(SortedFilenames{i});
end

% step 4: convert to NIfTI and save
filename = 'here_goes_the_name_of_the_file';
niftiwrite(nifti_mat, filename);
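If the surrounding project is in Python rather than MATLAB, a rough equivalent of the same idea (my own sketch, not from either answer above) can be put together with imageio and nibabel; the identity affine is a placeholder, since the real orientation and spacing were lost when the slices were exported to PNG, and grayscale slices named 0.png, 1.png, ... are assumed:

import glob
import imageio
import nibabel as nib
import numpy as np

# read the slices in numeric order (assumes grayscale files named 0.png, 1.png, ...)
png_files = sorted(glob.glob("*.png"), key=lambda p: int(p.split(".")[0]))
slices = [imageio.imread(p) for p in png_files]   # every slice must have the same shape
volume = np.stack(slices, axis=-1)                # height x width x number_of_slices

# identity affine: the coordinates are made up, just like the "fake" nii above
nii = nib.Nifti1Image(volume, affine=np.eye(4))
nib.save(nii, "my_nii.nii")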
I have a series of kmz files (1000+) within one folder, one for each polygon of a feature class, together with a corresponding image for each (all images are in a separate folder). These kmz's are automatically generated from the attribute tables of my shapefiles in ArcGIS. Within each kmz file I have a reference to the image that corresponds to that feature, as such:
<tr>
<td>Preview</td>
<td>G:\Temp\Figures\Ovr0.png</td>
</tr>
At the moment each image is only referenced as plain text in a table cell, pointing to a file in the directory /Temp/Figures. What I'd like is to convert all of those text entries into image links, something along the lines of:
<img src="file:///G:/Temp/Figures/Ovr0.png" width = 750 height=500/>
Given the large volume of files, it would be ideal if this could be done within Python (simplekml?). On another note: at some stage I would like to share a few of these kmz files, so I was wondering whether the best solution would be to put each kmz and image pair into its own directory and rezip the kmz file somehow?
I have managed to solve my problem by iterating over each kmz and image, using the zipfile module to read the contents, rewriting the doc.kml, and rezipping the files into a kmz. At the moment the image is placed right after the </body> tag in the kml, but a more precise substitution could be written with re, I presume.
If there is a more efficient method please let me know...
import zipfile

def edit_kmz(kmz, output, image):
    ## Read the doc.kml inside the kmz and rewrite it to a temporary file
    zf = zipfile.ZipFile(kmz)
    temp = r'tempfolder\doc.kml'
    with open(temp, 'w') as wf:  # create the revised doc.kml
        for line in zf.read("doc.kml").decode("utf-8").split("\n"):
            if "</body>" in line:
                # insert the image link right after the closing body tag
                wf.write("</body>\n<img src='files/Ovr0.png' width=750 height=500/>\n")
            else:
                wf.write('%s\n' % line)
    zf.close()

    ## Rezip the file
    zf = zipfile.ZipFile(output, 'a')
    zf.write(image, arcname='files/Ovr0.png')  ## relative path to the image
    zf.write(temp, arcname='doc.kml')          ## add the revised doc.kml
    zf.close()
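For completeness, a driver loop over all kmz/image pairs might look roughly like this (a sketch only; the folder layout and the one-png-per-kmz naming rule are my assumptions, not something stated in the post):

import glob
import os

kmz_folder = r'G:\Temp\KMZ'          # assumed folder holding the 1000+ kmz files
image_folder = r'G:\Temp\Figures'    # assumed folder holding the matching images
out_folder = r'G:\Temp\KMZ_out'      # assumed output folder for the rewritten kmz files

for kmz in glob.glob(os.path.join(kmz_folder, '*.kmz')):
    name = os.path.splitext(os.path.basename(kmz))[0]
    image = os.path.join(image_folder, name + '.png')                 # assumed pairing rule
    edit_kmz(kmz, os.path.join(out_folder, name + '.kmz'), image)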