I need to download single or multiple images from an Earth Engine image collection (preferably multiple, but I can also put single-image code in a loop).
My main problem: download an image for every month from a start date to an end date for a specific location (lat: "", long: "") with zoom 9.
I am trying to download historical satellite imagery from the SKYSAT/GEN-A/PUBLIC/ORTHO/RGB collection, which gives images like this:
https://developers.google.com/earth-engine/datasets/catalog/SKYSAT_GEN-A_PUBLIC_ORTHO_RGB#description
I am working in Python. The code below gives me the collection, but I can't download the entire collection; I need to select an image out of it.
import ee
ee.Initialize()

# Load the SkySat ImageCollection and filter by date.
collection = (ee.ImageCollection('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB')
              .filterDate('2016-03-01', '2018-03-01'))
#pp.pprint('Collection: '+str(collection.getInfo())+'\n')
# Get the number of images.
count = collection.size()
print('Count: ', str(count.getInfo()))
image = ee.Image(collection.sort('CLOUD_COVER').first())
Here image is an <ee.image.Image at 0x23d6cf5dc70> object, but I don't know how to download it.
Also, how do I specify that I want it for a specific location (lat, long) with zoom 19?
Thanks for any help.
You can download all of the images in the image collection using geetools.
You can install it with pip install geetools.
Then change the area, dates, and folder name (if the folder does not exist, it will be created in your Drive) and execute this code.
import ee
import geetools

ee.Initialize()

StartDate = ee.Date.fromYMD(2018, 1, 16)
EndDate = ee.Date.fromYMD(2021, 10, 17)

Area = ee.Geometry.Polygon(
    [[[-97.93534101621628, 49.493287372441955],
      [-97.93534101621628, 49.49105034378085],
      [-97.93049158231736, 49.49105034378085],
      [-97.93049158231736, 49.493287372441955]]])

collection = (ee.ImageCollection('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB')
              .filterDate(StartDate, EndDate)
              .filterBounds(Area))

data_type = 'float32'
name_pattern = '{system_date}'
date_pattern = 'yMMdd'  # dd: day, MMM: month (JAN), y: year
scale = 10
folder_name = 'GEE_Images'

tasks = geetools.batch.Export.imagecollection.toDrive(
    collection=collection,
    folder=folder_name,
    region=Area,
    namePattern=name_pattern,
    scale=scale,
    dataType=data_type,
    datePattern=date_pattern,
    verbose=True,
    maxPixels=1e10
)
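If you only have a point of interest (lat, lon) rather than a polygon, one option is to buffer an ee.Geometry.Point and use its bounding box as the area. A minimal sketch with made-up coordinates and a 5 km buffer (note that ee.Geometry.Point takes lon, lat order):

# Hypothetical coordinates; replace with your own.
lon, lat = -97.93, 49.49
point = ee.Geometry.Point([lon, lat])
Area = point.buffer(5000).bounds()  # 5 km buffer turned into a bounding box

collection = (ee.ImageCollection('SKYSAT/GEN-A/PUBLIC/ORTHO/RGB')
              .filterDate(StartDate, EndDate)
              .filterBounds(Area))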
Alternatively, you can use the Earth Engine library itself to export the images.
nimg = collection.size().getInfo()
img_list = collection.toList(nimg)

for i in range(nimg):
    img = ee.Image(img_list.get(i))
    date = img.date().format('yyyy-MM-dd').getInfo()
    task = ee.batch.Export.image.toDrive(img.toFloat(),
                                         description=date,
                                         folder='Folder_Name',
                                         fileNamePrefix=date,
                                         region=Area,
                                         dimensions=(256, 256),
                                         # fileFormat='TFRecord',
                                         maxPixels=1e10)
    task.start()
There are two file formats: TFRecord and GeoTIFF. The default is GeoTIFF. You can also export images with specific dimensions, as shown above. If you want to download images using a scale factor instead, just remove the dimensions line and add a scale parameter in its place.
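For example, a minimal sketch of the same per-image export using scale instead of dimensions (the 10 m value is just an assumed resolution):

task = ee.batch.Export.image.toDrive(img.toFloat(),
                                     description=date,
                                     folder='Folder_Name',
                                     fileNamePrefix=date,
                                     region=Area,
                                     scale=10,  # metres per pixel (assumed value)
                                     maxPixels=1e10)
task.start()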
You can read this document for more information.
Insert your region of analysis (geom) by constructing a bounding box. Then use the code below (Earth Engine Code Editor JavaScript) to batch download the images.
// Example geometry. You could also insert points etc.
var geom = ee.Geometry.Polygon(
    [[[-116.8, 44.7],
      [-116.8, 42.6],
      [-110.6, 42.6],
      [-110.6, 44.7]]], null, false);

for (var i = 0; i < count; i++) {
  var img = ee.Image(collection.toList(1, i).get(0));
  var geom = img.geometry().getInfo();
  Export.image(img, img.get('system:index').getInfo(), {crs: crs, scale: scale, region: geom});
}
To best understand, please reproduce the code in a Jupyter notebook.
I have two files: img.jpg and img.txt. img.jpg is the image and img.txt contains the face landmarks. If you plot them both, it will look like this:
I rotated the image by 24.5 degrees, but how do I also rotate the coordinates?
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('img.jpg')
plt.imshow(img)
plt.show()
# In[130]:
landmarks = []
with open('img.txt') as f:
    for line in f:
        landmarks.extend([float(number) for number in line.split()])
landmarks.pop(0)  # Remove first line.

# Store all points inside the variable.
landmarkPoints = []  # Store the points in this
for j in range(int(len(landmarks))):
    if j % 2 == 1:
        continue
    landmarkPoints.append([int(landmarks[j]), int(landmarks[j+1])])
# In[ ]:
def rotate_bound(image, angle):
    # grab the dimensions of the image and then determine the
    # center
    (h, w) = image.shape[:2]
    (cX, cY) = (w // 2, h // 2)
    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    # adjust the rotation matrix to take into account translation
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    # perform the actual rotation and return the image
    return cv2.warpAffine(image, M, (nW, nH))
# In[131]:
imgcopy = img.copy()
for i in range(len(landmarkPoints)):
    cv2.circle(imgcopy, (landmarkPoints[i][0], landmarkPoints[i][1]), 5, (0, 255, 0), -1)
plt.imshow(imgcopy)
plt.show()
landmarkPoints
# In[146]:
print(img.shape)
print(rotatedImage.shape)
# In[153]:
face_angle = 24.5
rotatedImage = rotate_bound(img, -face_angle)
for i in range(len(landmarkPoints)):
    x, y = (landmarkPoints[i][0], landmarkPoints[i][1])
    cv2.circle(rotatedImage, (int(x), int(y)), 5, (0, 255, 0), -1)
plt.imshow(rotatedImage)
plt.show()
Please download img.jpg and img.txt for reproducing this: https://drive.google.com/file/d/1FhQUFvoKi3t7TrIepx2Es0mBGAfT755w/view?usp=sharing
I tried this function, but the y-axis is wrong:
def rotatePoint(angle, pt):
    a = np.radians(angle)
    cosa = np.cos(a)
    sina = np.sin(a)
    return pt[0]*cosa - pt[1]*sina, pt[0]*sina + pt[1]*cosa
Edit: The above function gives me this result:
Although it has been a long time since the question was asked, I have decided to answer it because it still has no accepted answer, even though it is a well-received question. I have added a lot of comments to make the implementation clear, so the code is hopefully self-explanatory. I am also describing the ImageAugmentation parameters for further clarification:
Here, original_data_dir is the path to the parent folder in which all of the image folders exist (yes, it can read from multiple image folders). This parameter is compulsory.
augmentation_data_dir is the folder where you want to save the outputs. The program automatically creates all sub-folders inside the output directory just as they appear in the input directory. It is optional; if omitted, the output directory is generated by mimicking the input directory and appending the string _augmentation to the input folder name.
keep_original is another optional parameter. In many cases you may want to keep the original image alongside the augmented images in the output folder; if so, set it to True (the default).
num_of_augmentations_per_image is the total number of augmented images to be generated from each image. Although you wanted only rotation, this program is designed to do other augmentations as well; change, add, or remove them as you need. I have also added a link to the documentation where you will find other augmentations that can be introduced in this code. It defaults to 3; if you keep the original image, 3 + 1 = 4 images will be generated in the output.
discard_overflow_and_underflow handles the case where, due to a spatial transformation, the augmented points (along with the image underneath) can go outside the image's resolution; you can optionally keep such cases, but they are discarded by default. It will also discard images with width or height values <= 0. Defaults to True.
put_landmarks controls whether the landmarks are drawn on the output images. Set it to True or False as required. It is False by default.
Hope you like it!
import logging
import imgaug as ia
import imgaug.augmenters as iaa
from imgaug.augmentables import Keypoint
from imgaug.augmentables import KeypointsOnImage
import os
import cv2
import re

SEED = 31  # To reproduce the result


class ImageAugmentation:
    def __init__(self, original_data_dir, augmentation_data_dir=None, keep_original=True,
                 num_of_augmentations_per_image=3, discard_overflow_and_underflow=True, put_landmarks=False):
        self.original_data_dir = original_data_dir
        if augmentation_data_dir is not None:
            self.augmentation_data_dir = augmentation_data_dir
        else:
            self.augmentation_data_dir = self.original_data_dir + '_augmentation'
        # Most of the time you will want to keep the original images along with the augmented images
        self.keep_original = keep_original
        # For example for self.num_of_augmentations_per_image = 3, from 1 image we will get 3 more images, totaling 4 images.
        self.num_of_augmentations_per_image = num_of_augmentations_per_image
        # if discard_overflow_and_underflow is True, the program will discard all augmentations where a landmark (and the image underneath) goes outside of the image resolution
        self.discard_overflow_and_underflow = discard_overflow_and_underflow
        # Optionally put landmarks on output images
        self.put_landmarks = put_landmarks
    def get_base_annotations(self):
        """This method reads all the annotation files (.txt) and makes a list
        of annotations to be used by other methods.
        """
        # base_annotations are the annotations which came with the original images.
        base_annotations = []

        def get_info(content):
            """This utility function reads the content of a single annotation
            file and returns a list of coordinate tuples for the points.
            As you have provided in your question, the annotation file looks like the following:
            106
            282.000000 292.000000
            270.000000 311.000000
            259.000000 330.000000
            .....
            .....
            Here, the first line is the number of points.
            The second and the following lines give their coordinates.
            """
            # As all the lines are newline separated, splitting them accordingly first
            lines = content.split('\n')
            # The first line is the total count of the points; we can easily get that just by counting
            # the points, so we are not taking this information.
            # From the second line to the end, each line is basically the coordinate values
            # of one point. So, going through each of the lines (from the second line)
            # and taking the coordinates as tuples.
            # We will end up with a list of tuples.
            points = []
            for line in lines[1:]:
                # Now each of the lines can be split into two numbers representing coordinates
                try:
                    # Keeping this inside a try block, as some of the lines might accidentally contain
                    # a single number, or there might be some extra newlines where there is no number.
                    col, row = line.split(' ')
                    points.append((float(col), float(row)))
                except:
                    pass
            # Returns: List of tuples
            return points

        for subdir, dirs, files in os.walk(self.original_data_dir):
            for file in files:
                ext = os.path.splitext(file)[-1].lower()
                # Looping through image files (instead of annotation files which are in '.txt' format)
                # because image files can have very different extensions and we have to preserve them.
                # Whereas, all the annotation files are assumed to be in '.txt' format.
                # The annotation file's (.txt) path will be generated from here.
                if ext not in ['.txt']:
                    input_image_file_dir = os.path.join(subdir, file)
                    # As the image filenames and associated annotation text filenames are the same,
                    # we take the common portion of them; it will be used to generate the annotation
                    # file's path.
                    # Also assuming there are no dots (.) in the input_annotation_file_dir except before the file extension.
                    image_annotation_base_dir = self.split_extension(input_image_file_dir)[0]
                    # Generating the annotation file's path
                    input_annotation_file_dir = image_annotation_base_dir + '.txt'
                    try:
                        with open(input_annotation_file_dir, 'r') as f:
                            content = f.read()
                            image_annotation_base_dir = os.path.splitext(input_annotation_file_dir)[0]
                            if os.path.isfile(input_image_file_dir):
                                image = cv2.imread(input_image_file_dir)
                                # Taking the image's shape serves a dual purpose.
                                # First of all, we will need the image's shape for sanity checking after augmentation.
                                # Also, if any of the input images is corrupt, the following line will throw an exception
                                # and we will be able to skip that corrupt image.
                                image_shape = image.shape  # height (y), width (x), channels (depth)
                                # Collecting the paths of the original annotation files and their contents.
                                # The same folder structure will be used to save the augmented data.
                                base_annotations.append({'image_file_dir': input_image_file_dir,
                                                         'annotation_data': get_info(content=content),
                                                         'image_resolution': image_shape})
                    except:
                        logging.error(f"Unable to read the file: {input_annotation_file_dir}...SKIPPED")
        return base_annotations
    def get_augmentation(self, base_annotation, seed):
        image_file_dir = base_annotation['image_file_dir']
        image_resolution = base_annotation['image_resolution']
        list_of_coordinates = base_annotation['annotation_data']
        ia.seed(seed)
        # We have to provide the landmarks in the specific format imgaug requires
        landmarks = []
        for coordinate in list_of_coordinates:
            # coordinate[0] is along the x axis (horizontal axis) and coordinate[1] is along the y axis (vertical axis); the (left, top) corner is (0, 0)
            landmarks.append(Keypoint(x=coordinate[0], y=coordinate[1]))
        landmarks_on_original_img = KeypointsOnImage(landmarks, shape=image_resolution)
        original_image = cv2.imread(image_file_dir)
        """
        Here the magic happens. If you only want rotation then remove the other transformations from here.
        You can even add various other types of augmentation, see the documentation here:
        # Documentation for image augmentation with keypoints
        https://imgaug.readthedocs.io/en/latest/source/examples_keypoints.html
        # Here you will find other possible transformations
        https://imgaug.readthedocs.io/en/latest/source/examples_basics.html
        """
        seq = iaa.Sequential([
            iaa.Affine(
                scale={"x": (0.8, 1.2), "y": (0.8, 1.2)},  # scale images to 80-120% of their size, individually per axis
                translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)},  # translate by -20 to +20 percent (per axis)
                rotate=(-90, 90),  # rotate by -90 to +90 degrees; for a specific angle (say 30 degrees) use rotate=(30)
                shear=(-16, 16),  # shear by -16 to +16 degrees
            )
        ], random_order=True)  # Apply augmentations in random order
        augmented_image, _landmarks_on_augmented_img = seq(image=original_image, keypoints=landmarks_on_original_img)
        # Now, for maintaining consistency, making the augmented landmarks follow the same data structure as base_annotation,
        # i.e. making it a list of tuples.
        landmarks_on_augmented_img = []
        for index in range(len(landmarks_on_original_img)):
            landmarks_on_augmented_img.append((_landmarks_on_augmented_img[index].x,
                                               _landmarks_on_augmented_img[index].y))
        return augmented_image, landmarks_on_augmented_img
    def split_extension(self, path):
        # Assuming there are no dots (.) except just before the extension
        # Returns [directory_of_file_without_extension, extension]
        return os.path.splitext(path)

    def sanity_check(self, landmarks_aug, image_resolution):
        # Returns False if any landmark is outside of the image resolution,
        # or if the resolution is faulty.
        for index in range(len(landmarks_aug)):
            if landmarks_aug[index][0] < 0 or landmarks_aug[index][1] < 0:
                return False
            if landmarks_aug[index][0] >= image_resolution[1] or landmarks_aug[index][1] >= image_resolution[0]:
                return False
        if image_resolution[0] <= 0:
            return False
        if image_resolution[1] <= 0:
            return False
        return True
    def serialize(self, serialization_data, image):
        """This method writes the annotation file and the corresponding image.
        """
        # Now it is time to actually write the image file and the annotation file!
        # We have to make sure the output folder exists;
        # "head" is the folder's directory here.
        image_file_dir = serialization_data['image_file_dir']
        annotation_file_dir = self.split_extension(image_file_dir)[0] + '.txt'
        point_coordinates = serialization_data['annotation_data']  # List of tuples
        total_points = len(point_coordinates)
        # Getting the corresponding output folder for the current image
        head, tail = os.path.split(image_file_dir)
        # Creating the folder if it doesn't exist
        if not os.path.isdir(head):
            os.makedirs(head)
        # Writing the annotation file
        with open(annotation_file_dir, 'w') as f:
            s = ""
            s += str(total_points)
            s += '\n'
            for point in point_coordinates:
                s += "{:.6f}".format(point[0]) + ' ' + "{:.6f}".format(point[1]) + '\n'
            f.write(s)
        if self.put_landmarks:
            # Optionally put landmarks in the output images.
            for index in range(total_points):
                cv2.circle(image, (int(point_coordinates[index][0]), int(point_coordinates[index][1])), 2, (255, 255, 0), 2)
        cv2.imwrite(image_file_dir, image)
    def augmentat_with_landmarks(self):
        base_annotations = self.get_base_annotations()
        for base_annotation in base_annotations:
            if self.keep_original == True:
                # As we are basically copying the same original data into the new directory, we replace the original image's directory with the new one using re.sub()
                base_data = {'image_file_dir': re.sub(self.original_data_dir, self.augmentation_data_dir, base_annotation['image_file_dir']),
                             'annotation_data': base_annotation['annotation_data']}
                self.serialize(serialization_data=base_data, image=cv2.imread(base_annotation['image_file_dir']))
            for index in range(self.num_of_augmentations_per_image):
                # Getting a new augmented image in each iteration from the same base image.
                # Seeding (SEED) for reproducing the same result across all future executions.
                # Also, the seed must be different for each iteration, otherwise the same-looking augmentation will be generated.
                image_aug, landmarks_aug = self.get_augmentation(base_annotation, seed=SEED + index)
                # For spatial transformations of some images, the landmarks can go outside of the image.
                # So, we have to discard those cases (optionally).
                if self.sanity_check(landmarks_aug, base_annotation['image_resolution']) or not self.discard_overflow_and_underflow:
                    # Getting the filename without extension to insert an index number in between, generating a new filename for the augmented image
                    filepath_without_ext, ext = self.split_extension(base_annotation['image_file_dir'])
                    # As we are writing the newly generated images to similar sub-folders (just under a different base directory),
                    # we replace original_data_dir with augmentation_data_dir.
                    # To do this we are using re.sub(what_to_replace, with_which_to_replace, from_where_to_replace)
                    filepath_for_aug_img_without_ext = re.sub(self.original_data_dir, self.augmentation_data_dir, filepath_without_ext)
                    new_filepath_wo_ext = filepath_for_aug_img_without_ext + '_' + str(index)
                    augmentation_data = {
                        'image_file_dir': new_filepath_wo_ext + ext,
                        'annotation_data': landmarks_aug
                    }
                    self.serialize(serialization_data=augmentation_data, image=image_aug)


# Make put_landmarks = False if you do not want landmarks to be shown in the output
# original_data_dir is the single parent folder directory inside of which all image folder(s) exist.
img_aug = ImageAugmentation(original_data_dir='parent/folder/directory/of/img/folder', put_landmarks=True)
img_aug.augmentat_with_landmarks()
The following is a snapshot of sample output from the code:
Please note that I have used the package imgaug. I suggest installing version 0.4.0, as I have found it to work. See the reason here and its accepted answer.
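For example, to pin that version:

pip install imgaug==0.4.0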
When you try things like this, it's very important to choose the proper coordinate system. In your case, you have to put the origin (0, 0) at the center of the image.
Once you apply the rotation to the coordinates with the origin at the center, the face points will be properly aligned on the new image.
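A minimal sketch of that idea, reusing the same rotation matrix that the question's rotate_bound builds, so the points get exactly the transform the image gets (the helper name rotate_points_bound is mine, and the usage line assumes the question's img, landmarkPoints, and face_angle variables):

import cv2
import numpy as np

def rotate_points_bound(points, image_shape, angle):
    # Build the same matrix rotate_bound uses: rotate about the image center,
    # then translate so the enlarged canvas contains the whole rotated image.
    (h, w) = image_shape[:2]
    (cX, cY) = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos, sin = np.abs(M[0, 0]), np.abs(M[0, 1])
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    # Apply the affine matrix to each (x, y) point.
    pts = np.hstack([np.array(points, dtype=np.float64), np.ones((len(points), 1))])
    return (M @ pts.T).T  # rotated (x, y) coordinates

# Usage with the question's variables (assumed to exist), matching rotate_bound(img, -face_angle):
# rotatedPoints = rotate_points_bound(landmarkPoints, img.shape, -face_angle)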
This post isn't a question but a solution to a problem I have been trying to solve for a while. Hopefully somebody else will find the code useful!
I wanted to export Sentinel-2 Satellite imagery (https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2) with a cloud masking filter applied from Google Earth Engine to my Google Drive using the Python API. However, not all images fully overlapped with the geometry I was interested in, and the cloud mask made parts of some images invisible. I therefore needed to create a mosaic of the images closest to the date I was interested in.
The solution which eventually worked is below:
import ee

ee.Initialize()

# This is the cloud masking function provided by GEE but adapted for use in Python.
def maskS2clouds(image):
    qa = image.select('QA60')
    # Bits 10 and 11 are clouds and cirrus, respectively.
    cloudBitMask = 1 << 10
    cirrusBitMask = 1 << 11
    # Both flags should be set to zero, indicating clear conditions.
    mask = qa.bitwiseAnd(cloudBitMask).eq(0)
    mask = mask.bitwiseAnd(cirrusBitMask).eq(0)
    return image.updateMask(mask).divide(10000)
# Define the geometry of the area for which you would like images.
geom = ee.Geometry.Polygon([[33.8777, -13.4055],
[33.8777, -13.3157],
[33.9701, -13.3157],
[33.9701, -13.4055]])
# Call collection of satellite images.
collection = (ee.ImageCollection("COPERNICUS/S2")
              # Select the Red, Green and Blue image bands, as well as the cloud masking layer.
              .select(['B4', 'B3', 'B2', 'QA60'])
              # Filter for images within a given date range.
              .filter(ee.Filter.date('2017-01-01', '2017-03-31'))
              # Filter for images that overlap with the assigned geometry.
              .filterBounds(geom)
              # Filter for images that have less than 20% cloud coverage.
              .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
              # Apply cloud mask.
              .map(maskS2clouds)
              )
# Sort images in the collection by index (which is equivalent to sorting by date),
# with the oldest images at the front of the collection.
# Convert collection into a single image mosaic where only images at the top of the collection are visible.
image = collection.sort('system:index', opt_ascending=False).mosaic()
# Assign visualization parameters to the image.
image = image.visualize(bands=['B4', 'B3', 'B2'],
                        min=[0.0, 0.0, 0.0],
                        max=[0.3, 0.3, 0.3])
# Assign export parameters.
task_config = {
    'region': geom.coordinates().getInfo(),
    'folder': 'Example_Folder_Name',
    'scale': 10,
    'crs': 'EPSG:4326',
    'description': 'Example_File_Name'
}
# Export Image
task = ee.batch.Export.image.toDrive(image, **task_config)
task.start()
After using the maskS2clouds function above, the images in my ImageCollection lose 'system:time_start'.
I changed the function to the following, and it seems to be working. We may need 'system:time_start' for mosaicking later:
def maskS2clouds(image):
    qa = image.select('QA60')
    # Bits 10 and 11 are clouds and cirrus, respectively.
    cloudBitMask = 1 << 10
    cirrusBitMask = 1 << 11
    # Both flags should be set to zero, indicating clear conditions.
    mask = qa.bitwiseAnd(cloudBitMask).eq(0)
    mask = mask.bitwiseAnd(cirrusBitMask).eq(0)
    helper = image.updateMask(mask).divide(10000)
    helper = ee.Image(helper.copyProperties(image, properties=["system:time_start"]))
    return helper
One more correction, so that cirrusBitMask is actually applied: the cirrus test needs to use the "qa" variable, not "mask":
def maskS2clouds(image):
    qa = image.select('QA60')
    # Bits 10 and 11 are clouds and cirrus, respectively.
    cloudBitMask = 1 << 10
    cirrusBitMask = 1 << 11
    # Both flags should be set to zero, indicating clear conditions.
    mask1 = qa.bitwiseAnd(cloudBitMask).eq(0)
    mask2 = qa.bitwiseAnd(cirrusBitMask).eq(0)
    helper = image.updateMask(mask1).updateMask(mask2).divide(10000)
    helper = ee.Image(helper.copyProperties(image, properties=["system:time_start"]))
    return helper
I have a lot of images (pydicom files) and I would like to divide each one in half. From 1 image I would like 2 images: the left part and the right part.
Input: 1000x1000
Output: 500x1000 (width x height).
Currently, I can only read a file.
ds = pydicom.read_file(image_fps[0]) # read dicom image from filepath
I would like to put the left halves in one folder and the right halves in a second folder.
(The original post included two screenshots: the current full image and the desired left/right split.)
I am using Mask-RCNN for an object localization problem, and I would like to crop 50% of the image size (pydicom file).
EDIT1:
import SimpleITK as sitk
filtered_image = sitk.GetImageFromArray(left_part)
sitk.WriteImage(filtered_image, '/home/wojtek/Mask/nnna.dcm', True)
I now have a DICOM file, but I can't display it:
this transfer syntax JPEG 2000 Image Compression (Lossless Only), can not be read because Pillow lacks the jpeg 2000 decoder plugin
Once you have executed pydicom.dcmread(), your pixel data is available at ds.pixel_array. You can just slice the data you want and save it with any suitable library. In this example I will be using matplotlib, as I also use that for verifying whether my slicing is correct. Adjust to your needs, obviously; one thing you need to do is generate the correct path/filenames for saving. Have fun!
(this script assumes the filepaths are available in a paths variable)
import pydicom
import matplotlib
# for testing if the slice is correct
from matplotlib import pyplot as plt

for path in paths:
    # read the dicom file
    ds = pydicom.dcmread(path)
    # find the shape of your pixel data
    shape = ds.pixel_array.shape
    # get the half of the x dimension. For the y dimension use shape[0]
    half_x = int(shape[1] / 2)
    # slice the halves
    # [first_axis, second_axis] so [:, :half_x] means slice all from first axis, slice 0 to half_x from second axis
    left_part = ds.pixel_array[:, :half_x]
    right_part = ds.pixel_array[:, half_x:]
    # to check whether the slices are correct, matplotlib can be convenient
    # plt.imshow(left_part); do not do this in the loop
    # save the files, see the documentation for matplotlib if you want a different format
    # bmp, png are surely supported
    # (raw strings so the backslashes are not treated as escape sequences)
    path_to_left_image = r'generate\the\path\and\filename\for\the\left\image.bmp'
    path_to_right_image = r'generate\the\path\and\filename\for\the\right\image.bmp'
    matplotlib.image.imsave(path_to_left_image, left_part)
    matplotlib.image.imsave(path_to_right_image, right_part)
If you want to save the DICOM files, keep in mind that they may not be valid DICOM if you do not update the appropriate data. For instance, the SOP Instance UID is technically not allowed to be the same as in the original DICOM file, or the same as any other SOP Instance UID for that matter. How important that is, is up to you.
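For example, a minimal sketch of giving the modified dataset a fresh SOP Instance UID with pydicom before saving (the filenames are placeholders; this is an illustration, not a complete recipe for DICOM validity):

import pydicom
from pydicom.uid import generate_uid

ds = pydicom.dcmread('input.dcm')  # placeholder path
# ... modify ds.PixelData, ds.Rows, ds.Columns, etc. ...

# Give the modified dataset its own SOP Instance UID (and keep the file meta in sync).
ds.SOPInstanceUID = generate_uid()
ds.file_meta.MediaStorageSOPInstanceUID = ds.SOPInstanceUID

ds.save_as('output.dcm')  # placeholder path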
With a script like the one below, you can define named slices and split any DICOM image file found in the supplied path into the appropriate slices.
import os
import pydicom
import numpy as np

def save_partials(parts, path_to_directory):
    """
    parts: list of tuples, each tuple specifying a name and a list of four slice offsets
    path_to_directory: path to directory containing dicom files
    any file with a .dcm extension will have its image data split into the specified slices and saved accordingly.
    original file will not be modified
    """
    dir_content = [os.path.join(path_to_directory, item) for item in os.listdir(path_to_directory)]
    files = [i for i in dir_content if os.path.isfile(i)]
    for file in files:
        root, extension = os.path.splitext(file)
        if extension.lower() != '.dcm':
            # not a .dcm file, continue with next iteration of loop
            continue
        for part in parts:
            ds = pydicom.read_file(file)
            if not isinstance(ds.pixel_array, np.ndarray):
                # no image data available
                continue
            part_name = part[0]
            p = part[1]  # slice list
            ds.PixelData = ds.pixel_array[p[0]:p[1], p[2]:p[3]].tobytes()
            ds.Rows = p[1] - p[0]
            ds.Columns = p[3] - p[2]
            ##
            ## Here you can modify any tags using ds.KeyWord
            ##
            new_file_name = "{r}-{pn}{ext}".format(r=root, pn=part_name, ext=extension)
            ds.save_as(new_file_name)
            print('saved {}'.format(new_file_name))

dir_path = '/home/wojtek/Mask'
parts = [('left', [0, 512, 0, 256]),
         ('right', [0, 512, 256, 512])]

save_partials(parts, dir_path)
I have a satellite GeoTIFF Image and a corresponding OSM file with only the highways. I want to convert the longitude latitude value in the OSM file to the pixels and want to highlight highway on the satellite image.
I have tried several methods that are explained on StackExchange, but I get negative and identical pixel values for every longitude/latitude pair. Could somebody explain what I am missing?
Here is the information about the image, gathered using the OTB application.
Here is the code that I am using.
from osgeo import gdal, osr
import numpy as np
import xml.etree.ElementTree as xml
src_filename = 'image.tif'
dst_filename = 'foo.tiff'
def readLongLat(path):
    lonlatList = []
    root = xml.parse(path).getroot()
    for i in root:
        if i.tag == "node":
            latlong = []
            lat = float(i.attrib["lat"])
            long = float(i.attrib["lon"])
            latlong.append(lat)
            latlong.append(long)
            lonlatList.append(latlong)
    return lonlatList
# Opens source dataset
src_ds = gdal.Open(src_filename)
format = "GTiff"
driver = gdal.GetDriverByName(format)
# Open destination dataset
dst_ds = driver.CreateCopy(dst_filename, src_ds, 0)
# Get raster projection
epsg = 4269 # http://spatialreference.org/ref/sr-org/lambert_conformal_conic_2sp/
srs = osr.SpatialReference()
srs.ImportFromEPSG(epsg)
# Make WGS84 lon lat coordinate system
world_sr = osr.SpatialReference()
world_sr.SetWellKnownGeogCS('WGS84')
transform = src_ds.GetGeoTransform()
gt = [transform[0],transform[1],0,transform[3],0,-transform[5]]
#Reading the osm file
lonlat = readLongLat("highways.osm")
# Transform lon lats into XY
coord_transform = osr.CoordinateTransformation(world_sr, srs)
newpoints = coord_transform.TransformPoints(lonlat) # list of XYZ tuples
# Make Inverse Geotransform (try:except due to gdal version differences)
try:
    success, inverse_gt = gdal.InvGeoTransform(gt)
except:
    inverse_gt = gdal.InvGeoTransform(gt)
# [Note 1] Set pixel values
marker_array_r = np.array([[255]], dtype=np.uint8)
marker_array_g = np.array([[0]], dtype=np.uint8)
marker_array_b = np.array([[0]], dtype=np.uint8)
for x, y, z in newpoints:
    pix_x = int(inverse_gt[0] + inverse_gt[1] * x + inverse_gt[2] * y)
    pix_y = int(inverse_gt[3] + inverse_gt[4] * x + inverse_gt[5] * y)
    dst_ds.GetRasterBand(1).WriteArray(marker_array_r, pix_x, pix_y)
    dst_ds.GetRasterBand(2).WriteArray(marker_array_g, pix_x, pix_y)
    dst_ds.GetRasterBand(3).WriteArray(marker_array_b, pix_x, pix_y)
# Close files
dst_ds = None
src_ds = None
Something I have tried recently is the xarray module. I think of xarray as a hybrid between pandas and numpy that allows you to store information as an array but access it using simple .sel requests. Docs here.
UPDATE: Seems as if rasterio and xarray are required to be installed for the below method to work. See link.
It is a much simpler way of translating a GeoTiff file to a user-friendly array. See my example below:
import xarray as xr
ds = xr.open_rasterio("/path/to/image.tif")
# Insert your lat/lon/band below to extract corresponding pixel value
ds.sel(band=2, lat=19.9, lon=39.5, method='nearest').values
>>> [10.3]
This does not answer your question directly, but may help you identify a different (and probably simpler) approach that I've recently switched to.
Note: obviously care needs to be taken to ensure that your lat/lon pairs are in the same coordinate system as the GeoTiff file, but I think you're handling that anyway.
I was able to do that using the library geoio.
import geoio
img = geoio.GeoImage(src_filename)
pix_x, pix_y = img.proj_to_raster(lon,lat)
In the original Caffe framework, there was an executable under caffe/build/tools called convert_imageset, which took a directory of JPEG images and a text file with labels for each image, and output an LMDB that could be fed to a Caffe model to train, test, etc.
What is the best way to convert raw JPEG images and labels to an LMDB that Caffe2 can ingest using the AddInput() function from this MNIST tutorial on the Caffe2 website?
According to my research, you cannot simply create an LMDB file using this tool and feed a Caffe2 model.
The tutorial script just downloads two LMDBs (mnist-train-nchw-lmdb and mnist-test-nchw-lmdb) and passes them to AddInput(), but gives no insight as to how the LMDBs were created.
There is a binary built from make_image_db.cc which does precisely what you are describing. It is located at caffe2/build/bin/make_image_db:
// This script converts an image dataset to a database.
//
// caffe2::FLAGS_input_folder is the root folder that holds all the images
//
// caffe2::FLAGS_list_file is the path to a file containing a list of files
// and their labels, as follows:
//
// subfolder1/file1.JPEG 7
// subfolder1/file2.JPEG 7
// subfolder2/file1.JPEG 8
// ...
As described in https://github.com/caffe2/caffe2/issues/1755 you can use the binary in the following way (also with fewer parameters):
caffe2/build/bin/make_image_db -color -db lmdb -input_folder ./some_input_folder
-list_file ./labels_file -num_threads 10 -output_db_name ./some_output_folder -raw -scale 256 -shuffle
A full Caffe2 example of how to create and read an lmdb database (for random images) can be found in the official GitHub repository and can be used as a skeleton to adapt to your own images: https://github.com/caffe2/caffe2/blob/master/caffe2/python/examples/lmdb_create_example.py. Since I have not used this method yet, I will simply copy the example. In order to create the database, one can use:
import argparse
import numpy as np
import lmdb
from caffe2.proto import caffe2_pb2
from caffe2.python import workspace, model_helper

def create_db(output_file):
    print(">>> Write database...")
    LMDB_MAP_SIZE = 1 << 40  # MODIFY
    env = lmdb.open(output_file, map_size=LMDB_MAP_SIZE)
    checksum = 0
    with env.begin(write=True) as txn:
        for j in range(0, 128):
            # MODIFY: add your own data reader / creator
            label = j % 10
            width = 64
            height = 32
            img_data = np.random.rand(3, width, height)
            # ...
            # Create TensorProtos
            tensor_protos = caffe2_pb2.TensorProtos()
            img_tensor = tensor_protos.protos.add()
            img_tensor.dims.extend(img_data.shape)
            img_tensor.data_type = 1
            flatten_img = img_data.reshape(np.prod(img_data.shape))
            img_tensor.float_data.extend(flatten_img)
            label_tensor = tensor_protos.protos.add()
            label_tensor.data_type = 2
            label_tensor.int32_data.append(label)
            txn.put(
                '{}'.format(j).encode('ascii'),
                tensor_protos.SerializeToString()
            )
            checksum += np.sum(img_data) * label
            if (j % 16 == 0):
                print("Inserted {} rows".format(j))
    print("Checksum/write: {}".format(int(checksum)))
    return checksum
The database can then be loaded with:
def read_db_with_caffe2(db_file, expected_checksum):
    print(">>> Read database...")
    model = model_helper.ModelHelper(name="lmdbtest")
    batch_size = 32
    data, label = model.TensorProtosDBInput(
        [], ["data", "label"], batch_size=batch_size,
        db=db_file, db_type="lmdb")
    checksum = 0
    workspace.RunNetOnce(model.param_init_net)
    workspace.CreateNet(model.net)
    for _ in range(0, 4):
        workspace.RunNet(model.net.Proto().name)
        img_datas = workspace.FetchBlob("data")
        labels = workspace.FetchBlob("label")
        for j in range(batch_size):
            checksum += np.sum(img_datas[j, :]) * labels[j]
    print("Checksum/read: {}".format(int(checksum)))
    assert np.abs(expected_checksum - checksum) < 0.1, \
        "Read/write checksums don't match"
Last but not least, there is also a tutorial on how to create a minidb database: https://github.com/caffe2/caffe2/blob/master/caffe2/python/tutorials/create_your_own_dataset.ipynb. For this, one could use the following function:
from caffe2.python import core, utils
from caffe2.proto import caffe2_pb2

def write_db(db_type, db_name, features, labels):
    db = core.C.create_db(db_type, db_name, core.C.Mode.write)
    transaction = db.new_transaction()
    for i in range(features.shape[0]):
        feature_and_label = caffe2_pb2.TensorProtos()
        feature_and_label.protos.extend([
            utils.NumpyArrayToCaffe2Tensor(features[i]),
            utils.NumpyArrayToCaffe2Tensor(labels[i])])
        transaction.put(
            'train_{:03d}'.format(i),
            feature_and_label.SerializeToString())
    # Close the transaction, and then close the db.
    del transaction
    del db
Features would be a tensor containing your images as numpy arrays. Labels are the corresponding true labels for the features. You would then simply call the function as
write_db("minidb", "train_images.minidb", train_features, train_labels)
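To get train_features and train_labels from raw JPEG files plus a labels file, here is a minimal sketch (the helper name, the "relative/path label" line format borrowed from the make_image_db example above, the 227x227 resize, and the use of cv2 are all assumptions; adapt them to your data):

import cv2
import numpy as np

def load_jpegs_with_labels(list_file, root_dir, size=(227, 227)):
    # list_file: text file with lines like "subfolder1/file1.JPEG 7"
    images, labels = [], []
    with open(list_file) as f:
        for line in f:
            rel_path, label = line.split()
            img = cv2.imread(root_dir + '/' + rel_path)   # BGR, HxWxC
            img = cv2.resize(img, size).astype(np.float32)
            img = img.transpose(2, 0, 1)                   # to CxHxW, as Caffe2 expects
            images.append(img)
            labels.append(int(label))
    return np.stack(images), np.array(labels, dtype=np.int32)

# then, as above:
train_features, train_labels = load_jpegs_with_labels('./labels_file', './some_input_folder')
write_db("minidb", "train_images.minidb", train_features, train_labels)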
Finally, you would load the images from the database by
net_proto = core.Net("example_reader")
dbreader = net_proto.CreateDB([], "dbreader", db="train_images.minidb", db_type="minidb")
net_proto.TensorProtosDBInput([dbreader], ["X", "Y"], batch_size=16)
To create the database in LMDB format:
create the train data folder
create a train.txt file containing filename and label (an example is shown after the command at the end)
create the validation data folder
create a val.txt file containing filename and label
Then edit this file:
gedit examples/imagenet/create_imagenet.sh
EXAMPLE= path to where the *.lmdb folder will be stored
DATA= path where val.txt and train.txt are present
TOOLS=build/tools
TRAIN_DATA_ROOT=test/make_caffe_data/train/ # path to train files
VAL_DATA_ROOT=test/make_caffe_data/val/ # path to test files
Set RESIZE=true to resize the images to 256x256. Leave it as false if the images have
already been resized using another tool.
RESIZE=true
./examples/imagenet/create_imagenet.sh
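For reference, train.txt and val.txt are plain text files with one image path (relative to the corresponding data root) and an integer class label per line, for example (hypothetical filenames):

cat/img001.jpg 0
cat/img002.jpg 0
dog/img003.jpg 1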