I am trying to create my own tool to use in ArcMap but keep running into a problem. I want to create a buffer (which I can do) and then clip the points that fall within the buffer. The problem I run into is that I cannot figure out how to use the buffer as the input feature for the clip section of my tool.
import arcpy
import os
from arcpy import env
env.workspace = "C:/LabData"
arcpy.env.overwriteOutput = True
In_lake = arcpy.GetParameterAsText(0)
Out_Buff = arcpy.GetParameterAsText(1)
Buffer_Distance = arcpy.GetParameterAstext(2)
in_cities = arcpy.GetParameterAsText(3)
cliped_cities = GetParameterAsText(4)
New_Table = arcpy.GetParameterAsText(5)
Join_Input = arcpy.GetParameteAsText(6)
# step 1 create a buffer around the lakes
arcpy.Buffer_analysis(In_Lake, Out_Buff, Buffer_Distance)
# Step 2 Clip all cities that fall within the buffer
arcpy.Clip_analysis( in_cities,out_Buff, clipped_cities)
# Step 3
arcpy.Statistics_analysis(clipped_cities, New_Table,
                          statistics_fields='Population SUM', case_field='CNTRY_NAME')
# Step 5
arcpy.AddField_management (New_Table, 'Country', 'TEXT')
Check carefully that your variable and function names match -- Python and ArcPy are case sensitive. There are also a few typos in the method names:
In_Lake = arcpy.GetParameterAsText(0)          ## was In_lake
Out_Buff = arcpy.GetParameterAsText(1)
Buffer_Distance = arcpy.GetParameterAsText(2)  ## was GetParameterAstext
in_cities = arcpy.GetParameterAsText(3)
clipped_cities = arcpy.GetParameterAsText(4)   ## was cliped_cities, and was missing the arcpy prefix
New_Table = arcpy.GetParameterAsText(5)
Join_Input = arcpy.GetParameterAsText(6)       ## was GetParameteAsText
# step 1 create a buffer around the lakes
arcpy.Buffer_analysis(In_Lake, Out_Buff, Buffer_Distance)
# Step 2 Clip all cities that fall within the buffer
arcpy.Clip_analysis(in_cities, Out_Buff, clipped_cities) ## was out_Buff
Unless you want to keep the lake buffer, it doesn't necessarily need to be an input parameter specified by the user. Consider instead using the in_memory workspace -- just be aware any data in it will be deleted once the tool execution is completed.
Out_Buff = r'in_memory\lakeBuffer'
A similar strategy can be used for any intermediate feature class or table that you don't really care about. However, it's sometimes useful to have those intermediate results around to verify that your tool is doing what you expect at every step.
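Putting that together, a minimal sketch of the buffer-and-clip portion of the tool with an in-memory buffer might look like this (the parameter order is an assumption, since removing the buffer parameter renumbers the others):
import arcpy
from arcpy import env

env.workspace = "C:/LabData"
arcpy.env.overwriteOutput = True

# Tool parameters -- the buffer output is no longer exposed to the user
In_Lake = arcpy.GetParameterAsText(0)
Buffer_Distance = arcpy.GetParameterAsText(1)
in_cities = arcpy.GetParameterAsText(2)
clipped_cities = arcpy.GetParameterAsText(3)

# Intermediate buffer kept in memory; it is discarded when the tool finishes
Out_Buff = r'in_memory\lakeBuffer'

arcpy.Buffer_analysis(In_Lake, Out_Buff, Buffer_Distance)
arcpy.Clip_analysis(in_cities, Out_Buff, clipped_cities)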
I'm a new Python user. I'm trying to use ObsPy to create XML for a seismic array. I downloaded the template found at https://docs.obspy.org/tutorial/code_snippets/stationxml_file_from_scratch.html.
import obspy
from obspy.core.inventory import Inventory, Network, Station, Channel, Site
from obspy.clients.nrl import NRL

# We'll first create all the various objects. These strongly follow the
# hierarchy of StationXML files.
inv = Inventory(
    # We'll add networks later.
    networks=[],
    # The source should be the id whoever create the file.
    source="ObsPy-Tutorial")

net = Network(
    # This is the network code according to the SEED standard.
    code="XX",
    # A list of stations. We'll add one later.
    stations=[],
    description="A test stations.",
    # Start-and end dates are optional.
    start_date=obspy.UTCDateTime(2016, 1, 2))

sta = Station(
    # This is the station code according to the SEED standard.
    code="ABC",
    latitude=1.0,
    longitude=2.0,
    elevation=345.0,
    creation_date=obspy.UTCDateTime(2016, 1, 2),
    site=Site(name="First station"),
    code="DEF",
    latitude=10.0,
    longitude=20.0,
    elevation=3450.0,
    creation_date=obspy.UTCDateTime(2016, 1, 3),
    site=Site(name="Second station"))

cha = Channel(
    # This is the channel code according to the SEED standard.
    code="HHZ",
    # This is the location code according to the SEED standard.
    location_code="",
    # Note that these coordinates can differ from the station coordinates.
    latitude=1.0,
    longitude=2.0,
    elevation=345.0,
    depth=10.0,
    azimuth=0.0,
    dip=-90.0,
    sample_rate=200)

# By default this accesses the NRL online. Offline copies of the NRL can
# also be used instead.
nrl = NRL()
# The contents of the NRL can be explored interactively in a Python prompt,
# see API documentation of NRL submodule:
# http://docs.obspy.org/packages/obspy.clients.nrl.html
# Here we assume that the end point of data logger and sensor are already
# known:
response = nrl.get_response(  # doctest: +SKIP
    sensor_keys=['Streckeisen', 'STS-1', '360 seconds'],
    datalogger_keys=['REF TEK', 'RT 130 & 130-SMA', '1', '200'])

# Now tie it all together.
cha.response = response
sta.channels.append(cha)
net.stations.append(sta)
inv.networks.append(net)

# And finally write it to a StationXML file. We also force a validation against
# the StationXML schema to ensure it produces a valid StationXML file.
#
# Note that it is also possible to serialize to any of the other inventory
# output formats ObsPy supports.
inv.write("station.xml", format="stationxml", validate=True)
I'm stuck on a silly question: how can I add another station in sta? Something like
code="DEF",
latitude=10.0,
longitude=20.0,
elevation=3450.0,
creation_date=obspy.UTCDateTime(2016, 1, 3),
site=Site(name="Second station"))
I'm using Spyder 5.3.3.
Thank you for your help!
You can make another Station object, for example sta2 = Station(code=...), and then add it to the inventory using net.stations.append(sta2):
import obspy
from obspy.core.inventory import Inventory, Network, Station, Channel, Site
from obspy.clients.nrl import NRL

# We'll first create all the various objects. These strongly follow the
# hierarchy of StationXML files.
inv = Inventory(
    # We'll add networks later.
    networks=[],
    # The source should be the id whoever create the file.
    source="ObsPy-Tutorial")

net = Network(
    # This is the network code according to the SEED standard.
    code="XX",
    # A list of stations. We'll add one later.
    stations=[],
    description="A test stations.",
    # Start-and end dates are optional.
    start_date=obspy.UTCDateTime(2016, 1, 2))

sta = Station(
    # This is the station code according to the SEED standard.
    code="ABC",
    latitude=1.0,
    longitude=2.0,
    elevation=345.0,
    creation_date=obspy.UTCDateTime(2016, 1, 2),
    site=Site(name="First station"))

# Second station
sta2 = Station(
    code="DEF",
    latitude=10.0,
    longitude=20.0,
    elevation=3450.0,
    creation_date=obspy.UTCDateTime(2016, 1, 3),
    site=Site(name="Second station"))

cha = Channel(
    # This is the channel code according to the SEED standard.
    code="HHZ",
    # This is the location code according to the SEED standard.
    location_code="",
    # Note that these coordinates can differ from the station coordinates.
    latitude=1.0,
    longitude=2.0,
    elevation=345.0,
    depth=10.0,
    azimuth=0.0,
    dip=-90.0,
    sample_rate=200)

# By default this accesses the NRL online. Offline copies of the NRL can
# also be used instead.
nrl = NRL()
# The contents of the NRL can be explored interactively in a Python prompt,
# see API documentation of NRL submodule:
# http://docs.obspy.org/packages/obspy.clients.nrl.html
# Here we assume that the end point of data logger and sensor are already
# known:
response = nrl.get_response(  # doctest: +SKIP
    sensor_keys=['Streckeisen', 'STS-1', '360 seconds'],
    datalogger_keys=['REF TEK', 'RT 130 & 130-SMA', '1', '200'])

# Now tie it all together.
cha.response = response
sta.channels.append(cha)
net.stations.append(sta)
# add second station to network
net.stations.append(sta2)
inv.networks.append(net)

# And finally write it to a StationXML file. We also force a validation against
# the StationXML schema to ensure it produces a valid StationXML file.
# Note that it is also possible to serialize to any of the other inventory
# output formats ObsPy supports.
inv.write("station.xml", format="stationxml", validate=True)
You can double check that the new station is there with print(inv) (I find XML files rather hard to read sometimes):
Inventory created at 2022-10-14T00:33:59.433461Z
    Created by: ObsPy 1.3.0
                https://www.obspy.org
    Sending institution: ObsPy-Tutorial
    Contains:
        Networks (1):
            XX
        Stations (2):
            XX.ABC (First station)
            XX.DEF (Second station)
        Channels (1):
            XX.ABC..HHZ
You can use a similar method to add channels to a station using sta.channels.append(new_channel). Cheers!
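For instance, a second channel could be attached to the first station like this (the channel code, orientation, and coordinates here are made up for illustration):
# Hypothetical second channel
cha2 = Channel(
    code="HHN",
    location_code="",
    latitude=1.0,
    longitude=2.0,
    elevation=345.0,
    depth=10.0,
    azimuth=0.0,
    dip=0.0,
    sample_rate=200)
cha2.response = response
sta.channels.append(cha2)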
import arcpy.sa
arcpy.env.workspace = r"C:\Users\nhaddad\Desktop\project_8"
arcpy.env.overwriteOutput = True
altitude_cursor = arcpy.da.SearchCursor("solar_points", "Altitude")
azimuth_cursor = arcpy.da.SearchCursor("solar_points", "Azimuth")
for i, j in zip(altitude_cursor, azimuth_cursor):
    output = arcpy.sa.Hillshade(r"C:\Users\nhaddad\Desktop\final_exam\worcester_dem", j[0], i[0], "SHADOWS", 0.348)
I can only create one output map, when I need the loop to iterate through the 10 rows of the table and make 10 maps.
You never save your hillshade or add it to a list (or other data structure) to use the results later. You can save the hillshade result by simply using output.save("/path/to/destination/file/name").
Something else: please use with when working with arcpy.da.SearchCursor to ensure that the cursor is closed once you no longer need it. Furthermore, you don't need two cursors -- a single cursor can read both fields.
import os
import arcpy, arcpy.sa

DEM = r"C:\Input\Hillshade\worcester_dem"
SOLAR_POINTS_FOLDER = r"C:\Input\SolarPoints"
OUTPUT_FOLDER = r"C:\Output"

with arcpy.EnvManager(workspace=SOLAR_POINTS_FOLDER):
    with arcpy.da.SearchCursor("solar_points", ["Azimuth", "Altitude"]) as cursor:
        for azimuth, altitude in cursor:
            output = arcpy.sa.Hillshade(DEM, azimuth, altitude, "SHADOWS", 0.348)
            # either save the file (or add it to a list for later use)
            output.save(os.path.join(OUTPUT_FOLDER, f"worcester_{azimuth}_{altitude}"))
Note: Above example is written from memory without being tested/executed.
I have a load of 3 hour MP3 files, and every ~15 minutes a distinct 1 second sound effect is played, which signals the beginning of a new chapter.
Is it possible to identify each time this sound effect is played, so I can note the time offsets?
The sound effect is similar every time, but because it's been encoded in a lossy file format, there will be a small amount of variation.
The time offsets will be stored in the ID3 Chapter Frame MetaData.
Example Source, where the sound effect plays twice.
ffmpeg -ss 0.9 -i source.mp3 -t 0.95 sample1.mp3 -acodec copy -y
Sample 1 (Spectrogram)
ffmpeg -ss 4.5 -i source.mp3 -t 0.95 sample2.mp3 -acodec copy -y
Sample 2 (Spectrogram)
I'm very new to audio processing, but my initial thought was to extract a sample of the 1 second sound effect, then use librosa in python to extract a floating point time series for both files, round the floating point numbers, and try to get a match.
import numpy
import librosa
print("Load files")
source_series, source_rate = librosa.load('source.mp3') # 3 hour file
sample_series, sample_rate = librosa.load('sample.mp3') # 1 second file
print("Round series")
source_series = numpy.around(source_series, decimals=5)
sample_series = numpy.around(sample_series, decimals=5)
print("Process series")
source_start = 0
sample_matching = 0
sample_length = len(sample_series)
for source_id, source_sample in enumerate(source_series):
    if source_sample == sample_series[sample_matching]:
        sample_matching += 1
        if sample_matching >= sample_length:
            print(float(source_start) / source_rate)
            sample_matching = 0
        elif sample_matching == 1:
            source_start = source_id
    else:
        sample_matching = 0
This does not work with the MP3 files above, but did with an MP4 version - where it was able to find the sample I extracted, but it was only that one sample (not all 12).
I should also note this script takes just over 1 minute to process the 3 hour file (which includes 237,426,624 samples). So I can imagine that some kind of averaging on every loop would cause this to take considerably longer.
Trying to directly match waveform samples in the time domain is not a good idea. The mp3 signal will preserve the perceptual properties, but it is quite likely the phases of the frequency components will be shifted, so the sample values will not match.
You could instead try matching the volume envelopes of your effect and your sample; the envelope is less likely to be affected by the mp3 process.
First, normalise your sample so the embedded effects are at the same level as your reference effect. Construct new waveforms from the effect and the sample by taking the average of the peak values over time frames that are just short enough to capture the relevant features (better still, use overlapping frames). Then use cross-correlation in the time domain.
If this does not work, then you could analyze each frame using an FFT, which gives you a feature vector for each frame. You then try to find matches of the sequence of features in your effect with the sample, similar to the suggestion from https://stackoverflow.com/users/1967571/jonnor. MFCC is used in speech recognition, but since you are not detecting speech, a plain FFT is probably OK.
I am assuming the effect plays by itself (no background noise) and that it is added to the recording electronically (as opposed to being recorded via a microphone). If this is not the case, the problem becomes more difficult.
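As a rough illustration of the envelope idea (not a tested recipe -- the file names, frame sizes, and the peak-based envelope are all assumptions):
import numpy as np
import librosa

FRAME = 1024   # assumption: short enough to capture the envelope features
HOP = 512      # assumption: overlapping frames

# Load the reference effect and the long recording at the same sample rate
effect, sr = librosa.load('effect.wav', sr=None)
audio, _ = librosa.load('source.mp3', sr=sr)

def peak_envelope(y):
    # Maximum absolute amplitude per overlapping frame
    frames = librosa.util.frame(y, frame_length=FRAME, hop_length=HOP)
    return np.abs(frames).max(axis=0)

# Normalise so the embedded effect and the reference are at the same level
env_effect = peak_envelope(effect)
env_audio = peak_envelope(audio)
env_effect /= env_effect.max()
env_audio /= env_audio.max()

# Cross-correlate in the time domain; peaks mark candidate occurrences
corr = np.correlate(env_audio, env_effect, mode='valid')
best = corr.argmax()
print(f'strongest envelope match at ~{best * HOP / sr:.2f} s')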
This is an Audio Event Detection problem. If the sound is always the same and there are no other sounds at the same time, it can probably be solved with a Template Matching approach -- at least if there are no other sounds with other meanings that sound similar.
The simplest kind of template matching is to compute the cross-correlation between your input signal and the template.
Cut out an example of the sound to detect (using Audacity). Take as much as possible, but avoid the start and end. Store this as a .wav file.
Load the .wav template using librosa.load().
Chop up the input file into a series of overlapping frames. The length should be the same as your template's. This can be done with librosa.util.frame.
Iterate over the frames, and compute the cross-correlation between frame and template using numpy.correlate.
High values of cross-correlation indicate a good match. A threshold can be applied to decide what is an event or not, and the frame number can be used to calculate the time of the event.
You should probably prepare some shorter test files which have both some examples of the sound to detect as well as other typical sounds.
If the volume of the recordings is inconsistent you'll want to normalize that before running detection.
If cross-correlation in the time-domain does not work, you can compute the melspectrogram or MFCC features and cross-correlate that. If this does not yield OK results either, a machine learning model can be trained using supervised learning, but this requires labeling a bunch of data as event/not-event.
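A minimal sketch of those steps, assuming the template is stored in template.wav and the long recording in input.mp3 (the half-template hop and the 0.5 threshold are assumptions to tune on a labelled test file):
import numpy as np
import librosa

template, sr = librosa.load('template.wav', sr=None)
audio, _ = librosa.load('input.mp3', sr=sr)

hop = len(template) // 2  # overlapping frames: half-template hop
frames = librosa.util.frame(audio, frame_length=len(template), hop_length=hop)

THRESHOLD = 0.5  # assumption: tune this on a short labelled test file
for n in range(frames.shape[1]):
    frame = frames[:, n]
    # cross-correlation at zero lag, normalised to [-1, 1]
    score = np.correlate(frame, template)[0] / (
        np.linalg.norm(frame) * np.linalg.norm(template) + 1e-12)
    if score > THRESHOLD:
        print(f'possible event at ~{n * hop / sr:.2f} s (score {score:.2f})')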
To follow up on the answers by #jonnor and #paul-john-leonard: they are both correct; by using frames (FFT) I was able to do Audio Event Detection.
I've written up the full source code at:
https://github.com/craigfrancis/audio-detect
Some notes though:
To create the templates, I used ffmpeg:
ffmpeg -ss 13.15 -i source.mp4 -t 0.8 -acodec copy -y templates/01.mp4;
I decided to use librosa.core.stft, but I needed to make my own implementation of this stft function for the 3 hour file I'm analysing, as it's far too big to keep in memory.
When using stft I tried using a hop_length of 64 at first, rather than the default (512), as I assumed that would give me more data to work with... the theory might be true, but 64 was far too detailed, and caused it to fail most of the time.
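(For what it's worth, newer librosa releases can stream a long file in fixed-size blocks, which avoids a custom stft implementation; a sketch, with block sizes as assumptions, and noting that mp3 support depends on your soundfile build:)
import librosa

sr = librosa.get_samplerate('source.mp3')

# Process the 3 hour file in blocks instead of loading it whole;
# center=False keeps frames aligned across block boundaries.
stream = librosa.stream('source.mp3',
                        block_length=256,  # frames per block (assumption)
                        frame_length=2048,
                        hop_length=512)
for y_block in stream:
    S_block = librosa.stft(y_block, n_fft=2048, hop_length=512, center=False)
    # ... compare S_block columns against the template frames here ...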
I still have no idea how to get cross-correlation between frame and template to work (via numpy.correlate)... instead I took the results per frame (the 1025 buckets, not 1024, which I believe relate to the Hz frequencies found) and did a very simple average-difference check, then ensured that average was above a certain value (my test case worked at 0.15; the main files I'm using this on required 0.55, presumably because the main files had been compressed quite a bit more):
hz_score = abs(source[0:1025,x] - template[2][0:1025,y])
hz_score = sum(hz_score)/float(len(hz_score))
When checking these scores, it's really useful to show them on a graph. I often used something like the following:
import matplotlib.pyplot as plt

plt.figure(figsize=(30, 5))
plt.axhline(y=hz_match_required_start, color='y')

while x < source_length:
    # ... same per-frame scoring as in the main loop ...
    debug.append(hz_score)
    if x == mark_frame:
        plt.axvline(x=len(debug), ymin=0.1, ymax=1, color='r')

plt.plot(debug)
plt.show()
When you create the template, you need to trim off any leading silence (to avoid bad matching) and an extra ~5 frames (it seems that the compression / re-encoding process alters these)... likewise, remove the last 2 frames (I think the frames include a bit of data from their surroundings, where the last one in particular can be a bit off).
When you start finding a match, you might find it's OK for the first few frames and then it fails... you will probably need to try again a frame or two later. I found it easier to have a process that supported multiple templates (slight variations on the sound): it would check their first testable (e.g. 6th) frame, and if that matched, put them in a list of potential matches; then, as it progressed on to the next frames of the source, it would compare them to the next frames of each template, until all frames in the template had been matched (or failed). A sketch of that bookkeeping follows below.
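A simplified sketch of that multi-template bookkeeping (the data shapes and the 0.55 threshold are assumptions based on the notes above; the random dummy arrays only exist to make it runnable, so any matches they print are meaningless):
import numpy as np

# Dummy stand-ins for the real data: one row per STFT frame, 1025 bins each
source_data = np.random.rand(1000, 1025)
templates = [np.random.rand(20, 1025) for _ in range(3)]  # slight variations of the sound

def frames_match(source_frame, template_frame, max_avg_diff=0.55):
    # The simple average-difference score described above
    return np.abs(source_frame - template_frame).mean() < max_avg_diff

FIRST_TESTABLE = 5  # skip the first few template frames (see trimming notes)

candidates = []  # each entry: [template, next_template_frame, start_frame]
for x in range(len(source_data)):
    # advance candidates that are mid-match; drop any that stop matching
    surviving = []
    for template, pos, start in candidates:
        if pos >= len(template):
            print('Found a full match starting at frame', start)
        elif frames_match(source_data[x], template[pos]):
            surviving.append([template, pos + 1, start])
    candidates = surviving
    # start a new candidate wherever a template's first testable frame matches
    for template in templates:
        if frames_match(source_data[x], template[FIRST_TESTABLE]):
            candidates.append([template, FIRST_TESTABLE + 1, x])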
This might not be an answer; it's just where I got to before I started researching the answers by #jonnor and #paul-john-leonard.
I was looking at the Spectrograms you can get by using librosa stft and amplitude_to_db, and thinking that if I took the data that goes into the graphs, with a bit of rounding, I could potentially find the one sound effect being played:
https://librosa.github.io/librosa/generated/librosa.display.specshow.html
The code I've written below kind of works, although it:
Returns quite a few false positives, which might be fixed by tweaking the parameters of what is considered a match.
Needs the librosa functions replaced with something that can parse, round, and do the match checks in one pass, as a 3 hour audio file causes Python to run out of memory on a computer with 16GB of RAM after ~30 minutes, before it even gets to the rounding bit.
import sys
import numpy
import librosa

#--------------------------------------------------

if len(sys.argv) == 3:
    source_path = sys.argv[1]
    sample_path = sys.argv[2]
else:
    print('Missing source and sample files as arguments')
    sys.exit()

#--------------------------------------------------

print('Load files')

source_series, source_rate = librosa.load(source_path)  # The 3 hour file
sample_series, sample_rate = librosa.load(sample_path)  # The 1 second file

source_time_total = float(len(source_series) / source_rate)

#--------------------------------------------------

print('Parse Data')

source_data_raw = librosa.amplitude_to_db(abs(librosa.stft(source_series, hop_length=64)))
sample_data_raw = librosa.amplitude_to_db(abs(librosa.stft(sample_series, hop_length=64)))

sample_height = sample_data_raw.shape[0]

#--------------------------------------------------

print('Round Data')  # Also switches X and Y indexes, so X becomes time.

def round_data(raw, height):
    length = raw.shape[1]
    data = []
    range_length = range(1, (length - 1))
    range_height = range(1, (height - 1))
    for x in range_length:
        x_data = []
        for y in range_height:
            # neighbours = []
            # for a in [(x - 1), x, (x + 1)]:
            #     for b in [(y - 1), y, (y + 1)]:
            #         neighbours.append(raw[b][a])
            #
            # neighbours = (sum(neighbours) / len(neighbours))
            #
            # x_data.append(round(((raw[y][x] + raw[y][x] + neighbours) / 3), 2))
            x_data.append(round(raw[y][x], 2))
        data.append(x_data)
    return data

source_data = round_data(source_data_raw, sample_height)
sample_data = round_data(sample_data_raw, sample_height)

#--------------------------------------------------

sample_data = sample_data[50:268]  # Temp: Crop the sample_data (318 to 218)

#--------------------------------------------------

source_length = len(source_data)
sample_length = len(sample_data)
sample_height -= 2

source_timing = float(source_time_total / source_length)

#--------------------------------------------------

print('Process series')

hz_diff_match = 18  # For every comparison, how much of a difference is still considered a match - With the Source, using Sample 2, the maximum diff was 66.06, with an average of ~9.9

hz_match_required_switch = 30  # After matching "start" for X, drop to the lower "end" requirement
hz_match_required_start = 850  # Out of a maximum match value of 1023
hz_match_required_end = 650
hz_match_required = hz_match_required_start

source_start = 0
sample_matched = 0

x = 0
while x < source_length:
    hz_matched = 0
    for y in range(0, sample_height):
        diff = source_data[x][y] - sample_data[sample_matched][y]
        if diff < 0:
            diff = 0 - diff
        if diff < hz_diff_match:
            hz_matched += 1
    # print('  {} Matches - {} # {}'.format(sample_matched, hz_matched, (x * source_timing)))
    if hz_matched >= hz_match_required:
        sample_matched += 1
        if sample_matched >= sample_length:
            print('  Found # {}'.format(source_start * source_timing))
            sample_matched = 0  # Prep for next match
            hz_match_required = hz_match_required_start
        elif sample_matched == 1:  # First match, record where we started
            source_start = x
        if sample_matched > hz_match_required_switch:
            hz_match_required = hz_match_required_end  # Go to a weaker match requirement
    elif sample_matched > 0:
        # print('  Reset {} / {} # {}'.format(sample_matched, hz_matched, (source_start * source_timing)))
        x = source_start  # Matched something, so try again with x+1
        sample_matched = 0  # Prep for next match
        hz_match_required = hz_match_required_start
    x += 1

#--------------------------------------------------
I am running a model evaluation protocol for Modeller. It evaluates every model and writes the result to a separate file. However, I have to run it for every model, and I want all the evaluations written to a single file.
This is the original code:
from modeller import *
from modeller.scripts import complete_pdb
log.verbose() # request verbose output
env = environ()
env.libs.topology.read(file='$(LIB)/top_heav.lib') # read topology
env.libs.parameters.read(file='$(LIB)/par.lib') # read parameters
# read model file
mdl = complete_pdb(env, 'TvLDH.B99990001.pdb')
# Assess all atoms with DOPE:
s = selection(mdl)
s.assess_dope(output='ENERGY_PROFILE NO_REPORT', file='TvLDH.profile',
normalize_profile=True, smoothing_window=15)
I added a loop to evaluate every model in a single run; however, I am creating several files (one for each model), and what I want is to print all evaluations to a single file.
from modeller import *
from modeller.scripts import complete_pdb

log.verbose()    # request verbose output
env = environ()
env.libs.topology.read(file='$(LIB)/top_heav.lib')   # read topology
env.libs.parameters.read(file='$(LIB)/par.lib')      # read parameters

# My loop starts here
for i in range(1, 1001):
    number = str(i)
    if i < 10:
        name = '000' + number
    else:
        if i < 100:
            name = '00' + number
        else:
            if i < 1000:
                name = '0' + number
            else:
                name = '1000'
    # read model file
    mdl = complete_pdb(env, 'TcP5CDH.B9999' + name + '.pdb')
    # Assess all atoms with DOPE: this is the assessment that I want to print to the same file
    s = selection(mdl)
    savename = 'TcP5CDH.B9999' + name + '.profile'
    s.assess_dope(output='ENERGY_PROFILE NO_REPORT',
                  file=savename,
                  normalize_profile=True, smoothing_window=15)
As I am new to programming, any help will be very helpful!
Welcome :-) Looks like you're very close. Let's introduce you to using a Python function and the .format() string method.
Your original has a comment line # read model file, which looks like it could be a function, so let's try that. It could look something like this.
from modeller import *
from modeller.scripts import complete_pdb

log.verbose()    # request verbose output

# I'm assuming this can be done just once
# and re-used for all your model files...
# (if not, the env stuff should go inside the
# read_model_file() function.)
env = environ()
env.libs.topology.read(file='$(LIB)/top_heav.lib')   # read topology
env.libs.parameters.read(file='$(LIB)/par.lib')      # read parameters

def read_model_file(file_name):
    print('--- read_model_file(file_name=' + file_name + ') ---')
    mdl = complete_pdb(env, file_name)
    # Assess all atoms with DOPE:
    s = selection(mdl)
    output_file = file_name + '.profile'
    s.assess_dope(
        output='ENERGY_PROFILE NO_REPORT',
        file=output_file,
        normalize_profile=True,
        smoothing_window=15)

for i in range(1, 1001):
    file_name = 'TcP5CDH.B9999{:04d}.pdb'.format(i)
    read_model_file(file_name)
Using .format() we can get rid of the multiple if-statement checks for 10? 100? 1000?
Basically, .format() replaces the {} curly braces with the argument(s).
It can be pretty complex, but you don't need to digest all of it.
Example:
'Hello {}!'.format('world') yields Hello world!. The {:04d} part uses a format specifier; basically it says "please make a 4-character-wide digit substring and zero-fill it", so you should get '0001', ..., '0999', '1000'.
Just {:4d} (no leading zero) would give you space-padded results (e.g. '   1', ..., ' 999', '1000').
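A quick way to see the zero-fill behaviour (illustrative values):
for i in (1, 42, 999, 1000):
    print('TcP5CDH.B9999{:04d}.pdb'.format(i))
# TcP5CDH.B99990001.pdb
# TcP5CDH.B99990042.pdb
# TcP5CDH.B99990999.pdb
# TcP5CDH.B99991000.pdb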
Here's a little more on the zero-fill: Display number with leading zeros
I am new to Google Earth Engine and was trying to understand how to use its Python API. I can create an image collection, but apparently the getDownloadUrl() method operates only on individual images. So I am trying to understand how to iterate over and download all of the images in the collection.
Here is my basic code. I broke it out in great detail for some other work I am doing.
import ee
ee.Initialize()
col = ee.ImageCollection('LANDSAT/LC08/C01/T1')
col.filterDate('1/1/2015', '4/30/2015')
pt = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019])
buff = pt.buffer(300)
region = ee.Feature.bounds(buff)
col.filterBounds(region)
I pulled the Landsat collection and filtered by date and a buffer geometry, so I should have something like 7-8 images in the collection (with all bands).
However, I could not seem to get iteration to work over the collection.
For example:
for i in col:
    print(i)
The error indicates TypeError: 'ImageCollection' object is not iterable
So if the collection is not iterable, how can I access the individual images?
Once I have an image, I should be able to use the usual
path = col[i].getDownloadUrl({
    'scale': 30,
    'crs': 'EPSG:4326',
    'region': region
})
It's a good idea to use ee.batch.Export for this. Also, it's good practice to avoid mixing client and server functions (reference). For that reason, a for-loop can be used, since Export is a client function. Here's a simple example to get you started:
import ee
ee.Initialize()

rectangle = ee.Geometry.Rectangle([-1, -1, 1, 1])
sillyCollection = ee.ImageCollection([ee.Image(1), ee.Image(2), ee.Image(3)])

# This is OK for small collections
collectionList = sillyCollection.toList(sillyCollection.size())
collectionSize = collectionList.size().getInfo()
for i in range(collectionSize):  # was xrange; range works in Python 3
    ee.batch.Export.image.toDrive(
        image=ee.Image(collectionList.get(i)).clip(rectangle),
        fileNamePrefix='foo' + str(i + 1),
        dimensions='128x128').start()
Note that converting a collection to a list in this manner is also dangerous for large collections (reference). However, this is probably the most scalable method if you really need to download.
Here is my solution:
import ee
ee.Initialize()

pt = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019])
region = pt.buffer(10)
col = ee.ImageCollection('LANDSAT/LC08/C01/T1')\
    .filterDate('2015-01-01', '2015-04-30')\
    .filterBounds(region)

bands = ['B4', 'B5']  # Change it!

def accumulate(image, img):
    name_image = image.get('system:index')
    image = image.select([0], [name_image])
    cumm = ee.Image(img).addBands(image)
    return cumm

for band in bands:
    col_band = col.map(lambda img: img.select(band)\
        .set('system:time_start', img.get('system:time_start'))\
        .set('system:index', img.get('system:index')))
    # ImageCollection to List
    col_list = col_band.toList(col_band.size())
    # Define the initial value for iterate.
    base = ee.Image(col_list.get(0))
    base_name = base.get('system:index')
    base = base.select([0], [base_name])
    # Eliminate the image 'base'.
    new_col = ee.ImageCollection(col_list.splice(0, 1))
    img_cummulative = ee.Image(new_col.iterate(accumulate, base))
    task = ee.batch.Export.image.toDrive(
        image=img_cummulative.clip(region),
        folder='landsat',
        fileNamePrefix=band,
        scale=30).start()
    print('Export Image ' + band + ' was submitted, please wait ...')

img_cummulative.bandNames().getInfo()
A reproducible example can be found here: https://colab.research.google.com/drive/1Nv8-l20l82nIQ946WR1iOkr-4b_QhISu
You could possibly use ee.ImageCollection.iterate() with a function that gets the image and adds it to a list.
import ee

def accumulate_images(image, images):
    images.append(image)
    return images

for img in col.iterate(accumulate_images, []):
    url = img.getDownloadURL(dict(scale=30, crs='EPSG:4326', region=region))
Unfortunately I am not able to test this code as I do not have access to the API, but it might help you arrive at a solution.
I had a similar problem and was not able to solve it with the presented solutions. So I have put together some sample code for this purpose. It iterates over an image collection on the client side, so it is not affected by the (server-side only) limitations of .map() or .iterate().
It is possible to download the code and see its explanation here
It basically transforms the ImageCollection into a list (ic.toList()). Then it performs a standard loop, and for each individual image it is possible to convert it back with ee.Image(list.get(i)) and process the images one by one, covering all images in the collection.
In your particular case, to download each image, the function to be called within the loop could be getDownloadURL() or getThumbURL():
var url = imgNew.getDownloadURL({
    region: geometry,
});
var thumbURL = imgNew.getThumbURL({region: geometry, dimensions: 512, format: 'png'});
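For a Python version of that client-side loop, a sketch might look like this (the collection, filters, and scale are assumptions carried over from the question):
import ee
ee.Initialize()

region = ee.Geometry.Point([-2.40986111110000012, 26.76033333330000019]).buffer(300)
col = ee.ImageCollection('LANDSAT/LC08/C01/T1') \
    .filterDate('2015-01-01', '2015-04-30') \
    .filterBounds(region)

# Client-side loop: convert the collection to a list, then pull images one by one
col_list = col.toList(col.size())
n = col_list.size().getInfo()
for i in range(n):
    img = ee.Image(col_list.get(i))
    url = img.getDownloadURL({
        'scale': 30,
        'crs': 'EPSG:4326',
        'region': region,
    })
    print(url)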