How to download a land cover image using geemap? (Python)

I am trying to download an image from Google Earth Engine using the geemap Python library, from the Dynamic World land cover dataset, for a particular lat/long.
However, I'm struggling with the geemap.download_ee_image() function, as the land cover data is not stored in a single layer or as an ee.Image object.
I have been following this guide, but the input data don't seem to match up.
Any ideas what I could do?
Here is the code I'm trying to modify:
import ee
import geemap

Map = geemap.Map()
region = ee.Geometry.BBox(-89.7088, 42.9006, -89.0647, 43.2167)
start_date = '2021-01-01'
end_date = '2022-01-01'
dw_class = geemap.dynamic_world(region, start_date, end_date, return_type='class')
dw = geemap.dynamic_world(region, start_date, end_date, return_type='hillshade')
dw_vis = {
    "min": 0,
    "max": 8,
    "palette": [
        "#419BDF",
        "#397D49",
        "#88B053",
        "#7A87C6",
        "#E49635",
        "#DFC35A",
        "#C4281B",
        "#A59B8F",
        "#B39FE1",
    ],
}
Map.addLayer(dw_class, dw_vis, 'DW Land Cover', False)
Map.addLayer(dw, {}, 'DW Land Cover Hillshade')
Map.add_legend(title="Dynamic World Land Cover", builtin_legend='Dynamic_World')
Map.centerObject(region)
Map
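For what it's worth, geemap.dynamic_world() should already return a single composited ee.Image for return_type='class', so it can usually be passed straight to geemap.download_ee_image(). Here is a minimal sketch; the output filename, the 10 m scale and EPSG:4326 are my own choices, not from the original post:

import ee
import geemap

ee.Initialize()

region = ee.Geometry.BBox(-89.7088, 42.9006, -89.0647, 43.2167)
start_date = '2021-01-01'
end_date = '2022-01-01'

# 'class' returns a single-band composite of the label values 0-8
dw_class = geemap.dynamic_world(region, start_date, end_date, return_type='class')
print(isinstance(dw_class, ee.Image))  # sanity check that this really is an ee.Image

# Download the composite clipped to the region. 10 m is Dynamic World's native
# resolution; the filename and CRS are arbitrary choices for this sketch.
geemap.download_ee_image(
    dw_class,
    filename='dynamic_world_class.tif',
    region=region,
    scale=10,
    crs='EPSG:4326',
)

If the isinstance check fails, printing type(dw_class) should show what the function actually returned and whether an extra composite/reduce step is needed first.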

Related

Python: Same code plotting different charts during different runs

I am running a mini project for remotely sensing the water level in an underground tank.
I am collecting data with an ESP8266 (WiFi) and uploading it to a Supabase table, then reading the table with Python and plotting the data with Streamlit.
For some reason, on some runs the chart shows the correct local IST timestamps, while on other runs it plots UTC timestamps, seemingly at random. I am unable to troubleshoot the problem.
Any help would be appreciated.
Below is my code (ignore the redundancies; I am still learning to code and will progressively iron it out over time):
from supabase import create_client
import pandas as pd
import streamlit as st
import plotly.express as px
from datetime import datetime, timedelta
API_URL = [redacted]
API_KEY = [redacted]
supabase = create_client(API_URL, API_KEY)
supabaseList = supabase.table('Water Level').select('*').execute().data
# time range variables to use as the x-axis range
today = datetime.now()
today = today + timedelta(minutes=30)
present_date = today.strftime("%Y-%m-%d %X")
hrs48 = today - timedelta(days=2)
back_date = hrs48.strftime("%Y-%m-%d %X")
rows = []
for row in supabaseList:
    row["created_at"] = row["created_at"].split(".")[0]
    row["time"] = row["created_at"].split("T")[1]
    row["date"] = row["created_at"].split("T")[0]
    row["DateTime"] = row["created_at"]
    rows.append(row)
# DataFrame.append() was removed in pandas 2.0, so build the frame in one call
df = pd.DataFrame(rows)
original_title = '<h1 style="font-family:Helvetica; color:Black; font-size: 45px; text-align: center">[Redacted]</h1>'
st.markdown(original_title, unsafe_allow_html=True)
st.text("")
custom_range = [back_date, present_date]
st.write(custom_range)
fig = px.area(df, x="DateTime", y="water_level", title='', markers=False)
fig.update_layout(
    title={
        'text': "Water level in %",
        'y': 0.9,
        'x': 0.5,
        'xanchor': 'center',
        'yanchor': 'top'})
fig.update_layout(yaxis_range=[0, 120])
fig.update_layout(xaxis_range=custom_range)
# Add horizontal line for pump trigger level
fig.add_hline(y=80, line_width=3, line_color="black",
              annotation_text="Pump Start Level",
              annotation_position="top left",
              annotation_font_size=15,
              annotation_font_color="black"
              )
st.plotly_chart(fig, use_container_width=True)
I am expecting it to always plot IST timestamps, but it prints UTC on seemingly random runs.
It seems the issue was with the database. I created a new project with a new table and it runs flawlessly now. I still couldn't figure out what caused Supabase to send different timestamps on different runs for the same data, but creating a completely new table and relinking it to the other system solved it.
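Regardless of what fixed it on the database side, the chart can be made deterministic by parsing created_at as timezone-aware UTC and converting it to IST explicitly before plotting, instead of relying on whatever offset the API happens to return. A minimal sketch; the example rows are hypothetical, and only the created_at/DateTime column names come from the code above:

import pandas as pd

# Hypothetical rows shaped like the Supabase 'created_at' strings above
rows = [
    {"created_at": "2024-05-01T06:30:00", "water_level": 55},
    {"created_at": "2024-05-01T06:40:00+00:00", "water_level": 57},
]
df = pd.DataFrame(rows)

# Parse as timezone-aware UTC; utc=True treats naive strings as UTC, so mixed
# formats from the API all end up on one consistent timeline
df["DateTime"] = pd.to_datetime(df["created_at"], utc=True, errors="coerce")

# Convert explicitly to IST so the plot never depends on what the backend sent
df["DateTime"] = df["DateTime"].dt.tz_convert("Asia/Kolkata")

print(df[["created_at", "DateTime"]])

With this in place, the x-axis values are always IST no matter which offset the stored strings carry.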

How to annotate every frame with GCP Video Intelligence pose detection

Google Cloud Platform provides the Video Intelligence API to detect person landmarks in a video. The sample code below is from the GCP documentation.
I'm struggling to find a way to force the framework to annotate every single frame of the video, as this call only returns annotations at 0.1-second intervals. The only hack I could think of is manually tripling the length of the video (it is 30 fps) to get an annotation for every frame, but I really don't like the idea of doing that.
Any idea on how to do this? Thanks.
Sample code:
import io

from google.cloud import videointelligence_v1 as videointelligence


def detect_person(local_file_path="path/to/your/video-file.mp4"):
    """Detects people in a video from a local file."""
    client = videointelligence.VideoIntelligenceServiceClient()

    with io.open(local_file_path, "rb") as f:
        input_content = f.read()

    # Configure the request
    config = videointelligence.types.PersonDetectionConfig(
        include_bounding_boxes=True,
        include_attributes=True,
        include_pose_landmarks=True,
    )
    context = videointelligence.types.VideoContext(person_detection_config=config)

    # Start the asynchronous request
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.PERSON_DETECTION],
            "input_content": input_content,
            "video_context": context,
        }
    )

    print("\nProcessing video for person detection annotations.")
    result = operation.result(timeout=300)

    print("\nFinished processing.\n")

    # Retrieve the first result, because a single video was processed.
    annotation_result = result.annotation_results[0]

    for annotation in annotation_result.person_detection_annotations:
        print("Person detected:")
        for track in annotation.tracks:
            print(
                "Segment: {}s to {}s".format(
                    track.segment.start_time_offset.seconds
                    + track.segment.start_time_offset.microseconds / 1e6,
                    track.segment.end_time_offset.seconds
                    + track.segment.end_time_offset.microseconds / 1e6,
                )
            )

            # Each segment includes timestamped objects that include
            # characteristics, e.g. clothes, posture of the person detected.
            # Grab the first timestamped object
            timestamped_object = track.timestamped_objects[0]
            box = timestamped_object.normalized_bounding_box
            print("Bounding box:")
            print("\tleft : {}".format(box.left))
            print("\ttop : {}".format(box.top))
            print("\tright : {}".format(box.right))
            print("\tbottom: {}".format(box.bottom))

            # Attributes include unique pieces of clothing,
            # poses, or hair color.
            print("Attributes:")
            for attribute in timestamped_object.attributes:
                print(
                    "\t{}:{} {}".format(
                        attribute.name, attribute.value, attribute.confidence
                    )
                )

            # Landmarks in person detection include body parts such as
            # left_shoulder, right_ear, and right_ankle
            print("Landmarks:")
            for landmark in timestamped_object.landmarks:
                print(
                    "\t{}: {} (x={}, y={})".format(
                        landmark.name,
                        landmark.confidence,
                        landmark.point.x,  # Normalized vertex
                        landmark.point.y,  # Normalized vertex
                    )
                )
Attributes of the PersonDetectionConfig
"""
Attributes:
include_bounding_boxes (bool):
Whether bounding boxes are included in the
person detection annotation output.
include_pose_landmarks (bool):
Whether to enable pose landmarks detection. Ignored if
'include_bounding_boxes' is set to false.
include_attributes (bool):
Whether to enable person attributes detection, such as cloth
color (black, blue, etc), type (coat, dress, etc), pattern
(plain, floral, etc), hair, etc. Ignored if
'include_bounding_boxes' is set to false.
"""
Sample output
Landmarks: 2.5025s
nose: 0.7880418300628662 (x=0.10155333578586578, y=0.29470884799957275)
left_eye: 0.8498712182044983 (x=0.10444415360689163, y=0.2844345271587372)
right_eye: 0.05180135369300842 (x=0.1029987558722496, y=0.28700312972068787)
left_ear: 0.8830078840255737 (x=0.11745283752679825, y=0.28700312972068787)
right_ear: 0.03634342923760414 (x=0.12757070362567902, y=0.28957170248031616)
left_shoulder: 0.8504171371459961 (x=0.1145620197057724, y=0.34094318747520447)
right_shoulder: 0.7254488468170166 (x=0.14925184845924377, y=0.34351176023483276)
left_elbow: 0.7874324321746826 (x=0.10444415360689163, y=0.4205690324306488)
right_elbow: 0.873414158821106 (x=0.18249624967575073, y=0.4154318571090698)
left_wrist: 0.8134297132492065 (x=0.07987220585346222, y=0.44882336258888245)
right_wrist: 0.5310596227645874 (x=0.1579243242740631, y=0.4693719446659088)
left_hip: 0.48307809233665466 (x=0.12901613116264343, y=0.5130376815795898)
right_hip: 0.4054966866970062 (x=0.14347021281719208, y=0.507900595664978)
left_knee: 0.707266092300415 (x=0.14057938754558563, y=0.6363293528556824)
right_knee: 0.536503791809082 (x=0.1029987558722496, y=0.615780770778656)
left_ankle: 0.6422659158706665 (x=0.21429529786109924, y=0.6620150804519653)
right_ankle: 0.7963647842407227 (x=0.08276302367448807, y=0.7467780113220215)
Landmarks: 2.6026s
nose: 0.6816550493240356 (x=0.030738139525055885, y=0.3216055631637573)
left_eye: 0.7059812545776367 (x=0.03222046047449112, y=0.3110688030719757)
right_eye: 0.038844481110572815 (x=0.030738139525055885, y=0.3110688030719757)
left_ear: 0.7951924800872803 (x=0.045561421662569046, y=0.3110688030719757)
right_ear: 0.03984677046537399 (x=0.05742005258798599, y=0.3110688030719757)
left_shoulder: 0.812483012676239 (x=0.047043751925230026, y=0.36638668179512024)
right_shoulder: 0.7965729236602783 (x=0.08113731443881989, y=0.36375248432159424)
left_elbow: 0.614456295967102 (x=0.05000840872526169, y=0.45331475138664246)
right_elbow: 0.589381992816925 (x=0.10040760785341263, y=0.45331475138664246)
left_wrist: 0.6238939762115479 (x=0.029255803674459457, y=0.49809587001800537)
right_wrist: 0.31396669149398804 (x=0.0796549841761589, y=0.49282753467559814)
left_hip: 0.3904641270637512 (x=0.05890238285064697, y=0.5376086235046387)
right_hip: 0.3786430358886719 (x=0.0796549841761589, y=0.5323402285575867)
left_knee: 0.46311870217323303 (x=0.035185132175683975, y=0.648244321346283)
right_knee: 0.33408626914024353 (x=0.0411144383251667, y=0.6429758667945862)
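As far as I can tell, PersonDetectionConfig exposes no sampling-rate option, so the API keeps returning roughly one timestamped object per 0.1 s. One workaround that avoids re-encoding the video is to interpolate the normalized landmark coordinates onto every frame timestamp yourself; this is a sketch of my own, not an official feature of the API:

def interpolate_landmark(samples, fps):
    """samples: list of (t_seconds, x, y) sorted by time; returns one point per frame."""
    t_start, t_end = samples[0][0], samples[-1][0]
    n_frames = int((t_end - t_start) * fps) + 1
    frames = []
    for i in range(n_frames):
        t = t_start + i / fps
        # closest annotated samples on either side of this frame time
        before = max((s for s in samples if s[0] <= t), key=lambda s: s[0], default=samples[0])
        after = min((s for s in samples if s[0] >= t), key=lambda s: s[0], default=samples[-1])
        if after[0] == before[0]:
            x, y = before[1], before[2]
        else:
            w = (t - before[0]) / (after[0] - before[0])
            x = before[1] + w * (after[1] - before[1])
            y = before[2] + w * (after[2] - before[2])
        frames.append((t, x, y))
    return frames

# Hypothetical nose positions at the API's 0.1 s sampling interval
# (values copied from the sample output above)
nose_samples = [(2.5025, 0.1016, 0.2947), (2.6026, 0.0307, 0.3216)]
for t, x, y in interpolate_landmark(nose_samples, fps=30):
    print(f"{t:.4f}s  nose  x={x:.4f}  y={y:.4f}")

Linear interpolation is a reasonable approximation for motion between neighbouring samples, but it will smooth over anything that happens faster than the 0.1 s sampling window.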

Why am I receiving the error "APIError(code=-1013): Filter failure: LOT_SIZE" from python-binance library?

I have created a triangular arbitrage bot in Python to trade on Binance from a VPS.
I have bought BTC, the currency I want to operate with, specifically a total of "0.00278926". I also have a list of various tokens for the triangular arbitrage.
Imagine that the bot has found an arbitrage opportunity with these pairs: ['BNBBTC', 'ADABNB', 'ADABTC']. The bot should buy BNB with BTC, then buy ADA with BNB, and then sell the ADA back to BTC. So far so good.
However, when it finds that opportunity and tries to buy, the bot sends me this error in the console:
APIError(code=-1013): Filter failure: LOT_SIZE
I had previously checked whether the minimum amount to buy BNB with BTC was larger than what I had, and it is not; my amount is between the minimum and the maximum.
This is my code for when the arbitrage opportunity (after discounting fees) is going to be profitable:
from binance.client import Client
from binance.enums import *
import time
from datetime import datetime
import matplotlib
from matplotlib import cm
import matplotlib.pyplot as plt
import math
from binance_key import BinanceKey
import json
"""
... Here is the code for the Keys, the initialization of the bot, etc.
"""
list_of_arb_sym = [
    ['BNBBTC', 'ADABNB', 'ADABTC'],  # Buying BNB
    ['BNBBTC', 'AAVEBNB', 'AAVEBTC'],
    ['...'],  # Imagine that there is a list of 300 pairs here
]
# Then I pass the values of one line of this list into list_of_sym[0], list_of_sym[1]
# and list_of_sym[2], but I don't add that code so this doesn't get excessively long
"""
... here will be that code
"""
# Get values and prices
price_order_1 = client.get_avg_price(symbol=list_of_sym[0])
price_order_2 = client.get_avg_price(symbol=list_of_sym[1])
price_order_3 = client.get_avg_price(symbol=list_of_sym[2])
price1 = float(price_order_1["price"])
price2 = float(price_order_2["price"])
price3 = float(price_order_3["price"])
"""
... more irrelevant code that is working
"""
if arb_opportunity == 'Yes':
    place_order_msg = "STARTING TO TRADE\n\n"
    print(place_order_msg)
    data_log_to_file(place_order_msg)

    # First buy
    balance1 = client.get_asset_balance('BTC')
    quantity1 = (float(balance1['free']) / price1)
    quantity_1 = round(quantity1, 5)
    order_1 = client.order_market_buy(
        symbol=list_of_sym[0],
        quantity=quantity_1)
    first_order = "FIRST ORDER DID IT.\n\n"
    print(first_order)
    data_log_to_file(first_order)

    # Second buy
    simbolo2 = list_of_sym[0]
    simbolo2form = simbolo2[0:3]
    balance2 = client.get_asset_balance(simbolo2form)
    quantity2 = (float(balance2['free']) / price2)
    quantity_2 = round(quantity2, 5)
    order_2 = client.order_market_buy(
        symbol=list_of_sym[1],
        quantity=quantity_2)
    second_order = "SECOND ORDER DID IT.\n\n"
    print(second_order)
    data_log_to_file(second_order)

    # Sell
    simbolo3 = list_of_sym[1]
    simbolo3form = simbolo3[0:-3]
    balance3 = client.get_asset_balance(simbolo3form)
    quantity3 = (float(balance3['free']) / price3)
    quantity_3 = round(quantity3, 5)
    order_3 = client.order_market_sell(
        symbol=list_of_sym[2],
        quantity=quantity_3)
    third_order = "SELL DID. \n\n"
    third_order += "THE BOT HAS FINISHED"
    print(third_order)
    data_log_to_file(third_order)
I don't know how to fix it. I think I have done everything right, because when I changed the 'quantity' in the first 'order_market_buy' to 0.26, it bought without problems.
The minimum quantity for trading BNBBTC on Binance is 0.01.
You can check this by sending a GET request to https://api.binance.com/api/v3/exchangeInfo
Here is an extract of the response (the LOT_SIZE filter for BNBBTC):
{
    "symbol": "BNBBTC",
    "status": "TRADING",
    "baseAsset": "BNB",
    "baseAssetPrecision": 8,
    "quoteAsset": "BTC",
    "quotePrecision": 8,
    "quoteAssetPrecision": 8,
    "baseCommissionPrecision": 8,
    "quoteCommissionPrecision": 8,
    ...
    "filters": [
        ...
        {
            "filterType": "LOT_SIZE",
            "minQty": "0.01000000",
            "maxQty": "100000.00000000",
            "stepSize": "0.01000000"
        },
    ],
}
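In other words, the bot rounds each quantity to 5 decimal places, but LOT_SIZE requires the quantity to be an exact multiple of stepSize (0.01 for BNBBTC), which is why a hard-coded 0.26 worked while something like 0.26789 was rejected. A minimal sketch of flooring the quantity to the symbol's step size before placing the order; it uses python-binance's get_symbol_info, and the helper name and placeholder values are mine, not from the original bot:

from decimal import Decimal

from binance.client import Client


def floor_to_step(quantity, step_size):
    """Floor quantity to an exact multiple of the LOT_SIZE step."""
    step = Decimal(step_size)
    return float((Decimal(str(quantity)) // step) * step)


client = Client(api_key="...", api_secret="...")  # placeholder credentials

info = client.get_symbol_info('BNBBTC')
lot_size = next(f for f in info['filters'] if f['filterType'] == 'LOT_SIZE')

raw_qty = 0.26789  # e.g. float(balance1['free']) / price1
qty = floor_to_step(raw_qty, lot_size['stepSize'])  # -> 0.26 for stepSize 0.01

if qty >= float(lot_size['minQty']):
    order = client.order_market_buy(symbol='BNBBTC', quantity=qty)

The same flooring should be applied to quantity_2 and quantity_3 with each pair's own stepSize, since every symbol has its own LOT_SIZE filter.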

How to extract data from text files?

So I have a set of files that I need to extract data from and write into a new txt file, and I am not sure how to do this with Python. Below is sample data. I am trying to extract the NSF Org, File and Abstract parts.
Title : CRB: Genetic Diversity of Endangered Populations of Mysticete Whales:
Mitochondrial DNA and Historical Demography
Type : Award
NSF Org : DEB
Latest
Amendment
Date : August 1, 1991
File : a9000006
Award Number: 9000006
Award Instr.: Continuing grant
Prgm Manager: Scott Collins
DEB DIVISION OF ENVIRONMENTAL BIOLOGY
BIO DIRECT FOR BIOLOGICAL SCIENCES
Start Date : June 1, 1990
Expires : November 30, 1992 (Estimated)
Expected
Total Amt. : $179720 (Estimated)
Investigator: Stephen R. Palumbi (Principal Investigator current)
Sponsor : U of Hawaii Manoa
2530 Dole Street
Honolulu, HI 968222225 808/956-7800
NSF Program : 1127 SYSTEMATIC & POPULATION BIOLO
Fld Applictn: 0000099 Other Applications NEC
61 Life Science Biological
Program Ref : 9285,
Abstract :
Commercial exploitation over the past two hundred years drove the great
Mysticete whales to near extinction. Variation in the sizes of populations
prior to exploitation, minimal population size during exploitation and
current population sizes permit analyses of the effects of differing levels
of exploitation on species with different biogeographical distributions and
life-history characteristics.
You're not giving me much to go on, but here is what I do to read input from a txt file. This is in Java; hopefully you'll know how to store it in an array of some sort.
import java.util.Scanner;
import java.io.*;

public class ClockAngles {
    public static void main(String[] args) throws IOException {
        Scanner reader = null;
        String input = "";
        try {
            reader = new Scanner(new BufferedReader(new FileReader("FilePath")));
            while (reader.hasNext()) {
                input = reader.next();
                System.out.print(input);
            }
        } finally {
            if (reader != null) {
                reader.close();
            }
        }
    }
}
Python code
#!/bin/env python2.7

# Change this to the file with the time input
filename = "filetext"
storeData = []

class Whatever:
    def __init__(self, time_str):
        times_list = time_str.split('however you want input to be read')
        self.a = int(times_list[0])
        self.b = int(times_list[1])
        self.c = int(times_list[2])

    # prints the data
    def __str__(self):
        return str(self.a) + " " + str(self.b) + " " + str(self.c)
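Coming back to the actual question: since the records have a fixed "Field : value" layout, one way to pull out NSF Org, File and the multi-line Abstract in Python is plain string handling. A minimal sketch; the input glob pattern, the tab-separated output format, and the assumption that the Abstract runs to the end of each file are mine:

import glob

def parse_record(text):
    """Extract 'NSF Org', 'File' and the multi-line 'Abstract' from one record."""
    fields = {}
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("NSF Org"):
            fields["NSF Org"] = line.split(":", 1)[1].strip()
        elif line.startswith("File"):
            fields["File"] = line.split(":", 1)[1].strip()
        elif line.startswith("Abstract"):
            # Assume the abstract is everything after the "Abstract :" line
            fields["Abstract"] = " ".join(l.strip() for l in lines[i + 1:] if l.strip())
            break
    return fields

with open("extracted.txt", "w") as out:
    for path in glob.glob("awards/*.txt"):  # hypothetical input folder
        with open(path) as f:
            fields = parse_record(f.read())
        out.write("{}\t{}\t{}\n".format(
            fields.get("NSF Org", ""), fields.get("File", ""), fields.get("Abstract", "")))

Against the sample record above this would produce "DEB", "a9000006" and the abstract paragraph joined into a single line.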

Big graph visualization on a webpage: networkx, vivagraph

I have a graph of about 5000 nodes and 5000 links that I can visualize in Chrome thanks to the vivagraph JavaScript library (WebGL is faster than SVG - in d3, for example).
My workflow is:
Build the graph with the networkx Python library and output the result as a JSON file.
Load the JSON and construct the graph with the vivagraph JavaScript library.
Node positions are computed by the JS library.
The problem is that it takes time to render the layout with well-positioned nodes.
My approach is to pre-compute the node positions, in networkx for example. The really good point of this approach is that it minimizes client work in the browser. But I can't achieve good positions on the webpage. I need help with this step.
The relevant python code for the node position computation is :
## positioning
try:
    # Position nodes using the Fruchterman-Reingold force-directed algorithm.
    pos = nx.spring_layout(G)
    for k, v in pos.iteritems():
        # tentative scaling:
        # from small floats like 0.5555 to higher values,
        # casting to int because precision is not important
        pos[k] = [int(i * 1000) for i in v.tolist()]
except Exception, e:
    print "positioning failed"
    raise

## setting positions
try:
    # set position of nodes as a node attribute
    # that will be used with the js library
    nx.set_node_attributes(G, 'pos', pos)
except Exception, e:
    print "setting positions failed"
    raise e

# output all the stuff
d = json_graph.node_link_data(G)
with open(args.output, 'w') as f:
    json.dump(d, f)
Then in my page, in javascript :
/*global Viva*/
function graph(file){
var file = file;
$.getJSON(file, function(data) {
var graphGenerator = Viva.Graph.generator();
graph = Viva.Graph.graph();
# building the graph with the json data :
data.nodes.forEach(function(n,i) {
var node = graph.addNode(n.id,{d: n.d});
# node position is defined in the json element attribute 'pos'
node.position = {
x : n.pos[0],
y : n.pos[1]
};
})
# adding links between nodes
data.links.forEach(function(l,i) {
graph.addLink(data.nodes[l.source].id, data.nodes[l.target].id);
})
var max_link = 55
var min_link = 1
var colors = d3.scale.linear().domain([min_link,max_link]).range(['#F0F0F0','#252525']);
var layout = Viva.Graph.Layout.forceDirected(graph, {
springLength : 80,
springCoeff : 0.0008,
dragCoeff : 0.001,
gravity : -5.0,
theta : 0.8
});
var graphics = Viva.Graph.View.webglGraphics();
graphics
.node(function(node){
# color and size of nodes
color = colors(node.links.length)
if(node.id == "root"){
// pin node on canvas, so no position update
node.isPinned = true;
size = 60;
} else {
size = 20+(7-node.id.length)*(7-node.id.length);
}
return Viva.Graph.View.webglSquare(size,color);
})
.link(function(link) {
# color on links
fromId = link.fromId;
toId = link.toId;
if(toId == "root" || fromId == "root"){
return Viva.Graph.View.webglLine("#252525");
} else {
if( fromId[0] == toId[0]){
linkcolor = linkcolors(fromId[0])
return Viva.Graph.View.webglLine(linkcolor);
} else {
linkcolor = averageRGB(linkcolors(fromId[0]),linkcolors(toId[0]))
return Viva.Graph.View.webglLine('#'+linkcolor);
}
}
});
renderer = Viva.Graph.View.renderer(graph,
{
layout : layout,
graphics : graphics,
enableBlending: false,
renderLinks : true,
prerender : true
});
renderer.run();
});
}
I am now trying Gephi, but I don't want to use the Gephi toolkit as I am not used to Java.
If somebody has some hints on this, please spare me hundreds of trials and maybe failures ;)
spring_layout assumes that the edge weights uphold the metric property, i.e. weight(A,B) + weight(A,C) > weight(B,C). If this is not the case, then networkx tries to place the nodes as realistically as possible.
You could try to adjust this with:
pos = nx.spring_layout(G, k=alpha, iterations=beta)
# where 0.0 < alpha < 1.0 and beta > 0
# k is the optimal distance between the nodes
# iterations specifies how many layout iterations to run
# This code works only on networkx 1.8 and not earlier versions
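As a concrete starting point, here is a minimal sketch of the pre-computation step with tuned k and iterations, plus the integer scaling used in the question. It is written against the current networkx API (the argument order of set_node_attributes changed after 1.x), and the stand-in graph and the specific parameter values are my assumptions:

import json
import networkx as nx
from networkx.readwrite import json_graph

G = nx.barabasi_albert_graph(5000, 1)  # stand-in graph of roughly 5000 nodes/links

# Larger k spreads nodes further apart; more iterations lets the layout settle
pos = nx.spring_layout(G, k=0.3, iterations=100, seed=42)

# Scale the unit-square coordinates to integers for the JS side
pos = {n: [int(x * 1000), int(y * 1000)] for n, (x, y) in pos.items()}
nx.set_node_attributes(G, pos, name="pos")  # networkx >= 2.0 argument order

with open("graph.json", "w") as f:
    json.dump(json_graph.node_link_data(G), f)

The resulting JSON has the same 'pos' node attribute the vivagraph code above reads, so the browser only has to draw the pre-computed layout instead of running the force simulation itself.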
