Python 3.8, Windows 10
Hi all, I'm pretty new to Python, so please be nice :)
I am having problems with the final stage of a piece of code that interacts with a bus API. (No, I am not a bus spotter.)
So far I have taken in data from a user,
validated that the service is one we support,
retrieved pertinent data about the service,
and retrieved route information for the service, passing it to the final part, where I'm meant to map it.
Unfortunately, it is only plotting one point: the end stop.
I can see my functions interacting, as I have copious amounts of debug logging going on, so I can see data flowing between functions etc.
....
....
1
DEBUG 4: Alyssum Walk , 51.89186 , 0.93168
DEBUG 5: Alyssum Walk , 51.89186 , 0.93168
DEBUG 5.1: map_it_dict{'stop': 'Alyssum Walk', 'latitude': 51.89186, 'longitude': 0.93168}
stop latitude longitude
0 Alyssum Walk 51.89186 0.93168
1
DEBUG 4: Azalea Court , 51.89278 , 0.93334
DEBUG 5: Azalea Court , 51.89278 , 0.93334
DEBUG 5.1: map_it_dict{'stop': 'Azalea Court', 'latitude': 51.89278, 'longitude': 0.93334}
stop latitude longitude
0 Azalea Court 51.89278 0.93334
1
DEBUG 4: Library , 51.89165 , 0.93706
DEBUG 5: Library , 51.89165 , 0.93706
DEBUG 5.1: map_it_dict{'stop': 'Library', 'latitude': 51.89165, 'longitude': 0.93706}
stop latitude longitude
0 Library 51.89165 0.93706
1
DEBUG 0: Returned from bus_route(). Last Stop is Library , 51.89165 , 0.93706
As can be seen above, I am getting consistent data across all my functions and my pandas DataFrame.
DEBUG 0 is in the main function.
DEBUG 4 comes from my function bus_route(). This queries a URL and returns the bus stops on the route, delivered as the bus stand name plus the lat and long of the bus stop. The final part of the bus_route() function passes the bus stop, lat and long to the map_it() function.
DEBUG 5 shows that map_it() is receiving the same data from the bus_route() function. Finally, you can see the pandas DataFrame, again with the same information.
The 0 is the index in my pandas DataFrame.
The 1 is the number of items in the dictionary at that time (the total route is 53 items).
The data that comes into my map_it() function should plot the 52 bus stops across a folium map, but it is only mapping the last one.
I think this is happening because the data is passed over to the map_it() function line by line, and I suspect the dictionary is overwritten each time, hence the index of 0 against the DataFrame.
I get all the data in my bus_route() function:
What bus do you want?: 64
The number 64 is a bus service we support
DEBUG 0: From validate_bus() we got 64
DEBUG 0: From bus_service(): Bus Number 64 , Traveling from: Greenstead, Essex, Traveling to: Shrub End, Essex
DEBUG 4: Hazell Avenue , 51.87154 , 0.86659
DEBUG 4: Paxman Avenue , 51.87186 , 0.86827
DEBUG 4: Alderman Blaxhill School , 51.87236 , 0.87038
...
...
...
DEBUG 4: Alyssum Walk , 51.89186 , 0.93168
DEBUG 4: Azalea Court , 51.89278 , 0.93334
DEBUG 4: Library , 51.89165 , 0.93706
DEBUG 0: Returned from bus_route(). Last Stop is Library , 51.89165 , 0.93706
Here is my code for the two functions.
def bus_route(bus_number):
    # This function queries the transport API for route data for a specific bus service number.
    # It receives input as bus_number from main().
    # It provides bus_stand, lat and long for each of the stops along the specific bus's route
    # and passes them to map_it() for route mapping.
    bus = bus_number
    # Retrieve a URL via urllib3.
    # This could be tidied up using data from the URL, but for expediency it is coded in here.
    # Use %s to pass in the constants and variables to make up the URL.
    if bus == "64":
        url = BASE_URL + '/route/FESX/%s/inbound/1500IM2349B/2020-05-22/06:40/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    elif bus == "65":
        url = BASE_URL + '/route/FESX/%s/inbound/1500IM2456B/2020-05-22/19:27/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    elif bus == "67":
        url = BASE_URL + '/route/FESX/%s/inbound/150033038003/2020-05-22/06:55/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    elif bus == "70":
        url = BASE_URL + '/route/FESX/%s/inbound/1500IM77A/2020-05-22/06:51/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    elif bus == "74B":
        url = BASE_URL + '/route/FESX/%s/inbound/15003303800B/2020-05-22/20:10/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    elif bus == "88":
        url = BASE_URL + '/route/FESX/%s/inbound/1500IM77A/2020-05-22/05:50/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    else:
        url = BASE_URL + '/route/FESX/%s/inbound/1500IM52/2020-05-22/06:00/timetable.json?app_id=%s&app_key=%s' \
                         '&edge_geometry=false&stops=ALL' % (bus, APP_ID, API_KEY)
    http = urllib3.PoolManager()
    # Request our data, and decode the JSON data returned.
    response = http.request('GET', url)
    bus_route_dict = json.loads(response.data.decode('utf-8'))
    # Iterate through our dictionary, giving us the bus stop names and their
    # lat and long so we can plot them on a map.
    for stop in bus_route_dict['stops']:
        bus_stand = stop['stop_name']
        lat = stop['latitude']
        long = stop['longitude']
        print("DEBUG 4: " + bus_stand + " , " + str(lat) + " , " + str(long))
        #map_it(bus_stand, lat, long)
    return bus_stand, lat, long
def map_it(bus_stand, lat, long):
    # This function maps the bus route on a folium map.
    # It receives input as bus_stand, lat, and long from bus_route().
    # Folium mapping.
    stop = bus_stand
    latitude = lat
    longitude = long
    # DEBUG code to show we are receiving data from bus_route()
    print("DEBUG 5: " + stop + " , " + str(latitude) + " , " + str(longitude))
    # Set up a dictionary to store the information we need to build
    # a folium map.
    map_it_dict = {}
    map_it_dict['stop'] = stop
    map_it_dict['latitude'] = latitude
    map_it_dict['longitude'] = longitude
    print("DEBUG 5.1: map_it_dict" + str(map_it_dict))
    # Let's get the dict into pandas.
    map_it_df = pd.DataFrame([map_it_dict])
    # Check we got data - we get it. tw 22/05/2020
    #print(map_it_df.head())
    # Prep data for the map.
    locations = map_it_df[['latitude', 'longitude']]
    locationlist = locations.values.tolist()
    print(len(locationlist))
    # Now build the map.
    # The location is the lat/long for Colchester.
    route_64 = folium.Map(location=[51.8959, 0.8919], zoom_start=14)
    for point in range(0, len(locationlist)):
        folium.Marker(locationlist[point], popup=map_it_dict['stop']).add_to(route_64)
    route_64.save("route_maps/route_64.html")
I can't see where to create my dictionary, as all I seem to get out of the bus_route() function is single lines. I realise that if I could get the print statement to append to a dictionary instead, my problem would be solved.
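A minimal sketch of that accumulation approach, assuming the same bus_route_dict structure as above: build a list of per-stop dicts in bus_route(), then hand the whole list to map_it() in one call, so the DataFrame holds every stop instead of just the latest one (the folium calls are illustrative, not the only way to do it):

def bus_route(bus_number):
    # ... build the URL and fetch bus_route_dict exactly as above ...
    stops = []
    for stop in bus_route_dict['stops']:
        stops.append({'stop': stop['stop_name'],
                      'latitude': stop['latitude'],
                      'longitude': stop['longitude']})
    map_it(stops)  # one call with the whole route
    return stops

def map_it(stops):
    map_it_df = pd.DataFrame(stops)  # one row per stop, not one row in total
    route_map = folium.Map(location=[51.8959, 0.8919], zoom_start=14)
    for _, row in map_it_df.iterrows():
        folium.Marker([row['latitude'], row['longitude']],
                      popup=row['stop']).add_to(route_map)
    route_map.save("route_maps/route_64.html")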
I am using a GPS NEO-6M V2 and a Raspberry Pi to monitor vehicle health, and I also use OBD to get data from the engine sensors. The problem I am facing is that data from the GPS goes missing for a few minutes at a time. If I travel from one place to another, I get data from the beginning of the journey, then the data suddenly goes missing, and then it returns to normal. These gaps in the middle of the journey are the biggest issue for me. I could not find out why the data goes missing along the way; please suggest a solution.
Below is the Python code used to get the GPS data from the Raspberry Pi into InfluxDB and visualize it on a Grafana server.
from datetime import datetime

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS
import serial

# Setup database
token = "<mytoken>"
org = "<myorg>"
bucket = "<mybucket>"

with InfluxDBClient(url="<influxurl>", token=token, org=org) as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)

    # Setup dataload
    json_dataload = []

    ser = serial.Serial("/dev/ttyS0")
    gpgga_info = "$GPGGA,"
    GPGGA_buffer = 0
    NMEA_buff = 0

    def convert_to_degrees(raw_value):
        decimal_value = raw_value / 100.00
        degrees = int(decimal_value)
        mm_mmmm = (decimal_value - int(decimal_value)) / 0.6
        position = degrees + mm_mmmm
        position = "%.4f" % position
        return position

    while True:
        received_data = str(ser.readline())                    # read NMEA string received
        GPGGA_data_available = received_data.find(gpgga_info)  # check for the NMEA GPGGA sentence
        if GPGGA_data_available > 0:
            GPGGA_buffer = received_data.split("$GPGGA,", 1)[1]  # store data after the $GPGGA, header
            NMEA_buff = GPGGA_buffer.split(',')
            nmea_latitude = []
            nmea_longitude = []
            extract_latitude = NMEA_buff[1]   # extract latitude from the sentence
            extract_longitude = NMEA_buff[3]  # extract longitude from the sentence
            lat = float(extract_latitude)
            lat = convert_to_degrees(lat)
            longi = float(extract_longitude)
            longi = convert_to_degrees(longi)

            point = Point("latest GPS") \
                .field("latitude", lat) \
                .field("longitude", longi) \
                .time(datetime.utcnow(), WritePrecision.NS)
            json_dataload.append(point)

            # Send our payload
            write_api.write(bucket, org, json_dataload)
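One hedged guess worth testing: a GPS receiver keeps emitting GPGGA sentences even after it loses its fix, but the position fields are then empty (which makes float() raise) or stale. In a GPGGA sentence split as above, field 5 is the fix quality (0 = no fix) and field 6 is the number of satellites in use, so a guard like this hypothetical has_fix() helper, called right after NMEA_buff is built, would show whether your gaps coincide with losing the fix:

def has_fix(nmea_fields):
    # nmea_fields is the list produced by GPGGA_buffer.split(',') above:
    # after the "$GPGGA," prefix, field 5 is fix quality (0 = invalid)
    # and field 6 is the number of satellites in use.
    try:
        return int(nmea_fields[5]) > 0
    except (IndexError, ValueError):
        return False

Used inside the read loop as, for example, "if not has_fix(NMEA_buff): continue", and logging NMEA_buff[6] at the same time, would tell you how many satellites are in view whenever the data disappears.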
I have been trying to make my code faster by running parallel processes, with no luck. I am fetching weather data with an external library (https://github.com/pnuu/fmiopendata). Under the hood the library simply uses requests.get() to fetch data from the API. Any tips on how to proceed? I could surely edit the code of fmiopendata, but I would prefer a workaround rather than refactoring someone else's code.
Here is some working code, which I would like to edit:
from fmiopendata.wfs import download_stored_query


def parseStartTime(ts, year):
    return str(year) + "-" + ts[0][0] + "-" + ts[0][1] + "T00:00:00Z"


def parseEndTime(ts, year):
    return str(year) + "-" + ts[1][0] + "-" + ts[1][1] + "T23:59:59Z"


def weatherWFS(lat, lon, start_time, end_time):
    # Downloading the observations from the WFS server, using bbox and timestamps for querying.
    while True:
        try:
            obs = download_stored_query(
                "fmi::observations::weather::daily::multipointcoverage",
                args=["bbox=" + str(lon - 1e-2) + "," + str(lat - 1e-2) + "," + str(lon + 1e-2) + "," + str(lat + 1e-2),
                      "starttime=" + start_time,
                      "endtime=" + end_time])
            if obs.data == {}:
                return False
            else:
                return obs
        except:
            pass


def getWeatherData(lat, lon):
    StartYear, EndYear = 2011, 2021
    # Handling the data in suitable chunks. Array pairs represent the starting and ending
    # dates of the intervals in ["MM", "dd"] format.
    intervals = [
        [["01", "01"], ["03", "31"]],
        [["04", "01"], ["06", "30"]],
        [["07", "01"], ["09", "30"]],
        [["10", "01"], ["12", "31"]]
    ]
    # Start and end timestamps are saved in an array.
    queries = [[parseStartTime(intervals[i], year),
                parseEndTime(intervals[i], year)]
               for year in range(StartYear, EndYear + 1)
               for i in range(len(intervals))]
    for query in queries:
        # This is the request we need to run in parallel to save time.
        # The obs objects need to be saved somehow and merged afterwards.
        obs = weatherWFS(lat, lon, query[0], query[1])
        """ INSERT MAGIC CODE HERE """


lat, lon = 62.6, 29.72
getWeatherData(lat, lon)
Answering my own question:
The best solution I have found so far is to use concurrent.futures with either the map() or submit() functions.
The solution suggested by Trambi does not improve the execution, as the requests are not CPU-intensive. The bottleneck here is the waiting time, during which the CPU sits idle, so using separate processes is not going to solve the problem. However, multithreading can improve the speed, as threads are created and shut down more quickly.
Using ThreadPoolExecutor in combination with as_completed(), I was able to reduce the execution time by ~15%.
from concurrent.futures import ThreadPoolExecutor, as_completed
from fmiopendata.wfs import download_stored_query


def parseStartTime(ts, year):
    return str(year) + "-" + ts[0][0] + "-" + ts[0][1] + "T00:00:00Z"


def parseEndTime(ts, year):
    return str(year) + "-" + ts[1][0] + "-" + ts[1][1] + "T23:59:59Z"


def weatherWFS(lat, lon, start_time, end_time):
    # Downloading the observations from the WFS server, using bbox and timestamps for querying.
    while True:
        try:
            obs = download_stored_query(
                "fmi::observations::weather::daily::multipointcoverage",
                args=["bbox=" + str(lon - 1e-2) + "," + str(lat - 1e-2) + "," + str(lon + 1e-2) + "," + str(lat + 1e-2),
                      "starttime=" + start_time,
                      "endtime=" + end_time])
            if obs.data == {}:
                return False
            else:
                return obs
        except:
            pass


def getWeatherData(lat, lon):
    StartYear, EndYear = 2011, 2021
    # Handling the data in suitable chunks. Array pairs represent the starting and ending
    # dates of the intervals in ["MM", "dd"] format.
    intervals = [
        [["01", "01"], ["03", "31"]],
        [["04", "01"], ["06", "30"]],
        [["07", "01"], ["09", "30"]],
        [["10", "01"], ["12", "31"]]
    ]
    # Start and end timestamps are saved in an array, together with the coordinates.
    queries = [
        [lat, lon,
         parseStartTime(intervals[i], year),
         parseEndTime(intervals[i], year)]
        for year in range(StartYear, EndYear)
        for i in range(len(intervals))]
    with ThreadPoolExecutor() as executor:
        observations = [executor.submit(weatherWFS, *query) for query in queries]
        for obs in as_completed(observations):
            obs = obs.result()
            """do stuff with the observations"""


lat, lon = 62.6, 29.72
getWeatherData(lat, lon)
You could try using multiprocessing.Pool.
Replace your for query in queries: loop with something like:
import multiprocessing

iterable = zip([lat] * len(queries), [lon] * len(queries), queries)
pool = multiprocessing.Pool(len(queries))
obs_list = pool.starmap(weatherWFS, iterable)  # starmap unpacks each (lat, lon, query) tuple
pool.close()
pool.join()
Note that starmap will unpack each (lat, lon, query) tuple into separate arguments for weatherWFS, so you should change the function signature accordingly:
def weatherWFS(lat, lon, query):
    start_time = query[0]
    end_time = query[1]
Depending on the length of queries and its elements, you might also choose to unpack queries in your iterable...
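For instance, a hedged sketch of that unpacking, which keeps the four-argument weatherWFS(lat, lon, start_time, end_time) signature from the question instead of changing it (the pool size of 8 is an arbitrary choice):

import multiprocessing

# Each element of queries is a [start_time, end_time] pair; prepend the
# coordinates so starmap can unpack all four arguments for weatherWFS.
iterable = [(lat, lon, start, end) for start, end in queries]
with multiprocessing.Pool(processes=8) as pool:
    obs_list = pool.starmap(weatherWFS, iterable)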
Hello everybody, I hope you are all doing well.
I am working on a project in which I receive GPS data (longitude and latitude) from an Android device via an SQL server. What I am trying to do is send this longitude/latitude data to my SITL vehicle in ArduPilot. I thought about using the DroneKit Python API, as such:
from dronekit import connect, VehicleMode
import time
import mysql.connector

#--- Start the Software In The Loop (SITL)
import dronekit_sitl

sitl = dronekit_sitl.start_default()  # (sitl.start)
#connection_string = sitl.connection_string()

mydb = mysql.connector.connect(
    host="******",
    user="******",
    password="*****",
    database="koordinat"
)
mycursor = mydb.cursor()

#--- Now that we have started the SITL and we have the connection string (basically the IP and UDP port)...
print("Connecting to the vehicle")
vehicle = connect('tcp:127.0.0.1:5762', wait_ready=False, baud=115200)
vehicle.wait_ready(True, raise_exception=False)

#-- Read information from the autopilot:
#- Version and attributes
vehicle.wait_ready('autopilot_version')
print('Autopilot version: %s' % vehicle.version)

#- Does the firmware support the companion pc to set the attitude?
print('Supports set attitude from companion: %s' % vehicle.capabilities.set_attitude_target_local_ned)

vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True

while True:
    mycursor.execute("SELECT * FROM koordinat WHERE 1")
    location = str(mycursor.fetchall())
    location = location.split(",")
    location[0] = location[0].replace("[", "")
    location[0] = location[0].replace("(", "")
    location[0] = location[0].replace("'", "")
    location[1] = location[1].replace("[", "")
    location[1] = location[1].replace(")", "")
    location[1] = location[1].replace("'", "")
    location[1] = location[1].replace(")", "")
    # Converting the longitude and latitude to float, before assigning to the vehicle GPS data:
    location[0] = float(location[0])
    location[1] = float(location[1])
    # Setting the location of the vehicle:
    vehicle.location.global_frame.lat = location[0]
    vehicle.location.global_frame.lon = location[1]
    print('Location:', str(vehicle.location.global_frame.lat) + ",",
          str(vehicle.location.global_frame.lon) + ",",
          str(vehicle.location.global_frame.alt))
    #- When did we receive the last heartbeat?
    print('Last heartbeat: %s' % vehicle.last_heartbeat)
    time.sleep(1)
However, when I check in the SITL and Mission Planner (and also the print statement in my code), the location does not change; the simulator simply ignores the values set by DroneKit. Is there a working method to accomplish what I am trying to do? I tried changing the sim_vehicle.py script that I use to start the simulation, but I was only able to change the starting/home location of the vehicle; I was not able to change the current location of the vehicle in SITL and Mission Planner.
This is incorrect. You're modifying an attribute of the vehicle object that's connected to the SITL, not sending any commands to the actual autopilot:
vehicle.location.global_frame.lat = location[0]
vehicle.location.global_frame.lon = location[1]
What you want to do is set the mode to GUIDED and use the simple_goto function in DroneKit to make the drone move to lat/lon/alt coordinates.
Otherwise, you can also send the MAVLink command SET_POSITION_TARGET_GLOBAL_INT to guide it.
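A minimal sketch of the simple_goto approach, reusing the vehicle object and the parsed location list from the question (the 10 m altitude and 5 m/s groundspeed are placeholder values):

from dronekit import LocationGlobalRelative, VehicleMode

vehicle.mode = VehicleMode("GUIDED")
# Fly towards the latitude/longitude read from the database,
# 10 m above the home altitude (placeholder).
target = LocationGlobalRelative(location[0], location[1], 10)
vehicle.simple_goto(target, groundspeed=5)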
I'm writing a program to toggle the lights at my house based on my iPhone's GPS coordinates. Below is what I have so far. However, I feel like there must be a better way to do this. Is there a way to get GPS data without pinging my phone every five minutes?
So far I've tried the following with no joy:
Using Shortcuts and Scriptable, I tried to write some JavaScript that would trigger when I got close to home. However, I could not figure out how to use await require('wemo-client') with scriptablify. I kept getting the error "ReferenceError: Can't find variable: require".
IFTTT does not have a variable timed trigger, so the lights won't turn off after 15 minutes. Also, I plan on adding a motion-sensor trigger, which is unsupported.
Pythonista is $10. Yes, I am that cheap.
Apple HomeKit does not support the model I'm using, Wemo Smart Light Switch F7C030.
The code below works, but I hate that I have to ping my phone every five minutes. I'd rather save battery life by firing this code once or twice a day, as needed.
Any suggestions would be greatly appreciated.
Code:
import sys
import time
import datetime
import os
from pyicloud import PyiCloudService
import pywemo

APPLE_ID = os.getenv('APPLE_ID')                    # Apple ID username
APPLE_ID_PASSWORD = os.getenv('APPLE_ID_PASSWORD')  # Apple ID password
API = PyiCloudService(APPLE_ID, APPLE_ID_PASSWORD)
IPHONE = API.devices[1]
FIVE = 300     # 5 * 60 seconds
FIFTEEN = 900  # 15 * 60 seconds
ONEMILE = 0.01449275362318840579710144927536  # one mile is 1/69 degrees lat or long
HOMELAT =   # my home's latitude
HOMELONG =  # my home's longitude
WEMOS = pywemo.discover_devices()
LEN_WEMOS = range(len(WEMOS))

# Two-factor authentication to retrieve iPhone data
if API.requires_2fa:
    import click
    print("Two-step authentication required. Your trusted devices are:")
    DEVICES = API.devices
    for i, device in enumerate(DEVICES):
        print("  %s: %s" % (i, device.get('deviceName', "SMS to %s" % device.get('phoneNumber'))))
    DEF_DEVICE = click.prompt('Which device would you like to use?', default=0)
    DEVICE = DEVICES[DEF_DEVICE]
    if not API.send_verification_code(DEVICE):
        print("Failed to send verification code")
        sys.exit(1)
    CODE = click.prompt('Please enter validation code')
    if not API.validate_verification_code(DEVICE, CODE):
        print("Failed to verify verification code")
        sys.exit(1)

# Turn off the lights when I leave
def leavehome():
    timenow = datetime.datetime.now()
    print("Left home on {}".format(timenow.strftime("%B %d, %Y at %H:%M:%S")))
    for wemo in LEN_WEMOS:
        WEMOS[wemo].off()

# Turn on the lights for 15 minutes when I get home
def arrivehome():
    timenow = datetime.datetime.now()
    print("Arrived home on {}".format(timenow.strftime("%B %d, %Y at %H:%M:%S")))
    # Loop through all Wemo devices
    for wemo in LEN_WEMOS:
        WEMOS[wemo].on()
    time.sleep(FIFTEEN)
    for wemo in LEN_WEMOS:
        WEMOS[wemo].off()

# Automatically turn off the lights after 15 minutes - save electricity
def timeoff():
    time.sleep(FIFTEEN)
    for wemo in LEN_WEMOS:
        WEMOS[wemo].off()

# Ping my phone for GPS data (a loop, so the recursion can't blow the stack,
# and the location is refreshed on every ping)
def pingphone(prev):
    while True:
        location = IPHONE.location()
        mylat = location["latitude"]
        mylong = location["longitude"]
        prev = logic(prev, mylat, mylong)
        time.sleep(FIVE)

# Perform logic to determine if I'm home, out, arriving, or leaving
def logic(prev, lat, long):
    inrange = (HOMELAT + ONEMILE >= lat >= HOMELAT - ONEMILE and
               HOMELONG + ONEMILE >= long >= HOMELONG - ONEMILE)
    current = bool(inrange)
    if current and not prev:
        arrivehome()
    elif prev and not current:
        leavehome()
    else:
        timeoff()
    return current

# Run the script
pingphone(False)
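As an aside, if the one-mile square ever feels too coarse, a great-circle distance check is a common swap for the bounding-box test in logic(); here is a hedged sketch with a hypothetical helper, not part of the script above:

import math

def haversine_miles(lat1, long1, lat2, long2):
    # Great-circle distance between two points, in miles.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(long2 - long1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# In logic(), the box test could then become:
# inrange = haversine_miles(HOMELAT, HOMELONG, lat, long) <= 1.0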
I would like to grab satellite positions from the page(s) below, but I'm not sure if scraping is appropriate because the page appears to be updating itself every second using some internal code (it keeps updating after I disconnect from the internet). Background information can be found in my question at Space Stackexchange: A nicer way to download the positions of the Orbcomm-2 satellites.
I need a "snapshot" of four items simultaneously:
UTC time
latitude
longitude
altitude
Right now I use screenshots and manual typing. Since these values are being updated by the page itself, is conventional web scraping going to work here? I found a "screen-scraping" tag; should I try to learn about that instead?
I'm looking for the simplest solution to get those four values; I wonder if I can just use urllib or urllib2 and avoid installing something new?
Example page: http://www.satview.org/?sat_id=41186U (I need to do 41179U through 41189U, the eleven Orbcomm-2 satellites that SpaceX just put in orbit).
One option would be to fire up a real browser and continuously poll the position in an endless loop:
import time
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.satview.org/?sat_id=41186U")

while True:
    location = driver.find_element_by_css_selector("#sat_latlon .texto_track2").text
    latitude, longitude = location.split("\n")[:2]
    print(latitude, longitude)
    time.sleep(1)
Sample output:
(u'-16.57', u'66.63')
(u'-16.61', u'66.67')
...
Here we are using selenium and Firefox - there are multiple drivers for different browsers, including headless ones like PhantomJS.
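If you'd rather not have a browser window open, here is a hedged sketch of a headless variant (it assumes a reasonably recent selenium and geckodriver; PhantomJS itself is no longer maintained):

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("-headless")  # run Firefox without a visible window
driver = webdriver.Firefox(options=options)
driver.get("http://www.satview.org/?sat_id=41186U")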
No need to scrape. Just look at the source HTML of that page and copy/paste the JavaScript code. None of the positions are fetched remotely; they're all calculated on the fly in the page. So just grab the code and run it yourself!
Space-Track.org's REST API seems built to handle this type of request. Once you have an account there, you can even download a sample script (updated here) to download TLEs:
# STTest.py
#
# Simple Python app to extract resident space object history data from www.space-track.org into a spreadsheet
# (prior to executing, register for a free personal account at https://www.space-track.org/auth/createAccount)
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# For full licensing terms, please refer to the GNU General Public License (gpl-3_0.txt) distributed with this release,
# or see http://www.gnu.org/licenses/gpl-3.0.html.
import requests
import json
import xlsxwriter
import time
import datetime
import getpass
import sys
class MyError(Exception):
    def __init__(self, args):
        Exception.__init__(self, "my exception was raised with arguments {0}".format(args))
        self.args = args
# See https://www.space-track.org/documentation for details on REST queries
# "Find Starlinks" query finds all satellites w/ NORAD_CAT_ID > 40000 & OBJECT_NAME matching STARLINK*, 1 line per sat;
# the "OMM Starlink" query gets all Orbital Mean-Elements Messages (OMM) for a specific NORAD_CAT_ID in JSON format.
uriBase = "https://www.space-track.org"
requestLogin = "/ajaxauth/login"
requestCmdAction = "/basicspacedata/query"
requestFindStarlinks = "/class/tle_latest/NORAD_CAT_ID/>40000/ORDINAL/1/OBJECT_NAME/STARLINK~~/format/json/orderby/NORAD_CAT_ID%20asc"
requestOMMStarlink1 = "/class/omm/NORAD_CAT_ID/"
requestOMMStarlink2 = "/orderby/EPOCH%20asc/format/json"
# Parameters to derive apoapsis and periapsis from mean motion (see https://en.wikipedia.org/wiki/Mean_motion)
GM = 398600441800000.0
GM13 = GM ** (1.0 / 3.0)
MRAD = 6378.137
PI = 3.14159265358979
TPI86 = 2.0 * PI / 86400.0
# Log in to personal account obtained by registering for free at https://www.space-track.org/auth/createAccount
print('\nEnter your personal Space-Track.org username (usually your email address for registration): ')
configUsr = input()
print('Username capture complete.\n')
configPwd = getpass.getpass(prompt='Securely enter your Space-Track.org password (minimum of 15 characters): ')
# Excel output file name - e.g. starlink-track.xlsx (note: make it an .xlsx file)
configOut = 'STTest.xlsx'
siteCred = {'identity': configUsr, 'password': configPwd}
# Use the xlsxwriter package to write the .xlsx file
print('Creating Microsoft Excel (.xlsx) file to contain outputs...')
workbook = xlsxwriter.Workbook(configOut)
worksheet = workbook.add_worksheet()
z0_format = workbook.add_format({'num_format': '#,##0'})
z1_format = workbook.add_format({'num_format': '#,##0.0'})
z2_format = workbook.add_format({'num_format': '#,##0.00'})
z3_format = workbook.add_format({'num_format': '#,##0.000'})
# write the headers on the spreadsheet
print('Starting to write outputs to the Excel file...')
now = datetime.datetime.now()
nowStr = now.strftime("%m/%d/%Y %H:%M:%S")
worksheet.write('A1', 'Starlink data from ' + uriBase + " on " + nowStr)
worksheet.write('A3', 'NORAD_CAT_ID')
worksheet.write('B3', 'SATNAME')
worksheet.write('C3', 'EPOCH')
worksheet.write('D3', 'Orb')
worksheet.write('E3', 'Inc')
worksheet.write('F3', 'Ecc')
worksheet.write('G3', 'MnM')
worksheet.write('H3', 'ApA')
worksheet.write('I3', 'PeA')
worksheet.write('J3', 'AvA')
worksheet.write('K3', 'LAN')
worksheet.write('L3', 'AgP')
worksheet.write('M3', 'MnA')
worksheet.write('N3', 'SMa')
worksheet.write('O3', 'T')
worksheet.write('P3', 'Vel')
wsline = 3
def countdown(t, step=1, msg='Sleeping...'):  # in seconds
    pad_str = ' ' * len('%d' % step)
    for i in range(t, 0, -step):
        sys.stdout.write('{} for the next {} seconds {}\r'.format(msg, i, pad_str))
        sys.stdout.flush()
        time.sleep(step)
    print('Done {} for {} seconds! {}'.format(msg, t, pad_str))
# use requests package to drive the RESTful session with space-track.org
print('Interfacing with SpaceTrack.org to obtain data...')
with requests.Session() as session:
    # Need to log in first. NOTE: we get a 200 to say the web site got the data, not that we are logged in.
    resp = session.post(uriBase + requestLogin, data=siteCred)
    if resp.status_code != 200:
        raise MyError(resp, "POST fail on login.")

    # This query picks up all Starlink satellites from the catalog. NOTE: a 401 failure shows you have bad credentials.
    resp = session.get(uriBase + requestCmdAction + requestFindStarlinks)
    if resp.status_code != 200:
        print(resp)
        raise MyError(resp, "GET fail on request for resident space objects.")

    # Use the json package to break the json-formatted response into a Python structure (a list of dictionaries)
    retData = json.loads(resp.text)
    satCount = len(retData)
    satIds = []
    for e in retData:
        # each e describes the latest elements for one resident space object. We just need the NORAD_CAT_ID...
        catId = e['NORAD_CAT_ID']
        satIds.append(catId)

    # Using our new list of resident space object NORAD_CAT_IDs, we can now get the OMM message
    maxs = 1  # counter for the number of queries we have made without a pause in querying space-track.org
    for s in satIds:
        resp = session.get(uriBase + requestCmdAction + requestOMMStarlink1 + s + requestOMMStarlink2)
        if resp.status_code != 200:
            # If you are getting error 500s here, it's probably the rate throttle on the site (20/min and 200/hr);
            # wait a while and retry
            print(resp)
            raise MyError(resp, "GET fail on request for resident space object number " + s + '.')

        # the data here can be quite large, as it's all the elements for every entry for one resident space object
        retData = json.loads(resp.text)
        for e in retData:
            # each element is one reading of the orbital elements for one resident space object
            print("Scanning satellite " + e['OBJECT_NAME'] + " at epoch " + e['EPOCH'] + '...')
            mmoti = float(e['MEAN_MOTION'])
            ecc = float(e['ECCENTRICITY'])
            worksheet.write(wsline, 0, int(e['NORAD_CAT_ID']))
            worksheet.write(wsline, 1, e['OBJECT_NAME'])
            worksheet.write(wsline, 2, e['EPOCH'])
            worksheet.write(wsline, 3, float(e['REV_AT_EPOCH']))
            worksheet.write(wsline, 4, float(e['INCLINATION']), z1_format)
            worksheet.write(wsline, 5, ecc, z3_format)
            worksheet.write(wsline, 6, mmoti, z1_format)
            # do some ninja-fu to flip Mean Motion into Apoapsis and Periapsis, and to get orbital period and velocity
            sma = GM13 / ((TPI86 * mmoti) ** (2.0 / 3.0)) / 1000.0
            apo = sma * (1.0 + ecc) - MRAD
            per = sma * (1.0 - ecc) - MRAD
            smak = sma * 1000.0
            orbT = 2.0 * PI * ((smak ** 3.0) / GM) ** 0.5
            orbV = (GM / smak) ** 0.5
            worksheet.write(wsline, 7, apo, z1_format)
            worksheet.write(wsline, 8, per, z1_format)
            worksheet.write(wsline, 9, (apo + per) / 2.0, z1_format)
            worksheet.write(wsline, 10, float(e['RA_OF_ASC_NODE']), z1_format)
            worksheet.write(wsline, 11, float(e['ARG_OF_PERICENTER']), z1_format)
            worksheet.write(wsline, 12, float(e['MEAN_ANOMALY']), z1_format)
            worksheet.write(wsline, 13, sma, z1_format)
            worksheet.write(wsline, 14, orbT, z0_format)
            worksheet.write(wsline, 15, orbV, z0_format)
            wsline = wsline + 1
        maxs = maxs + 1
        print(str(maxs))
        if maxs > 18:
            print('\nSnoozing for 60 secs for rate limit reasons (max 20/min and 200/hr).')
            countdown(60)
            maxs = 1

session.close()
workbook.close()
print('\nCompleted session.')