I have recently started developing an application to analyse all of my historical exercise sessions from the Polar platform.
I'm using their Accesslink API to get new sessions and I have exported my old sessions through another service they offer.
The exported sessions come with fully detailed information (instantaneous GPS location, speed, heart rate), but the JSON data provided by the API is just a summary. I am looking for a way to get the starting GPS position of each session so that I can later look up the city name from another source. I think the only way to do this is by getting the GPS data for my sessions.
Although the sessions have a has-route field, I cannot find a way in their documentation to request this route. They have provided a working example, but it does not show how to get this data.
Does anyone know if this is possible and, if so, could you please give me some directions?
Thanks in advance.
It turns out that the GPS information is provided through GPX files, which the API mentioned in the question can return. There is a method on their GitHub (link also in the question) that already performs this task. I have added a call to this method and saved its output in this project.
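For reference, a minimal sketch of what that request can look like when done directly with requests, assuming the exercise-transaction GPX endpoint from the Accesslink docs and a valid OAuth access token; the token and IDs are placeholders, and the exact endpoint path should be checked against the docs or the GitHub example:

    import requests
    import xml.etree.ElementTree as ET

    ACCESS_TOKEN = "your-oauth-access-token"  # placeholder
    USER_ID = "12345678"                      # placeholder
    TRANSACTION_ID = "100000"                 # placeholder
    EXERCISE_ID = "200000"                    # placeholder

    # Ask Accesslink for the exercise route as GPX.
    url = (f"https://www.polaraccesslink.com/v3/users/{USER_ID}"
           f"/exercise-transactions/{TRANSACTION_ID}/exercises/{EXERCISE_ID}/gpx")
    resp = requests.get(url, headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/gpx+xml",
    })
    resp.raise_for_status()

    # Parse the GPX and read the first track point to get the session's start position.
    ns = {"gpx": "http://www.topografix.com/GPX/1/1"}
    root = ET.fromstring(resp.content)
    first_point = root.find(".//gpx:trkpt", ns)
    print(first_point.attrib["lat"], first_point.attrib["lon"])

The printed latitude/longitude of the first track point is the starting position that can then be reverse-geocoded to a city name.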
I'm building a tool in Python for which I need to read out error codes for specific devices using the Server-Eye API. Server-Eye is our monitoring solution, where all of our devices and the devices of our customers are registered. The documentation at https://api.server-eye.de/ wasn't very helpful, or I'm just not finding what I need. Does anybody have any experience with the Server-Eye API?
The Server-Eye support wasn't helpful either. Apparently my request was so exotic that they have to discuss the problem in their weekly meeting. No answer from them yet.
What I am able to do is read out the customers in our tenant and the devices registered to a customer. I'm also able to format the data the API returns, which is a huge pain. I just can't seem to find out how to read which sensors are applied to a device or which errors those sensors have reported.
A usual request would look like this:
requests.get('https://api.server-eye.de/2/customer/cId/containers', params=apiKey)
cId would be the ID of the customer you want to read from.
apiKey is an authorization token which can be generated in the webconsole.
This request returns a response object, from which you read the contents with something like .json() or .text.
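For completeness, a minimal, runnable version of that call; the customer ID and API key are placeholders, and passing the token as an apiKey query parameter is my understanding of how the webconsole token is used:

    import requests

    API_KEY = "your-api-key"          # generated in the Server-Eye webconsole (placeholder)
    CUSTOMER_ID = "your-customer-id"  # placeholder

    resp = requests.get(
        f"https://api.server-eye.de/2/customer/{CUSTOMER_ID}/containers",
        params={"apiKey": API_KEY},   # token passed as a query parameter
    )
    resp.raise_for_status()

    # The body is JSON: a list of container/device entries.
    containers = resp.json()
    for container in containers:
        # Field names depend on the API, so print the raw entries to inspect them.
        print(container)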
Any help is appreciated; I'm starting to get really frustrated here as my deadline is slowly approaching.
I'm building a project using Python and Grafana where I'd like to generate a certain number of copies of certain Grafana dashboards based on certain criteria. I've installed the grafanalib library to help me with that, and I've read through the Generating Dashboards From Code section of the grafanalib website, but I feel like I still need more context to understand how to use this library.
So my first question is: how do I convert a Grafana dashboard JSON model into a Python-friendly format? What method of organization do I use? I saw the dashboard generation function in the grafanalib documentation, but it looked quite a bit different from how my JSON data is organized. I'd just like some further description of how to do the conversion.
My second question is: once I've converted my Grafana JSON into a Python format, how do I get the proper information to send that generated dashboard to my Grafana server? I see the "upload_to_grafana" function used in the grafanalib documentation to send the information, and it takes three parameters (json, server, api_key). I understand where it's getting the json parameter from, but I don't get where the server information or API key come from, or where that information is found.
This is all being developed on a Raspberry Pi 4, just to put that out there. I'm working on a personal smart-agriculture project as a way to develop my coding abilities further, as I'm self-taught. Any help that can be provided to further my understanding is most appreciated. Thank you.
Create an API key in the Grafana configuration. The secret key you get while creating it is the API key. The server is localhost:3000 in the case of a locally installed Grafana.
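To make that concrete, here is a minimal sketch of the pattern the grafanalib docs describe: build a Dashboard in Python, serialize it with DashboardEncoder, and POST it to Grafana's /api/dashboards/db endpoint. The helper names, the empty panel list, the server address and the API key are illustrative placeholders for your own instance:

    import json
    import requests
    from grafanalib.core import Dashboard
    from grafanalib._gen import DashboardEncoder

    # Describe the dashboard in Python instead of hand-writing the JSON model.
    dashboard = Dashboard(title="Smart agriculture", panels=[]).auto_panel_ids()

    def get_dashboard_json(dashboard, overwrite=True):
        # Wrap the dashboard in the payload Grafana's HTTP API expects.
        return json.dumps(
            {"dashboard": dashboard.to_json_data(), "overwrite": overwrite},
            sort_keys=True, indent=2, cls=DashboardEncoder,
        )

    def upload_to_grafana(dashboard_json, server, api_key):
        headers = {
            "Authorization": f"Bearer {api_key}",  # API key created in the Grafana UI
            "Content-Type": "application/json",
        }
        resp = requests.post(f"{server}/api/dashboards/db",
                             data=dashboard_json, headers=headers)
        resp.raise_for_status()
        return resp.json()

    upload_to_grafana(get_dashboard_json(dashboard),
                      server="http://localhost:3000",  # your Grafana instance
                      api_key="your-api-key")          # placeholder secret key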
My company is trying out Google's Recommendations AI using BigQuery exports of Merchant Center and GA data sources. However, we discovered a configuration error in the merchant feed which led to most of the events being unjoined.
I would like to do a new (clean) setup and am looking for the best way to delete the old data. It seems only possible via the API?
Secondly, while the UserEventService has a purge function, there doesn't seem to be a similar function for the ProductService.
Is deleting each product one by one the only way to go?
Any pointers and examples (Python) would be greatly appreciated as there seems to be very little documentation about this at this point in time.
As you mentioned, the only way to delete data is through the API. You can use the Google Cloud client libraries or plain REST requests; however, the library does not have a function to purge all the product data.
In this case it will be necessary to delete one product at a time using the delete_prod() function (example).
Nevertheless, as a workaround you can get the IDs of your products with the get_product() function (example), add them to a collection, then iterate over that collection and pass each value to delete_prod(). That way you can delete all of the product data, but this needs to be reviewed on your side.
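As an illustration of that workaround, a minimal sketch using the google-cloud-retail client library; the project, location, catalog and branch names are placeholders you would adjust to your own setup:

    from google.cloud import retail_v2

    client = retail_v2.ProductServiceClient()

    # Placeholder resource name; adjust project/location/catalog/branch to your setup.
    parent = ("projects/your-project/locations/global/"
              "catalogs/default_catalog/branches/default_branch")

    # List every product in the branch and collect their resource names.
    product_names = [product.name for product in client.list_products(parent=parent)]

    # Delete them one at a time, since there is no purge method for products.
    for name in product_names:
        client.delete_product(name=name)
        print(f"Deleted {name}")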
Additionally, I would like to share further resources provided by Google where you can find everything related to the Python library:
Retail API Docs
Python Retail library
Retail API GitHub repository
Please keep in mind that Stack Overflow is for specific questions about code, such as errors.
I have a JSON file full of event data that I need to send into Snowplow in Python using an Iglu webhook, but I'm having trouble finding any solid guidance on this. Most of the documentation I've been able to find relates to tracking specific events and sending that data through, but I need to backfill historical data in the same manner I'll fill forward-looking data, hence having to send a large JSON file with activity history at the outset.
Is this possible using Snowplow/Python/Iglu, or am I approaching the problem incorrectly?
This question is getting old and OP may have moved on, but I'll leave an answer for anyone else who might stumble upon it.
A Snowplow collector (e.g. the stream-collector) receives data over HTTP. Any method of sending an HTTP request should work in theory; however, there are specific SDKs that address common use cases. For Python specifically, there is the snowplow-python-tracker. You can refer to the full documentation here: Snowplow Python Tracker Docs.
You do not need to be using an Iglu webhook. You can point your Python tracker instance directly at your collector via the existing request paths, which are documented here. Yes, one of these paths is for requests via the Iglu webhook adapter, but that is meant for specific situations where you don't control the environment in which the tracker is instantiated, e.g. third-party vendor systems.
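For example, a minimal sketch with the snowplow-python-tracker, iterating over the historical JSON and sending each record as a self-describing event. The collector host, file name and Iglu schema URI are placeholders, and the exact constructor arguments vary a little between tracker versions, so check them against the docs:

    import json
    from snowplow_tracker import Tracker, Emitter, SelfDescribingJson

    # Point the emitter at your own collector endpoint (placeholder host).
    emitter = Emitter("collector.example.com")
    tracker = Tracker(emitter, namespace="backfill", app_id="history-backfill")

    # Load the historical activity data to be backfilled.
    with open("activity_history.json") as f:
        events = json.load(f)

    # Send each record as a self-describing event against your own Iglu schema (placeholder URI).
    for event in events:
        tracker.track_self_describing_event(SelfDescribingJson(
            "iglu:com.example/activity/jsonschema/1-0-0",
            event,
        ))

    # Make sure anything still buffered in the emitter is sent.
    emitter.flush()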
I'm building a Facebook-style activity stream/wall using Python on App Engine. I have built the activity classes based on the current activity standard used by Facebook, Yahoo and the like. I have a channel/API system built that creates the various object messages that live on the wall/activity stream.
Where I could use some help is with design ideas on how the wall should work, as follows:
I am using a fan-out system. When something happens I send a message, making one copy but relating it to everyone who has subscribed to the channel it is written on. This is all working fine.
My original idea was then to simply use a query to show a wall: get all the messages for a given channel or user. Which is fine.
But now I'm wondering if that is the best way to do it. Since the wall is a historical log, it really should only show what has happened recently, say the last 90 days at most, and I will use Ajax to fetch new messages. Would it be better to use the message API I have built to send messages and then use a simple model/class to store the messages that form the wall for each user, almost storing the raw HTML for each post? If each post were stored with its post date and object reference (comment, photo, event), it would be very easy to update/insert new entries in the right places and remove older ones. It would also be easy on the Ajax side to simply listen for a new message, insert it and continue. A rough sketch of the kind of model I have in mind follows below.
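This is only an illustration of the idea, using the old App Engine datastore API; the model and field names are just placeholders:

    from google.appengine.ext import db

    class WallEntry(db.Model):
        """One rendered post on a user's wall."""
        channel = db.StringProperty(required=True)        # channel / wall owner key
        object_ref = db.StringProperty()                   # e.g. 'comment', 'photo', 'event'
        posted = db.DateTimeProperty(auto_now_add=True)    # used for ordering and the 90-day cutoff
        html = db.TextProperty()                           # pre-rendered HTML for the post

    def recent_wall(channel, limit=50):
        # Newest-first query that the Ajax side can poll for updates.
        return (WallEntry.all()
                .filter('channel =', channel)
                .order('-posted')
                .fetch(limit))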
I know there have been a lot of posts about "the wall" and activity streams; does anyone have any thoughts on whether my ideas are correct or off track?
Thanks
This is pretty much exactly what Brett Slatkin was talking about in his 2009 I/O talk. I'd highly recommend watching it for inspiration, and to see how a member of the App Engine team solves this problem.
You can also check the OpenSocial API for design ideas, and maybe http://github.com/sahid/gosnippets.