Any way to include variable date in API URL? - python

I'm pretty new to Python. I have been able to connect to an API and get the data I want; however, I notice that the date field is hardcoded in the URL:
https://api.eia.gov/v2/electricity/rto/fuel-type-data/data?api_key=XXXX&start=2022-05-18T00
I plan to run the script daily, and I am only interested in the past week of data. Is there a way to code this so I don't need to change the date manually every day?
Thanks!
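You can build the URL with Python's datetime module instead of hardcoding the date. A minimal sketch, assuming the requests library and the endpoint from the question:

```python
from datetime import date, timedelta

import requests  # assumed HTTP client; urllib would work too

API_KEY = "XXXX"  # placeholder, as in the question

# Start of the window: seven days before the day the script runs.
start = date.today() - timedelta(days=7)

# Same endpoint as above; EIA's start parameter takes a value
# like 2022-05-18T00, so append the hour suffix to the ISO date.
url = (
    "https://api.eia.gov/v2/electricity/rto/fuel-type-data/data"
    f"?api_key={API_KEY}&start={start.isoformat()}T00"
)

data = requests.get(url).json()
```

Run daily (for example from cron), this always requests the trailing week without any manual edits.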

Related

How can I organize a list of assignments in a calendar interface using python?

I am creating the backend of a web application with Python and FastAPI. The application has the following database:
(To manipulate the database I am using Tortoise-ORM).
An application interface should look something like this:
In other words, for each month of the year I must show a kind of "calendar" where the assignments corresponding to each job are grouped by project. (I only have to take care of the backend)
So my initial idea is to obtain all the assignments corresponding to a specific month (first problem: the assignments have a start date and an end date, so I need the ones that have days within that range) and group them by project and by job... but my big problem is that I need them per day, since I have to feed the calendar for each day.
How could I implement it?
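Not knowing the exact schema, here is one rough sketch of the per-day grouping step: expand each assignment's start/end range into individual days, clamped to the month being displayed. The dict-shaped records are hypothetical stand-ins for whatever Tortoise-ORM returns:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical records; in the real app these would come from a Tortoise-ORM
# query filtered to assignments whose range overlaps the target month.
assignments = [
    {"project": "P1", "job": "dev", "start": date(2022, 5, 30), "end": date(2022, 6, 2)},
    {"project": "P2", "job": "qa", "start": date(2022, 6, 10), "end": date(2022, 6, 12)},
]

month_start, month_end = date(2022, 6, 1), date(2022, 6, 30)

# calendar[day][project] -> list of jobs assigned on that day
calendar = defaultdict(lambda: defaultdict(list))

for a in assignments:
    day = max(a["start"], month_start)   # clamp the range to the month
    last = min(a["end"], month_end)
    while day <= last:
        calendar[day][a["project"]].append(a["job"])
        day += timedelta(days=1)
```

The endpoint can then accept a month and return this day-keyed mapping, which the frontend renders directly.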

Date Range for Facebook Graph API request on posts level

I am working on a tool for my company that gets data from our Facebook publications. It has not been working for a while, so I have to backfill all the historical data from June to November 2018.
My two scripts (one that gets the title and type of each publication, and the other that gets the number of link clicks) work well for recent posts, but when I try to add a date range to my Graph API request, I run into issues:
the regular query is [page_id]/posts?fields=id,created_time,link,type,name
the query for historical data is [page_id]/posts?fields=id,created_time,link,type,name,since=1529280000&until=1529712000, as the API is supposed to work with unixtime
I get perfect results for regular use, but the results for historical data only shows video publications in Graph API Explorer, with a debug message saying:
The since field does not exist on the PagePost object.
Same for the "until" field when not using "since". I tried replacing "posts/" with "feed/", but it returned the exact same result...
Do you have any idea of how to get all the publications from a Page I own on a certain date range?
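For reference, a sketch of how the timestamps could be computed and sent, assuming the requests library; note that since/until are passed here as standalone query parameters rather than appended to fields, which may be what triggers the error above:

```python
from datetime import datetime, timezone

import requests  # assumed HTTP client

PAGE_ID = "page_id"      # placeholder
ACCESS_TOKEN = "token"   # placeholder

def to_unixtime(year, month, day):
    """Unix timestamp (UTC) for a calendar date, as the API expects."""
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

params = {
    "fields": "id,created_time,link,type,name",
    "since": to_unixtime(2018, 6, 1),    # separate parameters, not
    "until": to_unixtime(2018, 11, 30),  # part of the fields list
    "access_token": ACCESS_TOKEN,
}

resp = requests.get(f"https://graph.facebook.com/{PAGE_ID}/posts", params=params)
print(resp.json())
```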
So it seems that it is unfortunately not possible to request this kind of data; third-party services must be used...

Unable to get campaigns data to push to database in google adwords api

I am facing a couple of issues in figuring out what-is-what; in spite of the humongous documentation, I am unable to resolve the following:
1. Which report type should be used to get campaign-level totals? I am trying to get the data with the headers campaign_id | campaign_name | Clicks | Impressions | Cost | Conversions.
2. I have tried to use "CAMPAIGN_PERFORMANCE_REPORT", but I get information broken up at a keyword level, while I am trying to pull the data at a campaign level.
3. I also need to push the data to a database. In the API documentation, I get samples which will either print the results on my screen or create a file on my machine. Is there a way I can get the data in JSON to push it to the database?
4. I have 7 accounts on my MCC account as of now, and the number will increase in the coming days. I don't want to manually hard-code the client customer IDs into my code, as new accounts will be created. Is there a way I can get the list of client customer IDs which are under my MCC account?
I am trying to get this data using Python as my code base and the AdWords API v201710.
To retrieve campaign performance data you need to run a campaign_performance_report. Follow this link to view all available columns for Campaign performance report.
The campaign performance report does not include stats aggregated at a keyword level. Are you using AWQL to pull your report?
Can you paste your code here? I find it odd that you are getting keyword-level data.
Run this Python example code to get campaign data (you should definitely not be getting keyword-level data with this example code).
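For example, a minimal AWQL sketch with the googleads client library (assuming credentials in googleads.yaml); selecting only campaign-level fields is what keeps the rows aggregated per campaign:

```python
import sys

from googleads import adwords  # assumed: the official googleads client library

client = adwords.AdWordsClient.LoadFromStorage()  # reads googleads.yaml
report_downloader = client.GetReportDownloader(version='v201710')

# Only campaign-level fields are selected, so the report stays at
# campaign granularity; adding criteria fields would split it further.
report_query = (
    'SELECT CampaignId, CampaignName, Clicks, Impressions, Cost, Conversions '
    'FROM CAMPAIGN_PERFORMANCE_REPORT DURING LAST_30_DAYS')

report_downloader.DownloadReportWithAwql(report_query, 'CSV', sys.stdout)
```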
Firstly, the Google AdWords API only returns report data in the following file formats: CSVFOREXCEL, CSV, TSV, XML, GZIPPED_CSV, GZIPPED_XML. Unfortunately, JSON is not supported for your use case. I would recommend GZIPPED_CSV and setting the following properties to true:
skipReportHeader
skipColumnHeader
skipReportSummary
This will simply skip all headers, report titles & totals from the report, making it very simple to upsert the data into a table.
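A sketch of the download with those flags, again assuming the googleads client library:

```python
from googleads import adwords  # assumed: the official googleads client library

client = adwords.AdWordsClient.LoadFromStorage()
report_downloader = client.GetReportDownloader(version='v201710')

report_query = (
    'SELECT CampaignId, CampaignName, Clicks, Impressions, Cost, Conversions '
    'FROM CAMPAIGN_PERFORMANCE_REPORT DURING LAST_30_DAYS')

# GZIPPED_CSV with header, titles and totals stripped: every line of the
# decompressed output is then a plain data row, ready to load into a table.
with open('campaign_report.csv.gz', 'wb') as output:
    report_downloader.DownloadReportWithAwql(
        report_query, 'GZIPPED_CSV', output,
        skip_report_header=True,
        skip_column_header=True,
        skip_report_summary=True)
```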
It is not possible to enter an MCC ID and expect the API to fetch a report for all client accounts. Each API report request contains the client ID, so you are required to create an array of all client IDs and then iterate through each one. If you are using the client library (recommended), you can simply set the client ID within the session, i.e. session.setClientCustomerId("xxx");
To automate this, use the ManagedCustomerService to retrieve all client IDs and iterate through them, so you would not need to hard-code each client ID. Google has created a handy Python file which returns the account hierarchy including child account IDs (click here).
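Roughly, the lookup could work like this (a sketch with the googleads library; the paging values are illustrative):

```python
from googleads import adwords  # assumed: the official googleads client library

PAGE_SIZE = 500

client = adwords.AdWordsClient.LoadFromStorage()  # MCC-level credentials
managed_customer_service = client.GetService(
    'ManagedCustomerService', version='v201710')

selector = {
    'fields': ['CustomerId', 'Name'],
    'paging': {'startIndex': '0', 'numberResults': str(PAGE_SIZE)},
}

# Page through every account under the MCC, collecting customer IDs.
client_ids = []
offset = 0
while True:
    page = managed_customer_service.get(selector)
    if 'entries' in page and page['entries']:
        client_ids.extend(entry['customerId'] for entry in page['entries'])
    offset += PAGE_SIZE
    if offset >= int(page['totalNumEntries']):
        break
    selector['paging']['startIndex'] = str(offset)

# Then, per account, before requesting its report:
#   client.SetClientCustomerId(client_id)
```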
Lastly, based on your question I assume you are attempting to run an ETL process. Google has an open-source AdWords extractor which I highly recommend.

Django: saving data from API into database (postgres)

I am building an app that uses the Oanda API to retrieve historical forex data. There are 28 forex pairs and 10 years worth of historical data. Since none of this data changes I was planning on saving it into my database rather than blowing up the API.
My goal is to initially populate the database for all pairs and then update the data once per minute from then on.
What I can't figure out is how to do this effectively.
Where should the logic for this live inside the Django app?
How can I execute the initial population of the data so that it will save?
It's the saving that is giving me the most problems. As far as I can tell Django only likes to save model instances from the shell.
Any help would be greatly appreciated.
You might want to take a look at this answer and to the django-admin commands.
I hope this helps! =)
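To make that concrete, here is a sketch of a custom management command for the one-off population; the Candle model and fetch_candles helper are hypothetical stand-ins for your own model and Oanda wrapper:

```python
# myapp/management/commands/populate_forex.py  (hypothetical paths)
from django.core.management.base import BaseCommand

from myapp.models import Candle           # assumed model
from myapp.services import fetch_candles  # assumed Oanda API wrapper

PAIRS = ["EUR_USD", "GBP_USD"]  # ...plus the other pairs

class Command(BaseCommand):
    help = "One-off population of historical candles for all pairs"

    def handle(self, *args, **options):
        for pair in PAIRS:
            for candle in fetch_candles(pair, years=10):
                # update_or_create keeps reruns idempotent.
                Candle.objects.update_or_create(
                    pair=pair,
                    timestamp=candle["time"],
                    defaults={"open": candle["o"], "close": candle["c"]},
                )
        self.stdout.write(self.style.SUCCESS("Initial population done"))
```

You would run it once with python manage.py populate_forex; the once-per-minute updates could then be a similar, smaller command scheduled from cron or Celery beat.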
Generally you should do operations like this inside a proper view.
If you want to save some data once per minute, just create a method that implements it and trigger it with, for example, Ajax from time to time (say, once per minute). You don't have to re-render the page from scratch - everything can work in the background.
Remember that you will need the psycopg2 module to interact with Postgres.

github api - fetch all commits of all repos of an organisation based on date

Assembla provides a simple way to fetch all commits of an organisation using api.assembla.com/v1/activity.json, and it takes to and from parameters, allowing you to get commits for a selected date range from all the spaces (repos) the user is participating in.
Is there any similar way in GitHub?
I found these for GitHub:
/repos/:owner/:repo/commits
Accepts since and until parameters for getting commits in a selected date range. But since I want commits from all repos, I have to loop over all those repos and fetch commits for each one (sketched after this list).
/users/:user/events
This shows the commits of a user. I don't have any problem looping over all the users in the org, but how can I filter for a particular date?
/orgs/:org/events
This shows commits of all users across all repos, but I don't know how to fetch them for a particular date either.
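A sketch of the looping approach with the first endpoint, assuming the requests library (pagination handling omitted for brevity):

```python
from datetime import datetime, timedelta, timezone

import requests  # assumed HTTP client

ORG = "my-org"  # placeholder organisation
HEADERS = {"Authorization": "token XXXX"}  # placeholder token

since = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()

# List the org's repos, then pull each repo's commits in the window.
repos = requests.get(
    f"https://api.github.com/orgs/{ORG}/repos", headers=HEADERS).json()

for repo in repos:
    commits = requests.get(
        f"https://api.github.com/repos/{ORG}/{repo['name']}/commits",
        params={"since": since},  # 'until' would bound the other end
        headers=HEADERS,
    ).json()
    for c in commits:
        print(repo["name"], c["sha"], c["commit"]["author"]["date"])
```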
The problem with using the /users/:user/events endpoint is that you don't just get the PushEvents; you would have to skip over non-commit events and perform more calls to the API. Assuming you're authenticated, you should be safe so long as your users aren't hyperactive.
For /orgs/:org/events I don't think they accept parameters for anything, but I can check with the API designers.
And just in case you aren't familiar, these are all paginated results, so you can go back to the beginning using the Link headers. My library (github3.py) provides iterators to do this for you automatically. You can also tell it how many events you'd like (same with commits, etc.). But yeah, I'll come back and edit after talking to the API guys at GitHub.
Edit: Conversation
You might want to check out the GitHub Archive project -- http://www.githubarchive.org/, and the ability to query the archive using Google's BigQuery. Sounds like it would be a perfect tool for the job -- I'm pretty sure you could get exactly what you want with a single query.
The other option is to call the GitHub API: iterate over all events for the organization and filter out the ones that don't satisfy your date-range and event-type (commit) criteria. But since you can't specify date ranges in the API call, you will probably make a lot of calls to get the events that interest you. Notice that you don't have to iterate over every page starting from 0 to find the page that contains the first result in the date range -- just do a (variation of) binary search over page numbers to find any page that contains a commit in the date range, and then iterate in both directions until you break out of the date range. That should reduce the number of API calls you make.
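A rough sketch of that binary search, assuming the requests library; it relies on events being ordered newest-first and on GitHub's ISO timestamps comparing correctly as strings:

```python
import requests  # assumed HTTP client

API = "https://api.github.com"

def page_dates(org, page, headers):
    """(oldest, newest) created_at strings on one org events page."""
    events = requests.get(f"{API}/orgs/{org}/events",
                          params={"page": page}, headers=headers).json()
    if not events:
        return None
    return events[-1]["created_at"], events[0]["created_at"]

def find_page_in_range(org, start, end, last_page, headers):
    """Binary-search page numbers for a page overlapping [start, end].

    Pages are newest-first, so higher page numbers hold older events.
    """
    lo, hi = 1, last_page
    while lo <= hi:
        mid = (lo + hi) // 2
        dates = page_dates(org, mid, headers)
        if dates is None:          # empty page: beyond available history
            hi = mid - 1
        else:
            oldest, newest = dates
            if newest < start:     # whole page is older than the range
                hi = mid - 1
            elif oldest > end:     # whole page is newer than the range
                lo = mid + 1
            else:
                return mid         # this page overlaps the range
    return None
```

From the returned page, walk backward and forward until the events fall outside the range.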
