I just use this simple code to post data to Firebase, but I don't know why the entry appears six times in the Firebase Realtime Database.
from firebase import firebase
url = "https://xxx.firebaseio.com/"
fb = firebase.FirebaseApplication(url, None)  # None = no authentication
fb.post("/posts", {'ID': 123})  # push a single entry under /posts
I run "python fb.py" one time only.
However the result is:
I am very confused.
Are you trying to update this field or create a new entry with a unique ID when you run your code?
Maybe try using fb.put() instead.
fb.post() is the equivalent of .push() in the JavaScript API, so it creates a unique ID for you. fb.put() is equivalent to .set() and will just set the data.
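A minimal sketch of the difference, assuming the same python-firebase setup as in the question:

from firebase import firebase

fb = firebase.FirebaseApplication("https://xxx.firebaseio.com/", None)

# post() behaves like push(): Firebase generates a new unique key on
# every call, so repeated runs create duplicate entries.
fb.post("/posts", {'ID': 123})

# put() behaves like set(): you choose the key yourself ("my-post" here
# is just an example name), so repeated runs overwrite the same entry.
fb.put("/posts", "my-post", {'ID': 123})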
I have recently started with Pyrebase and I am having trouble storing and retrieving data on a per-user basis. After completing the authentication part, I take the user's token as user['idToken'] and then push data using:
archer = {"name": "Sterling Archer", "agency": "Figgis Agency"}
db.child("agents").push(archer, user['idToken'])
I assumed that each user has a different ID token which stays the same even after logging out and back in, but when I use
all_agents = db.child("agents").get(user['idToken']).val()
print(all_agents)
it prints all the data stored in the Realtime Database, even the records stored by other users.
I tried reading the documentation, but I was unable to understand how to handle this.
What am I doing wrong here, and how can I correct it?
Use user['localId'] instead of user['idToken'] as the key in your database path, as that will keep every user's data under a separate node.
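A sketch of what that looks like with the Pyrebase calls from the question, namespacing each user's data under their localId while still passing idToken for authentication:

archer = {"name": "Sterling Archer", "agency": "Figgis Agency"}

# Write under /agents/<localId> so users do not share a node.
db.child("agents").child(user['localId']).push(archer, user['idToken'])

# Read back only this user's agents.
my_agents = db.child("agents").child(user['localId']).get(user['idToken']).val()
print(my_agents)

Keep in mind the path change only organizes the data; Firebase security rules are still needed to actually stop one user from reading another's node.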
I'm trying to get all customers who abandoned their order last week.
I was able to achieve it through the REST API; however, I wonder how to achieve the same thing via the Shopify Python API.
Here's the code I tried in Postman:
https://{shop}.myshopify.com/admin/api/2019-07/customer_saved_searches/{customer_saved_search_id}/customers.json?limit=250
Also, it seems like there's a 250-result limit in the REST API. Is there a way to exceed it?
The Python SDK is a bit of an afterthought for Shopify, sadly.
Regarding paging on the REST request: just add the query parameter ...?page=2 at the end of your URL.
Regarding the Python SDK, the following will do the trick:
saved_search_id = 23423423423  # avoid shadowing the built-in id()
css = shopify.CustomerSavedSearch.find(saved_search_id)
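To then fetch the customers inside that saved search and page past the 250-per-request cap, here is a hedged continuation of the snippet above (it assumes an authenticated shopify session is already active, and that your installed version of the library exposes the customers() helper on CustomerSavedSearch - verify against your version):

# Page through the saved search's customers, 250 at a time.
page = 1
while True:
    customers = css.customers(limit=250, page=page)
    if not customers:
        break
    for customer in customers:
        print(customer.id, customer.email)
    page += 1

Note that from API version 2019-07 onward Shopify is moving to cursor-based pagination, so the page parameter may stop working on newer versions.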
I am facing a couple of issues in figuring out what is what; in spite of the humongous documentation, I am unable to resolve the following:
1. Which report type should be used to get campaign-level totals? I am trying to get the data with the headers campaign_id | campaign_name | Clicks | Impressions | Cost | Conversions.
2. I have tried "CAMPAIGN_PERFORMANCE_REPORT", but I get the information broken up at the keyword level, whereas I am trying to pull the data at the campaign level.
3. I also need to push the data to a database. The samples in the API documentation either print the results to the screen or create a file on my machine. Is there a way to get the data as JSON so I can push it to the database?
4. I have 7 accounts under my MCC account as of now, and the number will increase in the coming days. I don't want to hard-code the client customer IDs, as new accounts will keep being created. Is there a way to get the list of client customer IDs under my MCC account?
I am trying to get this data using Python as my code base and AdWords API v201710.
To retrieve campaign performance data you need to run a CAMPAIGN_PERFORMANCE_REPORT. Follow this link to view all available columns for the Campaign Performance report.
The campaign performance report does not include stats aggregated at the keyword level. Are you using AWQL to pull your report?
Can you paste your code here? I find it odd that you are getting keyword-level data.
Run this Python example code to get campaign data (you should definitely not be getting keyword-level data with that example code).
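For reference, a minimal AWQL sketch with the googleads client library (it assumes a configured googleads.yaml; the columns match the headers you asked for):

from googleads import adwords

client = adwords.AdWordsClient.LoadFromStorage()
report_downloader = client.GetReportDownloader(version='v201710')

report_query = (
    'SELECT CampaignId, CampaignName, Clicks, Impressions, Cost, Conversions '
    'FROM CAMPAIGN_PERFORMANCE_REPORT '
    'DURING LAST_7_DAYS')

with open('campaign_report.csv', 'wb') as output_file:
    report_downloader.DownloadReportWithAwql(
        report_query, 'CSV', output_file,
        skip_report_header=True, skip_column_header=False,
        skip_report_summary=True)

Because no keyword field appears in the SELECT list, every row comes back aggregated at the campaign level.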
Firstly, the Google AdWords API only returns report data in the following file formats: CSVFOREXCEL, CSV, TSV, XML, GZIPPED_CSV, and GZIPPED_XML. Unfortunately, JSON is not supported for your use case. I would recommend GZIPPED_CSV, with the following properties set to true:
skipReportHeader
skipColumnHeader
skipReportSummary
This will simply strip all headers, report titles, and totals from the report, making it very simple to upsert the data into a table.
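A sketch of that, reusing report_downloader and report_query from the snippet above (the gzip/CSV handling is illustrative):

import csv
import gzip
import io

buffer = io.BytesIO()
report_downloader.DownloadReportWithAwql(
    report_query, 'GZIPPED_CSV', buffer,
    skip_report_header=True, skip_column_header=True,
    skip_report_summary=True)

buffer.seek(0)
rows = csv.reader(io.TextIOWrapper(gzip.GzipFile(fileobj=buffer)))
for campaign_id, name, clicks, impressions, cost, conversions in rows:
    pass  # upsert each row into your database here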
It is not possible to enter an MCC ID and expect the API to fetch a report for all client accounts. Each API report request targets a single client ID, so you are required to build a list of all client IDs and then iterate through them. If you are using the client library (recommended) then you can simply set the client ID within the session, i.e. session.setClientCustomerId("xxx"); in Java, or client.SetClientCustomerId("xxx") with the Python library.
To automate this, use the ManagedCustomerService to retrieve all client IDs and iterate through them, so you do not need to hard-code each client ID. Google has created a handy Python example which returns the account hierarchy, including the child account IDs (click here).
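A hedged sketch of that lookup, modeled on Google's account-hierarchy example for v201710 (the CanManageClients predicate filters out sub-MCCs so only leaf client accounts remain):

from googleads import adwords

client = adwords.AdWordsClient.LoadFromStorage()
managed_customer_service = client.GetService(
    'ManagedCustomerService', version='v201710')

selector = {
    'fields': ['CustomerId', 'Name'],
    'predicates': [{
        'field': 'CanManageClients',
        'operator': 'EQUALS',
        'values': ['false'],
    }],
}
page = managed_customer_service.get(selector)
client_ids = [entry['customerId'] for entry in page['entries']]

# Run the report once per client account.
for customer_id in client_ids:
    client.SetClientCustomerId(customer_id)
    # ... download the campaign report for this account ...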
Lastly, based on your question, I assume you are attempting to run an ETL process. Google has an open-source AdWords extractor which I highly recommend.
Suppose I am getting a list of users from an API in JSON format and I want to save them to my User model in Django. Saving the users might not be a problem, but I want to keep that data continuously up to date.
I am getting the API from a system which sends me the list of users who have sent emails, and this list keeps growing.
Now I am getting a list of users and I want to save them in my database. The puzzling part is this: suppose 10 users have sent messages, so I get a list of 10 users from the API, and I save each one like so:
usr = User()
usr.username = data["username"]
usr.save()
Now what happens when one more user sends an email? I will then receive 11 users.
Here I want to continuously add the new users to my database. How do I do this?
I don't know if I have made this clear or not, but I need help on this.
I guess your problem is how to trigger the retrieval of the emails.
A very easy way to do this is to use a timed autoreload - you can do this easily in HTML. See this answer. Then your Django view can check for new emails and respond with the new data.
As per my understanding, you want to check for new data entries periodically and save the new records into the DB, right?
For periodic tasks, Celery is the best utility.
Once you fetch the data periodically, get the DB data into list_db and the API data into API_list, then compare the two lists and store only the new records in the DB.
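A hedged sketch of that task (the endpoint URL is hypothetical; Django's get_or_create does the list comparison implicitly by skipping usernames that already exist):

import requests
from celery import shared_task
from django.contrib.auth.models import User

API_URL = "https://example.com/api/users/"  # hypothetical endpoint

@shared_task
def sync_users():
    # Fetch the current list of users from the API.
    for data in requests.get(API_URL).json():
        # Insert only the new users; existing usernames are left alone.
        User.objects.get_or_create(username=data["username"])

Schedule sync_users with Celery beat (for example, every few minutes) and the table stays in step with the API.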
I hope this will help.
I use Python 3 as a server-side scripting language, and I want a way to keep users logged into my site. I don't use any framework, since I prefer to hand-code pages, so how do I create session variables in Python 3 like in PHP?
The logic of a session is storing a unique session ID inside the user's cookie (the uuid package will do a perfect job of generating one), and storing the session data in a file, database, or other semi-permanent datastore.
The idea is matching the session ID that you receive from the user's cookie to the data stored somewhere on your server.
I assume that you know how to set a cookie via the response headers.
Otherwise, there is more information here: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Responses
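A minimal sketch of hand-rolled sessions using only the standard library (http.server is used here just for demonstration; the same cookie-matching logic applies to any CGI or WSGI setup):

import uuid
from http import cookies
from http.server import BaseHTTPRequestHandler, HTTPServer

SESSIONS = {}  # session_id -> per-user data; use a file or DB for persistence

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse the incoming Cookie header, if any.
        cookie = cookies.SimpleCookie(self.headers.get('Cookie'))
        session_id = cookie['session_id'].value if 'session_id' in cookie else None

        if session_id not in SESSIONS:
            # New visitor: mint a unique ID and create fresh session data.
            session_id = uuid.uuid4().hex
            SESSIONS[session_id] = {'visits': 0}

        SESSIONS[session_id]['visits'] += 1  # the "session variable"

        self.send_response(200)
        self.send_header('Set-Cookie', f'session_id={session_id}; HttpOnly')
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(f"Visits: {SESSIONS[session_id]['visits']}".encode())

HTTPServer(('localhost', 8000), Handler).serve_forever()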