I am unable to retrieve the documents in my collection inside the Firestore database. Here is my code.
Every time I run this, the console doesn't print anything. I am following the documentation available at https://firebase.google.com/docs/firestore/query-data/get-data, but it doesn't seem to work.
database_2 = firestore.client()
all_users_ref_2 = database_2.collection(u'user').stream()
for users in all_users_ref_2:
    print(u'{} => {}'.format(users.id, users.to_dict()))
Do you have multiple projects? If so, double-check that you are opening a client to the correct project. One quick way to confirm is to pass the project ID to the client:
db = firestore.Client('my-project-id')
It could be an authentication issue; you could download a service account key and reference it at the top of your project.
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/key.json"
Or, as mentioned above:
database_2 = firestore.Client("<project ID>")
Make sure Client has a capital C.
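Putting those pieces together, here is a minimal sketch that initializes the Admin SDK from a service account key and streams the collection; the key path and project ID are placeholders to replace with your own:
import firebase_admin
from firebase_admin import credentials, firestore

# Placeholder path and project ID; use your downloaded service account key.
cred = credentials.Certificate("path/to/key.json")
firebase_admin.initialize_app(cred, {"projectId": "my-project-id"})

db = firestore.client()
for doc in db.collection(u'user').stream():
    print(u'{} => {}'.format(doc.id, doc.to_dict()))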
I am attempting to retrieve and add function/host keys for an Azure Government function app via Python. I am currently working with the information from this question and the corresponding API page. While these are not specific to Azure Government, I would think the process would be similar after updating the URLs to the Azure Government versions. However, I am receiving the error "No route registered for '/api/functions/admin/token'" when running the JWT part of the given code. Is this approach feasible for what I am trying to do?
I also found somewhere that I instead might want to try a GET request like this:
resp = requests.get(
    "https://management.usgovcloudapi.net/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Web/sites/<function-app-name>/functions/admin/masterkey?api-version=20XX-XX-XX",
    headers={"Authorization": f"Bearer {something}"},
)
This gives me the error "{"error":{"code":"InvalidAuthenticationToken","message":"The access token is invalid."}}", though. If this is indeed the correct approach, then what format should the Bearer token take?
A bit late answering, but it may be useful for someone else in the future; it took me a while to find out how to do this.
If you want to retrieve the keys of a specific function within a function app, you can use the list_function_keys() function from the Python SDK.
Working with the Azure management API directly can be a bit annoying, and since the Azure CLI is written in Python, whatever operation you can do with the CLI you can also do directly in a Python script.
Here's an example of how you can retrieve the keys:
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
# Your subscription ID
SUB_ID = "00000000-0000-0000-0000-000000000000"
fn_name = "some_function" # Name of your function
app_name = "some_app" # Name of your site/function app
rg_name = "some_rg" # Resource group name to which the function belongs
web_client = WebSiteManagementClient(subscription_id=SUB_ID, credential=DefaultAzureCredential())
keys = web_client.web_apps.list_function_keys(rg_name, app_name, fn_name)
# Your keys will be accessible in the additional_properties param
print(keys.additional_properties)
Hope it helps! I'm new to Azure, so if I'm doing something wrong, please don't hesitate to point out my mistake and share a correction.
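One Azure Government-specific caveat: the InvalidAuthenticationToken error usually means the bearer token was issued for the wrong cloud. The token needs to come from the Azure Government authority and be scoped to the usgovcloudapi.net ARM endpoint. Below is a sketch of wiring that up with the SDK above, assuming a track-2 azure-mgmt-web version that accepts base_url and credential_scopes; verify against your installed version:
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUB_ID = "00000000-0000-0000-0000-000000000000"  # your subscription ID

# Authenticate against the Azure Government authority rather than public Azure.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)

# Point the management client at the Government ARM endpoint and token scope.
web_client = WebSiteManagementClient(
    credential=credential,
    subscription_id=SUB_ID,
    base_url="https://management.usgovcloudapi.net",
    credential_scopes=["https://management.usgovcloudapi.net/.default"],
)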
I have a database in Google Firebase that has streaming sensor data. I have a Shiny app that needs to read this data and map the sensors and their values.
I am trying to pull the data from Firebase into R, but I couldn't find any package that does this. The app is currently running on locally downloaded data.
I found the fireData package, but have no idea how it works.
I do know that you can pull data from Firebase with Python, but I don't know enough Python to do so; I would be willing to code it in R with rPython if necessary.
I have:
- The Firebase project link
- The username
- The password
Has anyone tried Firebase and R / Shiny in the past?
I hope my question is clear enough.
The basics to get started with the R package fireData are as follows. First you need to make sure that you have set up a Firebase account on GCP (Google Cloud Platform). Once there, set up a new project and give it a name.
Now that you have a project, select the option on the overview page that says "Add Firebase to your web app". It will give you all the credential information you need.
One way of dealing with this kind of information in R is to add it to an .Renviron file so that you do not need to share it with your code (for example, if it goes to GitHub). There is a good description of how to manage .Renviron files in the Efficient R Programming book.
API_KEY=AIzaSyBxxxxxxxxxxxxxxxxxxxLwX1sCBsFA
AUTH_DOMAIN=stackoverflow-1c4d6.firebaseapp.com
DATABASE_URL=https://stackoverflow-1c4d6.firebaseio.com
PROJECT_ID=stackoverflow-1c4d6
This will be available to your R session after you restart R (if you have made any changes).
So now you can try it out. But first, change the rules of your Firebase database to allow anyone to read and write (for these examples to work).
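For reference, a wide-open Realtime Database rules file (set in the Rules tab of the console) looks like the following; this is for experimentation only, so remember to lock it down again afterwards:
{
  "rules": {
    ".read": true,
    ".write": true
  }
}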
Now you can run the following examples
library(fireData)
api_key <- Sys.getenv("API_KEY")
db_url <- Sys.getenv("DATABASE_URL")
project_id <- Sys.getenv("PROJECT_ID")
project_domain <- Sys.getenv("AUTH_DOMAIN")
upload(x = mtcars, projectURL = db_url, directory = "new")
The upload function will return the name of the document it saved, which you can then use to download it.
> upload(x = mtcars, projectURL = db_url, directory = "main")
[1] "main/-L3ObwzQltt8IKjBVgpm"
The dataframe (or vector of values) you uploaded will be available in your Firebase database console immediately under that name, so you can verify that everything went as expected.
Now, for instance, if the name that was returned was main/-L3ObwzQltt8IKjBVgpm, then you can download it as follows.
download(projectURL = db_url, fileName = "main/-L3ObwzQltt8IKjBVgpm")
You can require authentication once you have created users. For example, you can create users like so (they appear in your Firebase console).
createUser(projectAPI = api_key, email = "test@email.com", password = "test123")
You can then get their user information and token.
registered_user <- auth(api_key, email = "test@email.com", password = "test123")
And then use the idToken that is returned to access the files.
download(projectURL = db_url, fileName = "main/-L3ObwzQltt8IKjBVgpm",
         secretKey = api_key,
         token = registered_user$idToken)
I'm trying to automatically set up a Git system and I'm stuck at the step where I want to add a user's key using the GitHub API. This is what I have so far.
import glob
import os
import urllib
import urllib2

USER_SSH_PUB = glob.glob(os.path.expanduser('~/.ssh/temp.k.pub'))
user_Ssh_Pub_Key_File = open(USER_SSH_PUB[0], "r")
GITHUB_URL = 'https://api.github.com/users/abc/keys'
key_Data = urllib.urlencode({"title": "abcd", "key": user_Ssh_Pub_Key_File.read()})
request = urllib2.Request(GITHUB_URL, key_Data)
response = urllib2.urlopen(request)
print response.read()
I get a 404 when I do this. Has anybody done this?
I assume you want to take a public key and add that to a User's set of keys, i.e., through this API.
The problem is that you can only do this for the authenticated user; you cannot do this on behalf of a different user. So your GITHUB_URL would have to be https://api.github.com/user/keys and you would have to authenticate as user abcd in order to do that.
I don't think there are any Python wrappers for the API using urllib2 that work (well), but there are a few listed here, including mine, which is pip-installable. With my library, your code would look like:
import os

from github3 import login

g = login('abcd', password)
with open(os.path.expanduser('~/.ssh/temp.k.pub'), 'r') as fd:
    key = g.create_key('abcd', fd.read())
print("Created {0}".format(key.title))
There are other popular wrappers like pygithub3, but I'm not familiar with them.
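If you would rather not use a wrapper at all, a minimal sketch with the requests library against the same endpoint might look like this; the token value and key path are placeholders, and the call must be authenticated as the user whose key you are adding:
import requests

# Placeholder token; generate a personal access token for the target user.
headers = {"Authorization": "token <personal-access-token>"}

with open("/home/user/.ssh/temp.k.pub") as fd:
    payload = {"title": "abcd", "key": fd.read()}

resp = requests.post("https://api.github.com/user/keys",
                     json=payload, headers=headers)
print(resp.status_code, resp.json())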
I need to upload a file/document to Google Docs on a GAE application. This should be simple enough, but I'm having a lot of trouble with the API.
The context:
import gdata.docs.service
client = gdata.docs.service.DocsService()
client.ClientLogin('gmail', 'pass')
ms = gdata.MediaSource(#what goes in here?)
client.Upload(media_source=ms, title='title')
To upload I'm using client.Upload(), which takes a MediaSource (wrapper) object as a parameter. However, MediaSource() seems to only accept a filepath for a document: 'C:/Docs/ex.doc'.
Since I'm on GAE with no filesystem, I can only access the file through the Blobstore or a direct URL to the file. But how do I input that into MediaSource()?
There seems to be a way in Java to accomplish this by using MediaByteArraySource(), but nothing for Python.
If anyone is curious, here's how I solved this problem using the Document List API.
I didn't want to use the Drive SDK since it complicates a lot of things. It's much simpler with the List API to just authenticate/log in without the need for some OAuth trickery. This is using version 2.0.14 of the gdata Python library, which is not the current version (2.0.17), but it seems to have a simpler upload mechanism.
There's also slightly more (still sparse) documentation online for 2.0.14, though I had to piece this together from various sources and trial and error. The downside is that you cannot upload PDFs with this version. This code will not work with 2.0.17.
Here's the code:
import StringIO

import gdata.docs.service
import gdata.docs.data
from google.appengine.api import urlfetch

# get file from url
result = urlfetch.fetch('http://example.com/test.docx')
headers = result.headers
data = result.content

# authenticate client object
client = gdata.docs.service.DocsService()
client.ClientLogin('gmail', 'password')

# create MediaSource file wrapper; MediaSource expects a file-like object,
# so wrap the fetched bytes in StringIO
ms = gdata.MediaSource(file_handle=StringIO.StringIO(data),
                       content_type=headers['content-type'],
                       content_length=int(headers['content-length']))

# upload to a specific folder and get the edit URL of the doc
google_doc_name = 'title'
folder_uri = '/feeds/folders/private/full/folder:j7XO8SJj...'
entry = client.Upload(ms, google_doc_name, folder_or_uri=folder_uri)
edit_url = entry.GetAlternateLink().href
The Google Drive SDK docs include a complete sample application written in Python that runs on App Engine:
https://developers.google.com/drive/examples/python
You can use it as reference for your implementation and to see how to save a file from App Engine.
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information.
I can run the script as a cron job to periodically grab new information, but how do I then get that data into the Django app?
The Django server is running on a Linux box under Apache / FastCGI if that makes a difference.
[Edit] In response to Srikar's question, "When you are saying 'get that data into the Django app', what exactly do you mean?":
The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
I think Celery is what you are looking for.
You can write a custom management command to load data according to your needs and run that command through a cron job; see Writing custom commands in the Django docs. A sketch is shown below.
You can also try the existing loaddata command, but it loads data from fixtures added in your app directory.
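As a rough illustration, such a command might look like the following; the app name, model, and parse_inbox() helper are hypothetical stand-ins for your existing scraping code:
# myapp/management/commands/import_emails.py  (hypothetical module path)
from django.core.management.base import BaseCommand

from myapp.models import EmailRecord   # hypothetical app and model
from myapp.scraper import parse_inbox  # hypothetical wrapper around your script


class Command(BaseCommand):
    help = "Import scraped email data into the app's database"

    def handle(self, *args, **options):
        records = parse_inbox()  # assume it returns a list of field dicts
        for fields in records:
            EmailRecord.objects.create(**fields)
        self.stdout.write("Imported %d records" % len(records))
Cron then runs python manage.py import_emails on whatever schedule you need.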
I have done the same thing.
Firstly, my script was already parsing the emails and storing them in a db, so I set the db up in settings.py and used python manage.py inspectdb to create a model based on that db.
Then it's just a matter of building a view to display the information from your db.
If your script doesn't already use a db it would be simple to create a model with what information you want stored, and then force your script to write to the tables described by the model.
Forget about this being a Django app for a second. It is just a load of Python code.
What this means is that your Python script is absolutely free to import the database models you have in your Django app and use them as it would any standard module in your project.
The only difference is that you may need to take care to set up everything Django needs to work with those modules, whereas when a request enters through the normal web interface that is taken care of for you. A sketch of that setup follows.
Just import Django and the required models.py/any other modules you need from your app. It is your code, not a black box. You can import it from wherever the hell you want.
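A minimal sketch of that bootstrapping, assuming a project package named myproject and a hypothetical EmailRecord model:
import os

import django

# Point Django at your settings before importing any models.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()

from myapp.models import EmailRecord  # hypothetical app and model

EmailRecord.objects.create(subject="hello", body="...")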
EDIT: The link from Rohan's answer to the Django docs for custom management commands is definitely the least painful way to do what I said above.
When you are saying "get that data into the Django app", what exactly do you mean?
I am guessing that you are using some sort of database (like MySQL). Insert whatever data you have collected from your cron job into the same tables that your Django app is accessing; that way your changes are immediately reflected to users of the app, since they read from the same tables.
Best way?
Make a view on the Django side to handle receiving the data, and have your script do an HTTP POST to a URL registered to that view; a sketch follows.
You could also import the model and such from inside your script, but I don't think that's a very good idea.
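A minimal sketch of such a view, with hypothetical model and field names; in practice you would add real authentication rather than just disabling CSRF:
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

from myapp.models import EmailRecord  # hypothetical model


@csrf_exempt  # the posting script has no CSRF token; secure this properly
def ingest(request):
    if request.method != "POST":
        return JsonResponse({"error": "POST required"}, status=405)
    fields = json.loads(request.body)
    EmailRecord.objects.create(**fields)
    return JsonResponse({"status": "ok"})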
Have your script send an HTTP POST request like so, using the Requests library:
>>> import requests
>>> files = {'report.xls': open('report.xls', 'rb')}
>>> r = requests.post(url, files=files)
>>> r.text
Then on the receiving side you can use web.py to process the info like this:
x = web.input()
then do whatever you want with x
On the receiving side of the POST request, import web and write a function that handles the POST. For example:
def POST(self):
    x = web.input()
If you don't want to use HTTP to send messages back and forth, you could just have the script write the email info to a .txt file and then have your Django app open the file and read it.
EDIT:
You could set your cron job to read the emails at, say, 8 AM and then write them to a text file, info.txt. Then in your code write something like:
import time

if time.strftime("%H") == "09":
    f = open("info.txt")
    info = f.read()
    f.close()
That will read the file between 9 AM and 10 AM (note that %H is zero-padded, so compare against "09", and pass the filename as a string). If you want it to only check once, add the minutes to the if statement as well.