I have a Firestore database like this: (https://i.stack.imgur.com/QSZ8m.png)
My code intends to update the fields "intensity" and "seconds" (under the document "1", under the collection "Event") with the values "test" and 123, respectively.
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db
# Initialize Firebase admin
cred = credentials.Certificate('taiwaneew-firebase-adminsdk-odl9d-222bd18a4e.json')
firebase_admin.initialize_app(cred, {
'databaseURL': 'https://taiwaneew.firebaseio.com/'
})
# Define a function to send data to the Firebase database
def send_data(param1, param2):
ref = db.reference(path='/TaiwanEEW/Event/1')
ref.update({
'intensity': param1,
'seconds': param2
})
# Invoke our function to send data to Firebase
send_data("test", 123)
The code, however, causes the following error:
File "/Users/joelin/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/firebase_admin/db.py", line 929, in request
return super(_Client, self).request(method, url, **kwargs)
File "/Users/joelin/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/firebase_admin/_http_client.py", line 119, in request
resp.raise_for_status()
File "/Users/joelin/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://taiwaneew.firebaseio.com/TaiwanEEW/Event/1.json?print=silent
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/joelin/PycharmProjects/pythonProject/eewPush.py", line 20, in <module>
send_data("777", 778)
File "/Users/joelin/PycharmProjects/pythonProject/eewPush.py", line 14, in send_data
ref.update({
File "/Users/joelin/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/firebase_admin/db.py", line 341, in update
self._client.request('patch', self._add_suffix(), json=value, params='print=silent')
File "/Users/joelin/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/firebase_admin/db.py", line 931, in request
raise _Client.handle_rtdb_error(error)
firebase_admin.exceptions.NotFoundError: 404 Not Found
I have tried to identify the cause of the error, but it persists. I would really like to hear some opinions if you have any experience with this. Thank you so much!
I have double-checked that my credentials JSON file is correct and in the same directory as the Python file, and that my database permissions for read and write are set to true.
I tried both '/TaiwanEEW/Event/1' and '/taiwaneew/Event/1' for the reference path because I am not sure if it should be the project name or the database name.
I could not find an error in the code itself, so here is a workaround. Your screenshot shows a Cloud Firestore database, while firebase_admin.db targets the Realtime Database, which is most likely why the path returns a 404.
You can use firebase_admin.firestore instead.
Your db object is then instantiated with db = firestore.client() and has access to all collections and documents (see the docs).
Complete solution:
import firebase_admin
from firebase_admin import credentials, firestore
cred = credentials.Certificate('taiwaneew-firebase-adminsdk-odl9d-222bd18a4e.json')
# no need for a URL here if your credentials already contain the project ID.
firebase_admin.initialize_app(cred)
db = firestore.client()
# Define a function to send data to the Firebase database
def send_data(param1, param2):
doc = db.document('Event/1') # or doc = db.collection('Event').document('1')
doc.update({
'intensity': param1,
'seconds': param2
})
# Invoke our function to send data to Firebase
send_data("test", 123)
Related
I am trying to send a request to OVH's API using their Python API wrapper to check if my IP address is in mitigation. When I do this, I get the following error:
result = client.get(f'/ip/{quote(ipblock)}/mitigation/{ipOnMitigation}', _need_auth=False)
File "/usr/local/lib/python3.8/dist-packages/ovh/client.py", line 347, in get
return self.call('GET', _target, None, _need_auth)
File "/usr/local/lib/python3.8/dist-packages/ovh/client.py", line 442, in call
raise ResourceNotFoundError(json_result.get('message'),
ovh.exceptions.ResourceNotFoundError: Got an invalid (or empty) URL
Here is my code:
import json
import ovh
from urllib.parse import quote
client = ovh.Client(
endpoint='ovh-ca',
application_key='xxxxxxx',
application_secret='xxxxxxx',
consumer_key='xxxxxxxxx'
)
ipblock = "xxxx/28"
ipOnMitigation = "xxx/32"
result = client.get(f'/ip/{quote(ipblock)}/mitigation/{ipOnMitigation}', _need_auth=False)
# Pretty print
print(json.dumps(result))
Maybe the endpoint needs to look like a path (i.e. /ovh-ca)?
Edit: the exception indicates that something is wrong with the request. If it's not the endpoint, it must be some other parameter you're passing to the client (or not passing). I see you have an app key/secret pair. Does this endpoint also require a consumer secret (to build a request signature, for instance)?
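One concrete thing to check along those lines (a guess based on the "invalid (or empty) URL" message, not something the traceback proves): quote() leaves '/' unescaped by default, and ipOnMitigation is not quoted at all, so the "/32" suffix becomes an extra path segment. A sketch that escapes both values:

from urllib.parse import quote

# Escape '/' inside both path segments so e.g. "xxxx/28" becomes "xxxx%2F28".
target = '/ip/{}/mitigation/{}'.format(quote(ipblock, safe=''), quote(ipOnMitigation, safe=''))
result = client.get(target, _need_auth=False)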
We have a Python program that needs to send logs to Splunk. Our Splunk admins have created an HTTP collector endpoint to publish logs to and gave us the following:
index
token
hostname
URI
We can't find where to input the URI in the Splunk Python SDK client. For example:
import splunklib.client as client
import splunklib.results as results_util
HOST="splunkcollector.hostname.com"
URI="services/collector/raw"
TOKEN="ABCDEFG-8A55-4ABB-HIJK-1A7E6637LMNO"
PORT=443
# Create a Service instance and log in
service = client.connect(
host=HOST,
port=PORT,
token=TOKEN)
# Retrieve the index for the data
myindex = service.indexes["cloud_custodian"]
# Submit an event over HTTP
myindex.submit("Dummy test python client log")
As you can see I never use the URI variable. The above code results in:
Traceback (most recent call last):
File "splunk_log.py", line 15, in <module>
myindex = service.indexes["cloud_custodian"]
File "/usr/local/lib/python2.7/site-packages/splunklib/client.py", line 1230, in __getitem__
raise KeyError(key)
KeyError: UrlEncoded('cloud_custodian')
I ended up performing a stock POST with requests. I'm not sure if the Splunk client is even intended to support the HTTP Event Collector.
import requests
url='https://splunkcollector.hostname.com:443/services/collector/event'
authHeader = {'Authorization': 'Splunk {}'.format('ABCDEFG-8A55-4ABB-HIJK-1A7E6637LMNO')}
jsonDict = {"index":"cloud_custodian", "event": { 'message' : "Dummy test python client log" } }
r = requests.post(url, headers=authHeader, json=jsonDict, verify=False)
print r.text
You should look into the HTTP Event Collector in Splunk. It's as simple as enabling it, generating a token, and making the call.
If you wanted to send data to Splunk HEC, it would look like this:
<protocol>://<host>:<port>/<endpoint>
https://docs.splunk.com/Documentation/SplunkCloud/6.6.0/Data/UsetheHTTPEventCollector
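For illustration, filling that template in with the values already used in this thread (a sketch only; the host and endpoint are the placeholders from above):

# <protocol>://<host>:<port>/<endpoint>
protocol, host, port, endpoint = "https", "splunkcollector.hostname.com", 443, "services/collector/event"
url = "{}://{}:{}/{}".format(protocol, host, port, endpoint)
# -> https://splunkcollector.hostname.com:443/services/collector/event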
I built a web app with a REST API using Flask. I take advantage of Flask's g to save the current user and pull the user's data I want from the datastore (the app is hosted on Google Cloud). However, I would like to implement Google Cloud Endpoints because of some of its advantages, but if I call one of the URLs in Cloud Endpoints I get this error:
Traceback (most recent call last):
File "/Users/manuelgodoy/Documents/Google/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 239, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/Users/manuelgodoy/Documents/Google/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 298, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/Users/manuelgodoy/Documents/Google/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 95, in LoadObject
__import__(cumulative_path)
File "/Users/manuelgodoy/Projects/Eatsy/Eatsy/src/application/apis.py", line 18, in <module>
user = g.user
File "/Users/manuelgodoy/Projects/Eatsy/Eatsy/src/lib/werkzeug/local.py", line 338, in __getattr__
return getattr(self._get_current_object(), name)
File "/Users/manuelgodoy/Projects/Eatsy/Eatsy/src/lib/werkzeug/local.py", line 297, in _get_current_object
return self.__local()
File "/Users/manuelgodoy/Projects/Eatsy/Eatsy/src/lib/flask/globals.py", line 27, in _lookup_app_object
raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
How can I use Flask's context variables like g, login_required, current_user, etc. with Cloud Endpoints?
In my code I store current_user in g.user and I have an endpoint where I get the g.user so I can get the id.
views.py:
from flask.ext.login import login_user, logout_user, current_user, login_required
from flask import session, g, request
from urlparse import urljoin  # needed for urljoin() below (Python 2)
import requests

@app.before_request
def before_request():
    log.info('Received request: %s' % request.path)
    g.user = current_user

@app.route('/recommendations', methods=['GET'])
def recommendations_retrieve():
    # This HTTP call is what I'd like to get rid of
    app_url = request.url_root
    usr_id = g.user.key().id()
    d = {'id': str(usr_id)}
    r = requests.get(urljoin(app_url, "/_ah/api/myapp/v1/recommendations"),
                     params=d)
    return (r.text, r.status_code, r.headers.items())
My Cloud Endpoints file looks like this:
from views import g
@endpoints.api(name='myapp', version='v1', description='myapp API',
               allowed_client_ids=[WEB_CLIENT_ID, endpoints.API_EXPLORER_CLIENT_ID])
class MyAppApi(remote.Service):

    @endpoints.method(IdRequestMessage, RecommendationsResponseMessage,
                      path='recommendations', http_method='GET',
                      name='recommendations.recommendations')
    def recommendations(self, request):
        # I would prefer to use this, but I get the
        # "Working outside the app context" error
        # when I uncomment it
        # user = User.get_by_id(g.user.key().id())
        user = User.get_from_message(request)
        response = user.get_recommendations()
        return response
My Javascript function is as follows:
loadRecommendationsFromServer: function() {
  $.ajax({
    // This is how I *would* call it, if it worked
    //url: this.props.url+"/_ah/api/myapp/v1/recommendations",
    //data: JSON.stringify({'id':2}),
    url: this.props.url+"/recommendations",
    dataType: 'json',
    success: function(data) {
      this.setState({data: data.recommendations});
    }.bind(this),
    error: function(xhr, status, err) {
      console.error(this.props.url, status, err.toString());
    }.bind(this)
  });
}
The existing code works - how can I avoid having to make an HTTP request in my view handler and avoid the RuntimeError in the MyAppApi service when I use g.user?
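As a minimal illustration of why the RuntimeError appears (plain Flask, nothing Endpoints-specific): flask.g only exists inside an application or request context, and the traceback shows apis.py reading g.user at import time, outside any Flask request, so there is no context to look it up in.

from flask import Flask, g

app = Flask(__name__)

with app.app_context():
    g.user = 'demo'
    print(g.user)  # works: we are inside an application context

# Outside the with-block, reading g.user raises
# RuntimeError: working outside of application context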
I have tried everything I can find to get this to work...
I'm working on a plugin for a Python-based task program (called GTG). I'm running GNOME on openSUSE Linux.
Code (Python 2.7):
def initialize(self):
"""
Intialize backend: try to authenticate. If it fails, request an authorization.
"""
super(Backend, self).initialize()
path = os.path.join(CoreConfig().get_data_dir(), 'backends/gtask', 'storage_file-%s' % self.get_id())
# Try to create leading directories that path
path_dir = os.path.dirname(path)
if not os.path.isdir(path_dir):
os.makedirs(path_dir)
self.storage = Storage(path)
self.authenticate()
def authenticate(self):
""" Try to authenticate by already existing credences or request an authorization """
self.authenticated = False
credentials = self.storage.get()
if credentials is None or credentials.invalid == True:
self.request_authorization()
else:
self.apply_credentials(credentials)
# Request periodic import, avoid waiting a long time
# self.start_get_tasks()
def apply_credentials(self, credentials):
""" Finish authentication or request for an authorization by applying the credentials """
http = httplib2.Http(ca_certs = '/etc/ssl/certs/ca_certs.pem', disable_ssl_certificate_validation=True)
http = credentials.authorize(http)
# Build a service object for interacting with the API.
self.service = build_service(serviceName='tasks', version='v1', http=http, developerKey='AIzaSyAmUlk8_iv-rYDEcJ2NyeC_KVPNkrsGcqU')
# self.service = build_service(serviceName='tasks', version='v1')
self.authenticated = True
def _authorization_step2(self, code):
credentials = self.flow.step2_exchange(code)
# credential = self.flow.step2_exchange(code)
self.storage.put(credentials)
credentials.set_store(self.storage)
return credentials
def request_authorization(self):
""" Make the first step of authorization and open URL for allowing the access """
self.flow = OAuth2WebServerFlow(client_id=self.CLIENT_ID,
client_secret=self.CLIENT_SECRET,
scope='https://www.googleapis.com/auth/tasks',
redirect_uri='http://localhost:8080',
user_agent='GTG')
oauth_callback = 'oob'
auth_uri = self.flow.step1_get_authorize_url(oauth_callback)
# credentials = self.flow.step2_exchange(code)
# url = self.flow.step1_get_authorize_url(oauth_callback)
browser_thread = threading.Thread(target=lambda: webbrowser.open_new(auth_uri))
browser_thread.daemon = True
browser_thread.start()
# Request the code from user
BackendSignals().interaction_requested(self.get_id(), _(
"You need to <b>authorize GTG</b> to access your tasks on <b>Google</b>.\n"
"<b>Check your browser</b>, and follow the steps there.\n"
"When you are done, press 'Continue'."),
BackendSignals().INTERACTION_TEXT,
"on_authentication_step")
def on_authentication_step(self, step_type="", code=""):
if step_type == "get_ui_dialog_text":
return _("Code request"), _("Paste the code Google has given you"
"here")
elif step_type == "set_text":
try:
credentials = self._authorization_step2(code)
except FlowExchangeError, e:
# Show an error to user and end
self.quit(disable = True)
BackendSignals().backend_failed(self.get_id(),
BackendSignals.ERRNO_AUTHENTICATION)
return
self.apply_credentials(credentials)
# Request periodic import, avoid waiting a long time
self.start_get_tasks()
The browser window opens up and I am presented with a code from Google. The program opens a small window where I can enter the code from Google. When that happens, I get this in the console:
No handlers could be found for logger "oauth2client.util"
Created new window in existing browser session.
[522:549:0108/063825:ERROR:nss_util.cc(821)] After loading Root Certs, loaded==false: NSS error code: -8018
but the SSL icon is green in Chrome...
Then, when I submit the code, I get:
Exception in thread Thread-10:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/site-packages/GTG/backends/backend_gtask.py", line 204, in on_authentication_step
credentials = self._authorization_step2(code)
File "/usr/lib/python2.7/site-packages/GTG/backends/backend_gtask.py", line 151, in _authorization_step2
credentials = self.flow.step2_exchange(code)
File "/usr/lib/python2.7/site-packages/oauth2client/util.py", line 132, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/oauth2client/client.py", line 1283, in step2_exchange
headers=headers)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1586, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1328, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1250, in _conn_request
conn.connect()
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1037, in connect
raise SSLHandshakeError(e)
SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
The file is called backend_gtask.py...
I have tried importing the certificate as stated here: How to update cacerts.txt of httplib2 for Github?
I have tried to disable verification (httplib2.Http(disable_ssl_certificate_validation=True)) as stated all over the web.
I have updated the Python packages (which seemed to make things worse).
I have copied ca_certs.pem back and forth between /etc/ssl... and /usr/lib/python2.7/...
When I visit the auth page in a browser, it says the certificate is verified...
What else can I possibly check?
SHORT TEST CODE:
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.tools import run
from oauth2client.file import Storage
CLIENT_ID = 'id'
CLIENT_SECRET = 'secret'
flow = OAuth2WebServerFlow(client_id=CLIENT_ID,
client_secret=CLIENT_SECRET,
scope='https://www.googleapis.com/auth/tasks',
redirect_uri='http://localhost:8080')
storage = Storage('creds.data')
credentials = run(flow, storage)
print "access_token: %s" % credentials.access_token
Found that here: https://github.com/burnash/gspread/wiki/How-to-get-OAuth-access-token-in-console%3F
OK...
Big thanks to Steffen Ullrich.
httplib2 version 0.9 tries to use the system certificates and not the certs.txt file that used to be shipped with it. It also enforces verification.
httplib2 can take a couple of useful parameters, notably ca_certs. Use it to point to the actual *.pem file in your SSL installation. It cannot be a folder; it must be a real file.
I use the following in the initialization of the plugin:
self.http = httplib2.Http(ca_certs = '/etc/ssl/ca-bundle.pem')
Then, for all subsequent calls to httplib or google client libraries, I pass my pre-built http object as a parameter like this:
credentials = self.flow.step2_exchange(code, self.http)
self.http = credentials.authorize(self.http)
Now SSL connections work with the new httplib2...
I will eventually have to make sure the plugin can find certificates on any system, but at least I know what the problem was.
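If the plugin later needs to find a bundle on systems where /etc/ssl/ca-bundle.pem does not exist, one option (an extra dependency, so this is only a sketch) is to ask the certifi package for a bundle path:

import certifi
import httplib2

# certifi.where() returns the path to a bundled cacert.pem, independent of the distro layout.
# In the plugin this would be assigned to self.http, as above.
http = httplib2.Http(ca_certs=certifi.where())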
Thanks again to Steffen Ullrich for walking me through this.
See this answer for an easier fix without touching your code: just set your certificate bundle pem file path in an environment variable:
export HTTPLIB2_CA_CERTS="\path\to\your\ca-bundle"
I am trying to list items in an S3 container with the following code.
import boto.s3
from boto.s3.connection import OrdinaryCallingFormat
conn = boto.connect_s3(calling_format=OrdinaryCallingFormat())
mybucket = conn.get_bucket('Container001')
for key in mybucket.list():
print key.name.encode('utf-8')
Then I get the following error.
Traceback (most recent call last):
File "test.py", line 5, in <module>
mybucket = conn.get_bucket('Container001')
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 370, in get_bucket
bucket.get_all_keys(headers, maxkeys=0)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 358, in get_all_keys
'', headers, **params)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 325, in _get_all
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 301 Moved Permanently
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>PermanentRedirect</Code>
  <Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
  <RequestId>99EBDB9DE3B6E3AF</RequestId>
  <Bucket>Container001</Bucket>
  <HostId>5El9MLfgHZmZ1UNw8tjUDAl+XltYelHu6d/JUNQsG3OaM70LFlpRchEJi9oepeMy</HostId>
  <Endpoint>Container001.s3.amazonaws.com</Endpoint>
</Error>
I tried to search for how to send requests to the specified endpoint, but couldn't find useful information.
How do I avoid this error?
As @garnaat mentioned and @Rico answered in another question, connect_to_region works with OrdinaryCallingFormat:
conn = boto.s3.connect_to_region(
region_name = '<your region>',
aws_access_key_id = '<access key>',
aws_secret_access_key = '<secret key>',
calling_format = boto.s3.connection.OrdinaryCallingFormat()
)
bucket = conn.get_bucket('<bucket name>')
In a terminal, run:
nano ~/.boto
If there are any configs in there, try commenting them out or renaming the file, then connect again (this is what helped me).
http://boto.cloudhackers.com/en/latest/boto_config_tut.html
That page lists the boto config file locations. Take a look at them one by one and clean them all; boto will then work with its default configs. Configs may also live in .bash_profile, .bash_source, and so on.
I guess you should rely only on the KEY/SECRET pair.
Also, try to use:
calling_format = boto.s3.connection.OrdinaryCallingFormat()
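For context, that keyword argument goes into the connection call itself, much like the connect_to_region answer above (a sketch; region and bucket name are placeholders, and credentials come from your cleaned-up default config or the environment):

import boto.s3
import boto.s3.connection

conn = boto.s3.connect_to_region(
    '<your region>',
    calling_format=boto.s3.connection.OrdinaryCallingFormat()
)
bucket = conn.get_bucket('<bucket name>')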