How to fetch Firebase data? - Python

I am new to Python and Firebase and I am trying to flatten my Firebase database.
I have a database in this format:
Each cat has thousands of entries under it. All I want is to fetch the cat names and put them in an array; for example, I want the output to be ['cat1', 'cat2', ...].
I was using this tutorial
http://ozgur.github.io/python-firebase/
from firebase import firebase
firebase = firebase.FirebaseApplication('https://your_storage.firebaseio.com', None)
result = firebase.get('/Data', None)
The problem with the above code is that it will attempt to fetch all the data under Data. How can I fetch only the "cats"?

If you want to get the values inside the cats as columns, try using Pyrebase. Install it with pip install pyrebase from cmd or the Anaconda prompt (the latter is preferred if you didn't add pip or Python to your environment paths). After installing:
import pyrebase

config = {
    "apiKey": "yourapikey",
    "authDomain": "yourauthdomain",
    "databaseURL": "yourdatabaseurl",
    "storageBucket": "yourstoragebucket",
    "serviceAccount": "yourserviceaccount"
}
Note: you can find all the information above in your Firebase console:
https://console.firebase.google.com/project/ >>> your project >>> click on the "</>" icon with the tag "Add Firebase to your web app".
Back to the code...
Make a neat function definition so you can store it in a .py file:
def connect_firebase():
    # add a way to encrypt these; I'm a beginner myself and don't know how
    username = "usernameyoucreatedatfirebase"
    password = "passwordforaboveuser"
    firebase = pyrebase.initialize_app(config)
    auth = firebase.auth()
    # authenticate a user > figure out how not to leave these hardcoded
    user = auth.sign_in_with_email_and_password(username, password)
    # user['idToken']
    # at Pyrebase's git the author said the token expires every 1 hour, so it needs to be refreshed
    user = auth.refresh(user['refreshToken'])
    # set the database
    db = firebase.database()
    return db
OK, now save this into a neat .py file.
Next, in your new notebook or main .py, import this new .py file, which we'll call auth.py from now on:
from auth import *
import pandas as pd  # needed for the DataFrame below

# assign it to a variable
db = connect_firebase()
# and now the hard/easy part that took me a while to figure out:
# notice the value inside .child(); it should be the parent node that holds all the cat keys
values = db.child('cats').get()
# to load everything into a DataFrame you'll need to use .val()
data = pd.DataFrame(values.val())
And that's it; print(data.head()) to check whether the values/columns are where you expect them to be.
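If all you need is the list of cat names from the question, the DataFrame columns should already be exactly that (a small sketch, assuming each cat is a top-level child of the node you queried):

cat_names = list(data.columns)
print(cat_names)  # e.g. ['cat1', 'cat2', ...]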

The Firebase Realtime Database is one big JSON tree: when you fetch data at a location in your database, you also retrieve all of its child nodes.
The best practice is to denormalize your data, creating multiple locations (nodes) for the same data:
Many times you can denormalize the data by using a query to retrieve a subset of the data.
In your case, you may create a second node named "categories" where you list "only" the category names.
/cat1
/...
/cat2
/...
/cat3
/...
/cat4
/...
/categories
/cat1
/cat2
/cat3
/cat4
In this scenario you can use the update() method to write to more than one location at the same time.
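For example, a multi-location write with Pyrebase could look roughly like this (a sketch only; 'cat5' and the field names are illustrative, and db is the database handle from the answer above):

db.update({
    "cat5/some_key": "some_value",   # the data itself
    "categories/cat5": True          # the lightweight index entry
})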

I was exploring the Pyrebase documentation. According to it, we can extract only the keys at a given path:
To return just the keys at a particular path, use the shallow() method.
all_user_ids = db.child("users").shallow().get()
In your case, it'll be something like:
firebase = pyrebase.initialize_app(config)
db = firebase.database()
allCats = db.child("Data").shallow().get()
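To turn that into the plain list the question asks for, something like this should work (a sketch; a shallow get returns only the keys, so .val() gives a mapping whose keys are the cat names):

cat_names = list(allCats.val())
print(cat_names)  # e.g. ['cat1', 'cat2', ...]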
Let me know if it didn't help.

Related

Delete Maximo Location Hierarchy using Automation Script

I have a question regarding the Maximo Location Hierarchy. I want to delete location hierarchy records using an enterprise service via a CSV file. An additional field, "MARKFORDELETE", has been created, and in the CSV file the user needs to enter "1" in the MARKFORDELETE field. I have created the following action automation script on the LOCHIERARCHY object:
from psdi.util.logging import MXLogger
from psdi.util.logging import MXLoggerFactory
from psdi.mbo import MboConstants
from psdi.mbo import MboSetRemote
from psdi.server import MXServer
from psdi.util import MXException

mxServer = MXServer.getMXServer()
userInfo = mbo.getUserInfo()

if launchPoint == 'LOCHIERDEL2':
    locHierarchySet = mbo.getMboSet("LOCHIERARCHYDEL")
    locHierarchySet.setWhere("markfordelete = 1")
    locHierarchySet.reset()
    locHier = locHierarchySet.moveFirst()
    while locHier is not None:
        locHierarchy = locHierarchySet.getMbo(0)
        locAncestorSet = mxServer.getMboSet("LOCANCESTOR", userInfo)
        locAncestorSet.setWhere("location='" + locHierarchy.getString("LOCATION") + "' and ancestor='" + locHierarchy.getString("PARENT") + "' and systemid='" + locHierarchy.getString("SYSTEMID") + "' and siteid='" + locHierarchy.getString("SITEID") + "'")
        locAncestorSet.reset()
        locAnc = locAncestorSet.moveFirst()
        if locAncestorSet.count() == 1:
            locAncestor = locAncestorSet.getMbo(0)
            locAncestor.delete(11L)
            locAncestorSet.save(11L)
        locHierarchy.delete(11L)
        locHierarchySet.save(11L)
        locHierarchySet2 = mbo.getMboSet("LOCHIERARCHYDEL3")
        locHier2 = locHierarchySet2.moveFirst()
        while locHier2 is not None:
            locHierarchy2 = locHierarchySet2.getMbo(0)
            locHierarchy2.delete(11L)
            locHierarchySet2.save()
            locHier2 = locHierarchySet2.moveNext()
        locHier = locHierarchySet.moveNext()
And the following is the CSV file:
EXTSYS1,LOCHIER_DEL,AddChange,EN
LOCATION,PARENT,SYSTEMID,CHILDREN,SITEID,ORGID,MARKFORDELETE
45668,XY_10603,NETWORK,0,ABC,ORG1,1
45668,XY_10604,NETWORK,0,ABC,ORG1,1
45669,XY_10606,NETWORK,0,ABC,ORG1,1
45669,XY_10607,NETWORK,0,ABC,ORG1,1
I created an escalation with an action that uses the above action script, with the where clause markfordelete=1. The escalation is working fine, and the records from the above CSV file were deleted from the LOCHIERARCHY table; however, ALL records in the LOCANCESTOR table with SYSTEMID 'NETWORK' were deleted. I also noticed that when a LOCHIERARCHY record is deleted, a new record is created with a null parent.
Is there something I have done wrong in writing the code, or have I missed something?
Any suggestions or pointers would be great.
That is out-of-the-box behaviour; Maximo will create new records in the LOCHIERARCHY and LOCANCESTOR tables. Please look into the delete method of Lochierarchy.class.

Querying Tableau Server for exporting a view using python and REST API

I am trying to export a Tableau view as an image/CSV (doesn't matter which) using Python. I googled and found that the REST API would help here, so I created a Personal Access Token and wrote the following code to connect:
import tableauserverclient as TSC
from tableau_api_lib import TableauServerConnection
from tableau_api_lib.utils.querying import get_views_dataframe, get_view_data_dataframe
server_url = 'https://tableau.mariadb.com'
site = ''
mytoken_name = 'Marine'
mytoken_secret = '$32mcyTOkmjSFqKBeVKEZYpMUexseV197l2MuvRlwHghMacCOa'
server = TSC.Server(server_url, use_server_version=True)
tableau_auth = TSC.PersonalAccessTokenAuth(token_name=mytoken_name, personal_access_token=mytoken_secret, site_id=site)
with server.auth.sign_in_with_personal_access_token(tableau_auth):
    print('[Logged in successfully to {}]'.format(server_url))
It signed in successfully and gave the message:
[Logged in successfully to https://tableau.mariadb.com]
However, I am at a loss now on how to access the Tableau workbooks using Python. I searched here:
https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_workbooks_and_views.htm
but was unable to translate requests like GET and the others into Python.
Can anyone help?
I'm assuming you don't know the view_id of the view you're looking for.
Adding this after the print in the with block will query all the views available on your site:
all_views, pagination_item = server.views.get()
print([view.name for view in all_views])
Then find the view you're looking for in the printed output and note its view_id for use like this:
view_item = server.views.get_by_id('d79634e1-6063-4ec9-95ff-50acbf609ff5')
From there, you can get the image like this:
server.views.populate_image(view_item)
with open('./view_image.png', 'wb') as f:
    f.write(view_item.image)
The tableauserverclient-python docs should help you out a ton as well
https://tableau.github.io/server-client-python/docs/api-ref#views
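Since the question also mentions CSV, tableauserverclient exposes an analogous populate_csv call; a rough sketch along the same lines (view_item is the same object fetched above, and the output filename is just an example):

server.views.populate_csv(view_item)
with open('./view_data.csv', 'wb') as f:
    # view_item.csv yields the CSV data as byte chunks
    f.write(b''.join(view_item.csv))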

How do I retrieve a path's data from firebase database using python?

I have this firebase database structure
I want to print out the inventory list (Inventory) for each ID under Businesses.
So I tried this code
db = firebase.database()
all_users = db.child("Businesses").get()
for user in all_users.each():
    userid = user.key()
    inventorydb = db.child("Businesses").child(userid).child("Inventory")
    print(inventorydb)
but all I got is this
<pyrebase.pyrebase.Database object at 0x1091eada0>
What am I doing wrong, and how can I loop through each business ID and print out its inventory?
First, you're printing a Database object. You need to get the data still.
You seem to already know how to get that as well as the children. Or you only copied the examples without understanding it...
Either way, you can try this
db = firebase.database()
businesses = db.child("Businesses")
for userid in businesses.shallow().get().each():
    inventory = businesses.child(userid).child("Inventory").get()
    print(inventory.val())
On a side note, National_Stock_Numbers looks like it should be a value of the name, not a key for a child node.

using boto to scan dynamodb table

Folks,
I have an 'admins' table, with 'UserName' as its HashKey.
The table looks like this:
from boto.dynamodb2.table import Table

admins = Table('admins')
admins.put_item(data={
    'UserName': 'jon',
    'password': 'pass1',
})
admins.put_item(data={
    'UserName': 'tom',
    'password': 'pass2',
})
So to pull the users out, I am trying to do the following, but failing:
admins = Table('admins')
all_admins = admins.scan()
for x in all_admins:
    print x['UserName']
Why am I getting an empty set?
Thanks!
What you are doing looks correct.
Have you confirmed the data actually got written? (Take a look at the AWS console.)
Are you trying to read directly after writing? The default read is eventually consistent, so you may not find items immediately after writing them.
Use Item= instead of data= to insert the items; you can also use a batch write.
admins = Table('admins')
admins.put_item(Item={
    'UserName': 'jon',
    'password': 'pass1'})
admins.put_item(Item={
    'UserName': 'tom',
    'password': 'pass2'})
And to pull the users you should use
admins = Table('admins')
all_admins = admins.scan(ConsistentRead=True)
items = all_admins['Items']
for x in items:
    print x['UserName']
This should do the job
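Note that the Item= and ConsistentRead= parameters match the boto3 resource API rather than boto 2's Table class, so here is a minimal boto3 sketch of the same flow (assuming boto3 is installed and credentials are configured; the table name comes from the question):

import boto3

dynamodb = boto3.resource('dynamodb')
admins = dynamodb.Table('admins')

# write an item
admins.put_item(Item={'UserName': 'jon', 'password': 'pass1'})

# strongly consistent scan, then print the user names
response = admins.scan(ConsistentRead=True)
for item in response['Items']:
    print(item['UserName'])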

Link generator using django or any python module

I want to generate temporary download links for my users.
Is it OK if I use Django to generate the links using URL patterns?
Would that be the correct way to do it? It may be that I don't understand how some of these processes work and it would overflow my memory or something else. Some kind of example or tools would be appreciated; some nginx or Apache modules, perhaps?
So, what I want to achieve is to make a URL pattern which depends on the user and time, decrypt it, and return a file from the view.
A simple scheme might be to use a hash digest of username and timestamp:
from datetime import datetime
from hashlib import sha1
user = 'bob'
time = datetime.now().isoformat()
plain = user + '\0' + time
token = sha1(plain)
print token.hexdigest()
"1e2c5078bd0de12a79d1a49255a9bff9737aa4a4"
Next you store that token in a memcache with an expiration time. This way any of your webservers can reach it and the token will auto-expire. Finally add a Django url handler for '^download/.+' where the controller just looks up that token in the memcache to determine if the token is valid. You can even store the filename to be downloaded as the token's value in memcache.
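A rough Django sketch of that scheme (illustrative only: Django's cache framework stands in for memcached, and TOKEN_TTL, the 'download:' key prefix, and the view names are my own placeholders, not part of the answer above):

from django.core.cache import cache
from django.http import FileResponse, Http404

TOKEN_TTL = 3600  # link lifetime in seconds

def register_download(token, filepath):
    # remember which file the token unlocks; the cache entry expires after TOKEN_TTL
    cache.set('download:' + token, filepath, timeout=TOKEN_TTL)

def download(request, token):
    filepath = cache.get('download:' + token)
    if filepath is None:
        raise Http404('Link expired or invalid')
    return FileResponse(open(filepath, 'rb'))

The URL pattern would then be something like r'^download/(?P<token>[0-9a-f]{40})/$' pointing at the download view.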
Yes, it would be OK to let Django generate the URLs; that is separate from handling the URLs in urls.py. Typically you don't want Django to handle serving the files themselves (see the static file docs [1] about this), so get the notion of serving the files through URL patterns out of your head.
What you might want to do is generate a random key using a hash like md5/sha1. Store the file, the key, and the datetime it was added in the database, and create the download directory under a root directory that is served directly by your webserver (Apache or nginx; I'd suggest nginx). Since the link is temporary, you'll want to add a cron job that checks whether the time since the URL was generated has expired, cleans up the file, and removes the DB entry. This should be a Django command for manage.py.
Please note this is example code written just for this answer and not tested! It may not work the way you were planning to achieve this goal, but the idea works. If you also want the download to be password protected, then look into HTTP basic auth; you can generate and remove entries on the fly in an httpd.auth file using htpasswd and the subprocess module when you create the link, or at registration time.
import hashlib, random, datetime, os, shutil

# model to hold link info. has these fields: key (CharField), filepath (FilePathField),
# datetime (DateTimeField), url (CharField), orgpath (FilePathField of the original path,
# or a ForeignKey to the files model).
from models import MyDlLink
# settings.py for the app
from myapp import settings as myapp_settings


# full path and name of the file to download.
def genUrl(filepath):
    # create a one-time salt for randomness
    salt = ''.join(['{0}'.format(random.randrange(10)) for i in range(10)])
    key = hashlib.sha1('{0}{1}'.format(salt, filepath)).hexdigest()
    newpath = os.path.join(myapp_settings.DL_ROOT, key)
    shutil.copy2(filepath, newpath)
    newlink = MyDlLink()
    newlink.key = key
    newlink.date = datetime.datetime.now()
    newlink.orgpath = filepath
    newlink.newpath = newpath
    newlink.url = "{0}/{1}/{2}".format(myapp_settings.DL_URL, key, os.path.basename(filepath))
    newlink.save()
    return newlink


# in a management command
def check_url_expired():
    maxage = datetime.timedelta(days=7)
    now = datetime.datetime.now()
    for link in MyDlLink.objects.all():
        if (now - link.date) > maxage:
            os.remove(link.newpath)
            link.delete()
[1] http://docs.djangoproject.com/en/1.2/howto/static-files/
It sounds like you are suggesting using some kind of dynamic url conf.
Why not forget your concerns by simplifying and setting up a single url that captures a large encoded string that depends on user/time?
(r'^download/(?P<encrypted_id>.*)/$', 'download_file'),  # use your own regexp

def download_file(request, encrypted_id):
    decrypted = decrypt(encrypted_id)
    _file = get_file(decrypted)
    return _file
A lot of sites just use a get param too.
www.example.com/download_file/?09248903483o8a908423028a0df8032
If you are concerned about performance, look at the answers in this post: Having Django serve downloadable files
Where the use of the apache x-sendfile module is highlighted.
Another alternative is to simply redirect to the static file served by whatever means from django.
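For the X-Sendfile route mentioned above, the Django view only sets a header and lets the webserver stream the file itself; a sketch under the assumption that Apache's mod_xsendfile is enabled and allowed for the download directory (decrypt and get_file_path are hypothetical helpers in the spirit of the view above):

from django.http import HttpResponse

def download_file(request, encrypted_id):
    path = get_file_path(decrypt(encrypted_id))  # hypothetical lookup of the real file path
    response = HttpResponse()
    response['X-Sendfile'] = path  # Apache's mod_xsendfile serves the file
    # on nginx the equivalent header is X-Accel-Redirect pointing at an internal location
    return response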
