Querying Tableau Server to export a view using Python and the REST API

I am trying to export a Tableau view as an image/CSV (doesn't matter which) using Python. I googled and found that the REST API would help here, so I created a Personal Access Token and wrote the following code to connect:
import tableauserverclient as TSC
from tableau_api_lib import TableauServerConnection
from tableau_api_lib.utils.querying import get_views_dataframe, get_view_data_dataframe
server_url = 'https://tableau.mariadb.com'
site = ''
mytoken_name = 'Marine'
mytoken_secret = '$32mcyTOkmjSFqKBeVKEZYpMUexseV197l2MuvRlwHghMacCOa'
server = TSC.Server(server_url, use_server_version=True)
tableau_auth = TSC.PersonalAccessTokenAuth(token_name=mytoken_name, personal_access_token=mytoken_secret, site_id=site)
with server.auth.sign_in_with_personal_access_token(tableau_auth):
    print('[Logged in successfully to {}]'.format(server_url))
It signed in successfully and printed the message:
[Logged in successfully to https://tableau.mariadb.com]
However, I am at a loss now on how to access the Tableau workbooks using Python. I searched here:
https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_workbooks_and_views.htm
but I was unable to figure out how to make those GET requests and other calls from Python.
Can anyone help?

I'm assuming you don't know the view_id of the view you're looking for.
Adding this after the print in the with block will query all the views available on your site:
all_views, pagination_item = server.views.get()
print([view.name for view in all_views])
Then find the view you're looking for in the printed output and note its view_id for use like this:
view_item = server.views.get_by_id('d79634e1-6063-4ec9-95ff-50acbf609ff5')
From there, you can get the image like this:
server.views.populate_image(view_item)
with open('./view_image.png', 'wb') as f:
    f.write(view_item.image)
The tableauserverclient-python docs should help you out a ton as well
https://tableau.github.io/server-client-python/docs/api-ref#views
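The question also asks about CSV; tableauserverclient supports that the same way via populate_csv. A minimal sketch, reusing the view_item from above (view_item.csv is an iterator of byte chunks, so join them before writing):
server.views.populate_csv(view_item)
with open('./view_data.csv', 'wb') as f:
    # view_item.csv yields chunks of bytes; concatenate them into one file
    f.write(b''.join(view_item.csv))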

Related

How to connect to Elasticsearch using Python Flask

I'm writing a site with Flask and decided to add a search system to it using Elasticsearch, following the guide https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvi-full-text-search.
Here is my code:
es = Elasticsearch(os.environ.get('ELASTICSEARCH_URL'))
app.elasticsearch = Elasticsearch([app.config['ELASTICSEARCH_URL']]) if app.config['ELASTICSEARCH_URL'] else None
But I get an error:
app.elasticsearch = Elasticsearch([app.config['ELASTICSEARCH_URL']]) if app.config['ELASTICSEARCH_URL'] else None
KeyError: 'ELASTICSEARCH_URL'
Please help me fix this.
I think you have simply not defined the Elasticsearch key inside your app config. According to the documentation, you should first set
app.config['ELASTICSEARCH_URL'] = os.environ.get('ELASTICSEARCH_URL')
In the end, the app config behaves like a dictionary, and you are trying to access a key that does not exist.
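Putting the pieces together, here is a minimal sketch of the pattern the Mega-Tutorial uses (set the config key from the environment first, then fall back to None when the variable is unset):
import os
from flask import Flask
from elasticsearch import Elasticsearch

app = Flask(__name__)
# copy the environment variable into the app config before reading it back
app.config['ELASTICSEARCH_URL'] = os.environ.get('ELASTICSEARCH_URL')

# only create a client when a URL was actually provided
app.elasticsearch = Elasticsearch([app.config['ELASTICSEARCH_URL']]) \
    if app.config['ELASTICSEARCH_URL'] else None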

How can I set up credentials as an environment variable in Windows?

I'm currently working on an academic project at my university and am trying to access the IEX Cloud API (iexfinance) for financial data extraction using Python, but I keep running into an authentication error.
When I checked the documentation of the package, it recommends setting the secret authentication key as an environment variable named 'IEX_TOKEN' to authenticate my requests, which I don't know how to do.
Also, I should note that I'm very new to the world of programming, so thank you in advance for any assistance.
Here's a snippet of the script I use:
from datetime import datetime
import pandas as pd
from iexfinance.stocks import Stock, get_historical_intraday

tickerSymbol = input("Ticker Symbol: ")
companyInfo = Stock(tickerSymbol)
stockPrice = companyInfo.get_price()
start = datetime(sy, sm, sd)
end = datetime(ey, em, ed)
historicalPrices = get_historical_intraday(tickerSymbol, start, end)
stockHistoricals = pd.DataFrame(historicalPrices).T
Assuming you know the secret authentication key, try:
# import the os module in the first line of your code
import os
# set the env variable in the second line
os.environ['IEX_TOKEN'] = 'TheSecretAuthenticationKey'
# other imports
...
...
...
...
# remaining code
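To make the variable persist across sessions on Windows, you can also set it once with setx IEX_TOKEN "TheSecretAuthenticationKey" in a Command Prompt and open a new shell afterwards. Either way, a small sanity check at the top of the script (just a sketch, not part of iexfinance) makes the failure obvious:
import os

# fail early with a clear message if the token was never set
if os.environ.get('IEX_TOKEN') is None:
    raise RuntimeError('IEX_TOKEN is not set; set the environment variable before running this script')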

How to fetch Firebase data?

I am new to Python and Firebase and I am trying to flatten my Firebase database.
I have a database in this format:
Each cat has thousands of entries in it. All I want is to fetch the cat names and put them in an array, for example ['cat1', 'cat2', ...].
I was using this tutorial
http://ozgur.github.io/python-firebase/
from firebase import firebase
firebase = firebase.FirebaseApplication('https://your_storage.firebaseio.com', None)
result = firebase.get('/Data', None)
The problem with the above code is that it will attempt to fetch all the data under Data. How can I fetch only the "cats"?
If you want to get the values inside the cats as columns, try using Pyrebase: install it with pip install pyrebase at a cmd / Anaconda prompt (the latter is preferred if you didn't add pip or Python to your environment paths). After installing:
import pyrebase

config = {
    "apiKey": yourapikey,
    "authDomain": yourapidomain,
    "databaseURL": yourdatabaseurl,
    "storageBucket": yourstoragebucket,
    "serviceAccount": yourserviceaccount
}
Note: you can find all the information above in your Firebase console:
https://console.firebase.google.com/project/ >>> your project >>> click on the "</>" icon with the tag "add Firebase to your web app"
Back to the code...
Make a neat definition so you can store it in a .py file:
def connect_firebase():
    # add a way to encrypt these; I'm a starter myself and don't know how
    username = "usernameyoucreatedatfirebase"
    password = "passwordforaboveuser"
    firebase = pyrebase.initialize_app(config)
    auth = firebase.auth()
    # authenticate a user > figure out how not to leave this hardcoded
    user = auth.sign_in_with_email_and_password(username, password)
    # user['idToken']
    # at Pyrebase's git the author said the token expires every 1 hour, so it needs to be refreshed
    user = auth.refresh(user['refreshToken'])
    # set database
    db = firebase.database()
    return db
OK, now save this into a neat .py file.
Next, in your new notebook or main .py file, import this new .py file, which we'll call auth.py from now on...
import pandas as pd
from auth import *

# assign it to a variable
db = connect_firebase()

# and now the hard/easy part that took me a while to figure out:
# notice the value inside .child(); it should be the parent node that holds all the cat keys
values = db.child('cats').get()

# to put everything into a dataframe you'll need to use .val()
data = pd.DataFrame(values.val())
And that's it; print(data.head()) to check whether the values/columns are where you expect them to be.
Firebase Realtime Database is one big JSON tree:
when you fetch data at a location in your database, you also retrieve
all of its child nodes.
The best practice is to denormalize your data, creating multiple locations (nodes) for the same data:
Many times you can denormalize the data by using a query to retrieve a
subset of the data
In your case, you may create a second node named "categories" where you list only the category names:
/cat1
    /...
/cat2
    /...
/cat3
    /...
/cat4
    /...
/categories
    /cat1
    /cat2
    /cat3
    /cat4
In this scenario you can use the update() method to write to more than one location at the same time.
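As a rough sketch of such a fan-out write with Pyrebase (assuming the db handle from connect_firebase() above; the cat5 key names are made up for illustration), the Firebase REST API treats slash-separated keys in an update as deep paths, so both locations are written in one call:
# hypothetical example: store cat5's data and register its name under /categories at once
db.update({
    "cat5/some_key": "some_value",
    "categories/cat5": True
})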
I was exploring the Pyrebase documentation. According to it, we can extract only the keys at a given path.
To return just the keys at a particular path use the shallow() method.
all_user_ids = db.child("users").shallow().get()
In your case, it'll be something like:
firebase = pyrebase.initialize_app(config)
db = firebase.database()
allCats = db.child("data").shallow().get()
Let me know if it doesn't help.

Python Google Cloud Storage Image Store

OK... I have been trying to figure out how to do this for a long time now without much success.
I have a Python script running locally on the Google App Engine Launcher that receives an image file via POST. I have not launched the application yet; however, I am able to reach Google Cloud SQL, so I assume I can reach Google Cloud Storage.
import MySQLdb
import logging
import webapp2
import json

class PostTest(webapp2.RequestHandler):
    def post(self):
        image = self.request.POST.get('file')
        logging.info("Pic: %s" % self.request.POST.get('file'))

#################################
# Main Portion
#################################
application = webapp2.WSGIApplication([
    ('/', PostTest)
], debug=True)
The logging outputs this, so I know it is receiving the image:
INFO 2014-08-04 23:20:43,299 posttest.py:21] Pic Bytes: FieldStorage(u'file', u'tmp.jpg')
How do I connect to Google Cloud Storage?
How do I upload this image to my Google Cloud Storage bucket called 'app'?
How do I retrieve it once it is there?
It should be a simple thing to do, but I haven't been able to find good, clear documentation on how to do it. There is the REST API, which is deprecated, and the GoogleAppEngineCloudStorageClient confuses me.
Can someone help me please with a code example? I will be really grateful!
I created a repository with a script to do this simply: https://github.com/itsdeka/python-google-cloud-storage
Example of integration with Django:
picture = request.FILES.get('picture', None)
file_name = 'test'
directory = 'myfolder'
format = '.jpg'
GoogleCloudStorageUtil.uploadMediaObject(file=picture,file_name=file_name,directory=directory,format=format)
Tip: it automatically creates a folder called 'myfolder' in your bucket if that folder doesn't exist
The link for every picture uploaded to your bucket is the same except for the file name, so it is pretty easy to retrieve the picture you want.
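If you would rather stay inside App Engine, here is a rough sketch using the GoogleAppEngineCloudStorageClient library mentioned in the question (the 'app' bucket name comes from the question; the helper names are made up for illustration):
import cloudstorage as gcs

def save_image(data, object_name):
    # object paths take the form /<bucket>/<object>
    path = '/app/' + object_name
    gcs_file = gcs.open(path, 'w', content_type='image/jpeg')
    gcs_file.write(data)
    gcs_file.close()
    return path

def read_image(object_name):
    # read the object back as raw bytes
    gcs_file = gcs.open('/app/' + object_name)
    data = gcs_file.read()
    gcs_file.close()
    return data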

Python script for "Google search by image"

I have checked the Google Search APIs and it seems that they have not released any API for searching images. So, I was wondering if there exists a Python script/library through which I can automate the "search by image" feature.
This was annoying enough to figure out that I thought I'd throw a comment on the first Python-related Stack Overflow result for "script google image search". The most annoying part of all this is setting up your application and custom search engine (CSE) in Google's web UI, but once you have your API key and CSE, define them in your environment and do something like:
#!/usr/bin/env python
# save top 10 google image search results to current directory
# https://developers.google.com/custom-search/json-api/v1/using_rest
import requests
import os
import sys
import re
import shutil
url = 'https://www.googleapis.com/customsearch/v1?key={}&cx={}&searchType=image&q={}'
apiKey = os.environ['GOOGLE_IMAGE_APIKEY']
cx = os.environ['GOOGLE_CSE_ID']
q = sys.argv[1]
i = 1
for result in requests.get(url.format(apiKey, cx, q)).json()['items']:
    link = result['link']
    image = requests.get(link, stream=True)
    if image.status_code == 200:
        m = re.search(r'[^\.]+$', link)
        filename = './{}-{}.{}'.format(q, i, m.group())
        with open(filename, 'wb') as f:
            image.raw.decode_content = True
            shutil.copyfileobj(image.raw, f)
        i += 1
There is no API available, but you can parse the page and imitate the browser. I don't know how much data you will need to parse, because Google may limit or block access.
You can imitate the browser by simply using urllib and setting the correct headers. If you think parsing complex web pages from Python may be difficult, you can use a headless browser like PhantomJS instead; inside a browser it is trivial to get the right elements using JavaScript/DOM.
Note: before trying any of this, check Google's TOS.
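As a minimal sketch of the header trick with urllib (Python 3; the User-Agent string is only an example, and Google may still throttle or block automated requests):
import urllib.request

# pretend to be a regular browser by sending a typical User-Agent header
req = urllib.request.Request(
    'https://www.google.com/search?q=example',
    headers={'User-Agent': 'Mozilla/5.0'}
)
html = urllib.request.urlopen(req).read().decode('utf-8', errors='replace')
print(len(html))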
You can try this:
https://developers.google.com/image-search/v1/jsondevguide#json_snippets_python
It's deprecated, but seems to work.
