I'm trying to add a Python sqlite3-generated database to Superset, and I'm getting a strange error when creating the connection. Is there a way to work around it?
You have to modify the Superset configuration (the config.py file) by adding this parameter:
PREVENT_UNSAFE_DB_CONNECTION = False
Here is a link to a similar question in the Superset GitHub repository: https://github.com/apache/incubator-superset/issues/9748; it points to the request that added this security measure.
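As a minimal sketch: depending on how Superset is installed, the override can live in Superset's own config.py or in a superset_config.py that Superset picks up from the PYTHONPATH (the file placement here is an assumption about a typical setup):

# superset_config.py (or superset's config.py, depending on your install)
# WARNING: this disables the check that blocks unsafe database
# connections such as local SQLite files; only do this if you
# understand and accept the risk.
PREVENT_UNSAFE_DB_CONNECTION = False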
I plan to (attempt to) upload a Flask application to a web host that does not provide SSH access. I have confirmed that the web host will run Flask applications, and I can create one that works when it has no database, but I am getting errors when attempting to create the database. I can't work out how to control where it tries to place the database. My code looks like this:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# create the extension
db = SQLAlchemy()
# create the app
app = Flask(__name__)
# configure the SQLite database, relative to the app instance folder
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///flaskapp.db"
# initialize the app with the extension
db.init_app(app)
On my development machine, using Geany, running db.create_all() places the database in "var/app-instance/". Using PyCharm on the same machine, it places it in "instance/".
Some variable presumably dictates what this path is, but so far I haven't worked out which, or how to influence it. My application works as expected on my development server using either development environment (Geany or PyCharm), but it does not work on the web host I am trying to use, as described below.
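For what it's worth, a quick diagnostic sketch using the app object from the code above (both attributes are standard Flask properties; the prints are just illustrative):

print(app.root_path)      # directory containing the application package
print(app.instance_path)  # Flask's instance folder, where relative SQLite paths often end up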
As well as googling, I have grepped through the SQLAlchemy files and found def create_all(...) in schema.py, but I can't work out where it gets the information about what directory structure to create.
I am not able to use "os" on the web host, which was a suggestion made in other answers and tutorials.
I tried creating the path in various forms, and on my development machine this, for example, works:
"sqlite:////tmp/flaskapp.db", but I don't have access to /tmp on the web host, and I was unable to find an absolute path that the web host would accept (i.e. without it complaining that I don't have permission to write to the directory). I can't run 'pwd' on the web host either.
Using "sqlite://instance/flaskapp.db" on my development machine produces an error pointing out that:
Valid SQLite URL forms are:
sqlite:///:memory: (or, sqlite://)
sqlite:///relative/path/to/file.db
sqlite:////absolute/path/to/file.db
However, if I try a relative path, for example "sqlite:///instance/flaskapp.db", I get "sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file (Background on this error at: https://sqlalche.me/e/20/e3q8)", even if I create the directory myself (i.e. relative to the app.py root directory). [In this case write permissions are the same as for all other parts of the project.]
That link in the error output says "This error is a DBAPI Error and originates from the database driver (DBAPI), not SQLAlchemy itself". Unfortunately, I am not clear on how to proceed from that.
If someone could help direct me to information that would help me understand and resolve the issue, that would be great, thanks!
I would like to be able to explicitly state where the database will be stored, relative to the root of my application directory.
I am using Linux (Arch), in case that is important, too.
The question linked by Sam below shows the use of "url.make_url()". When I use this as shown, I get
>>> import sqlalchemy.engine.url as url
>>> url.make_url('sqlite:///flaskcw.db')
sqlite:///flaskcw.db
which is what I would expect (and what I want). But this is not what happens when I run db.create_all()
>>> from main import db
database binding
sqlite:///flaskcw.db
Engine(sqlite:////home/user/PycharmProjects/cwflaskapp/instance/flaskcw.db)
whereas I would expect it to place the database in the root of the project (in this case cwflaskapp/, as ..cwflaskapp/flaskcw.db), given that is where main.py is, rather than in 'projroot'/instance/, a directory that is created in the process. (Or, in the case of Geany, in 'projroot'/var/app-instance/, which is likewise created only when the database is created, as above.)
What am I missing?
I found another question, "this code isnt creating my site.db file in directory", with an answer that allows me to specify a folder on my development server, without a dependency on "os".
To briefly repeat the relevant part of the answer, I can do this:
app.config['SQLALCHEMY_DATABASE_URI'] = f"sqlite:///{app.root_path}/mydir/site.db"
and this puts the database in 'projroot'/mydir/site.db. I am going to confirm that I can do the same on the web host, and will re-edit accordingly.
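For reference, a minimal sketch of that approach in context (mydir is just an example name, and the directory must already exist, since SQLite creates the file but not its parent directories). As far as I can tell, recent Flask-SQLAlchemy versions resolve relative SQLite paths against app.instance_path, which would explain the instance/ directory appearing above.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()
app = Flask(__name__)
# app.root_path is the directory of the application package, so the
# database lands in a predictable place regardless of the host
app.config["SQLALCHEMY_DATABASE_URI"] = f"sqlite:///{app.root_path}/mydir/site.db"
db.init_app(app)

with app.app_context():
    db.create_all()  # creates site.db inside projroot/mydir/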
So, I am using AWS Athena, where I have the Data Source set to AwsDataCatalog and the database set to test_db, under which I have a table named debaprc.
Now, I have Superset installed on an EC2 instance (in a virtual environment). On the instance, I have installed PyAthenaJDBC and PyAthena. When I launch Superset and try to add a database, the syntax given is this:
awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}
Now I have two questions:
What do I provide for schema_name?
I tried putting test_db as schema_name but it couldn't connect for some reason. Am I doing this right or do I need to do stuff differently?
Beware of the encoding:
awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443/{schema_name}?AwsRegion={region_name}&s3_staging_dir=s3%3A%2F%2Faws-athena-results-xxxxxxx
For example, for me it has been necessary to:
transform s3:// to s3%3A%2F%2F, and not just the : as in the Superset doc (see the snippet after this list)
add the region again in the extra parameters
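A quick way to produce that encoding in Python, for what it's worth (the bucket name is the placeholder from above):

from urllib.parse import quote_plus

# percent-encode ':' and '/' so the staging dir survives inside the SQLAlchemy URI
print(quote_plus("s3://aws-athena-results-xxxxxxx/"))
# -> s3%3A%2F%2Faws-athena-results-xxxxxxx%2F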
If you do not provide a schema name (also called the database), I think it defaults to a value of default.
Sadly, when a connection string fails in Superset, nothing very helpful is displayed...
It worked for me after adding port 443 to the connection string, as below, and you can use test_db as the schema_name:
awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443/{schema_name}?s3_staging_dir={s3_staging_dir}
Check your PyAthena version. The Superset docs say PyAthena>1.2.0, while the PyAthena PyPI page says PyAthena[SQLAlchemy]>=1.0.0,<2.0.0. In my case PyAthena[SQLAlchemy]>1.2.0,<2.0.0 (combining both constraints) solved the issue, and the tables were present in the dropdown list in SQL Lab (it was empty with the latest version, PyAthena==2.5.1, before).
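For reference, the combined constraint can be installed like this (quoted so the shell does not interpret the brackets and comparison signs):

pip install "PyAthena[SQLAlchemy]>1.2.0,<2.0.0"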
I'm working on integrating Salesforce with my Django web app via Heroku Connect. I'm using Postgres for my database. I've set up Heroku Connect so that my Salesforce tables are replicating to Postgres correctly.
However, I'm not sure how to access the "salesforce" schema in code (e.g. in my views.py file). I've taken a look at this tutorial to set up my settings.py file, but I'm still unsure of the syntax needed to access and update the "salesforce" schema in code. Can someone point me in the right direction please?
After updating my settings.py file according to the linked tutorial in the question, I've decided to use raw queries to access the Salesforce database directly in Python. The Django documentation here is good. For example:
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("UPDATE salesforce.account SET name = %s WHERE sfid = %s", [newName, id])
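Reads work the same way; a quick sketch (the table and columns follow the example above, and someName is a placeholder variable):

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT sfid, name FROM salesforce.account WHERE name = %s", [someName])
    rows = cursor.fetchall()  # list of (sfid, name) tuples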
I have gone through http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tutorial/authentication.html, but it does not give any clue about how to add a database to store the email and password.
The introduction to the Quick Tutorial describes its purpose and intended audience. Authentication and persistent storage are not covered in the same lesson, but in two different lessons.
Either you can combine learning from previous steps (not recommended) or you can take a stab at the SQLAlchemy + URL dispatch wiki tutorial which covers a typical web application with authentication, authorization, hashing of passwords, and persistent storage in an SQL database.
Note however that it uses SQLite, not MySQL, as its SQL database, so you'll either have to use the provided one or swap it out for your preferred SQL database.
Here are a few suggestions regarding switching from SQLite to MySQL. In your development.ini (and/or production.ini) file, change from SQLite to MySQL:
# sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite [comment out or remove this line]
sqlalchemy.url = mysql://MySQLUsername:MySQLPassword@localhost/MySQLdbName
Of course, you will need a MySQL database (MySQLdbName in the example above) and likely the knowledge and privileges to edit its metadata, for example to add fields called user_email and passwordhash to the users table, or to create a users table if necessary.
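As a rough sketch, such a users table could be described with an SQLAlchemy model like this (the column names user_email and passwordhash follow the example above; the rest is illustrative and assumes SQLAlchemy 1.4+):

from sqlalchemy import Column, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    user_email = Column(Text, unique=True, nullable=False)
    passwordhash = Column(Text, nullable=False)  # store a bcrypt hash, never the plain password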
In your setup.py file, you will need to add the mysql-python module to the requires list. An example would be:
requires = [
    'bcrypt',
    'pyramid',
    'pyramid_jinja2',
    'pyramid_debugtoolbar',
    'pyramid_tm',
    'SQLAlchemy',
    'transaction',
    'zope.sqlalchemy',
    'waitress',
    'mysql-python',
]
After specifying new module(s) in setup.py, be sure to run the following commands so your project recognizes the new module(s):
cd $VENV/MyPyramidProject
sudo $VENV/bin/pip install -e .
By this point, your Pyramid project should be hooked up to MySQL. Now it is down to learning the details of Pyramid (and SQLAlchemy, if this is your selected ORM). Many of the suggestions in the tutorials, particularly the SQLAlchemy + URL dispatch wiki tutorial in your case, should work just as they do with SQLite.
I'm using Azure and the python SDK.
I'm using Azure's table service API for DB interaction.
I've created a table which contains data in Unicode (Hebrew, for example). Creating tables and setting the data in Unicode seems to work fine. I'm able to view the data in the database using Azure Storage Explorer, and the data is correct.
The problem is when retrieving the data. Whenever I retrieve a specific row, retrieval works fine for Unicode data:
table_service.get_entity("some_table", "partition_key", "row_key")
However, when trying to get a number of records using a filter, an encoding exception is thrown for any row that has non-ASCII characters in it:
tasks = table_service.query_entities('some_table', "PartitionKey eq 'partition_key'")
Is this a bug in the Azure Python SDK? Is there a way to set the encoding beforehand so that it won't crash? (Azure doesn't give access to sys.setdefaultencoding, and using DEFAULT_CHARSET in settings.py doesn't work either.)
I'm using https://www.windowsazure.com/en-us/develop/python/how-to-guides/table-service/ as a reference for the table service API.
Any idea would be greatly appreciated.
This looks like a bug in the Python library to me. I whipped up a quick fix and submitted a pull request on GitHub: https://github.com/WindowsAzure/azure-sdk-for-python/pull/59.
As a workaround for now, feel free to clone my repo (remembering to check out the dev branch) and install it via pip install <path-to-repo>/src.
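For example (the repo URL comes from the pull request above; where you clone it is up to you):

git clone https://github.com/WindowsAzure/azure-sdk-for-python.git
cd azure-sdk-for-python
git checkout dev   # the fix lives on the dev branch
pip install ./src  # the package source is under src/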
Caveat: I haven't tested my fix very thoroughly, so you may want to wait for the Microsoft folks to take a look at it.