I'm working on integrating Salesforce with my Django web app via Heroku Connect. I'm using Postgres for my database. I've set up Heroku Connect so that my Salesforce tables are replicating to Postgres correctly.
However, I'm not sure how to access the "salesforce" schema in code (e.g. in a views.py file). I've taken a look at this tutorial to set up my settings.py file, but I'm still unsure of the syntax needed to access and update the "salesforce" schema in code. Can someone point me in the right direction please?
After updating my settings.py file according to the tutorial linked in the question, I've decided to use raw queries to access the Salesforce database directly in Python. The Django documentation here is good. For example:
from django.db import connection

with connection.cursor() as cursor:
    # the schema-qualified table name reaches the replicated Salesforce data
    cursor.execute("UPDATE salesforce.account SET name = %s WHERE sfid = %s", [newName, id])
I'm trying to add a Python sqlite3-generated database to Superset and I'm getting a strange error. Is there a way to work around it?
You have to modify the Superset configuration (the config.py file) by adding this parameter:
PREVENT_UNSAFE_DB_CONNECTION = False
This is the link to a similar question in the Superset GitHub repository: https://github.com/apache/incubator-superset/issues/9748; it points to the request that added this security measure.
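Once that flag is set, the database is added through Superset's UI with a standard SQLAlchemy URI. A quick sketch for sanity-checking such a URI locally before pasting it into Superset (the path is just a placeholder):

from sqlalchemy import create_engine, text

# four slashes mean an absolute filesystem path to the SQLite file
engine = create_engine("sqlite:////absolute/path/to/your.db")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # confirms the URI is reachable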
For a specific security reason, a client has asked if we can integrate a 'write-only' DB into a Django web application. I have tried creating a DB then restricting access to one of its tables in psql via:
REVOKE SELECT ON TABLE testapp_testmodel FROM writeonlyuser;
But then trying to save a model in the Django shell...
p = TestModel(test_field="testvalue")
p.save(using="writeonlydb")
...generates this error:
ProgrammingError: permission denied for relation testapp_testmodel
Which I assume is because the ORM generated SQL includes a return of the newly created object's id, which counts as a read:
INSERT INTO "testapp_testmodel" ("test_field") VALUES ('testvalue') RETURNING "testapp_testmodel"."id"
My question is therefore, is this basically impossible? Or is there perhaps some other way?
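For reference, one workaround I'm considering (just a sketch, untested against the restricted role) is to do the INSERT through a raw cursor so that no RETURNING clause is emitted:

from django.db import connections

# raw INSERT on the write-only alias; no RETURNING clause, so the missing
# SELECT privilege should not matter (assumption, not yet verified)
with connections["writeonlydb"].cursor() as cursor:
    cursor.execute(
        "INSERT INTO testapp_testmodel (test_field) VALUES (%s)",
        ["testvalue"],
    )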
I have gone through this http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tutorial/authentication.html but it does not give any clue about how to add a database to it to store email and password.
The introduction to the Quick Tutorial describes its purpose and intended audience. Authentication and persistent storage are not covered in the same lesson, but in two different lessons.
Either you can combine learning from previous steps (not recommended) or you can take a stab at the SQLAlchemy + URL dispatch wiki tutorial which covers a typical web application with authentication, authorization, hashing of passwords, and persistent storage in an SQL database.
Note however that it uses SQLite, not MySQL, as its SQL database, so you'll either have to use the provided one or swap it out for your preferred SQL database.
Here are a few suggestions regarding switching from SQLite to MySQL. In your development.ini (and/or production.ini) file, change from SQLite to MySQL:
# sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite [comment out or remove this line]
sqlalchemy.url = mysql://MySQLUsername:MySQLPassword@localhost/MySQLdbName
Of course, you will need a MySQL database (MySQLdbName in the example above) and likely the knowledge and privileges to edit its metadata, for example, to add fields called user_email and passwordhash to the users table or create a users table if necessary.
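For illustration only, a users table along those lines could be declared in SQLAlchemy roughly like this (user_email and passwordhash are just the hypothetical column names from above; in a real Pyramid project you would reuse the project's own declarative Base):

from sqlalchemy import Column, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()  # stand-in for your project's Base

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    user_email = Column(Text, unique=True, nullable=False)
    passwordhash = Column(Text, nullable=False)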
In your setup.py file, you will need to add the mysql-python module to the list of requirements. An example would be:
requires = [
'bcrypt',
'pyramid',
'pyramid_jinja2',
'pyramid_debugtoolbar',
'pyramid_tm',
'SQLAlchemy',
'transaction',
'zope.sqlalchemy',
'waitress',
'mysql-python',
]
After specifying new module(s) in setup.py, be sure to run the following commands so your project recognizes the new module(s):
cd $VENV/MyPyramidProject
sudo $VENV/bin/pip install -e .
By this point, your Pyramid project should be hooked up to MySQL. Now it is down to learning the details of Pyramid (and of SQLAlchemy, if that is your selected ORM). Many of the suggestions in the tutorials, particularly the SQLAlchemy + URL dispatch wiki tutorial in your case, should work just as they do with SQLite.
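Since the linked tutorial covers hashing of passwords and bcrypt is already in the requires list above, here is a minimal sketch of how the passwordhash value could be produced and checked (the function names are my own, not from the tutorial):

import bcrypt

def hash_password(password):
    # bcrypt works on bytes; store the resulting string in the passwordhash column
    return bcrypt.hashpw(password.encode('utf8'), bcrypt.gensalt()).decode('utf8')

def check_password(password, passwordhash):
    return bcrypt.checkpw(password.encode('utf8'), passwordhash.encode('utf8'))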
TL;DR: Besides my default Django database, I need data pulled in from two different user-selected databases. I'm not sure how to set up Django to access these besides just running manual queries using connection.cursor().execute("SQL").
Situation:
A process creates a SQLite DB. The database is imported into MySQL. I'm writing a Django app that interacts with that MySQL database (call it StreamDB), another MySQL database with additional info the user needs to see (call this SourceDB), and of course the default Django app MySQL DB (call it AppDB).
There will be two versions of SourceDB (prod and test); each imported StreamDB maps to one and only one of these SourceDBs.
I have a table/model in my AppDB that identifies these sources (the StreamDB name, which of the two SourceDBs it maps to, and some other data). Here's a sample record:
name: foo
path: /var/www/data/test/foo.sqlite
db_name: foo
source_db_name: bar
date_imported: 2014-05-03 10:20:30
These are managed through the Django admin and added manually (or dynamically via an external script).
Dilemma
Depending on which source is selected, my SQL needs to join tables from those two DBs. Example query with the dynamic DB names in <angle brackets>:
SELECT a.image_id, a.image_name, b.title, b.begin_time
FROM <selected_streamdb>.image a JOIN <selected_sourcedb>.event b ON b.event_id = a.event_id
WHERE a.image_type = 'png'
Do I fill in <selected_streamdb> and <selected_sourcedb> with variables, perhaps?
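For context, the raw-cursor fallback I have in mind looks roughly like this (a sketch; the schema names come from the AppDB table above, and since identifiers can't be bound as query parameters they would have to be validated against the known names before being interpolated):

from django.db import connection

def fetch_images(selected_streamdb, selected_sourcedb):
    # selected_streamdb / selected_sourcedb are schema names from the AppDB sources table
    sql = (
        "SELECT a.image_id, a.image_name, b.title, b.begin_time "
        "FROM {stream}.image a "
        "JOIN {source}.event b ON b.event_id = a.event_id "
        "WHERE a.image_type = %s"
    ).format(stream=selected_streamdb, source=selected_sourcedb)
    with connection.cursor() as cursor:
        cursor.execute(sql, ["png"])
        return cursor.fetchall()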
Question
Is there any way to use Django's ORM in a situation like this? Can Django grab the DATABASES settings from a DB table? Do I create a model that manages this (i.e. the sources table above)?
I don't mind managing DB permissions on the backend (assume the app database user in the Django settings has access to all of the databases).
Hope all of the above makes sense.
I am a little confused with the topic alluded to in the title.
So, when a Flask app is started, does SQLAlchemy look at the SQLALCHEMY_DATABASE_URI for the correct (in my case, MySQL) database? Then, does it create the tables if they do not already exist?
What if the database that is configured in the SQLALCHEMY_DATABASE_URI variable in the config.py file does not exist?
What if that database exists, and only a few of the tables exist (There are more tables coded into the SQLAlchemy code than exist in the actual MySQL database)? Does it erase those tables and then create new tables with the current specs?
And what if those tables do all exist? Do they get erased and re-created?
I am trying to understand how the entire process works so that I (1) don't lose database information when changes are made to the schema, and (2) can write the necessary code to completely manage how and when SQLAlchemy talks to the actual database.
Tables are not created automatically; you need to call the SQLAlchemy.create_all() method explicitly to have it create the tables for you:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy(app)  # app is your Flask application, with SQLALCHEMY_DATABASE_URI set
db.create_all()       # creates missing tables only; existing tables are left untouched
You can do this from a command-line utility, for example, or from a dedicated admin-only view if you deploy to a PaaS such as Google App Engine.
The same applies for database table destruction; use the SQLAlchemy.drop_all() method.
See the Creating and Dropping tables chapter of the documentation, or take a look at the database chapter of the Mega Flask Tutorial.
You can also delegate this task to Flask-Migrate or a similar schema versioning tool. These tools help you record and edit schema creation and migration steps; the database schema of real-life projects is never static, and you will want to be able to move existing data between versions of the schema. Creating the initial schema is then just the first step.
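As a rough sketch of how Flask-Migrate would be wired up (the URI is a placeholder; the db object is the one created above):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql://user:password@localhost/mydb"  # placeholder URI
db = SQLAlchemy(app)
migrate = Migrate(app, db)

# Flask-Migrate wraps Alembic; the typical command-line workflow is:
#   flask db init      (once, to create the migrations/ folder)
#   flask db migrate   (generate a migration from model changes)
#   flask db upgrade   (apply it without dropping existing data)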