I have a SQLite database configured with the Pyramid alchemy scaffold on my Linux machine. I also have a remote MySQL database, with loads of data, which resides on a Windows machine. Now I have to connect to the remote MySQL database, pull the data, and populate my SQLite database with it using SQLAlchemy.
What I had been doing: I used MySQL Workbench to query data from the MySQL database, exported the results to a CSV file, loaded the CSV file into my Pyramid initializedb.py with Python's built-in csv module, and finally inserted the retrieved rows into my SQLite database.
What I want to do: I want to connect to the remote MySQL database from initializedb.py itself, fetch the results, and insert them into my SQLite database.
How do I go about this? Any help is appreciated.
You can create a connection to your MySQL db within initializedb.py, extract the data, and store it in your local db. You will find the basics of SQLAlchemy here:
http://docs.sqlalchemy.org/en/rel_0_9/core/tutorial.html
If you need this to be configurable, you can put the URL of your database into your config file and access it through the settings.
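For example (a rough sketch only; the 'mysql.url' key name, driver and credentials below are placeholders, and you would need a MySQL driver such as pymysql installed):

# in development.ini (hypothetical key name):
#   mysql.url = mysql+pymysql://user:password@windows-host/source_db

from sqlalchemy import create_engine, engine_from_config

def get_engines(settings):
    # local SQLite engine, exactly as the alchemy scaffold already builds it
    sqlite_engine = engine_from_config(settings, 'sqlalchemy.')
    # extra engine pointing at the remote MySQL database, URL taken from the config
    mysql_engine = create_engine(settings['mysql.url'])
    return sqlite_engine, mysql_engine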
Also, you do not have to define the structure of your tables by hand; you can use something like:
users = Table('users', metadata, autoload=True)
On the other hand, if your MySQL db structure is the same as the SQLite db structure, it might be possible to reuse your model, but that would be hard to explain here without seeing any of your code.
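To give you an idea, here is a rough sketch of what initializedb.py could do, assuming a SQLAlchemy version in line with the 0.9 docs above and a hypothetical 'users' table that exists with the same columns in both databases (names, URLs and the pymysql driver are placeholders/assumptions):

from sqlalchemy import create_engine, MetaData, Table

# remote MySQL database on the Windows machine (needs a MySQL driver, e.g. pymysql)
mysql_engine = create_engine('mysql+pymysql://user:password@windows-host/source_db')
# local SQLite database used by the Pyramid app
sqlite_engine = create_engine('sqlite:///myapp.sqlite')

metadata = MetaData()
# reflect the table definition from MySQL instead of declaring it by hand
users = Table('users', metadata, autoload=True, autoload_with=mysql_engine)
# create the table in SQLite if it does not exist there yet
metadata.create_all(sqlite_engine)

with mysql_engine.connect() as src, sqlite_engine.begin() as dest:
    # pull all rows from MySQL and insert them into SQLite in one batch
    rows = [dict(row) for row in src.execute(users.select())]
    if rows:
        dest.execute(users.insert(), rows)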
I am new to python but I have worked with ORM frameworks before.
It is confusing to me that when I create a database connection I can specify a database URL pointing to some remote DB.
Or I can create the DB in a file, without any remote DB. How does SQLAlchemy work with a file then?
It doesn't have an RDBMS to connect to, only a file. So this is the reason for my question:
Is SQLAlchemy an RDBMS itself?
It's not a database, but a library for handling databases, or as they put it:
SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.
The file-based dialect is SQLite.
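For example (placeholder credentials and paths, and the server-based URL needs the relevant driver installed), all of these go through the same SQLAlchemy API; only the dialect part of the URL decides whether a server is contacted or a local file is opened:

from sqlalchemy import create_engine

# remote RDBMS: SQLAlchemy opens a network connection and the PostgreSQL server does the work
pg_engine = create_engine('postgresql://user:password@db.example.com/mydb')

# file-based database: the SQLite library embedded in Python reads the file directly,
# there is no separate database server process
file_engine = create_engine('sqlite:////absolute/path/to/app.db')

# in-memory SQLite database, handy for tests
mem_engine = create_engine('sqlite://')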
I want to load data from our cloud environment (Pivotal Cloud Foundry) into SQL Server. The data is fetched from an API and held in memory, and we use pytds to insert it into SQL Server, but the only way I see in the documentation to use bulk load is to load from a file. I cannot use pyodbc because we don't have an ODBC connection in the cloud environment.
How can I do bulk insert directly from dictionary?
pytds does not offer bulk load directly, only from a file.
The first thing that comes to mind is to convert the data into a bulk INSERT SQL statement, similar to how MySQL data is typically migrated.
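Something along these lines (a sketch only; the table and column names are made up, and this is not a true BULK INSERT, just a batched, parameterized INSERT sent through DB-API executemany):

import pytds

# hypothetical in-memory rows fetched from the API
data = [
    {'id': 1, 'name': 'alpha', 'value': 3.5},
    {'id': 2, 'name': 'beta', 'value': 7.1},
]

# fix a column order so the dicts can be turned into parameter tuples
columns = ('id', 'name', 'value')
params = [tuple(row[col] for col in columns) for row in data]

sql = 'INSERT INTO dbo.MyTable (id, name, value) VALUES (%s, %s, %s)'

with pytds.connect('sqlserver-host', 'mydb', 'user', 'password') as conn:
    with conn.cursor() as cur:
        cur.executemany(sql, params)
    conn.commit()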
Or, if you can export the data to CSV, you could import it using SSMS (SQL Server Management Studio).
I'm trying to connect to our internal Teradata database, using Flask and SQLAlchemy along with a custom dialect called sqlalchemy-teradata. I pass the database URL to the create_engine function like so:
engine = sqlalchemy.create_engine('teradata://username:pw@server_name/database')
I've set up my dialect just like in the tests:
registry.register("tdalchemy", "sqlalchemy_teradata.dialect", "TeradataDialect")
I'm getting the following error:
DatabaseError: (teradata.api.DatabaseError) (3807, u"[42S02] [Teradata][ODBC Teradata Driver][Teradata Database] Object 'table_name' does not exist
I can make raw SQL queries just fine, and I can also have SQLAlchemy run a query I construct and it pulls the data. I'm not sure what is preventing things from working properly. When I test a similar call against a database on a PostgreSQL server, it works just fine and pulls from that db without issue.
Also, the PyPI page says there is supposed to be a test/orm_test.py, but it doesn't seem to be there.
I tried looking in the documentation of Flask and SQLAlchemy but I couldn't find the answer...
Do the sessions used with SQLAlchemy automatically lock the DB? I do a read (let's say it takes a long time) and it seems to block any other reads I am trying to do on my Postgres DB. Is this something I have to configure manually on Postgres using constraints, or is there something I can change in Flask / SQLAlchemy?
Thanks
In Django, is there a way to open an arbitrary SQLite database file, make use of the Django ORM, and then close the database? Something like:
def my_view(request):
    song1 = Song.objects.get(id=4)  # Using normal postgres db
    temp_db = open_db('/path/to/my/db.sqlite')
    song2 = Song.objects.get(id=4, using=temp_db)
    song2.title = song1.title
    song2.save()
    temp_db.close()
    return HttpResponse(temp_db.as_binary(), mime_type='application/x-sqlite3')
In short: my Django web application needs to be able to synchronise with some 3rd-party desktop software which stores data in a SQLite database on the user's local computer. The synchronisation only needs to be one way. That is, changes in the web application would update the local SQLite database.
I was thinking I could get away with using the Dropbox API to sync the SQLite database. Each user would have their own Dropbox-synchronised database.
Would it be much easier to do something like this and not use the ORM at all?
import sqlite3

conn = sqlite3.connect('/path/to/db.sqlite')
c = conn.cursor()
for row in c.execute('SELECT * FROM Song'):
    print(row)