Hey, I am pretty new to PynamoDB.
I want to know how to use a pre-existing table in PynamoDB without re-writing the model definition.
from pynamodb.connection import Connection
# Get a connection
conn = Connection(host='http://localhost:8000')
# print(conn)
# List tables
print(conn.list_tables()['TableNames'])
So here I can see the table names in my database, but I can't use them. Please help me: how can I use the tables without defining models?
Thanks
Okay, so PynamoDB's only advantage over the native DynamoDB Python API is its models. If you don't need models, consider using the native API.
If the table has already been created in the database, then we have to use the boto3 library.
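For illustration, a minimal boto3 sketch for attaching to an already-existing table; the table name, key, and item below are placeholders, and the endpoint matches the local DynamoDB used above:
import boto3

# Point boto3 at the same local DynamoDB endpoint used above
# (DynamoDB Local also expects dummy AWS credentials to be configured)
dynamodb = boto3.resource('dynamodb',
                          endpoint_url='http://localhost:8000',
                          region_name='us-east-1')

# Attach to an existing table by name; no model definition is required
table = dynamodb.Table('my_existing_table')

# Read and write items with the plain boto3 API
table.put_item(Item={'id': '123', 'value': 'example'})
response = table.get_item(Key={'id': '123'})
print(response.get('Item'))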
thanks
I have a BigQuery table with about 200 rows, and I need to insert, delete, and update values in it through a web interface (the table cannot be migrated to any other relational or non-relational database).
The web application will be deployed on Google Cloud App Engine. The user who acts as admin and has owner privileges on BigQuery will be able to create and delete records, while the other users, who have view permissions on the BigQuery dataset, will be able to view records only.
I am planning to use Python as the scripting language and Django or Flask (or any other framework) for the server; I am not sure which one is better.
The web application should have a data-grid-like appearance, with the create, delete, or view buttons visible according to the users' roles.
I have not done anything like this with Python, BigQuery, and Django. I am already familiar with calling BigQuery from the Python client, but calling it from a web interface and in a transactional way is totally new to me.
I am only seeing examples of Django with its built-in models, and not with BigQuery.
Can anyone please help me and clarify whether this is possible to implement and how?
I was able to achieve all of CRUD on BigQuery with the help of SQLAlchemy, though I had to make a lot of concessions. If I use a SQLAlchemy class, I need to use a fake primary key, as BigQuery does not have primary keys, and for storing sessions in Django I needed to use file-based sessions. For updates and creates, SQLAlchemy does not allow a table without a primary key, so I used the raw SQL part of SQLAlchemy. Thanks to @mhawke, who provided the hint for me to carry out this exercise.
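For anyone attempting the same, the DML statements underneath look roughly like the sketch below. It uses the plain google-cloud-bigquery client rather than going through SQLAlchemy, and the project, dataset, and table names are hypothetical:
from google.cloud import bigquery

client = bigquery.Client()
table = "my-project.my_dataset.records"  # hypothetical fully-qualified table name

# Create
client.query(f"INSERT INTO `{table}` (id, name) VALUES (1, 'example')").result()
# Read
rows = list(client.query(f"SELECT id, name FROM `{table}`").result())
# Update
client.query(f"UPDATE `{table}` SET name = 'renamed' WHERE id = 1").result()
# Delete
client.query(f"DELETE FROM `{table}` WHERE id = 1").result()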
No, at most you could achieve the "R" of "CRUD." BigQuery isn't a transactional database; it's for querying vast amounts of data and preparing the results as an immutable view.
It doesn't provide a method to modify the source data directly, and even if it did, you'd need to run the query again. It is also important to note that queries are asynchronous and take much longer than in traditional databases.
The only reasonable solution would be to export the table data to GCS and then import it into a normal database for querying. Alternatively, if you can't use another database, and since you said there are only 1,000 rows, you could perform your CRUD actions directly on that exported CSV.
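If you go the export route, a minimal sketch with the google-cloud-bigquery client; the project, dataset, table, and bucket names are placeholders:
from google.cloud import bigquery

client = bigquery.Client()

# Export the table to a CSV file in a GCS bucket you control
extract_job = client.extract_table(
    "my-project.my_dataset.my_table",       # source table (placeholder)
    "gs://my-bucket/my_table_export.csv",   # destination in GCS (placeholder)
)
extract_job.result()  # wait for the export job to finish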
When creating an MLflow tracking server and specifying that a SQL Server database is to be used as the backend store, MLflow creates a bunch of tables within the dbo schema. Does anyone know if it is possible to specify a different schema in which to create these tables?
It is possible to alter mlflow/mlflow/store/sqlalchemy_store.py to change the schema in which the tables are created.
It is very likely that this is the wrong solution for you, since you will go out of sync with the open-source project and lose newer features that touch this code, unless you maintain the fork yourself. Could you maybe reply with your use case?
You can use Postgres URI options.
Sample Postgres URI with options:
"postgresql://postgres:postgres@localhost:5432/postgres?options=-csearch_path%3Ddbo,mlflow_schema"
In your MLflow code:
mlflow.set_tracking_uri("postgresql://postgres:postgres@localhost:5432/postgres?options=-csearch_path%3Ddbo,mlflow_schema")
Don't forget to create the 'mlflow_schema' schema first.
See also: how-to-specify-schema-in-psycopg2-connection-method
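To create the schema beforehand, one option is a short psycopg2 script; this is only a sketch and reuses the placeholder credentials from the URI above:
import psycopg2

# Connect with the same (placeholder) credentials used in the tracking URI
conn = psycopg2.connect("postgresql://postgres:postgres@localhost:5432/postgres")
conn.autocommit = True  # don't leave CREATE SCHEMA in an open transaction
with conn.cursor() as cur:
    cur.execute("CREATE SCHEMA IF NOT EXISTS mlflow_schema")
conn.close()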
I'm using MS SQL Server as the backend store. I could use a schema other than dbo by specifying the default schema for the SQL Server user being used by MLflow.
In my case, if the MLflow tables (e.g. experiments) already exist in dbo, then those tables will be used. If not, MLflow will create the tables in the user's default schema.
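A sketch of what that looks like in practice, assuming a dedicated SQL Server login for MLflow whose default schema a DBA has set to the target schema; all names below are placeholders:
import mlflow

# The SQL Server user "mlflow_user" has its default schema set to "mlflow_schema",
# e.g. via: ALTER USER mlflow_user WITH DEFAULT_SCHEMA = mlflow_schema;
mlflow.set_tracking_uri(
    "mssql+pyodbc://mlflow_user:password@sqlserver-host:1433/mlflowdb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)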
I am new to Flask, so I came to know about SQLAlchemy. Can someone explain to me how to use it properly?
Do we need to create a MySQL database first and connect it to Flask, or is everything done through SQLAlchemy?
Or do whatever tables exist in MySQL have to be represented in models.py?
I tried many tutorials but none of them explains this.
Thanks in advance
For your use case, to insert hundreds of values at once, use bulk operations like add_all (http://docs.sqlalchemy.org/en/rel_1_0/orm/persistence_techniques.html#bulk-operations).
If you don't know SQLAlchemy, go here: http://docs.sqlalchemy.org/en/latest/orm/tutorial.html
from app import session   # the SQLAlchemy session configured in your app
from models import User   # your mapped model class

# Build the objects in memory, then add and commit them all in one go
objects = [User(name="u1"), User(name="u2"), User(name="u3")]
session.add_all(objects)
session.commit()
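On the earlier question of whether the MySQL tables have to exist first: usually you create the database itself in MySQL, define the tables as models, and let SQLAlchemy create them. A minimal Flask-SQLAlchemy sketch, with placeholder database name and credentials:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# The MySQL database "mydb" must already exist; the tables do not.
# Requires a MySQL driver (e.g. mysqlclient, or use mysql+pymysql:// with PyMySQL).
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:password@localhost/mydb'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))

with app.app_context():
    db.create_all()  # creates the "user" table in MySQL from the model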
I'm completely new to managing data using databases, so I hope my question is not too stupid, but I did not find anything related using the title keywords...
I want to setup a SQL database to store computation results; these are performed using a python library. My idea was to use a python ORM like SQLAlchemy or peewee to store the results to a database.
However, the computations are done by several people on many different machines, including some that are not directly connected to the internet; it is therefore impossible to simply use one common database.
What would be useful to me would be a way of saving the data in the ORM's format to be able to read it again directly once I transfer the data to a machine where the main database can be accessed.
To summarize, I want to do:
On the 1st machine: Python data -> ORM object -> ORM.fileformat
After transfer on a connected machine: ORM.fileformat -> ORM object -> SQL database
Would anyone know if existing ORMs offer that kind of feature?
Is there a reason why some of the machines cannot be connected to the internet?
If you really can't, what I would do is set up a database and the Python app on each machine where data is collected/generated. Have each machine use the app to store into its own local database, and then later you can create a dump of each database from each machine and import those results into one database.
Not the ideal solution but it will work.
OK, thanks to MAhsan's and Padraic's answers I was able to find out how this can be done: the CSV format is indeed easy to use for import/export from a database.
Here are examples for SQLAlchemy (import 1, import 2, and export) and peewee.
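For reference, a rough sketch of the CSV round trip with SQLAlchemy; the Result model and its columns are hypothetical:
import csv
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from models import Result  # hypothetical mapped class with columns id and value

# On the offline machine: write the ORM objects out to a CSV file
def export_results(session, path):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for r in session.query(Result):
            writer.writerow([r.id, r.value])

# On the connected machine: read the CSV back and insert into the main database
def import_results(path, db_url):
    engine = create_engine(db_url)
    session = sessionmaker(bind=engine)()
    with open(path, newline="") as f:
        for row_id, value in csv.reader(f):
            session.add(Result(id=int(row_id), value=value))
    session.commit()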
I want to dump Oracle objects like tables and stored procedures using cx_Oracle from Python.
Is there any tutorial on how to do this?
If you are looking for the source code for tables you can use the following:
select DBMS_METADATA.GET_DDL('TABLE','<table_name>') from DUAL;
For stored procedures you can use:
select text from all_source where name = '<procedure name>'
In general this is not a cx_Oracle-specific problem: just query the Oracle-specific views (like all_source) or functions (like GET_DDL) and read the results like any other query. There are more of these sorts of views (like user_source, for source that the specific user owns) in Oracle, but I'm doing this off the top of my head and don't have easy access to an Oracle DB to remind myself.
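For example, a minimal cx_Oracle sketch; the connection details and object names are placeholders:
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "localhost/XEPDB1")  # placeholder credentials/DSN
cur = conn.cursor()

# DDL for a table; GET_DDL returns a CLOB, so read() it
cur.execute("select DBMS_METADATA.GET_DDL('TABLE', :name) from DUAL",
            name="EMPLOYEES")
print(cur.fetchone()[0].read())

# Source of a stored procedure, one row per line of source
cur.execute("select text from all_source where name = :name order by line",
            name="MY_PROCEDURE")
print("".join(row[0] for row in cur))

cur.close()
conn.close()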