Is there any way to rename the properties of a Kind in Google App Engine, or in other words, is there a way to rename the column names of a table (though App Engine handles data in a different way)? I am using Python.
Please suggest.
Thanks in advance.
Refactoring on Google App Engine involves either modifying all of the records in your datastore as you make the change, or writing the code so that it still reads the old value when the new value doesn't exist.
Removing a column from the datastore is possible but not easy. More information can be found here.
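A minimal sketch of the second approach, using a hypothetical Contact model whose name property is being renamed to full_name:

    from google.appengine.ext import ndb

    class Contact(ndb.Model):
        name = ndb.StringProperty()       # old property, kept during migration
        full_name = ndb.StringProperty()  # new property

        @property
        def best_name(self):
            # fall back to the old property until every record is migrated
            return self.full_name or self.name

    # one-off migration pass: copy the old value into the new property
    for contact in Contact.query():
        if contact.full_name is None:
            contact.full_name = contact.name
            contact.put()

Once the migration pass has touched every entity, the old property and the fallback can be removed.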
Related
I have a BigQuery table with about 200 rows, and I need to insert, delete, and update values in it through a web interface (the table cannot be migrated to any other relational or non-relational database).
The web application will be deployed on Google Cloud on App Engine. The user who acts as admin, with owner privileges on BigQuery, will be able to create and delete records, and the other users, with view permissions on the dataset in BigQuery, will be able to view records only.
I am planning to use Python as the scripting language, with Django or Flask (or any other framework) on the server; I am not sure which one is better.
The web application should be displayed as a data grid, with the create, delete, and view buttons visible according to the users' roles.
I have not done anything like this with Python, BigQuery, and Django. I am already familiar with calling BigQuery from the Python client, but calling it from a web interface in a transactional way is totally new to me.
The examples I am seeing relate only to Django with its built-in models, not to BigQuery.
Can anyone please clarify whether this is possible to implement, and how?
I was able to achieve all of "C R U D" on BigQuery with the help of SQLAlchemy, though I had to make a lot of concessions: if I use a SQLAlchemy class, I need to declare a fake primary key, since BigQuery does not use primary keys; for storing sessions on Django I needed to use file-based sessions; and because SQLAlchemy does not allow updates and creates without a primary key, I used the raw-SQL part of SQLAlchemy for those. Thanks to @mhawke, who provided the hint for me to carry out this exercise.
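A stripped-down sketch of that raw-SQL part (the engine URL, table, and column names here are hypothetical, and the sqlalchemy-bigquery dialect is assumed to be installed):

    from sqlalchemy import create_engine, text

    # connects through the sqlalchemy-bigquery dialect
    engine = create_engine("bigquery://my-project")

    with engine.begin() as conn:
        # BigQuery has no primary keys, so rows are matched on an ordinary column
        conn.execute(
            text("UPDATE `my_dataset.people` SET phone = :phone WHERE name = :name"),
            {"phone": "555-0100", "name": "Alice"},
        )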
No, at most you could achieve the "R" of "CRUD." BigQuery isn't a transactional database; it's for querying vast amounts of data and preparing the results as an immutable view.
It doesn't provide a method to modify the source data directly, and even if it did, you'd need to run the query again. It's also important to note that queries are asynchronous and take much longer to perform than in traditional databases.
The only reasonable solution would be to export the table data to GCS and then import it into a normal database for querying. Alternatively, if you can't use another database, then since there are only a couple of hundred rows you could perform your CRUD actions directly on that exported CSV.
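A sketch of the export step with the google-cloud-bigquery client (the project, dataset, table, and bucket names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    extract_job = client.extract_table(
        "my-project.my_dataset.my_table",   # source table
        "gs://my-bucket/my_table.csv",      # destination in GCS
    )
    extract_job.result()  # block until the export job finishes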
I'm using the Python client library to interact with Google BigQuery and create a group of new views; however, those views need to be added to a different, shared dataset as authorized views, and I'm not able to find how to do this with scripting, which matters because of the large number of views involved. Does somebody have an idea?
Thanks!!
The short answer to this is, unfortunately, no. This cannot be done directly in the way you describe in your question.
As per the official documentation ("Controlling access to datasets"): "Currently, you cannot grant permissions on tables, views, or rows. You can set access controls at the dataset level, and you can restrict access to columns with BigQuery Column-level security." Controlling access to views therefore requires you to grant a Cloud IAM role to an entity at the dataset level or higher.
There is, however, a possible workaround that would allow you to achieve your goal.
It is possible to share access to BigQuery views using project-level IAM roles or dataset-level access controls. The following is a very detailed walkthrough of how you could achieve this; it uses only two datasets, but the solution can be expanded to a larger number of datasets.
The subtle art of sharing “views” in BigQuery
Additionally, since you ask about using a Python script: there is no reason that the steps described could not be implemented using the Python client library for BigQuery.
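For example, authorizing a view on a source dataset could look roughly like this (the project, dataset, and view names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client()

    # the dataset holding the data, and the view to be authorized on it
    source_dataset = client.get_dataset("my-project.source_dataset")
    view = client.get_table("my-project.views_dataset.my_view")

    entries = list(source_dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(None, "view", view.reference.to_api_repr())
    )
    source_dataset.access_entries = entries

    # update only the access_entries field of the dataset
    client.update_dataset(source_dataset, ["access_entries"])

Looping this over a list of views would cover the large number you mention.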
I hope this helps.
I've written a tiny app on Google App Engine that lets users upload files which have about 10 or so string and numeric fields associated with them. I store the files and these associated fields in an ndb model. I then allow users to filter and sort through these files, using arbitrary fields for sorting and arbitrary fields or collections of fields for filtering. However, whenever I run a sort/filter combination on my app that I didn't run on the dev_appserver before uploading, I get a NeedIndexError along with a suggested index, which seems to be unique for every combination of sort and filter fields. I tried running through every combination of sort/filter field on the appserver, generating a large index.yaml file, but at some point the app stopped loading altogether (I wasn't monitoring whether this was a gradual slowdown or a sudden breaking).
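For reference, each suggested index looks something like this (the kind and property names here are placeholders for mine):

    indexes:
    - kind: UploadedFile
      properties:
      - name: category        # filter property
      - name: size            # sort property
        direction: desc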
My questions are as follows. Is this typical behavior for the GAE datastore, and if not what parts of my code would be relevant for troubleshooting this? If this is typical behavior, is there an alternative to the datastore on GAE that would let me do what I want?
It seems like Google Cloud SQL would do what I need, but since I'm trying not to spend any money on this project and Cloud SQL doesn't have a free unlimited tier, I've resorted to querying by my filter and then sorting the results myself.
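That workaround looks roughly like this (the model and property names are stand-ins for mine):

    from google.appengine.ext import ndb

    class UploadedFile(ndb.Model):
        category = ndb.StringProperty()
        size = ndb.IntegerProperty()

    # filter in the datastore (covered by the built-in single-property
    # indexes), then sort in memory to avoid a composite index
    files = UploadedFile.query(UploadedFile.category == 'images').fetch(1000)
    files.sort(key=lambda f: f.size, reverse=True)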
This seems so basic - I must be missing something.
I am trying to download my entities, update a few properties, and upload the entities. I'm using the Django nonrel & appengine projects, so all the entities are stored with numeric ids rather than names.
I can download the entities to csv fine, but when I upload (via appcfg.py upload_data ...), the keys come in as name=... rather than id=...
In the config file, I added -
import_transform: transform.create_foreign_key('auth_user', key_is_id=True)
to see if this would, as the documentation for transform states, "convert the key into an integer to be used as an id." With this import_transform, I get this error -
ErrorOnTransform: Numeric keys are not supported on input at this time.
Any ideas?
As the error message indicates, overwriting entities with numeric IDs isn't currently supported. You may be able to work around it by providing a post-upload function that recreates each entity with the relevant key, but I'd suggest stepping back and analyzing why you're doing this: why not just update the entities in place on App Engine, or use remote_api to do so? A bulk download and re-upload seems a cumbersome way to handle it.
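A minimal sketch of the remote_api route, for what it's worth (the app ID, model, and property names are hypothetical):

    import getpass

    from google.appengine.ext import db
    from google.appengine.ext.remote_api import remote_api_stub

    def auth_func():
        # prompt for App Engine credentials on the command line
        return raw_input('Email: '), getpass.getpass('Password: ')

    # connect the datastore API to the live app
    remote_api_stub.ConfigureRemoteApi(
        None, '/_ah/remote_api', auth_func, 'your-app.appspot.com')

    class UserProfile(db.Model):
        display_name = db.StringProperty()

    # update properties in place; the existing numeric IDs are untouched
    for profile in UserProfile.all():
        profile.display_name = (profile.display_name or '').strip()
        profile.put()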
Hi, I want some help building a phone book application in Python and putting it on Google App Engine. I am running a huge DB of 2 million users and their phone book contacts. I want to upload all that data from my servers directly onto Google's servers and then use a UI to retrieve each user's phone book contacts based on his name.
I am using MS SQL Server 2005 as my DB.
Please help in putting together this application.
Your inputs are much appreciated.
For building your UI, App Engine has its own web framework called webapp that is pretty easy to get working. I've also had a good experience using the Jinja2 templating engine, which you can include in your source or package as a zip file (the example shows Django, but you can do the same type of thing for Jinja).
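A minimal webapp handler, to give a feel for it (the handler body is just a placeholder):

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # render the phone book search page here
            self.response.out.write('Phone book lookup goes here')

    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()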
As for loading all of your data into the datastore, you should take a look at the bulk uploader documentation.
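A bulkloader configuration for CSV data exported from SQL Server might look something like this (the kind and property names are assumptions about your schema):

    transformers:
    - kind: Contact
      connector: csv
      property_map:
      - property: __key__
        external_name: contact_id
        export_transform: transform.key_id_or_name_as_string
      - property: user_name
        external_name: user_name
      - property: phone_number
        external_name: phone_number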
I think you're going to need to be more specific as to what problem you're having. As far as bulk loading goes, there's lots of bulkloader documentation around; or are you asking about model design? If so, we need to know more about how you plan to search for users. Do you need partial string matches? Sorting? Fuzzy matching?