OpenERP - External ID Bulk Update - Python

I need a way to add some external IDs to the system without having to add them manually or via .csv.
Is there any way to do this with a module that updates the ir.model.data table of the database?
If so, what module should I look at? Is there an existing one I could base a new one on?
Thanks in advance

You can do this with a module by loading the data through XML or CSV. Have a look at any module with a security/ir.model.access.csv file.
So to load data, create a new module, add a CSV file named after the table you want to load into (e.g. ir.model.data.csv), and list it in the __openerp__.py file under 'update_xml'.
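For example, a minimal module could look like this (the module name is illustrative):

    # __openerp__.py -- minimal manifest; 'update_xml' lists the data files to load
    {
        'name': 'External ID Bulk Loader',
        'version': '1.0',
        'depends': ['base'],
        'update_xml': ['ir.model.data.csv'],
    }

The CSV header then names the ir.model.data columns to fill, one record per row. A hedged sample (the column set and the values are assumptions - check the model names and database IDs of your own records):

    "id","name","model","res_id"
    "xid_partner_acme","partner_acme","res.partner","42"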

Related

Import a set of text files into MySQL using MySQL or Python code

I have a folder which contains a set of .txt files with the same structure.
The folder directory is E:\DataExport
It contains 1000 .txt files: text1.txt, text2.txt, ....
Each .txt file contains price data for one ticker. The data is updated daily. An example of one .txt file is below:
Ticker,Date/Time,Open,High,Low,Close,Volume
AAA,7/15/2010,19.581,20.347,18.429,18.698,174100
AAA,7/16/2010,19.002,19.002,17.855,17.855,109200
AAA,7/19/2010,19.002,19.002,17.777,17.777,104900
....
My question is, I want to:
Load all the files into the MySQL database through a line of code. I can do it one by one on the MySQL command line, but I don't know how to import them all at once into one table in MySQL (I have already created the table).
Every day, when I get new data, how can I load only the new data into the table in MySQL?
I would like a solution using either Python or MySQL. I have tried to google and apply some solutions, but without success; the data does not load into MySQL.
You could use the Python package pandas.
Read the data with read_csv into a DataFrame and use the to_sql method with a proper con and schema.
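A minimal sketch, assuming the table is named prices and MySQL is reachable through SQLAlchemy with the pymysql driver (the connection string and the table name are assumptions):

    import glob
    import pandas as pd
    from sqlalchemy import create_engine

    # Assumed credentials and database name -- adjust to your setup.
    engine = create_engine("mysql+pymysql://user:password@localhost/marketdata")

    for path in glob.glob(r"E:\DataExport\*.txt"):
        df = pd.read_csv(path)  # columns: Ticker,Date/Time,Open,High,Low,Close,Volume
        # Column names must match the table you already created.
        df.to_sql("prices", con=engine, if_exists="append", index=False)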
For the daily update, you will have to keep track of what has already been imported. You could, for example, record in a file or in the database that you last imported up to the 54th line of the 1032nd file, then run an update that reads the rest and imports it.
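A simple way to do that (the state file and table name are assumptions) is to remember which files have already been loaded and import only the new ones on each daily run:

    import glob, os
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("mysql+pymysql://user:password@localhost/marketdata")  # assumed
    STATE = "imported_files.txt"  # one already-imported file name per line

    done = set(open(STATE).read().split()) if os.path.exists(STATE) else set()

    with open(STATE, "a") as state:
        for path in glob.glob(r"E:\DataExport\*.txt"):
            name = os.path.basename(path)
            if name in done:
                continue  # skip files imported on a previous run
            pd.read_csv(path).to_sql("prices", con=engine, if_exists="append", index=False)
            state.write(name + "\n")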

How to store and download a zip file in a Postgres database

I'm trying to store and then download a zip file from a Postgres database. I know that this is not the best approach (I should only save the path to the file), but I need to do it this way, just for learning and practice.
I wrote a Python script to store the content of a file in a bytea field, but that was not my final goal. I really want to know how to save the zip file itself.
Any ideas? I only know Python, so I'm trying to do this in Python.
Thank you guys!
If you can store a file in a bytea field, then storing a zipped file is just the same.
Postgres doesn't have a concept of a "file" field - you simply store the content (as you did for the original content) in a bytea field.
If you're asking about zipping a file on the fly, take a look at zlib; it's one of the common modules for such tasks (for .zip archives specifically, the standard zipfile module is the usual choice).
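A minimal sketch with psycopg2 (the table, e.g. created as CREATE TABLE files (name text, data bytea), and the file names are assumptions):

    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")  # assumed connection string
    cur = conn.cursor()

    # Store: read the zip as raw bytes and insert them into the bytea column.
    with open("archive.zip", "rb") as f:
        cur.execute("INSERT INTO files (name, data) VALUES (%s, %s)",
                    ("archive.zip", psycopg2.Binary(f.read())))
    conn.commit()

    # Download: fetch the bytes and write them back out as a zip file.
    cur.execute("SELECT data FROM files WHERE name = %s", ("archive.zip",))
    with open("downloaded.zip", "wb") as f:
        f.write(bytes(cur.fetchone()[0]))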
Regards, Jony

Dynamically add columns to an existing BigQuery table

Background
I am loading files from my local machine to BigQuery. Each file has a variable number of fields, so I am using autodetect=True when running the load job.
The issue is, when the load job runs for the first time and the destination table doesn't exist, BigQuery creates the table by inferring the fields present in the file, and that becomes the new table's schema.
Now, when I run a load job with a different file which contains some extra fields (e.g. "Middle Name": "xyz"), BigQuery throws an error saying the field doesn't exist in the table.
From this post, BigQuery : add new column to existing tables using python BQ API, I learnt that columns can be added dynamically. However, what I don't understand is:
Query
How will my program know that the file being uploaded contains extra fields and that a schema mismatch will occur? (Not a problem if the table doesn't exist, because a new table will be created.)
If my program can somehow infer the extra fields present in the file being uploaded, I could add those columns to the existing table and then run the load job.
I am using the Python BQ API.
Any thoughts on how to automate this process would be helpful.
You should check the schema update options. There is an option named "ALLOW_FIELD_ADDITION" that will help you.
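For example, with the newer google-cloud-bigquery client (the question uses the older discovery-based API; the table and file names are assumptions), a load job that is allowed to add fields looks roughly like this:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        autodetect=True,
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        # Lets the load job extend the table schema with new fields.
        schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
    )

    with open("data.json", "rb") as f:  # assumed file
        job = client.load_table_from_file(f, "mydataset.mytable", job_config=job_config)
    job.result()  # waits for completion; extra fields are appended to the schema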
A naive solution would be (a sketch follows the list):
1. Get the target table schema using
service.tables().get(projectId=projectId, datasetId=datasetId, tableId=tableId)
2. Generate the schema of the data in your file.
3. Compare the schemas (a kind of "diff") and then add to the target table those columns which are extra in your data's schema.
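A hedged sketch of that diff, again with the google-cloud-bigquery client (typing every new column as STRING and reading field names from a CSV header are assumptions):

    import csv
    from google.cloud import bigquery

    client = bigquery.Client()

    # Step 1: the target table's current schema.
    table = client.get_table("mydataset.mytable")  # assumed table
    existing = {field.name for field in table.schema}

    # Step 2: the fields present in the file (here: a CSV header row).
    with open("data.csv") as f:
        incoming = next(csv.reader(f))

    # Step 3: add the columns that are extra in the file's schema.
    extra = [name for name in incoming if name not in existing]
    if extra:
        table.schema = list(table.schema) + [
            bigquery.SchemaField(name, "STRING") for name in extra  # assumed type
        ]
        client.update_table(table, ["schema"])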
Any better ideas or approaches would be highly appreciated!

Insert users into Active Directory

I am trying to determine how best to insert users into Active Directory from a SQL Server table.
I figured I could use the LDAP server to do an insert, but the research I've done suggests otherwise - that I can only pull data from Active Directory into SQL Server.
Then I thought I could use a Python program to query the table and spit out a CSV file for a bulk insert, but I am not sure whether this would modify existing users if the data changes.
Any insight would be appreciated.
Here's a general idea of the algorithm:
1. Load the user data from SQL Server
2. Convert it into an LDIF (LDAP Data Interchange Format) file
3. Import the LDIF file into Active Directory using the LDIFDE command-line tool
Python, or any other programming language, can help you with step 2; a sketch follows below. Note that the details of the conversion are very specific to how your data is represented. You'll have to carefully map each database field onto an LDAP attribute, and determine the classes to be used for the LDAP objects.
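A hedged sketch of step 2 (the DSN, the column names, and the target OU are all assumptions; pyodbc stands in for whatever SQL Server driver you use):

    import pyodbc

    conn = pyodbc.connect("DSN=hr;UID=user;PWD=secret")  # assumed DSN
    rows = conn.cursor().execute("SELECT login, first_name, last_name FROM users")

    with open("users.ldf", "w") as ldif:
        for login, first, last in rows:
            ldif.write("dn: CN=%s %s,OU=Staff,DC=example,DC=com\n" % (first, last))
            ldif.write("changetype: add\n")
            ldif.write("objectClass: user\n")
            ldif.write("sAMAccountName: %s\n" % login)
            ldif.write("givenName: %s\n" % first)
            ldif.write("sn: %s\n" % last)
            ldif.write("\n")  # blank line separates LDIF entries

    # Step 3, from a command prompt:  ldifde -i -f users.ldf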
Will the above modify existing users? Yes, of course. You could write the LDIF in such a way that it updates the existing data, or, if that's a problem, you could first verify whether a user already exists in Active Directory and leave those changes out of the LDIF file.
Alternatively
You could use CSVDE for importing data in CSV format, but either way you'll have to design a mapping strategy for each of the fields that you want to import into Active Directory.

OpenERP 6.1 module with a file upload field

I want to create an OpenERP 6.1 module with a binary file upload field as one of the fields in a view.
The file will be stored in the database as binary data, but before storing it I need to parse the file and save the parsed data as part of the other module I've created.
So, I don't know how to specify a field for uploading files in a view XML file, nor how to hook into the upload process. Can somebody help me with this? Some code snippets or advice on how to do it would be appreciated.
Take a look at the way the attachments module works, particularly the binary data column. You should also look at the screen definition.
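A minimal sketch in OpenERP 6.1 style (the model and field names are illustrative); note that the client sends the file base64-encoded, so decode it before parsing:

    import base64
    from osv import osv, fields

    class my_import(osv.osv):
        _name = 'my.import'
        _columns = {
            'name': fields.char('Name', size=64),
            'data': fields.binary('File'),  # rendered as an upload widget in the form view
        }

        def create(self, cr, uid, vals, context=None):
            if vals.get('data'):
                content = base64.b64decode(vals['data'])
                # ... parse `content` here and feed the results into your other module ...
            return super(my_import, self).create(cr, uid, vals, context=context)

    my_import()

    # In the form view XML, a plain  <field name="data"/>  renders the upload
    # control; add  filename="name"  to capture the original file name as well.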
