I am trying to determine the best way to insert users into Active Directory from a SQL Server table.
I figured I could use the LDAP server to do an insert, but the research I've done suggests otherwise: that I could only pull data from Active Directory into SQL Server.
Then I thought I could use a Python program to query the table and spit out a CSV file to bulk insert, but I am not sure whether this would modify existing users if their data changes.
Any insight would be appreciated.
Here's a general idea of the algorithm:
Load user data from SQL Server
Convert it into an LDIF (LDAP Data Interchange Format) file
Import the LDIF file into Active Directory using the LDIFDE command-line tool
Python, or any other programming language, can help you with step 2. Note that the details of the conversion are very specific to how your data is represented. You'll have to carefully map each database field to an LDAP attribute and determine the classes to be used for the LDAP objects.
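For illustration, here is a minimal Python sketch of step 2, assuming pyodbc for the SQL Server side; the table, column names, OU and domain are placeholders you would replace with your own mapping:

import pyodbc

# Pull the user rows from SQL Server (hypothetical Employees table).
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=HR;Trusted_Connection=yes")
cursor = conn.cursor()
cursor.execute("SELECT Username, FirstName, LastName, Email FROM Employees")

# Write one LDIF record per row (the attribute mapping here is an example only).
with open("new_users.ldf", "w", encoding="utf-8") as ldif:
    for username, first, last, email in cursor.fetchall():
        dn = f"CN={first} {last},OU=Staff,DC=contoso,DC=com"
        ldif.write(f"dn: {dn}\n")
        ldif.write("changetype: add\n")
        ldif.write("objectClass: user\n")
        ldif.write(f"sAMAccountName: {username}\n")
        ldif.write(f"givenName: {first}\n")
        ldif.write(f"sn: {last}\n")
        ldif.write(f"mail: {email}\n")
        ldif.write("\n")  # blank line separates LDIF records

# Step 3 is then a one-liner on a domain-joined machine: ldifde -i -f new_users.ldf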
Will the above modify existing users? Yes, of course. You could write the LDIF in such a way that it updates the existing data, or, if that's a problem, you could first verify whether a user already exists in Active Directory and leave those changes out of the LDIF file; see the sketch below.
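If you go the check-first route, a hedged sketch of the existence check with the third-party ldap3 package could look like this (server name, credentials and base DN are placeholders):

from ldap3 import Server, Connection, NTLM

server = Server("dc01.contoso.com")
conn = Connection(server, user="CONTOSO\\svc_sync", password="***",
                  authentication=NTLM, auto_bind=True)

def user_exists(sam_account_name):
    # Search the domain for an account with this sAMAccountName.
    conn.search("DC=contoso,DC=com",
                f"(sAMAccountName={sam_account_name})",
                attributes=["distinguishedName"])
    return len(conn.entries) > 0

# In the LDIF-writing loop: emit "changetype: add" only when user_exists() is False,
# or switch to "changetype: modify" entries for users that already exist.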
Alternatively
You could use CSVDE to import data in CSV format, but either way you'll have to design a mapping strategy for each of the fields that you want to import into Active Directory.
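The CSVDE route is very similar; a small sketch that writes the CSV with the attribute names in the header row (column names, OU and domain are again placeholders):

import csv

rows = [("jdoe", "John", "Doe", "jdoe@contoso.com")]  # would come from the SQL query

with open("new_users.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    # The header row names the LDAP attributes, including DN and objectClass.
    writer.writerow(["DN", "objectClass", "sAMAccountName", "givenName", "sn", "mail"])
    for username, first, last, email in rows:
        dn = f"CN={first} {last},OU=Staff,DC=contoso,DC=com"
        writer.writerow([dn, "user", username, first, last, email])

# Import with: csvde -i -f new_users.csv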
Related
I have a basic CSV report that is produced by another team on a daily basis; each report has 50k rows, and the reports are saved to a shared drive every day. And I have an Oracle DB.
I need to create an auto-scheduled (or at least less manual) process to import those CSV reports into the Oracle DB. What solution would you recommend for it?
I did not find such a solution in SQL Developer, since its import is an upload from a file rather than a query. I was thinking about a Python cron script that would run automatically on a daily basis, transform the CSV report into a .txt file with the needed SQL syntax (INSERT INTO ...), and then have Python connect to the Oracle DB and run that file as SQL commands to insert the data.
But this looks complicated.
Maybe you know of another solution that you would recommend?
Create an external table to allow you to access the content of the CSV as if it were a regular table. This assumes the file name does not change day-to-day.
Create a scheduled job to import the data in that external table and do whatever you want with it.
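Roughly, and with placeholder names throughout, the setup could look like the sketch below: the external-table DDL is run once (and needs an Oracle directory object pointing at the folder holding the file), after which the daily job boils down to a single INSERT ... SELECT, shown here driven from Python with cx_Oracle:

# One-time setup, run by someone with the right privileges (placeholder names):
#   CREATE DIRECTORY report_dir AS '/data/reports';
#   CREATE TABLE report_ext (
#     customer_id NUMBER,
#     amount      NUMBER
#   )
#   ORGANIZATION EXTERNAL (
#     TYPE ORACLE_LOADER
#     DEFAULT DIRECTORY report_dir
#     ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
#     LOCATION ('report.csv')
#   );

import cx_Oracle

# Daily job: copy whatever is currently in the CSV into the real table.
conn = cx_Oracle.connect("report_user", "***", "dbhost/orclpdb1")
cur = conn.cursor()
cur.execute("INSERT INTO report_target SELECT * FROM report_ext")
conn.commit()
conn.close()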
One common blocking issue that prevents using external tables is that they require the data file to be on the machine hosting the database. Not everyone has access to those servers, and sometimes transferring the data to that machine plus loading it into the DB is slower than doing a direct path load from the remote machine.
SQL*Loader with direct path load may be an option: https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-sql-loader.html#GUID-8D037494-07FA-4226-B507-E1B2ED10C144 This will be faster than Python.
If you do want to use Python, then read the cx_Oracle manual's section on Batch Statement Execution and Bulk Loading. There is an example of reading from a CSV file.
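A hedged sketch of that approach, with placeholder connection details, table and column names:

import csv
import cx_Oracle

conn = cx_Oracle.connect("report_user", "***", "dbhost/orclpdb1")
cur = conn.cursor()

BATCH_SIZE = 5000
sql = "INSERT INTO report_target (customer_id, amount) VALUES (:1, :2)"

with open("report.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    batch = []
    for row in reader:
        batch.append((row[0], row[1]))
        if len(batch) == BATCH_SIZE:
            cur.executemany(sql, batch)  # one round-trip per batch instead of per row
            batch = []
    if batch:
        cur.executemany(sql, batch)

conn.commit()
conn.close()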
I have a requirement where I have an .ods file with some data that I want to insert into a table. This needs to be done via a procedure call because we have to validate some fields in the .ods file. For this, we have two tables: a staging table and a main table. The staging table contains the records that fail validation and the main table contains the successful records. The steps for the requirement are below.
Note: this should be done using Python scripting, and it will be automated on a daily basis.
Step 1: Place the file in a specified location.
Step 2: Pick the file up from the specified location and call the procedure to insert the records.
Step 3: While calling the procedure, validation needs to be handled for some fields. Only records that pass validation should be stored in the main table; records that fail validation should be stored in the staging table.
Step 4: The automation script needs to run on a daily basis.
You can move the files between folders with Python's shutil module.
import shutil
shutil.move("path/to/current/file.foo", "path/to/new/destination/for/file.foo")
Check out more details of it here.
You can run the Python scripts periodically in multiple ways! I can't comment on the efficiency of these methods, but you can use Python's APScheduler; more details of it here.
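A minimal APScheduler sketch, assuming a hypothetical load_report() function that does the move/read/insert work, scheduled for 06:00 every day:

from apscheduler.schedulers.blocking import BlockingScheduler

def load_report():
    # move the file, read it, call the validation procedure, etc.
    pass

scheduler = BlockingScheduler()
scheduler.add_job(load_report, "cron", hour=6, minute=0)  # run daily at 06:00
scheduler.start()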
You can use Python's pyexcel-ods to read .ods files.
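A small sketch of the reading side with pyexcel-ods (the file path and sheet name are placeholders):

from pyexcel_ods import get_data

data = get_data("incoming/report.ods")   # OrderedDict: sheet name -> list of rows
rows = data["Sheet1"]
header, records = rows[0], rows[1:]
for record in records:
    # hand each record to the stored procedure / validation step here
    print(record)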
Since you haven't added your work, I can't help you more than this!
I was wondering if there is a way to allow a user to export a SQLite database as a .csv file, make some changes to it in a program like Excel, then upload that .csv file back to the table it came from using a record UPDATE method.
Currently I have a client that needed an inventory and pricing management system for their e-commerce store. I designed a database system and logic in Python 3 and SQLite. The system from a programming standpoint works flawlessly.
The problem I have is that there are some less-than-technical office staff who need to edit things like product markup within the database. Currently, I have them set up with SQLite DB Browser; from there they can edit products one at a time and write the changes to the database. They can also export tables to a .csv file for data manipulation in Excel.
The main issue is getting that .csv file back into the table it was exported from using an UPDATE method. When importing a .csv file into a table in SQLite DB Browser there is no way to perform an update import. It can only insert new rows by default, and due to my table constraints that is a problem.
I like SQLite DB Browser because it is clean and simple and does exactly what I need. However, as soon as you have to edit more than one thing at a time and filter information in more complicated ways, it starts to lack the functionality needed.
Is there a solution out there for SQLite DB Browser to tackle this problem? Is there a better software option altogether for interacting with a SQLite database that would give me that last bit of functionality?
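For context, the update-import I would otherwise have to script myself is only a few lines with the standard-library sqlite3 and csv modules (the table and column names below are just examples):

import csv
import sqlite3

conn = sqlite3.connect("inventory.db")
cur = conn.cursor()

# Apply the edited CSV back to the table it came from, keyed on the primary key.
with open("products_edited.csv", newline="") as f:
    for row in csv.DictReader(f):
        cur.execute("UPDATE products SET price = ?, markup = ? WHERE product_id = ?",
                    (row["price"], row["markup"], row["product_id"]))

conn.commit()
conn.close()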
Have you tried SQLiteForExcel? However, some coding is required.
So after researching some off-the-shelf options I found that the Devart Excel Add-ins did exactly what I needed. They are paid add-ins; however, they seem to support almost all modern databases, including SQLite. Once the add-in is installed you can connect to a database and manipulate the returned data just like normal in Excel, including bulk edits and advanced filtering; all changes are highlighted and can easily be written to the database with one click.
Overall I thought it was a pretty solid solution, and everyone seems to be very happy with it, as it made interacting with a database intuitive and non-threatening to the more technically challenged.
I've heard that MongoDB is a very good database, especially for storing large amounts of data; however, I'm not sure how safe it can really be.
I'm not experienced with MongoDB, but before I choose, I want to know how safe it can be for important data.
So, for example, if I specified the URI, I would type this:
uri = "mongodb://test1:test1#ds051990.mongolab.com:51990/base1"
I'm trying to make a P2P text chat. It can be accessed on the user's PC with root permissions. Whenever a user registers, the user's latest IP and username will be added to the database, as in the code shown below.
But a "hacker" could easily access it by simply looking into the code and viewing all the information, and then they could read/write all the data inside.
What would be the best solution to prevent this issue? I think high-level databases like MongoDB would have some kind of protection against other users accessing them.
How can I make sure only the necessary users can access the database and that other users can't get into it by viewing the uri variable?
If not, is there ANY other way I can do it, so that the user can't access the database at all, but I can still read and write data from the database?
You have no easy way of hiding the credentials. Instead, create a user with the minimal required permissions in the database, and use these credentials in your distributed code.
If you are worried about the users being able to see plain-text IP addresses, you should hash and salt them before inserting them into the database.
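A hedged sketch of both points: the low-privilege user is typically created once from the mongo shell (shown as a comment; names are placeholders), and the hashing can live in the Python code. The secret value below is a placeholder, and where you keep it depends on your architecture.

# One-time, from the mongo shell, so the distributed app only ever holds a
# low-privilege login (placeholder names):
#   use base1
#   db.createUser({user: "chatapp", pwd: "<password>", roles: [{role: "readWrite", db: "base1"}]})

import hashlib
import hmac

SECRET_SALT = b"replace-with-a-long-random-secret"  # placeholder; keep it out of the client if you can

def hash_ip(ip_address):
    # Store only a keyed hash of the address, never the plain-text IP.
    return hmac.new(SECRET_SALT, ip_address.encode(), hashlib.sha256).hexdigest()

# e.g. with pymongo:  users.insert_one({"username": username, "ip": hash_ip(client_ip)})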
I have an SQLite database which I use as a data storage file for an application I am developing in Python.
Now the development of new features requires me to define new fields in the database. Is there a way, with peewee, to load a database file which used the old table definition (without the new field) without getting an SQLError: no such column error?
Something like an automatic insertion of the new field with a default value into the database would make life a lot easier for keeping backwards compatibility when opening database files from previous versions.
I've written a web-based tool called sqlite-web that will allow you to manage your database schema using a GUI.
If you want to add columns on the fly in your Python code, check out peewee's migration extension: http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#schema-migrations
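A minimal sketch of what that migration looks like, with a made-up table, column and default value:

from peewee import IntegerField, SqliteDatabase
from playhouse.migrate import SqliteMigrator, migrate

db = SqliteDatabase("app_data.db")
migrator = SqliteMigrator(db)

# Add the new column with a default so rows from older database files stay readable.
migrate(
    migrator.add_column("measurement", "sample_rate", IntegerField(default=0)),
)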