How do I modify this Python code into a SQL query in Redshift - python

I am trying to see if there's any way I can implement this piece of code using only SQL in Redshift:
import pandas as pd

# Query 1: generate one column name per day of 2022, i.e. 2022_01_01 .. 2022_12_31
a = '''
SELECT to_char(DATE '2022-01-01'
    + (interval '1 day' * generate_series(0, 364)), 'YYYY_MM_DD') AS ym
'''
dfa = pd.read_sql(a, conn)

# Query 2: use the generated names as the column list of the pivoted table
b = f'''
select account_no, {','.join('"' + str(x) + '"' for x in dfa.ym)}
from loan__balance_table
where account_no =
'''
dfb = pd.read_sql(b, conn)
The first query yields something like this:
| ym |
| ---------- |
| 2022_01_01 |
| 2022_01_02 |
...
| 2022_12_31 |
Then I used string concatenation to combine the dates and used them in the second query to select all the columns in ym. The result of the second query should look something like this:
| account_no | 2022_01_01 | 2022_01_02 | ...
| ---------- | ---------- | ---------- | ...
| 1234 | 234,987.09 | 233,989.19 | ...
I just want to know if there's a way I can combine both queries into one in SQL, without using Python to concatenate the column names.
I tried using a CTE but I can't seem to get it right, and I don't even know if this is the right approach. The database is Redshift.
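For what it's worth, one pure-SQL route on Redshift is a stored procedure that builds the column list itself and runs the pivot as dynamic SQL through a cursor. The sketch below is untested and purely illustrative: the procedure name pivot_loan_balance is made up, and it assumes the 2022 date range plus the loan__balance_table / account_no names from the question.
create_proc = '''
CREATE OR REPLACE PROCEDURE pivot_loan_balance(acct INT, INOUT rs refcursor)
AS $$
DECLARE
    col_list VARCHAR(65535) := '';
    d DATE := '2022-01-01';
BEGIN
    -- build "2022_01_01","2022_01_02",... entirely inside Redshift
    WHILE d <= '2022-12-31' LOOP
        IF col_list <> '' THEN col_list := col_list || ','; END IF;
        col_list := col_list || '"' || to_char(d, 'YYYY_MM_DD') || '"';
        d := d + 1;
    END LOOP;
    -- run the pivot as dynamic SQL and return it through the cursor
    OPEN rs FOR EXECUTE 'SELECT account_no, ' || col_list
        || ' FROM loan__balance_table WHERE account_no = ' || acct::varchar;
END;
$$ LANGUAGE plpgsql;
'''
Since pd.read_sql expects a result set rather than DDL, the procedure would be created once (for example with a plain cursor execute) and then used from SQL only, e.g. CALL pivot_loan_balance(1234, 'rs'); followed by FETCH ALL FROM rs; in the same transaction.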

psycopg2: cursor.execute storing only table structure, no data

I am trying to store some tables I create in my code in an RDS instance using psycopg2. The script runs without issue and I can see the table being stored correctly in the DB. However, when I try to query the table back, I only see the columns, but no data:
import pandas as pd
import psycopg2
test=pd.DataFrame({'A':[1,1],'B':[2,2]})
#connect is a function to connect to the RDS instance
connection= connect()
cursor=connection.cursor()
query='CREATE TABLE test (A varchar NOT NULL,B varchar NOT NULL);'
cursor.execute(query)
connection.commit()
cursor.close()
connection.close()
This script runs without issues. Printing out file_check from the following script:
connection=connect()
# check if file already exists in SQL
sql = """
SELECT "table_name","column_name", "data_type", "table_schema"
FROM INFORMATION_SCHEMA.COLUMNS
WHERE "table_schema" = 'public'
ORDER BY table_name
"""
file_check=pd.read_sql(sql, con=connection)
connection.close()
I get:
table_name column_name data_type table_schema
0 test a character varying public
1 test b character varying public
which looks good.
Running the following, however:
read='select * from public.test'
df=pd.read_sql(read,con=connection)
returns:
Empty DataFrame
Columns: [a, b]
Index: []
Anybody have any idea why this is happening? I cannot seem to get around this.
Erm, your first script defines a test dataframe, but it's never referred to after it's defined.
You'll need to
test.to_sql("test", connection)
or similar to actually write it.
A minimal example:
$ createdb so63284022
$ python
>>> import sqlalchemy as sa
>>> import pandas as pd
>>> test = pd.DataFrame({'A':[1,1],'B':[2,2], 'C': ['yes', 'hello']})
>>> engine = sa.create_engine("postgres://localhost/so63284022")
>>> with engine.connect() as connection:
... test.to_sql("test", connection)
...
>>>
$ psql so63284022
so63284022=# select * from test;
index | A | B | C
-------+---+---+-------
0 | 1 | 2 | yes
1 | 1 | 2 | hello
(2 rows)
so63284022=# \d+ test
Table "public.test"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+--------+-----------+----------+---------+----------+--------------+-------------
index | bigint | | | | plain | |
A | bigint | | | | plain | |
B | bigint | | | | plain | |
C | text | | | | extended | |
Indexes:
"ix_test_index" btree (index)
Access method: heap
so63284022=#
I was able to solve this:
As was pointed out by @AKX, I was only creating the table structure, but I was not filling in the table.
I now import psycopg2.extras as well and, after this:
query='CREATE TABLE test (A varchar NOT NULL,B varchar NOT NULL);'
cursor.execute(query)
I add something like:
update_query='INSERT INTO test(A, B) VALUES(%s,%s) ON CONFLICT DO NOTHING'
psycopg2.extras.execute_batch(cursor, update_query, test.values)
cursor.close()
connection.close()
My table is now correctly filled, as confirmed with pd.read_sql.
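Pulling the pieces together, a consolidated sketch of that fix might look like the following (not tested here; connect() is the helper from the question, and the values are converted to plain str because the columns were created as varchar):
import pandas as pd
import psycopg2.extras

test = pd.DataFrame({'A': [1, 1], 'B': [2, 2]})

connection = connect()
cursor = connection.cursor()

# create the table structure
cursor.execute('CREATE TABLE IF NOT EXISTS test (A varchar NOT NULL, B varchar NOT NULL);')

# actually fill the table
insert_query = 'INSERT INTO test(A, B) VALUES (%s, %s) ON CONFLICT DO NOTHING'
rows = [(str(a), str(b)) for a, b in test.values]
psycopg2.extras.execute_batch(cursor, insert_query, rows)

connection.commit()  # without a commit, the inserted rows are not persisted
cursor.close()
connection.close()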

Is there any ways to combine two rows of table into one row using Django ORM?

I have a table with columns named measured_time, data_type and value.
In data_type, there are two types, temperature and humidity.
I want to combine two rows of data into one row when they have the same measured_time, using the Django ORM.
I am using MariaDB.
Using raw SQL, the following query does what I want:
SELECT T1.measured_time, T1.temperature, T2.humidity
FROM (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
             CASE WHEN data_type = 2 THEN value END AS humidity,
             measured_time
      FROM data_table) AS T1,
     (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
             CASE WHEN data_type = 2 THEN value END AS humidity,
             measured_time
      FROM data_table) AS T2
WHERE T1.measured_time = T2.measured_time
  AND T1.temperature IS NOT NULL
  AND T2.humidity IS NOT NULL
  AND DATE(T1.measured_time) = '2019-07-01'
Original Table
| measured_time | data_type | value |
|---------------------|-----------|-------|
| 2019-07-01-17:27:03 | 1 | 25.24 |
| 2019-07-01-17:27:03 | 2 | 33.22 |
Expected Result
| measured_time | temperature | humidity |
|---------------------|------------|----------|
| 2019-07-01-17:27:03 | 25.24 | 33.22 |
I've never used it and so can't answer in detail, but you can feed a raw SQL query into Django and get the results back through the ORM. Since you already have the SQL, this may be the easiest way to proceed. See the Django documentation on performing raw SQL queries (Manager.raw()).
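A rough sketch of what that could look like, assuming a model named DataPoint mapped onto data_table (the model and field names are guesses based on the question; the raw query mirrors the SQL above using conditional aggregation):
from django.db import models

class DataPoint(models.Model):
    measured_time = models.DateTimeField()
    data_type = models.IntegerField()
    value = models.FloatField()

    class Meta:
        db_table = 'data_table'

# raw() needs the primary key in the result set, hence MIN(id) AS id
rows = DataPoint.objects.raw("""
    SELECT MIN(id) AS id,
           measured_time,
           MAX(CASE WHEN data_type = 1 THEN value END) AS temperature,
           MAX(CASE WHEN data_type = 2 THEN value END) AS humidity
    FROM data_table
    WHERE DATE(measured_time) = '2019-07-01'
    GROUP BY measured_time
""")

for r in rows:
    print(r.measured_time, r.temperature, r.humidity)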

Insert a value into a row with petl?

I'm using petl and trying to figure out how to insert a value into a specific row.
I have a table that looks like this:
+----------------+---------+------------+
| Cambridge Data | IRR | Price List |
+================+=========+============+
| '3/31/1989' | '4.37%' | |
+----------------+---------+------------+
| '4/30/1989' | '5.35%' | |
+----------------+---------+------------+
I want to set the price list to 100 on the row where Cambridge Data is 4/30/1989. This is what I have so far:
def insert_initial_price(self, table):
import petl as etl
initial_price_row = etl.select(table, 'Cambridge Data', lambda v: v == '3/31/1989')
That selects the row I need to insert 100 into, but I'm unsure how to insert it. petl doesn't seem to have an "insert value" function.
I would advise not using select here.
To update the value of a field, use convert.
See the docs with many examples: https://petl.readthedocs.io/en/stable/transform.html#petl.transform.conversions.convert
I have not tested it, but this should solve it:
import petl as etl
table2 = etl.convert(
table,
'Price List',
100,
where = lambda rec: rec["Cambridge Data"] == '4/30/1989',
)
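As a quick check (untested), printing the converted table should then show 100 in the Price List column for that row:
print(etl.look(table2))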

Flask SQLAlchemy sum function comparison

I have a table holding some data about users. There are two fields there, like and smile. I need to get data from the table, grouped by user_id, that shows whether a user has likes or smiles. The query I would write in SQL looks like:
select sum(smile) > 0 as has_smile,
       sum(like) > 0 as has_like,
       user_id
from ratings
group by user_id
This would provide output like:
| has_smile | has_like | user_id |
+-----------+----------+---------+
| 1 | 0 | 1 |
| 1 | 1 | 2 |
Is there any chance this query can be translated to SQLAlchemy (Flask-SQLAlchemy to be precise)? I know there is db.func.sum, but I don't know how to add the comparison there, or how to give it a label. What I have for now is:
cls.query.with_entities("user_id").group_by(user_id).\
    add_columns(db.func.sum(cls.smile).label("has_smile"),
                db.func.sum(cls.like).label("has_like")).all()
but that returns the exact number of smiles/likes instead of just 1/0 indicating whether there is a smile/like or not.
Thanks to operator overloading, you'd write the comparison the way you're used to writing it in Python in general:
db.func.sum(cls.smile) > 0
which produces an SQL expression object that you can then give a label to:
(db.func.sum(cls.smile) > 0).label('has_smile')
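Putting it together, a sketch of the full query might look like this (untested; Rating stands in for whatever model class cls refers to in the question):
results = (
    db.session.query(
        Rating.user_id,
        (db.func.sum(Rating.smile) > 0).label("has_smile"),
        (db.func.sum(Rating.like) > 0).label("has_like"),
    )
    .group_by(Rating.user_id)
    .all()
)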

Sqoop job from Python with stdout=subprocess.PIPE

I am trying to generate a Sqoop command using Python. I am able to pass and fire the Sqoop query. I want to map column names in the Sqoop command with --map-column-java; the number of columns differs for each table, and only BLOB and CLOB columns need to be mapped.
Data:
-----------------------------------------------
| COLUMN_NAME | DATA_TYPE |
-----------------------------------------------
| C460 | VARCHAR2 |
| C459 | CLOB |
| C458 | VARCHAR2 |
| C457 | VARCHAR2 |
| C456 | CLOB |
| C8 | BLOB |
| C60901 | VARCHAR2 |
-----------------------------------------------
Sample code:
proc = subprocess.Popen(
    ["sqoop", "eval",
     "--connect", "jdbc:oracle:thin:@" + config["Production_host"] + ":" + config["port"] + "/" + config["Production_SERVICE_NAME"],
     "--username", config["Production_User"],
     "--password", config["Production_Password"],
     "--query", "SELECT column_name, data_type FROM all_tab_columns WHERE table_name = '" + Tablename + "'"],
    stdout=subprocess.PIPE)
COl_Re = re.compile(r'(?m)(C\d+)(?=.+[CB]LOB)')
columns = COl_Re.findall(proc.stdout.read().decode())  # stdout is bytes in Python 3
I am able to get the required column names C459, C456, C8 using the above code; the output is ['C459', 'C456', 'C8'].
I should get a new Sqoop query in the format below:
sqoop import --connect jdbc:oracle:thin:@<host>:<port>/<service_name> --username <user> --password <password> --table <table> --fields-terminated-by '|' --map-column-java C456=String,C459=String,C8=String --hive-drop-import-delims --input-null-string '\\N' --input-null-non-string '\\N' --as-textfile --target-dir <Location> -m 1
I only need to add the --map-column-java C456=String,C459=String,C8=String part dynamically, so that my next subprocess.call can use it.
Build your Sqoop syntax by assigning it to a variable, override or extend that variable with parameters based on your conditions, and once the final syntax is built, execute it. Hope this helps.
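A hedged sketch of that suggestion (untested): build the command as a list, append --map-column-java only when BLOB/CLOB columns were found, and then run it. config, Tablename and columns come from the code above; target_dir is an assumed variable holding the HDFS location.
import subprocess

cmd = [
    "sqoop", "import",
    "--connect", "jdbc:oracle:thin:@" + config["Production_host"] + ":"
        + config["port"] + "/" + config["Production_SERVICE_NAME"],
    "--username", config["Production_User"],
    "--password", config["Production_Password"],
    "--table", Tablename,
    "--fields-terminated-by", "|",
    "--hive-drop-import-delims",
    "--input-null-string", r"\\N",
    "--input-null-non-string", r"\\N",
    "--as-textfile",
    "--target-dir", target_dir,
    "-m", "1",
]
# e.g. columns == ['C459', 'C456', 'C8'] from the regex in the question
if columns:
    cmd += ["--map-column-java", ",".join(c + "=String" for c in columns)]

subprocess.call(cmd)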
