I'm using petl and trying to figure out how to insert a value into a specific row.
I have a table that looks like this:
+----------------+---------+------------+
| Cambridge Data | IRR | Price List |
+================+=========+============+
| '3/31/1989' | '4.37%' | |
+----------------+---------+------------+
| '4/30/1989' | '5.35%' | |
+----------------+---------+------------+
I want to set the price list to 100 on the row where Cambridge Data is 4/30/1989. This is what I have so far:
def insert_initial_price(self, table):
    import petl as etl
    initial_price_row = etl.select(table, 'Cambridge Data', lambda v: v == '3/31/1989')
That selects the row I need to insert 100 into, but I'm unsure how to insert it. petl doesn't seem to have an "insert value" function.
I would advise not using select here.
To update the value of a field, use convert.
See the docs with many examples: https://petl.readthedocs.io/en/stable/transform.html#petl.transform.conversions.convert
I have not tested it, but this should solve it:
import petl as etl

table2 = etl.convert(
    table,
    'Price List',
    100,
    where=lambda rec: rec['Cambridge Data'] == '4/30/1989',
)
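As a quick sanity check, here is a minimal sketch assuming the two-row table from the question (lookall is just for display):

import petl as etl

table = [
    ['Cambridge Data', 'IRR', 'Price List'],
    ['3/31/1989', '4.37%', None],
    ['4/30/1989', '5.35%', None],
]
table2 = etl.convert(
    table,
    'Price List',
    100,
    where=lambda rec: rec['Cambridge Data'] == '4/30/1989',
)
print(etl.lookall(table2))  # the 4/30/1989 row now shows 100 under 'Price List'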
I am trying to see if there's any way I can implement this piece of code using only SQL (Redshift).
a = '''
SELECT to_char(DATE '2022-01-01'
+ (interval '1 day' * generate_series(0,365)), 'YYYY_MM_DD') AS ym
'''
dfa = pd.read_sql(a, conn)
b = f'''
select account_no, {','.join('"' + str(x) + '"' for x in dfa.ym)}
from loan__balance_table
where account_no =
'''
dfb = pd.read_sql(b, conn)
The first query will yield something like this:
| ym |
| ---------- |
| 2022_01_01 |
| 2022_01_02 |
...
| 2022_12_31 |
Then I used string concatenation to combine the dates and used them in the second query to select all the columns listed in ym. The result of the second query should be something like this:
| account_no | 2022_01_01 | 2022_01_02 | ...
| ---------- | ---------- | ---------- | ...
| 1234 | 234,987.09 | 233,989.19 | ...
I just want to know if there's a way I can combine both queries into one in SQL, without using Python to concatenate the column names.
I tried using a CTE but I can't seem to get it right; I don't even know if this is the right approach. The database is Redshift.
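Not tested, but one way to keep everything on the database side is a stored procedure that builds the column list with LISTAGG and runs the pivot through dynamic SQL. This is only a rough sketch: the procedure name, the date_columns source table, and the pivoted temp table are all made up, and since generate_series is a leader-node-only function in Redshift, the dates would have to come from a real table inside the procedure:

CREATE OR REPLACE PROCEDURE pivot_loan_balances(acct INTEGER)
AS $$
DECLARE
    cols VARCHAR(MAX);
BEGIN
    -- build the quoted, comma-separated column list (what the Python join did)
    SELECT LISTAGG('"' || ym || '"', ',') WITHIN GROUP (ORDER BY ym)
      INTO cols
      FROM date_columns;  -- assumed: the generated dates materialized as a table
    EXECUTE 'DROP TABLE IF EXISTS pivoted';
    EXECUTE 'CREATE TEMP TABLE pivoted AS '
         || 'SELECT account_no, ' || cols
         || ' FROM loan__balance_table WHERE account_no = ' || acct::text;
END;
$$ LANGUAGE plpgsql;

-- then a single round trip from the client:
-- CALL pivot_loan_balances(1234);
-- SELECT * FROM pivoted;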
I have a table which has columns named measured_time, data_type and value.
In data_type, there are two types: temperature and humidity.
I want to combine two rows of data if they have the same measured_time, using the Django ORM.
I am using MariaDB.
Using raw SQL, the following query does what I want:
SELECT T1.measured_time, T1.temperature, T2.humidity
FROM ( SELECT CASE WHEN data_type = 1 then value END as temperature,
CASE WHEN data_type = 2 then value END as humidity ,
measured_time FROM data_table) as T1,
( SELECT CASE WHEN data_type = 1 then value END as temperature ,
CASE WHEN data_type = 2 then value END as humidity ,
measured_time FROM data_table) as T2
WHERE T1.measured_time = T2.measured_time and
T1.temperature IS NOT null and T2.humidity IS NOT null and
DATE(T1.measured_time) = '2019-07-01'
Original Table
| measured_time | data_type | value |
|---------------------|-----------|-------|
| 2019-07-01-17:27:03 | 1 | 25.24 |
| 2019-07-01-17:27:03 | 2 | 33.22 |
Expected Result
| measured_time       | temperature | humidity |
|---------------------|------------|----------|
| 2019-07-01-17:27:03 | 25.24 | 33.22 |
I've never used it myself, so I can't answer in detail, but you can feed a raw SQL query into Django and get the results back through the ORM. Since you already have the SQL, this may be the easiest way to proceed. Documentation here
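A minimal sketch of that route, using Django's low-level connection (one of the documented options; Manager.raw() is the other, but it needs the model's primary key in the select list):

from django.db import connection

# the working query from the question, parameterized on the date
sql = """
    SELECT T1.measured_time, T1.temperature, T2.humidity
    FROM (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
                 CASE WHEN data_type = 2 THEN value END AS humidity,
                 measured_time
          FROM data_table) AS T1,
         (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
                 CASE WHEN data_type = 2 THEN value END AS humidity,
                 measured_time
          FROM data_table) AS T2
    WHERE T1.measured_time = T2.measured_time
      AND T1.temperature IS NOT NULL AND T2.humidity IS NOT NULL
      AND DATE(T1.measured_time) = %s
"""

with connection.cursor() as cursor:
    cursor.execute(sql, ['2019-07-01'])
    rows = cursor.fetchall()  # [(measured_time, temperature, humidity), ...]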
I'm using Python and SQLite to manipulate a string for an Android app.
I have a SQLite Table that looks like this:
| ID | Country
+----------------+-------------
| 1 | USA, Germany, Mexico
| 2 | Brazil, Canada
| 3 | Peru
I would like to split the comma-delimited values of the Country column and insert them into another table, Countries, so that the Countries table looks like this:
| ID | Country
+----------------+-------------
| 1 | USA
| 1 | Germany
| 1 | Mexico
| 2 | Brazil
| 2 | Canada
| 3 | Peru
How do I split the values from the Country column in one table and insert them into the Country column of another table?
There is no split function in SQLite.
There is, of course, the substr function, but it's not suitable for your needs on its own, since every row could contain more than one comma.
If you were an expert in SQLite, I guess you could create a recursive statement using substr to split each row.
If you're not, use Python to read the data, split each row, and write it back to the db.
You can use a recursive common table expression to split the comma-delimited column by extracting substrings of the Country column recursively.
CREATE TABLE country_split AS
WITH RECURSIVE split(id, value, rest) AS (
    SELECT ID, '', Country || ',' FROM country
    UNION ALL
    SELECT id,
           substr(rest, 0, instr(rest, ',')),
           substr(rest, instr(rest, ',') + 1)
    FROM split
    WHERE rest != ''
)
SELECT id, value
FROM split
WHERE value != '';
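To see it work end to end, here is a sketch run through Python's sqlite3 (the sample data is the table from the question; trim() is added because the sample values carry a space after each comma):

import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE country (ID INTEGER, Country TEXT);
    INSERT INTO country VALUES
        (1, 'USA, Germany, Mexico'),
        (2, 'Brazil, Canada'),
        (3, 'Peru');

    CREATE TABLE country_split AS
    WITH RECURSIVE split(id, value, rest) AS (
        SELECT ID, '', Country || ',' FROM country
        UNION ALL
        SELECT id,
               substr(rest, 0, instr(rest, ',')),
               substr(rest, instr(rest, ',') + 1)
        FROM split
        WHERE rest != ''
    )
    SELECT id, trim(value) AS value
    FROM split
    WHERE value != '';
""")
for row in con.execute('SELECT id, value FROM country_split ORDER BY id'):
    print(row)  # (1, 'USA'), (1, 'Germany'), (1, 'Mexico'), (2, 'Brazil'), ...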
I solved it.
I'm using Python:
import sqlite3

db = sqlite3.connect('mydb.db')
cursor = db.cursor()

cursor.execute("""SELECT * FROM Countries""")
all_data = cursor.fetchall()

cursor.execute("""CREATE TABLE IF NOT EXISTS Countriess
                  (ID TEXT,
                   Country TEXT)""")

# split each comma-delimited value into one row per country
for single_data in all_data:
    countriess = single_data[1].split(",")
    for single_country in countriess:
        cursor.execute("INSERT INTO Countriess VALUES(:id, :name)",
                       {"id": single_data[0], "name": single_country.strip()})
db.commit()
After that, the SQLite db can be used in another project. :)
Question: how do I insert a datetime value into MS SQL server, given the code below?
Context:
I have a 2-D list (i.e., a list of lists) in Python that I'd like to upload to a table in Microsoft SQL Server 2008. For this project I am using Python's pymssql package. Each value in each list is a string except for the very first element, which is a datetime value.
Here is how my code reads:
import pymssql

db_connect = pymssql.connect(  # these are just generic names
    server=server_name,
    user=db_usr,
    password=db_pwd,
    database=db_name
)
my_cursor = db_connect.cursor()
for individual_list in list_of_lists:
    # the first value in the tuple should be a datetime
    my_cursor.execute(
        "INSERT INTO [DB_Table_Name] VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
        tuple(individual_list))
db_connect.commit()
The Python interpreter is having a tough time inserting my datetime values. I understand that %s is a string placeholder, but I'm unsure what I should use for datetime, which is the type of the database's first column.
The "list of lists" looks like this (after each list is converted into a tuple):
[(datetime.datetime(2012, 4, 1), '1', '4.1', 'hip', 'A1', 'J. Smith', 'B123', 'XYZ'),...]
Here is an illustration of what the table should look like:
+-----------+------+------+--------+-------+-----------+---------+---------+
| date | step | data | type | ID | contact | notif. | program |
+-----------+------+------+--------+-------+-----------+---------+---------+
|2012-04-01 | 1 | 4.1 | hip | A1 | J. Smith | B123 | XYZ |
|2012-09-05 | 2 | 5.1 | hip | A9 | B. Armst | B123 | ABC |
|2012-01-16 | 5 | 9.0 | horray | C6 | F. Bayes | P995 | XYZ |
+-----------+------+------+--------+-------+-----------+---------+---------+
Thank you in advance.
I would try formatting the datetime as "yyyymmdd hh:mm:ss" before inserting. With what you are doing, SQL will be parsing the string, so I would also build the entire statement and then execute it as a single string. See below:
for individual_list in list_of_lists:
    # format the leading datetime, then quote every value into one statement
    date_time = individual_list[0].strftime("%Y%m%d %H:%M:%S")
    values = "', '".join([date_time] + [str(v) for v in individual_list[1:]])
    insert_str = "INSERT INTO [DB_Table_Name] VALUES ('" + values + "');"
    print(insert_str)
    my_cursor.execute(insert_str)
db_connect.commit()
I apologize for the crude Python, but SQL should like that insert statement as long as all the fields match up. If not, you may want to specify which fields those values go to in your insert statement.
Let me know if that works.
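Alternatively, the same strftime idea can be kept inside the original parameterized call, letting the driver do the quoting (a sketch, not tested against SQL Server 2008):

for individual_list in list_of_lists:
    row = list(individual_list)
    # normalize the leading datetime to the unambiguous 'yyyymmdd hh:mm:ss' form
    row[0] = row[0].strftime("%Y%m%d %H:%M:%S")
    my_cursor.execute(
        "INSERT INTO [DB_Table_Name] VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
        tuple(row))
db_connect.commit()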
I have a table defined like so:
Column | Type | Modifiers | Storage | Stats target | Description
-------------+---------+-----------+---------+--------------+-------------
id | uuid | not null | plain | |
user_id | uuid | | plain | |
area_id | integer | | plain | |
vote_amount | integer | | plain | |
I want to be able to generate a rank 'column' when I query this database. This rank column would be ordered by the vote_amount column. I have attempted to create a query to do this; it looks like so:
subq_rank = db.session.query(user_stories).add_columns(
    db.func.rank.over(
        partition_by=user_stories.user_id,
        order_by=user_stories.vote_amount
    ).label('rank')
).subquery('slr')

data = db.session.query(user_stories).select_entity_from(subq_rank).filter(
    user_stories.area_id == id
).group_by(-subq_rank.c.rank).limit(50).all()
Hopefully my attempt will give you an idea of what I am trying to achieve.
Thanks.
Well, if you need these columns in every query, it's better to do it in the DB. I would create a view that contains the rank column, and call that view from the code to get the data directly:
CREATE VIEW [ranking_user_stories] AS
SELECT TOP 50 * FROM
(SELECT *, rank() over (partition by user_stories.user_id order by user_stories.vote_amount ASC) AS ranking
FROM user_stories
WHERE user_stories.area_id = id) uS
ORDER BY vote_amount ASC
It's the same logic as your code, but in SQL. If you are using MySQL, just change TOP 50 to LIMIT 50 (and put it at the end of the query). I don't see the point of the last group by on ranking, but if you need it:
CREATE VIEW [ranking_user_stories] AS
SELECT TOP 50 MAX(id) AS id, user_id, area_id, MAX(vote_amount) AS vote_amount, ranking
FROM (SELECT *, rank() over (partition by user_stories.user_id order by user_stories.vote_amount ASC) AS ranking
      FROM user_stories
      WHERE user_stories.area_id = id) uS
GROUP BY user_id, area_id, ranking
ORDER BY MAX(vote_amount) ASC
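Translating that back into the question's SQLAlchemy terms: the original attempt is close, but rank must be called as func.rank(), with the window arguments passed to over(), and the area filter belongs inside the subquery so the ranking is computed over the right rows. A minimal sketch, assuming the user_stories mapped class from the question:

from sqlalchemy import func

rank_col = func.rank().over(
    partition_by=user_stories.user_id,
    order_by=user_stories.vote_amount
).label('rank')

subq = (db.session.query(user_stories, rank_col)
        .filter(user_stories.area_id == id)
        .subquery('slr'))

data = (db.session.query(user_stories)
        .select_entity_from(subq)
        .order_by(subq.c.rank)
        .limit(50)
        .all())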