I have a csv file like this:
nohaelprince#uwaterloo.ca, 01-05-2014
nohaelprince#uwaterloo.ca, 01-05-2014
nohaelprince#uwaterloo.ca, 01-05-2014
nohaelprince#gmail.com, 01-05-2014
I am reading the above CSV file and extracting the domain name, along with the count of email addresses per domain name and date. I need to insert all of this into a MySQL table called domains, which I am able to do successfully.
Problem statement: Now I need to use the same table to report the top 50 domains by count, sorted by the percentage growth of the last 30 days compared to the total. This is the part I cannot figure out.
Below is the code that successfully inserts into the MySQL database; it is the reporting task above that I don't know how to achieve.
#!/usr/bin/python
import csv
import os
import MySQLdb
from collections import defaultdict, Counter

domain_counts = defaultdict(Counter)

# ======================== Defined Functions ======================
def get_file_path(filename):
    # get current working directory path and join it with the file name
    currentdirpath = os.getcwd()
    filepath = os.path.join(currentdirpath, filename)
    return filepath

# ==================================================================
def read_CSV(filepath):
    # tally emails per (domain, date); strip the leading space after the comma
    with open(filepath) as f:
        reader = csv.reader(f)
        for row in reader:
            domain_counts[row[0].split('#')[1].strip()][row[1].strip()] += 1

    db = MySQLdb.connect(host="localhost",    # your host, usually localhost
                         user="root",         # your username
                         passwd="abcdef1234", # your password
                         db="test")           # name of the database
    cur = db.cursor()
    q = """INSERT INTO domains(domain_name, cnt, date_of_entry)
           VALUES(%s, %s, STR_TO_DATE(%s, '%%d-%%m-%%Y'))"""
    for domain, data in domain_counts.iteritems():
        for email_date, email_count in data.iteritems():
            cur.execute(q, (domain, email_count, email_date))
    db.commit()

# ======================= main program =======================================
path = get_file_path('emails.csv')
read_CSV(path)  # read the input file and load the counts into MySQL
What is the right way to do the reporting task using the domains table?
Update:
Here is my domains table:
mysql> describe domains;
+----------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| domain_name | varchar(20) | NO | | NULL | |
| cnt | int(11) | YES | | NULL | |
| date_of_entry | date | NO | | NULL | |
+----------------+-------------+------+-----+---------+----------------+
And here is the data in it:
mysql> select * from domains;
+----+---------------+-----+---------------+
| id | domain_name   | cnt | date_of_entry |
+----+---------------+-----+---------------+
|  1 | wawa.com      |   2 | 2014-04-30    |
|  2 | wawa.com      |   2 | 2014-05-01    |
|  3 | wawa.com      |   3 | 2014-05-31    |
|  4 | uwaterloo.ca  |   4 | 2014-04-30    |
|  5 | uwaterloo.ca  |   3 | 2014-05-01    |
|  6 | uwaterloo.ca  |   1 | 2014-05-31    |
|  7 | anonymous.com |   2 | 2014-04-30    |
|  8 | anonymous.com |   4 | 2014-05-01    |
|  9 | anonymous.com |   8 | 2014-05-31    |
| 10 | hotmail.com   |   4 | 2014-04-30    |
| 11 | hotmail.com   |   1 | 2014-05-01    |
| 12 | hotmail.com   |   3 | 2014-05-31    |
| 13 | gmail.com     |   6 | 2014-04-30    |
| 14 | gmail.com     |   4 | 2014-05-01    |
| 15 | gmail.com     |   8 | 2014-05-31    |
+----+---------------+-----+---------------+
The report you need can be done in SQL on the MySQL side; Python can then be used to run the query, fetch the result set, and print the results.
Consider the following aggregate query with a subquery and derived table, which follows the percentage growth formula:
((this month domain total cnt) - (last month domain total cnt))
/ (last month all domains total cnt)
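For instance, if the query ran with CURRENT_DATE = 2014-06-15 against the sample data above, the rows dated 2014-05-31 would count as the last 30 days and the 2014-04-30 and 2014-05-01 rows as the period before. For anonymous.com that gives (8 - 6) / 32 = 0.0625, where 32 is the combined cnt of all domains before the 30-day window.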
SQL
SELECT domain_name, pct_growth
FROM (
    SELECT t1.domain_name,
           # SUM OF SPECIFIC DOMAIN'S CNT BETWEEN TODAY AND 30 DAYS AGO
           (SUM(CASE WHEN t1.date_of_entry >= (CURRENT_DATE - INTERVAL 30 DAY)
                     THEN t1.cnt ELSE 0 END)
            -
            # SUM OF SPECIFIC DOMAIN'S CNT AS OF 30 DAYS AGO
            SUM(CASE WHEN t1.date_of_entry < (CURRENT_DATE - INTERVAL 30 DAY)
                     THEN t1.cnt ELSE 0 END)
           ) /
           # SUM OF ALL DOMAINS' CNT AS OF 30 DAYS AGO
           (SELECT SUM(t2.cnt)
            FROM domains t2
            WHERE t2.date_of_entry < (CURRENT_DATE - INTERVAL 30 DAY))
           AS pct_growth
    FROM domains t1
    GROUP BY t1.domain_name
) AS derivedTable
ORDER BY pct_growth DESC
LIMIT 50;
Python
cur = db.cursor()
sql = "SELECT * FROM ..." # SEE ABOVE
cur.execute(sql)
for row in cur.fetchall():
    print(row)
If I understand correctly, you just need the ratio of the past thirty days to the total count. You can get this using conditional aggregation. So, assuming that cnt is always greater than 0:
select d.domain_name,
sum(cnt) as CntTotal,
sum(case when date_of_entry >= date_sub(now(), interval 1 month) then cnt else 0 end) as Cnt30Days,
(sum(case when date_of_entry >= date_sub(now(), interval 1 month) then cnt else 0 end) / sum(cnt)) as Ratio30Days
from domains d
group by d.domain_name
order by Ratio30Days desc;
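A minimal Python sketch of running this query and printing the report (the LIMIT 50 is added here because only the top 50 domains are wanted, and db is assumed to be the MySQLdb connection from the insert code above):

cur = db.cursor()
cur.execute("""
    SELECT d.domain_name,
           SUM(cnt) AS CntTotal,
           SUM(CASE WHEN date_of_entry >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
                    THEN cnt ELSE 0 END) AS Cnt30Days,
           SUM(CASE WHEN date_of_entry >= DATE_SUB(NOW(), INTERVAL 1 MONTH)
                    THEN cnt ELSE 0 END) / SUM(cnt) AS Ratio30Days
    FROM domains d
    GROUP BY d.domain_name
    ORDER BY Ratio30Days DESC
    LIMIT 50
""")
for domain_name, cnt_total, cnt_30_days, ratio_30_days in cur.fetchall():
    print(domain_name, cnt_total, cnt_30_days, ratio_30_days)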
Related
I have a table A with the following columns:
id UUID
str_identifier TEXT
num FLOAT
and a table B with similar columns:
str_identifier TEXT
num FLOAT
entry_date TIMESTAMP
I want to construct a sqlalchemy query that does the following:
finds entries in table B that do not yet exist in table A, and inserts them
finds entries in table B that do exist in table A but have a different value for the num column
The catch is that table B has the entry_date column, and as a result can have multiple entries with the same str_identifier but different entry dates. So I always want to perform this insert/update query using the latest entry for a given str_identifier (if it has multiple entries in table B).
For example, if before the query runs tables A and B are:
[A]
| id | str_identifier | num |
|-----|-----------------|-------|
| 1 | str_id_1 | 25 |
[B]
| str_identifier | num | entry_date |
|----------------|-----|------------|
| str_id_1 | 89 | 2020-07-20 |
| str_id_1 | 25 | 2020-06-20 |
| str_id_1 | 50 | 2020-05-20 |
| str_id_2 | 45 | 2020-05-20 |
After the update query, table A should look like:
[A]
| id | str_identifier | num |
|-----|-----------------|-----|
| 1 | str_id_1 | 89 |
| 2 | str_id_2 | 45 |
The query I've constructed so far should detect the differences, but will adding order_by(B.entry_date.desc()) ensure I only do the existence comparisons against the latest entry for each str_identifier?
My Current Query
query = (
    select([B.str_identifier, B.value])
    .select_from(
        join(B, A, onclause=B.str_identifier == A.str_identifier, isouter=True)
    )
    .where(
        and_(
            ~exists().where(
                and_(
                    B.str_identifier == A.str_identifier,
                    B.value == A.value,
                    ~B.value.in_([None]),
                )
            )
        )
    )
)
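One hedged sketch of restricting the comparison to the newest B row per str_identifier is a row_number() window subquery; this is an approach not taken in the query above, and it assumes A and B are the mapped models shown earlier:

from sqlalchemy import func, select

# Number B's rows per str_identifier, newest first, so rn == 1 marks the latest entry.
latest_b = (
    select([
        B.str_identifier,
        B.num,
        func.row_number()
            .over(partition_by=B.str_identifier,
                  order_by=B.entry_date.desc())
            .label("rn"),
    ])
    .alias("latest_b")
)

# Only the newest row per str_identifier survives; the insert/update comparison
# against A can then be built on top of this subquery instead of B directly.
newest_only = select([latest_b.c.str_identifier, latest_b.c.num]).where(latest_b.c.rn == 1)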
I have this MySQL table:
id | sender | file_id
1 | A | 376482734627836
2 | B | 67387648327648726
3 | B | 8734682346287346
4 | A | 78623186347812
5 | A | 278618762378
6 | C | 287628681682
7 | A | 8389479247
I'm not good at writing SQL statements, but I'd like to output the SENDER who has the most entries:
print("the user: "+user+" is the winner with "+sends+" sends!")
What would be the simplest way to do that?
In MySQL you could use:
select sender
from my_table
group by sender
order by count(*) desc limit 1
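A small Python sketch of wiring that query to the print statement from the question (the connection details are placeholders, and my_table is assumed to be the table shown above):

import MySQLdb

# Placeholder connection details; adjust to your own server and credentials.
db = MySQLdb.connect(host="localhost", user="root", passwd="secret", db="test")
cur = db.cursor()

# Count rows per sender and keep only the sender with the most entries.
cur.execute(
    "SELECT sender, COUNT(*) AS sends "
    "FROM my_table "
    "GROUP BY sender "
    "ORDER BY sends DESC "
    "LIMIT 1"
)
user, sends = cur.fetchone()
print("the user: " + user + " is the winner with " + str(sends) + " sends!")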
I am trying to get the number of network edges from an SQLite database that has been normalised as follows:
Authors
| authorID | name | etc |
| 1        | .... | ... |
| 2        | .... | ... |
| 3        | .... | ... |
| 4        | .... | ... |
| 5        | .... | ... |
| .        | .... | ... |
| 120,000  | .... | ... |

Paper
| paperID | title | etc |
| 1       | ..... | ... |
| 2       | ..... | ... |
| .       | ..... | ... |
| 60,000  | ..... | ... |

Paper_Authors
| paperID | authorID |
| 1       | 1        |
| 1       | 2        |
| 1       | 3        |
| 2       | 1        |
| 2       | 4        |
| 2       | 5        |
| .       | .        |
| 60,000  | 120,000  |
There are somewhere in the region of 120,000 authors and 60,000 papers, and the Paper_Authors linking table has around 250,000 rows.
I am trying to get this into networkX to do some connectivity analysis; inputting the nodes is simple:
conn = sqlite3.connect('../input/database.sqlite')
c = conn.cursor()
g = nx.Graph()
c.execute('SELECT authorID FROM Authors;')
authors = c.fetchall()
g.add_nodes_from(authors)
The problem I am having arises from trying to determine the edges to feed to networkX, which requires tuples of the two nodes to connect. Using the data above as an example,
[(1,1),(1,2),(1,3),(2,3),(1,4),(1,5),(4,5)]
would describe the dataset above.
I have the following code, which works, but is inelegant:
def coauthors(pID):
    c.execute('SELECT authorID \
               FROM Paper_Authors \
               WHERE paperID IS ?;', (pID,))
    out = c.fetchall()
    g.add_edges_from(itertools.product(out, out))

c.execute('SELECT COUNT() FROM Papers;')
papers = c.fetchall()

for i in range(1, papers[0][0]+1):
    if i % 1000 == 0:
        print('On record:', str(i))
    coauthors(i)
This loops through each paper in the database, returns its list of authors, builds the author combination tuples, and adds them to the network piecemeal. It works, but took 30-45 minutes:
print(nx.info(g))
Name:
Type: Graph
Number of nodes: 120670
Number of edges: 697389
Average degree: 11.5586
So my question is: is there a more elegant way to arrive at the same result, ideally with the paperID as the edge label, to make it easier to navigate the network outside of networkX?
You can get all combinations of authors for each paper with a self join:
SELECT paperID,
a1.authorID AS author1,
a2.authorID AS author2
FROM Paper_Authors AS a1
JOIN Paper_Authors AS a2 USING (paperID)
WHERE a1.authorID < a2.authorID; -- prevent duplicate edges
This will be horribly inefficient unless you have an index on paperID, or better, a covering index on both paperID and authorID, or better, a WITHOUT ROWID table.
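As a rough sketch of how that query could replace the per-paper loop (reusing the sqlite3 and networkx setup from the question, and attaching paperID as an edge attribute; note that a plain Graph keeps a single edge per author pair, so only one paperID survives when two authors co-wrote several papers):

import sqlite3
import networkx as nx

conn = sqlite3.connect('../input/database.sqlite')
c = conn.cursor()

g = nx.Graph()

# Nodes: unpack the 1-tuples so node ids are plain integers.
c.execute('SELECT authorID FROM Authors;')
g.add_nodes_from(author_id for (author_id,) in c.fetchall())

# Edges: one pass over the self-join, with paperID stored as an edge attribute.
c.execute('''SELECT paperID, a1.authorID, a2.authorID
             FROM Paper_Authors AS a1
             JOIN Paper_Authors AS a2 USING (paperID)
             WHERE a1.authorID < a2.authorID;''')
g.add_edges_from((a1, a2, {'paperID': pid}) for pid, a1, a2 in c)

print(nx.info(g))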
Conversions
| user_id | tag    | timestamp           |
|---------|--------|---------------------|
| 1       | click1 | 2016-11-01 01:20:39 |
| 2       | click2 | 2016-11-01 09:48:10 |
| 3       | click1 | 2016-11-04 14:27:22 |
| 4       | click4 | 2016-11-05 17:50:14 |
User Sessions
| user_id | utm_campaign | session_start       |
|---------|--------------|---------------------|
| 1       | outbrain_2   | 2016-11-01 00:15:34 |
| 1       | email        | 2016-11-01 01:00:29 |
| 2       | google_1     | 2016-11-01 08:24:39 |
| 3       | google_4     | 2016-11-04 14:25:06 |
| 4       | google_1     | 2016-11-05 17:43:02 |
Given the 2 tables above, I want to map each conversion event to the most recent campaign that brought a particular user to a site (aka last touch/last click attribution).
The desired output is a table of the format:
| user_id | tag    | timestamp           | campaign |
|---------|--------|---------------------|----------|
| 1       | click1 | 2016-11-01 01:20:39 | email    |
| 2       | click2 | 2016-11-01 09:48:10 | google_1 |
| 3       | click1 | 2016-11-04 14:27:22 | google_4 |
| 4       | click4 | 2016-11-05 17:50:14 | google_1 |
Note how user 1 visited the site via the outbrain_2 campaign and then came back to the site via the email campaign. Sometime during the user's second visit, they converted, thus the conversion should be attributed to email and not outbrain_2.
Is there a way to do this in MySQL or Python?
You can do this in Python with Pandas. I assume you can load the data from the MySQL tables into Pandas DataFrames conversions and sessions. First, concatenate both tables:
all = pd.concat([conversions,sessions])
Some of the elements in the new frame will be NAs. Create a new column that collects the time stamps from both tables:
all["ts"] = np.where(all["session_start"].isnull(),
all["timestamp"],
all["session_start"])
Sort by this column, forward fill the time values, group by the user ID, and select the last (most recent) row from each group:
groups = all.sort_values("ts").ffill().groupby("user_id",as_index=False).last()
Select the right columns:
result = groups[["user_id", "tag", "timestamp", "utm_campaign"]]
I tried this code with your sample data and got the right answer.
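Putting those steps together, a self-contained sketch (renamed from all to combined here to avoid shadowing the built-in; conversions and sessions are assumed to already be DataFrames read from the two tables, e.g. with pd.read_sql):

import numpy as np
import pandas as pd

# conversions and sessions are assumed to exist already; they are not created here.
combined = pd.concat([conversions, sessions], sort=False)

# One timestamp column covering both row types.
combined["ts"] = np.where(combined["session_start"].isnull(),
                          combined["timestamp"],
                          combined["session_start"])

# Sort chronologically, forward-fill the campaign onto later conversion rows,
# then keep the most recent row per user.
attributed = (combined.sort_values("ts")
                      .ffill()
                      .groupby("user_id", as_index=False)
                      .last())

result = attributed[["user_id", "tag", "timestamp", "utm_campaign"]]
print(result)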
I have the following table assignment structure:
| employee | product | process | qty |
| Swati | PROD1 | issue | 60 |
| Rohit | PROD1 | issue | 30 |
| Rohit | PROD2 | issue | 40 |
| Swati | PROD1 | receive | 40 |
| Swati | PROD2 | issue | 70 |
I want the final table to look like this for each employee (say for employee = 'Swati'):
| product | sum_issued | sum_received |
| PROD1   | 60         | 40           |
| PROD2   | 70         | 0            |
The SQL query which does this is:
select product
, sum(case when process='issue' then qty else 0 end) as sum_issued
, sum(case when process='receive' then qty else 0 end) as sum_received
from assignment
where employee = 'Swati'
group
by product;
What should the Django query be, corresponding to this result?
I guess your model name is 'Assignment'. You can use the query below:
from django.db.models import Case, Value, When, Sum, IntegerField, Count
result = Assignment.objects.filter(employee="Swati").values('product').annotate(
    sum_issued=Sum(Case(When(process='issue', then='qty'),
                        default=Value(0), output_field=IntegerField())),
    sum_recived=Sum(Case(When(process='receive', then='qty'),
                         default=Value(0), output_field=IntegerField()))
)
If you print the above query with print(result.query), the result is:
SELECT "product", SUM(CASE WHEN "process" = issue THEN "qty" ELSE 0 END) AS "sum_issued", SUM(CASE WHEN "process" = receive THEN "qty" ELSE 0 END) AS "sum_recived" FROM "assignment" WHERE "employee" = 'Swati' GROUP BY "product"