Dictionary query - python

Below is a function that extracts information from a database of events. Everything works except that when I iterate through the times in rows in my HTML template, it is apparently empty. I therefore assume that rows.append(times) is not doing what it should. I also tried rows.append((times)), and that did not work either.
def extractor(n):
    date = (datetime.datetime.now() + datetime.timedelta(days=n)).date()
    rows = db.execute("SELECT * FROM events WHERE date LIKE :date ORDER BY date", date=str(date) + '%')
    printed_day = date.strftime('%A') + ", " + date.strftime('%B') + " " + str(date.day) + ", " + str(datetime.datetime.now().year)
    start_time = time.strftime("%H:%M:%S")
    for row in rows:
        date_split = str.split(row['date'])
        just_time = date_split[1]
        if just_time == '00:00:00':
            just_time = 'All Day'
        else:
            just_time = just_time[0:5]
        times.append((just_time))
    rows.append(times)
    results.append((rows, printed_day, start_time, times))

Solved it:
replace
times.append((just_time))
rows.append(times)
with
row['times'] = just_time
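For context, a minimal sketch of what the corrected loop looks like after that change (assuming, as above, that db.execute() returns a list of dict-like rows and that the date column is a "YYYY-MM-DD HH:MM:SS" string):

for row in rows:
    just_time = row['date'].split()[1]   # keep only the HH:MM:SS part
    if just_time == '00:00:00':
        row['times'] = 'All Day'         # midnight marks an all-day event
    else:
        row['times'] = just_time[0:5]    # trim the seconds -> "HH:MM"

Each row now carries its own 'times' value, so the template can simply read row.times while iterating over rows.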


How can I send an SMS in Django?

I encountered a problem when trying to send an SMS using the SMSC service in a Django project.
My Celery task for sending the email and the SMS:
def order_created_retail(order_id):
    # Task to send an email when an order is successfully created
    order = OrderRetail.objects.get(id=order_id)
    subject = 'Order №{}.'.format(order_id)
    message_mail = 'Hello, {}! You have successfully placed an order{}. Manager will contact you shortly'.format(order.first_name, order.id)
    message_sms = 'Your order №{} is accepted! Wait for operator call'
    mail_sent = send_mail(
        subject,
        message_mail,
        'email@email.com',
        [order.email]
    )
    smsc = SMSC()
    sms_sent = smsc.send_sms(
        [order.phone],
        str(message_sms)
    )
    return mail_sent, sms_sent
The email sends correctly, but for the SMS I get this error:
Task orders.tasks.order_created_retail[f05458b1-65e8-493b-9069-fbaa55083e7a] raised unexpected: TypeError('quote_from_bytes() expected bytes')
The function from the SMSC library:
def send_sms(self, phones, message, translit=0, time="", id=0, format=0, sender=False, query=""):
    formats = ["flash=1", "push=1", "hlr=1", "bin=1", "bin=2", "ping=1", "mms=1", "mail=1", "call=1", "viber=1", "soc=1"]
    m = self._smsc_send_cmd("send", "cost=3&phones=" + quote(phones) + "&mes=" + quote(message) + \
        "&translit=" + str(translit) + "&id=" + str(id) + ifs(format > 0, "&" + formats[format-1], "") + \
        ifs(sender == False, "", "&sender=" + quote(str(sender))) + \
        ifs(time, "&time=" + quote(time), "") + ifs(query, "&" + query, ""))
    # (id, cnt, cost, balance) or (id, -error)
    if SMSC_DEBUG:
        if m[1] > "0":
            print("Message sent successfully. ID: " + m[0] + ", total SMS: " + m[1] + ", cost: " + m[2] + ", balance: " + m[3])
        else:
            print("Error No. " + m[1][1:] + ifs(m[0] > "0", ", ID: " + m[0], ""))
    return m
What am I doing wrong?
Thanks!
To solve this problem, I started investigating the function that was raising the error.
It turned out I was passing an incorrect value: the function expects a string, and I was passing a list. It also took me a long time to figure out why my edits did not seem to help.
It turns out that you have to RESTART the Celery worker every time you make an edit.
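In practice that means passing the phone number as a single string rather than a list, and restarting the worker after the change so it picks up the new code. A minimal sketch of the corrected call (assuming order.phone holds one phone number):

sms_sent = smsc.send_sms(
    str(order.phone),      # quote() inside send_sms() needs a string, not a list
    str(message_sms)
)

Since send_sms() passes phones straight into quote(), several recipients would have to be joined into one string first, e.g. ",".join(phones).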

What causes SQL slowdown in Python?

My problem is that when I run or debug the project, the first query runs fast, in under 1 s, but the second query takes more than 30 s. I'm very confused by this. I have already run both queries in a DB editor and they are both fast, with no problem at all. At first glance the two queries are quite similar, so I do not know what causes the difference.
By the way, sometimes while debugging a red notice pops up in the Debug and Run tab on the left, but I could not get a screenshot of it. It only appeared once or twice.
These are the two SQL queries:
query 1: rows = db.select("SELECT recruiter_id FROM linkedin.candidates WHERE recruiter_id in (" + ",".join(recruiter_ids) + ")")
query 2: rows = db.select("select c.recruiter_id, c.updated from linkedin.candidates c where c.recruiter_id in (" + ",".join(duplicates_rid) + ")")
This is my code
if recruiter_ids:
    print("Creating connection to MySQL in recruiter 12")
    rows = db.select("SELECT recruiter_id FROM linkedin.candidates WHERE recruiter_id in (" + ",".join(recruiter_ids) + ")")
    db_recruiter_ids = [r['recruiter_id'] for r in rows] + [get_recruiter_id(url) for url in duplicates]
    print("Recruiter ids in database:", len(db_recruiter_ids), db_recruiter_ids[:5])
    duplicates = [url for url in profile_urls if any(get_recruiter_id(url) == rid for rid in db_recruiter_ids)]
    duplicates_rid = [get_recruiter_id(url) for url in duplicates]
    if duplicates_rid:
        rows = db.select("select c.recruiter_id, c.updated from linkedin.candidates c where c.recruiter_id in (" + ", ".join(duplicates_rid) + ")")
        #rows = db.select("select c.recruiter_id, c.updated from linkedin.candidates c where c.recruiter_id in {}".format(tuple(duplicates_rid)))
        rows = [r['recruiter_id'] for r in rows if r['updated'] < datetime.datetime.now() - datetime.timedelta(days=90)]
        old_resumes = [url for url in profile_urls if any(get_recruiter_id(url) == r for r in rows)]
    profile_urls = [url for url in profile_urls if not any(get_recruiter_id(url) == rid for rid in db_recruiter_ids)]
    print("Found duplicates in list:", len(duplicates), duplicates[:3])
    if db_recruiter_ids:
        tag_candidate_by_recruiter_id(db, db_recruiter_ids, project_id, tracker_id)
Thank you guys so much!
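A minimal sketch of how the second query could be timed on its own (assuming the same db.select helper and duplicates_rid list as above), to confirm the time really goes into the SELECT rather than into the surrounding list comprehensions:

import time

start = time.perf_counter()
rows = db.select(
    "select c.recruiter_id, c.updated from linkedin.candidates c "
    "where c.recruiter_id in (" + ", ".join(duplicates_rid) + ")"
)
print("second query took %.2f s for %d ids" % (time.perf_counter() - start, len(duplicates_rid)))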

How to Handle Exceptions Caused by Holidays and Weekends in Python

I'm using an API to look up historical stock market prices for a given company on the last day of each month. The problem is that the last day can sometimes fall on a weekend or holiday, in which case the lookup raises a KeyError. I've tried handling this with an exception that nudges the date to get the next-closest valid one, but this is not foolproof.
Here is my existing code:
import os
from iexfinance.stocks import get_historical_data
import iexfinance
import pandas as pd

# Set API Keys
os.environ['IEX_API_VERSION'] = 'iexcloud-sandbox'
os.environ['IEX_TOKEN'] = 'Tsk_5798c0ab124d49639bb1575b322841c4'

stocks = ['AMZN', 'FDX', 'XXXXX', 'BAC', 'COST']
date = "20191130"

for stock in stocks:
    try:
        price_df = get_historical_data(stock, date, close_only=True, output_format='pandas')
        price = price_df['close'].values[0]
        print(price)
    except KeyError:
        date = str(int(date) - 1)
        price_df = get_historical_data(stock, date, close_only=True, output_format='pandas')
        price = price_df['close'].values[0]
        print(price)
    except iexfinance.utils.exceptions.IEXQueryError:
        print(stock + " is not a valid company")
But if you change date = "20160131", then you get a KeyError again.
So is there a simple way to handle these exceptions and get the next valid date?
Note that the API key is public and for sandbox purposes, so feel free to use it.
I think this might work:

def get_prices(stocks, date):
    for stock in stocks:
        try:
            price_df = get_historical_data(stock, date, close_only=True, output_format='pandas')
            price = price_df['close'].values[0]
            print(stock + " was @ $" + str(price) + " on " + str(date))
        except KeyError:
            # step the date back one day and restart from the top of the list
            return get_prices(stocks, date=str(int(date) - 1))
        except iexfinance.utils.exceptions.IEXQueryError:
            print(stock + " is not a valid company")

Difference between masking and querying pandas.DataFrame

My example shows that, when using a DataFrame of floats, querying might in certain cases be faster than using masks. When you look at the graph, I originally wrote that the query function performs better when the condition is composed of 1 to 5 subconditions.
Edit (thanks to a_guest): it is the mask function that performs better when the condition is composed of 1 to 5 subconditions.
So, is there any difference between the two methods, given that they tend to follow the same trend over the number of subconditions?
The function used to plot my data:
import matplotlib.pyplot as plt

def graph(data):
    t = [int(i) for i in range(1, len(data["mask"]) + 1)]
    plt.xlabel('Number of conditions')
    plt.ylabel('timeit (ms)')
    plt.title('Benchmark mask vs query')
    plt.grid(True)
    plt.plot(t, data["mask"], 'r', label="mask")
    plt.plot(t, data["query"], 'b', label="query")
    plt.xlim(1, len(data["mask"]))
    plt.legend()
    plt.show()
The functions used to create the conditions to be tested by timeit:
def create_multiple_conditions_mask(columns, nb_conditions, condition):
    mask_list = []
    for i in range(nb_conditions):
        mask_list.append("(df['" + columns[i] + "']" + " " + condition + ")")
    return " & ".join(mask_list)

def create_multiple_conditions_query(columns, nb_conditions, condition):
    mask_list = []
    for i in range(nb_conditions):
        mask_list.append(columns[i] + " " + condition)
    return "'" + " and ".join(mask_list) + "'"
The function to benchmark masking vs querying using a pandas DataFrame containing floats:
import numpy as np
import pandas as pd
from timeit import timeit

def benchmarks_mask_vs_query(dim_df=(50, 10), labels=[], condition="> 0", random=False):
    # init local variables
    time_results = {"mask": [], "query": []}
    nb_samples, nb_columns = dim_df
    all_labels = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
    if nb_columns > 26:
        if len(labels) == nb_columns:
            all_labels = labels
        else:
            raise Exception("labels length must match nb_columns")
    df = pd.DataFrame(np.random.randn(nb_samples, nb_columns), columns=all_labels[:nb_columns])
    for col in range(nb_columns):
        if random:
            condition = "<" + str(np.random.random(1)[0])
        mask = "df[" + create_multiple_conditions_mask(df.columns, col+1, condition) + "]"
        query = "df.query(" + create_multiple_conditions_query(df.columns, col+1, condition) + ")"
        print("Parameters: nb_conditions=" + str(col+1) + ", condition= " + condition)
        print("Mask created: " + mask)
        print("Query created: " + query)
        print()
        result_mask = timeit(mask, number=100, globals=locals()) * 10
        result_query = timeit(query, number=100, globals=locals()) * 10
        time_results["mask"].append(result_mask)
        time_results["query"].append(result_query)
    return time_results
What I run:
# benchmark on a DataFrame of shape(50,25) populating with random values
# as well as the conditions ("<random_value")
data = benchmarks_mask_vs_query((50,25), random=True)
graph(data)
What I get: a plot titled "Benchmark mask vs query", showing timeit (ms) against the number of conditions, with one curve for mask and one for query.

Python database operation error

I use Python to operate a PostgreSQL database. When the SQL is built and executed, the quotation marks are removed, so the query fails. How can I avoid this?
def build_sql(self, table_name, keys, condition):
    print(condition)
    # condition = {
    #     "os": ["Linux", "Windows"],
    #     "client_type": ["ordinary"],
    #     "client_status": '1',
    #     "offset": "1",
    #     "limit": "8"
    # }
    sql_header = "SELECT %s FROM %s" % (keys, table_name)
    sql_condition = []
    sql_range = []
    sql_sort = []
    sql_orederby = []
    for key in condition:
        if isinstance(condition[key], list):
            sql_condition.append(key + " in (" + ",".join(condition[key]) + ")")
        elif key == 'limit' or key == 'offset':
            sql_range.append(key + " " + condition[key])
        else:
            sql_condition.append(key + " = " + condition[key])
    print(sql_condition)
    print(sql_range)
    sql_condition = [str(i) for i in sql_condition]
    if not sql_condition == []:
        sql_condition = " where " + " and ".join(sql_condition) + " "
    sql = sql_header + sql_condition + " ".join(sql_range)
    return sql
Error:
MySQL Error Code : column "winxp" does not exist
LINE 1: ...T * FROM ksc_client_info where base_client_os in (WinXP) and...
Mind you, I do not have much Python experience, but basically you don't have single quotes around the values in that sequence, so you either need to add them before passing the data to the function, or, for example, during the join(), like this:
sql_condition.append(key+" in ("+"'{0}'".format("','".join(condition[key]))+")")
You can see other solutions in those questions:
Join a list of strings in python and wrap each string in quotation marks
Add quotes to every list elements
