I am new to Redis and RediSearch.
I want to build an autocomplete using Redis in a Flask app.
Below is what I have tried so far.
autocomplete.py:
import redis
import redisearch
from flask import Flask,request,jsonify,render_template
app = Flask("autocomplete")
#creating a redis connection
r = redis.Redis(host='localhost', port=6379,db=0)
@app.route('/')
def home():
return "This is Home Page"
#route to add a value to autocomplete list
@app.route('/add')
def addValue():
try:
name = request.args.get('name')
n = name.strip()
for l in range(1,len(n)):
prefix = n[0:l]
r.zadd('compl',{prefix:0})
r.zadd('compl',{n+"*":0})
return "Success"
except:
return "Failed"
#route to get the autocomplete
@app.route('/autocomplete')
def autocomplete():
prefix = request.args.get('prefix')
results = []
rangelen = 50
count=5
start = r.zrank('compl',prefix)
if start is None:  # zrank returns 0 for the first member, so test for None explicitly
return jsonify([])
while (len(results) != count):
range = r.zrange('compl',start,start+rangelen-1)
start += rangelen
if not range or len(range) == 0:
break
for entry in range:
entry=entry.decode('utf-8')
minlen = min(len(entry),len(prefix))
if entry[0:minlen] != prefix[0:minlen]:
count = len(results)
break
if entry[-1] == "*" and len(results) != count:
results.append(entry[0:-1])
return jsonify(results)
Currently the values for @app.route('/add') and the prefixes for @app.route('/autocomplete') are fetched through the URL itself.
However, I want the prefix/text for @app.route('/autocomplete') to come from an input textbox, to make the autocomplete dynamic.
I would be really grateful if anyone could guide me in implementing this.
This is a sample output: [screenshot: autocomplete]
I have also referred to https://redis.com/ebook/part-2-core-concepts/chapter-6-application-components-in-redis/6-1-autocomplete/ but was unable to understand how to implement it.
EDIT: I found a solution for this at https://github.com/RediSearch/redisearch-py/blob/master/redisearch/auto_complete.py
You can use RediSearch's AutoCompleter.
An example using Flask is available on GitHub: https://github.com/Redislabs-Solution-Architects/redisearch_demo_and_preso
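Roughly, the wiring looks like this (a minimal sketch based on the linked auto_complete.py; it assumes the RediSearch module is loaded into the Redis server, and the key name 'compl' is just carried over from my code above):

import redis
from flask import Flask, request, jsonify
from redisearch import AutoCompleter, Suggestion

app = Flask("autocomplete")
r = redis.Redis(host='localhost', port=6379, db=0)
ac = AutoCompleter('compl', conn=r)  # suggestions are stored under the key 'compl'

@app.route('/add')
def add_value():
    # store the full term once; RediSearch maintains the prefix index internally
    name = request.args.get('name', '').strip()
    ac.add_suggestions(Suggestion(name, 1.0))
    return "Success"

@app.route('/autocomplete')
def autocomplete():
    prefix = request.args.get('prefix', '')
    suggestions = ac.get_suggestions(prefix, fuzzy=False, num=5)
    return jsonify([s.string for s in suggestions])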
I am pretty new to coding and AWS Chalice. I tried writing code that gets messages from TradingView and executes orders depending on the signals.
I tested the code locally and everything worked fine, but when I test the REST API I get the following error:
{"message":"Missing Authentication Token"}
I set up my credentials via "aws configure" as explained here: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
I also created a config.txt file in my AWS folder and checked my settings via "aws configure get", and they were fine.
The index function from the beginning worked too, so the problem should be somewhere in my code?
I changed some values and cut out some functions and the strategy part, but the code looks somewhat like this:
from chalice import Chalice
from datetime import datetime
from binance.client import Client
from binance.enums import *
import ccxt
exchange = ccxt.binance({
'apiKey': 'KEY',
'secret': 'SECRET',
'enableRateLimit': True,
'options': {
'defaultType': 'future',
},
})
def buy_order(quantity, symbol, order_type = ORDER_TYPE_MARKET,side=SIDE_BUY,recvWindow=5000):
try:
print("sending order")
order = client.futures_create_order(symbol = symbol, type = order_type, side = side, quantity = quantity,recvWindow=recvWindow)
print(order)
except Exception as e:
print("an exception occured - {}".format(e))
return False
return True
app = Chalice(app_name='tradingview-webhook-alert')
indicator1 = "x"
indicator2 = "y"
TRADE_SYMBOL = "Test123"
in_position = False
def diff_time(time1, time2):
fmt = '%Y-%m-%dT%H:%M:%SZ'
tstamp1 = datetime.strptime(time1, fmt)
tstamp2 = datetime.strptime(time2, fmt)
if tstamp1 > tstamp2:
td = tstamp1 - tstamp2
else:
td = tstamp2 - tstamp1
td_mins = int(round(td.total_seconds() / 60))
return td_mins
@app.route('/test123', methods=['POST'])
def test123():
global indicator1, indicator2, in_position  # in_position is reassigned below, so it must be declared global as well
request = app.current_request
message = request.json_body
indicator = message["indicator"]
price = message["price"]
value = message["value"]
if indicator == "indicator1":
indicator1 = value
if indicator == "indicator2":
indicator2 = value
if in_position == False:
if (indicator1 >123) & (indicator2 < 321):
balance = exchange.fetch_free_balance()
usd = float(balance['USDT'])
TRADE_QUANTITY = (usd / price)*0.1
order_succeeded = buy_order(TRADE_QUANTITY, TRADE_SYMBOL)
if order_succeeded:
in_position = True
return {"test": "123"}
I tested it locally with Insomnia and also tried the REST API link there and in my browser, both with the same error message. Is my testing method wrong, or is it the code? And why isn't the REST API link working even when I include the index function from the beginning again? If I try the index function from the beginning, I get {"message": "Internal server error"}.
This is probably a very basic question, but I couldn't find an answer online.
Any help would be appreciated!
I am not sure whether this helps you, because I don't fully understand your question, but:
You are using a POST request, which will not be executed by simply opening a URL.
Try something like @app.route('/test123', methods=['POST', 'GET']) so that just opening the URL executes a GET request.
Some more information:
https://www.w3schools.com/tags/ref_httpmethods.asp
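If you want to keep the route POST-only, you can also test it from Python instead of the browser, for example (a minimal sketch; the URL is a placeholder for your deployed API Gateway endpoint, and the JSON fields are the ones your handler reads):

import requests

# placeholder: use the endpoint URL that `chalice deploy` (or `chalice local`) prints
url = "https://<api-id>.execute-api.<region>.amazonaws.com/api/test123"

payload = {"indicator": "indicator1", "price": 100.0, "value": 150}
response = requests.post(url, json=payload)
print(response.status_code, response.text)

Note that API Gateway also returns "Missing Authentication Token" whenever the path and method of a request do not match any deployed route, which is why opening the POST-only route in a browser (a GET) produces exactly that message.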
I'm struggling to get a Lambda function working. I have a Python script that accesses the Twitter API, pulls information, and exports that information into an Excel sheet. I'm trying to move the script over to AWS Lambda, and I'm having a lot of trouble.
What I've done so far: created an AWS account, set up S3 with a bucket, and poked around trying to get things to work.
I think the main area where I'm struggling is going from a Python script that I execute via my local CLI to Lambda-capable code. I'm not sure I understand how the lambda_handler function works, what the event or context arguments actually mean (despite watching half a dozen tutorial videos), or how to integrate my existing functions into Lambda in the context of the lambda_handler. I'm just very confused and hoping someone might be able to help me get some clarity!
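For reference, this is the general shape I understand a handler needs to have (a minimal sketch of my current understanding, not working code; the event field name below is just a placeholder):

import json

def lambda_handler(event, context):
    # 'event' is whatever the trigger passes in: for an API Gateway call it holds the
    # HTTP body/headers, for a scheduled CloudWatch rule it is mostly empty.
    # 'context' carries runtime metadata, e.g. context.function_name and
    # context.get_remaining_time_in_millis().
    username = event.get("username", "some_default_handle")  # placeholder field

    result = do_work(username)  # my real Twitter logic would be called from here

    # a plain dict is fine for scheduled jobs; API Gateway expects statusCode/body
    return {"statusCode": 200, "body": json.dumps(result)}

def do_work(username):
    return {"checked": username}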
Code that I'm using to pull twitter data (just a sample):
import time
import datetime
import keys
import pandas as pd
from twython import Twython, TwythonError
import pymysql
def lambda_handler(event, context):
def oauth_authenticate():
twitter_oauth = Twython(keys.APP_KEY, keys.APP_SECRET, oauth_version=2)
ACCESS_TOKEN = twitter_oauth.obtain_access_token()
twitter = Twython(keys.APP_KEY, access_token = ACCESS_TOKEN)
return twitter
def get_username():
"""
Prompts for the screen name of targetted account
"""
username = input("Enter the Twitter screenname you'd like information on. Do not include '#':")
return username
def get_user_followers(username):
"""
Returns data on all accounts following the targetted user.
WARNING: The number of followers can be huge, and the data isn't very valuable
"""
#username = get_username()
#import pdb; pdb.set_trace()
twitter = oauth_authenticate()
datestamp = str(datetime.datetime.now().strftime("%Y-%m-%d"))
target = twitter.lookup_user(screen_name = username)
for y in target:
target_id = y['id_str']
next_cursor = -1
index = 0
followersdata = {}
while next_cursor:
try:
get_followers = twitter.get_followers_list(screen_name = username,
count = 200,
cursor = next_cursor)
for x in get_followers['users']:
followersdata[index] = {}
followersdata[index]['screen_name'] = x['screen_name']
followersdata[index]['id_str'] = x['id_str']
followersdata[index]['name'] = x['name']
followersdata[index]['description'] = x['description']
followersdata[index]['date_checked'] = datestamp
followersdata[index]['targeted_account_id'] = target_id
index = index + 1
next_cursor = get_followers["next_cursor"]
except TwythonError as e:
print(e)
remainder = (float(twitter.get_lastfunction_header(header = 'x-rate-limit-reset')) \
- time.time())+1
print("Rate limit exceeded. Waiting for:", remainder/60, "minutes")
print("Current Time is:", time.strftime("%I:%M:%S"))
del twitter
time.sleep(remainder)
twitter = oauth_authenticate()
continue
followersDF = pd.DataFrame.from_dict(followersdata, orient = "index")
followersDF.to_excel("%s-%s-follower list.xlsx" % (username, datestamp),
index = False, encoding = 'utf-8')
This question already has an answer here:
Why not generate the secret key every time Flask starts?
(1 answer)
Closed 4 years ago.
I'm currently super stumped. I have a large Flask application that I recently deployed to my production EC2 server, and it's not functioning properly. It works fine in dev/test. It looks like Apache might be hanging. When I first deployed it, it worked a few times; then I started getting 500 server errors. When I restart Apache, it works fine again for a session, but the 500 errors come back, each time on a different page of the application. This keeps recurring. How would I go about fixing this? Here's my routes/app.py:
from flask import Flask, render_template,redirect,request,url_for, flash, session
from Index_generator import index_generator
from Yelp_api import request_yelp
from plaid import auth
from search_results import search,search1
from Options import return_data
from datetime import timedelta
import os
import urllib2
import time
from path import path_data
#from writer import create_config_file
import logging
from logging_path import log_dir
logger = logging.getLogger("Routing")
logger.setLevel(logging.INFO)
foodie_log = log_dir()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
foodie_log.setFormatter(formatter)
logger.addHandler(foodie_log)
app = Flask(__name__)
app.secret_key = os.urandom(24)
go = path_data()
@app.before_request
def make_session_permanent():
session.permanent = False
app.permanent_session_lifetime = timedelta(seconds=300)
#@app.route('/setup',methods=['GET','POST'])
#def setup():
# if request.method == 'POST':
# Username=request.form['username']
# Password=request.form['password']
# Port=request.form['port']
# Host=request.form['host']
# C_Key=request.form['CONSUMER_KEY']
# C_Sec=request.form['CONSUMER_SECRET']
# Tok=request.form['TOKEN']
# Tok_Sec=request.form['TOKEN_SECRET']
# clientid=request.form['client_id']
# Sec=request.form['secret']
# create_config_file(Username,Password,Port,Host,C_Key,C_Sec,Tok,Tok_Sec,clientid,Sec)
# return 'complete'
# elif request.method == 'GET':
#
# return render_template('setup.html')
@app.route('/')
def home():
store = index_generator()
session['store'] = store
return render_template('home.html')
@app.route('/about')
def about():
return render_template('about.html')
@app.route('/home_city',methods = ['POST'])
def home_city():
try:
CITY=request.form['city']
store = session.get('store')
request_yelp(DEFAULT_LOCATION=CITY,data_store=store)
return render_template('bank.html')
except Exception as e:
logger.error(e)
error = 'Sorry, no results. Is' + ' ' +CITY + ' '+ 'your hometown? If not, try again and if so, we have been made aware of the issue and is working to resolve it'
return render_template('home.html',error=error)
@app.route('/traveling',methods = ['POST'])
def travel():
store = session.get('store')
answer=request.form['Travel']
if answer == 'yes':
#time.sleep(2)
return render_template('destination.html')
else:
results_home = search(index=store)
time.sleep(2)
return return_data(results_home)
@app.route('/dest_city',methods = ['POST'])
def dest_city():
store = session.get('store')
try:
DESTINATION=request.form['dest_city']
request_yelp(DEFAULT_LOCATION=DESTINATION, data_store=store,sourcetype='dest_city')
results_dest = search1(index=store)
time.sleep(2)
return return_data(results_dest)
except urllib2.HTTPError:
error = 'Sorry, no results. Is your destination city? If not, try again and if so, we have been made aware of the issue and is working to resolve it'
return render_template('destination.html',error=error)
@app.route('/bank',methods = ['POST'])
def bank():
try:
store = session.get('store')
print store
Bank=request.form['Fin_Bank']
Username=request.form['username']
Password=request.form['password']
Test = auth(account=Bank,username=Username,password=Password,data_store=store)
if Test == 402 or Test ==401:
error = 'Invalid credentials'
return render_template('bank.html',error=error)
else :
return render_template('travel.html')
except Exception as e:  # the bare except referenced 'e' without binding it
logger.error(e)
if __name__ == '__main__':
app.run(debug=True)
app.secret_key=os.urandom(24)
Not completely sure if this causes your errors but you should definitely change this:
You have to set app.secret_key to a static value rather than os.urandom(24), e.g.:
app.secret_key = "/\xfa-\x84\xfeW\xc3\xda\x11%/\x0c\xa0\xbaY\xa3\x89\x93$\xf5\x92\x9eW}"
With your current code, each worker thread of your application will have a different secret key, which can result in errors or session inconsistency depending on how you use the session object in your application.
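If you would rather not hard-code the key in the source, one option is to generate it once and read it from an environment variable, so every Apache/mod_wsgi worker sees the same value (a sketch; it assumes you export FLASK_SECRET_KEY in the environment the WSGI process actually sees):

import os
from flask import Flask

app = Flask(__name__)

# generate the value once, e.g. with: python -c "import os; print os.urandom(24)"
# then export it as FLASK_SECRET_KEY for the WSGI process
app.secret_key = os.environ["FLASK_SECRET_KEY"]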
How do I configure Jinja2 on App Engine to:
Auto-reload when a template is updated.
Enable the bytecode cache so it can be shared among instances. I'd prefer Jinja2 to produce the bytecode when compiling a template and store it in the datastore, so the next instance loads the bytecode instead of compiling the template again.
I have added the bcc like this, using the App Engine memcache Client():
loader = dynloaders.DynLoader() # init Function loader
bcc = MemcachedBytecodeCache(memcache.Client(), prefix='jinja2/bytecode/', timeout=None)
return Environment(auto_reload=True, cache_size=100, loader=FunctionLoader(loader.load_dyn_all),
bytecode_cache=bcc)
My function loader:
def html(self, cid):
def _html_txt_up_to_date(): # closure to check if template is up to date
return CMSUpdates.check_no_update(cid, template.modified)
template = ndb.Key('Templates', cid, parent=self.parent_key).get()
if not template:
logging.error('DynLoader (HTML/TXT): %s' % cid)
return None # raises TemplateNotFound exception
return template.content, None, _html_txt_up_to_date
The template model uses template.modified : ndb.DateTimeProperty(auto_now=True)
The closure function:
class CMSUpdates(ndb.Model):
updates = ndb.JsonProperty()
@classmethod
def check_no_update(cls, cid, cid_modified):
cms_updates = cls.get_or_insert('cms_updates', updates=dict()).updates
if cid in cms_updates: # cid modified has dt microseconds
if cid_modified >= datetime.strptime(cms_updates[cid], '%Y-%m-%d %H:%M:%S'):
if (datetime.now() - timedelta(days=1)) > cid_modified:
del cms_updates[cid]
cls(id='cms_updates', updates=cms_updates).put_async()
return True
return False # reload the template
return True
It took me a few weeks to find a solution, but I finally figured it out and would like to share my code with everyone. There are four Python source files:
TemplateEngine.py, ContentRenderer.py, TestContent.py and Update_Template.py
File: TemplateEngine.py
Note:
I use now = datetime.utcnow() + timedelta(hours=8) because my timezone is GMT+8.
You must use ndb.BlobProperty to store the bytecode; ndb.TextProperty will not work!
import logging
import jinja2
from google.appengine.ext import ndb
from datetime import datetime, timedelta
class SiteTemplates(ndb.Model):
name = ndb.StringProperty(indexed=True, required=True)
data = ndb.TextProperty()
uptodate = ndb.BooleanProperty(required=True)
class SiteTemplateBytecodes(ndb.Model):
key = ndb.StringProperty(indexed=True, required=True)
data = ndb.BlobProperty(required=True)
mod_datetime = ndb.DateTimeProperty(required=True)
class LocalCache(jinja2.BytecodeCache):
def load_bytecode(self, bucket):
q = SiteTemplateBytecodes.query(SiteTemplateBytecodes.key == bucket.key)
if q.count() > 0:
r = q.get()
bucket.bytecode_from_string(r.data)
def dump_bytecode(self, bucket):
now = datetime.utcnow() + timedelta(hours=8)
q = SiteTemplateBytecodes.query(SiteTemplateBytecodes.key == bucket.key)
if q.count() > 0:
r = q.get()
r.data = bucket.bytecode_to_string()
r.mod_datetime = now
else:
r = SiteTemplateBytecodes(key=bucket.key, data=bucket.bytecode_to_string(), mod_datetime=now)
r.put()
def Update_Template_Source(tn, source):
try:
q = SiteTemplates.query(SiteTemplates.name == tn)
if q.count() == 0:
u = SiteTemplates(name=tn, data=source, uptodate=False)
else:
u = q.get()
u.name=tn
u.data=source
u.uptodate=False
u.put()
return True
except Exception,e:
logging.exception(e)
return False
def Get_Template_Source(tn):
uptodate = False
def Template_Uptodate():
return uptodate
try:
q = SiteTemplates.query(SiteTemplates.name == tn)
if q.count() > 0:
r = q.get()
uptodate = r.uptodate
if r.uptodate == False:
r.uptodate=True
r.put()
return r.data, tn, Template_Uptodate
else:
return None
except Exception,e:
logging.exception(e)
return None
File: ContentRenderer.py
Note: It is very important to set cache_size=0, otherwise the bytecode cache will be disabled. I have no idea why.
from TemplateEngine import Get_Template_Source, LocalCache
import jinja2

def Render(tn, tags):
    return te.get_template(tn).render(tags)

bcc = LocalCache()
te = jinja2.Environment(loader=jinja2.FunctionLoader(Get_Template_Source), cache_size=0, extensions=['jinja2.ext.autoescape'], bytecode_cache=bcc)
File: Update_Template.py
Note: Use Update_Template_Source() to update template source to datastore.
from TemplateEngine import Update_Template_Source
template_source = '<html><body>hello world to {{title}}!</body></html>'
if Update_Template_Source('my-template.html', template_source):
print 'template is updated'
else:
print 'error when updating template source'
File: TestContent.py
Note: Do some test
from ContentRenderer import Render
print Render('my-template.html', {'title':'human'})
'hello world to human!'
You will find that even if you have more than 20 instances in your application, the latency will not increase even when you update your template, and the updated template source reaches all instances within 5 to 10 seconds.
I am trying to save an entry in MongoDB and get its id, then find that entry from a thread. But sometimes I can't.
import pymongo
import bson
import threading
connection = pymongo.Connection("localhost", 27017)
db = connection.test
def set_cache(db):
cache_id = db.test_collection.save({'test': 'some string'})
return cache_id
def get_cache(db, cache_id):
entry = db.test_collection.find_one({'_id' : bson.objectid.ObjectId(cache_id)})
if not entry:
print('No entry for %s' % cache_id)
return entry
i = 0
while 1:
i += 1
cache_id = set_cache(db)
t = threading.Thread(target=get_cache, args=(db, cache_id))
t.start()
t.join()
if i > 10000:
break
So, sometimes I see 'No entry for ...', but I can see the entry in Mongo.
Python 2.6
Mongo 2.0.6
The problem with your implementation is that you are using unacknowledged writes, which is the default behaviour of pymongo.Connection. With unacknowledged writes the client does not wait for the server to actually apply the write, so if you are fast enough processing the response and issuing the find request, you can end up in situations like this one. You are basically being too fast :)
Now, if you use an acknowledged write concern (w=1), or simply switch to the new pymongo.MongoClient class (which I encourage you to do), you won't get into that situation:
import pymongo
import bson
import threading
connection = pymongo.MongoClient("localhost", 27017)
db = connection.test
def set_cache(db):
cache_id = db.test_collection.save({'test': 'some string'})
return cache_id
def get_cache(db, cache_id):
entry = db.test_collection.find_one({'_id' : bson.objectid.ObjectId(cache_id)})
if not entry:
print('No entry for %s' % cache_id)
return entry
i = 0
while 1:
i += 1
cache_id = set_cache(db)
t = threading.Thread(target=get_cache, args=(db, cache_id))
t.start()
t.join()
if i > 10000:
break
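If you have to stay on pymongo.Connection for some reason, you can also request acknowledgement per write instead (a sketch against the pymongo 2.x API; safe=True makes save() wait for the server's confirmation before returning, and this function is a drop-in replacement for the set_cache above):

def set_cache(db):
    # safe=True blocks until the server acknowledges the write, so a find_one()
    # issued afterwards from another thread will see the document
    cache_id = db.test_collection.save({'test': 'some string'}, safe=True)
    return cache_id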
N.