Trouble using Sanic and Redis - python

I am using Sanic with 2 workers. I am trying to get a billing system working, i.e. counting how many times a user hits the API endpoint. Following is my code:
class User(object):
    def __init__(self, id, name, age, address, mobile, credits=0):
        self.id = id
        self.name = name
        self.credits = credits
        self.details = {"age": age, "address": address, "mobile_number": mobile}
The above User class is used to make objects that I have uploaded onto Redis using another python script as follows:
user = User(..., credits = 10)
string_obj = json.dumps(user)
root.set(f"{user.user_id}", string_obj)
The main issue arises when I want to maintain a count of the number of hits an endpoint receives, track it within the user object, and upload it back onto Redis. My code is as follows:
from sanic import Sanic
from sanic_redis_ext import RedisExtension

app = Sanic("Testing")
app.config.update(
    {
        "REDIS_HOST": "127.0.0.1",
        "REDIS_PORT": 6379,
        "REDIS_DATABASE": 0,
        "REDIS_SSL": None,
        "REDIS_ENCODING": "utf-8",
        "REDIS_MIN_SIZE_POOL": 1,
        "REDIS_MAX_SIZE_POOL": 10,
    })
@app.route("/test", methods=["POST"])
@inject_user()
@protected()
async def foo(request, user):
    user.credits -= 1
    if user.credits < 0:
        user.credits = 0
        return sanic.response.text("Credits Exhausted")
    result = process(request)
    if not result:
        user.credits += 1
    await app.redis.set(f"{user.user_id}", json.dumps(user))
    return sanic.response.text(result)
And this is how I am retrieving the user:
async def retrieve_user(request, *args, **kwargs):
    if "user_id" in kwargs:
        user_id = kwargs.get("user_id")
    else:
        if "payload" in kwargs:
            payload = kwargs.get("payload")
        else:
            payload = await request.app.auth.extract_payload(request)
        if not payload:
            raise exceptions.MissingAuthorizationHeader()
        user_id = payload.get("user_id")
    user = json.loads(await app.redis.get(user_id))
    return user
When I use JMeter to test the API endpoint with 10 threads acting as the same user, the credit system does not work. As the user starts with 10 credits, they may end up with 7 or 8 (not predictable) credits left, whereas they should have 0 left. I believe this is because the workers do not share the user object: each works from a stale copy of the variable and overwrites the others' updates. Can anyone help me find a way out of this, so that even if the same user hits the endpoint simultaneously, they are billed correctly and the user object is saved back into Redis?

The problem is that you read the credits info from Redis, deduct it, then save it back to Redis, which is not an atomic process. It's a concurrency issue.
I don't know Python well, so I'll just use pseudocode.
First set 10 credits for user {user_id}:
app.redis.set("{user_id}:credits", 10)
Then, when this user comes in:
# atomically deduct 1 from the user's credits and get the result
remaining_credits = app.redis.incrby("{user_id}:credits", -1)
if remaining_credits < 0:
    return sanic.response.text("Credits Exhausted")
else:
    return "success"  # or some other result
Save your user info payload somewhere else, retrieve "{user_id}:credits" separately, and combine them when you retrieve the user.
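In Python, the pseudocode above might look like the following. This is a minimal sketch, not the original code: the helper name `deduct_credit`, the `"{user_id}:credits"` key format, and the assumption of an aioredis-style async client (like the one `sanic_redis_ext` exposes as `app.redis`) are all illustrative.

```python
async def deduct_credit(redis, user_id):
    """Atomically consume one credit via Redis INCRBY.

    Returns the remaining credits if a credit was available,
    or None if the user's credits are exhausted.
    """
    remaining = await redis.incrby(f"{user_id}:credits", -1)
    if remaining < 0:
        # Clamp back to 0 so repeated exhausted hits don't drive
        # the counter far negative.
        await redis.set(f"{user_id}:credits", 0)
        return None
    return remaining
```

Because `INCRBY` is a single atomic Redis command, two workers decrementing concurrently can never read the same stale value, which is exactly what goes wrong in the read-modify-write version.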

Related

How to get actual slack username instead of user id

I have pulled data from a private Slack channel using conversations.history, and it returns the user ID instead of the username. How can I change the code to pull the user name, so I can identify who each user is? Code below:
import slack_sdk
from time import sleep

CHANNEL = ""
MESSAGES_PER_PAGE = 200
MAX_MESSAGES = 1000
SLACK_TOKEN = ""

client = slack_sdk.WebClient(token=SLACK_TOKEN)

# get first page
page = 1
print("Retrieving page {}".format(page))
response = client.conversations_history(
    channel=CHANNEL,
    limit=MESSAGES_PER_PAGE,
)
assert response["ok"]
messages_all = response['messages']

# get additional pages if below max messages and if there are any
while len(messages_all) + MESSAGES_PER_PAGE <= MAX_MESSAGES and response['has_more']:
    page += 1
    print("Retrieving page {}".format(page))
    sleep(1)  # need to wait 1 sec before next call due to rate limits
    response = client.conversations_history(
        channel=CHANNEL,
        limit=MESSAGES_PER_PAGE,
        cursor=response['response_metadata']['next_cursor']
    )
    assert response["ok"]
    messages = response['messages']
    messages_all = messages_all + messages
It isn't possible to change what is returned from the conversations.history method. If you'd like to convert user IDs to usernames, you'll need to either:
Call the users.info method and retrieve the username from the response.
or
Call the users.list method, iterate through the list, create a local copy (or store it in a database), and then have your code look usernames up there.
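The users.list route can be sketched roughly as below, using slack_sdk's `WebClient`. The helper name `build_user_lookup` and the preference for `display_name` over `name` are my choices, not part of the original answer.

```python
def build_user_lookup(client):
    """Return a dict mapping Slack user IDs to display names.

    `client` is expected to be a slack_sdk WebClient (or anything
    with a compatible users_list method).
    """
    lookup = {}
    cursor = None
    while True:
        response = client.users_list(cursor=cursor, limit=200)
        for member in response["members"]:
            profile = member.get("profile", {})
            # Fall back to the account name if no display name is set.
            lookup[member["id"]] = profile.get("display_name") or member.get("name")
        # Slack paginates with cursors; an empty cursor means we're done.
        cursor = response.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            return lookup
```

With the lookup built once, resolving each message becomes a dictionary access, e.g. `users.get(msg["user"], msg["user"])`, instead of one API call per message.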

Python telegram bot, how to store and show each user's data?

Hello, I am new to Python and the Telegram API, so I have some questions. I am creating a Telegram bot (python telegram api) with user profiles. I have created a database (mysql.connector) that stores all user info after registration. I have also created a Users class. When a user types /start, I check whether they exist; if so, I fill this class. Then I use the class to show profile information (photo, name, age, etc.) if the user clicks the "my profile" button. The problem appears with 2 users at the same time. The first types "/start", logs in, views their own profile, and everything is fine. But when the second user does the same, the first user's "my profile" click shows the profile of whoever typed "/start" last, for both users. How do I fix this? Re-checking and reloading the data on every click doesn't sound good; I'd like to do something with the Users class, but I don't know how to make it unique for each user's session. Any solutions? If needed I can give more code, just ask.
class Users:
    def __init__(self, id=0, name='', age=0, gender='', balance=0, telegram_id=0, photo='', sallarytext=0, sallaryvideo=0, videocall=0):
        self.id = id
        self.name = name
        self.age = age
        self.gender = gender
        self.balance = balance
        self.telegram_id = telegram_id
        self.photo = photo
        self.sallarytext = sallarytext
        self.sallaryvideo = sallaryvideo
        self.videocall = videocall

user = Users()
def check_auth(connection, telegram_id):
    cursor = connection.cursor()
    result = None
    try:
        cursor.execute("SELECT * FROM users WHERE telegram_id = '%s'" % telegram_id)
        result = cursor.fetchall()
        data = []
        if result:
            for row in result:
                user.id = row[0]
                user.name = row[1]
                user.age = row[2]
                user.gender = row[3]
                user.telegram_id = row[4]
                user.balance = row[5]
                data = [user.name]
            if user.gender == 'Female':
                cursor.execute("SELECT * FROM photos WHERE users_id = '%s'" % user.id)
                result2 = cursor.fetchall()
                for row in result2:
                    user.photo = row[1]
                    user.sallarytext = row[2]
                    user.sallaryvideo = row[3]
                    user.videocall = row[4]
        return data
    except Error as e:
        print(f"The error '{e}' occurred")
@bot.message_handler(commands=['start'])
def check_reg(message):
    if message.chat.type == 'private':
        telegram_id = message.from_user.id
        # create_db_users(connection)
        # create_db_photos(connection)
        # create_db_chats(connection)
        data_user = check_auth(connection, telegram_id)
        if not data_user:
            new_user(message)  # user registration
        else:
            if user.gender == 'Male':
                default_user_keybord(message)  # show user keyboard
            elif user.gender == 'Female':
                default_model_keybord(message)
def show_profile(message):  # shows the profile when the user clicks the "My profile" button
    profile_text = "Profile\n\nYour name: " + user.name + "\nYour age: " + str(user.age)
    menu_keybord = types.ReplyKeyboardMarkup(row_width=2, resize_keyboard=True)
    button_name_age = types.KeyboardButton(text="🗣 Change name/age")
    button_back = types.KeyboardButton(text="◀️ Return")
    menu_keybord.add(button_name_age, button_back)
    bot.send_message(message.chat.id, profile_text, reply_markup=menu_keybord)
Could you tell me exactly which Telegram API package you are using?
The core of your problem, I think, is the use of a global variable `user` to store user data. It would be best practice to instantiate and return a new Users instance every time you call check_auth.
That being said:
in Python, if you want to rebind a global variable such as `user`, you have to use the statement `global user` before doing so;
consider using an ORM such as SQLAlchemy to spare you some headaches and code.
Let me know if that solves your issue.
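The "return a new instance per call" idea might look like the sketch below. It is illustrative only: the `row_to_user` helper is my name, the column order follows the `SELECT *` shown in the question, and the minimal `Users` stand-in omits the photo/salary fields for brevity. It also switches to a parameterized query, which avoids the SQL injection risk of the `%`-formatted original.

```python
class Users:
    """Minimal stand-in for the question's Users class."""
    def __init__(self):
        self.id = 0
        self.name = ''
        self.age = 0
        self.gender = ''
        self.telegram_id = 0
        self.balance = 0

def row_to_user(row):
    """Build a fresh Users instance from one database row."""
    u = Users()
    u.id, u.name, u.age, u.gender, u.telegram_id, u.balance = row[:6]
    return u

def check_auth(connection, telegram_id):
    """Return a new Users for this telegram_id, or None if not registered."""
    cursor = connection.cursor()
    # Parameterized query: the driver escapes telegram_id safely.
    cursor.execute("SELECT * FROM users WHERE telegram_id = %s", (telegram_id,))
    row = cursor.fetchone()
    return row_to_user(row) if row else None
```

Each handler then works with its own `user` object (e.g. `user = check_auth(connection, message.from_user.id)`), so two concurrent users can no longer overwrite each other's state through a shared global.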
I have fixed this by querying the database on each request, loading the data into the class, then closing the database connection and showing the data to the user.

How to process webhook request coming from a 3rd party application?

I need help to evaluate whether I am doing it right, or whether there is a better way. The scenario: a 3rd-party application sends a webhook request after a successful payment, but this application may send the same notification more than once, so it is recommended to make the webhook implementation idempotent. The steps I am implementing are:
if the signature is correct (assume it is correct), find the orders record in the database using orderId from the request params.
Please note: orderId in the request params is payment_gateway_order_identifier in the orders table.
if txStatus = 'SUCCESS' AND we haven't already processed a COLLECTION payment for this same order,
create a payments record.
201 response with nothing in the response body.
else
201 response with nothing in the response body.
else
422 response with {message: "Signature is incorrect"} in the response body
views.py
@api_view(['POST'])
def cashfree_request(request):
    if request.method == 'POST':
        data = request.POST.dict()
        payment_gateway_order_identifier = data['orderId']
        amount = data['orderAmount']
        transaction_status = data['txStatus']
        signature = data['signature']
        if computedsignature == signature:  # assume it to be true
            order = Orders.objects.get(
                payment_gateway_order_identifier=payment_gateway_order_identifier)
            if transaction_status == 'SUCCESS':
                try:
                    payment = Payments.objects.get(orders=order)
                    return Response({"Payment": "Done"}, status=status.HTTP_200_OK)
                except Payments.DoesNotExist:
                    payment = Payments(orders=order, amount=amount, datetime=datetime)
                    payment.save()
                    return Response(status=status.HTTP_200_OK)
        else:
            return Response(status=status.HTTP_422_UNPROCESSABLE_ENTITY)
models.py
class Orders(models.Model):
    id = models.AutoField(primary_key=True)
    amount = models.DecimalField(max_digits=19, decimal_places=4)
    payment_gateway_order_identifier = models.UUIDField(
        primary_key=False, default=uuid.uuid4, editable=False, unique=True)

class Payments(models.Model):
    id = models.AutoField(primary_key=True)
    orders = models.ForeignKey(Orders, on_delete=models.CASCADE)
    amount = models.DecimalField(max_digits=19, decimal_places=4, verbose_name='Price in INR')
    datetime = models.DateTimeField(auto_now=False, auto_now_add=False)
This rather belongs on the Code Review site. Anyway: you are doing up to 3 consecutive SQL queries, so there's a chance of a race condition. A simple way to prevent that is to use a KV store like Redis/Memcache as a lock: save a value you use as a nonce at the start of the function, and delete it at the end.
@api_view(['POST'])
def cashfree_request(request):
    data = request.POST.dict()
    payment_gateway_order_identifier = data['orderId']
    # `nx` will set & return only if the key does not exist
    # set a timeout in case it won't reach `delete()` at the end
    if not redis.set("lock_%s" % payment_gateway_order_identifier, "1", nx=True, ex=2):
        return Response(status=status.HTTP_409_CONFLICT)
    amount = data['orderAmount']
    transaction_status = data['txStatus']
    signature = data['signature']
    if computedsignature == signature:
        order = Orders.objects.get(payment_gateway_order_identifier=payment_gateway_order_identifier)
        if transaction_status == 'SUCCESS':
            try:
                Payments.objects.get(orders=order)
                res = Response({"Payment": "Done"}, status=status.HTTP_200_OK)
            except Payments.DoesNotExist:
                payment = Payments(orders=order, amount=amount, datetime=datetime)
                payment.save()
                res = Response(status=status.HTTP_200_OK)
    else:
        res = Response(status=status.HTTP_422_UNPROCESSABLE_ENTITY)
    # unlock for another request
    redis.delete("lock_%s" % payment_gateway_order_identifier)
    return res
You don't need `if request.method == 'POST':` since the code is accessible via POST only anyway; dropping it makes your code less indented.
Notice you don't handle the case where transaction_status is not SUCCESS, so `res` can be unassigned there.

Python\Flask\SQLAlchemy\Marshmallow - How to process a request with duplicate values without failing the request?

This is only my second task (a bug I need to fix) in a Python\Flask\SQLAlchemy\Marshmallow system I need to work on, so please go easy on me :)
In short: I'd like to approve an apparently invalid request.
In details:
I need to handle a case in which a user might send a request with some json in which he included by mistake a duplicate value in a list.
For example:
{
    "ciphers": [
        "TLS_AES_256_GCM_SHA384",
        "AES256-SHA256"
    ],
    "is_default": true,
    "tls_versions": [
        "tls10",
        "tls10",
        "tls11"
    ]
}
What I need to do is eliminate one of the duplicated tls10 values, but consider the request valid, update the db with the correct, distinct tls versions, and return the non-duplicated json in the response body.
Current code segments are as follows:
tls Controller:
...
@client_side_tls_bp.route('/<string:tls_profile_id>', methods=['PUT'])
def update_tls_profile_by_id(tls_profile_id):
    return update_entity_by_id(TlsProfileOperator, entity_name, tls_profile_id)
...
general entity controller:
...
def update_entity_by_id(operator, entity_name, entity_id):
    """flask route for updating a resource"""
    try:
        entity_body = request.get_json()
    except Exception:
        return make_custom_response("Bad Request", HTTPStatus.BAD_REQUEST)
    entity_obj = operator.get(g.tenant, entity_id, g.correlation)
    if not entity_obj:
        response = make_custom_response(http_not_found_message(entity_name, entity_id), HTTPStatus.NOT_FOUND)
    else:
        updated = operator.update(g.tenant, entity_id, entity_body, g.correlation)
        if updated == "accepted":
            response = make_custom_response("Accepted", HTTPStatus.ACCEPTED)
        else:
            response = make_custom_response(updated, HTTPStatus.OK)
    return response
...
tls operator:
...
@staticmethod
def get(tenant, name, correlation_id=None):
    try:
        tls_profile = TlsProfile.get_by_name(tenant, name)
        return schema.dump(tls_profile)
    except NoResultFound:
        return None
    except Exception:
        apm_logger.error(f"Failed to get {name} TLS profile", tenant=tenant,
                         consumer=LogConsumer.customer, correlation=correlation_id)
        raise

@staticmethod
def update(tenant, name, json_data, correlation_id=None):
    schema.load(json_data)
    try:
        dependant_vs_names = VirtualServiceOperator.get_dependant_vs_names_locked_by_client_side_tls(tenant, name)
        # locks virtual services and tls profile table simultaneously
        to_update = TlsProfile.get_by_name(tenant, name)
        to_update.update(json_data, commit=False)
        db.session.flush()  # TODO - need to change when 2 phase commit will be implemented
        snapshots = VirtualServiceOperator.get_snapshots_dict(tenant, dependant_vs_names)
        # update QWE
        # TODO handle QWE update atomically!
        for snapshot in snapshots:
            QWEController.update_abc_services(tenant, correlation_id, snapshot)
        db.session.commit()
        apm_logger.info(f"Update successfully {len(dependant_vs_names)} virtual services", tenant=tenant,
                        correlation=correlation_id)
        return schema.dump(to_update)
    except Exception:
        db.session.rollback()
        apm_logger.error(f"Failed to update {name} TLS profile", tenant=tenant,
                         consumer=LogConsumer.customer, correlation=correlation_id)
        raise
...
and in the api schema class:
...
@validates('_tls_versions')
def validate_client_side_tls_versions(self, value):
    noDuplicatatesList = list(set(value))
    if len(noDuplicatatesList) < 1:
        raise ValidationError("At least a single TLS version must be provided")
    for tls_version in noDuplicatatesList:
        if tls_version not in TlsProfile.allowed_tls_version_values:
            raise ValidationError("Not a valid TLS version")
...
I would have preferred to solve the problem at the schema level, so it won't accept the duplication.
So, as easy as it is to remove the duplication from the `value` parameter, how can I propagate the de-duplicated list back, in order to use it to update the db and the response?
Thanks.
I didn't test it, but I think mutating `value` in the validation function would work.
However, this is not really guaranteed by marshmallow's API.
The proper way to do it would be to add a post_load method to de-duplicate.
@post_load
def deduplicate_tls(self, data, **kwargs):
    if "tls_versions" in data:
        data["tls_versions"] = list(set(data["tls_versions"]))
    return data
This won't maintain the order, so if the order matters, or for issues related to deduplication itself, see https://stackoverflow.com/a/7961390/4653485.
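If the order does matter, one stdlib-only option (the helper name is mine) relies on the fact that Python 3.7+ dicts preserve insertion order:

```python
def deduplicate_preserving_order(items):
    """Remove duplicates while keeping first-seen order.

    dict.fromkeys keeps one entry per key in insertion order
    (guaranteed since Python 3.7), so converting back to a list
    yields the de-duplicated sequence in the original order.
    """
    return list(dict.fromkeys(items))
```

Inside the post_load hook, that would replace the `set` call: `data["tls_versions"] = deduplicate_preserving_order(data["tls_versions"])`.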

How to implement simple sessions for Google App Engine?

Here is a very basic class for handling sessions on App Engine:
"""Lightweight implementation of cookie-based sessions for Google App Engine.
Classes:
Session
"""
import os
import random
import Cookie
from google.appengine.api import memcache
_COOKIE_NAME = 'app-sid'
_COOKIE_PATH = '/'
_SESSION_EXPIRE_TIME = 180 * 60
class Session(object):
"""Cookie-based session implementation using Memcached."""
def __init__(self):
self.sid = None
self.key = None
self.session = None
cookie_str = os.environ.get('HTTP_COOKIE', '')
self.cookie = Cookie.SimpleCookie()
self.cookie.load(cookie_str)
if self.cookie.get(_COOKIE_NAME):
self.sid = self.cookie[_COOKIE_NAME].value
self.key = 'session-' + self.sid
self.session = memcache.get(self.key)
if self.session:
self._update_memcache()
else:
self.sid = str(random.random())[5:] + str(random.random())[5:]
self.key = 'session-' + self.sid
self.session = dict()
memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME)
self.cookie[_COOKIE_NAME] = self.sid
self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH
print self.cookie
def __len__(self):
return len(self.session)
def __getitem__(self, key):
if key in self.session:
return self.session[key]
raise KeyError(str(key))
def __setitem__(self, key, value):
self.session[key] = value
self._update_memcache()
def __delitem__(self, key):
if key in self.session:
del self.session[key]
self._update_memcache()
return None
raise KeyError(str(key))
def __contains__(self, item):
try:
i = self.__getitem__(item)
except KeyError:
return False
return True
def _update_memcache(self):
memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME)
I would like some advice on how to improve the code for better security.
Note: in the production version it will also save a copy of the session in the datastore.
Note': I know there are much more complete implementations available online, but I would like to learn more about this subject, so please don't answer the question with "use this library" or "use that library".
Here is a suggestion for simplifying your implementation.
You are creating a randomized temporary key that you use as the session's key in the memcache. You note that you will be storing the session in the datastore as well (where it will have another key).
Why not randomize the session's datastore key, and then use that as the one and only key, for both the database and the memcache (if necessary)? Does this simplification introduce any new security issues?
Here's some code for creating a randomized datastore key for the Session model:
# Get a random integer to use as the session's datastore ID.
# (So it can be stored in a cookie without being 'guessable'.)
random.seed()
id = None
while id is None or Session.get_by_id(id):
    id = random.randrange(sys.maxint)
seshKey = db.Key.from_path('Session', id)
session = Session(key=seshKey)
To get the ID from the session (i.e. to store in the cookie) use:
sid = session.key().id()
To retrieve the session instance after the 'sid' has been read from the cookie:
session = Session.get_by_id(sid)
Here are a couple of additional security measures you could add.
First, I think it is pretty common to use information stored in the session instance to validate each new request. For example, you could verify that the IP address and user-agent don't change during a session:
newip = str(request.remote_addr)
if sesh.ip_addr != newip:
    logging.warn("Session IP has changed to %s." % newip)
newua = request.headers.get('User-Agent', None)
if sesh.agent != newua:
    logging.warn("Session UA has changed to %s." % newua)
Also, perhaps it would be better to prevent the session from being renewed indefinitely? I think that sites such as Google will eventually ask you to sign-in again if you try to keep a session going for a long time.
I guess it would be easy to slowly decrease the _SESSION_EXPIRE_TIME each time the session gets renewed, but that isn't really a very good solution. Ideally the choice of when to force the user to sign-in again would take into account the flow and security requirements of your site.
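One more concrete hardening step: the `str(random.random())` session ID in the original class is predictable, because the `random` module is not a cryptographic generator. A minimal sketch of a safer alternative (the helper name is mine; `secrets` is Python 3 only, and on the Python 2 App Engine runtime `os.urandom` would play the same role):

```python
import secrets

def new_session_id():
    """Return a 128-bit, hex-encoded, cryptographically random session ID.

    secrets.token_hex(16) draws 16 random bytes from the OS CSPRNG
    and encodes them as 32 hex characters, so IDs cannot be guessed
    by observing earlier ones.
    """
    return secrets.token_hex(16)
```

In the Session class this would replace the `self.sid = str(random.random())[5:] + ...` line, with everything else unchanged.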
