Tweepy: how to ignore your own username in a mention - python

Here is what I am trying to do: I am trying to get my Twitter bot to give maths answers to users via the WolframAlpha API.
Here is the problem I am facing: people will mention my Twitter username to activate the bot, for example "#twitterusername 2+2", but WolframAlpha receives the whole input "#twitterusername 2+2", which gives me an error. I want it to ignore the username.
Here is my code:
def respondToTweet(file='tweet_ID.txt'):
    last_id = get_last_tweet(file)
    mentions = api.mentions_timeline(last_id, tweet_mode='extended')

    if len(mentions) == 0:
        return

    new_id = 0
    logger.info("someone mentioned me...")
    for mention in reversed(mentions):
        logger.info(str(mention.id) + '-' + mention.full_text)
        new_id = mention.id
        status = api.get_status(mention.id)
        if '#Saketltd01' in mention.full_text.lower():
            logger.info("Responding back with QOD to -{}".format(mention.id))
            client = wolframalpha.Client(app_id)
            query = mention.full_text.lower()
            rest = client.query(query)
            answer = next(rest.results).text
            Wallpaper.get_wallpaper(answer)
            media = api.media_upload("created_image.png")

            logger.info("liking and replying to tweet")
            api.create_favorite(mention.id)
            api.update_status('#' + mention.user.screen_name, mention.id,
                              media_ids=[media.media_id])

    put_last_tweet(file, new_id)

def main():
    respondToTweet()

When you take the whole input, strip it down by removing your username from the input string, and then perform the mathematical operation on what is left:
myUsername = "#my_username"
equation = userInput.replace(myUsername, "", 1).strip()  # replace, not lstrip: lstrip strips a set of characters, not a prefix
perform_desired_operation_on(equation)  # user-defined function
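Applied to the bot above, a small helper along these lines (a sketch; strip_mention is a name introduced here, not part of the original code) leaves only the expression to pass to WolframAlpha:

def strip_mention(full_text, handle="#saketltd01"):
    """Remove the bot's own handle from the mention text before querying WolframAlpha."""
    text = full_text.lower()
    return text.replace(handle.lower(), "", 1).strip()

print(strip_mention("#Saketltd01 2+2"))  # prints "2+2"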

Related

Calling Python Google Cloud Function via HTTP

I'm trying to call my Google Cloud Function via HTTP to get a response back in my browser. I expect a list of text, but I am getting "Error: could not handle the request".
Here's our code in the main.py source:
import openai

def gptinput(number, question, q_fc='q'):
    # user_information = firebase.FirebaseReader()
    openai.api_key = "___"
    quest = "create '" + str(number) + "' questions related to '" + question + "' with 4 options and print answer as ('ANSWER') and separate each question with '/////'"
    # Generate text using GPT-3
    output_lis = []
    while len(output_lis) < number:
        response = openai.Completion.create(
            engine="text-davinci-002",
            prompt=quest,
            max_tokens=2048,
            n=1,
            stop=None,
            temperature=0.5
        )
        # Extract the generated text
        generated_text = response["choices"][0]["text"]
        ans = generated_text.split("/////")
        for i in ans:
            q = i.split('?')
            options = q[1][0:q[1].find("ANSWER")]
            answer = q[1][q[1].find("ANSWER") + 6:]
            answer = answer.strip()
            option_lis = options.split('\n')
            option_lis.append(answer)
            ques = q[0].strip()
            option_lis = [s for s in option_lis if s != '']  # drop empty strings (the original "s!='' or s!=''" test was always true)
            option_lis1 = [s for s in option_lis if s != "\n"]
            output_lis.append({ques: option_lis1})
    return output_lis[0:number]
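One thing worth noting: an HTTP-triggered Cloud Function's entry point receives a single Flask request object and must return something the framework can turn into an HTTP response. A minimal sketch of such a wrapper (the handler name and the query-string parameters are assumptions made for illustration, not taken from the question) could look like:

import json

def handler(request):
    # Cloud Functions (Python) passes a flask.Request for HTTP triggers.
    number = int(request.args.get("number", 1))    # hypothetical query parameter
    question = request.args.get("question", "")    # hypothetical query parameter
    result = gptinput(number, question)            # reuse the helper from the question
    # Return a JSON body plus status and headers so the platform can build the response.
    return json.dumps(result), 200, {"Content-Type": "application/json"}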

How to get actual slack username instead of user id

I have pulled data from a private Slack channel using conversations.history, and it returns the user ID instead of the username. How can I change the code to pull the username so I can identify who each user is? Code below:
import slack_sdk
from time import sleep

CHANNEL = ""
MESSAGES_PER_PAGE = 200
MAX_MESSAGES = 1000
SLACK_TOKEN = ""

client = slack_sdk.WebClient(token=SLACK_TOKEN)

# get first page
page = 1
print("Retrieving page {}".format(page))
response = client.conversations_history(
    channel=CHANNEL,
    limit=MESSAGES_PER_PAGE,
)
assert response["ok"]
messages_all = response['messages']

# get additional pages if below the max message count and if there are any
while len(messages_all) + MESSAGES_PER_PAGE <= MAX_MESSAGES and response['has_more']:
    page += 1
    print("Retrieving page {}".format(page))
    sleep(1)  # need to wait 1 sec before next call due to rate limits
    response = client.conversations_history(
        channel=CHANNEL,
        limit=MESSAGES_PER_PAGE,
        cursor=response['response_metadata']['next_cursor']
    )
    assert response["ok"]
    messages = response['messages']
    messages_all = messages_all + messages
It isn't possible to change what is returned from the conversations.history method. If you'd like to convert user IDs to usernames, you'll need to either:
Call the users.info method and retrieve the username from the response.
or
Call the users.list method and iterate through the list and create a local copy (or store in a database) and then have your code look it up.
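A minimal sketch of the users.info approach (reusing the client and messages_all variables from the question; lookup_username and the cache dict are names introduced here for illustration) could look like:

# Cache lookups so each user ID is only fetched once.
user_names = {}

def lookup_username(user_id):
    if user_id not in user_names:
        info = client.users_info(user=user_id)
        assert info["ok"]
        profile = info["user"]["profile"]
        # display_name can be empty, so fall back to real_name, then to the raw ID.
        user_names[user_id] = profile.get("display_name") or info["user"].get("real_name", user_id)
    return user_names[user_id]

for message in messages_all:
    if "user" in message:
        print(lookup_username(message["user"]), ":", message.get("text", ""))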

Facebook sdk python outputting access token without login?

I am creating a Python reaction tester and I wish to share scores to Facebook; however, the SDK is confusing me greatly. I have got an access token without being prompted with a login page, and the token appears in the web browser instead of the login dialog. This is the code I have:
def facebookShare():
    appID = '[removed]'
    appSecret = '[removed]'
    authArguments = dict(client_id = appID,
                         client_secret = appSecret,
                         redirect_uri = 'http://localhost:8080/',
                         grant_type = 'client_credentials',
                         state='abc123')
    authCommand = 'https://graph.facebook.com/v3.2/oauth/access_token?' + urllib.parse.urlencode(authArguments)
    authResponse = os.startfile(authCommand)
    try:
        authAccessToken = urllib.parse.parse_qs(str(authResponse))['access_token']
        graph = facebook.GraphAPI(access_token=authAccessToken, version="3.2")
        facebookResponse = graph.put_wall_post('I got an average time of ' + averageTime + 'seconds!')
    except KeyError:
        errorWindow = tkinter.Toplevel(root, bg = "blue")
        tkinter.Label(errorWindow, text = "Something Went Wrong.", bg = "red").pack()
Shouldn't it need a login to generate the access token? How can I make this show a login so it can post to the user's wall as it is supposed to? What URL would I need to use to get a login dialog?
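For comparison, the client_credentials grant used above only ever returns an app access token, so no login is involved. The interactive login lives at a different endpoint; a sketch of building that dialog URL (the values here are placeholders, and response_type='code' is one common choice rather than anything taken from the question) might look like:

import urllib.parse
import webbrowser

appID = '[removed]'
loginArguments = dict(client_id=appID,
                      redirect_uri='http://localhost:8080/',
                      state='abc123',
                      response_type='code')  # ask for an authorization code to exchange for a user token
loginDialogUrl = 'https://www.facebook.com/v3.2/dialog/oauth?' + urllib.parse.urlencode(loginArguments)
webbrowser.open(loginDialogUrl)  # the user logs in here and is redirected back with ?code=... appended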

AWS Lambda - How do I convert my code to work in AWS?

I'm struggling to get a Lambda function working. I have a Python script that accesses the Twitter API, pulls information, and exports that information into an Excel sheet. I'm trying to move the script over to AWS Lambda, and I'm having a lot of trouble.
What I've done so far: created an AWS account, set up an S3 bucket, and poked around trying to get things to work.
I think the main area I'm struggling with is how to go from a Python script that I execute via the local CLI to Lambda-capable code. I'm not sure I understand how the lambda_handler function works, what the event and context arguments actually mean (despite watching half a dozen tutorial videos), or how to integrate my existing functions into Lambda in the context of the lambda_handler. I'm very confused and hoping someone might be able to help me get some clarity!
Code that I'm using to pull twitter data (just a sample):
import time
import datetime
import keys
import pandas as pd
from twython import Twython, TwythonError
import pymysql

def lambda_handler(event, context):
    def oauth_authenticate():
        twitter_oauth = Twython(keys.APP_KEY, keys.APP_SECRET, oauth_version=2)
        ACCESS_TOKEN = twitter_oauth.obtain_access_token()
        twitter = Twython(keys.APP_KEY, access_token = ACCESS_TOKEN)
        return twitter

    def get_username():
        """
        Prompts for the screen name of targetted account
        """
        username = input("Enter the Twitter screenname you'd like information on. Do not include '#':")
        return username

    def get_user_followers(username):
        """
        Returns data on all accounts following the targetted user.
        WARNING: The number of followers can be huge, and the data isn't very valuable
        """
        #username = get_username()
        #import pdb; pdb.set_trace()
        twitter = oauth_authenticate()
        datestamp = str(datetime.datetime.now().strftime("%Y-%m-%d"))
        target = twitter.lookup_user(screen_name = username)
        for y in target:
            target_id = y['id_str']
        next_cursor = -1
        index = 0
        followersdata = {}
        while next_cursor:
            try:
                get_followers = twitter.get_followers_list(screen_name = username,
                                                           count = 200,
                                                           cursor = next_cursor)
                for x in get_followers['users']:
                    followersdata[index] = {}
                    followersdata[index]['screen_name'] = x['screen_name']
                    followersdata[index]['id_str'] = x['id_str']
                    followersdata[index]['name'] = x['name']
                    followersdata[index]['description'] = x['description']
                    followersdata[index]['date_checked'] = datestamp
                    followersdata[index]['targeted_account_id'] = target_id
                    index = index + 1
                next_cursor = get_followers["next_cursor"]
            except TwythonError as e:
                print(e)
                remainder = (float(twitter.get_lastfunction_header(header = 'x-rate-limit-reset'))
                             - time.time()) + 1
                print("Rate limit exceeded. Waiting for:", remainder/60, "minutes")
                print("Current Time is:", time.strftime("%I:%M:%S"))
                del twitter
                time.sleep(remainder)
                twitter = oauth_authenticate()
                continue
        followersDF = pd.DataFrame.from_dict(followersdata, orient = "index")
        followersDF.to_excel("%s-%s-follower list.xlsx" % (username, datestamp),
                             index = False, encoding = 'utf-8')
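For what it's worth, the handler itself is just a function that Lambda calls with the trigger's payload (event, usually a dict) and some runtime metadata (context). A minimal sketch of how the script above could be driven from it (the screen_name event key and the /tmp output path are assumptions made for illustration) might look like:

import json

def lambda_handler(event, context):
    # 'event' carries whatever the trigger sent (a test event, an API Gateway payload, etc.);
    # 'context' holds runtime metadata such as the remaining execution time.
    username = event.get("screen_name", "some_account")   # hypothetical input key

    # In Lambda only /tmp is writable, so the Excel file has to be written there
    # (and typically uploaded to S3 afterwards rather than kept locally).
    output_path = "/tmp/{}-followers.xlsx".format(username)

    # get_user_followers(username)  # call into the existing code here instead of using input()

    return {
        "statusCode": 200,
        "body": json.dumps({"output": output_path}),
    }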

A small piece of code which extracts top_tracks from the Last.fm API using pylast

I modified the code published on smbrown.wordpress.com which can extract the top tracks using the Last.fm API as below:
#!/usr/bin/python
import time
import pylast
import re
from md5 import md5

user_name = '*******'
user_password = '*******'
password_hash = pylast.md5("*******")
api_key = '***********************************'
api_secret = '****************************'

top_tracks_file = open('top_tracks_wordle.txt', 'w')

network = pylast.LastFMNetwork(api_key = api_key, api_secret = api_secret, username = user_name, password_hash = password_hash)

# to make the output more interesting for wordle viz.
# run against all periods. if you just want one period,
# delete the others from this list
time_periods = ['PERIOD_12MONTHS', 'PERIOD_6MONTHS', 'PERIOD_3MONTHS', 'PERIOD_OVERALL']
# time_periods = ['PERIOD_OVERALL']

#####
## shouldn't have to edit anything below here
#####

md5_user_password = md5(user_password).hexdigest()
sg = pylast.SessionKeyGenerator(network) #api_key, api_secret
session_key = sg.get_session_key(user_name, md5_user_password)
user = pylast.User(user_name, network) #api_key, api_secret, session_key

top_tracks = []
for time_period in time_periods:
    # by default pylast returns a seq in the format:
    # "Item: Andrew Bird - Fake Palindromes, Weight: 33"
    tracks = user.get_top_tracks(period=time_period)
    # regex that tries to pull out only the track name
    # (for the ex. above "Fake Palindromes")
    p = re.compile('.*[\s]-[\s](.*), Weight: [\d]+')
    for track in tracks:
        m = p.match(str(track))
        track = m.groups()[0]   # <----------- Here
        top_tracks.append(track)
    # be nice to last.fm's servers
    time.sleep(5)

top_tracks = "\n".join(top_tracks)
top_tracks_file.write(top_tracks)
top_tracks_file.close()
When the script runs to the position marked by "<----------- Here", I get an error message:
.... line 46, in
    track = m.groups()[0]
AttributeError: 'NoneType' object has no attribute 'groups'
I have been stuck here for over a day and do not know what to do next. Can anyone give me some clue about this problem?
Apparently some track names do not match your regex, so match() returns None. Catch the AttributeError (or check whether the match is None before calling groups()) and examine the offending track.
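Applied to the inner loop in the script above, a minimal sketch of that check (reusing tracks, p, and top_tracks from the question) could be:

for track in tracks:
    m = p.match(str(track))
    if m is None:
        print("No match, skipping:", track)  # inspect what the unmatched items look like
        continue
    top_tracks.append(m.groups()[0])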
