How to use GPG-encrypted OAuth files via Python for offlineimap

I was playing around with OAuth2 to get a better understanding of it. For this reason I've installed offlineimap, which acts as the third-party app. I found a nice way to read encrypted credentials here on Stack Exchange.
Based on the linked post I've modified/copied the following Python script:
import subprocess
import os
import json
import sys  # used in prime_gpg_agent()
def passwd(file_name):
    acct = os.path.basename(file_name)
    path = "/PATHTOFILE/%s" % file_name
    args = ["gpg", "--use-agent", "--quiet", "--batch", "-d", path]
    try:
        return subprocess.check_output(args).strip()
    except subprocess.CalledProcessError:
        return ""
def oauthpasswd(acct, key):
    acct = os.path.basename(acct)
    path = "/PATHTOFILE/%s_oauth2.gpg" % acct
    args = ["gpg", "--use-agent", "--quiet", "--batch", "-d", path]
    try:
        return str(json.loads(subprocess.check_output(args).strip())['installed'][key])
    except subprocess.CalledProcessError:
        return ""
def prime_gpg_agent():
    ret = False
    i = 1
    while not ret:
        ret = (passwd("prime.gpg") == "prime")
        if i > 2:
            from offlineimap.ui import getglobalui
            sys.stderr.write("Error reading in passwords. Terminating.\n")
            getglobalui().terminate()
        i += 1
    return ret
prime_gpg_agent()
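One detail worth noting if this script is reused under Python 3: there, subprocess.check_output() returns bytes, so the comparison with "prime" and the values handed to offlineimap would need decoding. A minimal sketch of that variant, keeping everything else as above:
import subprocess
def passwd(file_name):
    path = "/PATHTOFILE/%s" % file_name
    args = ["gpg", "--use-agent", "--quiet", "--batch", "-d", path]
    try:
        # decode so the result is a str rather than bytes under Python 3
        return subprocess.check_output(args).decode("utf-8").strip()
    except subprocess.CalledProcessError:
        return ""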
In the corresponding offlineimaprc file I call the functions with the correct arguments:
oauth2_client_id = oauthpasswd('gmail', 'client_id')
oauth2_client_secret = oauthpasswd('gmail', 'client_secret')
oauth2_request_url = https://accounts.google.com/o/oauth2/token
oauth2_refresh_token = passwd('gmail_rf_token.gpg')
Please note that in the local file PATHTOFILE is set correctly. What I did was download the JSON file from Google containing the OAuth2 credentials and encrypt it. I've stored the refresh token in a separate file.
However, if I run offlineimap I get an authentication error:
ERROR: While attempting to sync account 'gmail'
('http error', 401, 'Unauthorized', <httplib.HTTPMessage instance at 0x7f488c214320>) (configuration is: {'client_secret': "oauthpasswd('gmail', 'client_secret')", 'grant_type': 'refresh_token', 'refresh_token': "passwd('gmail_rf_token.gpg')", 'client_id': "oauthpasswd('gmail', 'client_id')"})
I then tried to check the outputs of the two Python functions passwd and oauthpasswd in a Python interpreter, and I get the desired outputs. Even more, I copied the output from the functions in the Python interpreter into the offlineimaprc config file and was able to sync with Gmail. This implies that something must go wrong when offlineimap evaluates the file, but I can't see what.
If I only encrypt my Gmail password, everything works. This means something is going wrong with the details downloaded from Google (client_id, client_secret and refresh token). As pointed out above, the values themselves are correct. I really did copy the output of
oauthpasswd('gmail', 'client_id')
oauthpasswd('gmail', 'client_secret')
passwd('gmail_rf_token.gpg')
from a Python console into the offlineimaprc file, and it worked.

The problem is the following: according to this answer, offlineimap does not evaluate Python expressions for all keys in the offlineimaprc file. That's why the Python functions above never get evaluated and the literal strings are handed over instead.
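Depending on your offlineimap version, there may be *_eval variants of the oauth2 options that are evaluated through the pythonfile; this is an assumption to verify against the offlineimap.conf shipped with your version, not something from the original post. If they are available, the configuration would look like:
oauth2_client_id_eval = oauthpasswd('gmail', 'client_id')
oauth2_client_secret_eval = oauthpasswd('gmail', 'client_secret')
oauth2_refresh_token_eval = passwd('gmail_rf_token.gpg')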

Related

Avoid time-out error in Python when trying to write a file to some location [Python 3.8+]

I am trying to write a file to a specific mount location in Linux. The API returns a path which is required for further operations. The problem is that if the file size is huge, I face a request time-out error, because of which I am not able to get the path. The code is as follows:
@migration_blueprint.route("/migration/upload", methods=["POST"])
def upload_migration_file():
    file_abs_path = ""
    try:
        file = request.files['files']
        logger.debug("The file received is '{}'".format(file))
        file_name = str(datetime.now().strftime("%H%M%s")) + file.filename
        proxy_bin = db.find_one("bins", query={"bin_type": "proxy", "status": "active"})
        if not proxy_bin:
            raise Exception("Proxy Bin not found")
        base_proxy_path = "/mnt/share_{}/migration/".format(proxy_bin['_id'])
        if not os.path.exists(base_proxy_path):
            os.makedirs(base_proxy_path)
        file_abs_path = os.path.join(base_proxy_path, file_name)
        file.save(file_abs_path)
    except Exception as ex:
        logger.exception("Error : {}".format(str(ex)))
        abort(500, {"message": str(ex)})
    return {"path": file_abs_path}
Is there any workaround so that, irrespective of the file size, the file gets written to the location and the path is also returned in the response in time?
You could try uploading files via AJAX and polling the server at intervals until the filename is ready (you could also use server-sent events), or you could use websockets to upload files and then update the client when the filename is available.
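A minimal sketch of that polling idea, assuming a Flask app like the one in the question; the /migration/status route, the in-memory tasks dict and the save_upload helper are made up for illustration, and a real service would need something more durable than a module-level dict:
import threading
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
tasks = {}  # hypothetical registry: task_id -> saved path, or None while still pending

def save_upload(task_id, data, abs_path):
    # write the already-read bytes in the background, then record the path
    with open(abs_path, "wb") as f:
        f.write(data)
    tasks[task_id] = abs_path

@app.route("/migration/upload", methods=["POST"])
def upload_migration_file():
    file = request.files["files"]
    abs_path = "/mnt/share_example/migration/" + file.filename  # placeholder target path
    data = file.read()  # for very large uploads, stream to a local temp file instead
    task_id = str(uuid.uuid4())
    tasks[task_id] = None
    threading.Thread(target=save_upload, args=(task_id, data, abs_path)).start()
    return jsonify({"task_id": task_id}), 202

@app.route("/migration/status/<task_id>")
def upload_status(task_id):
    path = tasks.get(task_id)
    return jsonify({"ready": path is not None, "path": path})
The client then polls /migration/status/<task_id> until ready is true and reads the path from that response.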
I tried to implement the solution through multiprocessing in Python.
from multiprocessing import Process
def copy_file(file_data, abs_path):
    file_data.save(abs_path)
In the API I updated it:
p = Process(target=copy_file(file, file_abs_path))
p.start()
# file.save(file_abs_path)
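One caveat with the snippet above: Process(target=copy_file(file, file_abs_path)) calls copy_file immediately in the request process and passes its return value (None) as the target, so nothing is actually offloaded. The callable and its arguments have to be passed separately, and since Flask's FileStorage object doesn't transfer cleanly to another process, it is safer to hand over plain bytes or a path. A sketch, where copy_file is a hypothetical variant that takes bytes:
from multiprocessing import Process

def copy_file(data, abs_path):
    # hypothetical variant: write the already-read bytes to the target path
    with open(abs_path, "wb") as f:
        f.write(data)

data = file.read()  # read while still inside the request
p = Process(target=copy_file, args=(data, file_abs_path))
p.start()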

Upload image to Facebook using the Python API

I have searched the web far and wide for a still-working example of uploading a photo to Facebook through the Python API (Python for Facebook). Questions like this have been asked on Stack Overflow before, but none of the answers I have found work anymore.
What I got working is:
import facebook as fb
cfg = {
    "page_id": "my_page_id",
    "access_token": "my_access_token"
}
api = get_api(cfg)
msg = "Hello world!"
status = api.put_wall_post(msg)
where I have defined the get_api(cfg) function as this
def get_api(cfg):
    graph = fb.GraphAPI(cfg['access_token'], version='2.2')
    # Get page token to post as the page. You can skip
    # the following if you want to post as yourself.
    resp = graph.get_object('me/accounts')
    page_access_token = None
    for page in resp['data']:
        if page['id'] == cfg['page_id']:
            page_access_token = page['access_token']
    graph = fb.GraphAPI(page_access_token)
    return graph
And this does indeed post a message to my page.
However, if I instead want to upload an image everything goes wrong.
# Upload a profile photo for a Page.
api.put_photo(image=open("path_to/my_image.jpg", 'rb').read(), message="Here's my image")
I get the dreaded GraphAPIError: (#324) Requires upload file, for which none of the solutions on Stack Overflow work for me.
If I instead issue the following command
api.put_photo(image=open("path_to/my_image.jpg",'rb').read(), album_path=cfg['page_id'] + "/picture")
I get GraphAPIError: (#1) Could not fetch picture for which I haven't been able to find a solution either.
Could someone out there please point me in the right direction or provide me with a currently working example? It would be greatly appreciated, thanks!
A 324 Facebook error can result from a few things, depending on how the photo upload call was made:
a missing image
an image not recognised by Facebook
incorrect directory path reference
A raw cURL call looks like
curl -F 'source=@my_image.jpg' 'https://graph.facebook.com/me/photos?access_token=YOUR_TOKEN'
As long as the above call works, you can be sure the photo is acceptable to Facebook's servers.
An example of how a 324 error can occur
touch meow.jpg
curl -F 'source=@meow.jpg' 'https://graph.facebook.com/me/photos?access_token=YOUR_TOKEN'
This can also occur for corrupted image files as you have seen.
Using .read() will dump the actual data
Empty File
>>> image=open("meow.jpg",'rb').read()
>>> image
''
Image File
>>> image=open("how.png",'rb').read()
>>> image
'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00...
Neither of these will work with the api.put_photo call; as you have seen, and as Klaus D. mentioned, the call should be made without read().
So this call
api.put_photo(image=open("path_to/my_image.jpg", 'rb').read(), message="Here's my image")
actually becomes
api.put_photo('\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00...', message="Here's my image")
which is just a string, and a string is not what is wanted.
One needs the open file object itself, e.g. <open file 'how.png', mode 'rb' at 0x1085b2390>.
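In other words, based on the question's own call, a minimal fix is to hand put_photo the file object and drop .read():
api.put_photo(image=open("path_to/my_image.jpg", 'rb'), message="Here's my image")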
I know this is old and doesn't answer the question with the specified API; however, I came upon this via a search and hopefully my solution will help travelers on a similar path.
Using requests and tempfile
A quick example of how I do it using the tempfile and requests modules.
Download an image and upload to Facebook
The script below should grab an image from a given URL, save it to a file within a temporary directory, and automatically clean up when finished.
In addition, I can confirm this works running as a Flask service on Google Cloud Run, whose container runtime contract lets us store the file in memory.
import tempfile
import requests
# setup stuff - certainly change this
filename = "your-desired-filename"
image_url = "your-image-url"
act_id = "your account id"
access_token = "your access token"
# create the temporary directory
temp_dir = tempfile.TemporaryDirectory()
directory = temp_dir.name
# build the full path inside the temporary directory
filepath = f"{directory}/{filename}"
# stream the image bytes
res = requests.get(image_url, stream=True)
# write them to your filename at your temporary directory
# assuming this works
# add logic for non 200 status codes
with open(filepath, "wb+") as f:
    f.write(res.content)
# prep the payload for the facebook call
files = {
    "filename": open(filepath, "rb"),
}
url = f"https://graph.facebook.com/v10.0/{act_id}/adimages?access_token={access_token}"
# send the POST request
res = requests.post(url, files=files)
res.raise_for_status()
# the returns below assume this snippet lives inside a function (e.g. a Flask view)
if res.status_code == 200:
    # get your image data back
    image_upload_data = res.json()
    temp_dir.cleanup()
    if "images" in image_upload_data:
        return image_upload_data["images"][filepath.split("/")[-1]]
    return image_upload_data
temp_dir.cleanup()  # paranoid: just in case an error isn't raised

How to use the pyfig module in Python?

This is part of the mailer.py script:
config = pyfig.Pyfig(config_file)
svnlook = config.general.svnlook #svnlook path
sendmail = config.general.sendmail #sendmail path
From = config.general.from_email #from email address
To = config.general.to_email #to email address
What does this config variable contain? Is there a way to get the values in the config variable without pyfig?
In this case, config is a pyfig.Pyfig object initialised with the contents of the file named by the string config_file.
To find out what that object does and contains, you can either look at the documentation and/or the source code (both here), or you can print it out after the initialisation, e.g.:
config = pyfig.Pyfig(config_file)
print "Config Contains:\n\t", '\n\t'.join(dir(config))
if hasattr(config, "keys"):
print "Config Keys:\n\t", '\n\t'.join(config.keys())
or if you are using Python 3,
config = pyfig.Pyfig(config_file)
print("Config Contains:\n\t", '\n\t'.join(dir(config)))
if hasattr(config, "keys"):
print("Config Keys:\n\t", '\n\t'.join(config.keys()))
To get the same data without pyfig you would need to read and parse the content of the file referenced by config_file in your own code.
N.B.: pyfig seems to be more or less abandoned (no updates in over 5 years, the web site no longer exists, etc.), so I would strongly recommend converting the code to use a JSON configuration file instead.
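For example, a minimal sketch of the JSON route; the config.json layout below is made up to mirror the attributes used in mailer.py:
import json

config_file = "config.json"  # placeholder path
# hypothetical config.json:
# {"general": {"svnlook": "/usr/bin/svnlook", "sendmail": "/usr/sbin/sendmail",
#              "from_email": "from@example.com", "to_email": "to@example.com"}}
with open(config_file) as f:
    config = json.load(f)

svnlook = config["general"]["svnlook"]    # svnlook path
sendmail = config["general"]["sendmail"]  # sendmail path
From = config["general"]["from_email"]    # from email address
To = config["general"]["to_email"]        # to email address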

It's Dangerous not recognizing payload

I'm using Heroku Scheduler to run a script outside of my app. The script is in the same folder as run.py and the Procfile.
@manager.command
def run_purge():
    candidates = models.Candidate.query.all()
    print "SECRET KEY --------->", os.environ["APP_SECRET_KEY"]
    people_purged = []
    for candidate in candidates:
        if candidate.status != 0 and candidate.approved == True and over_30_days(candidate.last_status_change):
            payload = reactivate_account_link(candidate.email, 'reactivate_account')
            send_email("Your account is innactive", "TalentTracker", [candidate.email], payload)
            candidate.status = 0
            db.session.commit()
            people_purged.append(candidate.email)
        else:
            pass
    return send_email("Purge Completed", "TalentTracker", email_to_admin, "purge completed --> {0}".format(people_purged))
The script generates a payload using Flask's It's Dangerous, and the payload is received within the views file within the app itself. This works fine locally. However, when I run it live it gives me an "Internal Server Error". Through print statements, I figured out it's triggering the BadSignature exception, and I'm not exactly sure why. My hunch is that it has to do with the secret key being outside of the app, but when I print the secret key it's present!
@app.route('/candidates/reactivate_account/<payload>/')
def reactivate_account(payload):
    s = get_serializer()
    try:
        candidate_email = s.loads(payload)[0]
    except BadSignature:
        print "BAD SIGNATURE", payload, s.loads(payload)
        raise
    candidate = Candidate.query.filter_by(email=candidate_email).first()
    candidate.status += 1
    candidate.last_status_change = datetime.datetime.now()
    db.session.commit()
    commit_to_analytics(candidate.candidate_id, None, 4)
    return render_template("test.html")
This is what get_serializer looks like outside of the app.
def get_serializer(secret_key=None):
    if secret_key is None:
        secret_key = app.secret_key
    return URLSafeSerializer(secret_key)
# for getting serialized urls
def reactivate_account_link(candidate_email, path):
    s = get_serializer(os.environ["APP_SECRET_KEY"])
    loads = [candidate_email]
    payload = s.dumps(loads)
    return url_for(path, payload=payload, _external=True)
I'm creating it separately outside of the app, but the function is the same within the app. I've tried a version of this where I explicitly pass the secret key, but that didn't work either. Instead of creating it separately, should I import it?
----- SECOND UPDATE -----
I got this to work by feeding both get_serializer functions (the one in the app and the one outside) a completely new secret_key.
However, when I compared os.environ["APP_SECRET_KEY"] and app.secret_key using repr and ==, the values were the same. The payloads matched too.
When I print this secret key in the terminal it appears without backslashes, e.g. abcdefghi (which I assumed is correct behaviour). In reality the secret key has backslashes, e.g. ab/cd/ef/gh/ij. I'm not sure whether this is related, but I thought I would include it.
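For what it's worth, a minimal sketch of the comparison described above, checking that the two keys really drive interchangeable serializers; it assumes the Flask app object is importable from the script and uses the question's Python 2 style:
import os
from itsdangerous import URLSafeSerializer
from app import app  # assumption: however the Flask app is exposed to the script

key_from_env = os.environ["APP_SECRET_KEY"]
key_from_app = app.secret_key
print repr(key_from_env), repr(key_from_app), key_from_env == key_from_app

payload = URLSafeSerializer(key_from_env).dumps(["someone@example.com"])
# loads() raises BadSignature if the two keys differ in any way
print URLSafeSerializer(key_from_app).loads(payload)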

git server side hooks

I am running into a problem when running the following Python script on the server. It looks at the commit information for the push to make sure it follows a particular syntax. I am unable to get input from the user, which is why the username and password are hard-coded. I am now also unable to get the list of commit messages that occurred before this particular push.
#!/usr/bin/python
import SOAPpy
import getpass
import datetime
import sys
import re
import logging
import os
import string      # used by git_get_array_of_commit_ids
import subprocess  # used by get_shell_cmd_output
def login(x, y):
    try:
        auth = soap.login(x, y)
        return auth
    except:
        sys.exit("Invalid username or password")
def getIssue(auth, issue):
    try:
        issue = soap.getIssue(auth, issue)
    except:
        sys.exit("No issue of that type found : Make sure all PRs are valid jira PRs")
def git_get_commit_msg(commit_id):
    return get_shell_cmd_output("git rev-list --pretty --max-count=1 " + commit_id)
def git_get_last_commit_id():
    return get_shell_cmd_output("git log --pretty=format:%H -1")
def getCommitText():
    commit_msg_filename = sys.argv[1]
    try:
        commit_msg_text = open(commit_msg_filename).read()
        return commit_msg_text
    except:
        sys.exit("Could not read commit message")
def git_get_array_of_commit_ids(start_id, end_id):
    output = get_shell_cmd_output("git rev-list " + start_id + ".." + end_id)
    if output == "":
        return None
    commit_id_array = string.split(output, '\n')
    return commit_id_array
def get_shell_cmd_output(cmd):
    try:
        proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
        return proc.stdout.read().rstrip('\n')
    except KeyboardInterrupt:
        logging.info("... interrupted")
    except Exception, e:
        logging.error("Failed trying to execute '%s'", cmd)
def findpattern(commit_msg):
    pattern = re.compile("\w\w*-\d\d*")
    group = pattern.findall(commit_msg)
    print group
    found = len(group)
    found = 0
    issues = 0
    for match in group:
        auth = soap.login(jirauser, passwd)
        getIssue(auth, match)
        issues = issues + 1
        found += 1
    if found == 0:
        sys.exit("No issue patterns found.")
    print "Retrieved issues: " + str(issues)
def update():
    print sys.argv[2]
    print sys.argv[3]
    old_commit_id = sys.argv[2]
    new_commit_id = sys.argv[3]
    commit_id_array = git_get_array_of_commit_ids(old_commit_id, new_commit_id)
    for commit_id in commit_id_array:
        commit_text = git_get_commit_msg(commit_id)
        findpattern(commit_text)
soap = SOAPpy.WSDL.Proxy('some url')
# this line is for repointing the input from dev/null
#sys.stdin = open('/dev/tty', 'r') # this fails horribly.
#ask user for input
#jirauser = raw_inp
#("Username for jira: ")
jirauser = "username"
passwd = "987654321"
#passwd = getpass.getpass("Password for %s: " % jirauser)
login(jirauser, passwd)
#commit_msg = getCommitText()
#findpattern(commit_msg)
update()
The intended goal of this code is to check the commits made locally and to parse through them for the intended pattern, as well as checking in Jira whether that PR exists. It is a server-side hook that gets activated on a push to the repository.
Any tips on writing python hooks would be appreciated. Please and thank you.
I suggest that you have a look at gitorious (http://gitorious.org/gitorious).
They use ssh to handle authentication and rights management (getting the username given by ssh).
They also have some hooks on git repositories. I guess it could help to see how they are processing git hooks using ruby.
By the time your update hook fires, the server has the new commits: the question is whether your hook will allow the ref in question to move. What information from the local (sending) repository do you want?
For the credentials issue, funnel everyone through a single user. For example, GitHub does it with the git user, which is why their SSH URLs begin with git@github.com:.... Then in ~git/.ssh/authorized_keys, associate a username with each key. Note that the following should be on a single line but is wrapped for presentation purposes.
no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding,
command="env myuser=gbgcoll /usr/bin/git-shell -c \"${SSH_ORIGINAL_COMMAND:-}\""
ssh-rsa AAAAB...
Now to see who's trying to do the update, your hook examines the $myuser environment variable.
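For example, a minimal sketch of that check at the top of the update hook (the myuser name comes from the forced command above, and the script stays in the question's Python 2 style):
import os
import sys

pusher = os.environ.get("myuser")
if pusher is None:
    sys.exit("update hook: no myuser in the environment; rejecting the push")
print "Push attempted by: " + pusher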
This doesn't give you each user's Jira credentials. To solve that issue, create a dummy Jira account that has read-only access to everything, and hardcode that Jira account's credentials in your hook. This allows you to verify that a given PR exists.
