PySVN - Determine if a repository exists - python

I'm writing a small script that manages several SVN repositories. Users pass in the ID of the repository they want to change (the roots of the repos are of the form https://www.mydomain.com/).
I need to check if the given repo actually exists. I've tried using Client.list to see if I can find any files, like so:
client = pysvn.Client()
client.list("https://.../<username>/")
But if the repo does not exist then the script hangs on the list line. From digging through the tracebacks, it looks like pysvn is actually hanging on the login credentials callback (client.callback_get_login, which I have implemented but omitted here; it does not fail when the repo exists).
Can you suggest how I can determine if a repo exists or not using pysvn?
Cheers,
Pete

I couldn't reproduce your hang in the credentials callback, so this might need an expanded description of the problem. I'm running pysvn 1.7.2 on Ubuntu 10.04 with Python 2.6.6.
When I try to list a non-existent remote repository with client.list() it raises an exception. You could also use client.info2() to check for existence of a remote repository:
head_rev = pysvn.Revision(pysvn.opt_revision_kind.head)
bad_repo = 'https://.../xyz_i_dont_exist'
good_repo = 'https://.../real_project'
for url in (bad_repo, good_repo):
    try:
        info = client.info2(url, revision=head_rev, recurse=False)
        print url, 'exists.'
    except pysvn._pysvn_2_6.ClientError, ex:
        if 'non-existent' in ex.args[0]:
            print url, 'does not exist'
        else:
            print url, 'error:', ex.args[0]
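If the hang really is in the login callback, it is also worth making sure the callback returns stored credentials immediately instead of waiting for input. A minimal non-interactive setup might look like this (the username and password are placeholders):
import pysvn

client = pysvn.Client()

# Return credentials immediately; never block waiting for user input.
# Signature: callback_get_login(realm, username, may_save) -> (retcode, username, password, save)
client.callback_get_login = lambda realm, username, may_save: (True, 'myuser', 'mypassword', False)

# Answer the server certificate trust prompt without blocking either.
client.callback_ssl_server_trust_prompt = lambda trust_dict: (True, trust_dict['failures'], False)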

Peter,
My team and I have experienced the same challenge. Samplebias, try providing a callback_get_login function but set your callback_ssl_server_trust_prompt to return (True, trust_dict['failures'], True). If and only if Subversion has not cached your server certificate trust settings, you may find that info2() (or Peter's list() command) hangs (it's not actually hanging; it just intermittently takes much longer to return). Oddly, when you CTRL-C the interpreter in these scenarios, you'll get an indication that it hung on the login callback, not on server certificate verification. Play around with your ~/.subversion/auth settings (in particular the svn.simple and svn.ssl.server directories) and you'll see different amounts of 'hang time'. Look at pysvn.Client.callback_cancel if you need to handle situations which truly never return (a small sketch follows the example below).
Considering http://pysvn.tigris.org/docs/pysvn_prog_ref.html#pysvn_client_callback_ssl_server_trust_prompt, you need to decide what your desired behavior is. Do you want ONLY to allow those connections for which you already have a cached trust answer? Or do you want to ALWAYS accept, regardless of server certificate verification (WARNING: this could, obviously, have negative security implications)? Consider the following suggestion:
import pysvn

URL1 = "https://exists.your.org/svn/repos/dev/trunk/current"
URL2 = "https://doesntexit.your.org/svn/repos/dev/trunk/current"
URL3 = "https://exists.your.org/svn/repos/dev/trunk/youDontHavePermissionsBranch"

ALWAYS = "ALWAYS"
NEVER = "NEVER"
DESIRED_BEHAVIOR = ALWAYS

def ssl_server_certificate_trust_prompt(trust_dict):
    if DESIRED_BEHAVIOR == NEVER:
        return (False, 0, False)
    elif DESIRED_BEHAVIOR == ALWAYS:
        return (True, trust_dict['failures'], True)
    raise Exception, "Unsupported behavior"

def testURL(url):
    try:
        c.info2(url)
        return True
    except pysvn.ClientError, ce:
        if ('non-existent' in ce.args[0]) or ('Host not found' in ce.args[0]):
            return False
        else:
            raise ce

c = pysvn.Client()
c.callback_ssl_server_trust_prompt = ssl_server_certificate_trust_prompt
c.callback_get_login = lambda x, y, z: (True, "uname", "pw", False)

if not testURL(URL1): print "Test1 failed."
if testURL(URL2): print "Test2 failed."
try:
    testURL(URL3)
    print "Test3 failed."
except: pass
In actuality, you probably don't want to get as fancy as I have with the return values. I do think it was important to consider a potential 403 returned by the server and the "Host not found" scenario separately.
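Regarding callback_cancel: here is a minimal sketch of a time-based cancel, so a call that truly never returns can be abandoned (the 30-second limit is an arbitrary choice, not something pysvn requires):
import time
import pysvn

class TimeoutCancel:
    """Cancel any pysvn operation that runs longer than `limit` seconds."""
    def __init__(self, limit):
        self.limit = limit
        self.start = time.time()
    def __call__(self):
        # pysvn polls this callback during an operation; returning True cancels it.
        return (time.time() - self.start) > self.limit

c = pysvn.Client()
c.callback_cancel = TimeoutCancel(30)
Note that the start time is captured when the object is created, so create a fresh instance (or reset start) before each long-running call.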

Related

Pyetrade / Etrade API for option-chains function only returns options for apple?

I'm trying to get some option chains using the pyetrade package. I'm working in Sandbox mode for a newly made Etrade account.
When I execute the following code, it runs fine, but the returned information is incorrect: I keep getting options for Apple between 2012 and 2015 instead of current Exxon-Mobil options (what I'm actually requesting). The same happens if I ask for Google, Facebook, or Netflix; I just keep getting outdated Apple options.
I'm not sure where I messed up, or if this is just something that's part of sandbox mode, so that's why I asked for help. Thank you!
(Note: Some of the code is sourced from: https://github.com/1rocketdude/pyetrade_option_chains/blob/master/etrade_option_chains.py)
The following is the function to get the option chain from the API:
def getOption(thisSymbol):
    # Renew session / or start session
    try:
        authManager.renew_access_token()
    except:
        authenticate()  # this works fine
    # Make a market object to pull what you need from
    market = pyetrade.ETradeMarket(
        consumer_key,
        consumer_secret,
        tokens['oauth_token'],
        tokens['oauth_token_secret'],
        dev=True
    )
    try:
        # the dates returned are also
        q = market.get_option_expire_date(thisSymbol, resp_format='xml')
        # just formats the dates to be more comprehensible:
        expiration_dates = option_expire_dates_from_xml(q)
    except Exception:
        raise
    rtn = []
    for this_expiry_date in expiration_dates:
        q = market.get_option_chains(thisSymbol, this_expiry_date)
        chains = q['OptionChainResponse']['OptionPair']
        rtn.append(chains)
    print()
    return rtn
ret = getOption("XOM")
print(ret[0])
The API provider is explicit on this:
Note:
E*TRADE's sandbox doesn't actually produce correct option chains so this will return an error.
The sandbox is still useful for debugging e.g. the OAuth stuff.
Hardly anyone could make the sandboxed code work otherwise.
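For what it's worth, the constructor in the snippet above already takes a dev flag, so once you have production consumer keys and tokens, the same call with dev=False should query the live API instead of the sandbox. A sketch, assuming the credentials below are live ones:
# Same client as in the question, but pointed at the production API.
market = pyetrade.ETradeMarket(
    consumer_key,
    consumer_secret,
    tokens['oauth_token'],
    tokens['oauth_token_secret'],
    dev=False
)
q = market.get_option_expire_date("XOM", resp_format='xml')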

This page isn’t working 127.0.0.1 didn’t send any data Error in Flask

I am building a Flask webapp that uses the OpenBabel API (a chemistry toolkit) to generate some files for me. When I call this particular function it seems to work fine and generates the files in the directory that I want. However, once control returns to Flask it crashes, and instead of rendering the HTML template the browser shows “This page isn’t working: 127.0.0.1 didn’t send any data.” I found that when I remove the code in the function it works normally, so it's likely a problem with OpenBabel. The OpenBabel function does not output any errors itself and, based on debugging, seemingly even returns at the end.
I have tried many things from other SO answers, including changing my host to 0.0.0.0, adding threaded=True, and some other solutions, all to no avail. I tried debugging it for a long time, but now I am lost because I have tried everything I know. All I could get was a SystemExit exception from Flask. Sometimes it is able to run the print statement following the call, but more often it crashes immediately. I have no clue where the problem lies. I appreciate any help I can get. A sample of the code (shortened a bit):
@app.route("/", methods=["POST", "GET"])
def form_handler():
    if request.method == "POST":
        smiles = request.form["smiles_molecule"]
        pdb_file = request.files["pdb_molecule"]
        no_conformers = int(request.form["no_conformers"])
        scoring = request.form["score"]
        if smiles:
            pattern = re.compile('[^A-Za-z0-9]+')
            smiles_no_special_chars = re.sub(pattern, "", smiles)
            mol_path = os.path.join(app.config["MOLECULE_UPLOADS"], smiles_no_special_chars[0:10])
            if os.path.exists(mol_path):
                shutil.rmtree(mol_path)
            os.mkdir(mol_path)
            os.chdir(mol_path)
            x = conf_gen.gen_and_write_confs(smiles, no_conformers, scoring)  # <- breaks down here
            print(x)
            return render_template("index.html", mole=smiles_no_special_chars[0:10])
The function that is called:
def gen_and_write_confs(molecule, no_confs, scoring):
    """Generate and write the conformers to PDB. Takes mol, number of conformers and
    scoring method: RCS, MECS and ECS: OBRMSDConformerScore,
    OBMinimizingEnergyConformerScore and OBEnergyConformerScore. See OpenBabel docs."""
    mole = pybel.readstring("can", molecule)
    mole.addh()
    mole.make3D()
    mole = mole.OBMol
    mole.SetChainsPerceived(True)
    cs_obj = ob.OBConformerSearch()
    cs_obj.Setup(mole, no_confs, 5, 5, 25)
    if scoring == "RCS":
        score = ob.OBRMSDConformerScore()
    elif scoring == "MECS":
        score = ob.OBMinimizingEnergyConformerScore()
    else:
        score = ob.OBEnergyConformerScore()
    cs_obj.SetScore(score)
    cs_obj.Search()
    cs_obj.GetConformers(mole)
    mole.DeleteNonPolarHydrogens()
    return "Test"
If needed I can upload the project on Github. It does require installing a few dependencies and I am using conda for that right now, but I could make it available through pip too since OpenBabel is available in pip.
PS: not a single error message is shown after it crashes.
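One way to narrow this down (a debugging sketch under the assumption that native OpenBabel code is killing the worker, not a fix): run the call in a child process so a hard crash only takes down the child, then inspect its exit code. It reuses conf_gen.gen_and_write_confs from the code above; run_conf_gen_isolated is a hypothetical helper name.
import multiprocessing
import conf_gen

def _isolated(smiles, no_conformers, scoring, queue):
    # Runs in a child process; a segfault in the native library only kills this process.
    queue.put(conf_gen.gen_and_write_confs(smiles, no_conformers, scoring))

def run_conf_gen_isolated(smiles, no_conformers, scoring):
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_isolated, args=(smiles, no_conformers, scoring, queue))
    p.start()
    p.join()
    if p.exitcode != 0:
        # A negative exit code means the child was killed by a signal (e.g. SIGSEGV).
        raise RuntimeError(f"conformer generation exited with code {p.exitcode}")
    return queue.get()
If the child dies on a signal while the same call works in a plain Python shell, the problem is in how OpenBabel interacts with the Flask process rather than in the Flask code itself.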

Python function returns inconsistent results

I wrote a script that lists EC2 instances in Amazon Web Services and writes the results to Confluence. But it's behaving oddly.
I'm on Windows 10. The first time I open a PowerShell terminal and run it, it reports the correct number of servers in the AWS account.
The next time I run it (without changing anything at all) it doubles the results. And each time you run it again with the same arguments it reports the same incorrect (doubled) amount.
This is the function that lists the instances and this is where I think the trouble is, but I'm having trouble finding it:
def list_instances(aws_account, aws_account_number, interactive, regions, fieldnames, show_details):
    today, aws_env_list, output_file, output_file_name, fieldnames = initialize(interactive, aws_account)
    options = arguments()
    instance_list = ''
    session = ''
    ec2 = ''
    account_found = ''
    PrivateDNS = None
    block_device_list = None
    instance_count = 0
    account_type_message = ''
    profile_missing_message = ''
    region = ''
    # Set the ec2 dictionary
    ec2info = {}
    # Write the file headers
    if interactive == 1:
        with open(output_file, mode='w+') as csv_file:
            writer = csv.DictWriter(csv_file, fieldnames=fieldnames, delimiter=',', lineterminator='\n')
            writer.writeheader()
    if 'gov' in aws_account and not 'admin' in aws_account:
        try:
            session = boto3.Session(profile_name=aws_account,region_name=region)
            account_found = 'yes'
        except botocore.exceptions.ProfileNotFound as e:
            message = f"An exception has occurred: {e}"
            account_found = 'no'
            banner(message)
    else:
        try:
            session = boto3.Session(profile_name=aws_account,region_name=region)
            account_found = 'yes'
        except botocore.exceptions.ProfileNotFound as e:
            message = f"An exception has occurred: {e}"
            account_found = 'no'
            banner(message)
    print(Fore.CYAN)
    report_gov_or_comm(aws_account, account_found)
    print(Fore.RESET)
    for region in regions:
        if 'gov' in aws_account and not 'admin' in aws_account:
            try:
                session = boto3.Session(profile_name=aws_account,region_name=region)
            except botocore.exceptions.ProfileNotFound as e:
                profile_missing_message = f"An exception has occurred: {e}"
                account_found = 'no'
                pass
        else:
            try:
                session = boto3.Session(profile_name=aws_account,region_name=region)
                account_found = 'yes'
            except botocore.exceptions.ProfileNotFound as e:
                profile_missing_message = f"An exception has occurred: {e}"
                pass
        try:
            ec2 = session.client("ec2")
        except Exception as e:
            pass
        # Loop through the instances
        try:
            instance_list = ec2.describe_instances()
        except Exception as e:
            pass
        try:
            for reservation in instance_list["Reservations"]:
                for instance in reservation.get("Instances", []):
                    instance_count = instance_count + 1
                    launch_time = instance["LaunchTime"]
                    launch_time_friendly = launch_time.strftime("%B %d %Y")
                    tree = objectpath.Tree(instance)
                    block_devices = set(tree.execute("$..BlockDeviceMappings['Ebs']['VolumeId']"))
                    if block_devices:
                        block_devices = list(block_devices)
                        block_devices = str(block_devices).replace('[','').replace(']','').replace("'",'')
                    else:
                        block_devices = None
                    private_ips = set(tree.execute('$..PrivateIpAddress'))
                    if private_ips:
                        private_ips_list = list(private_ips)
                        private_ips_list = str(private_ips_list).replace('[','').replace(']','').replace("'",'')
                    else:
                        private_ips_list = None
                    type(private_ips_list)
                    public_ips = set(tree.execute('$..PublicIp'))
                    if len(public_ips) == 0:
                        public_ips = None
                    if public_ips:
                        public_ips_list = list(public_ips)
                        public_ips_list = str(public_ips_list).replace('[','').replace(']','').replace("'",'')
                    else:
                        public_ips_list = None
                    if 'KeyName' in instance:
                        key_name = instance['KeyName']
                    else:
                        key_name = None
                    name = None
                    if 'Tags' in instance:
                        try:
                            tags = instance['Tags']
                            name = None
                            for tag in tags:
                                if tag["Key"] == "Name":
                                    name = tag["Value"]
                                if tag["Key"] == "Engagement" or tag["Key"] == "Engagement Code":
                                    engagement = tag["Value"]
                        except ValueError:
                            print("Instance: %s has no tags" % instance_id)
                            pass
                    if 'VpcId' in instance:
                        vpc_id = instance['VpcId']
                    else:
                        vpc_id = None
                    if 'PrivateDnsName' in instance:
                        private_dns = instance['PrivateDnsName']
                    else:
                        private_dns = None
                    if 'Platform' in instance:
                        platform = instance['Platform']
                    else:
                        platform = None
                    print(f"Platform: {platform}")
                    ec2info[instance['InstanceId']] = {'AWS Account': aws_account, 'Account Number': aws_account_number, 'Name': name, 'Instance ID': instance['InstanceId'], 'Volumes': block_devices, 'Private IP': private_ips_list, 'Public IP': public_ips_list, 'Private DNS': private_dns, 'Availability Zone': instance['Placement']['AvailabilityZone'], 'VPC ID': vpc_id, 'Type': instance['InstanceType'], 'Platform': platform, 'Key Pair Name': key_name, 'State': instance['State']['Name'], 'Launch Date': launch_time_friendly}
                    with open(output_file,'a') as csv_file:
                        writer = csv.DictWriter(csv_file, fieldnames=fieldnames, delimiter=',', lineterminator='\n')
                        writer.writerow({'AWS Account': aws_account, "Account Number": aws_account_number, 'Name': name, 'Instance ID': instance["InstanceId"], 'Volumes': block_devices, 'Private IP': private_ips_list, 'Public IP': public_ips_list, 'Private DNS': private_dns, 'Availability Zone': instance['Placement']['AvailabilityZone'], 'VPC ID': vpc_id, 'Type': instance["InstanceType"], 'Platform': platform, 'Key Pair Name': key_name, 'State': instance["State"]["Name"], 'Launch Date': launch_time_friendly})
            if show_details == 'y' or show_details == 'yes':
                for instance_id, instance in ec2info.items():
                    if account_found == 'yes':
                        print(Fore.RESET + "-------------------------------------")
                        for key in ['AWS Account', 'Account Number', 'Name', 'Instance ID', 'Volumes',
                                    'Private IP', 'Public IP', 'Private DNS', 'Availability Zone',
                                    'VPC ID', 'Type', 'Platform', 'Key Pair Name', 'State', 'Launch Date']:
                            print(Fore.GREEN + f"{key}: {instance.get(key)}")
                        print(Fore.RESET + "-------------------------------------")
                    else:
                        pass
            ec2info = {}
            with open(output_file,'a') as csv_file:
                csv_file.close()
        except Exception as e:
            pass
    if profile_missing_message == '*':
        banner(profile_missing_message)
    print(Fore.GREEN)
    report_instance_stats(instance_count, aws_account, account_found)
    print(Fore.RESET + '\n')
    return output_file
This is a paste of the whole code for context: aws_ec2_list_instances.py
The first time you run it from the command line it reports the correct total of servers in EC2 (can be verified in the AWS console):
----------------------------------------------------------
There are: 51 EC2 instances in AWS Account: company-lab.
----------------------------------------------------------
The next time it's run with ABSOLUTELY NOTHING changed it reports this total:
----------------------------------------------------------
There are: 102 EC2 instances in AWS Account: company-lab.
----------------------------------------------------------
You literally just up arrow the command and it doubles the results. When it writes to confluence you can see duplicate servers listed. It does that each time you up arrow to run it again, with the same incorrect total (102 servers).
If you close the PowerShell window and open it again, it's back to reporting the correct result (51 servers), which corresponds to what you see in the AWS console.
What the heck is happening here? Why is it doing that, and how do I correct the problem?
This is pretty mysterious! I don't think I'm going to be able to debug it without access to your environment. My suggestion is that you use pdb to try to figure out how instance_count is being incremented past 51. I'd recommend adding import pdb; pdb.set_trace() at line 210, that is, right after for reservation in instance_list["Reservations"]:. You can then inspect the value of instance_count each time through the loop, see whether instance_list["Reservations"] actually has duplicate data somehow, etc.
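In concrete terms, something like this (a sketch of the suggestion above; only the pdb lines are new, the loop is from the posted function):
import pdb

try:
    for reservation in instance_list["Reservations"]:
        # Drop into the debugger once per reservation and inspect the data.
        pdb.set_trace()
        for instance in reservation.get("Instances", []):
            instance_count = instance_count + 1
            # ... rest of the loop body unchanged ...
except Exception as e:
    pass
At the (Pdb) prompt, p instance_count and p len(reservation["Instances"]) on each pass should show whether the API data itself contains duplicates or the counter is carrying over between runs.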

How to act on s3 bucket ACLs using bucket tags in Lambda function

I am writing a Lambda script to automatically revert an S3 bucket's public ACL back to private unless the bucket is tagged with "public-allowed" = "True". My script successfully reverts the ACL; however, I am having problems getting it to recognize the specified tag set.
I have found suggestions elsewhere saying to change tag.id to tag['id'] (so tag['name'] in my case), but when I do that, instead of saying 'dict' object has no attribute 'name', it simply says name in the logs as if I had print(name) in there. Doing this also has no effect on the outcome.
# Public Tag
def public_bucket(bucketname):
    try:
        bucket_tagging = s3.get_bucket_tagging(Bucket=bucketname)
        tag_set = bucket_tagging['TagSet']
        for tag in tag_set:
            if (tag.name == "public-allowed"):
                if (tag.value == "True"):
                    return True
                    break
    except Exception, e:
        print(e.message)
I was expecting this to check through the tags that exist on the bucket and break the loop when it finds the specific key/value of "public-allowed" = "True", which would allow the bucket ACL to stay public, and if there are no tags then print the error message. Instead, it still reverts the ACL to private regardless, although there are no actual errors being thrown.
What am I doing wrong here?
I was able to get some more information from a co-worker, and it turns out that I needed to reference the tags with ['Key'] and ['Value']:
def public_bucket(bucketname):
    try:
        bucket_tagging = s3.get_bucket_tagging(Bucket=bucketname)
        tag_set = bucket_tagging['TagSet']
        for tag in tag_set:
            if (tag['Key'] == "public-allowed"):
                if (tag['Value'] == "True"):
                    return True
                    break
    except Exception, e:
        print(e.message)
Sorry if me asking this question (and then answering myself) is bad stack overflow etiquette. Hopefully it's useful to others if nothing else.
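For anyone wiring this check into the rest of the Lambda, here is a minimal sketch of how it might be called from the handler. The event parsing and the put_bucket_acl call are assumptions, not part of the original script; adjust them to whatever event source triggers your function.
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Assumes the triggering event carries the bucket name (e.g. a CloudTrail-based rule).
    bucketname = event['detail']['requestParameters']['bucketName']
    if public_bucket(bucketname):
        # Tagged "public-allowed" = "True": leave the public ACL in place.
        return
    # Otherwise revert the bucket ACL to private.
    s3.put_bucket_acl(Bucket=bucketname, ACL='private')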

Is there a way to detect whether a command prompt is available in python/django?

I have some code that will sometimes be run from a command line (a django management command) and sometimes not (a django changelist action). In this code, if a certain exception is raised, I can get some user input and keep going if a command prompt (stdin) is available. Otherwise, I need to just let the exception propagate or do something different.
e.g.
def copy_account_settings(old_acct_domain, new_acct_domain):
    try:
        new_account = Account.objects.get(domain = new_acct_domain)
    except Account.DoesNotExist:
        print ("Couldn't find an account matching %s." % new_acct_domain)
        if <command prompt is available>:
            print "Would you like to create the account? (y/n)"
            if raw_input().lower().strip() == 'y':
                # get some more input and create the account and move on
                pass
        else:
            raise
How would you do this?
Perhaps you can check for a TTY?
import os
if os.isatty(0):
That should return true if the session is interactive and false if not.
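Putting that into the function above, a small helper along these lines can stand in for the <command prompt is available> placeholder (prompt_available is a made-up name; sys.stdin.isatty() is equivalent to os.isatty(0)):
import sys

def prompt_available():
    # True when stdin is attached to a terminal (e.g. a management command run from a
    # shell); typically False when the code runs under the web server for a changelist action.
    try:
        return sys.stdin.isatty()
    except Exception:
        return False
Then if prompt_available(): replaces if <command prompt is available>: in copy_account_settings.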
