self.zone.locations==None when parsing zones.mpk - python

I've finally made it through all of the linting errors when I converted the project from the archaic Python 2.7 code to Python 3.11. When I run python MapperGUI-3.pyw, the GUI loads but something is keeping the daoc game path from being loaded properly and stored, resulting in the error message "Game not found".
I've beaten my head against the functions and variables and I am sure I'm missing something obvious, but for the life of me I cannot figure out where it is going wrong parsing the game directory of 'C:/Program Files (x86)/Electronic Arts/Dark Age of Camelot/'.
For whatever reason, the program is stuck thinking self.zone.locations==None, and I confirmed this by manually setting the variable to a valid value, after which things worked fine.
MapperGUI/modules/zones.py is the module that loads the zones from the .mpk file dynamically. I verified mapper/all_locations.txt is being created properly when zones.py is parsed and called from the main program.
Did I miss something in zones.py? When I ran the 2 to 3 conversion on it, nothing was changed and I am not seeing any runtime errors.
#zones.py
import os, sys
sys.path.append("../mapper")
import dempak
import datParser as ConfigParser

class Zones:
    def __init__(self, gamepath):
        try:
            self.locations = []
            cp = ConfigParser.ConfigParser()
            cp.readfp(dempak.getMPAKEntry(os.path.join(gamepath, 'zones', 'zones.mpk'), 'zones.dat'))
            sections = cp.sections()
            sections.sort()
            for s in sections:
                if s[:4] == 'zone':
                    id = int(s[4:])
                    enabled = int(cp.get(s, 'enabled'))
                    name = cp.get(s, 'name')
                    region = cp.get(s, 'region')  # need region to id realm -- cch
                    try:
                        type = int(cp.get(s, 'type'))
                    except ConfigParser.NoOptionError:
                        type = 0
                    if type == 0 or type == 3:
                        # add region to tuple -- cch
                        self.locations.append(("%03d" % id, name, region))
                    else:  # ignore type 1,2,4 (city,dungeon,TOA city)
                        continue
            dataFile = open("mapper/all_locations.txt", "w")
            for x in range(len(self.locations)):
                dataFile.write("%s : %s\n" % (self.locations[x][0], self.locations[x][1]))
            dataFile.close()
        except IOError:
            self.locations = None
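One thing that stands out to me: the bare except IOError at the end silently turns any failure (a bad game path, a missing zones.mpk, or dempak handing the parser something it can't read) into self.locations = None. A minimal debugging sketch, just a suggestion rather than a fix (the helper name is hypothetical), that surfaces whatever is actually going wrong:

import os
import sys

def check_zones_path(gamepath):
    """Hypothetical helper: confirm zones.mpk actually exists before parsing."""
    mpk = os.path.join(gamepath, 'zones', 'zones.mpk')
    if not os.path.isfile(mpk):
        print(f"zones.mpk not found at {mpk!r}", file=sys.stderr)
        return False
    return True

# You could call check_zones_path(gamepath) at the top of Zones.__init__, and
# change the last lines to
#     except IOError as e:   # IOError is an alias of OSError in Python 3
#         print(f"zones.mpk could not be parsed: {e!r}", file=sys.stderr)
#         self.locations = None
# so the reason for the None fallback is visible instead of swallowed.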

Related

python importlib seems to be sharing data between instances

OK, so I am having a weird one. I am running Python in SideFX's Hython (their custom build) with PDG. The only real difference between Hython and vanilla Python is some internal functions for handling geometry data and compiled nodes, which shouldn't be an issue even though they are being used.
The way the code runs, I am generating a list of files from the disk which creates PDG work items. Those work items are then processed in parallel by PDG. Here is the code for that:
import importlib.util
import pdg
import os
from pdg.processor import PyProcessor
import json

class CustomProcessor(PyProcessor):
    def __init__(self, node):
        PyProcessor.__init__(self, node)
        self.extractor_module = 'GeoExtractor'

    def onGenerate(self, item_holder, upstream_items, generation_type):
        for upstream_item in upstream_items:
            new_item = item_holder.addWorkItem(parent=upstream_item, inProcess=True)
        return pdg.result.Success

    def onCookTask(self, work_item):
        spec = importlib.util.spec_from_file_location("callback", "Geo2Custom.py")
        GE = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(GE)
        GE.convert(f"{work_item.attribValue('directory')}/{work_item.attribValue('filename')}{work_item.attribValue('extension')}", work_item.index, f'FRAME { work_item.index }', self.extractor_module)
        return pdg.result.Success

def bulk_convert(path_pattern, extractor_module='GeoExtractor'):
    type_registry = pdg.TypeRegistry.types()
    try:
        type_registry.registerNode(CustomProcessor, pdg.nodeType.Processor, name="customprocessor", label="Custom Processor", category="Custom")
    except Exception:
        pass
    whereItWorks = pdg.GraphContext("testBed")
    whatWorks = whereItWorks.addScheduler("localscheduler")
    whatWorks.setWorkingDir(os.getcwd(), '$HIP')
    whereItWorks.setValues(f'{whatWorks.name}', {'maxprocsmenu': -1, 'tempdirmenu': 0, 'verbose': 1})
    findem = whereItWorks.addNode("filepattern")
    whereItWorks.setValue(f'{findem.name}', 'pattern', path_pattern, 0)
    generic = whereItWorks.addNode("genericgenerator")
    whereItWorks.setValue(generic.name, 'itemcount', 4, 0)
    custom = whereItWorks.addNode("customprocessor")
    custom.extractor_module = extractor_module
    node1 = [findem]
    node2 = [custom]*len(node1)
    for n1, n2 in zip(node1, node2):
        whereItWorks.connect(f'{n1.name}.output', f'{n2.name}.input')
        n2.cook(True)
        for node in whereItWorks.graph.nodes():
            node.dirty(False)
        whereItWorks.disconnect(f'{n1.name}.output', f'{n2.name}.input')
    print("FULLY DONE")
import os
import hou
import traceback
import CustomWriter
import importlib

def convert(filename, frame_id, marker, extractor_module='GeoExtractor'):
    Extractor = importlib.__import__(extractor_module)
    base, ext = os.path.splitext(filename)
    if ext == '.sc':
        base = os.path.splitext(base)[0]
    dest_file = base + ".custom"
    geo = hou.Geometry()
    geo.loadFromFile(filename)
    try:
        frame = Extractor.extract_geometry(geo, frame_id)
    except Exception as e:
        print(f'F{ frame_id } Geometry extraction failed: { traceback.format_exc() }.')
        return None
    print(f'F{ frame_id } Geometry extracted. Writing file { dest_file }.')
    try:
        CustomWriter.write_frame(frame, dest_file)
    except Exception as e:
        print(f'F{ frame_id } writing failed: { e }.')
    print(marker + " SUCCESS")
The onCookTask code is run when the work item is processed.
Inside of the GeoExtractor.py program I am importing the geometry file defined by the work item, then converting it into a couple Pandas dataframes to collate and process the massive volumes of data quickly, which is then passed to a custom set of functions for writing binary files to disk from the Pandas data.
Everything appears to run flawlessly, until I check my output binaries and see that they grow in file size much more than they should, indicating that something is either being shared between instances or not cleared from memory, and that subsequent loads of the extractor code are appending to the identically named dataframes.
I have run the GeoExtractor code sequentially with the python instance closing between each file conversion using the exact same code and the files are fine, growing only very slowly as the geometry data volume grows, so the issue has to lie somewhere in the parallelization of it using PDG and calling the GeoExtractor.py code over and over for each work item.
I have contemplated moving the importlib stuff to the __init__() for the class leaving only the call to the member function in the onCookTask() function. Maybe even going so far as to pass a unique variable for each work item which is used inside GeoExtractor to create a closure of the internal functions so they are unique instances in memory.
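To illustrate the kind of thing I suspect (the module below is a hypothetical stand-in, not the real GeoExtractor): importlib.import_module / importlib.__import__ cache the module in sys.modules, so any module-level containers in it live for the whole interpreter session, whereas importlib.reload re-runs the module body:

import importlib
import os
import sys
import tempfile

# Write a throwaway module that keeps state at module level, like a cached DataFrame.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "fake_extractor.py"), "w") as f:
    f.write(
        "frames = []                  # module-level, created once per interpreter\n"
        "def extract_geometry(frame_id):\n"
        "    frames.append(frame_id)  # grows on every call\n"
        "    return list(frames)\n"
    )
sys.path.insert(0, tmpdir)

ext = importlib.import_module("fake_extractor")
print(ext.extract_geometry(1))   # [1]
print(ext.extract_geometry(2))   # [1, 2]  <- state shared across work items in one process

ext = importlib.reload(ext)      # re-executes the module body, so the list starts empty
print(ext.extract_geometry(3))   # [3]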
I tried to do a stripped down version of GeoExtractor and since I'm not sure where the leak is, I just ended up pulling out comments with proprietary or superfluous information and changing some custom library names, but the file ended up kinda long so I am including a pastebin: https://pastebin.com/4HHS8D2W
As for CustomGeometry and CustomWriter, there is no working form of either of those libraries that will be NDA safe, so unfortunately they have to stay blackboxed. The CustomGeometry is a handful of container classes which organize all of the data coming out of the geometry, and the writer is a formatter/writer for the binary format we are utilizing. I am hoping the issue wouldn't be in either of them.
Edit 1: I fixed an issue in the example code.
Edit 2: Added larger examples.

Python function returns inconsistent results

I wrote a script that lists EC2 instances in Amazon Web Services. It writes the results to confluence. But it's behaving oddly.
I'm on Windows 10. The first time I open a PowerShell terminal and run it, it reports the correct number of servers in the AWS account.
The next time I run it (without changing anything at all) it doubles the results. And each time you run it again with the same arguments it reports the same incorrect (doubled) amount.
This is the function that lists the instances and this is where I think the trouble is, but I'm having trouble finding it:
def list_instances(aws_account, aws_account_number, interactive, regions, fieldnames, show_details):
    today, aws_env_list, output_file, output_file_name, fieldnames = initialize(interactive, aws_account)
    options = arguments()
    instance_list = ''
    session = ''
    ec2 = ''
    account_found = ''
    PrivateDNS = None
    block_device_list = None
    instance_count = 0
    account_type_message = ''
    profile_missing_message = ''
    region = ''
    # Set the ec2 dictionary
    ec2info = {}
    # Write the file headers
    if interactive == 1:
        with open(output_file, mode='w+') as csv_file:
            writer = csv.DictWriter(csv_file, fieldnames=fieldnames, delimiter=',', lineterminator='\n')
            writer.writeheader()
    if 'gov' in aws_account and not 'admin' in aws_account:
        try:
            session = boto3.Session(profile_name=aws_account, region_name=region)
            account_found = 'yes'
        except botocore.exceptions.ProfileNotFound as e:
            message = f"An exception has occurred: {e}"
            account_found = 'no'
            banner(message)
    else:
        try:
            session = boto3.Session(profile_name=aws_account, region_name=region)
            account_found = 'yes'
        except botocore.exceptions.ProfileNotFound as e:
            message = f"An exception has occurred: {e}"
            account_found = 'no'
            banner(message)
    print(Fore.CYAN)
    report_gov_or_comm(aws_account, account_found)
    print(Fore.RESET)
    for region in regions:
        if 'gov' in aws_account and not 'admin' in aws_account:
            try:
                session = boto3.Session(profile_name=aws_account, region_name=region)
            except botocore.exceptions.ProfileNotFound as e:
                profile_missing_message = f"An exception has occurred: {e}"
                account_found = 'no'
                pass
        else:
            try:
                session = boto3.Session(profile_name=aws_account, region_name=region)
                account_found = 'yes'
            except botocore.exceptions.ProfileNotFound as e:
                profile_missing_message = f"An exception has occurred: {e}"
                pass
        try:
            ec2 = session.client("ec2")
        except Exception as e:
            pass
        # Loop through the instances
        try:
            instance_list = ec2.describe_instances()
        except Exception as e:
            pass
        try:
            for reservation in instance_list["Reservations"]:
                for instance in reservation.get("Instances", []):
                    instance_count = instance_count + 1
                    launch_time = instance["LaunchTime"]
                    launch_time_friendly = launch_time.strftime("%B %d %Y")
                    tree = objectpath.Tree(instance)
                    block_devices = set(tree.execute("$..BlockDeviceMappings['Ebs']['VolumeId']"))
                    if block_devices:
                        block_devices = list(block_devices)
                        block_devices = str(block_devices).replace('[', '').replace(']', '').replace("'", '')
                    else:
                        block_devices = None
                    private_ips = set(tree.execute('$..PrivateIpAddress'))
                    if private_ips:
                        private_ips_list = list(private_ips)
                        private_ips_list = str(private_ips_list).replace('[', '').replace(']', '').replace("'", '')
                    else:
                        private_ips_list = None
                    type(private_ips_list)
                    public_ips = set(tree.execute('$..PublicIp'))
                    if len(public_ips) == 0:
                        public_ips = None
                    if public_ips:
                        public_ips_list = list(public_ips)
                        public_ips_list = str(public_ips_list).replace('[', '').replace(']', '').replace("'", '')
                    else:
                        public_ips_list = None
                    if 'KeyName' in instance:
                        key_name = instance['KeyName']
                    else:
                        key_name = None
                    name = None
                    if 'Tags' in instance:
                        try:
                            tags = instance['Tags']
                            name = None
                            for tag in tags:
                                if tag["Key"] == "Name":
                                    name = tag["Value"]
                                if tag["Key"] == "Engagement" or tag["Key"] == "Engagement Code":
                                    engagement = tag["Value"]
                        except ValueError:
                            print("Instance: %s has no tags" % instance_id)
                            pass
                    if 'VpcId' in instance:
                        vpc_id = instance['VpcId']
                    else:
                        vpc_id = None
                    if 'PrivateDnsName' in instance:
                        private_dns = instance['PrivateDnsName']
                    else:
                        private_dns = None
                    if 'Platform' in instance:
                        platform = instance['Platform']
                    else:
                        platform = None
                    print(f"Platform: {platform}")
                    ec2info[instance['InstanceId']] = {
                        'AWS Account': aws_account, 'Account Number': aws_account_number, 'Name': name,
                        'Instance ID': instance['InstanceId'], 'Volumes': block_devices,
                        'Private IP': private_ips_list, 'Public IP': public_ips_list, 'Private DNS': private_dns,
                        'Availability Zone': instance['Placement']['AvailabilityZone'], 'VPC ID': vpc_id,
                        'Type': instance['InstanceType'], 'Platform': platform, 'Key Pair Name': key_name,
                        'State': instance['State']['Name'], 'Launch Date': launch_time_friendly
                    }
                    with open(output_file, 'a') as csv_file:
                        writer = csv.DictWriter(csv_file, fieldnames=fieldnames, delimiter=',', lineterminator='\n')
                        writer.writerow({'AWS Account': aws_account, "Account Number": aws_account_number, 'Name': name, 'Instance ID': instance["InstanceId"], 'Volumes': block_devices, 'Private IP': private_ips_list, 'Public IP': public_ips_list, 'Private DNS': private_dns, 'Availability Zone': instance['Placement']['AvailabilityZone'], 'VPC ID': vpc_id, 'Type': instance["InstanceType"], 'Platform': platform, 'Key Pair Name': key_name, 'State': instance["State"]["Name"], 'Launch Date': launch_time_friendly})
            if show_details == 'y' or show_details == 'yes':
                for instance_id, instance in ec2info.items():
                    if account_found == 'yes':
                        print(Fore.RESET + "-------------------------------------")
                        for key in ['AWS Account', 'Account Number', 'Name', 'Instance ID', 'Volumes',
                                    'Private IP', 'Public IP', 'Private DNS', 'Availability Zone', 'VPC ID',
                                    'Type', 'Platform', 'Key Pair Name', 'State', 'Launch Date']:
                            print(Fore.GREEN + f"{key}: {instance.get(key)}")
                        print(Fore.RESET + "-------------------------------------")
                    else:
                        pass
            ec2info = {}
            with open(output_file, 'a') as csv_file:
                csv_file.close()
        except Exception as e:
            pass
    if profile_missing_message == '*':
        banner(profile_missing_message)
    print(Fore.GREEN)
    report_instance_stats(instance_count, aws_account, account_found)
    print(Fore.RESET + '\n')
    return output_file
This is a paste of the whole code for context: aws_ec2_list_instances.py
The first time you run it from the command line it reports the correct total of servers in EC2 (can be verified in the AWS console):
----------------------------------------------------------
There are: 51 EC2 instances in AWS Account: company-lab.
----------------------------------------------------------
The next time it's run with ABSOLUTELY NOTHING changed it reports this total:
----------------------------------------------------------
There are: 102 EC2 instances in AWS Account: company-lab.
----------------------------------------------------------
You literally just up arrow the command and it doubles the results. When it writes to confluence you can see duplicate servers listed. It does that each time you up arrow to run it again, with the same incorrect total (102 servers).
If you close the powershell command line and open again it's back to reporting the correct result (51 servers) which corresponds to what you see in the AWS console.
What the heck is happening, here? Why is it doing that and how do I correct the problem?
This is pretty mysterious! I don't think I'm going to be able to debug it without access to your environment. My suggestion is that you use pdb to try to figure out how instance_count is being incremented past 51. I'd recommend adding import pdb; pdb.set_trace() at line 210, that is, right after for reservation in instance_list["Reservations"]:. You can then inspect the value of instance_count each time through the loop, see whether instance_list["Reservations"] actually has duplicate data somehow, etc.
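For illustration, here is a self-contained sketch of that pdb workflow; the function and the data below are stand-ins for the real script, not your actual code:

import pdb

def count_instances(instance_list):
    """Stand-in for the inner loop of list_instances()."""
    instance_count = 0
    for reservation in instance_list["Reservations"]:
        pdb.set_trace()  # execution pauses here on every pass through the loop
        for instance in reservation.get("Instances", []):
            instance_count += 1
    return instance_count

# At the (Pdb) prompt you can type `p instance_count`, `p reservation`, or `c`
# to continue, and watch whether the same InstanceId comes around twice.
sample = {"Reservations": [{"Instances": [{"InstanceId": "i-0abc"}, {"InstanceId": "i-0def"}]}]}
print(count_instances(sample))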

Module urllib.request not getting data

I am trying to test this demo program from Lynda using Python 3. I am using PyCharm as my IDE. I already added and installed the request package, but when I run the program it finishes cleanly with the message "Process finished with exit code 0" and does not show any output from the print statements. Where am I going wrong?
import urllib.request  # instead of urllib2 like in Python 2.7
import json

def printResults(data):
    # Use the json module to load the string data into a dictionary
    theJSON = json.loads(data)
    # now we can access the contents of the JSON like any other Python object
    if "title" in theJSON["metadata"]:
        print(theJSON["metadata"]["title"])
    # output the number of events, plus the magnitude and each event name
    count = theJSON["metadata"]["count"]
    print(str(count) + " events recorded")
    # for each event, print the place where it occurred
    for i in theJSON["features"]:
        print(i["properties"]["place"])
    # print the events that only have a magnitude greater than 4
    for i in theJSON["features"]:
        if i["properties"]["mag"] >= 4.0:
            print("%2.1f" % i["properties"]["mag"], i["properties"]["place"])
    # print only the events where at least 1 person reported feeling something
    print("Events that were felt:")
    for i in theJSON["features"]:
        feltReports = i["properties"]["felt"]
        if feltReports != None:
            if feltReports > 0:
                print("%2.1f" % i["properties"]["mag"], i["properties"]["place"], " reported " + str(feltReports) + " times")

# Open the URL and read the data
urlData = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"
webUrl = urllib.request.urlopen(urlData)
print(webUrl.getcode())
if webUrl.getcode() == 200:
    data = webUrl.read()
    data = data.decode("utf-8")  # in Python 3.x we need to explicitly decode the response to a string
    # print out our customized results
    printResults(data)
else:
    print("Received an error from server, cannot retrieve results " + str(webUrl.getcode()))
Not sure if you left this out on purpose, but this script isn't actually executing any code beyond the imports and function definition. Assuming you didn't leave it out on purpose, you would need the following at the end of your file.
if __name__ == '__main__':
    data = ""  # your data
    printResults(data)
The check on __name__ equaling "__main__" is just so your code only executes when the file is explicitly run. To always run your printResults(data) function when the file is accessed (like, say, if it's imported into another module) you could just call it at the bottom of your file like so:
data = "" # your data
printResults(data)
I had to restart the IDE after installing the module. I just realized this and tried it now with "Run as Admin"; strangely, it seems to work now. But I'm not sure if it was a temporary error, since even without the restart it was able to detect the module and its methods.
Your comments about having to restart your IDE make me think that PyCharm might not automatically detect newly installed Python packages. This SO answer seems to offer a solution.

How would I separate my Python File to multiple plugins?

So, first thing I want to say: I have been looking into modules and such, I just don't quite know how I would rewrite my code to fit them in.
Project: What I have is a Skype bot using the Skype4Py module. I have about 11 commands, and I've noticed the one script is getting a little large.
I'm trying to think about how to link one main.py file to a plugin folder, which contains each and every bot function in its own respective Python file. It sounds simple and all, except when it comes to how the function is called.
Here is just a basic look at my Skype bot, missing some of the larger functions.
import Skype4Py, random

class SkypeBot():
    def __init__(self):
        self.skype = Skype4Py.Skype()
        if self.skype.Client.IsRunning == False:
            self.skype.Client.Start()
        self.skype.Attach()
        self.results = ['Yes', 'No', 'Maybe', 'Never']

    def main(self):
        print ' Skype Bot currently running on user: %s' % self.skype.CurrentUserHandle
        print "\n\nCommands Called:\n"
        while True:
            self.skype.OnMessageStatus = self.RunFunction

    def RunFunction(self, Message, Status):
        if Status == 'SENT' or Status == 'RECEIVED':
            cmd = Message.Body.split(' ')[0]
            if cmd in self.functions.keys():
                self.context = Message
                self.caller = self.context.FromHandle
                self.functions[cmd](self)

    def ping(self):
        print " %s : Ping" % self.caller
        self.context.Chat.SendMessage('Pong')

    def say(self):
        try:
            response = self.context.Body.split(' ', 1)
            if response[1] == "-info":
                print " %s : say -info" % self.caller
                self.context.Chat.SendMessage("Resends the message entered. \n"
                                              "Usage: !say Hello. \n"
                                              "Example: Bot: Hello.")
            else:
                say = response[1]
                print " %s : Say [%s]" % (self.caller, say)
                self.context.Chat.SendMessage(say)
        except:
            self.context.Chat.SendMessage("Please use -info to properly use the !say command")

    def eightball(self):
        try:
            question = self.context.Body.split(' ', 1)
            if question[1] == "-info":
                print " %s : 8Ball -info" % self.caller
                self.context.Chat.SendMessage("Responds with an answer.\n"
                                              "Usage: !8ball 'Do I have swag?'\n"
                                              "Example: !8Ball Response: 'Yes'")
            else:
                random.shuffle(self.results)
                answer = self.results[3]
                print " %s : 8Ball [%s]" % (self.caller, question[1])
                self.context.Chat.SendMessage("!8Ball Response: %s" % answer)
        except:
            self.context.Chat.SendMessage("Please use -info to properly use the !8ball command")

    #FUNCTIONS LIST
    #********************
    functions = {
        "!ping": ping,
        "!say": say,
        "!8ball": eightball,
    }

if __name__ == "__main__":
    snayer = SkypeBot()
    snayer.main()
So basically, what I am wondering, how can I change
self.skype.OnMessageStatus = self.RunFunction
so that it'll run functions from another file?
For a program of this size it's not really necessary to put your command functions into separate files, but I guess it is good organization. And good practice for when you write a program that has thousands of lines of code. :)
One way to do this is to create a basic SkypeBot class without any command methods and then import the command methods from your plugins directory and add them to the class. It's easy enough to add new attributes to an existing class, and it doesn't matter if the new attributes are properties or methods, the syntax to add them is identical. (With a tiny bit more work it's even possible to add new attributes to an instance, so you can have multiple instances, each with their own individual set of commands. But I guess that's not necessary here, since a program that uses the SkypeBot class will normally only create a single instance).
So we can break your question into two parts:
1. How to add methods to an existing class.
2. How to import those methods from other source files.
As I said, 1) is easy. 2) is quite easy as well, but I've never done it before, so I had to do a little bit of research and testing, and I can't promise that what I've done is best practice, but it works. :)
I don't know much about Skype, and I don't have that Skype4Py module, and as you said, the code above is not the complete program, so I've written some fairly simple code to illustrate the process of adding plugin methods from separate files to an existing class.
The name of the main program is "plugin_demo.py". To keep things neat, it lives in its own directory, "plugintest/", which you should create somewhere in your Python path (eg where you normally keep your Python programs). This path must be specified in your PYTHONPATH environment variable.
"plugintest/" has the following structure:
plugintest/
    __init__.py
    plugin_demo.py
    plugins/
        __init__.py
        add.py
        multiply.py
The __init__.py files are used by Python's import machinery to let it know that a directory contains a Python package, see 6.4. Packages in the Python docs for further details.
Here are the contents of those files. Firstly, the files that go into "plugintest/" itself:
__init__.py
__all__ = ['plugin_demo', 'plugins']
from plugintest import *
plugin_demo.py
#! /usr/bin/env python

#A simple class that will get methods added later from plugins directory
class Test(object):
    def __init__(self, data):
        self.data = data

def add_plugins(cls):
    import plugins
    print "Adding plugin methods to %s class" % cls.__name__
    for name in plugins.__all__:
        print name
        plug = getattr(plugins, name)
        print plug
        method = getattr(plug, name)
        print method
        setattr(cls, name, method)
        print
    print "Done\n"

add_plugins(Test)

def main():
    #Now test it!
    t = Test([1, 2, 3]); print t.data
    t.multiply(10); print t.data
    t.add(5); print t.data

if __name__ == '__main__':
    main()
And now the contents of the "plugintest/plugins/" directory:
__init__.py
__all__ = ['add', 'multiply']
from plugintest.plugins import *
add.py
#A method for the Test class of plugin_demo.py
def add(self, m):
    self.data = [m + i for i in self.data]
multiply.py
#A method for the Test class of plugin_demo.py
def multiply(self, m):
    self.data = [m * i for i in self.data]
If you cd to the directory containing the "plugintest/" folder, you should be able to run it with
python plugintest/plugin_demo.py
and if you cd to "plugintest/" itself
python plugin_demo.py
Also, in the interpreter (or another Python program), you should be able to do
import plugintest
and then run the main() function of "plugin_demo.py" with
plugintest.plugin_demo.main()
The other usual variations of from ... import ... etc should also work as expected.
The function in "plugin_demo.py" that performs the magic of adding the imported methods to the Test class is add_plugins(). When it runs it prints out each method name, its module, and its function. This could be handy during development, but you'd probably comment out some of those print statements once the program's working properly.
I hope this helps, and if you have any questions please don't hesitate to ask.

I need a super-duper simple CGI Python photo upload

I've looked through tons of answers but the truth is, I only know super basic python and I really need help. I don't know the os module or anything like that and I can't use PHP (not that I know it anyway, but it's not permitted) and I need something so easy that I can understand it.
Basically, I need a CGI upload (I don't need the HTML form, I've got that much down) that will take the photo and save it. That's it. I don't need any fancy place for it to save, I just need the file to be properly uploaded from the form.
I've got various versions of this function and I can't get them working because I don't understand them so PLEASE HELP!!!
import cgi

def savefile(filename, photodoc):
    form = cgi.FieldStorage()
    name = form[filename]
    period = name.split(.)
    if period[1] == "jpeg" or period[1] == "jpg" or period[1] == "png":
        idk what to do
    else:
        make an error message
This cgi program will "take the photo and save it. That's it."
#!/usr/bin/python2.7
import cgi
field=cgi.FieldStorage()['fieldname']
open(field.filename, 'wb').write(field.value)
Among the things it doesn't do are error checking and security checking, and specifying in what directory the files should be saved.
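If you want something a little less bare, here is a sketch along the same lines (still Python 2 CGI; UPLOAD_DIR and the field name "fieldname" are assumptions, so adjust them to match your form) that adds a target directory, a basename check, and the extension test you were attempting:

#!/usr/bin/python2.7
import cgi
import os

UPLOAD_DIR = "/var/www/uploads"   # hypothetical target directory

print "Content-Type: text/html"
print

form = cgi.FieldStorage()
if "fieldname" not in form:
    print "No file was uploaded."
else:
    field = form["fieldname"]
    # basename() strips any path the browser sent, so the file can only land in UPLOAD_DIR
    name = os.path.basename(field.filename)
    ext = os.path.splitext(name)[1].lower()
    if ext in (".jpg", ".jpeg", ".png"):
        with open(os.path.join(UPLOAD_DIR, name), "wb") as out:
            out.write(field.value)
        print "Saved %s" % cgi.escape(name)
    else:
        print "Only .jpg, .jpeg or .png files are accepted."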
Duplicate question but here's what you need:
Depending on whether you're on Windows or Linux, first set standard input/output to binary mode:
import os

try:
    import msvcrt
    msvcrt.setmode(0, os.O_BINARY)
    msvcrt.setmode(1, os.O_BINARY)
except ImportError:
    pass
Then:
form = cgi.FieldStorage()
upload = form[filename]
name = upload.filename
period = name.split('.')  # You need the quotes around the period
if period[1] == 'jpeg' or period[1] == 'jpg' or period[1] == 'png':
    if upload.filename:
        name = os.path.basename(upload.filename)
        out = open(YOUR_FILEPATH_HERE + name, 'wb', 1000)
        message = "The file '" + name + "' was uploaded successfully"
        while True:
            packet = upload.file.read(1000)
            if not packet:
                break
            out.write(packet)
        out.close()
else:
    print 'Error'
Some sources:
How to use Python/CGI for file uploading
http://code.activestate.com/recipes/273844-minimal-http-upload-cgi/
