I am running into the following MemoryError when running the script below. Any assistance would be greatly appreciated. The layer that I am querying contains 235,896 features, which I suspect is the problem.
Script
import json

from arcgis import ArcGIS

# Connect to the DeKalb County land-use map service
service = ArcGIS("http://mapping.dekalbcountyga.gov/arcgis/rest/services/LandUse/MapServer")
# Query layer 0 for the full feature set rather than just the count
query = service.get(0, count_only=False)
json_query = json.dumps(query)  # dumps (returns a string), not dump (writes to a file object)
with open("dekalb_parcels.geojson", "w") as f:
    f.write(json_query)
Error
Traceback (most recent call last):
File "G:/Python/Scripts/dekalb_parcel_query.py", line 8, in <module>
query = service.get(0, count_only=False)
File "C:\Python27\lib\site-packages\arcgis\arcgis.py", line 146, in get
jsobj = self.get_json(layer, where, fields, count_only, srid)
File "C:\Python27\lib\site-packages\arcgis\arcgis.py", line 90, in get_json
return response.json(strict=False)
File "C:\Python27\lib\site-packages\requests\models.py", line 802, in json
return json.loads(self.text, **kwargs)
File "C:\Python27\lib\site-packages\requests\models.py", line 769, in text
content = str(self.content, encoding, errors='replace')
MemoryError
I was able to rectify this issue by switching to 64-bit Python. The 32-bit process was crashing once it reached 2 GB of RAM usage; the 64-bit interpreter does not have that ceiling.
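For anyone unsure which build they are running, a quick check using only the standard library tells you whether the interpreter is 32-bit or 64-bit:
import platform
import struct

# Pointer size in bits: 32 on a 32-bit build, 64 on a 64-bit build
print(struct.calcsize("P") * 8)
# Reports the interpreter's build architecture, e.g. ('64bit', 'WindowsPE')
print(platform.architecture())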
Related
I'm using the pgdumplib library. Unfortunately, I get an error when I try to open the file. The file is in the same folder as the Python script. I'm using Python 3.7.
Code:
import pgdumplib
dump = pgdumplib.load('test.dump')
print('Database: {}'.format(dump.toc.dbname))
print('Archive Timestamp: {}'.format(dump.toc.timestamp))
print('Server Version: {}'.format(dump.toc.server_version))
print('Dump Version: {}'.format(dump.toc.dump_version))
for line in dump.table_data('public', 'pgbench_accounts'):
    print(line)
Error:
Traceback (most recent call last):
File "C:/Users/user/data/test.py", line 3, in <module>
dump = pgdumplib.load('test.dump')
File "C:\Users\user\venv\data\lib\site-packages\pgdumplib\__init__.py", line 24, in load
return dump.Dump(converter=converter).load(filepath)
File "C:\Users\user\venv\data\lib\site-packages\pgdumplib\dump.py", line 228, in load
raise ValueError('Path {!r} does not exist'.format(path))
ValueError: Path 'test.dump' does not exist
If you are running your code from C:/Users/user/700Joach/project/ and you have the following line in your script:
dump = pgdumplib.load('test.dump')
Then, python would look for the following path to open test.dump:
C:/Users/user/700Joach/project/test.dump
Namely, this part: load('test.dump') internally resolves 'test.dump' as a path relative to the current working directory, not to the script's location.
You can do a few things to resolve the issue. Either move test.dump to the directory from which you are executing your code, or provide an absolute path to your test.dump as follows:
dump = pgdumplib.load('C:/Users/user/700Joach/project/test.dump')
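A third option is to build the path from the script's own location, so the load works no matter which directory you launch from. A minimal sketch using only the standard library (__file__ is the path of the running script):
import pathlib

import pgdumplib

# Resolve test.dump relative to this script, independent of the
# current working directory
dump_path = pathlib.Path(__file__).resolve().parent / 'test.dump'
dump = pgdumplib.load(str(dump_path))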
I have an FMU created in GT-Suite and am trying to work with it in Python.
I have followed the JModelica tutorials:
from pyfmi import load_fmu
model = load_fmu('myFMU.fmu')
res = model.simulate(final_time=10)
The FMU loads, but when I run the model.simulate step it throws an error:
Traceback (most recent call last):
File "<ipython-input-3-4812da4bb52b>", line 1, in <module>
res = model.simulate(final_time=10)
File "src\pyfmi\fmi.pyx", line 6981, in pyfmi.fmi.FMUModelCS2.simulate
File "src\pyfmi\fmi.pyx", line 304, in pyfmi.fmi.ModelBase._exec_simulate_algorithm
File "src\pyfmi\fmi.pyx", line 298, in pyfmi.fmi.ModelBase._exec_simulate_algorithm
File "C:\Users\chinn\Anaconda3\envs\test_env\lib\site-packages\pyfmi\fmi_algorithm_drivers.py", line 761, in __init__
self.model.setup_experiment(start_time=start_time, stop_time_defined=self.options["stop_time_defined"], stop_time=final_time)
File "src\pyfmi\fmi.pyx", line 4292, in pyfmi.fmi.FMUModelBase2.setup_experiment
FMUException: Failed to setup the experiment.
I have tried running it in multiple environments on my PC but get the same error. I have Googled a lot but couldn't find anything. Can someone help me resolve this issue?
The FMU was probably not exported with the correct license setting.
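If the license setting turns out not to be the cause, it can help to reload the FMU with verbose FMI logging and read what the FMU reports before setup_experiment fails. A sketch (log_level=7 is PyFMI's most verbose level; get_log() is assumed to be available in your PyFMI version):
from pyfmi import load_fmu

# Ask the FMU for its most verbose diagnostics while loading
model = load_fmu('myFMU.fmu', log_level=7)
try:
    res = model.simulate(final_time=10)
except Exception:
    # Print whatever the FMU logged before the failure
    for line in model.get_log():
        print(line)
    raise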
I have a function that converts a .docx file to HTML, and a large .docx file to convert.
The problem is that this function is part of a bigger program, and the converted HTML is parsed afterwards, so I cannot use another converter without impacting the rest of the code (which is not wanted). I am running Python 2.7.13 on 32-bit, but changing to 64-bit is also not desired.
This is the function:
import logging

import ooxml
from ooxml import serialize

def trasnformDocxtoHtml(inputFile, outputFile):
    logging.basicConfig(filename='ooxml.log', level=logging.INFO)
    # Parse the .docx into an ooxml document tree
    dfile = ooxml.read_from_file(inputFile)
    with open(outputFile, 'w') as htmlFile:
        htmlFile.write(serialize.serialize(dfile.document))
and here's the error:
>>> import library
>>> library.trasnformDocxtoHtml(r'large_file.docx', 'output.html')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "library.py", line 9, in trasnformDocxtoHtml
dfile = ooxml.read_from_file(inputFile)
File "C:\Python27\lib\site-packages\ooxml\__init__.py", line 52, in read_from_file
dfile.parse()
File "C:\Python27\lib\site-packages\ooxml\docxfile.py", line 46, in parse
self._doc = parse_from_file(self)
File "C:\Python27\lib\site-packages\ooxml\parse.py", line 655, in parse_from_file
document = parse_document(doc_content)
File "C:\Python27\lib\site-packages\ooxml\parse.py", line 463, in parse_document
document.elements.append(parse_table(document, elem))
File "C:\Python27\lib\site-packages\ooxml\parse.py", line 436, in parse_table
for p in tc.xpath('./w:p', namespaces=NAMESPACES):
File "src\lxml\etree.pyx", line 1583, in lxml.etree._Element.xpath
MemoryError
no mem for new parser
MemoryError
Could I somehow increase the memory available to the parser in Python, or fix the function without impacting the HTML output format?
I am currently trying to open a large file on my 64-bit Mac (using 64-bit Miniconda, as this is the only version for Mac). I use the framework Sumatra, a library to track scientific computing work and possibly reproduce it. I get the following error. I was able to debug a little and found out that it is mainly due to X_train.npy being too big (3.6 GB when not unpacked).
Traceback (most recent call last):
File "/Users/davidal/miniconda3/envs/ml_project/bin/smt", line 31, in <module>
main(sys.argv[2:])
File "/Users/davidal/miniconda3/envs/ml_project/lib/python3.5/site-packages/sumatra/commands.py", line 372, in run
project.allow_command_line_parameters)
File "/Users/davidal/miniconda3/envs/ml_project/lib/python3.5/site-packages/sumatra/commands.py", line 76, in parse_arguments
parameters = build_parameters(arg)
File "/Users/davidal/miniconda3/envs/ml_project/lib/python3.5/site-packages/sumatra/parameters.py", line 586, in build_parameters
parameters = parameter_set_class(filename)
File "/Users/davidal/miniconda3/envs/ml_project/lib/python3.5/site-packages/parameters/__init__.py", line 387, in __init__
pstr = f.read()
OSError: [Errno 22] Invalid argument
Any experience resolving such issues? Where should I look further? Any ideas?
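One possible lead: on macOS, CPython has long had trouble with a single read() call of more than about 2 GB, which fails with exactly this OSError: [Errno 22] Invalid argument. The failing f.read() here is inside the parameters library, so the sketch below only demonstrates the workaround for the underlying limitation (the chunk size is arbitrary):
# Read a large file in chunks instead of one multi-GB read(),
# which macOS can reject with [Errno 22] Invalid argument
chunks = []
with open('X_train.npy', 'rb') as f:
    while True:
        chunk = f.read(512 * 1024 * 1024)  # 512 MiB at a time
        if not chunk:
            break
        chunks.append(chunk)
data = b''.join(chunks)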
I'm getting an occasional error when trying to fetch a list of worksheets from gdata. This does not happen for all spreadsheets, but will consistently happen to the same spreadsheet for a period of several days to weeks. I suspected permissions, but was unable to find any special permissions for the spreadsheets that cause the error. I'm using OAuth2, gdata 2.0.18, and Python 2.6.8.
Traceback (most recent call last):
File "/mnt/shared_from_host/snake/base/fetchers/google_spreadsheet/common.py", line 176, in get_worksheet_list
feed = client.get_worksheets(spreadsheet_id)
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/gdata/spreadsheets/client.py", line 108, in get_worksheets
**kwargs)
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/gdata/client.py", line 640, in get_feed
**kwargs)
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/gdata/client.py", line 278, in request
version=get_xml_version(self.api_version))
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/atom/core.py", line 520, in parse
tree = ElementTree.fromstring(xml_string)
File "<string>", line 86, in XML
SyntaxError: no element found: line 1, column 0
This seems to be from the request getting an empty string as the response.
Does anybody have any idea why this might not work, or any troubleshooting ideas? Thanks.
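In case the empty responses are transient on Google's side, one stopgap is to retry the call when the parser chokes on an empty body. A sketch, reusing the client and spreadsheet_id from the traceback above (the retry count and delay are arbitrary):
import time

def get_worksheets_with_retry(client, spreadsheet_id, attempts=3, delay=5):
    # An empty response body makes ElementTree raise
    # SyntaxError: no element found
    for attempt in range(attempts):
        try:
            return client.get_worksheets(spreadsheet_id)
        except SyntaxError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)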