I had not succeeded in using the officially provided bulkloader option, so I wrote my own bulkloader script (actually a POST handler that incrementally loads entities from CSV into the datastore).
The solution worked as follows:
1. I would copy data from a csv file
2. Paste it into a text area in a form on the app
3. Post the form
4. The handler parses the incoming text for headers (column names)
5. Stores the rest of the lines in a list
6. Incrementally fetches 100 lines from the list and, for each line, creates and stores an entity (the entity type is resolved from the type specified in a select field on the form); see the rough sketch after this list
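In case it clarifies the flow, here is a rough sketch of that handler. The model class, the 'kind' and 'csvdata' form field names and the batch size of 100 are illustrative assumptions, not my exact code:

# Rough sketch of the upload handler described above (not the exact code).
from google.appengine.ext import db, webapp

class Voter(db.Expando):
    """Hypothetical model; dynamic properties take whatever the CSV provides."""

MODEL_MAP = {'voter': Voter}  # hypothetical kind -> model mapping

class BulkUploadHandler(webapp.RequestHandler):
    def post(self):
        kind = self.request.get('kind')      # entity type chosen in the select field
        text = self.request.get('csvdata')   # CSV text pasted into the text area
        lines = text.splitlines()
        headers = [h.strip() for h in lines[0].split(',')]

        model_cls = MODEL_MAP[kind]
        batch = []
        for line in lines[1:]:
            values = [v.strip() for v in line.split(',')]
            batch.append(model_cls(**dict(zip(headers, values))))
            if len(batch) == 100:            # write 100 entities at a time
                db.put(batch)
                batch = []
        if batch:
            db.put(batch)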
Now, this technique worked on my dev server for small inputs of up to 1,000 lines; beyond that it would show the following error:
Traceback (most recent call last):
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/ext/webapp/init.py", line 513, in call
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/myApps/ugvotes/ugvotes.py", line 241, in post
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/ext/db/init.py", line 893, in put
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/datastore.py", line 291, in Put
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/datastore.py", line 195, in _MakeSyncCall
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 499, in check_success
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/apiproxy_rpc.py", line 149, in _WaitImpl
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/datastore_file_stub.py", line 863, in MakeSyncCall
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/apiproxy_stub.py", line 80, in MakeSyncCall
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/datastore_file_stub.py", line 933, in _Dynamic_Put
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/datastore_file_stub.py", line 806, in __WriteDatastore
File "/host/_Hive/Lab/ACTIVE WORKS/UG VOTES/google_appengine/google/appengine/api/datastore_file_stub.py", line 836, in __WritePickled
IOError: [Errno 24] Too many open files: '/tmp/tmpOfgvm3'
At first I thought this was an error due to the limitations of the development server, but when I tried to do the same task from the production server, I got the following error:
Error: Server Error
The server encountered an error and could not complete your request.
If the problem persists, please report your problem and mention this error message and the query that caused it.
Does anyone know what could have gone wrong, and what I can do about it?
Thanks.
This is a known issue: Python dev SDK 1.3.8 datastore_file_stub.py appears to leak file handles.
There's an unofficial patch available.
Related
I'm trying to run a Python server using CherryPy for a website, but when I run it this error pops up.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/cherrypy/_cprequest.py", line 638, in respond
self._do_respond(path_info)
File "/usr/local/lib/python3.10/dist-packages/cherrypy/_cprequest.py", line 694, in _do_respond
self.hooks.run('before_handler')
File "/usr/local/lib/python3.10/dist-packages/cherrypy/_cprequest.py", line 95, in run
self.run_hooks(iter(sorted(self[point])))
File "/usr/local/lib/python3.10/dist-packages/cherrypy/_cprequest.py", line 117, in run_hooks
hook()
File "/usr/local/lib/python3.10/dist-packages/cherrypy/_cprequest.py", line 65, in __call__
return self.callback(**self.kwargs)
File "/usr/local/lib/python3.10/dist-packages/cherrypy/_cptools.py", line 280, in _lock_session
cherrypy.serving.session.acquire_lock()
File "/usr/local/lib/python3.10/dist-packages/cherrypy/lib/sessions.py", line 550, in acquire_lock
self.lock = zc.lockfile.LockFile(path)
File "/usr/local/lib/python3.10/dist-packages/zc/lockfile/__init__.py", line 117, in __init__
super(LockFile, self).__init__(path)
File "/usr/local/lib/python3.10/dist-packages/zc/lockfile/__init__.py", line 87, in __init__
fp = open(path, 'a+')
FileNotFoundError: [Errno 2] No such file or directory: '/var/www/html/cncsessions\\/session-73ab2ecbe9bd50153b4f20828fcc08bff6e9cd6e.lock'
It's my first time using this module and I don't know what's wrong.
I'm using Ubuntu 22, Python 3.10.6
Hard to say without seeing the exact code that this is calling.
Judging by the error, you are trying to use sessions.
The sessions are looking for /var/www/html/cncsessions to place the session files in, but it gives an error.
It looks like the path might be wrong. There's a double backslash and a forward slash at the end:
\\/
If you haven't given up on this or figured it out already, I would try changing that path to just
/var/www/html/cncsessions
Also be sure you do not store your session data in your web root; from that path it looks like you might be doing that. Anything in a web root will be served by the public web server, although there is little to no chance anyone would guess the file names.
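For example, a minimal file-session configuration with a placeholder path outside the web root could look like this (on CherryPy 18 the storage_class key replaced the older storage_type setting):

# Hedged example: file-backed sessions stored outside the web root.
# '/var/lib/myapp/sessions' is a placeholder; the directory must exist and be
# writable by the CherryPy process.
import cherrypy
from cherrypy.lib.sessions import FileSession

cherrypy.config.update({
    'tools.sessions.on': True,
    'tools.sessions.storage_class': FileSession,  # CherryPy 18+; older releases used storage_type = 'file'
    'tools.sessions.storage_path': '/var/lib/myapp/sessions',
})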
When I run dev_appserver.py with the watcher_ignore_re option, I get an error message that the regex is not JSON serializable.
Is this a bug in the development server? Am I using this command improperly? The command and call stack are printed below.
C:\Users\mes65\Documents\MyProject>"C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\dev_appserver.py" ^
--watcher_ignore_re="(.*\.git|.*\.idea|tmp\.py)" ^
"C:\Users\mes65\Documents\MyProject"
WARNING 2018-06-06 09:28:59,161 appinfo.py:1622] lxml version "2.3" is deprecated, use one of: "3.7.3"
INFO 2018-06-06 09:28:59,187 devappserver2.py:120] Skipping SDK update check.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\dev_appserver.py", line 96, in <module>
_run_file(__file__, globals())
File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\dev_appserver.py", line 90, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 454, in <module>
main()
File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 442, in main
dev_server.start(options)
File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 163, in start
bool(ssl_certificate_paths), options)
File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\metrics.py", line 166, in Start
self._cmd_args = json.dumps(vars(cmd_args)) if cmd_args else None
File "C:\Python27\lib\json\__init__.py", line 244, in dumps
return _default_encoder.encode(obj)
File "C:\Python27\lib\json\encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Python27\lib\json\encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "C:\Python27\lib\json\encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <_sre.SRE_Pattern object at 0x00000000063C2188> is not JSON serializable
It looks like it is an issue with the Google Analytics code built into dev_appserver2 (google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\devappserver2.py, on or around line 316). It wants to send all of your command-line options to Google Analytics. If you remove the analytics client ID by adding the command-line option --google_analytics_client_id= (note: '=' without any following value), the appserver won't call the Google Analytics code where it is trying to JSON-serialize an SRE object and failing; the adjusted command is shown below. However, since you are on Windows, I find that --watcher_ignore_re does not work anyway, even once you get past this issue.
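Applied to the command above, the workaround would look something like this:

"C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\dev_appserver.py" ^
--google_analytics_client_id= ^
--watcher_ignore_re="(.*\.git|.*\.idea|tmp\.py)" ^
"C:\Users\mes65\Documents\MyProject"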
Regarding the Windows limitation, there is a comment in file_watcher.py:
TODO: b/33178251 - Add watcher_ignore_re support for windows.
I also faced this usability problem on Windows and was really disappointed. I tried to find some workarounds but didn't find any appropriate way.
In the end, I decided to make my own implementation of watcher_ignore_re support for Windows. I put the required changes in my GitHub repo.
To describe them in a few words:
Add _watcher_ignore_re and _skip_files_re properties and their setters
Add the import statement from google.appengine.tools.devappserver2 import watcher_common and use it in the newly created _path_ignored
Filter additional_changes before adding them to the watcher's changed files
To resolve the mentioned problem with the non-serializable regex attribute, we should drop it from the serialized dictionary. The fix for this is added as a subsequent commit and can be checked at metrics.py:185-193.
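Roughly, the idea of that fix is something like this sketch (not the actual commit):

# Sketch of the idea: drop values json.dumps cannot handle, such as compiled
# regex patterns, before serializing the command-line options.
import json

def serializable_options(cmd_args):
    options = {}
    for key, value in vars(cmd_args).items():
        try:
            json.dumps(value)
        except TypeError:
            continue  # skip compiled regexes and other non-serializable objects
        options[key] = value
    return options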
I hope this helps others enjoy developing on GAE on Windows :)
Hi friends, I am trying to implement the Mindbody API using Python, but I am getting an error that has blocked my work. The error is shown below.
Traceback (most recent call last):
File "Appointment.py", line 6, in <module>
class AppointmentService():
File "Appointment.py", line 10, in AppointmentService
service = Client(urlService)
File "C:\Python27\lib\site-packages\suds\client.py", line 112, in __init__
self.wsdl = reader.open(url)
File "C:\Python27\lib\site-packages\suds\reader.py", line 152, in open
d = self.fn(url, self.options)
File "C:\Python27\lib\site-packages\suds\wsdl.py", line 136, in __init__
d = reader.open(url)
File "C:\Python27\lib\site-packages\suds\reader.py", line 79, in open
d = self.download(url)
File "C:\Python27\lib\site-packages\suds\reader.py", line 101, in download
return sax.parse(string=content)
File "C:\Python27\lib\site-packages\suds\sax\parser.py", line 136, in parse
sax.parse(source)
File "C:\Python27\lib\xml\sax\expatreader.py", line 110, in parse
xmlreader.IncrementalParser.parse(self, source)
File "C:\Python27\lib\xml\sax\xmlreader.py", line 123, in parse
self.feed(buffer)
File "C:\Python27\lib\xml\sax\expatreader.py", line 217, in feed
self._err_handler.fatalError(exc)
File "C:\Python27\lib\xml\sax\handler.py", line 38, in fatalError
raise exception
xml.sax._exceptions.SAXParseException: <unknown>:9:673: not well-formed (invalid token)
I am unable to resolve this problem and don't know the reason behind the error.
It seems you are implementing a third-party API, so you should take care with the input that is sent to the server. Mainly, this type of error occurs due to improper formatting of the input, so the server was not able to understand your request. Please check that you are hitting the right service call with the required parameters in the expected format. Do refer to the MindBody API documentation.
A good example and explanation of MindBody Python integration can be found here: http://www.mindbodydeveloper.com/mindbody-integration-python-example/
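One quick check (a sketch with a placeholder WSDL URL, not your exact urlService) is to fetch the WSDL yourself before suds parses it; if the server returns an HTML error page instead of XML, suds fails with exactly this kind of "not well-formed (invalid token)" SAXParseException:

# Hedged troubleshooting sketch (Python 2 / suds): look at the raw WSDL response
# before suds parses it. The URL is a placeholder; use your own urlService value.
import urllib2
from suds.client import Client

urlService = 'https://example.com/AppointmentService.asmx?wsdl'  # placeholder
raw = urllib2.urlopen(urlService).read()
print raw[:500]  # an HTML error page here explains "not well-formed (invalid token)"

service = Client(urlService)  # create the client only once the WSDL looks like XML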
When I compare two databases, mysqluc gets stuck halfway and shows the following error:
Traceback (most recent call last):
File "G:\ade\build\sb_0-14553893-1424966082.93\Python-2.7.6-windows-x86-32bit\lib\site-packages\cx_Freeze\initscripts\Console.py", line 27, in <module>
File "scripts\mysqldiff.py", line 245, in <module>
File ".\mysql\utilities\command\diff.py", line 149, in database_diff
File ".\mysql\utilities\command\diff.py", line 92, in object_diff
File ".\mysql\utilities\common\dbcompare.py", line 646, in diff_objects
File ".\mysql\utilities\common\dbcompare.py", line 466, in _check_tables_structure
File ".\mysql\utilities\common\database.py", line 1206, in get_object_definition
File ".\mysql\utilities\common\server.py", line 1263, in exec_query
File ".\mysql\connector\cursor.py", line 339, in close
mysql.connector.errors.InternalError: Unread result found.
I have tried with both mysqldiff and mysqldbcompare command.
mysqldiff --server1=username:password@hostname:3307 --server2=username:password@localhost:3306 DB1:DB1 --force --difftype=sql --changes-for=server2
mysqldbcompare --server1=username:password@hostname:3307 --server2=username:password@localhost:3306 DB1:DB1 --run-all-tests --difftype=sql --changes-for=server2
It gets stuck with both commands.
I have tried comparing on both Windows 7 and Windows 8.1.
I found the problem. The database is huge, and some columns in the tables use the "utf8" character set with the "utf8_general_ci" collation while others use "latin1" with "latin1_swedish_ci", so sometimes it gets stuck (I don't know why). I changed the character set of every column to "latin1", and now it no longer gets stuck.
Execute this query to check the character set of columns.
SELECT table_name,column_name,character_set_name,data_type FROM information_schema.`COLUMNS`
WHERE table_schema = "DbName"
For character set conversion you can get help from this link:
http://dev.mysql.com/doc/refman/5.7/en/charset-conversion.html
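If it is more convenient, the same check can be scripted with mysql.connector (the connection parameters and "DbName" below are placeholders):

# Hedged sketch: run the same information_schema query with mysql.connector.
import mysql.connector

conn = mysql.connector.connect(user='username', password='password',
                               host='localhost', port=3306)
cursor = conn.cursor()
cursor.execute(
    "SELECT table_name, column_name, character_set_name, data_type "
    "FROM information_schema.COLUMNS WHERE table_schema = %s",
    ('DbName',))
for table_name, column_name, charset, data_type in cursor:
    print(table_name, column_name, charset, data_type)
cursor.close()
conn.close()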
I'm getting an occasional error when trying to fetch a list of worksheets from gdata. This does not happen for all spreadsheets, but will consistently happen to the same spreadsheet for a period of several days to weeks. I suspected permissions, but was unable to find any special permissions for the spreadsheets that cause the error. I'm using OAuth2, gdata 2.0.18, and Python 2.6.8.
Traceback (most recent call last):
File "/mnt/shared_from_host/snake/base/fetchers/google_spreadsheet/common.py", line 176, in get_worksheet_list
feed = client.get_worksheets(spreadsheet_id)
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/gdata/spreadsheets/client.py", line 108, in get_worksheets
**kwargs)
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/gdata/client.py", line 640, in get_feed
**kwargs)
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/gdata/client.py", line 278, in request
version=get_xml_version(self.api_version))
File "/home/ubuntu/.virtualenvs/snakeenv/lib/python2.6/site-packages/atom/core.py", line 520, in parse
tree = ElementTree.fromstring(xml_string)
File "<string>", line 86, in XML
SyntaxError: no element found: line 1, column 0
This seems to be from the request getting an empty string as the response.
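A blunt retry around the failing call only hides that symptom (a sketch, using just the get_worksheets call from the traceback above), so I would still like to understand the cause:

# Hedged sketch: retry get_worksheets() when the feed comes back empty and the
# atom parser raises "no element found". This only papers over the symptom.
import time

def get_worksheets_with_retry(client, spreadsheet_id, attempts=3, delay=2):
    for attempt in range(attempts):
        try:
            return client.get_worksheets(spreadsheet_id)
        except SyntaxError:  # empty response body fails to parse
            if attempt == attempts - 1:
                raise
            time.sleep(delay)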
Does anybody have any idea why this might not work, or any other troubleshooting ideas? Thanks.