Coincidentally, I ran the pip search django command and got a timeout error, even when specifying a high timeout value.
Below are the logs:
D:\PERFILES\rmaceissoft\virtualenvs\fancy_budget\Scripts>pip search django --timeout=300
Exception:
Traceback (most recent call last):
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\site-packages\pip-1.1-py2.7.egg\pip\basecommand.py", line 104, in main
status = self.run(options, args)
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\site-packages\pip-1.1-py2.7.egg\pip\commands\search.py", line 34, in run
pypi_hits = self.search(query, index_url)
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\site-packages\pip-1.1-py2.7.egg\pip\commands\search.py", line 48, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "C:\Python27\Lib\xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "C:\Python27\Lib\xmlrpclib.py", line 1575, in __request
verbose=self.__verbose
File "C:\Python27\Lib\xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "C:\Python27\Lib\xmlrpclib.py", line 1297, in single_request
return self.parse_response(response)
File "C:\Python27\Lib\xmlrpclib.py", line 1462, in parse_response
data = stream.read(1024)
File "C:\Python27\Lib\httplib.py", line 541, in read
return self._read_chunked(amt)
File "C:\Python27\Lib\httplib.py", line 574, in _read_chunked
line = self.fp.readline(_MAXLINE + 1)
File "C:\Python27\Lib\socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
timeout: timed out
Storing complete log in C:\Users\reiner\AppData\Roaming\pip\pip.log
However, another search command finishes without problems:
pip search django-registration
Is this a bug in pip due to the large number of package names that contain "django"?
Note: internet connection speed = 2 Mbit/s
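For context on what "timed out" means here: the --timeout value applies per socket operation, so it bounds each individual recv(), not the whole request. The stalled read at the bottom of the traceback can be reproduced locally with a small self-contained sketch (no PyPI involved):

```python
import socket

# A recv() that gets no data within the timeout window raises socket.timeout,
# which is the "timeout: timed out" at the bottom of the log above.
a, b = socket.socketpair()   # a connected pair; the peer (b) stays silent
a.settimeout(0.5)            # analogous to pip's --timeout value
try:
    a.recv(1024)             # no data arrives within 0.5 s...
    print("got data")
except socket.timeout:
    print("timed out")       # ...so the read times out
finally:
    a.close()
    b.close()
```

So a large, slowly streamed result set (like everything matching "django") can still fail even with a generous --timeout, because any single stalled read trips it.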
The --timeout option doesn't seem to work properly.
I can install django properly by using either:
pip --default-timeout=60 install django
or
export PIP_DEFAULT_TIMEOUT=60
pip install django
Note: using pip version 1.2.1 on RHEL 6.3
Source: DjangoDay2012-Brescia.pdf, page 11
PyPI is probably overloaded. Just enable mirror fallback and caching in pip, and maybe tune the timeout a bit. Add these in ~/.pip/pip.conf:
[global]
default-timeout = 60
download-cache = ~/.pip/cache
[install]
use-mirrors = true
The default timeout set by pip is too short. You should really set the environment variable PIP_DEFAULT_TIMEOUT to at least 60 (1 minute).
Source: http://www.pip-installer.org/en/latest/configuration.html
tl;dr: An app that had been working fine is suddenly throwing a "Bad file descriptor" error with no other changes; I need advice for how to evaluate this.
I inherited an app that had been untouched for years, after the server crashed and I needed to move it to another machine. It's built with Flask, and uses Peewee to talk to a Postgres database over psycopg2. It has a bunch of other stuff--an Elasticsearch engine for searching, a lot of heavy JS on the front end--but that doesn't seem to be the problem here. The code is moderately complex, and I am not very knowledgeable about all of its pieces.
It took me a while to get it set up using the sketchy deployment instructions that had been left behind, but eventually I got it running, and was able to get a test version running on a clean VM and then deploy it on an actual server, using gunicorn and nginx. It's been working fine in production for a week. I'm using Debian Buster on all machines, and the most recent versions of all software.
I then decided to do some basic code cleanup, and ran the entire app through a linter, before looking at some other changes to make, that the end user had requested. Unfortunately, after this, the app consistently fails at the same point with a "Bad file descriptor" error. This is in a pre-run section, which parses a large XML file and saves the info to the database and to Elasticsearch; the app receives an XML upload, forks a few processes, and runs the parse/index process in the background.
I am subsequently unable to get past this error by any means. I have launched a clean VM and installed everything from scratch; I've reverted the git repo to before I linted the code. Same problem. I don't see how it can be a code issue, as it's now at the same point it was when I started. But I'm at a loss for what to do, and terrified that the production machine will fail.
The errors I get (trimming the first few lines that refer to places in the app itself) are:
[2021-03-14 14:40:11.699837] self.execute()
[2021-03-14 14:40:11.699878] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 1906, in inner
[2021-03-14 14:40:11.699907] return method(self, database, *args, **kwargs)
[2021-03-14 14:40:11.699946] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 1977, in execute
[2021-03-14 14:40:11.699976] return self._execute(database)
[2021-03-14 14:40:11.700004] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 2149, in _execute
[2021-03-14 14:40:11.700032] cursor = database.execute(self)
[2021-03-14 14:40:11.700060] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 3156, in execute
[2021-03-14 14:40:11.700088] return self.execute_sql(sql, params, commit=commit)
[2021-03-14 14:40:11.700115] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 3150, in execute_sql
[2021-03-14 14:40:11.700143] self.commit()
[2021-03-14 14:40:11.700171] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 2916, in __exit__
[2021-03-14 14:40:11.700198] reraise(new_type, new_type(exc_value, *exc_args), traceback)
[2021-03-14 14:40:11.700226] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 190, in reraise
[2021-03-14 14:40:11.700254] raise value.with_traceback(tb)
[2021-03-14 14:40:11.700282] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 3143, in execute_sql
[2021-03-14 14:40:11.700309] cursor.execute(sql, params or ())
[2021-03-14 14:40:11.700339] OperationalError('SSL SYSCALL error: Bad file descriptor\n')
127.0.0.1 - - [14/Mar/2021 10:40:11] "POST /manage/versions/upload HTTP/1.1" 500 -
Error on request:
Traceback (most recent call last):
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 323, in run_wsgi
execute(self.server.app)
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 315, in execute
write(data)
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 273, in write
self.send_response(code, msg)
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 388, in send_response
self.wfile.write(hdr.encode("ascii"))
File "/usr/lib/python3.7/socketserver.py", line 799, in write
self._sock.sendall(b)
OSError: [Errno 9] Bad file descriptor
Exception in thread Thread-22:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.7/socketserver.py", line 654, in process_request_thread
self.shutdown_request(request)
File "/usr/lib/python3.7/socketserver.py", line 509, in shutdown_request
self.close_request(request)
File "/usr/lib/python3.7/socketserver.py", line 513, in close_request
request.close()
File "/usr/lib/python3.7/socket.py", line 420, in close
self._real_close()
File "/usr/lib/python3.7/socket.py", line 414, in _real_close
_ss.close(self)
OSError: [Errno 9] Bad file descriptor
I note that the final section ("Exception in thread Thread-22") is showing the system Python, rather than my virtual environment; I don't know if that's relevant, or if that's just what's running some overall process. I didn't get to this point doing anything different, though--the app is running in the virtual environment.
I'd be very grateful for any thoughts here--I'm obviously hoping it's some kind of stupid permission error or something, as I can't easily go into the code because of its complexity.
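One avenue worth evaluating, given the fork-based background parser described above: psycopg2 connections are not fork-safe, so a connection opened before os.fork() is shared between parent and child, and traffic on the shared socket from both sides can surface as exactly this kind of "Bad file descriptor" / SSL SYSCALL error. The sketch below shows the usual close-and-reopen pattern; the Connection class is a stand-in for the real Peewee/psycopg2 objects, and all names here are hypothetical:

```python
import os

class Connection:
    """Stand-in for a psycopg2/Peewee database connection."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def run_in_background(work, inherited_conn):
    pid = os.fork()
    if pid == 0:                   # child process
        inherited_conn.close()     # never reuse the parent's connection...
        fresh_conn = Connection()  # ...open one owned by this child
        work(fresh_conn)
        os._exit(0)
    return pid                     # parent returns to the web request

if __name__ == "__main__":
    conn = Connection()
    pid = run_in_background(lambda c: None, conn)
    os.waitpid(pid, 0)
    # The parent's connection object is untouched by the child's close()
    print("parent connection open:", not conn.closed)
```

With a real psycopg2 connection, a close() in either process tears down shared SSL socket state, which is why each forked worker should establish its own connection before touching the database.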
I have followed a tutorial and installed Odoo + Postgres.
When I try to run the ./odoo-bin command in my Linux terminal, I get this error:
2019-09-15 08:48:30,765 5126 ERROR test werkzeug: Error on request:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/werkzeug/serving.py", line 205, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python3.6/dist-packages/werkzeug/serving.py", line 193, in execute
application_iter = app(environ, start_response)
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/service/server.py", line 409, in app
return self.app(e, s)
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/service/wsgi_server.py", line 128, in application
return application_unproxied(environ, start_response)
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/service/wsgi_server.py", line 117, in application_unproxied
result = odoo.http.root(environ, start_response)
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/http.py", line 1320, in __call__
return self.dispatch(environ, start_response)
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/http.py", line 1293, in __call__
return self.app(environ, start_wrapped)
File "/usr/local/lib/python3.6/dist-packages/werkzeug/wsgi.py", line 599, in __call__
return self.app(environ, start_response)
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/http.py", line 1473, in dispatch
ir_http = request.registry['ir.http']
File "/home/blink22/Desktop/odoo-nada/odoo/odoo/modules/registry.py", line 176, in __getitem__
return self.models[model_name]
KeyError: 'ir.http' - - -
2019-09-15 08:48:54,130 5126 ERROR test odoo.sql_db: bad query: b"SELECT latest_version
FROM ir_module_module WHERE name='base'"
ERROR: relation "ir_module_module" does not exist
LINE 1: SELECT latest_version FROM ir_module_module WHERE name='base...
^
Is the error related to which DB user I am using, or is it something else?
Here are the bash commands I used to install it:
https://github.com/mah007/OdooScript/blob/master/odoo_dev.sh
Your question is not very clear, but I can assume this issue occurred because you didn't initialize your database before running the server.
You first need to init your DB (and the logs show that its name is test):
$ ./odoo-bin -i base -d test
Then you can start the Odoo server as usual:
$ ./odoo-bin
If this is a development environment, then start fresh with a new database.
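The relation "ir_module_module" does not exist error in the log is the database reporting that the Odoo schema was never created, which is exactly what -i base fixes. The same failure mode can be reproduced with sqlite3 standing in for Postgres:

```python
import sqlite3

# An uninitialized database has no Odoo tables, so the very first query
# Odoo runs ("SELECT latest_version FROM ir_module_module ...") fails.
conn = sqlite3.connect(":memory:")
try:
    conn.execute("SELECT latest_version FROM ir_module_module WHERE name='base'")
except sqlite3.OperationalError as exc:
    print(exc)   # no such table: ir_module_module
finally:
    conn.close()
```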
I had a similar issue while running Odoo v13 on Ubuntu 20. I had missed including some addons under the addons path, because of which the "base" module was not found.
I was able to resolve my issue by including the base module folder in the addons path and running the server with "-i base -d test" to initialize my database, as suggested by Ahmed Magdy.
I am trying to scrape pages that contain an underscore in the subdomain, e.g. https://taxi-3-extreme-rush_1.en.softonic.com
I checked the specifications and saw that a subdomain can contain an underscore.
I also tried using link.encode('idna'), but that doesn't work either.
And I get this error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1297, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/lib64/python2.7/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/lib64/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
File "/usr/lib64/python2.7/site-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
result = f(*args, **kw)
File "/usr/lib64/python2.7/site-packages/scrapy/core/downloader/handlers/__init__.py", line 65, in download_request
return handler.download_request(request, spider)
File "/usr/lib64/python2.7/site-packages/scrapy/core/downloader/handlers/http11.py", line 60, in download_request
return agent.download_request(request)
File "/usr/lib64/python2.7/site-packages/scrapy/core/downloader/handlers/http11.py", line 285, in download_request
method, to_bytes(url, encoding='ascii'), headers, bodyproducer)
File "/usr/lib64/python2.7/site-packages/twisted/web/client.py", line 1596, in request
endpoint = self._getEndpoint(parsedURI)
File "/usr/lib64/python2.7/site-packages/twisted/web/client.py", line 1580, in _getEndpoint
return self._endpointFactory.endpointForURI(uri)
File "/usr/lib64/python2.7/site-packages/twisted/web/client.py", line 1456, in endpointForURI
uri.port)
File "/usr/lib64/python2.7/site-packages/scrapy/core/downloader/contextfactory.py", line 59, in creatorForNetloc
return ScrapyClientTLSOptions(hostname.decode("ascii"), self.getContext())
File "/usr/lib64/python2.7/site-packages/twisted/internet/_sslverify.py", line 1201, in __init__
self._hostnameBytes = _idnaBytes(hostname)
File "/usr/lib64/python2.7/site-packages/twisted/internet/_sslverify.py", line 87, in _idnaBytes
return idna.encode(text)
File "/usr/lib/python2.7/site-packages/idna/core.py", line 355, in encode
result.append(alabel(label))
File "/usr/lib/python2.7/site-packages/idna/core.py", line 276, in alabel
check_label(label)
File "/usr/lib/python2.7/site-packages/idna/core.py", line 253, in check_label
raise InvalidCodepoint('Codepoint {0} at position {1} of {2} not allowed'.format(_unot(cp_value), pos+1, repr(label)))
InvalidCodepoint: Codepoint U+005F at position 20 of u'taxi-3-extreme-rush_1' not allowed
A workaround:
import idna

# idna stores PVALID codepoints as packed ranges; appending this entry
# makes U+005F (underscore) pass check_label().
idna.idnadata.codepoint_classes['PVALID'] = tuple(
    sorted(list(idna.idnadata.codepoint_classes['PVALID']) + [0x5f0000005f])
)
Seems like it's an issue with Twisted.
There's an issue with a solution regarding it here:
Looking at Twisted's code, it'll use the idna library if available. If I pip uninstall idna and issue the same request again, it is successful.
idna gets installed with either pip install twisted[tls] or pip install treq.
I've tried uninstalling idna via pip uninstall idna and the request, indeed, goes through.
I tried with Selenium and it can parse the page correctly. I can verify this because if I disable the spider's middleware (where my Selenium code is), the same error is thrown:
raise InvalidCodepoint('Codepoint {0} at position {1} of {2} not allowed'.format(_unot(cp_value), pos+1, repr(label)))
idna.core.InvalidCodepoint: Codepoint U+005F at position 3 of 'xyx_abc' not allowed
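For what it's worth, the stdlib "idna" codec (IDNA 2003) passes plain-ASCII labels through unchanged, underscore included, which is why calling link.encode('idna') yourself doesn't help: Twisted hands the hostname to the third-party idna package (IDNA 2008) when it is installed, and that library rejects U+005F. A quick check using only the stdlib codec:

```python
# The stdlib codec accepts the underscore label that the third-party
# idna package rejects with InvalidCodepoint.
host = 'taxi-3-extreme-rush_1.en.softonic.com'
print(host.encode('idna'))   # b'taxi-3-extreme-rush_1.en.softonic.com'
```

This matches the observation above that uninstalling idna makes the request go through: without the stricter library, the lenient stdlib path is used.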
I am installing the CentOS 7 minimal version on a server using a DVD disk with the ISO image. After choosing the language option, it gives me the following error:
anaconda 21.48.22.93-1 exception report
Traceback (most recent call first):
File "/usr/lib/python2.7/site-packages/block/device.py", line 719, in get_map
if compare_tables(map.table, self.rs.dmTable):
File "/usr/lib64/python2.7/site-packages/block/device.py", line 838, in active
self.map.dev.mknod(self.prefix+self.name)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1768, in handleUdevDMRaidMemberFormat
rs.activate(mknod=True)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1979, in handleUdevDeviceFormat
self.handleUdevDMRaidMemberFormat(info, device)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1285, in addUdevDevice
self.handleUdevDeviceFormat(info, device)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2295, in _populate
self.addUdevDevice(dev)
File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2228, in populate
self._populate()
File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 489, in reset
self.devicetree.populate(cleanupOnly=cleanupOnly)
File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 184, in storageInitialize
storage.reset()
File "/usr/lib64/python2.7/threading.py", line 764, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/site-packages/anaconda/threads.py", line 227, in run
threading.Thread.run(self, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/anaconda/threads.py", line 112, in wait
self.raise_if_error(name)
File "/usr/lib64/python2.7/site-packages/anaconda/timezone.py", line 75, in time_initialize
threadMgr.wait(THREAD_STORAGE)
File "/usr/lib64/python2.7/threading.py", line 764, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/site-packages/anaconda/threads.py", line 227, in run
threading.Thread.run(self, *args, **kwargs)
ValueError: invalid map 'nglish (the divide/multiply keys toggle the layout)'
The problem can arise in two different scenarios:
CentOS on VirtualBox: you need to create the root password and the user account in the first 30 seconds, before the installer starts installing things. This might be a bug (CentOS 8). Set up the user creation and then the root password, and you will see the installer continue without problems.
Pre-existing data on the HD: https://bugzilla.redhat.com/show_bug.cgi?id=1441891
In this case, first boot in rescue mode, then run the command dmraid -r -E /dev/sd<x>
I have tried to use the forum item below to fix the problem, but it did not seem to work for me:
https://stackoverflow.com/questions/21955234/ckan-install-paster-error
Amazingly, I got the same issue when I tried to install CKAN on Windows:
paster db init -c XXXX/development.ini not working for CKAN-command 'db' not know
This time I am trying to install CKAN on Ubuntu 12.04 (actually 12.04.5, as I couldn't get 12.0.4), as instructed in
http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html
I am having to install everything through a PROXY.
I have added the password to the sqlalchemy setting, and development.ini does exist. This is my error (below).
Is this a proxy issue? I have used chmod to change the access to the ini file, as the other forum recommended. I also set the virtual path. The database does exist, as I checked.
(default)root#UbuntaDataServer:/usr/lib/ckan/default/src/ckan# paster db init -c /etc/ckan/default/development.ini
Traceback (most recent call last):
File "/usr/lib/ckan/default/bin/paster", line 9, in <module>
load_entry_point('PasteScript==1.7.5', 'console_scripts', 'paster')()
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 104, in run
invoke(command, command_name, options, args[1:])
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 143, in invoke
exit_code = runner.run(args)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 238, in run
result = self.command()
File "/root/ckan/lib/default/src/ckan/ckan/lib/cli.py", line 156, in command
self._load_config()
File "/root/ckan/lib/default/src/ckan/ckan/lib/cli.py", line 98, in _load_config
load_environment(conf.global_conf, conf.local_conf)
File "/root/ckan/lib/default/src/ckan/ckan/config/environment.py", line 232, in load_environment
p.load_all(config)
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 124, in load_all
unload_all()
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 182, in unload_all
unload(*reversed(_PLUGINS))
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 210, in unload
plugins_update()
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 116, in plugins_update
environment.update_config()
File "/root/ckan/lib/default/src/ckan/ckan/config/environment.py", line 270, in update_config
search.check_solr_schema_version()
File "/root/ckan/lib/default/src/ckan/ckan/lib/search/__init__.py", line 291, in check_solr_schema_version
res = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 406, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 444, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Unavailable
This part of the stack trace:
File "/root/ckan/lib/default/src/ckan/ckan/lib/search/__init__.py", line 291, in check_solr_schema_version
res = urllib2.urlopen(req)
suggests that there is a problem connecting to Solr. You should make sure Solr is running, that you can connect to it, and that the settings in your .ini file for the location and port Solr is running on are correct.
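To see exactly what CKAN will try to connect to, you can read solr_url straight out of the ini file. A stdlib-only sketch -- the [app:main] section and solr_url option follow CKAN's standard config layout, and the value shown is just a sample, not your actual setting:

```python
import configparser
import io

# Sample fragment standing in for /etc/ckan/default/development.ini
sample_ini = """
[app:main]
solr_url = http://127.0.0.1:8983/solr
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(sample_ini))
# This is the URL Solr must actually answer on; a 503 from it is what
# check_solr_schema_version() turns into the HTTPError above.
print(cfg.get("app:main", "solr_url"))
```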
This is not the complete answer, but maybe close.
This is what I see on http://localhost/solr/:
Solr Admin (ckan)
UbuntaDataServer:8983
cwd=/var/cache/jetty/tmp SolrHome=/usr/share/solr/
This is what is running at that URL. I assume this is close or correct?
Any more suggestions?
Using CKAN 2.2 I had the same problem with proxies that require authentication.
If you are installing CKAN from source, I suggest moving to the 2.2.1 version (or newer); in those versions I found no issues with an auth proxy.
Anyway, if you're bound to a specific, older version of CKAN, you can manually add a proxy handler.
First of all, set your http_proxy env vars (both uppercase and lowercase).
Now you can edit the file ckan/ckan/lib/search/__init__.py and get your hands dirty.
We need to declare a handle_proxy() function:
import os
import urllib2  # already imported at the top of ckan/lib/search/__init__.py

def handle_proxy():
    # Collect the *_proxy environment variables set above
    proxy_settings = dict()
    for k, v in os.environ.items():
        if k.rfind('_proxy') > -1:
            proxy_settings[k] = v
    proxy_handler = urllib2.ProxyHandler(proxy_settings)
    opener = urllib2.build_opener(proxy_handler)
    urllib2.install_opener(opener)
Now we can call it in the check_solr_schema_version() function just before sending the request.
Replace
res = urllib2.urlopen(req)
with
handle_proxy()
res = urllib2.urlopen(req)
NOTE: this is a temporary workaround, just in case upgrading to a newer version (I currently use the 2.2.2 branch) does not fix the problem for you. I wouldn't suggest it for a production environment :)
I found another answer; if the above does not work, try the following.
Install this again:
sudo -E apt-get install python-pastescript
. /usr/lib/ckan/default/bin/activate
cd /usr/lib/ckan/default/src/ckan
paster make-config ckan /etc/ckan/default/development.ini
Change the SOLR setting to your IP address rather than localhost.
paster db init -c /etc/ckan/default/development.ini
Hope that fixes your problem