I have tried to use the forum item below to fix the problem, but it did not seem to work for me:
https://stackoverflow.com/questions/21955234/ckan-install-paster-error
Amazingly, I got the same issue when I tried to install CKAN on Windows:
paster db init -c XXXX/development.ini not working for CKAN - Command 'db' not known
This time I am trying to install CKAN on Ubuntu 12.04 (actually 12.04.5, as I couldn't get plain 12.04) as instructed in
http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html
I am having to install everything through a proxy.
I have added the password to the SQLAlchemy URL, and development.ini does exist. This is my error (below):
Is this a proxy issue? I have used chmod to change the access to the .ini file as the other forum recommended. I also set the virtual path. The database does exist, as I checked it:
(default)root#UbuntaDataServer:/usr/lib/ckan/default/src/ckan# paster db init -c /etc/ckan/default/development.ini
Traceback (most recent call last):
File "/usr/lib/ckan/default/bin/paster", line 9, in <module>
load_entry_point('PasteScript==1.7.5', 'console_scripts', 'paster')()
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 104, in run
invoke(command, command_name, options, args[1:])
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 143, in invoke
exit_code = runner.run(args)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 238, in run
result = self.command()
File "/root/ckan/lib/default/src/ckan/ckan/lib/cli.py", line 156, in command
self._load_config()
File "/root/ckan/lib/default/src/ckan/ckan/lib/cli.py", line 98, in _load_config
load_environment(conf.global_conf, conf.local_conf)
File "/root/ckan/lib/default/src/ckan/ckan/config/environment.py", line 232, in load_environment
p.load_all(config)
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 124, in load_all
unload_all()
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 182, in unload_all
unload(*reversed(_PLUGINS))
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 210, in unload
plugins_update()
File "/root/ckan/lib/default/src/ckan/ckan/plugins/core.py", line 116, in plugins_update
environment.update_config()
File "/root/ckan/lib/default/src/ckan/ckan/config/environment.py", line 270, in update_config
search.check_solr_schema_version()
File "/root/ckan/lib/default/src/ckan/ckan/lib/search/__init__.py", line 291, in check_solr_schema_version
res = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 406, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 444, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 503: Service Unavailable
This part of the stack trace:
File "/root/ckan/lib/default/src/ckan/ckan/lib/search/__init__.py", line 291, in check_solr_schema_version
res = urllib2.urlopen(req)
suggests that there is a problem connecting to Solr. You should make sure Solr is running, that you can connect to it, and that the solr_url setting in your .ini file (the location and port Solr is running on) is correct.
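A minimal sketch of that check: read solr_url from the ini file and try to open it. The inline sample below stands in for /etc/ckan/default/development.ini; the section and option names follow CKAN's standard ini layout. Python 3 module names are used here, where CKAN 2.x on Python 2 would use ConfigParser and urllib2.

```python
from configparser import ConfigParser

# Inline sample standing in for /etc/ckan/default/development.ini
sample_ini = """\
[app:main]
solr_url = http://127.0.0.1:8983/solr
"""

config = ConfigParser()
config.read_string(sample_ini)
solr_url = config.get("app:main", "solr_url")
print(solr_url)

# To actually test connectivity (commented out so this runs offline):
# from urllib.request import urlopen
# urlopen(solr_url)  # raises URLError/HTTPError if Solr is unreachable
```

If the urlopen call raises an error here too, the problem is between your machine and Solr, not inside CKAN.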
This is not the complete answer, but it may be close.
This is what I see on http://localhost/solr/
Solr Admin (ckan)
UbuntaDataServer:8983
cwd=/var/cache/jetty/tmp SolrHome=/usr/share/solr/
This is what is running at that URL. I assume this is close to correct?
Any more suggestions?
Using CKAN 2.2, I had the same problem with proxies that require authentication.
If you are installing CKAN from source, I suggest moving to version 2.2.1 (or newer).
In those versions I found no issues with an authenticating proxy.
Anyway, if you're bound to a specific, older version of CKAN, you can manually add a proxy handler.
First of all, set your http_proxy environment variables (both uppercase and lowercase).
Now you can edit the file ckan/ckan/lib/search/__init__.py and get your hands dirty.
We need to declare a handle_proxy() function:
import os

def handle_proxy():
    # Collect every *_proxy variable (http_proxy, https_proxy, ...) from the environment
    proxy_settings = dict()
    for k, v in os.environ.items():
        if k.rfind('_proxy') > -1:
            proxy_settings[k] = v
    # Install an opener that routes urllib2 requests through the proxy
    # (urllib2 is already imported at the top of this module)
    proxy_handler = urllib2.ProxyHandler(proxy_settings)
    opener = urllib2.build_opener(proxy_handler)
    urllib2.install_opener(opener)
Now we can call it in the check_solr_schema_version() function just before sending the request.
Replace
res = urllib2.urlopen(req)
with
handle_proxy()
res = urllib2.urlopen(req)
NOTE: this is a temporary workaround, just in case upgrading to a newer version (I currently use the 2.2.2 branch) does not fix the problem for you. I wouldn't suggest it for a production environment :)
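A quick sanity check that the *_proxy filtering in handle_proxy() sees what you exported (the proxy URL here is a throwaway placeholder, safe to run anywhere):

```python
import os

os.environ["http_proxy"] = "http://user:pass@proxy.example.com:8080"  # placeholder

# The same filter handle_proxy() applies to the environment
proxy_settings = {k: v for k, v in os.environ.items() if k.rfind("_proxy") > -1}
print(proxy_settings["http_proxy"])
```

If your real proxy variable doesn't show up here, it was exported in a different shell session than the one running paster.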
I found another answer; if the above does not work, try the following.
Install this again:
sudo -E apt-get install python-pastescript
. /usr/lib/ckan/default/bin/activate
cd /usr/lib/ckan/default/src/ckan
paster make-config ckan /etc/ckan/default/development.ini
Change the Solr setting (solr_url) to your IP address rather than localhost, then:
paster db init -c /etc/ckan/default/development.ini
Hope that fixes your problem
I'm using the Python script from this link on a Raspberry Pi 3, inserting my Google email address and Google Sheet number into the script:
https://gist.github.com/Thuruv/dc0e2f781b8e095b9981f265647b8304
I then enter my Google password as I run the script, but I get the errors below:
Traceback (most recent call last):
File "Googlespreadsheets.py", line 53, in <module>
csv_file = gs.download(ss)
File "Googlespreadsheets.py", line 34, in download
"Authorization": "GoogleLogin auth=" + self.get_auth_token(),
File "Googlespreadsheets.py", line 29, in get_auth_token
return self._get_auth_token(self.email, self.password, source,
service="wise")
File "Googlespreadsheets.py", line 25, in _get_auth_token
return re.findall(r"Auth=(.*)", urllib2.urlopen(req).read())[0]
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 435, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 473, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
Navigating to the URL in the code directly links here, displaying the warning from Google:
Important: ClientLogin has been officially deprecated since April 20, 2012 and is now no longer available. Requests to ClientLogin will fail with a HTTP 404 response. We encourage you to migrate to OAuth 2.0 as soon as possible.
This code will fail with a 404 response, as your attempt demonstrates. Try migrating this code to OAuth 2.0.
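As a starting point for that migration: an OAuth 2.0 flow begins by sending the user to Google's authorization endpoint. A sketch of building that URL (the client_id and redirect_uri are placeholders you would obtain and configure in the Google API Console):

```python
from urllib.parse import urlencode

params = {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "http://localhost:8080/",                  # placeholder
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/spreadsheets.readonly",
}
auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
```

In practice a library such as google-auth or gspread handles this flow (and the token exchange that follows) for you.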
I've implemented an open-source Python command-line utility, https://pypi.org/project/google-sheets-to-csv/, that should work on a Pi 3 as long as you have Python 3 installed. If you want to integrate it into a larger application, you should be able to use it as a third-party API.
Basic usage on linux:
pip install google-sheets-to-csv
mkdir out
gs-to-csv <spreadsheet ID> <sheet selector (regex)> out/
You'll get one CSV file per sheet that matches the given regex selector.
If you have a browser installed on your Pi 3, the first time you connect you'll be asked to grant the application read access to all your spreadsheets. If you use your Pi 3 as a headless server, you could run the tool on your own computer and copy the generated token, but in that case I would recommend using a Google service account and granting that service account access to the spreadsheets you want to download.
I just wrote a simple Python demo but ran into a confusing problem.
import requests
print(requests.get('http://www.sina.com.cn/'))
I know the correct result is to return <Response [200]>, but on my Win10 x64 machine it raises the following error. I guess something is wrong on my computer.
Traceback (most recent call last):
File "C:\Users\CJY\Desktop\Python_Demo\web.py", line 2, in <module>
print(requests.get('http://www.sina.com.cn/'))
File "D:\python3.6.1\lib\site-packages\requests\api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "D:\python3.6.1\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "D:\python3.6.1\lib\site-packages\requests\sessions.py", line 518, in request
resp = self.send(prep, **send_kwargs)
File "D:\python3.6.1\lib\site-packages\requests\sessions.py", line 639, in send
r = adapter.send(request, **kwargs)
File "D:\python3.6.1\lib\site-packages\requests\adapters.py", line 403, in send
conn = self.get_connection(request.url, proxies)
File "D:\python3.6.1\lib\site-packages\requests\adapters.py", line 302, in get_connection
conn = proxy_manager.connection_from_url(url)
File "D:\python3.6.1\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 279, in connection_from_url
pool_kwargs=pool_kwargs)
File "D:\python3.6.1\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 408, in connection_from_host
self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs)
File "D:\python3.6.1\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 218, in connection_from_host
raise LocationValueError("No host specified.")
requests.packages.urllib3.exceptions.LocationValueError: No host specified.
[Finished in 0.2s]
Please help me!
That works for me. Please ensure you have internet connectivity and you can ping www.sina.com.cn
Just tested this with the same python version on windows 10 64-Bit and it worked for me.
When using requests on Windows, I have come across the same error when the local DNS cache points to an incorrect value.
If you are still having no luck, try flushing the local DNS cache by entering the following command at a command prompt:
ipconfig /flushdns
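The traceback above shows requests building a proxy connection (connection_from_url on a proxy_manager), so it can also help to see which proxy settings requests actually resolved for the URL before it sends anything. This sketch uses the public Session.merge_environment_settings API:

```python
import requests

session = requests.Session()
# Merge environment/system proxy settings for this URL without sending a request
settings = session.merge_environment_settings(
    "http://www.sina.com.cn/", {}, None, None, None)
print(settings["proxies"])  # an empty dict means no proxy will be used
```

If a proxy shows up here that you don't expect, the problem is in your system proxy configuration rather than in requests itself.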
Error location:
Lib\urllib\request.py:
proxyEnable = winreg.QueryValueEx(internetSettings, 'ProxyEnable')[0]
If ProxyEnable is a string, you'll see this error. The reason is that in your registry ProxyEnable is set as REG_SZ rather than REG_DWORD, so change it and all will be OK.
Open the registry at:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable
(you can also search for ProxyEnable directly)
Delete ProxyEnable, then recreate it as REG_DWORD 0x00000000 (0) instead of REG_SZ "0".
(The original answer included screenshots of the registry editor showing the new ProxyEnable entry and its correct value; my PC language is Chinese, but the location of ProxyEnable is the same.)
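The underlying type mismatch is easy to reproduce in plain Python: winreg returns a REG_DWORD as an int but a REG_SZ as a string, and a non-empty string is always truthy, so urllib thinks a proxy is enabled:

```python
# Illustrative stand-ins for what winreg.QueryValueEx returns
# (no registry access needed here):
proxy_enable_dword = 0    # REG_DWORD 0 -> int 0, falsy: proxy correctly off
proxy_enable_sz = "0"     # REG_SZ "0" -> non-empty str, truthy: looks "on"

print(bool(proxy_enable_dword), bool(proxy_enable_sz))  # False True
```

That is why converting the value back to REG_DWORD fixes the error.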
from amazon.api import AmazonAPI
AMAZON_ACCESS_KEY = "A******************A"
AMAZON_SECRET_KEY = "7***********************E"
AMAZON_ASSOC_TAG = "j*****-20"
amazon = AmazonAPI(AMAZON_ACCESS_KEY, AMAZON_SECRET_KEY, AMAZON_ASSOC_TAG, region='US')
print(amazon)
#product = amazon.lookup(ItemId='B002RL8FBQ')
When I run the code above it works fine and I get this output from the print function:
<amazon.api.AmazonAPI object at 0x7fb6e59f7b38>
So everything is working fine with my access key, secret key, and associate tag.
However, if I un-comment the last line #product = amazon.lookup(ItemId='B00EOE0WKQ') then I get this error traceback:
Traceback (most recent call last):
File "test.py", line 8, in <module>
product = amazon.lookup(ItemId='B00EOE0WKQ')
File "/home/darren/Python_projects/amazon_wp/myvenv/lib/python3.4/site-packages/amazon/api.py", line 173, in lookup
response = self.api.ItemLookup(ResponseGroup=ResponseGroup, **kwargs)
File "/home/darren/Python_projects/amazon_wp/myvenv/lib/python3.4/site-packages/bottlenose/api.py", line 251, in __call__
{'api_url': api_url, 'cache_url': cache_url})
File "/home/darren/Python_projects/amazon_wp/myvenv/lib/python3.4/site-packages/bottlenose/api.py", line 212, in _call_api
return urllib2.urlopen(api_request, timeout=self.Timeout)
File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.4/urllib/request.py", line 469, in open
response = meth(req, response)
File "/usr/lib/python3.4/urllib/request.py", line 579, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.4/urllib/request.py", line 507, in error
return self._call_chain(*args)
File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain
result = func(*args)
File "/usr/lib/python3.4/urllib/request.py", line 587, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
I have followed the instructions from the official GitHub repository, https://github.com/yoavaviram/python-amazon-simple-product-api, and the code I am using comes from the "Usage" section of that page, so I am not sure what's going wrong.
For added info, I am using a virtual environment; to show that I have the correct packages installed, here is the output of pip freeze:
(myvenv) darren#my_comp:~/Python_projects/amazon_wp$ pip3 freeze
bottlenose==0.6.3
lxml==3.6.0
python-amazon-simple-product-api==2.1.0
python-dateutil==2.5.3
six==1.10.0
Also, I have tried several different ASINs of valid products and I get the same error message.
I am using Python 3.4 on Ubuntu 14.04.
I think the problem is with the region. Please select a valid value from here. The explanation may be that AWS can validate your credentials, but when it comes to the "real" call it fails, since 'US' is not a valid region...
You might need to authorize your account for API access. This step-by-step should walk you through it.
Edit:
I've installed all of the same versions and am using the same python code with my own keys and it works fine.
The only time I encountered that error was when I didn't specify the region (which you are clearly doing).
One thing I'd try is to add the following code into your script:
import logging
logging.basicConfig(level=logging.DEBUG)
which should display the following request url:
DEBUG:bottlenose.api:Amazon URL:
http://webservices.amazon.co.uk/onca/xml?AWSAccessKeyId=&AssociateTag=&ItemId=B00EOE0WKQ&Operation=ItemLookup&ResponseGroup=Large&Service=AWSECommerceService&Timestamp=&Version=2013-08-01&Signature=
You can visit this in your browser and should see an XML document returned. If it fails, it should hopefully give you a better error than what the Python lib gives you.
For instance, the Amazon scratchpad at https://associates-amazon.s3.amazonaws.com/scratchpad/index.html never worked for me, but it provides a list of base URLs for each region.
I created my associate account on the .co.uk region, thus my requests are only valid for http://webservices.amazon.co.uk; if I instead try to query http://webservices.amazon.com then I see:
The request signature we calculated does not match the signature you
provided. Check your AWS Secret Access Key and signing method. Consult
the service documentation for details.
If you've got an associate account on amazon.com, try it without specifying a region, as I believe that's the default. Apart from the above, check that your VM has internet connectivity, and if nothing else works, try creating another access key and using that.
Coincidentally, I ran the pip search django command and got a timeout error, even when specifying a high timeout value.
Below are the logs:
D:\PERFILES\rmaceissoft\virtualenvs\fancy_budget\Scripts>pip search django --timeout=300
Exception:
Traceback (most recent call last):
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\site-packages\pip-1.1-py2.7.egg\pip\basecommand.py", line 104, in main
status = self.run(options, args)
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\site-packages\pip-1.1-py2.7.egg\pip\commands\search.py", line 34, in run
pypi_hits = self.search(query, index_url)
File "D:\PERFILES\Marquez\rmaceissoft\Workspace\virtualenvs\fancy_budget\lib\site-packages\pip-1.1-py2.7.egg\pip\commands\search.py", line 48, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "C:\Python27\Lib\xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "C:\Python27\Lib\xmlrpclib.py", line 1575, in __request
verbose=self.__verbose
File "C:\Python27\Lib\xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "C:\Python27\Lib\xmlrpclib.py", line 1297, in single_request
return self.parse_response(response)
File "C:\Python27\Lib\xmlrpclib.py", line 1462, in parse_response
data = stream.read(1024)
File "C:\Python27\Lib\httplib.py", line 541, in read
return self._read_chunked(amt)
File "C:\Python27\Lib\httplib.py", line 574, in _read_chunked
line = self.fp.readline(_MAXLINE + 1)
File "C:\Python27\Lib\socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
timeout: timed out
Storing complete log in C:\Users\reiner\AppData\Roaming\pip\pip.log
However, another search command finishes without problems:
pip search django-registration
Is this a bug in pip due to the large number of package names that contain "django"?
Note: internet connection speed = 2 Mbit/s
The --timeout option doesn't seem to work properly.
I can install django properly by using either:
pip --default-timeout=60 install django
or
export PIP_DEFAULT_TIMEOUT=60
pip install django
Note: using pip version 1.2.1 on RHEL 6.3
Source: DjangoDay2012-Brescia.pdf, page 11
PyPI is probably overloaded. Just enable mirror fallback and caching in pip, and maybe tune the timeout a bit. Add these in ~/.pip/pip.conf:
[global]
default-timeout = 60
download-cache = ~/.pip/cache
[install]
use-mirrors = true
The default timeout for pip is too short. You should set the environment variable PIP_DEFAULT_TIMEOUT to at least 60 (1 minute).
Source: http://www.pip-installer.org/en/latest/configuration.html
I'm trying to read a URL within our corporate network. Specifically, the server I'm contacting is in one office and the client PC is in another:
print(urlopen(r"http://london.mycompany/mydir/").read())
Whenever I run this function I get:
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "C:\Python24\lib\urllib2.py", line 130, in urlopen
return _opener.open(url, data)
File "C:\Python24\lib\urllib2.py", line 364, in open
response = meth(req, response)
File "C:\Python24\lib\urllib2.py", line 471, in http_response
response = self.parent.error(
File "C:\Python24\lib\urllib2.py", line 402, in error
return self._call_chain(*args)
File "C:\Python24\lib\urllib2.py", line 337, in _call_chain
result = func(*args)
File "C:\Python24\lib\urllib2.py", line 480, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required
The odd thing is that there's no firewall between these two computers. For some reason urllib has decided to connect to the web server via the proxy we'd normally use to reach content outside the company, and in this case that's failing because I've not authenticated with it.
I'm pretty sure that the fault occurs within the client PC: I did a nslookup and a ping to the server to confirm that there's a connection between the two computers, however when I watch the transaction using TCPView for Windows I can see that the python.exe process is connecting to a completely different server (yes, the proxy!).
So what could be causing this? Note that the os.environ["http_proxy"] variable is NOT set - this variable is often used to make urllib connect via a proxy server. That's not the case here. Could there be something else which might have the same effect?
FYI, Running Python 2.4.4 on Windows XP 32bit in a very locked-down corporate environment.
It reads the proxy from the system settings. Use urllib.FancyURLopener with an empty proxy dictionary:
opener = urllib.FancyURLopener({})
f = opener.open("http://london.mycompany/mydir/")
f.read()
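To confirm where urllib is picking the proxy up from, you can also ask it directly (Python 3 syntax shown; on Python 2.4 the equivalent is urllib.getproxies(), which on Windows reads the registry's Internet Settings rather than just http_proxy):

```python
import urllib.request

# Returns the proxies urllib discovered from the environment/system settings
proxies = urllib.request.getproxies()
print(proxies)  # an empty dict means no system proxy was detected
```

If this prints your corporate proxy even though http_proxy is unset, the setting is coming from the Windows Internet Settings, which explains the behaviour you saw in TCPView.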