I've just upgraded Nexus to 3.8.0-02 OSS to get Python PyPI support.
I've created a pypi-hosted repository, a pypi-proxy repository, and a pypi-group to merge them, and I've given myself all roles related to those repositories. Everything works fine in the UI, and pip can query the proxy repository just fine (no credentials).
The problem is I cannot publish from setuptools in Python. My ~/.pypirc is mode 0600 and contains:
[distutils]
index-servers =
nexus
[nexus]
repository: https://ld3-nexus-3-kev.pibenchmark.com:8443/nexus/repository/pypi-hosted/
username: kevin_thorpe
password: XXXXXXX
This username and password work just fine for Maven, so I know it's not that. If I do python setup.py register -r nexus I get a 401 response. Same results using twine as in the docs. Oddly, nothing is logged for what is obviously a failed login. Traffic is reaching the server, but I can't see what's in the packets. I've tried both my LDAP user and a local user with the same results.
How do I go about debugging this connection problem? It appears that only Python is affected.
I can reproduce the error with:
python3 setup.py register -r myserver
In request.log:
10.255.0.3 - - [16/Feb/2018:19:12:51 +0000] "POST /nexus/repository/pypi/ HTTP/1.1" 401 0 3 "Python-urllib/3.6"
Note that the second - should be the username, but none is sent.
Output example using curl:
curl -u admin -X POST https://my_awesome_nexus_server/nexus/repository/pypi/
which shows up in request.log as:
10.255.0.3 - admin [16/Feb/2018:19:14:45 +0000] "POST /nexus/repository/pypi/ HTTP/1.1" 500 1948 15 "curl/7.55.1"
Tested on the Docker container, latest (3.8) and 3.7. It seems to me that the problem is on the client side instead.
UPDATE:
Managed to make it work with twine (in a virtualenv):
python3 setup.py sdist
twine upload -r myserver dist/mypackage-0.1.0.tar.gz
And the package is available in Nexus 3 (3.7).
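In case it helps, twine can also be pointed at the repository directly, bypassing ~/.pypirc entirely, which is handy for ruling out config-file problems. This assumes a reasonably recent twine; the hostname and repository path below are just the ones from my curl example:
twine upload --repository-url https://my_awesome_nexus_server/nexus/repository/pypi/ -u admin dist/mypackage-0.1.0.tar.gz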
My suggestion for troubleshooting is to look at the nexus.log and request.log for more detailed log statements. These can be found in your data directory in the "log" subdirectory.
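For example, on the official Docker image the data directory is /nexus-data, so something like the following lets you watch both logs while you retry the upload (adjust the path if you installed from the tarball, where it is usually sonatype-work/nexus3):
tail -f /nexus-data/log/nexus.log /nexus-data/log/request.log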
Another option is to file a ticket at https://issues.sonatype.org/projects/NEXUS/ and see if the Sonatype staff have any more ideas. It would help to include a support.zip (documented here: https://help.sonatype.com/display/NXRM3/Support+Features#SupportFeatures-CreatingaSupportZIP), which will include the logs.
I apologize for the somewhat generic advice, but I see nothing wrong with your configuration based on the details in your post so far.
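One more check that separates the credentials from the Python tooling is to hit the hosted repository with curl as the same user; the hostname and repository name below are simply the ones from your post:
curl -v -u kevin_thorpe https://ld3-nexus-3-kev.pibenchmark.com:8443/nexus/repository/pypi-hosted/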
I am trying to do the initial steps of creating a Python script using ezsheets. The manual suggests running a script containing just import ezsheets and nothing else, so that you can complete the API authorization before moving forward.
I am stuck there. Does anyone know why I can't finish the authorization?
Here is literally all of my configuration.
import ezsheets
import os
Here is what happens when I run the Python script.
A tab is opened and I choose a google account.
I allow the script to "See, edit, create, and delete your spreadsheets in Google Drive"
I get a message saying I can't connect to localhost:8080
I looked around here and found someone who suggested running python -m SimpleHTTPServer 8080 --bind 127.0.0.1, but that hasn't helped. When I do that, I get the following messages in the terminal:
127.0.0.1 - - [22/Oct/2020 15:53:58] "GET /?state=O1Db0mYPwVjU7veAmqiP8JBhTontLw&code=4/5gEFgsaDX0-7nGYfGmLEzuJxympuGyvlq0cJF5A7i4b4E-1fO1qvmFyK2_2BkI9bZU0czi4W5k980r_mIdbulWo&scope=https://www.googleapis.com/auth/spreadsheets HTTP/1.1" 200 -
127.0.0.1 - - [22/Oct/2020 15:53:58] code 404, message File not found
127.0.0.1 - - [22/Oct/2020 15:53:58] "GET /favicon.ico HTTP/1.1" 404 -
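For context, ezsheets appears to drive this authorization through google-auth-oauthlib, which starts its own temporary local server to catch that redirect (the localhost:8080 in the log above), so a separate SimpleHTTPServer on the same port would receive the callback instead of the library. A minimal sketch of the flow it runs, assuming the OAuth client-secrets file is named credentials-sheets.json as in the ezsheets setup instructions:
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]

# run_local_server() spins up a temporary HTTP server to receive Google's redirect,
# so nothing else should already be listening on the port it uses.
flow = InstalledAppFlow.from_client_secrets_file("credentials-sheets.json", SCOPES)
credentials = flow.run_local_server(port=8080)
print(credentials.valid)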
Some notes:
I have created an OAuth 2.0 Client ID.
The user type is internal.
I am part of a G Suite. I have added the ID to G Suite and allowed all of the expected API calls.
The installed library versions are
EZSheets 2020.10.10
google-api-core 1.23.0
google-api-python-client 1.12.4
google-auth 1.22.1
google-auth-httplib2 0.0.4
google-auth-oauthlib 0.4.1
googleapis-common-protos 1.52.0
Help would be appreciated.
I am trying to deploy a Django + Scrapy project on Ubuntu 16.04. When I run scrapyd-deploy as described in the docs, I get:
Packing version 1526639948
Deploying to project "first_scrapy" in http://my_ip/addversion.json
Deploy failed (404): <full HTML code of '404.html' page>
When I run scrapyd-deploy -l I see:
default http://my_ip
My scrapy.cfg:
[settings]
default = first_scrapy.settings
[deploy]
url = http://my_ip
username = root
password = rootpassword
project = first_scrapy
What am I doing wrong?
UPDATE 1:
If I change the url in my scrapy.cfg to url = http://my_ip:6800 - it still throws a 404 error. Next I tried running scrapyd in a second console, and that was the first time I saw a different answer - details are here.
So the question now is: how do I keep scrapyd running constantly, so that it keeps working after I close the console?
You just have to change into your project folder and then run the scrapyd command with nohup; that will make sure scrapyd doesn't get closed after you disconnect from the server:
cd /path/to/your/project && nohup scrapyd >& /dev/null &
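To confirm the daemon survived the logout, scrapyd exposes a status endpoint you can poll (default port 6800):
curl http://localhost:6800/daemonstatus.json
A more permanent alternative is wrapping scrapyd in a systemd or supervisor service, but nohup is enough to keep it alive across SSH sessions.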
I have a Django application inside /home//my_app that I am trying to deploy using gunicorn:
sudo gunicorn --workers=2 -b :8081 tutorial.wsgi:application
After deploying the application with the command above, I log into another SSH session (on the same server) and run the following command:
wget 127.0.0.1:8081
This returns a 403 FORBIDDEN.
Things I have tried:
1. Tried chmod 755, and even 777, on the app directory (did not work)
2. Tried moving the app directory to /etc/www/myapp (did not work)
3. Tried running all commands as root (did not work)
It is worth noting that I am not that familiar with Linux and that this error is literally driving me crazy.
SOLVED IT:
After downloading cURL to look at the HTTP headers, it turned out that the service worked but returned a 403 because of a missing authorization token. Oops.
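For anyone checking the same thing, a header-level request like the one below (URL and port from the commands above) shows the status line and any authentication-related headers the app returns with the 403:
curl -i http://127.0.0.1:8081/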
Please make sure you have coded views.py and urls.py to serve a GET request at /, for example:
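A minimal sketch of that wiring, assuming the tutorial project from the question, Django 2.0+ for django.urls.path, and a made-up view name:
# tutorial/urls.py
from django.http import HttpResponse
from django.urls import path

def index(request):
    # plain response so GET / no longer depends on any auth token
    return HttpResponse("ok")

urlpatterns = [
    path("", index),
]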
Could you please help me figure out what I'm doing wrong? Here are the steps:
followed the portia install manual found here https://github.com/scrapinghub/portia - all ok
created a new project, entered an url, tagged an item - all ok
clicked "continue browsing", browsed through site, items were being extracted as expected - all ok
Next I wanted to deploy my spider:
1st try: I tried to run scrapyd-deploy your_scrapyd_target -p project_name as the docs specified - got an error: scrapyd wasn't installed
fix: pip install scrapyd
2nd try: I launched the scrapyd server and accessed http://localhost:6800/ - all ok
After a brief read of the scrapyd docs I found out I had to edit my project's scrapy.cfg file: slyd/data/projects/new_project/scrapy.cfg
I added the following:
[deploy:local]
url = http://localhost:6800/
Went back to the console and checked that all is ok:
$:> scrapyd-deploy -l
local http://localhost:6800/
$:> scrapyd-deploy -L local
default
Seemed ok, so I gave it another try:
$:> scrapyd-deploy local -p default
Packing version 1418722113
Deploying to project "default" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "IOError: [Errno 21] Is a directory: '/Users/Mike/www/portia/slyd/data/projects/new_project'"}
What am I missing ?
For anyone who stumbles upon this issue, the fix is to deploy scrapyd in a directory other than that of the project.
See details here: https://github.com/scrapinghub/portia/issues/128
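One reading of that fix - start the scrapyd daemon from a clean directory outside the Portia project so its working files don't collide with the project tree, then deploy from the project directory - would look roughly like this (paths are the ones from the question; the scrapyd-home directory is made up):
mkdir -p ~/scrapyd-home && cd ~/scrapyd-home && scrapyd &
cd /Users/Mike/www/portia/slyd/data/projects/new_project
scrapyd-deploy local -p default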
I'm trying to make a web app using the Uber API (https://developer.uber.com/v1/tutorials/).
Their tutorial links to an example application on GitHub with instructions to get it running (https://github.com/uber/Python-Sample-Application). When I try running the sample application I get HTTP 400 errors. An example of the output I get on the console is shown below.
(venv)192:Python-Sample-Application jason$ python app.py
* Running on http://127.0.0.1:7000/
127.0.0.1 - - [05/Sep/2014 05:37:28] code 400, message Bad HTTP/0.9 request type ('\x16\x03\x01\x00\xab\x01\x00\x00\xa7\x03\x03T')
127.0.0.1 - - [05/Sep/2014 05:37:28] "??T ??p},֤,??IknL?]????????C#?J??$?#?" 400 -
127.0.0.1 - - [05/Sep/2014 05:37:28] code 400, message Bad HTTP/0.9 request type ('\x16\x03\x01\x00\x9b\x01\x00\x00\x97\x03\x01T')
127.0.0.1 - - [05/Sep/2014 05:37:28] "??T ???=?-???????"u?Pg,??t?sBa`?J??$?#?" 400 -
127.0.0.1 - - [05/Sep/2014 05:37:28] code 400, message Bad HTTP/0.9 request type ('\x16\x03\x00\x00E\x01\x00\x00A\x03\x00T')
127.0.0.1 - - [05/Sep/2014 05:37:28] "EAT ??^??????<)?W?e-???\?U~?0:=?=</5" 400 -
I've pasted the instructions from the github project below.
How To Use This
Navigate over to https://developer.uber.com/, and sign up for an Uber developer account.
Register a new Uber application, make your Redirect URI http://localhost:7000/submit, and ensure that both the profile and history OAuth scopes are checked.
Fill in the relevant information in the config.json file in the root folder and add your client ID and secret as the environment variables UBER_CLIENT_ID and UBER_CLIENT_SECRET.
Run export UBER_CLIENT_ID="YOUR_CLIENT_ID" && export UBER_CLIENT_SECRET="YOUR_CLIENT_SECRET"
Run pip install -r requirements.txt to install dependencies
Run python app.py
Navigate to http://localhost:7000 in your browser
On the Uber developer page I've checked all the boxes for my app and set the redirect. In my config.json, I've added lines for the UBER_CLIENT_ID and UBER_CLIENT_SECRET fields. I've run the export command from step 4 and echoed both environment variables to confirm they are set. I created a virtualenv to install the dependencies, ran the pip install, and then ran python app.py. I'm running OS X and have tried both Chrome and Safari; both browsers say they fail to make a secure connection to localhost.
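For what it's worth, the \x16\x03\x01 bytes in those log entries are the start of a TLS handshake, which suggests the browser is requesting https://localhost:7000 while the Flask development server only speaks plain HTTP. A quick way to compare the two from a terminal (URL from the tutorial's instructions):
curl -v http://localhost:7000/
curl -v https://localhost:7000/  # expected to fail the same way the browsers do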