I'm starting to play around with the new Pepper API for an important project (phasing out Java) and I'm having an issue with this example.
https://developer.chrome.com/native-client/devguide/devcycle/vs-addin
I've installed the plugin into VS, added the paths, and started the Python web server, yet when I debug it gives me a 404...
I'm starting the Python web server as per https://developer.chrome.com/native-client/sdk/examples
The issue is that the HTML file it's looking for is in F:\nacl_sdk\vs_addin\examples\hello_world_gles\hello_world_gles, while the localhost root is F:\nacl_sdk\pepper_42\getting_started
Has anyone else had this issue?
I also have plenty of IntelliSense errors.
Since posting this I've tried copying the example directory to the root directory being used by localhost. The page loads; however, I still can't run the plugin...
I think you're not supposed to be starting the Python web server, as per the VS addin documentation:
When you run one of the Native Client platforms Visual Studio builds the corresponding type of Native Client module (either a .nexe or .pexe), starts a web server to serve it up, and launches a copy of Chrome that fetches the module from the server and runs it.
However, to be honest, I'm still unable to run this sample, even though I'm following these instructions. I'm seeing an "ERR_CONNECTION_REFUSED" result page. I'm using VS 2012 Express and Chrome 43.
Update: I've finally managed to run the sample. First, I installed VS 2012 Ultimate instead of Express (because Express doesn't support add-ins). Second, the latest VS addin seems to be unable to run the Python web server; it passes the port parameter in the wrong format. You can see that if you read the output in the "Native Client Web Server Output" pane in VS. So what I did was modify %NACL_SDK_ROOT%\tools\httpd.py so that it doesn't attempt to parse the command-line arguments :)
Here is the new main from my httpd.py:
def main(args):
    # Ignore whatever arguments the addin passes in; always serve the
    # current directory on port 5103.
    server = LocalHTTPServer(os.path.abspath('.'), 5103)
    # Serve until the client tells us to stop. When it does, it will give us an
    # error code.
    print 'Serving %s on %s...' % (os.path.abspath('.'), server.GetURL(''))
    return server.ServeForever()
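With this change the addin can still launch the server; it just ignores the arguments it receives and always serves the current working directory on port 5103, which, as far as I can tell, is the port the examples expect.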
HTH.
I followed Google's Quickstart for Python, step-by-step. I followed each step exactly, often copying and pasting. I definitely have the Google Calendar API enabled. I've installed the Google Client Library with Pip. I've set up the sample code and the credentials.json in its own folder. So, why am I getting this error when I run it:
"OSError: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions"
To figure this out, I've learned what a socket is. (It's literally the combination of an IP address and a single port). I've learned how to use netstat, though I don't know yet how this applies to what I'm doing. I've looked into using ShellExecuteEx based on an answer in this question, but I don't know how to use that with Python.
I've tried adding the script from the accepted answer to this question (which actually uses the ShellExecuteEx method, though I didn't realize it at first) into an admin.py file and importing admin.py into quickstart.py. After updating the admin.py script to Python 3 syntax and running quickstart.py, Windows 8.1 asks me whether to allow access. I say yes, and it still gives me the OSError (WinError 10013) about accessing the socket in a forbidden way. So UAC is not the issue.
I suspect it's a port conflict, where something is already using the port that Google's script is trying to use. But I'm worried that the port is decided by a black-box function that I won't be able to change. The error itself doesn't say which port it's using, so I'll need to do more research.
It is a port issue.
Go to line 34 in the quickstart.py file (or wherever it says creds = run_local_server()).
Go to the flow.py file in the google_auth_oauthlib package that contains this function (in VS Code, click run_local_server() and press F12, or right-click and select "Go to Definition").
You'll see line 369 (at the time of this writing) say self, host='localhost', port=8080,.
When I looked at netstat, it said this port was indeed in use, probably by an Apache server I never turned off.
Change the value in the flow.py file in the google_auth_oauthlib package to 8090, so line 369 reads self, host='localhost', port=8090,.
I ran the quickstart.py script again, and the window to authenticate my Google account popped up.
I selected my account, and it worked. No messing with the admin stuff.
I'm glad I was able to find it like this, because I thought the port was selected in some black-box manner, like it was decided by a server at Google.
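A side note: editing the installed package isn't strictly necessary, since run_local_server() accepts the port as a keyword argument. A minimal sketch (assuming the variable names used in the quickstart sample) that should have the same effect from quickstart.py itself:

from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/calendar.readonly']

# 'credentials.json' is the file downloaded during the quickstart setup.
flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
# Bind the local OAuth callback server to 8090 instead of the default 8080,
# which was already taken on my machine.
creds = flow.run_local_server(port=8090)

That way the library file stays untouched and won't be overwritten on the next pip upgrade.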
I'm a beginner with Python and Django.
I'm setting up a program I've written locally. After almost finishing getting the app to work on the server, I've learned that the server is running Python 2.6, while my local system runs 2.7. This is seemingly giving me problems when retrieving parameters from URLs.
I'm using a server from OpenShift. I don't know much about servers, but my current setup is that I have a local clone of the files, I work on everything locally, and then push the changes via git to the server. The server was set up using a predefined quick setup from inside the OpenShift interface.
I'm using the following urlpattern, which works just fine locally on my computer.
url(r'^website/(?P<url>[:\w/\-.]+)$', 'page'),
However, on my server version I'm running into some problems. The following URL passes two different arguments to the view, depending on whether I'm on the server or running locally.
#when using this url
website/http://example.com
#local view called page, retrives this argument
http://example.com
#server version retrieves almost the same, but with one slash missing at the start
http:/example.com
It seems to me that a slash is being chopped off somewhere. How can I get it to pass the argument with both slashes intact?
# the receiving view
def page(request, url):
    p = Page.objects.get(url=url)
    domain = p.website.url
    return render_to_response('seo/page.html', {'domain': domain, 'page': p}, context_instance=RequestContext(request))
The local version is returning the desired page just fine. The server version returns this:
DoesNotExist at /website/http:/coverme.dk/collections/iphone-sleeves-covers
I noticed that one of the slashes in http:// was missing here, and assumed the error was caused by the argument being sent to the view incorrectly.
I've just tested with an url that does not exist in the database on the local version, and it displays the error message correctly.
I've also double checked that the object for url='http://coverme.dk/collections/iphone-sleeves-covers' actually exists. I've also checked with several others.
I've experimented with the input URL, and it seems to work just fine, except when I use two, three, or more consecutive slashes. All consecutive slashes after the first one are ignored in the URL.
/website/http://////coverme.dk////collections/iphone-sleeves-covers
#gives the same as
/website/http:/coverme.dk/collections/iphone-sleeves-covers
Any kind of help is much appreciated. A link to some documentation that could help me out would be greatly appreciated as well.
EDIT: Updating Django solved this issue.
From a comment by the author of the question:
Using /website/http%3A%2F%2Fcoverme.dk%2Fcollections%2Fiphone-sleeves-covers as the URL returns: The requested URL /website/coverme.dk/collections/iphone-sleeves-covers was not found on this server. The Django version on the server is 1.4 and the one being used locally is 1.5.1. I've still to understand why I'm seeing different results locally and on the server, but I'm starting to think I should just switch to a URL pattern that doesn't use //?
Updating Django solved the issue for me
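If upgrading isn't possible, a workaround sketch (using a hypothetical u query parameter, not taken from the original code) is to keep the slashes out of the URL path entirely and pass the target URL as a query string instead, so there is nothing for the server to collapse:

# urls.py - the path no longer contains any slashes from the target URL
url(r'^website/$', 'page'),

# views.py - read the target from the query string,
# e.g. /website/?u=http%3A%2F%2Fexample.com
from django.shortcuts import render_to_response
from django.template import RequestContext

def page(request):
    url = request.GET.get('u', '')
    p = Page.objects.get(url=url)
    domain = p.website.url
    return render_to_response('seo/page.html', {'domain': domain, 'page': p},
                              context_instance=RequestContext(request))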
We (friends and I) have a small dedicated server with nginx and the geoip module installed. (It's properly installed)
On that server we run a simple python script with UWSGI and bottle.
The script rotates banners.
(Our own banners for self-promotion)
We use this script to show banners for sites we own on other sites we own, and rotate them so the user doesn't always see the same banner.
We have a problem with the geotargeting.
The following pastebin shows the python script.
http://pastebin.com/PqQ6TQeN
PAISES = ['AR', 'MX', 'CL'] holds the country codes.
TODOS is the tag to show the banner to all countries.
The different lists are for different banner sizes.
The URL for the rotating banners is like this.
exampleip /api/300x250
This calls the template for the size of 300x250 so the user will see a random banner from our list for that size.
That works fine.
But the geotargeting isn't working.
In the code (pastebin link) you can see the 300x250 banners have only the "AR" code for Argentina, so only users from that country should see those ads.
However they keep being displayed for other IPs.
And after adding this:
print('>>>>> ',request.headers.keys())
pais = request.get_header('GEOIP_CITY_COUNTRY_CODE')
print('=========== ' , pais, ' ==================')
(*Note: pais means country)
and running the uWSGI process via SSH, it prints None for GEOIP_CITY_COUNTRY_CODE.
That means the parameters aren't being passed to the Python script correctly.
The Geoip module is properly installed but this script isn't working properly.
I need to get it fixed.
I'm sure it's not something complicated and I'm just writing something wrong in the code. Maybe I'm not passing the parameters correctly to uWSGI or Python.
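For what it's worth, when nginx forwards the GeoIP values to uWSGI as uwsgi_param entries (e.g. uwsgi_param GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code; in the location block — an assumption, since the nginx config isn't shown here), they arrive in the WSGI environ rather than as HTTP headers, so a sketch of reading them in bottle would be:

from bottle import request

def get_country_code():
    # Values set via uwsgi_param end up in the WSGI environ, not in
    # request.headers, which is why get_header() returns None for them.
    return request.environ.get('GEOIP_CITY_COUNTRY_CODE')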
Later note: the issues in the original posting below have been largely resolved.
Here's the background: For an introductory comp sci course, students develop html and server-side Python 2.7 scripts using a server provided by the instructors. That server is based on CGIHTTPRequestHandler, like the one at pointlessprogramming. When the students' html and scripts seem correct, they port those files to a remote, slow Apache server. Why support two servers? Well, the initial development using a local server has the benefit of reducing network issues and dependency on the remote, weak machine that is running Apache. Eventually porting to the Apache-running machine has the benefit of publishing their results for others to see.
For the local development to be most useful, the local server should closely resemble the Apache server. Currently there is an important difference: Apache requires that a script start its response with headers that include a content-type; if the script fails to provide such a header, Apache sends the client a 500 error ("Internal Server Error"), which is too generic to help the students, who cannot use the server logs. CGIHTTPRequestHandler imposes no similar requirement. So it is common for a student to write header-free scripts that work with the local server, but get the baffling 500 error after copying the files to the Apache server. It would be helpful to have a version of the local server that checks for a content-type header and gives a good error if there is none.
I seek advice about creating such a server. I am new to Python and to writing servers. Here are the issues that occur to me, but any helpful advice would be appreciated.
1. Is a content-type header required by the CGI standard? If so, other people might benefit from an answer to the main question here. Also, if so, I despair of finding a way to disable Apache's requirement. Maybe the relevant part of the CGI RFC is section 6.3.1 (CGI Response, Content-Type): "If an entity body is returned, the script MUST supply a Content-Type field in the response."
2. To make a local server that checks for the content-type header, perhaps I should sub-class CGIHTTPServer.CGIHTTPRequestHandler, to override run_cgi() with a version that issues an error for a missing header. I am looking at CGIHTTPServer.py __version__ = "0.4", which was installed with Python 2.7.3. But run_cgi() does a lot of processing, so it is a little unappealing to copy all its code, just to add a couple calls to a header-checking routine. Is there a better way?
3. If the answer to (2) is something like "No, overriding run_cgi() is recommended," I anticipate writing a version that invokes the desired script, then checks the script's output for headers before that output is sent to the client. There are apparently two places in the existing run_cgi() where the script is invoked:
3a. When run_cgi() is executed on a non-Unix system, the script is executed using Python's subprocess module. As a result, the standard output from the script will be available as an in-memory string, which I can presumably check for headers before the call to self.wfile.write. Does this sound right?
3b. But when run_cgi() is executed on a *nix system, the script is executed by a forked process. I think the child's stdout will write directly to self.wfile (I'm a little hazy on this), so I see no opportunity for the code in run_cgi() to check the output. Ugh. Any suggestions?
4. If analyzing the script's output is recommended, is email.parser the standard way to recognize whether there is a content-type header? Is another standard module recommended instead? (A rough sketch of what I have in mind follows this list.)
5. Is there a more appropriate forum for asking the main question ("How can a CGI server based on CGIHTTPRequestHandler require...")? It seems odd to ask if there is a better forum for asking programming questions than Stack Overflow, but I guess anything is possible.
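For reference, here is the kind of check I have in mind for the subprocess branch in 3a: a small helper (Python 2, untested, and assuming the script's raw output has already been captured as a string) that uses email.parser to look for a Content-Type header:

import email.parser

def has_content_type(cgi_output):
    # CGI output is a header block, a blank line, then the body.
    # Split off the header block and let email.parser look for Content-Type.
    head, sep, _body = cgi_output.partition('\r\n\r\n')
    if not sep:
        head, sep, _body = cgi_output.partition('\n\n')
    headers = email.parser.Parser().parsestr(head, headersonly=True)
    return 'content-type' in headers

A run_cgi() override could call this on the captured output and, when it returns False, send a clear explanatory error page instead of forwarding the header-free response to the client.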
Thanks for any help.
I have some test code (as a part of a webapp) that uses urllib2 to perform an operation I would usually perform via a browser:
Log in to a remote website
Move to another page
Perform a POST by filling in a form
I've created 4 separate, clean virtualenvs (with --no-site-packages) on 3 different machines, all with different versions of Python but the exact same packages (via a pip requirements file), and the code only works on the two virtualenvs on my local development machine (2.6.1 and 2.7.2) - it won't work on either of my production VPSs.
In the failing cases, I can log in successfully, move to the correct page but when I submit the form, the remote server replies telling me that there has been an error - it's an application server error page ('we couldn't complete your request') and not a webserver error.
Because I can successfully log in and maneuver to a second page, this doesn't seem to be a session or a cookie problem - it's particular to the final POST.
Because I can perform the operation on a particular machine with the EXACT same headers and data, this doesn't seem to be a problem with what I am requesting/posting.
Because I am trying the code on two separate VPSs rented from different companies, this doesn't seem to be a problem with the VPS physical environment.
Because the code works on 2 different Python versions, I can't imagine it being an incompatibility problem.
I'm completely lost at this stage as to why this wouldn't work. I've even 'turned-it-off-and-turn-it-on-again' because I just can't see what the problem could be.
I think it has to be something to do with the final POST coming from a VPS that the remote server doesn't like, but I can't figure out what that could be. I feel like there is something going on under the hood of urllib2 that is causing the remote server to dislike the request.
EDIT
I've installed the exact same Python version (2.6.1) on the VPS as on my working local copy and it still doesn't work remotely, so it must be something to do with originating from a VPS. How could this affect the HTTP request? Is it something lower level?
You might try setting the debuglevel=1 for urllib2 and see what it comes up with:
import urllib2
h=urllib2.HTTPHandler(debuglevel=1)
opener = urllib2.build_opener(h)
...
This is a total shot in the dark, but are your VPSs 64-bit and your home computer 32-bit, or vice versa? Maybe a difference in default sizes or accuracies of something could be freaking out the server.
Barring that, can you try to find out any information on the software stack the web server is using?
I had similar issues with urllib2 (working with Zimbra's REST API); in the end I switched to pycurl with success.
PS
For operations like login/navigate/post, I usually find Mechanize useful and easier to use. Maybe you can give it a shot.
Well, it looks like I know why the problem was happening, but I'm not 100% sure of the reason for it.
I simply had to make the server wait (time.sleep()) after it sent the 2nd request (Move to another page) before doing the 3rd request (Perform a POST by filling in a form).
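Concretely, the change amounted to something like this (a sketch with placeholder URLs and form fields, not the actual code):

import time
import urllib
import urllib2

# One opener with cookie handling so the login session is kept across requests.
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor())

# 1. Log in to the remote website
opener.open('https://example.com/login',
            urllib.urlencode({'username': 'me', 'password': 'secret'}))

# 2. Move to the page with the form
opener.open('https://example.com/some/page')

# 3. Give the remote application a moment before the final request
time.sleep(2)

# 4. Perform the POST by filling in the form
response = opener.open('https://example.com/form/submit',
                       urllib.urlencode({'field': 'value'}))
print response.read()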
I don't know if it's because of a condition on the 3rd-party server's side, or if it's some sort of odd issue with urllib2. The reason it seemed to work on my development machine is presumably that it was slower than the server at running the code?