Connect GAE Remote API to dev_appserver.py

I want to execute a Python script that connects to my local dev_appserver.py instance to run some DataStore queries.
The dev_appserver.py is running with:
builtins:
- remote_api: on
As per https://cloud.google.com/appengine/docs/python/tools/remoteapi I have:
remote_api_stub.ConfigureRemoteApiForOAuth(
    hostname,
    '/_ah/remote_api'
)
in the Python script, but what should the hostname be set to?
For example, when dev_appserver.py started, it prints:
INFO 2016-10-18 12:02:16,850 api_server.py:205] Starting API server at: http://localhost:56700
But when I set the value to localhost:56700, I get the following error:
httplib2.SSLHandshakeError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
(I get the same error for any port that has anything running on it - e.g. 8000, 8080, etc.)
If anyone has managed to get this to run successfully, what hostname did you use?
Many thanks,
Ned

dev_appserver.py doesn't support SSL (I can't find the doc reference anymore), so it can't answer https:// requests.
You could try using http-only URLs (I'm not sure if that's possible with the remote API - I haven't used it myself; you may need to disable the handlers' secure option in your app.yaml config files); see the sketch below.
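For example, here is a minimal sketch of connecting over plain HTTP, assuming the SDK's pre-OAuth ConfigureRemoteApi helper (which takes a dummy auth function - the dev server accepts any credentials - and a secure flag), pointed at the API server port that dev_appserver.py printed at startup:
from google.appengine.ext.remote_api import remote_api_stub

# Dev server only: any credentials are accepted, and passing None as the
# app id lets the stub fetch it from the server itself.
remote_api_stub.ConfigureRemoteApi(
    None,
    '/_ah/remote_api',
    lambda: ('user@example.com', ''),
    'localhost:56700',
    secure=False)

# Datastore calls in this process now go to the local datastore.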
At least on my devserver I am able to point my browser at the http-only API server URL reported by dev_appserver.py at startup, and I see {app_id: dev~my_app_name, rtok: '0'}.
Or you could set up a proxy server; see GAE dev_appserver.py over HTTPS.

Connecting to Elasticsearch via python

I am running elasticsearch-8.6.1 with default settings on an Azure VM, with port 5601 open. This is a dev server with only one cluster. I am able to start Elasticsearch, Kibana and Logstash services and view them via a browser.
I have some Python code which is trying to connect to Elasticsearch using the recommended route of verifying HTTPS through the CA certificate, as per https://www.elastic.co/guide/en/elasticsearch/client/python-api/master/connecting.html
I have copied the http_ca.crt file from the VM onto my local machine and made it accessible.
from elasticsearch import Elasticsearch

es = Elasticsearch('https://localhost:9200',
                   ca_certs=CA_CERT,
                   basic_auth=(USER_ID, ELASTIC_PASSWORD))
elasticsearch.yml has the following enabled:
network.host: 0.0.0.0
http.host: 0.0.0.0
xpack.security.enabled: true
I appreciate that I can turn off security, but this isn't a sustainable approach moving forward.
The error I am getting is:
elastic_transport.ConnectionError: Connection error caused by: ConnectionError(Connection error caused by: NewConnectionError(<urllib3.connection.HTTPSConnection object at 0x000001890CEF3730>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it))
I suspect there is some configuration setting that I am missing somewhere.
Thanks in advance for any advice or pointers that can be offered.
The error message suggests that the Python code is unable to establish a connection to Elasticsearch on the specified host and port. There could be several reasons for this, including network configuration issues or problems with SSL/TLS certificates.
Here are some things you could try to troubleshoot the issue:
Check that Elasticsearch is running and listening on the correct host and port. You can use the curl command to test the connection:
curl -k https://localhost:9200
If Elasticsearch is running, you should see a JSON response with information about the cluster.
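If curl isn't available (the [WinError 10061] in the traceback suggests a Windows client), a plain-socket probe from Python can at least distinguish "nothing is listening" from a TLS problem; a minimal sketch, assuming you are testing against localhost:9200 as in the question:
import socket

# connect_ex returns 0 when a TCP connection succeeds; an error number
# (10061 on Windows, 111 on Linux) means the connection was refused,
# i.e. nothing accepted it at all - TLS never even enters the picture.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
result = sock.connect_ex(('localhost', 9200))
print('port reachable' if result == 0 else 'connect failed, errno %s' % result)
sock.close()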
Check that the SSL/TLS certificate is valid and trusted by the Python client. You can use the openssl command to check the certificate:
openssl x509 -in http_ca.crt -text -noout
This will display detailed information about the certificate. Make sure that the Issuer and Subject fields match and that the Validity dates are correct.
Check that the firewall on the Azure VM is not blocking incoming traffic on port 9200. You can use the ufw command to check the firewall rules:
sudo ufw status
If port 9200 is not listed as "ALLOW", you can add a new rule:
sudo ufw allow 9200/tcp
Check that the Python client is using the correct ca_certs file. Make sure that the CA_CERT variable in your code points to the correct file location.
Check the Elasticsearch logs for any error messages that might indicate the cause of the connection problem. The logs are usually located in the logs directory of the Elasticsearch installation.
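Once these basics check out, you can exercise the same settings from Python and surface the underlying error directly; a minimal sketch reusing the names from the question:
from elasticsearch import Elasticsearch

es = Elasticsearch('https://localhost:9200',
                   ca_certs=CA_CERT,
                   basic_auth=(USER_ID, ELASTIC_PASSWORD))
try:
    print(es.info())  # prints cluster name and version on success
except Exception as exc:
    print('connection failed:', exc)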
Hopefully, one of these steps will help you resolve the issue. Good luck!

How do I change the port on which Pulsar is running?

I want to run Pulsar along with Apache Airflow. The catch is that both run on port 8080. I do not want to change Airflow's configuration, but in order to make Pulsar run, I have to assign it another port. I am using a Python library which runs Airflow on 8080.
I tried going through the standalone installation document but could not get anything out of it. My aim is to run the Python client of Pulsar.
How do I change the port configuration of Pulsar?
You need to change the webServicePort configuration in conf/broker.conf.
Here's the GitHub link: https://github.com/apache/pulsar/blob/master/conf/broker.conf
...
# Port to use to server HTTP request
webServicePort=8080
# Port to use to server HTTPS request - By default TLS is disabled
webServicePortTls=
# Hostname or IP address the service binds on, default is 0.0.0.0.
bindAddress=0.0.0.0
...
For standalone, you can edit the conf/standalone.conf config file and set webServicePort=8081
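Note that the Pulsar Python client does not go through the HTTP port at all: it speaks the binary protocol on brokerServicePort (6650 by default), so it doesn't clash with Airflow's 8080 even before you change webServicePort. A minimal sketch against a standalone broker on the default broker port:
import pulsar

# The client URL uses the broker service port, not webServicePort
client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('my-topic')
producer.send('hello'.encode('utf-8'))
client.close()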

Route url to GAE Flex server while rest of site runs on GAE Standard

Using Google App Engine Standard Python 2.7, I have a path in my dispatch.yaml specifying that all URLs of the form "*/flex/*" route to the flex service.
dispatch.yaml
dispatch:
- url: '*/flex/*'
  module: flex
The flex environment is a custom Python 3.7 runtime which is normally executed using:
python dev_appserver.py flex.yaml --custom_entrypoint="docker run -p 9090:8080 flex_app".
With other services in my environment, I attempt to launch a dev environment with the command:
python dev_appserver.py dispatch.yaml default.yaml sync.yaml task.yaml flex.yaml --custom_entrypoint="docker run -p 9090:8080 flex_app" --port=8080 --skip_sdk_update_check
However, when this starts, it assigns local IP addresses and ports to each service, whereas I need the flex service to be accessed on port 9090.
Example server output:
INFO devappserver2.py:278] Skipping SDK update check.
INFO dispatcher.py:223] Starting dispatcher running at: http://0.0.0.0:8080
INFO dispatcher.py:256] Starting module "default" running at: http://0.0.0.0:8081
INFO dispatcher.py:256] Starting module "sync" running at: http://0.0.0.0:8082
INFO dispatcher.py:256] Starting module "task" running at: http://0.0.0.0:8083
INFO dispatcher.py:256] Starting module "flex" running at: http://0.0.0.0:8084
I am able to successfully access the flex app if I hit the URL localhost:9090. However, if I access localhost:8084 or localhost:8080/flex/, I receive the error:
503 - This request has timed out.
The server logs reflect this but do not show an actual error:
INFO module.py:861] flex: "GET / HTTP/1.1" 503 59
Is it possible to dispatch URLs from GAE Standard environments to a Flex environment and have them route from the designated port to the desired port? I would think this is possible, as Google App Engine's docs specify that it is possible to mix the environments. I've also attempted to solve this by forcing docker to run on port 8084, but the ports can't be shared.
Found this by looking at dev_appserver.py --help. Turns out the answer was simply changing the custom_entrypoint to the command docker run -p {port}:8080 flex_app, which automatically forwards GAE's randomly assigned port to the docker instance.
--custom_entrypoint CUSTOM_ENTRYPOINT
specify an entrypoint for custom runtime modules. This
is required when such modules are present. Include
"{port}" in the string (without quotes) to pass the
port number in as an argument.
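Applied to the command from the question, that would look like:
python dev_appserver.py dispatch.yaml default.yaml sync.yaml task.yaml flex.yaml --custom_entrypoint="docker run -p {port}:8080 flex_app" --port=8080 --skip_sdk_update_check
The dev server then substitutes the port it assigned to the flex module (8084 in the output above) into the docker run command, so the dispatch routing lines up.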
The development server can only be used for 1st generation standard environment apps; it doesn't work with flexible apps - see How to use Python 3 with Google App Engine's Local Development Server.
I think your attempt just ends up running the service as a standard environment one, not a flexible one (chances of it running correctly are pretty slim).
To run correctly you'd have to drop it from the local dev_appserver execution. Cross-service links to the flexible service would need some sort of local hack to use the 9090 port (via env variables, as sketched below, or simply some hardcoded values); you won't be able to use the dispatch.yaml routing in this case (since the local devserver won't know about the flexible service's existence).
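A hypothetical illustration of the env-variable workaround (FLEX_SERVICE_URL is a made-up name for this example):
import os

# Locally, point cross-service calls at the docker container on 9090;
# in production this would be the flexible service's real URL.
FLEX_BASE_URL = os.environ.get('FLEX_SERVICE_URL', 'http://localhost:9090')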

Django Nginx uWSGI: Could not connect to the requested server host

I tried to follow this tutorial for deploying a Django app on EC2; however, I am getting the error:
"Could not connect to the requested server host"
when trying to deploy the first site from the tutorial. The only thing I changed is the server_name, from firstsite.com to the public IP of the machine. Please help me figure out where I can look to find the cause.
My guess is your EC2 instance has firewall rules (security groups) which prevent you from connecting to your app. You can follow these directions to enable inbound traffic to your instance.
Fixed my error. Firstly, it was my browser's cache of the EC2 address; secondly, my nginx config was a symbolic link that had not been updated. I uninstalled nginx, tried to replicate the scenario, and then updated my symbolic links, which got my site up and running.

Django Ldap authentication timed out

I am hosting a Django-based site on a local machine (I have full access/control to it).
This site authenticates users against a remote Active Directory via the Django LDAP plugin.
Authenticating against the LDAP server used to work!
Now, when trying to authenticate against the LDAP server, the request just hangs until it times out. I couldn’t find anything useful in the logs.
The server setup is:
Nginx, Django 1.3, Fedora 15, MySQL 5.1.
I don’t know what logs I should try to look at.
(I've tried looking in nginx access and error logs but to no use.)
Things I tried:
Running the site on Django's development server and accessing it via localhost (not going through Nginx, but running python manage.py directly, via the runserver command). This works.
Running ldapsearch from the command line. This works.
Edit:
I used Wireshark to look at the back-and-forth with the LDAP server. The interaction seems to be fine - Django sends a request to bind and receives a success message, then sends a search query and a user object is returned. However, after this communication Django seems to hang. When I Ctrl-C in the Django shell after running authenticate(username=user, password=pass), the stack trace is sitting somewhere in the django-ldap library.
Please help, I have no idea what changed that caused this problem.
Thank you in advance
Active Directory does not allow anonymous binds for authorization; you can bind anonymously but you cannot do anything else.
Check if the user that is being used to bind with AD has valid credentials (i.e., the account hasn't expired). If it has expired, you'll get these strange errors.
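For reference, a minimal sketch of the relevant settings, assuming the django-auth-ldap plugin (the host and DNs are placeholders, not values from the question); an explicit bind account satisfies AD's no-anonymous-bind rule, and a network timeout makes a silent directory fail fast instead of hanging:
# settings.py
import ldap
from django_auth_ldap.config import LDAPSearch

AUTH_LDAP_SERVER_URI = 'ldap://ad.example.com'
# AD requires an authenticated bind for searching
AUTH_LDAP_BIND_DN = 'CN=svc_django,CN=Users,DC=example,DC=com'
AUTH_LDAP_BIND_PASSWORD = 'secret'
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    'CN=Users,DC=example,DC=com', ldap.SCOPE_SUBTREE,
    '(sAMAccountName=%(user)s)')
# Fail fast instead of hanging when the directory doesn't answer
AUTH_LDAP_CONNECTION_OPTIONS = {ldap.OPT_NETWORK_TIMEOUT: 10}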
