I am working on implementing the gRPC generated code for the flatbuffers library, and I've noticed that the Python gRPC library doesn't use the user-set content-type. According to the documentation, the user can set a custom subtype in the Content-Type header: "content-type" "application/grpc" [("+proto" / "+json" / {custom})]. I've been trying to simply send a +flatbuffers suffix, which breaks the interaction between the Go server and the Python client (I'm trying to make sure that all the languages can work together without any issues).
example:
Go lang to Go lang
:method POST
:scheme http
:path /models.Greeter/SayHello
:authority localhost:3000
content-type application/grpc+flatbuffers
user-agent grpc-go/1.35.0
te trailers
grpc-timeout 9997779u
As you can see, when the call comes from Go there are no issues; however, when I call the server from the Python client, it clearly doesn't append the +flatbuffers suffix:
python to Go lang
:scheme http
:method POST
:authority localhost:3000
:path /models.Greeter/SayHello
te trailers
content-type application/grpc
user-agent grpc-python/1.35.0 grpc-c/14.0.0 (osx; chttp2)
steps taken:
1- I've tried adding options to the channel:
with grpc.insecure_channel('localhost:' + args.port, options=[('content-type', 'application/grpc+flatbuffers')]) as channel:
2- I've tried adding it as metadata on the with_call, following https://github.com/grpc/grpc/blob/v1.35.0/examples/python/metadata/metadata_client.py:
with_call(hello_request, metadata=[('content-type', 'application/grpc+flatbuffers')], compression=None)
However, both options fail.
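For reference, this is roughly the consolidated client I'm testing with. It's only a minimal sketch: the generated module, the GreeterStub name, and the build_hello_request() helper are placeholders for my flatbuffers-generated code.
import grpc

# Placeholder import for the flatbuffers-generated gRPC module.
from models import greeter_grpc_fb as greeter_grpc

# Placeholder: hello_request is built with the flatbuffers builder (elided).
hello_request = build_hello_request()

# Attempt 1: pass the content-type as a channel option.
options = [('content-type', 'application/grpc+flatbuffers')]
with grpc.insecure_channel('localhost:3000', options=options) as channel:
    stub = greeter_grpc.GreeterStub(channel)

    # Attempt 2: pass it as per-call metadata.
    response, call = stub.SayHello.with_call(
        hello_request,
        metadata=[('content-type', 'application/grpc+flatbuffers')],
        compression=None)
    print(response)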
Am I missing something here?
Related
Hope you're doing great!
The usecase
I'm trying to PoC something on AWS. The use case is that we need to be able to check, across all our infrastructure, that every instance is reachable through AWS Session Manager.
In order to do that, I will use a Lambda in Python 3.7; for now I'm building the PoC locally. I'm able to open the websocket, send the Token Payload, and get an output that contains a shell.
The problem is that the byte output contains characters that Python's decode function can't decode in any of the character encodings I've tested; every time, something blocks.
The output
Here is the output I get after sending the payload:
print(event)
b'\x00\x00\x00toutput_stream_data \x00\x00\x00\x01\x00\x00\x01m\x1a\x1b\x9b\x15\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\xb1\x0b?\x19\x99A\xfc\xae%\xb2b\xab\xfd\x02A\xd7C\xcd\xd8}L\xa8\xb2J\xad\x12\xe3\x94\n\xed\xb81\xfa\xb6\x11\x18\xc2\xecR\xf66&4\x18\xf6\xbdd\x00\x00\x00\x01\x00\x00\x00\x10\x1b[?1034hsh-4.2$ '
What I already tried
I researched a lot on Stack Overflow and tried to decode with ascii, cp1252, cp1251, cp1250, iso8859-1, utf-16, utf-8, and utf_16_be, but every time it either decodes to nothing useful or raises an error because a character is unknown.
I also tried chardet.detect, but the returned encoding doesn't work and the reported confidence is really low. I also tried to strip the \x00 bytes, but strip doesn't help here.
I already know that shell output can sometimes contain coloring characters and other things that make it look garbled, but here I tried running colorama over it and matching ANSI escape characters with some regexes, and nothing successfully decodes this bytes response.
The code
Here is the code for my PoC; feel free to try it yourself, you just have to change the target instance id (your instance needs to have the latest amazon-ssm-agent running on it).
import boto3
import uuid
import json
from websocket import create_connection
# Setting the boto3 client and the target
client = boto3.client('ssm','eu-west-1')
target = 'i-012345678910'
# Starting a session, this return a WebSocket URL and a Token for the Payload
response = client.start_session(Target=target)
# Creating a session with websocket.create_connection()
ws = create_connection(response['StreamUrl'])
# Building the Payload with the Token
payload = {
    "MessageSchemaVersion": "1.0",
    "RequestId": str(uuid.uuid4()),
    "TokenValue": response['TokenValue']
}
# Sending the Payload
ws.send(json.dumps(payload))
# Receiving, printing and measuring the received message
event = ws.recv()
print(event)
print(len(event))
# Sending pwd, that should output /usr/bin
ws.send('pwd')
# Checking the result of the received message after the pwd
event = ws.recv()
print(event)
print(len(event))
Expected output
In the final solution, I expect to be able to do something like a curl http://169.254.169.254/latest/meta-data/instance-id through the websocket, and compare the instance-id in the command output against the target, to validate that the instance is reachable. But I need to be able to decode the websocket output before achieving that.
Thank you in advance for any help on this.
Enjoy the rest of your day!
As per my reading of the amazon-ssm-agent code, the payloads exchanged via the websocket connection and managed by the session-manager channels follow a specific structure called the AgentMessage.
You will have to comply with this structure to use session-manager with the remote agent through the MGS Service, which means serializing messages and deserializing responses.
The fields of the above struct are also broken down into models via additional structs.
It shouldn't take too long to re-implement that in Python. Good luck!
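For what it's worth, a rough sketch of how that deserialization could look in Python is below. The field layout is my reading of the Go structs in the agent source (a big-endian header followed by the payload), so treat the offsets as assumptions to verify against the code.
import struct

def parse_agent_message(data):
    # Header layout as I read it from the agent's Go code (all big-endian):
    # header_length(4) | message_type(32, padded) | schema_version(4) |
    # created_date(8) | sequence_number(8) | flags(8) | message_id(16) |
    # payload_digest(32) | payload_type(4) | payload_length(4) | payload
    header_length = struct.unpack('>I', data[0:4])[0]
    message_type = data[4:36].decode('ascii').strip(' \x00')
    payload_length = struct.unpack('>I', data[header_length:header_length + 4])[0]
    payload = data[header_length + 4:header_length + 4 + payload_length]
    return message_type, payload

# Applied to the output_stream_data message printed in the question:
message_type, payload = parse_agent_message(event)
print(message_type)              # output_stream_data
print(payload.decode('utf-8'))   # the shell prompt, ANSI escapes included
With the binary header stripped off, the payload of the message shown above decodes cleanly as UTF-8 (it still contains the terminal escape sequence \x1b[?1034h, but that is ordinary ANSI noise). Sending commands like pwd would similarly require wrapping them in a serialized AgentMessage rather than sending the raw string.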
So currently I'm having trouble setting my program up to use a different proxy for all Internet-related tasks, and I'm also getting an error stating: Failed to parse.
Now, with the error, I'm not quite sure of the correct approach to fix it, since I'm pulling the information out of a .dat file; I believe it may be an encoding issue.
Anyway, as for the proxies, I've looked at other posts and submissions for a while now, but it's still not running properly. I also see posts relating to urllib2 all the time... and that's not what I want.
This is basically what I'm trying proxy wise:
proxy = [ . . . ]
os.environ['https_proxy'] = 'http://' + proxy
And this is also the error that follows:
Failed to parse: x.xxx.xxx.xxx:xxxx
Overall thanks for your help, I appreciate it.
As far as I know, there's no magic bullet to change the proxy settings of a "whole program". You have to find out which modules you are using (or the modules that something you use directly depends on - e.g. pandas uses urllib2) and pass them the proxy configuration. Below are two examples with the most commonly used modules: urllib2 and requests.
Example of using proxies in urllib2 (on a module level):
import urllib2

proxy = {"http": "10.111.111.111:8080", "https": "10.111.111.222:8080"}
opener = urllib2.build_opener(urllib2.ProxyHandler(proxy))
urllib2.install_opener(opener)  # all later urllib2.urlopen() calls go through the proxy
resp = urllib2.urlopen('http://www.google.com')
Example of using proxies in requests (on the function call level):
import requests

proxy = {"http": "10.111.111.111:8080", "https": "10.111.111.222:8080"}
resp = requests.get('http://www.google.com', proxies=proxy)  # proxies apply to this call only
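For completeness: requests will also pick up the standard http_proxy / https_proxy environment variables, which is closest to what you were originally attempting; just make sure the value parses cleanly as a URL (scheme included, no stray whitespace). A minimal sketch, with placeholder addresses:
import os
import requests

# Placeholder proxy addresses; note the values are full URLs.
os.environ['http_proxy'] = 'http://10.111.111.111:8080'
os.environ['https_proxy'] = 'http://10.111.111.222:8080'

resp = requests.get('http://www.google.com')  # proxy is taken from the environment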
The reason I was receiving the error mentioned above is that, when retrieving the proxies from the .dat file, I wasn't stripping the '\n' or ' ' (the ' ' only matters if your data includes spaces, which proxy strings obviously don't).
So the fix is to use .strip on the retrieved information, such as:
fileList = []
with open('[ . . . ].dat') as f:
    for line in f:
        fileList.append(line.strip('\n').strip(' '))
return fileList[x]
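Note that a bare .strip() with no arguments removes the trailing newline and any surrounding spaces in one call, for example:
>>> '10.111.111.111:8080 \n'.strip()
'10.111.111.111:8080'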
I'm following the Diazo Quickstart instructions. I start the proxy and issue a request; I get a response, but I'm not sure the response is what I'm meant to be getting.
The Quickstart reads:
The idea is to run a local webserver that applies the Diazo theme to a
response coming from an existing website, either locally or somewhere
on the internet
... only that seems inconsistent with this part of the proxy.ini contained in the Quickstart guide ...
# Proxy http://diazo.org as the content
[app:content]
use = egg:Paste#proxy
address = http://diazo.org/
suppress_http_headers = accept-encoding
... after seeing that, I thought what was going to happen was that the content from diazo.org would be embedded within theme.html, with theme.css applied to that content.
What actually happens (as far as I can tell) is that if I request ...
http://localhost:8000
... I receive an unaltered copy of the content at http://diazo.org.
On the other hand if I amend proxy.ini so that the app:content reads as follows ...
# Proxy http://diazo.org as the content
[app:content]
use = egg:Paste#proxy
address = http://python.org/
suppress_http_headers = accept-encoding
I don't get the content of python.org sent to the browser; I get the content of the theme.html file!
The proxy.ini is identical to the one shown in the Quickstart guide, except that I'm using port 8000 and, as mentioned above, I've changed the address within the app:content section for testing purposes.
So my question is: what am I meant to see if I complete the steps in the Quickstart document? And if it's meant to be a slightly altered version of the diazo.org content (as I suspect), why don't I see the same alterations when I change the target of the proxy?
Modify your rules.xml:
<replace css:theme-children="#content" css:content-children=".container" />
This rule replaces the children of the theme's #content element with the children of the content page's .container element, so the proxied content actually shows up inside the theme.
I want to set up a proxy for OpenLayers to use, so I followed these steps:
Downloaded the proxy.cgi file from the OpenLayers web site:
http://trac.osgeo.org/openlayers/browser/trunk/openlayers/examples/proxy.cgi
Modify the proxy.cgi file to include my domain in the allowedHosts list:
allowedHosts = ['localhost:6901']
Copy the proxy.cgi file to the following folder:
$TOMCAT_PATH$/webapps/yourApp/WEB-INF/cgi/
Modify the file web.xml of your web app by adding the sections below. You find the file at
$TOMCAT_PATH$/webapps/yourApp/WEB-INF/web.xml
Comment: In case the web.xml file doesn’t exist for your webapp, just create it yourself or copy it from another webapp and modify it. (created!)
Comment: the “param-value” for the “executable” parameter has to contain the path to your Python installation. (it does!)
<servlet>
<servlet-name>cgi</servlet-name>
<servlet-class>org.apache.catalina.servlets.CGIServlet</servlet-class>
<init-param>
<param-name>debug</param-name>
<param-value>0</param-value>
</init-param>
<init-param>
<param-name>cgiPathPrefix</param-name>
<param-value>WEB-INF/cgi</param-value>
</init-param>
<init-param>
<param-name>executable</param-name>
<param-value>c:\python25\python.exe</param-value>
</init-param>
<init-param>
<param-name>passShellEnvironment</param-name>
<param-value>true</param-value>
</init-param>
<load-on-startup>5</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>cgi</servlet-name>
<url-pattern>/cgi-bin/*</url-pattern>
</servlet-mapping>
Modify the file context.xml of your web app by adding the element below. You find the file at $TOMCAT_PATH$/webapps/yourApp/META-INF/context.xml
Restart Tomcat
To use the proxy with OpenLayers, just include this single line into your code:
OpenLayers.ProxyHost = "/yourWebApp/cgi-bin/proxy.cgi?url=";
But when I try to use it like:
/webappname/cgi-bin/proxy.cgi?url=labs.metacarta.com
I get this error:
Some unexpected error occurred. Error text was: list index out of range
I think it's related to os.environ["REQUEST_METHOD"], but I don't know how.
You're asking for an environment variable that isn't defined.
You need to either catch and handle the exception or use os.environ.get:
try:
    methodq = os.environ["REQUEST_METHOD"]
except KeyError:
    methodq = "default value"
Or:
methodq = os.environ.get("REQUEST_METHOD", "default value")
You're submitting:
/webappname/cgi-bin/proxy.cgi?url=labs.metacarta.com
But the proxy.cgi script is trying to do this:
host = url.split("/")[2]
Try http://labs.metacarta.com for your url so proxy.cgi has some slashes to split on, or modify it to parse the url a little smarter.
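If you want the script to tolerate scheme-less URLs, a smarter parse could look something like the sketch below (the extract_host helper is just for illustration; the stock proxy.cgi targets Python 2, hence urlparse):
import urlparse  # in Python 3 this lives in urllib.parse

def extract_host(url):
    # Prepend a scheme if the caller omitted one, so urlparse fills in netloc.
    if '://' not in url:
        url = 'http://' + url
    return urlparse.urlparse(url).netloc

print(extract_host('labs.metacarta.com'))           # labs.metacarta.com
print(extract_host('http://labs.metacarta.com/x'))  # labs.metacarta.com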
The answer is: you don't install or use a CGI proxy on Tomcat.
CGI is for an Apache server or IIS used as a front-end server; Tomcat may sit behind it. The Apache configuration is detailed in: http://tomcat.apache.org/tomcat-6.0-doc/proxy-howto.html
Be warned that OpenLayers itself notes that its proxy.cgi is only an example and may not have good enough checks to stop it from being exploited, i.e. it may run some malicious script.
If you are serving your OpenLayers client page from Tomcat alone and it contains layers from another GeoServer or MapServer, you can use a proxy servlet and specify it as:
OpenLayers.ProxyHost = "servlet URL on the server that served this page";
http://wiki.apache.org/tomcat/ServletProxy
https://svn.openxdata.org/Experimental/openXmapper/trunk/gwt-openlayers-server/src/main/java/org/gwtopenmaps/openlayers/server/GwtOpenLayersProxyServlet.java
There are reverse proxy and rewrite servlet solutions to this, too; please Google for those.
Background: I'm using https://bitbucket.org/mariocesar/django-hgwebproxy/wiki/Home to add a Mercurial browser to a Django site I'm building.
The problem I'm having is this: the particular files we're storing in the HG repo are BIND zone files and happen to be named like /some/path/somedomain.com, which causes hgweb to set the content-type to application/x-msdos-program (when the content is really text/plain) when returning the raw view of the file. The incorrect content-type causes hgwebproxy to dump the content into the page template rather than just return it. It does a test like this to skip templating:
if response['content-type'].split(';')[0] in ('application/octet-stream', 'text/plain'):
    return response
Some possible solutions are, of course:
Rename all the files to .zone (Lame and time consuming)
Hack hgwebproxy to pass application/x-msdos-program (Lame and dirty)
Convince hgweb to use the correct content-type (Awesome! I hope you'll help)
hgweb uses the mimetypes module to detect the MIME type of a file. You might be able to override the ".com" suffix detection by adding a mapping to one of the files mimetypes reads when it initializes. See mimetypes.knownfiles:
>>> import mimetypes
>>> mimetypes.init()
>>> mimetypes.knownfiles
['/etc/mime.types', '/etc/httpd/mime.types', '/etc/httpd/conf/mime.types', '/etc/apache/mime.types', '/etc/apache2/mime.types', '/usr/local/etc/httpd/conf/mime.types', '/usr/local/lib/netscape/mime.types', '/usr/local/etc/httpd/conf/mime.types', '/usr/local/etc/mime.types']
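As a quick sanity check, the mapping can also be inspected and overridden in-process; a minimal sketch, assuming the somedomain.com example filename and that the x-msdos-program guess comes from one of the mime.types files above:
import mimetypes

mimetypes.init()
print(mimetypes.guess_type('somedomain.com'))  # likely ('application/x-msdos-program', None) on your box

# Override the ".com" extension mapping for this process only.
mimetypes.add_type('text/plain', '.com')
print(mimetypes.guess_type('somedomain.com'))  # ('text/plain', None)
For hgweb itself, which runs in a separate process, the persistent route is the one suggested above: add or override the "com" entry in one of the mime.types files that mimetypes reads.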