Sending custom headers in the WebSocket handshake - Python

How do I send custom headers in the initial handshake of the WebSocket protocol?
I want to send a custom header in my initial request, "X-Abc-Def: xxxxx".
The WebSocket clients are Python and Android.

In the Python websocket-client library you can indeed pass custom headers easily: there is a keyword argument header for this. See the following example from the docs:
conn = create_connection("ws://echo.websocket.org/", header={"User-Agent": "MyProgram"})
Edit: Keyword should be header, not headers.
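Applied to the header from the question, that would look roughly like this (the header name and value are the question's placeholders; header also accepts a list of "Name: value" strings):
from websocket import create_connection

conn = create_connection(
    "ws://echo.websocket.org/",
    header={"X-Abc-Def": "xxxxx"},  # custom handshake header
)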

@ThiefMaster got it perfect.
If you want to add custom headers, you can do that with the argument header instead of headers.
Hope this helps.

Related

Is it possible to get only the headers, without fetching the body, using requests.get? The server blocks HEAD

In a configuration I am using, a minio server hosting files accepts only GET requests and does not accept HEAD requests. I need the header information to check the file type without fetching the entire file.
I would usually do this with requests.head(url), but as mentioned only the GET method is allowed.
In curl it is possible to do the following:
curl -I -X GET http://domain.dom/path/
which fetches only the headers of the URL but overrides the method used with GET.
Is there something equivalent for the Python3 requests package?
Unfortunately there doesn't seem to be a clean way to do this. If the server accepts the Range header, you could try requesting bytes 0 to 0, which gets you the header data but not the body. For example:
import requests

url = "http://stackoverflow.com"
# Ask for the first byte only; a server that honours range requests
# replies with the usual headers and an (almost) empty body.
headers = {"Range": "bytes=0-0"}
res = requests.get(url, headers=headers)
print(res.headers)
As said, this still depends on the server implementation. For reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Range
Based on the definition of a GET, it sounds like you could modify the request headers to include a range-request.
A client can alter the semantics of GET to be a "range request", requesting transfer of only some part(s) of the selected representation, by sending a Range header field in the request (Section 14.2).
I haven't tried this, but maybe setting a byte range of 0-1 would skip the body and you'd get the headers for free.
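A quick way to check whether a given server actually honours the range is to look at the status code; a minimal sketch along the lines of the answers above (the URL is the question's placeholder):
import requests

url = "http://domain.dom/path/"  # placeholder URL from the question
res = requests.get(url, headers={"Range": "bytes=0-0"})

if res.status_code == 206:
    # Range honoured: we got the headers and at most one byte of body.
    print(res.headers.get("Content-Type"))
    print(res.headers.get("Content-Range"))  # e.g. "bytes 0-0/12345" gives the total size
else:
    # Range ignored: the server sent a normal 200 and the full body anyway.
    print("Server ignored the Range header; status", res.status_code)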

How to get a header from a gRPC response with Python

I am trying to get the header of the gRPC response with the following code, and it doesn't work:
response = stub.GetAccounts(users_pb2.GetAccountsRequest(), metadata=metadata)
header = response.header()
This is what the header looks like in Kreya; does anyone know how to get the same header in Python?
I suspect (but don't know) that you can't access the underlying HTTP/2 response headers from the Python gRPC client.
You can configure various environment variables that expose underlying details (see gRPC environment variables), for example GRPC_TRACE="http" and GRPC_VERBOSITY="DEBUG".
If the headers are actually gRPC metadata, you can use Python's with_call and the call's initial_metadata and trailing_metadata, as shown in the gRPC metadata example here.
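If it is metadata, the call looks roughly like this (stub, users_pb2 and metadata are as defined in the question; with_call, initial_metadata() and trailing_metadata() are the standard grpcio API):
# Sketch: read response metadata via with_call on the unary-unary stub method.
response, call = stub.GetAccounts.with_call(
    users_pb2.GetAccountsRequest(), metadata=metadata
)
print(call.initial_metadata())   # metadata ("headers") sent before the response
print(call.trailing_metadata())  # metadata ("trailers") sent after the response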

Get HTTP request message from Request object in Scrapy

I need to somehow extract the plain HTTP request message from a Request object in Scrapy (so that I could, for example, copy/paste the request and replay it from Burp).
So given a scrapy.http.Request object, I would like to get the corresponding request message, e.g.
POST /test/demo_form.php HTTP/1.1
Host: w3schools.com
name1=value1&name2=value2
Clearly I have all the information I need in the Request object, but reconstructing the message manually is error-prone, as I could miss some edge cases. My understanding is that Scrapy first converts this Request into a Twisted object, which then writes the headers and body into a TCP transport. So maybe there's a way to do something similar, but write to a string instead?
UPDATE
I could use the following code to get an HTTP 1.0 request message, based on http.py. Is there a way to do something similar for HTTP 1.1 requests / http11.py, which is what is actually being sent? I would obviously like to avoid duplicating code from the Scrapy/Twisted frameworks as much as possible.
# Imports assumed for this snippet (module paths from older Scrapy / Twisted):
from scrapy.core.downloader import webclient
from twisted.test.proto_helpers import StringTransport

factory = webclient.ScrapyHTTPClientFactory(request)
transport = StringTransport()
protocol = webclient.ScrapyHTTPPageGetter()
protocol.factory = factory
protocol.makeConnection(transport)
request_message = transport.value()
print(request_message.decode("utf-8"))
As Scrapy is open source and has plenty of extension points, this should be doable.
The requests are finally assembled and sent out in scrapy/core/downloader/handlers/http11.py in ScrapyAgent.download_request (https://github.com/scrapy/scrapy/blob/master/scrapy/core/downloader/handlers/http11.py#L270).
If you place your hook there you can dump the request method, request headers, and request body.
To place your code there, you can either monkey patch ScrapyAgent.download_request, or subclass ScrapyAgent to do the request logging, subclass HTTP11DownloadHandler to use your agent, and then set that handler as the new DOWNLOAD_HANDLER for http/https requests in your project's settings.py (for details see: https://doc.scrapy.org/en/latest/topics/settings.html#download-handlers).
In my opinion this is the closest you can get to logging the requests going out without using a packet sniffer or a logging proxy (which might be a bit overkill for your scenario).
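A rough sketch of the monkey-patching variant (the module path comes from the answer above; Scrapy internals change between versions, so treat this as an illustration rather than a drop-in solution):
# Wrap ScrapyAgent.download_request to print each outgoing request.
# Run this early, e.g. from a custom extension or middleware module.
from scrapy.core.downloader.handlers import http11

_original_download_request = http11.ScrapyAgent.download_request

def logging_download_request(self, request):
    print(request.method, request.url)
    print(request.headers)  # Scrapy Headers object
    if request.body:
        print(request.body)
    return _original_download_request(self, request)

http11.ScrapyAgent.download_request = logging_download_request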

image download problem (python)

I am trying to download images with multiple threads, with a limited max_count, in Python.
Each time a download thread is started, I leave it alone and start another one. I want each download to finish within 5 s; that is, the download is considered failed if opening the URL takes more than 5 s.
But how can I detect that and stop the failed thread?
Can you tell us which version of Python you are using? A code snippet would also have helped.
Since Python 2.6, urllib2.urlopen accepts a timeout argument.
Hope this helps. The following is from the Python docs:
urllib2.urlopen(url[, data][, timeout])
Open the URL url, which can be either a string or a Request object.
Warning: HTTPS requests do not do any verification of the server's certificate.
data may be a string specifying additional data to send to the server, or None if no such data is needed. Currently HTTP requests are the only ones that use data; the HTTP request will be a POST instead of a GET when the data parameter is provided. data should be a buffer in the standard application/x-www-form-urlencoded format. The urllib.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format. The urllib2 module sends HTTP/1.1 requests with the Connection: close header included.
The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections.
This function returns a file-like object with two additional methods:
geturl(): return the URL of the resource retrieved, commonly used to determine whether a redirect was followed
info(): return the meta-information of the page, such as headers, in the form of a mimetools.Message instance (see Quick Reference to HTTP Headers)
Raises URLError on errors.
Note that None may be returned if no handler handles the request (though the default installed global OpenerDirector uses UnknownHandler to ensure this never happens). In addition, the default installed ProxyHandler makes sure the requests are handled through a proxy when one is set.
Changed in version 2.6: timeout was added.
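For the question's 5-second limit, a hedged sketch of what each worker thread could do (the function name and error handling are illustrative, not from the question):
import socket
import urllib2

def download_image(url, path, timeout=5):
    # The timeout makes the connection attempt (and individual socket reads)
    # raise instead of hanging indefinitely, so a slow download fails and the
    # worker thread can simply return.
    try:
        data = urllib2.urlopen(url, timeout=timeout).read()
    except (urllib2.URLError, socket.timeout):
        return False  # treat it as a failed download
    with open(path, "wb") as f:
        f.write(data)
    return True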

PUT Variables Missing between Python and Tomcat

I'm trying to get a PUT request from Python into a servlet in Tomcat. The parameters are missing when I get into Tomcat.
The same code is happily working for POST requests, but not for PUT.
Here's the client:
import httplib
import urllib

lConnection = httplib.HTTPConnection('localhost:8080')
lHeaders = {"Content-type": "application/x-www-form-urlencoded",
            "Accept": "text/plain"}
lParams = {'Username': 'usr', 'Password': 'password', 'Forenames': 'First', 'Surname': 'Last'}
lConnection.request("PUT", "/my/url/", urllib.urlencode(lParams), lHeaders)
Once on the server, request.getParameter("Username") returns null.
Does anyone have any clues as to where I'm losing the parameters?
I tried your code, and it seems that the parameters do get to the server. tcpdump shows:
PUT /my/url/ HTTP/1.1
Host: localhost
Accept-Encoding: identity
Content-Length: 59
Content-type: application/x-www-form-urlencoded
Accept: text/plain
Username=usr&Password=password&Surname=Last&Forenames=First
So the request gets to the other side correctly; it must be something in either the Tomcat configuration or the code that reads the parameters.
I don't know what the Tomcat side of your code looks like, or how Tomcat processes and provides access to request parameters, but my guess is that Tomcat is not "automagically" parsing the body of your PUT request into nice request parameters for you.
I ran into the exact same problem using the built-in webapp framework (in Python) on App Engine. It did not parse the body of my PUT requests into request parameters available via self.request.get('param'), even though they were coming in as application/x-www-form-urlencoded.
You'll have to check on the Tomcat side to confirm this, though. You may end up having to access the body of the PUT request and parse out the parameters yourself.
Whether or not your web framework should be expected to automagically parse out application/x-www-form-urlencoded parameters in PUT requests (like it does with POST requests) is debatable.
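For the "parse it yourself" route on the Python side (a minimal Python 2 sketch, not the Tomcat fix; raw_body stands in for however your framework exposes the raw PUT body):
import urlparse

# Parse an application/x-www-form-urlencoded PUT body by hand.
raw_body = "Username=usr&Password=password&Forenames=First&Surname=Last"
params = urlparse.parse_qs(raw_body)
print(params.get("Username", [None])[0])  # 'usr'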
I'm guessing here, but I think the problem is that PUT isn't meant to be used that way. The intent of PUT is to store a single entity, contained in the request, into the resource named in the headers. What's all this stuff about usernames and passwords?
Your Content-Type is application/x-www-form-urlencoded, which is a bunch of form fields. What PUT wants is something more like an encoded file: a single chunk of data it can store somewhere.
