How to get access to the Network response area with Chrome DevTools Protocol? - python

I'm trying to get access to the Network response area (the body of a network response).
I'm using the following code, but it does not work.
import websocket
import json
from pprint import pprint
ws = websocket.WebSocket()
ws.connect("ws://localhost:9222/devtools/page/5EC90A588BEC2DA0229988D28BA67495")
ws.send(json.dumps({"method": "Network.getResponseBody", "id": })) # don't know where I can find it
response = ws.recv()
pprint(response)
And I'm not sure I'm doing the right thing.
So, does anybody know how to do this?
P.S. I know I can make a direct request to the API endpoint and get the JSON object, but I need to do it with the Chrome DevTools Protocol.

OK, I finally found it!
You just need to use the following method: "Network.getResponseBody".
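For completeness, here is a minimal sketch of how that method fits together, assuming Chrome was started with --remote-debugging-port=9222 and reusing the page target URL from the question (your target id will differ; it can be looked up at http://localhost:9222/json). The "id" field is just an arbitrary message number you pick yourself; the requestId that Network.getResponseBody needs comes from a Network.responseReceived event, which only shows up after calling Network.enable and triggering some network traffic (e.g. reloading the page).
import json
import websocket

ws = websocket.WebSocket()
# The target id after /devtools/page/ is session-specific.
ws.connect("ws://localhost:9222/devtools/page/5EC90A588BEC2DA0229988D28BA67495")

# "id" is any integer you choose; Chrome echoes it back so you can match replies to commands.
ws.send(json.dumps({"id": 1, "method": "Network.enable"}))

# Wait for a Network.responseReceived event to learn a requestId.
request_id = None
while request_id is None:
    message = json.loads(ws.recv())
    if message.get("method") == "Network.responseReceived":
        request_id = message["params"]["requestId"]

# Ask for the body of that specific response.
ws.send(json.dumps({"id": 2, "method": "Network.getResponseBody", "params": {"requestId": request_id}}))

# Read messages until the reply with our id arrives.
while True:
    reply = json.loads(ws.recv())
    if reply.get("id") == 2:
        print(reply["result"]["body"])  # base64-encoded if reply["result"]["base64Encoded"] is True
        break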

Related

Filter the response from Websocket connection

I am attempting to filter the information from a websocket request.
I can complete my request fine; however, the response comes back with more information than I actually require. I want to filter that information out and then use it in a variable.
For example, if I just use the sample code from the ByBit websocket API:
import json
from websocket import create_connection
ws = create_connection("wss://stream-testnet.bybit.com/realtime")
ws.send('{"op": "subscribe", "args": ["instrument_info.100ms.BTCUSD"]}');
bybitresult = ws.recv()
print(bybitresult)
ws.close()
I get the response below
{"topic":"instrument_info.100ms.BTCUSD","type":"snapshot","data":{"id":1,"symbol":"BTCUSD","last_price_e4":192785000,"last_price":"19278.50","bid1_price_e4":192780000,"bid1_price":"19278.00","ask1_price_e4":192785000,"ask1_price":"19278.50","last_tick_direction":"ZeroPlusTick","prev_price_24h_e4":192650000,"prev_price_24h":"19265.00","price_24h_pcnt_e6":700,"high_price_24h_e4":204470000,"high_price_24h":"20447.00","low_price_24h_e4":187415000,"low_price_24h":"18741.50","prev_price_1h_e4":192785000,"prev_price_1h":"19278.50","price_1h_pcnt_e6":0,"mark_price_e4":192886700,"mark_price":"19288.67","index_price_e4":193439800,"index_price":"19343.98","open_interest":467889481,"open_value_e8":0,"total_turnover_e8":1786988413378107,"turnover_24h_e8":65984748882,"total_volume":478565052570,"volume_24h":12839296,"funding_rate_e6":-677,"predicted_funding_rate_e6":-677,"cross_seq":5562806725,"created_at":"2018-12-29T03:04:13Z","updated_at":"2022-10-25T06:09:48Z","next_funding_time":"2022-10-25T08:00:00Z","countdown_hour":2,"funding_rate_interval":8,"settle_time_e9":0,"delisting_status":"0"},"cross_seq":5562806725,"timestamp_e6":1666678189180180}
However, I only want to use some of the data, for example 'last_price' from the "data" object and the top-level 'timestamp_e6'. I have attempted this by trying to split the output string, but am not having any luck at the moment.
Any help would be greatly appreciated. Thank you.
The string received from ws.recv() is in JSON format. This string can be turned into a dictionary by doing something like:
import json
bybitresult = json.loads(ws.recv())
From there, you can get any data out of it as you would with a dictionary.
The response is JSON, so convert the string into a dictionary with the json package; then you can read the values you need by key:
import json
dict_bybitresult = json.loads(bybitresult)
last_price = dict_bybitresult['data']['last_price']
timestamp_e6 = dict_bybitresult['timestamp_e6']
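Putting both answers together with the connection code from the question, a minimal end-to-end sketch could look like this (the field names come from the snapshot above; later messages may not carry the same fields, so the lookups are guarded with .get()):
import json
from websocket import create_connection

ws = create_connection("wss://stream-testnet.bybit.com/realtime")
ws.send('{"op": "subscribe", "args": ["instrument_info.100ms.BTCUSD"]}')

while True:
    message = json.loads(ws.recv())  # every frame is JSON text
    data = message.get('data')
    if not data:  # skip acknowledgements and frames without a data payload
        continue
    last_price = data.get('last_price')
    timestamp_e6 = message.get('timestamp_e6')
    print(last_price, timestamp_e6)
    break

ws.close()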

How to make a request to get a picture from an ipcam?

I am having some trouble getting the picture from my IP camera in Python. I have an Axis camera, and I almost got it working with the RTSP link and cv2 video capture, but as the hours go by I get an h264 error (I asked about that problem here).
So I decided to use a GET request to get the picture, but now I get a 401 error. Here is my code:
import requests
from requests.auth import HTTPBasicAuth
r = requests.get("http://xxx.xxx.xxx.xxx/jpg/image.jpg", auth=HTTPBasicAuth('xxx', 'xxx'))
print(r.status_code)
I also tried without HTTPBasicAuth, but the result was the same. I don't know how to get the authentication right here.
Any help?
There is nothing wrong with your code. I have run the same code and it works fine on my side. I would suggest you verify the credentials you have provided, as a 401 response code is returned when the username or password is wrong.
Additionally, pass the stream=True parameter to requests.get if you want to consume the raw image data in chunks, as below:
import requests
from requests.auth import HTTPBasicAuth
r = requests.get("http://xxx.xxx.xxx.xxx/jpg/image.jpg", auth=HTTPBasicAuth('xxx', 'xxx'), stream=True)
for streamDataChunks in r:
    process_raw_image_data(streamDataChunks)
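As a follow-up, here is a minimal sketch of writing the streamed bytes to a file once the request succeeds (process_raw_image_data above is just a placeholder). If the credentials are definitely correct and you still get a 401, it may also be worth trying digest authentication instead of basic, since some cameras are configured that way; requests provides requests.auth.HTTPDigestAuth for it.
import requests
from requests.auth import HTTPBasicAuth  # or HTTPDigestAuth if the camera expects digest auth

r = requests.get("http://xxx.xxx.xxx.xxx/jpg/image.jpg",
                 auth=HTTPBasicAuth('xxx', 'xxx'), stream=True)
r.raise_for_status()  # raise on 401/404 instead of writing an error page to disk
with open("snapshot.jpg", "wb") as f:
    for chunk in r.iter_content(chunk_size=8192):
        f.write(chunk)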

Issue with getting the response data using Locust

I'm trying to see if I'm able to get the response data, as I'm trying to learn how to use a regex in Locust. I'm trying to reproduce my test script from JMeter using Locust.
This is the part of the code that I'm having a problem with:
import time, csv, json
from locust import HttpUser, task, between, tag

class ResponseGet(HttpUser):
    response_data = ""
    wait_time = between(1, 1.5)
    host = "https://portal.com"
    username = "NA"
    password = "NA"

    @task
    def portal(self):
        print("Portal Task")
        response = self.client.post('/login', json={'username': 'user', 'password': '123'})
        print(response)
        self.response_data = json.loads(response.text)
        print(self.response_data)
I've tried this suggestion and somehow can't make it work.
My idea is: get the response data > use a regex to extract a string > pass the string to the next task to use.
For example:
Get the login response data > use a regex to extract the token > use the token for the next task.
Is there any better way to do this?
The way you're doing it should work, but Locust's HttpUser client is based on Requests, so if you want to access the response data as JSON you should be able to do that with just self.response_data = response.json(). That will only work if the response body is valid JSON, but your current code will also fail if the response body is not JSON.
If your problem is in parsing the response text as JSON, it's likely that the response just isn't JSON, possibly because you're getting an error page or something similar. You could print the response body before attempting to load it as JSON. Your current print(response) won't do that, because it just prints the Response object returned by Requests; you'd need print(response.text) instead.
As far as whether a regex would be the right solution for getting at the token returned in the response, that will depend on how exactly the response is formatted.
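If the login response really is JSON, you usually don't need a regex at all: parse it and keep the piece you want on self, then reuse it in the next request. A minimal sketch, assuming the login endpoint returns a field called token (the field name, the /dashboard path and the Bearer header are placeholders to adjust to your portal):
from locust import HttpUser, task, between

class ResponseGet(HttpUser):
    wait_time = between(1, 1.5)
    host = "https://portal.com"

    def on_start(self):
        # Runs once per simulated user: log in and keep the token for later tasks.
        response = self.client.post('/login', json={'username': 'user', 'password': '123'})
        data = response.json()          # assumes the body is valid JSON
        self.token = data.get('token')  # 'token' is an assumed field name

    @task
    def portal(self):
        # Reuse the extracted token on the next request.
        self.client.get('/dashboard', headers={'Authorization': f'Bearer {self.token}'})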

Receiving 'BadRequestKeyError' when attempting to GET data from endpoint url

I was trying to implement an API using Python and Flask to help myself learn and practice REST.
The idea was to receive an HTTP POST with data that looks like this:
{"startDate":"2015-07-01","endDate":"2015-07-08","within":{"value":9000,"units":"miles"}}
and send some of the data to a NASA API endpoint.
I was able to create a POST method, and I am able to receive the data (both in Postman and in the browser). Here is the relevant code:
@neows.route('/UserInput', methods=['GET', 'POST'])
def UserInput():
    startDate = request.args.get('startDate')
    endDate = request.args.get('endDate')
    # print(type(startDate))
    # print(type(endDate))
    getAsteroids(startDate, endDate)
    return jsonify(request.args)
But when I extract some data from the POST above and try to send it to the NASA API (GET), I receive this error:
werkzeug.exceptions.BadRequestKeyError
Here is the URL I am trying to hit: https://api.nasa.gov/neo/rest/v1/feed?start_date=START_DATE&end_date=END_DATE&api_key=API_KEY
I am able to hit the URL both in Postman and in the browser, outside of my code.
The relevant piece of code is posted below; the line that seems to be throwing the error is the result = request.args[...] assignment.
def getAsteroids(startDate, endDate):
    API_KEY = 'xxx'
    print(startDate)
    print(endDate)
    result = request.args["https://api.nasa.gov/neo/rest/v1/feed?start_date=" + startDate + "&end_date=" + endDate + "&api_key=" + API_KEY + ""]
I would really appreciate it if someone could help me understand and resolve this issue.
If you want to make a request against NASA's API, you can use the requests module (or any other module for sending HTTP requests).
import requests
# ...
def getAsteroids(startDate, endDate):
    API_KEY = 'xxx'
    payload = {'start_date': startDate, 'end_date': endDate, 'api_key': API_KEY}
    result = requests.get('https://api.nasa.gov/neo/rest/v1/feed', params=payload)
request.args is something different: it holds the query-string parameters of the incoming request, not a way to make outgoing requests.
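To tie that back into the route from the question, a minimal sketch could look like the following (keeping the neows blueprint and the query-string parameters from the original code; whether the dates actually arrive as query parameters or in the JSON body depends on how the client sends the POST):
import requests
from flask import Blueprint, request, jsonify

neows = Blueprint('neows', __name__)  # assumed setup, matching the decorator in the question

def getAsteroids(startDate, endDate):
    API_KEY = 'xxx'
    payload = {'start_date': startDate, 'end_date': endDate, 'api_key': API_KEY}
    result = requests.get('https://api.nasa.gov/neo/rest/v1/feed', params=payload)
    return result.json()  # parsed JSON body of the NASA response

@neows.route('/UserInput', methods=['GET', 'POST'])
def UserInput():
    startDate = request.args.get('startDate')
    endDate = request.args.get('endDate')
    return jsonify(getAsteroids(startDate, endDate))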

HTTP Get Request "Moved Permanently" using HttpLib

Scope:
I am currently trying to write a web scraper for this specific page. I have a pretty strong web-crawling background in C#, but httplib is giving me a hard time.
Problem:
When trying to make an HTTP GET request for the page specified above, I get a "Moved Permanently" response that points to the very same URL. I can make the request using the requests lib, but I want to make it work with httplib so I can understand what I am doing wrong.
Code Sample:
I am completely new to Python, so blame any broken conventions or odd syntax on C#.
import httplib

# Wrapper for an HTTP GET request
class HttpClient(object):
    def HttpGet(self, url, host):
        connection = httplib.HTTPConnection(host)
        connection.request('GET', url)
        return connection.getresponse().read()

# Using the "HttpClient" class
httpclient = HttpClient()
# This is the full URL I need to make a GET request for: https://420101.com/strain-database
httpResponseText = httpclient.HttpGet('/strain-database', 'www.420101.com')
print httpResponseText
I really want to make it work using the httplib library instead of requests or any other fancy one, because I feel like I am missing something really small here.
The problem: I've had too little or too much caffeine in my system.
To GET an https URL, I needed the HTTPSConnection class.
Also, there is no 'www' in the address I wanted to GET, so it shouldn't be included in the host.
Both of the wrong addresses redirect me to the correct one with a 301 status code. If I were using requests or a more full-featured module, it would have followed the redirect automatically (see the sketch after the validation below).
My Validation:
c = httplib.HTTPSConnection('420101.com')
c.request("GET", "/strain-database")
r = c.getresponse()
print r.status, r.reason
200 OK
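Since httplib never follows redirects for you, here is a minimal sketch of handling a 301/302 by hand, written in the same Python 2 style as the code above and assuming the Location header holds an absolute URL (as it does for this site):
import httplib
import urlparse

def http_get(host, path, max_redirects=3):
    for _ in range(max_redirects):
        connection = httplib.HTTPSConnection(host)
        connection.request('GET', path)
        response = connection.getresponse()
        if response.status in (301, 302):
            # httplib only reports the redirect; follow it manually via the Location header.
            location = response.getheader('Location')
            parsed = urlparse.urlparse(location)
            host, path = parsed.netloc, parsed.path or '/'
            continue
        return response.read()

print http_get('www.420101.com', '/strain-database')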
