Please do not close this question - this is not a duplicate. I need to click the button using Python requests, not Selenium as suggested elsewhere.
I am trying to scrape Reverso Context translation example pages, and I have a problem: I can only get 20 examples, and then I need to click the "Display more examples" button many times, while it exists on the page, to get the full results list. That is easy to do in a web browser, but how can I do it with the Python Requests library?
I looked at the button's HTML, but I couldn't find an onclick attribute pointing to the JS attached to it, so I can't tell what request I need to send:
<button id="load-more-examples" class="button load-more " data-default-size="14px">Display more examples</button>
And here is my Python code:
from bs4 import BeautifulSoup
import requests
import re

with requests.Session() as session:  # Create a Session
    # Log in
    login_url = 'https://account.reverso.net/login/context.reverso.net/it?utm_source=contextweb&utm_medium=usertopmenu&utm_campaign=login'
    session.post(login_url, "Email=reverso.scraping#yahoo.com&Password=sample",
                 headers={"User-Agent": "Mozilla/5.0", "content-type": "application/x-www-form-urlencoded"})
    # Get the HTML
    html_text = session.get("https://context.reverso.net/translation/russian-english/cat", headers={"User-Agent": "Mozilla/5.0"}).content
    # And scrape it
    for word_pair in BeautifulSoup(html_text, "html.parser").find_all("div", id=re.compile("^OPENSUBTITLES")):
        print(word_pair.find("div", class_="src ltr").text.strip(), "=", word_pair.find("div", class_="trg ltr").text.strip())
Note: you need to log in, otherwise it will show only the first 10 examples and will not show the button. You may use this real authentication data:
E-mail: reverso.scraping#yahoo.com
Password: sample
Here is a solution that gets all the example sentences using requests and removes all the HTML tags from them using BeautifulSoup:
from bs4 import BeautifulSoup
import requests
import json
headers = {
    "Connection": "keep-alive",
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "X-Requested-With": "XMLHttpRequest",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36",
    "Content-Type": "application/json; charset=UTF-8",
    "Content-Length": "96",
    "Origin": "https://context.reverso.net",
    "Sec-Fetch-Site": "same-origin",
    "Sec-Fetch-Mode": "cors",
    "Referer": "https://context.reverso.net/%D0%BF%D0%B5%D1%80%D0%B5%D0%B2%D0%BE%D0%B4/%D0%B0%D0%BD%D0%B3%D0%BB%D0%B8%D0%B9%D1%81%D0%BA%D0%B8%D0%B9-%D1%80%D1%83%D1%81%D1%81%D0%BA%D0%B8%D0%B9/cat",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7",
}
data = {
    "source_text": "cat",
    "target_text": "",
    "source_lang": "en",
    "target_lang": "ru",
    "npage": 1,
    "mode": 0
}
npages = requests.post("https://context.reverso.net/bst-query-service", headers=headers, data=json.dumps(data)).json()["npages"]
for npage in range(1, npages + 1):
    data["npage"] = npage
    page = requests.post("https://context.reverso.net/bst-query-service", headers=headers, data=json.dumps(data)).json()["list"]
    for word in page:
        print(BeautifulSoup(word["s_text"], "html.parser").text, "=", BeautifulSoup(word["t_text"], "html.parser").text)
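A small, optional refinement of the loop above: since the same endpoint is hit once per page, a requests.Session can reuse the underlying connection. A minimal sketch, assuming the headers and data dicts defined above:
with requests.Session() as session:
    session.headers.update(headers)  # send the same headers on every request
    npages = session.post("https://context.reverso.net/bst-query-service", data=json.dumps(data)).json()["npages"]
    for npage in range(1, npages + 1):
        data["npage"] = npage
        for word in session.post("https://context.reverso.net/bst-query-service", data=json.dumps(data)).json()["list"]:
            print(BeautifulSoup(word["s_text"], "html.parser").text, "=", BeautifulSoup(word["t_text"], "html.parser").text)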
At first, I got the request from the Google Chrome DevTools:
Pressed F12 to open DevTools and selected the Network tab
Clicked the "Display more examples" button
Found the last request ("bst-query-service")
Right-clicked it and selected Copy > Copy as cURL (cmd)
Then I opened an online cURL-to-Python-requests converter, pasted the copied cURL command into the textbox on the left, and copied the output on the right (use the Ctrl-C hotkey for this, otherwise it may not work).
After that, I pasted the result into the IDE and:
Removed the cookies dict - it is not necessary here
Important: rewrote the data string as a Python dictionary and wrapped it with json.dumps(data); otherwise the request returned an empty words list (see the sketch after this list for an alternative)
Added code that gets the number of pages of words to fetch, then a for loop that fetches the words that many times and prints them without HTML tags (using BeautifulSoup)
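Side note on the json.dumps(data) step: requests can also serialize the body itself via the json= keyword argument, which sets the Content-Type: application/json header automatically, so the manual dump becomes optional. A minimal sketch against the same endpoint and data dict as above (whether the reduced header set below is enough for this particular endpoint is untested - keeping the full headers dict from the answer is the safe choice):
import requests

response = requests.post(
    "https://context.reverso.net/bst-query-service",
    headers={"User-Agent": "Mozilla/5.0", "X-Requested-With": "XMLHttpRequest"},
    json=data,  # serialized to JSON and Content-Type set by requests itself
)
print(response.json()["npages"])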
UPD:
For those who came to this question to learn how to work with Reverso Context (and not just to simulate a button-click request on some other website), there is now a Python wrapper for the Reverso API: Reverso-API. It can do the same thing as above, but much more simply:
from reverso_api.context import ReversoContextAPI

api = ReversoContextAPI("cat", "", "en", "ru")
for source, target in api.get_examples_pair_by_pair():
    print(source.text, "==", target.text)
Before I start, let me point out that I have almost no clue what I'm doing - imagine a cat trying to write code. I'm writing some Python code using PyCharm on Ubuntu 22.04.1 LTS, and I also used Insomnia, if that makes any difference. Here is the code:
# sad_scrape_code_attempt.py
import time

import httpx
from playwright.sync_api import sync_playwright

HEADERS = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate, br",
    "Referer": "https://shop.metro.bg/shop/cart",
    "CallTreeId": "||BTOC-1BF47A0C-CCDD-47BB-A9DA-592009B5FB38",
    "Content-Type": "application/json; charset=UTF-8",
    "x-timeout-ms": "5000",
    "DNT": "1",
    "Connection": "keep-alive",
    "Sec-Fetch-Dest": "empty",
    "Sec-Fetch-Mode": "cors",
    "Sec-Fetch-Site": "same-origin"
}

def get_cookie_playwright():
    with sync_playwright() as p:
        browser = p.firefox.launch(headless=False, slow_mo=50)
        context = browser.new_context()
        page = context.new_page()
        page.goto('https://shop.metro.bg/shop/cart')
        page.fill('input#user_id', 'the_sad_cat_username')
        page.fill('input#password', 'the_sad_cat_password')
        page.click('button[type=submit]')
        page.click('button.btn-primary.accept-btn.field-accept-button-name')
        page.evaluate(
            """
            var intervalID = setInterval(function () {
                var scrollingElement = (document.scrollingElement || document.body);
                scrollingElement.scrollTop = scrollingElement.scrollHeight;
            }, 200);
            """
        )
        prev_height = None
        while True:
            curr_height = page.evaluate('(window.innerHeight + window.scrollY)')
            if not prev_height:
                prev_height = curr_height
                time.sleep(1)
            elif prev_height == curr_height:
                page.evaluate('clearInterval(intervalID)')
                break
            else:
                prev_height = curr_height
                time.sleep(1)
        # print(context.cookies())
        cookie_for_requests = context.cookies()[11]['value']
        browser.close()
        return cookie_for_requests

def req_with_cookie(cookie_for_requests):
    cookies = dict(
        Cookie=f'BIGipServerbetty.metrosystems.net-80={cookie_for_requests};')
    r = httpx.get('https://shop.metro.bg/ordercapture.customercart.v1/carts/alias/current', cookies=cookies)
    return r.text

if __name__ == '__main__':
    data = req_with_cookie(get_cookie_playwright())
    print(data)

# Used packages
# Playwright
# PyTest
# PyTest-Playwright
# JavaScript
# TypeScript
# httpx
So basically I copy-pasted the code from two tutorials by John Watson Rooney:
The Biggest Mistake Beginners Make When Web Scraping
Login and Scrape Data with Playwright and Python
Then I combined them and added some JavaScript to scroll to the bottom of the page. Then I found an article called "How Headers Are Used to Block Web Scrapers and How to Fix It",
so I replaced "import requests" with "import httpx" and added the HEADERS as given by Insomnia. From what I understand, browsers send headers in a certain order, and this is an often overlooked way to identify web scrapers, mainly because many HTTP clients in various programming languages implement their own header ordering, which makes identifying web scrapers very easy! If this is true, I need to figure out a way to send my Cookie header in the correct position, which I have no clue how to determine, but I believe it's #11 or #3 judging by the code generated by Insomnia:
import requests
url = "https://shop.metro.bg/ordercapture.customercart.v1/carts/alias/current"
querystring = {"customerId":"1001100022726355","cardholderNumber":"1","storeId":"00022","country":"BG","locale":"bg-BG","fsdAddressId":"1001100022726355994-AD0532EI","__t":"1668082324830"}
headers = {
"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0",
"Accept": "*/*",
"Accept-Language": "en-US,en;q=0.5",
"Accept-Encoding": "gzip, deflate, br",
"Referer": "https://shop.metro.bg/shop/cart",
"CallTreeId": "||BTOC-1BF47A0C-CCDD-47BB-A9DA-592009B5FB38",
"Content-Type": "application/json; charset=UTF-8",
"x-timeout-ms": "5000",
"DNT": "1",
"Connection": "keep-alive",
"Cookie": "selectedLocale_BG=bg-BG; BIGipServerbetty.metrosystems.net-80=!DHrH53oKfz3YHEsEdKzHuTxiWd+ak6uA3C+dv7oHRDuEk+ScE0MCf7DPAzLTCmE+GApsIOFM2GKufYk=; anonymousUserId=24EE2F84-55B5-4F94-861E-33C4EB770DC6; idamUserIdToken=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IktfYWE1NTAxNWEtMjA2YS0xMWVkLTk4ZDUtZTJjYzEyYjBkYzUwIn0.eyJleHAiOjE2NjgwODQxMjIsImlhdCI6MTY2ODA4MjMyMiwiYXVkIjoiQlRFWCIsInNlc3Npb25fc3RhdGUiOiJPTnJweFVhOG12WHRJeDR0c3pIZ09GR296WHUyeHZVVzVvNnc3eW1lLUdZLnJMRU1EWGFGIiwiaXNzIjoiaHR0cHM6Ly9pZGFtLm1ldHJvLmJnIiwiZW1haWwiOiJvZmZpY2VAdGVydmlvbi5iZyIsIm5vbmNlIjoiZjg3ZDMyYzEyYTRkNDY1ZGEzYjQwMTQ3OTlkYzc4NzMiLCJjX2hhc2giOiIiLCJzdWIiOiJVX2Y0MjBhY2E4LWY2OTMtNGMxNS1iOTIzLTc1NWY5NTc3ZTIwMCIsImF0X2hhc2giOiJlbkFGRFNJdUdmV0wzNnZ0UnJEQ253IiwicmVhbG0iOiJTU09fQ1VTVF9CRyIsImF1dGhfdGltZSI6MTY2ODA4MjMyMiwiYW1yIjpbIlVTRVJfQ1JFREVOVElBTFMiXX0.AC9vccz5PBe0d2uD6tHV5KdQ8_zbZvdARGUqo5s8KpJ0bGw97vm3xadF5TTHBUwkXX3oyJsbygC1tKvQInycU-zE0sqycIDtjP_hAGf6tUG-VV5xvtRsxBkacTBMy8OmbNHi5oncko7-dZ_tSOzQwSclLZKgKaqBcCqPBQVF0ug4pvbbqyZcw6D-MH6_T5prF7ppyqY11w9Ps_c7pFCciFR965gsO3Q-zr8CjKq1qGJeEpBFMKF0vfwinrc4wDpC5zd0Vgyf4ophzo6JkzA8TiWOGou5Z0khIpl435qUzxzt-WPFwPsPefhg_X9fYHma_OqQIpNjnV2tQwHqBD1qMTGXijtfOFQ; USER_TYPE=CUST; compressedJWT=eNpVUtlyozAQ/CJvcdrhEZtLGGGbQ4BeUlwGiTMhMeCvX5Fkt3YfVKrqme6eaalc7Tozc3Ihth8+Ae8SW/lVrvaYi3AD7Vx0eRwD4pzsuohuG3YsIm/EkUyxDybQjVzqgz2gqnBrj0ZpthNEtzUNqjWl3uqb4xkxA8Z/FhHY+ATHHld+cdFnYbZcZqIPpsflK9PpsBbw4LvfVFYcsh6LzdLJfGbOE+hR8B9ObOmG4FTqLgz4InCs+hhw81Q0BnQsHIQGmBLe3TR/7nzC7fHqmBh6uuIDMpMCuVwm2u2Xf2NbngbWDc9NQ85MpcYnhvcfOejtB5s1B3TMQefyueg9sgit8QlM8cnmc1P+rlF9hpq+QE2dIQUipMnTDRiPLBuvzjtvyISlwbF9KSKe5WH/8Izvnt5rE6FGuYDWsFMmjOa/+zMfLmWegYkEHC0/PO+P9qPYcuzbb5ztwvqVr1061LHzTHX8yDu33XbCnTHlQsgydcesK5iPO2JBvmbk3xpmH6RtNt00YnNQXXBpNV+0UIYU8lCD2ztKOdODQSJcNFVyg2aF60zS2GVvjvQk9lpAh8WliQS1aoVPwPJQn/fbr0vdxRiDJLh7d8pJhzVeNIW+75QK7H0zFVp9Z3BeGmZlA17s5LAcHDgjmc8vO/QiqorcSOenYVEx0/HJATQIqDJxAS7qsKnGQqrrXf5qNaf9GyRl3emruki8vxg0It5IhsxSfI8lGkvl+72qsoNMjhUp75xzR7NRq83w0Pp6oRqg74eq65zPaD/H9TX6GIyDfmFccfA8/fVtkPe7y5AUosA+fpZWBO0l9QzSZIfuoeG2n8aJNKG0WMfoap2XOcVJKT0ex9ep0m9vZv0gJwkqKue+Xb0TZ0Bjz+HMqi9W6Z81h+8PCaRZTJtoFYOun46FkQiPyFmGF65/VX33RdKl+ZYcXDvs7/Nv6PdLkg==; SES2_customerAdr_1001100022726355={%22addressId%22:%221001100022726355994-AD0532EI%22%2C%22addressHash%22:%221001100022726355994-AD0532EI%22%2C%22storeId%22:%2200022%22}; SES2_customerAdr_={%22addressId%22:null%2C%22addressHash%22:null%2C%22storeId%22:%2200022%22}; UserSettings=SelectedStore=1b1fc6ac-2ad6-4243-806e-a4a28c96dff4&SelectedAddress=1001100022726355994-ad0532ei",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)
So I'm stuck. Any help or ideas will be greatly appreciated.
The page you're navigating to shows this on a GET request:
HTTP ERROR 401 You must provide a http header 'JWT'
This means that this page requires a level of authorization to be accessed.
See JWTs.
"Authorization: This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token."
You can access the root page just fine, but once you navigate to more user specific pages or "routes", you will need to provide a JWT to access that page's content.
There is a way to get past this using scraping. You will have to log in to the site as a user using your scraper and collect the JWT which is created by the server and sent back to your client. Then use that JWT in your request's headers:
token = "randomjwtgibberishshahdahdahwdwa"
HEADERS = {
"Authorization": "Bearer " + token
}
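Applied to the code in the question, a minimal sketch could look like this. It assumes the Playwright login above is changed to return the full context.cookies() list rather than a single value, that the token lives in the idamUserIdToken cookie visible in the question's cookie dump (an assumption worth verifying in DevTools), and that the server wants it in a header literally named JWT, as the 401 message suggests. Forwarding all cookies also avoids guessing the right index into cookies():
import httpx

def fetch_cart(playwright_cookies, base_headers):
    # playwright_cookies: the list returned by context.cookies() in the question's code
    # base_headers: the HEADERS dict from the question
    cookies = {c["name"]: c["value"] for c in playwright_cookies}  # forward every cookie by name
    headers = dict(base_headers)
    # Assumption: the token sits in the 'idamUserIdToken' cookie and the API
    # expects it in a 'JWT' header - both should be confirmed in DevTools.
    if "idamUserIdToken" in cookies:
        headers["JWT"] = cookies["idamUserIdToken"]
    r = httpx.get(
        "https://shop.metro.bg/ordercapture.customercart.v1/carts/alias/current",
        headers=headers,
        cookies=cookies,
    )
    return r.text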
I am trying to scrape ETFs from the website https://www.etf.com/channels. However, no matter what I try, it returns a 503 error when I attempt to access it. I've tried using different user agents as well as headers, but it still won't let me access the page. Sometimes when I open the website in a browser, a page pops up that "checks if the connection is secure", so I assume they have measures in place to stop scraping. I've seen others ask the same question, and the answer always says to add a user agent, but that didn't work for this site.
Scrapy
import scrapy

class BrandETFs(scrapy.Spider):
    name = "etfs"
    start_urls = ['https://www.etf.com/channels']

    headers = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.5",
        "Connection": "keep-alive",
        "Host": "www.etf.com",
        "Sec-Fetch-Dest": "document",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-Site": "cross-site",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:103.0) Gecko/20100101 Firefox/103.0"
    }
    custom_settings = {'DOWNLOAD_DELAY': 0.3, "CONCURRENT_REQUESTS": 4}

    def start_requests(self):
        url = self.start_urls[0]
        yield scrapy.Request(url=url)

    def parse(self, response):
        test = response.css('div.discovery-slat')
        yield {
            "test": test
        }
Requests
import requests
url = 'https://www.etf.com/channels'
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36',
'Referer': 'https://google.com',
'Origin': 'https://www.etf.com'
}
r = requests.post(url, headers=headers)
r.raise_for_status()
Is there any way to get around these blocks and access the website?
Status 503 - Service Unavailable - is often seen in such cases; you are probably right in your assumption that they have taken measures against scraping.
For the sake of completeness, they prohibit what you are attempting in their Terms of Service (No. 7g):
[...] You agree that you will not [...]
Use automated means, including spiders, robots, crawlers [...]
Technical point of view
The User-Agent in the header is just one of many things that you should consider when you try to hide the fact that you automated the requests you are sending.
Since you see a page that seems to verify that you are still/again a human, it is likely that they have figured out what is going on and have an eye on your IP. It might not be blacklisted (yet) because they notice changes whenever you try to access the page.
How did they find out? Based on your question and code, I would guess it's simply your IP, which did not change, in combination with:
Request rate: You have sent (too many) requests too quickly, i.e. faster than they consider a human to do this.
Periodic requests: Static delays between requests, so they see pretty regular timing on their side.
There are several other aspects that might or might not be monitored. However, using proxies (i.e. changing IP addresses) would be a step in the right direction.
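To make that concrete, here is a minimal sketch of how those points could be folded into the Scrapy spider from the question - larger, randomized delays and a per-request proxy. The proxy URLs are placeholders, and none of this guarantees access if the site actively blocks bots (or makes the scraping permissible, per the ToS quoted above):
import random

import scrapy

class BrandETFs(scrapy.Spider):
    name = "etfs"
    start_urls = ["https://www.etf.com/channels"]

    custom_settings = {
        "DOWNLOAD_DELAY": 3.0,             # noticeably slower than 0.3 s
        "RANDOMIZE_DOWNLOAD_DELAY": True,  # 0.5x-1.5x jitter, so the timing is not periodic
        "AUTOTHROTTLE_ENABLED": True,      # back off automatically when responses slow down
        "CONCURRENT_REQUESTS": 1,
    }

    # Placeholder proxy pool - replace with real (ideally rotating) proxies.
    proxies = ["http://user:pass@proxy1:8000", "http://user:pass@proxy2:8000"]

    def start_requests(self):
        yield scrapy.Request(
            self.start_urls[0],
            meta={"proxy": random.choice(self.proxies)},  # picked up by HttpProxyMiddleware
        )

    def parse(self, response):
        yield {"test": response.css("div.discovery-slat")}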
This is the URL: https://www.lowes.com/store/AK-Anchorage/2955. When we reach this URL there is a button named "Shop this store". The request made by clicking the button and the one made by following the link look the same, but after clicking the button one still gets a different page than by using the link directly. I need to make the same request that the button makes.
I need to make a request to "https://www.lowes.com/store/AK-Anchorage/2955", and then I need to make the same request as the one made by clicking the button.
I have tried making the request two consecutive times to get the desired page, but no luck.
import requests
from fake_useragent import UserAgent

url = 'https://www.lowes.com/store/AK-Anchorage/2955'
ua = UserAgent()
header = {'User-Agent': str(ua.chrome)}
response = requests.get(url, headers=header)
response = requests.get(url, headers=header)
So, this seems to work. I get a 200 OK response both times, and the content isn't the same length.
For what it's worth, in Firefox, when I click the blue "Shop this store" button, it takes me to what appears to be the exact same page, but without the blue button I just clicked. In Chrome (Beta), when I click the blue button, I get a 403 Access denied page. Their server isn't playing nice. You might struggle to achieve what you want to achieve.
If I call session.get without my headers, I never get a response at all. So they're obviously checking the user-agent, possibly cookies, etc.
import requests
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Language": "en-US,en;q=0.5",
"Accept-Encoding": "gzip, deflate, br",
"Upgrade-Insecure-Requests": "1",}
session = requests.Session()
url = "https://www.lowes.com/store/AK-Anchorage/2955"
response1 = session.get(url, headers=headers)
print(response1, len(response1.content))
response2 = session.get(url, headers=headers)
print(response2, len(response2.content))
Output:
<Response [200]> 56282
<Response [200]> 56323
I've done some more testing. The server times out if you don't change the user-agent from the default Python Requests one. Even changing it to "" seems to be enough for the server to give you a response.
You can get product information, including description, specifications, and price, without selecting a specific store. Take a look at this GET request, with no cookies, and no session:
import requests, json
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"}
url = "https://www.lowes.com/pd/Google-Nest-Learning-Thermostat-3rd-Gen-Thermostat-and-Room-Sensor-with-with-Wi-Fi-Compatibility/1001080012"
r = requests.get(url, headers=headers, timeout=5)
print("return code:", r)
print("content length:", len(r.content))
for line in r.text.splitlines():
    if "window.digitalData.products = [" in line:
        print("This line includes the 'sellingPrice' and the 'retailPrice'. After some splicing, we can treat it as JSON.")
        left = line.find(" = ") + 3
        right = line.rfind(";")
        print(json.dumps(json.loads(line[left:right]), indent=True))
        break
Output:
return code: <Response [200]>
content length: 107134
This line includes the 'sellingPrice' and the 'retailPrice'. After some splicing, we can treat it as JSON.
[
  {
    "productId": [
      "1001080012"
    ],
    "productName": "Nest_Learning_Thermostat_3rd_Gen_Thermostat_and_Room_Sensor_with_with_Wi-Fi_Compatibility",
    "ivm": "753160-83910-T3007ES",
    "itemNumber": "753160",
    "vendorNumber": "83910",
    "modelId": "T3007ES",
    "type": "ANY",
    "brandName": "Google",
    "superCategory": "Heating & Cooling",
    "quantity": 1,
    "sellingPrice": 249,
    "retailPrice": 249
  }
]
The product description and specification can be found in this element:
<section class="pd-information met-product-information grid-100 grid-parent v-spacing-jumbo">
(It's ~300 lines, so I'm just going to copy the parent tag.)
There's an API that takes a product id and store number, and returns the pricing information:
import requests, json
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"}
url = "https://www.lowes.com/PricingServices/price/balance?productId=1001080012&storeNumber=1955"
r = requests.get(url, headers=headers, timeout=5)
print("return code:", r)
print("content length:", len(r.content))
print(json.dumps(json.loads(r.text), indent=True))
Output:
return code: <Response [200]>
content length: 768
[
  {
    "productId": 1001080012,
    "storeNumber": 1955,
    "isSosVendorDirect": true,
    "price": {
      "selling": "249.00",
      "retail": "249.00",
      "typeCode": 1,
      "typeIndicator": "Regular Price"
    },
    "availability": [
      {
        "availabilityStatus": "Available",
        "productStockType": "STK",
        "availabileQuantity": 822,
        "deliveryMethodId": 1,
        "deliveryMethodName": "Parcel Shipping",
        "storeNumber": 907
      },
      {
        "availabilityStatus": "Available",
        "productStockType": "STK",
        "availabileQuantity": 8,
        "leadTime": 1570529161540,
        "deliveryMethodId": 2,
        "deliveryMethodName": "Store Pickup",
        "storeNumber": 1955
      },
      {
        "availabilityStatus": "Available",
        "productStockType": "STK",
        "availabileQuantity": 1,
        "leadTime": 1570529161540,
        "deliveryMethodId": 3,
        "deliveryMethodName": "Truck Delivery",
        "storeNumber": 1955
      }
    ],
    "#type": "item"
  }
]
It can take multiple product numbers. For example:
https://www.lowes.com/PricingServices/price/balance?productId=1001080046%2C1001135076%2C1001091656%2C1001086418%2C1001143824%2C1001094006%2C1000170557%2C1000920864%2C1000338547%2C1000265699%2C1000561915%2C1000745998&storeNumber=1564
You can get information on every store by using this API, which returns a 1.6 MB JSON file. maxResults is normally set to 30, and query is your longitude and latitude. I would suggest saving this to disk; I doubt it changes much.
https://www.lowes.com/wcs/resources/store/10151/storelocation/v1_0?maxResults=2000&query=0%2C0
Keep in mind the PricingServices/price/balance endpoint can take multiple values for storeNumber separated by %2C (a comma), so you won't need 1763 separate GET requests. I still made multiple requests using a requests.Session (so it reuses the underlying connection).
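For illustration, a minimal sketch of that batching with a single session; the endpoint and parameter names are the ones shown above, and the product IDs are taken from the earlier example URL:
import requests

product_ids = ["1001080046", "1001135076", "1001091656", "1001086418"]
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"}

with requests.Session() as session:  # reuses the underlying connection across calls
    session.headers.update(headers)
    r = session.get(
        "https://www.lowes.com/PricingServices/price/balance",
        params={"productId": ",".join(product_ids), "storeNumber": "1564"},  # the comma is sent as %2C
        timeout=5,
    )
    print(r.json())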
It depends on what you want to do with the data. The URL already contains the shop ID.
When you click the button, it issues a request to https://www.lowes.com/store/api/2955 to get the shop information. Is that what you're looking for?
If so, you don't need two requests - just one request to get the needed shop information.
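If that is the case, a minimal sketch of that single request (the endpoint is the one mentioned above; a browser-like User-Agent is kept because, as noted in the other answer, the default requests one gets no response):
import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"}
r = requests.get("https://www.lowes.com/store/api/2955", headers=headers, timeout=5)
print(r.status_code)
print(r.json())  # assuming the endpoint returns the store details as JSON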
The goal is to print the text of all neighborhoods in the scroll at the top of a Google search when entering a term like "New York City neighborhoods"
Although there is no encoding issue when using requests as...
googleSearch = BeautifulSoup(requests.get('https://www.google.com/search?q=new+york+city+neighborhoods').content, "html.parser")
...it doesn't return all of the response HTML that I was expecting (only a few items in the scroll exist, despite the Postman and Chrome responses showing all of them) [1], which is why the following method is being attempted (but it has an encoding issue for me):
url = "https://www.google.com/search"
querystring = {"q":"New York City neighborhoods"}
headers = {
'upgrade-insecure-requests': "1",
'user-agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
'x-chrome-uma-enabled': "1",
'x-client-data': "CIy2yQEIo7bJAQjEtskBCIuZygEI+pzKAQipncoB",
'accept': "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
'accept-encoding': "gzip, deflate, sdch, br",
'avail-dictionary': "MC9c6ZtH",
'accept-language': "en-US,en;q=0.8",
'cookie': "HSID=AQGYffYcWgUgwoIGG; SSID=AsyTtOTpG3P0TWe_e; APISID=DZOqFSNpfZmThOP6/A15eY85jEZTDT47_j; SAPISID=4jqCaE3zLEcO8GG4/ANI8HEy3etCmKfit2; SID=4AMk07dZM5wKaFcBAD7PgfLgMV1imGkqULwEdE9VI3lwoNRghaVTGT4ZT0mCGgzehY3mFg.; OGPC=5062210-7:765334528-2:699960320-1:961419264-9:; NID=97=bZNps3TAJFPAppe9EQbLyUDwXDbEFN57lT_capK2DQMWMVo7nEnYlPV-_g5OkOCERrN6MS5PxJXuVUOhjHeZGhCkS4FubcEapEzyuSQVS9rJM99rPzwE98ra47eP-ay0YTR-TawjFJ-0hAqT_j7SI7vQGVIU6yj4awM0hEt4ZXTd4k0RnH6kJPb0qVCc8AnQQLg4VZ0Kc1s83vJo6k7jFm-GCEoi; HSID=AQGYffYcWgUgwoIGG; SSID=AsyTtOTpG3P0TWe_e; APISID=DZOqFSNpfZmThOP6/A15eY85jEZTDT47_j; SAPISID=4jqCaE3zLEcO8GG4/ANI8HEy3etCmKfit2; SID=4AMk07dZM5wKaFcBAD7PgfLgMV1imGkqULwEdE9VI3lwoNRghaVTGT4ZT0mCGgzehY3mFg.; OGPC=5062210-7:765334528-2:699960320-1:961419264-9:; NID=97=bZNps3TAJFPAppe9EQbLyUDwXDbEFN57lT_capK2DQMWMVo7nEnYlPV-_g5OkOCERrN6MS5PxJXuVUOhjHeZGhCkS4FubcEapEzyuSQVS9rJM99rPzwE98ra47eP-ay0YTR-TawjFJ-0hAqT_j7SI7vQGVIU6yj4awM0hEt4ZXTd4k0RnH6kJPb0qVCc8AnQQLg4VZ0Kc1s83vJo6k7jFm-GCEoi; DV=Qg7Cq8EJDPcYvgxe_quK9y6d3FXJtAI",
'cache-control': "no-cache",
'postman-token': "e6cec459-250e-1795-0e78-c450e5dfd56b"
}
When attempting to retrieve the response (which has a 200 status code):
googleSearch = BeautifulSoup(requests.request("GET", url, headers=headers, params=querystring).content, "html.parser")
googleSearch.text prints as:
No handlers could be found for logger "bs4.dammit"
��������[�#ٕ ֑�RK=��V��i$�YU��$���+Y�j2H&��L>"��R*^$��gDefukz0�j���|�ax���1��k�a��6y=���X���X�þ��`ɬ.MK;�pgoĽ�{��{��D5�gLJ�������o}�?��[��듟���[ �ݷ�������9C�m�BFQ|�
...with many more of these weird characters
Can requests be used for a google search, or is another module necessary?
[1] Expected HTML: the HTML shown in the response in the Postman app and in Chrome contains div[class="kltat"] elements (one for every item in the scroll at the top of the page - neighborhoods in this case - even if not yet shown in the scroll), whereas the response above only contains some of the scroll items and no div[class="kltat"] elements.
Including this line tells Google's servers that they can respond using HTTP compression:
'accept-encoding': "gzip, deflate, sdch, br"
My guess is that the compression used is gzip, although it is also allowing deflate, Brotli, and Google Shared Dictionary Compression.
You can remove the accept-encoding line from your headers; or import the gzip library and unzip the contents.
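A minimal sketch of the first option, reusing the url, headers and querystring from the question: drop the accept-encoding header so the server only uses encodings requests can decode transparently, and read .text rather than .content:
import requests
from bs4 import BeautifulSoup

headers.pop("accept-encoding", None)  # stop advertising sdch/br, which requests cannot decode by itself
response = requests.get(url, headers=headers, params=querystring)
soup = BeautifulSoup(response.text, "html.parser")  # .text decodes the body to a string
print(soup.title)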
You actually only need to send the user-agent; the rest of the headers are, as far as I know, overkill.
The weird (binary) characters appear because you were using the .content method, which returns raw bytes, together with the "accept-encoding": "gzip, deflate, sdch, br" header. Use the .text method instead, which automatically decodes the content, so you don't get the weird characters.
The code below scrapes 33 elements out of ~40+ in this case; to get more results you would need Selenium or other browser automation to click the right-arrow button, load the remaining elements, and scrape them.
Code and example in the online IDE that scrapes thumbnails as well:
from bs4 import BeautifulSoup
import requests, lxml, re, json
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    'q': 'new york city neighborhoods',
    'gl': 'us',
}

def bs4_get_top_carousel():
    html = requests.get('https://www.google.com/search', headers=headers, params=params)
    soup = BeautifulSoup(html.text, 'lxml')

    carousel_name = soup.select_one('.F0gfrd+ .z4P7Tc').text
    data = {f"{carousel_name}": []}

    all_script_tags = soup.select('script')

    # https://regex101.com/r/NYdrL5/1
    thumbnails = re.findall(r"<script nonce=\".*?\">\(\w+\(\)\{\w+\s?\w+='(.*?)';\w+\s?\w+=\['\w+'\];\w+\(\w+,\w+\);\}\)\(\);<\/script>", str(all_script_tags))

    for result, thumbnail in zip(soup.select('.ct5Ked'), thumbnails):
        title = result["aria-label"]
        link = f"https://www.google.com{result['href']}"

        try:
            extensions = result.select_one(".cp7THd .FozYP").text
        except: extensions = None

        decoded_thumbnail = bytes(thumbnail, 'ascii').decode('unicode-escape')
        # print(f'{title}\n{link}\n{extensions}\n{decoded_thumbnail}\n')

        data[carousel_name].append({
            'title': title,
            'link': link,
            'extensions': [extensions],
            'thumbnail': decoded_thumbnail
        })

    print(json.dumps(data, indent=2, ensure_ascii=False))
---------------------
'''
]
...
{
"title": "Lower East Side",
"link": "https://www.google.com/search?gl=us&q=Lower+East+Side&stick=H4sIAAAAAAAAAONgFuLUz9U3MIo3sjBTAjMNKy2NzbUUspOt9HPykxNLMvPz9AtyEpNTrfJSM9MzkvKLMvLzU4ofMfpxC7z8cU9YynXSmpPXGO25CGoREudic80rySypFOKV4uZCWGzFpMHEs4iV3ye_PLVIwTWxuEQhODMldQIbIwAs7VbHoAAAAA&sa=X&ved=2ahUKEwimh-H-q93zAhXCZc0KHXj3DAYQ-BZ6BQgBEI4B",
"extensions": [
null
],
"thumbnail": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4QAqRXhpZgAASUkqAAgAAAABADEBAgAHAAAAGgAAAAAAAABHb29nbGUAAP/bAIQAAwICAw0CCg0LAwgLDhAKDgsKCg4RChANCg4OCxAIDQsIEAgLCgkLDQoIDQ0LCgoKCgoLCgoLDQ0QCgsNCwoJCgEDBAQGBQYKBgYKEA0LDhAQEBAQEA8QEA8PDxAPDg8QDw0PDw8ODw0ODxAPDQ0QDg8NDQ4PDw8QEA8NDQ0NDw0O/8AAEQgASABgAwERAAIRAQMRAf/EABwAAAIDAAMBAAAAAAAAAAAAAAUHAwQGAQIIAP/EAEMQAAICAAQEAgYGBgcJAAAAAAECAxEEBRIhAAYTMQciIzJBUWGBFFJxkaHBFRYzNEKxFyZDU2LR0wgkcnOSpMLh8f/EABoBAAIDAQEAAAAAAAAAAAAAAAMEAQIFAAb/xAAyEQABAwIEBAQGAgIDAAAAAAABAAIRAyEEEjFBE1FhcQWBocEiMpGx4fDR8RSCFULC/9oADAMBAAIRAxEAPwDYcr5hmAyl0OEejXdW/DbhQtEyhtELjLMJJ+v8JMZHpE9h+t9348cfkKKB8SihUDPYTqHqrt94+z8eO2UEfFKPrKv9IeH849ZD9zniB8hV4+IK7n6qvM8PpB3H4SVxVnylXf8AMFu8FzrmsPNuFEeO0CSaOOVaX0iGZFKjUrEbMbIo0e/bgTRYq7wmzzbyPyfLiA0+B1UgEdtIpUFew0MpNMBQYH7BZHCNN7mD4UWo3N8y85y8q5WvOzhJAo6WJXTZtR9ElG+rzH32f5caTXOc24SxaAbFUMhyTDrmZprGmx8DfsrgkEhUFijuYZepyOcWANUbG/tZfb/xDiQFxuIWTXl6IdsR+Hz9/DDSQlC0FdM2yYHKsP6Yfs2H/cSvfu7OPbxxKqRpCMZFkyBPU93v4CUdsovleE/rLF61a1NW31u1XX4cUIsigmVPlwxAxUQ+lTdlunkH3gNR+fficohTmMo7isRixnUVZliQLGwkmA7/AMQDhT8xxAYI0Vi4yr2fS5gOYBWb4v1jY6s9euR26ldvhxzW2K5xuEVxmIxI5hw1Y+UjqrqBZjfnXY6ib2v5ccwc1LzyTGxeWytF5MzkJU6TZYsV9jNqCnvsfKBt9tKtdl1RntmyXecYDGjmR/SI3o5QDpjJ/d3C+shI32on4VVjh1rgQlyCEKwME4n3Mfb6kP8Ap8WVFNNBYkXpwm4g28WHPaZBuGiIOze327jfccJUnRBZ8qX+5w1f8nB/6HFxKCSqL4Fzk6+ig2lkQehwdVUcmwMBUevvQF+2+LFUOiEQZdi1wQJHw9vu4QFQEo14UGFzKccxodBrUu+/e/v78GtC4EysbzV4gcwxczQdHIzNH0wznzghhNFGUBsjzQSPIPKa6BG+sFLggi+qmLqvz3418wCeFoOT5WFyCTWSDGUKFGpQdQdC+way6AAspMg5saEq8brSYzxkxjYmNxyli2umahQsszkeZjQStN6jZYEEjzcc0gC6s4GZCO43xgdseGXlTGhY6k1EHz+cbIN3utyNOquysTp4hsDVS4Erfc1eMHMKQwsvK88ysOpJNGXDQozOhSUJaGRCsF7hQZiQzbF0mEOMGyZqMgWKW+deOOafpnblPGJamjLewMdW/nABNsVKswqjudaI7ThKOlVsF4wZt+n1QcoYlgYGYOTQEobSIn7soKjWsgVlINXZAJRBGqpBGqY/0wGFm6RBOH7W23ponI81Haq3APwG44rmgKIlBziZyn7J/hub/H/L5cSXAIEEqhHmMv6LIED/ALdzubu44R8B/DwQ6CUIuOgXkfFcyeNa5KkhzrGrCxNTmLB9M6SEJjYJJfmIWmKMD6yqPNxkuxVFji2LhbLMK5wEEe6pZ7zXzzHgGZfEaSQh0UoI4VYF5VhBPUiApXYE13ANXtadDxVlWoKeQiZvMiwnZOVPDnMYXZgfJCsu5m8ZfpqrLnLxrRtyMMVFKSo2hLHURQ8p3O5HcMjG0XGCQO6qcG9okD9+qyWN8SPGQ5a7Nm8/rFFhKYLVsSCX1RhQtil3s0TXaiuxdBjZLh68un3VG4aq4kBpH09ymcOfecg4B5hkj81XpwxFn20sHlF3tZPw961XG02WjeE1Twb3bo5Lzrz0saf1ncEk/wAGH2oX9Qg9tiNuK/5jQQC3VScIYMO0TJw+L8W/1bSSbniUrIGhv/d9hGTUbBIKvQNSi2BAJsUaC7xClPyevRXbgHRGf0/KVviLzx4iBpU+nSyvoqOTVhaBETdK1lwwFWgUFiqj2mlJLdPGUi3NEapd+EqB0TP7+7pRYbxT8d2zKo84xZPbURgqB91x4bc+7zaT9aq4h+PoMbJePKZ9QoZgqrjGQ+ZHsUz/AA85r8W5NIm51xNmZombThx/ZfSlChYz2ABJL2WANe7PreLim95F2huaP9g3WE4zw3O1o0OaCf8AUnSfdGIc/wDEt83ZVz/GMA7ors0A1FH6ZsCMlRYsUG247/naYYCRrG/PyQz4O4ugH0/Kzo5p8Zv1kRBm86qxIa3hoMrMrKdEeokBPWAAIA37XoVsfTDc2fYGwnWI5cwkqeBeXZcm8a8vqvReSYiD+hx4v0iU14d4yt1rJdSVC2NVoCa7kC+y2PIPNQ4t4DiBnII2/wC2t1vta3gsOUE5QZ32WI5y5byFuVppD5hcJidKPm+nwIDuDalWKvtshJGkgOubgM7cZkNonXn2B5StLFAOoBw3+0HeOyWnO/P2Xrzu0a8yB41xKYdk0yPpYyOv7RTpCoU0OKYKzWzKAVPpW4KrVw/ELCDBI6iBeOuo+yyziKTHhuYagEdT16ae6uT5HAOdir5hClLMCkhA1jqoCNPrd29lMDuNwDxXBUHVGjTbXsr4mo2m42O+ndHeauRQxcpiYtp1O5qkbWwra7alAsAHVsfcxi8G8sL5Ai/kBHv0FkKhiG5mtg7j39lyOWMU2cYUHGQxr1zGWkJGoFXT0dd9NaySQNCHe6BtQp8XJJ0HPWxFv3ZdVfkzW391r8Tio5OQdCZ1hZTFguuAxIiMkesOjdNiA0hkjeMhx6MVqqg6dfBPbJLTALQY5G06aDfRGZiWmADczyt627XQqfKMnbnHR1o28sUYKmwwLTQtQJA2rdgDQZR2I4WaXf4pgQIf9YH75IxaOOATJlv3KxXih448tYHN5FGVyTKyaVZK8p1FTq1LsbtdyNxdN6vHYbwgYjCtruf8WaMsxaJmYMjsN9VetjTTxBotaIInNr0iLQf4QvlfxYy9eY9InZ1XNJGZADqEYylCD5VW21rIlFh5rsUQOPS+H4PBOrRjAA3KNS4hwz1MwIE2EMdIE6hYWOrYltDNhnEuzGwyiPgpwRMay9tzHbVOvw+znlwZbG7lyHxEqobD7sVcNIaQ0x7uFIDXZABYY1fw5rq9QNZAEGBoBfQRoO+ydo4wtpMLnTMiTuba90u/Gbm9cNn7SplZk6bvKsakDqao5B
QNNpp7LkK1X6p7cScKKrm0wYlrR2v+FHH4Yc8jQk+n5QDnXmgPlsXQnkYQzsk4R8KC41GDQRicTDoVcQQrMy6vKGC6R1V3qPh8Go46F+aY6nroNOusRrlPxcBjQb5I35DpqdfSZS4h/wBpeFcCMEmXSazjEYya4GVQ8kZ0RthZpUOnu489trQ6TqAviPD6b8QK5NxsOxuT5qlHFvFE0RoTJm+8xHcfwlvyrh8Q+UkS5ajEg1VRv59Q1sYlZpSklFVYpqoL1NFhvQhx4BqN1iekDYcgVlwOKGO5xO99ydyNVP4r4zNxFqTCuTpxCkndrPSthW4KgMwNNRBO16h5zBMGUgxMj7D8LdxjyHS3kfU69xcrS+IeCzKTN5NKhunmGHlosw0qFxABG6kU4saSLPvA4FVPDFUuGojTvrzRKTOIacbGTftonBipMy+h4VosMgId2dTpb+xdQFN0CZSt1sVvuCCEcFmpRaRljba+/b9unMSG1JGhnee23fqsdy74a4l+a5+pI8aiDpAjo79PEOjC5XonpqFKFQaZtLowYcbuIfwiHA628iN/2Vk0M1VjqZFteV7b+SvZNhedBNA0mBgtcRHE7x6aK6YiHrq2rMBO1INCAUqr5VbqQpuY5sbnVRVL2uaZ2XlbLf039E0vhHXXI+tTqXUdCEWBQJ7ncGiDWntwtX4ZYCwzGm9v2EfDcQVTnbE+X97816+8IRImDeWTL4qbHOUsAs9YZ4d6QgI6odNkk0223GJjMM81mVGmxa9uu8Oct92IolgDWwYZPUiGmPoF9y5zPmknKcZXFqrfpBXdHjZDoEkUOlguKlC1hzrdkAXrjQYylzt6nESyrULLyyxgxPxEb8umq8lSAdTYHbO03iwO3ONdkE/2gpsWsGr6OZfVGgVpfViAQvnDbVJTXsV2PCGDomm+i53X/wBndHxLw9tRo6fZo2WU5h5R5e+nu4mXU8mtlGm9bPRJAJOrz+YkbAEnYGmTWLXOuYv23SbWAtbYTb2Wii8PMogyfUqb9aM/G2mUbnuQCe3avcOEaWO41YC+h+xT1TD5KRIj9KW78odTxgTpOuhSJZGLKsaeioLIzssKBpYxp6jIC9gW5CnSoPqPwml5IHaAfoL/AGSdQNbiNbRPnJ9dFpcacqdDJHNHMBJMhCamN2hsBFawEo6hezD3mlQ2s1pI9OyZL6TiASmzByf1M3m18ryuDIjKek51Fer5ltRqGmTZhY83+LhisHubACrQeGnVbJ+SsKuV7coYpey2QVUBiFqnBq7ruBZHCLadcGwTrqtI6lC8by+v0uQjJwfO++uNe7l/afaTfz4bIxBNksHUQFT5aTGu86x4ePUk8ErrrjoKUkjLaiwUnbZVJYkduGKVJ8Ozb/hLVKjfhjZKLPsh53OWEfqYxLJ3D4X0bV/im8xB7kdx7j2zqXhD2FpzTBkj6W1TbvFW3yiOR/sK14bZDz4eYCk2Fk6Y6AiDHyxllZJN7AIUu2phaqpJBsm9CthHvcwtsBmnzBA+6SbjGhrsx1yx5GUyovBfmcYo1icGAX1byqTvv7CT62+18HGGqRDil3Ylk2QXnzwwzx441klwzjzdTS8lDsFG+HYMauxtV7Mdxxd2EmLkQgHFxNplM2TkjLgNkv2+0f8AlR+7vw2KEm4WeMR1X36oqY/3Tb7Sfv3/AD4IMM3kpOIdzUuQeHcMWEZYsvhjDEayqoHYCiEZwvWMSsA6wFzCsg6gjDlnJG0Q3RQaxOqLYTw+gGI1HBx3VFioJIu6s71e9bix24nhKwqrY5TydIRXWI2rYdx7u358DLI2RhUlaDD+EBKbTtf2fy93Asx5IkyppfAjNvrMw+3t9tcWFVqgtKFYrwokSSn6i/HuPwPB2jMPhQXPy6rtD4SYBo/2zfd/7/McSWuGyFnaVFifB0BtkJ+ZB+6/zPFwJQnPhVG8MH9sB+ZP5ngopygmrC4k8Oz8B8P/ALxYUkE1Veh5Qj9ov7h+XBxTSoqItheXIwvF+Gp4iuDJIK9TieErcRTR5Xhup+xUfH3/AMuI4SuKqJQYKG9q+XHcNu6vnOy1GTrKRtG7fYGP3UOF302Dkjte481sMtweY6NkkHyII/6q4Tc1iYDnrnGZTmhNNA25oXoo7aqHxqzXuB9x4luTZQ4uOqC4jkibq/urKftWvnpJ4cD/ADSjmr6PlvMAa6S19o/IHi1jsh3G66/qtiK/gX4bn5dq/HiYVfNRycky/wB4nyH+bfyJ4sCguAX/2Q=="
}
...
]
'''
If you need more info about this topic, I wrote a dedicated blog post about how to scrape Google Carousel results.
Alternatively, you can achieve the same thing by using Google Knowledge Graph API from SerpApi. It's a paid API with a free plan.
The difference in your case is that you don't have to deal with the extraction process and figuring out what CSS selector to use or how to deal with other different things, instead you pretty much only need to iterate over structured JSON and get the data you want.
Code to integrate:
from serpapi import GoogleSearch
import os, json

def serpapi_get_top_carousel():
    params = {
        "api_key": os.getenv("API_KEY"),
        "engine": "google",
        "q": "new york city neighborhoods",
        "hl": "en"
    }

    search = GoogleSearch(params)
    results = search.get_dict()

    for result in results['knowledge_graph']['neighborhoods']:
        print(json.dumps(result, indent=2, ensure_ascii=False))
---------------
'''
"neighborhoods": [
{
"name":"Harlem",
"link": "https://www.google.com/search?q=Harlem&stick=H4sIAAAAAAAAAONgFuLUz9U3MIo3sjBT4gAx0yxNSrQUspOt9HPykxNLMvPz9AtyEpNTrfJSM9MzkvKLMvLzU4ofMfpxC7z8cU9YynXSmpPXGO25CGoREudic80rySypFOKV4uZC2GvFpMHEs4iVzSOxKCc1dwIbIwBbfLXHlgAAAA&sa=X&ved=2ahUKEwiipvO2q93zAhUylGoFHcyFA4wQ-BZ6BAgBEDQ",
"image": "https://serpapi.com/searches/61725341e4a23d51edb9dabf/images/d59e4f2f273f964cdd7164417183fc3f42a0f8724e78a4815f5e934903209df6acfc73bbca024b9a.jpeg"
}
...
]
'''
Disclaimer: I work for SerpApi.
I'm attempting to post the data of a pop-out form to a local web site. To do this I'm emulating the request headers, data, and cookie information provided by the site. (Note: I am largely redacting my email and password from the code, for obvious reasons, but all other code remains the same.)
I have tried multiple permutations of the cookie, headers, request data, etc. Additionally, I have verified the cookie and the expected headers and data in a network inspector. I am able to easily set a cookie using requests' sample code, but I cannot explain why my code won't work on a live site, and I'd be very grateful for any assistance. Please see the following code for further details.
import requests
import robobrowser
import json
br = robobrowser.RoboBrowser(user_agent="Windows Chrome",history=True)
url = "http://posting.cityweekly.net/gyrobase/API/Login/CookieV2"
data ={"passwordChallengeResponse":"....._SYGwbDLkSyU5gYKGg",
"email": "<email>%40bu.edu",
"ttl":"129600",
"sessionOnly": "1"
}
headers = {
"Origin": "http://posting.cityweekly.net",
"Accept-Encoding": "gzip, deflate",
"Accept-Language": "en-US,en;q=0.8,ru;q=0.6",
"User-Agent": "Windows Chrome", #"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.65 Safari/537.36",
"Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
"Referer": "http://posting.cityweekly.net/utah/Events/AddEvent",
"X-Requested-With": "XMLHttpRequest",
"Connection": "keep-alive",
"Cache-Control": "max-age=0",
"Host":"posting.cityweekly.net"
}
cookie = {"Cookie": "__utma=25975215.1299783561.1416894918.1416894918.1416897574.2; __utmc=25975215; __utmz=25975215.1416894918.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __qca=P0-2083194243-1416894918675; __gads=ID=e3b24038c9228b00:T=1416894918:S=ALNI_MY7ewizuxK0oISnqPJWlLDAeKFMmw; _cb_ls=1; _chartbeat2=D6vh2H_ZbNJDycc-t.1416894962025.1416897589974.1; __utmb=25975215.3.10.1416897574; __utmt=1"}
r = br.session.get(url, data=json.dumps(data), cookies=cookie, headers=headers)
print r.headers
print [item for item in r.cookies.__dict__.items()]
Note that I print the cookies object and that the cookies attribute (a dictionary) is empty.
You need to perform a POST to log in to the site. Once you do that, I believe the cookies will then have the correct values (not 100% sure on that...). This post clarifies how to properly set cookies.
Note: I don't think you need to do the additional import of requests unless you're using it outside of RoboBrowser.
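A minimal sketch of that POST, under the assumption that the endpoint really expects a form-encoded body (the Content-Type header in the question says application/x-www-form-urlencoded, so the data is passed as a plain dict rather than json.dumps):
import robobrowser

br = robobrowser.RoboBrowser(user_agent="Windows Chrome", history=True)
login_url = "http://posting.cityweekly.net/gyrobase/API/Login/CookieV2"
data = {
    "passwordChallengeResponse": "<redacted, as in the question>",
    "email": "<email>%40bu.edu",
    "ttl": "129600",
    "sessionOnly": "1",
}
# POST the form; requests form-encodes a plain dict automatically.
response = br.session.post(login_url, data=data, headers={"X-Requested-With": "XMLHttpRequest"})
print(response.status_code)
print(br.session.cookies.get_dict())  # the login cookies should now be stored on the session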