How can I fix the 400 status code with this data? [closed] - python

import requests

kikikicz_post = 'https://wtb.kikikickz.com/v1/integrations/airtable/b9586bc6-4151-4c84-a65f-b2d3443c928f/appZLS7at5DuMRxBe/WTB Softr/records?block_id=89e7021d-8d6d-434a-8803-7f64e519831f'
headers2 = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36',
    'Content-Type': 'application/json; charset=utf-8'
}
data2 = {
    "page_size": 100,
    "view": "Grid view",
    "filter_by_formula": "OR(SEARCH(\"dz4709-001\", LOWER(ARRAYJOIN(dz4709-001))),SEARCH(\"dz4709-001\", LOWER(ARRAYJOIN(dz4709-001))),SEARCH(\"dz4709-001\", LOWER(ARRAYJOIN(Nike))))",
    "sort_resources": [
        {
            "field": "Nom",
            "direction": "asc"
        }
    ],
    "rows": 0,
    "airtable_response_formatting": {
        "format": "string"
    }
}

session = requests.Session()
res2 = session.post(kikikicz_post, json=data2, headers=headers2)
print(res2)
I am trying to make a POST request but keep getting a 400 error. I tried changing the payload, but the result is the same. Should I change something?
I can't find any API documentation; the site is backed by Airtable. I just searched for an item and found this POST request along with its response.
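A useful first step is to print the response body, since this kind of endpoint usually returns a JSON error naming the field or formula it rejected. A minimal sketch, reusing session, headers2 and data2 from above (the real cause of the 400 is not visible from the question alone):

res2 = session.post(kikikicz_post, json=data2, headers=headers2)
print(res2.status_code)  # 400
print(res2.text)         # the error body usually says what the server disliked

One thing worth checking is the filter_by_formula string: in Airtable formulas, field names containing hyphens or spaces normally have to be wrapped in curly braces, e.g. ARRAYJOIN({dz4709-001}) rather than ARRAYJOIN(dz4709-001).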

Related

Polldaddy - PD_buttonXXXXX.className='pds-vote-button';alert("This poll did not load properly."); [closed]

Working on a project to create a bot to vote in a school contest.
REQUEST:
https://polls.polldaddy.com/vote-js.php?va=50&pt=0&r=0&p=XXXX&a=YYYYY%2C&o=&t=24136&token=e987a94442b462982294c5a918bb69d6&pz=181
HEADER:
{'Authority': 'polls.polldaddy.fm', 'method': 'GET', 'path': '/vote-js.php?va=50&pt=0&r=0&p=xxxxxx&a=YYYY%2C&o=&t=24136&token=e987a94442b462982294c5a918bb69d6&pz=181', 'scheme': 'https', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,/;q=0.8', 'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8.1.15) Gecko/20080623 Firefox/2.0.0.15', 'referer': 'https://poll.fm/XXXX', 'Upgrade-Insecure-Requests': '1', 'Accept-Encoding': 'gzip, deflate, sdch', 'Accept-Language': 'en-US,en;q=0.8'}
RESULT:
PD_buttonXXXXX.className='pds-vote-button';alert("This poll did not load properly.");
Did anyone have the same problem? Were you able to bypass this issue?
I'm getting a 200 OK, but my vote is not being processed.
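For what it's worth, a minimal sketch of replaying that GET with requests, using the placeholder IDs from the question (Polldaddy may also check cookies or the freshness of the token, which this does not address):

import requests

vote_url = ('https://polls.polldaddy.com/vote-js.php'
            '?va=50&pt=0&r=0&p=XXXX&a=YYYYY%2C&o=&t=24136'
            '&token=e987a94442b462982294c5a918bb69d6&pz=181')
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8.1.15) Gecko/20080623 Firefox/2.0.0.15',
    'Referer': 'https://poll.fm/XXXX',
}

with requests.Session() as s:
    # Load the poll page first so the session picks up any cookies the
    # vote endpoint might expect (an assumption, not confirmed).
    s.get('https://poll.fm/XXXX', headers=headers)
    r = s.get(vote_url, headers=headers)
    print(r.status_code, r.text)

The token in the URL is typically generated per page load, so replaying a captured token is one plausible reason for the "did not load properly" response (again, an assumption).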

Using redis in Python [closed]

How can I run this with Celery so that the result expires after 60 seconds?
from bs4 import BeautifulSoup
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
}

def weather(cities):
    results = []
    for city in cities:
        res = requests.get(f'https://www.google.com/search?q={city} weather&oq={city} weather&aqs=chrome.0.35i39l2j0l4j46j69i60.6128j1j7&sourceid=chrome&ie=UTF-8', headers=headers)
        soup = BeautifulSoup(res.text, 'html.parser')
        weather = soup.select('#wob_tm')[0].getText().strip()
        results.append({city: weather})
    return results

cities = ["tehran", "Mashhad", "Shiraaz", "Semirom", "Ahvaz", "zahedan", "baghdad", "van", "herat", "sari"]
weather_data = weather(cities)
print(weather_data)

def temporary_city(city):
    res = requests.get(f'https://www.google.com/search?q={city} weather&oq={city} weather&aqs=chrome.0.35i39l2j0l4j46j69i60.6128j1j7&sourceid=chrome&ie=UTF-8', headers=headers)
    return res
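One way to give each result a 60-second lifetime is to cache it in Redis with an expiry. A minimal sketch with redis-py, assuming a Redis server on localhost and reusing the weather() function above (Celery's result_expires setting would be another route):

import json
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def cached_weather(city):
    cached = r.get(f'weather:{city}')
    if cached is not None:                 # cache hit: no scraping
        return json.loads(cached)
    result = weather([city])[0]            # cache miss: scrape once
    r.set(f'weather:{city}', json.dumps(result), ex=60)  # expire after 60 seconds
    return result

print(cached_weather('tehran'))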

Why do I get a 400 response status code when sending a POST?

I want to parse product data from this page, but it does not work with requests.get. So I inspected the page's network requests and found an interesting link.
I tried to send a POST request to this link with the correct form data, but in the response I only got {"message":"Expecting value (near 1:1)","status":400}
How can I get the correct product data from this page?
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Safari/537.36",
    "Accept": '*/*',
    "Accept-Encoding": "gzip, deflate, br",
    'Connection': 'keep-alive',
    'Host': 'cgrd9wlxe4-dsn.algolia.net',
    'Origin': 'https://www.eprice.it',
    'Referer': "https://www.eprice.it/",
    'Content-Type': 'application/x-www-form-urlencoded',
    "Sec-Fetch-Dest": 'empty',
    "Sec-Fetch-Mode": 'cors',
    'Sec-Fetch-Site': 'cross-site',
    'sec-ch-ua': "Not A;Brand",
    "sec-ch-ua-mobile": '?0',
    "sec-ch-ua-platform": "Windows",
}
form_data = {
    "requests": [
        {
            "indexName": "prd_products_suggest",
            "params": {
                "highlightPreTag": "<strong>",
                "highlightPostTag": "</strong>",
                "query": 6970995781939,
                "hitsPerPage": 36,
                "clickAnalytics": 1,
                "analyticsTags": ["main", "desktop"],
                "ruleContexts": ["ovr", "desktop", "t1"],
                "facetingAfterDistinct": 1,
                "getRankingInfo": 1,
                "page": 0,
                "maxValuesPerFacet": 10,
                "facets": ["manufacturer", "offer.price", "scegliPer", "offer.shopType",
                           "reviews.avgRatingInt",
                           "navigation.lvl0,navigation.lvl1,navigation.lvl2,navigation.lvl3"],
                "tagFilters": ""
            }
        },
        {
            "indexName": "prd_products_suggest_b",
            "params": {
                "query": 6970995781939,
                "hitsPerPage": 10,
                "clickAnalytics": 1,
                "analyticsTags": ["car_offerte_oggi", "desktop"],
                "ruleContexts": ["ovr", "car_offerte_oggi", "desktop"],
                "getRankingInfo": 1,
                "page": 0,
                "maxValuesPerFacet": 10,
                "minProximity": 2,
                "facetFilters": [],
                "facets": ["manufacturer", "offer.price", "scegliPer", "offer.shopType", "reviews.avgRatingInt",
                           "navigation.lvl0,navigation.lvl1,navigation.lvl2,navigation.lvl3"],
                "tagFilters": ""
            }
        }
    ]
}
response = requests.post(
    url="https://cgrd9wlxe4-dsn.algolia.net/1/indexes/*/queries?"
        "x-algolia-agent=Algolia%20for%20JavaScript%20(4.11.0)%3B%20Browser%20(lite)&"
        "x-algolia-api-key=e9c9895532cb88b620f96f3e6617c00f&"
        "x-algolia-application-id=CGRD9WLXE4",
    headers=headers,
    data=form_data
)
print(response.text)
Algolia is a hosted search API: a retail company can index their product list into Algolia and then integrate their front end with it to query for products to display to customers.
When you inspect the page's network traffic, you are seeing the calls out to this search API as formed by the owner of the website (the retail company), likely using the client you can download from Algolia directly.
I'm not sure why the form data isn't working. If you download the Python client, you will find their own example of how to integrate with it. But the point of the context above is that the hosted Search API accepts requests in different ways, so you can take the request body you have, set the Content-Type header to 'application/json', and you'll get a response.
[Screenshot: Postman call]
The full API documentation - https://www.algolia.com/doc/rest-api/search/#search-index-post
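Following that suggestion, a minimal sketch of the change to the question's code, reusing form_data and the User-Agent from above. requests' json= parameter serializes the body and sets Content-Type: application/json automatically; that this endpoint accepts it comes from the answer above, not from separate verification:

response = requests.post(
    url="https://cgrd9wlxe4-dsn.algolia.net/1/indexes/*/queries?"
        "x-algolia-agent=Algolia%20for%20JavaScript%20(4.11.0)%3B%20Browser%20(lite)&"
        "x-algolia-api-key=e9c9895532cb88b620f96f3e6617c00f&"
        "x-algolia-application-id=CGRD9WLXE4",
    headers={"User-Agent": headers["User-Agent"]},  # the form-urlencoded Content-Type is dropped
    json=form_data,                                 # json= instead of data=
)
print(response.status_code)
print(response.json())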

POST request fails to interact with site

I am trying to log in to a site called grailed.com and follow a certain product. The code below is what I have tried.
The code below succeeds in logging in with my credentials. However, whenever I try to follow a product (the id in the payload is the id of the product), the code runs without any errors but fails to follow the product. I am confused by this behavior. Is it a similar case to Instagram, where Instagram blocks any attempt to interact programmatically with the site and forces you to use its API? (grailed.com does not have a public API, AFAIK.)
I tried the following code (which looks exactly like the POST request sent when you follow on the site).
import requests

# headers/data for the sign-in request defined here
r = requests.Session()
v = r.post("https://www.grailed.com/api/sign_in", json=data, headers=headers)

headers = {
    'authority': 'www.grailed.com',
    'method': 'POST',
    "path": "/api/follows",
    'scheme': 'https',
    'accept': 'application/json',
    'accept-encoding': 'gzip, deflate, br',
    "content-type": "application/json",
    "x-amplitude-id": "1547853919085",
    "x-api-version": "application/grailed.api.v1",
    "x-csrf-token": "9ph4VotTqyOBQzcUt8c3C5tJrFV7VlT9U5XrXdbt9/8G8I14mGllOMNGqGNYlkES/Z8OLfffIEJeRv9qydISIw==",
    "origin": "https://www.grailed.com",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
}
payload = {
    "id": "7917017"
}
b = r.post("https://www.grailed.com/api/follows", json=payload, headers=headers)
If the API is not designed to be public, you are most likely missing a CSRF token in your follow headers.
You have to find a CSRF token and add it to the /api/follows POST.
Taking a quick look at the site's code, this might be hard, as everything happens inside JavaScript.
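As a minimal sketch of that idea, assuming the token is exposed in a <meta name="csrf-token"> tag on the page (common for Rails-style apps, but not confirmed for grailed.com) and reusing the sign-in data/headers the question elides:

from bs4 import BeautifulSoup
import requests

s = requests.Session()
s.post("https://www.grailed.com/api/sign_in", json=data, headers=headers)

# Fetch a page after signing in and look for a csrf-token meta tag (assumption).
page = s.get("https://www.grailed.com/")
soup = BeautifulSoup(page.text, "html.parser")
meta = soup.find("meta", attrs={"name": "csrf-token"})
token = meta["content"] if meta else None

follow_headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "x-api-version": "application/grailed.api.v1",
    "x-csrf-token": token,   # fresh token instead of the hard-coded one
}
b = s.post("https://www.grailed.com/api/follows", json={"id": "7917017"}, headers=follow_headers)
print(b.status_code, b.text)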

Python: remove unwanted data to a standard json [closed]

A URL returns this JSON, which is not in standard format:
{}&& {identifier:'ID', label:'As at 08-03-2018 5:06 PM',items:[{ID:0,N:'2ndChance W200123',SIP:'',NC:'CDWW',R:'',I:'',M:'',LT:0.009,C:0.000,VL:108.200,BV:2149.900,B:'0.008',S:'0.009',SV:7218.300,O:0.009,H:0.009,L:0.008,V:873.700,SC:'5',PV:0.009,P:0.000,BL:'100',P_:'X',V_:''},{ID:1,N:'3Cnergy',SIP:'',NC:'502',R:'',I:'',M:'t',LT:0,C:0,VL:0.000,BV:50.000,B:'0.022',S:'0.025',SV:36.000,O:0,H:0,L:0,V:0.000,SC:'2',PV:0.021,P:0,BL:'100',P_:'X',V_:''},{ID:2,N:'3Cnergy W200528',SIP:'',NC:'1E0W',R:'',I:'',M:'t',LT:0,C:0,VL:0.000,BV:0,B:'',S:'0.004',SV:50.000,O:0,H:0,L:0,V:0.000,SC:'5',PV:0.002,P:0,BL:'100',P_:'X',V_:''}
I want to put all the data into a list or into pandas, starting from ID.
The prefix {}&& {identifier:'ID', label:'As at 08-03-2018 5:06 PM',items: is not wanted when I request the URL.
import requests
from lxml import html

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
url = 'http://www.sgx.com/JsonRead/JsonstData?qryId=RAll'
page = requests.get(url, headers=headers)
alldata = html.fromstring(page.content)
However, I am unable to continue, as the JSON format is not standard. How can I correct it?
import requests
import execjs

url = 'http://www.sgx.com/JsonRead/JsonstData?qryId=RAll'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
page = requests.get(url, headers=headers)
# page.text is a str; strip the leading "{}&& " guard before evaluating
content = page.text[len('{}&& '):] if page.text.startswith('{}&& ') else page.text
data = execjs.get().eval(content)
print(data)
The data is a JavaScript object in literal notation.
We can use PyExecJS to evaluate it and get the corresponding Python dict.
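To get the rows into pandas, as the question asks, a minimal sketch building a DataFrame from the evaluated object (assuming it has the identifier/label/items keys shown in the sample):

import pandas as pd

# `data` is the dict produced by the execjs snippet above
df = pd.DataFrame(data['items'])   # one row per record: ID, N, SIP, NC, ...
df = df.set_index('ID')            # index by the ID field
print(df.head())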
