Posting to a friend's Facebook wall WITHOUT using the Graph API - Python [duplicate]

This question already has an answer here:
Posting to friends' wall with Graph API via 'feed' connection failing since Feb 6th 2013
(1 answer)
Closed 9 years ago.
I know that using the Graph API we can no longer post on a friend's wall. Has anyone else found a way around it? I have my current application set up with access tokens and so on, but because the Facebook Graph API can no longer post to a friend's wall using the friend's profile ID, I am somewhat lost on how to fix this. Is there a way around it, using Python?

Use the Feed Dialog instead. See here for the reasons why.
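A minimal sketch of what "use the Feed Dialog" looks like from Python: instead of posting via the Graph API, you redirect the user's browser to a Feed Dialog URL and let them confirm the post themselves. The parameter names below (`app_id`, `to`, `link`, `redirect_uri`) were the documented Feed Dialog parameters at the time of this question; verify against the current Facebook docs before relying on them.

```python
from urllib.parse import urlencode

def build_feed_dialog_url(app_id, friend_id, link, redirect_uri):
    """Build a Facebook Feed Dialog URL that prompts the user (in their
    own browser) to share a link on a friend's timeline, rather than
    posting server-side through the Graph API."""
    params = {
        "app_id": app_id,
        "to": friend_id,            # the friend the post is directed at
        "link": link,               # the URL being shared
        "redirect_uri": redirect_uri,  # where Facebook sends the user afterwards
    }
    return "https://www.facebook.com/dialog/feed?" + urlencode(params)
```

In a web app you would respond with an HTTP redirect to this URL; the user sees Facebook's own dialog and chooses whether to post.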


NLP: How to tell which topic a text is talking about

I am a noob in Python. I am making a study website for children: a student enters a free-text response to a question, and the response is then reviewed to check whether the child has answered on a particular topic and whether the answer covers a few key points. How do I build and implement this using Django? By the way, I started web development a week ago, so I am very new to this. Please help me, thank you!
You can check the Django tutorial. It will help you understand how Django works, and it should get you close to what you need.
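Before reaching for heavy NLP machinery, the "does the answer mention a few key points" check can be sketched as simple keyword overlap. This is an illustrative baseline, not a real topic classifier; the key-point names and keywords below are made up, and in a Django view you would call a function like this from the form-handling code.

```python
def covers_key_points(response: str, key_points: dict) -> dict:
    """Return, for each key point, whether the student's free-text
    response mentions at least one of its signal keywords.

    key_points maps a point name to a set of lowercase keywords."""
    words = set(response.lower().split())
    return {point: bool(words & keywords)
            for point, keywords in key_points.items()}
```

For anything beyond exact keyword matching (synonyms, misspellings, paraphrases) you would want stemming or an NLP library, but this shows the basic shape of the check.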

Is it possible to override request payload in python? [duplicate]

This question already exists:
How to add/edit data in request-payload available in google chrome dev tools [duplicate]
Closed 3 years ago.
I've been looking for an answer to this for quite a while, but with no results. I'm working with Selenium and I need to override one request that is generated after the submit button is clicked. It contains data in JSON format under "Request Payload" in Chrome DevTools. I found selenium-wire, which provides functionality such as request interception, but I'm not sure it works the way I want. Can anyone give me a hint on where to start, or which tools are appropriate for this?
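selenium-wire does support this via `driver.request_interceptor`, which lets you mutate a request (including its body) before it is sent. A minimal sketch, with the JSON-rewriting part separated out; the URL and field names in the commented interceptor are placeholders:

```python
import json

def override_payload(body: bytes, overrides: dict) -> bytes:
    """Merge new values into a JSON request body and return the new bytes."""
    data = json.loads(body.decode("utf-8"))
    data.update(overrides)
    return json.dumps(data).encode("utf-8")

# With selenium-wire, the interceptor would look roughly like this
# (requires `pip install selenium-wire`; "/submit" and "amount" are
# hypothetical examples):
#
# def interceptor(request):
#     if request.method == "POST" and request.url.endswith("/submit"):
#         request.body = override_payload(request.body, {"amount": 100})
#         # keep Content-Length consistent with the rewritten body
#         del request.headers["Content-Length"]
#         request.headers["Content-Length"] = str(len(request.body))
#
# driver.request_interceptor = interceptor
```

The interceptor runs for every request the browser makes, so filtering on method and URL first is important.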

Django - Auto run scripts at certain time on Live Server [duplicate]

This question already has answers here:
Set up a scheduled job?
(26 answers)
Closed 5 years ago.
Newbie here. I have a Python program with a lot of different functions that download a bunch of data from the internet, manipulate it, and display it to the public. I have a bunch of links to different tables of data, and it would be very inconvenient for people to have to wait for the data to download when they click a link on my website. How can I configure Django to run the download scripts at, say, 6am, and save some kind of cached template of the data so people can quickly view it for the entire day, then refresh and update the data for the next day? Your insight and guidance would be much appreciated. Thank you, and happy holidays!
I'd suggest Celery for any recurring tasks in Django. Its docs are great and already include a "First steps with Django" tutorial.
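The recurring-task part is handled by Celery beat. A minimal configuration sketch (module and app names are placeholders; this assumes Celery is installed and wired to your Django settings as in the Django tutorial from the Celery docs):

```python
# tasks.py - a minimal sketch, not a drop-in file
from celery import Celery
from celery.schedules import crontab

app = Celery("mysite")

@app.task
def refresh_data():
    # Download the external data, build the tables, and store the
    # rendered result (e.g. via django.core.cache) so page views for
    # the rest of the day serve the cached copy instead of re-downloading.
    ...

app.conf.beat_schedule = {
    "refresh-data-every-morning": {
        "task": "tasks.refresh_data",
        "schedule": crontab(hour=6, minute=0),  # run daily at 6:00
    },
}
```

You then run a worker plus the beat scheduler (`celery -A mysite worker` and `celery -A mysite beat`) alongside the Django app.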

How to solve a reCaptcha in advance using a web scraper? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I'm currently in the process of trying to solve a reCaptcha. One of the suggestions received was a method called token farming.
For example, it is apparently possible to farm reCaptcha tokens from another site and, within 2 minutes, apply one of the farmed tokens to the site I'm trying to solve by changing the site's code on the back end.
Unfortunately, I wasn't able to get any further explanation of how to go about doing so, especially the part about changing the site's code on the back end.
If anyone’s able to elaborate or give insights on the process, would really appreciate the expertise.
Token farming / token harvesting has been described here in detail: https://www.blackhat.com/docs/asia-16/materials/asia-16-Sivakorn-Im-Not-a-Human-Breaking-the-Google-reCAPTCHA-wp.pdf
The approach for "token farming" discussed in this paper is based on the following mechanism:
Each user that visits a site with recaptcha is assigned a recaptcha-token.
This token is used to identify the user over multiple site visits and to mark them as a legitimate (or illegitimate) user.
Depending on various factors, such as the age of the recaptcha-token, user behavior, and browser configuration, the user is presented on each visit with one of the various recaptcha versions, or even no captcha at all.
(more details can be extracted from their code here: https://github.com/neuroradiology/InsideReCaptcha)
This means that if one can create a huge number of fresh, clean tokens for a target site and age them for 9 days (the period the paper found), these tokens can be used to access a few recaptcha-protected sites before ever seeing a recaptcha.
To my understanding, such a fresh token has to be passed as a Cookie to the site in question.
However, I recall having read somewhere that Google closed this gap within a few days of the presentation.
Also, there are most probably other, similar approaches that have been labeled "token farming".
As far as I know, all of these approaches exploited loopholes in the recaptcha system, and those loopholes were closed by Google very quickly, often even before the paper or presentation went public, since responsible authors usually inform Google in advance.
So for you this is most probably only of academic value or for learning about proper protection of captcha systems and token based services in general.
Update
A quick check on a few recaptcha protected sites showed that the current system now scrambles the cookies, but the recaptcha-token can be found in the recaptcha form as two hidden input elements with partially different values and the id="recaptcha-token".
When visiting such a page with a clean browser you will get a new recaptcha token which you can save away and insert into the same form later when needed. At least that's the theory, it is very likely that all the cookies and some long term persisted stuff in your browser will keep you from doing this.
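Extracting that hidden `id="recaptcha-token"` input from a fetched page can be sketched with the standard-library HTML parser (no third-party dependency); this only demonstrates the extraction step, not that a saved token will actually be accepted later:

```python
from html.parser import HTMLParser

class RecaptchaTokenFinder(HTMLParser):
    """Collect the value attribute of <input> elements whose
    id is "recaptcha-token"."""
    def __init__(self):
        super().__init__()
        self.tokens = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("id") == "recaptcha-token":
            self.tokens.append(attrs.get("value"))

def extract_tokens(page_html: str) -> list:
    """Return all recaptcha-token values found in the page."""
    finder = RecaptchaTokenFinder()
    finder.feed(page_html)
    return finder.tokens
```

As noted above, the page may contain two such hidden inputs with partially different values, so the function returns a list rather than a single token.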

Scraping Google [duplicate]

This question already has an answer here:
scrape google resultstats with python [closed]
(1 answer)
Closed 9 years ago.
I am attempting to scrape Google search results as the results I receive using the API are not as useful as the results from the main site.
I am using the python requests library to grab the search page. However I am receiving an error:
Instant is off due to connection speed. Press Enter to search.
Is there any way I can disable Instant Search?
Thanks.
Google App Engine already has a search API for Python, which might save you some heartache.
https://developers.google.com/appengine/docs/python/search/
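If you do keep scraping the plain search page with requests, a commonly suggested workaround at the time was adding `complete=0` to the query string to disable Instant/Suggest; treat that parameter as an assumption rather than a documented API, and note that Google may still block non-browser clients. A sketch of the URL construction:

```python
from urllib.parse import urlencode

def google_search_url(query: str, num: int = 10) -> str:
    """Build a plain Google search URL.

    complete=0 is the historical, undocumented trick said to turn off
    Instant Search; num requests a result-count per page."""
    params = {"q": query, "num": num, "complete": 0}
    return "https://www.google.com/search?" + urlencode(params)
```

You would typically also send a browser-like User-Agent header with the request, since the default python-requests User-Agent tends to get served a degraded or blocked page.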
