I'm having trouble implementing a GET request. As an example of what shouldn't be a difficult task, I want to fetch the data from GET https://www.bitstamp.net/api/transactions/ and bring it into a convenient form.
The API used is https://www.bitstamp.net/api/
I'm interested in everything from the syntax to the modules I need to install for this request.
Have you seen this already?
http://docs.python-requests.org/en/latest/
e.g.:
import requests

response = requests.get("https://www.bitstamp.net/api/transactions/")
print(response.json())
If you don't want to install an extra library, you can use Python's built-in urllib2 library (urllib.request in Python 3), which is just as easy for something like connecting to a URL.
import urllib2
print urllib2.urlopen("https://www.bitstamp.net/api/transactions/").read()
For parsing the result, use Python's json library.
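As a sketch of that parsing step (the payload below is illustrative sample data, shaped like the transactions endpoint's response rather than taken from it):

```python
import json

# Illustrative sample, shaped like the Bitstamp transactions response.
raw = '[{"date": "1700000000", "tid": "1", "price": "30000.00", "amount": "0.05"}]'

# json.loads turns the JSON text into ordinary Python lists/dicts.
transactions = json.loads(raw)
for t in transactions:
    print(t["price"], t["amount"])
```

In practice you would pass the bytes returned by urlopen(...).read() to json.loads instead of the sample string.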
There is a Python lib for that, the bitstamp-python-client. Do not waste your time reinventing the wheel. ;-)
What's a simple and performant way to save an online published list of IP addresses, like this one, into a standard Python list? Example:
ip_list = ['109.70.100.20','185.165.168.229','51.79.86.174']
The HTML parsing library BeautifulSoup seems far too sophisticated for such a simple structure.
It's not that BeautifulSoup is too sophisticated; it's that the content type is plain text, not HTML. There are several APIs for downloading content, and requests is a popular one. If you use its text property, it will perform any decoding and unzipping needed:
import requests
resp = requests.get("https://www.dan.me.uk/torlist/")
ip_list = resp.text.split()
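If you also want to guard against stray tokens in the downloaded text, the standard library's ipaddress module can validate each entry. A sketch, with inline sample data standing in for the live download:

```python
import ipaddress

# Stand-in for resp.text from the live download; one bad token included.
raw = "109.70.100.20\n185.165.168.229\nnot-an-ip\n51.79.86.174\n"

ip_list = []
for token in raw.split():
    try:
        ipaddress.ip_address(token)  # raises ValueError for invalid entries
        ip_list.append(token)
    except ValueError:
        pass

print(ip_list)
```

This keeps the simple split-based approach but silently drops anything that isn't a valid IPv4 or IPv6 address.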
For example, I tried getting Python to read the following filtered page
http://www.hearthpwn.com/cards?filter-attack-val=1&filter-attack-op=1&display=1
but Python only gets the unfiltered page http://www.hearthpwn.com/cards instead.
The standard library urllib2 normally follows redirects. If retrieving this URL used to work without being redirected, then the site has changed.
Although you can prevent following the redirect within urllib2 (by providing an alternative HTTP handler), I recommend using requests, where you can do:
import requests
r = requests.get('http://www.hearthpwn.com/cards?filter-attack-val=1'
'&filter-attack-op=1&display=1', allow_redirects=False)
print(r)
giving you:
<Response [302]>
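To observe the redirect behaviour without depending on the live site, here is a self-contained sketch that serves a 302 from a local test server and shows that urllib follows it automatically (the paths and response body are illustrative):

```python
import http.server
import threading
import urllib.request

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old":
            # Redirect /old to /new with a 302.
            self.send_response(302)
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"final page")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib.request (urllib2 in Python 2) follows the redirect transparently,
# so we receive the body of /new even though we asked for /old.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/old").read()
print(body)  # b'final page'

server.shutdown()
```

With requests and allow_redirects=False, the same request would instead stop at the 302 response, as shown above.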
I've been searching for hours on how to extract the main text of a Wikipedia article, without all the links and references. I've tried wikitools, mwlib, BeautifulSoup and more. But I haven't really managed to.
Is there any easy and fast way for me to take the clear text (the actual article), and put it in a Python variable?
SOLUTION: Omid Raha solved it :)
You can use the wikipedia package, a Python wrapper for the Wikipedia API. Here is a quick start.
First install it:
pip install wikipedia
Example:
import wikipedia
p = wikipedia.page("Python programming language")
print(p.url)
print(p.title)
content = p.content # Content of page.
Output:
http://en.wikipedia.org/wiki/Python_(programming_language)
Python (programming language)
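For reference, the wrapper ultimately talks to the MediaWiki API. A plain-requests approach would build a query like the following (a sketch, assuming the TextExtracts extension's extracts/explaintext parameters, which return the article as plain text):

```python
from urllib.parse import urlencode

# Parameters for the MediaWiki API's plain-text extract of an article.
params = {
    "action": "query",
    "prop": "extracts",
    "explaintext": 1,  # plain text instead of HTML
    "titles": "Python (programming language)",
    "format": "json",
}
url = "https://en.wikipedia.org/w/api.php?" + urlencode(params)
print(url)
```

You could then pass this URL to requests.get and pull the extract out of the JSON response.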
I'm developing a Python script and I need to find the fastest way to get JSON from a remote server. Currently I'm using the requests module, but requesting the JSON is still the slowest part of the script. So, what is the fastest way to make an HTTP GET request in Python?
Thanks for any answer.
Write a C module that does everything. Or fire up a profiler to find out in which part of the code the time is spent exactly and then fix that.
Just as guideline: Python should be faster than the network, so the HTTP request code probably isn't your problem. My guess is that you do something wrong but since you don't provide us with any information (like the code you wrote), we can't help you.
Maybe you have a lot of JSON requests to make that could be done simultaneously. Then you can use asynchronous requests and thus mitigate the time spent waiting on the network.
You can try this project https://github.com/kennethreitz/grequests (from Kenneth Reitz, who wrote requests).
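As an illustration of that concurrent pattern, here is a dependency-free sketch using the standard library's concurrent.futures; fetch_json is a stand-in for a real requests.get(url).json() call, and the URLs are made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_json(url):
    # Stand-in for requests.get(url).json(); the sleep simulates network latency.
    time.sleep(0.1)
    return {"url": url}

urls = ["https://api.example.com/item/%d" % i for i in range(10)]

# Ten requests run in parallel, so the waits overlap instead of adding up.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch_json, urls))

print(len(results))  # 10
```

Sequentially these ten simulated requests would take about a second; overlapped in threads they take roughly the latency of a single request.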
Are there any equivalents in Objective-C to the following Python urllib2 functions?
Request, urlopen, HTTPError, HTTPCookieProcessor
Also, how would I be able to do this and change the method from "get" to "post"?
NSMutableHTTPURLRequest, a category of NSMutableURLRequest, is how you set up an HTTP request. Using that class you will specify a method (GET or POST), headers and a url.
NSURLConnection is how you open the connection. You will pass in a request and delegate, and the delegate will receive data, errors and messages related to the connection as they become available.
NSHTTPCookieStorage is how you manage existing cookies. There are a number of related classes in the NSHTTPCookie family.
With urlopen, you open a connection and read from it. There is no direct equivalent to that unless you use something lower level like CFReadStreamCreateForHTTPRequest. In Objective-C everything is passive, where you are notified when events occur on the stream.
You're looking for some combination of NSURL, NSURLRequest, NSURLConnection, NSHTTPConnection, etc. Check out the URL Loading System Programming Guide for all the information you need.