I need a tool which can download part of the data from a web server and then keep the connection open. Therefore, I thought about a Python script which can:
1) send a request
2) read part of the response
3) freeze - the server should think the connection still exists and should not close it
Is it possible to do this in Python? Here is my code:
from httplib import HTTPConnection

conn = HTTPConnection("myhost", 10000)
conn.request("GET", "/?file=myfile")
r1 = conn.getresponse()
print r1.status, r1.reason
# read only the first ~2 MB of the body
data = r1.read(2000000)
print len(data)
When I run it, all the data is received, and after that the server closes the connection.
Thanks in advance for any help.
httplib doesn't support that. Use another library, like httplib2. Here's an example.
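As a side note (this is not the linked httplib2 example), the requests library in streaming mode can do roughly what the question asks: read part of the body while the underlying connection stays open until the response is closed or fully consumed. A minimal sketch, reusing the host, port, and byte count from the question:

import requests

# stream=True defers downloading the body; the underlying socket stays
# open until the response is closed or read to the end
r = requests.get("http://myhost:10000/?file=myfile", stream=True)
print r.status_code, r.reason
data = r.raw.read(2000000)   # read roughly the first 2 MB only
print len(data)
# ... the connection is still held open at this point ...
r.close()                    # release the connection when done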
I'm learning how to use sockets to make HTTPS requests, and my problem is that the request succeeds (status 200), but I only get part of the webpage content (I can't understand why it's split this way).
I receive my HTTP header with part of the HTML code. I tried it with at least 3 different websites (including GitHub), and I always get the same result.
I'm able to log in to a website with my account, get the cookies for my session, load a new page with those cookies and get a status 200, and still only get part of the website... like just the site's navbars.
If someone has any clue, I'd appreciate it.
import socket
import ssl

HOST = 'www.python.org'
PORT = 443

MySock = socket.socket()
MySock = ssl.wrap_socket(MySock, ssl_version=ssl.PROTOCOL_SSLv23)
MySock.connect((HOST, PORT))

# HTTP headers must be terminated by an empty line (\r\n\r\n)
MySock.send("GET / HTTP/1.1\r\nHost: {}\r\n\r\n".format(HOST).encode())

# Create a file to check the response content
with open('PythonOrg.html', 'w') as File:
    print(MySock.recv(50000).decode(), file=File)
1) I don't seem to be able to load the content with a large buffer; instead of MySock.recv(50000) I need to loop with a smaller buffer, like 4096, and concatenate into a variable.
2) A request needs time to receive the entire response; I used the time.sleep function to manage this waiting, but I'm not sure it's the best way to wait for the server with an SSL socket. If anyone has a nice way to get the entire response message when it's big, feel free.
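For reference, here is a rough sketch of the receive loop described in point 1. It sends Connection: close and then reads small chunks until the server closes the socket, which also sidesteps the waiting problem from point 2; a more complete client would honour Content-Length or chunked encoding instead:

import socket
import ssl

HOST = 'www.python.org'
PORT = 443

sock = ssl.wrap_socket(socket.socket(), ssl_version=ssl.PROTOCOL_SSLv23)
sock.connect((HOST, PORT))
sock.sendall("GET / HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(HOST).encode())

# Read in small chunks until the server closes the connection
chunks = []
while True:
    data = sock.recv(4096)
    if not data:          # empty result means the peer closed the socket
        break
    chunks.append(data)
response = b''.join(chunks)

with open('PythonOrg.html', 'wb') as f:
    f.write(response)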
I am trying to use Python to get a JSON file from the web. If I open the URL in my browser (Mozilla or Chromium) I do see the JSON. But when I do the following in Python:
import urllib2
import json
response = urllib2.urlopen(url)
data = json.loads(response.read())
I get an error message that tells me the following (translated into English): Errno 10060, a connection attempt failed because the server did not respond after a certain period of time, or the connection was faulty, or the host did not respond.
ADDED
It looks like there are many people who have faced the described problem. There are also some answers to similar (or the same) questions. For example, here we can see the following solution:
import requests
r = requests.get("http://www.google.com", proxies={"http": "http://61.233.25.166:80"})
print(r.text)
It is already a step forward for me (I think it is very likely that the proxy is the reason for the problem). However, I still have not got it working, since I do not know the URL of my proxy and I will probably need a user name and password. How can I find them? And how is it that my browsers have them while I do not?
ADDED 2
I think I am now one step further. I have used this site to find out what my proxy is: http://www.whatismyproxy.com/
Then I have used the following code:
proxies = {'http':'my_proxy.blabla.com/'}
r = requests.get(url, proxies = proxies)
print r
As a result I get
<Response [404]>
That does not look so good, but at least I think my proxy is correct, because when I randomly change the address of the proxy I get a different error:
Cannot connect to proxy
So I can connect to the proxy, but something is not found.
I think something might be going wrong when you try to get the JSON from the online source (URL). Just to make things clear, here is a small code snippet:
#!/usr/bin/env python
try:
    # For Python 3+
    from urllib.request import urlopen
except ImportError:
    # For Python 2
    from urllib2 import urlopen
import json

def get_jsonparsed_data(url):
    response = urlopen(url)
    # decode the raw bytes before handing them to json.loads
    data = response.read().decode('utf-8')
    return json.loads(data)
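For example, calling it with a hypothetical URL that returns JSON:

url = "https://api.example.com/data.json"   # placeholder URL
parsed = get_jsonparsed_data(url)
print(parsed)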
If you still get a connection error, you can try a couple of steps:
Try to urlopen() a random site from the interpreter (interactive mode), as sketched below. If you are able to grab the source code, you're good. If not, check your internet connection or try the requests module. Check here
Check and see if the JSON at the URL is in correct syntax. For sample JSON syntax check here
Try the simplejson module.
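A quick interpreter-style check for the first step might look like this (google.com is just an arbitrary reachable site):

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

# If this prints some HTML, basic connectivity and urlopen are working
print(urlopen('http://www.google.com').read()[:200])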
Edit 1:
If you want to access websites through a system-wide proxy, you will have to use a proxy handler and the loopback address (localhost) to connect to that proxy. Sample code is shown below.
import urllib2

proxy = urllib2.ProxyHandler({
    'http': '127.0.0.1',
    'https': '127.0.0.1'
})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# this way you can send both http and https requests using proxies
urllib2.urlopen('http://www.google.com')
urllib2.urlopen('https://www.google.com')
I have not worked a lot with ProxyHandler; I just know the theory and the code. I am sure there are better ways to access websites through proxies, ones which do not involve installing the opener every time you run the program. But hopefully it will point you in the right direction.
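For completeness, here is a small variant (just a sketch, with the same placeholder proxy address as above) that avoids installing the opener globally by calling the opener object directly:

import urllib2

# keep the opener object and call its open() method instead of install_opener()
opener = urllib2.build_opener(urllib2.ProxyHandler({
    'http': '127.0.0.1',
    'https': '127.0.0.1'
}))
print opener.open('http://www.google.com').read()[:200]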
Hello!
My problem is with Python sending a firmware file to a controller that normally works with a prepared HTML upload script. The problem is that the upload does not succeed (it doesn't even start uploading), although the script runs through. The file is a binary container file. The problem should be in the code, because the upload can be completed the other way.
Here the output:
09:54:40:11 ...STEP: Upload Firmware
09:54:49:63 ...Upload was successful!
09:54:49:64 ...POST resource
09:54:50:60 ...Response: {"uploadFirmwareAck":0}
So it says the upload "was done" within 9 seconds, but it should take about 5 minutes. With the debugger I saw that the upload did not start; the script just jumped over it and printed the "Upload was successful" message. I have no clue why. Any ideas?
The code:
import pycurl
from cStringIO import StringIO
import urllib2
import simplejson as json

url = 'http://eData/pvi?rName=FirmwareUpload'

# plain urllib2 request, used later to fetch the controller's JSON answer
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')

# pycurl handle that is supposed to upload the firmware file
c = pycurl.Curl()
c.setopt(c.POST, 1)
c.setopt(c.URL, url)
c.setopt(c.CONNECTTIMEOUT, 0)
c.setopt(c.TIMEOUT, 0)
c.setopt(pycurl.FOLLOWLOCATION, 1)
c.setopt(pycurl.MAXREDIRS, 5)
c.setopt(pycurl.NOSIGNAL, 1)
c.setopt(c.HTTPPOST, [("file1", (c.FORM_FILE, "c:\\Users\\dem2bp\\Desktop\\HMI_Firmware update materials\\output_38.efc"))])
c.perform()
print "Upload was successful!"
print "Tx JSON:"
print "POST resource"

# second, separate request that asks for the upload acknowledge
res = urllib2.urlopen(req)
print "Response:"
str_0 = res.read()
print str_0
c.close()
From the documentation:
PycURL is targeted at an advanced developer - if you need dozens of
concurrent, fast and reliable connections or any of the sophisticated
features listed above then PycURL is for you.
I would give http://www.python-requests.org/en/latest/ a try. For me, it's always the first choice when doing some HTTP stuff. Usually it just does what it's supposed to do in a few lines of code.
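If requests is available, a file upload of the kind the pycurl code above attempts might look roughly like this; the URL and the "file1" field name are simply copied from the question, so treat them as assumptions:

import requests

url = 'http://eData/pvi?rName=FirmwareUpload'
path = 'c:\\Users\\dem2bp\\Desktop\\HMI_Firmware update materials\\output_38.efc'

# multipart/form-data upload, analogous to pycurl's FORM_FILE
with open(path, 'rb') as f:
    r = requests.post(url, files={'file1': f})

print r.status_code
print r.text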
Thank you, but as far as I can see, the requests library does not run with an old version of Python like 2.6. I think it would be too risky to upgrade. Do you have another idea?
When I import the requests library, at some point a part of it that requires a later version throws syntax errors at me.
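If you are stuck on Python 2.6, one thing that might be worth trying (only a sketch, not a verified fix) is to stay entirely within pycurl and capture the server's reply with WRITEFUNCTION, instead of issuing a separate urllib2 request afterwards:

import pycurl
from cStringIO import StringIO

buf = StringIO()
c = pycurl.Curl()
c.setopt(c.URL, 'http://eData/pvi?rName=FirmwareUpload')
c.setopt(c.HTTPPOST, [("file1", (c.FORM_FILE,
    "c:\\Users\\dem2bp\\Desktop\\HMI_Firmware update materials\\output_38.efc"))])
c.setopt(c.WRITEFUNCTION, buf.write)   # collect the server's reply in-process
c.perform()
print "HTTP status:", c.getinfo(c.RESPONSE_CODE)
print "Response body:", buf.getvalue()
c.close()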
I am using the suds package to make API requests to a website. I wrote a function which opens up a client to the website and makes the request.
I am wondering whether I should, and how I can, terminate the connection at the end of the function.
I am wondering whether the client is something like MySQLdb.connect, which would actually open up many separate API connections that never close, every time I call this function.
from suds.client import Client
import sys, re

def querysearch(reqPartNumber, reqMfg, lock):
    try:
        client = Client('http://app....')
        userInfo = {'id': .., 'password': ...}
        apiResponse = client.service.getParts(...)
        ...
        print apiResponse
    except:
        ...
SOAP is still an HTTP request, which is stateless. Each request will start a whole new connection, re-authenticate, etc. Browsers kind of short-circuit that with cookies, but SOAP doesn't. So you don't need to close the connection; it's already closed by the time suds returns your data to you.
Additionally, looking at the latest source, Client() doesn't define a close or __exit__ method, so there's nothing you really have to do here.
I've got python code of the form:
import os

(o, i) = os.popen2("/usr/bin/ssh host executable")
ios = IOSource(i, o)
Library code then uses this IOSource, doing write()s and read()s against the input stream i and output stream o.
Yes, there is IPC going on here. Think RPC.
I want to do this, but in an HTTP fashion rather than spawning an ssh.
I've done Python HTTP before with:
import httplib

conn = httplib.HTTPConnection('localhost', 8000)
conn.connect()
conn.request('POST', '/someurl/')
response = conn.getresponse()
How do I get the input stream/output stream from the HTTPConnection so that my library code can read from and write to it just like in the ssh example above?
for output:
output = response.read()
http://docs.python.org/library/httplib.html#httpresponse-objects
for input:
pass your data in the POST body of your request
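Putting both together, a minimal sketch in the same httplib style as the question; the URL, port, payload, and content type are placeholders:

import httplib

payload = 'some data to send'          # placeholder "output stream" content

conn = httplib.HTTPConnection('localhost', 8000)
# "write": the POST body carries the data you would have written to the pipe
conn.request('POST', '/someurl/', payload, {'Content-Type': 'text/plain'})
# "read": the response object acts as the input stream
response = conn.getresponse()
output = response.read()
print output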