I have a program that collects data from various sources and builds an XML excerpt.
It would be easy to write that data to a file and let Apache do the work, but I don't want to depend on a web server for just one dynamic set of XML.
The result should be displayed in a browser, with automatic self-refresh.
What I want to know is: how do I serve data to a browser dynamically from Python, without creating a file in the file system?
SimpleHTTPServer does not fit.
What I've accomplished so far: I can telnet to the port and output XML or text, but the browser interprets it as plain text - not HTML and not XML.
Any suggestions are welcome.
OK, found it:
just send the correct HTTP headers before the XML part. In this case the response can always be 200 OK with content type text/xml.
here is the python code snippet:
try:
    # wait for a connection
    conn, addr = sock.accept()
    request = conn.recv(4096).strip()
    if request == "TXT":
        conn.send(OutputText(Alerts))
    else:
        # header lines, and the blank line that separates headers
        # from the body, must end with \r\n
        HTTPHeader = 'HTTP/1.1 200 OK\r\nContent-Type: text/xml\r\n\r\n'
        HTTPReply = HTTPHeader + OutputXML(Alerts)
        conn.send(HTTPReply)
    # send the data and close the connection
    conn.close()
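For reference, the same idea can be sketched with the standard library's http.server instead of a raw socket. build_xml() below is a hypothetical stand-in for the OutputXML(Alerts) call above, and the non-standard but widely supported Refresh header takes care of the browser's self-refresh:

```python
# Minimal sketch: serve dynamically generated XML without writing a file.
# build_xml() is a placeholder for the real XML generator.
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_xml():
    # hypothetical stand-in for OutputXML(Alerts)
    return '<?xml version="1.0"?><alerts><alert>demo</alert></alerts>'

class XMLHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = build_xml().encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'text/xml')
        # ask the browser to re-request the page every 5 seconds
        self.send_header('Refresh', '5')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    # blocks forever; call from your main program
    HTTPServer(('', port), XMLHandler).serve_forever()
```

Calling serve() then pointing a browser at http://localhost:8000/ should show the XML and reload it every five seconds.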
I'm learning how to use sockets to make HTTPS requests, and my problem is that the request succeeds (status 200), but I only get part of the page content (I can't understand why it's split up this way).
I receive the HTTP header with part of the HTML code. I tried it with at least three different websites (including GitHub), and I always get the same result.
I'm able to log in to a website with my account, get cookies for my session, load a new page with those cookies, get a status 200, and still only get part of the page... like just the site's navbars.
If someone has any clue.
import socket
import ssl

HOST = 'www.python.org'
PORT = 443

# ssl.wrap_socket is deprecated; use an SSLContext instead
context = ssl.create_default_context()
MySock = context.wrap_socket(socket.socket(), server_hostname=HOST)
MySock.connect((HOST, PORT))
# HTTP header lines must end with \r\n, and the header block must be
# terminated by an empty line
MySock.send('GET / HTTP/1.1\r\nHost: {}\r\n\r\n'.format(HOST).encode())

# Create a file to check the response content
with open('PythonOrg.html', 'w') as File:
    print(MySock.recv(50000).decode(), file=File)
1) I don't seem to be able to load the content with one large buffer, as in MySock.recv(50000); I need to loop with a smaller buffer, like 4096, and concatenate the chunks into a variable.
2) The full response takes time to arrive; I used the time.sleep function to manage this waiting, but I'm not sure it's the best way to wait for the server with an SSL socket. If anyone has a nice way to get the entire response message when it's big, feel free.
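A sketch of the loop described in 1): recv() returns only the bytes currently available, so the reliable pattern is to read in small chunks until the server closes the connection (which a "Connection: close" request header guarantees), rather than sleeping. The function names here are illustrative:

```python
# Read a socket until the peer closes it, then assemble the chunks.
import socket
import ssl

def read_until_closed(sock, bufsize=4096):
    # recv() may return fewer bytes than requested, so keep reading
    # until it returns b'', which means the peer closed the connection
    chunks = []
    while True:
        chunk = sock.recv(bufsize)
        if not chunk:
            break
        chunks.append(chunk)
    return b''.join(chunks)

def fetch(host, path='/', port=443):
    context = ssl.create_default_context()
    with context.wrap_socket(socket.socket(), server_hostname=host) as s:
        s.connect((host, port))
        # "Connection: close" asks the server to close the socket when
        # the response is complete, which terminates the read loop
        request = ('GET {} HTTP/1.1\r\n'
                   'Host: {}\r\n'
                   'Connection: close\r\n\r\n').format(path, host)
        s.sendall(request.encode())
        return read_until_closed(s)
```

With this approach no sleep is needed: the end of the response is signaled by the connection closing, not by a timeout.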
I built a micro web service, but I find it hangs a lot. By "hang" I mean all requests just time out; when it hangs, I can see the process running fine on the server, using only about 15 MB of memory as usual. I think it's a very interesting problem to post. The code is super simple; please tell me what I am doing wrong.
import json
from bottle import Bottle, static_file, request
from werkzeug.serving import run_simple

app = Bottle()

# static routing
@app.route('/')
def server_static_home():
    return static_file('index.html', root='client/')

@app.route('/<filename>')
def server_static(filename):
    return static_file(filename, root='client/')

@app.get('/api/data')
def getData():
    data = {}
    arrayToReturn = []
    with open("data.txt", "r") as dataFile:
        entryArray = json.load(dataFile)
    for entry in entryArray:
        if not entry['deleted']:
            arrayToReturn.append(entry)
    data["array"] = arrayToReturn
    return data

@app.put('/api/data')
def changeEntry():
    jsonObj = request.json
    with open("data.txt", "r+") as dataFile:
        entryArray = json.load(dataFile)
        for entry in entryArray:
            if entry['id'] == jsonObj['id']:
                entry['val'] = jsonObj['val']
        dataFile.seek(0)
        json.dump(entryArray, dataFile, indent=4)
        dataFile.truncate()
    return {"success": True}

run_simple('0.0.0.0', 80, app, use_reloader=True)
Basically mydomain.com is routed to my index.html and loads the necessary JS and CSS files; that's what the static routing part is doing. Once the page is loaded, an AJAX GET request is fired to /api/data to load the data, and when I modify data, another AJAX PUT request is fired to /api/data to modify it.
How to reproduce
It's very easy to reproduce the hang: I just need to visit mydomain.com and refresh the page 10-30 times rapidly, then it stops responding. But I was never able to reproduce this locally, however fast I refresh, and data.txt is the same on my local machine.
Update
Turns out it's not a problem with reading/writing the file, but a problem with writing to a broken pipe. The client that sent the request closed the connection before receiving all the data. I'm looking into a solution now...
It looks like you are opening and reading the same data.txt file on every PUT request. Eventually you are going to run into concurrency issues with this architecture, as multiple requests try to open and write to the same file.
The best solution is to persist the data to a database (something like MySQL, Postgres, MongoDB) instead of writing to a flat file on disk.
However, if you must write to a flat file, you should write to a different file per request, where the name of the file could be jsonObj['id']. This way you avoid the problem of multiple requests reading and writing the same file at the same time.
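A minimal sketch of the one-file-per-id idea; the directory layout and field names are illustrative, not taken from the original service. Writing to a temporary file and renaming it keeps readers from ever seeing a half-written file:

```python
# Sketch: one JSON file per entry id, so concurrent PUTs for different
# ids never touch the same file. Paths and field names are illustrative.
import json
import os

DATA_DIR = 'data'

def save_entry(entry_id, value):
    os.makedirs(DATA_DIR, exist_ok=True)
    path = os.path.join(DATA_DIR, '{}.json'.format(entry_id))
    # write to a temp file and atomically rename it into place,
    # so a concurrent reader never sees a partially written file
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        json.dump({'id': entry_id, 'val': value}, f)
    os.replace(tmp, path)

def load_entry(entry_id):
    path = os.path.join(DATA_DIR, '{}.json'.format(entry_id))
    with open(path) as f:
        return json.load(f)
```

This removes contention between different ids; two simultaneous PUTs for the *same* id would still need a lock or a database.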
Reading and writing your data.txt file will be subject to race conditions, as Calvin mentions. Databases are pretty easy in Python, especially with libraries like SQLAlchemy. But if you insist, you can also use a global list and a lock, assuming your web server is not running as multiple processes. Something like:
import threading

entryArray = []
mylock = threading.Lock()

@app.put('/api/data')
def changeEntry():
    jsonObj = request.json
    with mylock:
        for entry in entryArray:
            if entry['id'] == jsonObj['id']:
                entry['val'] = jsonObj['val']
I recorded an HLS stream by writing the MPEG-TS stream's contents into a GridFS filesystem.
I'm now trying to serve this content back to the browser using aiohttp's StreamResponse, which fails for different reasons.
from asyncio import CancelledError
from aiohttp.web import StreamResponse

async def get_video(request):
    stream_response = StreamResponse()
    stream_response.headers['Content-Type'] = 'video/mp2t'
    stream_response.headers['Cache-Control'] = 'no-cache'
    stream_response.headers['Connection'] = 'keep-alive'
    await stream_response.prepare(request)
    fd = GridFS()
    video_stream = await fd(video_id)
    while True:
        try:
            chunk = await video_stream.readchunk()
            if not chunk:
                break
            await stream_response.write(chunk)
        except CancelledError:
            # fails here in Safari, or with a different content type also in Chrome
            break
    await stream_response.write_eof()
    return stream_response
When trying to access the URL using Safari, I get the player UI, but nothing plays, while the server throws a CancelledError exception trying to write to the already closed StreamResponse.
Opening the URL in Chrome results in downloading the video file. This file works when played back in VLC. Even playing the URL inside VLC using "Network Source" works.
I also tried serving a static M3U playlist in front of this direct URL, like this, but without luck (VLC also works using the playlist instead of the direct stream):
#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="medium",NAME="Medium",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=992000,RESOLUTION=852x480,CODECS="avc1.66.31,mp4a.40.2",VIDEO="medium"
http://localhost:8080/videos/{video_id}
I'm not sure how to debug this any further and would appreciate any help (or ask in the comments if I'm being unclear). What am I missing, such that the files don't get played back in the browser when accessed directly? Embedding my resource URL into an HTML video tag didn't help either (obviously, since browsers do the same when accessing a video directly).
Some more information about the video content and the raw HTTP responses I'm sending:
Video information (VLC)
Direct video stream HTTP response (start)
M3U playlist HTTP response
I have no experience with HLS personally, but even a quick overview of the RFC draft shows that you are breaking the protocol.
It's not about sending the video chunks all together in a single endless response, but about sending multiple HTTP responses over the same socket connection using keep-alive.
The client requests new portions of data, providing protocol-specific EXT* tags, and the server should respond accordingly. At the very beginning the client asks for a playlist, and the server should answer with proper playlist data.
The communication protocol is complex enough, sorry.
I cannot just fix a couple of lines in your snippet to make it work.
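To illustrate the protocol shape described above: the client first fetches a media playlist listing the individual segments, then requests each segment in its own HTTP exchange. A sketch of generating such a playlist (segment names and durations are made up):

```python
# Sketch: building an HLS media playlist. The client fetches this text
# first, then requests each listed .ts segment in a separate HTTP
# request. Segment URIs and durations here are illustrative.

def media_playlist(segments, target_duration=10):
    """segments: list of (uri, duration_seconds) tuples."""
    lines = [
        '#EXTM3U',
        '#EXT-X-VERSION:3',
        '#EXT-X-TARGETDURATION:{}'.format(target_duration),
        '#EXT-X-MEDIA-SEQUENCE:0',
    ]
    for uri, duration in segments:
        lines.append('#EXTINF:{:.3f},'.format(duration))
        lines.append(uri)
    # ENDLIST marks a VOD (non-live) playlist; omit it for live streams
    lines.append('#EXT-X-ENDLIST')
    return '\n'.join(lines) + '\n'
```

So rather than streaming the whole recording through one endless StreamResponse, the server would expose one route returning this playlist and another returning individual segment files.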
I'm creating a mobile application with embedded Python 2.7, using the Marmalade C++ SDK.
I'm integrating connectivity to cloud file transfer services.
FTP: file transfers work flawlessly
Dropbox: authenticates, then gives me: socket [Errno 22] Invalid argument
Google Drive: authenticates and lists metadata, but file transfers elicit some strange behavior
I've made all the bindings to the Marmalade socket subsystem (Unix-like), but some features are unimplemented. To connect to Google Drive, I initially modified httplib2/__init__.py, changing all instances of:
self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# to this:
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
After this little patch I could successfully connect and download metadata from Google Drive. However:
When I upload a 7 KB file, the file appears on Google Drive, but has a file size of 0 bytes.
When I download a 7 KB file using urllib, I get a 54 KB file back.
I know it must have to do with a misconfiguration of the socket properties, but not all properties are implemented.
Here are some standard Python test outputs (test_sockets, test_httplib).
Implementation here:
Marmalade /h/std/netdb.h
Are there any socket options I should try as a viable replacement? I don't have a clue.
From the Unix man page setsockopt(2):
SO_DEBUG enables recording of debugging information
SO_REUSEADDR enables local address reuse
SO_REUSEPORT enables duplicate address and port bindings
SO_KEEPALIVE enables keep connections alive
SO_DONTROUTE enables routing bypass for outgoing messages
SO_LINGER linger on close if data present
SO_BROADCAST enables permission to transmit broadcast messages
SO_OOBINLINE enables reception of out-of-band data in band
SO_SNDBUF set buffer size for output
SO_RCVBUF set buffer size for input
SO_SNDLOWAT set minimum count for output
SO_RCVLOWAT set minimum count for input
SO_SNDTIMEO set timeout value for output
SO_RCVTIMEO set timeout value for input
SO_ACCEPTFILTER set accept filter on listening socket
SO_TYPE get the type of the socket (get only)
SO_ERROR get and clear error on the socket (get only)
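For what it's worth, several of these options can be exercised from Python directly; a small sketch (the 5-second timeout is an arbitrary example, and SO_RCVTIMEO takes a platform-specific struct timeval, here packed for a 64-bit Unix):

```python
# Sketch: setting and querying some of the options listed above.
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_KEEPALIVE: enable keep-alive probes on the connection
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# SO_RCVTIMEO: 5-second receive timeout as a struct timeval
# (seconds, microseconds); the 'll' layout assumes a 64-bit Unix
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO, struct.pack('ll', 5, 0))

# SO_TYPE is get-only: confirm this is a stream (TCP) socket
assert s.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE) == socket.SOCK_STREAM
s.close()
```

On a partially implemented socket layer like Marmalade's, each setsockopt call can be tried in isolation this way to see which options raise errors.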
Here is my Google upload / download / listing source code.
I'll brute-force this until the problem is resolved, hopefully. I'll report back if I figure it out.
I figured it out. It was two problems with my file handling code.
In uploading:
database_file = drive.CreateFile()
database_file['title'] = packageName
# the file content needs to be set explicitly
database_file.SetContentFile(packageName)
database_file['parents'] = [{"kind": "drive#fileLink", 'id': str(cloudfolderid)}]
In downloading, I was using the wrong URL (webContentLink is for browsers only; use "downloadUrl"). I also needed to craft a header to authorize the download:
import urllib2
import json

url = 'https://doc-14-5g-docs.googleusercontent.com/docs/securesc/n4vedqgda15lkaommio7l899vgqu4k84/ugncmscf57d4r6f64b78or1g6b71168t/1409342400000/13487736009921291757/13487736009921291757/0B1umnT9WASfHUHpUaWVkc0xhNzA?h=16653014193614665626&e=download&gd=true'

# Parse saved credentials
credentialsFile = open('./configs/gcreds.dat', 'r')
rawJson = credentialsFile.read()
credentialsFile.close()
values = json.loads(rawJson)

# Header must include: {"Authorization": "Bearer xxxxxxx SomeToken xxxxx"}
ConstructedHeader = "Bearer " + str(values["token_response"]["access_token"])
Header = {"Authorization": ConstructedHeader}

req = urllib2.Request(url, headers=Header)
response = urllib2.urlopen(req)

output = open("UploadTest.z.crypt", 'wb')
output.write(response.read())
output.close()
I need a tool which can download part of the data from a web server, and after that I want the connection to stay open. Therefore, I thought about a script in Python which can:
1) send a request
2) read part of the response
3) freeze - the server should think the connection still exists, and should not close it
Is it possible to do this in Python? Here is my code:
conn = HTTPConnection("myhost", 10000)
conn.request("GET", "/?file=myfile")
r1 = conn.getresponse()
print r1.status, r1.reason
data = r1.read(2000000)
print len(data)
When I'm running it, all the data is received, and after that the server closes the connection.
Thanks in advance for any help.
httplib doesn't support that. Use another library, like httplib2. Here's an example.
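As a sketch of the underlying idea with a raw socket (rather than httplib2): read only as many bytes as you want, and simply don't close the socket; the server sees an established TCP connection until you do. The host, port, path, and byte count below are placeholders:

```python
# Sketch: read only part of an HTTP response and keep the socket open.
import socket

def partial_get(host, port, path, nbytes):
    s = socket.create_connection((host, port))
    s.sendall('GET {} HTTP/1.1\r\nHost: {}\r\n\r\n'.format(path, host).encode())
    # read roughly nbytes of the response (headers plus the start of
    # the body), then stop reading
    data = b''
    while len(data) < nbytes:
        chunk = s.recv(min(4096, nbytes - len(data)))
        if not chunk:
            break
        data += chunk
    # deliberately do NOT call s.close(): as long as the socket object
    # is referenced and the process lives, the connection stays open
    return s, data
```

The returned socket can be kept around (or read from later); calling s.close(), or letting the object be garbage-collected, is what ends the connection from the client side.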