How to make each tweet on its own line? - python

I want each tweet to be on its own line.
Currently, the output breaks at each response (I have response_1 through response_10; only response_1 is shown below).
Any ideas?
#!/usr/bin/env python
import urllib
import json
response_1 = urllib.urlopen("http://search.twitter.com/search.json?q=microsoft&page=1")
for i in response_1:
    print (i, "\n")

You have to parse the JSON into a Python object first; only then can you iterate over the results.
#!/usr/bin/env python
import urllib
import json
response_1 = json.loads(urllib.urlopen("http://search.twitter.com/search.json?q=microsoft&page=1").read())
for i in response_1['results']:
    print (i, "\n")
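If you only want the tweet text itself on each line, rather than the whole result dictionary, a minimal sketch along the same lines should work; the 'text' key is assumed from the old search API's response format:
#!/usr/bin/env python
import urllib
import json

# Fetch one page of results and parse the JSON body into a dict
response_1 = json.loads(urllib.urlopen("http://search.twitter.com/search.json?q=microsoft&page=1").read())

# Each result is a dict; print only its text field, one tweet per line
for tweet in response_1['results']:
    print(tweet['text'])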

Related

Putting Json into python variables using query (url request)

I'm attempting to use this Python 2 code snippet from the Weather Underground API page in Python 3.
import urllib2
import json
f = urllib2.urlopen('http://api.wunderground.com/api/apikey/geolookup/conditions/q/IA/Cedar_Rapids.json')
json_string = f.read()
parsed_json = json.loads(json_string)
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print "Current temperature in %s is: %s" % (location, temp_f)
f.close()
I've used 2to3 to convert it over, but I'm still having some issues. The main conversion here is switching from the old urllib2 to the new urllib. I've also tried using the requests library, to no avail.
Using urllib from Python 3, this is the code I have come up with:
import urllib.request
import urllib.error
import urllib.parse
import codecs
import json
url = 'http://api.wunderground.com/api/apikey/forecast/conditions/q/C$
response = urllib.request.urlopen(url)
#Decoding on the two lines below this
reader = codecs.getreader("utf-8")
obj = json.load(reader(response))
json_string = obj.read()
parsed_json = json.loads(json_string)
currentTemp = parsed_json['current_observation']['temp_f']
todayTempLow = parsed_json['forecast']['simpleforecast']['forecastday']['low'][$
todayTempHigh = parsed_json['forecast']['simpleforecast']['forecastday']['high'$
todayPop = parsed_json['forecast']['simpleforecast']['forecastday']['pop']
Yet I'm getting an error about it being the wrong object type (bytes instead of str).
The closest thing I could find to the solution is this question here.
Let me know if any additional information is needed to help me find a solution!
Here's a link to the WU API website if that helps.
urllib's urlopen().read() returns bytes. You convert it to a string using
json_string.decode('utf-8')
Your Python 2 code would convert to:
from urllib import request
import json
f = request.urlopen('http://api.wunderground.com/api/apikey/geolookup/conditions/q/IA/Cedar_Rapids.json')
json_string = f.read()
parsed_json = json.loads(json_string.decode('utf-8'))
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print ("Current temperature in %s is: %s" % (location, temp_f))
f.close()
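For what it's worth, the codecs approach from the question also works once the extra obj.read()/json.loads round trip is dropped, since json.load already returns the parsed dict. A minimal sketch, keeping the placeholder apikey URL from the question:
import urllib.request
import codecs
import json

url = 'http://api.wunderground.com/api/apikey/geolookup/conditions/q/IA/Cedar_Rapids.json'
response = urllib.request.urlopen(url)

# Wrap the byte stream in a UTF-8 text reader and parse it directly;
# json.load returns the parsed dict, so no further read()/loads is needed
reader = codecs.getreader("utf-8")
parsed_json = json.load(reader(response))

location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print("Current temperature in %s is: %s" % (location, temp_f))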

What is wrong with my code urlopen function?

Below is my Python code to check for curse words in a file.
But I am unable to figure out why it raises the error: module 'urllib' has no attribute 'urlopen'.
import urllib
def read_txt():
    quote = open("c:\\read.txt")  # for opening the file
    content = quote.read()  # for reading content in a file
    print(content)
    quote.close()  # for closing the file
    check_profanity(content)

def check_profanity(text):
    connection = urllib.urlopen("https://www.wdylike.appspot.com/?q=" + text)
    output = connection.read()
    print(output)
    connection.close()

read_txt()
In Python 3, urllib is now a package that collects multiple modules. urlopen() is now part of the urllib.request module:
from urllib.request import urlopen
Then using it:
connection = urlopen("https://www.wdylike.appspot.com/?q=" + text)
Well, because urllib (in Python 3) does not have a urlopen function.
In Python 2 you should use urllib2, while in Python 3 you should use urllib.request.
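Putting the two answers together, a minimal Python 3 sketch of check_profanity; the urllib.parse.quote call is an addition not taken from the answers, but the query text generally needs to be URL-encoded before being appended to the URL:
from urllib.request import urlopen
from urllib.parse import quote

def check_profanity(text):
    # Percent-encode the text so spaces and special characters survive in the query string
    connection = urlopen("https://www.wdylike.appspot.com/?q=" + quote(text))
    output = connection.read()
    print(output)
    connection.close()

check_profanity("some text to check")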

Json, urlib2 and pprint

I have the following exercise:
Use the json module. First use urllib2 to download this file, then load the json as a python object and use pprint to make it look good when written to the terminal.
Up until now I've only worked with standard Python things (such as the Codecademy course and things like lists).
What I understand is that I have to import urllib2, and apparently import json in some other way, and use pprint...?
This is what I have done, but I'm not sure if I got it right...
import urllib2
response = urllib2.urlopen('https://dl.dropboxusercontent.com/u/153071/test.json')
html = response.read()
import json
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(c) #Just printing a list from earlier in the file, not sure what to print...
You don't need to import pprint. You can specify indentation using the json module itself:
import urllib2
import json
response = urllib2.urlopen('https://dl.dropboxusercontent.com/u/153071/test.json')
content_dict = json.loads(response.read())
print json.dumps(content_dict, indent=4)
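That said, since the exercise explicitly asks for pprint, an equivalent sketch that uses pprint.pprint instead of json.dumps would look like this (Python 2, to match the urllib2 code above):
import urllib2
import json
import pprint

response = urllib2.urlopen('https://dl.dropboxusercontent.com/u/153071/test.json')
parsed = json.loads(response.read())

# pprint formats nested dicts/lists for readable terminal output
pprint.pprint(parsed, indent=4)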

Python urllib-html parse

A question about parsing a website.
My code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import os
import urllib2
import re
# Parse Web
from lxml import html
import requests
def parse():
    try:
        output = open('proba.xml', 'w')
        page = requests.get('http://www.rts.rs/page/tv/sr/broadcast/22/RTS+1.html')
        tree = html.fromstring(page.text)
        parse = tree.xpath('//div[@class="ProgramTime"]/text()|//div[@class="ProgramName"]/text()|//a[@class="recnik"]/text()')
        for line in parse:
            clean = line.strip()
            if clean:
                print clean
    except:
        pass

parse()
My question is how I can write this result to a file. When I try with this:
print >> output, line
I get only the first 6 lines in the file.
With this code:
output.write(line)
the same thing happens, so can you help me with this issue?
What I want is to output the parsed content.
I am having trouble replicating the problem. Here is what I did...
import sys
import os
import urllib2
import re
from lxml import html
import requests
def parse():
    output = open('proba.xml', 'w')
    page = requests.get('http://www.rts.rs/page/tv/sr/broadcast/22/RTS+1.html')
    tree = html.fromstring(page.text)
    p = tree.xpath('//div[@class="ProgramTime"]/text()|//div[@class="ProgramName"]/text()|//a[@class="recnik"]/text()')
    for line in p:
        clean = line.strip()
        if clean:
            output.write(line.encode('utf-8') + '\n')  # the \n adds a line break
    output.close()

parse()
I think you are getting a unicode-related error when writing to the file, but because you put everything in a try block and let the error pass silently, you aren't getting any feedback!
Try typing import this in a terminal. You will get the Zen of Python. One aphorism is "Errors should never pass silently."
Try this instead:
with file('proba.xml', 'w') as f:
    f.writelines([line.strip() for line in parse])
Put this in place of for line in parse: clean = * and remove the declaration output = * above; there is no need for output.write any more. Sorry if I am not clearer, I am typing this on a mobile phone.
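As an alternative sketch (not taken from either answer), you could open the file with an explicit encoding instead of calling encode on every line; this keeps the question's URL and XPath and drops the silent except so errors stay visible:
import io

import requests
from lxml import html

def parse():
    page = requests.get('http://www.rts.rs/page/tv/sr/broadcast/22/RTS+1.html')
    tree = html.fromstring(page.text)
    lines = tree.xpath('//div[@class="ProgramTime"]/text()|//div[@class="ProgramName"]/text()|//a[@class="recnik"]/text()')

    # Opening the file with an explicit encoding avoids UnicodeEncodeError
    # for non-ASCII programme names, without encoding each line by hand
    with io.open('proba.xml', 'w', encoding='utf-8') as output:
        for line in lines:
            clean = line.strip()
            if clean:
                output.write(clean + u'\n')

parse()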

Copy text from website to text/excel file

I'm trying to create a simple (hopefully) Python script that copies the text from this address:
http://api.bitcoincharts.com/v1/trades.csv?symbol=mtgoxUSD
to either a simple text file or an excel spreadsheet.
I've tried using the urllib and requests libraries, but every time I try to run a very basic script, the shell doesn't display anything.
For example,
import requests
data = requests.get('http://api.bitcoincharts.com/v1/trades.csv?symbol=mtgoxUSD')
data.text
Any help would be appreciated. Thank you.
You're almost done:
import requests
symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
data = requests.get(url)
# dump resulting text to file
with open("trades_{}.csv".format(symbol), "w") as out_f:
    out_f.write(data.text)
Using urllib:
import urllib
f = urllib.urlopen("http://api.bitcoincharts.com/v1/trades.csv?symbol=mtgoxUSD")
print f.read()
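For the spreadsheet half of the question, the downloaded text is plain CSV, so the standard csv module can parse it into rows before you save them anywhere. A small sketch building on the requests answer above; the column order in the comment is an assumption about the API's output, not something stated in the question:
import csv

import requests

symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
data = requests.get(url)

# Parse the CSV text into rows; each row is a list of string fields
# (assumed order: unix timestamp, price, amount)
rows = list(csv.reader(data.text.splitlines()))
print(rows[:5])  # first few trades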
