My JSON save and load function is not working - python

I am writing a simple function to save a Twitter search as JSON, and then load the results. The save function seems to work, but the load one doesn't. The error I receive is:
"UnsupportedOperation: not readable"
Can you please advise what the issue might be in my script?
import io
import json

def save_json(filename, data):
    with open('tweet2.json', 'w', encoding='utf8') as file:
        json.dump(data, file, ensure_ascii=False)

def load_json(filename):
    with open('tweet2.json', 'w', encoding='utf8') as file:
        return json.load(file)

# sample usage
q = 'Test'
results = twitter_search(twitter_api, q, max_results=10)
save_json(q, results)
results = load_json(q)
print(json.dumps(results, indent=1, ensure_ascii=False))

Using "w" you won't be able to read the file so you need to use "r" (Opens a file for reading only.)
open("tweet2.json","r")

Related

How can I edit my code to print out the content of my created json file?

My program takes a csv file as input and writes it as an output file in json format. On the final line, I use the print command to output the contents of the json format file to the screen. However, it does not print out the json file contents and I don't understand why.
Here is my code that I have so far:
import csv
import json

def jsonformat(infile, outfile):
    contents = {}
    csvfile = open(infile, 'r')
    reader = csvfile.read()
    for m in reader:
        key = m['No']
        contents[key] = m
    jsonfile = open(outfile, 'w')
    jsonfile.write(json.dumps(contents))
    csvfile.close()
    jsonfile.close()
    return jsonfile

infile = 'orders.csv'
outfile = 'orders.json'
output = jsonformat(infile, outfile)
print(output)
Your function returns the jsonfile variable, which is a file object, not the file's contents.
Try adding this:
jsonfile.close()
with open(outfile, 'r') as file:
    return file.read()
Your function returns a file handle to jsonfile, which you then print. Instead, return the contents you wrote to that file. Since you opened the file in w mode, any previous contents are removed before writing, so the file's contents are exactly whatever you just wrote.
In your function, do:
def jsonformat(infile, outfile):
    ...
    # Instead of this:
    # jsonfile.write(json.dumps(contents))
    # do this:
    json_contents = json.dumps(contents, indent=4)  # indent=4 to pretty-print
    jsonfile.write(json_contents)
    ...
    return json_contents
Aside from that, you aren't reading the CSV file correctly. If your file has a header, you can use csv.DictReader to read each row as a dictionary. Then you'll be able to use for m in reader: key = m['No']. Change reader = csvfile.read() to reader = csv.DictReader(csvfile).
As it stands, reader is a string containing the entire contents of your file, so for m in reader makes m each character in that string, and you cannot access the "No" key on a character.
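Putting both fixes together, a sketch of the corrected function could look like this (assuming orders.csv has a header row with a 'No' column):
import csv
import json

def jsonformat(infile, outfile):
    contents = {}
    with open(infile, 'r', newline='') as csvfile:
        reader = csv.DictReader(csvfile)   # each row becomes a dict keyed by the header
        for m in reader:
            key = m['No']
            contents[key] = m
    json_contents = json.dumps(contents, indent=4)
    with open(outfile, 'w') as jsonfile:
        jsonfile.write(json_contents)
    return json_contents                   # return the JSON text, not the file object

print(jsonformat('orders.csv', 'orders.json'))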
a_file = open("sample.json", "r")
a_json = json.load(a_file)
pretty_json = json.dumps(a_json, indent=4)
a_file.close()
print(pretty_json)
Use this sample to print the contents of your JSON file. Have a good day.

How to write Arabic to a CSV file

I am trying to extract tweets with Python and store them in a CSV file, but I can't seem to include all languages. Arabic appears as special characters.
def recup_all_tweets(screen_name, api):
    all_tweets = []
    new_tweets = api.user_timeline(screen_name, count=300)
    all_tweets.extend(new_tweets)
    # outtweets = [[tweet.id_str, tweet.created_at, tweet.text, tweet.retweet_count, get_hashtagslist(tweet.text)] for tweet in all_tweets]
    outtweets = [[tweet.text, tweet.entities['hashtags']] for tweet in all_tweets]
    # with open('recup_all_tweets.json', 'w', encoding='utf-8') as f:
    #     f.write(json.dumps(outtweets, indent=4, sort_keys=True))
    with open('recup_all_tweets.csv', 'w', encoding='utf-8') as f:
        writer = csv.writer(f, delimiter=',')
        writer.writerow(["text", "tag"])
        writer.writerows(outtweets)
    # pass
    return outtweets
Example of writing both CSV and JSON:
#coding:utf8
import csv
import json

s = ['عربى','عربى','عربى']

with open('output.csv', 'w', encoding='utf-8-sig', newline='') as f:
    r = csv.writer(f)
    r.writerow(['header1','header2','header3'])
    r.writerow(s)

with open('output.json', 'w', encoding='utf8') as f:
    json.dump(s, f, ensure_ascii=False)
output.csv:
header1,header2,header3
عربى,عربى,عربى
output.csv viewed in Excel: (screenshot not shown)
output.json:
["عربى", "عربى", "عربى"]
Note: Microsoft Excel needs utf-8-sig to read a UTF-8 file properly. Other applications may or may not need it to display the text properly. Many Windows applications require a UTF-8 "BOM" signature at the start of a text file, or they will assume an ANSI encoding instead. The ANSI encoding varies depending on the localized version of Windows used.
Maybe try with:
f.write(json.dumps(outtweets, indent=4, sort_keys=True, ensure_ascii=False))
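Applied to the commented-out JSON block in the question, that suggestion would look roughly like this (outtweets comes from the asker's function):
with open('recup_all_tweets.json', 'w', encoding='utf-8') as f:
    f.write(json.dumps(outtweets, indent=4, sort_keys=True, ensure_ascii=False))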
I searched a lot and finally wrote the following piece of code:
import arabic_reshaper
from bidi.algorithm import get_display
import numpy as np
import pandas as pd
from time import sleep
from selenium.webdriver.common.by import By

# "webdriver" below is an already-created Selenium driver instance
itemsX = webdriver.find_elements(By.CLASS_NAME, "x1i10hfl")
item_linksX = [itemX.get_attribute("href") for itemX in itemsX]
item_linksX = filter(lambda k: '/p/' in k, item_linksX)
counter = 0
for item_linkX in item_linksX:
    AllComments2 = []
    counter = counter + 1
    webdriver.get(item_linkX)
    print(item_linkX)
    sleep(11)
    comments = webdriver.find_elements(By.CLASS_NAME, "_aacl")
    for comment in comments:
        try:
            reshaped_text = arabic_reshaper.reshape(comment.text)
            bidi_text = get_display(reshaped_text)
            AllComments2.append(reshaped_text)
        except:
            pass
    df = pd.DataFrame({'col': AllComments2})
    df.to_csv('C:\\Crawler\\Comments' + str(counter) + '.csv', sep='\t', encoding='utf-16')
This code worked perfectly for me. I hope it helps those who couldn't get the code from the previous posts to work.

Writing to txt file in UTF-8 - Python

My Django application gets a document from the user, creates a report about it, and writes it to a txt file. The interesting problem is that everything works very well on my macOS. But on Windows, it cannot read some letters and converts them to symbols like é™, ä±. Here is my code:
views.py:
def result(request):
    last_uploaded = OriginalDocument.objects.latest('id')
    original = open(str(last_uploaded.document), 'r')
    original_words = original.read().lower().split()
    words_count = len(original_words)
    open_original = open(str(last_uploaded.document), "r")
    read_original = open_original.read()
    characters_count = len(read_original)
    report_fives = open("static/report_documents/" + str(last_uploaded.student_name) +
                        "-" + str(last_uploaded.document_title) + "-5.txt", 'w', encoding="utf-8")
    # Path to the documents with which original doc is comparing
    path = 'static/other_documents/doc*.txt'
    files = glob.glob(path)
    #endregion
    rows, found_count, fives_count, rounded_percentage_five, percentage_for_chart_five, fives_for_report, founded_docs_for_report = search_by_five(last_uploaded, 5, original_words, report_fives, files)
    context = {
        ...
    }
    return render(request, 'result.html', context)
report txt file:
['universitetindé™', 'té™hsili', 'alä±ram.', 'mé™n'] was found in static/other_documents\doc1.txt.
...
The issue here is that you're calling open() on a file without specifying the encoding. As noted in the Python documentation, the default encoding is platform dependent. That's probably why you're seeing different results in Windows and MacOS.
Assuming that the file itself was actually encoded in UTF-8, just specify that when reading the file:
original = open(str(last_uploaded.document), 'r', encoding="utf-8")
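For example, the start of the view could read the uploaded document once with an explicit encoding (a sketch based on the asker's code, not the full view):
def result(request):
    last_uploaded = OriginalDocument.objects.latest('id')
    # open with an explicit encoding so Windows and macOS behave the same
    with open(str(last_uploaded.document), 'r', encoding="utf-8") as original:
        read_original = original.read()
    original_words = read_original.lower().split()
    words_count = len(original_words)
    characters_count = len(read_original)
    ...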

Saving and Retrieving Python object attribute values to a file

I require 2 things to be done.
First, take the request object and save the object's attribute values to a file as values of some known keys. This file needs to be editable after saving, i.e., a user can modify the values of the keys (so I used JSON format). This is handled in the function save_auth_params_to_file().
Second, get the file contents in such a format that I can retrieve the values using the keys. This is handled in the function get_auth_params_from_file().
import json
import os

SUCCESS_AUTH_PARAM_FILE = '/auth/success_auth_params.json'

def save_auth_params_to_file(request):
    auth_params = {}
    if request is not None:
        auth_params['token'] = request.token
        auth_params['auth_url'] = request.auth_url
        auth_params['server_cert'] = request.server_cert
        auth_params['local_key'] = request.local_key
        auth_params['local_cert'] = request.local_cert
        auth_params['timeout'] = request.timeout_secs
    with open(SUCCESS_AUTH_PARAM_FILE, 'w') as fout:
        json.dump(auth_params, fout, indent=4)

def get_auth_params_from_file():
    auth_params = {}
    if os.path.exists(SUCCESS_AUTH_PARAM_FILE):
        with open(SUCCESS_AUTH_PARAM_FILE, "r") as fin:
            auth_params = json.load(fin)
    return auth_params
Questions:
Is there a more Pythonic way to achieve these 2 things?
Are there any potential issues in the code which I have overlooked?
Are there any error conditions I have to take care of?
There are some things to be noted, yes:
i) When your request is None for some reason, you are saving an empty JSON object to your file. Maybe you'll want to write to your file only if request is not None?
auth_params = {}
if request is not None:
    auth_params['token'] = request.token
    auth_params['auth_url'] = request.auth_url
    auth_params['server_cert'] = request.server_cert
    auth_params['local_key'] = request.local_key
    auth_params['local_cert'] = request.local_cert
    auth_params['timeout'] = request.timeout_secs
    with open(SUCCESS_AUTH_PARAM_FILE, 'w') as fout:
        json.dump(auth_params, fout, indent=4)
ii) Why not create the dict all at once?
auth_params = {
    'token': request.token,
    'auth_url': request.auth_url,
    'server_cert': request.server_cert,
    'local_key': request.local_key,
    'local_cert': request.local_cert,
    'timeout': request.timeout_secs,
}
iii) Make sure this file is in a SAFE location with SAFE permissions. This is sensitive data, like anything related to authentication.
iv) You are overwriting your file every time save_auth_params_to_file is called. Maybe you mean to append your JSON to the file instead of overwriting it? If that's the case:
with open(SUCCESS_AUTH_PARAM_FILE, 'a') as fout:
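Combining points i) and ii), a sketch of the revised save function might look like this (keeping the overwrite behaviour, since appending several JSON objects to one file would no longer be a single valid JSON document):
def save_auth_params_to_file(request):
    if request is None:
        return  # nothing to save; avoids writing an empty JSON object
    auth_params = {
        'token': request.token,
        'auth_url': request.auth_url,
        'server_cert': request.server_cert,
        'local_key': request.local_key,
        'local_cert': request.local_cert,
        'timeout': request.timeout_secs,
    }
    with open(SUCCESS_AUTH_PARAM_FILE, 'w') as fout:
        json.dump(auth_params, fout, indent=4)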

Save file without first and last double quotes

I am trying to save my data to a file. My problem is that the file I saved contains double quotes at the beginning and the end of each line. I have tried many ways to solve it, from str.replace() and strip to csv, json, and pickle. However, the problem is still persistent. I am stuck with it. Please help me. I will detail my problem below.
Firstly, I have a file called angles.txt like that:
{'left_w0': -2.6978887076110842, 'left_w1': -1.3257428944152834, 'left_w2': -1.7533400385498048, 'left_e0': 0.03566505327758789, 'left_e1': 0.6948932961181641, 'left_s0': -1.1665923878540039, 'left_s1': -0.6726505747192383}
{'left_w0': -2.6967382220214846, 'left_w1': -0.8440729275695802, 'left_w2': -1.7541070289428713, 'left_e0': 0.036048548474121096, 'left_e1': 0.16682041049194338, 'left_s0': -0.7731263162109375, 'left_s1': -0.7056311616210938}
I read the text file line by line and transfer it into a dict variable called data. Here is the file-reading code:
def read_data_from_file(file_name):
    data = dict()
    f = open(file_name, 'r')
    for index_line in range(1, number_lines + 1):
        data[index_line] = eval(f.readline())
    f.close()
    return data
Then I changed something in the data. Something like data[index_line]['left_w0'] = data[index_line]['left_w0'] + 0.0006. After that I wrote my data into another text file. Here is the code:
def write_data_to_file(data, file_name):
    f = open(file_name, 'wb')
    data_convert = dict()
    for index_line in range(1, number_lines):
        data_convert[index_line] = repr(data[index_line])
        data_convert[index_line] = data_convert[index_line].replace('"', '')  # I also used strip
        json.dump(data_convert[index_line], f)
        f.write('\n')
    f.close()
The result I received in the new file is:
"{'left_w0': -2.6978887076110842, 'left_w1': -1.3257428944152834, 'left_w2': -1.7533400385498048, 'left_e0': 0.03566505327758789, 'left_e1': 0.6948932961 181641, 'left_s0': -1.1665923878540039, 'left_s1': -0.6726505747192383}"
"{'left_w0': -2.6967382220214846, 'left_w1': -0.8440729275695802, 'left_w2': -1.7541070289428713, 'left_e0': 0.036048548474121096, 'left_e1': 0.166820410 49194338, 'left_s0': -0.7731263162109375, 'left_s1': -0.7056311616210938}"
I cannot remove "".
You could simplify your code by removing unnecessary transformations:
import json

def write_data_to_file(data, filename):
    with open(filename, 'w') as file:
        json.dump(data, file)

def read_data_from_file(filename):
    with open(filename) as file:
        return json.load(file)
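A quick usage sketch (the filename and the sample dict here are just illustrative; note that JSON converts the integer line-number keys to strings when the file is loaded back):
data = {1: {'left_w0': -2.6978887076110842, 'left_s1': -0.6726505747192383}}
data[1]['left_w0'] = data[1]['left_w0'] + 0.0006   # tweak a value, as in the question
write_data_to_file(data, 'angles_new.json')
loaded = read_data_from_file('angles_new.json')
print(loaded['1']['left_w0'])                      # integer keys come back as strings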
