Python: json -> text, how to only write unique values?

I have a json file, from which I'm extracting quotes. It's the file from Kaggle (formatted exactly the same way).
My goal is to extract all the quotes (just the quotes, not the authors or other metadata) into a simple text document. The first 5 lines would be:
# Don't cry because it's over, smile because it happened.
# I'm selfish, impatient and a little insecure. I make mistakes, I am out of control and at times hard to handle. But if you can't handle me at my worst, then you sure as hell don't deserve me at my best.
# Be yourself; everyone else is already taken.
# Two things are infinite: the universe and human stupidity; and I'm not sure about the universe.
# Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind.
The challenge is that some quotes repeat and I only want to write each quote once. What's a good way to only write down unique values into a text doc?
The best I came up with was this:
import json

with open('quotes.json', 'r') as json_f:
    data = json.load(json_f)

quote_list = []
with open('quotes.txt', 'w') as text_f:
    for quote_object in data:
        quote = quote_object['Quote']
        if quote not in quote_list:
            text_f.write(f'{quote}\n')
            quote_list.append(quote)
But it feels grossly inefficient to have to create and maintain a separate list with 40,000 values.
I tried reading the file back on each iteration of the write loop, but somehow read() always comes back empty:
with open('quotes.json', 'r') as json_f:
    data = json.load(json_f)

with open('quotes.txt', 'w+') as text_f:
    for quote_object in data:
        quote = quote_object['Quote']
        print(text_f.read())  # prints nothing?
        # if it can't read the doc, I can't check if quote already there
        text_f.write(f'{quote}\n')
Would love to understand why text_f.read() comes back empty, and what's a more elegant solution.

You can use a set:
import json

with open('quotes.json', 'r') as json_f:
    data = json.load(json_f)

quotes = set()
with open('quotes.txt', 'w') as text_f:
    for quote_object in data:
        quote = quote_object['Quote']
        if quote not in quotes:  # membership tests on a set are O(1) on average
            text_f.write(f'{quote}\n')
            quotes.add(quote)
Adding the same quote to the set multiple times would have no effect (only a single object is kept), and unlike your list, the membership test does not get slower as the collection grows!
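As for why text_f.read() came back empty in your second attempt: opening a file in 'w+' mode truncates it, and every write() advances the file position, so by the time you call read() the position sits at the end of the file and there is nothing left in front of it. You would have to rewind first:
text_f.seek(0)        # move the position back to the start of the file
print(text_f.read())  # now returns everything written so far
Note that read() leaves the position at the end again, so subsequent writes keep appending. That said, re-reading the whole file on every iteration would be far slower than the set lookup above, so the set is the better approach anyway.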

Related

How to parse a complex text file using Python string methods or regex and export into tabular form

As the title mentions, my issue is that I don't quite understand how to extract the data I need for my table (the columns I need are Date, Time, Courtroom, File Number, Defendant Name, Attorney, Bond, Charge, etc.).
I think regex is what I need, but my class did not cover it, so I am confused about how to parse the file in order to extract and output the correct data as an organized table...
I am supposed to turn my text file from this
https://pastebin.com/ZM8EPu0p
into a more readable, organized table.
Here is what I have so far.
def readFile(court):
    csv_rows = []
    # read and split txt file into pages & chunks of data by paragraph
    with open(court, "r") as file:
        data_chunks = file.read().split("\n\n")
        for chunk in data_chunks:
            chunk = chunk.strip  # .strip removes useless spaces
            if str(data_chunks[:4]).isnumeric():  # if first 4 characters are digits
                entry = None  # initialize an empty dictionary
            elif (
                str(data_chunks).isspace() and entry
            ):  # if we're on an empty line and the entry dict is not empty
                csv_rows.DictWriter(dialect="excel")  # turn csv_rows into needed output
                entry = {}
            else:
                # parse here?
                print(data_chunks)
    return csv_rows

readFile("/Users/mia/Desktop/School/programming/court.txt")
It is quite a lot of work to achieve that, but it is possible if you split it into a couple of sub-tasks.
First, your input looks like a text file, so you could parse it line by line -- using https://www.w3schools.com/python/ref_file_readlines.asp
Then, I noticed that your data can be split into pages. You would need to prepare a lot of regular expressions, but you can start with one for identifying where each page starts -- you may want to read this, as your expression might get quite complicated: https://www.w3schools.com/python/python_regex.asp
The goal of this step is to collect all lines from a page in some container (might be a list, a dict, whatever you find suitable).
Afterwards, write some code that parses the information page by page. For simplicity I suggest starting with something easy, like the columns for "no, file number and defendant".
And once you get that data in a reliable manner, you can address the export part using pandas: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html
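A minimal sketch of that pipeline, with placeholder patterns (the real ones depend on the exact layout of your pastebin file):
import re
import pandas as pd

# Hypothetical patterns -- adjust them to the actual layout of court.txt.
PAGE_START = re.compile(r'^\s*PAGE\s+\d+')  # marks the top of a page
ROW = re.compile(r'^(?P<no>\d+)\s+(?P<file_number>\S+)\s+(?P<defendant>.+)$')

def parse_court_file(path):
    rows = []
    with open(path) as f:
        for line in f:  # read line by line, as suggested above
            line = line.rstrip('\n')
            if PAGE_START.match(line):
                continue  # skip page headers
            m = ROW.match(line)
            if m:  # keep only lines that look like table rows
                rows.append(m.groupdict())
    return pd.DataFrame(rows)

df = parse_court_file('court.txt')
df.to_excel('court.xlsx', index=False)  # the export step
Once the easy columns work reliably, widen the ROW pattern with more named groups (Date, Time, Courtroom, ...) one at a time.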

How to read "well" from a file in python

I have to read a file that always has the same format.
Since I know it has the same format, I can readline() and tokenize. But I guess there is a way to read it that is more, how to say it, "pretty to the eyes".
The file I have to read has this format :
Nom NMS-01
MAC AAAAAAAAAAA
UDPport 2019
TCPport 9129
I just want a different way to read it without having to tokenize, if that is possible.
Your question seems to imply that "tokenizing" is some kind of mysterious and complicated process. But in fact, the thing you are trying to do is exactly tokenizing.
Here is a perfectly valid way to read the file you show, break it up into tokens, and store it in a data structure:
def read_file_data(data_file_path):
    result = {}
    with open(data_file_path) as data_file:
        for line in data_file:
            key, value = line.split(' ', maxsplit=1)
            result[key] = value.rstrip('\n')  # drop the trailing newline from the value
    return result
That wasn't complicated, it wasn't a lot of code, it doesn't need a third-party library, and it's easy to work with:
data = read_file_data('path/to/file')
print(data['Nom']) # prints "NMS-01"
Now, this implementation makes many assumptions about the structure of the file. Among other things, it assumes:
The entire file is structured as key/value pairs
Each key/value pair fits on a single line
Every line in the file is a key/value pair (no comments or blank lines)
The key cannot contain space characters
The value cannot contain newline characters
The same key does not appear multiple times in the file (or, if it does, it is acceptable for the last value given to be the only one returned)
Some of these assumptions may be false, but they are all true for the data sample you provided.
More generally: if you want to parse some kind of structured data, you need to understand the structure of the data and how values are delimited from each other. That's why common structured data formats like XML, JSON, and YAML (among many others!) were invented. Once you know the language you are parsing, tokenization is simply the code you write to match up the language with the text of your input.
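To make that concrete: if the same settings were stored in a structured format such as JSON, the hand-written tokenizing collapses into a single library call. A sketch, assuming a hypothetical input.json holding the same keys:
import json

# input.json (hypothetical):
# {"Nom": "NMS-01", "MAC": "AAAAAAAAAAA", "UDPport": 2019, "TCPport": 9129}
with open('input.json') as f:
    data = json.load(f)

print(data['Nom'])  # NMS-01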
Pandas does many magical things, so maybe that is prettier for you?
import pandas as pd
pd.read_csv('input.txt', sep=' ', header=None, index_col=0)
This gives you a dataframe that you can manipulate further:
                   1
0
Nom           NMS-01
MAC      AAAAAAAAAAA
UDPport         2019
TCPport         9129
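If you then want the same key/value access as the function above provides, a short follow-up sketch (column 1 holds the values, the index holds the keys):
import pandas as pd

df = pd.read_csv('input.txt', sep=' ', header=None, index_col=0)
data = df[1].to_dict()  # {'Nom': 'NMS-01', 'MAC': 'AAAAAAAAAAA', ...}
print(data['Nom'])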

Storing multiple lines from a file to a variable using a delimiter

I am using Python to make a filter to search through thousands of text files for specific queries. These text files consist of several sections, and they do not all have consistent formatting. I want each of these sections to be checked for specific criteria, so in the section of the text file called "DESCRIPTION OF RECORD", I was doing something like this to store the string to a variable:
with open(some_file, 'r') as r:
    for line in r:
        if "DESCRIPTION OF RECORD" in line:
            record = line
Now this works pretty well for most files, but some files have a line break in the section, so it does not store the whole section to the variable. I was wondering how I could use a delimiter to control how many lines are stored to the variable. I would probably use the title of the next section, "CORRELATION", as the delimiter. Any ideas?
An example structure of the file could look like:
CLINICAL HISTORY: Some information.
MEDICATIONS: Other information
INTRODUCTION: Some more information.
DESCRIPTION OF THE RECORD: Some information here....
another line of information
IMPRESSION: More info
CLINICAL CORRELATION: The last bit of information
You could use the built-in re module for that:
import re

# I assume you have a list of all possible sections
sections = [
    'CLINICAL HISTORY',
    'MEDICATIONS',
    'INTRODUCTION',
    'DESCRIPTION OF THE RECORD',
    'IMPRESSION',
    'CLINICAL CORRELATION'
]
# Build a regexp that will match any of the section names
exp = '|'.join(sections)

with open(some_file, 'r') as r:
    contents_of_file = r.read()

infos = list(re.split(exp, contents_of_file))  # infos is a list of what's between the section names
infos = [info.strip('\n :') for info in infos]  # get rid of colons and whitespace in our infos
print(infos)  # you don't have to print it :)
If I use your example text instead of a file, it prints something like that:
['', 'Some information.', 'Other information', 'Some more information.', 'Some information here....\nanother line of information', 'More info', 'The last bit of information']
The first element is empty, but you can get rid of it simply by doing so:
infos = infos[1:]
By the way, if we merge the two lines that deal with infos into one, it would probably be cleaner, and would surely be more efficient (but maybe a little bit less understandable):
infos = [info.strip('\n :') for info in re.split(exp, contents_of_file)][1:]
If you do not know the sections you'll find, here's a version which seems to work, as long as the text is formatted as in your example:
import itertools

text = """
CLINICAL HISTORY: Some information.
MEDICATIONS: Other information
INTRODUCTION: Some more information.
DESCRIPTION OF THE RECORD: Some information here....
another line of information
IMPRESSION: More info
CLINICAL CORRELATION: The last bit of information
"""

def method_tuple(s):
    # sp holds strings which finish with the section names.
    sp = s.split(":")
    # The next line removes spurious "\n" at both ends of the strings in sp.
    # It then splits each of them once at "\n", starting from the end, effectively
    # separating the sections from the descriptions.
    # It builds a list of strings alternating section names and information.
    fragments = list(itertools.chain.from_iterable(p.strip("\n").rsplit("\n", 1) for p in sp))
    # You can now build a list of 2-tuples.
    pairs = [(fragments[i*2], fragments[i*2+1]) for i in range(len(fragments)//2)]
    # Or you could build a dict
    # pairs = {fragments[i*2]: fragments[i*2+1] for i in range(len(fragments)//2)}
    return pairs

print(method_tuple(text))
Timed against Ilya's regex version, the two are roughly equivalent, although building a dictionary seems to start winning over building a list of tuples or using regexps on the sample text, at 1 billion loops...
I found another possible solution for this using the indexes of the line. I first opened the check file, and stored its f.read() contents into a variable called info. I then did this:
with open(check_file, 'r') as r:
    for line in r:
        if "DESCRIPTION" in line:
            record_Index = info.index(line)
            record = info[info.index(line):]
            if "IMPRESSION" in record:
                impression_Index = info.index("IMPRESSION")
                record = info[record_Index:impression_Index]
This method worked as well, although I don't know how efficient it is memory- and speed-wise. Instead of using with open(...) multiple times, it might be better just to store it all in the variable called info and then do everything with that.
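For what it's worth, a sketch of that last idea -- reading the file once and slicing the info string directly; str.find is used so a missing section doesn't raise the way str.index would:
with open(check_file) as r:
    info = r.read()

start = info.find("DESCRIPTION")
if start != -1:
    end = info.find("IMPRESSION", start)
    # slice up to IMPRESSION if present, otherwise take the rest of the file
    record = info[start:end] if end != -1 else info[start:]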

Output py2neo recordlist to text file

I am trying to use python (v3.4) to act as a 'sandwich' between Neo4j and a text output file. This code gives me a py2neo RecordList:
from py2neo import Graph
from py2neo.packages.httpstream import http
http.socket_timeout = 9999
graph = Graph('http://localhost:7474/db/data/')
sCypher = 'MATCH (a) RETURN count(a)'
results = graph.cypher.execute(sCypher)
I also have some really simple text file interaction:
f = open('Output.txt', 'a')  # open for append access
f.write('\n Hello world')
f.close()
What I really want to do is f.write (str(results)) but it really didn't like that at all. Can someone help me to convert my RecordList into a string please? I'm assuming I'll need to loop through the columns to get each column name, then loop through each record and write it individually, but I don't know how to go about this. Where I'm eventually planning to go with this is to change the Cypher every time.
Closest related question I could find is this one: How to convert Neo4j return types to python types. I'm sure there's someone out there who'll read this and say that using the REST API directly is a much better way of spitting out text, but I'm not quite at that level...
Thanks in advance,
Andy
Here is how you can iterate a RecordList and print the columns of the individual Records to a file (e.g. comma separated). If the properties you return are lists you would need some more formatting to get strings for your output file.
# use with to open files, this makes sure that it's properly closed after an exception
with open('output.txt', 'a') as f:
    # iterate over individual Records in RecordList
    for record in results:
        # concatenate all columns of the Record into a string, comma separated
        # list comprehension with str() to handle int and other types
        output_string = ','.join([str(x) for x in record])
        # actually write to file
        print(output_string, file=f)
The format of the output file depends on what you want to do with it of course.
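If any of the returned properties can themselves contain commas (or are lists you have flattened to strings), the csv module handles the quoting for you. A small alternative sketch, assuming results is the RecordList from the question:
import csv

with open('output.txt', 'a', newline='') as f:
    writer = csv.writer(f)
    for record in results:
        # one row per Record; csv.writer quotes values containing commas
        writer.writerow([str(x) for x in record])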

How can I extract specific data from e-prime output (.txt file)

Been learning Python for the last couple of days in order to complete a data extraction task. I'm not getting anywhere and hope one of you lovely people can advise.
I need to extract data that follows: RESP, CRESP, RTTime and RT.
Here's a snippet as an example of the mess I have to deal with.
Thoughts?
Level: 4
*** LogFrame Start ***
Procedure: ActProcScenarios
No: 1
Line1: It is almost time for your town's spring festival. A friend of yours is
Line2: on the committee and asks if you would be prepared to help out with the
Line3: barbecue in the park. There is a large barn for use if it rains.
Line4: You hope that on that day it will be
pfrag: s-n-y
pword: sunny
pletter: u
Quest: Does the town have an autumn festival?
Correct: {LEFTARROW}
ScenarioListPract: 1
Topic: practice
Subtheme: practice
ActPracScenarios: 1
Running: ActPracScenarios
ActPracScenarios.Cycle: 1
ActPracScenarios.Sample: 1
DisplayFragInstr.OnsetDelay: 17
DisplayFragInstr.OnsetTime: 98031
DisplayFragInstr.DurationError: -999999
DisplayFragInstr.RTTime: 103886
DisplayFragInstr.ACC: 0
DisplayFragInstr.RT: 5855
DisplayFragInstr.RESP: {DOWNARROW}
DisplayFragInstr.CRESP:
FragInput.OnsetDelay: 13
FragInput.OnsetTime: 103899
FragInput.DurationError: -999999
FragInput.RTTime: 104998
I think regular expressions would be the right tool here because the \b word boundary anchors allow you to make sure that RESP only matches a whole word RESP and not just part of a longer word (like CRESP).
Something like this should get you started:
>>> import re
>>> for line in myfile:
...     match = re.search(r"\b(RT|RTTime|RESP|CRESP): (.*)", line)
...     if match:
...         print("Matched {0} with value {1}".format(match.group(1),
...                                                   match.group(2)))
Output:
Matched RTTime with value 103886
Matched RT with value 5855
Matched RESP with value {DOWNARROW}
Matched CRESP with value
Matched RTTime with value 104998
Transform it to a dict first, then just get items from the dict as you wish:
d = {k.strip(): v.strip() for (k, v) in
     [line.split(':', 1) for line in s.split('\n') if line.find(':') != -1]}  # split on the first ':' only
print(d['DisplayFragInstr.RESP'], d['DisplayFragInstr.CRESP'],
      d['DisplayFragInstr.RTTime'], d['DisplayFragInstr.RT'])
>>> ('{DOWNARROW}', '', '103886', '5855')
I think you may be making things harder for yourself than needed. E-prime has a file format called .edat that is designed for the purpose you are describing. An edat file is another format that contains the same information as the .txt file, but in a way that makes extracting variables easier. I personally only use the type of text file you have posted here as a form of data storage redundancy.
If you are doing things this way because you do not have a software key, it might help to know that the E-Merge and E-DataAid programs for E-prime don't require a key. You only need the key for editing build files. Whoever provided you with the .txt files should probably have an install disk for these programs. If not, it is available on the PST website (I believe you need a serial code to create an account, but I'm not certain).
E-prime generally creates a .edat file that matches the content of the text file you have posted an example of. Sometimes, though, if E-prime crashes you don't get the edat file and only have the .txt. Luckily you can generate the edat file from the .txt file.
Here's how I would approach this issue: if you do not have the edat files available, first use E-DataAid to recover them.
Then, presuming you have multiple participants, you can use E-Merge to merge all of the edat files together for all participants who completed this task.
Open the merged file. It might look a little chaotic depending on how much you have in the file. You can go to Tools -> Arrange Columns. This will show a list of all your variables. Adjust so that only the desired variables are in the right-hand box. Hit OK.
Looking at the file you posted, it says level 4 at the top, so I'm guessing there are a lot of procedures in this experiment. If you have many procedures in the program, you might at this point have lines that just have startup info and NULL in the locations where your variables of interest are. You can fix this by going to Tools -> Filter and creating a filter to eliminate those lines. Sometimes, depending on file structure, you might also end up with duplicate lines of the same data. You can also fix this with filtering.
You can then export this file as a csv.
import re
import pprint

def parse_logs(file_name):
    with open(file_name, "r") as f:
        lines = [line.strip() for line in f.readlines()]
    base_regex = r'^.*{0}: (.*)$'
    match_terms = ["RESP", "CRESP", "RTTime", "RT"]
    regexes = {term: base_regex.format(term) for term in match_terms}
    output_list = []
    for line in lines:
        for key, regex in regexes.items():
            match = re.match(regex, line)
            if match:
                match_tuple = (key, match.groups()[0])
                output_list.append(match_tuple)
    return output_list

pprint.pprint(parse_logs("respregex"))
Edit: Tim and Guy's answers are both better. I was in a hurry to write something and missed two much more elegant solutions.
