How do I optimize the speed of my python compression code? - python
I have written a compression program and tested it on 10 KB text files, which took no less than 3 minutes. However, I've tested it with a 1 MB file, which is the assessment assigned by my teacher, and it takes longer than half an hour. Compared to my classmates', mine is unusually slow. It might be my computer or my code, but I have no idea. Does anyone know any tips or shortcuts to make my code run faster? My compression code is below; if there are any quicker ways of doing the loops, etc., please send me an answer (:
(By the way, my code DOES work, so I'm not asking for corrections, just enhancements or tips. Thanks!)
import re #used to enable functions(loops, etc.) to find patterns in text file
import os #used for anything referring to directories(files)
from collections import Counter #used to keep track on how many times values are added
size1 = os.path.getsize('file.txt') #find the size(in bytes) of your file, INCLUDING SPACES
print('The size of your file is ', size1,)
words = re.findall('\w+', open('file.txt').read())
wordcounts = Counter(words) #turns all words into array, even capitals
common100 = [x for x, it in Counter(words).most_common(100)] #identifies the 100 most common words
keyword = []
kcount = []
z = dict(wordcounts)
for key, value in z.items():
    keyword.append(key) #adds each keyword to the array called keywords
    kcount.append(value)
characters =['$','#','#','!','%','^','&','*','(',')','~','-','/','{','[', ']', '+','=','}','|', '?','cb',
'dc','fd','gf','hg','kj','mk','nm','pn','qp','rq','sr','ts','vt','wv','xw','yx','zy','bc',
'cd','df','fg','gh','jk','km','mn','np','pq','qr','rs','st','tv','vw','wx','xy','yz','cbc',
'dcd','fdf','gfg','hgh','kjk','mkm','nmn','pnp','qpq','rqr','srs','tst','vtv','wvw','xwx',
'yxy','zyz','ccb','ddc','ffd','ggf','hhg','kkj','mmk','nnm','ppn','qqp','rrq','ssr','tts','vvt',
'wwv','xxw','yyx''zzy','cbb','dcc','fdd','gff','hgg','kjj','mkk','nmm','pnn','qpp','rqq','srr',
'tss','vtt','wvv','xww','yxx','zyy','bcb','cdc','dfd','fgf','ghg','jkj','kmk','mnm','npn','pqp',
'qrq','rsr','sts','tvt','vwv','wxw','xyx','yzy','QRQ','RSR','STS','TVT','VWV','WXW','XYX','YZY',
'DC','FD','GF','HG','KJ','MK','NM','PN','QP','RQ','SR','TS','VT','WV','XW','YX','ZY','BC',
'CD','DF','FG','GH','JK','KM','MN','NP','PQ','QR','RS','ST','TV','VW','WX','XY','YZ','CBC',
'DCD','FDF','GFG','HGH','KJK','MKM','NMN','PNP','QPQ','RQR','SRS','TST','VTV','WVW','XWX',
'YXY','ZYZ','CCB','DDC','FFD','GGF','HHG','KKJ','MMK','NNM','PPN','QQP','RRQ','SSR','TTS','VVT',
'WWV','XXW','YYX''ZZY','CBB','DCC','FDD','GFF','HGG','KJJ','MKK','NMM','PNN','QPP','RQQ','SRR',
'TSS','VTT','WVV','XWW','YXX','ZYY','BCB','CDC','DFD','FGF','GHG','JKJ','KMK','MNM','NPN','PQP',] #characters which I can use
symbols_words = []
char = 0
for i in common100:
    symbols_words.append(characters[char]) #assigns one symbol from the characters list to each common word
    char = char + 1
print("Compression has now started")
f = 0
g = 0
no = 0
while no < 100:
    for i in common100:
        for w in words:
            if i == w and len(i)>1: #if the values in common100 are ACTUALLY in words
                place = words.index(i) #find exactly where the most common words are in the text
                symbols = symbols_words[common100.index(i)] #assigns one character with one common word
                words[place] = symbols # replaces the word with the symbol
                g = g + 1
    no = no + 1
string = words
stringMade = ' '.join(map(str, string))#makes the list into a string so you can put it into a text file
file = open("compression.txt", "w")
file.write(stringMade)#imports everything in the variable 'words' into the new file
file.close()
size2 = os.path.getsize('compression.txt')
no1 = int(size1)
no2 = int(size2)
print('Compression has finished.')
print('Your original file size has been compressed by', 100 - ((100/no1) * no2 ) ,'percent.'
'The size of your file now is ', size2)
Using something like
word_substitutes = dict(zip(common100, characters))
will give you a dict that maps common words to their corresponding symbol.
Then you can simply iterate over the words:
# Iterate over all the words.
# Use enumerate because we're going to modify the word in-place in the words list.
for word_idx, word in enumerate(words):
    # If the current word is in the `word_substitutes` dict, then we know it's one of the
    # 'common' words and can be replaced by its symbol.
    if word in word_substitutes:
        # Replace the word in-place.
        replacement_symbol = word_substitutes[word]
        words[word_idx] = replacement_symbol
This will give much better performance, because the dictionary lookup used for the common-word-to-symbol mapping runs in constant time on average (a dict is a hash table), rather than the linear scans you currently do. So the overall work drops to roughly O(N) over the words, instead of the O(N^3)-like behaviour you get from the two nested loops with the .index() call inside them.
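For reference, here is a minimal end-to-end sketch of this substitution approach (my own illustration, assuming the same file.txt, the characters list and the 100-word cutoff from the question):

import re
from collections import Counter

words = re.findall(r'\w+', open('file.txt').read())

# Build the word -> symbol mapping once, for the 100 most common multi-letter words.
common100 = [w for w, _ in Counter(words).most_common(100) if len(w) > 1]
word_substitutes = dict(zip(common100, characters))  # `characters` as defined in the question

# Single pass over the text: one O(1) dict lookup per word.
compressed = [word_substitutes.get(w, w) for w in words]

with open('compression.txt', 'w') as out:
    out.write(' '.join(compressed))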
The first thing I see that is bad for performance is:
for i in common100:
    for w in words:
        if i == w and len(i)>1:
            ...
What you are doing is seeing if the word w is in your list of common100 words. However, this check can be done in O(1) time by using a set and not looping through all of your top 100 words for each word.
common_words = set(common100)
for w in words:
    if w in common_words:
        ...
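If you want to see the difference concretely, here is a small illustrative timing sketch (mine, not from the answer) using timeit:

import timeit

setup = "words_list = [str(i) for i in range(1000)]; words_set = set(words_list)"
# Membership test against a list scans it element by element...
print(timeit.timeit("'999' in words_list", setup=setup, number=10000))
# ...while a set does a constant-time hash lookup.
print(timeit.timeit("'999' in words_set", setup=setup, number=10000))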
Generally you would do the following:
Measure how much time each "part" of your program needs. You could use a profiler (e.g. cProfile in the standard library) or simply sprinkle some times.append(time.time()) calls into your code and compute the differences; see the sketch after these steps for a minimal example. Then you know which part of your code is slow.
See if you can improve the algorithm of the slow part. gnicholas' answer shows one possibility to speed things up. The while no < 100 loop looks suspicious; maybe that can be improved. This step needs understanding of the algorithms you use. Be careful to select the best data structures for your use case.
If you can't use a better algorithm (because you already calculate everything the best way) you need to speed up the computations themselves. Numerical code benefits from numpy, with cython you can basically compile Python code to C, and numba uses LLVM to compile Python just-in-time.
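A minimal sketch of both measuring approaches (my own example, assuming your compression logic is wrapped in hypothetical functions compress(), read_input() and replace_words()):

import cProfile
import time

# Option 1: profile the whole run and sort by cumulative time per function.
cProfile.run('compress()', sort='cumulative')

# Option 2: time individual sections by hand.
t0 = time.perf_counter()
words = read_input()        # hypothetical "part 1" of your program
t1 = time.perf_counter()
replace_words(words)        # hypothetical "part 2"
t2 = time.perf_counter()
print('reading took', t1 - t0, 'seconds')
print('replacing took', t2 - t1, 'seconds')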
Related
Deciphering script in Python issue
Cheers, I am looking for help with my small Python project. The task says that the program has to be able to decipher a "monoalphabetic substitution cipher", and we have a complete database of words that will definitely (at least once) appear in the ciphered text. I have tried to create such a database of the words that will be ciphered:

lst_sample = []
n = int(input('Number of words in database: '))
for i in range(n):
    x = input()
    lst_sample.append(x)

The way I am trying to "decipher" is to look at each word's structure: I assign numbers to the distinct letters based on the order in which they appear in the word (e.g. feed = 0112 and hood = 0112 are the same, because both are a combination of three different letters in the same arrangement). I am using the subprogram pattern() for it:

def pattern(word):
    nextNum = 0
    letternNums = {}
    wordPattern = []
    for letter in word:
        if letter not in letterNums:
            letternNums[letter] = str(nextNum)
            nextNum += 1
        wordPattern.append(letterNums[letter])
    return ''.join(wordPattern)

Right after that, I made the database of ciphered words:

lst_en = []
q = input('Insert ciphered words: ')
if q == '':
    print(lst_en)
else:
    lst_en.append(q)

With these databases I could finally create the deciphering process:

for i in lst_en:
    for q in lst_sample:
        x = p
        word = i
        if pattern(x) == pattern(word):
            print(x)
            print(word)
            print()

If the words in lst_sample have different lengths (e.g. food, car, yellow), there is no problem assigning the decrypted words; even when they have the same length, I can sort them based on their different structure (e.g. puff, sort). The main problem, which I am not able to solve, comes when two words have the same length and the same structure (e.g. jane, word). I have no idea how to solve this while keeping the script architecture described above. Is there any way it could be solved with another if statement or anything similar? Is there any way to solve it using the information that the words in lst_sample will definitely be in the ciphered text? Thanks for all help!
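For what it's worth, a minimal corrected sketch of the pattern-signature idea described above (my own illustration, with the letterNums name typo fixed), just to show the technique:

def pattern(word):
    # Assign each distinct letter a number in order of first appearance,
    # so words with the same letter-repetition structure get the same signature.
    next_num = 0
    letter_nums = {}
    word_pattern = []
    for letter in word:
        if letter not in letter_nums:
            letter_nums[letter] = str(next_num)
            next_num += 1
        word_pattern.append(letter_nums[letter])
    return ''.join(word_pattern)

print(pattern('feed'))  # -> '0112'
print(pattern('hood'))  # -> '0112'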
Print a list of unique words from a text file after removing punctuation, and find longest word
Goal is to a) print a list of unique words from a text file and also b) find the longest word. I cannot use imports in this challenge. File handling and main functionality are what I want, however the list needs to be cleaned. As you can see from the output, words are getting joined with punctuation and therefore maxLength is obviously incorrect.

with open("doc.txt") as reader, open("unique.txt", "w") as writer:
    unwanted = "[],."
    unique = set(reader.read().split())
    unique = list(unique)
    unique.sort(key=len)
    regex = [elem.strip(unwanted).split() for elem in unique]
    writer.write(str(regex))
    reader.close()
    maxLength = len(max(regex, key=len))
    print(maxLength)
    res = [word for word in regex if len(word) == maxLength]
    print(res)

Sample:

pioneered the integrated placement year concept over 50 years ago [7][8][9] with more than 70 per cent of students taking a placement year, the highest percentage in the UK.[10]
Here's a solution that uses str.translate() to throw away all bad characters (+ newline) before we ever do the split(). (Normally we'd use a regex with re.sub(), but you're not allowed.) This makes the cleaning a one-liner, which is really neat:

bad = "[],.\n"
bad_transtable = str.maketrans(bad, ' ' * len(bad))

# We can directly read and clean the entire input, without a reader object:
cleaned_input = open('doc.txt').read().translate(bad_transtable)
#with open("doc.txt") as reader:
#    cleaned_input = reader.read().translate(bad_transtable)

# Get list of unique words, in decreasing length
unique_words = sorted(set(cleaned_input.split()), key=lambda w: -len(w))

with open("unique.txt", "w") as writer:
    for word in unique_words:
        writer.write(f'{word}\n')

max_length = len(unique_words[0])
print([word for word in unique_words if len(word) == max_length])

Notes:

- Since the input is already 100% cleaned and split, there is no need to append to a list/insert to a set as we go, then make another cleaning pass later. We can just create unique_words directly (using set() to keep only uniques). And while we're at it, we might as well use sorted(..., key=lambda w: -len(w)) to sort it in decreasing length. We only need to call sort() once, with no iterative appends to lists; hence we guarantee that max_length = len(unique_words[0]).
- This approach is also going to be more performant than nested loops (for line in <lines>: for word in line.split(): ... iterative append() to a word list).
- No need to do explicit writer/reader .open()/.close(); that's what the with statement does for you. (It's also more elegant for handling IO when exceptions happen.)
- You could also merge the printing of the max_length words into the writer loop, but it's cleaner code to keep them separate.
- Note we use f-string formatting f'{word}\n' to add the newline back when we write() an output line.
- In Python we use lower_case_with_underscores for variable names, hence max_length not maxLength. See PEP 8.
- In fact here we don't strictly need a with-statement for the reader, if all we're going to do is slurp its entire contents in one go with open('doc.txt').read(). (That's not scalable for huge files; you'd have to read in chunks or n lines.)
- str.maketrans() is a builtin, but if your teacher objects to the module reference, you can also call it on a bound string, e.g. ' '.maketrans().
- str.maketrans() is really a throwback to the days when we only had 95 printable ASCII characters, not Unicode. It still works on Unicode, but building and using huge translation dicts is annoying and uses memory; regex on Unicode is easier, since you can define entire character classes.

Alternative solution if you don't yet know str.translate():

dirty_input = open('doc.txt').read()
cleaned_input = dirty_input
# If you can't use either 're.sub()' or 'str.translate()', you have to manually
# str.replace() each bad char one-by-one (or else use a method like str.isalpha())
for bad_char in bad:
    cleaned_input = cleaned_input.replace(bad_char, ' ')

And if you wanted to be ridiculously minimalist, you could write the entire output file in one line with a list comprehension. Don't do this; it would be terrible for debugging, e.g. if you couldn't open/write/overwrite the output file, got an IOError, or unique_words wasn't a list, etc.:

open("unique.txt", "w").writelines([f'{word}\n' for word in unique_words])
Here is another solution without any function.

bad = '`~##$%^&*()-_=+[]{}\|;\':\".>?<,/?'
clean = ' '
for i in a:
    if i not in bad:
        clean += i
    else:
        clean += ' '
cleans = [i for i in clean.split(' ') if len(i)]
clean_uniq = list(set(cleans))
clean_uniq.sort(key=len)
print(clean_uniq)
print(len(clean_uniq[-1]))
Here is a solution. The trick is to use the Python str method .isalpha() to filter out non-alphabetic characters.

with open("unique.txt", "w") as writer:
    with open("doc.txt") as reader:
        cleaned_words = []
        for line in reader.readlines():
            for word in line.split():
                cleaned_word = ''.join([c for c in word if c.isalpha()])
                if len(cleaned_word):
                    cleaned_words.append(cleaned_word)

        # print unique words
        unique_words = set(cleaned_words)
        print(unique_words)

        # write words to file? depends what you need here
        for word in unique_words:
            writer.write(str(word))
            writer.write('\n')

        # print length of longest
        print(len(sorted(unique_words, key=len, reverse=True)[0]))
Counting how many times a string appears in a CSV file
I have a piece of code that is supposed to tell me how many times a word occurs in a CSV file. Note: the file is pretty large (2 years of text messages). This is my code:

key_word1 = 'Exmple_word1'
key_word2 = 'Example_word2'
counter = 0
with open('PATH_TO_FILE.csv', encoding='UTF-8') as a:
    for line in a:
        if (key_word1 or key_word2) in line:
            counter = counter + 1
print(counter)

There are two words because I did not know how to make it case-insensitive. To test it I used the Find function in Word on the whole file (using only one of the words, as I was able to do a case-insensitive search there) and I received more than double what my code calculated. At first I used the value_counts() function, BUT I received different values for the same word (searching Exmple_word1 appeared 32 times, 56 times, 2 times and so on). I kind of got stuck there for a while, but it got me thinking. I use two keyboards on my phone which I change regularly - could it be that the same words could actually be different, and that would explain why I am getting these results? Also, I pretty much checked all sources regarding this matter and I found different approaches that did not actually do what I want them to do (the value_counts() method, for example). If that is the case, how can I fix this?
Notice some mistakes in your code:

1. key_word1 or key_word2 is "lazy": if the left part, key_word1, evaluates to True, it won't even look at key_word2. This causes the code to check only whether key_word1 appears in the line. An example to emphasize this:

w1 = 'word1'
w2 = 'word2'
s = 'bla word2'
(w1 or w2) in s
>> False
(w2 or w1) in s
>> True

2. Reading a csv file: I recommend using the csv package (just import it), something like:

import csv
with open('PATH_TO_FILE.csv') as f:
    for line in csv.reader(f):
        # do your logic here

3. Case sensitivity - don't work hard, you can simply lower-case the line you read, so you don't have to keep two words.

I guess the solution you are looking for should look something like:

import csv

word_to_search = 'donald'
counter = 0

with open('PATH_TO_FILE.csv', encoding='UTF-8') as f:
    for line in csv.reader(f):
        if any(word_to_search in l for l in map(str.lower, line)):
            counter += 1

Running on the input:

bla,some other bla,donald rocks
make,who,great again, donald is here, hura

will result in counter = 2.
Trying to read text file and count words within defined groups
I'm a novice Python user. I'm trying to create a program that reads a text file and searches that text for certain words that are grouped (and that I predefine by reading from a csv). For example, if I wanted to create my own definition for "positive" containing the words "excited", "happy", and "optimistic", the csv would contain those terms. I know the below is messy - the txt file I am reading from contains 7 occurrences of the three "positive" tester words I read from the csv, yet the result prints out as 25. I think it's returning a character count, not a word count. Code:

import csv
import string
import re
from collections import Counter

remove = dict.fromkeys(map(ord, '\n' + string.punctuation))

# Read the .txt file to analyze.
with open("test.txt", "r") as f:
    textanalysis = f.read()
    textresult = textanalysis.lower().translate(remove).split()

# Read the CSV list of terms.
with open("positivetest.csv", "r") as senti_file:
    reader = csv.reader(senti_file)
    positivelist = list(reader)

# Convert term list into flat chain.
from itertools import chain
newposlist = list(chain.from_iterable(positivelist))

# Convert chain list into string.
posstring = ' '.join(str(e) for e in newposlist)
posstring2 = posstring.split(' ')
posstring3 = ', '.join('"{}"'.format(word) for word in posstring2)

# Count number of words as defined in list category
def positive(str):
    counts = dict()
    for word in posstring3:
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    total = sum(counts.values())
    return total

# Print result; will write to CSV eventually
print("Positive: ", positive(textresult))
I'm a beginner as well, but I stumbled upon a process that might help. After you read in the file, split the text at every space, tab, and newline. In your case, I would keep all the words lowercase and include punctuation in your split call. Save this as an array and then parse it with some sort of loop to get the number of instances of each 'positive', or other, word. Look at this, specifically the "train" function: https://github.com/G3Kappa/Adjustable-Markov-Chains/blob/master/markovchain.py Also this link (ignore the JSON stuff at the beginning; the article talks about sentiment analysis): https://dev.to/rodolfoferro/sentiment-analysis-on-trumpss-tweets-using-python- The same applies to this link: http://adilmoujahid.com/posts/2014/07/twitter-analytics/ Good luck!
I looked at your code and passed some of my own data through it as a sample. I have two ideas for you, based on what I think you may want.

First assumption: you want a basic sentiment count. Getting to textresult is great. Then you did the same with the positive lexicon, getting to positivelist, which I thought would be the perfect action. But then you converted positivelist into essentially one big sentence. Would you not just:

1. Pass a stop_words list through textresult.
2. Merge the two datasets (textresult, less stopwords, and positivelist) on common words - as in an 'inner join'.
3. Then basically do your term frequency.
4. It is much easier to aggregate the score then.

Second assumption: you are focusing on "excited", "happy", and "optimistic" and you are trying to isolate text themes into those 3 categories.

1. Again, stop at textresult.
2. Download the 'nrc' and/or 'syuzhet' emotional valence dictionaries. They break down emotive words into 8 emotional groups, so if you only want 3 of the 8 emotive groups, take a subset.
3. Process it like you did to get positivelist.
4. Do another join.

Sorry, this is a bit hashed up, but if I was anywhere near what you were thinking, let me know and we can make contact. Second apology: I'm also a novice Python user; I am adapting what I use in R to Python in the above (and it's not subtle either :) )
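To make the first assumption concrete, here is a minimal sketch of the "keep only lexicon words, then count them" idea (my own illustration, with small hypothetical inputs standing in for the question's textresult and the flat positive-word list):

from collections import Counter

# Hypothetical inputs in place of the question's textresult / newposlist.
textresult = ['i', 'am', 'happy', 'and', 'excited', 'and', 'happy']
positive_words = {'excited', 'happy', 'optimistic'}

# "Inner join": keep only the words that appear in the positive lexicon,
# then count how often each one occurs.
positive_counts = Counter(w for w in textresult if w in positive_words)
print(positive_counts)                # Counter({'happy': 2, 'excited': 1})
print(sum(positive_counts.values()))  # total positive-word count: 3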
Frequency of keywords in a list
Hi, so I have 2 text files. I have to read the first text file, count the frequency of each word, remove duplicates, and create a list of lists with each word and its count in the file. My second text file contains keywords; I need to count the frequency of these keywords in the first text file and return the result, without using any imports, dict, or zips. I am stuck on how to go about this second part. I have the file open and have removed punctuation etc., but I have no clue how to find the frequency. I played around with the idea of .find() but no luck as of yet. Any suggestions would be appreciated. This is my code at the moment; it seems to find the frequency of the keywords in the keyword file but not in the first text file.

def calculateFrequenciesTest(aString):
    listKeywords = aString
    listSize = len(listKeywords)
    keywordCountList = []
    while listSize > 0:
        targetWord = listKeywords[0]
        count = 0
        for i in range(0, listSize):
            if targetWord == listKeywords[i]:
                count = count + 1
        wordAndCount = []
        wordAndCount.append(targetWord)
        wordAndCount.append(count)
        keywordCountList.append(wordAndCount)
        for i in range(0, count):
            listKeywords.remove(targetWord)
        listSize = len(listKeywords)
    sortedFrequencyList = readKeywords(keywordCountList)
    return keywordCountList;

EDIT - Currently toying around with the idea of reopening my first file again, but this time without turning it into a list? I think my errors are somehow coming from it counting the frequency of my list of lists. These are the types of results I am getting:

[[['the', 66], 1], [['of', 32], 1], [['and', 27], 1], [['a', 23], 1], [['i', 23], 1]]
You can try something like the following; I am taking a list of words as an example.

word_list = ['hello', 'world', 'test', 'hello']
frequency_list = {}
for word in word_list:
    if word not in frequency_list:
        frequency_list[word] = 1
    else:
        frequency_list[word] += 1
print(frequency_list)

RESULT: {'test': 1, 'world': 1, 'hello': 2}

Since you have put a constraint on dicts, I have made use of two lists to do the same task. I am not sure how efficient it is, but it serves the purpose.

word_list = ['hello', 'world', 'test', 'hello']
frequency_list = []
frequency_word = []
for word in word_list:
    if word not in frequency_word:
        frequency_word.append(word)
        frequency_list.append(1)
    else:
        ind = frequency_word.index(word)
        frequency_list[ind] += 1
print(frequency_word)
print(frequency_list)

RESULT:
['hello', 'world', 'test']
[2, 1, 1]

You can change it to how you like, or refactor it as you wish.
I agree with @bereal that you should use Counter for this. I see that you have said that you don't want "imports, dict, or zips", so feel free to disregard this answer. Yet, one of the major advantages of Python is its great standard library, and every time you have list available, you'll also have dict, collections.Counter and re.

From your code I'm getting the impression that you want to use the same style that you would have used with C or Java. I suggest trying to be a little more pythonic. Code written this way may look unfamiliar, and can take time getting used to. Yet, you'll learn way more.

Clarifying what you're trying to achieve would help. Are you learning Python? Are you solving this specific problem? Why can't you use any imports, dict, or zips?

So here's a proposal utilizing built-in functionality (no third party), for what it's worth (tested with Python 2):

#!/usr/bin/python

import re           # String matching
import collections  # collections.Counter basically solves your problem


def loadwords(s):
    """Find the words in a long string.

    Words are separated by whitespace. Typical signs are ignored.
    """
    return (s
            .replace(".", " ")
            .replace(",", " ")
            .replace("!", " ")
            .replace("?", " ")
            .lower()).split()


def loadwords_re(s):
    """Find the words in a long string.

    Words are separated by whitespace. Only characters and ' are allowed in strings.
    """
    return (re.sub(r"[^a-z']", " ", s.lower())
            .split())


# You may want to read this from a file instead
sourcefile_words = loadwords_re("""this is a sentence. This is another sentence.
Let's write many sentences here. Here comes another sentence. And another one.
In English, we use plenty of "a" and "the". A whole lot, actually.
""")

# Sets are really fast for answering the question: "is this element in the set?"
# You may want to read this from a file instead
keywords = set(loadwords_re("""
of and a i the
"""))

# Count for every word in sourcefile_words, ignoring your keywords
wordcount_all = collections.Counter(sourcefile_words)

# Lookup word counts like this (Counter is a dictionary)
count_this = wordcount_all["this"]  # returns 2
count_a = wordcount_all["a"]        # returns 1

# Only look for words in the keywords-set
wordcount_keywords = collections.Counter(word for word in sourcefile_words
                                         if word in keywords)

count_and = wordcount_keywords["and"]             # Returns 2
all_counted_keywords = wordcount_keywords.keys()  # Returns ['a', 'and', 'the', 'of']
Here is a solution with no imports. It uses nested linear searches, which are acceptable with a small number of searches over a small input array, but will become unwieldy and slow with larger inputs. Still, the input here is quite large and it handles it in reasonable time. I suspect that if your keywords file were larger (mine has only 3 words) the slowdown would start to show.

Here we take an input file, iterate over the lines, remove punctuation, then split by spaces and flatten all the words into a single list. The list has dupes, so to remove them we sort the list so the dupes come together, then iterate over it creating a new list containing the string and a count. We can do this by incrementing the count as long as the same word appears in the list and moving to a new entry when a new word is seen.

Now you have your word frequency list and you can search it for the required keyword and retrieve the count.

The input text file is here and the keyword file can be cobbled together with just a few words in a file, one per line.

Python 3 code; it indicates where applicable how to modify for Python 2.

# use string.punctuation if you are somehow allowed
# to import the string module.
translator = str.maketrans('', '', '!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~')
words = []
with open('hamlet.txt') as f:
    for line in f:
        if line:
            line = line.translate(translator)
            # py 2 alternative
            #line = line.translate(None, string.punctuation)
            words.extend(line.strip().split())

# sort the word list, so instances of the same word are
# contiguous in the list and can be counted together
words.sort()

thisword = ''
counts = []

# for each word in the list add to the count as long as the
# word does not change
for w in words:
    if w != thisword:
        counts.append([w, 1])
        thisword = w
    else:
        counts[-1][1] += 1

for c in counts:
    print('%s (%d)' % (c[0], c[1]))

# function to prevent need to break out of nested loop
def findword(clist, word):
    for c in clist:
        if c[0] == word:
            return c[1]
    return 0

# open keywords file and search for each word in the
# frequency list.
with open('keywords.txt') as f2:
    for line in f2:
        if line:
            word = line.strip()
            thiscount = findword(counts, word)
            print('keyword %s appear %d times in source' % (word, thiscount))

If you were so inclined you could modify findword to use a binary search, but it's still not going to be anywhere near a dict. collections.Counter is the right solution when there are no restrictions. It's quicker and less code.
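As an illustration of that last remark (my own sketch, not part of the original answer): since counts is sorted by word, findword could use a hand-rolled binary search with no imports, assuming the same [word, count] list structure:

def findword_binary(clist, word):
    # clist is sorted by word because `words` was sorted before counting,
    # so we can halve the search range on each step.
    lo, hi = 0, len(clist) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if clist[mid][0] == word:
            return clist[mid][1]
        elif clist[mid][0] < word:
            lo = mid + 1
        else:
            hi = mid - 1
    return 0

print(findword_binary([['a', 3], ['hamlet', 7], ['the', 66]], 'hamlet'))  # 7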