How to change this recursive function to a loop - Python

I want to write all base-26 numbers (with letters of the alphabet as digits) of a certain length into an ASCII-file.
For length = 4 this would look like
aaaa
aaab
aaac
...
zzzx
zzzy
zzzz
I achieved this with the following recursive code:
def fuz(data, ll_str):
    ll_str += 1

    def for_once(data_once, ll_str_once):
        tmp_str = ll_str_once
        tmp_str -= 1
        new_data = []
        for m in data_once:
            for i1 in range(97, 123):
                new_data.append(m + chr(i1))
        if tmp_str != 0:
            return for_once(new_data, tmp_str)
        else:
            return data_once

    return for_once(data, ll_str)

if __name__ == '__main__':
    ll = 4
    test = ['']
    file_output = open("out.txt", 'a')
    out_data = fuz(test, ll)
    for out in out_data:
        file_output.write(out + '\n')
    file_output.close()
However, for any length > 4, this solution runs out of memory on my machine.
Therefore I am looking for an alternative without recursion - can anybody give me a hint on how to do this?

This loop writes all base-26 numbers of length 4 (with letters as digits) in a file named out.txt.
base and length can be arbitrarily chosen - but prepare to be patient for higher values...
import itertools as it

base = 26
lngth = 4
with open('out.txt', 'w') as f:
    for t in it.product(range(97, 97 + base), repeat=lngth):
        s = ''.join(map(chr, t))
        f.write(s + '\n')
At least it doesn't consume too much memory, as requested by the OP.
However, with base 26 a length-5 file was already 70 MB, and for length 6 I stopped the writing process at 1.4 GB; at that point Notepad++ could no longer open the file. So everyone can judge the usefulness of this code for themselves.
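If you want to avoid itertools as well, a plain loop can treat each word as a base-26 odometer and increment the rightmost "digit", carrying to the left whenever it passes z. This is only a minimal sketch of that idea (it reuses the out.txt file name and the length of 4 from the question); memory use stays constant no matter how long the words are.
# minimal non-recursive sketch: treat the word as a base-26 odometer
length = 4
digits = [0] * length  # 0 -> 'a', 25 -> 'z'
with open('out.txt', 'w') as f:
    while True:
        f.write(''.join(chr(97 + d) for d in digits) + '\n')
        # increment the rightmost digit, carrying to the left on overflow
        pos = length - 1
        while pos >= 0:
            digits[pos] += 1
            if digits[pos] < 26:
                break
            digits[pos] = 0
            pos -= 1
        else:
            break  # every digit rolled over, so 'zzzz' was the last word written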

Possible to do this more efficiently (turn compact file to sparse)

I have to read in a file line by line, where each line lists the indices at which a vector has 1's,
so for example:
1 3 9 10
means:
0,1,0,1,0,0,0,0,0,1,1
My goal is to write program that will take each line and print out the full vector with the 0's.
I am able to do this with my current program for a few lines:
#create a sparse vector
list_line_sparse = [0] * int(num_features)
out_file = ''
#loop over all the lines
for item in lines:
    #split the line on spaces
    zz = item.split(' ')
    #get all ints on a line
    d = [int(x.strip()) for x in zz]
    #loop over all ints and change index to 1 in sparse vector
    for i in d:
        list_line_sparse[i] = 1
    out_file += (', '.join(str(item) for item in list_line_sparse))
    #change back to 0's
    for i in d:
        list_line_sparse[i] = 0
    out_file += '\n'
f = open('outfile', 'w')
f.write(out_file)
f.close()
The problem is that for a file with a lot of features and lines, my program is very inefficient - it basically never finishes. Is there anything that sticks out that I should change to make it more efficient? (e.g. the two for loops)
It would probably be more efficient to write each line of data to your output file as it is generated, rather than building up a huge string in memory.
numpy is a popular Python module that's good for doing bulk operations on numbers. If you start with:
import numpy as np
list_line_sparse = np.zeros(num_features, dtype=np.uint8)
Then, given d as the list of numbers on the current line, you can simply do:
list_line_sparse[d] = 1
to set ALL of those indexes in the array at the same time, no loop required. (At the Python level at least, obviously there's still a loop involved, but it's down in the C implementation of numpy).
It is slowing down because you are doing string concatenation. It is better to work with lists.
Also, you could use csv to read your space-separated lines in and then write each row out with commas automatically added:
import csv

num_features = 20

with open('input.txt', 'r', newline='') as f_input, open('output.txt', 'w', newline='') as f_output:
    csv_input = csv.reader(f_input, delimiter=' ')
    csv_output = csv.writer(f_output)
    for row in csv_input:
        list_line_sparse = [0] * int(num_features)
        for v in map(int, row):
            list_line_sparse[v] = 1
        csv_output.writerow(list_line_sparse)
So if input.txt contained the following:
1 3 9 10
1 3 9 11
2 7 3 5
Giving you an output.txt containing:
0,1,0,1,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0
0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0
0,0,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0
Too many loops: first the item.split(), then the for x in zz, then for i in d, then for item in list_line_sparse, and then for i in d again. String concatenation could be your most expensive part: the .join and the += on the output string. And all this for every line.
You could try a "character by character" parsing and writing. Something like this:
#features per line
count = int(num_features)
f = open('outfile.txt', 'w')
#loop over all lines
for item in lines:
    #reset the feature
    i = 0
    #the characters buffer
    index = ""
    #parse character by character
    for character in item:
        #if a space or end of line is found,
        #and the characters buffer (index) is not empty
        if character in (" ", "\r", "\n"):
            if index:
                #parse the characters buffer
                index = int(index)
                #if it is not the first feature
                if i > 0:
                    #add the separator
                    f.write(", ")
                #add 0's until index
                while i < index:
                    f.write("0, ")
                    i += 1
                #and write 1
                f.write("1")
                i += 1
                #reset the characters buffer
                index = ""
        #if it is not a space or end of line
        else:
            #add the character to the buffer
            index += character
    #if the last line didn't end with a carriage return,
    #index could be waiting to be parsed
    if index:
        index = int(index)
        if i > 0:
            f.write(", ")
        while i < index:
            f.write("0, ")
            i += 1
        f.write("1")
        i += 1
        index = ""
    #fill with 0's
    while i < count:
        if i == 0:
            f.write("0")
        else:
            f.write(", 0")
        i += 1
    f.write("\n")
f.close()
Let's rework your code into a simpler package that takes better advantage of Python's features:
import sys

NUM_FEATURES = 12

with open(sys.argv[1]) as source, open(sys.argv[2], 'w') as sink:
    for line in source:
        list_line_sparse = [0] * NUM_FEATURES
        indicies = map(int, line.rstrip().split())
        for index in indicies:
            list_line_sparse[index] = 1
        print(*list_line_sparse, file=sink, sep=',')
I revisited this problem with your "more efficiently" in mind. Although the above is more memory efficient, it is a hair slower time-wise. I reconsidered your original and came up with a solution that is less memory efficient but about 2x faster than your code:
import sys

NUM_FEATURES = 12

data = ''
with open(sys.argv[1]) as source:
    for line in source:
        list_line_sparse = ["0"] * NUM_FEATURES
        indicies = map(int, line.rstrip().split())
        for index in indicies:
            list_line_sparse[index] = "1"
        data += ",".join(list_line_sparse) + '\n'

with open(sys.argv[2], 'w') as sink:
    sink.write(data)
Like your original solution, it stores all the data in memory and writes it out at the end, which is both a disadvantage (memory-wise) and an advantage (time-wise).
input.txt
1 3 9 10
1 3 9 11
2 7 3 5
USAGE
% python3 test.py input.txt output.txt
output.txt
0,1,0,1,0,0,0,0,0,1,1,0
0,1,0,1,0,0,0,0,0,1,0,1
0,0,1,1,0,1,0,1,0,0,0,0

Any simple python code suggestions to add a constant to every individual number in a .txt

so -----2-----3----5----2----3----- would become -----4-----5----7----4----5-----
if the constant was 2, and so on for every individual line in the text file.
This would involve recognising the numbers in between the other characters and adding a constant to them, e.g. ---15--- becomes ---17---, not ---35---.
(basically getting a guitar tab and adding a constant to every fret number)
Thanks. Realised this started out vague and confusing so sorry about that.
Let's say the file is:
-2--3--5---7--1/n-6---3--5-1---5
and I'm adding 2, it should become:
-4--5--7---9--3/n-8---5--7-3---7
Change the filename to something relevant and this code will work. Anything below new_string needs to be changed for what you need, e.g. writing to a file.
def addXToAllNum(delta, line):
    values = [x for x in line.split('-') if x.isdigit()]
    values = [str(int(x) + delta) for x in values]
    return '--'.join(values)

delta = 2  # the constant to add
new_string = ''  # change this section to save to a new file
for line in open('tabfile.txt', 'r'):
    new_string += addXToAllNum(delta, line) + '\n'

## general principle
s = '-4--5--7---9--3 -8---5--7-3---7'
addXToAllNum(2, s)  # 6--7--9--11--10--7--9--5--9
This takes all numbers and increments by the shift regardless of the type of separating characters.
import re

shift = 2
numStr = "---1----9---15---"
print("Input: " + numStr)
resStr = ""
m = re.search("[0-9]+", numStr)
while m:
    resStr += numStr[:m.start(0)]
    resStr += str(int(m.group(0)) + shift)
    numStr = numStr[m.end(0):]
    m = re.search("[0-9]+", numStr)
resStr += numStr
print("Result:" + resStr)
Hi, you can use this to add a '-' between every line in the text file:
rt = ''
f = open('a.txt', 'r')
app = f.readlines()
for i in app:
    rt += str(i) + '-'
print " ".join(rt.split())
import re

c = 2  # in this example, the increment constant value is 2

with open('<your file path here>', 'r+') as file:
    new_content = re.sub(r'\d+', lambda m: str(int(m.group(0)) + c), file.read())
    file.seek(0)
    file.write(new_content)

python: how to count number in one file?

I need to write a Python program to read the values in a file, one per line, such as file: test.txt
1
2
3
4
5
6
7
8
9
10
Denoting these as j1, j2, j3, ... jn,
I need to sum the differences of consecutive values:
a = (j2 - j1) + (j3 - j2) + ... + (jn - j(n-1))
I have example source code
a = 0
for (j = 2; j <= n; j++) {
    a = a + (j - (j - 1))
}
print a
and the output is
9
If I understand correctly, you want to compute the following:
a = (j2 - j1) + (j3 - j2) + ... + (jn - j(n-1))
As you iterate over the file, it will subtract the value in the previous line from the value in the current line and then add all those differences.
a = 0
with open("test.txt", "r") as f:
    previous = next(f).strip()
    for line in f:
        line = line.strip()
        if not line: continue
        a = a + (int(line) - int(previous))
        previous = line
print(a)
Solution (Python 3)
res = 0
with open("test.txt", "r") as fp:
    lines = list(map(int, fp.readlines()))
    for i in range(1, len(lines)):
        res += lines[i] - lines[i-1]
print(res)
Output: 9
test.txt contains:
1
2
3
4
5
6
7
8
9
10
I'm not even sure if I understand the question, but here's my best attempt at solving what I think is your problem:
To read values from a file, use "with open()" in read mode ('r'):
with open('test.txt', 'r') as f:
    -your code here-
"as f" means that "f" will now represent your file if you use it anywhere in that block
So, to read all the lines and store them into a list, do this:
all_lines = f.readlines()
You can now do whatever you want with the data.
If you look at the function you're trying to solve, a = (j2 - j1) + (j3 - j2) + ... + (jn - j(n-1)), you'll notice that most of the terms cancel out, e.g. (j2 - j1) + (j3 - j2) = j3 - j1. Thus, the entire sum boils down to jn - j1, so all you need is the first and last number.
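A minimal sketch of that shortcut, assuming the file holds one integer per line as in the question:
with open('test.txt', 'r') as f:
    all_lines = f.readlines()

# the intermediate terms cancel, so the sum is just the last value minus the first
a = int(all_lines[-1]) - int(all_lines[0])
print(a)  # prints 9 for the sample file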
Edit: That being said, please try and search this forum first before asking any questions. As someone who's been in your shoes before, I decided to help you out, but you should learn to reference other people's questions that are identical to your own.
The correct answer is 9:
with open("data.txt") as f:
    # set prev to first number in the file
    prev = int(next(f))
    sm = 0
    # iterate over the remaining numbers
    for j in f:
        j = int(j)
        sm += j - prev
        # update prev
        prev = j
print(sm)
Or using itertools.tee and zip:
from itertools import tee

with open("data.txt") as f:
    a, b = tee(f)
    next(b)
    print(sum(int(j) - int(i) for i, j in zip(a, b)))

Counting the number of character differences between two files

I have two somewhat large (~20 MB) txt files which are essentially just long strings of integers (only either 0,1,2). I would like to write a python script which iterates through the files and compares them integer by integer. At the end of the day I want the number of integers that are different and the total number of integers in the files (they should be exactly the same length). I have done some searching and it seems like difflib may be useful but I am fairly new to python and I am not sure if anything in difflib will count the differences or the number of entries.
Any help would be greatly appreciated! What I am trying right now is the following, but it only looks at one entry and then terminates, and I don't understand why.
f1 = open("file1.txt", "r")
f2 = open("file2.txt", "r")
fileOne = f1.readlines()
fileTwo = f2.readlines()
f1.close()
f2.close()
correct = 0
x = 0
total = 0
for i in fileOne:
    if i != fileTwo[x]:
        correct += 1
    x += 1
    total += 1
if total != 0:
    percent = (correct / total) * 100
    print "The file is %.1f %% correct!" % (percent)
    print "%i out of %i symbols were correct!" % (correct, total)
Not tested at all, but look at this as something a lot easier (and more Pythonic):
from itertools import izip

with open("file1.txt", "r") as f1, open("file2.txt", "r") as f2:
    data = [(1, x == y) for x, y in izip(f1.read(), f2.read())]

print sum(1.0 for t in data if t[1]) / len(data) * 100
You can use enumerate to check the chars in your strings that don't match.
If all strings are guaranteed to be the same length:
with open("file1.txt","r") as f:
l1 = f.readlines()
with open("file2.txt","r") as f:
l2 = f.readlines()
non_matches = 0.
total = 0.
for i,j in enumerate(l1):
non_matches += sum([1 for k,l in enumerate(j) if l2[i][k]!= l]) # add 1 for each non match
total += len(j.split(","))
print non_matches,total*2
print non_matches / (total * 2) * 100. # if strings are all same length just mult total by 2
6 40
15.0
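For files around 20 MB you may not want to hold both in memory at once. Here is a minimal chunked sketch; count_differences is just an illustrative name, and it assumes the two files really are exactly the same length, as stated in the question:
def count_differences(path1, path2, chunk_size=1 << 16):
    # compare the files character by character, reading fixed-size chunks
    # so that neither file has to be loaded into memory in full
    differences = 0
    total = 0
    with open(path1) as f1, open(path2) as f2:
        while True:
            chunk1 = f1.read(chunk_size)
            chunk2 = f2.read(chunk_size)
            if not chunk1:
                break
            total += len(chunk1)
            differences += sum(1 for a, b in zip(chunk1, chunk2) if a != b)
    return differences, total

diff, total = count_differences("file1.txt", "file2.txt")
print("%i out of %i symbols differ (%.1f%%)" % (diff, total, 100.0 * diff / total))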

Decoding an encrypted file using Caesar Cipher

I want to decrypt an encrypted file. I'm having trouble all the way at the bottom when converting it and comparing it to a dictionary (which is full of words). Can someone guide me in the right direction? I'm struggling with comparing the two.
#this function takes a string and encrypts ONLY letters by k shifts
def CaeserCipher(string, k):
    #setting up variables to move through
    upper = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'*10000
    lower = 'abcdefghijklmnopqrstuvwxyz'*10000
    newCipher = ''
    #looping each letter and moving it k times
    for letter in string:
        if letter in upper:
            if upper.index(letter) + k > 25:
                indexPosition = (upper.index(letter) + k)
                newCipher = newCipher + upper[indexPosition]
            else:
                indexPosition = upper.index(letter) + k
                newCipher = newCipher + upper[indexPosition]
        elif letter in lower:
            if lower.index(letter) + k > 25:
                indexPosition = (lower.index(letter) + k)
                newCipher = newCipher + lower[indexPosition]
            else:
                indexPosition = lower.index(letter) + k
                newCipher = newCipher + lower[indexPosition]
        else:
            newCipher = newCipher + letter
    return newCipher

f = open('dictionary.txt', "r")
dictionary = set()
for line in f:
    word = line.strip()
    dictionary.add(word)
print dictionary

#main file
#reading file and encrypting text
f = open('encryptMystery1.txt')
string = ''
out = open("plain1.txt", "w")
myList = []
for line in f:
    myList.append(line)
for sentence in myList:
    for k in range(26):
        updatedSentence = CaeserCipher(sentence, k)
        for word in updatedSentence.split():
            if word in dictionary:
                out.write(updatedSentence)
                break
print myList
f.close()
out.close()
Let's tackle this in steps, and the first step is entitled
WHY DO YOU HAVE 260,000 CHARACTER LONG STRINGS IN A CAESAR CIPHER
Sorry, I don't mean to be overly dramatic, but you realize that's going to take up more space than, well, Space, don't you? And it's completely unnecessary. It's an ugly and slow hack to avoid understanding the % (modulo) operator. Don't do that.
Now, to the modulo:
Step two of course will have to be understanding the modulo. It's not actually hard; it's just like the remainder of a division problem. You remember when you were in school and just LEARNING division? 7/4 was 1r3, not 1.75, remember? Well, Python has operators for all that: 7/4 == 1.75, 7//4 == 1 and 7 % 4 == 3. This is useful because it can serve to "wrap" a number around a fixed length.
Let's say for example you have some string with 26 indexes (like, I don't know, an alphabet?). You're trying to add some number to a starting index and return the result, but UGH, YOU'RE ADDING 2 TO Y AND IT DOESN'T WORK! Well, with modulo it can. Y is at index 24 (remember zero is its own index), and 24 + 2 is 26, and there IS no index 26. However, if you know there are only going to be 26 elements in your string, you can take the modulo and use THAT instead.
By that logic, (index + CONSTANT) % len(alphabet) will ALWAYS return the right number using simple math, and not, sweet baby jesus, the quarter-million-character string you just butchered.
Ugh, your mother would be ashamed.
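A minimal sketch of that modulo trick with an ordinary 26-letter alphabet (shift_letter is just an illustrative name, not something from the question's code):
import string

ALPHABET = string.ascii_lowercase  # 26 letters, indexes 0..25

def shift_letter(letter, k):
    # wrap around the end of the alphabet with %, no giant strings needed
    index = ALPHABET.index(letter)
    return ALPHABET[(index + k) % len(ALPHABET)]

print(shift_letter('y', 2))  # prints 'a' -- 24 + 2 = 26, which wraps back to index 0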
Reversing a Caesar cipher
So you've got a good idea, going through each line in turn and applying every kind of cipher to it. If I were you I'd dump them all into separate files, or even into separate list elements. Remember though that if you're reversing the cipher, you need to use -k, not k. (In Python the modulo trick actually handles a negative shift correctly, but it's still a good idea to have your Caesar cipher convert it explicitly.) Try something like:
def cipher(text, k):
    cipherkey = "SOMESTRINGGOESHERE"
    if k < 0:
        k = len(cipherkey) + k
        # len(cipherkey) - abs(k) would be more clear, but if it HAS to be
        # a negative number to get in here, it seems silly to add the call
        # to abs
Then you can do:
startingtext = "Encrypted_text_goes_here"
possibledecrypts = [cipher(startingtext, -i) for i in range(1,26)]
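And to tie it back to the dictionary comparison the question struggles with, one possible sketch; it assumes cipher has been completed along the lines above and that dictionary is the set of words loaded from dictionary.txt in the question's code:
# keep only the candidate decryptions whose words all appear in the dictionary
for shift, candidate in enumerate(possibledecrypts, start=1):
    words = candidate.lower().split()
    if words and all(word in dictionary for word in words):
        print("shift %d looks like English: %s" % (shift, candidate))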
