Repeated values received from a for loop in Python

I am running a for loop in Python and it keeps producing the same value multiple times. I've been trying everything, but I can't find where my mistake is.
I am trying to divide text into chunks of length 100 with the following code, where clean_file_body_string is my text.
For context, my file has close to 500k characters.
I'm noticing the repeated values in the print(meta) output and also in my file.
import math
from tqdm.auto import tqdm  # this is our progress bar

batch_size = math.ceil(len(clean_file_body_string)/100)
for i in tqdm(range(0, len(clean_file_body_string), 100)):
    # set end position of each batch to take only what is needed
    i_end = min(i+batch_size, len(clean_file_body_string))
    # get batch of lines and IDs
    # the next code takes the text and puts it into chunks
    lines_batch = [clean_file_body_string[i:i+100] for i in range(0, len(clean_file_body_string), 100)]
    ids_batch = [str(n) for n in range(i, i_end)]
    meta = [{'text': lines_batch} for i in range(0, len(text_chunks), 100)]
    print(meta)
I've been trying different methods, but this code seems the simplest and the only one I've almost managed to make work.
Take into account that I'm still learning Python.
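For comparison, a minimal sketch of building one chunk, one ID, and one metadata dict per iteration (assuming that is the intended output, which the question does not state explicitly) looks like this:
from tqdm.auto import tqdm  # progress bar

chunk_size = 100
# build the list of chunks once, outside any loop
text_chunks = [clean_file_body_string[i:i+chunk_size]
               for i in range(0, len(clean_file_body_string), chunk_size)]

for n, chunk in enumerate(tqdm(text_chunks)):
    chunk_id = str(n)        # one ID per chunk
    meta = {'text': chunk}   # one metadata dict per chunk, not the whole list
    print(chunk_id, meta)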

Related

Saving continuously generated simulation data with Python3

So my question is how I should save a large amount of simulation data to a file using Python (or append new data rows to an existing file).
Let's say I have NN=1000 particles, and I want to save the position and velocity data of each particle (x y z, vx vy vz). The data is in the format [x1,y1,z1,vx1,vy1,vz1, x2,y2,z2,vx2,vy2,vz2, ...] and so on.
The simulation is working well, but I believe the method I use for saving and keeping this information is not really optimal for me.
Pseudocode similar to my code:
import numpy as np

T_max = 1000   # for example
dt = 0.1       # time step
T = 0          # current time
iterations = int(T_max/dt)   # number of iterations we are doing
NN = 1000      # number of particles

ZZ = np.zeros( (iterations, 2+NN*6) )   # Here I generate the whole data matrix at the beginning.
# ^ might not be the best idea, as the system needs to keep everything in memory for the whole time
# So I guess saving could be done in chunks?

ZZ[0][0], ZZ[0][1] = T, dt
# ZZ[0][2:] = initialize_system(NN=NN)  # so let's initialize the system.
# However, for this post I do this differently due to simplicity. See below
ZZ[0][2:] = np.random.uniform(-100, 100, NN*6)

i = 0
while i < iterations - 1:
    T += dt
    ZZ[i+1][0], ZZ[i+1][1] = T, dt
    # ZZ[i+1][2:] = rk4(EOM_function, posvel=ZZ[i][2:])
    # ^ Using this I would calculate new positions based on previous ones.
    ZZ[i+1][2:] = np.random.uniform(-100, 100, NN*6)   # This is just for example here.
    i += 1

# Now the simulation data is basically done, so one would need to save it.
# This one feels slow, as it takes 181s to save and the file size is 1046246KB
np.savetxt('test1.txt', ZZ)

# other method, with a bit less accuracy, as I don't need to have all decimals saved
np.savetxt('test2.txt', ZZ, fmt='%1.6f')   # Takes 125s and size is 426698KB

# Both of the above are kinda slow, so I also tried to save to npy format
np.save('test.npy', ZZ)   # It took 8.9s and size 164118KB
So this np.save() method seems to be fast, but I read somewhere that I cannot append data to it. So this would not work if I keep saving the data in parts while calculating new positions.
So back to my question: how should/could I save the data efficiently (fast and memory friendly)? I keep running into memory issues when NN and T_max get larger, because with this method I keep the whole ZZ in memory all the time.
So I guess I should calculate ZZ in parts, i.e. in iterations/10 parts, but then I would need to append this data to an existing file, and the tests I have made felt slow. Any suggestions?
EDIT: feel free to ask more clarifying questions, as I feel like I forgot to explain something.
That highly depends on what you intend to use the output for. If it's stored for further calculations, .npy or some other binary format is always the way to go, as it is faster, takes less space, and doesn't lose precision between loads and saves, compared with serializing it into a human-readable format. If you need it to be readable, you might as well just output row by row to a CSV file or something.
If you want to do it with binary, h5py allows you to extend a dataset after saving and append more data to it.
import numpy as np
import h5py

T_max = 10**4   # for example
dt = 0.1        # time step
T = 0           # current time
iterations = int(T_max/dt)   # number of iterations we are doing
NN = 1000       # number of particles
chunk_size = 10**3

ZZ = np.zeros( (chunk_size, 2+NN*6) )
ZZ[0][0], ZZ[0][1] = T, dt
# ZZ[0][2:] = initialize_system(NN=NN)  # so let's initialize the system.
# However, for this post I do this differently due to simplicity. See below
ZZ[0][2:] = np.random.uniform(-100, 100, NN*6)

with h5py.File("test.h5", "a") as f:
    dset = f.create_dataset('ZZ', (0, 2+NN*6), maxshape=(None, 2+NN*6), dtype='float64', chunks=(chunk_size, 2+NN*6))
    for chunk in range(0, iterations, chunk_size):
        for i in range(0, chunk_size - 1):
            T += dt
            ZZ[i + 1][0], ZZ[i + 1][1] = T, dt
            # ZZ[i+1][2:] = rk4(EOM_function, posvel=ZZ[i][2:])
            # ^ Using this I would calculate new positions based on previous ones.
            ZZ[i + 1][2:] = np.random.uniform(-100, 100, NN*6)   # This is just for example here.
        # Expand the file here to allow for more data.
        dset.resize(dset.shape[0] + chunk_size, axis=0)
        dset[chunk: chunk + chunk_size] = ZZ
        # Update and initialize the next chunk. The next chunk's first row should be
        # the last row of the previous chunk plus one iteration.
        T += dt
        ZZ[0][0], ZZ[0][1] = T, dt
        # ZZ[0][2:] = rk4(EOM_function, posvel=ZZ[-1][2:])
        # ^ Using this I would calculate new positions based on previous ones.
        ZZ[0][2:] = np.random.uniform(-100, 100, NN*6)   # This is just for example here.
    print(dset.shape)
This takes 70 seconds for the save step on my computer, generating a 45GB file, for a dataset that is 100 times the size of the one in your original code.
The above code is more general, in case you are streaming your data and don't know the final size. If you know it from the start, you can replace the initial create_dataset with
dset = f.create_dataset('ZZ', (iterations, 2+NN*6), dtype='float64')
and remove the dset.resize(dset.shape[0] + chunk_size, axis=0) call.
You'll probably also want to read it back in chunks afterwards for other processing, in which case you can follow the docs here: https://docs.h5py.org/en/latest/high/dataset.html#reading-writing-data
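For completeness, a rough sketch of reading the dataset back chunk by chunk (following the pattern in those docs, with the file and dataset names from the code above) might look like:
import h5py

chunk_size = 10**3
with h5py.File("test.h5", "r") as f:
    dset = f['ZZ']
    for start in range(0, dset.shape[0], chunk_size):
        block = dset[start:start + chunk_size]   # only this slice is read from disk
        # ... process block here ...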
Okay, so I'm continuing my question / providing a possible answer to it based on the answer from EricChen1248. EDIT: The answer provided by EricChen1248 works now and is way better than my code below. See his code.
I still do not completely understand how f.create_dataset() truly works (i.e. when it actually writes data to the file in the loop, etc.).
Using the code provided by Eric, it created and saved the data files quickly, but when I read the file back as follows
hf = h5py.File('temp/test.h5', 'r')
ZZ = np.array(hf['ZZ'])
hf.close()
and plotted the first column (the time T column, which should increase by the timestep dt after each iteration), I get the following figure
plt.plot(ZZ[:,0])
[figure: time T column plotted]
and as can be seen, it grows to a time of 100 and then drops back to zero. This happens after the first chunk_size has been passed. I started to read the docs provided by Eric, and using his code as a reference I managed to write something like this:
import numpy as np
import h5py

T_max = 10**4
dt = 0.1
T = 0
NN = 1000
iterations = int(T_max/dt)
chunk_size = 10**3

with h5py.File('temp/data12.h5', 'a') as hf:
    dset = hf.create_dataset("ZZ", (chunk_size, 2+NN*6), maxshape=(None, 2+NN*6), chunks=(chunk_size, 2+NN*6), dtype='f8')
    # ^ first I create a dataset equal to one chunk_size
    # Here I initialize the system. Columns: 0=T, 1=dt, 2=arbitrary data point, 3=sin(column 2);
    # all the rest of the columns are random numbers just to fill in some values.
    dset[0,0], dset[0,1] = T, dt
    #dset[0,2:] = np.random.uniform(0,1,NN*6)
    dset[0,2] = 1
    dset[0,3] = np.sin(dset[0,2])
    dset[0,4:] = np.random.uniform(0, 1, NN*6 - 2)
    print('starts')
    # The main difference below is that I use the dataset (dset)
    # as the data matrix to be filled, instead of the matrix ZZ as in my question.
    i = 0
    #for j, s in enumerate(dset.iter_chunks()):
    for j, s in enumerate(range(0, iterations, chunk_size)):
        print(j, s)
        while i < iterations and i < chunk_size*(j+1) - 1:
        #for i in range(chunk_size*j, chunk_size*(j+1)-1):
            T += dt
            dset[i+1,0], dset[i+1,1] = T, dt
            #dset[i+1,2:] = np.sin(dset[i,2:]+dt)
            dset[i+1,2] = dset[i,2] + dt
            dset[i+1,3] = np.sin(dset[i,2]+dt)
            dset[i+1,4:] = dset[i,4:] + np.random.uniform(-1, 1, NN*6-2)
            i += 1
        print(dset.shape)
        dset.resize(dset.shape[0] + chunk_size, axis=0)
This code runs in 1 min 50 s and saves a file of 4.47GB, so I am happy with the speed. What I'm really happy about is that it does not use much memory while iterating (I used to run into problems with huge RAM usage).
When I read the data file produced by my code (in the same way as above), the time T column now grows nicely to T=10e4 as it should [figure: time T column plotted, my code version]. It still generates one more chunk_size block at the end of the dataset, which is full of zeros; that I need to get rid of. One more proof that the code works and saves the data without weird problems is this sinusoidal plot: plt.plot(ZZ[500:1500,0], ZZ[500:1500,3]) [figure: sinusoidal proof]. Note that the plot is limited to T ~ [50,150] so one can still see something there (if the whole thing were plotted, the lines would not be distinguishable).
I believe this is not the best way to write this code, but it is the way I got it working, so if someone sees improvements, please let me know (one possible cleanup is sketched below). Also, I am curious to know why the code provided by Eric did not work, at least for me.
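One possible cleanup for the trailing block of zeros (a suggestion only, not tested against the exact code above): since h5py datasets created with maxshape=(None, ...) can also be shrunk, the dataset can be resized back down to the number of rows actually filled before the file is closed:
# still inside the with-block, after the outer for loop has finished
dset.resize(iterations, axis=0)   # drop the extra all-zero chunk at the end
print(dset.shape)                 # now (iterations, 2 + NN*6)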
EDIT : fixed typos

How do I automate my Python script or get multiple entries in one run?

I am running the following Python script:
import random

result_str = ''.join((random.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!##$%^&*()') for i in range(8)))
with open('file_output.txt', 'a') as out:
    out.write(f'{result_str}\n')
Is there a way I could automate this script to run repeatedly, or get multiple outputs in one run?
For example, right now the output is stored in the file one entry at a time:
kmfd5s6s
But I would like to get 1,000,000 entries in the file in one run, with no duplication.
Same logic as given by PangolinPaws, but since you require 1,000,000 entries, which is quite large, using numpy could be more efficient. Also, replace random.choice() with random.choices() with k=8, in order to avoid the for loop used to generate the string.
import random
import numpy as np

a = np.array([])
for i in range(1000000):
    rand_str = ''.join(random.choices('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!##$%^&*()', k=8))
    if rand_str not in a:
        a = np.append(a, rand_str)
np.savetxt("generate_strings.csv", a, fmt='%s')
You need to nest your out.write() in a loop, something like this, to make it happen multiple times:
import random

with open('file_output.txt', 'a') as out:
    for x in range(1000):   # the number of lines you want in the output file
        result_str = ''.join((random.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!##$%^&*()') for i in range(8)))
        out.write(f'{result_str}\n')
However, while unlikely, it is possible that you could end up with duplicate rows. To avoid this, you can generate and store your random strings in a loop and check for duplicates as you go. Once you have enough, write them all to the file outside the loop:
import random

results = []
while len(results) < 1000:   # the number of lines you want in the output file
    result_str = ''.join((random.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!##$%^&*()') for i in range(8)))
    if result_str not in results:   # check if the generated result_str is a duplicate
        results.append(result_str)

with open('file_output.txt', 'a') as out:
    out.write('\n'.join(results))
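Since checking result_str not in results scans the whole list each time, a set may scale better for 1,000,000 entries. Here is a minimal sketch of the same idea using a set (the character pool and filename are taken from the snippets above):
import random

chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!##$%^&*()'
results = set()
while len(results) < 1000000:   # the number of unique lines you want
    results.add(''.join(random.choices(chars, k=8)))   # duplicates are simply ignored by the set

with open('file_output.txt', 'a') as out:
    out.write('\n'.join(results) + '\n')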

Request status update Twitter stream data

I retrieved Twitter data via the streaming API in Python; however, I am also interested in how the public metrics evolve over time. As a result, I would like to request the metrics on a daily basis.
Unfortunately, the API for the status update can only handle 100 requests at a time. I have a list of all the IDs; how is it possible to automatically split the string of IDs so that all of them will be requested, always in batches of 100?
Thank you a lot in advance!
Keep it as a list of IDs instead of a single string.
Then you can use range(0, len(...), 100) with slicing [n:n+100], like this:
# example data
all_ids = list(range(500))

SIZE = 100
#SIZE = 10   # test on a smaller size

for n in range(0, len(all_ids), SIZE):
    print(all_ids[n:n+SIZE])
You can even use yield to create a special generator function for this:
def split(data, size):
    for n in range(0, len(data), size):
        yield data[n:n+size]

# example data
all_ids = list(range(500))

SIZE = 100
#SIZE = 10   # test on a smaller size

for part in split(all_ids, SIZE):
    print(part)
Alternatively, you can take the first 100 elements with [:100] and then slice them off with [100:], but this destroys the list, so you have to do it on a copy of the list:
# example data
all_ids = list(range(500))

SIZE = 100
#SIZE = 10   # test on a smaller size

all_ids_copy = all_ids.copy()
while all_ids_copy:
    print(all_ids_copy[:SIZE])
    all_ids_copy = all_ids_copy[SIZE:]
You can also use an external module for this, e.g. partition from toolz (note that partition drops a trailing incomplete group, so if your list length is not a multiple of SIZE you may want partition_all instead).
from toolz import partition

# example data
all_ids = list(range(500))

SIZE = 100
#SIZE = 10   # test on a smaller size

for part in partition(SIZE, all_ids):
    print(part)
If you have a list of strings, you can convert each batch back to a single string using join():
print( ",".join(part) )
For a list of integers, you need to convert them to strings first:
print( ",".join(str(x) for x in part) )

Realtime multi_line graph updates at decent performance

I'm currently using Bokeh to present a multi_line plot that has several static lines and one line that is live-updated. This runs fine with only a few lines, but depending on the resolution of the lines (usually 2000-4000 points per line), the refresh rate drops significantly when there are 50+ lines in the plot. The CPU usage of the browser is pretty high at that point.
This is how the plot is initialized and the live update is triggered:
figure_opts = dict(plot_width=750,
                   plot_height=750,
                   x_range=(0, dset_size),
                   y_range=(0, np.iinfo(dtype).max),
                   tools='pan,wheel_zoom')

line_opts = dict(
    line_width=5, line_color='color', line_alpha=0.6,
    hover_line_color='color', hover_line_alpha=1.0,
    source=profile_lines
)

profile_plot = figure(**figure_opts)
profile_plot.toolbar.logo = None
multi_line_plot = profile_plot.multi_line(xs='x', ys='y', **line_opts)
profile_plot.xaxis.axis_label = "x"
profile_plot.yaxis.axis_label = "y"

ds = multi_line_plot.data_source

def update_live_plot():
    random_arr = np.random.random_integers(65535 * (i % 100) / (100 + 100 / 4), 65535 * (i % 100 + 1) / 100, (2048))
    profile = random_arr.astype(np.uint16)
    if profile is not None:
        profile_lines["x"][i] = x
        profile_lines["y"][i] = profile
        profile_lines["color"][i] = Category20_20[0]
        ds.data = profile_lines

doc.add_periodic_callback(update_live_plot, 100)
Is there any way to make this perform better?
Is it, for example, possible to update only the one line that needs to be updated, instead of assigning ds.data = profile_lines?
Edit: The one line that needs to be updated has to be updated along its full length, i.e. I'm not streaming data onto one end; instead I have a completely new set of 2000-4000 values and want to show those in place of the old live line.
Currently the live line is the element at index i in the arrays of the profile_lines dictionary.
You are in luck: updating a single line with all-new values while keeping the same length is something that can be accomplished with the CDS patch method. (Streaming would not help here, since streaming to the end of a CDS for a multi_line means adding an entire new line, and the other case, streaming to the end of each sub-line, does not have a good solution at all.)
There is a patch_app.py example in the repository that shows how to use patch to update one line of a multi_line. The example only updates a single point in the line, but it's possible to update the entire line at once using slices:
source.patch({ 'ys' : [([i, slice(None)], new_y)]})
That will update the ith line in source.data['ys'], as long as new_y has the same length as the old line.
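Applied to the update_live_plot callback from the question, that could look roughly like the sketch below; compute_new_profile() is a hypothetical placeholder for however the new 2000-4000 value array is produced, and it assumes the new array has the same length as the line it replaces:
def update_live_plot():
    new_y = compute_new_profile()   # hypothetical: build the full new array for the live line
    # patch only the i-th entry of the 'y' column instead of re-assigning ds.data
    ds.patch({'y': [([i, slice(None)], new_y)]})

doc.add_periodic_callback(update_live_plot, 100)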

How can I increase the amount of array iterated during the run-time of script?

My script cleans arrays of unwanted strings like "##$!" and other junk.
The script works as intended, but it is extremely slow when the Excel file has many rows.
I tried to use numpy to see if it could speed things up, but I'm not too familiar with it, so I might be using it incorrectly.
import numpy as np
import pandas as pd
from tqdm import tqdm

xls = pd.ExcelFile(path)
df = xls.parse("Sheet2")
TeleNum = np.array(df['telephone'].values)

def replace(orignstr):   # removes the unwanted strings from numbers
    # badstr is a list of unwanted substrings, defined elsewhere
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    return orignstr

for UncleanNum in tqdm(TeleNum):
    newnum = replace(str(UncleanNum))   # calling the replace function
    df['telephone'] = df['telephone'].replace(UncleanNum, newnum)   # store the string back in the data frame
I also tried removing the function, to see if that would help, and just placed the logic in one block of code, but the speed remained the same.
for UncleanNum in tqdm(TeleNum):
    orignstr = str(UncleanNum)
    for elem in badstr:
        if elem in orignstr:
            orignstr = orignstr.replace(elem, '')
    print(orignstr)
    df['telephone'] = df['telephone'].replace(UncleanNum, orignstr)

TeleNum = np.array(df['telephone'].values)
The current speed of the script on an Excel file of 200,000 rows is around 70 it/s, and it takes around an hour to finish, which is not that good since this is just one function of many.
I'm not too advanced in Python; I'm just learning as I script, so if you have any pointers they would be appreciated.
Edit:
Most of the array elements I'm dealing with are numbers, but some have strings in them. I'm trying to remove all non-digit characters from the array elements.
Ex.
FD3459002912
*345*9002912$
If you are trying to clear everything that isn't a digit from the strings, you can use re.sub directly like this:
import re

string = "FD3459002912"
regex_result = re.sub(r"\D", "", string)
print(regex_result)   # 3459002912
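Tying this back to the speed problem above, the same regex can also be applied to the whole column at once with pandas string methods, which avoids the per-row Python loop entirely. A sketch, assuming the telephone column can be treated as strings:
import pandas as pd

df = pd.read_excel(path, sheet_name="Sheet2")
# strip every non-digit character from the whole column in one vectorized call
df['telephone'] = df['telephone'].astype(str).str.replace(r'\D', '', regex=True)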
