Loop and Save Data - python

I am new to Python, so I need more help handling my data. The next step is to loop over a month of data held in daily files (one day per file), then collect the numeric variables this program produces into a new array and save that array to a file. Thank you in advance.

You can add each of the files, i.e. A1, A2, ..., Ax, to a list, then save the list. That should work for you.

First of all, as leeladam said, it is better to store the values of your A variables in a list (a nested list: each element of the A list is another list holding your data).
Then the csv module is the best approach for that task. Each line (row) of the csv file is one A list, and each column is one comma-separated value. With this module you can save the data in a format Excel can open.
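A minimal sketch of that loop, assuming daily files named like data_01.txt ... data_31.txt that each hold one row of whitespace-separated numbers (the file-name pattern and the parsing step are assumptions):

import csv
import glob

monthly = []  # nested list: one inner list of values per daily file

# hypothetical file names; adjust the pattern to your real daily files
for path in sorted(glob.glob("data_*.txt")):
    with open(path) as f:
        # assumed parsing: one whitespace-separated row of numbers per file
        values = [float(x) for x in f.read().split()]
    monthly.append(values)

# save the nested list as one csv row per day
with open("month.csv", "w", newline="") as out:
    csv.writer(out).writerows(monthly)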

Related

subsetting very large files - python methods for optimal performance

I have one file (index1) with 17,270,877 IDs, and another file (read1) with a subset of these IDs (17,211,741). For both files, the IDs are on every 4th line.
I need a new (index2) file that contains only the IDs in read1. For each of those IDs I also need to grab the next 3 lines from index1. So I'll end up with index2 whose format exactly matches index1 except it only contains IDs from read1.
I am trying to implement the methods I've read here. But I'm stumbling on these two points: 1) I need to check IDs on every 4th line, but I need all of the data in index1 (in order) because I have to write the associated 3 lines following the ID. 2) unlike that post, which is about searching for one string in a large file, I'm searching for a huge number of strings in another huge file.
Can someone point me in the right direction? Maybe none of those 5 methods is ideal for this. I don't know any information theory; we have plenty of RAM, so I think holding the data in RAM for searching would be most efficient? I'm really not sure.
Here's a sample of what the index looks like (IDs start with #M00347):
#M00347:30:000000000-BCWL3:1:1101:15589:1332 1:N:0:0
CCTAAGGTTCGG
+
CDDDDFFFFFCB
#M00347:30:000000000-BCWL3:1:1101:15667:1332 1:N:0:0
CGCCATGCATCC
+
BBCCBBFFFFFF
#M00347:30:000000000-BCWL3:1:1101:15711:1332 1:N:0:0
TTTGGTTCCCGG
+
CDCDECCFFFCB
read1 looks very similar, but the lines before and after the '+' are different.
If the data from index1 fits in memory, the best approach is to do a single scan of that file and store all the data in a dictionary like this:
{"#M00347:30:000000000-BCWL3:1:1101:15589:1332 1:N:0:0":["CCTAAGGTTCGG","+","CDDDDFFFFFCB"],
"#M00347:30:000000000-BCWL3:1:1101:15667:1332 1:N:0:0":["CGCCATGCATCC","+","BBCCBBFFFFFF"],
..... }
Values can be stored as formatted strings instead, as you prefer.
After this, you can do a single scan of read1; whenever an ID is encountered, a simple dictionary lookup retrieves the needed data.
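A minimal sketch of that two-pass approach (the file names and the 4-line record layout come from the question; everything else is an assumption):

# build the lookup from index1: records are 4 lines (ID, sequence, +, quality)
index = {}
with open("index1") as f:
    for line in f:
        record_id = line.rstrip("\n")
        index[record_id] = [next(f).rstrip("\n") for _ in range(3)]

# single scan of read1; every 4th line is an ID
with open("read1") as f, open("index2", "w") as out:
    for line in f:
        record_id = line.rstrip("\n")
        for _ in range(3):
            next(f)  # skip this record's other 3 lines
        if record_id in index:
            out.write(record_id + "\n")
            out.write("\n".join(index[record_id]) + "\n")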

How to access the complete string of a dtype object in a dataframe?

I am doing some web scraping (getting the plot of books on Goodreads). I have this info in a tsv file. When I load that tsv file into a dataframe, it looks like I lose the best part of my string. How can I access the whole string?
Cheers
The problem is that data['Plot'] is returning a Series with one element (not the element itself). Much like with a list containing one element, you need to tell pandas to return the first element of that Series.
To do this you can write:
data['Plot'].iloc[0]
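A quick illustration with made-up data (note that pandas also truncates long strings when it prints a Series or DataFrame, which is why the text looks cut off):

import pandas as pd

# hypothetical one-row result, e.g. after filtering on a book title
data = pd.DataFrame({"Plot": ["A very long plot summary that pandas shortens on display..."]})

print(data["Plot"])          # a Series of length 1; printing truncates long strings
print(data["Plot"].iloc[0])  # the full string itself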

Python: Removing duplicates from a huge csv file (memory issues)

I have a csv file that is very big, containing a load of different people. Some of these people come up twice. Something like this:
Name,Colour,Date
John,Red,2017
Dave,Blue,2017
Tom,Blue,2017
Amy,Green,2017
John,Red,2016
Dave,Green,2016
Tom,Blue,2016
John,Green,2015
Dave,Green,2015
Tom,Blue,2015
Rebecca,Blue,2015
I want a csv file that contains only the most recent colour for each person. For example, for John, Dave, Tom and Amy I am only interested in the row for 2017. For Rebecca I will need the value from 2015.
The csv file is huge, containing over 10 million records (all people have a unique ID so repeated names don't matter). I've tried something along the lines of the following:
Open csv file.
Read line 1.
If person is not in the "seen" list, add the row to csv file 2.
Add person to the "seen" list.
Read line 2...
The problem is the "seen" list gets massive and I run out of memory. The other issue is sometimes the dates are not in order so an old entry gets into the "seen" list and then the new entry won't overwrite it. This would be easy to solve if I could sort the data by descending date, but I'm struggling to sort it with the size of the file.
Any suggestions?
If the whole csv file can be stored in a list like:
csv_as_list = [
(unique_id, color, year),
…
]
then you can sort this list by:
import operator
# first sort by year descending
csv_as_list.sort(key=operator.itemgetter(2), reverse=True)
# then, since the Python sort is stable, by unique_id
csv_as_list.sort(key=operator.itemgetter(0))
and then you can:
from __future__ import print_function
import operator, itertools
for unique_id, group in itertools.groupby(csv_as_list, operator.itemgetter(0)):
latest_color = next(group)[1]
print(unique_id, latest_color)
(I just used print here, but you get the gist.)
If the csv file cannot be loaded in-memory as a list, you'll have to go through an intermediate step that uses disk (e.g. SQLite).
Open your csv file for reading.
Read it line by line, appending each user to final_list if their ID is not already found there. If it is found, compare the year of your current row with the year stored in final_list; if the current row is more recent, replace that user's date in final_list, along with the colour associated with it.
Only when final_list is done should you write the new csv file.
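A minimal sketch of that single pass, using a dict keyed by ID instead of a list so lookups stay fast (the file names and column order are assumptions):

import csv

latest = {}  # unique_id -> (year, colour)

with open("people.csv") as f:  # hypothetical input file
    reader = csv.reader(f)
    header = next(reader)
    for person, colour, year in reader:
        # keep only the most recent entry per person
        # (string comparison is fine for plain 4-digit years)
        if person not in latest or year > latest[person][0]:
            latest[person] = (year, colour)

with open("people_latest.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(header)
    for person, (year, colour) in latest.items():
        writer.writerow([person, colour, year])

Note that this still keeps one entry per person in memory; if even that is too much, the database suggestion below avoids it.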
If you want this task to be faster, you want to:
Optimize your loops.
Use standard Python functions and/or libraries coded in C.
If this is still not fast enough, learn C. Reading a csv file in C, parsing it with a separator, and iterating through an array is not hard, even in C.
I see two obvious ways to solve this that don't involve keeping huge amounts of data in memory:
Use a database instead of CSV files
Reorganise your CSV files to facilitate sorting.
Using a database is fairly straightforward. I expect you could even use the SQLite that comes with Python. This would be my preferred option, I think; a sketch follows below. To get the best performance, create an index on (person, date).
The second involves letting the first column of your CSV file be the person ID and the second column be the date. Then you could sort the CSV file from the command line, i.e. sort myfile.csv. This will group all entries for a particular person together, and provided your date is in a proper format (e.g. YYYY-MM-DD), the entry of interest will be the last one. The Unix sort command is not known for its speed, but it's very robust.
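A minimal sketch of the SQLite route (the file and table names are assumptions):

import csv
import sqlite3

conn = sqlite3.connect("people.db")  # on-disk database, so memory use stays small
conn.execute("CREATE TABLE people (id TEXT, colour TEXT, year INTEGER)")

with open("people.csv") as f:  # hypothetical input file
    reader = csv.reader(f)
    next(reader)  # skip the header
    conn.executemany("INSERT INTO people VALUES (?, ?, ?)", reader)
conn.commit()

conn.execute("CREATE INDEX idx_person_date ON people (id, year)")

# SQLite guarantees that with MAX() the bare columns come from the matching row
rows = conn.execute("SELECT id, colour, MAX(year) FROM people GROUP BY id")

with open("people_latest.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Name", "Colour", "Date"])
    writer.writerows(rows)

conn.close()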

Using Python to manipulate csv files: vlookup from another csv, insert columns, delete rows, loop

I have 100 csv files, each contains publication data of different institutions and I would like to perform the same manipulation on all of them:
1. Get the Institution name from cell B1. This is always after 'at' or 'at the'. For example, 'Publications at Tohoku University'.
2. Vlookup the matching InstitutionCode from another csv file called 'Codes'. For example, '1286' (for Tohoku University).
3. Delete rows 1-14 (including the Institution name in cell B1).
4. Insert two extra columns (columns A and B) into the file with the following headers, 'Institution' and 'InstitutionCode', and fill them with the relevant information for all rows where I have data (in the above example, Tohoku University and 1286).
I am new to Python and find it hard to put together this script from the resources I have found.
Can anyone please help me?
(Images showing the data in its original format and the required result were attached to the original question but are not reproduced here.)
I could give you the code, but instead, I'll explain to you how you can write it yourself.
Read the Codes file and store the institutions and codes in a dictionary.
You can read more about reading csv files here: https://pymotw.com/2/csv/ or here: https://pymotw.com/3/csv/.
Each row will be represented as a list of strings, so you can access cell elements by their index. Make the Institution names the keys and the codes the values.
Read the csv files one by one in a for loop. I'll call these the input files. Open a new file for writing for each input file that you read. I'll call these the output files.
Loop over the rows in the csv file. You can keep track of the row numbers by using enumerate. You can find info on this here for example: http://book.pythontips.com/en/latest/enumerate.html.
Get the contents of cell B1 by taking element 1 from row 0.
Find the Institution name by using a regular expression. More info here for example: http://dev.tutorialspoint.com/python/python_reg_expressions.htm
And get the Institution code from the dictionary you made in step 1.
Keep looping over the rows, until the first element equals 'Title'. This row contains the headers. Write "Institution" and "InstitutionCode" to the output file, followed by the headers you just found. To do this, convert your row (a list of strings) to a tuple (http://www.tutorialspoint.com/python/python_tuples.htm) and give that as an argument to the writerow method of the csv writer object (see the links in step 1).
Then for each row after the header row, make a tuple of the Institution name and code, followed by the information from the row from the input file you just read, and give that as an argument to the writerow method of the csv writer object.
Close output file.
One thing to think about is whether you want quotes around the cell contents in the output files. You can read about this in the links in step 1. The same goes for the field delimiters. If you don't specify anything, they are assumed to be commas, but you can change this.
I hope this helps!
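A rough sketch of those steps, for orientation only (every file name, the regex, and the cell positions are assumptions to adapt to your real data):

import csv
import glob
import re

# step 1: institution name -> code, from the 'Codes' file
with open("Codes.csv") as f:  # assumed layout: name in column A, code in column B
    codes = {row[0]: row[1] for row in csv.reader(f)}

for path in glob.glob("publications_*.csv"):  # hypothetical input file names
    with open(path) as f:
        rows = list(csv.reader(f))

    # step 2: cell B1 is row 0, element 1; the name follows 'at' or 'at the'
    name = re.search(r"\bat (?:the )?(.+)", rows[0][1]).group(1)
    code = codes[name]

    # steps 3 and 4: skip everything before the header row, prepend two columns
    with open(path.replace(".csv", "_out.csv"), "w", newline="") as out:
        writer = csv.writer(out)
        in_data = False
        for row in rows:
            if not in_data:
                if row and row[0] == "Title":  # the header row
                    writer.writerow(["Institution", "InstitutionCode"] + row)
                    in_data = True
            else:
                writer.writerow([name, code] + row)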

Choosing random number from list, and then removing it?

Let's say that I have a separate text file that contains a series of numbers:
1
2
3
And so on. Is it possible for a Python program to randomly choose one of the numbers in that text file, and then remove that number from the text file? I know it is possible to do the first part, but I am struggling with the second.
If it helps, the list is about 180,000 numbers long. I am very new at this. The idea is to randomly assign a player a number, and then remove that number from the list so another player can't get it.
Do you actually have 180,000 players? If not, what about solving the problem the other way round:
Create a file listing the IDs already used
For each new user:
Create a fairly large random ID (like the ones in your current file)
Run through the 'used' IDs in your file and check your new ID doesn't collide with an existing one - if it does, generate new ones until there is no collision
Append the new ID to your file
This will be much faster than reading, checking and writing a large file each time. If your IDs are large, you won't get many collisions.
You could also optimise the process, for example using a two-part ID consisting of today's date and a random number. You would then keep a file for each day, and only need to check for collisions with the IDs issued today.
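A minimal sketch of that two-part ID idea (the ID format and file name are assumptions):

import datetime
import random

def new_player_id(used_ids_path="used_ids.txt"):
    """Generate a date-prefixed random ID that is not already in the file."""
    today = datetime.date.today().strftime("%Y%m%d")
    with open(used_ids_path, "a+") as f:
        f.seek(0)
        # only IDs issued today can collide, so the set stays small
        used = set(line.strip() for line in f if line.startswith(today))
        while True:
            candidate = "%s-%06d" % (today, random.randrange(10**6))
            if candidate not in used:
                f.write(candidate + "\n")  # append only; the file is never rewritten
                return candidate

print(new_player_id())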
My suggestion would be to read the entire text file, make whatever changes you want to it, and then rewrite the original contents of the file. That is the best way as far as I know.
If the file is small, read the whole thing into a list, delete a value from the list, then write the new list to a temp file. Finally, rename the temp file to the original filename.
If the file is large, read the file one line at a time, writing the values (except one) to a temp file. Then rename the temp file to the original filename.
Like dstromberg said, if the file is small, check out the documentation on file IO and this answer's strategy for writing lists to a file. Note that writelines() "does not add line separators."
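A minimal sketch of the small-file version of that (the file name is an assumption):

import os
import random

# read every number into a list
with open("numbers.txt") as f:
    numbers = f.read().split()

# randomly choose one and remove it from the list
pick = random.choice(numbers)
numbers.remove(pick)

# write the remaining numbers to a temp file, then rename it over the original
with open("numbers.txt.tmp", "w") as tmp:
    tmp.write("\n".join(numbers) + "\n")
os.replace("numbers.txt.tmp", "numbers.txt")

print("assigned:", pick)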
