I have this .csv file (that I can't edit):
,Denmark,Norway,Sweden
TotalCases,"78,354 ","35,546 ","243,129 "
Deaths,"823","328","6,681"
Recovered,"61,461","20,956",N/A
I want to make three separate bar charts/graphs, one for each row (TotalCases, Deaths, Recovered). However, most guides I found online have the data the other way round, with TotalCases etc. as columns rather than rows as in this scenario. What is the right way to do this?
Then just transpose your data frame and follow the examples!
import pandas as pd
from io import StringIO

# the sample data from the question
s = StringIO("""
country,Denmark,Norway,Sweden
TotalCases,"78,354 ","35,546 ","243,129 "
Deaths,"823","328","6,681"
Recovered,"61,461","20,956",N/A
""")

df = pd.read_csv(s)

# transpose so each country becomes a row
df_t = df.transpose()
df_t.columns = df_t.iloc[0, :]   # first row (TotalCases/Deaths/Recovered) becomes the header
df_t = df_t.iloc[1:, :]          # drop that header row from the data
df_t['country'] = df_t.index     # keep the country names as an ordinary column
Then use df_t and follow those examples.
In [45]: df_t
Out[45]:
country TotalCases  Deaths Recovered  country
Denmark     78,354     823    61,461  Denmark
Norway      35,546     328    20,956   Norway
Sweden     243,129   6,681       NaN   Sweden
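To actually draw the bar charts, the comma-formatted strings still have to be converted to numbers. A minimal sketch (the cleaning step and the three-panel layout are just one way to do it):

import matplotlib.pyplot as plt

# strip thousands separators and stray spaces, then convert to numbers (N/A becomes NaN)
for col in ['TotalCases', 'Deaths', 'Recovered']:
    df_t[col] = pd.to_numeric(df_t[col].str.replace(',', '').str.strip(), errors='coerce')

# one bar chart per category, with the countries on the x-axis
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, col in zip(axes, ['TotalCases', 'Deaths', 'Recovered']):
    df_t[col].plot.bar(ax=ax, title=col)
plt.tight_layout()
plt.show()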
I'm using tabula to concatenate all the tables in the following PDF file into one table in Excel format.
Here's my code:
from tabula import read_pdf
import pandas as pd

allin = []
for page in range(1, 115):
    table = read_pdf("goal.pdf", pages=page,
                     pandas_options={'header': None})[0]
    allin.append(table)

new = pd.concat(allin)
new.to_excel("out.xlsx", index=False)
I also tried the following:
from tabula import read_pdf
import pandas as pd
table = read_pdf("goal.pdf", pages='all', pandas_options={'header': None})
new = pd.concat(table, ignore_index=True)
new.to_excel("out.xlsx", index=False)
Current output: check
The issue I'm facing is that from page 91 onward the data is not formatted correctly in the Excel file.
I've debugged the pages individually and couldn't figure out why they are formatted wrongly, especially since they use the same layout as the rest.
from tabula import read_pdf
import pandas as pd
table = read_pdf("goal.pdf", pages='91', pandas_options={'header': None})[0]
print(table)
Example:
from tabula import read_pdf
import pandas as pd
table = read_pdf("goal.pdf", pages='90-91', pandas_options={'header': None})
new = pd.concat(table, ignore_index=True)
new.to_excel("out.xlsx", index=False)
Here I've run the code for the two pages 90 and 91.
Starting from row 48 you can see the difference:
the name and address are placed into one cell, and the city and state are placed into one cell as well.
I dug into the source code and found that it has a columns option, so you can define the column boundaries manually. When you set columns, you also have to use guess=False.
tabula-py uses the tabula-java program, and its documentation says the values must be given in percent or points (not pixels), so I used Inkscape to measure the boundaries in points.
from tabula import read_pdf
import pandas as pd

# display all columns in dataframe
pd.set_option('display.width', None)

columns = [210, 350, 420, 450]   # boundaries in points
#columns = ['210,350,420,450']   # boundaries in points

pages = '90-92'
#pages = [90,91,92]
#pages = list(range(90,93))
#pages = 'all'  # read all pages

tables = read_pdf("goal.pdf",
                  pages=pages,
                  pandas_options={'header': None},
                  columns=columns,
                  guess=False)

df = pd.concat(tables).reset_index(drop=True)

#df.rename(columns=df.iloc[0], inplace=True)  # convert first row to headers
#df.drop(df.index[0], inplace=True)           # remove first row with headers

# display
#for x in range(0, len(df), 20):
#    print(df.iloc[x:x+20])
#    print('----------')

print(df.iloc[45:50])

#df.to_csv('output-pdf.csv')

#print(df[ df['State'].str.contains(' ') ])
#print(df[ df.iloc[:,3].str.contains(' ') ])
Result:
0 1 2 3 4
45 JARRARD, GARY 930 FORT WORTH DRIVE DENTON TX (940) 565-6548
46 JARRARD, GARY 2219 COLORADO BLVD DENTON TX (940) 380-1661
47 MASON HARRISON, RATLIFF ENTERPRISES 1815 W. UNIVERSITY DRIVE DENTON TX (940) 387-5431
48 MASON HARRISON, RATLIFF ENTERPRISES 109 N. LOOP #288 DENTON TX (940) 484-2904
49 MASON HARRISON, RATLIFF ENTERPRISES 930 FORT WORTH DRIVE DENTON TX (940) 565-6548
EDIT:
You may also need the area option (also in points) to skip the headers, or you will have to remove the first row on the first page.
I didn't check all rows, but some column boundaries may still need adjusting.
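A minimal sketch of adding area (the coordinates below are placeholders you would measure yourself, given as top, left, bottom, right in points):

area = [90, 0, 790, 612]   # hypothetical boundaries in points: top, left, bottom, right
tables = read_pdf("goal.pdf",
                  pages=pages,
                  pandas_options={'header': None},
                  area=area,        # restrict extraction to the table body, skipping the header
                  columns=columns,
                  guess=False)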
EDIT:
A few rows are still problematic, probably because the text in the City column is too long.
# rows where the State column (index 3) still contains a space are likely misparsed
col3 = df.iloc[:, 3]
print(df[col3.str.contains(' ')])
Result:
0 1 2 3 4
1941 UMSTATTD RESTAURANTS, LLC 120 WEST US HIGHWAY 54 EL DORADO SPRING MS O (417) 876-5755
2079 SIMONS, GARY 1412 BURLINGTON NORTH KANSAS CIT MY O (816) 421-5941
2763 GRISHAM, ROBERT (RB) 403 WEST COURT STREET WASHINGTON COU ORTH HOU S(E740) 335-7830
2764 STAUFFER, JACOB 403 WEST COURT STREET WASHINGTON COU ORTH HOU S(E740) 335-7830
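If you can work out which pages those rows come from, one option is to re-read just those pages with adjusted boundaries and patch them in. A hedged sketch (the page range and boundary values below are placeholders):

bad_pages = '112-113'                  # hypothetical pages that produced the misparsed rows
wider_columns = [230, 380, 450, 480]   # hypothetical re-measured boundaries in points
fixed_tables = read_pdf("goal.pdf",
                        pages=bad_pages,
                        pandas_options={'header': None},
                        columns=wider_columns,
                        guess=False)
df_fixed = pd.concat(fixed_tables).reset_index(drop=True)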
Sl. No.,Name,Address
1.,Stuart,Wall Street
2.,Charlie,Broadway
3.,Oliver,Hollywood Boulevard
4.,Harry,Las Vegas Boulevard
5.,Kyle,Bourbon Street
The output should be like:
print(Stuart) >>> Wall Street
Import the CSV into a pandas DataFrame and use loc:
import pandas as pd

df = pd.read_csv('/Users/prince/Downloads/test2.csv', sep=',')
df = df.set_index('Name')              # index by Name so rows can be looked up by name
print(df.loc['Stuart']['Address'])
which gives the following output
Wall Street
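As a side note, the same lookup can be written with a single loc call, which avoids the chained indexing:

print(df.loc['Stuart', 'Address'])   # also prints: Wall Street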
link 1
link 2 (I copied the table and created a CSV file)
I need to plot the total population from file 1 and the Adherents total for New Jersey as a line or bar graph to compare.
I've tried append to combine both CSVs, but the result comes out weird.
import pandas as pd
import matplotlib.pyplot as plt
clifton_data = pd.read_csv('cliftondata2010census.csv')
religion = pd.read_csv('2010_ Top Five States by Adherence Rate - Sheet1.csv')
all_data = clifton_data.append(religion)
all_data.plot()
all_data.plot(kind='line',x='1',y='2') # scatter plot
all_data.plot(kind='density')
I need to plot total population from file 1 and compare to Adherents total for New Jersey as a line or bar graph.
Here is a quick guide to get you started. I hope it helps.
From link 2, you see
Massachusetts 641 2,940,199 449.05
Rhode Island 159 466,598 443.30
New Jersey 729 3,235,290 367.99
Connecticut 399 1,252,936 350.56
New York 1,630 6,286,916 324.43
Copy the text above and save it to congregation.txt.
Link 1 is broken. However, assuming the population data are as follows,
Massachusetts 3,141,270
Rhode Island 530,698
New Jersey 4,335,399
Connecticut 2,134,935
New York 10,366,556
Similarly, copy the text above and save it to population.txt.
Then, you can run something like this
import pandas as pd
import matplotlib.pyplot as plt
con = pd.read_csv('congregation.txt', sep=r'[ \t]{2,}',header=None, index_col=False,engine='python')
pop = pd.read_csv('population.txt', sep=r'[ \t]{2,}',header=None, index_col=False,engine='python')
#note concat and not append
#con[0] is state, con[2] is congregation, pop[1] is population
#print(con.head()) and print(pop.head()) to visualize if you are still confused
df = pd.concat([con[[0,2]],pop[1]],axis=1)
df.columns = ['State', 'Congregation', 'Population']
#need to do some cleaning here to convert numbers with comma to an integer
df['Congregation'] = df['Congregation'].apply(lambda t: t.replace(',','')).astype(int)
df['Population'] = df['Population'].apply(lambda t: t.replace(',','')).astype(int)
df.set_index('State',inplace=True)
print(df.head())
#at this stage your df looks like this
# Congregation Population
#State
#Massachusetts 2940199 3141270
#Rhode Island 466598 530698
#New Jersey 3235290 4335399
#Connecticut 1252936 2134935
#New York 6286916 10366556
Note: I am retaining the other states here for the sake of demonstration; otherwise, if it is only New Jersey, the bar plot will look empty.
ax = df.plot.bar()
plt.show()
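If you really do want only New Jersey, a minimal sketch is to select that row before plotting:

ax = df.loc[['New Jersey']].plot.bar()   # bar chart with just the two New Jersey values
plt.show()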
Edit: I meant 'Adherent' not 'Congregation'. I made a mistake there.
I'm new to Python and I'm trying to analyse this CSV file. It has a lot of different countries (as an example below).
country iso2 iso3 iso_numeric g_whoregion year e_pop_num e_inc_100k e_inc_100k_lo
Afghanistan AF AFG 4 EMR 2000 20093756 190 123
American Samoa AS ASM 16 WPR 2003 59117 5.8 5 6.7 3 3 4
Gambia GM GMB 270 AFR 2010 1692149 178 115 254 3000 1900 4300
I want to try and obtain only specific data, so only specific countries and only specific columns (like "e_pop_num"). How would I go about doing that?
The only basic code I have is:
import csv
import itertools

f = csv.reader(open('TB_burden_countries_2018-03-06.csv'))
for row in itertools.islice(f, 0, 10):
    print(row)
This just lets me choose which rows to print, but not necessarily the country I want to look at or the specific columns I want.
If you can help me, or point me to a guide so I can do my own learning, I'd very much appreciate it! Thank you.
I recommend using the pandas Python library. Please follow the article linked below; here is a code snippet to illustrate the idea.
import pandas as pd

df1 = pd.read_csv("https://pythonhow.com/wp-content/uploads/2016/01/Income_data.csv")
df2 = df1.set_index("State")   # assuming the state-name column is called "State"
print(df2.loc["Alaska":"Arkansas", "2005":"2007"])
source of this information: https://pythonhow.com/accessing-dataframe-columns-rows-and-cells/
Pandas will probably be the easiest way. https://pandas.pydata.org/pandas-docs/stable/
To get it run
pip install pandas
Then read the csv into a dataframe and filter it
import pandas as pd

df = pd.read_csv('TB_burden_countries_2018-03-06.csv')
df = df[df['country'] == 'Gambia']
print(df)
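To also narrow the result down to specific columns, and to more than one country, a small sketch along the same lines (column names taken from the sample above):

df = pd.read_csv('TB_burden_countries_2018-03-06.csv')
# keep only the chosen countries and the columns of interest
subset = df[df['country'].isin(['Afghanistan', 'Gambia'])][['country', 'year', 'e_pop_num']]
print(subset)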
with open('file') as f:
    fields = f.readline().split("\t")
    print(fields)
If you supply more details about what you want to see, the answer would differ.
I am trying to load in a really messy text file into Python/Pandas. Here is an example of what the data in the file looks like
('9ebabd77-45f5-409c-b4dd-6db7951521fd','9da3f80c-6bcd-44ae-bbe8-760177fd4dbc','Seattle, WA','2014-08-05 10:06:24','viewed_home_page'),('9ebabd77-45f5-409c-b4dd-6db7951521fd','9da3f80c-6bcd-44ae-bbe8-760177fd4dbc','Seattle, WA','2014-08-05 10:06:36','viewed_search_results'),('41aa8fac-1bd8-4f95-918c-413879ed43f1','bcca257d-68d3-47e6-bc58-52c166f3b27b','Madison, WI','2014-08-16 17:42:31','visit_start')
Here is my code
import pandas as pd

cols = ['ID', 'Visit', 'Market', 'Event Time', 'Event Name']
# raw string so the backslashes in the Windows path are not treated as escape sequences
table = pd.read_table(r'C:\Users\Desktop\Dump.txt', sep=',', header=None, names=cols, nrows=10)
But when I look at the table, it still does not read correctly.
All of the data is mainly on one row.
You could use ast.literal_eval to parse the data into a Python tuple of tuples, and then you could call pd.DataFrame on that:
import pandas as pd
import ast

cols = ['ID', 'Visit', 'Market', 'Event Time', 'Event Name']

with open(filename, 'rb') as f:
    data = ast.literal_eval(f.read())

df = pd.DataFrame(list(data), columns=cols)
print(df)
yields
ID Visit \
0 9ebabd77-45f5-409c-b4dd-6db7951521fd 9da3f80c-6bcd-44ae-bbe8-760177fd4dbc
1 9ebabd77-45f5-409c-b4dd-6db7951521fd 9da3f80c-6bcd-44ae-bbe8-760177fd4dbc
2 41aa8fac-1bd8-4f95-918c-413879ed43f1 bcca257d-68d3-47e6-bc58-52c166f3b27b
Market Event Time Event Name
0 Seattle, WA 2014-08-05 10:06:24 viewed_home_page
1 Seattle, WA 2014-08-05 10:06:36 viewed_search_results
2 Madison, WI 2014-08-16 17:42:31 visit_start
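If you then want to work with the timestamps, a possible next step is to parse the 'Event Time' column into real datetimes:

df['Event Time'] = pd.to_datetime(df['Event Time'])   # e.g. '2014-08-05 10:06:24' becomes a Timestamp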