You can try to pivot the table, which may give the format you require.
Taking the information you gave as ActionsOnly.csv:
userId,movieId,rating
18,9,3
32,204,4
49,2817,1
62,160438,4
70,667,5
73,1599,1
73,4441,3
73,4614,3.5
73,86142,4
95,4636,2
103,71,1
118,3769,4
150,4866,2
You want to find out what each user rated each movie out of 5.
The userId is the index column, movieId becomes the header row, and rating supplies the values. Where there is no value, it will display NaN (Not a Number).
movie_pivot = movie.pivot_table(index='userId', columns='movieId', values='rating')
To save a pandas DataFrame to CSV there is a simple method, to_csv, so
movie_pivot.to_csv('ActionsOnly_pivot.csv')
will save it to CSV.
So the full code you need is:
import pandas as pd
movie = pd.read_csv('ActionsOnly.csv')
movie_pivot = movie.pivot_table(index='userId', columns='movieId', values='rating')
movie_pivot.to_csv('ActionsOnly_pivot.csv')
I also strongly recommend reading about pandas; it is surprisingly easy and logical :)
for doc in sample['documents']:
The error is that 'sample' is undefined (I was trying to reproduce a natural language processing model).
I think you are searching for a way to display natural language processing, and I think this is helpful to you. I mention the link below, so please check it:
https://www.tableau.com/learn/articles/natural-language-processing-examples
In this case, your problem is the way you are reading the input. No big deal, no worries!
In the loop:
for doc in sample['documents']:
sample is the DataFrame (or dictionary) of input, and 'documents' is the name of the column.
Let's suppose I have a csv of input like the following:
documents,label
Being offensive isnt illegal you idiot, negative
Loving the first days of summer! <3, positive
I hate when people put lol when we are having a serious talk ., negative
In Python you read the CSV using a pandas DataFrame, for example:
sample=pd.read_csv('inputdata.csv',header=0)
and your sample['documents'] is the first column of the input file. header=0 means that the labels of your columns are specified on the first line of the CSV.
for doc in sample['documents'] will iterate over the first column, like this:
Being offensive isnt illegal you idiot
Loving the first days of summer! <3
I hate when people put lol when we are having a serious talk
This means that the origin of your error is probably that you named your input data something other than sample, or that it is not reading the header of the CSV input.
If the csv doesn't have documents as the name of the header you can specify it like this:
columns = ['documents', 'labels']
sample = pd.read_csv('inputdata.csv', header=None, names=columns)
sample
Hope it helps !
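Putting those pieces together, here is a minimal self-contained sketch; the inline DataFrame is a stand-in for the hypothetical inputdata.csv:

```python
import pandas as pd

# Inline stand-in for the hypothetical inputdata.csv
sample = pd.DataFrame({
    "documents": [
        "Being offensive isnt illegal you idiot",
        "Loving the first days of summer! <3",
        "I hate when people put lol when we are having a serious talk .",
    ],
    "label": ["negative", "positive", "negative"],
})

# Iterate over the 'documents' column, one text at a time
for doc in sample["documents"]:
    print(doc)
```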
I have an allstock.CSV file with a list of stocks (ticker symbols only). I want to create another .CSV with ONLY the stocks that gapped more than 20% from the previous day, plus their daily data.
I am trying to create a loop but I'm not even sure where to begin. I know how to return the daily data (high, low, open, close, volume) for each name, but I can't write a loop that does that AND creates a .CSV of only the gappers plus their data.
I checked the .loc command and tried to integrate it, but I am not sure about the right structure.
Any advice that can put me in the right direction is appreciated. If anyone can share a code to do that it’s obviously amazing. Or just bread crumbs. Anything. Really.
Thanks in advance 🙏
Edit: I'm adding a simplified version, as requested, of what I'm trying to do. (Apologies for the mess, I'm editing on my phone.)
import pandas as pd
flag='no gap'
data=[['Monday',3,3,2,flag],['Tuesday',2,2,1,flag],['Wednsday',2,3,2,flag]]
df=pd.DataFrame(data,columns=['day','previous close','open','close','gap'])
for df[flag] in df:
    if df['open']>df['previous close']:
        df[flag]='gap'
You were pretty close with your attempt and your guess that .loc is the appropriate function:
import pandas as pd
flag='no gap'
data=[['Monday',3,3,2,flag],['Tuesday',2,2,1,flag],['Wednsday',2,3,2,flag]]
df=pd.DataFrame(data,columns=['day','previous close','open','close','gap'])
df.loc[df.open > df["previous close"]]
Output
day previous close open close gap
2 Wednsday 2 3 2 no gap
You can then write that to a csv:
df.loc[df.open > df["previous close"]].to_csv(...)
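To get back to the original goal (gaps of more than 20%), the same .loc filter works with a computed gap percentage. A sketch using the question's simplified data; the output file name is made up:

```python
import pandas as pd

data = [['Monday', 3, 3, 2], ['Tuesday', 2, 2, 1], ['Wednsday', 2, 3, 2]]
df = pd.DataFrame(data, columns=['day', 'previous close', 'open', 'close'])

# Gap as a percentage of the previous close
gap_pct = (df['open'] - df['previous close']) / df['previous close'] * 100

# Keep only the rows that gapped up by more than 20%
gappers = df.loc[gap_pct > 20]
gappers.to_csv('gappers.csv', index=False)  # hypothetical output file
```

Here only Wednesday qualifies, since its open (3) is 50% above the previous close (2).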
I am absolutely new to Python, or coding for that matter, so any help would be greatly appreciated. I have around 21 Salesforce orgs and am trying to gather some information from each org into one place to send out in an email.
import pandas as pd
df = pd.read_csv("secretCSV.csv", usecols = ['client','uname','passw','stoken'])
username = df.loc[[1],'uname'].values[0]
password = df.loc[[1],'passw'].values[0]
sectocken = df.loc[[1],'stoken'].values[0]
I have saved all my usernames, passwords, and security tokens in the secretCSV.csv file, and with the above code I can get the data for one row at the index value I have given. I would like to know how I can loop through this, increasing the index value after each pass until all rows from the CSV file are read.
Thank you in advance for any assistance you all can offer.
Adil
--
You can iterate over the DataFrame, but it's highly discouraged (inefficient, ugly, too much code, etc.).
df = pd.read_csv("secretCSV.csv", usecols = ['client','uname','passw','stoken'])
so DO NOT DO THIS EVEN IF IT WORKS:
for i in range(0, df.shape[0]):
    username = df.loc[[i],'uname'].values[0]
    password = df.loc[[i],'passw'].values[0]
    sectocken = df.loc[[i],'stoken'].values[0]
Instead, do this:
sec_list = [(u,p,s) for _,u,p,s in df.values]
Now you have sec_list, a list of (username, password, sectocken) tuples.
Access example: sec_list[0][1], as in row 0, get the password (located at index [1]).
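A quick usage sketch of that list comprehension, with an inline DataFrame standing in for secretCSV.csv (the values are made up):

```python
import pandas as pd

# Inline stand-in for secretCSV.csv with made-up values
df = pd.DataFrame({
    "client": ["org1", "org2"],
    "uname": ["u1", "u2"],
    "passw": ["p1", "p2"],
    "stoken": ["s1", "s2"],
})

# Drop the client column and keep (username, password, sectocken) tuples
sec_list = [(u, p, s) for _, u, p, s in df.values]

for username, password, sectocken in sec_list:
    print(username)  # here you would log in with these credentials
```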
Pandas is great when you want to apply operations to a large set of data, but it is usually not a good fit when you want to manipulate individual cells in Python. Each cell must be converted to a Python object every time it is touched.
For your goals, I think the standard csv module is what you want
import csv
with open("secretCSV.csv", newline='') as f:
    for row in csv.DictReader(f):  # uses the header line for the field names
        username, password, sectoken = row['uname'], row['passw'], row['stoken']
        # do all the things
Thank you everyone for your responses. I think I will first start with learning Python and then get back to this. I should have learnt coding before coding. :)
Also, I was able to iterate (sorry, most of you said not to iterate over the DataFrame) and get the credentials from the file.
I actually have 21 Salesforce orgs and am trying to get license information from each of them and email it to certain people on a daily basis. I didn't want to expose the Salesforce credentials, hence I went with a flat-file option.
I have built the code to get the Salesforce license details and am able to pull them in the format I want for one client. However, I have to do this for 21 clients, so I thought of iterating over the credentials and running the getLicense function in a loop until all 21 clients' data is fetched.
I will learn Python or at least learn a little bit more than what I know now and come back to this again. Until then, Informatica and batch script would have to do.
Thank you again to each one of you for your help!
Adil
--
So I am making a guessing game. The data for the game is in a CSV file, so I decided to use pandas. I have used pandas to import my CSV file, pick a random row, and put the data into variables so I can use them in the rest of the code, but I can't figure out how to format the data in the variables correctly.
I've tried to split the string with split(), but I am quite lost.
ar = pandas.read_csv('names.csv')
ar.columns = ["Song Name","Artist","Intials"]
randomsong = ar.sample(1)
songartist = randomsong["Artist"]
songname = (randomsong["Song Name"])
songintials = randomsong["Intials"]
print(songname)
My CSV file looks like this.
Song Name,Artist,Intials
Someone you loved,Lewis Capaldi,SYL
Bad Guy,Billie Eilish,BG
Ransom,Lil Tecca,R
Wow,Post Malone, W
I expect the output to be the name of the song from the csv file. For Example
Bad Guy
Instead the output is
1 Bad Guy
Name: Song Name, dtype:object
If anyone knows the solution please let me know. Thanks
You're getting a Series object as output. You can try
randomsong["Song Name"].to_string(index=False)
Use df['column'].values to get the values of a column.
In your case, songartist = randomsong["Artist"].values[0], because you want only the first element of the returned array.
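Combining the two answers, a minimal sketch with the question's data inlined in place of names.csv:

```python
import pandas as pd

# Inline stand-in for names.csv
ar = pd.DataFrame(
    [["Someone you loved", "Lewis Capaldi", "SYL"],
     ["Bad Guy", "Billie Eilish", "BG"]],
    columns=["Song Name", "Artist", "Intials"],
)

randomsong = ar.sample(1)
songname = randomsong["Song Name"].values[0]  # a plain string, no index/dtype noise
songartist = randomsong["Artist"].values[0]
print(songname)
```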
I'm starting work on analysing data from statistical institutions like Eurostat using Python, and so pandas. I found out there are two methods to get data from Eurostat:
pandas_datareader: it seems very easy to use, but I found some problems getting some specific data
pandasdmx: I've found it a bit complicated, but it seems a promising solution; however, the documentation is poor
I use a free Azure notebook (an online service), but I don't think it complicates my situation further.
Let me explain the problems with pandas_datareader. According to the pandas documentation, in the API section, there is this briefly documented package, and it works. Apart from the shown example, which works nicely, a problem arises with other tables. For example, I can get data about European house prices, whose table ID is prc_hpi_a, with this simple code:
import pandas_datareader.data as web
import datetime
df = web.DataReader('prc_hpi_a', 'eurostat')
But the table has three types of data about dwellings: TOTAL, EXISTING and NEW. I got only existing dwellings and I don't know how to get the other ones. Do you have a solution for this kind of filtering?
Secondly, there is the path using pandasdmx. Here it is more complicated. My idea is to load all the data into a pandas DataFrame and then analyse it as I want. Easy to say, but I haven't found many tutorials that explain this step of loading data into pandas structures. For example, I found this tutorial, but I'm stuck at the first step, that is, instantiating a client:
import pandasdmx
from pandasdmx import client
#estat=client('Eurostat', 'milk.db')
and it returns:
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
      1 import pandasdmx
----> 2 from pandasdmx import client
      3 estat=client('Eurostat', 'milk.db')

ImportError: cannot import name 'client'
What's the problem here? I've looked around but found no answer to this problem.
I also followed this tutorial:
from pandasdmx import Request
estat = Request('ESTAT')
metadata = estat.datastructure('DSD_une_rt_a').write()
metadata.codelist.iloc[8:18]
resp = estat.data('une_rt_a', key={'GEO': 'EL+ES+IE'}, params={'startPeriod': '2007'})
data = resp.write(s for s in resp.data.series if s.key.AGE == 'TOTAL')
data.columns.names
data.columns.levels
data.loc[:, ('PC_ACT', 'TOTAL', 'T')]
I got the data, but my purpose is to load them into a pandas structure (Series, DataFrame, etc.) so I can handle them easily for my work. How do I do that?
Actually I did it with this working line (below the previous ones):
s=pd.DataFrame(data)
But it doesn't work if I try to get other data tables. Let me explain with another example, about the Harmonised Index of Consumer Prices table:
estat = Request('ESTAT')
metadata = estat.datastructure('DSD_prc_hicp_midx').write()
resp = estat.data('prc_hicp_midx')
data = resp.write(s for s in resp.data.series if s.key.COICOP == 'CP00')
It returns an error here, that is:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
      2 metadata = estat.datastructure('DSD_prc_hicp_midx').write()
      3 resp = estat.data('prc_hicp_midx')
----> 4 data = resp.write(s for s in resp.data.series if s.key.COICOP == 'CP00')
      5 #metadata.codelist
      6 #data.loc[:, ('TOTAL', 'INX_Q','EA', 'Q')]

~/anaconda3_501/lib/python3.6/site-packages/pandasdmx/api.py in __getattr__(self, name)
    622         Make Message attributes directly readable from Response instance
    623         '''
--> 624         return getattr(self.msg, name)
    625
    626     def _init_writer(self, writer):

AttributeError: 'DataMessage' object has no attribute 'data'
Why does it not get the data now? What's wrong?
I lost almost a day looking around for some clear examples and explanations. Do you have any to propose? Is there full and clear documentation? I also found this page with other examples, explaining the use of categorical schemes, but it is not for Eurostat (as explained at some point).
Both methods could work, apart from the issues explained, but I also need a recommendation for a definitive method to use, to query Eurostat but also many other institutions like the OECD, the World Bank, etc.
Could you guide me to a definitive and working solution, even if it is different for each institution?
This is my definitive answer to my question, and it works for every type of data collected from Eurostat. I post it here because it can be useful for many.
Let me propose some examples. They produce three pandas Series (EU_unempl, EU_GDP, EU_intRates) with data and correct time indexes:
import pandas as pd
import numpy

def eurostat_json_to_series(url, freq, time_format=None):
    # Read the Eurostat JSON response; 'value' maps string positions to observations
    data = pd.read_json(url, typ='series', orient='table', numpy=True)
    first = sorted(int(k) for k in data['value'].keys())[0]
    last = sorted((int(k) for k in data['value'].keys()), reverse=True)[0]
    x = numpy.array([data['value'][str(i)] for i in range(first, last + 1)])
    # Start date: the time label matching the first observed position
    start = sorted(data['dimension']['time']['category']['index'].keys())[first]
    if time_format:  # e.g. '1993M01' needs '%YM%m' ('%m' is month; '%M' would be minutes)
        start = pd.to_datetime(start, format=time_format)
    else:
        start = pd.Timestamp(start)
    return pd.Series(x, index=pd.date_range(start, periods=len(x), freq=freq))

#----Unemployment Rate---------
EU_unempl = eurostat_json_to_series(
    'http://ec.europa.eu/eurostat/wdds/rest/data/v2.1/json/en/ei_lmhr_m?geo=EA&indic=LM-UN-T-TOT&s_adj=NSA&unit=PC_ACT',
    freq='M', time_format='%YM%m')

#----GDP---------
EU_GDP = eurostat_json_to_series(
    'http://ec.europa.eu/eurostat/wdds/rest/data/v2.1/json/en/namq_10_gdp?geo=EA&na_item=B1GQ&s_adj=NSA&unit=CP_MEUR',
    freq='Q')

#----Money market interest rates---------
EU_intRates = eurostat_json_to_series(
    'http://ec.europa.eu/eurostat/wdds/rest/data/v2.1/json/en/irt_st_m?geo=EA&intrt=MAT_ON',
    freq='M', time_format='%YM%m')
The general solution is to not rely on overly-specific APIs like datareader and instead go to the source. You can use datareader's source code as inspiration and as a guide for how to do it. But ultimately when you need to get data from a source, you may want to directly access that source and load the data.
One very popular tool for HTTP APIs is requests. You can easily use it to load JSON data from any website or HTTP(S) service. Once you have the JSON, you can load it into Pandas. Because this solution is based on general-purpose building blocks, it is applicable to virtually any data source on the Web (as opposed to e.g. pandaSDMX, which is only applicable to SDMX data sources).
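A minimal sketch of that building-block approach; the URL and JSON shape here are purely illustrative, not a real Eurostat endpoint:

```python
import requests
import pandas as pd

url = "https://example.com/api/data"  # illustrative endpoint
resp = requests.get(url, params={"format": "json"})
resp.raise_for_status()

records = resp.json()        # suppose the service returns a list of records
df = pd.DataFrame(records)   # record-oriented JSON loads straight into a DataFrame
```

The same two steps (fetch JSON, hand it to pandas) apply to any HTTP(S) data service; only the URL, parameters, and the exact JSON layout change.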
Load with read_csv and multiple separators
The problem with Eurostat data from the bulk download repository is that the files are tab-separated, but the first 3 columns are separated by commas. Pandas read_csv() can deal with multiple separators given as a regex if you specify engine="python". This works for some data sets, but the OP's dataset also contains flags, which cannot be ignored in the last column.
# Load the house price index from the Eurostat bulk download facility
import pandas
code = "prc_hpi_a"
url = f"https://ec.europa.eu/eurostat/estat-navtree-portlet-prod/BulkDownloadListing?sort=1&file=data%2F{code}.tsv.gz" # Pandas.read_csv could almost read it directly with a multiple separator
df = pandas.read_csv(url, sep=",|\t| [^ ]?\t", na_values=":", engine="python")
# But the last column is a character column instead of a numeric because of the
# presence of a flag ": c" illustrated in the last line of the table extract
# below
# purchase,unit,geo\time\t 2006\t 2005
# DW_EXST,I10_A_AVG,AT\t :\t :
# DW_EXST,I10_A_AVG,BE\t 83.86\t 75.16
# DW_EXST,I10_A_AVG,BG\t 87.81\t 76.56
# DW_EXST,I10_A_AVG,CY\t :\t :
# DW_EXST,I10_A_AVG,CZ\t :\t :
# DW_EXST,I10_A_AVG,DE\t100.80\t101.10
# DW_EXST,I10_A_AVG,DK\t113.85\t 91.79
# DW_EXST,I10_A_AVG,EE\t156.23\t 98.69
# DW_EXST,I10_A_AVG,ES\t109.68\t :
# DW_EXST,I10_A_AVG,FI\t : c\t : c
Load with the eurostat package
There is also a Python package called eurostat which makes it possible to search for and load data sets from the bulk facility into pandas data frames.
Load a data set given its code (here reusing code = "prc_hpi_a" from above):
import eurostat
df1 = eurostat.get_data_df(code)
The table of contents of the bulk download facility can be read with:
toc_url = "https://ec.europa.eu/eurostat/estat-navtree-portlet-prod/BulkDownloadListing?sort=1&file=table_of_contents_en.txt"
toc2 = pandas.read_csv(toc_url, sep="\t")
# Remove white spaces at the beginning and end of strings
toc2 = toc2.applymap(lambda x: x.strip() if isinstance(x, str) else x)
or with
toc = eurostat.get_toc_df()
toc0 = (eurostat.subset_toc_df(toc, "exchange"))
The last line searches for the data sets that have "exchange" in their title.
Reshape to long format
It might be useful to reshape the Eurostat data to long format with:
import re

if any(df.columns.str.contains("time")):
    time_column = df.columns[df.columns.str.contains("time")][-1]
    # Id columns are before the time column
    id_columns = df.loc[:, :time_column].columns
    df = df.melt(id_vars=id_columns, var_name="period", value_name="value")
    # Remove "\time" from the rightmost column of the index
    df = df.rename(columns=lambda x: re.sub(r"\\time", "", x))
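To see what this reshape does, here is a toy example in the same wide layout (the values are made up):

```python
import re
import pandas as pd

# Wide layout as in the bulk files: id columns fused into "unit,geo\time",
# one column per period
df = pd.DataFrame({
    "unit,geo\\time": ["I10,AT", "I10,BE"],
    "2006": [100.0, 83.86],
    "2005": [99.5, 75.16],
})

time_column = df.columns[df.columns.str.contains("time")][-1]
id_columns = df.loc[:, :time_column].columns
df = df.melt(id_vars=id_columns, var_name="period", value_name="value")
df = df.rename(columns=lambda x: re.sub(r"\\time", "", x))
print(df)
```

The result has one row per (id, period) pair, with columns "unit,geo", "period", and "value".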