(Python) manually copy/paste data from pandas table without copying the index - python

I've been looking around but could not find a similar post, so I thought I'd give it a go.
I wrote a pandas program that successfully displays the resulting dataframe in pandas table format in a tkinter textbox. The aim is that the user can select the data and copy/paste it into an (existing) Excel sheet. When doing this, the index is always copied as well. I was wondering if one could programmatically select the complete table except the index?
I know that one can save to Excel or other formats with index=False, but I could not find a kind of df.select(..., index=False). I hope my explanation is more or less clear ;-)
Thanks a lot

You could use the dataframe's to_string function, where you can pass index=False as one of the parameters. For example, say we have this df:
import pandas as pd
df = pd.DataFrame({'a': ['yes', 'no', 'yes' ], 'b': [10, 5, 20]})
print(df.to_string(index = False))
This would give you:
  a   b
yes  10
 no   5
yes  20
Hope this helps!

I finally found it.
Instead of using something like self.mytable.copy('columns') to select everything and then switching to Excel to paste it, I use this line of code, which does exactly what I need:
df.to_clipboard(sep="\t", index=False)
The sep="\t" makes it split across the columns in Excel.
Hopefully someone can use this at some stage.
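For anyone wiring this into a tkinter GUI like the one described in the question, a minimal sketch could look like the following (the widget and variable names are illustrative, not taken from the original program):

import tkinter as tk
import pandas as pd

df = pd.DataFrame({'a': ['yes', 'no', 'yes'], 'b': [10, 5, 20]})

root = tk.Tk()

def copy_without_index():
    # tab-separated clipboard text pastes straight into Excel columns, without the index
    # note: to_clipboard needs a system clipboard backend (built in on Windows/macOS, xclip/xsel on Linux)
    df.to_clipboard(sep="\t", index=False)

tk.Button(root, text="Copy table", command=copy_without_index).pack()
root.mainloop()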

Related

Snowpark-Python Dynamic Join

I have searched through a large amount of documentation to try to find an example of what I'm trying to do. I admit that the bigger issue may be my lack of Python expertise, so I'm reaching out here in hopes that someone can point me in the right direction. I am trying to create a Python function that dynamically queries tables based on the function's parameters. Here is an example of what I'm trying to do:
def validateData(_ses, table_name, sel_col, join_col, data_state, validation_state):
    sdf_t1 = _ses.table(table_name).select(sel_col).filter(col('state') == data_state)
    sdf_t2 = _ses.table(table_name).select(sel_col).filter(col('state') == validation_state)
    df_join = sdf_t1.join(sdf_t2, [sdf_t1[i] == sdf_t2[i] for i in join_col], 'full')
    return df_join.to_pandas()
This would be called like this:
df = validateData(ses,'table_name',[col('c1'),col('c2')],[col('c2'),col('c3')],'AZ','TX')
The issue I'm having is with this line from the function:
df_join = sdf_t1.join(sdf_t2, [col(sdf_t1[i]) == col(sdf_t2[i]) for i in join_col],'full')
I know that code is incorrect, but I'm hoping it explains what I'm trying to do. If anyone has any advice on whether this is possible or how to do it, I would greatly appreciate it.
Instead of joining the dataframes, I think it's easier to use direct SQL, pull the data into a Snowpark dataframe, and convert it to a pandas dataframe.
from snowflake.snowpark import Session
import pandas as pd

# session is assumed to be an existing Snowpark Session (Session.builder.configs(...).create())
# Snowpark df creation using SQL
data = session.sql("select t1.col1, t2.col2, t2.col2 from mytable t1 full outer join mytable2 t2 on t1.id=t2.id where t1.col3='something'")

# Convert the Snowpark DF to a pandas DF. You can use this pandas dataframe.
data = pd.DataFrame(data.collect())
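A small variation (using the to_pandas() method that the question itself already calls on df_join) keeps the column names when converting; the query is the same as above:

# to_pandas() returns a pandas DataFrame with the result's column names intact
data = session.sql("select t1.col1, t2.col2, t2.col2 from mytable t1 full outer join mytable2 t2 on t1.id=t2.id where t1.col3='something'").to_pandas()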
Essentially what you need is to create a Python expression from two lists of variables. I don't have a better idea than using eval.
Maybe try eval(" & ".join(["(col(sdf_t1[i]) == col(sdf_t2[i]))" for i in join_col])). Be mindful that I have not completely tested this; it's just to toss out an idea.
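If you would rather stay in Snowpark dataframes and avoid eval, one option is to AND the per-column equalities into a single join condition. This is only a sketch, untested against your tables, and it assumes join_col is a list of plain column-name strings such as ['c2', 'c3'] rather than col() objects:

from functools import reduce

# combine the equality expressions into one boolean Column with &
join_cond = reduce(lambda acc, expr: acc & expr,
                   [sdf_t1[c] == sdf_t2[c] for c in join_col])
df_join = sdf_t1.join(sdf_t2, join_cond, 'full')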

Can I make a dataframe "active" in pandas

I don't know if I'm asking this question right, but feel free to ask for more info if needed.
So I create this dataframe where I read a csv file. Then I want to use the file to do other tasks. I want that df to be "active", but it seems like it doesn't recognise that dataframe outside of the button.
def on_button_clicked(b):
    df = pd.read_csv(f"./siivous/cleanedfiles/node_{karry.value}.csv")
    with output:
        display(df)
        display(img)
        clear_output(wait=True)
So how can I make that dataframe active with just a click of the button? So, for example, if I wrote print(df), it would print that df.
Your dataframe named df is declared inside a function. If you do this, you cannot access it outside of that function.
I suggest you check out this thread.
I hope it helped!
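One common fix, sketched below under the assumption that karry, output and img are the ipywidgets objects from the question, is to assign to a module-level name with global so the dataframe is still reachable after the callback returns:

import pandas as pd
from IPython.display import display, clear_output

df = None  # defined at the notebook's top level so other cells can see it

def on_button_clicked(b):
    global df  # rebind the top-level name instead of creating a local variable
    df = pd.read_csv(f"./siivous/cleanedfiles/node_{karry.value}.csv")
    with output:
        display(df)
        display(img)
        clear_output(wait=True)

After the button has been clicked once, print(df) in another cell will show the loaded data.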

How to align numbers to the right in Excel using pandas

I have written a program in Python, using pandas, that creates an Excel file which shows numbers and strings in a few sheets. I want the numbers to be aligned to the right.
Right now it looks like this:
Test name    Failed\passed    Running time
Test A       Passed           3
How can I align the number column to the right?
I tried using this:
df1.style.set_properties(**{'text-align': 'right'})
But it did not fix the problem.
Thanks.
Take a look at this one: https://xlsxwriter.readthedocs.io/example_pandas_column_formats.html
Maybe this one will work for you:
format1 = workbook.add_format({'align': 'right', 'bold': True, 'bottom': 6})
and then maybe pass it to conditional_format.
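Following the xlsxwriter example linked above, a rough sketch of the whole round trip might look like this (the file, sheet and column names are only placeholders); note that the format is applied to the numeric column with set_column rather than conditional_format:

import pandas as pd

df1 = pd.DataFrame({'Test name': ['Test A'],
                    'Failed/passed': ['Passed'],
                    'Running time': [3]})

with pd.ExcelWriter('results.xlsx', engine='xlsxwriter') as writer:
    df1.to_excel(writer, sheet_name='Sheet1', index=False)
    workbook = writer.book
    worksheet = writer.sheets['Sheet1']
    right_fmt = workbook.add_format({'align': 'right'})
    # right-align the 'Running time' column (column C in this layout)
    worksheet.set_column('C:C', 15, right_fmt)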

While True, pandas dataframe won't show?

Using Jupyter Notebook, if I put in the following code:
import pandas as pd
df = pd.read_csv('path/to/csv')
while True:
    df
The dataframe won't show. Can anyone tell me why this is the case? I'm guessing it's because the constant looping is preventing the dataframe from loading fully. Is that what's happening here?
I need code that would let me get a user's input. If they type in a name, for example, I'll extract the person with that name's info from the dataframe and display it, then the program needs to ask them to give another name. This will continue until they type in "quit". I figured a while loop would be the best for that, but it looks like there's just something about while loops and pandas that won't mix. Does anyone have any suggestions on what I can do instead?
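The bare df inside the loop is the core of the issue: in a notebook only the last expression of a cell is rendered automatically, so anything you want shown inside a loop has to go through display(). A minimal sketch of the lookup loop described above, assuming the csv has a 'name' column (adjust to your real column names):

import pandas as pd
from IPython.display import display

df = pd.read_csv('path/to/csv')

while True:
    name = input('Enter a name (or "quit" to stop): ')
    if name.lower() == 'quit':
        break
    # display() renders the frame even inside a loop;
    # a bare df only renders as the last expression of a cell
    display(df[df['name'] == name])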

pandas read_csv is putting all values in one column and one row

I've sought out an answer on multiple forums and YouTube, but to no avail; sorry in advance if it is widely available and my keywords just weren't right.
I'm attempting to execute a simple pandas.read_csv('.csv',sep=','). However, the output I'm receiving is not splitting the data out into multiple columns as I imagine it should.
I'm getting back all of my headers in one row, separated by commas. The same is true for each line item tied to the respective headers.
I've tried setting this data up in a dataframe, manipulating the headers, and manually adding the headers, all with no success.
For a better understanding, I've copied and pasted from my IPython notebook what I'm seeing:
In [15]:
import pandas as pd
pd.read_csv('C:\Users\Dale\Desktop\ShpData\TrackerTW0.csv',sep=',')
Out[15]:
PurchaseOrderNumber,ShipmentFinalDestinationCity,TransferPointCity,POType,PlannedMode,ProgramType,FreightPaymentTerms,ContainerNumber,BL/AWB#,Mode,ShipmentFinalDestinationLocation,CarrierSCAC,Carrier,Forwarder,BrandDesc,POLCity,PODCity,InDCOutlookDate,InDCOriginalDate,AnticipatedShipDate,PlannedStockedDate,ExFactoryActualDate(LT),OriginConsolActualDate(LT),DepartLoadPortActualDate(LT),FullOutGatefromOceanTerminal(CYorPort)ActualDate(LT),DPArrivalActualDate(LT),FreightAvailableActualDate(LT),DestConsolActualDate(LT),DomDepartActualDate(LT),YardArrivalActualDate(LT),CarrierDropActualDate(LT),InDCActualDate(LT),StockedActualDate(LT),Vessel,VesselETADischargePortCity,DPArrivalOutlookDate,VesselETADischargePortActualDate(LT),FullOutGatefromOceanTerminal(CYorPort)OutlookDate,StockedOutlookDate,ShipmentLeg#,Metrics,TotalShippedQty
0 1251708,Rugby,Tuticorin,Initial Order,Ocean,Re...
1 1262597,Rugby,Hong Kong,Initial Order,Ocean,Re...
Thanks
You might want to try this; you have about 40 columns, so pass the names explicitly:
import pandas as pd
df = pd.read_csv('input.csv', names=['PurchaseOrderNumber','ShipmentFinalDestinationCity','TransferPointCity','POType','PlannedMode','ProgramType','FreightPaymentTerms','ContainerNumber','BL/AWB#','Mode','ShipmentFinalDestinationLocation','CarrierSCAC','Carrier','Forwarder','BrandDesc','POLCity','PODCity','InDCOutlookDate','InDCOriginalDate','AnticipatedShipDate','PlannedStockedDate','ExFactoryActualDate(LT)','OriginConsolActualDate(LT)','DepartLoadPortActualDate(LT)','FullOutGatefromOceanTerminal(CYorPort)ActualDate(LT)','DPArrivalActualDate(LT)','FreightAvailableActualDate(LT)','DestConsolActualDate(LT)','DomDepartActualDate(LT)','YardArrivalActualDate(LT)','CarrierDropActualDate(LT)','InDCActualDate(LT)','StockedActualDate(LT)','Vessel','VesselETADischargePortCity','DPArrivalOutlookDate','VesselETADischargePortActualDate(LT)','FullOutGatefromOceanTerminal(CYorPort)OutlookDate','StockedOutlookDate','ShipmentLeg#','Metrics','TotalShippedQty'])
print(df)
Recently, I wanted to process a csv file, and my code was like this:
data = pd.read_csv(dir, sep=" ")
print(data)
The output also put all the values in one row. Then I switched to the default "sep" value and the problem was solved:
data = pd.read_csv(dir, sep=",")
The situation seems a little different from the one the asker raised, but I hope it's helpful for some other friends like me. This is my first comment, I hope it's not too bad!
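If you are not sure which separator a file actually uses, pandas can also sniff it for you; a small sketch (the filename is just a placeholder):

import pandas as pd

# sep=None with the python engine lets pandas detect the delimiter
# from the first rows instead of you having to guess it
df = pd.read_csv('TrackerTW0.csv', sep=None, engine='python')
print(df.head())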
It may not be the best option but it works!
Read the file as it is:
df = pd.read_csv('input.csv')
Get all the column names and assign them to a variable.
names = df.columns.str.split(',').tolist()
Split all the values by ','
df = df.iloc[:, 0].str.split(',', expand=True)
Finally, assign 'names' to column names and that's it!
df.columns = names
I was also having the same issue; all of the columns were coming in as one value. So the following worked for me:
df = pd.read_csv('/content/Reviews.csv',
                 sep=',',
                 error_bad_lines=False,
                 engine='python')
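Note that error_bad_lines was deprecated in pandas 1.3 and removed in pandas 2.0; on a newer pandas the equivalent of the call above (same placeholder path) would be:

import pandas as pd

df = pd.read_csv('/content/Reviews.csv',
                 sep=',',
                 on_bad_lines='skip',   # replaces error_bad_lines=False
                 engine='python')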
