DataFrame not showing in PyCharm - python

I am using PyCharm 2016.2.1. When I try to view a Pandas DataFrame through the newly added 'View as DataFrame' feature in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15, since in some cases the DataFrame gets truncated to this size (when the number of rows is too large; when there are too many columns, PyCharm just says 'Nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer.

I have faced the same problem with PyCharm 2018.2.2. The reason was having a special character in a column's name, as mentioned by Yunzhao.
If you have a column name like 'R&D', changing it to 'RnD' will fix the problem. It's really strange that JetBrains hasn't solved this problem in over 2 years.
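If it helps, here is a minimal sketch (the column names are just examples, not from the question) of renaming such a column before opening the viewer:
import pandas as pd

# Hypothetical example: a column name containing '&' breaks the viewer, so rename it
df = pd.DataFrame({'R&D': [1, 2, 3], 'Sales': [4, 5, 6]})
df = df.rename(columns={'R&D': 'RnD'})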

As you said in your edit, there's a limit on the number of columns (on my PC it's far fewer than 15, though). However, you can see the whole thing by typing:
df.values
It will show you the whole dataframe, but without the names of the columns.
Edit:
To show the column names as well:
import numpy as np
np.vstack([df.columns, df.values])

I have run into the same problem.
In my case it was caused by special characters in the column names: I had "%" in a column name, and the 'View as DataFrame' function would not show the data. After I removed it, everything was shown correctly.
Please double-check whether you also have special characters in your column names.
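In case it is useful, a small sketch (it assumes your DataFrame is called df) for spotting and sanitizing such column names:
import re

# List column names containing characters outside letters, digits, underscore and space
suspect = [c for c in df.columns if re.search(r'[^0-9A-Za-z_ ]', str(c))]
print(suspect)
# Replace those characters with underscores before opening the viewer
df.columns = [re.sub(r'[^0-9A-Za-z_ ]', '_', str(c)) for c in df.columns]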

This might be helpful for some people experiencing a similar problem:
As of August 2019, SciView in PyCharm struggles with displaying DataFrames that contain nullable integer types; see the issue on the JetBrains tracker.
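If you run into this, one possible workaround (just a sketch, assuming your DataFrame is called df) is to cast the nullable integer columns to a plain float dtype for inspection only:
# Cast pandas nullable Int64 columns to float64 so the viewer can render them
int_cols = [c for c in df.columns if str(df[c].dtype) == 'Int64']
df_view = df.copy()
df_view[int_cols] = df_view[int_cols].astype('float64')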

I use PyCharm 2019.1.1 (Community Edition). When I used the right-click "View as DataFrame", I got the message "Nothing to show", but when I clicked the "...View as DataFrame" button at the end of the object's row, it worked.
It turned out that my DataFrame was an attribute of another object: the right-click "View as DataFrame" doesn't carry over the enclosing object's name, so you need to type the object's name and the attribute's name yourself.
Hope this helps somebody.

If you don't strictly need the functionality of the DataFrame viewer, you can print the whole DataFrame in the output window using:
import pandas as pd

def print_full(x):
    # temporarily allow printing every row, then restore the default
    pd.set_option('display.max_rows', len(x))
    print(x)
    pd.reset_option('display.max_rows')
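An alternative sketch that avoids changing the global options permanently is pd.option_context, which restores the previous settings when the block exits:
import pandas as pd

# Temporarily lift the row and column limits only inside the with-block
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(df)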

I use PyCharm 2019.1.1 (Community Edition) and I run Python 3.7.
When I first click on "View as DataFrame" there seems to be the same issue, but if I wait a few seconds the content pops up. For me it was just a matter of loading time.

For the sake of completeness: I faced the same problem because some elements in the index of the DataFrame contained a question mark '?'. That should be avoided too if you want to use the data viewer. The data viewer still worked when the index strings contained hashes or less-than/greater-than signs, though.

In my situation, the problem was caused by two columns with the same name in my DataFrame. You can check for this with:
len(list(df)) == len(set(df))
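To see exactly which names are duplicated, a quick sketch (again assuming your DataFrame is df):
# Show the duplicated column labels
print(list(df.columns[df.columns.duplicated()]))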

As of 2020-10-02, using PyCharm 2020.1.4, I found that this issue also occurs if the DataFrame contains a column containing a tuple.
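If you still need the viewer, a possible workaround (a sketch, assuming your DataFrame is df) is to stringify tuple-valued columns before inspecting them:
# Convert any column whose values are tuples to strings so the viewer can render it
for col in df.columns:
    if df[col].map(lambda v: isinstance(v, tuple)).any():
        df[col] = df[col].astype(str)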

The same thing is happening to me in version 2021.3.3.
In my case, it seems to have something to do with the column dtype being Int64, but then completely full of <NA> values. As soon as I change even a single row's value in the offending column to an actual integer, it renders again.
So, if feasible, you can fix it by dropping the column, or setting all (or at least one) of the values to some meaningful replacement value, like -1 or -99999 or something:
df["col"] = df["col"].fillna(value=-1)

Related

How to update dataframe.ix code when referring to dates

I'm trying to run some code that was written a while ago, and the problem is that it uses pandas.DataFrame.ix, which apparently no longer exists (I have version 1.2.4).
So I'm trying to make it work using iloc and loc, but there are parts where it refers to dates and I don't know what I could do, as a date is neither a label nor an integer.
Here is an example of my problem:
def ExtractPeriode(self, dtStartPeriode, dtEndPeriode):
    start = self.Energie.index.searchsorted(dtStartPeriode)
    end = self.Energie.index.searchsorted(dtEndPeriode)
    self.Energie = self.Energie.ix[start:end]
Does someone know what I could use to replace .ix in this situation? Also, you might have noticed I'm not very experienced with coding, so please try to keep your answers simple if you can.
.loc is used for any label, so if you have dates as your index, you can pass in the date. But .searchsorted returns an integer (the row position where the value would be inserted), so here you should use .iloc:
self.Energie = self.Energie.iloc[start:end]
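A minimal illustration with made-up data of why .iloc is the right tool here (searchsorted returns positions, not labels):
import pandas as pd

# Toy time-indexed series, sliced by the positions returned from searchsorted
energie = pd.Series(range(10), index=pd.date_range('2021-01-01', periods=10, freq='D'))
start = energie.index.searchsorted(pd.Timestamp('2021-01-03'))  # integer position, here 2
end = energie.index.searchsorted(pd.Timestamp('2021-01-07'))    # integer position, here 6
print(energie.iloc[start:end])  # rows for January 3rd through January 6th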

Replacing empty cells in column with variable value

I'm trying to replace the empty cells of a column called 'City' with the most common value in the same column, using the Python library (I think) called pandas.
(I'm working with a CSV file here.)
This is what I've tried; assume the file is already read and ready to be edited:
location = df['City'].mode()
basicdf = "df['City'].replace('',"+location+", inplace=True)"
basicdf
So the logic here was to use .mode, which gives the most frequent value in the column, store that value in the variable 'location',
and then insert that variable into the second line of code.
(I don't know how to do any of this the correct way.)
Building the string in the second line seemed to be the only way I could find to put whatever variable I want into this .replace command.
Edit: I have tried this code instead, but it ends up writing into other columns as well as 'City', which is not great.
df['City'].replace('',np.nan,inplace=True)
df = df.fillna(df['City'].value_counts().index[0])
Any tips would be appreciated, mainly how to achieve what I'm trying to do (without needing to restart from scratch, because I have a lot of other code in the file using the pandas library), and
how to insert variables into these pandas commands (if that is even possible).
Found the answer, thanks mainly to Pygirl,
df['City'].replace('',np.nan,inplace=True)
df['City'].fillna(df['City'].value_counts().index[0], inplace=True)
These will first replace the blank or empty cells with NaNs and then 'fill' the NaNs with the most common value in the selected column, in this case 'City'.
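For reference, the same thing can be written with .mode(), which returns the most frequent value directly (a sketch; the result is the same whenever there is a single clear most-common value):
import numpy as np

df['City'].replace('', np.nan, inplace=True)
df['City'].fillna(df['City'].mode()[0], inplace=True)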

Pandas add column to new data frame at associated string value?

I am trying to add a column from one dataframe to another,
df.head()
street_map2[["PRE_DIR","ST_NAME","ST_TYPE","STREET_ID"]].head()
The PRE_DIR is just the prefix of the street name. What I want to do is add the column STREET_ID at the associated street to df. I have tried a few approaches but my inexperience with pandas and the comparison of strings is getting in the way,
street_map2['STREET'] = df["STREET"]
street_map2['STREET'] = np.where(street_map2['STREET'] == street_map2["ST_NAME"])
The above code raises "ValueError: Length of values does not match length of index". I've also tried using street_map2['STREET'].str in street_map2["ST_NAME"].str. Can anyone think of a good way to do this? (Note it doesn't need to be 100% accurate, just catch most of them, and the approach can be completely different from the one tried above.)
EDIT: Thank you to all who have tried so far; I have not resolved the issue yet. Here is some more data,
street_map2["ST_NAME"]
I have tried this approach as suggested but still have some indexing problems,
def get_street_id(street_name):
    return street_map2[street_map2['ST_NAME'].isin(df["STREET"])].iloc[0].ST_NAME
df["STREET_ID"] = df["STREET"].map(get_street_id)
df["STREET_ID"]
This throws an error.
If it helps, the data frames are not the same length. Any more ideas or a way to fix the above would be greatly appreciated.
For you to do this, you need to merge these dataframes. One way to do it is:
df.merge(street_map2, left_on='STREET', right_on='ST_NAME')
What this will do is look for equal values in the ST_NAME and STREET columns, and fill each matching row with the values from the other columns of both dataframes.
Check this link for more information: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html
Also, the strings on the columns you try to merge on have to match perfectly (case included).
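A tiny made-up illustration of what the merge does (the street names and IDs below are invented for the example):
import pandas as pd

df = pd.DataFrame({'STREET': ['MAIN', 'OAK', 'ELM']})
street_map2 = pd.DataFrame({'ST_NAME': ['MAIN', 'ELM'], 'STREET_ID': [101, 103]})

merged = df.merge(street_map2, left_on='STREET', right_on='ST_NAME')
print(merged)
#   STREET ST_NAME  STREET_ID
# 0   MAIN    MAIN        101
# 1    ELM     ELM        103
Note that the default is an inner join, so 'OAK' is dropped; pass how='left' to keep every row of df even when there is no match.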
You can do something like this, with a map function:
df["STREET_ID"] = df["STREET"].map(get_street_id)
Where get_street_id is defined as a function that, given a value from df["STREET"], will return a value to insert into the new column (disclaimer: currently untested):
def get_street_id(street_name):
    return street_map2[street_map2["ST_NAME"] == street_name].iloc[0].STREET_ID
We get the rows of street_map2 where the ST_NAME column equals the given street name:
street_map2[street_map2["ST_NAME"] == street_name]
Then we take the first matching row with iloc[0] and return its STREET_ID value.
We can then add the error tolerance you mentioned in your question by updating the indexing operation:
...
street_map2[street_map2["ST_NAME"].str.contains(street_name)]
...
or perhaps,
...
street_map2[street_map2["ST_NAME"].str.startswith(street_name)]
...
Or, more flexibly:
...
street_map2[
    street_map2["ST_NAME"].str.lower().str.replace("street", "st").str.startswith(
        street_name.lower().replace("street", "st")
    )
]
...
...which will lowercase both values, convert, for example, "street" to "st" (so the names are more likely to overlap), and then check whether one starts with the other.
If this is still not working for you, you may unfortunately need to come up with a more accurate mapping dataset between your street names! It is very possible that the street names are just too different to easily match with string comparisons.
(If you're able to provide some examples of street names and where they should overlap, we may be able to help you better develop a "fuzzy" match!)
Alright, I managed to figure it out, but the solution probably won't be too helpful if you aren't in the exact same situation with the same data. Bernardo Alencar's answer was essentially correct, except I was unable to apply an operation to the strings while doing the merge (I am still not sure whether there is a way to do it). I found another dataset that had the street names formatted similarly to the first. I then merged the first with this third new data frame. After this, the first and second both had a "STREET_ID" column. Then I finally managed to merge the second one with the combined one by using,
temp = combined["STREET_ID"]
CrimesToMapDF = street_maps.merge(temp, left_on='STREET_ID', right_on='STREET_ID')
thus getting the desired final data frame with the associated street IDs.

not able to change object to float in pandas dataframe

Just started learning Python. I'm trying to change a column's data type from object to float to take the mean. I have tried changing [] to () and even the quotes; I don't know whether it makes a difference or not. Please help me figure out what the issue is. Thanks!!
My code:
df["normalized-losses"]=df["normalized-losses"].astype(float)
The error I see is attached as an image.
Use:
df['normalized-losses'] = df['normalized-losses'][~(df['normalized-losses'] == '?' )].astype(float)
Using df.normalized-losses leads to the interpreter evaluating df.normalized, which doesn't exist; the statement you have written is executed as (df.normalized) - (losses.astype(float)). There also appears to be a question mark in your data which can't be converted to float. The statement above converts to float only those rows which don't contain a question mark and drops the rest. If you don't want to drop those rows, you can replace the question marks with 0 using:
df['normalized-losses'] = df['normalized-losses'].replace('?', 0.0)
df['normalized-losses'] = df['normalized-losses'].astype(float)
Welcome to Stack Overflow, and good luck on your Python journey! An important part of coding is learning how to interpret error messages. In this case, the traceback is quite helpful - it is telling you that you cannot call normalized after df, since a dataframe does not have a method of this name.
Of course you weren't trying to call something called normalized, but rather the normalized-losses column. The way to do this is as you already did once - df["normalized-losses"].
As to your main problem - if even one of your values can't be converted to a float, the column-wide operation will fail. This is very common. You first need to eliminate all of the non-numerical items in the column; one way to find them is with df[~df['normalized-losses'].str.isnumeric()].
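One compact way to do that cleaning step (a sketch, not the only option) is pd.to_numeric with errors='coerce', which turns anything non-numeric, such as the '?', into NaN instead of raising:
import pandas as pd

# Coerce non-numeric entries to NaN; mean() then skips them automatically
df['normalized-losses'] = pd.to_numeric(df['normalized-losses'], errors='coerce')
print(df['normalized-losses'].mean())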
The "df.normalized-losses" does not signify anything to python in this case. you can replace it with df["normalized-losses"]. Usually, if you try
df["normalized-losses"]=df["normalized-losses"].astype(float)
this should work. What it does is take the normalized-losses column from the dataframe, convert it to float, and reassign it to the same column in the dataframe. But sometimes the data needs some processing before the statement above will work.
You can't use - in an attribute or variable name. Perhaps you mean normalized_losses?

Pandas -- String of boolean conditions -- SettingWithCopyWarning

I am wondering if anyone can assist me with this warning I get in my code. The code DOES score items correctly, but this warning is bugging me and I can't seem to find a good fix, given that I need to string a few boolean conditions together.
Background: Imagine that I have a magical fruit identifier and I have a csv file that lists what fruit was identified and in which area (1, 2, etc.). I read in the csv file with columns of "FruitID" and "Area." An identification of "APPLE" or "apple" in Zone 1 is scored as correct/true (other identified fruits are incorrect/false). I apply similar logic for other areas, but I won't get into that.
Any ideas for how to correct this? Should I use .loc? I'm not sure that it will work with multiple boolean conditions. Thanks!
My code snippet that initiates the CopyWarning:
Area1_ID_df['Area 1, Score']=(Area1_ID_df['FruitID']=='APPLE')|(Area1_ID_df['FruitID']=='apple')
Stacktrace:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
Pandas finds it ambiguous what you are trying to do. Certain operations return a view of the dataset, whereas other operations make a copy of it. The confusion is whether you want to modify a copy, modify the original dataset, or create something new.
https://www.dataquest.io/blog/settingwithcopywarning/ is a great link to learn more about the problem you are having.
If the line that's causing this error is truly: s = t | u, where t and u are Boolean series indexed consistently, you should not worry about SettingWithCopyWarning.
This is a warning rather than an error. The latter indicates there is a problem, the former indicates there may be a problem. In this case, Pandas guesses you may be working with a copy rather than a view.
If the result is as you expect, you can safely ignore the warning.
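For completeness, a hedged sketch of the two usual ways to avoid the warning, assuming Area1_ID_df was produced by slicing a larger DataFrame called df on its 'Area' column (the column names come from the question; the slicing itself is an assumption):
# Option 1: make the slice an explicit copy, so later assignments are unambiguous
Area1_ID_df = df[df['Area'] == 1].copy()
Area1_ID_df['Area 1, Score'] = (Area1_ID_df['FruitID'] == 'APPLE') | (Area1_ID_df['FruitID'] == 'apple')

# Option 2: write directly into the original frame with .loc and a combined boolean mask
df.loc[df['Area'] == 1, 'Area 1, Score'] = df['FruitID'].isin(['APPLE', 'apple'])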
