How to disable warnings in Jupyter Notebook? - python

I want to remove the red warnings.
None of the answers mentioned here have worked.

For your particular example (pd.read_table, where the warnings all concern an incorrect number of columns in the data file) you can add warn_bad_lines=False to the call:
pd.read_table(filename, header=0, error_bad_lines=False, warn_bad_lines=False)
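If the red output comes from Python warnings rather than the parser writing directly to stderr, a more general approach is the standard warnings module (a minimal sketch; note that bad-line messages from older pandas versions are written straight to stderr and are not silenced this way):
import warnings

# hide all warnings raised in this notebook session from here on
warnings.filterwarnings('ignore')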

Related

List all Pandas ParserError when using pd.read_csv()

I have a csv with multiple lines that produce the following error:
df1 = pd.read_csv('df1.csv', skiprows=[176, 2009, 2483, 3432, 7486, 7608, 7990, 11992, 12421])
ParserError: Error tokenizing data. C error: Expected 52 fields in line 12541, saw 501
As you can probably notice, I have multiple lines that produce a ParserError.
To work around this, I am just updating skiprows to include the bad line and re-running. I have over 30K lines and would prefer to do this all at once rather than hitting run in Jupyter Notebook, getting a new error, and updating again. Better yet, I wish it would just skip the errors and parse the rest; I've tried googling a solution that way, but all the SO responses were too complicated for me to follow and reproduce for my data structures.
P.S. Why is it that when using skiprows with just one line, like 177, I can just enter skiprows=177, but when using skiprows with a list, I have to use the errored line number minus 1? Why does the counting change?
pandas ≥ 1.3
You should use the on_bad_lines parameter of read_csv (pandas ≥ 1.3.0)
df1 = pd.read_csv('df1.csv', on_bad_lines='warn')
This will skip the invalid lines and give you a warning. If you use on_bad_lines='skip' you skip the lines without warning. The default value of on_bad_lines='error' raises an error for the first issue and aborts.
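If you also want to collect every bad line in one pass (the goal of listing all the errors at once), pandas ≥ 1.4 accepts a callable for on_bad_lines when engine='python'. A minimal sketch, reusing the question's file name:
import pandas as pd

bad_lines = []

def record_and_skip(line):
    # line is the list of raw fields from the malformed row
    bad_lines.append(line)
    return None  # returning None skips the row

df1 = pd.read_csv('df1.csv', engine='python', on_bad_lines=record_and_skip)
print(f"Skipped {len(bad_lines)} malformed lines")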
pandas < 1.3
The parameters are error_bad_lines=False and warn_bad_lines=True.
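For example, a sketch of the legacy call (this skips malformed lines but still reports each one):
import pandas as pd

# pandas < 1.3: drop bad lines, but warn about each one that is skipped
df1 = pd.read_csv('df1.csv', error_bad_lines=False, warn_bad_lines=True)
As for the P.S.: when skiprows is an integer, pandas skips that many lines from the start of the file; when it is a list, the entries are interpreted as 0-indexed line numbers. The ParserError reports 1-indexed line numbers, which is why you subtract 1 when building the list.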

Ignore UserWarning from openpyxl using pandas

I have tons of .xlsm files that I have to load. Each Excel file has 6 sheets. Because of that, I'm opening each Excel file like this, using pandas:
for excel_file in files_list:
    with pd.ExcelFile(excel_file, engine="openpyxl") as f:
        df1 = pd.read_excel(f, "Sheet1")
        df2 = pd.read_excel(f, "Sheet2")
        df3 = pd.read_excel(f, "Sheet3")
        ...
After each iteration I pass the dataframes to another function and do some work with them. I am using pd.ExcelFile to load the file into memory just once and then split it into DataFrames.
However, when doing this, I am getting the following warning:
/opt/anaconda3/lib/python3.8/site-packages/openpyxl/worksheet/_reader.py:300: UserWarning: Data Validation extension is not supported and will be removed
warn(msg)
Despite the warning, the information is loaded correctly from the Excel file and no data is missing. It takes about 0.8 s to load each Excel file and all of its sheets into DataFrames. If I use pandas' default engine to load each Excel file, the warning goes away, but the load time per file goes up to 5 or even 6 seconds.
I saw this post, but there wasn't an answer on how to remove the warning, which is what I need, as everything's working correctly.
How can I disable said UserWarning?
You can do this using the warnings core module:
import warnings
warnings.filterwarnings('ignore', category=UserWarning)
You can also restrict the filter to a particular module by adding the module argument:
warnings.filterwarnings('ignore', category=UserWarning, module='openpyxl')
See What causes "UserWarning: Discarded range with reserved name" - openpyxl (a different warning, same solution). Put the warnings back to their defaults after you open the workbook, since there may be other warnings that you do want to see:
import warnings
from openpyxl import load_workbook

warnings.simplefilter("ignore")
wb = load_workbook(path)
warnings.simplefilter("default")
If you want to ignore this warning specifically, and only in a given context, you can combine catch_warnings and filterwarnings with the message argument. E.g.:
import warnings

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="Data Validation extension is not supported and will be removed")
    data = pd.read_excel(f, sheet_name=None)
Note: sheet_name=None will read all the Excel sheets in one go too.
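For reference, with sheet_name=None the result is a dict mapping sheet names to DataFrames, so a quick sketch of iterating over it (the sheet names are whatever the file contains):
for name, df in data.items():
    print(name, df.shape)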

Strange warning using dask.dataframe to read csv

I am using dask dataframe module to read a csv.
In [3]: from dask import dataframe as dd
In [4]: dd.read_csv("/file.csv", sep=",", dtype=str, encoding="utf-8", error_bad_lines=False, collection=True, blocksize=64e6)
I used to do this with no problem, but today a strange warning showed up:
FutureWarning: The default value of auto_mkdir=True has been deprecated and will be changed to auto_mkdir=False by default in a future release.
FutureWarning,
This didn't worry me until I realised it breaks my unit tests: run from the console it's merely a warning, but the test suite for my app now fails because of it.
Does anyone know the cause of this warning or how to get rid of it?
Auto-answering for documentation:
This issue appears in fsspec==0.6.3 and dask==2.12.0 and will be removed in the future.
To prevent pytest from failing because of the warning, add or edit a pytest.ini file in your project and set:
[pytest]
filterwarnings =
    error
    ignore::FutureWarning
If you want dask to silence the warning altogether, explicitly set this in the function call: storage_options={"auto_mkdir": True}
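Put together, a sketch of the call from the question with the option applied (the parameter set is taken from the question; auto_mkdir is forwarded to fsspec through storage_options):
from dask import dataframe as dd

# passing auto_mkdir explicitly avoids the deprecation warning from fsspec
df = dd.read_csv(
    "/file.csv",
    sep=",",
    dtype=str,
    encoding="utf-8",
    blocksize=64e6,
    storage_options={"auto_mkdir": True},
)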
I got the same thing. Finding no answers as to what might have replaced the feature, I decided to see if the feature is even needed any more. Sure enough, as of Pandas 1.3.0 the warnings that previously motivated the feature no longer appear. So
pd.read_csv(import_path, error_bad_lines=False, warn_bad_lines=False, names=cols)
simply became
pd.read_csv(import_path, names=cols)
and works fine with no errors or warnings.

Pandas ExcelWriter append mode not working

I don't understand why mode='a' or even mode='w' is not working.
My code is :
import pandas as pd
from pandas import DataFrame, ExcelWriter

file = "Data_Complet.xlsx"
df1 = pd.read_excel(file, encoding='utf-8')
df_transfo = DataFrame(df1, columns=['DATE', 'Nb'])
print(df_transfo)
with ExcelWriter(path="Data_Complet.xlsx", mode='a') as writer:
    df_transfo.to_excel(writer)
The result is:
TypeError: __init__() got an unexpected keyword argument 'mode'
I am using Spyder.
I keep coming to this question and forget what I did! So, I'm adding the code for it.
As stated in the comments, you need pandas version 0.24.0 or newer.
Just run
pip install --upgrade pandas
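After upgrading, a minimal sketch of append mode (the engine must support appending, which openpyxl does for .xlsx; the sheet name here is just an example):
import pandas as pd

with pd.ExcelWriter("Data_Complet.xlsx", mode="a", engine="openpyxl") as writer:
    df_transfo.to_excel(writer, sheet_name="Transfo")  # "Transfo" is a hypothetical sheet name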

Excel dates turn into "yy-mm-dd hh:mm:ss" format after using read_excel and to_excel

I have to get rid of duplicate rows in a *.xlsx file for a project. I have the code down here, but in the output file, date values turn into "yy-mm-dd hh:mm:ss" format after running my code. What would be the cause of and solution to that weird problem?
Running it on Pycharm 2019.2 Pro and Python 3.7.4
import pandas
mExcelFile = pandas.read_excel('Input/ogr.xlsx')
mExcelFile.drop_duplicates(subset=['FName', 'LName', 'Class', '_KDT'], inplace=True)
mExcelFile.to_excel('Output/NoDup.xlsx')
I'm expecting the dates to stay in their original format, which is "dd.mm.yy", but the values become "yy-mm-dd hh:mm:ss".
To control the date format when writing to Excel, try this (the context manager ensures the file is actually written on exit):
with pd.ExcelWriter(fileName, engine='xlsxwriter', datetime_format='dd/mm/yy') as writer:
    df.to_excel(writer)
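Applied to the code in the question, a sketch (assuming the xlsxwriter engine is installed; 'dd.mm.yy' mirrors the asker's original display format):
import pandas

mExcelFile = pandas.read_excel('Input/ogr.xlsx')
mExcelFile.drop_duplicates(subset=['FName', 'LName', 'Class', '_KDT'], inplace=True)

# datetime_format controls how datetime cells are rendered in the output workbook
with pandas.ExcelWriter('Output/NoDup.xlsx', engine='xlsxwriter',
                        datetime_format='dd.mm.yy') as writer:
    mExcelFile.to_excel(writer, index=False)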
Actually, the answer at the link below solved it. Since I am new to Python programming, I didn't realize where the problem was: it was actually pandas converting cell values to datetimes. Detailed answer: https://stackoverflow.com/a/49159393/11584604
