I'm working on a program that manages a bookstore and I'm at the end of it. I'm creating the executable from it, but it gives an error when I run it and I don't know what it is.
Error:
Traceback (most recent call last):
File "main.py", line 3494, in <module>
File "db_manager.py", line 278, in titulo_livros
File "pandas\io\parsers.py", line 605, in read_csv
File "pandas\io\parsers.py", line 457, in _read
File "pandas\io\parsers.py", line 814, in __init__
File "pandas\io\parsers.py", line 1045, in _make_engine
File "pandas\io\parsers.py", line 1862, in __init__
File "pandas\io\parsers.py", line 1357, in _open_handles
File "pandas\io\common.py", line 642, in get_handle
FileNotFoundError: [Errno 2] No such file or directory: 'livros.csv'
[432] Failed to execute script main
The command I am using to generate the exe file is "pyinstaller --onefile main.py".
And this is my project folder tree:
(screenshot of my project folder tree)
Please help me, I have no idea what is going on.
Thank you very much in advance.
Somewhere you are doing pandas.read_csv(fname) where fname='livros.csv'.
You need to give it the right path to the CSV (or bundle the CSV into the executable... but that probably doesn't make sense; I'm not sure why you would ever bundle a CSV into the executable).
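For example, here is a minimal sketch (my assumption is that you ship livros.csv next to main.exe rather than bundling it) of building that path so it works both when run from source and from the PyInstaller build:
import os
import sys
import pandas as pd

# PyInstaller sets sys.frozen on the bundled executable, so look for data
# files next to the .exe there, and next to this source file otherwise.
if getattr(sys, 'frozen', False):
    base_dir = os.path.dirname(sys.executable)
else:
    base_dir = os.path.dirname(os.path.abspath(__file__))

livros = pd.read_csv(os.path.join(base_dir, 'livros.csv'))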
After a lot of back and forth, I think this is what you want:
import os
import pandas
from sqlalchemy import create_engine

# Keep the database in the user's home folder so the path is the same
# no matter where the PyInstaller executable is launched from.
db_path = os.path.expanduser('~/my_file.db')
engine = create_engine('sqlite:///' + db_path, echo=False)

try:
    existing = pandas.read_sql('SELECT title, author FROM books', engine)
except Exception:
    # First run: the table doesn't exist yet, so start with some sample rows.
    existing = pandas.DataFrame({'title': ['Title 1', 'Title 2'],
                                 'author': ['Bob Roberts', 'Sam Spade']})

print("DBPATH:", db_path)

# ... do some stuff (add/edit/remove items from your dataframe)

existing.to_sql("books", engine, if_exists='replace', index=False)
Related
I am trying to use PyInstaller to create a .exe for someone else to run. The program uses mysql.connector to query a SQL database and return information from it. When I run the program in PyCharm and with pyinstaller --onedir, everything works fine. However, when I bundle the program with --onefile, I get this error and traceback when running:
Traceback (most recent call last):
File "main.py", line 266, in <module>
File "main.py", line 88, in main
File "main.py", line 108, in grabData
File "sql.py", line 12, in SQLconnect
File "mysql\connector\pooling.py", line 286, in connect
File "mysql\connector\connection_cext.py", line 101, in __init__
File "mysql\connector\abstracts.py", line 1095, in connect
File "mysql\connector\connection_cext.py", line 199, in _open_connection
TypeError: argument 6 must be str, not None
For reference, here is the relevant call in the _open_connection function in connection_cext.py:
self._cmysql = _mysql_connector.MySQL(
    buffered=self._buffered,
    raw=self._raw,
    charset_name=charset_name,
    connection_timeout=(self._connection_timeout or 0),
    use_unicode=self._use_unicode,
    auth_plugin=self._auth_plugin,
    plugin_dir=self._plugin_dir,
)
The __init__ of this file has this code snippet:
self._plugin_dir = os.path.join(
    os.path.dirname(os.path.abspath(_mysql_connector.__file__)),
    "mysql",
    "vendor",
    "plugin",
)
I believe this plugin directory is the reason my code is failing, but I do not know how to bypass this or set it so that it references the right file at runtime. I know pyinstaller creates a temp file at runtime, but I don't know how to get the module to use it as the plugin directory.
Any help would be greatly appreciated! I really really wanna keep it in --onefile mode for simplicity for the user, so any solutions that maintain that would be ideal. Thanks!
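One possible direction, offered only as a hedged sketch (the host/user/password/database values below are placeholders, and this is not a confirmed fix for the plugin_dir problem): side-step the C extension's plugin-directory lookup by forcing the pure-Python implementation, or name the auth plugin explicitly so nothing is passed as None:
import mysql.connector

conn = mysql.connector.connect(
    host='db.example.com',      # placeholder connection details
    user='reporting',
    password='secret',
    database='inventory',
    use_pure=True,              # use the pure-Python connector instead of the C extension
    # auth_plugin='mysql_native_password',  # alternative to try if you keep the C extension
)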
I've been running a script for several months now where I read and concat several excel exports using the following code:
import os
import pandas as pd

excel_list = []
files = os.listdir(os.path.abspath('exports/'))
for file in files:
    if file.startswith('ap_statistics_') and file.endswith('.xlsx'):
        excel_list.append(pd.read_excel('exports/' + file, sheet_name='Access Points'))

df = pd.concat(excel_list, axis=0, ignore_index=True)
This has worked just fine until this Saturday when I uploaded new exports to the CentOS Stream 9 VM where I have a cronjob running the script every hour.
Now I always get this error:
Traceback (most recent call last):
File "/root/projects/beacon_check_v8/main.py", line 310, in <module>
ap_check()
File "/root/projects/beacon_check_v8/main.py", line 260, in ap_check
siteaps_result = getaps()
File "/root/projects/beacon_check_v8/main.py", line 30, in getaps
excel_list.append(pd.read_excel('exports/' + file, sheet_name='Access Points'))
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 457, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 1419, in __init__
self._reader = self._engines[engine](self._io, storage_options=storage_options)
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/pandas/io/excel/_openpyxl.py", line 525, in __init__
super().__init__(filepath_or_buffer, storage_options=storage_options)
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/pandas/io/excel/_base.py", line 518, in __init__
self.book = self.load_workbook(self.handles.handle)
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/pandas/io/excel/_openpyxl.py", line 536, in load_workbook
return load_workbook(
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/openpyxl/reader/excel.py", line 317, in load_workbook
reader.read()
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/openpyxl/reader/excel.py", line 277, in read
self.read_strings()
File "/root/projects/beacon_check_v8/venv/lib64/python3.9/site-packages/openpyxl/reader/excel.py", line 143, in read_strings
with self.archive.open(strings_path,) as src:
File "/usr/lib64/python3.9/zipfile.py", line 1523, in open
raise BadZipFile("Bad magic number for file header")
zipfile.BadZipFile: Bad magic number for file header
I develop on my Windows 10 notebook using PyCharm with a Python 3.9 venv (the same as on the VM), and there the script continues to work just fine.
When researching online, all I found was that .pyc files can sometimes cause issues, so I created a completely new venv on the VM, installed all the libraries (netmiko, pandas, openpyxl, etc.), and tried running the script again both before and after deleting all .pyc files in the directory, but no luck.
I have extracted the Excel file header using the following code:
with open('exports/' + file, 'rb') as myexcel:
    print(myexcel.read(4))
Unfortunately it comes back with the same value in both my Windows venv and the CentOS venv:
b'PK\x03\x04'
I don't know if this header value is correct or not, but I can read the files on my Windows notebook just fine using pandas or Excel.
Any help would be greatly appreciated.
The issue was actually the program I used to transfer the files between my notebook and the VM, WinSCP. I still don't know exactly why this caused the error (an .xlsx is really a ZIP archive, so a non-binary transfer mode could corrupt it), but I was able to fix it by transferring the files directly with pscp instead.
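For anyone else hitting this BadZipFile error: since an .xlsx file is a ZIP archive, a quick way to confirm transfer corruption on the VM (a small diagnostic sketch, with a hypothetical file name) is to let the zipfile module verify the whole archive instead of only the first four magic bytes:
import zipfile

path = 'exports/ap_statistics_example.xlsx'  # hypothetical file name

print(zipfile.is_zipfile(path))  # only looks for the end-of-archive record
with zipfile.ZipFile(path) as zf:
    # Reads every member and checks headers/CRCs; returns None if intact,
    # otherwise the name of the first corrupted member.
    print(zf.testzip())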
This question already has answers here:
What exactly is current working directory?
I am using Gray Arrow's Dejavu Python package. I have installed all the required dependencies and other packages necessary to run my code. When running the following code:
import warnings
import json
warnings.filterwarnings("ignore")
from dejavu import Dejavu
from dejavu.recognize import FileRecognizer

with open('dejavu.cnf') as f:
    config = json.load(f)

djv = Dejavu(config)
djv.fingerprint_file('mp3/Sean-Fournier--Falling-For-You.mp3')
song = djv.recognize(FileRecognizer, 'mp3/Sean-Fournier--Falling-For-You.mp3')
print("from file recognized: {}\n".format(song))
I receive the following errors:
Traceback (most recent call last):
File "testing.py", line 13, in <module>
djv.fingerprint_file('mp3/Sean-Fournier--Falling-For-You.mp3')
File "D:\dejavu-master\dejavu\__init__.py", line 109, in fingerprint_file
song_name=song_name
File "D:\dejavu-master\dejavu\__init__.py", line 182, in _fingerprint_worker
channels, Fs, file_hash = decoder.read(filename, limit)
File "D:\dejavu-master\dejavu\decoder.py", line 51, in read
audiofile = AudioSegment.from_file(filename)
File "C:\Program Files\Python37\lib\site-packages\pydub\audio_segment.py", line 665, in from_file
info = mediainfo_json(orig_file)
File "C:\Program Files\Python37\lib\site-packages\pydub\utils.py", line 263, in mediainfo_json
res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE)
File "C:\Program Files\Python37\lib\subprocess.py", line 775, in __init__
restore_signals, start_new_session)
File "C:\Program Files\Python37\lib\subprocess.py", line 1178, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
I have completely reinstalled Python 3.7 and all of the packages and dependencies to try to correct this problem.
I do not understand which file cannot be found, and my research hasn't turned up an answer to this issue. I truly appreciate any help and knowledge that can be shared with me on this topic.
Try placing the "Sean-Fournier--Falling-For-You.mp3" file in the exact directory where your script is; I don't think the folder reference to the mp3 folder is being handled correctly :)
djv.fingerprint_file('mp3/Sean-Fournier--Falling-For-You.mp3')
song = djv.recognize(FileRecognizer, 'mp3/Sean-Fournier--Falling-For-You.mp3')
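Another thing worth checking, offered as a hedged suggestion beyond the answer above: the traceback fails inside subprocess.Popen in pydub's mediainfo_json, which is pydub trying to launch ffmpeg/ffprobe, so [WinError 2] can also mean that executable is not on PATH rather than the mp3 being missing. A quick check:
import shutil

# If either of these prints None, pydub cannot launch the converter and
# AudioSegment.from_file() fails with exactly this FileNotFoundError.
print(shutil.which("ffmpeg"))
print(shutil.which("ffprobe"))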
After a lengthy search, I haven't found an example of a Dataflow / Beam pipeline that spans several files. Beam docs do suggest a file structure (under the section "Multiple File Dependencies"), but the Juliaset example they give has in effect a single code/source file (and the main file that calls it). Based on the Juliaset example, I need a similar file structure:
juliaset/__init__.py
juliaset/juliaset.py # actual code
juliaset/some_conf.py
__init__.py
juliaset_main.py
setup.py
Now I want to import .some_conf from juliaset/juliaset.py, which works when run locally but gives me an error when run on Dataflow:
INFO:root:2017-12-15T17:34:09.333Z: JOB_MESSAGE_ERROR: (8cdf3e226105b90a): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 706, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 446, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 247, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 363, in load_session
module = unpickler.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1133, in load_reduce
value = func(*args)
File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 767, in _import_module
return getattr(__import__(module, None, None, [obj]), obj)
ImportError: No module named package_name.juliaset.some_conf
A full working example would be very much appreciated!
Can you verify that your setup.py contains a structure like this:
import setuptools

setuptools.setup(
    name='My Project',
    version='1.0',
    install_requires=[],
    packages=setuptools.find_packages(),
)
Import your modules like from juliaset.juliaset import SomeClass
And when you call the Python script, use python -m juliaset_main (without the .py)
Not sure if you already tried this, but just to be sure.
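One extra detail, added here as a hedged sketch rather than part of the original answer: the setup.py is only shipped to the Dataflow workers if the pipeline is started with the setup_file option (the project and bucket names below are placeholders, and the exact import path can vary across Beam versions):
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# setup_file tells Dataflow to build and install your package (juliaset and
# its modules, e.g. some_conf) on every worker before running the pipeline.
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-gcp-project',            # placeholder
    temp_location='gs://my-bucket/tmp',  # placeholder
    setup_file='./setup.py',
)

with beam.Pipeline(options=options) as p:
    p | beam.Create([1, 2, 3]) | beam.Map(lambda x: x * x)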
I'm attempting to download all the files in an SFTP directory to a local folder using the pysftp library. My code looks like this:
import pysftp

sftp = pysftp.Connection('server', username='name', password='password')
sftp.get_d('Daily_Reports', '/home/jchrysostom/Documents/SupplyChain/Daily_Reports/')
Daily_Reports is a folder that exists on the SFTP server - I have verified this. I have also verified that /home/jchrysostom/Documents/SupplyChain/Daily_Reports/ exists. I can cd to it in terminal with no problems.
However, when I run this python script, I get the following error: IOError: [Errno 2] File not found.
Any ideas what may be causing this?
UPDATE: A little investigation shows that the files actually have downloaded. In fact, all of them have downloaded just fine. However, I'm unable to run the rest of the script, because it's erroring out for some reason. Is this just a bug in the library?
UPDATE 2 - Full Traceback, as requested:
Traceback (most recent call last):
File "supplychain.py", line 20, in <module>
sftp.get_d('Daily_Reports','/home/jchrysostom/Documents/SupplyChain/Daily_Reports/')
File "/usr/local/lib/python2.7/dist-packages/pysftp.py", line 255, in get_d
preserve_mtime=preserve_mtime)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/pysftp.py", line 497, in cd
self.cwd(original_path)
File "/usr/local/lib/python2.7/dist-packages/pysftp.py", line 510, in chdir
self._sftp.chdir(remotepath)
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 580, in chdir
if not stat.S_ISDIR(self.stat(path).st_mode):
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 413, in stat
t, msg = self._request(CMD_STAT, path)
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 729, in _request
return self._read_response(num)
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 776, in _read_response
self._convert_status(msg)
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 802, in _convert_status
raise IOError(errno.ENOENT, text)
IOError: [Errno 2] File not found
As best I can tell, this is a bug in pysftp. The files are being copied successfully, but (at least according to the traceback here) the library blows up when it tries to change back to the original remote working directory on the SFTP server.
The workaround is to iterate over the files in the directory and get() each one individually...
for filename in sftp.listdir('Daily_Reports'):
    sftp.get('Daily_Reports/' + filename,
             localpath='/home/jchrysostom/Documents/SupplyChain/Daily_Reports/' + filename)
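If you would rather keep get_d, another option (just a sketch based on the traceback above, not a confirmed pysftp fix) is to treat the failing directory change as non-fatal, since the downloads themselves complete:
import errno
import pysftp

with pysftp.Connection('server', username='name', password='password') as sftp:
    try:
        sftp.get_d('Daily_Reports', '/home/jchrysostom/Documents/SupplyChain/Daily_Reports/')
    except IOError as e:
        # The files have already been copied; only the chdir back to the
        # original remote directory failed, so ignore that specific error.
        if e.errno != errno.ENOENT:
            raise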