I am running a Python script that uses bcp to export data from my local drive to SQL Server. The script works fine when I run it manually in a Jupyter notebook, but when I create a batch file to run the task automatically, I get an error.
Below is the batch file:
#echo off
"C:\ProgramData\Anaconda3\python.exe" "C:\Users\Atom\Desktop\Untitled1.ipynb" %*
pause
And the error that I get while running the batch file:
Traceback (most recent call last):
File "C:\Users\Atom\Desktop\Untitled1.ipynb", line 40, in <module>
"execution_count": null,
NameError: name 'null' is not defined
Kindly advise what could be the issue.
According to the IPython website (emphasis mine):
Notebook documents contain the inputs and outputs of an interactive
session as well as additional text that accompanies the code but is
not meant for execution. In this way, notebook files can serve as a
complete computational record of a session, interleaving executable
code with explanatory text, mathematics, and rich representations of
resulting objects. These documents are internally JSON files and are
saved with the .ipynb extension.
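In other words, an .ipynb file is a JSON document, not Python source. Pointing python.exe at it makes the interpreter parse JSON as Python, and it trips over the first JSON literal it meets, hence NameError: name 'null' is not defined. The usual fix is to let Jupyter's own tooling do the work, e.g. jupyter nbconvert --to script Untitled1.ipynb, and then have the batch file run the generated .py. As a rough illustration of what that conversion does (a self-contained sketch, not the real nbconvert implementation), the code cells can be pulled out of the JSON by hand:

```python
import json
import os
import subprocess
import sys
import tempfile

# A minimal notebook: .ipynb files are JSON documents, not Python source.
nb = {
    "cells": [
        {"cell_type": "code", "execution_count": None,
         "metadata": {}, "outputs": [],
         "source": ["print('hello from the notebook')\n"]},
    ],
    "metadata": {}, "nbformat": 4, "nbformat_minor": 5,
}
nb_path = os.path.join(tempfile.gettempdir(), "demo.ipynb")
with open(nb_path, "w") as f:
    json.dump(nb, f)

# Extract the code cells into a plain .py script that python.exe can run.
with open(nb_path) as f:
    doc = json.load(f)
code = "\n".join(
    "".join(cell["source"])
    for cell in doc["cells"]
    if cell["cell_type"] == "code"
)
py_path = nb_path.replace(".ipynb", ".py")
with open(py_path, "w") as f:
    f.write(code)

result = subprocess.run([sys.executable, py_path],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

In practice, prefer nbconvert (or papermill) over hand-rolling this, since real notebooks contain markdown cells, magics, and metadata that a naive extraction mishandles.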
I have 10 pytest files, and the requirement is to bundle them into a single executable. The project contains a properties file that holds the data the tests need. From the console I run the tests via python -m pytest, but PyInstaller asks for a single .py file to convert. I have tried but found no solution, and even when I do produce an executable, it never runs. Please help, it's important.
In Java I used to create a jar file that could be launched from a .bat file and read its data from a properties file, and that worked.
If something similar can be achieved in Python, it would be welcome.
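For what it's worth, the usual way to give PyInstaller the single .py entry point it wants is a small wrapper script that invokes pytest programmatically via pytest.main(). The sketch below is self-contained, so it writes a throwaway test file first; in a real runner.py (a hypothetical name) you would instead point pytest at your project's tests directory and forward sys.argv[1:]:

```python
import os
import tempfile

import pytest  # pytest.main() is pytest's documented programmatic entry point

# Throwaway test file so this sketch runs on its own; in the real wrapper
# you would pass your own tests directory instead.
tests_dir = tempfile.mkdtemp()
with open(os.path.join(tests_dir, "test_smoke.py"), "w") as f:
    f.write("def test_ok():\n    assert 1 + 1 == 2\n")

# Run the tests exactly as `python -m pytest -q <dir>` would.
exit_code = pytest.main(["-q", tests_dir])
print("pytest exit code:", int(exit_code))
```

PyInstaller would then be pointed at the wrapper (e.g. pyinstaller --onefile runner.py), and the properties file can be shipped next to the executable, or bundled with PyInstaller's --add-data option, so the tests can still read it at runtime.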
I'd like to include code from another file in another Jupyter .ipynb file on the AWS Elastic MapReduce platform.
But many of the methods I have seen online for including utility functions or common Python code, methods that would seem to work outside of AWS, don't work inside the hosting environment for EMR-based notebooks. I'm assuming this is due to file-system security or server restrictions. If someone knows of an example of including code from a .py file and/or .ipynb file in the same directory that works on AWS, I would love to see it.
This method did not work for me (find_notebook returns None):
https://jupyter-notebook.readthedocs.io/en/4.x/examples/Notebook/rstversions/Importing%20Notebooks.html
Nor did this library/method:
ipynb import another ipynb file
Is there an AWS "approved" or recommended way of including common Python/PySpark code into a Jupyter Notebook?
Note: this question is related:
How to import from another ipynb file in EMR jupyter notebook which runs a PySpark kernel?
I have ssh'd into the master server and installed various packages, including ipynb.
The module installed fine and the environment can see it, but the overall technique still did not work.
Error:
An error was encountered:
"'name' not in globals"
Traceback (most recent call last):
KeyError: "'name' not in globals"
I've got a Python script for retrieving PDF documents from a remote system and storing them locally. I run it manually from a Linux command line (using Ubuntu).
Whenever I store the file locally I would like to somehow add it to Recent Files so that it shows in the file dialog without me having to navigate through the directory structure to it.
How is this done? Is it some dbus magic? I don't need a complete solution, just a pointer to where to look, because I have no idea...
Thanks!
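One pointer (an assumption based on how GTK desktops work, not something confirmed in this thread): GTK-based file dialogs populate their Recent list from ~/.local/share/recently-used.xbel, an XBEL (XML bookmark) file, and the supported way to add an entry is Gtk.RecentManager.add_item() via PyGObject. As a stdlib-only sketch of what one entry in that file looks like (the path and document name below are hypothetical, and the sketch writes a standalone file rather than touching the real one):

```python
import datetime
import xml.etree.ElementTree as ET

# The real file lives at ~/.local/share/recently-used.xbel; writing it
# directly is fragile, so prefer Gtk.RecentManager in real code.
xbel_path = "recently-used.xbel"
now = datetime.datetime.now(datetime.timezone.utc).isoformat()

# A minimal XBEL document with a single bookmark entry.
root = ET.Element("xbel", version="1.0")
ET.SubElement(root, "bookmark", {
    "href": "file:///home/user/docs/report.pdf",  # hypothetical document
    "added": now,
    "modified": now,
    "visited": now,
})
ET.ElementTree(root).write(xbel_path, xml_declaration=True, encoding="UTF-8")
```

With PyGObject installed, the equivalent one-liner would be along the lines of Gtk.RecentManager.get_default().add_item("file:///path/to/file.pdf"), which also notifies running applications over the session bus.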
I am getting the following error when I run my python script:
/Documents/stage/crocus/python$ python bonaiguaforcing.py
sh: 1: ncks: not found
sh: 1: ncatted: not found
Traceback (most recent call last):
File "bonaiguaforcing.py", line 142, in <module>
creatforc('/home/chomette/Documents/stage/crocus/bonaigua2.txt','/home/chomette/Documents/stage/crocus/FORCING_bonaigua.nc')
File "bonaiguaforcing.py", line 46, in creatforc
meteo=netCDF4.Dataset(net_out,'a')
File "netCDF4/_netCDF4.pyx", line 1746, in netCDF4._netCDF4.Dataset.__init__ (netCDF4/_netCDF4.c:10983)
RuntimeError: No such file or directory
In my Python script I create a netCDF file to copy data into, and then I create a second netCDF file with a new variable. It seems that Python didn't find the first netCDF file it created... but I'm not sure.
Thanks for your help =)
Without seeing the code producing the error, this looks like an environment definition problem. Your shell can't find where NCO is installed (if you don't have NCO then this is a dependency problem and you need to install it for your script to work).
Have you tried in bash :
which ncks
which ncatted
If these are not in your path, you will need to add aliases pointing to them in your ~/.bashrc. From your home directory, open the file (with vi or another editor):
vi .bashrc
then add to the file:
alias ncks='/usr/bin/ncks'
alias ncatted='/usr/bin/ncatted'
You will need to change /usr/bin to the location of your NCO installation. Also, don't forget to run source ~/.bashrc before testing your program again. You can also just type the aliases into the shell, but you would need to do that each time you open a new terminal.
Updated answer (based on your comment below):
Now it appears that your script is not finding part of the netCDF4 module (the part written in Cython, hence the .pyx extension). You'll need to make sure that your environment is correctly defined and that the netCDF4 module has been correctly compiled. Before going any further, type the following commands in a terminal:
python
from netCDF4 import Dataset
to make sure that the module exists. If that works, then you can follow the instructions on https://netcdf4-python.googlecode.com/svn/trunk/docs/netCDF4-module.html to create a dataset in order to make sure that the module was correctly compiled.
For information: are you porting the Crocus model to a new machine? If so, that might explain why you are missing so many dependencies (modules, libraries and operators that your code needs in order to function). If not, there may be another error in your script that is making this look like a dependency problem. Please post the part of your script that generates the Crocus forcings if you do not think this is an environment/dependency problem (i.e. if someone has already run the same script successfully on your machine). Thanks!
You're seeing a RuntimeError because the file named by net_out does not exist: the 'a' (append) mode requires that the file already exist.
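A defensive sketch of the fix, using the filename from the traceback and assuming the netCDF4 package is available: check whether the file exists and fall back to 'w' (create) when it doesn't.

```python
import os

def open_forcing(net_out):
    """Open net_out for appending, creating it if it does not exist yet.

    netCDF4.Dataset(path, 'a') raises RuntimeError for a missing file,
    which is exactly the traceback above.
    """
    import netCDF4  # assumes the netCDF4 package is installed
    mode = "a" if os.path.exists(net_out) else "w"
    return netCDF4.Dataset(net_out, mode)

# The mode selection itself needs nothing beyond the stdlib:
net_out = "FORCING_bonaigua.nc"
print("a" if os.path.exists(net_out) else "w")
```

If the file is supposed to have been created by an earlier step (the ncks/ncatted calls that the shell could not find), fixing the NCO installation may make the append succeed without any code change.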
A bit of background: I work for a VFX studio, and in the past we have had to edit .py files for other programs using Notepad or Notepad++, but no one ever actually ran Python.
I have now been rolling out Python automation scripts, and they're working great, except for one problem: any machine that previously had .py files associated with something other than Python fails.
The script is called with an argument:
myScript.py <argument>
then I use:
print sys.argv
versionName = sys.argv[1]
This works great on all machines that never had Python files associated with anything else. However, on machines where Python files were previously associated with another application, the script won't read the argument and I get a "list index out of range" error.
The print line shows it is not receiving the input properly either.
Any thoughts on how to fix this?
Edit: Script returns this when run:
z:\pythonScripts>Make_version_1.py test
['Z:\\pythonscripts\\Make_Version_1.py']
Traceback (most recent call last):
File "Z:\PythonScripts\Make_version_1.py", line 20, in <module>
versionName = sys.argv[1]
IndexError: list index out of range
This error is not returned on the majority of the machines in the office, just the ones where .py files had been associated with another program before Python 2.7.6 was installed, so I know the code works.
You need to tell Windows that you want to pass arguments to python. Open a command prompt, and do this:
assoc .py=PythonFile
ftype PythonFile=python.exe %1 %*
Just wanted to share that I fixed the issue. Apparently pre-associating .py files breaks the setup of the Windows PATH during installation. On my main machine I didn't need to adjust anything, but I have now fixed the other machines by adding ;C:\Python27;C:\Python27\Scripts to the PATH environment variable. Now it works fine.