I am trying to convert multiple raster files to NetCDF files using an ArcPy script. When I run the below script, I get the following error message:
Message File Name Line Position
Traceback
<module> <module1> 19
RasterToNetCDF C:\Program Files\ArcGIS\Desktop10.3\ArcPy\arcpy\md.py 253
ExecuteError: Failed to execute. Parameters are not valid.
ERROR 000840: The value is not a Raster Layer.
ERROR 000840: The value is not a Raster Catalog.
Failed to execute (RasterToNetCDF).
Python script:
# Import system modules
import arcpy
from arcpy import env
# Set environment settings
env.workspace = r"D:\2012A"
# Set local variables
inRaster = r"D:\2012A"
outNetCDFFile = r"D:\2012A/nppnetcdf.nc"
variable = "elevation"
units = "meter"
XDimension = "x"
YDimension = "y"
bandDimension = ""
# Process: RasterToNetCDF
arcpy.RasterToNetCDF_md(inRaster, outNetCDFFile, variable, units,
XDimension, YDimension, bandDimension)
@Erica answered why you are receiving an error, but if you want to perform your conversion for each raster dataset within a directory, you'll first have to create a list of the rasters in it. This can be done with something like this:
rasterlist = arcpy.ListRasters()
## other variables, as you have already defined them in your code
for raster in rasterlist:
    arcpy.RasterToNetCDF_md(raster, <other parameters as defined above>)
So to implement this:
# Set environment settings
env.workspace = r"D:\2012A"
# Set local variables
# inRaster is not needed here; each raster listed from the workspace is used
# directly, and an output name is generated per raster inside the loop below
variable = "elevation"
units = "meter"
XDimension = "x"
YDimension = "y"
bandDimension = ""
rasterlist = arcpy.ListRasters()
# Process: RasterToNetCDF
for raster in rasterlist:
    # give each raster its own output file so successive iterations do not overwrite it
    outNetCDFFile = env.workspace + "\\" + raster.split(".")[0] + ".nc"
    arcpy.RasterToNetCDF_md(raster, outNetCDFFile, variable, units,
                            XDimension, YDimension, bandDimension)
Two problems stand out to me.
First, your file path here, outNetCDFFile = r"D:\2012A/nppnetcdf.nc", mixes a forward slash with backslashes. Use backslashes \ throughout for a consistent Windows path.
Second, and what is more likely causing the error -- inRaster = r"D:\2012A" appears to be a directory. You can't pass just a directory to the RasterToNetCDF_md tool -- the input parameter has to be a raster layer. Run the MakeRasterLayer_management tool on a raster file (not on a directory!) to create a raster layer, and pass that result to RasterToNetCDF_md.
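For example, a minimal sketch of that fix, assuming the folder holds a raster file called nppraster.tif (that file name is only an illustration):
import arcpy
from arcpy import env

env.workspace = r"D:\2012A"

# Make a raster layer from an actual raster file, not from the folder itself
arcpy.MakeRasterLayer_management("nppraster.tif", "npp_lyr")

# Pass the layer (or the raster file name) as the input of the conversion tool
arcpy.RasterToNetCDF_md("npp_lyr", r"D:\2012A\nppnetcdf.nc", "elevation", "meter",
                        "x", "y", "")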
How do I set different Keras backends in different conda environments? If I change the backend to tensorflow in keras.json for one environment, the Keras backend becomes tensorflow in every other Python environment too, because there is only one keras.json in my documents.
To use different Keras backends in two Anaconda environments, 'env1' and 'env2':
Activate the first environment 'env1'
Import keras from Python with the default backend (if it fails to load tensorflow, for example, install tensorflow in that environment first)
In the ~ folder, a '.keras' folder will be created which contains the keras.json file
For the other environment, create a copy of the '.keras' folder as '.keras1'
Change the keras.json file in that folder as per requirements (the 'backend' field)
For using that config in 'env2' go to '~/anaconda3/envs/env2/lib/pythonx.x/site-packages/keras/backend' and edit the __init__.py file
Make the changes marked with ##
You will be able to import keras with different backends in env1 and env2
from __future__ import absolute_import
from __future__ import print_function
import os
import json
import sys
import importlib
from .common import epsilon
from .common import floatx
from .common import set_epsilon
from .common import set_floatx
from .common import cast_to_floatx
from .common import image_data_format
from .common import set_image_data_format

# Set Keras base dir path given KERAS_HOME env variable, if applicable.
# Otherwise either ~/.keras or /tmp.
if 'KERAS_HOME' in os.environ:
    _keras_dir = os.environ.get('KERAS_HOME')
else:
    _keras_base_dir = os.path.expanduser('~')
    if not os.access(_keras_base_dir, os.W_OK):
        _keras_base_dir = '/tmp'
    _keras_dir = os.path.join(_keras_base_dir, '.keras1')  ##

# Default backend: TensorFlow.
_BACKEND = 'tensorflow'

# Attempt to read Keras config file.
_config_path = os.path.expanduser(os.path.join(_keras_dir, 'keras.json'))
if os.path.exists(_config_path):
    try:
        with open(_config_path) as f:
            _config = json.load(f)
    except ValueError:
        _config = {}
    _floatx = _config.get('floatx', floatx())
    assert _floatx in {'float16', 'float32', 'float64'}
    _epsilon = _config.get('epsilon', epsilon())
    assert isinstance(_epsilon, float)
    _backend = _config.get('backend', _BACKEND)
    _image_data_format = _config.get('image_data_format',
                                     image_data_format())
    assert _image_data_format in {'channels_last', 'channels_first'}
    set_floatx(_floatx)
    set_epsilon(_epsilon)
    set_image_data_format(_image_data_format)
    _BACKEND = _backend

# Save config file, if possible.
if not os.path.exists(_keras_dir):
    try:
        os.makedirs(_keras_dir)
    except OSError:
        # Except permission denied and potential race conditions
        # in multi-threaded environments.
        pass

if not os.path.exists(_config_path):
    _config = {
        'floatx': floatx(),
        'epsilon': epsilon(),
        'backend': _BACKEND,
        'image_data_format': image_data_format()
    }
    try:
        with open(_config_path, 'w') as f:
            f.write(json.dumps(_config, indent=4))
    except IOError:
        # Except permission denied.
        pass

# Set backend based on KERAS_BACKEND flag, if applicable.
if 'KERAS_BACKEND' in os.environ:
    _backend = os.environ['KERAS_BACKEND']
    _BACKEND = _backend

# Import backend functions.
if _BACKEND == 'cntk':
    sys.stderr.write('Using CNTK backend\n')
    from .cntk_backend import *
elif _BACKEND == 'theano':
    sys.stderr.write('Using Theano backend.\n')
    from .theano_backend import *
elif _BACKEND == 'tensorflow':
    sys.stderr.write('Using TensorFlow backend.\n')
    from .tensorflow_backend import *
else:
    # Try and load external backend.
    try:
        backend_module = importlib.import_module(_BACKEND)
        entries = backend_module.__dict__
        # Check if valid backend.
        # Module is a valid backend if it has the required entries.
        required_entries = ['placeholder', 'variable', 'function']
        for e in required_entries:
            if e not in entries:
                raise ValueError('Invalid backend. Missing required entry : ' + e)
        namespace = globals()
        for k, v in entries.items():
            # Make sure we don't override any entries from common, such as epsilon.
            if k not in namespace:
                namespace[k] = v
        sys.stderr.write('Using ' + _BACKEND + ' backend.\n')
    except ImportError:
        raise ValueError('Unable to import backend : ' + str(_BACKEND))


def backend():
    """Publicly accessible method
    for determining the current backend.

    # Returns
        String, the name of the backend Keras is currently using.

    # Example
    ```python
        >>> keras.backend.backend()
        'tensorflow'
    ```
    """
    return _BACKEND
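With '.keras' left in place for env1 and '.keras1' picked up by the patched file in env2, a quick check in either activated environment is simply:
import keras                      # the import banner already names the backend
print(keras.backend.backend())    # e.g. 'tensorflow' in env1, your other backend in env2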
Here is what I did for my own purposes, with the same logic as Kedar's answer, but on a Windows install (and Keras version) for which locations and file names may differ:
1/ Set a specific keras.json file, in a folder of your targeted Anaconda environment. Modify the "backend" value.
2/ Then force the 'load_backend.py' (the one specific to your anaconda env.) to load this specific keras.json.
Also, force the "default backend" to the one you want in that very same file.
=======================================================
IN DETAIL:
1.1 Open the Anaconda environment folder for which you want a specific backend. In my case it's C:\ProgramData\Anaconda3\envs\[MyAnacondaEnvironment]\
1.2 Here create a folder .keras, and in that folder copy or create a file keras.json (I copied mine from C:\Users\[MyWindowsUserProfile]\.keras\keras.json).
Now in that file, change the backend to the one you want; I've chosen 'cntk' for some tests. The file's content should now look like this:
{
"floatx": "float32",
"epsilon": 1e-07,
"backend": "cntk",
"image_data_format": "channels_last"
}
And the file's name and location look like C:\ProgramData\Anaconda3\envs\[MyAnacondaEnvironment]\.keras\keras.json
2.1 Now open the file 'load_backend.py' specific to the environment you are customizing, located here (in my case) C:\ProgramData\Anaconda3\envs\[MyAnacondaEnvironment]\Lib\site-packages\keras\backend
2.2 Around lines 17 to 25 in my Keras version (2.3.1), the file loads the backend from the configuration file it locates with the help of your environment variables or your current Windows user profile. That's why your backend is currently shared across environments.
Get rid of this by forcing 'load_backend.py' to look up which backend to load directly in your environment-specific configuration file (the one you created at step 1.2).
For instance, at line 26 of that 'load_backend.py' file (line 26 in my case; in any case, right after the attempt to load the configuration file automatically), add this line (and customize it for your own location):
_keras_dir = r'C:\ProgramData\Anaconda3\envs\[MyAnacondaEnvironment]\.keras' ##Force script to get configuration from a specific file
3.1 Then replace (line 28 in my case, in any case right after you forced the _keras_dir path) the default backend _BACKEND = 'tensorflow' with _BACKEND = 'cntk'.
You should be done
One solution is to create different users for different environments and put different keras.json files for both:
$HOME/.keras/keras.json
This way you'll be able to change any keras parameter independently.
If you only need to change the backend, it is easier to use the KERAS_BACKEND env variable. The following command will use tensorflow, no matter what's in keras.json:
$ KERAS_BACKEND=tensorflow python -c "from keras import backend"
Using TensorFlow backend.
So you can start a new shell terminal, run export KERAS_BACKEND=tensorflow in it, and all subsequent commands will use tensorflow. You can go further and set this variable per conda env activation, as discussed in this question (if you need it permanently):
$PREFIX/etc/conda/activate.d
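If you prefer to set it from inside a script rather than from the shell, the same idea works through os.environ, as long as it runs before the first Keras import (a small sketch, not the only way to do it):
import os
os.environ["KERAS_BACKEND"] = "tensorflow"   # must be set before keras is imported

import keras                                  # prints "Using TensorFlow backend."
print(keras.backend.backend())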
My specific issue is exactly the title. I have a large raster processing script in python and need to perform a clump function which I cannot find in gdal / python nor have I figured out how to 'write it' myself.
I am getting better with Python all the time but am still newish, and I am learning R for this task (installed R version 3.4.1, 2017-06-30).
After spending a little time learning R, and with help on Stack Overflow, I was able to get rpy2 installed within Python and have been able to perform several 'tests' of rpy2.
The most helpful info in getting rpy2 to respond, taken from another Stack answer, was to establish where your R is within your Python session or script, as below:
import os
os.environ['PYTHONHOME'] = r'C:\Python27\ArcGIS10.3\Scripts\new_ve_folder\Scripts'
os.environ['PYTHONPATH'] = r'C:\Python27\ArcGIS10.3\Scripts\new_ve_folder\Lib\site-packages'
os.environ['R_HOME'] = r'C:\Program Files\R\R-3.4.1'
os.environ['R_USER'] = r'C:\Python27\ArcGIS10.3\Scripts\new_ve_folder\Lib\site-packages\rpy2'
However, I cannot get the main tests listed in the documentation (http://rpy.sourceforge.net/rpy2/doc-2.1/html/overview.html) to work.
import rpy2.robjects.tests
import unittest
# the verbosity level can be increased if needed
tr = unittest.TextTestRunner(verbosity = 1)
suite = rpy2.robjects.tests.suite()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'suite'
However:
import rpy2.robjects as robjects
pi = robjects.r['pi']
pi[0]
works just fine, as do a few other rpy2.robjects tests I have found. I can also define R functions in a string (string = ''' f <- function ... ''') and call them from Python.
If i use:
python -m 'rpy2.tests'
I get the following error.
r\Scripts>python -m 'rpy2.tests'
r\Scripts\python.exe: No module named 'rpy2
Documentation states: On Python 2.6, this should return that all tests were successful. I am using Python 2.7 and I also tried this in Python 3.3.
My script for clump starts as below:
I do not want to have to actually install the packages each time I run the script, as they are already installed in my R home.
I would like to use my Python variables if possible.
I need to figure out why rpy2 does not respond as the documentation indicates, or why I am getting errors, and then figure out the correct way to write the clump portion of my Python script.
import rpy2.robjects.packages as rpackages      # needed for isinstalled() / importr()
from rpy2.robjects.vectors import StrVector     # needed for install_packages()

packageNames = ('raster', 'rgdal')
if all(rpackages.isinstalled(x) for x in packageNames):
    have_packages = True
else:
    have_packages = False
if not have_packages:
    utils = rpackages.importr('utils')
    utils.chooseCRANmirror(ind=1)
    packnames_to_install = [x for x in packageNames if not rpackages.isinstalled(x)]
    if len(packnames_to_install) > 0:
        utils.install_packages(StrVector(packnames_to_install))
from rpy2.robjects.packages import importr
import rpy2.robjects as robjects
There are several ways I have found to call the raster and clump functions from R; however, if I cannot get rpy2 to respond correctly, I am not going to get any of these to work. But since several other tests do work, I am not certain that is the problem.
raster = robjects.r['raster']
raster = importr('raster')
clump = raster.clump
clump = robjects.r.clump
type(raster.clump)
tempDIR = r"C:\Users\script_out\temp"
slope_recode = os.path.join(tempDIR, "step2b_input.img")
outfile = os.path.join(tempDIR, "Rclumpfile.img")
raster.clump(slope_recode, filename=outfile, direction=4, gaps=True, format='HFA', overwrite=True)
Which results in a large number of errors:
Traceback (most recent call last):
File "C:/Python27/ArcGIS10.3/Scripts/new_ve_folder/Scripts/rpy2_practice.py", line 97, in <module>
raster.clump(slope_recode, filename=outfile, direction=4, gaps=True, format='HFA', overwrite=True)
File "C:\Python27\ArcGIS10.3\Scripts\new_ve_folder\lib\site-packages\rpy2\robjects\functions.py", line 178, in __call__
return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
File "C:\Python27\ArcGIS10.3\Scripts\new_ve_folder\lib\site-packages\rpy2\robjects\functions.py", line 106, in __call__
res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function 'clump' for signature '"character"'
Issues:
testing rpy2 in the command line and in a script (both produce errors, but I am still able to use basic rpy2)
importing the R packages so as not to install them each time
finally getting my clump script called correctly
If I have missed something basic, please point me in the right direction. Thanks all.
For your first problem, replace suite = rpy2.robjects.tests.suite() with suite = rpy2.tests.suite().
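That is, reusing the runner from your snippet with the corrected module path (assuming your rpy2 release still ships a tests suite() helper):
import unittest
import rpy2.tests

tr = unittest.TextTestRunner(verbosity=1)   # increase verbosity if needed
tr.run(rpy2.tests.suite())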
For your third problem (getting clump to work correctly), you need to create a RasterLayer object in R using the image. I'm not familiar with the raster package, so I can't give you the exact steps.
I will point out the arcpy module is not "pythonic". Normally, strings of filenames are just strings in Python. arcpy is weird in using plain strings to represent objects like map layers.
In your example, slope_recode is just a string. That's why you got the error unable to find an inherited method for function 'clump' for signature '"character"'. It means slope_recode was passed to R as a character value (which it is), and the clump function expects a RasterLayer object. It doesn't know how to handle character values.
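So the fix is to build the RasterLayer on the R side first and hand that object to clump. A rough sketch along those lines, reusing your slope_recode and outfile variables (the argument values are just the ones from your attempt):
from rpy2.robjects.packages import importr

raster = importr('raster')                # R 'raster' package
recode = raster.raster(slope_recode)      # RasterLayer built from the file path
raster.clump(recode, filename=outfile, direction=4, gaps=True,
             format='HFA', overwrite=True)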
I got this all to work with the below code.
import os
import gc
import time
import subprocess
import warnings
os.environ['PATH'] = os.path.join(scriptPath, 'path\\my_VE\\R\\R-3.4.2\\bin\\x64')
os.environ['PYTHONHOME'] = os.path.join(scriptPath, 'path\\my_VE\\Scripts\\64bit')
os.environ['PYTHONPATH'] = os.path.join(scriptPath, 'path\\my_VE\\Lib\\site-packages')
os.environ['R_HOME'] = os.path.join(scriptPath, 'path\\my_VE\\R\\R-3.4.2')
os.environ['R_USER'] = os.path.join(scriptPath, 'path\\my_VE\\Scripts\\new_ve_folder\\Scripts\\rpy2')
#
import platform
z = platform.architecture()
print(z)
## above will confirm you are working on 64 bit
gc.collect()
## this code snippet will tell you which library is being read
command = 'Rscript'
cmd = [command, '-e', ".libPaths()"]
print(cmd)
x = subprocess.Popen(cmd, shell=True)
x.wait()
import rpy2.robjects.packages as rpackages
import rpy2.robjects as robjects
from rpy2.robjects import r
import rpy2.interactive.packages
from rpy2.robjects import lib
from rpy2.robjects.lib import grid
# # grab r packages
print("loading packages from R")
## fails at this point with the following error
## Error: cannot allocate vector of size 232.6 Mb when working with large rasters
rpy2.robjects.packages.importr('raster')
rpy2.robjects.packages.importr('rgdal')
rpy2.robjects.packages.importr('sp')
rpy2.robjects.packages.importr('utils')
# rpy2.robjects.packages.importr('memory')
# rpy2.robjects.packages.importr('dplyr')
rpy2.robjects.packages.importr('data.table')
grid.activate()
# set python variables for R code names
raster = robjects.r['raster']
writeRaster = robjects.r['writeRaster']
# setwd = robjects.r['setwd']
clump = robjects.r['clump']
# head = robjects.r['head']
crs = robjects.r['crs']
dim = robjects.r['dim']
projInfo = robjects.r['projInfo']
slope_recode = os.path.join(tempDIR, "_lope_recode.img")
outfile = os.path.join(tempDIR, "Rclumpfile.img")
recode = raster(slope_recode) # this is taking the image and reading it into R raster package
## https://stackoverflow.com/questions/47399682/clear-r-memory-using-rpy2
gc.collect() # No noticeable effect on memory usage
time.sleep(2)
gc.collect() # Finally, memory usage drops
R = robjects.r
R('memory.limit()')
R('memory.limit(size = 65535)')
R('memory.limit()')
print"starting Clump with rpy2"
clump(recode, filename=outfile, direction=4, gaps="True", format="HFA")
final = raster(outfile)
final = crs("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0,-0,-0,-0,0 +no_defs")
print ("clump file created, CRS accurate, next step")
I would like to know what the content of a TDMS file produced by LabVIEW is.
Following this site, I write in Python:
import numpy as np
from nptdms import TdmsFile
from nptdms import tdms
#read a tdms file
filenameS = "RESULTS.tdms"
tdms_file = TdmsFile(filenameS)
tdmsinfo [--properties] tdms_file
I receive the following error:
tdmsinfo [--properties] tdms_file
^
SyntaxError: invalid syntax
I do not know how to fix it.
Thank you for your help :)
What you are looking for is:
First, create a TDMS object from the file:
tdms_file = TdmsFile("C:\\Users\\XXXX\\Desktop\\xx Python\\XXXX.tdms")
then get the group names with:
tdms_groups = tdms_file.groups()
Then, to figure out which group names you have in the file, just write
tdms_groups
It will print the following:
['Variables_1', 'Variables_2', 'Variables_3', 'Variables_4', etc..]
With the group names, you will now be able to get the channels with the following:
tdms_Variables_1 = tdms_file.group_channels("Variables_1")
Next, print the channels contained in that group:
tdms_Variables_1
It will show:
[ TdmsObject with path /'Variables_1'/'Channel_1', TdmsObject with path /'Variables_1'/'Channel_2', etc..]
At the end, get the channel object and its data:
MessageData_channel_1 = tdms_file.object('Variables_1', 'Channel_1')
MessageData_data_1 = MessageData_channel_1.data
Check your data
MessageData_data_1
do stuff with your data!
cheers!
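Putting those steps together into one script (the group and channel names are just the ones from the example above, and this assumes the same older object-based npTDMS API):
from nptdms import TdmsFile

tdms_file = TdmsFile("RESULTS.tdms")                         # open the file once

tdms_groups = tdms_file.groups()                             # list of group names
print(tdms_groups)

tdms_Variables_1 = tdms_file.group_channels("Variables_1")   # channels of one group
print(tdms_Variables_1)

MessageData_channel_1 = tdms_file.object("Variables_1", "Channel_1")
MessageData_data_1 = MessageData_channel_1.data              # numpy array with the samples
print(MessageData_data_1)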
To loop over all properties of the root object, try this:
#read a tdms file
filenameS = "RESULTS.tdms"
tdms_file = TdmsFile(filenameS)
root_object = tdms_file.object()
# Iterate over all items in the properties dictionary and print them
for name, value in root_object.properties.items():
print("{0}: {1}".format(name, value))
That should give you all the property names.
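The same pattern works for any other object in the file, for instance a single channel (the group and channel names here are just the ones from the earlier answer):
channel = tdms_file.object('Variables_1', 'Channel_1')
# Iterate over the channel's own properties
for name, value in channel.properties.items():
    print("{0}: {1}".format(name, value))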
Your problem is that tdmsinfo will not work inside a Python script because it is not a Python command: it is a command-line program.
The solution is either to use tdmsinfo from a Windows shell, or to write a wrapper in Python that runs the command in a subprocess for you. For instance, in Python 3 with the subprocess module:
import subprocess
tdmsfile='my_file.tdms'
# startup info to hide the windows shell
si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
#si.wShowWindow = subprocess.SW_HIDE # default
# run tdmsinfo in a subprocess and capture its output
a = subprocess.run(['tdmsinfo', tdmsfile],
                   stdout=subprocess.PIPE,
                   startupinfo=si).stdout
a = a.decode('utf-8')
print(a)
The code above should give you only the channels and groups, but you can also run with the -p flag to include all the TDMS object properties:
a = subprocess.run(['tdmsinfo', '-p', tdmsfile],
                   stdout=subprocess.PIPE,
                   startupinfo=si).stdout
I am working on a project for my beginners' Python class and have gotten a little stuck. I have three .tif files that I want to do Zonal Statistics for, but I am getting an error. Here is my script:
import arcpy
import os
from arcpy import env
from arcpy.sa import *
env.workspace = r'C:\Users\alvaremi\Documents\Final Project_Python'
path = r'C:\Users\alvaremi\Documents\Final Project_Pythonn'
env.overwriteOutput = 1
arcpy.CheckOutExtension('Spatial')
in_zone_data = 'counties_in_cog.shp'
zone_field = 'NAME'
impervious = os.listdir(env.workspace + '\ImpvClipped')
print impervious
for year in impervious:
    if year.endswith(".tif"):
        outZonalStatistics = ZonalStatistics(in_zone_data, zone_field, year, "MEAN", "NODATA")
        outZonalStatistics.save(year[:8] + 'zonalstats')
print 'Done'
When I run it, I get this error:
ExecuteError: Failed to execute. Parameters are not valid.
ERROR 000865: Input value raster: 2001impvclipped.tif does not exist.
Failed to execute (ZonalStatistics).
I am also unsure of how to save the new files so that they keep the date on them. The files I want to run the Zonal Stats on are "2001impclipped", "2006impclipped", and "2011impclipped".
Thanks!
You need to add the full directory path to the filename in order for Python to find it.
fileName = os.path.join(env.workspace, 'ImpvClipped', year)
ZonalStatistics(in_zone_data, zone_field, fileName, "MEAN", "NODATA")
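Applied to your loop, that looks roughly like this (the 'ImpvClipped' folder name comes from your os.listdir call, and slicing the file name is one way to keep the year in the output name, as you asked):
for year in impervious:
    if year.endswith(".tif"):
        fileName = os.path.join(env.workspace, 'ImpvClipped', year)
        outZonalStatistics = ZonalStatistics(in_zone_data, zone_field, fileName,
                                             "MEAN", "NODATA")
        # e.g. "2001impvclipped.tif" -> "2001_zonalstats.tif"
        outZonalStatistics.save(year[:4] + "_zonalstats.tif")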
I am trying to create symlinks using Python on Windows 8. I found This Post and this is part of my script.
import os
import ctypes
link_dst = unicode(os.path.join(style_path, album_path))
link_src = unicode(album_path)
kdll = ctypes.windll.LoadLibrary("kernel32.dll")
kdll.CreateSymbolicLinkW(link_dst, link_src, 1)
Firstly, it can create symlinks only when it is executed through an administrator cmd. Why is that happening?
Secondly, when I am trying to open those symlinks from Windows Explorer I get this error:
...Directory is not accessible. The Name Of The File Cannot Be Resolved By The System.
Is there a better way of creating symlinks using Python? If not, how can I solve this?
EDIT
This is the for loop in album_linker:
def album_Linker(album_path, album_Genre, album_Style):
    genre_basedir = "E:\Music\#02.Genre"
    artist_basedir = "E:\Music\#03.Artist"
    release_data_basedir = "E:\Music\#04.ReleaseDate"
    for genre in os.listdir(genre_basedir):
        genre_path = os.path.join(genre_basedir, "_" + album_Genre)
        if not os.path.isdir(genre_path):
            os.mkdir(genre_path)
        album_Style_list = album_Style.split(', ')
        print album_Style_list
        for style in album_Style_list:
            style_path = os.path.join(genre_path, "_" + style)
            if not os.path.isdir(style_path):
                os.mkdir(style_path)
            album_path_list = album_path.split("_")
            print album_path_list
            #link_dst = unicode(os.path.join(style_path, album_path_list[2] + "_" + album_path_list[1] + "_" + album_path_list[0]))
            link_dst = unicode(os.path.join(style_path, album_path))
            link_src = unicode(album_path)
            kdll = ctypes.windll.LoadLibrary("kernel32.dll")
            kdll.CreateSymbolicLinkW(link_dst, link_src, 1)
It takes album_Genre and album_Style and then creates directories under E:\Music\#02.Genre. It also takes album_path from the main body of the script; this album_path is the path of the directory for which I want to create the symlink under E:\Music\#02.Genre\Genre\Style. So album_path is a variable taken from another for loop in the main body of the script:
for label in os.listdir(basedir):
    label_path = os.path.join(basedir, label)
    for album in os.listdir(label_path):
        album_path = os.path.join(label_path, album)
        if not os.path.isdir(album_path):
            # Not A Directory
            continue
        else:
            # Is A Directory
            os.mkdir(os.path.join(album_path + ".copy"))
            # Let Us Count
            j = 1
            z = 0
            # Change Directory
            os.chdir(album_path)
Firstly, it can create symlinks only when it is executed through an administrator cmd.
Users need the "Create symbolic links" right to create a symlink. By default, normal users don't have it but administrators do. One way to change that is with the security policy editor. Open a command prompt as administrator, run secpol.msc and then go to Security Settings\Local Policies\User Rights Assignment\Create symbolic links to make the change.
Secondly, when I am trying to open those symlinks from Windows Explorer I get this error:
You aren't escaping the backslashes in the file name. Just adding an "r" to the front to make it a raw string changes the file name. You are setting a non-existent file name, so Explorer can't find it.
>>> link_dst1 = "E:\Music\#02.Genre_Electronic_Bass Music\1-800Dinosaur-1-800-001_[JamesBlake-Voyeur(Dub)AndHolyGhost]_2013-05-00"
>>> link_dst2 = r"E:\Music\#02.Genre_Electronic_Bass Music\1-800Dinosaur-1-800-001_[JamesBlake-Voyeur(Dub)AndHolyGhost]_2013-05-00"
>>> link_dst1 == link_dst2
False
>>> print link_dst1
E:\Music\#02.Genre_Electronic_Bass Music☺-800Dinosaur-1-800-001_[JamesBlake-Voyeur(Dub)AndHolyGhost]_2013-05-00
os.symlink works out of the box since Python 3.8 on Windows, as long as Developer Mode is turned on.
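So on such a setup the whole ctypes call can be replaced with something like this (the paths are only placeholders; target_is_directory must be True for directory links):
import os

src = r"E:\Music\SomeLabel\SomeAlbum"                      # existing directory
dst = r"E:\Music\#02.Genre\_Electronic\_House\SomeAlbum"   # link to create
os.symlink(src, dst, target_is_directory=True)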
If you're just trying to create a link to a directory, you could also create a "Junction", no admin privileges required:
import os
import _winapi
src_dir = "C:/Users/joe/Desktop/my_existing_folder"
dst_dir = "C:/Users/joe/Desktop/generated_link"
src_dir = os.path.normpath(os.path.realpath(src_dir))
dst_dir = os.path.normpath(os.path.realpath(dst_dir))
if not os.path.exists(dst_dir):
    os.makedirs(os.path.dirname(dst_dir), exist_ok=True)
    _winapi.CreateJunction(src_dir, dst_dir)