Yesterday I set up Apache to serve my Mercurial repositories and got everything working properly. I then tested pushing changes back to the repository and was presented with an error, and now that error pops up for every single operation I attempt - even a simple GET request of the repositories! Here is the error:
mod_wsgi (pid=1771): Target WSGI script '/var/hg/hgweb.wsgi' cannot be loaded as Python module.
mod_wsgi (pid=1771): Exception occurred processing WSGI script '/var/hg/hgweb.wsgi'.
Traceback (most recent call last):
File "/var/hg/hgweb.wsgi", line 18, in ?
application = hgwebdir(config)
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/__init__.py", line 15, in hgwebdir
return hgwebdir_mod.hgwebdir(*args, **kwargs)
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/hgwebdir_mod.py", line 52, in __init__
self.refresh()
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/hgwebdir_mod.py", line 82, in refresh
self.repos = findrepos(paths)
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/hgwebdir_mod.py", line 36, in findrepos
for path in util.walkrepos(roothead, followsym=True, recurse=recurse):
File "/usr/lib64/python2.4/site-packages/mercurial/util.py", line 1164, in walkrepos
for hgname in walkrepos(fname, True, seen_dirs):
File "/usr/lib64/python2.4/site-packages/mercurial/util.py", line 1146, in walkrepos
for root, dirs, files in os.walk(path, topdown=True, onerror=errhandler):
File "/usr/lib64/python2.4/os.py", line 276, in walk
onerror(err)
File "/usr/lib64/python2.4/site-packages/mercurial/util.py", line 1127, in errhandler
raise err
OSError: [Errno 13] Permission denied: './dev/fd'
My repository directory is owned by apache, the user running Apache. I don't know why './dev/fd' is being operated on either. I've restarted the server numerous times and recreated the repository directory, but I still get this error no matter what. I don't have access to restart the machine, so that is not an option. It seems to have gotten into a very bad persistent state, and I don't know how to fix it. Any help is appreciated!
This turned out to be a configuration error on my part, and rather than delete the question I'll post the resolution here in case someone has this problem in the future.
Here was the hgweb.config I was using:
[paths]
/ = /var/hg/repos/*
#[web]
style = gitweb
allow_archive = bz2 gz zip
maxchanges = 200
allow_push = *
push_ssl = false
Two problems here, one is obvious. I had the [web] header commented out, and I assume that many of the options are not valid for the [paths] section. Also, after re-reading the Hg docs, the push_ssl directive does not belong in the hgweb.config file, but rather in each repository's .hg/hgrc (or the ~/.hgrc of the user that runs apache). After fixing these, things are working perfectly!
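For reference, here is the corrected setup (a sketch based on the fixes above; paths are from my installation). The hgweb.config becomes:
[paths]
/ = /var/hg/repos/*

[web]
style = gitweb
allow_archive = bz2 gz zip
maxchanges = 200
allow_push = *
And each repository's .hg/hgrc (or the ~/.hgrc of the user running Apache) gets:
[web]
push_ssl = false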
I am trying to build an API using an MLflow model.
The funny thing is that it works from one location on my PC and not from another; the reason for the move was that I wanted to change my repo.
So this simple code
from mlflow.pyfunc import load_model
MODEL_ARTIFACT_PATH = "./model/model_name/"
MODEL = load_model(MODEL_ARTIFACT_PATH)
now fails with
ERROR: Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 540, in lifespan
async for item in self.lifespan_context(app):
File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 481, in default_lifespan
await self.startup()
File "/usr/local/lib/python3.8/dist-packages/starlette/routing.py", line 516, in startup
await handler()
File "/code/./app/main.py", line 32, in startup_load_model
MODEL = load_model(MODEL_ARTIFACT_PATH)
File "/usr/local/lib/python3.8/dist-packages/mlflow/pyfunc/__init__.py", line 733, in load_model
model_impl = importlib.import_module(conf[MAIN])._load_pyfunc(data_path)
File "/usr/local/lib/python3.8/dist-packages/mlflow/spark.py", line 737, in _load_pyfunc
return _PyFuncModelWrapper(spark, _load_model(model_uri=path))
File "/usr/local/lib/python3.8/dist-packages/mlflow/spark.py", line 656, in _load_model
return PipelineModel.load(model_uri)
File "/usr/local/lib/python3.8/dist-packages/pyspark/ml/util.py", line 332, in load
return cls.read().load(path)
File "/usr/local/lib/python3.8/dist-packages/pyspark/ml/pipeline.py", line 258, in load
return JavaMLReader(self.cls).load(path)
File "/usr/local/lib/python3.8/dist-packages/pyspark/ml/util.py", line 282, in load
java_obj = self._jread.load(path)
File "/usr/local/lib/python3.8/dist-packages/py4j/java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "/usr/local/lib/python3.8/dist-packages/pyspark/sql/utils.py", line 117, in deco
raise converted from None
pyspark.sql.utils.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.
The model artifacts are already downloaded to the /model folder, which has the following structure.
The load_model call is in the main.py file.
As I mentioned, it works from another directory, and there is no reference to any absolute paths. Also, I have made sure that my package references are identical; e.g., I have pinned them all down:
# Model
mlflow==1.25.1
protobuf==3.20.1
pyspark==3.2.1
scipy==1.6.2
six==1.15.0
Also, the same Dockerfile is used in both places, which, among other things, ensures that the final resulting folder structure is the same:
# ... other steps ...
COPY ./app /code/app
COPY ./model /code/model
What can explain it throwing this exception here, whereas in another location on my PC it works with the same model artifacts?
Since it uses the load_model function, it should be able to read the Parquet files, right?
If you have any questions, I can explain further.
EDIT1: I have debugged this a little more in the Docker container, and it seems the Parquet files in the itemFactors folder (listed in my screenshot above) are not getting copied over to my image, even though I have the COPY command to copy all files under the model folder. It is copying the _started, _committed and _SUCCESS files, just not the Parquet files. Does anyone know why that would be? I DO NOT have a .dockerignore file. Why are those files ignored while copying?
I found the problem. As I wrote in EDIT1 of my post, on further observation the Parquet files were missing in the Docker container. That was strange, because I was copying the entire folder in my Dockerfile.
I then realized that I was hitting the problem mentioned here: file paths exceeding 260 characters silently fail and do not get copied over to the Docker container. This was really frustrating, because nothing failed during build; then, at run time, it gave me that cryptic "Unable to infer schema for Parquet" error, essentially because the Parquet files were not copied over during docker build.
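One way to catch this before building is to scan the build context for over-long paths. A minimal sketch (the ./model folder and the 260-character limit are assumptions from my setup):
import os

MAX_PATH = 260  # classic Windows path-length limit
ROOT = "./model"  # assumed: the folder COPY'd into the image

# Walk the build context and report any file whose absolute path exceeds
# the limit, since such files were silently skipped by COPY.
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        full = os.path.abspath(os.path.join(dirpath, name))
        if len(full) > MAX_PATH:
            print(len(full), full)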
I'm calling the Paramiko sftp_client.put(localpath, remotepath) method.
This is throwing the [Errno 2] File not found error below.
01/07/2020 01:12:03 PM - ERROR - [Errno 2] File not found
Traceback (most recent call last):
File "file_transfer\TransferFiles.py", line 123, in main
File "paramiko\sftp_client.py", line 727, in put
File "paramiko\sftp_client.py", line 689, in putfo
File "paramiko\sftp_client.py", line 460, in stat
File "paramiko\sftp_client.py", line 780, in _request
File "paramiko\sftp_client.py", line 832, in _read_response
File "paramiko\sftp_client.py", line 861, in _convert_status
Having tried many of the other recommended fixes, I found that the error is due to the server having an automatic trigger that moves the file to another location immediately upon upload.
I've not seen another post relating to this issue and wanted to know if anyone else has fixed it; the SFTP server is owned by a third party, and I don't want to ask them to change the trigger.
The file actually uploads correctly, so I could catch the exception and ignore the error. But I'd prefer to handle it properly, if possible.
Paramiko by default verifies the size of the uploaded file after the upload.
If the file is moved away immediately after the upload, the check fails.
To avoid the check, set the confirm parameter of SFTPClient.put to False.
sftp_client.put(localpath, remotepath, confirm=False)
I believe the check is redundant anyway, see
How to perform checksums during a SFTP file transfer for data integrity?
For a similar question about pysftp (which is a wrapper around Paramiko), see:
Python pysftp.put raises "No such file" exception although file is uploaded
I also had this issue of the file automatically getting moved before Paramiko could stat the uploaded file and compare the local and uploaded file sizes.
@Martin_Prikryl's solution works fine for removing the error: pass confirm=False when using sftp.put or sftp.putfo.
If you still want the check to run, as you mention in the post, to verify the file has been uploaded fully, you can run something along these lines. For this to work you will need to know the moved file's location and have permission to read it.
import os

# Upload without Paramiko's built-in size check (the server moves the
# file away immediately, so the built-in check would fail).
sftp.putfo(source_file_object, destination_file, confirm=False)

# Compare the remote size at the file's post-move location with the
# local size; os.fstat works on the open file object's descriptor.
upload_size = sftp.stat(moved_path).st_size
local_size = os.fstat(source_file_object.fileno()).st_size
if upload_size != local_size:
    raise IOError(
        "size mismatch in put! {} != {}".format(upload_size, local_size)
    )
Both sizes come from stat calls: SFTPClient.stat for the remote file and os.fstat for the local one.
Question: How can I change file permissions on a Windows 10 PC with a Python script?
I have written a Python script that takes folders, which are created by proprietary software, and moves them to a network drive with shutil.move().
It seems that the proprietary software creates folders that are read-only by default. I need to change the file permissions for these folders in order for shutil.move() to delete the folders after they are copied to the network drive.
I have searched on SO to discover that os.chmod(path, 0o777) only works to grant access on Unix systems. On Windows, it modifies the read-only attribute of a file or folder. This question seems to yield a solution, which I tried as follows:
import win32security
import ntsecuritycon as con

path = r"D:\MSD_Data\THG126.D"  # folder to grant access to (example from below)
account = r"admin"

# Resolve the account name to a SID.
userx, domain, account_type = win32security.LookupAccountName("", account)

# Read the folder's existing DACL.
sd = win32security.GetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION)
dacl = sd.GetSecurityDescriptorDacl()  # instead of dacl = win32security.ACL()

# Append an ACE granting the account read and write access.
dacl.AddAccessAllowedAce(win32security.ACL_REVISION, con.FILE_GENERIC_READ | con.FILE_GENERIC_WRITE, userx)

# Write the modified DACL back to the folder.
sd.SetSecurityDescriptorDacl(1, dacl, 0)  # may not be necessary
win32security.SetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION, sd)
But it does not seem to work. Also, I don't understand what I am doing with the modules win32security and ntsecuritycon. Maybe someone can give an easy explanation.
Edit: OK, so I looked into it some more. This is the exception that gets raised:
Traceback (most recent call last):
File "copyscript.py", line 108, in <module>
copyscript()# the loop needs to be called as a function to delete all assigned variables after each loop
File "copyscript.py", line 93, in copyscript
shutil.move(run, str(target_dir2))#move files renamed to user folder
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 550, in move
rmtree(src)
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 488, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 378, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 383, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 381, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'THG126.D\\AcqData\\sample_info.xml'
The full path of this file is D:\MSD_Data\THG126.D\AcqData\sample_info.xml.
The user account is named "admin" and belongs to the "Administrators" group.
"admin" is the owner and has "full control" according to the "advanced security settings" for MSD_Data, THG126.D, sample_info.xml and the Python script.
I have also tried running the script from a CLI opened with "Run as administrator". The same error occurs.
I looked at all the files in the folders and found that only sample_info.xml has the RA attributes, whereas all the others have only A, so I added
# os.path.join avoids the doubled backslashes the raw string produced
path2 = os.path.join(r"D:\MSD_Data", run, "AcqData", "sample_info.xml")
subprocess.check_call(["attrib", "-r", path2, "/S", "/D"])
to the script and it seems to work now. I need to wait a little for new folders to be generated by the other software to see if the script is working correctly now.
The problem seems to have been that a file had the attribute "RA", which means "read-only" and "archived". Even though the user account in use was the owner of all files and folders, shutil.move() fails when it tries to delete the file after copying it to the target location.
A workaround to this problem is to use
subprocess.check_call(["attrib", "-r", path])
to remove the read-only file attribute. This resolved my issue. If you still have trouble with shutil.move() you could also try this solution.
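A pure-Python alternative to shelling out to attrib (a sketch building on the os.chmod behaviour noted in the question; stat.S_IWRITE clears the read-only attribute on Windows, and the paths here are hypothetical):
import os
import shutil
import stat

def clear_readonly(func, path, _excinfo):
    # shutil.rmtree error handler: clear the read-only attribute and retry.
    os.chmod(path, stat.S_IWRITE)
    func(path)

# shutil.move is a copy followed by deleting the source tree, so doing the
# two steps explicitly lets us pass an error handler to the delete.
src = r"D:\MSD_Data\THG126.D"
dst = r"\\server\share\THG126.D"
shutil.copytree(src, dst)
shutil.rmtree(src, onerror=clear_readonly)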
I'm learning web programming with Python, and am still basically going through lectures/tutorials.
I'm trying to upload a file to a server. This is my code:
import ftplib
import sys
filename = sys.argv[1]
connect = ftplib.FTP("***.**.***.**")
connect.login("testuser","pass")
file = open(filename, "rb")
connect.storbinary("STOR " + filename, file)
connect.quit()
and this is the error I have:
File "C:\Users\test\putfile.py", line 8, in <module>
connect.storbinary("STOR " + filename, file)
File "C:\Python27\lib\ftplib.py", line 471, in storbinary
conn = self.transfercmd(cmd, rest)
File "C:\Python27\lib\ftplib.py", line 376, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "C:\Python27\lib\ftplib.py", line 339, in ntransfercmd
resp = self.sendcmd(cmd)
File "C:\Python27\lib\ftplib.py", line 249, in sendcmd
return self.getresp()
File "C:\Python27\lib\ftplib.py", line 224, in getresp
raise error_perm, resp
ftplib.error_perm: 550 Permission denied.
testuser should have permission to write files, since the folder is owned by him and he has root privileges (he was added to the sudoers file).
The same thing happens if I add the line:
connect.cwd('/testfolder')
I will get error_perm: 550 Failed to change directory.
However, I can still read the existing files just fine (with
connect.retrlines("RETR " + filename)).
I'm pretty new to Python as well as Linux, so I don't really know what I'm doing. I need some help.
Perhaps this can help:
With FTP it is not enough to be the owner of the files and directories; the FTP daemon itself must be configured to allow writing and creating files.
For example, in Ubuntu, edit /etc/vsftpd.conf and find the line
#write_enable=YES
then remove the leading "#" (the comment marker) so the directive takes effect.
Finally, restart the service:
sudo service vsftpd restart
I would check whether you are in the right location. I had the same problem, and then I realised that I was in a different location than I intended: the root folder, above "/public_html", so the folder I wanted to enter did not exist there and I had no permission to store any files.
You can check where you are this way:
print connect.pwd()
and what the contents of the current directory are:
connect.dir()
So, if you are in the root folder ("/"), above "/public_html", and you want to change the current directory to "/testfolder", you need to use:
connect.cwd('/public_html/testfolder')
Have you checked the access permissions on the FTP server? I just ran into this same problem. The issue happened because I did not have permission to read the folder that I wanted to upload my files into.
There are a few things to check when you encounter this error.
Check the current directory on the FTP server using connect.pwd() and make sure you have write access to it. You can try copying a file there manually to verify.
Make sure you only provide the filename and not the complete path. For me, this was causing the issue: for example, filename = "upload_img.jpg" instead of filename = "D:/path/to/upload_img.jpg". A workaround is to extract the directory with os.path.split() and change into it with os.chdir(), as sketched below.
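A minimal sketch of that workaround (the local path is hypothetical; connect is the ftplib.FTP object from the question):
import os
import ftplib

connect = ftplib.FTP("***.**.***.**")  # placeholder address from the question
connect.login("testuser", "pass")

local_path = "D:/path/to/upload_img.jpg"  # hypothetical absolute path
directory, filename = os.path.split(local_path)
os.chdir(directory)  # work from the file's own directory
with open(filename, "rb") as f:
    connect.storbinary("STOR " + filename, f)  # send only the bare filename
connect.quit()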