I have a task to download some artifacts from Nexus, and I am using the nexuscli library.
The Nexus repository structure is very hierarchical.
https://localhostABC.de/nexus/#browse/browse
leads to a folder structure like the one below:
release
--de
----Updates
------operating
--------libs
----------Alib
------------1.0.0
--------------Alib-1.0.0-zip
Right-clicking the zip gives me a file path: de/release/operating/libs/Alib/1.0.0-zip
When that file path is copied and pasted into another browser tab, it directly downloads the zip. The address is https://localhost.de/nexus/repository/release/de/Updates/operating/libs/Alib/1.0.0/Alib-1.0.0-32-bit.zip
To accommodate all of this in a Python script:
import nexuscli
import nexuscli.nexus_config
import nexuscli.nexus_http
from nexuscli.api.repository.base_models.repository import Repository

nexus_config = nexuscli.nexus_config.NexusConfig(url="https://localhost.de/nexus", username="XXX", password="XXX")
nexus_client = nexuscli.NexusClient(nexus_config)
nexus_http = nexuscli.nexus_http.NexusHttp(nexus_config)
nexus_repository = Repository(name="release", nexus_client=nexus_client, nexus_http=nexus_http)
nexus_repository.download("de/release/operating/libs/Alib/1.0.0-zip",
                          "C:\\Users\\XXX\\Pycharm\\Projects\\Trial", flatten=True)
print("HELLO")
It fails with an error telling me to provide credentials for Nexus3.
But the credentials are correct, as I can download from the browser with them.
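As a sanity check (a minimal sketch, assuming the requests library and the same credentials), the direct browser URL can be fetched outside nexuscli with HTTP basic auth:

import requests

# Sanity check: fetch the exact URL the browser downloads, with the same
# credentials, to confirm basic auth works outside nexuscli.
url = ("https://localhost.de/nexus/repository/release/"
       "de/Updates/operating/libs/Alib/1.0.0/Alib-1.0.0-32-bit.zip")
resp = requests.get(url, auth=("XXX", "XXX"))
print(resp.status_code)  # 200 would mean the credentials themselves are fine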
Any suggestions?
I have a problem using the videohash package for Python when it is deployed to an Azure Function.
My deployed Azure Function does not seem to be able to use a nested dependency properly. Specifically, I am trying to use the package "videohash" and the function VideoHash from it. The input to VideoHash is a SAS URL token for a video placed on Azure Blob Storage.
In the function's monitor, my output prints an error.
Accessing the SAS URL token directly takes me to the video, so that part seems to be working.
Looking at the source code for videohash, this error seems to occur while downloading the video from a given URL (https://github.com/akamhy/videohash/blob/main/videohash/downloader.py), where self.yt_dlp_path = str(which("yt-dlp")). This indicates to me that after deploying the function, the yt-dlp package isn't properly activated. yt-dlp is a dependency of the videohash module, but adding it directly to the requirements file of the Azure Function also does not solve the issue.
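For what it's worth, the failing lookup can be reproduced in isolation (a minimal sketch of the same check the linked downloader performs):

from shutil import which

# videohash's downloader resolves the yt-dlp binary this way;
# None here would mean the executable is not on the function's PATH.
print(which("yt-dlp"))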
Any ideas on what is happening?
This happened after deploying the code to the Azure Function, which resulted in the details highlighted in the description above.
I have a workaround where you download the video file on your own using azure.storage.blob instead of having videohash do it.
To download, you will need a BlobServiceClient, a ContainerClient, and the connection string of the Azure storage account.
Please create two files called v1.mp3 and v2.mp3 before downloading the video.
Complete Code:
import os
import tempfile

import azure.functions as func
from azure.storage.blob import BlobServiceClient
from videohash import VideoHash


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Local file paths on the server
    local_path = tempfile.gettempdir()
    filepath1 = os.path.join(local_path, "v1.mp3")
    filepath2 = os.path.join(local_path, "v2.mp3")

    # Reference to Blob Storage
    client = BlobServiceClient.from_connection_string("<Connection String>")

    # Reference to the container
    container = client.get_container_client(container="test")

    # Download the files
    with open(file=filepath1, mode="wb") as download_file:
        download_file.write(container.download_blob("v1.mp3").readall())
    with open(file=filepath2, mode="wb") as download_file:
        download_file.write(container.download_blob("v2.mp3").readall())

    # Video hash code
    videohash1 = VideoHash(path=filepath1)
    videohash2 = VideoHash(path=filepath2)
    t = videohash2.is_similar(videohash1)
    return func.HttpResponse(f"Hello, {t}. This HTTP triggered function executed successfully.")
Output:
Here I am getting an ffmpeg error, which is related to my test file and not to the error you are facing.
As far as I know this workaround will not affect performance, since in both scenarios you are downloading the blobs anyway.
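For completeness, this workaround assumes the function's requirements.txt lists the packages imported above (names only; pin versions as you see fit):

azure-functions
azure-storage-blob
videohash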
I'm writing a Python application that requires downloading a folder from OneDrive. I understand that there was a package called onedrivesdk for doing this in the past, but it has since been deprecated, and it is now recommended that the Microsoft Graph API be used to access OneDrive files (https://pypi.org/project/onedrivesdk/). It is my understanding that this requires somehow producing a DriveItem object referring to the target folder.
I was able to access the folder via GraphClient in msgraph.core:
import configparser
import json

from azure.identity import DeviceCodeCredential
from msgraph.core import GraphClient

config = configparser.ConfigParser()
config.read(['config.cfg'])
azure_settings = config['azure']

scopes = azure_settings['graphUserScopes'].split(' ')
device_code_credential = DeviceCodeCredential(azure_settings['clientId'], tenant_id=azure_settings['tenantId'])
client = GraphClient(credential=device_code_credential, scopes=scopes)

endpoint = '/me/drive/root:/Desktop'
x = client.get(endpoint)
x is the requests.models.Response object referring to the target folder (Desktop). I don't know how to extract a DriveItem from x, or otherwise iterate over its contents programmatically. How can I do this?
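Presumably the children endpoint would list the folder's contents (a guess based on the Graph REST docs; I'm not sure it is the intended way to get a DriveItem):

# Guess: hit the standard /children endpoint and read the Response JSON
children = client.get('/me/drive/root:/Desktop:/children')
for item in children.json()['value']:
    print(item['name'], item.get('id'))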
Thanks
The instructions for authentication can be found here: https://gspread.readthedocs.io/en/latest/oauth2.html#for-end-users-using-oauth-client-id
Step 7 in the authentication sequence says:
"Move the downloaded file to ~/.config/gspread/credentials.json. Windows users should put this file to %APPDATA%\gspread\credentials.json"
Is anyone aware of a way to keep the credentials.json file somewhere else and use it to authorize gspread?
Alternatively, I have thought about using shutil.move to grab the JSON file and move it to the desired location, but I need to do that without making assumptions about the whereabouts of the Python library, or even whether it is on a Windows or Unix machine. Are there any environment variables that would reveal the location of certain libraries? I could do something like this:
import gspread, os, shutil

# gspread has no __location__ attribute; derive the install dir from __file__
loc = os.path.dirname(gspread.__file__)
cred_path = os.path.join(loc, "credentials.json")
if not os.path.isfile(cred_path):
    shutil.move(input("Enter creds path:"), cred_path)
Found the solution to my own question. This function will point all the relevant gspread defaults at your directory of choice, where the credentials.json file (and the authorized_user.json file) should be kept:
import os
import gspread.auth as ga


def gspread_paths(dir):
    ga.DEFAULT_CONFIG_DIR = dir
    ga.DEFAULT_CREDENTIALS_FILENAME = os.path.join(
        ga.DEFAULT_CONFIG_DIR, 'credentials.json')
    ga.DEFAULT_AUTHORIZED_USER_FILENAME = os.path.join(
        ga.DEFAULT_CONFIG_DIR, 'authorized_user.json')
    ga.DEFAULT_SERVICE_ACCOUNT_FILENAME = os.path.join(
        ga.DEFAULT_CONFIG_DIR, 'service_account.json')
    ga.load_credentials.__defaults__ = (ga.DEFAULT_AUTHORIZED_USER_FILENAME,)
    ga.store_credentials.__defaults__ = (ga.DEFAULT_AUTHORIZED_USER_FILENAME, 'token')
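Usage would look something like this (the directory is a placeholder):

import gspread

gspread_paths('/home/me/secrets/gspread')  # hypothetical dir holding credentials.json
gc = gspread.oauth()  # gspread now reads/writes credentials in that directory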
EDIT: Recently there was a merged pull request that added this functionality to gspread: https://github.com/burnash/gspread/pull/847
I have created a text file using file operations in Python, and I want the file to be pushed to my existing GitLab repository.
I have tried the code below, which leaves the created file in my local folders.
file_path = r'E:\My material\output.txt'
k = 'Fail/Pass'
with open(file_path, 'w+') as text:
    text.write('Test case: ' + k)
What is the process, or what modifications to file_path are needed, to move the created text file to the GitLab repository through Python code?
Using the pygit2 module:
We can push the file to GitLab, but you need to follow the steps below:
Step 1) Clone the repository to your local machine
Step 2) Add the file to the cloned repository
Step 3) Push the code to GitLab
code:
import shutil

import pygit2

# Placeholders to set for your setup: api_name, local_clone_path,
# target_branch, src, dst_clone_repo_path, author, committer
callbacks = pygit2.RemoteCallbacks(pygit2.UserPass("Your_private_token", 'x-oauth-basic'))
repoClone = pygit2.clone_repository("https://gitlab.com/group/" + api_name + ".git", local_clone_path, checkout_branch=target_branch, callbacks=callbacks)  # clone the repo locally
shutil.copy(src, dst_clone_repo_path)  # copy your file into the cloned repo
repoClone.remotes.set_url("origin", "https://gitlab.com/group/" + api_name + ".git")
index = repoClone.index
index.add_all()
index.write()
tree = index.write_tree()
oid = repoClone.create_commit('refs/heads/' + target_branch, author, committer, "init commit", tree, [repoClone.head.peel().hex])
remote = repoClone.remotes["origin"]
credentials = pygit2.UserPass("your_private_token", 'x-oauth-basic')  # passing credentials
remote.credentials = credentials
remote.push(['refs/heads/' + target_branch], callbacks=callbacks)  # push the commit to the GitLab repo
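Note that author and committer above must be pygit2.Signature objects, e.g. (name and email are placeholders):

author = pygit2.Signature("Your Name", "you@example.com")
committer = pygit2.Signature("Your Name", "you@example.com")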
Do you mean executing shell commands with Python? Suppose this newly created file and this Python script are both under the specific local git repository that is connected with the remote repository you want to commit to. The plan is to pack all the git commands into os.system.
import os
os.system(r'git add "E:\My material\output.txt"; git commit -m "anything"; git push -u origin master')
Update:
import os
os.system(r'cd /local/repo; mv "E:\My material\output.txt" .; git add output.txt; git commit -m "anything"; git push -u origin master')
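A slightly safer variant of the same idea, sketched with subprocess.run instead of os.system so the path containing a space needs no shell quoting (the repository path is a placeholder):

import subprocess

repo = r'C:\path\to\local\repo'  # placeholder: the local clone
subprocess.run(['git', 'add', 'output.txt'], cwd=repo, check=True)
subprocess.run(['git', 'commit', '-m', 'anything'], cwd=repo, check=True)
subprocess.run(['git', 'push', '-u', 'origin', 'master'], cwd=repo, check=True)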
A bit late to the party, but using the python-gitlab library you can do this:
import gitlab


def commit_file(project_id: int, file_path: str, gitlab_url: str, private_token: str, branch: str = "main") -> bool:
    """Commit a file to the repository

    Parameters
    ----------
    project_id: int
        the project id to commit into. E.g. 1582
    file_path: str
        the file path to commit. NOTE: expecting a path relative to the
        repo path. This will also be used as the path to commit into.
        If you want to use an absolute local path you will also have to
        pass a parameter for the file's relative repo path
    gitlab_url: str
        The gitlab url. E.g. https://gitlab.example.com
    private_token: str
        Private access token. See the docs for more details
    branch: str
        The branch you are working in. See the note below for more about this
    """
    gl = gitlab.Gitlab(gitlab_url, private_token=private_token)
    try:
        # get the project by the project id
        project = gl.projects.get(project_id)
        # read the file contents
        with open(file_path, 'r') as fin:
            content = fin.read()
        file_data = {'file_path': file_path,
                     'branch': branch,
                     'content': content,
                     'author_email': "your@email.com",  # don't think this is required
                     'commit_message': 'Created a file'}
        resp = project.files.create(file_data)
        # do something with resp
    except gitlab.exceptions.GitlabGetError as get_error:
        # project does not exist
        print(f"could not find a project with id {project_id}: {get_error}")
        return False
    except gitlab.exceptions.GitlabCreateError as create_error:
        # file could not be created
        print(f"could not create file: {create_error}")
        return False
    return True
The example is based on the GitLab project files documentation and the python-gitlab package docs.
Note that if you want to create the file in a new branch, you will have to also provide a start_branch attribute:
file_data = {'file_path': file_path,
             'branch': your_new_branch,
             'start_branch': base_branch,
             'content': content,
             'author_email': "your@email.com",
             'commit_message': 'Created a file in new branch'}
Also, if you don't care for using the python-gitlab package, you can use the REST API directly (with requests or something like that). The relevant documentation is here.
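For example (all values are placeholders):

ok = commit_file(
    project_id=1582,
    file_path='reports/output.txt',
    gitlab_url='https://gitlab.example.com',
    private_token='glpat-xxxxxxxxxxxx',
    branch='main',
)
print('committed' if ok else 'failed')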
Sending an invalidation request using the Python boto library: CloudFront is receiving an object path like this: /p/30100/30151/15198/%2A, but I'm sending the file like this: /p/30100/30151/15198/*, and CloudFront doesn't invalidate the folder using the wildcard. Is there a way to send the wildcard without it being URL-encoded?
from boto.cloudfront import CloudFrontConnection

f = self.aws_bucket_name + path + '/*'
files = [f]
conn = CloudFrontConnection(self.aws_access_key, self.aws_secret_access_key)
req = conn.create_invalidation_request(self.aws_cf_distribution_id, files)
print(req.status)
I got the answer and implemented it on my system. Basically, boto fixed this in their develop branch, and their latest release was in May.
The solution is simply to install the develop branch from git instead:
pip install git+https://github.com/boto/boto.git#develop
One more thing:
f = path + '/*'
Update: as per the comment below, this fix is in place with version 2.43.00+.