I've been using a combination of apt_pkg and apt libraries to obtain the following details from each package:
package.name
package.installedVersion
package.description
package.homepage
package.priority
I was able to obtain what I needed in the following manner, though I'm not sure it's the best way to get these results:
import apt_pkg, apt

apt_pkg.InitConfig()
apt_pkg.InitSystem()

aptpkg_cache = apt_pkg.GetCache()  # Low level
apt_cache = apt.Cache()            # High level
apt_cache.update()
apt_cache.open()

pkgs = {}
list_pkgs = []

for package in aptpkg_cache.Packages:
    try:
        # I use this to pass in the pkg name from the apt_pkg.Packages
        # to the high level apt_cache which allows me to obtain the
        # details I need. Is it better to just stick to one library here?
        # In other words, can I obtain this with just apt_pkg instead of using apt?
        selected_package = apt_cache[package.name]
        # Verify that the package can be upgraded
        if check_pkg_status(package) == "upgradable":
            pkgs["name"] = selected_package.name
            pkgs["version"] = selected_package.installedVersion
            pkgs["desc"] = selected_package.description
            pkgs["homepage"] = selected_package.homepage
            pkgs["severity"] = selected_package.priority
            list_pkgs.append(pkgs)
        else:
            print "Package: " + package.name + " does not exist"
            pass  # Not upgradable?
    except:
        pass  # This is one of the main reasons why I want to try a different method.
              # I'm using this try/except because there are a lot of times that when
              # I pass in package.name to apt_cache[], I get an error that the
              # package does not exist...
def check_pkg_status(package):
    versions = package.VersionList
    version = versions[0]
    for other_version in versions:
        if apt_pkg.VersionCompare(version.VerStr, other_version.VerStr) < 0:
            version = other_version

    if package.CurrentVer:
        current = package.CurrentVer
        if apt_pkg.VersionCompare(current.VerStr, version.VerStr) < 0:
            return "upgradable"
        else:
            return "current"
    else:
        return "uninstalled"
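The loop in check_pkg_status is just a max-by-comparator scan over the version list. As a pure-Python sketch (the comparator below is a stand-in for illustration; apt_pkg.VersionCompare implements the real Debian version ordering, which is not plain numeric comparison in general):

```python
# Stand-in comparator: returns <0, 0, >0 like apt_pkg.VersionCompare.
# Only handles dotted-integer versions; real Debian ordering is richer.
def compare(a, b):
    ta = tuple(int(x) for x in a.split("."))
    tb = tuple(int(x) for x in b.split("."))
    return (ta > tb) - (ta < tb)

# Same shape as the loop in check_pkg_status: keep the highest version seen.
def highest_version(versions):
    best = versions[0]
    for other in versions:
        if compare(best, other) < 0:
            best = other
    return best

print(highest_version(["1.2.0", "1.10.1", "1.9.9"]))  # 1.10.1
```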
I want to find a good way of using apt_pkg/apt to get the details for each package that is a possible upgrade/update candidate.
The way I'm currently doing this, I only get updates/upgrades for packages already installed on the system, even though the update manager for Debian shows me packages that aren't on my system.
The following script is based on your Python code. It works on my Ubuntu 12.04 and should also work on any system with python-apt 0.8+:
import apt

apt_cache = apt.Cache()  # High level
apt_cache.update()
apt_cache.open()

list_pkgs = []

for package_name in apt_cache.keys():
    selected_package = apt_cache[package_name]
    # Verify that the package can be upgraded
    if selected_package.isUpgradable:
        pkg = dict(
            name=selected_package.name,
            version=selected_package.installedVersion,
            desc=selected_package.description,
            homepage=selected_package.homepage,
            severity=selected_package.priority)
        list_pkgs.append(pkg)

print list_pkgs
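One detail worth noting in this rewrite: it builds a fresh dict per package. The original question reused a single pkgs dict across iterations, so every element of list_pkgs ended up referencing the same object. A minimal illustration of the difference:

```python
# Reusing one dict: every append stores a reference to the same object.
shared = {}
broken = []
for name in ["apt", "bash"]:
    shared["name"] = name
    broken.append(shared)
print([d["name"] for d in broken])   # ['bash', 'bash'] -- both entries alias one dict

# Building a new dict each iteration keeps entries independent.
fixed = []
for name in ["apt", "bash"]:
    fixed.append({"name": name})
print([d["name"] for d in fixed])    # ['apt', 'bash']
```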
I have a program that loads objects via entry points. Later, I want to check to see what type those objects are, but I'm getting stuck because isinstance() returns False when I'd expect it to return True.
Below is a minimal working example. It has a couple files, because it needs to be "pip-installed" in order to define entry points, so I also uploaded it to GitHub in case that's more convenient.
Can anyone explain (i) why isinstance() is returning False and (ii) if there's any way to make it return True?
The entry point:
# pyproject.toml

[build-system]
requires = ["flit_core >=3.2,<4"]
build-backend = "flit_core.buildapi"

[project]
name = "entry_point_isinstance"
authors = [
    {name = "Kale Kundert", email = "kale@thekunderts.net"},
]
readme = "README.rst"
version = "0.0.0"
description = "Reproduce a bug involving entry points and ``isinstance()``."
requires-python = "~=3.10"
dependencies = []

[project.entry-points.entry_point_isinstance]
my_class = "entry_point_isinstance:MyClass"
The code:
# entry_point_isinstance.py
from importlib.metadata import entry_points

class MyClass:
    pass

if __name__ == '__main__':
    entry_points = entry_points(group='entry_point_isinstance')
    plugin_cls = next(iter(entry_points)).load()
    plugin_obj = plugin_cls()

    print(
            f'{MyClass=}',
            f'{plugin_cls=}',
            f'{plugin_obj=}',
            f'{plugin_cls is MyClass=}',
            f'{isinstance(plugin_obj, MyClass)=}',
            sep='\n',
    )
The terminal commands:
$ pip install -e .
$ python entry_point_isinstance.py
MyClass=<class '__main__.MyClass'>
plugin_cls=<class 'entry_point_isinstance.MyClass'>
plugin_obj=<entry_point_isinstance.MyClass object at 0x7f96b58893f0>
plugin_cls is MyClass=False
isinstance(plugin_obj, MyClass)=False
Note that MyClass and plugin_cls seem to come from different modules: __main__ and entry_point_isinstance, respectively. This seems suspicious, but I don't understand why it's happening. Normally python avoids loading the same module twice.
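That module-name split is exactly the mechanism at work: running the file as a script creates a module named __main__, while the entry point imports the same file a second time under its real name, producing a second, distinct MyClass. The effect can be reproduced without entry points at all (file and module names below are made up for the demo):

```python
import importlib.util
import os
import sys
import tempfile
import textwrap

source = textwrap.dedent("""
    class MyClass:
        pass
""")

def load_as(name, path):
    # Load the file at `path` as a module named `name`.
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    sys.modules[name] = mod
    spec.loader.exec_module(mod)
    return mod

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "plugin_mod.py")
    with open(path, "w") as f:
        f.write(source)

    # The same file, loaded under two different module names, yields
    # two distinct class objects -- so `is` and isinstance() both fail.
    main_like = load_as("demo_main", path)
    plugin_like = load_as("plugin_mod", path)

    print(main_like.MyClass is plugin_like.MyClass)              # False
    print(isinstance(plugin_like.MyClass(), main_like.MyClass))  # False
```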
I have the following Python code to label images using Clarifai. It was a working code and had been usable for the past 6-8 months. However, for the last few days, I have been getting the error mentioned below. Note that I have not made any changes to the working version of the code for the error to creep in.
#python program to analyse an image and label it

'''
Dependencies:
pip install flask
pip install clarifai-grpc
'''

import json
from flask import Flask, render_template, request
import logging
import os

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_pb2, status_code_pb2

channel = ClarifaiChannel.get_json_channel()
stub = service_pb2_grpc.V2Stub(channel)

# This will be used by every Clarifai endpoint call.
# The word 'Key' is required to precede the authentication key.
metadata = (('authorization', 'Key API_KEY_HERE'),)

webapp = Flask(__name__)  # creating a web application for the current python file

# decorators
@webapp.route('/')
def index():
    return render_template('index.html', len=0)

@webapp.route('/', methods=['POST'])
def search():
    if request.form['searchByURL']:
        url = request.form['searchByURL']
        my_request = service_pb2.PostModelOutputsRequest(
            # This is the model ID of a publicly available General model.
            # You may use any other public or custom model ID.
            model_id='aaa03c23b3724a16a56b629203edc62c',
            inputs=[
                resources_pb2.Input(data=resources_pb2.Data(image=resources_pb2.Image(url=url)))
            ])
        response = stub.PostModelOutputs(my_request, metadata=metadata)

        if response.status.code != status_code_pb2.SUCCESS:
            message = ["You have reached the limit for today!"]
            return render_template('/index.html', len=1, searchResults=message)

        concepts = []
        for concept in response.outputs[0].data.concepts:
            concepts.append(concept.name)
        concepts = concepts[0:10]
        return render_template('/index.html', len=len(concepts), searchResults=concepts)

    elif request.files['searchByImage']:
        file = request.files['searchByImage']
        file.save(file.filename)

        # IMAGE INPUT:
        with open(file.filename, "rb") as f:
            file_bytes = f.read()

        post_model_outputs_response = stub.PostModelOutputs(
            service_pb2.PostModelOutputsRequest(
                model_id="aaa03c23b3724a16a56b629203edc62c",
                version_id="aa7f35c01e0642fda5cf400f543e7c40",  # This is optional. Defaults to the latest model version.
                inputs=[
                    resources_pb2.Input(
                        data=resources_pb2.Data(
                            image=resources_pb2.Image(
                                base64=file_bytes
                            )
                        )
                    )
                ]
            ),
            metadata=metadata
        )

        if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
            message = ["You have reached the limit for today!"]
            return render_template('/index.html', len=1, searchResults=message)

        # Since we have one input, one output will exist here.
        output = post_model_outputs_response.outputs[0]
        os.remove(file.filename)

        concepts = []
        # Predicted concepts:
        for concept in output.data.concepts:
            concepts.append(concept.name)
        concepts = concepts[0:10]
        return render_template('/index.html', len=len(concepts), searchResults=concepts)
    else:
        return render_template('/index.html', len=1, searchResults=["No Image entered!"])

# run the server
if __name__ == "__main__":
    logging.basicConfig(filename='error.log', level=logging.DEBUG)
    webapp.run(debug=True)
Error:
Exception has occurred: ImportError
dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))
  File "/Users/eshaangupta/Desktop/Python-Level-4/Image Analyser.py", line 15, in <module>
    from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
It looks like you are trying to run this on a different architecture than you have in the past. You've been running on x86_64 (likely macOS) and have now moved to an ARM architecture. I'm guessing you've upgraded to an M1 MacBook, although you may have moved to a different ARM-based chip.
(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))
This is a problem with a file in grpc - specifically the library file cygrpc.cpython-310-darwin.so. I'd recommend removing gRPC and re-installing and see if that helps to resolve it.
Something like this might work:
python -m pip install --force-reinstall grpcio
(This is assuming python points to python3.10 on your system.)
Although I'm not sure how you've installed it, so that is just a guess.
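Before reinstalling, it can help to confirm which architecture the interpreter itself was built for, since that determines which wheels pip fetches:

```python
import platform
import struct

# Architecture of the running interpreter, e.g. 'arm64' or 'x86_64'.
print(platform.machine())

# Pointer width in bits (64 on any modern Mac).
print(struct.calcsize("P") * 8)
```

If this prints x86_64 on an M1 machine, the interpreter itself is the Intel build (possibly running under Rosetta), and an arm64 Python install is the cleaner fix.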
I am following the tutorial found here https://www.geeksforgeeks.org/youtube-data-api-set-1/. After I run the below code, I am getting a "No module named 'apiclient'" error. I also tried using "from googleapiclient import discovery" but that gave an error as well. Does anyone have alternatives I can try out?
I have already run pip install --upgrade google-api-python-client.
Would appreciate any help/suggestions!
Here is the code:
from apiclient.discovery import build

# Arguments that need to be passed to the build function
DEVELOPER_KEY = "your_API_Key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# creating Youtube Resource Object
youtube_object = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                       developerKey=DEVELOPER_KEY)

def youtube_search_keyword(query, max_results):
    # calling the search.list method to
    # retrieve youtube search results
    search_keyword = youtube_object.search().list(q=query, part="id, snippet",
                                                  maxResults=max_results).execute()
    # extracting the results from search response
    results = search_keyword.get("items", [])

    # empty list to store video,
    # channel, playlist metadata
    videos = []
    playlists = []
    channels = []

    # extracting required info from each result object
    for result in results:
        # video result object
        if result['id']['kind'] == "youtube#video":
            videos.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                          result["id"]["videoId"], result['snippet']['description'],
                          result['snippet']['thumbnails']['default']['url']))
        # playlist result object
        elif result['id']['kind'] == "youtube#playlist":
            playlists.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                             result["id"]["playlistId"],
                             result['snippet']['description'],
                             result['snippet']['thumbnails']['default']['url']))
        # channel result object
        elif result['id']['kind'] == "youtube#channel":
            channels.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                            result["id"]["channelId"],
                            result['snippet']['description'],
                            result['snippet']['thumbnails']['default']['url']))

    print("Videos:\n", "\n".join(videos), "\n")
    print("Channels:\n", "\n".join(channels), "\n")
    print("Playlists:\n", "\n".join(playlists), "\n")

if __name__ == "__main__":
    youtube_search_keyword('Geeksforgeeks', max_results=10)
With this information it's hard to say what the problem is. But sometimes I've been banging my head against the wall after installing something with pip (Python 2) and then trying to import the module in Python 3, or vice versa.
So if you are running your script with Python 3, try installing the package with pip3 install --upgrade google-api-python-client
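To take the guesswork out of which interpreter pip installed into, you can check which binary is actually running your script and then install through that same interpreter:

```python
import sys

# The exact interpreter running this script. Installing with
#   <this path> -m pip install --upgrade google-api-python-client
# guarantees the package lands in this interpreter's site-packages.
print(sys.executable)
print("Python %d.%d" % sys.version_info[:2])
```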
Try the YouTube docs here:
https://developers.google.com/youtube/v3/code_samples
They worked for me on a recently updated Slackware64 14.2.
I use them with Python 3.8. Since there may also be a version 2 of Python installed, I make sure to use this in the interpreter line:
#!/usr/bin/python3.8
Likewise with pip, I use pip3.8 to install dependencies.
I installed Python from source; python3.8 --version reports Python 3.8.2.
You can also look at this video here:
https://www.youtube.com/watch?v=qDWtB2q_09g
It sort of explains how to use YouTube's API explorer. You can copy code samples directly from there. The video above covers Android, but the same concept applies to Python regarding using YouTube's API Explorer.
I concur with the previous answer regarding version control.
Hi, I'm trying to set up a toolchain for the Fn project. The approach is to set up a toolchain per binary available on GitHub and then, in theory, use it in a rule.
I have a common package which has the available binaries:
default_version = "0.5.44"

os_list = [
    "linux",
    "mac",
    "windows"
]

def get_bin_name(os):
    return "fn_cli_%s_bin" % os
The download part looks like this:
load(":common.bzl", "get_bin_name", "os_list", "default_version")

_url = "https://github.com/fnproject/cli/releases/download/{version}/{file}"

_os_to_file = {
    "linux": "fn_linux",
    "mac": "fn_mac",
    "windows": "fn.exe",
}

def _fn_binary(os):
    name = get_bin_name(os)
    file = _os_to_file.get(os)
    url = _url.format(
        file = file,
        version = default_version
    )
    native.http_file(
        name = name,
        urls = [url],
        executable = True
    )

def fn_binaries():
    """
    Installs the hermetic binary for Fn.
    """
    for os in os_list:
        _fn_binary(os)
Then I set up the toolchain like this:
load(":common.bzl", "get_bin_name", "os_list")

_toolchain_type = "toolchain_type"

FnInfo = provider(
    doc = "Information about the Fn Framework CLI.",
    fields = {
        "bin": "The Fn Framework binary."
    }
)

def _fn_cli_toolchain(ctx):
    toolchain_info = platform_common.ToolchainInfo(
        fn_info = FnInfo(
            bin = ctx.attr.bin
        )
    )
    return [toolchain_info]

fn_toolchain = rule(
    implementation = _fn_cli_toolchain,
    attrs = {
        "bin": attr.label(mandatory = True)
    }
)

def _add_toolchain(os):
    toolchain_name = "fn_cli_%s" % os
    native_toolchain_name = "fn_cli_%s_toolchain" % os
    bin_name = get_bin_name(os)
    compatibility = ["@bazel_tools//platforms:%s" % os]

    fn_toolchain(
        name = toolchain_name,
        bin = ":%s" % bin_name,
        visibility = ["//visibility:public"]
    )

    native.toolchain(
        name = native_toolchain_name,
        toolchain = ":%s" % toolchain_name,
        toolchain_type = ":%s" % _toolchain_type,
        target_compatible_with = compatibility
    )

def setup_toolchains():
    """
    Macro to set up the toolchains for the different platforms.
    """
    native.toolchain_type(name = _toolchain_type)
    for os in os_list:
        _add_toolchain(os)

def fn_register():
    """
    Registers the Fn toolchains.
    """
    path = "//tools/bazel_rules/fn/internal/cli:fn_cli_%s_toolchain"
    for os in os_list:
        native.register_toolchains(path % os)
In my BUILD file I call setup_toolchains:
load(":toolchain.bzl", "setup_toolchains")
setup_toolchains()
With this set up I have a rule which looks like this:
_toolchain = "//tools/bazel_rules/fn/cli:toolchain_type"

def _fn(ctx):
    print("HEY")
    bin = ctx.toolchains[_toolchain].fn_info.bin
    print(bin)

# TEST RULE
fn = rule(
    implementation = _fn,
    toolchains = [_toolchain]
)
Workspace:
workspace(name = "basicwindow")
load("//tools/bazel_rules/fn:defs.bzl", "fn_binaries", "fn_register")
fn_binaries()
fn_register()
When I query for the different binaries with bazel query //tools/bazel_rules/fn/internal/cli:fn_cli_linux_bin they are there but calling bazel build //... results in an error which complains of:
ERROR: /Users/marcguilera/Code/Marc/basicwindow/tools/bazel_rules/fn/internal/cli/BUILD.bazel:2:1: in bin attribute of fn_toolchain rule //tools/bazel_rules/fn/internal/cli:fn_cli_windows: rule '//tools/bazel_rules/fn/internal/cli:fn_cli_windows_bin' does not exist. Since this rule was created by the macro 'setup_toolchains', the error might have been caused by the macro implementation in /Users/marcguilera/Code/Marc/basicwindow/tools/bazel_rules/fn/internal/cli/toolchain.bzl:35:15
ERROR: Analysis of target '//tools/bazel_rules/fn/internal/cli:fn_cli_windows' failed; build aborted: Analysis of target '//tools/bazel_rules/fn/internal/cli:fn_cli_windows' failed; build aborted
INFO: Elapsed time: 0.079s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)
I tried to follow the toolchain tutorial in the documentation but I can't get it to work. Another interesting thing: I'm actually on a Mac, so the toolchain compatibility also seems to be wrong.
I'm using this toolchain in a repo so the paths vary, but here's a repo containing only the fn stuff for ease of reading.
Two things:
One, I suspect this is your actual issue: https://github.com/bazelbuild/bazel/issues/6828
The core of the problem is that, if the toolchain_type target is in an external repository, it always needs to be referred to by its fully-qualified name, never by a locally-qualified name.
The second is a little more fundamental: you have a lot of Starlark macros here that are generating other targets, and it's very hard to read. It would actually be a lot simpler to remove a lot of the macros, such as _fn_binary, fn_binaries, and _add_toolchains. Just have setup_toolchains directly create the needed native.toolchain targets, and have a repository macro that calls http_archive three times to declare the three different sets of binaries. This will make the code much easier to read and thus easier to debug.
For debugging toolchains, I follow two steps: first, I verify that the tool repositories exist and can be accessed directly, and then I check the toolchain registration and resolution.
After going several levels deep, it looks like you're calling http_archive, naming the new repository @linux, and downloading a specific binary file. This isn't how http_archive works: it expects to fetch a zip file (or tar.gz file), extract that, and find a WORKSPACE and at least one BUILD file inside.
My suggestions: simplify your macros, get the external repositories clearly defined, and then explore using toolchain resolution to choose the right one.
I'm happy to help answer further questions as needed.
I am looking to alter the following code so as to run it on Python 2.x and BeautifulSoup 3.x:
import requests
import BeautifulSoup

session = requests.session()
pages = []
req = session.get('webpage')
content = req.content.split("</html>")
for page in content[:-1]:
    doc = BeautifulSoup.BeautifulSoup(page)
    name = doc.find('table', id='table2').find('table').findAll('td')[3].text
    print name
    tables = doc.findAll('table', id="conn")
    target_table = None
    for table in tables:
        try:
            title = table.find('thead').find('td').text
        except:
            title = None
        if title == 'ESME DETAILS':
            target_table = table
            break
    if target_table:
        esme_trs = target_table.find('tbody').findAll('tr')
        for tr in esme_trs:
            print "\t", tr.find('td').text
The problem is that requests is not installed in the Python 2.x installation, only for Python 3.x.
requests isn't in the standard library, so it doesn't come installed with Python; you need to install it manually.
See the instructions on the requests website for how to install it.
When setting up requests, either make your Python 2.x the default Python installation, or install requests from source: instead of just running python setup.py install, run /path/to/python2.x setup.py install so it installs into your 2.x instance.
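A quick sanity check (Python 3 syntax shown; on Python 2 you'd use imp.find_module instead) to see whether requests is visible to the interpreter you're actually running:

```python
import importlib.util
import sys

# Does *this* interpreter have requests installed?
have_requests = importlib.util.find_spec("requests") is not None
print(sys.executable, "->",
      "requests available" if have_requests else "requests missing")
```

Running this with each interpreter (python2 script.py, python3 script.py) shows immediately which installation is missing the package.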