I read this, in addition to MANY other things: "Importing requests module does not work".
I am using VSCode and python 3.8.
It seems I am able to import any library except "requests".
Given the age of the previous posts, I'd like to know what a good current next step would be, please and thank you.
import math
import asynchat
import signal
import importlib
import requests  # <----- will NOT import
response = requests.get("http://api.open-notify.org/astros.json")
print(response.text)
print(response)
I think requests is not installed correctly. Make sure it is installed for the Python interpreter you are actually using.
Try pip3 install requests.
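As a quick check (a minimal sketch, assuming the failing import happens in the same interpreter VSCode runs your script with), print which Python is active and then install requests into exactly that interpreter:
import sys
print(sys.executable)  # the python.exe (or python3) that VSCode is actually running
Then, in a terminal, run that same interpreter with -m pip, for example python -m pip install requests (or pip3 install requests if pip3 points at the same interpreter).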
[I am new to Python (and programming in general) and will definitely say something stupid in this question.]
I had two python programs. In one of them the import statements were working. And in the other one the import statements were not working.
I suspected this had something to do with the file location of the modules relative to the Python files.
It turned out the program that wasn't working was in a sub folder of the program that was working.
So, as an experiment, I tried moving the venv folder into the subfolder where the other program was, but I ended up canceling that once I discovered that I would need to replace some of the files. (Due to the fact that it already had a venv folder.)
Then, as an experiment, I tried renaming the venv folder to "venv1" just to see if the good program would run. I was not surprised when it didn't.
But then I renamed it back to "venv," and it still wasn't working.
from bs4 import BeautifulSoup
import requests
import json, requests
import urllib.request
import bs4 as bs
import urllib
# .... etc ...
Output:
ModuleNotFoundError: No module named 'bs4'
...
...
...
Oh, and if I try:
#from bs4 import BeautifulSoup
import requests
import json, requests
import urllib.request
import bs4 as bs
import urllib
# .... etc ...
Output:
ModuleNotFoundError: No module named 'requests'
I tried pip installing them again (my terminal doesn't recognize sudo pip install), and this is what I got:
PS C:\Users\****\Desktop> pip install requests
Requirement already satisfied: requests in c:\users\****\appdata\local\programs\python\python310\lib\site-packages (2.27.1)
I thought maybe I'd look this one up, but the folder "appdata" doesn't exist on my computer, in that location.
What happened and how can I fix it?
The AppData folder should exist in that location. It is a hidden folder, and by default Windows won't display hidden files and folders. You can open it by pressing WIN+R, typing "appdata", and clicking "OK"; it should then come up in a File Explorer window.
The Python packages are installed, but not visible to your scripts. It sounds like your virtual environment may be incorrectly set up. If you open a CMD prompt and type python -m site, it will show you the locations on your Python's system path. You should see the install locations for the packages; in this case, you'll probably see the following: C:\Users\****\AppData\Local\Programs\Python\Python310\lib\site-packages.
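If you would rather check from inside the script itself, here is a minimal sketch using only the standard library:
import sys
import site
print(sys.executable)          # which interpreter is running this script
print(site.getsitepackages())  # the site-packages folders it will search
If the folder printed here is not the site-packages path that pip reported, the script is running under a different interpreter (or a broken virtual environment), which would explain the ModuleNotFoundError.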
I have a simple python program, that is supposed to scrape some information from the internet and do stuff with it. When I run the code in PyCharm (IDE) it works fine, but when I run it directly it doesn't work (Right-click -> Open with -> Python). In order to find the error I put several input statements in the code, as such:
from random import choice
input(1)
import webbrowser as web
input(2)
from time import sleep
input(3)
import pyautogui as pg
input(4)
from platform import system
input(5) # It closes after this.
from bs4 import BeautifulSoup
input(6)
import requests
input(7)
It all seems to work fine until 5. When I press Enter, the terminal instantly closes without any error messages. So there must be something wrong with importing BeautifulSoup from bs4.
from bs4 import BeautifulSoup
Why does the python terminal close when executing the import statement? Any help would be appreciated :)
This is probably due to an exception that is terminating the process. I had the same problem a while ago and it was solved by installing the package.
Are you sure you have bs4 installed? Try running pip install bs4 to make sure it is installed.
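If you want to see the traceback before the window disappears, one option (a sketch, assuming the crash really is an exception raised at import time) is to wrap the suspect import and pause before exiting:
try:
    from bs4 import BeautifulSoup  # the import that appears to kill the process
except Exception as exc:
    print("Import failed:", exc)   # e.g. ModuleNotFoundError: No module named 'bs4'
    input("Press Enter to close...")
    raise
Alternatively, run the script from an already-open terminal (python your_script.py, where your_script.py stands in for your file name) so the window stays open and the full traceback remains visible.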
I am trying to create a DataFrame in Python, and the only way I have found so far is to use the pandas library.
import pandas as pd
newfilename = pd.read_csv("filename.csv")
However, when I try to execute this code, I am getting an error which says:
ImportError: cannot import name Counter
I googled and found that this error is common in Python 2.6 and older versions, and that I should install backport_collections 0.1 to solve the issue, as explained on the page below:
https://pypi.python.org/pypi/backport_collections/0.1
However, I have no experience doing this on the Unix virtual machine.
It seems that I have to download the package "backport_collections-0.1.tar.gz", but after that I don't know how to install it, or into which directory.
After installation, I need to import Counter as follows:
from backport_collections import Counter
Please advise me on how to install the package "backport_collections-0.1.tar.gz", as I am completely new to Python.
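For what it's worth, a minimal sketch of installing the downloaded archive (assuming pip is available on the VM and the file sits in the current directory):
pip install backport_collections-0.1.tar.gz
python -c "from backport_collections import Counter; print(Counter('abracadabra'))"
The second line is just a hypothetical smoke test that the import from the question now works.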
Getting a strange difference inside Enthought Canopy vs. the command line when trying to load and use urllib and/or urllib.request.
Here's what I mean. I'm running Python 3.5 on MacOS 10.11.3. But I've tried this on a Windows 10 machine too, and I'm getting the same results. The difference appears to be between using Canopy and using the command line.
I'm trying to do basic screen scraping. Based on reading, I think I should be doing:
from urllib.request import urlopen
html = urlopen("http://pythonscraping.com/pages/page1.html")
print(html.read())
This works at a command prompt.
BUT, inside Canopy, this does not work. Inside Canopy I get the error
ImportError: No module named request
This happens when Canopy tries to execute the line from urllib.request import urlopen.
Inside Canopy, THIS is what works:
import urllib
html = urllib.urlopen("http://pythonscraping.com/pages/page1.html")
print(html.read())
I would really like to understand what is happening, as I don't want my Canopy Python scripts to fail when I run them outside of Canopy. Also, the Canopy approach does not seem consistent with the docs I've read; I just got there by trial and error.
urllib.request is a module that only exists in Python 3. Enthought Canopy Distribution still ships with a version of Python 2.7 (2.7.10 as of the current version 1.6.2).
In Python 2.x, you have the choice of using either urllib or urllib2, which expose functions like urlopen at the top level (e.g. urllib.urlopen rather than urllib.request.urlopen).
If you want your scripts to be able to run through either Python 3.x or in Enthought Canopy's Python distribution, then there are two possible solutions:
Use requests - this is generally the recommended library to use for interacting with HTTP in Python. It's a third-party module which you can install using standard pip or easy_install, or from the Canopy Package Index.
Your equivalent code would look similar to:
# This allows you to use the print() function inside Python 2.x
from __future__ import print_function
import requests
response = requests.get("http://pythonscraping.com/pages/page1.html")
print(response.text)
Use conditional importing to bring in the correct function regardless of version. This uses only built-in features of Python and does not require third-party libraries.
Your code would then look similar to:
# This allows you to use the print() function inside Python 2.x
from __future__ import print_function
import sys
try:
    # Try importing Python 3's urllib.request first.
    from urllib.request import urlopen
except ImportError:
    # Looks like we're running Python 2.something.
    from urllib import urlopen
response = urlopen("http://pythonscraping.com/pages/page1.html")
# urllib.urlopen's response object is different based
# on Python version.
if sys.version_info[0] < 3:
    print(response.read())
else:
    # Python 3's urllib responses return the
    # stream as a byte-stream, and it's up to you
    # to properly set the encoding of the stream. This
    # block just checks if the stream has a content-type set
    # and if not, it defaults to just using utf-8.
    encoding = response.headers.get_content_charset()
    if not encoding:
        encoding = 'utf-8'
    print(response.read().decode(encoding))
I wanted to install the urllib2 package from PyPI, but it is not available.
It seems that it has been updated to urllib3, but is there any way to download urllib2?
import urllib2
Is that what you want?
If a library is documented under http://docs.python.org/, it is part of the standard library, so you can import it without installing anything.
Update 1:
If you need the source code...
The official Cpython code: http://hg.python.org/cpython/file/3b5fdb5bc597/Lib/urllib
Note: The urllib2 module has been split across several modules in Python 3, named urllib.request and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.
Or try this: http://code.reddit.com/docs/urllib2-pysrc.html
I can't guarantee the integrity of the second link.
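For completeness, a minimal sketch of the Python 3 equivalent of a typical urllib2 call, using only the standard-library modules mentioned in the note above:
from urllib.request import urlopen
from urllib.error import URLError
try:
    response = urlopen("http://docs.python.org/")
    print(response.read().decode("utf-8"))  # the body comes back as bytes in Python 3
except URLError as exc:
    print("Request failed:", exc)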