pywhatkit.playonyt is not opening any page - python

I have a MacBook and I am trying to use pywhatkit to play YouTube videos based on user-provided input, but every time I run my code it doesn't open anything, and after some time it gives me this error. (P.S. I am using Python 3.8.)
Exception has occurred: ConnectionError
HTTPSConnectionPool(host='www.youtube.com', port=443): Max retries exceeded with url: /results?q=Despacito (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fcd17139f70>: Failed to establish a new connection: [Errno 60] Operation timed out'))
This is the code:
import pywhatkit as tube
tube.playonyt('Despacito')  # e.g. 'Despacito' is the user-provided input
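The ConnectionError means pywhatkit could never reach www.youtube.com in the first place, which usually points to a firewall, proxy, or DNS problem rather than a bug in the two lines above. As a sketch of a workaround, you can build the search URL yourself and hand it to the standard-library webbrowser module, which skips pywhatkit's own HTTP lookup entirely (the function names here are hypothetical):

```python
import webbrowser
from urllib.parse import quote

def youtube_search_url(query: str) -> str:
    # Build the same results URL that pywhatkit would open.
    return f"https://www.youtube.com/results?search_query={quote(query)}"

def play_on_yt(query: str) -> None:
    # Open the URL directly in the default browser; no prior HTTP request
    # to youtube.com is made, so no ConnectionError can be raised here.
    webbrowser.open(youtube_search_url(query))
```

Usage: play_on_yt('Despacito'). This only sidesteps the lookup step; if the browser itself cannot reach YouTube either, the underlying network issue still needs fixing.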

Related

Multiprocessing doesn't work with Selenium in Python

To be honest, the reason I use Selenium is that I am trying to make an Instagram bot which will send messages to many people, asking them not to be silent about the Russian invasion of Ukraine, because I'm Ukrainian and what is happening is awful and terrifying. It is my duty to help stop it as much as I can!
Explanation:
Here is the multishare() function, which is called from another function that creates the self.followers set containing the usernames of the people who will be sent the text. In multishare() I turn self.followers into a list and split it into four equal parts to make the sender four times faster. Then I call the share() function through multiprocessing, but it doesn't even open Chrome.
Here is the code:
def multishare(self):
    self.followers = list(self.followers)
    # Cut points for four roughly equal quarters of the list.
    ftq = len(self.followers) // 4
    sq = len(self.followers) // 2
    tq = int(len(self.followers) // 1.33)
    fhq = len(self.followers)
    first_part = self.followers[0:ftq]
    second_part = self.followers[ftq:sq]
    third_part = self.followers[sq:tq]
    fourth_part = self.followers[tq:fhq]
    self.followers = [first_part, second_part, third_part, fourth_part]
    pool = Pool(processes=4)
    pool.map(self.share, self.followers)

def share(self, users):
    self.safe_get('https://www.instagram.com/')
    for user in users:
        time.sleep(1)
        self.safe_get('https://www.instagram.com/direct/inbox/')
        # Note: the XPath attribute syntax is @id, not #id.
        self.element_existence('//*[@id="react-root"]/section/div/div[2]/div/div/div[1]/div[1]/div/div[3]/button').click()
        time.sleep(1)
        search_user = self.element_existence('/html/body/div[6]/div/div/div[2]/div[1]/div/div[2]/input')
        search_user.clear()
        time.sleep(1)
        search_user.send_keys(user)
        time.sleep(1)
        self.element_existence("/html/body/div[6]/div/div/div[2]/div[2]/div[1]/div/div[3]/button").click()
        time.sleep(1)
        self.element_existence("/html/body/div[6]/div/div/div[2]/div[1]/div/div[2]/input").click()
And these are the errors I got:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=37193): Max retries exceeded with url: /session/5f6dd5e3e048e0bd62e2dd4b1aa5e8ae/timeouts (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa5089d4250>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fa50c23d1f0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=37193): Max retries exceeded with url: /session/5f6dd5e3e048e0bd62e2dd4b1aa5e8ae/window (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa50c23d1f0>: Failed to establish a new connection: [Errno 111] Connection refused'))
So I can't actually solve this. It's not about multiprocessing; my main goal is to make the sender as fast as possible. So if you have a better idea, or just know how to solve my issue, please help!
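The "Connection refused" errors against localhost typically mean the child processes are trying to talk to a chromedriver session that only exists in the parent: a Selenium WebDriver instance cannot be shared across processes. The usual fix is to have each worker create (and quit) its own driver. A minimal sketch of that shape, with the per-user browser work left as a stub and the helper names being my own:

```python
from multiprocessing import Pool

def chunk(seq, n):
    # Split seq into n roughly equal consecutive parts.
    k, m = divmod(len(seq), n)
    return [seq[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

def send_batch(users):
    # In the real bot, create a NEW webdriver.Chrome() HERE, inside the
    # worker process, and driver.quit() in a finally block. A driver built
    # in the parent cannot be pickled into the workers, which is what
    # produces the "Connection refused" errors on localhost.
    for user in users:
        pass  # drive the browser for this user

if __name__ == "__main__":
    followers = [f"user{i}" for i in range(10)]
    with Pool(processes=4) as pool:
        pool.map(send_batch, chunk(followers, 4))
```

The chunk() helper also replaces the four hand-computed slice bounds (including the int(len // 1.33) approximation) with exact, even splits for any number of workers.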

Python-Request - Receiving pointer errors when trying to request data of a website

I'm currently using Python 3.7.6, running the code in a Jupyter Notebook, and trying to retrieve data from a website using the "requests" library, but I'm receiving a pointer error:
Code:
from bs4 import BeautifulSoup
import requests
source = requests.get('http://bvmf.bmfbovespa.com.br/indices/ResumoCarteiraTeorica.aspx?Indice=IBOV&idioma=pt-br').text
Error:
OSError: [WinError 10014] - The system detected an invalid pointer address in attempting to use a pointer argument in a call
[...]
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x00000179055507C8>: Failed to establish a new connection: [WinError 10014] The system detected an invalid pointer address in attempting to use a pointer argument in a call
[...]
MaxRetryError: HTTPConnectionPool(host='bvmf.bmfbovespa.com.br.x.ecf9251d0725104833087180eb40dc1a5570.9270ee5e.id.opendns.com', port=80): Max retries exceeded with url: /h/bvmf.bmfbovespa.com.br/indices/ResumoCarteiraTeorica.aspx?X-OpenDNS-Session=_ecf9251d0725104833087180eb40dc1a55709270ee5e_JPweB49M_Indice=IBOV&idioma=pt-br (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000179055507C8>: Failed to establish a new connection: [WinError 10014] The system detected an invalid pointer address in attempting to use a pointer argument in a call'))
[...]
ConnectionError: HTTPConnectionPool(host='bvmf.bmfbovespa.com.br.x.ecf9251d0725104833087180eb40dc1a5570.9270ee5e.id.opendns.com', port=80): Max retries exceeded with url: /h/bvmf.bmfbovespa.com.br/indices/ResumoCarteiraTeorica.aspx?X-OpenDNS-Session=_ecf9251d0725104833087180eb40dc1a55709270ee5e_JPweB49M_Indice=IBOV&idioma=pt-br (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000179055507C8>: Failed to establish a new connection: [WinError 10014] The system detected an invalid pointer address in attempting to use a pointer argument in a call'))
Note: When I try to run the exact same code on my personal computer it works, however when I try to run at my job's it doesn't.
It was actually a proxy issue.
After the following change in the code I could access the content of the website -
proxies = {
    "http": "http://myproxy:myport"
}
source = requests.get('http://bvmf.bmfbovespa.com.br/indices/ResumoCarteiraTeorica.aspx?Indice=IBOV&idioma=pt-br', proxies=proxies).text
soup = BeautifulSoup(source, 'html.parser')
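One detail worth noting about that fix: a proxies mapping with only an "http" key does not proxy https:// URLs, so a site that redirects to HTTPS would bypass the proxy and fail again. A sketch covering both schemes, with a placeholder proxy address:

```python
# Placeholder corporate proxy address; replace host and port with your own.
proxy = "http://myproxy:3128"

# Cover both schemes: an "http"-only mapping leaves https:// requests
# going direct, which fails again behind a mandatory proxy.
proxies = {"http": proxy, "https": proxy}
```

Pass this dict as proxies=proxies to requests.get, exactly as in the answer above.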

Python get and post requests fails with Connection and GetAddrinfo

import requests
r = requests.get('http://http2bin.org/get')
I am getting below errors:
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError:
HTTPConnectionPool(host='http2bin.org', port=80): Max retries exceeded
with url: /get (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at
0x031CAB08>: Failed to establish a new connection: [Errno 11004]
getaddrinfo failed'))
What could be the reason? Is it related to a proxy?
It's a problem with that website; try another site and it should work. ([Errno 11004] getaddrinfo failed means the hostname could not be resolved by DNS, so either the site is gone or DNS lookups are blocked on your machine.)
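Since getaddrinfo failed is a DNS-resolution failure rather than an HTTP one, one way to make that failure mode explicit is to resolve the hostname yourself before calling requests. A small sketch; the fetch helper is hypothetical:

```python
import socket

import requests

def fetch(url, host):
    # "[Errno 11004] getaddrinfo failed" happens before any HTTP traffic:
    # the hostname never resolved. Check resolution first so the two
    # failure modes are distinguishable.
    try:
        socket.getaddrinfo(host, 80)
    except socket.gaierror:
        return None  # host does not resolve; no point calling requests
    return requests.get(url, timeout=10)
```

A None result then means "fix DNS or the hostname", not "fix the request".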

Requests library get method error in python3.6

When I try to use the requests lib get method I get an error:
(using tkinter)
The error comes from this line:
var.set(get('https://api.ipify.org').text)
The error message says:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.ipify.org', port=443): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
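[Errno 111] Connection refused means the TCP connection was rejected outright (no direct internet access, a required proxy, or a blocked port). Rather than letting the exception kill the Tkinter callback, a sketch that degrades gracefully, with a hypothetical helper name:

```python
import requests

def public_ip() -> str:
    # Return the public IP, or a fallback string when the request cannot
    # complete; RequestException covers refused and unreachable hosts,
    # timeouts, and DNS failures alike.
    try:
        return requests.get("https://api.ipify.org", timeout=5).text
    except requests.exceptions.RequestException:
        return "unavailable"
```

In the Tkinter code this becomes var.set(public_ip()), so the UI shows "unavailable" instead of crashing when the network is down.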

Failing to establish connection using requests

I'm trying to use requests to get the text off a website, but it is not working and I'm not sure why. Here is my code:
import requests
print(requests.get("https://projecteuler.net/project/resources/p079_keylog.txt").text)
which gives me the following error:
HTTPSConnectionPool(host='projecteuler.net', port=443): Max retries exceeded
with url: /project/resources/p079_keylog.txt (Caused by
NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnectio
n object at 0x000002701CC15128>: Failed to establish a new connection: [Errno
11001] getaddrinfo failed',))
So it seems a connection cannot be made. The website is valid and works. Is there anything simple I'm doing wrong that may be causing this?
There is nothing wrong in the snippet itself:
print(requests.get("https://projecteuler.net/project/resources/p079_keylog.txt").text)
.text is correctly outside the URL string. I tried it and did not get any error, so it seems you have exceeded the number of requests, or the hostname is not resolving on your machine ([Errno 11001] getaddrinfo failed is a DNS failure).
Result:
319
680
...
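"Max retries exceeded" is slightly misleading here: by default requests makes a single attempt. If the failure is transient, mounting an HTTPAdapter with an explicit urllib3 Retry policy makes the session genuinely retry with backoff before giving up. A sketch, assuming standard requests/urllib3 installs:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def retrying_session(total=3, backoff=1.0):
    # Build a Session that retries transient failures with exponential
    # backoff instead of raising on the first bad connection attempt.
    retry = Retry(total=total, backoff_factor=backoff,
                  status_forcelist=[429, 500, 502, 503])
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.mount("http://", HTTPAdapter(max_retries=retry))
    return session
```

Usage: retrying_session().get("https://projecteuler.net/project/resources/p079_keylog.txt", timeout=10).text. Note this will not help with the [Errno 11001] getaddrinfo case above; a DNS failure needs a network-level fix, not retries.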
