I'm trying to put a zip file on Artifactory. I'm using the 'repositorytools' package, and this is how I'm doing it:
import os

import repositorytools
from termcolor import colored  # assuming colored comes from termcolor

try:
    local_path = os.path.join(os.getcwd(), "sample-0.1.0.zip")
    artifact = repositorytools.LocalArtifact(local_path=local_path,
                                             group='widgets', artifact='sample')
    client = repositorytools.repository_client_factory(user='user',
                                                       password='pw')
    remote_artifacts = client.upload_artifacts(local_artifacts=[artifact],
                                               repo_id='https://artifacts.zeki.com/zeki-development/')
    print(remote_artifacts)
except Exception as e:
    print(colored("Exception has occurred of type: {} with def: {}".format(type(e), e), 'red'))
All I'm getting is this annoying error:
Exception has occurred of type: with def: HTTPSConnectionPool(host='repository', port=443): Max retries exceeded with url: /service/local/artifact/maven/content (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
So I tried a couple of ways to overcome this, like increasing the retry count, but nothing seemed to work. Any ideas?
Thx
repo_id should be only 'zeki-development'.
You need to export the environment variable REPOSITORY_URL with the value https://artifacts.zeki.com, or better, pass that value to the repository_url parameter of repository_client_factory,
so it can look like this:
client = repositorytools.repository_client_factory(repository_url='https://artifacts.zeki.com', user='user', password='pw')
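Putting it together, the corrected upload could look like this (a minimal sketch; the user, group, and artifact names are the placeholders from the question):

import os

import repositorytools

local_path = os.path.join(os.getcwd(), "sample-0.1.0.zip")
artifact = repositorytools.LocalArtifact(local_path=local_path,
                                         group='widgets', artifact='sample')
client = repositorytools.repository_client_factory(
    repository_url='https://artifacts.zeki.com', user='user', password='pw')
# repo_id is just the repository name, not a full URL
remote_artifacts = client.upload_artifacts(local_artifacts=[artifact],
                                           repo_id='zeki-development')
print(remote_artifacts)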
I'm trying to run some code from this website but I don't understand why I get this error:
qbittorrentapi.exceptions.APIConnectionError: Failed to connect to qBittorrent. Connection Error: ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v2/auth/login (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001FA519F5840>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))"))
The code in question:
import qbittorrentapi

# instantiate a Client using the appropriate WebUI configuration
qbt_client = qbittorrentapi.Client(
    host='localhost',
    port=8080,
    username='admin',
    password='adminadmin',
)

# the Client will automatically acquire/maintain a logged-in state
# in line with any request. therefore, this is not strictly necessary;
# however, you may want to test the provided login credentials.
try:
    qbt_client.auth_log_in()
except qbittorrentapi.LoginFailed as e:
    print(e)

# display qBittorrent info
print(f'qBittorrent: {qbt_client.app.version}')
print(f'qBittorrent Web API: {qbt_client.app.web_api_version}')
for k, v in qbt_client.app.build_info.items():
    print(f'{k}: {v}')

# retrieve and show all torrents
for torrent in qbt_client.torrents_info():
    print(f'{torrent.hash[-6:]}: {torrent.name} ({torrent.state})')

# pause all torrents
qbt_client.torrents.pause.all()
I'd really appreciate some help with this, thanks in advance :)
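For reference, the exception in the traceback is qbittorrentapi.exceptions.APIConnectionError, which the try/except above doesn't catch (it only catches LoginFailed). A minimal sketch of handling both, assuming the same WebUI settings as above:

import qbittorrentapi

qbt_client = qbittorrentapi.Client(host='localhost', port=8080,
                                   username='admin', password='adminadmin')
try:
    qbt_client.auth_log_in()
except qbittorrentapi.LoginFailed as e:
    print(f'login failed: {e}')
except qbittorrentapi.APIConnectionError as e:
    # nothing answered on localhost:8080 -- check that qBittorrent is
    # running and that its Web UI is enabled under Tools > Options > Web UI
    print(f'cannot reach qBittorrent: {e}')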
I'm following the tutorial, but I get an error when I use item_public_token_exchange.
exchange_request = ItemPublicTokenExchangeRequest(
    public_token=plaid_token
)
exchange_response = CLIENT.item_public_token_exchange(exchange_request)
the error:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='sandbox', port=80): Max retries exceeded with url: /item/public_token/exchange (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10f1493a0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
Not sure what is happening.
It looks like the problem is that you are trying to connect to sandbox instead of sandbox.plaid.com. Make sure you have your host set up as plaid.Environment.Sandbox (or, alternatively, sandbox.plaid.com) and not just sandbox.
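A minimal sketch of that client setup with plaid-python (PLAID_CLIENT_ID and PLAID_SECRET are placeholders for your own credentials, and plaid_token is the public token from your Link flow, as in the question):

import plaid
from plaid.api import plaid_api
from plaid.model.item_public_token_exchange_request import ItemPublicTokenExchangeRequest

configuration = plaid.Configuration(
    host=plaid.Environment.Sandbox,  # resolves to https://sandbox.plaid.com
    api_key={
        'clientId': PLAID_CLIENT_ID,
        'secret': PLAID_SECRET,
    },
)
CLIENT = plaid_api.PlaidApi(plaid.ApiClient(configuration))

exchange_request = ItemPublicTokenExchangeRequest(public_token=plaid_token)
exchange_response = CLIENT.item_public_token_exchange(exchange_request)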
I'll be honest: the reason I use Selenium is that I'm trying to make an Instagram spam bot which will send messages to many people, asking them not to stay quiet about the Russian invasion of Ukraine, because I'm Ukrainian and what is happening is so awful and terrifying. It is my duty to help stop it as much as I can!
Explanation:
Here is the multishare() function, which is called from another function that creates the self.followers set containing the usernames of the people who will be sent the text. In multishare() I redefine self.followers as a list and split it into 4 equal parts to make the spammer four times faster. Then I call the share() function through multiprocessing, but it doesn't even open Chrome.
Here is the code:
def multishare(self):
    self.followers = list(self.followers)
    ftq = int(len(self.followers) // 4)
    sq = int(len(self.followers) // 2)
    tq = int(len(self.followers) // 1.33)
    fhq = len(self.followers)
    first_part = self.followers[0:ftq]
    second_part = self.followers[ftq:sq]
    third_part = self.followers[sq:tq]
    fourth_part = self.followers[tq:fhq]
    self.followers = [first_part, second_part, third_part, fourth_part]
    pool = Pool(processes=4)
    pool.map(self.share, self.followers)

def share(self, users):
    self.safe_get('https://www.instagram.com/')
    for user in users:
        time.sleep(1)
        self.safe_get('https://www.instagram.com/direct/inbox/')
        self.element_existence('//*[@id="react-root"]/section/div/div[2]/div/div/div[1]/div[1]/div/div[3]/button').click()
        time.sleep(1)
        search_user = self.element_existence('/html/body/div[6]/div/div/div[2]/div[1]/div/div[2]/input')
        search_user.clear()
        time.sleep(1)
        search_user.send_keys(user)
        time.sleep(1)
        self.element_existence("/html/body/div[6]/div/div/div[2]/div[2]/div[1]/div/div[3]/button").click()
        time.sleep(1)
        self.element_existence("/html/body/div[6]/div/div/div[2]/div[1]/div/div[2]/input").click()
And these are the errors that I got:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=37193): Max retries exceeded with url: /session/5f6dd5e3e048e0bd62e2dd4b1aa5e8ae/timeouts (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa5089d4250>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fa50c23d1f0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=37193): Max retries exceeded with url: /session/5f6dd5e3e048e0bd62e2dd4b1aa5e8ae/window (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa50c23d1f0>: Failed to establish a new connection: [Errno 111] Connection refused'))
So I can't actually solve this. It's not about multiprocessing; my main target is to make the spammer as fast as possible. So if you have a better idea, or just know how to solve my issue, please help!
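One likely cause of the connection-refused errors: a single WebDriver session cannot be shared across processes, so when Pool pickles self to call self.share, each child process ends up talking to a driver it doesn't own. A minimal sketch of the usual workaround, with each worker creating its own driver (the usernames and the message-sending body are placeholders for your logic):

from multiprocessing import Pool

from selenium import webdriver

def send_messages(users):
    # each worker creates and owns its own driver; a WebDriver
    # session cannot be shared between processes
    driver = webdriver.Chrome()
    try:
        for user in users:
            driver.get('https://www.instagram.com/direct/inbox/')
            # ... per-user search/send steps from share() go here ...
    finally:
        driver.quit()

if __name__ == '__main__':
    followers = ['user1', 'user2', 'user3', 'user4']  # placeholder usernames
    chunks = [followers[i::4] for i in range(4)]      # four roughly equal parts
    with Pool(processes=4) as pool:
        pool.map(send_messages, chunks)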
I am trying to get the HTML code of onion websites using the requests library (or urllib.request). I tried various methods, but none of them seemed to work properly.
At first, I simply tried to connect to a proxy using the requests library and get the HTML code of Facebook's onion site:
import requests
session = requests.session()
session.proxies = {}
session.proxies['http'] = 'socks5h://localhost:9050'
session.proxies['https'] = 'socks5h://localhost:9050'
r = requests.get('https://facebookcorewwwi.onion/')
print(r.text)
However, when I do this, the connection to the proxy doesn't work (my IP stays the same with or without the proxy).
I get the following error:
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='facebookcorewwwi.onion', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x109e8b198>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
After doing some research, I saw someone who tried to do a similar thing and the solution was to connect to the proxy before importing the requests/urllib.request library.
So I tried connecting using the libraries socks and socket:
import socks
import socket
def create_connection(address, timeout=None, source_address=None):
    sock = socks.socksocket()
    sock.connect(address)
    return sock

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)

# patch the socket module
socket.socket = socks.socksocket
socket.create_connection = create_connection
import urllib.request
with urllib.request.urlopen('https://facebookcorewwwi.onion/') as response:
    html = response.read()
    print(html)
When I do this, my connection to the proxy gets refused:
urllib.error.URLError: <urlopen error Error connecting to SOCKS5 proxy 127.0.0.1:9050: [Errno 61] Connection refused>
I tried to use the requests library instead, as follows (simply replacing everything from the line that says import urllib.request):
import requests
r = requests.get('https://facebookcorewwwi.onion/')
print(r.text)
But here I get this error:
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='facebookcorewwwi.onion', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x10d93ee80>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
It seems that no matter what I do, my connection to the proxy gets refused. Does anyone have an alternative solution or a way to fix this?
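One detail worth checking in the first snippet: the proxies are configured on the session, but the request is then made with requests.get(), which ignores the session entirely. A minimal sketch of routing the request through the session instead (this assumes Tor is actually listening on 127.0.0.1:9050 and that PySocks is installed, e.g. pip install requests[socks]):

import requests

session = requests.session()
session.proxies = {
    'http': 'socks5h://localhost:9050',
    'https': 'socks5h://localhost:9050',
}
# session.get() applies the proxies above; requests.get() would not
r = session.get('https://facebookcorewwwi.onion/')
print(r.text)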
I'm trying to use requests to get the text off a website, but it is not working and I'm not sure why. Here is my code:
import requests
print(requests.get("https://projecteuler.net/project/resources/p079_keylog.txt").text)
which gives me the following error:
HTTPSConnectionPool(host='projecteuler.net', port=443): Max retries exceeded with url: /project/resources/p079_keylog.txt (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x000002701CC15128>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))
So it seems a connection cannot be made. The website is valid and works. Is there anything simple I'm doing wrong that may be causing this?
You have a little error:
print(requests.get("https://projecteuler.net/project/resources/p079_keylog.txt").text)
The .text is outside the URL string, but anyway, I've tried it and did not get any error. It seems that you have exceeded the number of requests.
Result:
319
680
...
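If rate limiting really is the cause, a minimal sketch of adding automatic retries with backoff (the Retry parameters here are illustrative, not tuned):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=1,
                status_forcelist=[429, 500, 502, 503, 504])
session.mount('https://', HTTPAdapter(max_retries=retries))
print(session.get('https://projecteuler.net/project/resources/p079_keylog.txt').text)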