I am trying to create a Python script that can deploy an artifact to Artifactory.
I am using Python 3.4, and I want to run the resulting script through py2exe, so external libraries might create issues.
Through all my research, I found that one way is this, but I don't know how to "translate" it to Python:
curl -X PUT -u user:password --data-binary @/absolute/path/my-utils-2.3.jar "http://localhost/artifactory/my-repo/my/utils/2.3/"
How can I achieve that in Python? Or is there another way of deploying?
Been trying the whole day and I've had some successful testing using the requests library.
import requests

# Target location in the repository; use the full URL to your Artifactory server.
url = "http://localhost/artifactory/my-repo/my/utils/2.3/test.txt"
file_name = "test.txt"
auth = (USERNAME, PASSWORD)  # fill in your credentials

# Stream the file as the request body, like curl --data-binary.
with open(file_name, 'rb') as fobj:
    res = requests.put(url, auth=auth, data=fobj)

print(res.text)
print(res.status_code)
And py2exe had no issues with it.
You might want to take a look at Party; either look at how they do it, or just use it directly.
I want to test my different environments, like DEV, TEST, STAGE, and PRODUCTION. The API calls for the environments are different; for instance, http://dev.myclient.com, http://stage.myclient.com, etc.
So, I want to write test cases that will go to each specific URL, search for a specific thing, and print and save the response. For example, I search for apples and I get 500 results related to that; I want to print that result and save it as text or JSON. The same applies to all the different environments.
Then, once I have the raw response data, I will compare all the environments against one another.
Any ideas how I can do that, in Python specifically?
Thanks in advance!!
You could use the requests library for sending HTTP requests. This isn't a built-in library, so you should run pip install requests to install it.
Here's an example:
import requests
url = "http://dev.myclient.com/"
response = requests.get(url).json()
print(response)
The console should now print the JSON response of the request you just sent. For this to work the URL provided should return a JSON string response.
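Building on that, here is a minimal sketch of hitting the same kind of endpoint across environments and saving each raw response to a JSON file for later comparison. The base URLs are from your question, but the /search path and the q parameter are assumptions; swap in the real endpoint and parameters of your API:
import json
import requests

# Base URLs from the question; add TEST and PRODUCTION the same way.
ENVIRONMENTS = {
    "dev": "http://dev.myclient.com",
    "stage": "http://stage.myclient.com",
}

def fetch_results(base_url, term):
    # Hypothetical search endpoint; replace the path and params with the real ones.
    response = requests.get(base_url + "/search", params={"q": term})
    response.raise_for_status()
    return response.json()

for env, base_url in ENVIRONMENTS.items():
    results = fetch_results(base_url, "apples")
    # Save the raw response per environment, e.g. dev.json, stage.json.
    with open(env + ".json", "w") as fp:
        json.dump(results, fp, indent=2)
With one file per environment, you can diff the files directly or load them back in Python to compare the results.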
I want to download a .csv file from here automatically, every time I run the script:
https://www.nasdaq.com/market-activity/quotes/historical
So whenever I open the notebook, running the script will get me the latest data from Nasdaq, and I won't have to upload the dataset manually.
This is the URL of the download:
https://www.nasdaq.com/api/v1/historical/AMZN/stocks/2010-10-26/2020-10-26
I tried some libraries, but none of them worked for me.
I use Google Colaboratory and Python 3.
For a reason which I don't understand (maybe someone here who knows more can let us both know), you need to provide headers along with your request. I got the answer from here.
So in your case the following seems to work for me:
import requests

url = 'https://www.nasdaq.com/api/v1/historical/AMZN/stocks/2010-10-26/2020-10-26'
# Nasdaq appears to refuse requests that don't send a browser-like User-Agent.
headers = {"user-agent": "Mozilla"}
r = requests.get(url, headers=headers)

with open('HistoricalQuotes_2020_10_26.csv', 'wb') as f:
    f.write(r.content)
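Since you're in Colab, a small follow-up to pull the freshly downloaded file straight into the notebook (assuming pandas, which Colab ships with) might look like this:
import pandas as pd

# Load the CSV downloaded above into a DataFrame.
df = pd.read_csv('HistoricalQuotes_2020_10_26.csv')
print(df.head())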
I am trying to learn simple automation. I have set up an Ubuntu server, and I want to configure it to download the HTML source from a specific URL and append it to a file in a specified folder on the server every minute.
The URL is just basic HTML with no CSS whatsoever.
I want to use Python, but admittedly I can use any language. What is a good, simple way to do this?
Jeff's answer works for one-time use.
You could do this to run it repeatedly:
import time
import requests

while True:
    # Fetch the page and append its HTML to the file.
    # Replace 'url' with the page's actual URL.
    with open('filename.extension', 'a') as fp:
        newHtml = requests.get('url').text
        fp.write(newHtml)
    # Wait a minute before the next fetch.
    time.sleep(60)
You could run this as a background process for as long as you want.
$ python3 script_name.py &
Just pip install the requests library.
$ pip install requests
Then, it's super easy to get the HTML (put this in a file called get_html.py, or whatever name you like):
import requests
req = requests.get('http://docs.python-requests.org/en/latest/user/quickstart/')
print(req.text)
There are a variety of options for saving the HTML to a directory. For example, you could redirect the output from the above script to a file by calling it like this:
python get_html.py > file.html
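If you'd rather not rely on shell redirection, a small variation (same URL, example filename) writes the file from Python directly:
import requests

req = requests.get('http://docs.python-requests.org/en/latest/user/quickstart/')

# Save the HTML straight to a file instead of redirecting stdout.
with open('file.html', 'w') as f:
    f.write(req.text)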
Hope this helps
I am very new to Python (and relatively new to programming) and would appreciate any help.
How would I use the following example link to download the supplied information using Python?
https://url.com/InfoService/GetFlightByFlightNum?board={BOARD}&flightNum={FLIGHTNUM}
Method: GET
Thanks
The easiest way to access APIs from Python is using the requests library.
You can quickly install it with pip install requests.
Then you can do this for your example:
import requests

# BOARD and FLIGHTNUM are placeholders; fill in your actual values.
payload = {'board': BOARD, 'flightNum': FLIGHTNUM}
r = requests.get('https://url.com/InfoService/GetFlightByFlightNum', params=payload)

# Print the response result.
print(r.text)
You can use the built-in library urllib2 (urllib.request in Python 3).
Python 2 documentation: http://docs.python.org/2/library/urllib2.html
Python 3 documentation: http://docs.python.org/3.1/howto/urllib2.html
Here are some examples: How do I download a file over HTTP using Python?
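For the flight API above, a minimal sketch with urllib2 might look like this (Python 2 shown, matching the library named; use urllib.request and urllib.parse in Python 3). BOARD and FLIGHTNUM are placeholders from the question, to be filled in:
import urllib
import urllib2

# Fill in BOARD and FLIGHTNUM with your actual values.
params = urllib.urlencode({'board': BOARD, 'flightNum': FLIGHTNUM})
url = 'https://url.com/InfoService/GetFlightByFlightNum?' + params

response = urllib2.urlopen(url)
print(response.read())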
I'm using MultipartPostHandler to send a file. My code is the following:
params = {'file': open(file_name, 'rb')}
headers = {'cookie': session_id}
urllib2.install_opener(urllib2.build_opener(MultipartPostHandler.MultipartPostHandler))
response = urllib2.urlopen(urllib2.Request("http://www.example.com/upload", params, headers))
How could I do the same (send a file to the server) without using MultipartPostHandler? It would be good to use only built-in Python modules and urllib2. Is that possible?
MultipartPostHandler needs to be installed using easy_install, pip, or
from source. I would like to write a Python script that would not
require new installations.
Just add it to your original script; it is just one file. Copy-paste the code for the module.
Unfortunately, there is no direct method available to post a multipart file using urllib2. But there are ways to accomplish that by writing a custom form object using the mimetypes and mimetools modules. You could follow this recipe and adapt your form to do a multipart upload using urllib2.
(In Python 3, urllib.request's data parameter can take a file object, but that reads the whole file into memory.)
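As a rough illustration, here is a minimal sketch of hand-building the multipart/form-data body with only standard-library modules (Python 2, to match the question; file_name and session_id are the variables from the snippet above, and the upload URL is the example one):
import urllib2
import uuid

def encode_multipart_file(field_name, file_name, file_data):
    # A multipart body is the file's headers and bytes wrapped in a unique boundary.
    boundary = uuid.uuid4().hex
    body = ("--%s\r\n"
            'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
            % (boundary, field_name, file_name))
    body += file_data + "\r\n--%s--\r\n" % boundary
    content_type = "multipart/form-data; boundary=%s" % boundary
    return body, content_type

with open(file_name, 'rb') as f:
    body, content_type = encode_multipart_file('file', file_name, f.read())

headers = {'Content-Type': content_type, 'cookie': session_id}
response = urllib2.urlopen(urllib2.Request("http://www.example.com/upload", body, headers))
Note that, like the Python 3 caveat above, this reads the whole file into memory, which is fine for small uploads.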