Use Python to download a file from ServiceNow using URL - python

I have multiple reports that I need to run and export from ServiceNow. I have been using a URL with filtering to get the reports downloaded directly into the Downloads folder this way, but I would like to create a Python 3.10.5 script that would do all this for me and save everything to a specified location. I would then extend this code to do some data merging and add extra columns to the CSV. Is this even possible, given that ServiceNow requires credentials?
from urllib.request import urlopen

# Download from URL.
with urlopen('MyURL') as response:
    content = response.read()

# Save to file.
with open('output.html', 'wb') as output:
    output.write(content)
I'm getting a heap of errors, most notably HTTP Error 401: Unauthorized.
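Yes, it is possible. The 401 means ServiceNow rejected the unauthenticated request; report export URLs can usually be fetched with HTTP basic auth, assuming your instance allows it. A minimal sketch using only the standard library (the URL, credentials, and file names below are placeholders):

```python
import base64
from urllib.request import Request, urlopen

def basic_auth_header(username: str, password: str) -> str:
    """Build the value for an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def download_report(url: str, username: str, password: str, dest: str) -> None:
    """Download a ServiceNow report export URL to a local file."""
    request = Request(url, headers={"Authorization": basic_auth_header(username, password)})
    with urlopen(request) as response, open(dest, "wb") as out:
        out.write(response.read())

# download_report('MyURL', 'my.user', 'my.password', 'report.csv')
```

Once the CSVs are saved locally, the merging and extra columns can be done with pandas (`pd.read_csv`, `DataFrame.merge`, then assign new columns).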

Related

Generate a Kibana/Elasticsearch report PDF to be opened directly in Python

I'm trying to write a Python script to automatically grab a PDF snapshot of a Kibana dashboard and upload it to a website called Jira.
I've worked out the part about uploading a locally stored PDF to Jira via Python; however, I need to remove the step of having to download the PDF to my computer and then open it in Python (i.e., a function that opens a link from Kibana to the PDF). Is this possible?
from jira import JIRA

# Server authentication
username = 'xxxxxxx'
password = 'xxxxxxx'
options = {'server': 'https://yourhostname.jira.net'}
jira = JIRA(options=options, auth=(username, password))
issue = jira.issue('PROJ-1000')

# Upload the file
with open('/some/path/attachment.pdf', 'rb') as f:
    jira.add_attachment(issue=issue, attachment=f)
This code works for attaching a PDF that I have stored locally, but I would like to grab a PDF dashboard report directly from Kibana without having to store it on my computer first.
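One way to skip the temporary file is to read the HTTP response into an in-memory BytesIO buffer and hand that to add_attachment, which accepts any file-like object plus an explicit filename. A sketch, assuming the Kibana report endpoint accepts basic auth (the URL and names below are placeholders):

```python
import base64
from io import BytesIO
from urllib.request import Request, urlopen

def fetch_pdf_in_memory(url: str, username: str, password: str) -> BytesIO:
    """Fetch a PDF over HTTP basic auth and return it as an in-memory file object."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = Request(url, headers={"Authorization": f"Basic {token}"})
    with urlopen(request) as response:
        return BytesIO(response.read())

# buffer = fetch_pdf_in_memory('https://kibana.example.com/report.pdf', username, password)
# jira.add_attachment(issue=issue, attachment=buffer, filename='dashboard.pdf')
```

Passing filename= is needed because a BytesIO has no name of its own for Jira to display.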

Accessing data from the internet

I want to access the file automatically using Python 3. The file is at https://www.dax-indices.com/documents/dax-indices/Documents/Resources/WeightingFiles/Ranking/2019/March/MDAX_RKC.20190329.xls
When you manually enter the URL into a browser it asks you to download the file, but I want to do this in Python automatically and load the data as a DataFrame. I get the error below.
from urllib.request import urlretrieve
import pandas as pd

# Assign url of file: url
url = 'https://www.dax-indices.com/documents/dax-indices/Documents/Resources/WeightingFiles/Ranking/2019/March/MDAX_RKC.20190329.xls'

# Save file locally
urlretrieve(url, 'my-sheet.xls')

# Read file into a DataFrame and print its head
df = pd.read_excel('my-sheet.xls')
print(df.head())
URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
$ curl https://www.dax-indices.com/documents/dax-indices/Documents/Resources/WeightingFiles/Ranking/2019/March/MDAX_RKC.20190329.xls
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>307 Temporary Redirect</title>
</head><body>
<h1>Temporary Redirect</h1>
<p>The document has moved here.</p>
</body></html>
You are just getting redirected. There are ways to follow the redirect in code, but I would just change url to "https://www.dax-indices.com/document/Resources/WeightingFiles/Ranking/2019/March/MDAX_RKC.20190329.xls"
I ran your code in a Jupyter environment and it worked. No error was raised, but the DataFrame has only NaN values. I checked the xls file you are trying to read, and it seems not to contain any data...
There are other ways to retrieve xls data, such as: downloading an excel file from the web in python
import requests
import pandas as pd

url = 'https://www.dax-indices.com/documents/dax-indices/Documents/Resources/WeightingFiles/Ranking/2019/March/MDAX_RKC.20190329.xls'
resp = requests.get(url)
with open('my-sheet.xls', 'wb') as output:
    output.write(resp.content)

df = pd.read_excel('my-sheet.xls')
print(df.head())
You can do it directly with pandas and the .read_excel method:
df = pd.read_excel("https://www.dax-indices.com/documents/dax-indices/Documents/Resources/WeightingFiles/Ranking/2019/March/MDAX_RKC.20190329.xls", sheet_name='Data', skiprows=5)
df.head(1)
Sorry mate, it works on my PC (not a very helpful comment, tbh). Here's a list of things you can try:
Obtain a response and check its status code (a 2xx or 3xx code means everything is good; anything else has a different meaning)
Check whether that link blocks bot access (certain sites do that)
If bot access is blocked, use Selenium for Python
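The checklist above can be sketched with the standard library; the helper names are illustrative, and the browser-like User-Agent covers the bot-blocking case:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def is_usable(status: int) -> bool:
    """A 2xx (success) or 3xx (redirect) status is fine; anything else needs attention."""
    return 200 <= status < 400

def probe(url: str) -> int:
    """Fetch the URL with a browser-like User-Agent, since some sites block bare clients."""
    request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urlopen(request) as response:
            return response.getcode()
    except HTTPError as err:
        return err.code
```

urlopen follows redirects automatically, so the 307 in the curl output would be resolved transparently; probe returns the status of the final response.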

When saving FogBugz attachment, server always returns empty response (with some headers)

I'm trying to save a case attachment to a local folder. I have a problem using the attachment URL to download it; each time the server returns an empty response with status code 200.
This is a sample url I use (changed host and token) :
https://example.fogbugz.com/default.asp?pg=pgDownload&pgType=pgFile&ixBugEvent=385319&ixAttachment=56220&sFileName=Log.7z&sTicket=&sToken=1234567890627ama72kaors2grlgsk
I have tried using token instead of sToken, but it makes no difference. If I copy the above URL into Chrome it won't work either, but if I log in to FogBugz (Manuscript) and then try the URL again, it works. So I suppose there are some security checks involved here.
Btw, I use the Python FogBugz API for this and save the URL using urllib: urllib.request.urlretrieve(url, "fb/" + file_name)
The solution I found is to reuse cookies from the web browser where I previously logged in to the FogBugz account, so it does look like an authentication issue.
For that I used pycookiecheat (for Windows, see my fork: https://github.com/luskan/pycookiecheat). For the full code, see here: https://gist.github.com/luskan/66ffb8f82afb96d29d3f56a730340adc
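For reference, once you have the browser cookies as a plain dict (pycookiecheat's chrome_cookies(url) returns exactly that), the download itself can be sketched with just the standard library; the names below are illustrative:

```python
from urllib.request import Request, urlopen

def cookie_header(cookies: dict) -> str:
    """Serialise a cookie dict into a Cookie request-header value."""
    return "; ".join(f"{name}={value}" for name, value in cookies.items())

def download_attachment(url: str, cookies: dict, dest: str) -> None:
    """Download a FogBugz attachment, reusing an authenticated browser session."""
    request = Request(url, headers={"Cookie": cookie_header(cookies)})
    with urlopen(request) as response, open(dest, "wb") as out:
        out.write(response.read())
```

urlretrieve cannot send custom headers, which is why it got the empty logged-out response; building a Request with the Cookie header avoids that.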

Python : Passing username password to fetch a url response in json using method GET

Scenario:
I have Python code that gives me the exact results on Windows 10, but when I run the same code on Red Hat 6.4 it throws an error.
I am trying to open an API URL, fetch the JSON content, and manipulate it further.
Here is the code:
import json
import urllib

url = 'https://#username:password#api.locu.com/v1_0/venue/search'
data = json.load(urllib.urlopen(url))
print data
When I run this code on Windows 10 with Python 2.7.13 it gives me the exact output, but running it on Red Hat 6.4 gives an authentication error and returns no JSON data.
Can't I use the format below, or do I need to pass the authentication as headers?
https://#username:password#apiurl
If I need to pass the authentication as headers, how can I do that?
*note: I am running the Red Hat OS in a VM.
Not sure if this is a proxy issue.
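Credentials embedded in the URL are not handled consistently across platforms and library versions, so passing them as an Authorization header is the more portable option. A sketch in Python 3 syntax (the same idea works in Python 2 with urllib2.Request; the URL is a placeholder):

```python
import base64
import json
from urllib.request import Request, urlopen

def get_json(url: str, username: str, password: str):
    """GET a JSON endpoint, passing the credentials as a Basic auth header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = Request(url, headers={"Authorization": f"Basic {token}"})
    with urlopen(request) as response:
        return json.load(response)

# data = get_json('https://api.locu.com/v1_0/venue/search', 'username', 'password')
```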

How to add attachment to JIRA issue from web url using python?

I've successfully attached a file to an existing issue via the JIRA REST API using Python:
from jira.client import JIRA

jira = JIRA(basic_auth=([username], [password]), options={'server': [our_jira_server]})
issue = jira.issue([issue_id])
with open('test.txt', 'rb') as file:
    attachment_object = jira.add_attachment(issue, file)
But instead of a local file path, I need the file to be fetched from a remote URL (let's say Google Drive).
I've read this document, but it didn't help. A Python implementation would be awesome, but a curl implementation would do too.
Thanks in advance.
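A possible sketch (the URL and filename below are placeholders): fetch the remote file into memory and pass the buffer to add_attachment, which accepts any file-like object plus an explicit filename:

```python
from io import BytesIO
from urllib.request import urlopen

def attachment_from_url(file_url: str) -> BytesIO:
    """Fetch a remote file (e.g. a direct-download link) into memory."""
    with urlopen(file_url) as response:
        return BytesIO(response.read())

# buffer = attachment_from_url('https://example.com/files/test.txt')
# jira.add_attachment(issue=issue, attachment=buffer, filename='test.txt')
```

Note that Google Drive share links need converting to direct-download form first. With curl, the equivalent REST call is a multipart POST to /rest/api/2/issue/{issueIdOrKey}/attachments with the X-Atlassian-Token: no-check header.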
