I'm currently looking to make an updater program for my Guild Wars 2 plugins, but I have an issue with the last download: the name of the file to download isn't consistent from version to version, as you can see there. I asked the creator some months ago to make the name consistent, but updates are fairly rare and nothing has been done.
Would there be a way to get either all the release files, or to download them using a filter so it doesn't fetch the other ones?
For now I've been using the following code to download the other plugins and write them to the corresponding files, but this method doesn't work at all with that specific one because the name of its release file changes.
(using python 3.9.6)
import requests

test = requests.get(
    'https://github.com/knoxfighter/arcdps-killproof.me-plugin/releases/latest/download/d3d9_arcdps_killproof_me.dll',
    allow_redirects=True,
)
print("code: " + str(test.status_code))
# Use a context manager so the file handle is closed properly
with open('d3d9_arcdps_killproof_me.dll', 'wb') as f:
    f.write(test.content)
Any ideas on how I could work around this and still download this last plugin?
If you're looking for an example of how to call git pull from within Python, this seems to be a good solution:
The code is using this library:
https://github.com/gitpython-developers/GitPython
import git

# git_dir should be the path to your local clone of the repository
g = git.cmd.Git(git_dir)
g.pull()
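Alternatively, you can stay with release downloads and ask the GitHub REST API for the asset list of the latest release, then filter by asset name. A rough sketch (the repository is the one from the question; the 'killproof' name filter is an assumption you should adjust):

import requests

# The releases/latest endpoint returns JSON metadata, including all assets
api_url = 'https://api.github.com/repos/knoxfighter/arcdps-killproof.me-plugin/releases/latest'
release = requests.get(api_url).json()

for asset in release.get('assets', []):
    # Match the plugin DLL even if its exact name changes between versions
    if 'killproof' in asset['name'].lower() and asset['name'].endswith('.dll'):
        data = requests.get(asset['browser_download_url'], allow_redirects=True)
        with open('d3d9_arcdps_killproof_me.dll', 'wb') as f:
            f.write(data.content)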
I have a problem. Let's say I have a website (e.g. www.google.com). Is there any way in Python to create a file with a .url extension linking to this website? (I am currently looking for a flat, and I am trying to save shortcuts on my hard drive only to apartment offers posted online that match my expectations.) I've tried to use the os and requests modules to create such files, but with no success. I would really appreciate the help. (I am using Python 3.9.6 on Windows 10.)
This is pretty straightforward. I had no idea what .url files were before seeing this post, so I decided to drag this post's URL to my desktop. That created a file with the following contents, which I viewed in Notepad:
[InternetShortcut]
URL=https://stackoverflow.com/questions/68304057/internet-shortcut-in-python
So, you just need to write out the same thing via Python, except replace the URL with the one you want:
test_url = r'https://www.google.com/'
with open('Google.url', 'w') as f:
    f.write(f"""[InternetShortcut]
URL={test_url}
""")
With regards to your current attempts:
"I've tried to use the os and requests modules to create such files"
It's not clear what you're using requests or os for, since you didn't provide a Minimal Reproducible Example of what you'd tried so far. So if there's a more complex element to this that you didn't specify, such as automatically generating the file while you're in your browser, you need to update your question to include all of your requirements.
I was trying to implement the Optopsy code for backtesting option strategies.
I get as far as pip install optopsy in the instructions, and then I'm not sure how to run the next line:
python strategies/sample_strategy.py
Also, which folder do you save the data file in?
For this line
data = op.get(FILE, SPX_FILE_STRUCT, prompt=False)
do you put in the actual file name and location?
Website link to the code is below:
https://pypi.org/project/optopsy/
Thank you, RK
I had the same trouble with the posted example in the early days of Optopsy development. The snippet was incomplete. The author, Michael Chu, has since improved and published his source code.
Try this module and corresponding data from:
https://github.com/michaelchu/optopsy
It uses the same data file that you have: "./data/Sample_SPX_20151001_to_20151030.csv".
The code you started with is now complete, works great, and can be found here:
https://github.com/michaelchu/optopsy/blob/master/samples/spx_singles_example.py
Optopsy is a phenomenal library, and v2.0 is now current.
Other Python backtesting libraries (bt, backtrader, pyalgotrade, quantopian, zipline, and pysystemtrade) handle backtesting of the underlying, but frustratingly offer insufficient support for testing options.
Feel free to reach out if you have a question. I am actively using Optopsy; it works great with Python 3.7, and the output is straight to the point, displaying your results in a pleasant format.
Yes to your question about the line
data = op.get(FILE, SPX_FILE_STRUCT, prompt=False)
You do need your current directory. I think you want something like this:
import os

def filepath():
    # Directory this script lives in
    curr_dir = os.path.abspath(os.path.dirname(__file__))
    # Build the path to the sample data file relative to the script
    return os.path.join(curr_dir, "data", "SPX_20151001_to_20151030.csv")
I find I need to change all backslashes to forward slashes when I copy-paste a full Windows path.
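With filepath() returning the real location, the line from the sample then becomes (SPX_FILE_STRUCT as defined in the sample script):

data = op.get(filepath(), SPX_FILE_STRUCT, prompt=False)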
-Cloihdna
I may sound rather uninformed writing this, and unfortunately, my current issue may require a very detailed answer to fix. Therefore, I will try to be as specific as possible to ensure that my problem can be concisely understood.
My apologies for that, as this Python code was merely obtained from a friend of mine, who wrote it for me in order to complete a certain task. I myself have extremely minimal programming knowledge.
Essentially, I am running Python 3.6 on a Mac. I am trying to get a script working that scans through a bulk of a particular website's potentially existent subdomains, finds any JPG image files contained within them, and downloads all of the found files to a distinct folder on my Desktop.
The Setup-
The code itself, named "download.py" on my computer, is written as follows:
import urllib.request

start = int(input("Start range:100000"))
stop = int(input("End range:199999"))

for i in range(start, stop + 1):
    # Zero-pad the number to 6 digits and build the image URL
    filename = str(i).rjust(6, '0') + ".jpg"
    url = "http://website.com/Image_" + filename
    urllib.request.urlretrieve(url, filename)
    print(url)
(Note that the words "website" and "Image" have been substituted for the actual text included in my code).
Before I proceed, perhaps some explanation would be necessary.
Basically, the website in question contains several subdomains that include .JPG images; however, most of the exact URLs that give access to these subdomains are unknown, hidden components of the website itself. The format is "website.com/Image_xxxxxx.jpg", where each x is a digit; there are 6 digits in total, and only certain combinations form valid codes, each corresponding to one of the images that exist on the site.
So, as you can see, I have set up the code so that Python will initially search through number values in the aforementioned URL format from 100000 to 199999 and, upon discovering any .JPG images among the thousands of candidate links, directly download all of them to a specific folder on my Desktop. The plan is to start with that range and, after running the code and fetching any images (or not), keep renumbering the code to work through all possible 6-digit combinations until the operation succeeds.
(Possible side-issue: although I am fairly confident that my friend's code only downloads .JPG files that actually exist at the given URL, rather than swarming my folder with blank files for every single URL attempt regardless of whether it succeeds, I am admittedly not completely certain. If the latter is the case, a more suitable edit to my code would be tremendously appreciated.)
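(For what it's worth: urllib.request.urlretrieve raises an HTTPError when the server answers 404, rather than writing an empty file, so missing images won't leave blanks behind; the catch is that the first miss stops the whole loop. A sketch of a skip-on-error variant of the same loop:)

import urllib.error
import urllib.request

start, stop = 100000, 199999  # same range as above

for i in range(start, stop + 1):
    filename = str(i).rjust(6, '0') + ".jpg"
    url = "http://website.com/Image_" + filename
    try:
        urllib.request.urlretrieve(url, filename)
        print(url)
    except urllib.error.HTTPError:
        # No image at this number; skip it and keep going
        pass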
The Execution-
Right off the bat, the code hit a major error. I'll walk through the series of steps that led to it.
#1- Of course, I first copy-pasted the code into a text document (using BBEdit) and saved it as "download.py". I saved it inside a folder named "Images", where I wanted the images to be downloaded.
#2- I proceeded, in Terminal, to input the commands "cd Desktop/Images" (to account for the file being held within the "Images" folder on my Desktop), followed by the command "Python download.py" (to actually run the code).
As you can see, the error I got when trying to run the code was ImportError: No module named request. Although I suspect the answer is simple, I legitimately have such minimal knowledge of Python that I've absolutely no idea how to solve this.
Hint: Prior to making the download.py file, the folder, and typing the Terminal commands, my only interactions with Python were downloading the program (3.6) and placing it in my toolbar. I'm not even sure whether I need to create any additional scripts or text files, or make any additional downloads, before a script like this will work and successfully download the images into my "Images" folder, as is my goal. If I missed something integral at any point during this long read, hopefully someone here can provide a thoroughly detailed explanation of how to solve my issue.
Finishing statements for those who've managed to stick with it this far:
Thank you. I know this is one hell of a read, and I'm getting more tired as I go along. What I hope to get out of this question is:
1.) Obviously, a direct solution to the ImportError: No module named request in Terminal; in other words, what I did wrong there or am missing.
2.) Any other helpful information that would assist this code, for example, any integral step or condition I've missed or failed to meet that would ultimately cause the code to stop working. If you do see a fault, I only ask that you be specific, as I've not got much experience in the programming world. After all, there are a lot of developers out here who are far more informed and experienced than I am. Thanks.
urllib.request exists in Python 3 only. When running 'python' on a Mac, you're running Python 2 by default. Try executing with python3.
Check what you have first:

python --version

You might need to:

brew install python3
urllib.request is a Python 3 construct. Most systems run Python 2 by default, and that is what you get when you simply run python.
To install Python 3, go to https://brew.sh/ and follow the instructions to install the Homebrew package manager. Then run:
brew install python3
python3 download.py
I am writing a Python script using the python-bugzilla 1.1.0 package from PyPI. I am having a hard time trying to get some of the tags (some may not be supported by the package) from a bug on Bugzilla. Here is the code I have so far:
import bugzilla

bz = bugzilla.Bugzilla(url='https://bugzilla.redhat.com/xmlrpc.cgi')
bug = bz.getbug(495561)
print bug.description  # this works (it's the first comment)
I don't know how to get the rest of the comments. Also, I don't know how to get access to a file attached to the bug. Can anyone help me with this? Are comments and downloading attached files not supported by this package?
You can get the comments with:
for comment in bug.comments:
    print comment
Where comments contain links, you can download them with urllib2, scrapy, or some such; and where there is an attachment, you can get its ID from the comment and then use bz.openattachment(ID) to fetch it.
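For the attachment itself, a minimal sketch, assuming openattachment returns a file-like object and that you pulled a (hypothetical) attachment ID out of the comment:

attachment_id = 12345  # hypothetical; take the real ID from the comment
att = bz.openattachment(attachment_id)
with open('attachment.out', 'wb') as f:
    f.write(att.read())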
I am trying to make a script that runs pylint on the files present in the pull request and creates inline comments for the linting errors.
I've got the hang of how to use PyGithub. The problem is that in order to comment on a pull request, you have to know the commit that modified the file and the line number within the patch. Full documentation on the review comments API is found here.
Pylint reports the line in the resulting file. I need to get from foo/bar.py:30 to the commit that modified line 30 of foo/bar.py, and then to the actual position in the diff for that file.
Is there something that can already do this, or do I have to manually search for the @@ hunk headers in every commit involved in the pull request?
What you are asking for is exactly what the blame feature does.
The only API I could find was this restfulgit.
Based on a blind text search, this looks like the function that implements getting blame info. If you understand how it uses the underlying git API, you can copy that part instead of using restfulgit.
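If you would rather do it locally, a minimal sketch using GitPython's blame call (repository path and file name are illustrative) that maps a pylint-reported line to the commit that last touched it:

import git

def commit_for_line(repo_path, file_path, line_no):
    """Return the commit that last modified line_no (1-based) of file_path."""
    repo = git.Repo(repo_path)
    seen = 0
    # repo.blame yields (commit, [lines]) chunks in file order
    for commit, lines in repo.blame('HEAD', file_path):
        seen += len(lines)
        if line_no <= seen:
            return commit
    return None

# e.g. the commit that last modified foo/bar.py line 30
print(commit_for_line('.', 'foo/bar.py', 30))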