I am creating a Python application that will download a Minecraft server JAR file and use it to automatically create and configure a Minecraft server. My goal is to have the software download a Minecraft server JAR, but I do not know how to download all versions of the Minecraft server software (how to format a download URL to do so) or how to do that with Python. TLDR: I need to know how to specify in the URL which version of the Minecraft server software to download.
This will download and store the Minecraft 1.9 server in a JAR file. Note that this link is subject to change, so you may want to look into hosting the server binaries yourself to get a permanent URL that won't break as Mojang changes the Minecraft website.
import requests
URL = "https://piston-data.mojang.com/v1/objects/f69c284232d7c7580bd89a5a4931c3581eae1378/server.jar"
response = requests.get(URL)
with open("server.jar", "wb") as file:
    file.write(response.content)
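Rather than hard-coding one object hash, you can resolve the URL for any version from Mojang's version manifest: it lists every release, and each entry links to a per-version JSON file whose `downloads.server.url` field points at that version's server.jar. A sketch (the manifest URL and JSON layout are accurate as of writing, but Mojang may change them, and very old versions have no server download):

```python
import json
import urllib.request

# Mojang's index of every Minecraft version; URL current as of writing.
MANIFEST_URL = "https://launchermeta.mojang.com/mc/game/version_manifest.json"

def version_meta_url(manifest, version_id):
    """Find the per-version metadata URL for version_id in the manifest."""
    for version in manifest["versions"]:
        if version["id"] == version_id:
            return version["url"]
    raise ValueError(f"unknown Minecraft version: {version_id}")

def server_jar_url(version_id):
    """Resolve the server.jar download URL for a version id such as '1.9'."""
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = json.load(resp)
    with urllib.request.urlopen(version_meta_url(manifest, version_id)) as resp:
        meta = json.load(resp)
    # very old versions ship no server build; this raises KeyError for them
    return meta["downloads"]["server"]["url"]
```

You would then pass the resolved URL to the `requests.get` snippet above instead of the hard-coded one.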
I want to make a Python script that runs a server which I can then send a request to from another PC using curl, in order to send a file. So I would send a file from a PC to the server using curl, and the server would save that file.
I tried googling but couldn't find anything like that, though I did find that pycurl might be useful. If anyone could point me in the right direction, I would appreciate it.
Sending a file is typically just a POST with the file as the request body. You can make a web server using something such as Flask, and add an endpoint that receives the POST data and saves it as a file.
On the server, you don't need curl at all, just an HTTP server that accepts requests.
You can use http.server in the standard library...
https://gist.github.com/mdonkers/63e115cc0c79b4f6b8b3a6b797e485c7
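A minimal sketch of that approach using only http.server from the standard library (the X-Filename header is a convention made up for this example, not part of any standard; Flask's request.files would do the same job):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read exactly Content-Length bytes of the request body
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # the client names the file via a custom header (made-up convention);
        # no sanitization here, so don't expose this beyond a trusted network
        name = self.headers.get("X-Filename", "upload.bin")
        with open(name, "wb") as f:
            f.write(body)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"saved\n")

# To run: HTTPServer(("0.0.0.0", 8000), UploadHandler).serve_forever()
# From the Windows PC, no script needed:
#   curl --data-binary @report.txt -H "X-Filename: report.txt" http://server:8000/
```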
The server is mine; I will run the Python script on my AWS machine. The PC is Windows 10, and I would want to just use curl, which comes pre-installed on Windows, to send the file, so that I don't have to install anything or write any scripts. Purely send the file using curl from cmd.
Sending the file via curl is likely to be problematic. When I google "curl upload file", I mostly find questions that were never answered or advice that does not work. Without writing a script, your easy option is to use FTP. Curl would not be required on the server: either you write a simple script, or you use FTP. The more difficult part is on the PC side, though not all that difficult, because you control both sides of the transfer.
Have you considered using a command line SFTP client? A secure FTP transfer is very easy using a CMD prompt. One single and simple CMD statement.
I did a web app for a laboratory. Every day PDFs with patient lab results are uploaded to a doctor portal. The docs then go to the portal to view the PDFs. I first wrote this web app 10 years ago. It has been very trouble free.
I was just beginning my answer, gathering some options, when I got your message.
Let me know ASAP if FTP is a viable option. I will get you the command line to FTP a file to the server.
pscp is PuTTY's free SCP (secure copy over SSH) file transfer utility. I have been using it for over 10 years. Very reliable. The date on my pscp.exe, a 316 KB file, is September 2013.
PuTTY also has another CMD utility, psftp, which I have used a few times. psftp has a scripting protocol.
Both PuTTY utilities are very popular and well documented.
This is a sample pscp CMD. The RUN is the command from my Windows app programming language as well.
RUN('CMD /c ECHO ON & pscp -v -p -pw $password $filename user@domain.com:server path & EXIT')
I looked at the Filezilla Pro CLI (command line interface). Even though the Filezilla CLI is poorly documented, Filezilla Pro supports transfers to and from these storage services:
Amazon S3
Backblaze B2
Box
Dropbox
Google Cloud Storage
Google Drive
Microsoft Azure File Storage Service
Microsoft Azure Blob Storage Service
Microsoft OneDrive
Microsoft OneDrive for Business
Microsoft SharePoint
OpenStack Swift
Rackspace Cloud
WebDAV
If FTP is an option, I would recommend the free psftp utility from PuTTY.
The psftp.exe utility is a stand-alone 734 KB file. No installation required; just copy psftp.exe to any PC where you need it.
Here is a link to the docs for psftp
These are the FTP commands it supports:
6.2.4 The quit command: end your session
6.2.5 The close command: close your connection
6.2.6 The help command: get quick online help
6.2.7 The cd and pwd commands: changing the remote working directory
6.2.8 The lcd and lpwd commands: changing the local working directory
6.2.9 The get command: fetch a file from the server
6.2.10 The put command: send a file to the server
6.2.11 The mget and mput commands: fetch or send multiple files
6.2.12 The reget and reput commands: resuming file transfers
6.2.13 The dir command: list remote files
6.2.14 The chmod command: change permissions on remote files
6.2.15 The del command: delete remote files
6.2.16 The mkdir command: create remote directories
6.2.17 The rmdir command: remove remote directories
6.2.18 The mv command: move and rename remote files
For reference, these are the headers involved in HTTP file transfers. A raw binary body is typically labelled:
Content-Type: application/octet-stream
Content-Disposition: attachment
while a multipart form upload (what a browser sends for an <input type="file"> element) labels each part with:
Content-Disposition: form-data; name="fieldName"; filename="filename.jpg"
Content-Disposition: form-data; name="fieldName"
I need to implement a project to upload/download files from/to localhost, in Python, from the command line. But the uploaded files need to be viewable from the browser.
Basically I know I need a client, a server, and an endpoint (http://localhost).
(1) Upload:
From the client side (command line), I send the file with the Python requests package over HTTP. The server side will receive this file and parse it to get the information in it. I need to be able to see the uploaded file from the browser.
(2) Download:
From the command line on the client side, I ask for the file through an HTTP request. The request will be parsed by the server, and the file will be saved locally on my host machine.
(3) I know how to use the Python requests package.
Question: what do I need to build on the server side and the client side?
I read through the similar posts on this question, and they were not helpful.
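A minimal sketch of what the server side could look like, using only the standard library. The stdlib directory-listing handler already makes uploaded files viewable from a browser; extending it with PUT support lets a command-line client upload. The port and the "PUT to the file's path" convention are assumptions, not part of the question:

```python
from http.server import SimpleHTTPRequestHandler, HTTPServer

class UploadDownloadHandler(SimpleHTTPRequestHandler):
    """Serves the current directory (browsable listing + downloads via GET)
    and accepts uploads via PUT."""

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        # save under the request path, relative to the served directory;
        # no path sanitization here, so keep this on a trusted network only
        name = self.path.lstrip("/")
        with open(name, "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(201)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 8000), UploadDownloadHandler).serve_forever()
# Upload:   curl -T report.txt http://localhost:8000/report.txt
# Download: curl -O http://localhost:8000/report.txt
# Browser:  http://localhost:8000/ shows a directory listing of the files.
```

On the client side, the same two operations are a `requests.put(url, data=open(path, "rb"))` and a `requests.get(url)` whose content you write to disk.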
I created a Scrapy project with several spiders to crawl some websites. Now I want to use TOR to:
Hide my IP from the crawled servers;
Associate my requests with different IPs, simulating accesses from different users.
I have read some info about this, for example:
using tor with scrapy framework, How to connect to https site with Scrapy via Polipo over TOR?
The answers from these links weren't helpful to me. What are the steps that I should take to make Scrapy work properly with TOR?
EDIT 1:
Considering answer 1, I started by installing TOR. As I am using Windows, I downloaded the TOR Expert Bundle (https://www.torproject.org/dist/torbrowser/5.0.1/tor-win32-0.2.6.10.zip) and read the chapter about how to configure TOR as a relay (https://www.torproject.org/docs/tor-doc-windows.html.en). Unfortunately there is little to no information about how to do it on Windows. If I unzip the downloaded archive and run the file Tor\Tor.exe, nothing happens; however, I can see in the Task Manager that a new process is started. I don't know what the best way to proceed from here is.
After a lot of research, I found a way to setup my Scrapy project to work with TOR on Windows OS:
Download TOR Expert Bundle for Windows (1) and unzip the files to a folder (ex. \tor-win32-0.2.6.10).
Recent TOR versions for Windows don't come with a graphical user interface (2). It is probably possible to set up TOR through config files and cmd commands alone, but for me the best option was to use Vidalia. Download it (3) and unzip the files to a folder (ex. vidalia-standalone-0.2.21-win32). Run "Start Vidalia.exe" and go to Settings. On the "General" tab, point Vidalia to TOR (\tor-win32-0.2.6.10\Tor\tor.exe).
On the "Advanced" tab, check the torrc file in the "Tor Configuration File" section. I have the following ports configured:
ControlPort 9151
SocksPort 9050
Click Start Tor in the Vidalia Control Panel UI. After some processing you should see the status message "Connected to the Tor network!".
Download the Polipo proxy (4) and unzip the files to a folder (ex. polipo-1.1.0-win32). Read about this proxy in link 5.
Edit the file config.sample and add the following lines to it (at the beginning of the file, for example):
socksParentProxy = "localhost:9050"
socksProxyType = socks5
diskCacheRoot = ""
Start Polipo through cmd: go to the folder where you unzipped the files and run the command "polipo.exe -c config.sample".
Now you have Polipo and TOR up and running. Polipo accepts any HTTP request on port 8123 and redirects it to TOR through port 9050 using the SOCKS protocol.
Now you can follow the rest of the tutorial "Torifying Scrapy Project On Ubuntu" (6). Continue in the step where the tutorial explains how to test the TOR/Polipo communications.
Links:
https://www.torproject.org/download/download.html.en
https://tor.stackexchange.com/questions/6496/tor-expert-bundle-on-windows-no-installation-instructions
https://people.torproject.org/~erinn/vidalia-standalone-bundles/
http://www.pps.univ-paris-diderot.fr/~jch/software/files/polipo/
http://www.pps.univ-paris-diderot.fr/~jch/software/polipo/tor.html
http://blog.privatenode.in/torifying-scrapy-project-on-ubuntu
A detailed step-by-step explanation is here:
http://blog.privatenode.in/torifying-scrapy-project-on-ubuntu/
The basic steps there are:
Install Tor and Polipo (on Linux this might require adding a repository).
Configure Polipo to talk to TOR using a SOCKS connection (see the link above).
Create a custom middleware to use TOR as an HTTP proxy and to randomly change the Scrapy user agent.
To suppress the deprecation warning from the above example, write
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
instead of 'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
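The custom middleware from the last step can be sketched like this. The Polipo port matches the setup above; the module path in the settings comment and the user-agent strings are assumptions for illustration:

```python
import random

# settings.py (sketch; module path 'myproject.middlewares' is hypothetical):
# DOWNLOADER_MIDDLEWARES = {
#     'myproject.middlewares.ProxyMiddleware': 350,
#     'myproject.middlewares.RandomUserAgentMiddleware': 400,
#     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
# }

class ProxyMiddleware:
    """Route every request through Polipo, which forwards to TOR."""
    def process_request(self, request, spider):
        request.meta["proxy"] = "http://127.0.0.1:8123"

class RandomUserAgentMiddleware:
    """Pick a random user agent per request to look like different users."""
    USER_AGENTS = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/40.0",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10) AppleWebKit/537.36",
    ]
    def process_request(self, request, spider):
        request.headers["User-Agent"] = random.choice(self.USER_AGENTS)
```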
What is your scenario? Have you thought about renting proxy servers?
I need a file downloader, and I tried to write one, but I have problems downloading files over an HTTPS connection. It's easy to download files over HTTP, but for the HTTPS connection I have a username and password. I usually connect to the website with this line in Firefox:
https://username:password@site.com/path
I want to download every single file and (sub)folder in there. How can I do this? This is what it looks like when I'm connected in Firefox:
http://img543.imageshack.us/img543/6355/52961177.png
http://img713.imageshack.us/img713/624/55225462.png
Do you need Python? A command line tool such as curl should do the job.
http://docs.python-requests.org/en/latest/user/quickstart/ is a good library for implementing HTTP clients.
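With the requests library linked above, this is just `requests.get(url, auth=(username, password))`. A dependency-free sketch with the standard library instead (the URL and credentials are placeholders for yours):

```python
import base64
import urllib.request

def download_with_basic_auth(url, username, password, dest):
    """Fetch url with HTTP Basic authentication and save the body to dest."""
    request = urllib.request.Request(url)
    # Basic auth header: "Basic " + base64("username:password")
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# download_with_basic_auth("https://site.com/path/file.zip",
#                          "username", "password", "file.zip")
```

Downloading every file and subfolder additionally means parsing each directory listing for links and recursing; a tool like `wget -r --user=... --password=...` already does that for you.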
I am designing a website for a local server on our LAN, so that anyone who tries to access that IP from a browser sees a web page, and when they click on some link on that web page, a directory or folder from that server opens.
I am using Python for this purpose, and the server is just another PC with Windows installed.
If you just want to redirect the user to your file server, then it sort of depends on what operating system they're using. If everybody's going to be on Windows, then you should be able to include a link to "//Your-Fileserver-Name/Path1/Path2". Obviously you have to share the appropriate files on your server using Windows file-sharing.