How to upload multiple files under the same CID with Python

This works fine, but I want to upload multiple files (metadata) to IPFS under the same CID using Python.
import requests
import json
import os
import csv

header = ['image', 'IPFS']
images = os.listdir("./images/")

with open('ipfs.csv', 'w', encoding='UTF8') as f:
    writer = csv.writer(f)
    # write the header
    writer.writerow(header)
    for image in images:
        # write the data
        data = []
        name = image.replace(".png", "").replace(".jpg", "")
        data.append(name)
        url = "https://api.pinata.cloud/pinning/pinFileToIPFS"
        payload = {}
        files = [
            ('file', ('file', open("./images/" + image, 'rb'), 'application/octet-stream'))
        ]
        headers = {
            'pinata_api_key': 'APIKEY',
            'pinata_secret_api_key': 'SECRETAPIKEY'
        }
        response = requests.request("POST", url, headers=headers, data=payload, files=files)
        info = json.loads(response.text)
        data.append("ipfs://" + info['IpfsHash'])
        writer.writerow(data)
A solution using another API would also be fine.
One more thing: I'm running this code on Android in Pydroid 3.

You can't use the same CID for different files. A CID is derived from hashing the file's content, so each distinct file gets its own CID.
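What IPFS does support is wrapping the files in a directory: the directory gets a single root CID, and every file is addressable beneath it as ipfs://<dirCID>/<filename>. A minimal sketch of how that might look against Pinata's pinFileToIPFS endpoint, which accepts folder uploads when each multipart part carries a path under a common folder name; the metadata/ folder name is illustrative and this is an untested sketch:

import os
import requests

url = "https://api.pinata.cloud/pinning/pinFileToIPFS"
headers = {
    'pinata_api_key': 'APIKEY',
    'pinata_secret_api_key': 'SECRETAPIKEY'
}

# One multipart request with every image; giving each part a filename
# under a shared folder ("metadata/...") makes Pinata pin a directory,
# so the response holds a single root CID for all files.
names = os.listdir("./images/")
files = [
    ('file', ("metadata/" + name, open("./images/" + name, 'rb'), 'application/octet-stream'))
    for name in names
]
response = requests.post(url, headers=headers, files=files)
root_cid = response.json()['IpfsHash']

# Every file now shares the same root CID:
for name in names:
    print("ipfs://" + root_cid + "/" + name)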


Using a Python script to upload a file to a Django server on localhost

I built a Django REST framework view that lets me upload a file. In view.py:
from rest_framework import viewsets
from rest_framework.parsers import MultiPartParser, FormParser
# plus your FileResult model and FileResultSerializer imports

class FileResultViewSet(viewsets.ModelViewSet):
    queryset = FileResult.objects.order_by('-timestamp')
    serializer_class = FileResultSerializer
    parser_classes = (MultiPartParser, FormParser)

    def perform_create(self, serializer):
        serializer.save()
The view works well without any error, and the file is stored in the media folder.
Then I use a Python script to upload a file to Django:
import requests
import openpyxl

url = "http://localhost:8000/upload/"
headers = {'Content-Type': 'multipart/form-data'}
file_path = "H:/MEGA_H/Data/uploaded/Book1.xlsx"

'''
with open(file_path, 'rb') as f:
    file = {'file': f}
    response = requests.get(url, headers=headers, files=file)
'''

filename = "Book1.xlsx"

# Open the file and read its contents
with open(file_path, 'rb') as f:
    file_contents = f.read()

# Create a dictionary to store the file information
#file_data = {'file': (filename, file_contents, result_file)}
file_data = {
    "result_file": file_contents
}

# Make a POST request to the URL with the file information
response = requests.post(url, files=file_data)

# Check if the request was successful
if response.status_code == 201:
    print("File successfully uploaded")
else:
    print("Failed to upload file")
The code works, but with an issue: after uploading, the file stored in the media folder doesn't include the file extension, so it can't be read. I have to rename the file and type the ".xlsx" extension myself.
The file name is only Book1, not Book1.xlsx.
with open(file_path, 'rb') as f:
    file_contents = f.read()

file_data = {
    "result_file": file_contents
}
This is the wrong way to do it. The dict should contain the file object itself, not the file contents; requests then picks up the filename (extension included) from the file object.
with open(file_path, 'rb') as f:
    file_data = {"result_file": f}
    response = requests.post(url, files=file_data)
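If you need to control the stored filename explicitly, requests also accepts a (filename, fileobj) tuple per field; the result_file field name is carried over from the view above:

with open(file_path, 'rb') as f:
    # The first tuple element becomes the uploaded filename, extension included.
    file_data = {"result_file": ("Book1.xlsx", f)}
    response = requests.post(url, files=file_data)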

Is there any way to open Python data in a browser page without needing to create a server?

Part 1 (solved with the help indicated in the comments of this question):
I would like to open this JSON response from print() in a page of my default browser (Chrome), as I have a JSON Viewer extension and would like to study this data with an easier view than in the Visual Studio Code terminal.
Is there any simple method to do this without needing to create an HTML file with the data and a dedicated server just to open it in the browser?
I tried to use the webbrowser module, but it just opened an arbitrary page; it didn't open the data.
import requests
import json
import webbrowser

url = "https://api.betfair.com/exchange/betting/json-rpc/v1"
header = {'X-Application': 'AAAAAAAAAAAAA', 'X-Authentication': 'BBBBBBBBBBBB', 'content-type': 'application/json'}
jsonrpc_req = '{"jsonrpc": "2.0", "method": "SportsAPING/v1.0/listCompetitions", "params": {"filter": {"eventTypeIds": [1]}}, "id": 2}'

response = requests.post(url, data=jsonrpc_req, headers=header)
data = json.dumps(json.loads(response.text), indent=3)
webbrowser.open(data)
Part 2:
I'm trying to open a .json file in the browser so that I can analyze the data more easily. Following the tips in the comments, I started using the webbrowser module, but when I try to open the .json the page doesn't open, and when I save it as .txt or .html, the extension that improves JSON visualization in the browser doesn't recognize the data and doesn't format it.
How could I manage to do this?
My extension in Chrome:
https://chrome.google.com/webstore/detail/json-viewer/gbmdgpbipfallnflgajpaliibnhdgobh?hl=pt-BR
Github project extension:
https://github.com/tulios/json-viewer
I post my Python code here just to make the print() output easier to see:
import requests
import json
import os
import webbrowser

url = "https://api.betfair.com/exchange/betting/json-rpc/v1"
header = {'X-Application': 'AAAAAAAAAAAAA', 'X-Authentication': 'BBBBBBBBBBBB', 'content-type': 'application/json'}
jsonrpc_req = '{"jsonrpc": "2.0", "method": "SportsAPING/v1.0/listCompetitions", "params": {"filter": {"eventTypeIds": [1]}}, "id": 2}'

response = requests.post(url, data=jsonrpc_req, headers=header)

with open("ApiBetfair.html", "w+", newline="", encoding="UTF-8") as f:
    data = json.dumps(json.loads(response.text), indent=3)
    f.write(data)

new = 2
webbrowser.open('file://' + os.path.realpath('ApiBetfair.html'), new=new)
You could store the JSON in a local file and open it with the webbrowser module. As we figured out in the comments above, it is necessary to enable local file support in chrome://extensions for the extension to work properly.
import json
import os
import webbrowser

fp = "ApiBetfair.html"
with open(fp, "w+", newline="", encoding="UTF-8") as f:
    data = json.dumps(json.loads(response.text), indent=3)  # response from the request above
    f.write(data)

new = 2  # open in a new tab
webbrowser.open('file://' + os.path.realpath(fp), new=new)
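A small variant under the same assumptions (the response object from the question): pathlib builds the file:// URL for you, and saving with a .json extension helps the viewer extension detect the content once local file access is enabled:

import json
import pathlib
import webbrowser

fp = pathlib.Path("ApiBetfair.json")
fp.write_text(json.dumps(json.loads(response.text), indent=3), encoding="UTF-8")
webbrowser.open(fp.resolve().as_uri(), new=2)  # file:///.../ApiBetfair.json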

Python - How to add delimiter and remove line breaks in CSV output?

I am doing this for the first time and so far have set up a simple script to fetch two columns of data from an API. The data comes through and I can see it with a print command. Now I am trying to write it to CSV and have set up the code below, which creates the file, but I can't figure out how to:
1. Remove the blank lines in between each data row
2. Add delimiters to the data, which I want to be " "
3. If a value such as IP is blank, then just show " "
I searched and tried all sorts of examples but just keep getting errors. My code snippet, which writes the CSV successfully, is:
import requests
import csv
import json

# Make an API call and store response
url = 'https://api-url-goes-here.com'
filename = "test.csv"
headers = {
    'accept': 'application/json',
}
r = requests.get(url, headers=headers, auth=('User', 'PWD'))
print(f"Status code: {r.status_code}")

# Store API response in a variable
response_dict = r.json()

# Open a file for writing
f = csv.writer(open(filename, "w", encoding='utf8'))

# Write CSV header
f.writerow(["Computer_Name", "IP_Addresses"])

for computer in response_dict["advanced_computer_search"]["computers"]:
    f.writerow([computer["Computer_Name"], computer["IP_Addresses"]])
CSV output I get looks like this:
Computer_Name,IP_Addresses
HYDM002543514,
HYDM002543513,10.93.96.144 - AirPort - en1
HYDM002544581,192.168.1.8 - AirPort - en1 / 10.93.224.177 -
GlobalProtect - gpd0
HYDM002544580,10.93.80.101 - Ethernet - en0
HYDM002543515,192.168.0.6 - AirPort - en0 / 10.91.224.58 -
GlobalProtect - gpd0
CHAM002369458,10.209.5.3 - Ethernet - en0
CHAM002370188,192.168.0.148 - AirPort - en0 / 10.125.91.23 -
GlobalProtect - gpd0
MacBook-Pro,
I tried adding
csv.writer(f, delimiter=' ', quotechar=',', quoting=csv.QUOTE_MINIMAL)
after the f = csv.writer line, but that creates an error: TypeError: argument 1 must have a "write" method
I am sure it's something simple, but I just can't find the correct solution to implement in the code I have. Any help is appreciated.
Also, does the file get closed automatically? Some examples suggest using something like f.close(), but that causes errors. Do I need it? The file seems to get created fine as-is.
I suggest you use the pandas package to write the .csv file; it is one of the most-used packages for data analysis.
For your problem:
import requests
import csv
import json
import pandas

# Make an API call and store response
url = 'https://api-url-goes-here.com'
filename = "test.csv"
headers = {
    'accept': 'application/json',
}
r = requests.get(url, headers=headers, auth=('User', 'PWD'))
print(f"Status code: {r.status_code}")

# Store API response in a variable
response_dict = r.json()

# Collect data to build a pandas.DataFrame
data = []
for computer in response_dict["advanced_computer_search"]["computers"]:
    # filter blank lines
    if computer["Computer_Name"] or computer["IP_Addresses"]:
        data.append({"Computer_Name": computer["Computer_Name"], "IP_Addresses": computer["IP_Addresses"]})

pandas.DataFrame(data=data).to_csv(filename, index=False)
If you want to use " " to separate values, you can set sep=" " in the last line when writing the .csv file. However, I recommend keeping , as the delimiter, since it's the common standard. Many more options can be configured for the DataFrame.to_csv() method; check the official docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
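For the third requirement in the question (always showing quotes, even for blank IPs), to_csv() also accepts a quoting option from the standard csv module; a quick sketch, continuing from the data list built above:

import csv

# QUOTE_ALL wraps every field in quotes, so a blank IP comes out as ""
pandas.DataFrame(data=data).to_csv(filename, index=False, quoting=csv.QUOTE_ALL)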
As you said in the comments, pandas is not a standard Python package. You can simply open a file and write lines to it, building each line manually. For example:
import requests
import csv
import json

# Make an API call and store response
url = 'https://api-url-goes-here.com'
filename = "test.csv"
headers = {
    'accept': 'application/json',
}
r = requests.get(url, headers=headers, auth=('User', 'PWD'))
print(f"Status code: {r.status_code}")

# Store API response in a variable
response_dict = r.json()

# Open a file for writing
with open(filename, mode='w') as f:
    # Write CSV header
    f.write("Computer_Name," + "IP_Addresses" + "\n")
    for computer in response_dict["advanced_computer_search"]["computers"]:
        # filter blank lines
        if computer["Computer_Name"] or computer["IP_Addresses"]:
            f.write("\"" + computer["Computer_Name"] + "\",\"" + computer["IP_Addresses"] + "\"\n")
Note that the " around each value is built by appending \", and the \n moves to a new line after each loop iteration.
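As an aside, the TypeError in the question came from passing the writer object back into csv.writer(), which expects a file object; and the blank line between rows on Windows comes from not opening the file with newline=''. A sketch of the csv-module version that covers all three requirements, reusing filename and response_dict from above (the with block also closes the file for you):

import csv

# newline='' prevents the blank line between rows on Windows;
# QUOTE_ALL forces quotes around every value, including empty ones.
with open(filename, "w", encoding="utf8", newline="") as fh:
    writer = csv.writer(fh, quoting=csv.QUOTE_ALL)
    writer.writerow(["Computer_Name", "IP_Addresses"])
    for computer in response_dict["advanced_computer_search"]["computers"]:
        writer.writerow([computer["Computer_Name"], computer.get("IP_Addresses") or ""])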

Post Large File Using requests_toolbelt to vk

I am new to Python. I wrote a simple script for uploading a video from a URL to VK. I tested this script with small files and it's working, but for large files I run out of memory. I read that it's possible to post large files using 'requests_toolbelt'. How can I add this to my script?
import vk
import requests
from homura import download
import glob
import os
import json

url = raw_input("Enter URL: ")
download(url)
file_name = glob.glob('*.mp4')[0]

session = vk.Session(access_token='TOKEN')
vkapi = vk.API(session, v='5.80')

params = {'name': file_name, 'privacy_view': 'nobody', 'privacy_comment': 'nobody'}
param = vkapi.video.save(**params)
upload_url = param['upload_url']

print("Uploading ...")
request = requests.post(upload_url, files={'video_file': open(file_name, "rb")})
os.remove(file_name)
requests_toolbelt (https://github.com/requests/toolbelt) has an example that should work for you:
import requests
from requests_toolbelt import MultipartEncoder
...
...
m = MultipartEncoder(fields={'video_file': (file_name, open(file_name, "rb"))})
response = requests.post(upload_url, data=m, headers={'Content-Type': m.content_type})
If you know your video file's MIME type, you can add it as a third item in the tuple, like this:
m = MultipartEncoder(fields={
    'video_file': (file_name, open(file_name, "rb"), "video/mp4")})
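The fix works because MultipartEncoder streams the file from disk instead of reading the whole body into memory. Dropped into the question's script, it might look like this (a sketch; the with block closes the file handle once the upload finishes):

from requests_toolbelt import MultipartEncoder

print("Uploading ...")
with open(file_name, "rb") as fh:
    # The encoder reads the file lazily, so large videos are streamed
    # to the upload URL rather than held in memory all at once.
    m = MultipartEncoder(fields={'video_file': (file_name, fh, 'video/mp4')})
    response = requests.post(upload_url, data=m, headers={'Content-Type': m.content_type})
os.remove(file_name)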

How to POST a tgz file in Python using urllib2

I would like to POST a .tgz file with the Python urllib2 library to a backend server. I can't use requests due to some licensing issues. There are some examples of file upload on Stack Overflow, but they all relate to attaching a file to a form.
My code is the following but it unfortunately fails:
stats["random"] = "data"
statsFile = "mydata.json"
headersFile = "header-data.txt"
tarFile = "body.tgz"
headers = {}
#Some custom headers
headers["X-confidential"] = "Confidential"
headers["X-version"] = "2"
headers["Content-Type"] = "application/x-gtar"
#Create the json and txt files
with open(statsFile, 'w') as a, open(headersFile, 'w') as b:
json.dump(stats, a, indent=4)
for k,v in headers.items():
b.write(k+":"+v+"\n")
#Create a compressed file to send
tar = tarfile.open(tarFile, 'w:gz' )
for name in [statsFile,headersFile]:
tar.add(name)
tar.close()
#Read the binary data from the file
with open(tarFile, 'rb') as f:
content = f.read()
url = "http://www.myurl.com"
req = urllib2.Request(url, data=content, headers=headers)
response = urllib2.urlopen(req, timeout=timeout)
If I use requests, it works like a charm:
r = requests.post(url, files={tarFile: open(tarFile, 'rb')}, headers=headers)
I essentially need the equivalent of the above for urllib2. Does anybody know it? I have checked the docs as well, but I was not able to make it work. What am I missing?
Thanks!
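A minimal, hand-rolled sketch of what the requests call does under the hood, using the variables from the code above (Python 2 / urllib2; the boundary value is arbitrary, and this is an untested sketch):

import uuid
import urllib2

boundary = uuid.uuid4().hex

# Assemble the multipart/form-data body manually: one part holding the
# tar bytes, framed by the boundary markers requests would generate.
parts = [
    "--" + boundary,
    'Content-Disposition: form-data; name="%s"; filename="%s"' % (tarFile, tarFile),
    "Content-Type: application/x-gtar",
    "",
    content,  # raw bytes read from body.tgz above
    "--" + boundary + "--",
    "",
]
body = "\r\n".join(parts)

headers["Content-Type"] = "multipart/form-data; boundary=" + boundary
req = urllib2.Request(url, data=body, headers=headers)
response = urllib2.urlopen(req, timeout=timeout)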
