I have tried Stripe, but the problem is that their docs state that for accepting international payments from India, I have to be registered, and I also need to add the customer's billing address and name, and a payment intent. They have provided documentation on how to add the name and the payment intent, but I don't know how to implement the provided code in my application.
So, please tell me how to do it.
Just in case you need it, this is my checkout code:
@app.route('/create-checkout-session', methods=['POST'])
def create_checkout_session():
    session = stripe.checkout.Session.create(
        payment_method_types=['card'],
        line_items=[{
            'price_data': {
                'currency': 'usd',
                'product_data': {
                    'name': 'T-shirt',
                },
                'unit_amount': 2000,
            },
            'quantity': 1,
        }],
        mode='payment',
        success_url=redirect("success.html"),
        cancel_url=redirect("cancel.html"),
    )
If you're using Stripe Checkout you don't need to change your code; Checkout will collect the required information from your customer (name and billing address) on the Checkout page.
Edited response:
This is how you can add other optional params:
@bp.route('/create-checkout-session')
def create_checkout_session():
    domain_url = 'http://localhost:5000/'
    stripe.api_key = current_app.config['STRIPE_SECRET_KEY']
    try:
        checkout_session = stripe.checkout.Session.create(
            success_url=domain_url + 'success',
            cancel_url=domain_url + 'cancelled',
            payment_method_types=['card'],
            billing_address_collection='required',
            mode='payment',
            customer='customer_id',
            line_items=[
                {
                    # using the Price API takes care of the product details,
                    # rather than having to specify name, currency, etc.
                    'quantity': 1,
                    'price': 'price_1IYgbtFWpU2KHaPLODAVgoKU'
                }
            ],
            # note: payment_intent_data is a dict, not a list
            payment_intent_data={
                # Place your data here
                'param-key': 'value',
                # ...
            }
        )
        # your return statement
    except Exception as e:
        # your return statement
        pass
You can do the same for the other optional params.
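For instance, a minimal sketch of payment_intent_data using documented PaymentIntent fields (description, receipt_email, and metadata are real Stripe parameters; the values here are placeholders):

            payment_intent_data={
                'description': 'T-shirt order',              # placeholder
                'receipt_email': 'customer@example.com',     # placeholder
                'metadata': {'internal_order_id': '12345'},  # placeholder
            },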
I'm trying to create a Python program, using the Requests library, that searches eBay for an item the user enters. Rather than hard-coding the URL, is it possible to use the Requests library to perform an eBay search (or a search on any website)?
I believe what you want here is to input text into a search element. According to Real Python:
The requests library is the de facto standard for making HTTP requests in Python.
I would recommend using Selenium to drive the website, e.g. to input text into an element and press a button on the page.
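For instance, a minimal Selenium sketch (the gh-ac ID for eBay's search box is an assumption; verify it in the page source):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.ebay.com')

# type a query into the search box and submit it
search_box = driver.find_element(By.ID, 'gh-ac')  # assumed element ID
search_box.send_keys('mechanical keyboard')
search_box.send_keys(Keys.RETURN)

print(driver.title)  # title of the results page
driver.quit()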
However, if you still want to use Requests, then try to find the API endpoint that handles the searching and use the POST method to get data from it.
resp = requests.post(url)
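For example, here is a sketch assuming you discovered a hypothetical JSON search endpoint in the browser's network tab (the URL and payload keys are made up for illustration):

import requests

# hypothetical endpoint and payload -- find the real ones by inspecting
# the site's network traffic in the browser dev tools
url = 'https://www.example.com/api/search'
payload = {'query': 'mechanical keyboard', 'page': 1}

resp = requests.post(url, data=payload)
print(resp.status_code)
print(resp.json())  # assuming the endpoint responds with JSON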
I created an eBay developer account to access the API, then wrote a small script to search eBay for historical pricing on an item. Save it as search.py and call it like this:
./search.py "ebay item you are looking for"
You can change the itemFilter to your liking; currently it is set for sold items since 10-10-2019 (a small example of alternative filters follows the script below). The complete list is here: https://developer.ebay.com/devzone/finding/callref/types/ItemFilterType.html
The comments at the bottom show the complete set of fields returned from eBay; you can pick and choose the fields you like and add them to a print statement.
Also, this script will return more than the first page of items, and each page costs you one of your 5,000 developer queries for the day. I am unable to get it to work with the sandbox, no matter what I try. I believe the eBay sandbox is broken.
#!/usr/local/bin/python3
from ebaysdk.finding import Connection
import sys

DEBUG = False

#search_keywords = "2019 Hot Wheels Dumbo"
search_keywords = sys.argv[1]
print("Search Keywords: " + search_keywords)

# Function accepts keywords for the query and the pageNumber of the search to pull
# eBay will only return 100 items per page
def build_request(keywords, pageNumber):
    # Create a request structure
    # Item Filter List: https://developer.ebay.com/devzone/finding/callref/types/ItemFilterType.html
    # Note: each filter must be its own dict -- a single dict with repeated
    # 'name' keys would silently keep only the last entry
    request = {
        'keywords': keywords,
        'itemFilter': [
            {'name': 'Condition', 'value': 'New'},
            {'name': 'SoldItemsOnly', 'value': True},
            {'name': 'EndTimeFrom', 'value': '2019-10-10T00:00:00.000Z'}
        ],
        'paginationInput': {
            'entriesPerPage': 100,  # eBay limits API calls to 100 items per page
            'pageNumber': pageNumber
        },
        'sortOrder': 'PricePlusShippingLowest',
    }
    return request

# Connect using the yaml file to the EBAY-US production site
# put in __main__ just in case we turn this into a module later
if __name__ == '__main__':
    api = Connection(config_file='ebay.yaml', debug=False, siteid="EBAY-US")
    #api = Connection(config_file='ebay.yaml', debug=False, domain="api.sandbox.ebay.com", siteid="EBAY-US")

    # Run the request for the first page
    query = build_request(search_keywords, 1)
    response = api.execute('findCompletedItems', query)
    if DEBUG:
        print(response.dict())  # Use this to see the dictionary structure

    # Display how many entries and pages are returned
    print("API Call: findCompletedItems")
    print("----------------------------")
    print(f"totalEntries: {response.reply.paginationOutput.totalEntries}, totalPages: {response.reply.paginationOutput.totalPages}")
    maxpage = int(str(response.reply.paginationOutput.totalPages))

    # Display item information fields from the request, see below for all possible fields
    for item in response.reply.searchResult.item:
        print(f"Date: {item.listingInfo.endTime} Title: {item.title}, Price: {item.sellingStatus.currentPrice.value} Shipping: {item.shippingInfo.shippingServiceCost.value}")

    # Now run the request for each remaining page (up to and including the last),
    # updating the page number in the request each time
    for page in range(2, maxpage + 1):
        print("**** PAGE: " + str(page) + " of " + str(maxpage) + " ****")
        query['paginationInput']['pageNumber'] = page
        response = api.execute('findCompletedItems', query)
        # Display item information fields from the request, see below for all possible fields
        for item in response.reply.searchResult.item:
            print(f"Date: {item.listingInfo.endTime} Title: {item.title}, Price: {item.sellingStatus.currentPrice.value} Shipping: {item.shippingInfo.shippingServiceCost.value}")
#{'ack': 'Success', 'version': '1.13.0', 'timestamp': '2019-10-16T01:28:25.891Z',
#
#searchResult': {'item': [{'itemId': '123719989207', 'title': '2019 HOT WHEELS 2 SET CORVETTE STINGRAY SUPER CHROMES 5/5 TREASURE HUNT PAIR', 'globalId': 'EBAY-US', 'primaryCategory': {'categoryId': '180506', 'categoryName': 'Contemporary Manufacture'}, 'galleryURL': 'https://thumbs4.ebaystatic.com/m/mFuyRQgYjSutGli33dqsqcA/140.jpg', 'viewItemURL': 'https://www.ebay.com/itm/2019-HOT-WHEELS-2-SET-CORVETTE-STINGRAY-SUPER-CHROMES-5-5-TREASURE-HUNT-PAIR-/123719989207', 'paymentMethod': 'PayPal', 'autoPay': 'false', 'postalCode': '54650', 'location': 'Onalaska,WI,USA', 'country': 'US', 'shippingInfo': {'shippingServiceCost': {'_currencyId': 'USD', 'value': '6.0'}, 'shippingType': 'Flat', 'shipToLocations': 'Worldwide', 'expeditedShipping': 'false', 'oneDayShippingAvailable': 'false', 'handlingTime': '2'}, 'sellingStatus': {'currentPrice': {'_currencyId': 'USD', 'value': '9.0'}, 'convertedCurrentPrice': {'_currencyId': 'USD', 'value': '9.0'}, 'sellingState': 'Ended'}, 'listingInfo': {'bestOfferEnabled': 'false', 'buyItNowAvailable': 'false', 'startTime': '2019-04-02T22:14:03.000Z', 'endTime': '2019-10-02T18:44:49.000Z', 'listingType': 'StoreInventory', 'gift': 'false', 'watchCount': '2'}, 'returnsAccepted': 'false', 'condition': {'conditionId': '1000', 'conditionDisplayName': 'New'}, 'isMultiVariationListing': 'false', 'topRatedListing': 'false'},
#
#
#{'itemId': '153679182310', 'title': "Hot Wheels 2019 Super Treasure Hunt '68 Mercury Cougar Loose 1/64 STH Green", 'globalId': 'EBAY-US', 'primaryCategory': {'categoryId': '73252', 'categoryName': 'Collections & Lots'}, 'galleryURL': 'https://thumbs3.ebaystatic.com/m/mEN9EsbCJY0wb6WzXjO8hNg/140.jpg', 'viewItemURL': 'https://www.ebay.com/itm/Hot-Wheels-2019-Super-Treasure-Hunt-68-Mercury-Cougar-Loose-1-64-STH-Green-/153679182310', 'paymentMethod': 'PayPal', 'autoPay': 'false', 'location': 'Malaysia', 'country': 'MY', 'shippingInfo': {'shippingServiceCost': {'_currencyId': 'USD', 'value': '9.0'}, 'shippingType': 'Flat', 'shipToLocations': 'Worldwide', 'expeditedShipping': 'false', 'oneDayShippingAvailable': 'false', 'handlingTime': '15'}, 'sellingStatus': {'currentPrice': {'_currencyId': 'USD', 'value': '9.9'}, 'convertedCurrentPrice': {'_currencyId': 'USD', 'value': '9.9'}, 'bidCount': '1', 'sellingState': 'Ended'}, 'listingInfo': {'bestOfferEnabled': 'false', 'buyItNowAvailable': 'false', 'startTime': '2019-10-10T04:13:32.000Z', 'endTime': '2019-10-15T04:13:32.000Z', 'listingType': 'Auction', 'gift': 'false', 'watchCount': '1'}, 'returnsAccepted': 'false', 'condition': {'conditionId': '3000', 'conditionDisplayName': 'Used'}, 'isMultiVariationListing': 'false', 'topRatedListing': 'false'}],
#
#'_count': '100'}, 'paginationOutput': {'pageNumber': '3', 'entriesPerPage': '100', 'totalPages': '40', 'totalEntries': '3966'}}
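If you want different filters, swap out the itemFilter list in build_request. For example (Condition and MaxPrice are documented ItemFilterType values; the values below are placeholders):

        'itemFilter': [
            {'name': 'Condition', 'value': 'New'},
            {'name': 'MaxPrice', 'value': '50.00'},
        ],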
You can scrape eBay using the BeautifulSoup web scraping library.
Instead of entering the full request URL, you can set a params dict holding the necessary request parameters, with the search query itself taken from user input:
query = input('Your query is: ')

params = {
    '_nkw': query,  # search query
    '_pgn': 1       # page number
    #'LH_Sold': '1' # shows sold items
}
If you use the Requests library, the request might be blocked, because the default user-agent in Requests is python-requests, so the website understands that it's a bot or a script sending the request. Check what your user-agent is.
An additional step, besides providing a browser user-agent, could be to rotate user-agents: for example, switch between PC, mobile, and tablet, as well as between browsers, e.g. Chrome, Firefox, Safari, Edge, and so on.
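A minimal sketch of that idea, picking a user-agent per request with random.choice (the strings are just examples):

import random

user_agents = [
    # desktop Chrome on Windows
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36',
    # desktop Safari on macOS
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Safari/605.1.15',
    # mobile Safari on iPhone
    'Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Mobile/15E148 Safari/604.1',
]

headers = {'User-Agent': random.choice(user_agents)}  # pick a new one per request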
Full code:
from bs4 import BeautifulSoup
import requests, json, lxml

# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36",
}

query = input('Your query is: ')

params = {
    '_nkw': query,  # search query
    '_pgn': 1       # page number
    #'LH_Sold': '1' # shows sold items
}

data = []

while True:
    page = requests.get('https://www.ebay.com/sch/i.html', params=params, headers=headers, timeout=30)
    soup = BeautifulSoup(page.text, 'lxml')

    print(f"Extracting page: {params['_pgn']}")
    print("-" * 10)

    for products in soup.select(".s-item__info"):
        title = products.select_one(".s-item__title span").text
        price = products.select_one(".s-item__price").text
        link = products.select_one(".s-item__link")["href"]

        data.append({
            "title": title,
            "price": price,
            "link": link
        })

    if soup.select_one(".pagination__next"):
        params['_pgn'] += 1
    else:
        break

print(json.dumps(data, indent=2, ensure_ascii=False))
Example output:
Your query is: shirt # query entry example
Extracting page: 1
----------
[
{
"title": "Men's Polo Shirt 100% Cotton Knockout Jeans NVY WHT 220 Stripe MEDIUM Free Ship",
"price": "$11.99",
"link": "https://www.ebay.com/itm/133992813518?hash=item1f329813ce:g:tWMAAOSwXBxhTP7Q&amdata=enc%3AAQAHAAAAwJ9%2BDbqKGCoZye6JelYY1tJHQWotUalKHQJ%2FixwyplnvOC60SofXkLVsNgRfoX09uOZLerjkBtwcW%2FQQa1wmJ6%2BYVEEagzH1GAK6Bx4rX%2BRNnj9g6SlvB2WagWETpbmrLdiFHGTIRvAL2EvfXDRqPFnEGWZ2nk%2BM0zEkiGzp%2F4ADUbPslGui3zTDJsIgVpXjAHzL2EUH3s7tiOxtd3qVTXxaE095evq5YrBgkJFJu4KB5o%2F%2BCiCURfy7xR%2FbTU7mnQ%3D%3D%7Ctkp%3ABlBMUJavlrOEYQ"
},
{
"title": "5 Pack Oroblu Micromodal Perfect Line Round Neck Short Sleeve T-Shirt",
"price": "$192.00",
"link": "https://www.ebay.com/itm/275287531865?hash=item40186a6159:g:OtUAAOSweKFiZr2S&amdata=enc%3AAQAHAAAAsMRLg1VeYAIKHTiXXdD8xv56DpaeH6jc3EhFP26RJ66bqmlzXHQrMMxuo78x6S2i8DfxvuzjbXrpmYYdyRLhzgQCoaauMNvRwVNuhx11qorNlPoHrig%2BdIGG2RB4xHmXdB2fjOciLCsdYkL23jaH23ehXakQu%2BrBzER%2F2v94Sdg%2BkchjwWmRidsv0kPfLRcpiy%2BOeDBHEas4i9EQY%2F0VAzLGj2U%2FwLdcqjqSjgngj%2BRr%7Ctkp%3ABlBMUJavlrOEYQ"
},
# ...
]
As an alternative, you can use the Ebay Organic Results API from SerpApi. It's a paid API with a free plan that handles blocks and parsing on their backend.
Example code that paginates through all pages for an input query:
from serpapi import EbaySearch
import os, json

query = input('Your query is: ')

params = {
    "api_key": os.getenv("API_KEY"),  # serpapi api key
    "engine": "ebay",                 # search engine
    "ebay_domain": "ebay.com",        # ebay domain
    "_nkw": query,                    # search query
    "_pgn": 1                         # page number
    #"LH_Sold": "1"                   # shows sold items
}

search = EbaySearch(params)  # where data extraction happens

page_num = 0
data = []

while True:
    results = search.get_dict()  # JSON -> Python dict

    if "error" in results:
        print(results["error"])
        break

    for organic_result in results.get("organic_results", []):
        link = organic_result.get("link")
        price = organic_result.get("price")

        data.append({
            "price": price,
            "link": link
        })

    page_num += 1
    print(page_num)

    if "next" in results.get("pagination", {}):
        params['_pgn'] += 1
    else:
        break

print(json.dumps(data, indent=2))
Output:
[
{
"price": {
"raw": "$25.99",
"extracted": 25.99
},
"link": "https://www.ebay.com/itm/285018595898?hash=item425c6ea23a:g:mT0AAOSwBjljAFsl&amdata=enc%3AAQAHAAAAkI1P1C%2BE2boIutliCMWXCADm%2BXyUp2a6Q1qOjpifaAIo6%2FWD0yHCd8Mejyfc2jc%2BQ5zzVcITrcWM0XxIfiSUILMZFsMewB154skl5re5%2FS8W9kRrabjRdy%2BoC6aQoS%2FWGq%2F6A%2BZWQ1GQkcd5Tstamu%2FgzZKoL6VYfO4YpC4oO4Im23h0wiIfI0%2BxPG8uuFRMPw%3D%3D%7Ctkp%3ABk9SR_i1vbKEYQ"
},
{
"price": {
"raw": "$14.16",
"extracted": 14.16
},
"link": "https://www.ebay.com/itm/234347615312?hash=item369034d450:g:hvYAAOSwNspg0TAH&amdata=enc%3AAQAHAAAA0B1m3DPC4q0R4AQp6MO8rXnKt6qFIX2p%2BaypmySYXkIvi6XE3FHzpbtN%2B%2Bvd9P3TZPYu3fuQVl5kH0ZYDO5eqtnjh1EcZ%2Fb9rZMlMx6r6RcH%2B5wOY7X65bvRcmQ7OUmoaNGAMOZpOc4hg8vHj2afxCa%2FR7F3jDr1KjnHk%2BKnln3opoiqAVMFIoXv338f70KZw8CDd%2Fg9xU0jQlzgxDpDwSL6Y6OMz0oKxh4T%2BRUMKHj03VE5E9%2B8VKzPUMWAQ%2BZWuZyGMpWxwzn%2BomggywV5RhI%3D%7Ctkp%3ABk9SR_i1vbKEYQ"
},
# ...
]
I'm trying to use the Google AdWords API with the official library here: https://github.com/googleads/googleads-python-lib
I use a manager account on Google AdWords and want to work with my clients' accounts.
I can get all the AdWords account IDs (like 123-456-7891), but I don't know how to pass an account ID to my AdWords functions as a parameter.
Here's my main function:
def main(argv):
    adwords_client = adwords.AdWordsClient.LoadFromStorage(path="googleads.yaml")
    add_campaign(adwords_client)
I don't see any account ID parameter in the official samples, such as:
import datetime
import uuid

from googleads import adwords


def add_campaign(client):
    # Initialize appropriate services.
    campaign_service = client.GetService('CampaignService', version='v201809')
    budget_service = client.GetService('BudgetService', version='v201809')

    # Create a budget, which can be shared by multiple campaigns.
    budget = {
        'name': 'Interplanetary budget #%s' % uuid.uuid4(),
        'amount': {
            'microAmount': '50000000'
        },
        'deliveryMethod': 'STANDARD'
    }
    budget_operations = [{
        'operator': 'ADD',
        'operand': budget
    }]

    # Add the budget.
    budget_id = budget_service.mutate(budget_operations)['value'][0]['budgetId']

    # Construct operations and add campaigns.
    operations = [{
        'operator': 'ADD',
        'operand': {
            'name': 'Interplanetary Cruise #%s' % uuid.uuid4(),
            # Recommendation: Set the campaign to PAUSED when creating it to
            # stop the ads from immediately serving. Set to ENABLED once you've
            # added targeting and the ads are ready to serve.
            'status': 'PAUSED',
            'advertisingChannelType': 'SEARCH',
            'biddingStrategyConfiguration': {
                'biddingStrategyType': 'MANUAL_CPC',
            },
            'endDate': (datetime.datetime.now() +
                        datetime.timedelta(365)).strftime('%Y%m%d'),
            # Note that only the budgetId is required
            'budget': {
                'budgetId': budget_id
            },
            'networkSetting': {
                'targetGoogleSearch': 'true',
                'targetSearchNetwork': 'true',
                'targetContentNetwork': 'false',
                'targetPartnerSearchNetwork': 'false'
            },
            # Optional fields
            'startDate': (datetime.datetime.now() +
                          datetime.timedelta(1)).strftime('%Y%m%d'),
            'frequencyCap': {
                'impressions': '5',
                'timeUnit': 'DAY',
                'level': 'ADGROUP'
            },
            'settings': [
                {
                    'xsi_type': 'GeoTargetTypeSetting',
                    'positiveGeoTargetType': 'DONT_CARE',
                    'negativeGeoTargetType': 'DONT_CARE'
                }
            ]
        }
    }, {
        'operator': 'ADD',
        'operand': {
            'name': 'Interplanetary Cruise banner #%s' % uuid.uuid4(),
            'status': 'PAUSED',
            'biddingStrategyConfiguration': {
                'biddingStrategyType': 'MANUAL_CPC'
            },
            'endDate': (datetime.datetime.now() +
                        datetime.timedelta(365)).strftime('%Y%m%d'),
            # Note that only the budgetId is required
            'budget': {
                'budgetId': budget_id
            },
            'advertisingChannelType': 'DISPLAY'
        }
    }]
    campaigns = campaign_service.mutate(operations)
How can I tell the AdWords API which account I want to add this campaign to?
Thanks for your help!
OK, my bad, I missed a method in the documentation (http://googleads.github.io/googleads-python-lib/googleads.adwords.AdWordsClient-class.html#SetClientCustomerId):
# ID of your customer here
CUSTOMER_SERVICE_ID = '4852XXXXX'

# Load customer account access
client = adwords.AdWordsClient.LoadFromStorage(path="googleads.yaml")
client.SetClientCustomerId(CUSTOMER_SERVICE_ID)
The customer ID is now associated with the AdWordsClient instance, so client can be passed as a parameter to the other functions.
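So, as a minimal sketch (assuming the add_campaign function from the sample above; the customer IDs below are placeholders), you can loop over your client accounts like this:

from googleads import adwords

client = adwords.AdWordsClient.LoadFromStorage(path="googleads.yaml")
for customer_id in ['4852XXXXX', '5963XXXXX']:  # placeholder client customer IDs
    client.SetClientCustomerId(customer_id)     # switch the target account
    add_campaign(client)                        # campaign is created in that account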
I'm trying to integrate CyberSource's REST API into a Django (Python) application. I'm following this GitHub example.
It works like a charm but it is not clear to me from the example or from the documentation how to specify the device's fingerprint ID.
Here's a snippet of the request I'm sending, in case it's useful (note: this is just a method that lives inside a POPO):
def authorize_payment(self, card_token: str, total_amount: Money, customer: CustomerInformation = None,
                      merchant: MerchantInformation = None):
    try:
        request = {
            'payment_information': {
                # NOTE: REQUIRED.
                'card': None,
                'tokenized_card': None,
                'customer': {
                    'customer_id': card_token,
                },
            },
            'order_information': {
                'amount_details': {
                    'total_amount': str(total_amount.amount),
                    'currency': str(total_amount.currency),
                },
            },
        }
        if customer:
            request['order_information'].update({
                'bill_to': {
                    'first_name': customer.first_name,
                    'last_name': customer.last_name,
                    'company': customer.company,
                    'address1': customer.address1,
                    'address2': customer.address2,
                    'locality': customer.locality,
                    'country': customer.country,
                    'email': customer.email,
                    'phone_number': customer.phone_number,
                    'administrative_area': customer.administrative_area,
                    'postalCode': customer.zip_code,
                }
            })
        serialized_request = json.dumps(request)
        data, status, body = self._payment_api_client.create_payment(create_payment_request=serialized_request)
        return data.id
    except Exception as e:
        raise AuthorizePaymentError from e
This is Python code for scraping content from GitHub repositories using the BeautifulSoup library. I am facing the error:
"'NoneType' object has no attribute 'text'"
in this simple code. The two lines where the error occurs are commented in the code.
import requests
from bs4 import BeautifulSoup
import csv

URL = "https://github.com/DURGESHBARWAL?tab=repositories"
r = requests.get(URL)
soup = BeautifulSoup(r.text, 'html.parser')

repos = []
table = soup.find('ul', attrs={'data-filterable-for': 'your-repos-filter'})
for row in table.find_all('li', attrs={'itemprop': 'owns'}):
    repo = {}
    repo['name'] = row.find('div').find('h3').a.text
    #First Error Position
    repo['desc'] = row.find('div').p.text
    #Second Error Position
    repo['lang'] = row.find('div', attrs={'class': 'f6 text-gray mt-2'}).find('span', attrs={'class': 'mr-3'}).text
    repos.append(repo)

filename = 'extract.csv'
with open(filename, 'w') as f:
    w = csv.DictWriter(f, ['name', 'desc', 'lang'])
    w.writeheader()
    for repo in repos:
        w.writerow(repo)
OUTPUT
Traceback (most recent call last):
  File "webscrapping.py", line 16, in <module>
    repo['desc'] = row.find('div').p.text
AttributeError: 'NoneType' object has no attribute 'text'
The reason this is happening is that when you find elements via BeautifulSoup, it acts like a dict.get() call: it looks the element up in the element tree, and if it can't find one, rather than raising an exception, it returns None. None doesn't have the attributes that an element will have, like text, attrs, etc. So when you make an Element.text call with no try/except, or without verifying the type, you are taking a gamble that the element will always be there.
I'd probably just keep the elements that are giving you issues in a temp variable first, so that you can type check. Either that, or implement try/except:
Type Checking
for row in table.find_all('li', attrs={'itemprop': 'owns'}):
    repo = {}
    repo['name'] = row.find('div').find('h3').a.text
    p = row.find('div').p
    if p is not None:
        repo['desc'] = p.text
    else:
        repo['desc'] = None
    lang = row.find('div', attrs={'class': 'f6 text-gray mt-2'}).find('span', attrs={'class': 'mr-3'})
    if lang is not None:
        repo['lang'] = lang.text
    else:
        repo['lang'] = None
    repos.append(repo)
try/except
for row in table.find_all('li', attrs={'itemprop': 'owns'}):
    repo = {}
    repo['name'] = row.find('div').find('h3').a.text
    #First Error Position
    try:
        repo['desc'] = row.find('div').p.text
    except AttributeError:
        repo['desc'] = None
    #Second Error Position
    try:
        repo['lang'] = row.find('div', attrs={'class': 'f6 text-gray mt-2'}).find('span', attrs={'class': 'mr-3'}).text
    except AttributeError:
        repo['lang'] = None
    repos.append(repo)
I would tend towards try/except, personally, because it is a bit more succinct, and exception catching is good practice for the robustness of your program. (Note that calling .text on None raises an AttributeError, so that is the exception to catch.)
Your find calls are inaccurate and chained, so when you attempt to find a <div> tag that has no p child, you get None, but you proceed to call the attribute .text on None, which crashes your program with an AttributeError.
Try the following set of .find calls, which use the itemprop attributes you're after and a try-except block to null-coalesce any missing fields:
import requests
from bs4 import BeautifulSoup
import csv

URL = "https://github.com/DURGESHBARWAL?tab=repositories"
r = requests.get(URL)
soup = BeautifulSoup(r.text, 'html.parser')

repos = []
table = soup.find('ul', attrs={'data-filterable-for': 'your-repos-filter'})
for row in table.find_all('li', {'itemprop': 'owns'}):
    repo = {
        'name': row.find('a', {'itemprop': 'name codeRepository'}),
        'desc': row.find('p', {'itemprop': 'description'}),
        'lang': row.find('span', {'itemprop': 'programmingLanguage'})
    }
    for k, v in repo.items():
        try:
            repo[k] = v.text.strip()
        except AttributeError:
            pass
    repos.append(repo)

filename = 'extract.csv'
with open(filename, 'w') as f:
    w = csv.DictWriter(f, ['name', 'desc', 'lang'])
    w.writeheader()
    for repo in repos:
        w.writerow(repo)
Debug output (in addition to written CSV):
[ { 'desc': 'This a Django-Python Powered a simple functionality based '
'Bot application',
'lang': 'Python',
'name': 'Sandesh'},
{'desc': None, 'lang': 'Jupyter Notebook', 'name': 'python_notes'},
{ 'desc': 'Installing DSpace using docker',
'lang': 'Java',
'name': 'DSpace-Docker-Installation-1'},
{ 'desc': 'This Repo Contains the DSpace Installation Steps',
'lang': None,
'name': 'DSpace-Installation'},
{ 'desc': '(Official) The DSpace digital asset management system that '
'powers your Institutional Repository',
'lang': 'Java',
'name': 'DSpace'},
{ 'desc': 'This Repo contain the DSpace installation steps with '
'docker.',
'lang': None,
'name': 'DSpace-Docker-Installation'},
{ 'desc': 'This Repository contain the Intermediate system for the '
'Collaboration and DSpace System',
'lang': 'Python',
'name': 'Community-OER-Repository'},
{ 'desc': 'A class website to share the knowledge and expanding the '
'productivity through digital communication.',
'lang': 'PHP',
'name': 'class-website'},
{ 'desc': 'This is a POC for the Voting System. It is a precise '
'design and implementation of Voting System based on the '
'features of Blockchain which has the potential to '
'substitute the traditional e-ballet/EVM system for voting '
'purpose.',
'lang': 'Python',
'name': 'Blockchain-Based-Ballot-System'},
{ 'desc': 'It is a short describtion of Modern Django',
'lang': 'Python',
'name': 'modern-django'},
{ 'desc': 'It is just for the sample work.',
'lang': 'HTML',
'name': 'Task'},
{ 'desc': 'This Repo contain the sorting algorithms in C,predefiend '
'function of C, C++ and Java',
'lang': 'C',
'name': 'Sorting_Algos_Predefined_functions'},
{ 'desc': 'It is a arduino program, for monitor the temperature and '
'humidity from sensor DHT11.',
'lang': 'C++',
'name': 'DHT_11_Arduino'},
{ 'desc': "This is a registration from,which collect data from user's "
'desktop and put into database after validation.',
'lang': 'PHP',
'name': 'Registration_Form'},
{ 'desc': 'It is a dynamic multi-part data driven search engine in '
'PHP & MySQL from absolutely scratch for the website.',
'lang': 'PHP',
'name': 'search_engine'},
{ 'desc': 'It is just for learning github.',
'lang': None,
'name': 'Hello_world'}]