I have a folder containing thousands of images, and each image needs a unique list of keywords added to it. I also have a table with fields showing the file path and the associated list of desired keywords for each image. For example, one record might need the tags "ORASH (a survey site code), Crew 1, Transect A Upstream, Site Layout", while the next record might need the tags "ORWLW, Crew 2, Amphibian, Pacific Giant Salamander".
How do I iterate over each image to add the IPTC keywords to them? I'm using python 3 and the iptcinfo3 module but am willing to try other modules that may work.
Here's where I'm at now:
import os
import pandas as pd
from iptcinfo3 import IPTCInfo
srcdir = r'E:\photos'
files = os.listdir(srcdir)
# Create a dataframe from the table containing filepaths and associated keywords.
df = pd.read_excel(r'E:\photo_info.xlsx')
# Create a dictionary with the filename as the key and the tags as the value.
references = dict(df.set_index('basename')['tags'])
for file in files:
    # Get the full filepath for each image.
    filepath = os.path.join(srcdir, file)
    # Create an object for a file that may not have IPTC data (ignore the 'Marker scan...' notification).
    info = IPTCInfo(filepath, force=True)
At this point, I imagined I'd use info['keywords'] = ... in conjunction with the 'references' dictionary to plug the keywords into the correct files. Then info.save_as(filepath). I'm just not experienced enough to know how to make this work or even if it's a reasonable way of doing it. Any help would be appreciated!
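For what it's worth, here is the untested sketch I have in mind. It assumes the 'tags' column holds comma-separated keyword strings and that 'basename' matches the filenames on disk; the splitting logic would need adjusting if the table stores keywords differently:

import os
import pandas as pd
from iptcinfo3 import IPTCInfo

srcdir = r'E:\photos'
df = pd.read_excel(r'E:\photo_info.xlsx')
# Map each filename to its keyword string.
references = dict(df.set_index('basename')['tags'])

for file in os.listdir(srcdir):
    filepath = os.path.join(srcdir, file)
    if file not in references:
        continue  # skip images that have no row in the table
    info = IPTCInfo(filepath, force=True)
    # Assumption: 'tags' holds a comma-separated string like "ORASH, Crew 1, ...".
    info['keywords'] = [kw.strip() for kw in str(references[file]).split(',')]
    info.save_as(filepath)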
I saved the table with the filenames and keywords as a .csv file where the fields and records looked like this (though the text in the 'Subject' field didn't include the quotes):
SourceFile, Artist, Subject
E:\photos\0048.JPG, MARY GRAY, "YEAR2022, REQUIRED, GPS UNIT WITH TIME"
Because I use Jupyter Lab for other portions of this workflow, I ran this code there:
import os
os.system('cmd d: & exiftool -overwrite_original -sep ", " -csv="E:\photos\metadata.csv" E:\photos')
This opens the Windows command prompt, switches to the D: drive (where the exiftool.exe file is saved), invokes exiftool, tells it to overwrite the original image files rather than create copies, defines the keyword separator used in the .csv file, reads the .csv file listing filenames and associated keywords, and runs against the E:\photos directory.
Worked like a charm on about 4,000 photos!
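If you'd rather not shell out through cmd, the same call could presumably be made with subprocess; this is an untested sketch that assumes exiftool.exe lives at D:\exiftool.exe (adjust to your install location):

import subprocess

# Assumption: exiftool.exe is at D:\exiftool.exe; change to your location.
subprocess.run(
    [r'D:\exiftool.exe', '-overwrite_original', '-sep', ', ',
     r'-csv=E:\photos\metadata.csv', r'E:\photos'],
    check=True,  # raise if exiftool exits with an error
)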
How do I access the Tags attribute shown in the Windows File Properties panel?
Are there any modules I can use? Most Google searches yield properties related to media files and file access times, but not much related to metadata properties like Tags, Description, etc.
The exif module was able to access more properties than most others I found, but it still wasn't able to read the 'Tags' property.
The Description -> Tags property is what I want to read from and write to a file.
There's an entire module dedicated to exactly what I wanted: IPTCInfo3.
import iptcinfo3, os, sys, random, string
# Random string generator
rnd = lambda length=3 : ''.join(random.choices(list(string.ascii_letters), k=length))
# Path to the file, open a IPTCInfo object
path = os.path.join(sys.path[0], 'DSC_7960.jpg')
info = iptcinfo3.IPTCInfo(path)
# Show the keywords
print(info['keywords'])
# Add a keyword and save
info['keywords'] = [rnd()]
info.save()
# Remove the weird ghost file created after saving
os.remove(path + '~')
I'm not particularly sure what the ghost file is or does; it looks to be an exact copy of the original file, since the file size remains the same. Regardless, I remove it, since it's useless for my metadata read/write purposes.
There have been some weird behaviours I've noticed while setting the keywords, like some get swallowed up into the file (the file size changed, I know they're there, but Windows doesn't acknowledge this), and only after manually deleting the keywords do they reappear suddenly. Very strange.
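If you want to add to the existing keywords rather than replace them, a small variation on the same calls should work (untested sketch; 'new_keyword' is a placeholder):

import os, sys, iptcinfo3

path = os.path.join(sys.path[0], 'DSC_7960.jpg')
info = iptcinfo3.IPTCInfo(path)
# Extend the existing keyword list instead of overwriting it.
info['keywords'] = list(info['keywords']) + ['new_keyword']
info.save()
# Remove the backup file iptcinfo3 leaves behind, as above.
if os.path.exists(path + '~'):
    os.remove(path + '~')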
I want to import multiple csv files at once into QGIS. The files have lat/long data, and I want the points plotted from those coordinates. Basically, I want the same result from importing the csv files as I would get using Data Source Manager - Delimited Text with Point Coordinates selected and the x-field and y-field set to Long/Lat respectively.
I keep coming across the same python code on numerous forums. While I can get the files to import as tables, I cannot get them to load with geometry (a next-stage problem will be getting the timestamp to load as a date instead of a string; I may have to refactor all the files).
Here's the code available on forums which results in loading broken links (my files have column headers "Lat" and "Long"):
import glob, os
# Define path to directory of your csv files
path_to_csv = "C:/File Path/"
# Set current directory to path of csv files
os.chdir(path_to_csv)
# Find each .csv file and load them as vector layers
for fname in glob.glob("*.csv"):
    uri = "file:///" + path_to_csv + fname + "encoding=%s&delimiter=%s&xField=%s&yField=%s&crs=%s" % ("UTF-8", ",", "Long", "Lat", "epsg:4326")
    name = fname.replace('.csv', '')
    lyr = QgsVectorLayer(uri, name, 'delimitedtext')
    QgsProject.instance().addMapLayer(lyr)
This code will load layers, but with a warning triangle for "Unavailable Layer". Clicking the triangle opens the "Repair Data Source" window. I can manually select the file and repair the link, but then the layer is nothing more than a table with all fields as strings.
If I run the code like this I get the files to import, but only as tables and without geometry:
import glob, os
# Define path to directory of your csv files
path_to_csv = "C:/Users/DanielStevens/Documents/Afghanistan Monitoring/Phase 2/Border Crossing/Crossing Polygons/Pakistan/"
# Set current directory to path of csv files
os.chdir(path_to_csv)
# Find each .csv file and load them as vector layers
for fname in glob.glob("*.csv"):
    uri = "file:///" + path_to_csv + fname
    "encoding=%s&delimiter=%s&xField=%s&yField=%s&crs=%s" % ("UTF-8", ",", "Long", "Lat", "epsg:4326")
    name = fname.replace('.csv', '')
    lyr = QgsVectorLayer(uri, name, 'delimitedtext')
    QgsProject.instance().addMapLayer(lyr)
How do I get the CSV files to batch import with geometry (Lat Long projecting points)?
I modified what you had to the line below and it worked perfectly. I removed the encoding because my data wasn't UTF-8. Not sure if that's what did it.
uri = "file:///" + path_to_csv + fname + "?delimiter=%s&crs=epsg:3857&xField=%s&yField=%s" % (",", "lon", "lat")
In case it helps with part of the issue: using a .csvt file when importing a csv forces the data types (a pain if you have a number of files, especially if the file names change, e.g. when a new batch needs to be processed). I was thinking about writing some python that would read each csv, create a .csvt with the same filename, and populate it with the right number of column definitions. In the end, as I only have 30 files, it was quicker to use Notepad to make the .csvt and then rename it accordingly. I have also found that converting date/time fields to Oracle date/time is handled more consistently in QGIS. Hope that helps.
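For reference, the .csvt generator I had in mind might look like the sketch below. The type mapping is an assumption (Lat/Long as Real, everything else as String); adjust it to your columns:

import csv, glob, os

path_to_csv = "C:/File Path/"  # hypothetical directory of csv files

for fname in glob.glob(os.path.join(path_to_csv, "*.csv")):
    with open(fname, newline='') as f:
        header = next(csv.reader(f))
    # Assumption: Lat/Long columns are Real, everything else String.
    types = ['"Real"' if col in ("Lat", "Long") else '"String"' for col in header]
    # A .csvt is a single line of quoted types, one per column.
    with open(fname[:-4] + ".csvt", "w") as f:
        f.write(",".join(types))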
I'm incrementally building a financial statement database. The first steps center on collecting 10-Ks from the SEC's EDGAR database. I have code for pulling the relevant 8-Ks, 10-Ks, and 10-Qs by CIK number and accession number, and retrieving the relevant Excel spreadsheet. The code below centers on creating a folder within a directory, naming the folder with the CIK code, pulling the spreadsheet from EDGAR, and saving it to that folder. My example is a csv file I'm calling "accessionnumtest.csv", which has the headings:
company_name,report_type,cik,date,cik_accession
and data:
4Less Group, Inc.,10K/A,1438901,11/27/2019,edgar/data/1438901/000121390019024801.txt
AB INTERNATIONAL GROUP CORP.,10K,1605331,10/22/2019,edgar/data/1605331/000166357719000384.txt
ABM INDUSTRIES INC /DE/,10K,771497,12/20/2019,edgar/data/771497/000162828019015259.txt
ACTUANT CORP,10K,6955,10/29/2019,edgar/data/6955/000000695519000033.txt
My code is below:
import os
import pandas as pd
path = os.getcwd()
folder_path = "C:/metricdatadb/"
df = pd.read_csv("accessionnumtest.csv")
folder_name = df['cik']
print(folder_name)
for row in df.iterrows():
    dir = df.path.join(folder_path, folder_name)
    os.makedirs(dir)
This code is giving me AttributeError: 'DataFrame' object has no attribute 'path'. I have renamed the path and checked for whitespace in the headers. Any suggestions are appreciated.
Regarding the error: it's os.path.join, not df.path.join. You are calling join on the wrong object.
That being said, your code is not doing what you are trying to do regardless of the error: folder_name will not update for each row. Unpack the pairs that iterrows() yields and use row.cik to get the value for each row (cast it to str, since os.path.join expects strings):
for _, row in df.iterrows():
    dir = os.path.join(folder_path, str(row.cik))
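Put together, a minimal corrected sketch (folder_path and the csv name are taken from your post; exist_ok=True avoids an error if a CIK repeats):

import os
import pandas as pd

folder_path = "C:/metricdatadb/"
df = pd.read_csv("accessionnumtest.csv")

for _, row in df.iterrows():
    # os.path.join, not df.path.join; CIKs are ints, so cast to str.
    cik_dir = os.path.join(folder_path, str(row.cik))
    os.makedirs(cik_dir, exist_ok=True)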
It is relatively unclear what you're working towards, particularly with the .csv files and pandas. The code you have contains a number of basic errors, which suggests going back over some simpler Python concepts before attempting something as difficult as web scraping. Note I don't mean give up; rather, building up the fundamentals is a necessary step in this type of project.
That said, if I'm understanding your intent correctly, you want to create a file hierarchy for 10-K, 10-Q, etc. filings for several CIKs.
There shouldn't be any need to use .csv files, or pandas for this.
Probably the simplest way to do this would be to do it in the same step you download them.
Pseudocode for this would be as follows:
for cik in list_of_ciks:
    first_file = find_first_file_online()
    if first_file is a 10-K:
        save to the 10-K folder for that CIK
    if first_file is a 10-Q:
        save to the 10-Q folder for that CIK
As I said above, you can skip the .csv file. (Also, note that CSV stands for "comma-separated values." Some of the entries in your data contain commas, e.g. "4Less Group, Inc." Unless such fields are quoted, this is incompatible with a CSV file, as the single entry will be split into two columns on the comma, shifting all of your data over by one column.)
When you process the data, you'll want to build the folders as you go.
When you iterate through a new CIK, create the master folder for that CIK. When you encounter a 10-K, create a folder for 10-K's and save it with a unique name. Since you need to use the accession numbers to get the excel sheets, that wouldn't be a bad naming convention to follow.
It would be something like this:
import requests
import pathlib

cik_list = [cik_1, cik_2, ..., cik_n]  # placeholders
for cik in cik_list:
    response = requests.get("cik/accession/Report.xlsx")  # placeholder URL
    out_path = pathlib.Path(cik, report_type, accession_number + ".xlsx")
    with open(out_path, "wb") as excel_file:
        # requests exposes the downloaded bytes as .content
        excel_file.write(response.content)
The above code will definitely not run as-is, and does not include everything you would need to make it work, since that information lives in your own code. Integrating the above concepts into your code is up to you.
To reiterate, you have the CIK, the accession number, and the report type. To save the files in folders, you need only create the folders as you go, with the form "CIK/report_type/accession.xlsx"
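As a concrete sketch of that layout (the values below are placeholders pulled from one row of your sample data):

import pathlib

# Placeholders from one csv row: cik, report type, accession number.
cik, report_type, accession = "1438901", "10K", "000121390019024801"
out_dir = pathlib.Path("C:/metricdatadb", cik, report_type)
out_dir.mkdir(parents=True, exist_ok=True)  # creates CIK/report_type in one call
out_path = out_dir / (accession + ".xlsx")  # where the downloaded sheet would go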
My goal: To build a program that:
Opens a folder (provided by the user) from the user's computer
Iterates through that folder, opening each document in each subdirectory (named according to language codes; "AR," "EN," "ES," etc.)
Substitutes one string for another in each document. Crucially, the new string changes with each document (though the old string does not), according to the language code in the folder name.
My level of experience: Minimal; been learning python for a few months but this is the first program I'm building that's not paint-by-numbers. I'm building it to make a process at work faster. I'm sure I'm not building this as efficiently as possible; I've been throwing it together from my own knowledge and from reading stackexchange religiously while building it.
Research I've done on my own: I've been living in stackexchange the past few days, but I haven't found anyone doing quite what I'm doing (which was very surprising to me). I'm not sure if this is just because I lack the vocabulary to search (tried out a lot of search terms, but none of them totally match what I'm doing) or if this is just the wrong way of going about things.
The issue I'm running into:
I'm getting this error:
Traceback (most recent call last):
  File "test5.py", line 52, in <module>
    for f in os.listdir(src_dir):
OSError: [Errno 20] Not a directory: 'ExploringEduTubingEN(1).txt'
I'm not sure how to iterate through every file in the subdirectories and update a string within each file (not the file name) with a new, unique string. I thought I had it, but this error has thrown me off. Prior to this, I was getting an error on the same line that said "Not a file or directory: 'ExploringEduTubingEN(1).txt'". It surprises me that the first error could complain about a file or a directory and, once I fixed that, the next asked only for a directory; it seems like it should have asked for a directory from the beginning.
With no further ado, the code (placed at the bottom because it's long, to include context):
import os

ex = raw_input("Please provide an example PDF that we'll append a language code to. ")
# Asking for a PDF to which we'll iteratively append the language codes from below.
lst = ['_ar.pdf', '_cs.pdf', '_de.pdf', '_el.pdf', '_en_gb.pdf', '_es.pdf', '_es_419.pdf',
       '_fr.pdf', '_id.pdf', '_it.pdf', '_ja.pdf', '_ko.pdf', '_nl.pdf', '_pl.pdf',
       '_pt_br.pdf', '_pt_pt.pdf', '_ro.pdf', '_ru.pdf', '_sv.pdf', '_th.pdf', '_tr.pdf',
       '_vi.pdf', '_zh_tw.pdf', '_vn.pdf', '_zh_cn.pdf']
# list of language code PDF appending strings.
pdf_list = open('pdflist.txt', 'w+')
# creating a document to put this group of PDF filepaths in.
pdf2 = 'pdflist.txt'
# making this an actual variable.
for word in lst:
    pdf_list.write(ex + word + "\n")
    # creating a version of the PDF example for every item in the language list, and then appending the language codes.
pdf_list.seek(0)
langlist = pdf_list.readlines()
# creating a list of the PDF paths so that I can use it below.
for i in langlist:
    i = i.rstrip("\n")
    # removing the line breaks.
pdf_list.close()
# closing the file after removing the line breaks.
file1 = raw_input("Please provide the full filepath of the folder you'd like to convert. ")
# the folder provided by the user to iterate through.
folder1 = os.listdir(file1)
# creating a list of the files within the folder
pdfpath1 = "example.pdf"
langfile = "example2.pdf"
# setting variables for below
# my thought here is that i'd need to make the variable the initial folder, then make it a list, then iterate through the list.
for ogfile in folder1:
    # want to iterate through all the files in the directory, including in subdirectories
    src_dir = ogfile.split("/", 6)
    src_dir = "/".join(src_dir[:6])
    # goal here is to cut off the language code folder name and then join it again, w/o language code.
    for f in os.listdir(src_dir):
        f = os.path.join(src_dir, f)
    # i admit this got a little convoluted; i'm trying to make sure the files put the right code in, i.e. that the document from the folder ending in "AR" gets the PDF that will now end in "AR"
    # the perils of pulling from lots of different questions in stackexchange
    with open(ogfile, 'r+') as f:
        content = f.read()
        f.seek(0)
        f.truncate()
        for langfile in langlist:
            f.write(content.replace(pdfpath1, langfile))
        # replacing the placeholder PDF link with the created PDF links from the beginning of the code
If you read this far, thanks. I've tried to provide as much information as possible, especially about my thought process. I'll keep trying things and reading, but I'd love to have more eyes on it.
You have to specify the full path to your directories/files. Use os.path.join to create a valid (and platform-independent) path to your file or directory.
For replacing your string, simply build the new string from the subfolder name. Assuming that ex has the format filename.pdf, you could use: newstring = ex[:-4] + '_' + str.lower(subfolder) + '.pdf'. That way, you do not have to specify the list of replacement strings nor loop through that list.
Solution
To iterate over your directory and replace the content of your files as you'd like, you can do the following:
import os

# Get the name of the file: "example.pdf" (note the .pdf extension is assumed here)
ex = raw_input("Please provide an example PDF that we'll append a language code to. ")
# Get the folder to go through
folderpath = raw_input("Please provide the full filepath of the folder you'd like to convert. ")
# Get all subfolders and go through them (named: 'AR', 'DE', etc.)
subfolders = os.listdir(folderpath)
for subfolder in subfolders:
    # Get the full path to the subfolder
    fullsubfolder = os.path.join(folderpath, subfolder)
    # If it is a directory, go through it
    if os.path.isdir(fullsubfolder):
        # Find all files in the subdirectory and go through each of them
        files = os.listdir(fullsubfolder)
        for filename in files:
            # Get the full path to the file
            fullfile = os.path.join(fullsubfolder, filename)
            # If it is a file, process it (note: we do not check whether it is a text file here)
            if os.path.isfile(fullfile):
                with open(fullfile, 'r+') as f:
                    content = f.read()
                    f.seek(0)
                    f.truncate()
                    # Create the replacement string based on the subdirectory name. Ex: 'example_ar.pdf'
                    newstring = ex[:-4] + '_' + str.lower(subfolder) + '.pdf'
                    f.write(content.replace(ex, newstring))
Note
Instead of asking the user to type the folder path, you could let them pick the directory with a dialog box. See this question for more info: Use GUI to open directory in Python 3
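A minimal sketch of that dialog-box approach with tkinter (Python 3 module names; under Python 2 they are Tkinter and tkFileDialog):

from tkinter import Tk
from tkinter.filedialog import askdirectory

root = Tk()
root.withdraw()  # hide the empty root window
folderpath = askdirectory(title="Select the folder to convert")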
Up until now, I have a structure like this on the top of all of my files (I process raw data and do analysis with pandas so I am working with a lot of raw data):
raw_location = 'C:/Users/OneDrive/raw/'
output_location = 'C:/Users/OneDrive/output/'
mtd_location = 'C:/Users/OneDrive/modified/'
py_location = 'C:/Users/OneDrive/py_files/'
There are a bunch of different paths and some .py files use the same path name to refer to a different path (for example, raw_location is the source of the data which is different for different files). It has become a mess.
Under the locations, I have a list of file names (import_filename, modified_filename, dashboard_filename). All told, I am wasting like 10+ lines of code on each file just to specify variable names. I know there must be a better way to do this.
So far I moved my .py and .ipynb files into folders within the main directory which means I can use relative paths like '../raw' which has helped. Can I create a file which has all of the paths and file name variables within it and then read that instead of listing the paths at the top of my code? What is the best practice here?
Edit: After reviewing the comments below and digging deeper into this issue, I've added two additional options:
1) Use Python's configparser - https://docs.python.org/2/library/configparser.html
Examples:
https://stackoverflow.com/a/29479549/5088142
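For illustration, a minimal sketch using Python 3's configparser (the file name paths.ini and the section name locations are assumptions; in Python 2 the module is named ConfigParser):

# paths.ini (hypothetical contents):
# [locations]
# raw_location = C:/Users/OneDrive/raw/
# output_location = C:/Users/OneDrive/output/

import configparser

config = configparser.ConfigParser()
config.read('paths.ini')
raw_location = config['locations']['raw_location']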
2) As BlackJack mentioned, one can remove the class from the imported file.
You can write a config file, e.g. named LDconfig.py:
raw_location = 'C:/Users/OneDrive/raw/'
output_location = 'C:/Users/OneDrive/output/'
mtd_location = 'C:/Users/OneDrive/modified/'
py_location = 'C:/Users/OneDrive/py_files/'
In your files, you then import this module from LDconfig.py using:
import LDconfig
In your files you can access the data using module.variable, e.g.
LDconfig.raw_location
3) You can write a config file, e.g. named LDconfig.py, with a class:
class LDconfig:
    raw_location = 'C:/Users/OneDrive/raw/'
    output_location = 'C:/Users/OneDrive/output/'
    mtd_location = 'C:/Users/OneDrive/modified/'
    py_location = 'C:/Users/OneDrive/py_files/'
In your files, you then import this class from LDconfig.py using:
from LDconfig import LDconfig
In your files you can access the data using classname.variable, e.g.
LDconfig.raw_location