How to create and write to files indefinitely in Python? [duplicate]

This question already has answers here:
Implement touch using Python?
(16 answers)
Closed 4 years ago.
I have to make a script that creates a certain number of subfolders in each main folder (dir1, dir2, and dir3). Then, inside each subfolder, files (.txt, for example) have to be created constantly until I (the user) decide to stop the program. Right now I can create subfolders (from 0 to 99) in each main folder, but I'm not sure how to create the .txt files. Any suggestions or ideas would be very much appreciated. Here's my code:
import os

folders = ["dir1", "dir2", "dir3"]
count = 0
for folder in folders:
    for count in range(100):
        subfolder = str(count)
        newfolder = os.makedirs(os.path.join(folder, subfolder))
I'm trying to do something like this:
with open("%s/01.txt" % count, "wa") as fh:
so that it goes into each subdirectory and creates the file.

You can simply open and write nothing. For example:
with open("01.txt", "w") as fh:
fh.write("")
This means that you can also write stuff in the future, if necessary. It may not be necessary to .write(), but it improves readability.
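For the layout in the question, a minimal sketch (assuming the dir1/dir2/dir3 names and the 0 to 99 subfolders from the question) that keeps creating numbered files until the user presses Ctrl+C could look like this:

# Sketch: keep creating numbered .txt files in every subfolder until the
# user presses Ctrl+C. Folder names are taken from the question.
import os
import itertools
import time

folders = ["dir1", "dir2", "dir3"]

try:
    for i in itertools.count():               # 0, 1, 2, ... forever
        for folder in folders:
            for count in range(100):
                subfolder = os.path.join(folder, str(count))
                os.makedirs(subfolder, exist_ok=True)
                filename = os.path.join(subfolder, "%02d.txt" % i)
                with open(filename, "w") as fh:
                    fh.write("")              # create an empty file
        time.sleep(1)                         # avoid flooding the disk
except KeyboardInterrupt:
    print("Stopped by user")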

Apply script to multiple folders using string in file path [duplicate]

This question already has answers here:
How to use glob() to find files recursively?
(28 answers)
Closed 1 year ago.
I am new to the programming world and I have hit a snag on a piece of code.
What I have: I have a piece of code that identifies all .MP4 files and calculates the size of the files within a directory. So far I can only apply this code to a specific folder that I input manually. But the code works.
Problem: I would like to apply what I have to multiple folders within a directory. I have several folders with years in the file path, and I would like to apply this code to each year individually. For example, I need a piece of code (or a pointer in the right direction) that allows me to run this code on all folders with '2021' in the name.
Any pointers or suggestions are welcome.
# import module
import os
# assign size
size = 0
# assign folder path
Folderpath = r'C:\file_path_name_here'
# get size
for path, dirs, files in os.walk(Folderpath):
    for f in files:
        if not f.endswith('.MP4'):
            continue
        else:
            fp = os.path.join(path, f)
            size += os.path.getsize(fp)
# display size
print("Folder size: " + str(size))
You can use glob for that.
If I understand you correctly, you want to iterate through all the folders, right?
I'm not a seasoned coder myself either, but I use this in a script where I have to iterate over an unknown number of files and folders, in my case PDFs, which then get scanned/indexed/merged.
This returns a list of files that you can then work through. os.path.commonpath is also handy for that.
# Note: 'config' and 'logger' come from elsewhere in the original script.
from pathlib import Path
import os

def getpdflisting(fpath):
    filelist = []
    for filepath in Path(os.path.join(config['Ordner']['path'] +
                                      fpath)).glob('**/*.pdf'):
        filelist.append(str(filepath))
    if filelist:
        filelist = [x for x in filelist if x]
        logger.info(filelist)
    return filelist
Or even better https://stackoverflow.com/a/66042729/16573616
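To apply the size calculation from the question to every folder whose name contains '2021', one option is a sketch along these lines (the base path is the placeholder from the question):

# Sketch: run the size calculation for every folder whose name contains "2021".
import os
import glob

base = r'C:\file_path_name_here'

for folderpath in glob.glob(os.path.join(base, '*2021*')):
    if not os.path.isdir(folderpath):
        continue
    size = 0
    for path, dirs, files in os.walk(folderpath):
        for f in files:
            if f.endswith('.MP4'):
                size += os.path.getsize(os.path.join(path, f))
    print(folderpath, "folder size:", size)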

Removing old user directories [duplicate]

This question already has answers here:
How to delete the contents of a folder?
(26 answers)
Closed 1 year ago.
I am trying to create a python script to help clean up a folder that I create with powershell. This is a directory that contains folders named after the users for them to put stuff into.
Once they leave our site, their folder remains, and a new folder gets created for every new member of staff, which means we have over 250 folders but only 100 active staff. I have a test folder that I am using so that I can get the script working; later I'll add in extra bits like moving the directories to an old location and then deleting them based on age. At the moment I am working on the delete portion of the script. So far I have the script below; it runs with no errors, but it doesn't actually do anything, and I am failing to see why.
It should read a CSV file that I have of all current staff members, compare that to the folders located in the file path, and then remove any folders that don't match the names in the CSV file.
The CSV file is generated from powershell using the same script that I used to create them.
import os
import pandas as pd

path = "//servername/Vault/Users$"
flist = pd.read_csv('D:/Staff List.csv')
file_name = flist['fileName'].tolist()

for fileName in os.listdir(path):
    # If file is not present in list
    if fileName not in file_name:
        # Get full path of file and remove it
        full_file_path = os.path.join(path, fileName)
        os.remove(full_file_path)
Use shutil.rmtree to recursively remove an old user directory and all of its contents; os.remove only works on files, not directories.
import shutil
PATH = 'D:/user/bin/old_dir_to_remove'
shutil.rmtree(PATH)
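Applied to the layout in the question, a sketch that combines the CSV comparison with shutil.rmtree (using the paths from the question) might look like this:

# Sketch: same comparison as in the question, but using shutil.rmtree,
# since os.remove() only deletes files, not directories.
import os
import shutil
import pandas as pd

path = "//servername/Vault/Users$"
flist = pd.read_csv('D:/Staff List.csv')
file_name = flist['fileName'].tolist()

for fileName in os.listdir(path):
    if fileName not in file_name:
        full_path = os.path.join(path, fileName)
        if os.path.isdir(full_path):
            shutil.rmtree(full_path)   # remove the folder and its contents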

Extracting files from directory via loop [duplicate]

This question already has answers here:
Extracting specific files within directory - Windows
(2 answers)
Closed 3 years ago.
I am running a loop which needs to access circa 200 files in the directory.
In the folder - the format of the files range as follows:
Excel_YYYYMMDD.txt
Excel_YYYYMMDD_V2.txt
Excel_YYYYMMDD_orig.txt
I only need to extract the first one - that is YYYYMMDD.txt, and nothing else
I am using glob.glob to access the directory where I specified my path name as follows:
path = "Z:\T\Al8787\Box\EAST\OT\\ABB files/2019/*[0-9].txt"
However, the code also extracts the Excel_YYYYMMDD_orig.txt file too.
Appreciate assistance on how to modify code to only extract desired files.
A simple solution would be to loop through the files returned by glob.glob(path). For example if
files = glob.glob("Z:\T\Al8787\Box\EAST\OT\\ABB files/2019/*[0-9].txt")
you could have
cleaned_files = [file for file in files if "orig" not in file]
This would remove every item in files that contains the substring orig
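If the match needs to be stricter (for example, to also skip the _V2 files), a small sketch using a regular expression, assuming the path and the Excel_YYYYMMDD naming from the question:

# Sketch: keep only files named exactly Excel_YYYYMMDD.txt (eight digits,
# no suffix), using a regular expression on the file name.
import glob
import os
import re

pattern = re.compile(r'^Excel_\d{8}\.txt$')
files = glob.glob(r"Z:\T\Al8787\Box\EAST\OT\ABB files/2019/*.txt")
cleaned_files = [f for f in files if pattern.match(os.path.basename(f))]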
Maybe you should incorporate a split function into the code:
var=path.split('whatever letter separates them')
Then print out that variable.

How to create directory in ram and get path to it? [duplicate]

This question already has answers here:
How can I create a ramdisk in Python?
(4 answers)
Closed 3 years ago.
I'm looking for a library that allows me to create a "fake" directory in RAM and get a path to it that works like a path to a normal directory on disk.
My scenario is that I have a python script that executes another program (third party that I cannot modify) but it produces files and writes them to a specified location.
Then, in my script, I read these files and do something with them.
It's slow and I know that the bottleneck here is reading/writing files to/from disk (even if it's SSD).
Is it possible that I don't change the core of my script and only replace the temporary folder path that I use to store intermediate files?
I don't need them and I remove them after they are processed.
The perfect solution would be something like this:
import fakeRAM
tmp_dir_path = fakeRAM.get_path()
...
os.system("program.exe " + tmp_dir_path)
import tempfile
import pandas as pd
import io
data=pd.read_csv("data.csv")
temp = tempfile.NamedTemporaryFile(prefix="", suffix=".csv", mode='w+b', delete=False)
data.to_csv(temp.name)
print("The created file is", temp.name)

How could Python iterate over a directory and detect new files? [duplicate]

This question already has answers here:
Monitoring contents of files/directories? [duplicate]
(5 answers)
Closed 8 years ago.
How can I iterate over all files and subdirectories in a directory,
and detect when a new file is put there?
Thank you for your help!
Try os.walk. More specifically, try:
top="."
import os
for root, dirs, files in os.walk(top):
for name in files:
# do something with each file as 'name' (a)
pass
for name in dirs:
# do something with each subdir as 'name' (b)
pass
# do something with root (dir path so far)
# break at any point if necessary
To answer the question in your comment: at point (b) in the code you can handle any subdirectory logic (including checking that you have the right subdirectory before applying custom logic), either in another function or directly inline.
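For the "detect new files" part of the question, a simple sketch that polls the tree and remembers which paths it has already seen (dedicated tools like watchdog or inotify are better suited for production, but this only needs the standard library):

# Sketch: detect newly added files by polling the tree and remembering
# which paths have already been seen.
import os
import time

top = "."
seen = set()

while True:
    for root, dirs, files in os.walk(top):
        for name in files:
            filepath = os.path.join(root, name)
            if filepath not in seen:
                seen.add(filepath)
                print("New file:", filepath)
    time.sleep(5)   # poll every few seconds; stop with Ctrl+C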
