I want to print a single file first, before printing everything within a folder. I know how to read the file in Python just to look at it.
To be more specific, I am trying to print the file to my printer, which is connected over Wi-Fi.
What I am asking:
What code is needed to print (hard copy to printer) 1 file?
What code is needed to print all files within a folder (hard copies to printer)?
Have you tried this solution? It appears to be as easy as
import os
os.startfile("C:/Users/TestFile.txt", "print")
Since I'm not on a Windows system, I can't test it, but the internet seems to agree.
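Note that os.startfile is Windows-only. If you ever need the same thing on Linux or macOS, a minimal sketch that shells out to the CUPS lp command instead (assuming CUPS is installed and a default printer is configured; the file path here is hypothetical) would be:
import subprocess

# Send a file to the default CUPS printer; 'lp' ships with CUPS
# on most Linux and macOS systems. The path here is made up.
subprocess.run(["lp", "/home/user/TestFile.txt"], check=True)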
Once you know how to print one file, printing many is no harder than iterating over the contents of the directory and filtering for files. For example, since you already have os imported, let's iterate over "C:/Users/data", assuming that's where you keep all the files to be printed:
base_dir = "C:/Users/data"
for file in os.listdir(base_dir):
    # Build the full path, since os.listdir returns bare names
    file_name = os.path.join(base_dir, file)
    if os.path.isfile(file_name):
        os.startfile(file_name, "print")
You can also try using pathlib, but for such a simple use case its advantages over os are not that big.
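If you do want to try it, a rough pathlib equivalent of the loop above (again untested here, since os.startfile only exists on Windows) might look like:
import os
from pathlib import Path

base_dir = Path("C:/Users/data")
for entry in base_dir.iterdir():
    if entry.is_file():
        # str() keeps this safe on older Pythons where os.startfile
        # may not accept Path objects directly
        os.startfile(str(entry), "print")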
Related
I have a Python file, converted from a Jupyter Notebook, and there is a subfolder called 'datasets' inside this file's folder. When I try to open a file that is inside that 'datasets' folder with this code:
import pandas as pd
# Load the CSV data into DataFrames
super_bowls = pd.read_csv('/datasets/super_bowls.csv')
It says that there is no such file or folder. Then I add this line
os.getcwd()
And the output is the top-level folder of the project, not the subfolder where this Python file is. I think maybe that's the reason why it's not working.
So, how can I open that CSV file with relative paths? I don't want to use an absolute path because this code is going to be used on other computers.
Why is os.getcwd() not returning the actual folder path?
My observation: the dot (.) notation for moving to the parent directory sometimes does not work, depending on the operating system. What I generally do to make it OS-agnostic is this:
import pandas as pd
import os
# Resolve the directory that contains this script, regardless of the CWD
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
super_bowls = pd.read_csv(__location__ + '/datasets/super_bowls.csv')
This works equally well on my Windows and Ubuntu machines.
I am not sure if there are other and better ways to achieve this. I would like to hear back if there are.
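One alternative worth mentioning (my suggestion, using only the standard library) is pathlib, available since Python 3.4; Path(__file__) resolves relative to the script rather than the CWD:
import pandas as pd
from pathlib import Path

# Directory containing this script, independent of the CWD
this_dir = Path(__file__).resolve().parent
super_bowls = pd.read_csv(this_dir / "datasets" / "super_bowls.csv")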
Per your comment below, the current working directory is
/Users/ivanparra/AprendizajePython/
while the file is in
/Users/ivanparra/AprendizajePython/Jupyter/datasets/super_bowls.csv
For that reason, going to the datasets subfolder of the current working directory (CWD) takes you to /Users/ivanparra/AprendizajePython/datasets, which either doesn't exist or doesn't contain the file you're looking for.
You can do one of two things:
(1) Use an absolute path to the file, as in
super_bowls = pd.read_csv("/Users/ivanparra/AprendizajePython/Jupyter/datasets/super_bowls.csv")
(2) Use the right relative path, as in
super_bowls = pd.read_csv("./Jupyter/datasets/super_bowls.csv")
There's also (3): use os.path.join to concatenate the CWD with the relative path; it's basically the same as (2).
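A minimal sketch of option (3), building the same path with os.path.join (the folder names are the ones from the question):
import os
import pandas as pd

csv_path = os.path.join(os.getcwd(), "Jupyter", "datasets", "super_bowls.csv")
super_bowls = pd.read_csv(csv_path)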
The answer really lies in the response by user2357112:
os.getcwd() is working fine. The problem is in your expectations. The current working directory is the directory where Python is running, not the directory of any particular source file. – user2357112 supports Monica May 22 at 6:03
The solution is:
data_dir = os.path.dirname(__file__)
Try this code
super_bowls = pd.read_csv( os.getcwd() + '/datasets/super_bowls.csv')
I noticed this problem a few years ago. I think it's a matter of design style. The problem is that your workspace folder is just a folder, not a project folder. Most of the time, your relative references are based on the current file.
VS Code actually supports setting the cwd dynamically, but that's not the default. If your work folder is not a rigorous, professional project, I recommend adding the following setting to launch.json. This is the simplest answer you need.
"cwd": "${fileDirname}"
Thanks to everyone who tried to help me. Thanks to Roy2012's response, I got code that works for me.
import pandas as pd
import os
currentPath = os.path.dirname(__file__)
# Load the CSV data into DataFrames
super_bowls = pd.read_csv(currentPath + '/datasets/super_bowls.csv')
The os.path.dirname call gives me the directory of the current file, and lets me work with relative paths:
'/Users/ivanparra/AprendizajePython/Jupyter'
and with that it works like a charm!!
P.S.: As a side note, the behavior of os.getcwd() is quite different in a Jupyter Notebook than in a Python file. Inside a notebook, that function gives the notebook's folder, but in a Python file it gives the directory Python was launched from (the top project folder, in my case).
I have a directory containing multiple files with similar names, and subdirectories named after these files so that files with like names are located in the corresponding subdirectory. I'm trying to concatenate all the .sdf files in a given subdirectory into a single .sdf file.
import os
from os import system, chdir

for ele in os.listdir(Path):
    if ele.endswith('.sdf'):
        chdir(Path + '/' + ele[0:5])
        system('cat' + ' ' + '*.sdf' + '>' + ele[0:5] + '.sdf')
However when I run this, the concatenated file includes every .sdf file from the original directory rather than just the .sdf files from the desired one. How do I alter my script to concatenate the files in the subdirectory only?
This is a very clumsy way of doing it. Using chdir is not recommended, and neither is system (os.system is discouraged in favor of subprocess, and is overkill just to call cat).
Let me propose a pure Python implementation using glob.glob to filter the .sdf files, reading each file one by one and writing to the big file, which is opened before the loop:
import glob, os

big_sdf_file = "all_data.sdf"  # I'll let you compute the name/directory you want

with open(big_sdf_file, "wb") as fw:
    for sdf_file in glob.glob(os.path.join(Path, "*.sdf")):
        with open(sdf_file, "rb") as fr:
            fw.write(fr.read())
I left big_sdf_file not computed; I would not recommend putting it in the same directory as the other files, since running the script twice would result in taking the output as input as well.
Note that the drawback of this approach is that if the files are big, they're read fully into memory, which can cause problems. In that case, replace
fw.write(fr.read())
by:
shutil.copyfileobj(fr, fw)
(importing shutil is necessary in that case). That copies the data in chunks instead of reading each file fully into memory.
I'll add that this is probably not the full solution you're expecting, since there seems to be something about scanning the sub-directories of Path to create one big .sdf file per sub-directory; but since the code provided doesn't use any system command or chdir, it should be easy to adapt to your needs. A sketch of that adaptation follows.
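For instance, a rough sketch of the per-subdirectory version (assuming every directory of interest is a direct subdirectory of Path, and writing each combined file next to, not inside, its subdirectory to avoid the input/output overlap mentioned above):
import glob
import os
import shutil

for entry in os.listdir(Path):
    sub_dir = os.path.join(Path, entry)
    if not os.path.isdir(sub_dir):
        continue
    # One combined file per subdirectory, written to the parent directory
    big_sdf_file = os.path.join(Path, entry + ".sdf")
    with open(big_sdf_file, "wb") as fw:
        for sdf_file in sorted(glob.glob(os.path.join(sub_dir, "*.sdf"))):
            with open(sdf_file, "rb") as fr:
                shutil.copyfileobj(fr, fw)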
I am sure I am missing something, as it has to be easy, but I have looked in Google, the docs, and SO, and I cannot find a simple way to copy a bunch of directories into another directory that already exists in Python 3.
All of the answers I find recommend using shutil.copytree, but as I said, I need to copy a bunch of folders, and all their contents, into an existing folder (and keep any folders or files that already existed in the destination).
Is there a way to do this on windows?
I would look into using the os module built into Python, specifically os.walk, which returns all the files and subdirectories within the directory you point it to. I will not just tell you exactly how to do it; however, here is some sample code that I use to back up my files, using the os and zipfile modules.
import zipfile
import time
import os

def main():
    date = time.strftime("%m.%d.%Y")
    zf = zipfile.ZipFile('/media/morpheous/PORTEUS/' + date, 'w')
    # Walk the tree and add every file to the archive
    for root, j, files in os.walk('/home/morpheous/Documents'):
        for i in files:
            zf.write(os.path.join(root, i))
    zf.close()
Hope this helps. It should also be pointed out that I do this on Linux; however, with a few changes to the file paths it should work the same on Windows.
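Worth adding: if you are on Python 3.8 or newer, shutil.copytree accepts a dirs_exist_ok flag that copies into an existing destination and merges with whatever is already there, which is exactly what the question asks for (the folder names below are placeholders):
import shutil

# Merge src_folder into dst_folder; pre-existing files in dst_folder
# are kept, and same-named files are overwritten (Python 3.8+).
shutil.copytree("src_folder", "dst_folder", dirs_exist_ok=True)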
I have recently begun working on a new computer. All my python files and my data are in the dropbox folder, so having access to the data is not a problem. However, the "user" name on the file has changed. Thus, none of my os.chdir() operations work. Obviously, I can modify all of my scripts using a find and replace, but that won't help if I try using my old computer.
Currently, all the directories called look something like this:
"C:\Users\Old_Username\Dropbox\Path"
and the files I want to access on the new computer look like:
"C:\Users\New_Username\Dropbox\Path"
Is there some sort of try/except I can build into my script so it goes through the various path-name options if the first attempt doesn't work?
Thanks!
Any solution will involve editing your code; so if you are going to edit it anyway, it's best to make it generic enough that it works on all platforms.
In the answer to How can I get the Dropbox folder location programmatically in Python? there is a code snippet that you can use if this problem is limited to dropbox.
For a more generic solution, you can use environment variables to figure out the home directory of a user.
On Windows, the home directory location is stored in %UserProfile%; on Linux and OSX it is in $HOME. Luckily, Python will take care of all this for you with os.path.expanduser:
import os
home_dir = os.path.expanduser('~')
Using home_dir will ensure that the same path is resolved on all systems.
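From there you can build the rest of the path in a platform-neutral way; for example, reusing the Dropbox folder name from the question:
import os

home_dir = os.path.expanduser('~')
# Resolves to C:\Users\<name>\Dropbox\Path on Windows and
# /home/<name>/Dropbox/Path (or /Users/... on macOS) elsewhere
data_dir = os.path.join(home_dir, 'Dropbox', 'Path')
os.chdir(data_dir)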
Suppose the file sq.py contains these lines (your old paths):
C:/Users/Old_Username/Dropbox/Path
for x in range:
#something
def Something():
#something...
C:/Users/Old_Username/Dropbox/Path
Then a new .py file runs this code:
with open("sq.py","r") as f:
for x in f.readlines():
y=x
if re.findall("C:/Users/Old_Username/Dropbox/Path",x) == ['C:/Users/Old_Username/Dropbox/Path']:
x="C:/Users/New_Username/Dropbox/Path"
y=y.replace(y,x)
print (y)
Output is:
C:/Users/New_Username/Dropbox/Path
for x in range:
#something
def Something():
#something...
C:/Users/New_Username/Dropbox/Path
Hope this is your solution, or at least gives you some ideas for dealing with your problem.
Knowing that eventually I will move or rename my projects or scripts, I always use this code right at the beginning:
import os, inspect

# Full path to the directory containing this script
this_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
# Path to this script as it was invoked, and its bare file name
this_script = inspect.stack()[0][1]
this_script_name = this_script.split('/')[-1]
If you call your script with a relative path rather than the full path, then this_script will also not contain the full path; this_dir, however, will always be the full path to the directory.
I'm not even sure where to start.
I have a list of output files from a program; let's call them foo. They are numbered outputs like foo_1.out.
I'd like to make a directory for each file, move the file to its directory, run a bash script within that directory, take the output from each script, and copy it to the root directory as a single concatenated file.
I understand that this is not a forum for "hey, do my work for me"; I'm honestly trying to learn. Any suggestions on where to look are sincerely appreciated!
Thanks!
You should probably look up the documentation for the Python modules os (specifically os.path and a couple of others) and subprocess, both of which are in the standard library documentation.
Without wanting to do it all for you, as you stated, you'll want to do something like:
import os
import subprocess

for f in filelist:
    pth, ext = os.path.splitext(f)
    os.mkdir(pth)
    out = subprocess.Popen(SCRIPTNAME, stdout=...)
    # and so on...
To get a list of all files in a directory or make folders, check out the os module. Specifically, try os.listdir and os.mkdir
To copy files, you could manually open each file, copy the contents into a string, and write it back out to a different file. Alternatively, look at the shutil module.
To run bash scripts, use the subprocess library.
All three of those are part of Python's standard library.
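Putting those pieces together, here is a rough sketch of the whole workflow. The foo_*.out pattern and the script name process.sh are assumptions based on the question, and it assumes the bash script writes its result to stdout; adjust to taste:
import glob
import os
import shutil
import subprocess

root = os.getcwd()
script = os.path.join(root, "process.sh")   # assumed name/location of your bash script

with open(os.path.join(root, "combined.out"), "w") as combined:
    for f in sorted(glob.glob("foo_*.out")):
        name, _ = os.path.splitext(f)       # e.g. "foo_1"
        os.mkdir(name)                      # make a directory for the file
        shutil.move(f, name)                # move the file into it
        # Run the script inside that directory and capture its stdout
        result = subprocess.run(["bash", script], cwd=name,
                                capture_output=True, text=True, check=True)
        combined.write(result.stdout)       # concatenate into one root-level file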