Why can't I find files that have the .py extension?

The code below can find files that have a particular file extension, but the same code can't find files that have the .py extension. What is the reason for that?
search.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os

def search(dir_or_file):
    if dir_or_file.startswith("*."):
        return [
            os.path.join(i, m)
            for i, j, k in os.walk("/")
            for m in k
            if m.endswith("." + dir_or_file.split(".")[1])
        ]
    else:
        return [
            os.path.join(i, dir_or_file)
            for i, j, k in os.walk("/")
            if dir_or_file in k or dir_or_file in j
        ]

if __name__ == "__main__":
    import sys
    if len(sys.argv) == 2:
        for i in search(sys.argv[1]):
            print(i)
When I run the search.py script to search for files that have *.txt extension, I am getting the below output:
tanberk@kutlu:~$ ./search.py *.txt
/media/tanberk/Data/Belgeler/İş Başvurusu/a.txt
/media/tanberk/Data/Projects/astrology/a.txt
/home/tanberk/a.txt
/home/tanberk/Projects/astrology/a.txt
tanberk@kutlu:~$
Note: By the way, there are many files that have the *.txt extension; the program cannot find all of them.
When I run the search.py script to search for files that have *.py extension, I am getting the below output:
tanberk@kutlu:~$ ./search.py *.py
tanberk@kutlu:~$

Your shell is expanding the pattern before your script sees it. You find all the a.txt files because *.txt expands to a.txt from your home directory (and you apparently have no other .txt files in your home directory), so the real command you ran was ./search.py a.txt, and all you see are files named a.txt.
At a guess, the reason you see nothing for *.py is because you have multiple .py files in your home directory, and your script explicitly does nothing if len(sys.argv) is anything other than 2. When the shell expands *.py, you end up with a too long sys.argv (because it actually ran something like ./search.py foo.py search.py ...) and do nothing.
To run your script as you expect (with the script interpreting the wildcards), quote your inputs to prevent shell expansion, e.g.:
$ ./search.py '*.txt'
I also suggest you change your argument processing to produce a usage message when it receives the wrong number of arguments; usage errors should never pass silently.
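A minimal sketch of that argument check (the actual search call is elided; the usage text is illustrative):

```python
import sys

def main(argv):
    """Validate the argument count before searching; print usage otherwise."""
    if len(argv) != 2:
        # A usage error should be loud, not silent
        print(f"usage: {argv[0]} 'PATTERN_OR_NAME'", file=sys.stderr)
        return 2  # conventional exit status for bad usage
    # ... call search(argv[1]) and print the results here ...
    return 0
```

Quoting the pattern in the usage text also reminds users to suppress shell expansion.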

Generate an output csv file outside the exe file made using Pyinstaller

Problem
I'm using PyInstaller on Windows to make an .exe file of my project. Basically, my project generates a CSV file as output, and the name of the CSV file depends on the current time, so the program generates a unique file each time it is run.
I couldn't find any resource online that could help me with this problem.
PyInstaller Command that I used: (data.csv file added is supposed to be bundled with exe so no issue there)
pyinstaller src\main.py -F --name "Attendance_System" --add-data "src\data.csv;data" --add-data "C:\Users\Darshit Shah\OneDrive\Desktop\TCET\Att_Sys\att_sys\Lib\site-packages\customtkinter;customtkinter" --clean
code block where the file is generated:
except KeyboardInterrupt:
    timer.cancel()
    endTime = str(dt.datetime.now().time())
    op_file = f"{app.currdate}_{app.startTime[0:-7]}_{endTime[0:-7]}.csv".replace(":","-")
    app.getList().to_csv(f"{op_file}")
    print("O/P File generated")
    sys.exit()
Basically, the code generates the file in the folder where my main.py is located, but after bundling it with PyInstaller I can't seem to achieve that.
Project Structure
my_proj
|
|--build
|
|--dist <--- "This is Where i want my output file to generate"
| `--my_proj.exe
|
|--proj_venv
| |--Include
| |--Lib
| |--Scripts
| `--pyvenv.cfg
|
`--src <--- "Folder where my output file would normally generate without .exe"
|--classes.py
|--interface.py
|--main.py
`--data.csv
Explanation of Problem
I echo @mrblue6's statement, but based on past coding experience I believe that the line
app.getList().to_csv(f"{op_file}")
is the problem here. It would appear to generate the file in (most probably) the %TEMP%\_MEIXXXX folder (under the local AppData folder), because a compiled program uses a subdirectory of %TEMP% as its working directory (on Windows, at least).
EDIT:
After posting, I remembered that as long as the output csv is in the same folder as your exe, you could do something like the following:
op_path = os.path.join(os.path.dirname(sys.executable), op_file)
since sys.executable holds the full path to the .exe when compiled. This seems like a more robust solution than what I previously suggested. This would make:
import os
import sys # if you haven't already
op_file = f"{app.currdate}_{app.startTime[0:-7]}_{endTime[0:-7]}.csv".replace(":","-")
op_path = os.path.join(os.path.dirname(sys.executable), op_file)
app.getList().to_csv(f"{op_path}")
print("O/P File generated")
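If the same script should also run uncompiled, a common pattern (my suggestion, not something this project requires) is to branch on the frozen attribute that PyInstaller sets on sys:

```python
import os
import sys

def output_dir():
    """Folder next to the .exe when frozen, next to the script otherwise."""
    if getattr(sys, "frozen", False):  # True only inside a PyInstaller bundle
        return os.path.dirname(sys.executable)
    return os.path.dirname(os.path.abspath(__file__))
```

You could then build the CSV path with os.path.join(output_dir(), op_file) in both environments.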
OLD:
I would try to change the name output to have an absolute path instead of just a file name, something like:
app.getList().to_csv(f"C:\users\dcs_2002\path\to\my_proj\dist\{op_file}")
for basic usage. If you want this to work elsewhere, it depends on the actual location of your my_proj folder, but I would do something like the following (assuming my_proj is in your home directory):
op_path = os.path.expanduser(f"~\\path\\to\\my_proj\\dist\\{op_file}")
# OR
op_path = os.path.join(os.path.expanduser("~"), "path", "to", "my_proj", "dist", op_file)
Both of these would work, as os.path.expanduser() expands a leading ~ to the path of your home directory. You could also use os.path.expandvars() to expand percent-enclosed environment variables (i.e. os.path.expandvars("%LOCALAPPDATA%\\rest\\of\\path")). Obviously modify the paths to suit your needs; just make sure to replace any backslash with a double backslash (to escape it in Python string literals). All together this would be:
import os # if you haven't already
op_file = f"{app.currdate}_{app.startTime[0:-7]}_{endTime[0:-7]}.csv".replace(":","-")
op_path = os.path.expanduser(f"~\\path\\to\\my_proj\\dist\\{op_file}")
app.getList().to_csv(f"{op_path}")
print("O/P File generated")

PYTHON AND BATCH SCRIPT: Run file if it exists and create if it doesn't

Full Disclaimer: I DO NOT KNOW PYTHON.
Hi Guys,
I have made an AutoHotKey script for my volume keys. I would like to create a batch file which runs a Python file (so if I change computers, I can easily create these scripts) which would do the following:
Check if volume_keys.ahk exists in the D Drive
If it exists, run that;
If it doesn't exist, then create a file named volume_keys.ahk and add my script to it.
My script is:
^!NumpadMult::Send {Volume_Mute}
^!NumpadAdd::Send {Volume_Up}
^!NumpadSub::Send {Volume_Down}
I know how to code the .bat file and just need help on the Python side, but I request the community to check it:
@ECHO OFF
ECHO This script will run an AHK script. If you want to stop this process from happening, close this window. If you want to continue:
pause
cd /d D:\
D:\run_volume_keys_ahk_script.py
I really appreciate any help by the community.
Thanks in advance
You can use the os library for this. Here's what the Python program would look like.
import os

if os.path.isfile('D:\\volume_keys.ahk'):  # check if it exists
    os.system('D:\\volume_keys.ahk')  # execute it
else:
    with open('D:\\volume_keys.ahk', 'w') as f:  # open it in w (write) mode
        f.write('^!NumpadMult::Send {Volume_Mute}\n'
                '^!NumpadAdd::Send {Volume_Up}\n'
                '^!NumpadSub::Send {Volume_Down}\n')  # write the hotkeys, one per line
    os.system('D:\\volume_keys.ahk')  # execute
To launch the .ahk script, you might want to use the subprocess module, for which I took the example from here:
import subprocess
subprocess.call(["path/to/ahk.exe", "script.ahk"])
Note that you'll have to find the ahk executable on a computer before you can use the script, maybe you want to automatically check that too.
You can set the path you want to check for scripts in one string, and then add the filenames of your scripts as strings to a dictionary. You can use listdir() from the os module to see the files and directories at a given path, then iterate over your script names and check whether each one exists in that list of files. If it does, run it.
In this example I copy-pasted your script into a string as the value for the key 'script1.ahk' in a dictionary, so that Python can actually create the script file. This isn't really a neat way to do it, though; you might want to keep your scripts prepared in a directory next to your Python script and copy them from there. See an example of how here.
from os import listdir
from os.path import isfile, join
import subprocess

CHECK_PATH = "D:"
AHK_EXECUTABLE_PATH = "path/to/ahk.exe"
SCRIPTS_TO_CHECK = {
    'script1.ahk': """^!NumpadMult::Send {Volume_Mute}
^!NumpadAdd::Send {Volume_Up}
^!NumpadSub::Send {Volume_Down}""",
    'script2.ahk': "some other script here",
}

files_to_check = set(listdir(CHECK_PATH))  # using a set for fast lookup later
for scriptname, script in SCRIPTS_TO_CHECK.items():
    if scriptname not in files_to_check:
        print(f"script {scriptname} not found, creating it.")
        with open(join(CHECK_PATH, scriptname), 'w') as file:
            file.write(script)
    # run the script whether it existed or was just created
    subprocess.call([AHK_EXECUTABLE_PATH, join(CHECK_PATH, scriptname)])

Find all files that use `print` but do not have `from __future__ import print_function`

I'm migrating a codebase from 2.7 to 3.6, and would like to ensure that all files that use print do the __future__ import.
How do I find/grep/ack recursively through a multi-package codebase to find all files that use print but don't have the from __future__ import print_function?
I know that 2to3 should handle this automatically, but I've seen one instance where a print expression of the form print("updated database: ", db_name) appears in a file that does not include the print_function import. Running 2to3-2.7 on this file transforms the line to print(("updated database: ", db_name)), which changes the output. I would like to find all instances where this problem might arise, in order to fix them before running the automated tool.
If you don't mind doing this in Python itself:
import os

for folder, subfolder, files in os.walk('/my/project/dir'):
    scripts = [f for f in files if f.endswith('.py')]
    for script in scripts:
        path = os.path.join(folder, script)
        with open(path, 'r') as file:
            text = file.read()
        if "print" in text and "print_function" not in text:
            print("future print import not found in file", path)
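Note the substring test above also matches print inside comments, strings, or names like pprint. A slightly tighter check (still a heuristic, not a parser) uses word-boundary regexes:

```python
import re

# Match `print` only as a whole word, so `pprint` and `sprint` don't count.
PRINT_RE = re.compile(r"\bprint\b")
FUTURE_RE = re.compile(r"from\s+__future__\s+import\s+.*\bprint_function\b")

def needs_future_import(text):
    """True if the source text uses print but lacks the __future__ import."""
    return bool(PRINT_RE.search(text)) and not FUTURE_RE.search(text)
```

For complete accuracy you would need to parse the file (e.g. with the ast or tokenize modules), but a regex pass is usually enough to build a worklist.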
2 egreps (works with Mac egrep and gnu egrep, others dunno) --
#!/bin/bash
fileswithprint=` egrep --files-with-matches --include '*.py' --recursive -w print $* `
# -R: Follow all symbolic links, unlike -r
# too long: see xargs
egrep --files-without-match '^from __future .*print_function' $fileswithprint

Execution error when Passing arguments to a python script using os.system. The script takes sys.argv arguments

I have tried to execute a simple Python command from cmd like C:\Users> stat.py < swagger.yaml > output.html, which executes stat.py by taking swagger.yaml as the input and generates the output.html file, and it worked fine in cmd. But now I want to execute my stat.py file through another Python file, demo.py, by passing the values swagger.yaml and output.html as sys.argv[1] and sys.argv[2] inside demo.py.
My command from cmd is C:\Users> demo.py swagger.yaml output.html, and my demo.py file is as follows:
# my demo.py file ....
import os
import sys
os.system('stat.py < sys.argv[1] > sys.argv[2]')
error - the system can not find the file specified.
Why am I getting this error, and how can I resolve it?
Inside a normal string, no variable interpretation is applied. So you literally asked to read from a file named sys.argv[1] (possibly sys.argv1 if the file exists, thanks to shell globbing), and write to a file named sys.argv[2].
If you want to use the values of sys.argv in your script, you need to format them into the string, e.g. with f-strings (Python 3.6 and later only):
os.system(f'stat.py < {sys.argv[1]} > {sys.argv[2]}') # Note f at beginning of literal
or, on older Python versions, with str.format:
os.system('stat.py < {} > {}'.format(sys.argv[1], sys.argv[2]))
Note that however you slice it, this is dangerous; os.system is launching this in a shell, and arguments that contain shell metacharacters will be interpreted as such. It can't do anything the user didn't already have permission to do, but small mistakes by the user could dramatically change the behavior of the program. If you want to do this properly/safely, use subprocess, open the files yourself, and pass them in explicitly as stdin/stdout:
import subprocess

with open(sys.argv[1], 'rb') as infile, open(sys.argv[2], 'wb') as outfile:
    subprocess.run(['stat.py'], stdin=infile, stdout=outfile)
This ensures the files can be opened in the first place before launching the process, doesn't allow the shell to interpret anything, and avoids the (minor) expense of launching a shell at all. It's also going to give you more useful errors if opening the files fails.

Applying a perl script to every file in a directory and obtain output using Python

I am trying to make a Python script that will open a directory, apply a Perl script to every file in that directory, and capture its output in either multiple text files or just one.
I currently have:
import shlex, subprocess
arg_str = "perl tilt.pl *.pdb > final.txt"
arg = shlex.split(arg_str)
import os
framespdb = os.listdir("prac_frames")
for frames in framespdb:
    subprocess.Popen(arg, stdout=True)
I keep getting *.pdb not found. I am very new to all of this, so any help completing this script would be appreciated.
*.pdb not found means exactly that - there is no file literally named *.pdb in whatever directory you're running the script from... and as I read the code, I don't see anything to imply it's inside prac_frames when it runs the Perl script.
you probably need os.chdir(path) before the Popen.
How do I "cd" in Python?
...using a Python script to run somewhat dubious syscalls to Perl may offend some people, but everyone's done it. Aside from that, I'd point out:
Always specify full paths (this becomes a problem if you later want to run your job automatically from cron or an environment that doesn't have your PATH).
i.e. 'which perl' - put that full path in.
./*.pdb would be better, but not as good as /full/path/*.pdb (which you could use instead of the os.chdir option).
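On the 'which perl' point: in Python you can resolve that full path programmatically with shutil.which (a sketch; the helper name is mine, and perl is just the example):

```python
import shutil

def resolve_interpreter(name):
    """Return the absolute path of a program on PATH, or raise if absent."""
    path = shutil.which(name)  # None when the program is not on PATH
    if path is None:
        raise RuntimeError(f"{name} not found on PATH")
    return path
```

For example, resolve_interpreter("perl") gives the path you would otherwise hard-code.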
subprocess.Popen(arg, stdout=True)
does not expand filename wildcards. To have the shell expand your *.pdb, pass the command as a single string with shell=True, e.g. subprocess.Popen(arg_str, shell=True).
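Alternatively, if you would rather avoid shell=True, you can expand the wildcard yourself with the glob module and pass concrete filenames (the paths and the tilt.pl invocation mirror the question; the helper itself is a sketch):

```python
import glob
import subprocess

def run_tilt(pattern, outfile):
    """Expand the wildcard in Python, then run perl without a shell."""
    files = sorted(glob.glob(pattern))
    if not files:  # glob returns [] when nothing matches
        raise FileNotFoundError(f"no files match {pattern!r}")
    with open(outfile, "w") as out:
        # Equivalent of: perl tilt.pl *.pdb > final.txt
        subprocess.run(["perl", "tilt.pl", *files], stdout=out, check=True)
```

This way a typo in the pattern fails loudly instead of silently running perl with no inputs.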
