Kind of a noob question. Sorry for that.
I have a string path
W:\documents\files
Is it possible to create a hyperlink from that and store it in a CSV so that when I click it in Excel it opens the file?
How about this?
from pathlib import Path
link = Path('W:\\documents\\files\\sample.txt').as_uri()
It should return "file:///W:/documents/files/sample.txt"
I'm guessing you want something like this:
import csv
with open('my_csv_file.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(["file:///W:/documents/files"])
The csv module lets you read and write CSV files, and adding file:/// before your file path lets Excel treat it as a link. Note that this code only writes the text into a CSV file; it won't become a hyperlink until you put the cursor in the cell and hit Enter.
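If you want the cell to be clickable as soon as the CSV is opened, one option (my own suggestion, not part of the answer above) is to write an Excel HYPERLINK formula into the cell, since Excel evaluates formulas when it parses the file:
import csv

# Hedged example: write an Excel HYPERLINK formula instead of a bare URI.
# Excel evaluates it on open, so the cell shows "files" and is clickable right away.
with open('my_csv_file.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['=HYPERLINK("file:///W:/documents/files", "files")'])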
I'm not sure I understood exactly what you're asking.
I think you want to write a hyperlink path into a CSV.
In that case, just pass the path as a quoted string; use a raw string so the backslashes aren't treated as escape sequences,
like this:
function(r"W:\documents\files")
The documentation on h5py is very good (https://docs.h5py.org/en/stable/quick.html), though I cannot see an example or a way of achieving what I need to do.
I want to read an h5 file in, adjust it in some way and write the new file to a separate location without changing the original. Is this possible? The following option doesn't suit my use case as the files are in a workflow:
f = h5py.File('myfile.hdf5','r+')
Ideally I would do something like:
f = h5py.File('myfile.hdf5','r')
... do something to f
f.write('newfile.hdf5')
Any help appreciated!
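One possible approach (just a sketch, not from the thread): open the original read-only, create the new file separately, and copy or modify datasets as you go. This assumes the items you care about are top-level datasets; nested groups would need a recursive walk.
import h5py

with h5py.File('myfile.hdf5', 'r') as src, h5py.File('newfile.hdf5', 'w') as dst:
    for name in src:
        data = src[name][...]                  # read the dataset into memory
        # ... adjust `data` here ...
        dst.create_dataset(name, data=data)    # write the (possibly modified) copy
        for key, value in src[name].attrs.items():
            dst[name].attrs[key] = value       # carry the attributes across
    # anything you don't need to touch can be copied wholesale with src.copy(name, dst)
Since the source is opened with 'r', the original file is never modified.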
I have a data frame and I want to export it, using to_csv.
I need it to be a csv file inside a zip.
I tried using compression but it did not work as planned:
metadata_table.to_csv(r'/tmp/meta.gz', compression='gzip')
This code creates a compressed file, but what's inside is not an Excel-readable file, it's a plain text file. If I change the file name to .csv I only get a regular CSV (in Excel format) with all the information messed up inside.
Is it possible to do it with one command, rather than exporting to CSV first and compressing it into a zip afterwards?
Try saving with the filename file.csv.gz, as written below:
import pandas as pd
data.to_csv('file.csv.gz', compression='gzip')
Hope this is helpful!
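If you specifically need a .zip archive with a .csv inside (rather than a .csv.gz), recent pandas versions also accept a dict for compression, which lets you name the file inside the archive. A hedged sketch, with a stand-in DataFrame and assuming a pandas version that supports the dict form (roughly 1.0 and later):
import pandas as pd

metadata_table = pd.DataFrame({'id': [1, 2], 'name': ['a', 'b']})  # stand-in for the real data
metadata_table.to_csv(
    '/tmp/meta.zip',
    compression={'method': 'zip', 'archive_name': 'meta.csv'},     # meta.csv inside meta.zip
)
Opening /tmp/meta.zip should then show a single meta.csv that Excel reads normally.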
I am reading data from an Excel file and then I am trying to find the matching file in the directory. The issue here is that I am missing a function.
For instance the data could read 1011754723, but the text file would be named Hamlet1011754723_Page100.txt.
I found the function .endswith(".txt"), but I couldn't find one for contains. Is there a function like this?
Thanks
You can use the in operator,
with something like this:
'1011754723' in 'Hamlet1011754723_Page100.txt'
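Put together with a directory listing, a rough sketch (the folder name 'dir' is just a placeholder for your directory) might look like:
import os

# collect every .txt file in the folder whose name contains the id
matching = [
    name for name in os.listdir('dir')
    if name.endswith('.txt') and '1011754723' in name
]
print(matching)  # e.g. ['Hamlet1011754723_Page100.txt']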
I found the solution to my question:
import glob
txt_agg = []
for txt in glob.glob("dir/*1011754723*.txt"):
    txt_agg.append(txt)
How do I use ParaView's CSVReader in a Python Script? An example would be appreciated.
If you have a .csv file that looks like this:
x,y,z,attribute
0,0,0,0
1,0,0,1
0,1,0,2
1,1,0,3
0,0,1,4
1,0,1,5
0,1,1,6
1,1,1,7
then you can import it with a command that looks like this:
myReader = CSVReader(FileName=r'C:\foo.csv', guiName='foo.csv')
Also, if you don't add that guiName parameter, you can change the name later using the RenameSource command like this:
RenameSource(proxy=myReader, newName='MySuperNewName')
Credit for the renaming part of this answer to Sebastien Jourdain.
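For completeness, a minimal sketch of the preamble the snippet above assumes (run through pvpython or ParaView's Python Shell, where paraview.simple provides CSVReader and RenameSource):
from paraview.simple import *

myReader = CSVReader(FileName=r'C:\foo.csv', guiName='foo.csv')
RenameSource(proxy=myReader, newName='MySuperNewName')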
Unfortunately, I don't know ParaView at all. But I found "... simply record your work in the desktop application in the form of a python script ..." at their site. If you import a CSV like that, it might give you a hint.
Improving on @GregNash's answer: if you want to include only a single file (called foo.csv):
outcsv = CSVReader(FileName='foo.csv')
Or, if you want to include all files matching a certain pattern, use glob. For example, if the files start with the string foo (i.e. foo.csv.0, foo.csv.1, foo.csv.2):
myreader = CSVReader(FileName=glob.glob('foo*'))
To use glob it is necessary to import glob at the top of the script. In general, FileName can be any string generated with Python, so it can encode more complex file patterns and paths.
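A short sketch pulling that together (the pattern foo.csv.* and the sorted() call are my own additions, just to load the pieces in a consistent order):
import glob
from paraview.simple import *

files = sorted(glob.glob('foo.csv.*'))  # e.g. ['foo.csv.0', 'foo.csv.1', 'foo.csv.2']
myreader = CSVReader(FileName=files)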