I am currently working on a LabVIEW project which consists of two VIs (call them A and B). I want to use the output of A as the input of B. I am facing some problems while integrating the two VIs: the output of A is an appended array, which is also stored in an (initially empty) file given as an input to A, whereas the input of B is a file path. Is there any conversion possible so that the values of the appended array can be converted to a file path? Can a Python script be used to automate the project, and if so, how?
I tried downloading LabVIEW 2020, but it neither shows an error nor makes any progress in the progress bar. Thus, the snippet attached here is from the 2019 version.
I looked over your code.
When I tried your VI A, I got a file with 6 comma-separated values per row, because the comma is the decimal separator in my default locale settings. It looks like this:
Temperature,Pressure,Humidity
3,369,56,019,81,268
26,458,16,571,68,245
21,902,77,986,20,107
56,759,17,852,43,869
If this is the case in your generated file, use %.;%.3f as format for the writeSpreadsheet.vi
This forces the decimal point to be a point instead of a comma.
When I tried the code like this, it worked perfectly fine.
By the way, you don't have to use the flat sequence structure; just use your error wire and connect every VI from the beginning to the end.
Like this:
Additionally, you should initialize the array that you shift in your while loop. If you only run the VI once it might not be needed, but if you call the VI a second time, the old values might still be stored there and the new values would just get appended.
Feel free to ask if you need more help :)
Here is an example of the .txt file that I generated with your VI:
Temperature,Humidity,Pressure
38.802,66.355,4.347
64.646,68.519,60.982
71.997,56.336,96.116
20.744,24.189,75.689
85.731,25.168,20.026
65.386,67.284,97.049
I am really new to programming, sorry for the awkward way of asking.
So, for a class of kids I'm helping, I am trying to make a program in Python which must assign a random integer to each of two variables, "A" and "B". Once that is done, we must check whether the ratio A/B yields an integer.
If that is the case, then we must have Python print "A/B=" as a question, without displaying the answer.
I achieved this by printing the variables as text once I had checked the previously stated condition. So far everything is fine. I did this with a loop 5 times and got 5 different questions. I made it in such a way that by changing a couple of numbers I can make as many questions as I want.
Just to give an example I got:
14/7=
56/8=
35/5=
7/1=
81/3=
So the python part was basically done.
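For reference, here is a simplified sketch of the loop I described (the exact ranges are just placeholders):
import random

# keep drawing random integers until A divided by B is a whole number,
# then print the question without the answer
for _ in range(5):
    while True:
        A = random.randint(1, 100)
        B = random.randint(1, 10)
        if A % B == 0:
            break
    print("%d/%d=" % (A, B))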
What I am unable to do and would appreciate if anyone could help me is the next part.
I have to take these results and be able to make a PDF, if possible with LaTeX, with the caveat that I don't like the idea of manually typing said results, since for all the kids I have to do this around 180 times (30 times per kid). Is there a way to do this? Typing it all manually in LaTeX would take forever.
Yes, you can generate LaTeX automatically from Python with the pylatex package. Here is a full set of examples: PyLatex full examples
I wrote a small demo, if I understand you correctly. This demo will create a LaTeX file named "test.tex" in the current directory, containing the equation "a/b = 0".
from pylatex import Document, Subsection, Alignat

doc = Document(default_filepath='basic.tex', documentclass='article')

with doc.create(Subsection('Alignat math environment')):
    with doc.create(Alignat(numbering=False, escape=False)) as agn:
        agn.append(r'\frac{a}{b} &= 0 \\')

# generate the tex and the pdf, and do not clean up the tex file after generating the pdf
doc.generate_pdf("test", clean_tex=False)
If you have a tool like latexmk installed that can convert .tex files to PDF, then a PDF named test.pdf will also be created. Otherwise you can open the test.tex file in your LaTeX editor.
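If you already have your (A, B) pairs collected in a Python list, you could loop over them and append one equation per pair. Here is a rough sketch (the list contents, the section title and the output file name are just placeholders):
from pylatex import Document, Section, Alignat

# `questions` would be the (A, B) pairs produced by your existing loop
questions = [(14, 7), (56, 8), (35, 5), (7, 1), (81, 3)]

doc = Document(documentclass='article')
with doc.create(Section('Division practice', numbering=False)):
    with doc.create(Alignat(numbering=False, escape=False)) as agn:
        for a, b in questions:
            agn.append(r'\frac{%d}{%d} &= \\' % (a, b))

# again, a LaTeX toolchain such as latexmk is needed to build the PDF
doc.generate_pdf('worksheet', clean_tex=False)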
I am using Python and I want to run a program that will open the file specified by the user. The problem is that if the user doesn't specify the exact file name, it will give an error. If the user wants to open "99999-file-name.mp3" and has typed "filename.mp3", then how can the program open the file closest to the one specified?
First get a list of files in the particular folder
Then use difflib.get_close_matches like so:
difflib.get_close_matches(user_specified_file, list_of_files)
to find "good" matches.
N.B.: Consider providing a small cutoff, e.g. 0.1, as suggested by @tobias_k, to ensure you always get a match; the default cutoff of 0.6 means that sometimes nothing will count as a "good match" for what the user entered.
Similarly, if you only need a single file name, also pass in the optional parameter n=1 to get just the closest match; if you don't specify it, you will get the 3 best matches.
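Putting the pieces together, a minimal sketch (the folder and the typed name are just examples based on your question):
import difflib
import os

list_of_files = os.listdir(".")  # the files in the particular folder
user_specified_file = "filename.mp3"

# n=1 returns only the single closest match, cutoff=0.1 accepts weak matches
matches = difflib.get_close_matches(user_specified_file, list_of_files,
                                    n=1, cutoff=0.1)
if matches:
    print("Best match:", matches[0])  # e.g. "99999-file-name.mp3"
else:
    print("No match found")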
To answer this question, you need to first define "closest" because in computing this can mean very different things. If you want to compare strings and find the most similar, then one good way of doing that is checking the edit distance. There are Python libraries out there for that, i.e. https://pypi.python.org/pypi/editdistance.
You give it two strings and it tells you how much you have to change one string to get the other. As per the documentation:
>>> import editdistance
>>> editdistance.eval('banana', 'bahama')
2L
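Applied to your use case, a small sketch that picks the file with the smallest edit distance to what the user typed (assuming the files are in the current directory):
import os
import editdistance

files = os.listdir(".")
user_input = "filename.mp3"
# pick the existing file name that needs the fewest edits to match the input
closest = min(files, key=lambda f: editdistance.eval(user_input, f))
print(closest)  # e.g. "99999-file-name.mp3"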
PS. I can't help but mention that I think this is a bad idea. If you want to do something with the opened file and the program starts opening arbitrary files, then either you will eventually overwrite a file that is not meant to be overwritten, or you will try to process a file that can't be processed in your intended way. I would recommend using a file select box, which you can easily build with tkinter, for example (even though tkinter is cancer).
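For completeness, a minimal sketch of that file-select-box alternative (the file type filter is just an assumption):
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.withdraw()  # hide the empty main window
# let the user pick the file explicitly instead of guessing the closest name
path = filedialog.askopenfilename(filetypes=[("MP3 files", "*.mp3")])
print(path)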
I have some .sav files that I want to check for bad data. What I mean by bad data is irrelevant to the problem. I have written a script in python using the spss module to check the cases and then delete them if they are bad. I do that within a datastep by defining a dataset object and then getting its case list. I then use
del datasetObj.cases[k]
to delete the problematic cases within the datastep.
Here is my problem:
Say I have a data set foo.sav and it is the active data set in spss, then I can run something like:
BEGIN PROGRAM PYTHON.
import spss
spss.StartDataStep()
datasetObj = spss.Dataset()
caselist = datasetObj.cases
del caselist[k]
spss.EndDataStep()
END PROGRAM.
from within the spss client and it will delete the case k from the data set foo.sav. But, if I run something like the following using the directory of foo.sav as the working directory:
import os, spss
pathname = os.getcwd()
foopathname = os.path.join(pathname, 'foo.sav')
spss.Submit("""
GET FILE='%(foopathname)s'.
DATASET NAME file1.
DATASET ACTIVATE file1.
""" %locals())
spss.StartDataStep()
datasetObj = spss.Dataset()
caselist = datasetObj.cases
del caselist[3]
spss.EndDataStep()
from command line, then it doesn't delete the case k. Similar code which gets values will work fine. E.g.,
print caselist[3]
will print case k (when it is in the data step). I can even change the values for the various entries of a case. But it will not delete cases. Any ideas?
I am new to python and spss, so there may be something that I am not seeing which is obvious to others; hence why I am asking the question.
Your first piece of code did not work for me. I adjusted it as follows to get it working:
BEGIN PROGRAM PYTHON.
import spss
spss.StartDataStep()
datasetObj = spss.Dataset()
del datasetObj.cases[k]
spss.EndDataStep()
END PROGRAM.
Notice that, in your code, caselist is just a list, containing values taken from the datasetObj in SPSS. The attribute .cases belongs to datasetObj.
With spss.Submit, you can also delete cases (or actually, not select them) using the SPSS command SELECT IF. For example, if your file has a variable (column) named age, with values ranging from 0 to 100, you can delete all cases with an age lower than (in SPSS: lt or <) 25 using:
BEGIN PROGRAM PYTHON.
import spss
spss.Submit("""
SELECT IF age lt 25.
""")
END PROGRAM.
Don't forget to add some code to save the edited file.
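For example, you could submit a SAVE command right after the SELECT IF (the output file name here is just an example):
spss.Submit("""
SAVE OUTFILE='foo_selected.sav'.
""")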
caselist is not actually a regular list containing the dataset values. Although its interface is the list interface, it actually works directly with the dataset, so it does not contain a list of values. It just accesses operations on the SPSS side to retrieve, change, or delete values. The most important difference is that since Statistics is not keeping the data in memory, the size of the caselist is not limited by memory.
However, if you are trying to iterate over the cases with a loop using
range(spss.GetCaseCount())
and deleting some, the loop will eventually fail, because the actual case count reflects the deletions, but the loop limit doesn't reflect that. And datasetObj.cases[k] might not be the case you expect if an earlier case has been deleted. So you need to keep track of the deletions and adjust the limit or the k value appropriately.
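If you do want to delete cases inside such a loop, one option is to walk the case list backwards, so deletions don't shift the indices of cases you haven't visited yet. A rough sketch (is_bad stands in for whatever check you use to flag bad data):
import spss

spss.StartDataStep()
datasetObj = spss.Dataset()
caselist = datasetObj.cases

# iterate from the last case to the first, so deleting a case does not
# change the index of any case still to be checked
for k in range(spss.GetCaseCount() - 1, -1, -1):
    if is_bad(caselist[k]):  # is_bad is a hypothetical predicate
        del caselist[k]

spss.EndDataStep()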
HTH
I'm importing data coming from excel files that come from another office.
In one of the columns, for each cell, I have lists of numbers used as tags. These were manually inserted, by different people and (my guess) using computers with different thousands settings, so the result is very heterogeneous.
As an example I have:
tags= ['205', '306.3', '3,206,302','7.205.206']
If this was a CSV file (I tried converting one single file to check), using
pd.read_csv(my_file,sep=';')
would give me exactly the above mentioned list.
Unfortunately as said, we're talking about excel files (plural) and I have to deal with it, and using
pd.read_excel(my_file, sheetname=my_sheet, encoding='utf-16', converters={'my_column': str})
what I get instead is:
tags= ['205', '306.3', '3,206,302','7205206']
As you see, whenever the number can be expressed logically in thousands (so, not the second number in my list) the dot is recognised as a thousands separator and I get a single number, instead of three.
I tried reading documentation, and searching on stackoverflow and google, but the keywords to describe this problem are too vague and I didn't find a viable solution, yet.
How can I get the right list using excel files?
Thanks.
This problem is likely happening because pandas is running its number parser before its date parser.
One possible fix is to add a thousands separator. For example, if you are actually using ',' as your thousands separator, you could add thousands=',' in your excel reader:
pd.read_excel(my_file, sheetname=my_sheet, encoding='utf-16', thousands=',', converters={'my_column': str})
You could also pick an arbitrary thousands separator that doesn't exist in your data, to keep the output unchanged if thousands=None (which should be the default according to the documentation) doesn't already deal with your problem. You should also make sure that you are converting the fields to str (in which case using thousands is somewhat redundant, as it is not applied to strings anyway).
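For instance, a call along these lines (file, sheet and column names are placeholders; newer pandas versions spell the keyword sheet_name) should keep the tags as plain strings:
import pandas as pd

# forcing the tag column to str means no thousands/decimal parsing is applied to it
df = pd.read_excel('tags.xlsx', sheetname='Sheet1',
                   converters={'my_column': str})
tags = df['my_column'].tolist()
print(tags)  # expected: ['205', '306.3', '3,206,302', '7.205.206']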
EDIT:
I tried using the following dummy data ('test.xlsx'):
a b c d
205 306.3 3,206,302 7.205.206
and with
import pandas
dataf = pandas.read_excel('test.xlsx', header=0, converters={'a': str, 'b': str, 'c': str, 'd': str})
print(dataf.to_string())
I got the following output:
Columns: [205, 306.3, 3,206,302, 7.205.206]
Which is exactly what you were looking for. Are you sure you have the latest version of pandas and that you are in fact not using converters = {'col':int} or float in your converters keyword?
As it stands, it sounds like you are either converting your fields to numeric (int or float), or there is a problem elsewhere in your code. The pandas read_excel seems to work as described, and I can get the results you specified with the code above. In other words: your code should work; if it doesn't, it might be due to an outdated pandas version, other parts of your code, or even problems with the source data. As it stands, it's not possible to answer your question further with the information you have provided.
I just got into python very recently and now I'm practicing by (what I imagine to be rather simple, but challenging enough for me) creating small tools to sort files into folders.
So far it has been going pretty well, but now I've encountered a problem:
My files are in the following format:
myAsset_prefix1_prefix2_prettyName.ext
(e.g. Tiger_texture_spec_brightOrange.png)
myAsset always has a different length since it's dependent on name.
I want to sort every file of the same asset ( "myAsset_" tag) in a separate folder.
The copying to a separate folder etc is no challenge but..
I don't want to update an array by hand every time I create/receive a new asset.
So instead of using the startswith operation and making it run through a hand-maintained list, I'd like to build that array when my script runs, by making the script look at the name of each file and store everything up to and including the first "_" in a variable/array.
Is that possible?
I think you want the glob module. This allows you to list the files that match a certain format.
For example:
import glob

for filename in glob.glob("*.ext"):
    asset_tag = filename.split("_")[0]  # "Tiger" for "Tiger_texture_spec_brightOrange.png"
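If you then want to sort the files into per-asset folders in the same pass, here is a minimal sketch (the extension and the use of shutil.move instead of copying are assumptions):
import glob
import os
import shutil

for filename in glob.glob("*.png"):
    # everything before the first "_" is the asset tag, e.g. "Tiger"
    asset_tag = filename.split("_")[0]
    os.makedirs(asset_tag, exist_ok=True)
    shutil.move(filename, os.path.join(asset_tag, filename))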