def digits_plus(n):
    test = 0
    while test <= n:
        print str(test)+"+",
        test = test + 1

digits_plus(3)
The output is:
0+ 1+ 2+ 3+
However I would like to get: 0+1+2+3+
Another method would be to create a list of the numbers as strings and then join them.
mylist = []
for num in range(0, 4):
    mylist.append(str(num))
This gives the list ['0', '1', '2', '3'], and then:
print '+'.join(mylist) + '+'
If you're stuck using Python 2.7, start your module with
from __future__ import print_function
Then instead of
print str(test)+"+",
use
print(str(test)+"+", end='')
You'll probably want to add a bare print() after the loop to get a newline once you're done printing.
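Putting that together, here's a minimal sketch of the loop rewritten for Python 2.7 with the print function (it also runs unchanged on Python 3):

from __future__ import print_function

def digits_plus(n):
    test = 0
    while test <= n:
        print(str(test)+"+", end='')
        test = test + 1
    print()  # finish the line with a newline after the loop

digits_plus(3)  # prints: 0+1+2+3+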
You could also use the sys.stdout object to write output (to stdout) over which you have finer control. This lets you output exactly and only the characters you tell it to (whereas print adds some line endings and string conversion for you).
#!/usr/bin/env python
import sys

test = '0'
sys.stdout.write(str(test)+"+")

# Or my preferred string formatting method
# (the '%s' implies a cast to string):
sys.stdout.write("%s+" % test)

# You probably don't need to do this explicitly, but if you get
# unexpected (missing) output you can flush the buffer yourself:
sys.stdout.flush()
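For completeness, here's a sketch of the same loop written with sys.stdout.write, which behaves identically under Python 2 and 3:

import sys

def digits_plus(n):
    # write each number followed by '+', with no separator or trailing newline
    for test in range(n + 1):
        sys.stdout.write("%s+" % test)
    sys.stdout.write("\n")  # finish the line

digits_plus(3)  # prints: 0+1+2+3+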
text = 'PYTHON'
for index in range(len(text)):
    print(*text[:index + 1])
The * in the print function is producing a space between the characters on sys.stdout. Can someone please tell me what it is called and what it actually does?
Using * on the text is equivalent to writing print(text[0], text[1], ..., text[n]), and print separates each argument with a space by default.
You can do
text = 'PYTHON'
for index in range(len(text)):
    print("".join(list(text)[:index + 1]))
or
text = 'PYTHON'
for index in range(len(text)):
    print(*text[:index + 1], sep='')
Either of these will print each prefix without spaces in between.
Output
P
PY
PYT
PYTH
PYTHO
PYTHON
It is called an asterisk; in this context it acts as the unpacking operator in a function call.
The asterisk passes all of the items in the iterable into the print function call as separate arguments, without us even needing to know how many there are.
You can read more about it here:
https://treyhunner.com/2018/10/asterisks-in-python-what-they-are-and-how-to-use-them/
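As a quick illustration of what the unpacking does (a small sketch, not from the original question):

text = 'PYTHON'
# These two calls are equivalent: * unpacks the slice into separate arguments
print(*text[:3])                   # P Y T
print(text[0], text[1], text[2])   # P Y T
# sep controls what print puts between those arguments
print(*text[:3], sep='')           # PYT
print(*text[:3], sep='-')          # P-Y-T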
#!/usr/bin/env python
import os, subprocess, re

f = open("/var/tmp/disks_out", "w")
proc = subprocess.Popen(['df', '-h'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False)
out, err = proc.communicate()
for line in out:
    f.write(line)
f.close()

f1 = open("/var/tmp/disks_out", "r")
disks = []
for line in f1:
    m = re.search(r'(c.*s0)', line)
    if m:
        disk = m.group(1)
        disks.append(disk)

disks = disks[0][:-1]
slices = [disks + i for i in str(range(5))]
print(slices)
and the output I am getting is below:
['c0t5000CCA025A29894d0s0', 'c0t5000CCA025A29894d0s1', 'c0t5000CCA025A29894d0s3', 'c0t5000CCA025A29894d0s4', 'c0t5000CCA025A29894d0s5', 'c0t5000CCA025A29894d0s6']
But I want to get output similar to:
c0t5000CCA025A29894d0s1,c0t5000CCA025A29894d0s2,c0t5000CCA025A29894d0s3
If you want to get it with commas:
print(','.join(slices))
or rather, for Python 2.7:
print ','.join(slices)
What you printed out was a list, so Python displayed it as one. The join() method joins every element of the iterable with the string it is called on (you tagged Python 2.7, but the print(slices) call suggests you are actually using Python 3 here).
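For example, with a hypothetical list of slice names (made up here just to show the join):

# hypothetical slice names, only to illustrate ','.join()
slices = ['c0t5000CCA025A29894d0s1',
          'c0t5000CCA025A29894d0s2',
          'c0t5000CCA025A29894d0s3']
print(','.join(slices))
# c0t5000CCA025A29894d0s1,c0t5000CCA025A29894d0s2,c0t5000CCA025A29894d0s3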
I am fairly new to Python and need a little help here.
I have a Python script running on Python 2.6 that parses some JSON.
Example Code:
if "prid" in data[p]["prdts"][n]:
print data[p]["products"][n]["prid"],
if "metrics" in data[p]["prdts"][n]:
lenmet = len(data[p]["prdts"][n]["metrics"])
i = 0
while (i < lenmet):
if (data[p]["prdts"][n]["metrics"][i]["metricId"] == "price"):
print data[p]["prdts"][n]["metrics"][i]["metricValue"]["value"]
break
Now, this prints values in 2 columns:
prid price
123 20
234 40
As you can see, the field separator above is ' '. How can I use a field separator like the BEL character in the output instead?
Sample expected output:
prid price
123^G20
234^G40
FWIW, your while loop doesn't increment i, so it will loop forever, but I assume that was just a copy & paste error, and I'll ignore it in the rest of my answer.
If you want to use two separate print statements to print your data on one line, you can't avoid the space produced by the first print statement. Instead, simply save the prid data until you can print it with the price in one go using string concatenation. E.g.,
if "prid" in data[p]["prdts"][n]:
prid = [data[p]["products"][n]["prid"]]
if "metrics" in data[p]["prdts"][n]:
lenmet = len(data[p]["prdts"][n]["metrics"])
i = 0
while (i < lenmet):
if (data[p]["prdts"][n]["metrics"][i]["metricId"] == "price"):
price = data[p]["prdts"][n]["metrics"][i]["metricValue"]["value"]
print str(prid) + '\a' + str(price)
break
Note that I'm explicitly converting the prid and price to string. Obviously, if either of those items is already a string then you don't need to wrap it in str(). Normally, we can let print convert objects to string for us, but we can't do
print prid, '\a', price
here because that will give us an unwanted space between each item.
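Alternatively (this is my own variation, not part of the original answer), a single format string handles both the conversion and the separator in one go:

# a sketch with made-up values for prid and price
prid, price = 123, 20
print '%s\a%s' % (prid, price)   # 123, a BEL character, then 20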
Another approach is to make use of the new print() function, which we can import using a __future__ import at the top of the script, before other imports:
from __future__ import print_function
# ...

if "prid" in data[p]["prdts"][n]:
    print(data[p]["products"][n]["prid"], end='\a')
if "metrics" in data[p]["prdts"][n]:
    lenmet = len(data[p]["prdts"][n]["metrics"])
    i = 0
    while (i < lenmet):
        if (data[p]["prdts"][n]["metrics"][i]["metricId"] == "price"):
            print(data[p]["prdts"][n]["metrics"][i]["metricValue"]["value"])
            break
I don't understand why you want to use BEL as a separator rather than something more conventional, e.g. TAB. The BEL char may print as ^G in your terminal, but it's invisible in mine, and if you save this output to a text file it may not display correctly in a text viewer or editor.
BTW, it would have been better if you had posted a Minimal, Complete, Verifiable Example that focuses on your actual printing problem, rather than all that crazy JSON parsing stuff, which just makes your question look more complicated than it really is and makes it impossible for us to test your code or our modifications to it.
I'm trying to read a null-terminated string but I'm having issues when unpacking a char and putting it together with a string.
This is the code:
def readString(f):
    str = ''
    while True:
        char = readChar(f)
        str = str.join(char)
        if (hex(ord(char))) == '0x0':
            break
    return str

def readChar(f):
    char = unpack('c', f.read(1))[0]
    return char
Now this is giving me this error:
TypeError: sequence item 0: expected str instance, int found
I'm also trying the following:
char = unpack('c',f.read(1)).decode("ascii")
But it throws me:
AttributeError: 'tuple' object has no attribute 'decode'
I don't even know how to read the chars and add them to the string. Is there a proper way to do this?
Here's a version that (ab)uses iter's lesser-known two-argument "sentinel" form:
with open('file.txt', 'rb') as f:
    val = ''.join(iter(lambda: f.read(1).decode('ascii'), '\x00'))
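To see what that one-liner is doing: iter(callable, sentinel) keeps calling the lambda until it returns the sentinel ('\x00' here). An equivalent explicit loop (a sketch, assuming f is an open binary file; unlike the one-liner it also stops at end of file) would be:

def read_string(f):
    # equivalent to ''.join(iter(lambda: f.read(1).decode('ascii'), '\x00'))
    chars = []
    while True:
        c = f.read(1).decode('ascii')
        if c == '\x00' or c == '':  # stop at the null byte (or at end of file)
            break
        chars.append(c)
    return ''.join(chars)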
How about:
myString = myNullTerminatedString.split("\x00")[0]
For example:
myNullTerminatedString = "hello world\x00\x00\x00\x00\x00\x00"
myString = myNullTerminatedString.split("\x00")[0]
print(myString) # "hello world"
This works by splitting the string on the null character. Since the string should terminate at the first null character, we simply grab the first item in the list after splitting. split will return a list of one item if the delimiter doesn't exist, so it still works even if there's no null terminator at all.
It also will work with byte strings:
myByteString = b'hello world\x00'
myStr = myByteString.split(b'\x00')[0].decode('ascii') # "hello world" as normal string
If you're reading from a file, you can do a relatively larger read - estimate how much you'll need to read to find your null string. This is a lot faster than reading byte-by-byte. For example:
resultingStr = b''
while True:
    buf = f.read(512)
    if len(buf) == 0:
        break
    resultingStr += buf
    if b"\x00" in resultingStr:
        # remember how many bytes we read past the null, so we can seek back later
        extraBytes = len(resultingStr) - resultingStr.index(b"\x00")
        resultingStr = resultingStr.split(b"\x00")[0]
        break
# now "resultingStr" contains the bytes before the null
f.seek(-extraBytes, 1)  # seek backwards; the file pointer is now on the null byte
# or f.seek(1 - extraBytes, 1) to skip past the null byte in the file
(edit version 2, added extra way at the end)
Maybe there are some libraries out there that can help you with this, but as I don't know about them, let's attack the problem at hand with what we know.
In Python 2, bytes and str are basically the same thing; that changed in Python 3, where str is what Python 2 called unicode and bytes is its own separate type. This means that in Python 2 you don't need to define a readChar at all, since no extra work is required, so I don't think you need that unpack function for this particular case. With that in mind, let's define the new readString:
def readString(myfile):
    chars = []
    while True:
        c = myfile.read(1)
        if c == chr(0):
            return "".join(chars)
        chars.append(c)
Just like your code, this reads one character at a time, but it saves them in a list instead; strings are immutable, so doing str += char results in unnecessary copies. When the null character is found, the joined string is returned. (chr is the inverse of ord: it gives you the character for a given ASCII value.) This excludes the null character; if you need it, just move the append above the check.
Now let's test it with your sample file; for instance, let's try to read "Sword_Wea_Dummy" from it:
with open("sword.blendscn","rb") as archi:
#lets simulate that some prior processing was made by
#moving the pointer of the file
archi.seek(6)
string=readString(archi)
print "string repr:", repr(string)
print "string:", string
print ""
#and the rest of the file is there waiting to be processed
print "rest of the file: ", repr(archi.read())
and this is the output
string repr: 'Sword_Wea_Dummy'
string: Sword_Wea_Dummy
rest of the file: '\xcd\xcc\xcc=p=\x8a4:\xa66\xbfJ\x15\xc6=\x00\x00\x00\x00\xeaQ8?\x9e\x8d\x874$-i\xb3\x00\x00\x00\x00\x9b\xc6\xaa2K\x15\xc6=;\xa66?\x00\x00\x00\x00\xb8\x88\xbf#\x0e\xf3\xb1#ITuB\x00\x00\x80?\xcd\xcc\xcc=\x00\x00\x00\x00\xcd\xccL>'
other tests
>>> with open("sword.blendscn","rb") as archi:
print readString(archi)
print readString(archi)
print readString(archi)
sword
Sword_Wea_Dummy
ÍÌÌ=p=Š4:¦6¿JÆ=
>>> with open("sword.blendscn","rb") as archi:
print repr(readString(archi))
print repr(readString(archi))
print repr(readString(archi))
'sword'
'Sword_Wea_Dummy'
'\xcd\xcc\xcc=p=\x8a4:\xa66\xbfJ\x15\xc6='
>>>
Now that I think about it, you mention that the data portion is of fixed size. If that is true for all files and the structure of all of them is as follows:
[unknown size data][known size data]
then that is a pattern we can exploit; we only need to know the size of the file, and we can get both parts smoothly, as follows:
import os

def getDataPair(filename, knownSize):
    size = os.path.getsize(filename)
    with open(filename, "rb") as archi:
        unknown = archi.read(size - knownSize)
        known = archi.read()
        return unknown, known
And by knowing the size of the data portion, its use is simple (I got that size by playing with the prior example):
>>> string_data, data = getDataPair("sword.blendscn", 80)
>>> string_data
'sword\x00Sword_Wea_Dummy\x00'
>>> data
'\xcd\xcc\xcc=p=\x8a4:\xa66\xbfJ\x15\xc6=\x00\x00\x00\x00\xeaQ8?\x9e\x8d\x874$-i\xb3\x00\x00\x00\x00\x9b\xc6\xaa2K\x15\xc6=;\xa66?\x00\x00\x00\x00\xb8\x88\xbf#\x0e\xf3\xb1#ITuB\x00\x00\x80?\xcd\xcc\xcc=\x00\x00\x00\x00\xcd\xccL>'
>>> string_data.split(chr(0))
['sword', 'Sword_Wea_Dummy', '']
>>>
Now, to get each string, a simple split will suffice, and you can pass the rest of the file contained in data to the appropriate function to be processed.
Doing file I/O one character at a time is horribly slow.
Instead, use readline0, now on PyPI: https://pypi.org/project/readline0/ (or something like it).
In 3.x, there's a "newline" argument to open, but it doesn't appear to be as flexible as readline0.
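If you'd rather not pull in a dependency, a buffered generator along the same lines (a sketch of the idea, not readline0's actual API) could look like this:

def null_terminated_records(f, bufsize=4096):
    """Yield successive null-terminated byte strings from a binary file object f."""
    pending = b''
    while True:
        chunk = f.read(bufsize)
        if not chunk:
            break
        pending += chunk
        # emit every complete record buffered so far
        while b'\x00' in pending:
            record, pending = pending.split(b'\x00', 1)
            yield record
    if pending:
        yield pending  # trailing data that had no null terminator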
Here is my implementation:
import struct

def read_null_str(f):
    r_str = ""
    while 1:
        back_offset = f.tell()
        try:
            # try to read a single one-byte character
            r_char = struct.unpack("c", f.read(1))[0].decode("utf8")
        except:
            # not decodable as a single byte: rewind and read two bytes
            # as a little-endian value instead
            f.seek(back_offset)
            temp_char = struct.unpack("<H", f.read(2))[0]
            r_char = chr(temp_char)
        if ord(r_char) == 0:
            return r_str
        else:
            r_str += r_char
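A quick way to try it out, using an in-memory file just for illustration:

import io

buf = io.BytesIO(b'Sword_Wea_Dummy\x00rest of file')
print(read_null_str(buf))  # Sword_Wea_Dummy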
I am using this code to find the difference between two CSV lists and have some formatting questions. This is probably an easy fix, but I am new and trying to learn, and I'm having a lot of problems.
import difflib

diff = difflib.ndiff(open('test1.csv', "rb").readlines(), open('test2.csv', "rb").readlines())
try:
    while 1:
        print diff.next(),
except:
    pass
The code works fine and I get the output I am looking for:
Group,Symbol,Total
- Adam,apple,3850
?            ^
+ Adam,apple,2850
?            ^
bob,orange,-45
bob,lemon,66
bob,appl,-56
bob,,88
My question is how do I clean the formatting up: can I make Group,Symbol,Total into separate columns, and then line up the text below?
Also, can I change the ? to text I determine, such as test1 and test2, representing which sheet each line comes from?
Thanks for any help.
Using difflib.unified_diff gives much cleaner output; see below.
Also, both difflib.ndiff and difflib.unified_diff return a generator, which you can use directly in a for loop and which knows when to stop, so you don't have to handle exceptions yourself. N.B.: the comma after line is to prevent print from adding another newline.
import difflib

s1 = ['Adam,apple,3850\n', 'bob,orange,-45\n', 'bob,lemon,66\n',
      'bob,appl,-56\n', 'bob,,88\n']
s2 = ['Adam,apple,2850\n', 'bob,orange,-45\n', 'bob,lemon,66\n',
      'bob,appl,-56\n', 'bob,,88\n']

for line in difflib.unified_diff(s1, s2, fromfile='test1.csv',
                                 tofile='test2.csv'):
    print line,
This gives:
--- test1.csv
+++ test2.csv
@@ -1,4 +1,4 @@
-Adam,apple,3850
+Adam,apple,2850
bob,orange,-45
bob,lemon,66
bob,appl,-56
So you can clearly see which lines were changed between test1.csv and test2.csv.
To line up the columns, you must use string formatting.
E.g. print "%-20s %-20s %-20s" % (row[0], row[1], row[2]).
To change the ? into any text you like, you'd use s.replace('?', 'any text you like').
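For instance (a quick sketch with made-up values, using the Python 2 print statement to match the code above):

row = ['Adam', 'apple', '3850']                 # a made-up CSV row
print "%-20s %-20s %-20s" % (row[0], row[1], row[2])

line = '? ^'                                    # a guide line from the diff output
print line.replace('?', 'test1')                # test1 ^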
Your problem has more to do with the CSV format, since difflib has no idea it's looking at columnar fields. What you need is to figure out into which field the guide is pointing, so that you can adjust it when printing the columns.
If your CSV files are simple, i.e. they don't contain any quoted fields with embedded commas or (shudder) newlines, you can just use split(',') to separate them into fields, and figure out where the guide points as follows:
def align(line, guideline):
    """
    Figure out which field the guide (^) points to, and the offset within it.
    E.g., if the guide points 3 chars into field 2, return (2, 3).
    """
    fields = line.split(',')
    guide = guideline.index('^')
    f = p = 0
    while p + len(fields[f]) < guide:
        p += len(fields[f]) + 1  # +1 for the comma
        f += 1
    offset = guide - p
    return f, offset
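A quick check of align with a made-up line and guide (the caret sits at index 11, under the '3' of 3850):

line = 'Adam,apple,3850'
guideline = '           ^'      # 11 spaces, then '^' under the '3'
print align(line, guideline)    # (2, 0): field 2, offset 0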
Now it's easy to show the guide properly. Let's say you want to align your columns by printing everything 12 spaces wide:
diff = difflib.ndiff(...)
for line in diff:
    code = line[0]  # The diff prefix
    print code,
    if code == '?':
        fld, offset = align(lastline, line[2:])
        for f in range(fld):
            print "%-12s" % '',
        print ' ' * offset + '^'
    else:
        fields = line[2:].rstrip('\r\n').split(',')
        for f in fields:
            print "%-12s" % f,
        print
    lastline = line[2:]
Be warned that the only reliable way to parse CSV files is to use the csv module (or a robust alternative); but getting it to play well with the diff format (in full generality) would be a bit of a headache. If you're mainly interested in readability and your CSV isn't too gnarly, you can probably live with an occasional mix-up.