I'm currently reading from S3 and saving the results into a DataFrame.
The problem: S3 objects are read in as bytes, but the byte-string representation (b'...') ends up embedded inside my string.
I can't decode it with example_string.decode(), because it is already a str.
A related problem is finding emojis within the text. These are saved as UTF-8, and because the byte string is embedded inside a regular string, extra backslashes etc. are added.
I just want the plain string, without the embedded byte-string wrapper or any combination of the two.
Any help would be appreciated.
bucket_iter = iter(bucket)
while True:
    next_val = next(bucket_iter)
    current_file = next_val.get()['Body'].read().decode('utf-8')
    split_file = current_file.split(']')
    for tweet in split_file:
        a = tweet.split(',')
        if len(a) == 10:
            a[0] = a[0][2:12]
            new_row = {'date': a[0], 'tweet': a[1], 'user': a[2], 'cashtags': a[3], 'number_cashtags': a[4], 'Hashtags': a[5], 'number_hashtags': a[6], 'quoted_tweet': a[7], 'urs_present': a[8], 'spam': a[9]}
            df = df.append(new_row, ignore_index=True)
Example of a line in the S3 bucket:
["2021-01-06 13:41:48", "Q1 2021 Earnings Estimate for The Walt Disney Company $DIS Issued By Truist Securiti https://t co/l5VSCCCgDF #stocks", "b'AmericanBanking'", "$DIS", "1", "#stocks'", "1", "False", "1", "0"]
Even though the item is a string, it keeps the b'' prefix, because what was stored is the repr of a bytes object. Just write a small bit of code to keep only what is inside the quotes:
def bytes_to_string(b):
    return str(b)[2:-1]
EDIT: you could technically use regexes for this, but this is a much more readable (and shorter) way of doing it.
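For illustration, here is how the helper behaves on a value like the one in the sample line above (the input is made up for the demo). If the embedded repr may contain escape sequences, ast.literal_eval is a more robust alternative, since it re-parses the repr back into a real bytes object:

```python
import ast

def bytes_to_string(b):
    # Strip the leading b' and the trailing ' from the repr of a bytes object.
    return str(b)[2:-1]

s = "b'AmericanBanking'"  # a bytes repr embedded in a str, as in the sample line
print(bytes_to_string(s))  # -> AmericanBanking

# More robust: re-parse the repr back into bytes, then decode it.
print(ast.literal_eval(s).decode('utf-8'))  # -> AmericanBanking
```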
I want to convert the data below to JSON in Python.
I have the data in the following format:
b'{"id": "1", "name": " value1"}\n{"id":"2", name": "value2"}\n{"id":"3", "name": "value3"}\n'
This has multiple JSON objects separated by \n. I was trying to load it as JSON: I converted the data into a string first and called json.loads, but I am getting this exception:
my_json = content.decode('utf8')
json_data = json.loads(my_json)
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 2306)
You need to decode it, then split on '\n' and load each JSON object separately. If you store your byte string in a variable called byte_string, you could do something like:
json_str = byte_string.decode('utf-8')
json_objs = json_str.split('\n')
for obj in json_objs:
    json.loads(obj)
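To actually keep the parsed objects (the loop above discards each result), you can collect them into a list; splitlines() also avoids the empty trailing entry caused by the final \n. A sketch using the sample data from the question (with its missing quote fixed for the demo):

```python
import json

byte_string = b'{"id": "1", "name": " value1"}\n{"id": "2", "name": "value2"}\n{"id": "3", "name": "value3"}\n'

json_str = byte_string.decode('utf-8')
# splitlines() drops the trailing newline; skip any blank lines just in case.
records = [json.loads(line) for line in json_str.splitlines() if line]
print(records[1]['name'])  # -> value2
```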
For the particular string that you have posted here, though, you will get an error on the second object, because its second key is missing a double quote: it appears as name" in the string you posted.
First, this isn't valid JSON, since it's not a single object. Second, there is a typo: the "id":"2" entry is missing a double quote on the name property element.
As an alternative to processing one dict at a time, you can replace the newlines with "," and turn the whole thing into an array. This is a fragile solution, since it requires exactly one newline between each dict, but it is compact:
s = b'{"id": "1", "name": " value1"}\n{"id":"2", "name": "value2"}\n{"id":"3", "name": "value3"}\n'
my_json = s.decode('utf8')
json_data = json.loads("[" + my_json.rstrip().replace("\n", ",") + "]")
You have to decode your JSON to a string first, so you can just say:
your_json_string = the_json.decode()
Now you have a string.
What you want to do next is:
your_json_string = your_json_string.replace("\n", ",")
so you are replacing each newline with a comma. Note that a single backslash is what you want here: the byte string contains real newline characters, not a backslash followed by an n.
Since the result is still several objects in a row, wrap it in brackets (stripping the trailing comma first) and load it:
your_json = json.loads("[" + your_json_string.rstrip(",") + "]")
How do I join individual characters coming in over UART to form a string?
For example:
The characters from UART are printed in the following format:
\x02
1
2
3
4
5
6
7
\x03
\x02
a
b
c
d
e
f
g
\x03
and I would like output to be something like:
1234567
abcdefg
I have tried this so far:
#!/usr/bin/env python
import time
import serial
ser = serial.Serial('/dev/ttyUSB0',38400)
txt = ""
ser.flushInput()
ser.flushOutput()
while 1:
    bytesToRead = ser.inWaiting()
    data_raw = ser.read(1)
    while 1:
        if data_raw != '/x02' or data_raw != '/x03':
            txt += data_raw
        elif data_raw == '/x03':
            break
    print txt
Any ideas on how to do that? I am getting no output using this.
First of all, you don't need to call inWaiting: read will block until data is available unless you explicitly set a read timeout. Secondly, if you do insist on using it, keep in mind that the function inWaiting has been replaced by the property in_waiting.
The string /x03 is a 4-character string containing all printable characters. The string \x03, on the other hand, contains only a single non printable character with ASCII code 3. Backslash is the escape character in Python strings. \x followed by two digits is an ASCII character code. Please use backslashes where they belong. This is the immediate reason you are not seeing output: the four character string can never appear in a one-character read.
That being out of the way, the most important thing to remember is to clear your buffer when you encounter a terminator character. Let's say you want to use the inefficient method of adding to your string in place. When you reach \x03, you should print txt and just reset it back to '' instead of breaking out of the loop. A better way might be to use bytearray, which is a mutable sequence. Keep in mind also that read returns bytes, not strings in Python 3.x. This means that you should be decoding the result if you want text: txt = txt.decode('ascii').
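A minimal sketch of that reset-on-terminator approach, fed from a canned byte sequence instead of a real port (the data is invented for the demo, matching the frames in the question):

```python
# Simulate a stream of single-byte reads, as ser.read(1) would return them.
incoming = b'\x021234567\x03\x02abcdefg\x03'

frames = []
txt = b''
for i in range(len(incoming)):
    b = incoming[i:i+1]          # a one-byte bytes object, like ser.read(1)
    if b == b'\x02':
        txt = b''                # start marker: clear the buffer
    elif b == b'\x03':
        frames.append(txt.decode('ascii'))  # end marker: emit and reset
        txt = b''
    else:
        txt += b

print(frames)  # -> ['1234567', 'abcdefg']
```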
I would suggest a further improvement: create an infinite generator function that splits your stream into strings. You could use that generator to print the strings or do anything else you want with them:
def getstrings(port):
    buf = bytearray()
    while True:
        b = port.read(1)
        if b == b'\x02':
            del buf[:]
        elif b == b'\x03':
            yield buf.decode('ascii')
        else:
            buf += b

for item in getstrings(Serial(...)):
    print(item)
Here's one way that I think would help you a bit:
l = []
while 1:
    data_raw = ser.read(1)
    if data_raw != '\x02' and data_raw != '\x03':
        l.append(data_raw)
    elif data_raw == '\x03':
        txt = "".join(l)
        print txt
        l = []
I begin by creating an empty list, and each time you receive a new data_raw you append it to the list. Once you reach the ending character, you create the string and print it.
I took the liberty of removing one of the loops here to give you a simpler approach. The code will not stop by itself at the end (just add a break after the print if you want it to). It will print out the result whenever you reach the ending character and then wait for the next data stream to start.
If you want to see an intermediate result, you can print out each data_raw, and right after it add a print of the currently joined list.
Be sure to set the timeout value to None when you open your port. That way, you will simply wait to receive one byte, process it, and then return to read one byte at a time.
ser = serial.Serial(port='/dev/ttyUSB0', baudrate=38400, timeout=None)
See http://pyserial.readthedocs.io/en/latest/pyserial_api.html for more information on that.
I'm trying to read a null-terminated string, but I'm having issues when unpacking a char and putting it together with a string.
This is the code:
def readString(f):
    str = ''
    while True:
        char = readChar(f)
        str = str.join(char)
        if (hex(ord(char))) == '0x0':
            break
    return str

def readChar(f):
    char = unpack('c', f.read(1))[0]
    return char
Now this is giving me this error:
TypeError: sequence item 0: expected str instance, int found
I'm also trying the following:
char = unpack('c',f.read(1)).decode("ascii")
But it throws me:
AttributeError: 'tuple' object has no attribute 'decode'
I don't even know how to read the chars and add them to the string. Is there a proper way to do this?
Here's a version that (ab)uses __iter__'s lesser-known "sentinel" argument:
with open('file.txt', 'rb') as f:
    val = ''.join(iter(lambda: f.read(1).decode('ascii'), '\x00'))
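A quick way to try this without a file on disk is io.BytesIO (the sample data is invented for the demo). Note that the sentinel read also consumes the null byte, so the file pointer ends up just past it:

```python
import io

f = io.BytesIO(b'hello\x00rest of file')
# read one byte at a time, stopping when the decoded char equals '\x00'
val = ''.join(iter(lambda: f.read(1).decode('ascii'), '\x00'))
print(val)       # -> hello
print(f.read())  # -> b'rest of file'
```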
How about:
myString = myNullTerminatedString.split("\x00")[0]
For example:
myNullTerminatedString = "hello world\x00\x00\x00\x00\x00\x00"
myString = myNullTerminatedString.split("\x00")[0]
print(myString) # "hello world"
This works by splitting the string on the null character. Since the string should terminate at the first null character, we simply grab the first item in the list after splitting. split will return a list of one item if the delimiter doesn't exist, so it still works even if there's no null terminator at all.
It also will work with byte strings:
myByteString = b'hello world\x00'
myStr = myByteString.split(b'\x00')[0].decode('ascii') # "hello world" as normal string
If you're reading from a file, you can do a relatively large read - estimate how much you'll need to read to find your null byte. This is a lot faster than reading byte-by-byte. For example:
resultingStr = b''
while True:
    buf = f.read(512)
    resultingStr += buf
    if len(buf) == 0:
        break
    if b"\x00" in resultingStr:
        extraBytes = len(resultingStr) - resultingStr.index(b"\x00")
        resultingStr = resultingStr.split(b"\x00")[0]
        break
# now "resultingStr" contains the string
f.seek(-extraBytes, 1)  # seek backwards by the number of extra bytes read; the pointer is now on the null byte in the file
# or f.seek(1 - extraBytes, 1) to skip the null byte in the file
(Edit, version 2: added an extra approach at the end)
Maybe there are some libraries out there that can help you with this, but as I don't know of any, let's attack the problem with what we know.
In Python 2, bytes and str are basically the same thing; that changed in Python 3, where str is what Python 2 called unicode, and bytes is its own separate type. This means that in Python 2 you don't need to define a readChar, as no extra work is required, so I don't think you need that unpack function for this particular case. With that in mind, let's define a new readString:
def readString(myfile):
    chars = []
    while True:
        c = myfile.read(1)
        if c == chr(0):
            return "".join(chars)
        chars.append(c)
Just like in your code, I read one character at a time, but I save them in a list instead; the reason is that strings are immutable, so str += char results in unnecessary copies. When I find the null character, I return the joined string. chr is the inverse of ord: it gives you the character for a given ASCII value. This version excludes the null character; if you need it, just move the append above the check.
Now let's test it with your sample file; for instance, let's try to read "Sword_Wea_Dummy" from it:
with open("sword.blendscn", "rb") as archi:
    # lets simulate that some prior processing was made by
    # moving the pointer of the file
    archi.seek(6)
    string = readString(archi)
    print "string repr:", repr(string)
    print "string:", string
    print ""
    # and the rest of the file is there waiting to be processed
    print "rest of the file: ", repr(archi.read())
and this is the output
string repr: 'Sword_Wea_Dummy'
string: Sword_Wea_Dummy
rest of the file: '\xcd\xcc\xcc=p=\x8a4:\xa66\xbfJ\x15\xc6=\x00\x00\x00\x00\xeaQ8?\x9e\x8d\x874$-i\xb3\x00\x00\x00\x00\x9b\xc6\xaa2K\x15\xc6=;\xa66?\x00\x00\x00\x00\xb8\x88\xbf#\x0e\xf3\xb1#ITuB\x00\x00\x80?\xcd\xcc\xcc=\x00\x00\x00\x00\xcd\xccL>'
Other tests:
>>> with open("sword.blendscn","rb") as archi:
...     print readString(archi)
...     print readString(archi)
...     print readString(archi)
sword
Sword_Wea_Dummy
ÍÌÌ=p=Š4:¦6¿JÆ=
>>> with open("sword.blendscn","rb") as archi:
...     print repr(readString(archi))
...     print repr(readString(archi))
...     print repr(readString(archi))
'sword'
'Sword_Wea_Dummy'
'\xcd\xcc\xcc=p=\x8a4:\xa66\xbfJ\x15\xc6='
>>>
Now that I think about it, you mention that the data portion is of fixed size. If that is true for all files, and the structure of all of them is as follows:
[unknown-size data][known-size data]
then that is a pattern we can exploit; we only need to know the size of the file, and we can get both parts smoothly, as follows:
import os

def getDataPair(filename, knowSize):
    size = os.path.getsize(filename)
    with open(filename, "rb") as archi:
        unknown = archi.read(size - knowSize)
        know = archi.read()
        return unknown, know
and since we know the size of the data portion, its use is simple (I got the value by playing with the prior example):
>>> string_data, data = getDataPair("sword.blendscn", 80)
>>> string_data
'sword\x00Sword_Wea_Dummy\x00'
>>> data
'\xcd\xcc\xcc=p=\x8a4:\xa66\xbfJ\x15\xc6=\x00\x00\x00\x00\xeaQ8?\x9e\x8d\x874$-i\xb3\x00\x00\x00\x00\x9b\xc6\xaa2K\x15\xc6=;\xa66?\x00\x00\x00\x00\xb8\x88\xbf#\x0e\xf3\xb1#ITuB\x00\x00\x80?\xcd\xcc\xcc=\x00\x00\x00\x00\xcd\xccL>'
>>> string_data.split(chr(0))
['sword', 'Sword_Wea_Dummy', '']
>>>
Now a simple split will suffice to get each string, and you can pass the rest of the file, contained in data, to the appropriate function to be processed.
Doing file I/O one character at a time is horribly slow.
Instead use readline0, now on pypi: https://pypi.org/project/readline0/ . Or something like it.
In 3.x, there's a "newline" argument to open, but it doesn't appear to be as flexible as readline0.
Here is my implementation:
import struct

def read_null_str(f):
    r_str = ""
    while 1:
        back_offset = f.tell()
        try:
            # try to read a single byte (ASCII / UTF-8)
            r_char = struct.unpack("c", f.read(1))[0].decode("utf8")
        except:
            # fall back to a 2-byte little-endian code unit (UTF-16)
            f.seek(back_offset)
            temp_char = struct.unpack("<H", f.read(2))[0]
            r_char = chr(temp_char)
        if ord(r_char) == 0:
            return r_str
        else:
            r_str += r_char
I've had many struggles with Unicode in Python over the years, as I work with many text files in Japanese, so I'm familiar with using .encode("utf-8") to turn u'xxxx' escapes back into Japanese display. I am NOT getting any encoding/decoding errors. But text I read from a Unicode file, manipulate, and then write back into a new file is being represented as strings of u'xxxx' escapes instead of the original Japanese text. I have tried .encode() and .decode() in multiple places, and also not using them at all, every time with the same result. Any suggestions are welcome.
Specifically, I am using the Scrapy library to write a spider that takes text from a file it crawls, extracts bits of text to construct the filename of a new file, and then writes the first div of the HTML file as a string into that new file.
What is even more confusing to me is that the bits of text I'm using to create the filename all render in Japanese, as does the filename itself. Is it because I am using str() on the div that I am getting u'xxxx' as the content of my file? Please look toward the end of the code to see this line.
Here is my complete code (and please ignore how hacky some of it is):
def parse_item(self, response):
    original = 0
    author = "noauthor"
    title = "notitle"
    year = "xxxx"
    publisher = "xxxx"
    typer = "xxxx"
    ispub = 0
    filename = response.url.split("/")[-1]
    if "_" in filename:
        filename = filename.split("_")[0]
    if filename.isdigit():
        title = response.xpath("//h1/text()").extract()[0].encode("utf-8")
        author = response.xpath("//h2/text()").extract()[0].encode("utf-8")
        ID = filename
        bibliographic_info = response.xpath("//div[2]/text()").extract()
        for subyear in bibliographic_info:
            ispub = 0
            subyear = subyear.encode("utf-8").strip()
            if "初出:" in subyear:
                publisher = subyear.split(":")[1]
                original = 1
                ispub = 1
            if "入力:" in subyear:
                typer = subyear.split(":")[1]
            if len(subyear) > 1 and (original == 1) and (ispub == 0):
                counter = 0
                while counter < len(subyear):
                    if subyear[counter].isdigit():
                        break
                    counter += 1
                if counter != len(subyear):
                    year = subyear[counter:(counter + 4)]
                original = 0
        body = str(response.xpath("//div[1]/text()").extract())
        new_filename = author + "_" + title + "_" + publisher + "_" + year + "_" + typer + ".html"
        file = open(new_filename, "a")
        file.write(body.encode("utf-8"))
        file.close()
# -*- coding: utf-8 -*-
# u'初出' and u'\u521d\u51fa' are different ways to specify *the same* string
assert u'初出' == u'\u521d\u51fa'
#XXX don't mix Unicode and bytes!!!
assert u'初出' != '初出' and u'初出' != '\u521d\u51fa'
Don't use str() at all with a Unicode string as an argument, use the explicit .encode() instead.
Do not call .encode() or .decode() unless necessary; use the Unicode sandwich instead:
decode bytes that you receive from outside world into Unicode
keep it Unicode inside your script
encode into bytes at the end to save to a file, send over a network.
Both the first and the last step might be implicit, i.e., your program might only ever see Unicode text.
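A minimal sketch of the sandwich, using the same 初出 string as below (the raw bytes are its UTF-8 encoding):

```python
# -*- coding: utf-8 -*-
raw = b'\xe5\x88\x9d\xe5\x87\xba'  # bytes from the outside world (UTF-8 for 初出)

text = raw.decode('utf-8')         # 1. decode at the boundary
assert text == u'\u521d\u51fa'     # 2. work with Unicode inside your script
out = text.encode('utf-8')         # 3. encode only when writing out

assert out == raw
```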
Note, these are three different things:
the way a string looks in the source code when you specify it using a string literal (Unicode escapes, source-code encoding, raw string literals)
the content of the string
how it looks if you print it (repr(), the 'backslashreplace' error handler)
If you see u'...' in the output, it means that at some point repr(unicode_string) was called. It may be implicit, e.g., via print([unicode_string]), because repr() is called on the items of the list while it is converted to a string.
print(u'\u521d\u51fa') # -> 初出 #NOTE: no u'', \u..
print(repr(u'\u521d\u51fa')) # -> u'\u521d\u51fa'
I'm having an issue parsing data after reading a file. I'm reading a binary file and need to create a list of attributes from it; all of the data in the file is terminated with a null byte. What I'm trying to do is find every instance of a null-byte-terminated attribute.
Essentially taking a string like
Health\x00experience\x00charactername\x00
and storing it in a list.
The real issue is I need to keep the null bytes intact; I just need to be able to find each instance of a null byte and store the data that precedes it.
Python doesn't treat NUL bytes as anything special; they're no different from spaces or commas. So, split() works fine:
>>> my_string = "Health\x00experience\x00charactername\x00"
>>> my_string.split('\x00')
['Health', 'experience', 'charactername', '']
Note that split is treating \x00 as a separator, not a terminator, so we get an extra empty string at the end. If that's a problem, you can just slice it off:
>>> my_string.split('\x00')[:-1]
['Health', 'experience', 'charactername']
While it boils down to using split('\x00'), a convenience wrapper might be nice:
def readlines(f, bufsize):
    buf = ""
    data = True
    while data:
        data = f.read(bufsize)
        buf += data
        lines = buf.split('\x00')
        buf = lines.pop()
        for line in lines:
            yield line + '\x00'
    yield buf + '\x00'
then you can do something like
with open('myfile', 'rb') as f:
    mylist = [item for item in readlines(f, 524288)]
This has the added benefit of not needing to load the entire contents into memory before splitting the text.
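The generator above is written for Python 2, where reading a binary file returns str. A Python 3 adaptation works on bytes throughout; here it is exercised against io.BytesIO with a small buffer size so the chunking logic actually gets used (demo data taken from the question):

```python
import io

def readlines(f, bufsize):
    # Yield null-terminated records from a binary file, reading in chunks.
    buf = b""
    data = True
    while data:
        data = f.read(bufsize)
        buf += data
        lines = buf.split(b'\x00')
        buf = lines.pop()          # keep the unterminated tail for the next round
        for line in lines:
            yield line + b'\x00'
    if buf:
        yield buf                  # trailing data with no terminator, if any

f = io.BytesIO(b"Health\x00experience\x00charactername\x00")
records = list(readlines(f, 8))
print(records)
# -> [b'Health\x00', b'experience\x00', b'charactername\x00']
```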
To check if a string has a NUL byte, simply use the in operator, for example:
if b'\x00' in data:
To find its position, use find(), which returns the lowest index in the string where the substring is found (or -1 if it is not found). The optional arguments start and end are interpreted as in slice notation.
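A short demonstration of both checks, reusing the sample data from the question; the start argument lets you search again past the first hit:

```python
data = b"Health\x00experience\x00charactername\x00"

if b'\x00' in data:
    first = data.find(b'\x00')              # lowest index of a null byte
    second = data.find(b'\x00', first + 1)  # search again, starting past the first hit
    print(first, second)  # -> 6 17
```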
Split on null bytes; .split() returns a list:
>>> print("Health\x00experience\x00charactername\x00".split("\x00"))
['Health', 'experience', 'charactername', '']
If you know the data always ends with a null byte, you can slice the list to chop off the trailing empty string (e.g. result_list[:-1]).