write spark dataframe as array of json (pyspark) - python

I would like to write my Spark dataframe as a set of JSON files, with each file containing an array of JSON objects.
Let me explain with a simple (reproducible) example.
We have:
import numpy as np
import pandas as pd

# assumes an active SparkSession named `spark` (as in the pyspark shell)
df = spark.createDataFrame(pd.DataFrame({'x': np.random.rand(100), 'y': np.random.rand(100)}))
Saving the dataframe as:
df.write.json('s3://path/to/json')
each file created contains one JSON object per line, something like:
{"x":0.9953802385540144,"y":0.476027611419198}
{"x":0.929599290575914,"y":0.72878523939521}
{"x":0.951701684432855,"y":0.8008064729546504}
but I would like to have an array of those JSON objects in each file:
[
{"x":0.9953802385540144,"y":0.476027611419198},
{"x":0.929599290575914,"y":0.72878523939521},
{"x":0.951701684432855,"y":0.8008064729546504}
]

It is not currently possible to have Spark natively write the data in your desired format, because Spark works in a distributed (parallel) fashion, with each executor writing its part of the data independently.
However, since you are okay with each output file being an array of JSON (rather than everything in one file), here is one workaround you can use to achieve your desired output:
from pyspark.sql.functions import to_json, spark_partition_id, collect_list, col, struct

(df
    .select(to_json(struct(*df.columns)).alias("json"))   # one JSON string per row
    .groupBy(spark_partition_id())                        # group rows by their current partition
    .agg(collect_list("json").alias("json_list"))         # gather that partition's JSON strings into a list
    .select(col("json_list").cast("string"))              # render the array as "[{...}, {...}]"
    .write.text("s3://path/to/json"))
First, create a JSON string from all of the columns in df. Then group by the Spark partition ID and aggregate with collect_list, which puts all of the JSON strings on that partition into a list. Since you're aggregating within the partition, there should be no shuffling of data required.
Now select the list column, cast it to a string, and write it out as a text file.
Here's an example of how one file looks:
[{"x":0.1420523746714616,"y":0.30876114874052263}, ... ]
Note you may get some empty files.
Presumably you could force Spark to write the data to ONE file by using an empty groupBy(), but this would funnel all of the data into a single partition, which could cause an out-of-memory error.
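For completeness, here is a minimal sketch of that single-file variant (same imports as above); every row is collected into one group, so the memory caveat applies:
(df
    .select(to_json(struct(*df.columns)).alias("json"))
    .groupBy()                                            # no key: one global group
    .agg(collect_list("json").alias("json_list"))
    .select(col("json_list").cast("string"))
    .write.text("s3://path/to/json"))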

If the data is not super huge and it is okay to have the whole list as one JSON file, the following workaround is also valid. First, convert the PySpark dataframe to pandas and then to a list of dicts. Then the list can be dumped as JSON.
import json

list_of_dicts = df.toPandas().to_dict('records')
with open('path/to/file.json', 'w') as json_file:
    json.dump(list_of_dicts, json_file)

Related

How do I export JSON data to CSV using python?

I'm building a site that, based on a user's input, sorts through JSON data and prints a schedule for them into an HTML table. I want to give it the functionality that once their table is created they can export the data to a CSV/Excel file, so we don't have to store their credentials (logins & schedules) in a database. Is this possible? If so, how can I do it, preferably using Python?
This is not the exact answer but rather steps for you to follow in order to get a solution (a sketch follows the list):
1. Read the data from JSON: some_dict = json.loads(json_string)
2. Write appropriate code to get the data out of the dictionary (sorting, conditions, etc.) into a 2D array (list of lists)
3. Save that list as CSV: https://realpython.com/python-csv/
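A minimal sketch of those three steps, using only the standard library (the field names and json_string here are made up for illustration):
import csv
import json

# Hypothetical input: a JSON array of flat objects
json_string = '[{"name": "Alice", "slot": "9am"}, {"name": "Bob", "slot": "10am"}]'

records = json.loads(json_string)                     # step 1: parse the JSON
rows = [[r["name"], r["slot"]] for r in records]      # step 2: pull out the fields you need

with open("schedule.csv", "w", newline="") as f:      # step 3: write the CSV
    writer = csv.writer(f)
    writer.writerow(["name", "slot"])                 # header row
    writer.writerows(rows)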
I'm pretty lazy and like to utilize pandas for things like this. It would be something along the lines of:
import json

import pandas as pd

file = 'data.json'
with open(file) as j:
    json_data = json.load(j)
df = pd.DataFrame.from_dict(json_data, orient='index')
df.to_csv("data.csv")

pandas.read_csv usecols does not print output as a list in Python 3.9

Why, when I run the following code...
import pandas

passengersid = pandas.read_csv('test.csv', usecols=['PassengerId'])
print(passengersid)
...do I get a printed DataFrame (the values shown with their row indexes, as a table)?
I am trying to get a simple list of values (without row indexes and not a table) from the first (PassengerId) column of one CSV file, and then iterate over it and use it along with other data from a second CSV file.
You are reading a DataFrame with the read_csv command. To get just the column's values:
col_one_list = passengersid['PassengerId'].tolist()
col_one_arr = passengersid['PassengerId'].to_numpy()
This will give you a list or a NumPy array, whichever you need.
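Putting it together (file and column names taken from the question; the list contents depend on your data):
import pandas as pd

# Read only the PassengerId column, then flatten it to a plain Python list
passengersid = pd.read_csv('test.csv', usecols=['PassengerId'])
print(passengersid['PassengerId'].tolist())   # a plain list, no index column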

What is the correct way to convert json data (which is undefined/messy) into a DataFrame?

I am trying to understand how JSON data which is not parsed/extracted correctly can be converted into a (pandas) DataFrame.
I am using Python (3.7.1) and have tried the usual way of reading the JSON data. The code runs if I use transpose or axis=1, but doing that ignores a large number of values/variables in the data, so the code may be working but it is not giving the desired results.
import pandas as pd
import numpy as np
import csv
import json
sourcefile = open(r"C:\Users\jadil\Downloads\chicago-red-light-and-speed-camera-data\socrata_metadata_red-light-camera-violations.json")
json_data = json.load(sourcefile)
#print(json_data)
type(json_data)  # returns: dict
## this code works but is not loading/reading complete data
df = pd.DataFrame.from_dict(json_data, orient="index")
# This is what I am getting for the first 15 rows:
df.head(15)
0
createdAt 1407456580
description This dataset reflects the daily volume of viol...
rights [read]
flags [default, restorable, restorePossibleForType]
id spqx-js37
oid 24980316
owner {'type': 'interactive', 'profileImageUrlLarge'...
newBackend False
totalTimesRated 0
attributionLink http://www.cityofchicago.org
hideFromCatalog False
columns [{'description': 'Intersection of the location...
displayType table
indexUpdatedAt 1553164745
rowsUpdatedBy n9j5-zh
As you have seen, pandas will attempt to create a DataFrame out of JSON data even if it is not parsed or extracted correctly. If your goal is to understand exactly what pandas does when presented with a messy JSON file, you can look inside the code for pd.DataFrame.from_dict() to learn more. If your goal is to get the JSON data to convert correctly to a pandas DataFrame, you will need to provide more information about the JSON data, ideally by providing a sample of the data as text in your question. If your data is sufficiently complicated, you might try the json_normalize() function as described here.
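As a rough illustration of json_normalize on nested data shaped loosely like the metadata above (the keys here are invented for the example):
import pandas as pd

json_data = {
    "id": "spqx-js37",
    "owner": {"type": "interactive", "id": "n9j5-zh"},
    "columns": [{"description": "Intersection"}, {"description": "Violation date"}],
}

# Flattens nested dicts into dotted column names: id, owner.type, owner.id, ...
df = pd.json_normalize(json_data)

# Or explode a list of records into one row per entry, keeping top-level fields
cols = pd.json_normalize(json_data, record_path="columns", meta=["id"])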

Turning a huge TextEdit file of JSON into a Pandas dataframe

I have an extremely large list of JSON objects in the form of a TextEdit document, each of which has 6 key-value pairs.
I would like to turn each key into a column name for a pandas DataFrame, and list the values under the columns.
{'column1': "stuff stuff", 'column2': "details details", ...}
Is there a standard way to do this?
I think you could begin by loading the file into a dataframe with
import pandas as pd

df = pd.read_table(file_name)
I think each column could then be created by iterating through each JSON document using groupby.
EDIT: I think the correct approach is to parse each JSON object into a DataFrame, and then write a function that iterates through all of the JSONs and builds one DataFrame.
Take a look at read_json or json_normalize. You would indeed most likely read each file and then use, for instance, pd.concat to combine them as required.
Something along the below lines should work, depending on what your file looks like (here assuming that each JSON dictionary makes up a line in the file):
import pandas as pd

frames = []
with open('workfile', 'r') as f:
    for line in f:
        frames.append(pd.read_json(line, orient='columns'))
df = pd.concat(frames, ignore_index=True)   # one concat at the end, not per line
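If the file really is newline-delimited JSON (one object per line), pandas can also read it in a single call:
import pandas as pd

# lines=True tells read_json to treat each line as a separate JSON object
df = pd.read_json('workfile', lines=True)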

Creating Arrays from csv files in python

So I have a data file from which I must extract specific data. Using:
import numpy

x = 15        # need a way for the code to assess how many lines to skip in the given data
maxcol = 2000 # need a way to find the final row of the data

data = numpy.genfromtxt('data.dat.csv', skip_header=x, delimiter=',')
column_one = data[0:maxcol, 0]
column_two = data[0:maxcol, 1]
this gives me the arrays for the specific case where there are (x=)15 lines of metadata above the required data and the number of rows of data is (maxcol=)2000. How do I change the code so that it works for any value of x and maxcol?
Use pandas. Its read_csv function does all of that for you (I don't include its equivalent of delimiter, sep=',', because comma-delimited is the default):
import pandas as pd

data = pd.read_csv('data.dat.csv', skiprows=x, nrows=maxcol)
Note that if the file has no header row after the metadata, you should also pass header=None so the first data row is not consumed as column names. If you really want the result as a numpy array, you can do this:
data = data.values
But you can probably just leave it as a pandas DataFrame.
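To recover the two columns from the original snippet, a sketch (assuming, as header=None implies, that the file has no header row; x and maxcol as defined in the question):
import pandas as pd

x, maxcol = 15, 2000  # values from the question
data = pd.read_csv('data.dat.csv', skiprows=x, nrows=maxcol, header=None)
column_one = data.iloc[:, 0].to_numpy()  # first column as a numpy array
column_two = data.iloc[:, 1].to_numpy()  # second column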
