PySpark's DataFrame.show() runs slow - python

Newbie here. I read a table (about 2 million rows) from MySQL into a Spark DataFrame via JDBC in PySpark, and I am trying to show the top 10 rows:
from pyspark.sql import SparkSession
spark_session = SparkSession.builder.master("local[4]").appName("test_log_processing").getOrCreate()
url = "jdbc:mysql://localhost:3306"
table = "test.fakelog"
properties = {"user": "myUser", "password": "********"}
df = spark_session.read.jdbc(url, table, properties=properties)
df.cache()
df.show(10)  # never prints the results; runs very slowly and consumes 90%+ of the CPU
spark_session.stop()
And here's the console log:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
[Stage 0:> (0 + 1) / 1]
My background is in statistics and I only recently started learning Spark, so I have no idea what is going on behind this code (for a smaller dataset it works well). How should I fix this problem, and what more should I know about Spark?

Because you call spark.read.jdbc on the whole table and then cache it, Spark has to pull the entire table from the database before anything is shown; only after that does it cache the data and print 10 rows from the cache. If you push the limit down into the JDBC query instead, as below, you will notice the difference.
spark_session = SparkSession.builder.master("local[4]").appName("test_log_processing").getOrCreate()
url = "jdbc:mysql://localhost:3306"
table = "(SELECT * FROM test.fakelog LIMIT 10) temp"
properties = {"user": "myUser", "password": "********"}
df = spark_session.read.jdbc(url, table, properties=properties)
df.cache()
df.show()
spark_session.stop()

Maybe your in-memory cache is filling up; the default storage level for cache() used to be memory-only (in older Spark versions).
Therefore, instead of cache(), try df.persist(StorageLevel.MEMORY_AND_DISK); it will spill to disk when memory gets too full (a sketch follows below).
Try .take(10); it returns a list of Row objects. It might not be faster, but it is worth a try.
Try df.coalesce(50).persist(StorageLevel.MEMORY_AND_DISK); this works well without a shuffle if you have an over-partitioned DataFrame.
If none of these work, it probably means your computing cluster cannot handle this load and you need to scale out.
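For completeness, a minimal sketch of the persist() suggestion, using the same JDBC settings as the question (StorageLevel lives in the top-level pyspark package; the MEMORY_AND_DISK level is this answer's suggestion, not something the question already uses):
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark_session = SparkSession.builder.master("local[4]").appName("test_log_processing").getOrCreate()
df = spark_session.read.jdbc("jdbc:mysql://localhost:3306", "test.fakelog",
                             properties={"user": "myUser", "password": "********"})

# Persist with spill-to-disk instead of relying on cache()'s default
df.persist(StorageLevel.MEMORY_AND_DISK)

# take() returns a plain Python list of Row objects instead of printing a table
for row in df.take(10):
    print(row)

spark_session.stop()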

Related

Can't associate temp view with database in spark session

I'm trying to create a temp view with Spark from a CSV file.
To reproduce my production scenario I need to test my script locally; in production I'm using Glue Jobs (AWS), where there are databases and tables.
In the code below I create a database in my Spark session and use it, and after that I create a temp view.
from pyspark.sql import SparkSession
spark=SparkSession.builder.appName("pulsar_data").getOrCreate()
df = spark.read.format('csv') \
    .options(inferSchema=True) \
    .options(header=True) \
    .load('pulsar_stars.csv')
spark.sql('CREATE DATABASE IF NOT EXISTS MYDB')
spark.sql('USE MYDB')
df.createOrReplaceTempView('MYDB.TB_PULSAR_STARS')
spark.catalog.listTables()
spark.sql('SELECT * FROM MYDB.TB_PULSAR_STARS').show()
However, when I try to select db.table, Spark can't find the relation between my temp view and my database and throws the following error:
*** pyspark.sql.utils.AnalysisException: Table or view not found: MYDB.TB_PULSAR_STARS; line 1 pos 14;
'Project [*]
+- 'UnresolvedRelation [MYDB, TB_PULSAR_STARS], [], false
Debugging my code with pdb, I have listed my spark session catalog, where I find that my table is in fact associated:
(Pdb) spark.catalog.listTables()
[Table(name='tb_pulsar_stars', database='MYDB', description=None, tableType='TEMPORARY', isTemporary=True)]
How can I make this relationship work?
A temporary view name associated with a DataFrame can only be a single name segment; this is explicitly checked in the Spark source. I would expect your code to throw AnalysisException: CREATE TEMPORARY VIEW or the corresponding Dataset APIs only accept single-part view names, but got: MYDB.TB_PULSAR_STARS - not sure why in your case it's a bit different.
Anyway, use:
df.createOrReplaceTempView('TB_PULSAR_STARS')
spark.sql('SELECT * FROM TB_PULSAR_STARS').show()
And if you need to actually write this data to a table, create it using:
spark.sql("CREATE TABLE MYDB.TB_PULSAR_STARS AS SELECT * FROM TB_PULSAR_STARS")

Asynchronous processing in spark pipeline

I have a local Linux server with 4 cores. I am running a PySpark job on it locally which basically reads two tables from a database and saves the data into two DataFrames. I then use these two DataFrames for some processing and save the resulting processed DataFrames into Elasticsearch. Below is the code:
import time
from pyspark.sql import SparkSession, SQLContext
# imports of merged_python_file, logic1_python_file, ... and of the Elasticsearch
# constants (some_port_no, index_name, es_mappings) are omitted in the question

def save_to_es(df):
    df.write.format('es').option('es.nodes', 'es_node').option('es.port', some_port_no) \
        .option('es.resource', index_name).option('es.mapping', es_mappings).save()

def coreFun():
    start_time = int(time.time())
    spark = SparkSession.builder.master("local[1]").appName('test').getOrCreate()
    spark.catalog.clearCache()
    spark.sparkContext.setLogLevel("ERROR")
    sc = spark.sparkContext
    sqlContext = SQLContext(sc)

    select_sql = """(select * from db."master_table")"""
    df_master = spark.read.format("jdbc").option("url", "jdbcurl").option("dbtable", select_sql).option("user", "username").option("password", "password").option("driver", "database_driver").load()

    select_sql_child = """(select * from db."child_table")"""
    df_child = spark.read.format("jdbc").option("url", "jdbcurl").option("dbtable", select_sql_child).option("user", "username").option("password", "password").option("driver", "database_driver").load()

    merged_df = merged_python_file.merged_function(df_master, df_child, sqlContext)
    logic1_df = logic1_python_file.logic1_function(df_master, sqlContext)
    logic2_df = logic2_python_file.logic2_function(df_master, sqlContext)
    logic3_df = logic3_python_file.logic3_function(df_master, sqlContext)
    logic4_df = logic4_python_file.logic4_function(df_master, sqlContext)
    logic5_df = logic5_python_file.logic5_function(df_master, sqlContext)

    save_to_es(merged_df)
    save_to_es(logic1_df)
    save_to_es(logic2_df)
    save_to_es(logic3_df)
    save_to_es(logic4_df)
    save_to_es(logic5_df)

    end_time = int(time.time())
    print(end_time - start_time)
    sc.stop()

if __name__ == "__main__":
    coreFun()
The processing logic is written in separate Python files, e.g. logic1 in logic1_python_file, and so on. I pass df_master to the separate functions and they return the resulting processed DataFrames back to the driver, which I then save into Elasticsearch.
It works fine, but the problem is that everything happens sequentially: first merged_df gets processed, and while it is being processed the others simply wait even though they do not depend on the output of the merged function; then logic1_df gets processed while the others wait, and so on. This is not an ideal design given that the output of one logic step does not depend on another.
I am sure asynchronous processing can help me here, but I am not sure how to implement it for my use case. I know I may have to use some kind of queue (JMS, Kafka, etc.) to accomplish this, but I do not have a complete picture.
Please let me know how I can use asynchronous processing here. Any other input that can help improve the performance of the job is welcome.
If, during the processing of a single step such as merged_python_file.merged_function, only one CPU core is heavily utilized while the others are nearly idle, multiprocessing can speed things up. This can be achieved with Python's multiprocessing module. For more details, see the answers to How to do parallel programming in Python?
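If the bottleneck is instead that the independent Spark jobs run strictly one after another, a lighter-weight option is to submit them from multiple Python threads, since Spark's scheduler accepts concurrent jobs from a single application. A rough sketch under that assumption, reusing the names from the question:
from concurrent.futures import ThreadPoolExecutor

def build_and_save(build_df):
    # build_df is a no-argument callable returning the DataFrame to write
    save_to_es(build_df())

# The six outputs are independent of each other, so their Spark jobs can be
# submitted concurrently and Spark schedules them across the available cores.
jobs = [
    lambda: merged_python_file.merged_function(df_master, df_child, sqlContext),
    lambda: logic1_python_file.logic1_function(df_master, sqlContext),
    lambda: logic2_python_file.logic2_function(df_master, sqlContext),
    lambda: logic3_python_file.logic3_function(df_master, sqlContext),
    lambda: logic4_python_file.logic4_function(df_master, sqlContext),
    lambda: logic5_python_file.logic5_function(df_master, sqlContext),
]

with ThreadPoolExecutor(max_workers=3) as pool:
    list(pool.map(build_and_save, jobs))
Note that with the master set to local[1] only one core is available to Spark anyway; something like local[4] is needed for the concurrent jobs to actually overlap.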

Using Chunksize and Dask to process 8GB Redshift table in Pandas with Missingno

I have successfully connected Python to a Redshift table from my Jupyter Notebook.
I sampled one day of data (176,707 rows) and used Missingno to assess how much data is missing and where. No problem there.
Here's the code so far (redacted for security)...
#IMPORT NECESSARY PACKAGES FOR REDSHIFT CONNECTION AND DATA VIZ
import psycopg2
from getpass import getpass
from pandas import read_sql
import seaborn as sns
import missingno as msno
#PASSWORD INPUT PROMPT
pwd = getpass('password')
#REDSHIFT CREDENTIALS
config = {'dbname': 'abcxyz',
          'user': 'abcxyz',
          'pwd': pwd,
          'host': 'abcxyz.redshift.amazonaws.com',
          'port': 'xxxx'}
#CONNECTION UDF USING REDSHIFT CREDS AS DEFINED ABOVE
def create_conn(*args, **kwargs):
    config = kwargs['config']
    try:
        con = psycopg2.connect(dbname=config['dbname'], host=config['host'],
                               port=config['port'], user=config['user'],
                               password=config['pwd'])
        return con
    except Exception as err:
        print(err)
#DEFINE CONNECTION
con = create_conn(config=config)
#SQL TO RETRIEVE DATASET AND STORE IN DATAFRAME
df = read_sql("select * from schema.table where date = '2020-06-07'", con=con)
# MISSINGNO VIZ
msno.bar(df, labels=True, figsize=(50, 20))
This produces the following, which is exactly what I want to see:
However, I need to perform this task on a subset of the entire table, not just one day.
I ran...
SELECT "table", size, tbl_rows FROM SVV_TABLE_INFO
...and I can see that the table has a total size of 9 GB and 32.5M rows, although the sample whose completeness I need to assess is 11M rows.
So far I have identified 2 options for retrieving a larger dataset than the ~18k rows from my initial attempt.
These are:
1) Using chunksize
2) Using Dask
Using Chunksize
I replaced the necessary line of code with this:
#SQL TO RETRIEVE DATASET AND STORE IN DATAFRAME
df = read_sql("select * from derived.page_views where column_name = 'something'", con=con, chunksize=100000)
This still took several hours to run on a MacBook Pro (2.2 GHz Intel Core i7, 16 GB RAM) and gave memory warnings toward the end of the task.
When it completed I wasn't able to view the chunks anyway, and the kernel disconnected, meaning the data held in memory was lost and I'd essentially wasted a morning.
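With chunksize set, read_sql returns an iterator of DataFrames rather than a single DataFrame, so the chunks have to be consumed explicitly. One hypothetical way to build the missing-value summary chunk by chunk, without holding all the rows in memory at once (a sketch reusing the same con):
chunks = read_sql("select * from derived.page_views where column_name = 'something'",
                  con=con, chunksize=100000)

# Accumulate per-column null counts one chunk at a time
null_counts = None
total_rows = 0
for chunk in chunks:
    counts = chunk.isnull().sum()
    null_counts = counts if null_counts is None else null_counts + counts
    total_rows += len(chunk)

print(null_counts / total_rows)  # fraction of missing values per column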
My question is:
Assuming this is not an entirely foolish endeavour, would Dask be a better approach? If so, how could I perform this task using Dask?
The Dask documentation gives this example:
df = dd.read_sql_table('accounts', 'sqlite:///path/to/bank.db',
npartitions=10, index_col='id') # doctest: +SKIP
But I don't understand how I could apply this to my scenario whereby I have connected to a redshift table in order to retrieve the data.
Any help gratefully received.
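For reference, one way the Dask example might be adapted to a Redshift source is through a SQLAlchemy-style connection string; the dialect, index column, and partition count below are assumptions, not details from the question:
import dask.dataframe as dd

# Hypothetical SQLAlchemy URI for the same cluster; needs a dialect such as
# sqlalchemy-redshift (or plain postgresql+psycopg2) to be installed.
uri = "postgresql+psycopg2://abcxyz:<password>@abcxyz.redshift.amazonaws.com:xxxx/abcxyz"

# index_col must be a numeric or datetime column so Dask can split the table
# into ranges; "event_time" and npartitions=50 are placeholders.
ddf = dd.read_sql_table("page_views", uri, schema="derived",
                        index_col="event_time", npartitions=50)

# Filtering stays lazy; compute() brings only the needed subset into pandas.
subset = ddf[ddf["column_name"] == "something"].compute()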

Accessing large datasets with Python 3.6, psycopg2 and pandas

I am trying to pull a 1.7G file into a pandas dataframe from a Greenplum postgres data source. The psycopg2 driver takes 8 or so minutes to load. Using the pandas "chunksize" parameter does not help as the psycopg2 driver selects all data into memory, then hands it off to pandas, using a lot more than 2G of RAM.
To get around this, I'm trying to use a named cursor, but all the examples I've found loop through the result row by row, which just seems slow. The main problem, though, appears to be that my SQL simply stops working in the named query for some unknown reason.
Goals
load the data as quickly as possible without doing any "unnatural acts"
use SQLAlchemy if possible - used for consistency
have the results in a pandas dataframe for fast in-memory processing (alternatives?)
Have a "pythonic" (elegant) solution. I'd love to do this with a context manager but haven't gotten that far yet.
# Named Cursor Chunky Access Test
import pandas as pd
import psycopg2
import psycopg2.extras

# Connect to database - works
conn_chunky = psycopg2.connect(
    database=database, user=username, password=password, host=hostname)

# Open named (server-side) cursor - appears to work
cursor_chunky = conn_chunky.cursor(
    'buffered_fetch', cursor_factory=psycopg2.extras.DictCursor)
cursor_chunky.itersize = 100000

# This is where the problem occurs - the SQL works just fine in all other tests
# and returns 3.5M records
result = cursor_chunky.execute(sql_query)

# result is None (normal behavior for execute), so it is not iterable
df = pd.DataFrame(result.fetchall())
The pandas call returns AttributeError: 'NoneType' object has no attribute 'fetchall'. The failure seems to be due to the named cursor being used; I have tried fetchone, fetchmany, etc. Note that the goal here is to let the server chunk and serve up the data in large batches so that there is a balance of bandwidth and CPU usage. Looping through df = df.append(row) is just plain fugly.
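As a side note on the error itself: psycopg2's execute() always returns None, and the rows are fetched from the cursor object, not from execute()'s return value. A minimal, untested sketch of pulling the result in server-side batches from the named cursor above:
cursor_chunky.execute(sql_query)  # returns None by design; the rows stay on the server

frames = []
while True:
    rows = cursor_chunky.fetchmany(100000)  # one server-side batch at a time
    if not rows:
        break
    columns = [col[0] for col in cursor_chunky.description]  # column names
    frames.append(pd.DataFrame(rows, columns=columns))

df = pd.concat(frames, ignore_index=True)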
See related questions (not the same issue):
Streaming data from Postgres into Python
psycopg2 leaking memory after large query
Added standard client side chunking code per request
nrows = 3652504
size = nrows // 1000  # chunksize must be an integer
idx = 0
first_loop = True
for dfx in pd.read_sql(iso_cmdb_base, engine, coerce_float=False, chunksize=size):
    if first_loop:
        df = dfx
        first_loop = False
    else:
        df = df.append(dfx, ignore_index=True)
UPDATE:
#Chunked access
start = time.time()
engine = create_engine(conn_str)
size = 10**4
df = pd.concat((x for x in pd.read_sql(iso_cmdb_base, engine, coerce_float=False, chunksize=size)),
ignore_index=True)
print('time:', (time.time() - start)/60, 'minutes or ', time.time() - start, 'seconds')
OLD answer:
I'd try to read the data from PostgreSQL using pandas' built-in read_sql() method:
from sqlalchemy import create_engine
engine = create_engine('postgresql://user@localhost:5432/dbname')
df = pd.read_sql(sql_query, engine)

Spark 1.4 increase maxResultSize memory

I am using Spark 1.4 for my research and struggling with the memory settings. My machine has 16 GB of memory, so that should not be a problem since my file is only 300 MB. However, when I try to convert a Spark RDD to a pandas DataFrame using the toPandas() function I receive the following error:
serialized results of 9 tasks (1096.9 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
I tried to fix this by changing the Spark config file but still get the same error. I've heard this is a problem with Spark 1.4 and am wondering if you know how to solve it. Any help is much appreciated.
You can set spark.driver.maxResultSize parameter in the SparkConf object:
from pyspark import SparkConf, SparkContext
# In Jupyter you have to stop the current context first
sc.stop()
# Create new config
conf = (SparkConf()
        .set("spark.driver.maxResultSize", "2g"))
# Create new context
sc = SparkContext(conf=conf)
You should probably create a new SQLContext as well:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
From the command line, such as with pyspark, --conf spark.driver.maxResultSize=3g can also be used to increase the max result size.
Tuning spark.driver.maxResultSize is good practice for the running environment. However, it is not the solution to your problem, as the amount of data may change over time. As @Zia-Kayani mentioned, it is better to collect data wisely: if you have a DataFrame df, you can call df.rdd and do all the magic on the cluster, not in the driver. However, if you really need to collect the data, I would suggest:
Do not turn on spark.sql.parquet.binaryAsString; String objects take more space.
Use spark.rdd.compress to compress RDDs when you collect them.
Try to collect the rows using pagination (code in Scala, from another answer: Scala: How to get a range of rows in a dataframe); a PySpark version is sketched after it.
var count = df.count()  // df must also be declared as a var for the reassignment below
val limit = 50
while (count > 0) {
  val df1 = df.limit(limit)
  df1.show()             // prints the first 50 rows, then the next 50, etc.
  df = df.except(df1)
  count = count - limit
}
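For a PySpark DataFrame the same pagination idea looks roughly like this (a sketch; subtract() plays the role of Scala's except()):
count = df.count()
limit = 50
while count > 0:
    page = df.limit(limit)
    page.show()             # prints the next `limit` rows
    df = df.subtract(page)  # drop the rows already shown
    count -= limit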
It looks like you are collecting the RDD, so it will definitely gather all the data on the driver node; that is why you are facing this issue.
You should avoid collecting the data if it is not required; if it is necessary, then increase spark.driver.maxResultSize. There are two ways of defining this variable:
1 - create a Spark config that sets this variable:
conf.set("spark.driver.maxResultSize", "3g")
2 - or set this variable in the spark-defaults.conf file in Spark's conf folder, e.g.
spark.driver.maxResultSize 3g
and restart Spark.
When starting the job or the shell, you can pass
--conf spark.driver.maxResultSize=0
(a value of 0 means unlimited) to remove the bottleneck.
There is also a Spark bug,
https://issues.apache.org/jira/browse/SPARK-12837
that gives the same error,
serialized results of X tasks (Y MB) is bigger than spark.driver.maxResultSize
even though you may not be pulling data to the driver explicitly. SPARK-12837 addresses a bug whereby, prior to Spark 2, accumulators and broadcast variables were pulled to the driver unnecessarily, causing this problem.
You can set spark.driver.maxResultSize to 2GB when you start the pyspark shell:
pyspark --conf "spark.driver.maxResultSize=2g"
This allows 2 GB for spark.driver.maxResultSize.
