Run Hive queries on an empty DataFrame using PySpark - python

I want to test my complex Hive queries beforehand by executing them on empty DataFrames using PySpark or pandas. How can I do this? I don't want to create a Hive connection; I just want to mock the tables as DataFrames and then execute the query against them.
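One way to do this, assuming the table schemas are known, is to create empty DataFrames with those schemas in a local SparkSession, register them as temporary views under the names the query expects, and run the query with spark.sql. The table names, columns, and query below are made up purely for illustration:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.master("local[1]").appName("query-smoke-test").getOrCreate()

# Empty DataFrames that mock the Hive tables (schemas are illustrative)
employees_schema = StructType([
    StructField("emp_id", IntegerType()),
    StructField("name", StringType()),
    StructField("dept_id", IntegerType()),
])
departments_schema = StructType([
    StructField("dept_id", IntegerType()),
    StructField("dept_name", StringType()),
])
spark.createDataFrame([], employees_schema).createOrReplaceTempView("employees")
spark.createDataFrame([], departments_schema).createOrReplaceTempView("departments")

# The query is parsed, columns are resolved, and it runs end to end;
# it simply returns zero rows, which is enough to catch most mistakes.
result = spark.sql("""
    SELECT d.dept_name, COUNT(*) AS cnt
    FROM employees e
    JOIN departments d ON e.dept_id = d.dept_id
    GROUP BY d.dept_name
""")
result.show()

Note that this only checks syntax, column resolution, and join logic; Hive-specific behaviour such as UDFs, SerDes, or partitioning is not exercised without a real metastore.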

Related

How can I use a SQL view to write a pandas dataframe to a SQL table from within python?

I have a DataFrame in my Python program with columns corresponding to a table on my SQL Server. I want to append the contents of my DataFrame to the SQL table. Here's the catch: I don't have permission to access the table itself; I can only interact with it through a view.
I know that if I could write directly to the table I could use pandas' to_sql function via SQLAlchemy. However, I can only write to the table through a view in the database.
Is this even possible? Thanks for the help.
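Whether this works depends entirely on the database: if the view is updatable (for example, a simple single-table view the server allows INSERTs through), to_sql can target the view name just as it would a table. A minimal sketch, with a placeholder connection string, schema, and view name:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:password@my_dsn")  # placeholder connection string

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# to_sql just issues INSERT statements against the given name, so a writable
# view can stand in for the underlying table if the database permits it.
df.to_sql("my_view", con=engine, schema="dbo", if_exists="append", index=False)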

How to use Spark BigQuery connector to join multiple tables and then fetch the data into dataframe?

I have to read three different BigQuery tables and then join them to get some data, which will be stored in a GCS bucket. I am using the Spark BQ connector.
# Load data from BigQuery.
bqdf = spark.read.format('bigquery') \
    .option('table', bq_dataset + bq_table) \
    .load()
bqdf.createOrReplaceTempView('bqdf')
This reads the entire table into a DataFrame. I know that I can apply a filter to each table and also select only the required columns, then create three DataFrames and join them to get the output.
Is there a way to achieve this using the connector itself?
I do have the option of using the BigQuery client API (https://googleapis.dev/python/bigquery/latest/index.html) and importing it in the PySpark script, but if I can achieve this through the Spark BQ connector, I don't want to call that API from the Python script.
Please help.
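A sketch of that approach using only the connector is below; the connector pushes the column selection and simple filters down to BigQuery, so only the needed data is read. The dataset, table, column, and bucket names are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def load_table(table, columns, filter_expr):
    # Column pruning and simple predicates are pushed down to BigQuery by the connector.
    return (spark.read.format("bigquery")
            .option("table", table)
            .load()
            .select(*columns)
            .where(filter_expr))

orders    = load_table("my_dataset.orders",    ["order_id", "customer_id", "amount"], "order_date >= '2020-01-01'")
customers = load_table("my_dataset.customers", ["customer_id", "region_id"],          "active = true")
regions   = load_table("my_dataset.regions",   ["region_id", "region_name"],          "region_name IS NOT NULL")

joined = (orders.join(customers, "customer_id")
                .join(regions, "region_id"))

# Write the joined result to a GCS bucket (path is a placeholder).
joined.write.mode("overwrite").parquet("gs://my-bucket/output/joined/")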

How to execute simple SQL query such as update in pyspark?

I want to execute an UPDATE query in SQL from PySpark, based on some logic I am using. All I could find is documentation on how to read from SQL, but there are no proper examples of executing an UPDATE or CREATE statement.
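As far as I know, Spark's JDBC data source only reads and writes whole DataFrames and does not run arbitrary UPDATE or CREATE statements, so a common workaround is to open a plain database connection for the DML and keep Spark for the reads. A sketch, where the driver (psycopg2), connection details, and SQL are all placeholders:

import psycopg2  # any DB-API driver for your database works the same way

conn = psycopg2.connect(host="db-host", dbname="mydb", user="user", password="secret")
try:
    with conn.cursor() as cur:
        # The statement and parameters here are purely illustrative.
        cur.execute(
            "UPDATE employees SET status = %s WHERE last_login < %s",
            ("inactive", "2020-01-01"),
        )
    conn.commit()
finally:
    conn.close()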

Add results columns to existing BigQuery table using Apache Beam

I would like to add a column that is the result of two existing columns in BigQuery. I am using Apache Beam to read from BigQuery, process the data, and write the result back to the same BigQuery table as a new column.
The Beam BigQuery connector does not explicitly support BigQuery DML; however, you can write a pipeline that inserts the result of your processing into a separate table, and after the pipeline runs, run a DML statement that updates the column in the original table from that auxiliary table (see the sketch below).
Alternatively, if your processing logic can be expressed in SQL, you're probably better off implementing it as a SQL DML statement without a pipeline at all.
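A sketch of that post-pipeline step with the BigQuery client library; it assumes the new column has already been added to the original table's schema, and the project, dataset, table, and column names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Copy the processed values from the auxiliary table into the original table.
dml = """
UPDATE `my_project.my_dataset.original_table` AS t
SET t.new_col = a.new_col
FROM `my_project.my_dataset.aux_table` AS a
WHERE t.id = a.id
"""

client.query(dml).result()  # blocks until the DML job finishes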

Using pyspark to connect to hive tables

I am trying to query a Hive table from pyspark.
I am using the below statements:
from pyspark.sql import HiveContext
HiveContext(sc).sql('from `dbname.tableName` select `*`')
I am very new to Hadoop systems.
I need to understand the correct way to read some data from a Hive table into a DataFrame so I can write the rest of my program against it.
sqlCtx.sql has access to Hive tables. You can use it in the following way:
my_dataframe = sqlCtx.sql("Select * from employees")
my_dataframe.show()
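On Spark 2.x and later, the same thing is usually done through a SparkSession with Hive support enabled instead of a HiveContext; the database and table names below are just examples:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-read")
         .enableHiveSupport()
         .getOrCreate())

my_dataframe = spark.sql("SELECT * FROM dbname.employees")
my_dataframe.show()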
