BigQuery with PySpark overwriting project ID - python

I'm using BigQuery and Dataproc in Google Cloud. Both are in the same project, let's call it "project-123". I use Composer (Airflow) to run my code.
I have a simple Python script, test_script.py, that uses PySpark to read data from a table in the BigQuery public dataset:
from pyspark.sql import SparkSession
import logging
import warnings

log = logging.getLogger(__name__)  # logger assumed; the original snippet omits its setup

if __name__ == "__main__":
    # Create Spark Cluster
    try:
        spark = SparkSession.builder.appName("test_script").getOrCreate()
        log.info("Created a SparkSession")
    except ValueError:
        warnings.warn("SparkSession already exists in this scope")

    df = (
        spark.read.format("bigquery")
        .option("project", "project-123")
        .option("dataset", "bigquery-public-data")
        .option("table", "crypto_bitcoin.outputs")
        .load()
    )
I run the script using the DataProcPySparkOperator in airflow:
# This task corresponds to the ""
test_script_task = DataProcPySparkOperator(
    task_id="test_script",
    main="./test_script.py",
    cluster_name="test_script_cluster",
    arguments=[],
    # Since we are using bigquery, we need to explicitly add the connector jar
    dataproc_pyspark_jars="gs://spark-lib/bigquery/spark-bigquery-latest.jar",
)
However, every time I try I get the following error:
Invalid project ID '/tmp/test_script_20200304_407da59b/test_script.py'. Project IDs must contain 6-63 lowercase letters, digits, or dashes. Some project IDs also include domain name separated by a colon. IDs must start with a letter and may not end with a dash.
Where is this project ID coming from? It's obviously not being overwritten by my .option("project", "project-123"). My guess is that Composer is storing my spark job script at the location /tmp/test_script_20200304_407da59b/test_script.py. If that's the case, how can I overwrite the project ID?
Any help is much appreciated

I'm afraid you are mixing up the parameters: project is the project the table belongs to, and bigquery-public-data is a project rather than a dataset. Please try the following call:
df = (
    spark.read.format("bigquery")
    .option("parentProject", "project-123")
    .option("project", "bigquery-public-data")
    .option("table", "crypto_bitcoin.outputs")
    .load()
)
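If it helps, the connector also accepts a fully qualified table reference passed straight to load(), which avoids juggling the project/dataset options. This is just a sketch and assumes project-123 is the project being billed for the read:

df = (
    spark.read.format("bigquery")
    .option("parentProject", "project-123")  # project billed for the read
    .load("bigquery-public-data.crypto_bitcoin.outputs")  # fully qualified table
)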

Can't associate temp view with database in spark session

I'm trying to create a temp view using Spark, from a CSV file.
To reproduce my production scenario, I need to test my script locally; in production I'm using Glue Jobs (AWS), where there are databases and tables.
In the code below, I'm creating a database in my Spark session and using it; after that, I create a temp view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pulsar_data").getOrCreate()

df = spark.read.format('csv')\
    .options(infer_schema=True)\
    .options(header=True)\
    .load('pulsar_stars.csv')

spark.sql('CREATE DATABASE IF NOT EXISTS MYDB')
spark.sql('USE MYDB')

df.createOrReplaceTempView('MYDB.TB_PULSAR_STARS')
spark.catalog.listTables()

spark.sql('SELECT * FROM MYDB.TB_PULSAR_STARS').show()
However, when I try to select db.table, Spark can't find the relation between my temp view and my database and throws the following error:
*** pyspark.sql.utils.AnalysisException: Table or view not found: MYDB.TB_PULSAR_STARS; line 1 pos 14;
'Project [*]
+- 'UnresolvedRelation [MYDB, TB_PULSAR_STARS], [], false
Debugging my code with pdb, I have listed my spark session catalog, where I find that my table is in fact associated:
(Pdb) spark.catalog.listTables()
[Table(name='tb_pulsar_stars', database='MYDB', description=None, tableType='TEMPORARY', isTemporary=True)]
How can I make this relationship work?
The temporary view name associated with a DataFrame can only be a single segment; it cannot be qualified with a database name. This is explicitly checked here in Spark code. I would expect your code to throw AnalysisException: CREATE TEMPORARY VIEW or the corresponding Dataset APIs only accept single-part view names, but got: MYDB.TB_PULSAR_STARS - not sure why in your case it's a bit different.
Anyway, use:
df.createOrReplaceTempView('TB_PULSAR_STARS')
spark.sql('SELECT * FROM TB_PULSAR_STARS').show()
And if you need to actually write this data to a table, create it using:
spark.sql("CREATE TABLE MYDB.TB_PULSAR_STARS AS SELECT * FROM TB_PULSAR_STARS")

Snowpark Snowflake Python to run a sql statement and export to Excel

I'm creating a Snowflake procedure using the Snowpark (Python) package that executes a query into a Snowflake dataframe, and I would like to export that into Excel. How can I accomplish that, or is there a better approach? The end goal is to export the query results into Excel. It needs to be in a Snowflake procedure since we already have other "parent" procedures. Thanks!
CREATE OR REPLACE PROCEDURE EXPORT_SP()
RETURNS string not null
LANGUAGE PYTHON
RUNTIME_VERSION = '3.8'
PACKAGES = ('snowflake-snowpark-python', 'pandas')
HANDLER = 'run'
AS
$$
import pandas

def run(snowpark_session):
    ## Execute the query into a Snowflake dataframe
    results_df = snowpark_session.sql('''
        SELECT * FROM
        MY TABLES
        ;
    ''').collect()
    return results_df
$$
;
In general, you can do this by:
"Unloading" the data from the table using the COPY INTO <location> command.
Using the GET command to copy the data to your local filesystem.
Open the file with Excel! If you used the CSV format and the appropriate format options in step 1, you should be able to easily open the resulting data with Excel.
Snowpark directly supports step 1 in the DataFrameWriter.copy_into_location method. An instance of DataFrameWriter is available in the DataFrame.write attribute, as described here.
Snowpark also directly supports step 2 in the FileOperation.get method. As per the example in that documentation page, you can access this method using the .file attribute of your Snowpark session object.
Putting this all together, you should be able to do something like this in Snowpark to save a single exported file into the current working directory:
source_table = "my_table"
unload_location = "@my_stage/export.csv"

def run(session):
    df = session.table(source_table)
    df.write.copy_into_location(
        unload_location,
        file_format_type="csv",
        format_type_options=dict(
            compression="none",
            field_delimiter="\t",
        ),
        single=True,
        header=True,
    )
    session.file.get(unload_location, ".")
You can of course use session.sql() instead of session.table() as needed. You might also want to consider unloading data to the stage associated with the source data, instead of creating a separate stage, i.e. if the data is from table my_table then you would unload to the stage @%my_table.
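For example, here is a minimal variant of the sketch above that goes through session.sql() and the table stage; the query and table name are placeholders, not from the original question:

def run(session):
    # my_table and the filter are hypothetical; substitute your own query
    df = session.sql("SELECT * FROM my_table WHERE amount > 0")
    # Unload to the table stage that Snowflake maintains for my_table
    df.write.copy_into_location(
        "@%my_table/export.csv",
        file_format_type="csv",
        format_type_options=dict(compression="none", field_delimiter="\t"),
        single=True,
        header=True,
    )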
For more details, refer to the documentation pages I linked, which contain important reference information as well as several examples.
Note that I am not sure if session.file is accessible from inside a stored procedure; you will have to experiment to see what works in your specific situation.
As always, remember that this is untested code written by an unpaid volunteer. Always triple-check and test any code that is provided here. Please do ask questions in the comments if anything is still unclear.
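If you need an actual .xlsx workbook rather than a CSV that Excel can open, one possible (untested) route inside the procedure is to go through pandas. This assumes openpyxl is added to the procedure's PACKAGES, that writing to /tmp is allowed in your environment, and that my_table and my_stage are placeholders:

def run(session):
    # Convert the query results to a pandas DataFrame
    pdf = session.sql("SELECT * FROM my_table").to_pandas()
    local_path = "/tmp/export.xlsx"
    pdf.to_excel(local_path, index=False)  # requires openpyxl
    # Upload the workbook to a stage so it can be downloaded afterwards
    session.file.put(local_path, "@my_stage", auto_compress=False)
    return "exported"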

How can you load a CSV into PyFlink as a Streaming Table Source?

I am trying to set up a simple playground environment to use the Flink Python Table API. The jobs I am ultimately trying to write will feed off of a Kafka or Kinesis queue, but that makes playing around with ideas (and tests) very difficult.
I can happily load from a CSV and process it in batch mode. But I cannot get it to work in streaming mode. How would I do something similar in a StreamExecutionEnvironment (primarily so I can play around with windows)?
I understand that I need to get the system to use event time (because processing time would all come in at once), but I cannot find any way to set this up. In principle I should be able to set one of the columns of the CSV to be the event time, but it is not clear from the docs how to do this (or if it is possible).
To get the Batch execution tests running I used the below code, which reads from an input.csv and outputs to an output.csv.
from pyflink.dataset import ExecutionEnvironment
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import (
    TableConfig,
    DataTypes,
    BatchTableEnvironment,
    StreamTableEnvironment,
)
from pyflink.table.descriptors import Schema, Csv, OldCsv, FileSystem
from pathlib import Path

exec_env = ExecutionEnvironment.get_execution_environment()
exec_env.set_parallelism(1)
t_config = TableConfig()
t_env = BatchTableEnvironment.create(exec_env, t_config)

root = Path(__file__).parent.resolve()
out_path = root / "output.csv"

try:
    out_path.unlink()
except:
    pass

from pyflink.table.window import Tumble

(
    t_env.connect(FileSystem().path(str(root / "input.csv")))
    .with_format(Csv())
    .with_schema(
        Schema().field("time", DataTypes.TIMESTAMP(3)).field("word", DataTypes.STRING())
    )
    .create_temporary_table("mySource")
)

(
    t_env.connect(FileSystem().path(str(out_path)))
    .with_format(Csv())
    .with_schema(
        Schema().field("word", DataTypes.STRING()).field("count", DataTypes.BIGINT())
    )
    .create_temporary_table("mySink")
)

(
    t_env.from_path("mySource")
    .group_by("word")
    .select("word, count(1) as count")
    .filter("count > 1")
    .insert_into("mySink")
)

t_env.execute("tutorial_job")
and input.csv is
2000-01-01 00:00:00.000000000,james
2000-01-01 00:00:00.000000000,james
2002-01-01 00:00:00.000000000,steve
So my question is how could I set it up so that it reads from the same CSV, but uses the first column as the event time and allow me to write code like:
(
    t_env.from_path("mySource")
    .window(Tumble.over("10.minutes").on("time").alias("w"))
    .group_by("w, word")
    .select("w, word, count(1) as count")
    .filter("count > 1")
    .insert_into("mySink")
)
Any help would be appreciated; I can't work this out from the docs. I am using Python 3.7 and Flink 1.11.1.
If you use the descriptor API, you can specify that a field is the event-time field through the schema:
# Rowtime is imported from pyflink.table.descriptors
.with_schema(  # declare the schema of the table
    Schema()
    .field("rowtime", DataTypes.TIMESTAMP())
    .rowtime(
        Rowtime()
        .timestamps_from_field("time")
        .watermarks_periodic_bounded(60000)
    )
    .field("a", DataTypes.STRING())
    .field("b", DataTypes.STRING())
    .field("c", DataTypes.STRING())
)
But I would still recommend using DDL: on the one hand it is easier to use, and on the other hand there are some bugs in the existing descriptor API; the community is discussing refactoring it.
Have you tried using watermark strategies? As mentioned here, you need to use watermark strategies to use event time. For the PyFlink case, I personally think it is easier to declare it in DDL, as in the sketch below.
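A minimal sketch of that DDL route, assuming Flink 1.11 in streaming mode; the table name, column names, and watermark interval are illustrative, and the timestamp format/precision may need adjusting for your input.csv:

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
t_env = StreamTableEnvironment.create(env)

# Declare the CSV source with an event-time column and a watermark on it
t_env.execute_sql("""
    CREATE TABLE mySource (
        `ts` TIMESTAMP(3),
        `word` STRING,
        WATERMARK FOR `ts` AS `ts` - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'filesystem',
        'path' = 'input.csv',
        'format' = 'csv'
    )
""")

Once the source is declared this way, ts is a proper event-time attribute and the Tumble window from the question's second snippet can be applied on it.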

Provide blob type to read an Azure append blob from PySpark

The ultimate goal is to be able to read the data in my Azure container into a PySpark dataframe.
Steps until now
The steps I have followed till now:
1. Written this code:
spark = SparkSession(SparkContext())

spark.conf.set(
    "fs.azure.account.key.%s.blob.core.windows.net" % AZURE_ACCOUNT_NAME,
    AZURE_ACCOUNT_KEY
)
spark.conf.set(
    "fs.wasbs.impl",
    "org.apache.hadoop.fs.azure.NativeAzureFileSystem"
)

container_path = "wasbs://%s@%s.blob.core.windows.net" % (
    AZURE_CONTAINER_NAME, AZURE_ACCOUNT_NAME
)
blob_folder = "%s/%s" % (container_path, AZURE_BLOB_NAME)
df = spark.read.format("text").load(blob_folder)
print(df.count())
2. Set public access and anonymous access to my Azure container.
3. Added two jars, hadoop-azure-2.7.3.jar and azure-storage-2.2.0.jar, to the path.
Problem
But now I am stuck with this error: Caused by: com.microsoft.azure.storage.StorageException: Incorrect Blob type, please use the correct Blob type to access a blob on the server. Expected BLOCK_BLOB, actual UNSPECIFIED..
I have not been able to find anything which talks about / resolves this issue. The closest I have found is this which does not work / is outdated.
EDIT
I found that the azure-storage-2.2.0.jar did not support APPEND_BLOB. I upgraded to azure-storage-4.0.0.jar and it changed the error from Expected BLOCK_BLOB, actual UNSPECIFIED. to Expected BLOCK_BLOB, actual APPEND_BLOB.. Does anyone know how to pass the correct type to expect?
Can someone please help me with resolving this?
I have minimal expertise in working with Azure but I don't think it should be this difficult to read and create a Spark dataframe from it. What am I doing wrong?

AWS Glue and update duplicating data

I'm using AWS Glue to move multiple files to an RDS instance from S3. Each day I get a new file into S3 which may contain new data, but can also contain a record I have already saved with some updated values. If I run the job multiple times I will of course get duplicate records in the database. Instead of multiple records being inserted I want Glue to try and update that record if it notices a field has changed; each record has a unique id. Is this possible?
I followed a similar approach to the one suggested as the 2nd option by Yuriy: get the existing data as well as the new data, then do some processing to merge the two and write with overwrite mode. The following code should help you get an idea of how to solve this problem.
from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext()
glueContext = GlueContext(sc)

# get your source data
src_data = glueContext.create_dynamic_frame.from_catalog(database=src_db, table_name=src_tbl)
src_df = src_data.toDF()

# get your destination data
dst_data = glueContext.create_dynamic_frame.from_catalog(database=dst_db, table_name=dst_tbl)
dst_df = dst_data.toDF()

# Now merge the two data frames and drop exact duplicates
# (you may also need key-based deduplication to prefer the newer records)
merged_df = dst_df.union(src_df).dropDuplicates()

# Finally save data to destination with OVERWRITE mode
merged_df.write.format('jdbc').options(
    url=dest_jdbc_url,
    user=dest_user_name,
    password=dest_password,
    dbtable=dest_tbl
).mode("overwrite").save()
Unfortunately there is no elegant way to do it with Glue. If you were writing to Redshift you could use postactions to implement a Redshift merge operation. However, it's not possible for other JDBC sinks (afaik).
Alternatively in your ETL script you can load existing data from a database to filter out existing records before saving. However if your DB table is big then the job may take a while to process it.
Another approach is to write into a staging table with mode 'overwrite' first (replace existing staging data) and then make a call to a DB via API to copy new records only into a final table.
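For the Redshift case mentioned above, a rough and untested sketch of the staging-table-plus-postactions idea could look like this, reusing glueContext and src_df from the snippet earlier in this thread; the connection name, tables, key column, and S3 temp dir are all placeholders:

from awsglue.dynamicframe import DynamicFrame

# Write the new data to a staging table, then merge into the final table via postactions
staging_dyf = DynamicFrame.fromDF(src_df, glueContext, "staging_dyf")

merge_sql = """
    BEGIN;
    DELETE FROM final_table USING staging_table WHERE final_table.id = staging_table.id;
    INSERT INTO final_table SELECT * FROM staging_table;
    DROP TABLE staging_table;
    END;
"""

glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=staging_dyf,
    catalog_connection="my-redshift-connection",   # Glue connection name (placeholder)
    connection_options={
        "dbtable": "staging_table",
        "database": "my_db",
        "postactions": merge_sql,
    },
    redshift_tmp_dir="s3://my-temp-bucket/redshift/",
)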
I have used INSERT INTO table ... ON DUPLICATE KEY ... for upserts into an Aurora RDS instance running the MySQL engine. Maybe this would be a reference for your use case. We cannot do this through the JDBC writer, since only the APPEND, OVERWRITE, and ERROR modes are currently supported.
I am not sure which RDS database engine you are using; the following is an example for MySQL upserts.
Please see this reference, where I have posted a solution using INSERT INTO TABLE .. ON DUPLICATE KEY for MySQL:
Error while using INSERT INTO table ON DUPLICATE KEY, using a for loop array
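As a rough sketch of that upsert idea (untested; it assumes a MySQL-compatible engine, a unique key on id, that pymysql is available to the Glue job, and that the table, column, and connection names are placeholders):

import pymysql

# Collect the rows to upsert from the source frame; for large frames, iterate per partition instead
rows = [(r["id"], r["name"], r["amount"]) for r in src_df.collect()]

conn = pymysql.connect(host=db_host, user=db_user, password=db_password, database=db_name)
try:
    with conn.cursor() as cur:
        cur.executemany(
            """
            INSERT INTO my_table (id, name, amount)
            VALUES (%s, %s, %s)
            ON DUPLICATE KEY UPDATE name = VALUES(name), amount = VALUES(amount)
            """,
            rows,
        )
    conn.commit()
finally:
    conn.close()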
