Unable to launch python scripts on the YARN Hadoop cluster via remote - python

For a few weeks now I have been trying to submit Python scripts via remote access, or to connect to the PySpark shell of the YARN cluster.
I am new to the Hadoop world. What I want is to submit Spark scripts from my local shell to the external Hadoop cluster.
My situation: external Hadoop YARN cluster, with access to the relevant ports. I am on Windows 7 64-bit with Python 2.7.9 64-bit and Spark 1.4.1. The Hadoop cluster itself runs without any problems.
My problem: submitting Python scripts via remote access to the Hadoop cluster doesn't work.
If I try
spark-submit --master yarn-cluster --num-executors 2 --driver-memory 512m --executor-memory 512m --executor-cores 4 ... example.py
It says
Error: Cluster deploy mode is not applicable to Spark shells.
Exception: Java gateway process exited before sending the driver its port number
So as far as I understand the problem, I think the question is:
How do I set the YARN config correctly so that my local client (which is not part of the cluster) can connect to the external YARN cluster?

SPARK VERSION 1.6.0 (the current version at the time of writing).
Python code cannot be executed in yarn-cluster mode. Python can only be executed in cluster mode on a native Spark cluster.
You can either switch to a Spark standalone cluster or re-implement your code in Java or Scala.
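The first error message above also hints at a third route: with YARN, Python applications can still run in the client deploy mode, where the driver stays on the local machine. A rough sketch of such a submission, reusing the flags and example.py from the question (the argv-list form is just for illustration):

```python
import subprocess

# Sketch of a client-mode submission against YARN (driver runs locally).
# The script name and resource flags are taken from the question; whether
# this works still depends on HADOOP_CONF_DIR pointing at the cluster's
# YARN configuration on the local machine.
submit_cmd = [
    "spark-submit",
    "--master", "yarn-client",   # client mode: valid for Python apps, unlike yarn-cluster
    "--num-executors", "2",
    "--driver-memory", "512m",
    "--executor-memory", "512m",
    "--executor-cores", "4",
    "example.py",
]

def run_submit(cmd):
    """Launch spark-submit with the given argv and return its exit code."""
    return subprocess.call(cmd)
```

Calling `run_submit(submit_cmd)` then hands the whole command to spark-submit exactly as it would be typed in the shell.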

Related

Getting result of spark submit job in python

We have a standalone Spark cluster running on several machines, and currently we run Spark jobs in client mode, meaning that the driver program executes on the host machine, not on the cluster.
We want to be able to run the driver program on the cluster and submit our jobs to the Spark cluster. Using the spark-submit bash script provided by Spark itself, we can give it a Python module and the Spark cluster runs it. But what I want now is to get the result of that parallel computation back in Python, because I want to write the results to a PostgreSQL database for further use.
How can I get the results of a Spark job deployed in cluster mode back in Python? And how can I submit a Python module using pyspark from Python, without using the spark-submit bash script?
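One common pattern for the client-mode half of this is to keep the driver in Python, collect() the (small) result on the driver, and write it out with a regular database library. A minimal sketch, assuming pyspark and the psycopg2 PostgreSQL driver are available; the `results` table and the demo computation are made up for illustration:

```python
def insert_statement(table, columns):
    """Build a parameterized INSERT for the collected rows (pure helper)."""
    placeholders = ", ".join(["%s"] * len(columns))
    return "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(columns), placeholders)

def run_job_and_store(master_url, db_dsn):
    # Imports are local so the sketch can be read without pyspark installed.
    from pyspark.sql import SparkSession
    import psycopg2  # assumed choice of PostgreSQL driver

    spark = (SparkSession.builder
             .master(master_url)
             .appName("collect-demo")
             .getOrCreate())
    # collect() runs the distributed job and brings the result back to the
    # driver as plain Python Row objects -- only do this for small results.
    rows = spark.range(10).selectExpr("id", "id * id AS sq").collect()

    with psycopg2.connect(db_dsn) as conn:
        with conn.cursor() as cur:
            cur.executemany(insert_statement("results", ["id", "sq"]),
                            [(r["id"], r["sq"]) for r in rows])
    spark.stop()
```

This only answers the client-mode case; in cluster mode the driver (and therefore this write) runs on a cluster node, so the database must be reachable from there.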

Problem submitting Pyspark jobs from Windows Driver to Ubuntu Spark Cluster

I am having trouble submitting a Pyspark job from my Windows driver machine (Win 10) to a simple Spark cluster running on Ubuntu.
There are several posts that already attempt to answer this question, most notably this one from ThatDataGuy here, but none of them have helped.
Every time I try to submit the simple wordcount.py example to my remote master from my Windows box, I get the following error:
Cannot run program 'C:\apps\Python\3.6.6\python.exe': error=2, No such file or directory
This is a Java IOException generated by the Py4J jar.
My Spark cluster is a simple master + 1 worker setup in VirtualBox, set up via Vagrant. All machines (my Spark driver laptop and the two VMs, master and worker) have identical Spark 2.4.2, Python 3.6.6, and Scala 2.12.8. Note that Scala programs using spark-submit against the remote cluster work fine, as does anything run in local mode. The code examples also work fine when run directly on either the master or worker nodes. It's only when I try to use my Windows laptop as a Spark driver in Pyspark against the Ubuntu Spark cluster that this issue arises. It always returns the error above.
It seems that Py4j is trying to use or instantiate Python from my Windows driver's Python path, which of course my Linux cluster can't see. I have already set the Pyspark Python path to a different value in the cluster nodes. I have set both PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON in the nodes' environment variables (via .bashrc), in spark-defaults.conf, AND in the spark-env.sh files. All values point to /usr/local/bin/python3, as that's where Python 3.6.6 is installed on the master and worker nodes.
I've also (just as a hunch) aliased "python" to point to /usr/local/bin/python3 on the nodes, and then changed my Windows Python shortcut to pull up the same Python version. No luck, but I was grasping at straws. ;/ The error simply changed to:
Cannot run program 'python': error=2, No such file or directory
I did see an article saying that the Py4J 0.10.7 library does not support Python 3.7, which caused me to drop down to Python 3.6. The error stayed the same after that, though.
The only thing I haven't done is to try to set up an additional shared / synced folder in Vagrant back to my Windows Python installation, and then use /vagrant/shared/python/whatever in my remote PYSPARK settings. No idea whether that would work, given that I'm dealing with Windows and Linux Python flavors (all 3.6.6). Ugh. :/
Any ideas? I've got a Windows 10 machine and I like to do my Python development there. I've also got 64GB of RAM so I'd like to use it. Please don't make me switch to Scala! ;)
-- Pyspark works fine in local mode
spark-submit C:\apps\Spark\spark-2.4.2\examples\src\main\python\wordcount.py C:\Users\sitex\Desktop\p_and_p_ch1.txt
-- Pyspark fails when calling master with IOException
spark-submit --master spark://XXX.XX.XXX.XXX:7077 C:\apps\Spark\spark-2.4.2\examples\src\main\python\wordcount.py C:\Users\sitex\Desktop\p_and_p_ch1.txt
UPDATE: OK, so it looks like my workaround is to pretend that my driver (the Windows laptop) knows the Python path on Linux that the worker needs to use. Fortunately for me, I do know it, as this entire setup is running on my laptop. Here is the command that gets me past the error:
spark-submit --conf spark.pyspark.driver.python=python --conf spark.pyspark.python=/usr/local/bin/python3 --master spark://172.28.128.150:7077 C:\apps\Spark\spark-2.4.2\examples\src\main\python\wordcount.py /vagrant/shared/p_and_p_ch1.txt
Now, I should add that this DOESN'T run wordcount.py, as I quickly realized that my cluster cannot figure out Windows paths, and my attempt to use a Vagrant synced / shared folder results in a File Not Found on the p_and_p_ch1.txt file. But it does get me past my dreaded error. I can figure out how to stash my files on a network share / S3 / etc. some other day.
This puts a lot of onus on the Spark driver knowing exactly which Python path the cluster needs to use. Fortunately I know these settings, as the setup is entirely on my laptop, but isn't the entire point of this that I am supposed to submit Spark jobs to a cluster without the driver (me) knowing settings like the worker nodes' Python path? I'm wondering if this is just a Windows + Linux quirk.
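One way to take that knowledge off the driver's command line is to keep the two Python-path settings from the working command above in one place, such as the cluster's spark-defaults.conf. A sketch, using the paths from this setup; the helper name is made up for illustration:

```python
# The two settings from the working spark-submit command above, as a mapping.
# If these lines live in spark-defaults.conf on the machine running
# spark-submit, the driver no longer has to pass them explicitly.
python_paths = {
    "spark.pyspark.driver.python": "python",            # the Windows driver
    "spark.pyspark.python": "/usr/local/bin/python3",   # the Linux workers
}

def to_submit_flags(conf):
    """Render the mapping as spark-submit --conf flags (sorted for stability)."""
    flags = []
    for key, value in sorted(conf.items()):
        flags += ["--conf", "{}={}".format(key, value)]
    return flags
```

The same pairs can equally be written as `key value` lines in spark-defaults.conf; either way, the worker-side path lives in configuration rather than in every submit command.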

No resource available for a spark submitted script on a remote cluster [duplicate]

I have a remote Ubuntu server on linode.com with 4 cores and 8 GB of RAM.
I have a Spark-2 cluster consisting of 1 master and 1 slave on my remote Ubuntu server.
I have started PySpark shell locally on my MacBook, connected to my master node on remote server by:
$ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark --master spark://[server-ip]:7077
I tried executing the simple Spark example from the website:
from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
df = spark.read.json("/path/to/spark-2.0.0-bin-hadoop2.7/examples/src/main/resources/people.json")
I got the error:
Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient resources
I have enough memory on my server and also on my local machine, but I am getting this weird error again and again. I have 6 GB for my Spark cluster, and my script uses only 4 cores with 1 GB of memory per node.
I have Googled this error and tried to set up different memory configs, and also disabled the firewall on both machines, but nothing has helped. I have no idea how to fix it.
Has anyone faced the same problem? Any ideas?
You are submitting the application in client mode. That means the driver process is started on your local machine.
When executing Spark applications, all machines have to be able to communicate with each other. Most likely your driver process is not reachable from the executors (for example, it is using a private IP or is hidden behind a firewall). If that is the case, you can confirm it by checking the executor logs (go to the application, select one of the workers with status EXITED, and check stderr; you "should" see the executor failing due to org.apache.spark.rpc.RpcTimeoutException).
There are two possible solutions:
Submit the application from a machine that can be reached from your cluster.
Submit the application in cluster mode. This will use cluster resources to start the driver process, so you have to account for that.
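For the first option, it can also help to make the driver advertise an address that the executors can actually route to, via spark.driver.host (and optionally a fixed spark.driver.port for firewall rules). A sketch with a placeholder host value; the helper function is hypothetical:

```python
def reachability_conf(driver_host, driver_port=0):
    """Config for making a client-mode driver reachable from executors.

    driver_host must be an address routable from the worker nodes (the
    value passed in examples below is a documentation placeholder).
    Leaving driver_port at 0 lets Spark pick a free port; fix it only if
    a firewall requires one known open port.
    """
    conf = {"spark.driver.host": driver_host}
    if driver_port:
        conf["spark.driver.port"] = str(driver_port)
    return conf
```

These pairs can then be passed as `--conf key=value` flags to pyspark or spark-submit, or set on the SparkSession builder.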

How to start a standalone cluster using pyspark?

I am using pyspark on Ubuntu with Python 2.7.
I installed it using
pip install pyspark --user
and am trying to follow the instructions to set up a Spark cluster.
I can't find the start-master.sh script.
I assume that this has to do with the fact that I installed pyspark rather than regular Spark.
I found here that I can connect a worker node to the master via pyspark, but how do I start the master node with pyspark?
https://pypi.python.org/pypi/pyspark
The Python packaging for Spark is not intended to replace all ... use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to setup your own standalone Spark cluster. You can download the full version of Spark from the Apache Spark downloads page.
Well, I did a bit of a mix-up in the OP.
You need to get Spark onto the machine that should run as the master.
You can download it here.
After extracting it, you have a spark/sbin folder, where you will find the start-master.sh script. You need to start it with the -h argument.
Please note that you need to create a spark-env file, as explained here, and define the Spark local and master variables; this is important on the master machine.
After that, on the worker nodes, use the start-slave.sh script to start them.
And you are good to go; you can use a Spark context inside Python to use the cluster!
If you are already using pyspark through a conda / pip installation, there's no need to install Spark and set up environment variables again for a cluster setup.
A conda / pip pyspark installation is missing only the 'conf', 'sbin', 'kubernetes', and 'yarn' folders. You can simply download Spark and move those folders into the folder where pyspark is located (usually the site-packages folder inside Python).
After you have installed pyspark via pip install pyspark, you can start the Spark standalone cluster master process with this command:
spark-class org.apache.spark.deploy.master.Master -h 127.0.0.1
And then you can add some workers (executors), which will process the jobs:
spark-class org.apache.spark.deploy.worker.Worker \
spark://127.0.0.1:7077 \
-c 4 -m 8G
The -c and -m flags specify the number of CPU cores and the amount of memory provided by the worker.
The 127.0.0.1 local address is used here for security reasons (it isn't good if anyone just copy/pasting these lines exposes an "arbitrary code execution service" on their network). For a distributed standalone Spark cluster setup, a different address should be used (e.g., a private IP address in an isolated network available only to the cluster nodes and their intended users), and the official Spark security guide should be read.
The spark-class script is contained in the pyspark Python package; it is a wrapper that loads the environment variables from spark-env.sh and adds the corresponding Spark jar locations to the -cp flag of the java command.
If you need to configure the environment, consult the official Spark docs, but the setup also works, and may be suitable for regular usage, with the default parameters. Also, see the flags for the master/worker commands via their --help.
This is an example of how to connect to this standalone cluster using the pyspark script with the ipython shell:
PYSPARK_DRIVER_PYTHON=ipython \
pyspark --master spark://127.0.0.1:7077 \
  --num-executors 2 \
  --executor-cores 2 \
  --executor-memory 4G
The code for instantiating a Spark session manually, e.g. in Jupyter:
from pyspark.sql import SparkSession
spark = (
    SparkSession.builder
    .master("spark://127.0.0.1:7077")
    # the number of executors this job needs
    .config("spark.executor.instances", 2)
    # the number of CPU cores and the memory this job needs from each
    # executor; they will be reserved on the worker
    .config("spark.executor.cores", "2")
    .config("spark.executor.memory", "4G")
    .getOrCreate()
)

multiple spark application submission on standalone mode

I have 4 Spark applications (finding the word count from a text file), written in 4 different languages (R, Python, Java, Scala):
./wordcount.R
./wordcount.py
./wordcount.java
./wordcount.scala
Spark works in standalone mode with:
1. 4 worker nodes
2. 1 core for each worker node
3. 1 GB memory for each node
4. core_max set to 1
./conf/spark-env.sh
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=1"
export SPARK_WORKER_OPTS="-Dspark.deploy.defaultCores=1"
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_INSTANCES=4
I submitted the Spark applications using a pgm.sh file in the terminal:
./bin/spark-submit --master spark://-Aspire-E5-001:7077 ./wordcount.R &
./bin/spark-submit --master spark://-Aspire-E5-001:7077 ./wordcount.py &
./bin/spark-submit --master spark://-Aspire-E5-001:7077 ./project_2.jar &
./bin/spark-submit --master spark://-Aspire-E5-001:7077 ./project_2.jar
When each application executes individually, it takes 2 seconds.
When all of them are executed via the .sh file in the terminal, it takes 5 to 6 seconds.
How do I run the different Spark applications in parallel?
How do I assign each Spark application to an individual core?
Edit: In standalone mode, per the documentation, you simply need to set spark.cores.max to something less than the size of your standalone cluster, and it is also advisable to set spark.deploy.defaultCores for applications that don't explicitly set this setting.
Original Answer (assuming this was running in something like local or YARN): When you submit multiple Spark applications, they should run in parallel automatically, provided that the cluster or server you are running on is configured to allow it. A YARN cluster, for example, would run the applications in parallel by default. Note that the more applications you run in parallel, the greater the risk of resource contention.
On "how to assign each Spark application to an individual core": you don't; Spark handles the scheduling of workers to cores. You can configure how many resources each of your executors uses, but their allocation is up to the scheduler (be that Spark, YARN, or some other scheduler).
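The standalone-mode fix above can be sketched as the corresponding submit flags; the master URL and application files are taken from the question, and the helper name is made up for illustration:

```python
def capped_submit(master_url, app, cores_max=1):
    """spark-submit argv that gives one application a hard core cap.

    With 4 workers of 1 core each and spark.cores.max=1, each of the four
    applications can hold one core, so all of them can be scheduled at the
    same time instead of queuing behind each other.
    """
    return [
        "spark-submit",
        "--master", master_url,
        "--conf", "spark.cores.max={}".format(cores_max),
        app,
    ]

# One command per application, ready to be launched in the background
# (master URL and file names as in the question's pgm.sh).
jobs = [capped_submit("spark://-Aspire-E5-001:7077", a)
        for a in ["./wordcount.R", "./wordcount.py", "./project_2.jar"]]
```

Each argv list corresponds to one `spark-submit ... &` line from the script, just with the per-application core cap added.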
