performance - Why is Spark not distributing jobs to all executors, but only to one executor?


My Spark cluster has 1 master and 3 workers (on 4 separate machines, each machine with 1 core). The other settings are shown in the picture below: spark.cores.max is set to 3 and spark.executor.cores is set to 3 (see pic-1).
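For reference, here is a minimal sketch of how a configuration like this could be expressed in code (the app name and master URL are placeholders; the same properties can equally be passed via spark-submit or spark-defaults.conf):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the configuration described above: cap the total cores the
// application may use across the cluster, and the cores requested per executor.
val conf = new SparkConf()
  .setAppName("my-app")                        // placeholder app name
  .setMaster("spark://master-host:7077")       // placeholder standalone master URL
  .set("spark.cores.max", "3")                 // total cores for this application
  .set("spark.executor.cores", "3")            // cores requested per executor

val sc = new SparkContext(conf)
```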

But when I submit a job to the Spark cluster, I can see in the Spark web UI that only 1 executor is used (judging by the used memory and RDD blocks in pic-2), not all of the executors. In this case the processing speed is slower than expected.

Since I've set the max cores to 3, shouldn't all the executors be used for the job?

How can I configure Spark to distribute the current job to all executors, instead of having only 1 executor run it?

Thanks a lot.

------------------ pic-1: Spark settings

------------------ pic-2: Spark web UI (executor usage)

You said you are running 2 receivers. What kind of receivers are they (Kafka, HDFS, Twitter...)?

Which Spark version are you using?

In my experience, if you are using any receiver other than the file receiver, it will occupy 1 core permanently. So when you have 2 receivers, 2 cores are permanently used for receiving data, and you are left with only 1 core doing the actual work.
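For example, here is a sketch assuming a receiver-based Spark Streaming job with two socket receivers (the host names, ports, app name and master URL are placeholders, not taken from your setup); each receiver-based input stream pins one core for the lifetime of the job:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("receiver-core-demo")            // placeholder app name
  .setMaster("spark://master-host:7077")       // placeholder master URL

val ssc = new StreamingContext(conf, Seconds(10))

// Each receiver-based input stream occupies one core as long as the job runs.
val stream1 = ssc.socketTextStream("host-a", 9999)   // receiver #1 -> 1 core
val stream2 = ssc.socketTextStream("host-b", 9999)   // receiver #2 -> 1 core

// With spark.cores.max = 3, only 3 - 2 = 1 core is left to process the batches.
stream1.union(stream2).count().print()

ssc.start()
ssc.awaitTermination()
```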

Please post a screenshot of the Spark master homepage as well, and a screenshot of the job's streaming page.

