I would like to understand why, when working with Apache Spark, we don't explicitly close JDBC connections.
See: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-spark-connector or https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
Is this due to the fact that, when we do
val collection = sqlContext.read.sqlDB(config)
or
jdbcDF.write
.format("jdbc")
(...)
.save()
we don't really open a connection but merely define a stage in the DAG? And that Spark then establishes the connection and closes it under the hood when the job actually runs?
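To make my question concrete, here is a minimal sketch of the two styles I have in mind. The local PostgreSQL URL, table name, and credentials are placeholders I invented for illustration, not taken from the linked documentation:

import java.sql.DriverManager
import org.apache.spark.sql.SparkSession

object ConnectionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("jdbc-connection-question")
      .master("local[*]")
      .getOrCreate()

    // Style 1: the built-in JDBC data source. No connection object is ever
    // exposed to my code, so there is nothing for me to close explicitly.
    val jdbcDF = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://localhost:5432/mydb") // placeholder URL
      .option("dbtable", "my_table")                          // placeholder table
      .option("user", "user")                                 // placeholder credentials
      .option("password", "password")
      .load()
    jdbcDF.show()

    // Style 2: plain JDBC inside foreachPartition. Here the connection is mine,
    // so opening and closing it is clearly my responsibility.
    jdbcDF.rdd.foreachPartition { rows =>
      val conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/mydb", "user", "password")
      try {
        rows.foreach(_ => ()) // in a real job each row would be written through conn
      } finally {
        conn.close() // explicit close, unlike in style 1
      }
    }

    spark.stop()
  }
}

In the second style the connection is opened and closed in my own code, while in the first it never appears at all, which is what prompted the question.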