I have a PySpark DataFrame that looks like this:
sdf1 = sc.parallelize([
    ["toto", "tata", ["table", "column"], "SELECT {1} FROM {0}"],
    ["titi", "tutu", ["table", "column"], "SELECT {1} FROM {0}"],
]).toDF(["table", "column", "parameters", "statement"])
+-----+------+---------------+-------------------+
|table|column| parameters| statement|
+-----+------+---------------+-------------------+
| toto| tata|[table, column]|SELECT {1} FROM {0}|
| titi| tutu|[table, column]|SELECT {1} FROM {0}|
+-----+------+---------------+-------------------+
I want to map each element of the "parameters" array to the column of the same name, and then format "statement" with those column values.
This is what I expect after the transformation:
sdf2 = sc.parallelize([
    ["toto", "tata", ["table", "column"], "SELECT {1} FROM {0}", "SELECT tata FROM toto"],
    ["titi", "tutu", ["table", "column"], "SELECT {1} FROM {0}", "SELECT tutu FROM titi"],
]).toDF(["table", "column", "parameters", "statement", "result"])
+-----+------+---------------+-------------------+---------------------+
|table|column| parameters| statement| result|
+-----+------+---------------+-------------------+---------------------+
| toto| tata|[table, column]|SELECT {1} FROM {0}|SELECT tata FROM toto|
| titi| tutu|[table, column]|SELECT {1} FROM {0}|SELECT tutu FROM titi|
+-----+------+---------------+-------------------+---------------------+