While reading the DataStax docs on supported Spark SQL syntax, I noticed you can use INSERT statements as you normally would:
INSERT INTO hello (someId,name) VALUES (1,"hello")
Testing this out in a Spark 2.0 (Python) environment with a connection to a MySQL database throws the error:
File "/home/yawn/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/utils.py", line 73, in deco
pyspark.sql.utils.ParseException:
mismatched input 'someId' expecting {'(', 'SELECT', 'FROM', 'VALUES', 'TABLE', 'INSERT', 'MAP', 'REDUCE'}(line 1, pos 19)

== SQL ==
INSERT INTO hello (someId,name) VALUES (1,"hello")
-------------------^^^
However, if I remove the explicit column list, it works as expected:
INSERT INTO hello VALUES (1,"hello")
Am I missing something?