I'm trying to test the JDBC connection by querying our NetSuite Customer table with Spark JDBC in Databricks. I've added the driver .jar to the cluster and am trying to run the code below. I also tried qualifying the table with the schema in the query, but our schema is "FL - Accounting" (it contains a space and a dash), so maybe I didn't quote it correctly?
jdbc_url = "jdbc:ns://xxxx.connect.api.netsuite.com:1708;ServerDataSource=NetSuite2.com;Encrypted=1;NegotiateSSLClose=false;CustomProperties=(AccountID=xxxx;RoleID=xxx)"
netsuite_query = """
(
  SELECT TOP 10 * FROM Customer
) AS t
"""

try:
    print(f"Attempting to read using the query: {netsuite_query}")
    df = spark.read \
        .format("jdbc") \
        .option("url", jdbc_url) \
        .option("dbtable", netsuite_query) \
        .option("user", "xxxx") \
        .option("password", "xxx") \
        .option("driver", "com.netsuite.jdbc.openaccess.OpenAccessDriver") \
        .load()
    display(df)
except Exception as e:
    raise e
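For reference, this is roughly how I tried adding the schema to the query. The quoting of "FL - Accounting" is my guess, since the schema name contains a space and a dash, and the variable name is just for illustration:

# My attempt at schema-qualifying the table. I'm not sure whether double
# quotes are the right way to escape "FL - Accounting" for this driver.
netsuite_query_with_schema = """
(
  SELECT TOP 10 * FROM "FL - Accounting".Customer
) AS t
"""

df = spark.read \
    .format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", netsuite_query_with_schema) \
    .option("user", "xxxx") \
    .option("password", "xxx") \
    .option("driver", "com.netsuite.jdbc.openaccess.OpenAccessDriver") \
    .load()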
- Note: I also tried swapping the dbtable option for the query option, i.e.

.option("query", netsuite_query) \

and I tried pointing dbtable at the driver's catalog table instead:

.option("dbtable", "oa_tables") \

Also, regarding SELECT TOP 10 * FROM Customer: is TOP 10 actually valid syntax in NetSuite? And do I need the ( ) AS t wrapper around the statement?
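For completeness, this is roughly the query-option variant I tried. My understanding is that Spark's JDBC query option takes a bare SELECT without the ( ... ) AS t wrapper, since Spark wraps it in its own subquery alias, but whether TOP 10 itself is accepted by the NetSuite driver is exactly the part I'm unsure about:

# Variant using "query" instead of "dbtable".
# With "query" the statement should NOT be wrapped in ( ... ) AS t;
# Spark adds its own subquery alias. Whether TOP 10 is valid for the
# NetSuite/OpenAccess driver is what I'm trying to confirm.
bare_query = "SELECT TOP 10 * FROM Customer"

df = spark.read \
    .format("jdbc") \
    .option("url", jdbc_url) \
    .option("query", bare_query) \
    .option("user", "xxxx") \
    .option("password", "xxx") \
    .option("driver", "com.netsuite.jdbc.openaccess.OpenAccessDriver") \
    .load()

display(df)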