I can't seem to find a way to run multiple SQL files under a single task ID. However, you might want to consider the alternative approach below, in which I loop over the SQL files and create a separate task (with its own task ID) for each file, all within a single DAG.
Please see the sample code below:
from airflow import models
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.utils.dates import days_ago

PROJECT_ID = "your-project-id"
DATASET_NAME = "your-dataset"
TABLE_1 = "your-table"
dag_id = "your-dag-id"

# One entry per SQL file; each file gets its own task in the DAG.
sql_files = [
    'my-query1.sql',
    'my-query2.sql',
    'my-query3.sql',
]

with models.DAG(
    dag_id,
    schedule_interval=None,  # Override to match your needs
    start_date=days_ago(1),
    tags=["example"],
    user_defined_macros={"DATASET": DATASET_NAME, "TABLE": TABLE_1},
) as dag:

    for sqlrun in sql_files:
        # Derive the task ID from the file name, e.g. "my-query1.sql" -> "my-query1"
        my_final_taskid = sqlrun.split(".")[0]

        case_june_1 = BigQueryInsertJobOperator(
            task_id=my_final_taskid,
            configuration={
                "query": {
                    # The .sql file name is rendered to the file's contents via templating
                    "query": f"{sqlrun}",
                    "useLegacySql": False,
                }
            },
        )
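For the file names above to be rendered into the actual SQL at runtime, Airflow has to be able to find the files (the operator's template_ext includes .sql, so a string ending in .sql is treated as a template file). One way to do that, sketched below under the assumption that the files sit in a sql/ folder (the path is hypothetical, adjust it to your environment), is to point the DAG at that folder via template_searchpath:

# Same imports and variables as above; only the DAG call changes.
with models.DAG(
    dag_id,
    schedule_interval=None,
    start_date=days_ago(1),
    tags=["example"],
    user_defined_macros={"DATASET": DATASET_NAME, "TABLE": TABLE_1},
    # Hypothetical location of the .sql files -- adjust to your environment
    template_searchpath=["/home/airflow/gcs/dags/sql"],
) as dag:
    ...  # same for-loop as above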
Output: the DAG now contains one task per SQL file, named after each file (my-query1, my-query2, my-query3).
In addition, per this documentation, the query parameter only accepts a STRING, which is then parsed and executed as the query.
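If you would rather not depend on the .sql templating behavior, you can also read each file yourself and pass its contents as that string. A minimal sketch, assuming the files live in a sql/ folder next to the DAG file (the folder name and the use of pathlib are my assumptions, not part of the original code); this goes inside the same with models.DAG(...) block as above:

from pathlib import Path

SQL_DIR = Path(__file__).parent / "sql"  # assumed location of the .sql files

for sqlrun in sql_files:
    sql_text = (SQL_DIR / sqlrun).read_text()  # plain SQL string, as the API expects

    BigQueryInsertJobOperator(
        task_id=sqlrun.split(".")[0],
        configuration={
            "query": {
                "query": sql_text,
                "useLegacySql": False,
            }
        },
    )

Note that this reads the files every time the DAG file is parsed, so it is best kept for small SQL files.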