I need to read two values per line from file.csv and run more than 13,000 queries against a PostgreSQL database.
It should be a very simple task, as you can see in the code below, but there are some issues.
#!/bin/bash
MSISDN=($(head file.csv | awk -F ";" '{print $1}' | sed -e "s/^/55/"))
APPID=($(head file.csv | awk -F ";" '{print $3}'))
NUMBER_OF_LINES=$(wc -l file.csv| grep -o "[0-9]*")
for i in $(seq 0 "$NUMBER_OF_LINES")
do
export PGPASSWORD='MY_PASSWORD'
psql -q -A -h VERY-LONG-HOST -U MYUSER -d DATABASE -p 1111 -t -c "select 'http://API-HOST/subscription/cancel?subscriptionId=' + s.subscription_id + '&phone=' + s.phone + '&enabled=0&statusId=7&notifyActionListeners=false&extraInfo=TICKET_NUMBER' from sbs.subscription s (nolock) join sbs.configuration c on s.configuration_id = c.configuration_id where c.application_id = ${APPID[$i]} and c.carrier_id = 2 and s.phone = ${MSISDN[$i]};"
done
When the code executes, there is an error:
ERROR: syntax error at or near "112940676229" LINE 2: and c.carrier_id = 2 and s.phone = 55112940676229;
How can I run multiple queries, closing the connection after each one before starting the next, and how can I solve the error shown above?
Example of the content in file.csv:
112940676229;Sevice;333
113429402012;Sevice;929
111429402013;Sevice;888
11240672940;Sevice;445
11320294034;Sevice;333
11429294056;Sevice;22
11942940281;Sevice;122
11962940895;Sevice;233
Don't read file.csv into your arrays ahead of time; rather, read it line by line, and process each line as you go.
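A minimal sketch of that approach follows, with three fixes folded in: PostgreSQL concatenates strings with || rather than +, the (nolock) hint is SQL Server syntax that PostgreSQL rejects, and the phone value is quoted on the assumption that s.phone is a text column (drop the quotes if it is numeric, though the error message suggests the comparison against a bare number is part of the problem). The host, port, user, and database names are the placeholders from the question. As for closing the connection: each psql invocation opens its own connection and closes it when it exits, so every query already runs on a fresh, short-lived connection.

```shell
#!/bin/bash
# Sketch only: VERY-LONG-HOST, MYUSER, DATABASE, port 1111 and TICKET_NUMBER
# are the placeholders from the question; adjust for your environment.
export PGPASSWORD='MY_PASSWORD'   # a ~/.pgpass file is the safer alternative

cancel_subscriptions() {
    local csv="$1"
    local phone _service appid msisdn
    # Split each line on ';' -> field 1 is the phone, field 3 the app id.
    while IFS=';' read -r phone _service appid; do
        msisdn="55${phone}"
        # One psql call per line: the connection is opened, the query runs,
        # and the connection is closed before the next iteration.
        psql -q -A -t -h VERY-LONG-HOST -p 1111 -U MYUSER -d DATABASE -c "
            select 'http://API-HOST/subscription/cancel?subscriptionId='
                   || s.subscription_id
                   || '&phone=' || s.phone
                   || '&enabled=0&statusId=7&notifyActionListeners=false&extraInfo=TICKET_NUMBER'
            from sbs.subscription s
            join sbs.configuration c on s.configuration_id = c.configuration_id
            where c.application_id = ${appid}
              and c.carrier_id = 2
              and s.phone = '${msisdn}';"
    done < "$csv"
}

# Usage: cancel_subscriptions file.csv
```

This also sidesteps the original script's other pitfalls: head only reads the first 10 lines of the file, and seq 0 $NUMBER_OF_LINES iterates one index past the end of the arrays; reading line by line needs neither a line count nor array indexing.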