I think that if you remove -0 from xargs it'll do what you expect. That flag is for null-byte-separated input, whereas you are passing it newline-separated input.
To work with null-byte-separated records throughout the pipeline, use find ... -print0, sort -z, and xargs -0. This is the most robust way to pass records through a pipeline: it won't break, no matter what your filenames contain.
find "./src/AdviserLinks/Database/SQL" -iname "*.sql" -print0 |
sort -zn |
xargs -0 -n1 sh -c 'echo "$0" &&
sqlcmd -S "$SQL_HOST" -d WebSupportDatabase -U "$SQL_USER" -P "$SQL_PWD" -i "$0"'
This assumes that the SQL_HOST, SQL_USER and SQL_PWD variables are exported to the environment, so that the child shells started by xargs can see them.
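For example (the values here are placeholders for illustration, not taken from your setup):

    export SQL_HOST='dbserver.example.com'
    export SQL_USER='support_user'
    export SQL_PWD='secret'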
I have replaced -I % with -n1, which processes records one at a time. Each filename is passed to sh as $0, which can be used safely; there is no risk that the contents of the record are interpreted as shell syntax, as was the case with -I % in your attempt. Note that this means a separate child shell is invoked for every file; it would be more efficient to use a loop as in Charles' answer, sketched below.
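A minimal sketch of that loop approach, assuming bash (read -d '' is not POSIX sh); this is my rendering, not necessarily identical to Charles' code:

    find "./src/AdviserLinks/Database/SQL" -iname "*.sql" -print0 |
        sort -zn |
        while IFS= read -r -d '' file; do
            # one shell for the whole loop; the filename is never re-parsed as code
            printf '%s\n' "$file"
            sqlcmd -S "$SQL_HOST" -d WebSupportDatabase -U "$SQL_USER" -P "$SQL_PWD" -i "$file"
        done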
As for using separate statements vs. &&, that depends on whether you want the second command to run only when the first command succeeds.
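Schematically:

    cmd1 && cmd2    # cmd2 runs only if cmd1 exits with status 0
    cmd1 ;  cmd2    # cmd2 runs regardless of cmd1's exit status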
This is also why you should never use xargs -I % sh -c '...%...'; that way lies serious security bugs. Think about a file with $(rm -rf ~) in its name; worse, one named $(rm -rf ~)'$(rm -rf ~)', where the embedded single quotes guarantee that one of the substitutions gets expanded whether or not % lands in a single-quoted context.
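You can see the problem safely with a harmless stand-in (echo pwned in place of rm -rf ~):

    printf '%s\0' '$(echo pwned)' |
        xargs -0 -I % sh -c 'echo %'      # prints "pwned": the substitution in the name ran

    printf '%s\0' '$(echo pwned)' |
        xargs -0 -n1 sh -c 'echo "$0"'    # prints the literal name; nothing is executed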