
First issue resolved; please scroll down to EDIT 2.

I'm trying to access a web service deployed via Azure Machine Learning Studio, using the Batch Execution sample code for Python at the bottom of the page below:

https://studio.azureml.net/apihelp/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/webservices/e4e3d2d32ec347ae9a829b200f7d31cd/endpoints/61670382104542bc9533a920830b263c/jobs

I have already fixed one issue according to this question (replaced BlobService with BlockBlobService and so on):

https://studio.azureml.net/apihelp/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/webservices/e4e3d2d32ec347ae9a829b200f7d31cd/endpoints/61670382104542bc9533a920830b263c/jobs

I have also entered the API key, container name, URL, account_key and account_name according to the instructions.

However, it seems that the code snippet is even more outdated today than it was back then, because I now receive a different error:

File "C:/Users/Alex/Desktop/scripts/BatchExecution.py", line 80, in uploadFileToBlob
    blob_service = asb.BlockBlobService(account_name=storage_account_name, account_key=storage_account_key)

  File "C:\Users\Alex\Anaconda3\lib\site-packages\azure\storage\blob\blockblobservice.py", line 145, in __init__

  File "C:\Users\Alex\Anaconda3\lib\site-packages\azure\storage\blob\baseblobservice.py", line 205, in __init__

TypeError: get_service_parameters() got an unexpected keyword argument 'token_credential'

I also noticed that when installing the Azure SDK for Python via pip, I get the following warnings at the end of the process (the installation itself succeeds, however):

azure-storage-queue 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.

azure-storage-file 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.

azure-storage-blob 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.

I can't find anything about this in the latest documentation for the Python SDK (the word 'token_credential' does not even appear):

https://media.readthedocs.org/pdf/azure-storage/latest/azure-storage.pdf

Does anyone have a clue what's going wrong during the installation, or why the TypeError about 'token_credential' pops up during execution?

Or does anyone know how I can install the necessary version of azure-storage-common or azure-storage-blob?
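
For reference, this is the quick check I would run to see which versions pip actually left installed (a minimal sketch; it relies on pkg_resources from setuptools, and the package names are the ones from the warnings above):

import pkg_resources  # part of setuptools, used only to inspect installed package versions

# Package names taken from the pip warnings above
for package in ("azure-storage-blob", "azure-storage-common", "azure-storage-queue", "azure-storage-file"):
    try:
        print(package, pkg_resources.get_distribution(package).version)
    except pkg_resources.DistributionNotFound:
        print(package, "not installed")

If azure-storage-common really does come back as 1.1.0, forcing it into the range the warnings ask for (for example pip install "azure-storage-common>=1.3.0,<1.4.0") should resolve the mismatch.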

EDIT: Here's my code (not reproducible as posted, because I changed the keys before posting):

# How this works:
#
# 1. Assume the input is present in a local file (if the web service accepts input)
# 2. Upload the file to an Azure blob - you'd need an Azure storage account
# 3. Call BES to process the data in the blob. 
# 4. The results get written to another Azure blob.

# 5. Download the output blob to a local file
#
# Note: You may need to download/install the Azure SDK for Python.
# See: http://azure.microsoft.com/en-us/documentation/articles/python-how-to-install/

import urllib.request   # Python 3: urllib.request replaces urllib2 from the original sample
import urllib.error

import json
import time
import azure.storage.blob as asb          # BlockBlobService replaces the old BlobService


def printHttpError(httpError):
    print("The request failed with status code: " + str(httpError.code))

    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
    print(httpError.info())

    print(json.loads(httpError.read()))
    return


def saveBlobToFile(blobUrl, resultsLabel):
    output_file = "myresults.csv" # Replace this with the location you would like to use for your output file
    print("Reading the result from " + blobUrl)
    try:
        response = urllib.request.urlopen(blobUrl)
    except urllib.error.HTTPError as e:
        printHttpError(e)
        return

    # the response body is bytes, so write the output file in binary mode
    with open(output_file, "wb") as f:
        f.write(response.read())
    print(resultsLabel + " have been written to the file " + output_file)
    return


def processResults(result):


    first = True
    results = result["Results"]
    for outputName in results:
        result_blob_location = results[outputName]
        sas_token = result_blob_location["SasBlobToken"]
        base_url = result_blob_location["BaseLocation"]
        relative_url = result_blob_location["RelativeLocation"]

        print("The results for " + outputName + " are available at the following Azure Storage location:")
        print("BaseLocation: " + base_url)
        print("RelativeLocation: " + relative_url)
        print("SasBlobToken: " + sas_token)


        if (first):
            first = False
            url3 = base_url + relative_url + sas_token
            saveBlobToFile(url3, "The results for " + outputName)
    return



def uploadFileToBlob(input_file, input_blob_name, storage_container_name, storage_account_name, storage_account_key):
    blob_service = asb.BlockBlobService(account_name=storage_account_name, account_key=storage_account_key)

    print("Uploading the input to blob storage...")
    data_to_upload = open(input_file, "r").read()
    # Note: put_blob comes from the old azure-storage SDK; the answer below replaces it with create_blob_from_text
    blob_service.put_blob(storage_container_name, input_blob_name, data_to_upload, x_ms_blob_type="BlockBlob")

def invokeBatchExecutionService():
    storage_account_name = "storage1" # Replace this with your Azure Storage Account name
    storage_account_key = "kOveEtQMoP5zbUGfFR47" # Replace this with your Azure Storage Key
    storage_container_name = "input" # Replace this with your Azure Storage Container name
    connection_string = "DefaultEndpointsProtocol=https;AccountName=" + storage_account_name + ";AccountKey=" + storage_account_key #"DefaultEndpointsProtocol=https;AccountName=mayatostorage1;AccountKey=aOYA2P5VQPR3ZQCl+aWhcGhDRJhsR225teGGBKtfXWwb2fNEo0CrhlwGWdfbYiBTTXPHYoKZyMaKuEAU8A/Fzw==;EndpointSuffix=core.windows.net"
    api_key = "5wUaln7n99rt9k+enRLG2OrhSsr9VLeoCfh0q3mfYo27hfTCh32f10PsRjJtuA==" # Replace this with the API key for the web service
    url = "https://ussouthcentral.services.azureml.net/workspaces/306bc1f050/services/61670382104542bc9533a920830b263c/jobs" #"https://ussouthcentral.services.azureml.net/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/services/61670382104542bc9533a920830b263c/jobs/job_id/start?api-version=2.0"



    uploadFileToBlob(r"C:\Users\Alex\Desktop\16_da.csv", # Replace this with the location of your input file
                     "input1datablob.csv", # Replace this with the name you would like to use for your Azure blob; this needs to have the same extension as the input file 
                     storage_container_name, storage_account_name, storage_account_key)

    payload =  {

        "Inputs": {

            "input1": { "ConnectionString": connection_string, "RelativeLocation": "/" + storage_container_name + "/input1datablob.csv" },
        },     

        "Outputs": {

            "output1": { "ConnectionString": connection_string, "RelativeLocation": "/" + storage_container_name + "/output1results.csv" },
        },
        "GlobalParameters": {
}
    }

    body = str.encode(json.dumps(payload))
    headers = { "Content-Type":"application/json", "Authorization":("Bearer " + api_key)}
    print("Submitting the job...")

    # submit the job
    req = urllib.request.Request(url + "?api-version=2.0", body, headers)
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        printHttpError(e)
        return

    result = response.read().decode("utf-8")   # the response body is the job ID as a quoted string
    job_id = result[1:-1] # remove the enclosing double-quotes
    print("Job ID: " + job_id)


    # start the job
    print("Starting the job...")
    req = urllib.request.Request(url + "/" + job_id + "/start?api-version=2.0", b"", headers)   # the empty body must be bytes in Python 3
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        printHttpError(e)
        return

    url2 = url + "/" + job_id + "?api-version=2.0"

    while True:
        print("Checking the job status...")
        req = urllib.request.Request(url2, headers = { "Authorization":("Bearer " + api_key) })

        try:
            response = urllib.request.urlopen(req)
        except urllib.error.HTTPError as e:
            printHttpError(e)
            return

        result = json.loads(response.read())
        status = result["StatusCode"]
        if (status == 0 or status == "NotStarted"):
            print("Job " + job_id + " not yet started...")
        elif (status == 1 or status == "Running"):
            print("Job " + job_id + " running...")
        elif (status == 2 or status == "Failed"):
            print("Job " + job_id + " failed!")
            print("Error details: " + result["Details"])
            break
        elif (status == 3 or status == "Cancelled"):
            print("Job " + job_id + " cancelled!")
            break
        elif (status == 4 or status == "Finished"):
            print("Job " + job_id + " finished!")

            processResults(result)
            break
        time.sleep(1) # wait one second
    return

invokeBatchExecutionService()

EDIT 2: The above issue has been resolved thanks to jon, and the CSV now gets uploaded to blob storage.

However, now there is an HTTPError when the job gets submitted (line 130 of my script):

   raise HTTPError(req.full_url, code, msg, hdrs, fp)  HTTPError: Bad Request
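
To see what the service is actually rejecting, I would dump the body of the 400 response. A minimal standalone sketch (it assumes the same url, body and headers values that invokeBatchExecutionService builds above):

import urllib.request
import urllib.error

# Assumes url, body and headers are the same values built in invokeBatchExecutionService() above
req = urllib.request.Request(url + "?api-version=2.0", body, headers)
try:
    response = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print("Status code: " + str(e.code))
    print(e.info())                     # headers include the request ID and timestamp
    print(e.read().decode("utf-8"))     # the error body usually explains why the request was rejected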
  • Can you share the code you're using where you get the error? Commented Jun 28, 2018 at 19:23
  • Hi there, thanks for asking. I added the code; it's quite similar to the one in the first link. However, you won't be able to run it as-is, because I removed part of the keys to my storage account. Commented Jun 28, 2018 at 19:36

1 Answer


I think the code they give may be pretty old at this point.

The latest version of azure-storage-blob is 1.3, so perhaps a pip install azure-storage-blob --upgrade, or simply uninstalling and reinstalling, would help.

Once you have the latest version, try using the create_blob_from_text method to load the file into your storage container.

from azure.storage.blob import BlockBlobService

blobService = BlockBlobService(account_name="accountName", account_key="accountKey")

blobService.create_blob_from_text("containerName", "fileName", csv_file)
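
If the input already sits in a local file (as in your script), create_blob_from_path should let you upload it directly instead of reading it into a string first; a small sketch with the same placeholder names as above:

from azure.storage.blob import BlockBlobService

blobService = BlockBlobService(account_name="accountName", account_key="accountKey")

# Uploads the local CSV straight to the container; the blob gets the given name
blobService.create_blob_from_path("containerName", "input1datablob.csv", r"C:\Users\Alex\Desktop\16_da.csv")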

Hope that works to help lead you down the right path, but if not we can work through it. :)


2 Comments

Hi there, thanks for the suggestion. It worked so far, and the file gets uploaded to blob storage (the rest of the code is still pretty much the same). However, I now receive another error when the job gets submitted (line 130): raise HTTPError(req.full_url, code, msg, hdrs, fp) HTTPError: Bad Request
Hmm, yes, that sounds like it may not like the input data that you're sending in the request.
