
I'm trying to query Azure AD for some data, which is returned as JSON. I want to take part of this data and fill an Excel sheet. I've checked many examples that dump all the JSON data to an Excel sheet (using xlwt), but how do I do this for only part of the JSON data?

Here's the script that I'm using:

import requests

def get_application_list():
    # access_token must be obtained beforehand
    application_list_response = requests.get("https://graph.microsoft.com/beta/applications", verify=False,
                                             headers={"Authorization": "Bearer " + access_token})  # note the space after "Bearer"

    application_list_response_json = application_list_response.json()

    for item in application_list_response_json['value']:
        print("Application Name:", item['displayName'])
        print("Application ID:", item['id'])

get_application_list()

I would like to get the application name and the application ID in the Excel sheet.

P.S: I'm very new to Python. Any suggestion to optimize this code would also be helpful.

Thanks!

  • Please post a minimal, complete, and verifiable example. All the stuff about logging and requests is completely immaterial. What is important is a sample input and desired output. Commented Aug 1, 2017 at 18:28
  • Made the required changes. Thanks. Commented Aug 1, 2017 at 18:37
  • Just write this out to a CSV or TXT file; Excel will be able to open it. Commented Aug 1, 2017 at 18:40

1 Answer


Just save it as a CSV file and then you can open it in M$ Excel.

import csv
import requests

def save_application_list(target_file):
    # access_token must be obtained beforehand
    application_list_response = requests.get("https://graph.microsoft.com/beta/applications", verify=False,
                                             headers={"Authorization": "Bearer " + access_token})  # note the space after "Bearer"

    application_list_response_json = application_list_response.json()

    # newline='' prevents blank rows on Windows; extrasaction='ignore' skips
    # JSON keys that aren't in the fieldnames instead of raising ValueError
    with open(target_file, 'w', newline='') as fp:
        writer = csv.DictWriter(fp, ['displayName', 'id'], extrasaction='ignore')
        writer.writeheader()
        writer.writerows(application_list_response_json['value'])

save_application_list('/path/to/your/saved/file.csv')
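Since the snippet above needs a live token, here is a self-contained sketch of the same select-and-write step against a hard-coded sample response (the sample values and field shapes are made up to mimic the Graph API's 'value' array):

```python
import csv
import json

# Made-up JSON mimicking the shape of the Graph API response
sample = json.loads('''
{"value": [
    {"displayName": "Demo App", "id": "abc-123", "createdDateTime": "2017-08-01"},
    {"displayName": "Test App", "id": "def-456", "createdDateTime": "2017-07-15"}
]}
''')

with open('applications.csv', 'w', newline='') as fp:
    writer = csv.writer(fp)
    writer.writerow(['Application Name', 'Application ID'])
    # keep only the two fields of interest from each item
    for item in sample['value']:
        writer.writerow([item['displayName'], item['id']])
```

Opening the resulting applications.csv in Excel gives one row per application with just those two columns.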

Update:

To change the field names to ['Application Name', 'Application ID'], just replace writer.writeheader() with a writerow() call like this:

def save_application_list(target_file):

    ...
    with open(target_file, 'w', newline='') as fp:
        writer = csv.DictWriter(fp, ['displayName', 'id'], extrasaction='ignore')
        writer.writerow({
            'displayName': 'Application Name',
            'id': 'Application ID'
        })
        writer.writerows(application_list_response_json['value'])
    ...

Update 2:

Since the JSON contains other fields that you don't need to keep, the code can be something like this:

def save_application_list(target_file):

    ...
    with open(target_file, 'w', newline='') as fp:
        writer = csv.DictWriter(fp, ['Application Name', 'Application ID'])
        writer.writeheader()
        writer.writerows({
            'Application Name': item['displayName'],
            'Application ID': item['id']
        } for item in application_list_response_json['value'])

    ...

3 Comments

I upvoted, but the fieldnames should be ['Application Name', 'Application ID']
ValueError: dict contains fields not in fieldnames: 'publisherDomain', 'allowPublicClient', 'requiredResourceAccess', 'api', 'installedClients', 'orgRestrictions', 'applicationAliases', 'deletedDateTime', 'appRoles', 'keyCredentials', 'passwordCredentials', 'createdDateTime', 'tags', 'info', 'web', 'preAuthorizedApplications'. I think it is because I'm not using the other fields from 'value'.
Not able to solve this piece. Trying a different approach now. Created a sub dictionary from the JSON data and trying to write this dictionary data to a csv file. Thanks!
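The ValueError in the comment above happens because csv.DictWriter raises by default when a row dict contains keys not listed in its fieldnames. Passing extrasaction='ignore' makes the writer silently drop the extra keys instead; a minimal sketch with made-up rows:

```python
import csv
import io

# Made-up rows mimicking the Graph API 'value' array; the extra
# 'publisherDomain' key is what would normally trigger the ValueError
rows = [
    {"displayName": "App One", "id": "111", "publisherDomain": "contoso.com"},
    {"displayName": "App Two", "id": "222", "publisherDomain": "fabrikam.com"},
]

buf = io.StringIO()
# extrasaction='ignore' tells DictWriter to skip keys not in the fieldnames
writer = csv.DictWriter(buf, ['displayName', 'id'], extrasaction='ignore')
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

This avoids having to build a filtered sub-dictionary by hand before writing.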
