I have the following API to extract data from: https://www.business-humanrights.org/en/api/internal/explore/?format=json&search=nike
I have pulled the API results into Python using requests (see below), but the structure of the response seems quite convoluted and I do not understand how to extract the information relevant to me and store it in a pandas DataFrame. The information I am interested in is the values of the following keys:
"translated_title", "backdate", "translated_abstract", "translated_url"
import requests
import pandas as pd

r = requests.get("https://www.business-humanrights.org/en/api/internal/explore/?format=json&search=nike")
rjson = r.json()
# pull out the list of result objects (one per news item)
users_locs = [webPage for webPage in rjson['results']]
users_locs
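Here is roughly the direction I have been trying; this is only a sketch, and it assumes each element of rjson['results'] is a dict that exposes those four keys at its top level, which I am not sure is actually the case:

rows = []
for item in rjson['results']:
    # .get() returns None when a key is missing, so one odd item does not break the loop
    rows.append({
        "translated_title": item.get("translated_title"),
        "backdate": item.get("backdate"),
        "translated_abstract": item.get("translated_abstract"),
        "translated_url": item.get("translated_url"),
    })
df = pd.DataFrame(rows)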
More generally, it would be great if someone could point me to the logic of how to extract data from lists inside a dictionary inside a list inside a dictionary, and so on.
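For instance, with a toy structure like the one below (made up, not the actual API structure) I understand that I can chain [] lookups level by level, but I lose track of which level I am at once the real response gets deeper:

data = {"results": [{"meta": {"tags": ["a", "b"]}}]}
first_tag = data["results"][0]["meta"]["tags"][0]  # dict key -> list index -> dict key -> dict key -> list index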
My expected output is a dataset at the news-item level, where each row reports the translated title, the translated abstract, and the backdate. See the following structure:
df = pd.DataFrame([{"translated_title" : "Chine : La pression augmente contre Nike, Apple et d’autres à mesure que le boycott lié aux allégations de travail forcé s’intensifie", "translated_abstract":'..', "backdate": "2020-07-24"},
{"translated_title" : "..", "translated_abstract":'..', "backdate": ".."}])
Thanks!