The page loads this data by making a request to an external URL via JavaScript. You can replicate that request with the requests library to load the same information:
import re
import requests
from bs4 import BeautifulSoup

date = '2020-10-17'
main_url = 'https://www.matchi.se/facilities/abybadminton?date={date}&sport='

# load the main page and pull the sport/facility IDs out of its inline JavaScript
html_doc = requests.get(main_url.format(date=date)).text
sport_id = re.search(r"var sport = '(.*?)'", html_doc).group(1)
facility_id = re.search(r'facilityId: "(.*?)"', html_doc).group(1)

# the schedule itself is fetched by the page from this AJAX endpoint
ajax_url = 'https://www.matchi.se/book/schedule'
params = {
    'wl': '',
    'facilityId': facility_id,
    'date': date,
    'sport': sport_id,
    'week': '',
    'year': ''
}
soup = BeautifulSoup(requests.get(ajax_url, params=params).content, 'html.parser')

# print occupied slots (booked slots carry the class "red"):
for td in soup.select('td.slot.red'):
    title = BeautifulSoup(td['title'], 'html.parser').get_text(strip=True, separator=' ')
    print(title)
Prints:
Booked Bana 1 11:00 - 12:00
Booked Bana 2 10:00 - 11:00
Booked Bana 2 11:00 - 12:00
Booked Bana 2 12:00 - 13:00
Booked Bana 3 12:00 - 13:00
Booked Bana 3 14:00 - 15:00
Booked Bana 4 11:00 - 12:00
Booked Bana 5 11:00 - 12:00
Booked Bana 5 14:00 - 15:00
Booked Bana 6 11:00 - 12:00
Booked Bana 6 12:00 - 13:00
Booked Bana 7 11:00 - 12:00
Booked Bana 7 12:00 - 13:00
Booked Bana 7 14:00 - 15:00
Booked Bana 7 15:00 - 16:00
Booked Bana 8 10:00 - 11:00
Booked Bana 9 14:00 - 15:00
Booked Bana 10 12:00 - 13:00
Booked Bana 10 15:00 - 16:00
Booked Bana 13 11:00 - 12:00
Booked Bana 14 10:00 - 11:00
Booked Bana 15 10:00 - 11:00
Booked Bana 15 18:00 - 19:00
Booked Bana 16 13:00 - 14:00
Alternatively: you can't render a JavaScript-driven page with requests alone, because it just downloads the data and gives it to you as-is (the same as what the browser gets, but the browser then runs the scripts). If you don't want to reverse-engineer the AJAX call as above, what you want is a library which implements a whole browser or drives an existing one, such as Selenium.
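A minimal sketch of that browser-based approach, assuming Selenium with a local Chrome installation (neither is part of the original answer); it reuses the 'td.slot.red' selector from the code above:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

date = '2020-10-17'
url = f'https://www.matchi.se/facilities/abybadminton?date={date}&sport='

driver = webdriver.Chrome()
try:
    driver.get(url)  # the browser runs the page's JavaScript for you
    # wait until the schedule table has been rendered before reading the HTML
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, 'td.slot'))
    )
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    for td in soup.select('td.slot.red'):
        title = BeautifulSoup(td['title'], 'html.parser').get_text(strip=True, separator=' ')
        print(title)
finally:
    driver.quit()

This is slower than calling the AJAX endpoint directly, but it keeps working even if the page's internal request parameters change.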