I want to crawl a website, but I have a problem with looping through the result pages. I want to create a system that collects all the links, then clicks each link and collects data (a date, in this case). I wrote the code below, but I keep getting this error:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=98.0.4758.109)
I have tried increasing the sleep intervals, but the result is the same. The error happens on the second iteration (after the first link).
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import requests
import time
# url for crawling
url = "https://bstger.weblaw.ch/?size=n_60_n"
# path to the chromedriver executable
path = 'path to selenium'
driver = webdriver.Chrome(path)
driver.get(url)
time.sleep(4)
# click on search button
driver.find_element_by_xpath('//*[@id="root"]/div/div/div[2]/div[1]/div/div[3]/form/div/input').click()
time.sleep(3)
# get all links
all_links = driver.find_elements_by_css_selector('li.sui-result div.sui-result__header a')
print(all_links)
print()
# loop trough links and crawl them
for link in all_links:
    # click on link
    print(link)
    time.sleep(4)
    link.click()  # I GET THE ERROR HERE ON SECOND ITERATION
    time.sleep(4)
    # get date (shown on the page as DD.MM.YYYY)
    date = driver.find_element_by_css_selector('div.filter-data button.wlclight13').text
    day, month, year = date.split('.')
    date = year + "-" + month + "-" + day
    print(date)
    print()
    # click on back button
    driver.find_element_by_xpath('//*[@id="root"]/div/section[1]/div[1]/div[1]/a').click()
    time.sleep(4)
    # scroll
    driver.execute_script("window.scrollTo(0, 200)")

I have also tried re-collecting all_links inside the for loop. The website is also quite unstable: clicking the back button does not navigate to the previous page properly, and other methods of navigating back do not work either.
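For reference, here is one pattern that often sidesteps both the stale references and the flaky back button. This is only a sketch under assumptions I cannot verify against the live site: it assumes each result anchor exposes an absolute URL via its href attribute, and it reuses the selectors and the Selenium 3 style API from the code above. The idea is to collect plain href strings first and then driver.get() each URL, so no WebElement has to survive a navigation; the date reformatting is factored into a small pure helper:

```python
def to_iso_date(swiss_date):
    """Convert 'DD.MM.YYYY' (as displayed on the page) to 'YYYY-MM-DD'."""
    day, month, year = swiss_date.strip().split('.')
    return year + "-" + month + "-" + day


def crawl_dates(driver_path):
    """Collect every result URL up front, then visit each one directly.

    Sketch only: selectors are copied from the question and may need
    adjusting; assumes the result anchors carry absolute hrefs.
    """
    import time
    from selenium import webdriver

    driver = webdriver.Chrome(driver_path)
    driver.get("https://bstger.weblaw.ch/?size=n_60_n")
    time.sleep(4)

    # trigger the search, as in the original code
    driver.find_element_by_xpath(
        '//*[@id="root"]/div/div/div[2]/div[1]/div/div[3]/form/div/input'
    ).click()
    time.sleep(3)

    # Grab plain strings instead of WebElements: strings cannot go stale.
    urls = [a.get_attribute('href')
            for a in driver.find_elements_by_css_selector(
                'li.sui-result div.sui-result__header a')]

    dates = []
    for url in urls:
        driver.get(url)          # direct navigation, no back button needed
        time.sleep(4)
        raw = driver.find_element_by_css_selector(
            'div.filter-data button.wlclight13').text
        dates.append(to_iso_date(raw))

    driver.quit()
    return dates
```

Because the loop only holds URL strings, nothing in it references the old results page, which is what makes elements go stale after a navigation.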