TL;DR: How can I extend the amount of time Selenium waits before triggering a timeout? set_page_load_timeout() on its own does not work; urllib3 still raises a ReadTimeoutError.
Context: I am using Selenium to define a configuration (sent via a website form / POST request) and download the resulting CSV file. My code works well for small requests, but times out on large datasets that take more than 120 s to prepare and download.
I have attempted to update the timeout configuration for the Selenium webdriver to no avail:
driver.set_page_load_timeout(300)
The resulting error that I get is still:
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=55676): Read timed out. (read timeout=120)
How can I increase the urllib3 timeout within Selenium so that I can process/download these larger CSV files? The relevant code is below, though I'm not sure how helpful it will be:
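For context on where the 120 s comes from: the traceback is raised by urllib3, which the Selenium Python client uses for its local HTTP connection to the driver (localhost:55676 above). That read timeout is separate from set_page_load_timeout(), which only controls how long the browser waits for a page to load. urllib3 models it as a Timeout object; a minimal standalone sketch (not wired into Selenium, shown only to illustrate the setting):

```python
import urllib3

# urllib3's read timeout is how long it waits for the server (here,
# chromedriver on localhost) to send a response -- independent of any
# browser-side page-load timeout such as set_page_load_timeout().
t = urllib3.Timeout(connect=10.0, read=300.0)
print(t.read_timeout)  # 300.0
```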
from selenium import webdriver
from selenium.webdriver.common.by import By
import time, glob, os, zipfile
from url_destinations import url_destinations
target_data = url_destinations["OTP"]
# Selenium Code to Initiate Download
chrome_options = webdriver.ChromeOptions()
prefs = {"download.default_directory": r"C:\Users\<hidden>\data\downloads"}
chrome_options.add_experimental_option("detach", True)
chrome_options.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(options=chrome_options)
driver.set_page_load_timeout(300)
driver.get(target_data['URL'])
latest_data = driver.find_element(By.ID, value="lblLatest").text
for val in target_data["Check Options"]:
    selected_item = driver.find_element(By.ID, value=val)
    selected_item.click()
# Wait for Download to Complete
while glob.glob(os.path.join(prefs["download.default_directory"], "*.tmp")):
    time.sleep(0.5)