I've been trying to parse the links ending with 20012019.csv from a webpage using the script below, but I always get a timeout exception. As far as I can tell, I've done things the right way.
However, any insight as to where I'm going wrong will be highly appreciated.
My attempt so far:
from selenium import webdriver

url = 'https://promo.betfair.com/betfairsp/prices'

def get_info(driver, link):
    driver.get(link)
    for item in driver.find_elements_by_css_selector("a[href$='20012019.csv']"):
        print(item.get_attribute("href"))

if __name__ == '__main__':
    driver = webdriver.Chrome()
    try:
        get_info(driver, url)
    finally:
        driver.quit()
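On the timeout itself, one hedged guess (an assumption only, since the directory listing at that URL is very large): the TimeoutException may be raised while driver.get() is still loading the page. A minimal sketch that raises the page-load timeout and waits explicitly for the matching anchors, not a confirmed fix:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = 'https://promo.betfair.com/betfairsp/prices'

driver = webdriver.Chrome()
# allow the (assumed) very large listing more time to finish loading
driver.set_page_load_timeout(300)
try:
    driver.get(url)
    # wait until at least one matching anchor is present before reading hrefs
    items = WebDriverWait(driver, 60).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "a[href$='20012019.csv']"))
    )
    for item in items:
        print(item.get_attribute("href"))
finally:
    driver.quit()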
Selenium is overkill for this project. Have you considered using requests and BeautifulSoup? requests can handle this. @nicholishen
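For reference, a minimal sketch of that requests + BeautifulSoup suggestion (assuming the listing is plain static HTML, so no browser is needed; the CSS selector is the same one used above):

import requests
from bs4 import BeautifulSoup

url = 'https://promo.betfair.com/betfairsp/prices'

res = requests.get(url)
res.raise_for_status()
soup = BeautifulSoup(res.text, 'html.parser')

# same idea as the Selenium selector: anchors whose href ends in 20012019.csv
for a in soup.select("a[href$='20012019.csv']"):
    print(a.get('href'))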