The Python code:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import Select
import pandas as pd
import time

# define the website to scrape and path where the chromedriver is located
website = 'https://www.adamchoi.co.uk/overs/detailed'
path = 'C:/users/Administrator/Downloads/chromedriver-win64/chromedriver.exe'  # write your path here
service = Service(executable_path=path)  # selenium 4
driver = webdriver.Chrome(service=service)  # define 'driver' variable
# open Google Chrome with chromedriver
driver.get(website)

# locate and click on a button
all_matches_button = driver.find_element(by='xpath', value='//label[@analytics-event="All matches"]')
all_matches_button.click()

# select elements in the table
matches = driver.find_elements(by='xpath', value='//tr')

# store data in lists
date = []
home_team = []
score = []
away_team = []

# looping through the matches list
for match in matches:
    date.append(match.find_element(by='xpath', value='./td[1]').text)
    home = match.find_element(by='xpath', value='./td[2]').text
    home_team.append(home)
    print(home)
    score.append(match.find_element(by='xpath', value='./td[3]').text)
    away_team.append(match.find_element(by='xpath', value='./td[4]').text)
# quit the driver we opened at the beginning
driver.quit()

# Create a pandas DataFrame and export to CSV
df = pd.DataFrame({'date': date, 'home_team': home_team, 'score': score, 'away_team': away_team})
df.to_csv('football_data.csv', index=False)
print(df)

How can I solve the following error message by modifying the above Python code?

Traceback (most recent call last):
  File "C:\Users\Administrator\PycharmProjects\WebScraping\1.selenium4-adamchoi.py", line 31, in <module>
    home = match.find_element(by='xpath', value='./td[2]').text
           ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\PycharmProjects\WebScraping\.venv\Lib\site-packages\selenium\webdriver\remote\webelement.py", line 601, in find_element
    return self._execute(Command.FIND_CHILD_ELEMENT, {"using": by, "value": value})["value"]
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\PycharmProjects\WebScraping\.venv\Lib\site-packages\selenium\webdriver\remote\webelement.py", line 572, in _execute
    return self._parent.execute(command, params)
           ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\PycharmProjects\WebScraping\.venv\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 458, in execute
    self.error_handler.check_response(response)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "C:\Users\Administrator\PycharmProjects\WebScraping\.venv\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 233, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"./td[2]"}
  (Session info: chrome=142.0.7444.60); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#nosuchelementexception
Stacktrace:
Symbols not available. Dumping unresolved backtrace:
    0x7ff6a9617a35
    0x7ff6a9617a90
    0x7ff6a93916ad
    0x7ff6a93ea13e
    0x7ff6a93ea44c
    0x7ff6a93dcc9c
    0x7ff6a93dcb56
    0x7ff6a943b8fb
    0x7ff6a93db068
    0x7ff6a93dbe93
    0x7ff6a98d29d0
    0x7ff6a98cce50
    0x7ff6a98ecc45
    0x7ff6a96330ce
    0x7ff6a963adbf
    0x7ff6a9620c14
    0x7ff6a9620dcf
    0x7ff6a9606828
    0x7ffb0b847bd4
    0x7ffb0c6eced1


Process finished with exit code 1
  • Maybe first get all ./td and check how many items were found. Some rows may have fewer elements, so there is no td[2]. Simply debug it. Commented Nov 1 at 19:10
  • You could also put this in try/except to skip rows which have only one td. Commented Nov 1 at 20:01
  • There are rows with the text "Next match: ..." and "Next match O/U 2.5 odds vs ..." which have only one td per row - this is your problem. Commented Nov 1 at 20:03
  • You realise you can get the data from JSON? What data do you need? You may not even need Selenium. Commented Nov 3 at 16:49
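The td-count check suggested in the comments can be sketched without a browser, using hypothetical stand-in rows (plain lists of cell texts in place of Selenium WebElements):

```python
# Sketch of the comments' suggestion: skip rows that don't have enough cells.
# Hypothetical stand-in data; real code would call match.find_elements(...).
rows = [
    ["26-10-2025", "", "Arsenal", "1 - 0", "Crystal Palace", ""],  # normal row: 6 cells
    ["Next match: ..."],                                           # header-like row: 1 cell
    ["18-10-2025", "", "Fulham", "0 - 1", "Arsenal", ""],
]

matches = []
for cells in rows:
    if len(cells) < 6:          # same check as len(match.find_elements(...)) < 6
        continue                # skip "Next match ..." rows
    matches.append({"date": cells[0], "home": cells[2],
                    "score": cells[3], "away": cells[4]})

print(len(matches))  # 2
```

In the real script, `cells` would be `match.find_elements(by="xpath", value="./td")` and the same length test decides whether the row is a match row.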

2 Answers


You're searching for something that doesn't exist.

Firstly, the page you're scraping may pop up a cookie consent. If it does, you'll need to deal with that.

Secondly, use CSS selectors when possible. They are (in my opinion) easier to use than XPath. They're also faster.

Essentially, what you're looking for are all the relevant td webelements. However, there are some td elements being used for values other than what you're looking for. Fortunately, the td elements you want have a count of 6 within their tr parent.

If all you want is a CSV file, you could use Python's standard csv module as follows:

import csv
from selenium.webdriver import Chrome
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

URL = "https://www.adamchoi.co.uk/overs/detailed"
FILENAME = "football_data.csv"
MAP = {"date": 0, "home_team": 2, "score": 3, "away_team": 4}


def wait():
    return WebDriverWait(DRIVER, 5)

def click_through_cookies():
    """
    Cookie prompt may not be shown so ignore any exception
    """
    ec = EC.element_to_be_clickable
    sel = (By.CSS_SELECTOR, "button.fc-primary-button")
    try:
        wait().until(ec(sel)).click()
    except Exception:
        pass


def click_all_matches():
    """
    Click the "All matches" button
    """
    ec = EC.element_to_be_clickable
    sel = (By.CSS_SELECTOR, "label[Analytics-event='All matches']")
    wait().until(ec(sel)).click()


def get_row_data():
    """
    Get all tr elements
    """
    ec = EC.visibility_of_all_elements_located
    sel = (By.CSS_SELECTOR, "tr.ng-scope")
    yield from wait().until(ec(sel))

def process():
    """
    Get the URL, click through any cookie consent pop-up and click the "All matches" button then
    scrape the page, extracting relevant data and creating CSV file
    """
    DRIVER.get(URL)
    click_through_cookies()
    click_all_matches()
    with open(FILENAME, "w", newline="") as output:
        (writer := csv.DictWriter(output, MAP)).writeheader()
        for row in get_row_data():
            # only rows with exactly 6 td elements are relevant
            if len(tds := row.find_elements(By.CSS_SELECTOR, "td")) == 6:
                writer.writerow({k: tds[v].text for (k, v) in MAP.items()})

if __name__ == "__main__":
    with Chrome() as DRIVER:
        process()
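The csv.DictWriter / MAP pattern used in process() can be exercised in isolation with hypothetical stand-in cell texts, no browser required:

```python
import csv
import io

# Same column mapping as the answer: td index per CSV column
MAP = {"date": 0, "home_team": 2, "score": 3, "away_team": 4}

# Hypothetical stand-in for the .text values of one row's six td elements
tds = ["26-10-2025", "", "Arsenal", "1 - 0", "Crystal Palace", ""]

output = io.StringIO()
(writer := csv.DictWriter(output, MAP)).writeheader()
writer.writerow({k: tds[v] for (k, v) in MAP.items()})

print(output.getvalue())
# date,home_team,score,away_team
# 26-10-2025,Arsenal,1 - 0,Crystal Palace
```

DictWriter accepts any iterable of fieldnames, so passing the MAP dict directly uses its keys as the CSV header.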


4 Comments

Thank you very much for your answer, but please explain how you handle chromedriver in your code.
He instantiates Chrome() using a context manager that assigns it to the variable DRIVER. Selenium Manager will download and start the correct chromedriver for you... you really don't have to do anything special.
Unless you need to use a particular (outdated) version of the browser driver, then there's no longer any need for the Service class. The style used in my answer ensures that you'll be running with the latest driver without any need for manual intervention
These cookie banners are a real pain when doing web scraping. I developed a generalized automated cookie clicker. It is quite some code, but you may have a look here, starting at line 725: github.com/dornech/utils-seleniumxp/blob/main/src/…

There are rows with text "Next match: ..." and "Next match O/U 2.5 odds vs ..." which have only one td in row - this is your problem.


You may use try/except to catch the error and skip those rows.

Or you may get all td elements in a row and check how many were found
(in fact there are 6 <td> per data row - some of them may be empty columns for extra info or icons):

for match in matches:
    all_tds = match.find_elements(by="xpath", value="./td")

    if len(all_tds) < 6: 
        print("not enough <td> in row")
    else:
        date.append(all_tds[0].text)
        # all_tds[1] - empty column
        home_team.append(all_tds[2].text)
        score.append(all_tds[3].text)
        away_team.append(all_tds[4].text)
        # all_tds[5] - empty column

        print(date[-1], home_team[-1], score[-1], away_team[-1])

Result:

26-10-2025 Arsenal 1 - 0 Crystal Palace
18-10-2025 Fulham 0 - 1 Arsenal
04-10-2025 Arsenal 2 - 0 West Ham
28-09-2025 Newcastle 1 - 2 Arsenal
21-09-2025 Arsenal 1 - 1 Man City
13-09-2025 Arsenal 3 - 0 Nott'm Forest
31-08-2025 Liverpool 1 - 0 Arsenal
23-08-2025 Arsenal 5 - 0 Leeds
17-08-2025 Man United 0 - 1 Arsenal
not enough <td> in row
not enough <td> in row
26-10-2025 Aston Villa 1 - 0 Man City
19-10-2025 Tottenham 1 - 2 Aston Villa
05-10-2025 Aston Villa 2 - 1 Burnley
28-09-2025 Aston Villa 3 - 1 Fulham
21-09-2025 Sunderland 1 - 1 Aston Villa
13-09-2025 Everton 0 - 0 Aston Villa
31-08-2025 Aston Villa 0 - 3 Crystal Palace
23-08-2025 Brentford 1 - 0 Aston Villa
16-08-2025 Aston Villa 0 - 0 Newcastle
not enough <td> in row
not enough <td> in row

Full code used for tests (with extra code for closing the cookies message):

from selenium import webdriver
import pandas as pd

website = "https://www.adamchoi.co.uk/overs/detailed"
driver = webdriver.Chrome()  # Selenium 4 can automatically download driver
driver.get(website)

# close the cookies message
driver.find_element(by="xpath", value='//button[@aria-label="Consent"]').click()

all_matches_button = driver.find_element(
    by="xpath", value='//label[@analytics-event="All matches"]'
)
all_matches_button.click()

matches = driver.find_elements(by="xpath", value="//tr")

date = []
home_team = []
score = []
away_team = []

for match in matches:
    all_tds = match.find_elements(by="xpath", value="./td")

    if len(all_tds) < 6:
        print("not enough <td> in row")
    else:
        date.append(all_tds[0].text)
        # all_tds[1] - empty column
        home_team.append(all_tds[2].text)
        score.append(all_tds[3].text)
        away_team.append(all_tds[4].text)
        # all_tds[5] - empty column
        print(date[-1], home_team[-1], score[-1], away_team[-1])

driver.quit()

df = pd.DataFrame(
    {"date": date, "home_team": home_team, "score": score, "away_team": away_team}
)
df.to_csv("football_data.csv", index=False)
print(df)
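The final DataFrame/CSV step can be checked on its own with hypothetical sample lists in place of the scraped data:

```python
import pandas as pd

# Hypothetical sample data standing in for the scraped lists
date = ["26-10-2025", "18-10-2025"]
home_team = ["Arsenal", "Fulham"]
score = ["1 - 0", "0 - 1"]
away_team = ["Crystal Palace", "Arsenal"]

df = pd.DataFrame(
    {"date": date, "home_team": home_team, "score": score, "away_team": away_team}
)
csv_text = df.to_csv(index=False)
print(csv_text.splitlines()[0])  # date,home_team,score,away_team
```

As long as the four lists stay the same length, the DataFrame constructor lines them up column by column, and to_csv(index=False) writes only the four named columns.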

7 Comments

An error message appears when I run your code. It seems your code and my code must include a valid chromedriver in the path, but there is no chromedriver version 142.0.7444.60 via googlechromelabs.github.io/chrome-for-testing
I have 142.0.7444.59 (on Linux) and it works correctly. So maybe use service = Service(executable_path=path) if it was working on your system.
The current Google Chrome version is 142.0.7444.60 and the latest chromedriver version is 142.0.7444.59 via googlechromelabs.github.io/chrome-for-testing, and I think the error appears when running your code because the chromedriver version must match the Google Chrome version.
Your code has path = 'C:/users/Administrator/Downloads/chromedriver-win64/chromedriver.exe' service = Service(executable_path=path) driver = webdriver.Chrome(service=service) - if it was working for you, then use it in my code too. I use Linux and I didn't need this part; many other answers about Selenium also don't need to use Service.
Please look at the answer by "jackal" - the code runs without error and without including chromedriver in the path. Please explain the reason for that.
I can't explain it - maybe it also downloads Chrome 142.0.7444.59 (because Selenium can do that). You would have to stop the program (i.e. using input()) and check the version of the browser. Besides, you didn't show your error, so I don't know if the problem is the driver or something else. BTW: you may also try to put my code in with Chrome() as driver: and see if it works for you. I can't test it because my code works for me too.
@MoBilal Selenium will download and manage chromedriver for you. You don't need to download the matching version or specify its location.
