from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
import time

driver = webdriver.Chrome('chromedriver.exe')

driver.get('https://iremedy.com/search?query=Vital%20Signs%20Monitors')

time.sleep(5)

element = driver.find_element(By.CLASS_NAME, 'body').send_keys(Keys.END)

I have tried various methods but none are working. Kindly help me.
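For reference, the scrolling approaches usually suggested (and which, per the question, did not help here) look roughly like the sketch below. The URL comes from the question; the driver setup and the idea of targeting the <body> tag are assumptions, and on pages that lazy-load content they may not trigger new items by themselves:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Chrome()  # assumes Selenium 4.6+, which locates chromedriver automatically
driver.get('https://iremedy.com/search?query=Vital%20Signs%20Monitors')
time.sleep(5)

# Option 1: scroll the window with JavaScript
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

# Option 2: send END to the <body> element (By.TAG_NAME rather than By.CLASS_NAME)
driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.END)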

  • Does this answer your question? How can I scroll a web page using selenium webdriver in python? Commented Sep 13, 2022 at 13:59
  • What error do you get? Does the page open, or do you get an exception? Commented Sep 13, 2022 at 14:04
  • @Carapace I checked all the mentioned methods, but unfortunately none of them worked. Commented Sep 13, 2022 at 14:09
  • @TalAngel No error, nothing; the page simply doesn't scroll. Commented Sep 13, 2022 at 14:10

1 Answer


This is one way of scrolling that page and loading the items:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time as t

chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument("--window-size=1280,720")

webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
wait = WebDriverWait(browser, 20)
url = 'https://iremedy.com/search?query=Vital%20Signs%20Monitors'
browser.get(url)
items_list = []
while True:
    # keep scrolling until enough items have been lazy-loaded
    elements_on_page = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^="card"]')))
    print(len(elements_on_page), 'total items found')
    if len(elements_on_page) > 100:
        print('more than 100 found, stopping')
        break
    footer = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'footer[id="footer"]')))
    footer.location_once_scrolled_into_view  # accessing this property scrolls the footer into view, triggering the next batch to load
    t.sleep(2)
for el in elements_on_page:
    title = el.find_element(By.CSS_SELECTOR, 'h3[class="title"]')
    price = el.find_element(By.CSS_SELECTOR, 'div[class="price"]')
    items_list.append((title.text.strip(), price.text.strip()))
df = pd.DataFrame(items_list, columns = ['Item', 'Price'])
print(df)

The result printed in terminal will be:

10 total items found
20 total items found
20 total items found
30 total items found
30 total items found
40 total items found
50 total items found
60 total items found
70 total items found
80 total items found
90 total items found
100 total items found
110 total items found
more than 100 found, stopping
Item    Price
0   Edan M3A Vital Signs Monitors   $2,714.95
1   M3 Vital Signs Monitors by Edan Instruments $2,476.95
2   Vital Signs Patient Monitors - Touch Screen $2,015.95
3   RVS-100 Advanced Vital Signs Monitors by Riester    $362.95
4   Edan iM80 Vital Signs Patient Monitors  $5,291.95
... ... ...
105 Patient Monitor Connex® Vital Signs Monitoring...   $10,571.95
106 Patient Monitor Connex® Spot Check and Vital S...   $5,089.95
107 Patient Monitor Connex® Spot Check and Vital ...    $5,964.95
108 Patient Monitor X Series® Vital Signs Monitori...   $48,978.95
109 Patient Monitor Connex® Spot Check and Vital S...   $4,391.95
110 rows × 2 columns

I'm breaking the loop once I reach 100 items; you can go higher. It's worth noting there is another way to obtain this data as well: scraping the GraphQL endpoint the page pulls it from. Nonetheless, this is how you do it with Selenium. For documentation, see https://www.selenium.dev/documentation/
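If you'd rather not hard-code a cutoff at 100 items, a variation on the same footer-scroll trick is to stop once the card count stops growing. This is only a sketch that plugs into the script above, reusing its wait, By, EC and t objects; the three-retry limit is an assumption:

previous_count = 0
stalled = 0
while stalled < 3:  # give the page a few chances to load more before giving up
    cards = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '[class^="card"]')))
    if len(cards) == previous_count:
        stalled += 1  # nothing new loaded this round
    else:
        stalled = 0
        previous_count = len(cards)
    footer = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'footer[id="footer"]')))
    footer.location_once_scrolled_into_view  # property access scrolls the footer into view
    t.sleep(2)
elements_on_page = cards  # hand the final set of cards to the scraping loop above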


13 Comments

Thanks a lot! This worked. Made my day :). Could you kindly share a link about scraping the GraphQL endpoint? GraphQL seems a more viable option, but I'm not aware of this method, so please point me to where to start. Thanks a lot.
I edited my response to make the retrieved data look more decent. Regarding GraphQL: that would require a long, complex header, and to be honest, the better (less complex) solution in this scenario remains Selenium.
Got it ✅. This seems easy enough for me to understand; I'm still in the learning phase.
@BarrythePlatipus Do you know why element.location_once_scrolled_into_view worked here while the other ways did not? Anyway, +1 for the good catch!
@BarrythePlatipus Just one last thing, I hope I'm not asking too much. Each card has a three-dot submenu to access a quick look. Is it possible to open it for each card and scrape the description as well? Initially I thought of going to each page and then going back, but that seemed like additional overhead. I want to know if it's possible just through the quick-look menu. Some reference code would be appreciated.