
I am scraping company names as well as company leads from LinkedIn Sales Navigator. While I do get the names of the companies in my output, I fail to get the company leads, i.e. the names of people, from the navigator. Here's the relevant code:

# some code
lead_links = []
leads_button = browser.find_elements_by_class_name('button--unstyled.t-16.font-weight-600.nowrap-ellipsis')
for lead in leads_button:
    wait = WebDriverWait(browser, 10)
    wait.until(EC.element_to_be_clickable(lead.click()))
    #lead.click()
    leads = soup.find_all("div", attrs={
        'class': 'artdeco-entity-lockup__title.artdeco-entity-lockup__title--alt-link.ember-view'})
    for lead_person in leads:
        lead_links.append(lead_person.a["href"])
# some code

What I am trying to do here: I have created an empty lead_links list (which stores each person's URL, to be used later for scraping other information), and I am trying to click the lead buttons. Upon clicking a lead, a panel opens, which greys out the rest of the page and makes it a bit unreadable. From that panel I click the lead, which changes the URL; this URL is then stored in the lead_links list.

And I get this error. This is the complete stack trace:

File "webscrape.py", line 162, in <module>
    linkedin_scraper()
  File "webscrape.py", line 113, in linkedin_scraper
    wait.until(EC.presence_of_all_elements_located(lead.click()))
  File "F:\technophile\venv\lib\site-packages\selenium\webdriver\remote\webelement.py", line 80, in click
    self._execute(Command.CLICK_ELEMENT)
  File "F:\technophile\venv\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
    return self._parent.execute(command, params)
  File "F:\technophile\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "F:\technophile\venv\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <button data-anonymize="person-name" class="button--unstyled t-16 font-weight-600 nowrap-ellipsis" data-control-name="view_lead_panel_via_card_name" type="button">...</button> is not clickable at point (153, 110). Other element would receive the click: <div id="ember717" class="account-sticky-header__lockup artdeco-entity-lockup artdeco-entity-lockup--size-3 ember-view">...</div>
  (Session info: chrome=92.0.4515.159)

I have tried other expected conditions such as EC.presence_of_all_elements_located and EC.visibility_of_all_elements_located, but none of them worked, and I have been stuck on this issue for a long time. Please help me understand how I can solve this error. Thanks.

2 Answers


Instead of

wait.until(EC.element_to_be_clickable(lead.click()))

This calls lead.click() right away, before the wait even starts (the expected condition only receives the call's return value), so the intercepted click is raised on that very line. Try using

wait.until(EC.visibility_of(lead))
time.sleep(0.5)
browser.execute_script("arguments[0].click();", lead)

UPD
To close the greyed-out screen, so that the next lead element can be clicked, try clicking on some other element with ActionChains:

actions = ActionChains(browser)
title = browser.find_element_by_xpath('//title[contains(text(),"LinkedIn")]')
actions.move_to_element(title).click().perform()

The entire code block will be something like this:

from selenium.webdriver.common.action_chains import ActionChains
import time

actions = ActionChains(browser)
title = browser.find_element_by_xpath('//title[contains(text(),"LinkedIn")]')
lead_links = []
leads_button = browser.find_elements_by_class_name('button--unstyled.t-16.font-weight-600.nowrap-ellipsis')
for lead in leads_button:
    wait = WebDriverWait(browser, 10)
    wait.until(EC.visibility_of(lead))
    time.sleep(0.5)
    # JavaScript click, so the sticky header cannot intercept it
    browser.execute_script("arguments[0].click();", lead)
    leads = soup.find_all("div", attrs={
        'class': 'artdeco-entity-lockup__title.artdeco-entity-lockup__title--alt-link.ember-view'})
    for lead_person in leads:
        lead_links.append(lead_person.a["href"])
    # click elsewhere to dismiss the lead panel before the next iteration
    actions.move_to_element(title).click().perform()
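
One more point, which is an assumption on my side since the snippet doesn't show where soup comes from: if soup was built from browser.page_source before the loop, it will not contain the panel that opens after each click, so find_all keeps seeing the old markup. A minimal sketch of re-parsing inside the loop, right after the JavaScript click (the class names are the ones from the question; select() is used because BeautifulSoup matches a dotted class string against the attribute literally rather than as multiple classes):

from bs4 import BeautifulSoup

# ... inside the for-lead loop, right after browser.execute_script("arguments[0].click();", lead) ...
# Re-parse the live DOM so the freshly opened lead panel is included.
soup = BeautifulSoup(browser.page_source, "html.parser")
# CSS selector built from the classes used in the question's find_all call.
for link in soup.select("div.artdeco-entity-lockup__title.artdeco-entity-lockup__title--alt-link.ember-view a"):
    href = link.get("href")
    if href:
        lead_links.append(href)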

10 Comments

Thanks a lot for your help; however, it clicks only a single lead, not all of them.
How do I ensure that it clicks all the leads?
What error occurs there? What happens? Can you share the link so I can see what actually happens there?
No error as such, but on this URL there are multiple people; your answer only clicks on the first person. Once it is clicked, from there I scrape the person's URL and other information.
OK, but YOUR code is intended to iterate over the list of elements leads_button, clicking each one of the leads. So what happens when trying to click the second lead? A stale element exception? Something should occur. I just partially fixed YOUR code; I can't see the web page and what actually goes on there.
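For the "only the first lead gets clicked" problem discussed in the comments above, one pattern worth trying (a sketch only, reusing the class name from the question and the actions/title objects from the answer, and assuming the number of buttons stays constant) is to re-find the buttons on every iteration and click them by index, so that a reference located before the panel opened is never reused after it has gone stale:

LEAD_BUTTON_CLASS = 'button--unstyled.t-16.font-weight-600.nowrap-ellipsis'  # from the question

count = len(browser.find_elements_by_class_name(LEAD_BUTTON_CLASS))
for i in range(count):
    # Re-locate the buttons each time round the loop; elements found before the
    # previous panel opened/closed may have gone stale.
    lead = browser.find_elements_by_class_name(LEAD_BUTTON_CLASS)[i]
    browser.execute_script("arguments[0].scrollIntoView(true);", lead)
    browser.execute_script("arguments[0].click();", lead)
    # ... scrape the opened panel here ...
    # dismiss the panel before moving on to the next button
    actions.move_to_element(title).click().perform()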

First, I think this should be a CSS selector, not a class name.

Replace this:

leads_button = browser.find_elements_by_class_name('button--unstyled.t-16.font-weight-600.nowrap-ellipsis')

with this:

leads_button = browser.find_elements_by_css_selector('.button--unstyled.t-16.font-weight-600.nowrap-ellipsis')

Also, when you click, use ActionChains:

for lead in leads_button:
    wait = WebDriverWait(browser, 10)
    time.sleep(2)
    ActionChains(browser).move_to_element(lead).click().perform()
    # continue with rest of the code here

This should be the import:

from selenium.webdriver.common.action_chains import ActionChains

Update 1:

lead_links = []
leads_button = browser.find_elements_by_class_name('button--unstyled.t-16.font-weight-600.nowrap-ellipsis')
for lead in leads_button:
    wait = WebDriverWait(browser, 10)
    #wait.until(EC.element_to_be_clickable(lead.click()))
    #lead.click()
    # scroll the button into view, then click it with JavaScript so the sticky
    # header cannot intercept the click
    browser.execute_script("arguments[0].scrollIntoView(true);", lead)
    browser.execute_script("arguments[0].click();", lead)
    leads = soup.find_all("div", attrs={
        'class': 'artdeco-entity-lockup__title.artdeco-entity-lockup__title--alt-link.ember-view'})
    for lead_person in leads:
        lead_links.append(lead_person.a["href"])
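
If the hrefs still come back empty after this, it may be because soup was parsed before any click happened (the snippet doesn't show where it is created, so this is an assumption). A sketch of collecting the links directly through Selenium instead, waiting for the panel's title links to render after the JavaScript click; the CSS selector is inferred from the classes used in the question:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# ... inside the for-lead loop, right after the JavaScript click ...
wait = WebDriverWait(browser, 10)
# Wait until the lead panel has rendered its title links, then read the hrefs.
links = wait.until(EC.presence_of_all_elements_located(
    (By.CSS_SELECTOR, "div.artdeco-entity-lockup__title a")))
for link in links:
    href = link.get_attribute("href")
    if href and href not in lead_links:  # skip duplicates across iterations
        lead_links.append(href)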

12 Comments

Thanks for your immediate response, cruise. This, however, is not working: it neither clicks the button, nor do I get the URLs printed on my terminal :(
Can you share the URL? And what exactly are you trying to achieve? Also, what went wrong, is there any error stack trace?
linkedin.com/sales/company/… this is the URL. From here there are people; I want to scrape the links of the people, and from there I will scrape their other information.
I see two options, Sales Navigator Professional and Sales Navigator Team. Is it just for premium customers?
Yes, it's premium!
