
I'm trying to get the content of this website using the PHP simplehtmldom library:

http://www.immigration.govt.nz/migrant/stream/work/workingholiday/czechwhs.htm

It is not working, so I tried using cURL:

function curl_get_file_contents($URL)
{
    $c = curl_init();
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1); // return the body instead of printing it
    curl_setopt($c, CURLOPT_URL, $URL);
    $contents = curl_exec($c);
    curl_close($c);

    if ($contents) return $contents;
    else return FALSE;
}
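(Editor's note: the stub above can be hardened a little. Sending a browser-like User-Agent, following redirects, and checking `curl_errno()` rules out the most common non-JavaScript causes of a stub response. This is a sketch, not the asker's code; it still will not execute JS, and the function name and User-Agent string are illustrative.)

```php
<?php
// Sketch: a slightly hardened variant of curl_get_file_contents().
// Follows redirects, sets timeouts and a browser-like User-Agent,
// and distinguishes transport errors from an empty body.
function curl_get_file_contents_v2($url)
{
    $c = curl_init($url);
    curl_setopt_array($c, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,  // follow 301/302 redirects
        CURLOPT_CONNECTTIMEOUT => 10,
        CURLOPT_TIMEOUT        => 30,
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; scraper)', // illustrative UA
    ));
    $contents = curl_exec($c);
    $errno    = curl_errno($c);
    curl_close($c);

    // Return false on any transport error (DNS failure, timeout, etc.)
    return ($errno === 0 && $contents !== false) ? $contents : false;
}
```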

But I always get only a response with some JS code and this content:

<noscript>Please enable JavaScript to view the page content.</noscript>

Is there any way to solve this using PHP? I must use PHP in this case, so I need to simulate a JS-capable browser.
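(Editor's note: whatever approach is used, it helps to detect this placeholder response before parsing, so the scraper fails fast instead of feeding an empty shell to simplehtmldom. A small sketch; the marker string is taken from the response quoted above and is a per-site heuristic.)

```php
<?php
// Sketch: detect a "JavaScript required" placeholder page before parsing.
// Heuristic only: the marker string comes from the response quoted above,
// and a bare <noscript> tag does not always mean the page needs JS.
function needs_javascript($html)
{
    return stripos($html, 'Please enable JavaScript') !== false;
}
```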

Many thanks for any advice.

1 Answer

I must use PHP in this case, so I need to simulate a JS-capable browser.

I'd recommend two approaches:

  1. Leverage the v8js PHP extension to deal with the site's JS when scraping. See a usage example here.
  2. Simulate a JS-based browser using Selenium, iMacros, or the webRobots.io Chrome extension. But in this case you are outside of PHP scripting.
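(Editor's note: option 1 can be sketched as below. This assumes the PECL v8js extension is installed; the wrapper name `run_page_js` is illustrative, and the snippet degrades to `null` when the extension is missing.)

```php
<?php
// Sketch: evaluating a page's JavaScript with the v8js PECL extension.
// Returns the value of the last expression, or null if v8js is not
// installed or the script throws.
function run_page_js(string $js)
{
    if (!class_exists('V8Js')) {
        return null; // v8js extension not available
    }
    $v8 = new V8Js();
    try {
        return $v8->executeString($js);
    } catch (\Throwable $e) {
        return null; // script error inside V8
    }
}
```

In practice you would extract the site's inline scripts from the cURL response and feed them to `run_page_js()`; note that v8js is a bare JS engine, so browser objects such as `document` and `window` must be stubbed out yourself.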

1 Comment

@redrom, thanks for accepting my answer. Could you share which of those two options helped you, and how you applied it? I ask because I do web-scraping research and post the results on the scraping.pro blog. Any feedback would be appreciated!
