
I have my Scrapy bot running on two different systems; they are identical copies, but one works properly while the other doesn't. When I run the crawl command with -t csv -o data.csv, I get the following traceback:

Traceback (most recent call last):
  File "/home/scraper/.python/bin/scrapy", line 4, in <module>
    execute()
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 50, in run
    self.crawler_process.start()
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 92, in start
    if self.start_crawling():
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 46, in configure
    self.extensions = ExtensionManager.from_crawler(self)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/middleware.py", line 50, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/middleware.py", line 31, in from_settings
    mw = mwcls.from_crawler(crawler)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 162, in from_crawler
    o = cls(crawler.settings)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 144, in __init__
    if not self._storage_supported(self.urifmt):
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 214, in _storage_supported
    self._get_storage(uri)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 225, in _get_storage
    return self.storages[urlparse(uri).scheme](uri)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 70, in __init__
    self.path = file_uri_to_path(uri)
  File "/home/scraper/.python/lib/python2.7/site-packages/w3lib/url.py", line 141, in file_uri_to_path
    uri_path = moves.urllib.parse.urlparse(uri).path
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'urlparse'
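
For reference, the failing call can be isolated from Scrapy itself (a minimal sketch; the file URI is only an example, and w3lib.url.file_uri_to_path does essentially the same thing):

    # Reproduce the last call in the traceback without Scrapy.
    from six import moves
    print(moves.urllib.parse.urlparse('file:///home/scraper/data.csv').path)

On the working system this should print /home/scraper/data.csv; on the broken one it should raise the same AttributeError.
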
  • Please paste your spider's code. – Commented Jan 17, 2014 at 2:12

1 Answer


It looks like your six module is not the version required by w3lib.

Try:

     pip install -U w3lib six
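
If the upgrade doesn't seem to take effect, it may help to confirm which copy of six the interpreter actually imports. A quick sketch (the file URI is only an example; w3lib relies on six's urllib moves, which were added around six 1.4):

    # Check the six that Scrapy's interpreter picks up after upgrading.
    import six
    from six import moves

    print(six.__version__)
    # Should live under the same site-packages as w3lib in the traceback
    # (/home/scraper/.python/lib/python2.7/site-packages); if it doesn't,
    # an older copy of six elsewhere on sys.path may be shadowing the new one.
    print(six.__file__)
    # No AttributeError here means the feed export should work again.
    print(moves.urllib.parse.urlparse('file:///tmp/data.csv').path)
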