My script reads links from a CSV file and scrapes some info from each web page. Some links don't work and the script falls over. I've added a try/except, but this messes up my output, since I need exactly as many output rows as there are rows in the original file.
    for row in reader:
        try:
            url = row[4]
            req = urllib2.Request(url)
            tree = lxml.html.fromstring(urllib2.urlopen(req).read())
        except:
            continue
Is there a way to delete a row from the CSV file when it contains a faulty link? Something like:
    for row in reader:
        try:
            url = row[4]
            req = urllib2.Request(url)
            tree = lxml.html.fromstring(urllib2.urlopen(req).read())
        except:
            DELETE_THE_ROW
            continue
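You can't delete a row in place while `csv.reader` is iterating over the file; the usual pattern is to write the surviving rows to a new file. A minimal Python 3 sketch (`urllib2` became `urllib.request` in Python 3; `link_ok` and `copy_good_rows` are hypothetical names, not stdlib functions):

```python
import csv
import urllib.request
import urllib.error

def link_ok(url, timeout=5):
    """Return True if the URL can be fetched, False otherwise."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, ValueError):
        return False

def copy_good_rows(in_path, out_path, url_col=4, check=link_ok):
    """Copy only rows whose URL (in column url_col) is fetchable."""
    with open(in_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            if check(row[url_col]):
                writer.writerow(row)
```

The `check` parameter is injectable so you can test the filtering logic without hitting the network.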
"need the exact amount of output rows as in the original file"