Open URLs from a list of URLs using urllib


New to Python and to Stack Overflow.

I'm trying to scrape data from a number of links on a page. I have collected the URLs and placed them in a list (list_of_urls_edit).

I know I want to iterate through the list so I can scrape information off each page. I'm trying it this way:

    import urllib.request
    from bs4 import BeautifulSoup

    for i in list_of_urls_edit:
        page_1 = urllib.request.urlopen(i).read()
        soup_1 = BeautifulSoup(page_1, 'html.parser')

This is returning the following error:

RecursionError: maximum recursion depth exceeded while calling a Python object

Every method I've tried for iterating through the list has given me the same error.
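For reference, here is a minimal, self-contained sketch of what I'm attempting. The URLs below are placeholders; my real list_of_urls_edit is built by the earlier scraping step:

    import urllib.request
    from bs4 import BeautifulSoup

    # Placeholder URLs standing in for the real list collected earlier
    list_of_urls_edit = [
        'https://example.com/page1',
        'https://example.com/page2',
    ]

    soups = []
    for url in list_of_urls_edit:
        page = urllib.request.urlopen(url).read()           # fetch the raw HTML bytes
        soups.append(BeautifulSoup(page, 'html.parser'))    # parse each page into its own soup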

Any tips on what might be the cause of this?

Sorry if this has been answered on here before.

