It downloads the file from a given Google Drive URL.
In our requirements, we receive Google Drive links. We can resolve the link directly for small files; for large files, however, Google responds with a virus-scan warning page instead of the file itself. A Stack Overflow thread helped resolve that.
However, how do I achieve session persistence with Scrapy? If anyone has encountered a similar scenario, any insights would be helpful. Thanks.
By "login" I mean you can simply pass cookies/headers to the request; in some cases this can even be done with a short Selenium session (not recommended, though):

```python
yield response.follow(
    self.start_urls[0],
    callback=self.after_cookies_inserted,
    cookies=cookies_selenium,
    headers=after_login_headers,
    dont_filter=True,
)
```
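One practical detail if you go the Selenium route: `driver.get_cookies()` returns a list of dicts (`name`, `value`, `domain`, ...), while the simplest form Scrapy's `Request(cookies=...)` accepts is a plain `{name: value}` mapping (Scrapy also accepts the list-of-dicts form). A small conversion helper, with the names `cookies_selenium` and the spider context taken from the answer above as assumptions:

```python
def selenium_to_scrapy_cookies(selenium_cookies):
    """Flatten Selenium's list of cookie dicts into the {name: value}
    mapping that Scrapy's Request(cookies=...) accepts."""
    return {c["name"]: c["value"] for c in selenium_cookies}

# Hypothetical usage inside a spider callback:
#   cookies_selenium = selenium_to_scrapy_cookies(driver.get_cookies())
#   yield response.follow(url, cookies=cookies_selenium, dont_filter=True)
```

After the first request carries these cookies, Scrapy's default cookie middleware keeps them for subsequent requests in the same crawl, which is what gives you the session persistence asked about.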
Guys, can bs4 work with JSON? I scraped a `<script></script>` tag, and inside that script there is JSON. I tried calling `.text` on it and got `AttributeError: 'NoneType' object has no attribute 'text'`, and I found nothing about JSON in the docs.
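bs4 itself does not parse JSON; the usual approach is to locate the `<script>` tag with BeautifulSoup, take its text content, and hand that string to the standard `json` module. The `AttributeError` means your `find()` call returned `None` (no matching tag), not that the JSON was the problem. A sketch, with the HTML and the `id="data"` selector being made-up examples:

```python
import json
from bs4 import BeautifulSoup

html = """
<html><body>
<script id="data" type="application/json">{"items": [1, 2, 3]}</script>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
tag = soup.find("script", id="data")
# Guard against find() returning None -- that is exactly what raises
# AttributeError: 'NoneType' object has no attribute 'text'
if tag is not None:
    data = json.loads(tag.string)  # .string (or .text) gives the raw JSON
```

If the guard trips, your selector does not match anything in the fetched page; print the HTML you actually received and adjust the `find()` arguments first.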