Scraping and Downloading Baidu Wenku Documents with Python

This article shows how to scrape documents from Baidu Wenku and download them with a small Python script. The walkthrough is fairly detailed; readers who are interested can use it as a reference, and I hope it proves helpful.


The complete script is as follows:

import requests
import re
import json
import os

# One shared session so cookies persist across requests
session = requests.Session()


def fetch_url(url):
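  # Download a page with the shared session and decode it as GBK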
  return session.get(url).content.decode('gbk')


def get_doc_id(url):
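  # Extract the document id from a URL of the form .../view/<doc_id>.html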
  return re.findall('view/(.*).html', url)[0]


def parse_type(content):
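  # Read the docType value (e.g. 'doc', 'txt', 'ppt') embedded in the page source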
  return re.findall(r"docType.*?\:.*?\'(.*?)\'\,", content)[0]


def parse_title(content):
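  # Read the document title embedded in the page source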
  return re.findall(r"title.*?\:.*?\'(.*?)\'\,", content)[0]


def parse_doc(content):
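  # 'doc' type: fetch every page's JSON and rebuild the text line by line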
  result = ''
  # Collect the escaped per-page JSON URLs embedded in the page source
  url_list = re.findall('(https.*?0.json.*?)\\\\x22}', content)
  url_list = [addr.replace("\\\\\\/", "/") for addr in url_list]
  # Walk the page URLs, skipping the trailing five entries
  for url in url_list[:-5]:
    content = fetch_url(url)
    y = 0
    txtlists = re.findall('"c":"(.*?)".*?"y":(.*?),', content)
    for item in txtlists:
      if y != item[1]:  # a new y coordinate means a new line of text
        y = item[1]
        n = '\n'
      else:
        n = ''
      result += n
      result += item[0].encode('utf-8').decode('unicode_escape', 'ignore')
  return result


def parse_txt(doc_id):
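  # 'txt' type: ask getdocinfo for the paging parameters, then fetch the full text as JSON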
  content_url = 'https://wenku.baidu.com/api/doc/getdocinfo?callback=cb&doc_id=' + doc_id
  content = fetch_url(content_url)
  md5 = re.findall('"md5sum":"(.*?)"', content)[0]
  pn = re.findall('"totalPageNum":"(.*?)"', content)[0]
  rsign = re.findall('"rsign":"(.*?)"', content)[0]
  content_url = 'https://wkretype.bdimg.com/retype/text/' + doc_id + '?rn=' + pn + '&type=txt' + md5 + '&rsign=' + rsign
  content = json.loads(fetch_url(content_url))
  result = ''
  for item in content:
    for i in item['parags']:
      result += i['c'].replace('\\r', '\r').replace('\\n', '\n')
  return result


def parse_other(doc_id):
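  # Other types (ppt/pdf/xls): save each page as a JPEG in a folder named after the doc id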
  content_url = "https://wenku.baidu.com/browse/getbcsurl?doc_id=" + doc_id + "&pn=1&rn=99999&type=ppt"
  content = fetch_url(content_url)
  url_list = re.findall('{"zoom":"(.*?)","page"', content)
  url_list = [item.replace("\\", '') for item in url_list]
  if not os.path.exists(doc_id):
    os.mkdir(doc_id)
  for index, url in enumerate(url_list):
    content = session.get(url).content
    path = os.path.join(doc_id, str(index) + '.jpg')
    with open(path, 'wb') as f:
      f.write(content)
  print("Images saved in the " + doc_id + " folder")


def save_file(filename, content):
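  # Write the extracted text to disk as UTF-8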
  with open(filename, 'w', encoding='utf8') as f:
    f.write(content)
    print('Saved as: ' + filename)


# test_txt_url = 'https://wenku.baidu.com/view/cbb4af8b783e0912a3162a89.html'
# test_ppt_url = 'https://wenku.baidu.com/view/2b7046e3f78a6529657d5376.html'
# test_pdf_url = 'https://wenku.baidu.com/view/dd6e15c1227916888586d795.html'
# test_xls_url = 'https://wenku.baidu.com/view/eb4a5bb7312b3169a551a481.html'
def main():
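  # Prompt for a Wenku URL and dispatch on the document type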
  url = input('Enter the Wenku document URL to download: ')
  content = fetch_url(url)
  doc_id = get_doc_id(url)
  doc_type = parse_type(content)  # avoid shadowing the built-in type()
  title = parse_title(content)
  if doc_type == 'doc':
    result = parse_doc(content)
    save_file(title + '.txt', result)
  elif doc_type == 'txt':
    result = parse_txt(doc_id)
    save_file(title + '.txt', result)
  else:
    parse_other(doc_id)


if __name__ == "__main__":
  main()
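
If you would rather drive the helpers directly than go through the input() prompt, a minimal sketch (assuming the txt test URL in the comments above is still reachable) looks like this:

# Hypothetical non-interactive run, reusing the functions above
doc_url = 'https://wenku.baidu.com/view/cbb4af8b783e0912a3162a89.html'
page = fetch_url(doc_url)
if parse_type(page) == 'txt':
  save_file(parse_title(page) + '.txt', parse_txt(get_doc_id(doc_url)))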

That wraps up scraping and downloading Baidu Wenku documents with Python. I hope the above is of some help and that you were able to learn something new. If you found the article useful, please share it so more people can see it.

