Scraping information from a website with Python

Published: 2020-06-09 21:43:54  Author: li371573016
Source: web  Views: 381
import requests
from bs4 import BeautifulSoup

# Lianjia tends to reject requests without a browser-like User-Agent,
# so send one. (The exact value is not critical; any common browser UA works.)
HEADERS = {'User-Agent': 'Mozilla/5.0'}

def getpage(url):
    """Fetch a page and return it as a parsed BeautifulSoup tree."""
    response = requests.get(url, headers=HEADERS)
    soup = BeautifulSoup(response.text, 'lxml')
    return soup

def getlinks(link_url):
    """Collect the detail-page links from one listing-index page."""
    soup = getpage(link_url)
    link_div = soup.find_all('div', class_='pic-panel')
    links = [div.a.get('href') for div in link_div]
    return links
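To see what `getlinks()` is doing without touching the network, here is a dependency-free sketch of the same extraction (first `<a href>` inside each `<div class="pic-panel">`) using only the standard library's `html.parser`. The sample HTML and the `pic-panel` class name mirror the Lianjia markup the article assumes.

```python
from html.parser import HTMLParser

# Sample markup standing in for a Lianjia listing-index page.
SAMPLE = """
<div class="pic-panel"><a href="https://bj.lianjia.com/zufang/1.html">A</a></div>
<div class="other"><a href="https://example.com/skip.html">B</a></div>
<div class="pic-panel"><a href="https://bj.lianjia.com/zufang/2.html">C</a></div>
"""

class LinkCollector(HTMLParser):
    """Collect hrefs of <a> tags that sit inside a pic-panel <div>."""

    def __init__(self):
        super().__init__()
        self.in_panel = 0      # nesting depth inside a pic-panel div
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'div':
            # Enter (or stay inside) a pic-panel subtree.
            if self.in_panel or attrs.get('class') == 'pic-panel':
                self.in_panel += 1
        elif tag == 'a' and self.in_panel:
            href = attrs.get('href')
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == 'div' and self.in_panel:
            self.in_panel -= 1

parser = LinkCollector()
parser.feed(SAMPLE)
print(parser.links)
# ['https://bj.lianjia.com/zufang/1.html', 'https://bj.lianjia.com/zufang/2.html']
```

In the article itself BeautifulSoup does this in two lines; the point here is only that the list comprehension maps each matched `div` to the `href` of its first anchor.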
url = 'https://bj.lianjia.com/zufang/'                          # listing-index page
house_url = 'https://bj.lianjia.com/zufang/101102926709.html'   # one detail page
def get_house_info(house_url):
    """Scrape one detail page and return its fields as a dict."""
    soup = getpage(house_url)
    price = soup.find('span', class_='total').text
    unit = soup.find('span', class_='unit').text.strip()
    # Each attribute on the detail page sits in a <p> whose text starts with a
    # Chinese label; the slices below drop that label (e.g. '面积：' is 3 chars).
    # Positional indexing into find_all('p') is fragile and breaks if the page
    # layout changes.
    house_info = soup.find_all('p')
    area = house_info[0].text[3:]
    layout = house_info[1].text[5:]
    floor = house_info[2].text[3:]
    direction = house_info[3].text[5:]
    location = house_info[4].text[3:]
    xiaoqu_location = house_info[5].text[3:7]
    create_time = house_info[6].text[3:]
    info = {
        '面积': area,
        '分布': layout,
        '楼层': floor,
        '方向': direction,
        '价格': price,
        '单价': unit,
        '地铁': location,
        '小区': xiaoqu_location,
        '时间': create_time,
    }
    return info
house = get_house_info(house_url)
for k, v in house.items():
    print('{}:{}'.format(k, v))
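The `[3:]` and `[5:]` slices in `get_house_info()` simply strip the Chinese label that precedes each value; Python 3 slices strings by character, so a 3-character label like `面积：` is removed by `[3:]`. The label strings below are assumptions about the page text, not scraped values.

```python
# Hypothetical <p> texts standing in for the Lianjia detail page.
area_p = '面积：36㎡'          # label '面积：' is 3 characters
layout_p = '房屋户型：2室1厅'  # label '房屋户型：' is 5 characters

print(area_p[3:])    # 36㎡
print(layout_p[5:])  # 2室1厅
```

A sturdier alternative that does not depend on label length is `text.split('：', 1)[-1]`, which splits on the full-width colon and keeps only the value.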