This article walks through a worked example of Ceph heartbeats and networking: a cluster that went into ERR after a restart, three configuration tests, and what the documentation says about public and cluster networks.
Environment: three machines, bdc212, bdc213, and bdc214, each running 2 OSDs, with three-way replication. Each machine is dual-homed, with one IP on the 192.168.8.0 subnet and one on the 192.168.13.0 subnet.
Right after installation, a cluster-restart test was run. After the OSDs on bdc212 restarted, the cluster stayed in the ERR state:
# ceph -s
    cluster befd161e-ce9e-427d-9d9b-b683b744645c
     health HEALTH_ERR
            60 pgs are stuck inactive for more than 300 seconds
            256 pgs peering
            60 pgs stuck inactive
     monmap e3: 3 mons at {bdc212=192.168.13.212:6789/0,bdc213=192.168.13.213:6789/0,bdc214=192.168.13.214:6789/0}
            election epoch 48, quorum 0,1,2 bdc212,bdc213,bdc214
     osdmap e572: 6 osds: 6 up, 6 in; 92 remapped pgs
            flags sortbitwise
      pgmap v378651: 256 pgs, 2 pools, 7410 MB data, 1 objects
            22517 MB used, 22288 GB / 22310 GB avail
                 164 peering
                  92 remapped+peering
Checking the logs revealed an IP address mismatch: in the highlighted entries the two addresses clearly do not agree. After restarting each machine several times, the addresses sometimes matched and the cluster recovered, but most of the time it stayed in the error state.
2016-06-06 17:49:11.262293 mon.0 192.168.13.212:6789/0 294 : cluster [INF] osd.1 192.168.13.212:6804/277426 boot
2016-06-06 17:49:11.262620 mon.0 192.168.13.212:6789/0 295 : cluster [INF] osd.0 192.168.13.212:6800/277433 boot
2016-06-06 17:49:11.264871 mon.0 192.168.13.212:6789/0 296 : cluster [INF] osdmap e570: 6 osds: 6 up, 6 in
2016-06-06 17:49:11.267704 mon.0 192.168.13.212:6789/0 297 : cluster [INF] pgmap v378644: 256 pgs: 48 stale+active+clean, 126 peering, 82 active+clean; 7410 MB data, 22514 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:12.272648 mon.0 192.168.13.212:6789/0 298 : cluster [INF] osdmap e571: 6 osds: 6 up, 6 in
2016-06-06 17:49:12.282714 mon.0 192.168.13.212:6789/0 299 : cluster [INF] pgmap v378645: 256 pgs: 48 stale+active+clean, 208 peering; 7410 MB data, 22516 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:13.271829 mon.0 192.168.13.212:6789/0 300 : cluster [INF] osdmap e572: 6 osds: 6 up, 6 in
2016-06-06 17:49:13.275671 mon.0 192.168.13.212:6789/0 301 : cluster [INF] pgmap v378646: 256 pgs: 48 stale+active+clean, 208 peering; 7410 MB data, 22516 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:12.268712 osd.1 192.168.13.212:6804/277426 1 : cluster [ERR] map e571 had wrong cluster addr (192.168.13.212:6806/277426 != my 192.168.8.212:6806/277426)
2016-06-06 17:49:12.268718 osd.0 192.168.13.212:6800/277433 1 : cluster [ERR] map e571 had wrong cluster addr (192.168.13.212:6801/277433 != my 192.168.8.212:6801/277433)
2016-06-06 17:49:14.303244 mon.0 192.168.13.212:6789/0 302 : cluster [INF] pgmap v378647: 256 pgs: 92 remapped+peering, 48 stale+active+clean, 116 peering; 7410 MB data, 22518 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:16.686779 mon.0 192.168.13.212:6789/0 303 : cluster [INF] pgmap v378648: 256 pgs: 92 remapped+peering, 48 stale+active+clean, 116 peering; 7410 MB data, 22519 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:17.693779 mon.0 192.168.13.212:6789/0 304 : cluster [INF] pgmap v378649: 256 pgs: 92 remapped+peering, 164 peering; 7410 MB data, 22517 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:19.709314 mon.0 192.168.13.212:6789/0 305 : cluster [INF] pgmap v378650: 256 pgs: 92 remapped+peering, 164 peering; 7410 MB data, 22517 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:21.716720 mon.0 192.168.13.212:6789/0 306 : cluster [INF] pgmap v378651: 256 pgs: 92 remapped+peering, 164 peering; 7410 MB data, 22517 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:49:24.256323 mon.0 192.168.13.212:6789/0 307 : cluster [INF] HEALTH_ERR; 60 pgs are stuck inactive for more than 300 seconds; 256 pgs peering; 60 pgs stuck inactive
2016-06-06 17:49:55.276736 mon.0 192.168.13.212:6789/0 322 : cluster [INF] osd.0 192.168.13.212:6800/277433 failed (2 reporters from different host after 20.648970 >= grace 20.000000)
2016-06-06 17:49:55.276847 mon.0 192.168.13.212:6789/0 323 : cluster [INF] osd.1 192.168.13.212:6804/277426 failed (2 reporters from different host after 20.648910 >= grace 20.000000)
cluster [ERR] map e571 had wrong cluster addr (192.168.13.212:6806/277426 != my 192.168.8.212:6806/277426)
cluster [ERR] map e571 had wrong cluster addr (192.168.13.212:6801/277433 != my 192.168.8.212:6801/277433)
Following the configuration reference in the official documentation, three tests were performed:
Test 1: modify the configuration file:
[osd.0]
host = bdc212
osd heartbeat address = 192.168.8.212
[osd.1]
host = bdc212
osd heartbeat address = 192.168.8.212
Restart the OSDs on bdc212:
# systemctl restart ceph-osd.target
Watching ceph -w shows that the setting has no effect after the restart; changing the heartbeat address to 192.168.13.212 instead also left the cluster in the ERR state.
2016-06-06 10:35:43.828677 mon.0 [INF] osdmap e540: 6 osds: 4 up, 6 in
2016-06-06 10:35:43.834786 mon.0 [INF] pgmap v378545: 256 pgs: 26 peering, 230 remapped+peering; 7410 MB data, 22505 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:35:45.863266 mon.0 [INF] pgmap v378546: 256 pgs: 126 active+undersized+degraded, 26 peering, 104 remapped+peering; 7410 MB data, 22506 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:35:47.878381 mon.0 [INF] pgmap v378547: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22507 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:35:50.365666 mon.0 [INF] pgmap v378548: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:35:52.817568 mon.0 [INF] pgmap v378549: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:35:53.820444 mon.0 [INF] pgmap v378550: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:36:24.144881 mon.0 [INF] HEALTH_WARN; 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized; recovery 1/3 objects degraded (33.333%); 2/6 in osds are down
2016-06-06 10:39:20.452872 mon.0 [INF] from='client.? 192.168.13.212:0/819457382' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=bdc212", "root=default"], "id": 1, "weight": 3.6313}]: dispatch
2016-06-06 10:39:20.484124 mon.0 [INF] from='client.? 192.168.13.212:0/2428259865' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=bdc212", "root=default"], "id": 0, "weight": 3.6313}]: dispatch
2016-06-06 10:39:21.520292 mon.0 [INF] osd.1 192.168.13.212:6800/255276 boot
2016-06-06 10:39:21.520479 mon.0 [INF] osd.0 192.168.13.212:6804/255283 boot
2016-06-06 10:39:21.521637 mon.0 [INF] osdmap e541: 6 osds: 6 up, 6 in
2016-06-06 10:39:21.524495 mon.0 [INF] pgmap v378551: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:39:22.531697 mon.0 [INF] osdmap e542: 6 osds: 6 up, 6 in
2016-06-06 10:39:22.533615 mon.0 [INF] pgmap v378552: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:39:23.549374 mon.0 [INF] pgmap v378553: 256 pgs: 130 remapped+peering, 126 active+undersized+degraded; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:39:24.145709 mon.0 [INF] HEALTH_WARN; 126 pgs degraded; 130 pgs peering; 126 pgs stuck unclean; 126 pgs undersized; recovery 1/3 objects degraded (33.333%)
2016-06-06 10:39:25.654043 mon.0 [INF] pgmap v378554: 256 pgs: 256 remapped+peering; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:39:27.659595 mon.0 [INF] pgmap v378555: 256 pgs: 256 remapped+peering; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:39:28.670168 mon.0 [INF] pgmap v378556: 256 pgs: 256 remapped+peering; 7410 MB data, 22510 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:39:30.678376 mon.0 [INF] pgmap v378557: 256 pgs: 256 remapped+peering; 7410 MB data, 22510 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:39:32.687783 mon.0 [INF] pgmap v378558: 256 pgs: 256 remapped+peering; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:39:33.697850 mon.0 [INF] pgmap v378559: 256 pgs: 256 remapped+peering; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:39:51.708232 osd.1 [ERR] map e542 had wrong cluster addr (192.168.13.212:6801/255276 != my 192.168.8.212:6801/255276)
2016-06-06 10:39:51.748896 osd.0 [ERR] map e542 had wrong cluster addr (192.168.13.212:6805/255283 != my 192.168.8.212:6805/255283)
2016-06-06 10:40:10.350113 mon.0 [INF] osd.0 192.168.13.212:6804/255283 failed (2 reporters from different host after 21.000496 >= grace 20.000000)
2016-06-06 10:40:10.350839 mon.0 [INF] osd.1 192.168.13.212:6800/255276 failed (2 reporters from different host after 21.001158 >= grace 20.000000)
2016-06-06 10:40:10.412630 mon.0 [INF] osdmap e543: 6 osds: 4 up, 6 in
2016-06-06 10:40:10.419756 mon.0 [INF] pgmap v378560: 256 pgs: 256 remapped+peering; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:40:11.416130 mon.0 [INF] osdmap e544: 6 osds: 4 up, 6 in
2016-06-06 10:40:11.418453 mon.0 [INF] pgmap v378561: 256 pgs: 256 remapped+peering; 7410 MB data, 22509 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:40:13.446063 mon.0 [INF] pgmap v378562: 256 pgs: 130 active+undersized+degraded, 126 remapped+peering; 7410 MB data, 22510 MB used, 22288 GB / 22310 GB avail
Test 2: modify the configuration file:
[osd.0]
cluster addr = 192.168.8.212
[osd.1]
cluster addr = 192.168.8.212
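The documentation describes a precedence for how an OSD arrives at its cluster address: an explicit cluster addr wins; otherwise the daemon picks a local IP that falls inside the configured cluster network; otherwise it falls back to the public side. The sketch below is only an illustrative model of that selection order, not Ceph's actual code; the function name and signature are made up for the example:

```python
import ipaddress

def pick_cluster_addr(local_ips, cluster_addr=None, cluster_network=None):
    """Rough model of an OSD's cluster-address selection:
    explicit 'cluster addr' > first local IP inside 'cluster network'
    > fall back to the first (public) address."""
    if cluster_addr:
        return cluster_addr
    if cluster_network:
        net = ipaddress.ip_network(cluster_network, strict=False)
        for ip in local_ips:
            if ipaddress.ip_address(ip) in net:
                return ip
    return local_ips[0]

# bdc212 has one address on each subnet
ips = ["192.168.13.212", "192.168.8.212"]
print(pick_cluster_addr(ips, cluster_addr="192.168.8.212"))      # explicit setting wins
print(pick_cluster_addr(ips, cluster_network="192.168.8.0/24"))  # matched by network
print(pick_cluster_addr(ips))                                    # nothing set: public side
```

This also suggests why Test 2 is deterministic: pinning cluster addr removes the interface-ordering ambiguity that made the earlier restarts succeed only sometimes.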
Restart the OSD services on bdc212 again. Testing shows that a static cluster address works not only with an IP on the 8 subnet but also with one on the 13 subnet.
2016-06-06 10:45:24.147360 mon.0 [INF] HEALTH_WARN; 160 pgs degraded; 256 pgs stuck unclean; 160 pgs undersized; recovery 1/3 objects degraded (33.333%)
2016-06-06 10:45:25.924692 mon.0 [INF] pgmap v378575: 256 pgs: 96 active+remapped, 160 active+undersized+degraded; 7410 MB data, 15015 MB used, 14859 GB / 14873 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:45:27.932786 mon.0 [INF] pgmap v378576: 256 pgs: 96 active+remapped, 160 active+undersized+degraded; 7410 MB data, 15015 MB used, 14859 GB / 14873 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:52:48.811978 mon.0 [INF] from='client.? 192.168.13.212:0/891703336' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=bdc212", "root=default"], "id": 1, "weight": 3.6313}]: dispatch
2016-06-06 10:52:48.813694 mon.0 [INF] from='client.? 192.168.13.212:0/4288153588' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=bdc212", "root=default"], "id": 0, "weight": 3.6313}]: dispatch
2016-06-06 10:52:49.881762 mon.0 [INF] osd.1 192.168.13.212:6804/261511 boot
2016-06-06 10:52:49.882013 mon.0 [INF] osd.0 192.168.13.212:6800/261514 boot
2016-06-06 10:52:49.884250 mon.0 [INF] osdmap e548: 6 osds: 6 up, 6 in
2016-06-06 10:52:49.886696 mon.0 [INF] pgmap v378577: 256 pgs: 96 active+remapped, 160 active+undersized+degraded; 7410 MB data, 15015 MB used, 14859 GB / 14873 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 10:52:50.904273 mon.0 [INF] osdmap e549: 6 osds: 6 up, 6 in
2016-06-06 10:52:50.923353 mon.0 [INF] pgmap v378579: 256 pgs: 126 remapped+peering, 51 active+remapped, 79 active+undersized+degraded; 7410 MB data, 15015 MB used, 14859 GB / 14873 GB avail
2016-06-06 10:52:51.906771 mon.0 [INF] osdmap e550: 6 osds: 6 up, 6 in
2016-06-06 10:52:51.909719 mon.0 [INF] pgmap v378580: 256 pgs: 126 remapped+peering, 51 active+remapped, 79 active+undersized+degraded; 7410 MB data, 15015 MB used, 14859 GB / 14873 GB avail
2016-06-06 10:52:53.920610 mon.0 [INF] pgmap v378581: 256 pgs: 82 active+clean, 126 remapped+peering, 35 active+remapped, 13 active+undersized+degraded; 7410 MB data, 15016 MB used, 14859 GB / 14873 GB avail
2016-06-06 10:52:55.941817 mon.0 [INF] pgmap v378582: 256 pgs: 256 active+clean; 7410 MB data, 22518 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:52:58.057115 mon.0 [INF] pgmap v378583: 256 pgs: 256 active+clean; 7410 MB data, 22519 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:53:00.065731 mon.0 [INF] pgmap v378584: 256 pgs: 256 active+clean; 7410 MB data, 22519 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:53:01.069834 mon.0 [INF] pgmap v378585: 256 pgs: 256 active+clean; 7410 MB data, 22519 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:53:03.085171 mon.0 [INF] pgmap v378586: 256 pgs: 256 active+clean; 7410 MB data, 22519 MB used, 22288 GB / 22310 GB avail
2016-06-06 10:53:24.149488 mon.0 [INF] HEALTH_OK
The cluster returned to normal.
Test 3: first reproduce the ERR state seen before the first test, then add the following under the [global] section:
cluster network = 192.168.13.212/16
Restart again:
2016-06-06 17:34:24.251564 mon.0 [INF] HEALTH_WARN; 256 pgs peering
2016-06-06 17:34:37.582278 osd.1 [ERR] map e562 had wrong cluster addr (192.168.13.212:6805/276029 != my 192.168.8.212:6805/276029)
2016-06-06 17:34:37.586347 osd.0 [ERR] map e562 had wrong cluster addr (192.168.13.212:6801/276025 != my 192.168.8.212:6801/276025)
2016-06-06 17:34:56.509186 mon.0 [INF] osd.0 192.168.13.212:6800/276025 failed (2 reporters from different host after 22.645655 >= grace 20.000000)
2016-06-06 17:34:56.509895 mon.0 [INF] osd.1 192.168.13.212:6804/276029 failed (2 reporters from different host after 22.646360 >= grace 20.000000)
2016-06-06 17:34:56.571704 mon.0 [INF] osdmap e563: 6 osds: 4 up, 6 in
2016-06-06 17:34:56.576604 mon.0 [INF] pgmap v378626: 256 pgs: 256 remapped+peering; 7410 MB data, 22505 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:34:57.584605 mon.0 [INF] osdmap e564: 6 osds: 4 up, 6 in
2016-06-06 17:34:57.589648 mon.0 [INF] pgmap v378627: 256 pgs: 256 remapped+peering; 7410 MB data, 22505 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:34:59.611818 mon.0 [INF] pgmap v378628: 256 pgs: 126 active+undersized+degraded, 130 remapped+peering; 7410 MB data, 22505 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 17:35:01.623694 mon.0 [INF] pgmap v378629: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22507 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 17:35:03.919856 mon.0 [INF] pgmap v378630: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 17:35:06.564530 mon.0 [INF] pgmap v378631: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 17:35:24.251890 mon.0 [INF] HEALTH_WARN; 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized; recovery 1/3 objects degraded (33.333%); 2/6 in osds are down
2016-06-06 17:36:22.468740 mon.0 [INF] from='client.? 192.168.13.212:0/2865614433' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=bdc212", "root=default"], "id": 1, "weight": 3.6313}]: dispatch
2016-06-06 17:36:22.490094 mon.0 [INF] from='client.? 192.168.13.212:0/1154763864' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=bdc212", "root=default"], "id": 0, "weight": 3.6313}]: dispatch
2016-06-06 17:36:23.534519 mon.0 [INF] osd.1 192.168.13.212:6800/276823 boot
2016-06-06 17:36:23.534729 mon.0 [INF] osd.0 192.168.13.212:6804/276828 boot
2016-06-06 17:36:23.536388 mon.0 [INF] osdmap e565: 6 osds: 6 up, 6 in
2016-06-06 17:36:23.538601 mon.0 [INF] pgmap v378632: 256 pgs: 256 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail; 1/3 objects degraded (33.333%)
2016-06-06 17:36:24.252318 mon.0 [INF] HEALTH_WARN; 256 pgs degraded; 256 pgs stuck unclean; 256 pgs undersized; recovery 1/3 objects degraded (33.333%)
2016-06-06 17:36:24.551431 mon.0 [INF] pgmap v378633: 256 pgs: 126 remapped+peering, 130 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:24.568370 mon.0 [INF] osdmap e566: 6 osds: 6 up, 6 in
2016-06-06 17:36:24.570434 mon.0 [INF] pgmap v378634: 256 pgs: 126 remapped+peering, 130 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:25.577264 mon.0 [INF] osdmap e567: 6 osds: 6 up, 6 in
2016-06-06 17:36:25.581312 mon.0 [INF] pgmap v378635: 256 pgs: 126 remapped+peering, 130 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:26.591915 mon.0 [INF] pgmap v378636: 256 pgs: 82 active+clean, 126 remapped+peering, 48 active+undersized+degraded; 7410 MB data, 22508 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:28.751459 mon.0 [INF] pgmap v378637: 256 pgs: 178 active+clean, 78 remapped+peering; 7410 MB data, 22510 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:29.758035 mon.0 [INF] pgmap v378638: 256 pgs: 256 active+clean; 7410 MB data, 22511 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:31.774843 mon.0 [INF] pgmap v378639: 256 pgs: 256 active+clean; 7410 MB data, 22513 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:33.783225 mon.0 [INF] pgmap v378640: 256 pgs: 256 active+clean; 7410 MB data, 22513 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:36:34.786234 mon.0 [INF] pgmap v378641: 256 pgs: 256 active+clean; 7410 MB data, 22514 MB used, 22288 GB / 22310 GB avail
2016-06-06 17:37:24.252649 mon.0 [INF] HEALTH_OK
The cluster returned to normal here as well.
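One way to see why the /16 value resolved the mismatch: with host bits masked off, 192.168.13.212/16 denotes the network 192.168.0.0/16, which contains both of bdc212's addresses, so whichever interface the OSD binds still falls inside the declared cluster network. A quick illustration with Python's ipaddress module:

```python
import ipaddress

# The value used in Test 3: a host address with a /16 mask.
# strict=False masks off the host bits, as Ceph effectively does.
net = ipaddress.ip_network("192.168.13.212/16", strict=False)
print(net)  # 192.168.0.0/16

# Both of bdc212's addresses fall inside this network,
# so the cluster-addr consistency check can no longer fail.
for ip in ["192.168.8.212", "192.168.13.212"]:
    print(ip, ipaddress.ip_address(ip) in net)  # both True
```

A narrower value such as 192.168.8.0/24 would pin the cluster traffic to one subnet instead of papering over both; /16 merely makes either choice acceptable.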
Although the cluster recovered, the theory behind it was still unclear, so I went back to the official documentation on the heartbeat mechanism and Ceph network configuration:
http://docs.ceph.org.cn/rados/configuration/mon-osd-interaction/#index-6
http://docs.ceph.org.cn/rados/configuration/network-config-ref/
Public network
To configure a public network, add the following option to the [global] section of the configuration file.
[global]
...
public network = {public-network/netmask}
Cluster network
If you declare a cluster network, OSDs will route heartbeat, object replication, and recovery traffic over it, which improves performance compared with a single network. To configure a cluster network, add the following option to the [global] section of the configuration file.
[global]
...
cluster network = {cluster-network/netmask}
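Putting the two options together, a dual-network layout like the one in this article might look like the fragment below; the subnets are examples from this cluster and should be adjusted to your environment:

```ini
[global]
    ; client <-> daemon traffic
    public network = 192.168.13.0/24
    ; OSD heartbeat, replication and recovery traffic
    cluster network = 192.168.8.0/24
```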
A small follow-up test.
Configure the following in /etc/ceph/ceph.conf:
cluster_network=192.16.40.0/24
public_network=172.16.40.0/24
Restart Ceph. With both the public and cluster networks configured, the OSDs advertise an address on the public network first, as the log shows:
2017-06-13 16:14:54.305773 mon.0 192.16.40.1:6789/0 23 : cluster [INF] osd.2 172.16.40.1:6800/2060619 boot
Remove public_network and configure only the cluster network; the OSDs then use the cluster network:
2017-06-13 16:23:43.979744 mon.0 192.16.40.1:6789/0 33 : cluster [INF] osd.2 192.16.40.1:6817/2064630 boot
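The advertised address can be read straight out of such monitor "boot" lines to confirm which network a daemon ended up on. The helper below is purely illustrative (the regex and function are made up for this example), applied to the two log lines above:

```python
import re

# Match '[INF] osd.N <ip>:<port>/<nonce> boot' in a monitor log line
# and return the OSD name and its advertised IP.
BOOT_RE = re.compile(r"\[INF\] (osd\.\d+) (\d+\.\d+\.\d+\.\d+):\d+/\d+ boot")

def osd_boot_addr(line):
    m = BOOT_RE.search(line)
    return m.groups() if m else None

both_nets = "2017-06-13 16:14:54.305773 mon.0 192.16.40.1:6789/0 23 : cluster [INF] osd.2 172.16.40.1:6800/2060619 boot"
cluster_only = "2017-06-13 16:23:43.979744 mon.0 192.16.40.1:6789/0 33 : cluster [INF] osd.2 192.16.40.1:6817/2064630 boot"

print(osd_boot_addr(both_nets))     # ('osd.2', '172.16.40.1') -> public net preferred
print(osd_boot_addr(cluster_only))  # ('osd.2', '192.16.40.1') -> cluster net used
```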