Building a data-collection exporter with prometheus_client and Flask

2022-04-09 18:29:15

I. Exporter

1. Install the libraries

pip install prometheus_client flask

2. demo.py

from prometheus_client.core import CollectorRegistry
from prometheus_client import Gauge, generate_latest

from flask import Response, Flask


# Fetch the source data; it can come from any API, database, file, etc.
def get_qcloud_data():
  data = {
    'cvm': 1,
    'cbs': 2,
    'clb': 3
  }
  return data


# Define the metrics
# Prometheus provides 4 metric types: Counter, Gauge, Summary and Histogram
# Counter: a cumulative counter with only an inc() method; takes a metric name and a description, and increments by 1 by default
# Gauge: can be set to any value (e.g. CPU, memory, disk usage); takes a metric name, a description, and labels
# Histogram: bucketed statistics; counts the observations that fall into each bucket
# Summary: quantile-style statistics; tracks the observed values

registry = CollectorRegistry(auto_describe=False)
product_cvm = Gauge('product_cvm', 'product_usage_cvm', ['product'], registry=registry)
product_cbs = Gauge('product_cbs', 'product_usage_cbs', ['product'], registry=registry)
product_clb = Gauge('product_clb', 'product_usage_clb', ['product'], registry=registry)



app = Flask(__name__)


@app.route("/metrics")
def main():
  data = get_qcloud_data()
  for key, value in data.items():
    if key == 'cvm':
      product_cvm.labels(product=key).set(value)
    elif key == 'cbs':
      product_cbs.labels(product=key).set(value)
    elif key == 'clb':
      product_clb.labels(product=key).set(value)
  return Response(generate_latest(registry), mimetype="text/plain")
      

@app.route('/')
def index():
  return "welcome to qcloud exporter"

if __name__ == "__main__":
  app.run(host="0.0.0.0", port=8000)
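The four metric types described in the comments above can be exercised directly with prometheus_client, without any HTTP server. A minimal sketch (the metric names here are made up for illustration):

```python
from prometheus_client import Counter, Gauge, Histogram, Summary, generate_latest
from prometheus_client.core import CollectorRegistry

registry = CollectorRegistry()

# Counter: cumulative, can only go up
requests_total = Counter('requests_total', 'Total requests handled', registry=registry)
requests_total.inc()     # +1 (the default)
requests_total.inc(5)    # +5

# Gauge: can be set to any value, up or down
cpu_percent = Gauge('cpu_percent', 'CPU usage percent', registry=registry)
cpu_percent.set(42.5)

# Histogram: counts each observation into configurable buckets
latency_seconds = Histogram('latency_seconds', 'Request latency', registry=registry)
latency_seconds.observe(0.25)

# Summary: tracks the count and sum of observations
payload_bytes = Summary('payload_bytes', 'Payload size', registry=registry)
payload_bytes.observe(512)

# generate_latest renders every metric in the registry in the Prometheus text format
print(generate_latest(registry).decode())
```

Running this prints the same text format the /metrics endpoint serves, which makes it a convenient way to experiment with each type before wiring it into Flask.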

3. Inspect the collected data

(screenshot of the /metrics output omitted)
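Besides opening the page in a browser, the endpoint can be sanity-checked from code with Flask's built-in test client, no port binding needed. A minimal sketch; note it also collapses the three per-product gauges from demo.py into one Gauge with a `product` label, a common simplification since the products are already distinguished by the label:

```python
from flask import Flask, Response
from prometheus_client import Gauge, generate_latest
from prometheus_client.core import CollectorRegistry

registry = CollectorRegistry()
# One labelled Gauge can replace product_cvm/product_cbs/product_clb
product_usage = Gauge('product_usage', 'Qcloud product usage', ['product'], registry=registry)

app = Flask(__name__)

@app.route('/metrics')
def metrics():
    for product, value in {'cvm': 1, 'cbs': 2, 'clb': 3}.items():
        product_usage.labels(product=product).set(value)
    return Response(generate_latest(registry), mimetype='text/plain')

# The test client invokes the route in-process
body = app.test_client().get('/metrics').get_data(as_text=True)
print(body)
```

Each product shows up as one series of the same metric, e.g. `product_usage{product="cvm"}`, which is also easier to aggregate in PromQL later.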

II. Prometheus

1. Install Prometheus. Copy prometheus.yml out of the image first, then mount it back in:

docker run -d -p 9090:9090 --name prometheus --restart=always \
  -v /etc/localtime:/etc/localtime \
  -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
  -v $(pwd)/data:/prometheus/ \
  prom/prometheus

2. Edit prometheus.yml

Add the address of the exporter we just wrote. Prometheus pulls metrics from its targets; this config sets the scrape interval to 15s.

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]
      - targets: ["192.168.1.200:8000"]
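Appending the exporter target to the built-in "prometheus" job works, but then both Prometheus itself and the exporter are scraped under the same `job="prometheus"` label. A dedicated job keeps the series distinguishable; a sketch, where the job name `qcloud-exporter` is made up:

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # Series from the exporter get the label job="qcloud-exporter"
  - job_name: "qcloud-exporter"
    static_configs:
      - targets: ["192.168.1.200:8000"]
```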

3. Check the results in Prometheus

(screenshots of the Prometheus UI omitted)

III. Grafana

(screenshots of the Grafana dashboard omitted)
