On Getting Burned by AI

2023-08-31 14:30:27

This post recounts a couple of experiences of getting burned by AI.

Experience 1

I got burned the first time while writing my earlier article on how to customize the thread pool used by parallelStream.

At first I searched around and found that it had to be used roughly like this:

List<TodoTask> result = forkJoinPoolFactoryBean.getObject().submit(new Callable<List<TodoTask>>() {
            @Override
            public List<TodoTask> call() throws Exception {
                // the parallel stream runs on the custom ForkJoinPool this task was submitted to
                return IntStream.rangeClosed(1, 20).parallel().mapToObj(i -> {
                    log.info("thread:{}", Thread.currentThread().getName());
                    return new TodoTask(i, "name" + i);
                }).collect(Collectors.toList());
            }
        }).get();

This approach always felt a bit inelegant to me, and the code doesn't make the mechanism obvious: is calling parallel inside submit really all it takes?

Later I asked a certain GPT about it, and it gave the following answer. I thought: now this looks solid.

IntStream.rangeClosed(1, 20).parallel().executor(customThreadPool).mapToObj(i -> {
                    log.info("thread:{}", Thread.currentThread().getName());
                    return new TodoTask(i, "name" + i);
                }).collect(Collectors.toList());

But when I got around to opening the IDE, I found there is no executor method at all. I wondered whether Java 8 simply lacked it and a newer version had it, so I checked against Java 20, and it wasn't there either. I had been had. Still, since I already had a runnable version, this pit wasn't very deep. Experience 2 below is the real trap.
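For the record, here is why the first snippet works: a parallel stream's tasks run on the ForkJoinPool that the terminal operation happens to be invoked from, and only fall back to the common pool otherwise. This is an implementation detail of the stream framework rather than a documented API, so treat the following plain-Java sketch as illustrative; the pool size of 4 and the string payload are arbitrary choices of mine.

import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CustomPoolParallelStream {
    public static void main(String[] args) throws Exception {
        // a dedicated pool instead of the shared ForkJoinPool.commonPool()
        ForkJoinPool customPool = new ForkJoinPool(4);
        try {
            List<String> result = customPool.submit(() ->
                    // because this terminal operation runs inside a customPool task,
                    // the parallel stream's work is executed by customPool's workers
                    IntStream.rangeClosed(1, 20).parallel()
                            .mapToObj(i -> Thread.currentThread().getName() + " -> name" + i)
                            .collect(Collectors.toList())
            ).get();
            result.forEach(System.out::println);
        } finally {
            customPool.shutdown();
        }
    }
}

If it worked, the printed thread names should be the custom pool's workers (ForkJoinPool-1-worker-*) rather than the common pool's, which is an easy way to verify where the stream actually ran.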

Experience 2

I wanted to use filebeat to collect logs, so I asked the AI: "filebeat extract field from log seperate by |". The answer:

To extract fields from log files using Filebeat and separate them using the pipe symbol (|), you can use the grok processor in the Filebeat configuration file.

Here's an example of how you can configure Filebeat to collect log files and extract fields using the pipe symbol:

1. Open the Filebeat configuration file. The location of this file may vary depending on your system. Common locations include /etc/filebeat/filebeat.yml or /etc/filebeat/filebeat.yaml.

2. Find the filebeat.inputs section in the configuration file. This is where you define the log files to collect.

3. Add the following configuration for each log file you want to collect:

- type: log
  paths:
    - /path/to/your/logfile.log
  processors:
    - grok:
        patterns:
          - '%{WORD:field1}|%{WORD:field2}|%{WORD:field3}'
        field: "message"

In this example, replace /path/to/your/logfile.log with the actual path to your log file. The grok processor is used to extract fields from the log message using the pipe symbol (|). The patterns option specifies the grok pattern to use for field extraction, and the field option specifies the field to apply the processor to.

4. Save the configuration file and restart Filebeat for the changes to take effect.

With this configuration, Filebeat will collect the log file and extract fields from the log message using the pipe symbol (|). The extracted fields will be available in the output, and you can send them to Elasticsearch or any other output you have configured in Filebeat.

I hope this helps! Let me know if you have any further questions.

It suggested grok for the matching, and I thought: solid, I remember using grok back when I used logstash. But after I followed the template and wrote up my grok version, the first run failed with: filebeat error while initializing input: the processor action grok does not exist

So I searched around for quite a while and found the Elastic forum still discussing implementing a grok processor for filebeat. Then I checked the official docs and realized I'd been had again: filebeat still doesn't support grok, though its dissect processor can achieve something similar, as sketched below.
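For comparison, a dissect version of the same extraction looks roughly like this. It is a minimal sketch that mirrors the grok example above, with the same made-up log path and field names (field1..field3):

filebeat.inputs:
- type: log
  paths:
    - /path/to/your/logfile.log
  processors:
    - dissect:
        # the tokenizer describes the message layout; each %{key} captures one field
        tokenizer: "%{field1}|%{field2}|%{field3}"
        # parse the raw log line (message is also the default source field)
        field: "message"
        # extracted keys end up under dissect.*, e.g. dissect.field1
        target_prefix: "dissect"

Unlike grok, dissect does no regular-expression matching; it simply splits the line on the literal delimiters given in the tokenizer, which is also why it is cheaper.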

Summary

The lesson: don't over-rely on AI, and always verify the accuracy of its answers; using AI to slack off for a while clearly doesn't fly yet. My guess is that the model was corrupted by bad training data, or that it failed to fully understand the context.
