2021-10-27 If Only Life Could Be Overwritten


If life supported overwrite, I'd rather just have multiple backups, plus Ctrl+Z.

Spark's writer has a method that lets you do this:

```scala
model.write.overwrite().save(".")
```

Writing it this way is a damn serious problem, especially that `overwrite()`. The code above, when run, rewrites the current directory of the file system and overwrites your code; not even data-recovery software can get it back. I genuinely don't understand why it gets that kind of permission.

And the code runs successfully, too. It wiped out the local test-framework project I had spent half a year building, leaving behind nothing but a data model that's good for absolutely nothing...
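To see why `save(".")` is so destructive, here is a minimal pure-Python sketch (my own illustration, not Spark's actual code) of what overwrite-then-save amounts to: qualify the path against the working directory, and if it exists, delete it recursively. The demo runs inside a throwaway temp directory so nothing real is harmed.

```python
import os
import shutil
import tempfile

def handle_overwrite(path: str) -> str:
    """Sketch of what FileSystemOverwrite.handleOverwrite effectively does:
    qualify the path against the working directory, then recursively
    delete it if it already exists."""
    qualified = os.path.abspath(path)  # "." resolves to the whole working dir
    if os.path.exists(qualified):
        shutil.rmtree(qualified)       # recursive delete, like fs.delete(path, true)
    return qualified

# Set up a fake "project" in a throwaway directory.
workdir = tempfile.mkdtemp()
os.makedirs(os.path.join(workdir, "src"))
open(os.path.join(workdir, "src", "main.py"), "w").close()

os.chdir(workdir)
handle_overwrite(".")            # the "output path" is the project itself
print(os.path.exists(workdir))   # → False: the entire working directory is gone
os.chdir(tempfile.gettempdir())  # leave the now-deleted directory
```

Everything under the working directory, source code included, is removed before the model is even written.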

Then it suddenly hit me: I've made this exact mistake twice.

The last time was on AWS EMR, with the same reckless move. I wanted to write files from AWS S3 back to the local file system, so I used overwrite together with something like:

```scala
save("local:///test/user/")
```

It wiped my test directory clean.
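The URI scheme is what makes that line bite: a `local://` path targets the file system of the node itself, not the object store. A small standard-library sketch shows how the scheme decides which file system a write would hit (the mapping below is illustrative only, assuming `local://` resolves to node-local storage as it appeared to on that EMR cluster; it is not Spark's real resolution logic):

```python
from urllib.parse import urlparse

def target_filesystem(path: str) -> str:
    """Illustrative scheme-to-filesystem mapping; not Spark's resolver."""
    scheme = urlparse(path).scheme
    if scheme in ("s3", "s3a", "s3n"):
        return "object store (remote, usually what you meant)"
    if scheme in ("local", "file", ""):
        return "local filesystem of the node (overwrite deletes real directories)"
    return f"other: {scheme}"

print(target_filesystem("s3://bucket/test/user/"))
print(target_filesystem("local:///test/user/"))  # this one hits the machine itself
```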

An even more dangerous move: with something like the following, I suspect it could probably take out everything most of the way up toward the root directory...

```scala
save("../../")
```
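A relative path like `../../` gets resolved against the working directory before anything is deleted, so two parent hops can land on a directory containing far more than your project. A quick check (with a hypothetical working directory) shows where it ends up:

```python
import os

cwd = "/home/alice/projects/model"  # hypothetical working directory
resolved = os.path.normpath(os.path.join(cwd, "../../"))
print(resolved)  # → /home/alice — two levels up, and everything under it
```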

So if life could be redone, would that mean no more mistakes? Is it really as simple as keeping multiple save slots in a game?

Most things in the world are like that.


When you have a spare moment, it's worth reading the source to see how this `save`/`overwrite` logic actually works:

  • https://github.com/apache/spark/blob/v3.2.0/mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala
  • https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/util.html#MLWriter.overwrite

On the Python side, `MLWriter.overwrite` simply flips the flag on the underlying JVM writer:

```python
def overwrite(self):
    """Overwrites if the output path already exists."""
    self._jwrite.overwrite()
    return self
```

For comparison, `DataFrameWriter.mode` controls the same choice for DataFrame writes:

```python
def mode(self, saveMode):
    """Specifies the behavior when data or table already exists.

    >>> df.write.mode('append').parquet(os.path.join(tempfile.mkdtemp(), 'data'))
    """
    # At the JVM side, the default value of mode is already set to "error".
    # So, if the given saveMode is None, we will not call JVM-side's mode method.
    if saveMode is not None:
        self._jwrite = self._jwrite.mode(saveMode)
    return self
```

The Spark Scala source looks roughly like this (excerpted):

```scala
/**
 * Saves the ML instances to the input path.
 */
@Since("1.6.0")
@throws[IOException]("If the input path already exists but overwrite is not enabled.")
def save(path: String): Unit = {
  new FileSystemOverwrite().handleOverwrite(path, shouldOverwrite, sparkSession)
  saveImpl(path)
}

/**
 * `save()` handles overwriting and then calls this method.  Subclasses should override this
 * method to implement the actual saving of the instance.
 */
@Since("1.6.0")
protected def saveImpl(path: String): Unit

/**
 * Overwrites if the output path already exists.
 */
@Since("1.6.0")
def overwrite(): this.type = {
  shouldOverwrite = true
  this
}

private[ml] class FileSystemOverwrite extends Logging {

  def handleOverwrite(path: String, shouldOverwrite: Boolean, session: SparkSession): Unit = {
    val hadoopConf = session.sessionState.newHadoopConf()
    val outputPath = new Path(path)
    val fs = outputPath.getFileSystem(hadoopConf)
    val qualifiedOutputPath = outputPath.makeQualified(fs.getUri, fs.getWorkingDirectory)
    if (fs.exists(qualifiedOutputPath)) {
      if (shouldOverwrite) {
        logInfo(s"Path $path already exists. It will be overwritten.")
        // TODO: Revert back to the original content if save is not successful.
        fs.delete(qualifiedOutputPath, true)
      } else {
        throw new IOException(s"Path $path already exists. To overwrite it, " +
          s"please use write.overwrite().save(path) for Scala and use " +
          s"write().overwrite().save(path) for Java and Python.")
      }
    }
  }
}
```
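As the excerpt shows, `handleOverwrite` does an unconditional recursive `fs.delete` on whatever the path qualifies to. One way to protect yourself is a guard of your own (a hypothetical helper, not part of Spark's API) that refuses any overwrite target outside a dedicated output root before `write.overwrite().save(...)` is ever called:

```python
import os

def safe_output_path(path: str, allowed_root: str = "/data/ml-output") -> str:
    """Hypothetical guard: only allow overwrite targets strictly inside
    a dedicated output root, never the working directory or a parent."""
    resolved = os.path.normpath(os.path.abspath(path))
    root = os.path.normpath(allowed_root)
    if resolved == root or not resolved.startswith(root + os.sep):
        raise ValueError(f"Refusing to overwrite {resolved!r}: outside {root!r}")
    return resolved

# Fine: a model directory under the sandbox.
print(safe_output_path("/data/ml-output/run42/model"))

# model.write.overwrite().save(safe_output_path(".")) would raise ValueError
# instead of deleting the working directory.
```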
