How to Use curl to Download Files From the Linux Command Line


   Fatmawati Achmad Zaenuri/Shutterstock


 The Linux curl command can do a whole lot more than download files. Find out what curl is capable of, and when you should use it instead of wget. 


curl vs. wget: What's the Difference?

 People often struggle to identify the relative strengths of the wget and curl commands. The commands do have some functional overlap. They can each retrieve files from remote locations, but that’s where the similarity ends. 


 wget is a fantastic tool for downloading content and files. It can download files, web pages, and directories. It contains intelligent routines to traverse links in web pages and recursively download content across an entire website. It is unsurpassed as a command-line download manager. 


 curl satisfies an altogether different need. Yes, it can retrieve files, but it cannot recursively navigate a website looking for content to retrieve. What curl actually does is let you interact with remote systems by making requests to those systems, and retrieving and displaying their responses to you. Those responses might well be web page content and files, but they can also contain data provided via a web service or API as a result of the “question” asked by the curl request. 


 And curl isn’t limited to websites. curl supports over 20 protocols, including HTTP, HTTPS, SCP, SFTP, and FTP. And arguably, due to its superior handling of Linux pipes, curl can be more easily integrated with other commands and scripts. 

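As a quick illustration of that pipe-friendliness (the URL and the grep pattern here are placeholders, not part of the original walkthrough), curl's output can be fed straight into another command; the -s (silent) option keeps the progress meter out of the way:

curl -s https://www.bbc.com | grep -o '<title>.*</title>'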

 The author of curl has a webpage that describes the differences he sees between curl and wget. 


Installing curl

 Out of the computers used to research this article, Fedora 31 and Manjaro 18.1.0 had curl already installed. curl had to be installed on Ubuntu 18.04 LTS. On Ubuntu, run this command to install it: 


 sudo apt-get install curl
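On Fedora or Manjaro systems that happen not to have curl, the native package managers install it in the same way (the package name is assumed to be curl in both repositories):

sudo dnf install curl

sudo pacman -S curl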

The curl Version

The --version option makes curl report its version. It also lists all the protocols that it supports.


 curl --version
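The version report runs to several lines; if all you want to check is the protocol list, one option is to filter it with grep (a minimal sketch, assuming GNU grep is available, as it is on virtually every Linux system):

curl --version | grep -i protocols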

Retrieving a Web Page

 If we point curl at a web page, it will retrieve it for us. 


 curl https://www.bbc.com

 But its default action is to dump it to the terminal window as source code. 


 Beware: If you don’t tell curl you want something stored as a file, it will always dump it to the terminal window. If the file it is retrieving is a binary file, the outcome can be unpredictable. The shell may try to interpret some of the byte values in the binary file as control characters or escape sequences. 


Saving Data to a File

 Let’s tell curl to redirect the output into a file: 


curl https://www.bbc.com > bbc.html

This time we don’t see the retrieved information; it is sent straight to the file for us. Because there is no terminal window output to display, curl outputs a set of progress information.


 It didn’t do this in the previous example because the progress information would have been scattered throughout the web page source code, so curl automatically suppressed it. 


 In this example, curl detects that the output is being redirected to a file and that it is safe to generate the progress information. 


 The information provided is: 


% Total: The total amount to be retrieved.
% Received: The percentage and actual values of the data retrieved so far.
% Xferd: The percent and actual sent, if data is being uploaded.
Average Speed Dload: The average download speed.
Average Speed Upload: The average upload speed.
Time Total: The estimated total duration of the transfer.
Time Spent: The elapsed time so far for this transfer.
Time Left: The estimated time left for the transfer to complete.
Current Speed: The current transfer speed for this transfer.
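If you would rather not see this progress information at all, for example when running curl from a script, the -s (silent) option suppresses it; a minimal sketch reusing the same example:

curl -s https://www.bbc.com > bbc.html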

 Because we redirected the output from curl to a file, we now have a file called “bbc.html.” 


 Double-clicking that file will open your default browser so that it displays the retrieved web page. 


 Note that the address in the browser address bar is a local file on this computer, not a remote website. 


We don’t have to redirect the output to create a file. We can create a file by using the -o (output) option, and telling curl to create the file. Here we’re using the -o option and providing the name of the file we wish to create, “bbc.html.”


 curl -o bbc.html https://www.bbc.com

Using a Progress Bar To Monitor Downloads

 To have the text-based download information replaced by a simple progress bar, use the -# (progress bar) option. 


curl -# -o bbc.html https://www.bbc.com

Restarting an Interrupted Download

It is easy to restart a download that has been terminated or interrupted. Let’s start a download of a sizeable file. We’ll use the latest Long Term Support build of Ubuntu 18.04. We’re using the --output option to specify the name of the file we wish to save it into: “ubuntu18043.iso.”


 curl --output ubuntu18043.iso http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso

 The download starts and works its way towards completion. 


If we forcibly interrupt the download with Ctrl+C, we’re returned to the command prompt, and the download is abandoned.

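(If you want to give yourself time to press Ctrl+C while experimenting, curl’s --limit-rate option can throttle the transfer; the value shown here is only illustrative and is not part of the original walkthrough.)

curl --limit-rate 1M --output ubuntu18043.iso http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso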

 To restart the download, use the -C (continue at) option. This causes curl to restart the download at a specified point or offset within the target file. If you use a hyphen - as the offset, curl will look at the already downloaded portion of the file and determine the correct offset to use for itself. 


 curl -C - --output ubuntu18043.iso http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso

 The download is restarted. curl reports the offset at which it is restarting. 


Retrieving HTTP headers

 With the -I (head) option, you can retrieve the HTTP headers only. This is the same as sending the HTTP HEAD command to a web server. 


 curl -I www.twitter.com

 This command retrieves information only; it does not download any web pages or files. 

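If the address you query answers with a redirect, as www.twitter.com typically does, you may only see the headers of the redirect response itself. Adding the -L (location) option makes curl follow the redirect and report the headers of the final destination; a minimal sketch:

curl -I -L www.twitter.com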

Downloading Multiple URLs

 Using xargs we can download multiple URLs at once. Perhaps we want to download a series of web pages that make up a single article or tutorial. 


Copy these URLs to an editor and save them to a file called “urls-to-download.txt.” We can use xargs to treat the content of each line of the text file as a parameter which it will feed to curl, in turn.


 https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu#0

https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu#1

https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu#2

https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu#3

https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu#4

https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu#5

 This is the command we need to use to have xargs pass these URLs to curl one at a time: 


 xargs -n 1 curl -O < urls-to-download.txt

 Note that this command uses the -O (remote file) output command, which uses an uppercase “O.” This option causes curl to save the retrieved  file with the same name that the file has on the remote server. 


 The -n 1 option tells xargs to treat each line of the text file as a single parameter. 


 When you run the command, you’ll see multiple downloads start and finish, one after the other. 

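If waiting for them one after another is too slow, GNU xargs can run several curl processes at once with its -P (max-procs) option; the value of 4 below is only an example:

xargs -n 1 -P 4 curl -O < urls-to-download.txt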

 Checking in the file browser shows the multiple files have been downloaded. Each one bears the name it had on the remote server. 

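As an aside not covered in the original article, curl itself also accepts several URLs in a single invocation; giving one -O per URL saves each file under its remote name (the URLs below are placeholders):

curl -O https://example.com/page1.html -O https://example.com/page2.html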

Downloading Files From an FTP Server

 Using curl with a File Transfer Protocol (FTP) server is easy, even if you have to authenticate with a username and password. To pass a username and password with curl use the -u (user) option, and type the username, a colon “:”, and the password. Don’t put a space before or after the colon. 


 This is a free-for-testing FTP server hosted by Rebex. The test FTP site has a pre-set username of “demo”, and the password is “password.” Don’t use this type of weak username and password on a production or “real” FTP server. 


 curl -u demo:password ftp://test.rebex.net

 curl figures out that we’re pointing it at an FTP server, and returns a list of the files that are present on the server. 


 The only file on this server is a “readme.txt” file, of 403 bytes in length. Let’s retrieve it. Use the same command as a moment ago, with the filename appended to it: 


 curl -u demo:password ftp://test.rebex.net/readme.txt

 The file is retrieved and curl displays its contents in the terminal window. 


 In almost all cases, it is going to be more convenient to have the retrieved file saved to disk for us, rather than displayed in the terminal window. Once more we can use the -O (remote file) output command to have the file saved to disk, with the same filename that it has on the remote server. 


 curl -O -u demo:password ftp://test.rebex.net/readme.txt

 The file is retrieved and saved to disk. We can use ls to check the file details. It has the same name as the file on the FTP server, and it is the same length, 403 bytes. 


 ls -hl readme.txt

Sending Parameters to Remote Servers

 Some remote servers will accept parameters in requests that are sent to them. The parameters might be used to format the returned data, for example, or they may be used to select the exact data that the user wishes to retrieve. It is often possible to interact with web application programming interfaces (APIs) using curl. 


As a simple example, the ipify website has an API that can be queried to ascertain your external IP address.


 curl https://api.ipify.org

By adding the format parameter to the command, with the value of “json,” we can again request our external IP address, but this time the returned data will be encoded in the JSON format.


 curl https://api.ipify.org?format=json

Here’s another example that makes use of a Google API. It returns a JSON object describing a book. The parameter you must provide is the International Standard Book Number (ISBN) of a book. You can find these on the back cover of most books, usually below a barcode. The parameter we’ll use here is “0131103628.”


 curl https://www.googleapis.com/books/v1/volumes?q=isbn:0131103628

The returned data is comprehensive.

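To make that JSON easier to read, one option (assuming python3 is installed, as it is on most modern distributions) is to pipe it through Python’s built-in pretty-printer; quoting the URL also keeps the shell from trying to expand the ? character:

curl -s "https://www.googleapis.com/books/v1/volumes?q=isbn:0131103628" | python3 -m json.tool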

Sometimes curl, Sometimes wget

 If I wanted to download content from a website and have the tree-structure of the website searched recursively for that content, I’d use wget. 


 If I wanted to interact with a remote server or API, and possibly download some files or web pages, I’d use curl. Especially if the protocol was one of the many not supported by wget. 


Translated from: https://www.howtogeek.com/447033/how-to-use-curl-to-download-files-from-the-linux-command-line/
