NoSQL Sharding

2018-05-04 11:09:30

Translated content:

NoSQL Distilled, Chapter 4: Distribution Models

Section summary:

Happy weekend, everyone. Today we look at sharding, one of the distribution models.

4.2. Sharding

Often, a busy data store is busy because different people are accessing different parts of the dataset. In these circumstances we can support horizontal scalability by putting different parts of the data onto different servers—a technique that's called sharding (see Figure 4.1).

Much of the time a database is busy because many different people are accessing different parts of it. (Translator's note: in other words, every request lands on the database sitting on a single server, so that one machine stays swamped. What can we do about it?) In that situation we can put different parts of the data on different servers and spread the traffic out. This technique is called sharding (see Figure 4.1).

Figure 4.1. Sharding puts different data on separate nodes, each of which does its own reads and writes.

In the ideal case, we have different users all talking to different server nodes. Each user only has to talk to one server, so gets rapid responses from that server. The load is balanced out nicely between servers—for example, if we have ten servers, each one only has to handle 10% of the load.

In the ideal case, different server nodes serve different users. Each user talks to just one server and so gets fast responses from it, and the load is shared out evenly between the servers. For example, with ten servers, each one handles only 10% of the requests.
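To make the ten-servers, 10%-each picture concrete, here is a minimal sketch of hash-based shard routing in Python (my illustration, not from the book; `NUM_SHARDS` and `shard_for` are made-up names):

```python
import hashlib

NUM_SHARDS = 10  # e.g., ten servers, each ideally taking ~10% of the load

def shard_for(key: str) -> int:
    """Map a record key to a shard number.

    A stable hash (rather than Python's per-process, salted hash())
    keeps the key-to-shard mapping consistent across restarts.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

print(shard_for("customer:42"))  # always the same shard for this key
```

Because the mapping is stable, a given user's requests keep landing on the same single server, which is what produces the quick responses described above.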

Of course the ideal case is a pretty rare beast. In order to get close to it we have to ensure that data that’s accessed together is clumped together on the same node and that these clumps are arranged on the nodes to provide the best data access.

Reality, though, rarely matches that ideal. To get close to it, we must make sure that data which is accessed together ends up on the same node, and that these clumps are laid out across the nodes to give the best access performance.

The first part of this question is how to clump the data up so that one user mostly gets her data from a single server.

The first part of the problem is how to clump the data so that a user mostly gets everything she needs from a single server. (Translator's note: that is, one request goes to one server and finds all the data it wants right there, without having to call out to other machines. So we need to organize and distribute the data in a way that gets close to this ideal.)

This is where aggregate orientation comes in really handy. The whole point of aggregates is that we design them to combine data that’s commonly accessed together—so aggregates leap out as an obvious unit of distribution.

This is where aggregate orientation comes in very handy. The whole point of designing aggregates is to group data that is commonly accessed together (Translator's note: so no cross-machine hops are needed), which makes the aggregate a natural unit of distribution. (Translator's note: aggregates map neatly onto cluster nodes; you can think of an aggregate as "a set of data that is usually accessed together".)
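To show what "aggregate as the unit of distribution" means in practice, here is a hypothetical sketch that reuses the `shard_for` routine from the earlier example; the order structure is invented for illustration:

```python
# One aggregate: an order together with its line items. Because we route by
# the aggregate root's key, the whole aggregate lands on one node and can be
# read or written without touching any other machine.
order = {
    "customer_id": "cust-17",  # aggregate root key used for routing
    "order_id": "ord-9001",
    "line_items": [
        {"sku": "book-123", "qty": 1},
        {"sku": "mug-456", "qty": 2},
    ],
}

shard = shard_for(order["customer_id"])
# All reads and writes for this order go to `shard` -- no cross-node joins.
```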

When it comes to arranging the data on the nodes, there are several factors that can help improve performance.

If you know that most accesses of certain aggregates are based on a physical location, you can place the data close to where it’s being accessed. If you have orders for someone who lives in Boston, you can place that data in your eastern US data center.

When laying the data out across nodes, several factors can help performance. If you know that most accesses to certain aggregates come from one physical location, you can place that data close to where it is accessed and so speed those accesses up. For example, orders placed by someone living in Boston can go into your eastern US data center.
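A minimal sketch of location-aware placement; the region names and the `DATA_CENTERS` table are made up for the example:

```python
# Place an aggregate in the data center nearest the users who access it.
DATA_CENTERS = {
    "us-east": "dc-virginia",
    "us-west": "dc-oregon",
    "eu":      "dc-dublin",
}

def placement_for(customer_region: str) -> str:
    # Fall back to a default site for regions with no explicit rule.
    return DATA_CENTERS.get(customer_region, "dc-virginia")

print(placement_for("us-east"))  # a Boston customer's orders -> dc-virginia
```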

Another factor is trying to keep the load even. This means that you should try to arrange aggregates so they are evenly distributed across the nodes which all get equal amounts of the load. This may vary over time, for example if some data tends to be accessed on certain days of the week—so there may be domain-specific rules you’d like to use.

Another factor is trying to keep the load even. That means arranging the aggregates so they are spread evenly across the nodes and every node takes a roughly equal share of the load. The pattern can shift over time, for instance when certain data gets hit heavily only on particular days of the week, so you may want domain-specific placement rules. (Translator's note: how would you actually solve that? One possible approach is sketched below.)
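As one possible answer to the translator's question, here is a hedged sketch of a domain-specific rule. It assumes you already know, from your own access statistics, which weekday each aggregate's traffic peaks on, and it round-robins within each peak-day group so that no single node collects all of, say, the Monday-heavy data:

```python
from collections import defaultdict

def assign_shards(aggregates, num_shards):
    """aggregates: list of (key, peak_day) pairs; returns {key: shard}.

    Aggregates that peak on the same day are spread round-robin across
    the shards instead of piling up on one node.
    """
    counters = defaultdict(int)
    assignment = {}
    for key, peak_day in aggregates:
        assignment[key] = counters[peak_day] % num_shards
        counters[peak_day] += 1
    return assignment

demo = [("a", "mon"), ("b", "mon"), ("c", "mon"), ("d", "fri")]
print(assign_shards(demo, num_shards=2))  # {'a': 0, 'b': 1, 'c': 0, 'd': 0}
```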

In some cases, it's useful to put aggregates together if you think they may be read in sequence. The Bigtable paper [Chang etc.] described keeping its rows in lexicographic order and sorting web addresses based on reversed domain names (e.g., com.martinfowler). This way data for multiple pages could be accessed together to improve processing efficiency.

In some cases it is useful to put aggregates near each other when you expect them to be read in sequence. Google's Bigtable paper describes doing exactly this: rows are kept in lexicographic order, and web addresses are keyed by reversed domain name, so the pages of a single site sort next to each other and can be read together, improving processing efficiency.
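Here is a small sketch of that row-key trick; the `row_key` helper is my illustration, not code from the paper:

```python
def row_key(url_host: str, path: str = "/") -> str:
    """Reverse the host name so pages of one site sort together."""
    return ".".join(reversed(url_host.split("."))) + path

keys = sorted([
    row_key("martinfowler.com", "/articles"),
    row_key("maps.google.com"),
    row_key("martinfowler.com", "/bliki"),
    row_key("mail.google.com"),
])
print(keys)
# ['com.google.mail/', 'com.google.maps/',
#  'com.martinfowler/articles', 'com.martinfowler/bliki']
```

Because the keys are stored in lexicographic order, a sequential scan reads all of a site's pages together.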

Historically most people have done sharding as part of application logic. You might put all customers with surnames starting from A to D on one shard and E to G on another. This complicates the programming model, as application code needs to ensure that queries are distributed across the various shards. Furthermore, rebalancing the sharding means changing the application code and migrating the data. Many NoSQL databases offer auto-sharding, where the database takes on the responsibility of allocating data to shards and ensuring that data access goes to the right shard. This can make it much easier to use sharding in an application.

In the past, many developers did sharding in application logic. You might put all customers whose surnames start with A through D on one shard and E through G on another. This complicates the programming model (Editor's note: it also plays havoc with encapsulation and coupling), because the application code has to handle data-access routing on top of the business logic and make sure every query reaches the shards that hold its data. Worse, rebalancing the shards means changing application code and migrating the data (Editor's note: a chilling thought). The good news is that many NoSQL databases offer auto-sharding: the database itself takes responsibility for allocating data to shards and for sending each access to the right shard (Editor's note: that's more like it, with each layer doing its own job). This makes sharding much easier to use from the application, which just calls the usual access interface.
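For contrast, here is roughly what that hand-rolled, application-level scheme looks like; the surname boundaries extend the A-D / E-G example from the paragraph above:

```python
import bisect

# Upper-bound surname initial for each shard: A-D, E-G, H-M, N-S, T-Z.
SHARD_UPPER_BOUNDS = ["D", "G", "M", "S", "Z"]

def shard_for_surname(surname: str) -> int:
    return bisect.bisect_left(SHARD_UPPER_BOUNDS, surname[0].upper())

print(shard_for_surname("Davis"))  # 0: the A-D shard
print(shard_for_surname("Evans"))  # 1: the E-G shard
# Rebalancing means editing SHARD_UPPER_BOUNDS *and* migrating rows by hand --
# exactly the coupling that auto-sharding removes.
```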

Sharding is particularly valuable for performance because it can improve both read and write performance. Using replication, particularly with caching, can greatly improve read performance but does little for applications that have a lot of writes. Sharding provides a way to horizontally scale writes.

Sharding is particularly valuable for performance because it can improve both read and write performance (once the data is spread out, both naturally benefit). Replication, especially combined with caching, can greatly improve read performance, but it does little for write-heavy applications; sharding gives you a way to scale writes horizontally.

Sharding does little to improve resilience when used alone. Although the data is on different nodes, a node failure makes that shard's data unavailable just as surely as it does for a single-server solution. The resilience benefit it does provide is that only the users of the data on that shard will suffer; however, it's not good to have a database with part of its data missing. With a single server it's easier to pay the effort and cost to keep that server up and running; clusters usually try to use less reliable machines, and you're more likely to get a node failure. So in practice, sharding alone is likely to decrease resilience.

Used on its own, sharding does little for resilience. Although the data sits on different machines, a node failure makes that shard's data unavailable, just as it would in a single-server setup. (Editor's note: each shard holds the only copy of its data, so once it fails, that data simply cannot be reached.) The small consolation is that only the users of the failed shard's data suffer; the data on the other shards stays accessible. (Editor's note: that is at least some comfort.) Still, a database missing part of its data is hardly a good thing (read it with me in a Russian accent: not good, it's not good). With a single server it is easier to put in the effort and cost to keep that one machine up and running; clusters usually run on less reliable machines, so node failures become more likely. In practice, then, sharding alone is likely to decrease resilience.

Despite the fact that sharding is made much easier with aggregates, it’s still not a step to be taken lightly. Some databases are intended from the beginning to use sharding, in which case it’s wise to run them on a cluster from the very beginning of development, and certainly in production. Other databases use sharding as a deliberate step up from a single-server configuration, in which case it’s best to start single-server and only use sharding once your load projections clearly indicate that you are running out of headroom.

Even though aggregates make sharding much easier, it is still not a step to take lightly. Some databases are designed from the outset to be sharded; with those, it is wise to run on a cluster from the very beginning of development, and certainly in production. Other databases treat sharding as a deliberate step up from a single-server configuration; in that case it is best to stay single-server and adopt sharding only once your load projections clearly show that you are running out of headroom.

In any case the step from a single node to sharding is going to be tricky. We have heard tales of teams getting into trouble because they left sharding to very late, so when they turned it on in production their database became essentially unavailable because the sharding support consumed all the database resources for moving the data onto new shards. The lesson here is to use sharding well before you need to—when you have enough headroom to carry out the sharding.

In any case, moving from a single node to sharding is a tricky job. We have heard of teams that got into trouble because they left sharding too late: when they finally switched it on in production, the database became essentially unavailable, because moving data onto the new shards consumed all of the database's resources and nothing was left to serve ordinary requests. The lesson is to shard well before you need to, while you still have enough headroom to carry out the migration.

That's all for today. Next time we discuss Master-Slave Replication!
