SQL vs NoSQL

Posted: 22 Jan 2022 10:46 PM PST

Hi Friends,

It’s been a long time since I last blogged, hence the thought to come up with some design topics. In this post, I will present my take on SQL vs NoSQL. We will be discussing the below agenda:

  • Basics
  • Sharding
  • Replication
  • CAP Theorem
  • Different types of databases
  • Parameters to choose the right database

Background:-

In 2006, Google released a white paper on Bigtable. This was one of the foundations for NoSQL databases. Then, in 2007, Amazon released a white paper on Dynamo. Since then, this space has taken wings, and different types of NoSQL databases started coming up.

Sharding:-

  • Partition/split data based on a certain logical column. So, let’s say one new DB is set up for taking care of the user’s data itself, and another for the order history. Now, one application is talking to two DBs. This can further be split into multiple services, which also introduces microservices, where basically one DB resides per service. That means the actual application will get the user data via the user route and the orders data via the orders route. This is also called a functional split, where the architect has to take a decision on the logical separation.
  • At the functional level, we can first split the services, say Catalog, Orders, OrderHistory, Seller, etc. This means all the CRUD operations for each domain will be taken care of by the individual services.
  • Consider the following DBs chosen at a high level:
    • Catalog: Mongo
    • Cart: Redis
    • User: Sql
    • Orders: Cassandra
  • Now, choosing the shard key logically is very important, because it decides which server from the pool will be used for storing a given value.
    • Say we decide the server number based on modulo, like S# = shard_key_value % n. Now, let’s say we have 5 servers; then n can be substituted with 5 here. Here, I have taken the user ID as the shard key, which gives the following mapping:
Server    User IDs (shard_key_value % 5)
0         5, 10, … (5 % 5 = 0)
1         1, 6, …
2         2, 7, …
3         3, 8, …
4         4, 9, …
  • This concept works fine till we have a constant number of servers. If the number of servers increases, which is most likely to happen once the load increases, then the above formula will not work, because most keys will suddenly map to different servers (see the sketch below). In that case, we need to apply Consistent Hashing to overcome this scenario. Consistent Hashing we will be discussing in a separate topic.
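
To make the modulo idea concrete, here is a minimal Python sketch (the function and variable names are mine, purely illustrative). It also shows why the formula breaks once the server count changes:

```python
# Modulo-based sharding: user_id % n picks one server out of n.
def pick_shard(user_id: int, num_servers: int) -> int:
    return user_id % num_servers

user_ids = list(range(1, 11))

# With 5 servers: user 5 -> shard 0, user 7 -> shard 2, and so on.
before = {uid: pick_shard(uid, 5) for uid in user_ids}

# Add a 6th server and most keys now map to a different shard. This
# is why plain modulo breaks down once the pool grows, and why
# Consistent Hashing is needed instead.
after = {uid: pick_shard(uid, 6) for uid in user_ids}

moved = [uid for uid in user_ids if before[uid] != after[uid]]
print(f"{len(moved)} of {len(user_ids)} keys moved: {moved}")
```

Running this, 6 of the 10 sample keys move when a single server is added; at real data volumes, that means a massive reshuffle.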

Replication:-

In simple terms, let’s say one instance is not enough to handle multiple reads. In that case, there will be multiple copies of the instance which can serve reads. Consider a scenario where a post is written by some celebrity on social media. The post is written once, but it is going to be read millions of times. In that case, the data will be replicated to multiple replicas, and the post can then be read from any of the available replicas. BTW, in these situations caching also comes in handy, but we will see that later. For the context of this post, let’s stick to the replication factor.

Replication also minimizes the risk of data loss: let’s say you have only one copy of the dataset, and that particular instance itself goes down; then the data is lost. With replication, you are at least maintaining multiple copies of the data. This also means you are avoiding a Single Point of Failure.

With that being said, let’s understand the Replication Factor, aka RF, now. Consider we have a cluster (a combination of servers). Let’s say we have 1000 servers in one cluster.

Note:- The number of servers in a cluster can be greater than the number of read replicas. It’s situational, but in most cases it holds true.

Let’s say we have 1000 servers in one cluster holding the user data. Now, with the above statement in place, not every node (server) will know my information. Therefore, if we have 1000 nodes and RF = 5, then my information will be replicated to 5 different nodes to avoid a single point of failure. This also means that each row will be replicated to RF number of nodes.

Now, consider we have a read request for user ID 1, like Read(User_Id_1). At this instant, 5 servers are holding this value. One of them will serve the request based on whatever routing the load balancer uses: round robin, least connections, weighted least connections, or something else. The idea is that we don’t know which server will respond to this request.
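
Round robin, the simplest of those routing strategies, can be sketched in a few lines (the server names here are hypothetical):

```python
from itertools import cycle

# The 5 replicas that hold User_Id_1 (illustrative names).
replicas = ["S1", "S2", "S3", "S4", "S5"]
next_replica = cycle(replicas)

# Each read goes to the next replica in turn; from the caller's point
# of view, which server answers a given request is unpredictable.
for _ in range(7):
    print(next(next_replica))   # S1, S2, S3, S4, S5, S1, S2
```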

The next question that comes up is: how does a request know that the data is lying on these 5 servers? Obviously, one way is via Consistent Hashing, which we will see in another post.

Let’s say the hash returns server 1 as the primary node to store the data; then the replication logic goes like primary node + the next nodes, i.e. S1+1 => S2, S1+2 => S3, etc., until RF nodes hold the row (see the sketch below). Obviously, the production algorithms for both consistent hashing and replication will be much more involved, considering the edge cases as well.
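
A minimal sketch of this placement rule, assuming a fixed server list and RF = 5 (both values are illustrative):

```python
servers = ["S1", "S2", "S3", "S4", "S5", "S6", "S7"]
RF = 5

def replica_set(primary_index: int) -> list[str]:
    # Walk forward from the primary, wrapping around the ring, until
    # RF nodes hold the row.
    return [servers[(primary_index + i) % len(servers)] for i in range(RF)]

print(replica_set(0))  # ['S1', 'S2', 'S3', 'S4', 'S5']
print(replica_set(5))  # ['S6', 'S7', 'S1', 'S2', 'S3'] -- the wrap-around edge case
```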

Now, let’s say we have a write request for the same user ID, say Write(User_Id_1) => “data”. Here, the write is writing some data for that user ID. At this instant, one of the 5 servers can take the request, update its own record, and then send the request to the other servers in parallel to update theirs. Let’s say that while the other servers S2, S3…S5 are getting updated, a read request comes in for the same user_id. Then it may happen that the value is returned from S3 => old_data, which does not match the updated data that S1 holds. This scenario is known as an inconsistent state. The system will eventually become consistent, as the async backend job is already running in parallel. But this kind of system won’t be acceptable in many places, say banking systems.

Therefore, in order to make the system consistent, we need to acquire a lock on that particular key, say user_id, till all the servers get updated with the latest value. This also means that if any read request comes for that user_id during the update, that read request will be rejected with a message like “Try after some time”. This is the case where we keep consistency over availability, aka consistency > availability. This kind of replication happens synchronously; a toy sketch follows.
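
Here is a toy sketch of that consistency-over-availability behaviour: writes take a per-key lock and update every replica synchronously, and a read that arrives mid-update is rejected rather than served stale data. Everything here (structures, names) is illustrative, not any real database’s API:

```python
import threading

replicas = [dict() for _ in range(5)]        # 5 copies, i.e. RF = 5
locks: dict[str, threading.Lock] = {}

def write(key: str, value: str) -> None:
    lock = locks.setdefault(key, threading.Lock())
    with lock:                               # held until every replica is updated
        for replica in replicas:
            replica[key] = value

def read(key: str) -> str:
    lock = locks.setdefault(key, threading.Lock())
    if not lock.acquire(blocking=False):
        # The key is being updated right now: reject instead of
        # returning old data (availability sacrificed for consistency).
        raise RuntimeError("Try after some time")
    try:
        return replicas[0][key]
    finally:
        lock.release()

write("user_id_1", "data")
print(read("user_id_1"))                     # 'data' -- never a stale value
```

A production system would use something like a readers-writer lock or versioned reads instead of rejecting every concurrent reader, but the trade-off is the same.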

CAP Theorem:-

The CAP theorem says that you can achieve either consistency or availability when your system is partition tolerant. The CAP theorem gets applied whenever any distributed system comes into the picture. We will also delve into this topic in coming posts, as it is itself a very big topic. Now, let’s look at the following parameters:

  • RF:- Replication factor; in simple terms, the number of servers which will hold my data. Say, 5 servers will hold the data.
  • R:- The number of servers I will read data from.
  • W:- The number of servers I will write data to.

Here, let’s say I have updated the data on one server, and while updating, a timestamp is also stamped there, just to give an indication of which data is the latest. Although I am updating the data on the other nodes in parallel, at the time of reading I read the data from all the servers and sort it based on the timestamp. This will always give me the latest updated data, so I pick that one (a small sketch follows). This state is strongly consistent. Now, if the node holding the latest data goes down before the others are updated, then it will be a data loss scenario.
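
A small sketch of that timestamped read, with hand-written replica contents standing in for a cluster where replication is still in flight:

```python
# Each replica stores (value, write_timestamp) per key (illustrative).
replicas = [
    {"user_id_1": ("old_data", 100)},
    {"user_id_1": ("new_data", 105)},   # the only node updated so far
    {"user_id_1": ("old_data", 100)},
]

def read_latest(key: str) -> str:
    # Gather the copy from every replica, then keep the newest one.
    copies = [replica[key] for replica in replicas if key in replica]
    value, _ts = max(copies, key=lambda pair: pair[1])
    return value

print(read_latest("user_id_1"))  # 'new_data', even though replication lags
```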

But we are achieving this strong consistency at the cost of latency. Latency will be higher, as we are going to all the nodes for reads, comparing the data by timestamp, and then returning. This kind of system is well suited where the read frequency is much lower than the write frequency, say a log aggregator system. For a log aggregator system, you are writing heavily but reads are few, e.g. Splunk, New Relic, etc.

Therefore, for this kind of system, the values will go like W = 1, R = 5, RF = 5. This means I am writing to one node but reading from all of them. This is what a write-heavy configuration looks like. For the read-heavy case, the numbers can look like W = 5, R = 1, RF = 5. This will also be a strongly consistent system.

Many a time, when W = 1, people tend to think there are chances of data loss if the node goes down. In an actual scenario, the async update to the other nodes happens within milliseconds. And if you would like to avoid even that faintest chance of data loss, say that 10 ms window, then you can have a higher W. Therefore,

Strong Consistency => R + W > RF, say 2 + 5 > 5.

R + W > RF is also known as the Quorum condition; a quick check follows.
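
A quick check of the quorum rule against the configurations above (the “balanced” and “too weak” rows are my own additions for contrast):

```python
RF = 5
configs = [
    ("write-heavy", 1, 5),   # W = 1, R = 5, from the post
    ("read-heavy",  5, 1),   # W = 5, R = 1, from the post
    ("balanced",    3, 3),   # illustrative extra
    ("too weak",    1, 1),   # illustrative extra
]

for name, W, R in configs:
    verdict = "strongly consistent" if R + W > RF else "only eventually consistent"
    print(f"{name}: R + W = {R + W} -> {verdict}")
```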

Note:- Generally, sharding is supported out of the box by many DBs. Let’s say you are using a SQL DB which does not provide sharding; then you can implement your own sharding at the application layer. But implementing sharding is not an easy task.

Now, these are the points which we need to take care of before choosing any database.

Criteria for choosing a Database:-

  • Data Manipulation
    • SQL
    • Key-Value (KV)
    • JSON
    • Graph
  • ACID Properties:
    • Most SQL databases are ACID compliant, and most NoSQL databases are not.
    • Many NoSql databases offer ACID on single shard.
    • NoSql databases offer eventual consistency.
    • Let’s say I want low latency, where I don’t want to flush data immediately to disk, and I am fine with some level of data loss (which will be negligible in reality); in that case, I can go with NoSQL, otherwise SQL.
  • Schema Support
    • Although many NoSQL systems provide schema-less support, which means anyone can dump anything into the DB without any check, at some point it makes sense to have schema support for a NoSQL database as well; MongoDB, for example, introduced schema validation.
  • CAP Support
    • Many databases, by default, have inherent support for eventual consistency.
    • Tunable consistency. In some databases, keeping the database consistent is an option left to developers: you decide how your consistency should look based on the R, W, and RF factors, as we have already seen above. But some databases don’t provide this option; they remain either consistent or available throughout. Hence, tunable consistency is also a very important criterion before choosing any database.
  • Performance
    • Latency
    • Throughput (Queries per sec aka QPS)
    • Concurrency
      • Benchmarks can be referred to from Google, Uber, Facebook, Netflix OSS, Hotstar, LinkedIn, and Airbnb articles
  • Database Types
    • Relational databases
      • Built on B+ trees
      • This means OLTP queries will be faster. Retrieving data via joins is insanely fast if the data is in 3NF and the indexes are defined properly. Therefore, given a choice between SQL and NoSQL where the entire data can fit in one instance, SQL still rules.
    • Columnar databases
      • Based on LSM trees (Log Structured Merge trees)
      • Cassandra, BigTable, RedShift, InfluxDB, etc.
      • In this kind of system, everything is written to a log-structured file: writing to an LSM tree is just like appending to a log file, which makes writes very fast.
      • OLAP queries, or analytical queries, will be faster in a columnar database.
      • So, let’s say I have to analyse one column across millions of rows. A row-based SQL store is not preferred here, as it will load the entire rows first and then apply the aggregation, whereas a columnar store will read only that one column (see the sketch after this list).
    • KV/Document
      • If my use case is something like: given some ID, return the value for that ID, then I will use a NoSQL store with a hash-based implementation.
      • MongoDB, Redis, Couchbase, and Memcached are a few examples
    • Graph Database
      • This especially comes into the picture when we have to compute relationships in nested networks, say social networking sites like LinkedIn, Tinder, FB, etc.
      • In the case of LinkedIn, the degree of connection is calculated and represented using a graph DB.
      • Similarly, in FB, the friends-of-friends kind of connection is solved via a graph DB.
      • Neo4j, OrientDB, and ArangoDB are a few examples
  • Data Organization
    • B+Tree
    • LSM trees etc
  • Replication
    • Avoiding single point of failure
    • XDR (Cross region replication)
  • Operations and Maintenance
    • Paid or free support, in case required
    • Community support
  • Transport Protocol
    • TCP
    • gRPC
    • HTTP
    • GraphQL
  • Cost
    • Disk-to-memory ratio. Say most of the data is going to reside in memory; then it will be expensive, as RAM is expensive.
    • Cost of DR centre
    • Power Consumption
    • Density of storage
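
To close, here is the row-versus-column scan contrast promised in the columnar section above, as a toy Python sketch (the data and layout are illustrative, not any real engine):

```python
# A million rows would make the point better; 1,000 keeps it quick.
rows = [{"id": i, "name": f"user{i}", "amount": i * 10} for i in range(1_000)]

# Row store: every full row is touched just to pull out one field.
total_row_store = sum(row["amount"] for row in rows)

# Column store: the 'amount' column is stored contiguously on its own,
# so the aggregation scans only the data it actually needs.
columns = {"amount": [i * 10 for i in range(1_000)]}
total_column_store = sum(columns["amount"])

assert total_row_store == total_column_store
print(total_row_store)
```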

With these details, I would like to wrap up this discussion here. We will meet again in some time. Till then, stay tuned and Happy Learning.

