Introduction to Distributed Systems
The what?
A distributed system is a group of machines that work together while appearing to the end user as a single machine. The machines share state, operate concurrently, and have no single point of failure.
For example, a traditional setup keeps all the data on one server, and we perform CRUD operations directly against that machine. In a distributed system, the database runs on multiple machines that can communicate with one another, and the same information can be read from any of them.
The why?
Though creating and managing distributed systems is a complex matter, there is a good reason why we need them: distributed systems make it easy to scale horizontally. This matters because beyond a certain point, vertical scaling becomes expensive and eventually hits a hard ceiling.
Along with scalability, we also get the benefits of low latency and fault tolerance. Because the system is a cluster of multiple computers, the application keeps running even if one node fails, and traffic is distributed across the cluster. The nodes can be located in data centers around the world, so traffic is served by the node closest to the user.
Scaling Strategies
Primary-Replica Replication:
Explanation
Read operations are carried out far more frequently than write operations in a typical web application. Under this technique we therefore add new database instances that synchronise with the primary database; these replicas serve read operations only. Whenever we add or edit data, we talk to the primary database, which then notifies the replicas asynchronously so that they store the change as well.
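As a rough illustration, here is a minimal in-memory sketch of the idea in Python. The Primary/Replica classes and the explicit sync() call are invented for illustration; real databases replicate in the background. It also shows the stale-read window that exists before a sync completes, which is exactly the consistency issue described in the pitfalls below.

```python
import random

class Replica:
    """A read-only copy of the data; it lags behind the primary until synced."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        # Called by the primary when it propagates a write.
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)


class Primary:
    """Accepts all writes and pushes them out to the replicas."""
    def __init__(self, replicas):
        self.data = {}
        self.replicas = replicas
        self.pending = []  # writes not yet propagated to the replicas

    def write(self, key, value):
        self.data[key] = value
        self.pending.append((key, value))

    def sync(self):
        # In a real system this happens asynchronously in the background.
        for key, value in self.pending:
            for replica in self.replicas:
                replica.apply(key, value)
        self.pending.clear()


replicas = [Replica(), Replica()]
primary = Primary(replicas)

primary.write("user:1", "Alice")
# A read that arrives before the sync sees stale (here: missing) data.
print(random.choice(replicas).read("user:1"))  # None
primary.sync()
print(random.choice(replicas).read("user:1"))  # Alice
```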
Pitfalls
There is a possibility that a read operation executes just before the sync operation completes, which means we lose consistency. Another scenario is that the sync process fails altogether.
Multi-Master Replication:
Explanation
This strategy allows for better availability and load distribution because multiple database instances, or masters, can handle both read and write operations. Because the masters can be spread across several geographic regions, it also lowers latency for worldwide applications.
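A toy sketch of the routing idea, assuming a made-up Master class and a region-based lookup. Real multi-master systems must additionally replicate every write to the other masters, which is omitted here.

```python
class Master:
    """A master node that accepts both reads and writes for its region."""
    def __init__(self, region):
        self.region = region
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)


class MultiMasterCluster:
    """Routes each client to the master in (or closest to) its region."""
    def __init__(self, regions):
        self.masters = {region: Master(region) for region in regions}

    def route(self, client_region):
        # Fall back to an arbitrary master if there is no local one.
        return self.masters.get(client_region,
                                next(iter(self.masters.values())))


cluster = MultiMasterCluster(["eu-west", "us-east", "ap-south"])

# A European client writes to the nearby master with low latency;
# the write still has to be propagated to the other masters afterwards.
cluster.route("eu-west").write("user:1", "Alice")
print(cluster.route("eu-west").read("user:1"))  # Alice
print(cluster.route("us-east").read("user:1"))  # None until replication catches up
```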
Pitfalls
Resolving conflicts and keeping the data consistent can be difficult. The system is complicated to implement and maintain, and conflict resolution and synchronisation introduce performance overhead.
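One common (if lossy) way to settle such conflicts is last-write-wins, sketched below with invented helper names; many production systems rely on more involved schemes such as vector clocks or CRDTs instead.

```python
import time

def last_write_wins(local, incoming):
    """Resolve a conflict by keeping the write with the newer timestamp.

    Each entry is a (value, timestamp) pair. Simple, but concurrent
    writes can silently overwrite each other -- one reason conflict
    handling in multi-master setups is hard.
    """
    return incoming if incoming[1] > local[1] else local


# Two masters accepted different writes for the same key at almost the same time.
write_on_master_a = ("Alice", time.time())
write_on_master_b = ("Alicia", time.time() + 0.001)

resolved = last_write_wins(write_on_master_a, write_on_master_b)
print(resolved[0])  # "Alicia" -- the later write survives, the other is lost
```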
Sharding
Explanation
Sharding is a method of separating a single logical dataset and storing it across several databases. By distributing the data among the cluster, the database system can store a larger dataset and handle additional queries. Sharding becomes necessary when a dataset grows too big to fit into a single database. A hot spot is a single shard that receives more requests than the others; these should be avoided. Re-sharding data after it has been split up can be quite costly and can cause significant downtime.
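A minimal sketch of hash-based shard routing in Python; the shard_for/put/get helpers and the choice of four shards are invented for illustration. Note that changing NUM_SHARDS would remap almost every key, which is one reason re-sharding is so costly.

```python
import hashlib

NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key):
    """Pick a shard from the partition key using a stable hash.

    hashlib gives the same result across processes, unlike Python's
    built-in hash(), which is randomised per run.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)].get(key)


put("user:1", "Alice")
put("user:2", "Bob")
print(get("user:1"))             # Alice -- only one shard is touched
print([len(s) for s in shards])  # how the keys spread across the shards
```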
Pitfalls
We have now made queries that use keys other than the partition key extremely inefficient, as they need to search through all the shards. SQL JOIN queries are even more problematic, and complex JOINs become virtually unusable.
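A small sketch of why such queries hurt: with a hypothetical user store sharded by user id, filtering on any other field turns into a scatter-gather over every shard.

```python
# Two toy shards, keyed by user id (the partition key).
shards = [
    {"user:1": {"name": "Alice", "city": "Berlin"}},
    {"user:2": {"name": "Bob", "city": "Pune"}},
]

def find_by_city(city):
    """A query on a non-partition key cannot be routed to a single shard,
    so every shard has to be scanned and the partial results merged."""
    matches = []
    for shard in shards:               # scatter: ask every shard
        for key, row in shard.items():
            if row["city"] == city:
                matches.append(row)
    return matches                     # gather: merge the partial results

print(find_by_city("Berlin"))  # touches all shards even for a single match
```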