Saturday, September 26, 2020

Cassandra (High Level details)



What is Cassandra?
  • Open source
  • Distributed Database
  • NoSQL Database
  • Highly scalable: many nodes can be added to a cluster
  • Highly available: nodes can be spread across data centers (DCs)
  • Fault tolerant and durable, with no single point of failure
  • Note that in Cassandra's distributed architecture there are multiple nodes, and these nodes can be replicated across multiple data centers to avoid a single point of failure.

All nodes have the same functionality; no single node is a master. There is no master-slave architecture.

If all nodes have the same functionality, how do the nodes know about the other nodes in the cluster?
Ans. Snitch.

What is a Snitch?

As per official documentation: https://cassandra.apache.org/doc/latest/operating/snitch.html?highlight=snitch

In Cassandra, a snitch has two functions:
  1. Teaches Cassandra enough about your network topology to route requests efficiently
  2. Allows Cassandra to spread replicas around your cluster to avoid correlated failures. It does this by grouping machines into “data centers” and “racks.” Cassandra will do its best not to have more than one replica on the same “rack” (which may not actually be a physical location).
There are various snitch classes. Read more about them in the above link.
SimpleSnitch is suitable for only a single DC.

With PropertyFileSnitch, you list the IP address of every node in the cluster and specify each node's rack and DC. The entries go in the cassandra-topology.properties file.
This is fine for a small number of nodes, but with, say, 1,000 nodes it becomes very tedious to keep creating and modifying entries.
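As a sketch, a cassandra-topology.properties file for PropertyFileSnitch maps each node's IP to its DC and rack (the IPs and DC/rack names below are made up for illustration):

```properties
# node IP = data center:rack
192.168.1.10=DC1:RAC1
192.168.1.11=DC1:RAC2
192.168.2.10=DC2:RAC1

# fallback for nodes not listed above
default=DC1:RAC1
```

Every node in the cluster needs a copy of this file, which is exactly why it gets tedious at scale.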

What is Gossip?
Gossip is how the nodes in a cluster communicate with each other.
Every second, each node sends information about itself, and about the other nodes it knows of, to up to three other nodes.
Thus, as mentioned before, there is no master-slave configuration. Eventually every node has information about all the other nodes, and they keep sharing that information every second.
Do note that this is internal communication between nodes in a cluster.
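As a rough illustration (this is not Cassandra's actual implementation), the idea behind gossip can be sketched in a few lines of Python: each round, every node pushes everything it knows to three random peers, and knowledge of the whole cluster converges quickly.

```python
import random

random.seed(42)  # for a reproducible run

def gossip_round(knowledge, fanout=3):
    """One gossip round: every node pushes its view to `fanout` random peers."""
    nodes = list(knowledge)
    for node in nodes:
        peers = random.sample([n for n in nodes if n != node], fanout)
        for peer in peers:
            knowledge[peer] |= knowledge[node]  # peer merges the sender's view

# 10 nodes; initially each node only knows about itself
knowledge = {n: {n} for n in range(10)}

rounds = 0
while any(len(view) < 10 for view in knowledge.values()):
    gossip_round(knowledge)
    rounds += 1

print(f"all nodes know the full cluster after {rounds} rounds")
```

The takeaway is the same as in the notes above: no coordinator is needed, yet every node ends up with a complete view of the cluster.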

Data is distributed across nodes (so rows in a table can be in different nodes).

But if the rows in a table are in different nodes and if a node goes down, would we lose data?
We have the option to replicate data by setting a replication factor. With a replication factor of 1, data is not replicated and a failed node can mean data loss. Increasing it to 2, 3, or more replicates each row across nodes, so data remains highly available even when nodes go down.
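In CQL, the replication factor is set per keyspace. A sketch (the keyspace and data-center names here are made up):

```sql
-- SimpleStrategy is suitable for a single DC; replication_factor 3 keeps three copies
CREATE KEYSPACE demo_ks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- NetworkTopologyStrategy lets you set a factor per DC
CREATE KEYSPACE demo_multi_dc
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 2};
```

NetworkTopologyStrategy is the one that relies on the snitch's DC/rack knowledge to spread replicas and avoid correlated failures.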








Monday, August 10, 2020

MapReduce example

 Q. Given a data set of user IDs, movie IDs, ratings, and timestamps, find the number of occurrences of each rating value.

MapReduce code:

Use the mrjob library and have the following script:


from mrjob.job import MRJob
from mrjob.step import MRStep


class MovieRatings(MRJob):

    def steps(self):
        return [
            MRStep(mapper=self.mapper_get_ratings,
                   reducer=self.reducer_count_ratings)
        ]



    def mapper_get_ratings(self, _, line):
        (userID, movieID, rating, timestamp) = line.split('\t')
        yield rating, 1


Here we split each tab-delimited line into a tuple and yield a 1 for every rating value.


    def reducer_count_ratings(self, key, values):
        yield key, sum(values)


The reducer sums the 1s for each key, where the keys are the rating values.



if __name__ == '__main__':
    MovieRatings.run()


The standard entry point that runs the job.


Run the script locally or on a Hadoop cluster to get the breakdown of counts per rating.
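To see what the job computes without a Hadoop install, the map, shuffle/sort, and reduce phases can be simulated in plain Python on a few sample lines (the rows below are made up, but they follow the same tab-separated userID/movieID/rating/timestamp layout the mapper expects):

```python
from collections import defaultdict

# made-up tab-separated rows: userID, movieID, rating, timestamp
lines = [
    "196\t242\t3\t881250949",
    "186\t302\t3\t891717742",
    "22\t377\t1\t878887116",
]

# map phase: emit a (rating, 1) pair per line
mapped = []
for line in lines:
    userID, movieID, rating, timestamp = line.split('\t')
    mapped.append((rating, 1))

# shuffle/sort phase: group the 1s by rating key
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# reduce phase: sum each key's values
counts = {key: sum(values) for key, values in grouped.items()}
print(counts)  # → {'3': 2, '1': 1}
```

In a real run, mrjob and Hadoop perform the shuffle/sort step between the mapper and reducer automatically; only the mapper and reducer are written by hand.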


Friday, August 7, 2020

Hadoop Overview and Terminology

 An overview of the main components of the Hadoop ecosystem:

1. HDFS (Hadoop Distributed File system)

  • Data Storage
  • Distribute the storage of Big Data across clusters of computers
  • Makes all clusters look like 1 file system
  • Maintains copies of data for backup, and to be used when a computer in a cluster fails.
2. YARN (Yet another resource negotiator)
  • Data Processing
  • Manages the resources on the computer cluster
  • Decides which nodes are free/available to run tasks, etc.
3. MapReduce
  • Process Data
  • Programming model that allows one to process data
  • Consists of mappers and reducers
  • Mappers transform data in parallel
    • Converts raw data into key value pairs
    • Same key can appear multiple times in a dataset
    • Shuffle and sort to sort and group the data by keys
  • Reducers aggregate the data
    • Process each key's values
4. PIG
  • High-level scripting language (Pig Latin) with a SQL-like feel
  • Programming API to retrieve and transform data
5. HIVE
  • Like a SQL Database
  • Can run SQL queries on the database (makes the distributed data system look like a SQL DB)
  • Hadoop is not a relational DB and HIVE makes it look like one
6. Apache Ambari
  • Gives view of your cluster
  • Resources being used by your systems
  • Execute HIVE Queries, import DB into HIVE
  • Execute PIG Queries
7. SPARK
  • Allows you to run queries on the data
  • Create spark queries using Python/Java/Scala
  • Quickly and efficiently process data in the cluster
8. HBASE
  • NoSQL DB
  • Columnar data store
  • Fast, designed for high volumes of reads and writes
9. Apache Storm
  • Process streaming data
10. OOZIE
  • Schedule jobs on your cluster
11. ZOOKEEPER
  • Coordinating resources
  • Tracks which nodes are functioning
  • Applications rely on zookeeper to maintain reliable and consistent performance across clusters
12. SQOOP
  • Data Ingestion
  • Connector between relational DB and HDFS
13. FLUME
  • Data Ingestion
  • Transport Web Logs to your cluster
14. KAFKA
  • Data Ingestion
  • Collect data from clusters or web servers and broadcast it to the Hadoop cluster.