Database and Cloud Computing: Key Concepts and Practices

Database Management

What is the Purpose of a Primary Key in a Database Table?

Answer: To uniquely identify each record in a table, thereby enforcing entity integrity.

What Does ‘Normalization’ Refer to in Database Design?

Answer: The process of eliminating redundancy and maintaining data integrity.

In MongoDB, What is the Equivalent of a Table?

Answer: Collection.

Which Command Finds All Documents in a MongoDB Collection?

Answer: db.collection.find({})
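
For the same query from application code, here is a minimal sketch using the pymongo driver; the connection string, database name, and collection name are placeholders.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumed local instance
    orders = client["appdb"]["orders"]                 # hypothetical database/collection

    # An empty filter document matches everything, like db.collection.find({})
    for doc in orders.find({}):
        print(doc)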

What Does a ‘Shard’ Refer to in MongoDB?

Answer: A partition of data in the cluster.
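
As a rough illustration, sharding a collection can be requested through the admin commands. This pymongo sketch assumes a sharded cluster reached through a mongos router; the database, collection, and shard key are hypothetical.

    from pymongo import MongoClient

    # assumes a sharded cluster reached through a mongos router
    client = MongoClient("mongodb://mongos.example.com:27017")

    client.admin.command("enableSharding", "appdb")
    client.admin.command("shardCollection", "appdb.orders",
                         key={"customer_id": "hashed"})  # hashed key spreads writes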

Amazon Web Services (AWS)

What Does VPC Stand for in AWS?

Answer: Virtual Private Cloud.

Primary Difference Between Public and Private Subnets in VPC?

Answer: Public subnets have a route to the internet through an Internet Gateway; private subnets do not.
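
What makes a subnet "public" is its route table. As a hedged boto3 sketch (the VPC and subnet IDs are placeholders), attaching an Internet Gateway and routing 0.0.0.0/0 to it is what turns a subnet public:

    import boto3

    ec2 = boto3.client("ec2")  # region and credentials come from your environment

    vpc_id = "vpc-0123456789abcdef0"        # placeholder IDs
    subnet_id = "subnet-0123456789abcdef0"

    igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc_id)

    rtb = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw)
    ec2.associate_route_table(RouteTableId=rtb, SubnetId=subnet_id)  # now public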

What Does a Hash Function Primarily Do?

Answer: Map input data of arbitrary size to a fixed-size output value (the hash).
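
A quick demonstration with Python's standard hashlib: inputs of any length produce a digest of the same fixed size.

    import hashlib

    for data in (b"a", b"a much longer input string"):
        digest = hashlib.sha256(data).hexdigest()
        print(len(digest), digest)  # always 64 hex characters (256 bits)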

What is a Partitioning Scheme Combining Multiple Methods?

Answer: Composite Partitioning.
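
As an illustrative sketch (the field names and partition bounds are invented), a composite scheme might first range-partition records by year and then hash-sub-partition within each range by customer ID:

    import hashlib

    RANGES = {"p2023": (2023, 2024), "p2024": (2024, 2025)}  # invented bounds
    HASH_BUCKETS = 4

    def composite_partition(record: dict) -> str:
        # Step 1: range partitioning on the year
        name = next((n for n, (lo, hi) in RANGES.items()
                     if lo <= record["year"] < hi), "overflow")
        # Step 2: hash sub-partitioning on the customer ID
        digest = hashlib.md5(str(record["customer_id"]).encode()).hexdigest()
        return f"{name}_h{int(digest, 16) % HASH_BUCKETS}"

    print(composite_partition({"year": 2024, "customer_id": 42}))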

What Does ACID Stand for in RDBMS?

Answer: ACID refers to the four properties—Atomicity, Consistency, Isolation, and Durability—that ensure reliable database transactions.
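
Atomicity is the easiest of the four to see in code. A minimal sketch with Python's built-in sqlite3 (table and values invented): if any statement in the transaction fails, the whole transfer is rolled back.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
    conn.commit()

    try:
        with conn:  # commits on success, rolls back on any exception (atomicity)
            conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
            conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    except sqlite3.Error:
        pass  # the with-block has already undone both updates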

Can You Move a Resource from Private to Public Subnet?

Yes, a resource can be moved from a private subnet to a public subnet in an Amazon VPC, but it requires changing the resource's subnet association. Whether to move it depends on the resource's purpose and security needs. Public subnets are reachable from the internet through an Internet Gateway, making them suitable for resources like web servers that must serve requests directly. However, placing sensitive resources such as databases or backend services in a public subnet can introduce significant security risks by exposing them to potential cyberattacks. To mitigate these risks, additional security measures such as security groups and network access control lists (network ACLs) must be applied, and route tables must be configured correctly so traffic flows as intended after the move. The decision should align with the architecture's goals, balancing accessibility against security while adhering to organizational policies.
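
For an EC2 instance specifically, there is no single "change subnet" call; one common pattern, sketched here with boto3 and placeholder IDs, is to image the instance and relaunch it in the public subnet:

    import boto3

    ec2 = boto3.client("ec2")

    image = ec2.create_image(InstanceId="i-0123456789abcdef0",  # placeholder ID
                             Name="move-to-public-subnet")
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t3.micro",
        MinCount=1, MaxCount=1,
        SubnetId="subnet-0public00000000000",       # target public subnet (placeholder)
        SecurityGroupIds=["sg-0123456789abcdef0"],  # tighten ingress before exposing
    )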

How Does Auto-Scaling Work in Cloud Computing?

Auto-scaling dynamically adjusts the number of compute resources based on workload demands, ensuring efficient performance and cost management. It operates through two main types: vertical scaling, which enhances an instance’s capacity (e.g., adding memory or CPU), and horizontal scaling, which adjusts the number of instances by adding or removing them. Cloud platforms like AWS use Auto Scaling Groups (ASGs) to manage scaling policies triggered by metrics like CPU usage, memory consumption, or custom thresholds. Predictive scaling anticipates resource needs using historical data, while reactive scaling responds to real-time workload spikes. For instance, during a high-traffic event, ASGs can launch new instances to balance the load and terminate excess ones once demand decreases. Scaling policies can also include cooldown periods to prevent rapid instance changes. This automated process ensures optimal resource availability, reduces operational overhead, and minimizes costs during fluctuating workloads.
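
As a sketch of a reactive policy (the group name is hypothetical and an existing ASG is assumed), a target-tracking policy in boto3 keeps average CPU near a chosen value:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",          # assumed existing group
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # add instances above ~50% average CPU, remove below
        },
    )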

What are Some Drawbacks of Auto-Scaling?

While auto-scaling offers flexibility and efficiency, it has several limitations. One drawback is the potential delay in resource provisioning, which may cause performance issues during sudden traffic spikes. The cooldown period between scaling actions can also limit responsiveness, leading to underutilization or over-provisioning. Additionally, misconfigured scaling policies may result in unexpected costs or inefficient resource allocation. For instance, an overly aggressive scaling policy could launch unnecessary instances, increasing expenses. Auto-scaling requires careful monitoring and tuning of thresholds to avoid such inefficiencies. Furthermore, not all applications are designed to scale horizontally; certain monolithic architectures may face challenges adapting to an auto-scaling setup. Lastly, managing and testing auto-scaling configurations requires expertise, adding complexity to the system’s design and maintenance.

What is Point-in-Time Recovery (PITR)?

Point-in-Time Recovery (PITR) is a database feature that allows you to restore data to a specific moment within a defined retention period, helping to recover from accidental deletions, updates, or corruption. It is commonly available in managed database services such as AWS RDS. PITR works by combining continuous backups with transaction logs, enabling precise recovery to the desired moment. For example, if a data-modification error occurs at 3:15 PM, you can restore the database to its state at 3:14 PM to retrieve accurate data. PITR is highly useful for mitigating human errors or system malfunctions, ensuring business continuity. However, it requires sufficient storage for backups and efficient log management, and it may incur additional costs depending on the cloud provider or database configuration.
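
On AWS RDS, for example, a point-in-time restore creates a new instance at the chosen moment. A hedged boto3 sketch with placeholder identifiers:

    from datetime import datetime, timezone

    import boto3

    rds = boto3.client("rds")

    rds.restore_db_instance_to_point_in_time(
        SourceDBInstanceIdentifier="prod-db",            # placeholder names
        TargetDBInstanceIdentifier="prod-db-restored",
        RestoreTime=datetime(2024, 5, 1, 15, 14, tzinfo=timezone.utc),  # 3:14 PM
    )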

What is Data Classification?

Data classification is the process of organizing and categorizing data into predefined groups based on its sensitivity, value, or relevance. This process helps organizations manage data more effectively and apply appropriate security measures. Common classifications include public, internal, confidential, and highly sensitive data. For example, customer contact details might be classified as confidential, while financial records are categorized as highly sensitive. Data classification enhances compliance with regulations like GDPR or HIPAA by ensuring proper handling of sensitive information. It also supports effective data access control, enabling organizations to minimize risks related to unauthorized access or data breaches.
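
As a toy illustration only (the field names and levels are invented), classification often reduces to a lookup that defaults to the most restrictive handling level:

    # invented field-to-level mapping for illustration
    CLASSIFICATION = {
        "press_release": "public",
        "email": "confidential",
        "account_number": "highly_sensitive",
    }

    def classify(field_name: str) -> str:
        # unknown fields default to the most restrictive level
        return CLASSIFICATION.get(field_name, "highly_sensitive")

    print(classify("email"))   # confidential
    print(classify("ssn"))     # highly_sensitive (unknown field)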

What are the Two Types of Cloud Database Mirroring?

The two types of cloud database mirroring are synchronous and asynchronous. Synchronous mirroring ensures that data is simultaneously written to both the primary and mirrored databases. This guarantees consistency, as both databases are always in sync. However, it may introduce latency, especially in geographically distributed systems. Asynchronous mirroring, on the other hand, allows the primary database to process writes immediately, replicating changes to the mirrored database later. This method reduces latency but risks losing recent changes in case of failure. Synchronous mirroring is ideal for critical applications requiring real-time data consistency, while asynchronous mirroring is better suited for scenarios prioritizing performance over immediate consistency.
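
The trade-off can be sketched in a few lines of Python (a conceptual model, not a real replication protocol): synchronous writes touch both copies before returning, while asynchronous writes enqueue the change for a background replicator.

    import queue
    import threading

    primary: list = []
    mirror: list = []
    replication_log: queue.Queue = queue.Queue()

    def write_sync(record):
        # acknowledged only after both copies have the record: consistent, slower
        primary.append(record)
        mirror.append(record)

    def write_async(record):
        # acknowledged after the primary write; the mirror catches up later, so
        # records still in the queue are lost if the primary fails right now
        primary.append(record)
        replication_log.put(record)

    def replicator():
        while True:
            mirror.append(replication_log.get())

    threading.Thread(target=replicator, daemon=True).start()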

What is a Failover Cluster and its Components?

A Failover Cluster is a high-availability system designed to minimize downtime and maintain service continuity in case of hardware or software failures. It consists of multiple nodes (servers) that work together to host applications or services. If one node fails, another node automatically takes over its responsibilities, ensuring uninterrupted service. Key components of a failover cluster include the cluster nodes, shared storage for consistent data access, and networking infrastructure for inter-node communication. Cluster management software monitors node health and orchestrates failover processes. Failover Clusters are widely used in databases, web services, and enterprise applications to provide resilience and reduce operational disruptions.
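
A toy model of the failover decision (node names invented; a real cluster manager uses network heartbeats and quorum): when the active node stops reporting healthy, a surviving node is promoted.

    nodes = {"node-a": True, "node-b": True}  # name -> healthy?
    active = "node-a"

    def elect_standby(current: str) -> str:
        return next(n for n, healthy in nodes.items() if healthy and n != current)

    # simulate the active node failing its health check
    nodes["node-a"] = False
    if not nodes[active]:
        active = elect_standby(active)

    print(active)  # node-b has taken over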

What is N+1 Clustering?

N+1 clustering is a redundancy strategy used to ensure high availability and system reliability by including an additional node, referred to as the “+1”, within a cluster. This extra node acts as a standby or failover resource, ready to take over the workload of any node in the cluster that experiences a failure. For instance, in a cluster with N active nodes, the “+1” node remains idle or underutilized during normal operations but immediately activates to maintain continuity if an active node fails. This configuration is commonly employed in systems requiring high availability, such as cloud services, data centers, and enterprise applications.

The primary advantage of N+1 clustering is its ability to balance fault tolerance with resource efficiency, as it minimizes downtime without requiring a duplicate node for each active one (as in N+N redundancy). Additionally, it reduces operational disruptions and ensures that critical applications or services remain functional even during unexpected hardware or software failures. However, the “+1” node introduces additional costs for the standby infrastructure and may require periodic testing and maintenance to ensure readiness. Despite these considerations, N+1 clustering is widely regarded as a cost-effective method for achieving resilience and reliability in distributed systems.

Advantages of NoSQL Over Traditional RDBMS?

NoSQL databases offer several advantages over traditional RDBMS systems, particularly for handling unstructured, semi-structured, or large-scale data. First, they provide schema flexibility, allowing easy adaptation to changing data requirements without complex migrations. Second, NoSQL databases excel in horizontal scaling, distributing data across multiple nodes to handle high traffic and massive datasets efficiently. They are also optimized for high-speed read and write operations, making them suitable for real-time applications like social media or IoT. Additionally, NoSQL databases support diverse data models, including key-value, document, column-family, and graph structures, offering versatility for different use cases. Finally, they eliminate the overhead of complex joins, simplifying queries and improving performance.

How Does a NoSQL Database Handle Schema Changes?

NoSQL databases handle schema changes flexibly by allowing data to be stored without a predefined schema. This means new fields can be added to documents or records dynamically without affecting existing data. For example, in a document-based NoSQL database like MongoDB, documents in the same collection can have different structures, enabling easy adaptation to evolving application requirements. This flexibility reduces the need for costly and time-consuming migrations typical of RDBMS systems. Schema changes in NoSQL databases are straightforward, enabling developers to iterate rapidly and accommodate agile development practices. However, managing schema changes effectively still requires careful planning to maintain consistency and data integrity.
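
A minimal pymongo sketch (connection string, database, and field names are placeholders) shows two differently shaped documents coexisting in one collection:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumed local instance
    users = client["appdb"]["users"]                   # hypothetical collection

    users.insert_one({"name": "Ada", "email": "ada@example.com"})
    users.insert_one({"name": "Grace", "email": "grace@example.com",
                      "preferences": {"theme": "dark"}})  # new field, no migration

    # documents lacking the field are simply not matched
    for doc in users.find({"preferences.theme": "dark"}):
        print(doc["name"])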