Episode 46 — VPC Peering vs Private Link: choosing the right private connectivity model
In Episode Forty Six, titled “VPC Peering vs Private Link: choosing the right private connectivity model,” the goal is to compare two private connectivity approaches that the exam often puts side by side. Both options promise private connectivity, but they do so with very different assumptions about scope, routing, and exposure. When learners miss questions here, it is usually because they treat all private connections as interchangeable, or they focus on encryption and ignore reachability. The exam tests whether you understand what each model fundamentally connects, what it intentionally does not connect, and how that affects risk. If you can keep the distinction between “networks” and “services” clear in your mind, you can choose correctly even when the scenario includes multiple accounts, hybrid routing constraints, or strict segmentation requirements. This episode builds a consistent decision pattern you can reuse without guessing.
Before we continue, a quick note: this audio course is a companion to the Cloud Net X books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A Virtual Private Cloud, which is a logically isolated network environment in a public cloud, can be connected to another Virtual Private Cloud using peering. Peering connects networks, meaning it establishes private connectivity at the network layer so that resources in one network can route to resources in the other network. When peering is in place, traffic can flow broadly, and the main control becomes which routes exist and which security rules allow the traffic. This is typically a two way routing relationship, and while you can still restrict access with security groups and network rules, the underlying capability is that the two networks are now reachable from each other. That broad reach is the feature, because it allows many different services and systems to communicate without having to build individual service specific connections. For the exam, “peering” should immediately translate to “network connectivity,” which is a larger blast radius than a single service connection. The important nuance is that peering makes reachability possible, and your policies decide what is permitted within that reachable space.
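To make the network connectivity idea concrete, here is a minimal sketch, assuming an AWS style environment and the boto3 library; the VPC, route table, and CIDR values are placeholders, and other providers expose the same peering concepts under different names. Notice that creating the peering connection by itself moves no traffic, because each side still has to add routes toward the other network's address range.

```python
import boto3  # assumption: an AWS environment with the boto3 SDK available

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from the requester network to the accepter network.
# The VPC identifiers here are placeholders for illustration only.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111aaaa1111a",      # requester VPC (placeholder)
    PeerVpcId="vpc-0bbb2222bbbb2222b",  # accepter VPC, possibly in another account (placeholder)
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The accepter side must accept the request before the relationship is active.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Reachability only appears once routes point at the peering connection.
ec2.create_route(
    RouteTableId="rtb-0ccc3333cccc3333c",   # requester route table (placeholder)
    DestinationCidrBlock="10.20.0.0/16",    # accepter VPC address range (placeholder)
    VpcPeeringConnectionId=peering_id,
)
```

The accepter network needs the mirror image route back toward the requester's range, which is exactly the two way reachability described above.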
Private Link, by contrast, is designed to expose a specific service privately without enabling full network routing between the consumer and provider networks. The mental model is closer to “publish a service endpoint” than “connect two networks,” because the consuming network reaches a private endpoint that maps to a service in another network or account. This allows tight scoping, because the consumer gains access only to what is explicitly exposed, rather than gaining the ability to route to the provider’s broader address space. Private Link is often used to share internal services across accounts or to consume managed services privately, while keeping the provider network hidden and unreachable from the consumer. From a security perspective, this is a least exposure pattern, because it reduces lateral movement opportunity by not creating broad two way routing. On the exam, “Private Link” should translate to “service connectivity,” and that distinction is the core tested assumption. If the scenario emphasizes minimizing exposure while still enabling access to one service, Private Link is frequently the intended answer.
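As a contrast, here is a minimal consumer side sketch, again assuming an AWS style environment and boto3; the service name, subnet, and security group values are hypothetical placeholders. The consumer creates an interface endpoint that maps to one published service, and nothing in this call grants any routes into the provider's wider network.

```python
import boto3  # assumption: an AWS environment with the boto3 SDK available

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint in the consumer VPC that maps to one specific
# service published by a provider. All identifiers below are placeholders.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0ddd4444dddd4444d",  # consumer VPC (placeholder)
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0example000000000",  # provider's endpoint service (placeholder)
    SubnetIds=["subnet-0eee5555eeee5555e"],     # where the endpoint interfaces live
    SecurityGroupIds=["sg-0fff6666ffff6666f"],  # controls which consumer resources may use the endpoint
)

print(endpoint["VpcEndpoint"]["VpcEndpointId"])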
Choosing peering is usually appropriate when you need to share resources and support many services between networks, especially when the relationship is long lived and multiple systems must communicate in both directions. Shared resources might include internal application services, logging pipelines, authentication services, or management tooling that spans multiple environments. When many services need to talk and the traffic patterns are varied, building individual service endpoints can become operationally heavy, and peering can provide a simpler connectivity foundation. This does not mean peering is automatically simple, because routing and security policies must still be designed carefully, but the connectivity problem is solved at the network layer. In multi account environments, peering can also support organizational patterns where separate teams own separate networks but require broad integration. The exam often frames this as “connect these networks so multiple systems can communicate,” which is a strong cue for peering. The important point is that peering is best when the requirement is network level reachability, not just access to one or two specific services.
Choosing Private Link is usually appropriate when you want least exposure and controlled access to a specific service, especially when the service provider and consumer should remain isolated. This pattern is common when a central team offers a shared service, such as an internal application programming interface, and other accounts or networks consume it without needing any other visibility into the provider network. It is also common when consuming sensitive managed services privately, where the goal is to keep access scoped tightly and avoid broad routing that could be abused. Private Link supports a model where the consumer can reach the service as if it were local, but cannot route to the provider’s other subnets or resources. That reduces risk when the consumer environment is less trusted, or when compliance requires strict segmentation. On the exam, scenarios that emphasize “tight scope,” “only this service,” or “avoid network level connectivity” are usually pointing toward Private Link. The guiding idea is that Private Link is an exposure minimization tool as much as it is a connectivity tool.
Transitive limits are another exam favorite, and they often separate correct architecture reasoning from guesswork. Peering usually avoids multi hop routing, meaning it does not naturally support transitive connectivity through an intermediate network. If network A peers with network B, and network B peers with network C, network A typically does not automatically gain routes to network C through network B. This limitation is intentional, because it prevents peering from turning into an uncontrolled transit network and forces architects to be explicit about hub designs using other constructs. For exam questions, if a scenario implies that peering should provide transit connectivity across multiple networks, the correct response is often to recognize the limitation and choose a different approach, such as a dedicated transit hub architecture. Private Link is not about transitive routing at all, because it is a service access model rather than a general routing model. The key is to remember that peering is not a substitute for a transit gateway, and the exam tests whether you can spot when someone is trying to use it that way.
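A toy illustration of the non transitive behavior, with made up network names, looks like this; real route tables are more involved, but the logic is the same.

```python
# Hypothetical peering relationships: A peers with B, and B peers with C.
peerings = {("A", "B"), ("B", "C")}

def reachable_via_peering(src: str, dst: str) -> bool:
    """Peering routes cover only the directly peered network, with no multi hop transit."""
    return (src, dst) in peerings or (dst, src) in peerings

print(reachable_via_peering("A", "B"))  # True, direct peering
print(reachable_via_peering("B", "C"))  # True, direct peering
print(reachable_via_peering("A", "C"))  # False, no automatic transit through B
```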
Domain Name System, which is the system that maps names to addresses, matters in both models, but it becomes especially central when reaching private service endpoints. With peering, you may rely on existing naming and resolution across networks, but you still need to ensure that names resolve to reachable private addresses and that the resolver path supports cross network queries where required. With Private Link, Domain Name System and naming are often the primary user experience layer, because consumers connect to a service name that resolves to a private endpoint address inside their own network. If Domain Name System is not configured correctly, consumers may resolve the public service name and accidentally route to a public endpoint, or they may fail to resolve the private name entirely. Private naming patterns, conditional forwarding, and private zones are common mechanisms to ensure that service names resolve internally to private endpoints. The exam frequently hides the real issue inside a Domain Name System clue, such as “it connects publicly but not privately” or “it works from one network but not the other.” When you see Private Link, assume Domain Name System alignment is part of the expected design.
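One way to catch the "resolves publicly instead of privately" failure mode is a quick resolution check from inside the consumer network; this is a small sketch using only the Python standard library, and the service name shown is a hypothetical example.

```python
import ipaddress
import socket

# Hypothetical private service name that should resolve to a private endpoint
# address inside the consumer network.
service_name = "orders.internal.example.com"

try:
    resolved = socket.gethostbyname(service_name)
except socket.gaierror:
    print(f"{service_name}: the private name does not resolve at all")
else:
    if ipaddress.ip_address(resolved).is_private:
        print(f"{service_name} -> {resolved}: resolving to a private endpoint as expected")
    else:
        print(f"{service_name} -> {resolved}: resolving to a public address, check private zones or forwarding")
```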
A typical peering scenario is connecting an application network and a data network across accounts, where multiple application components must talk to databases, caches, and internal services, and where teams want full private reachability under controlled rules. In that model, peering allows the application subnets to route to the data subnets, and the security posture is enforced through route scoping and security policies that limit what ports and sources are allowed. This can be a clean approach when both networks are within the same trust boundary or when the organization can enforce consistent governance across accounts. It also supports bidirectional needs, such as data tier systems sending logs or metrics back to application monitoring services in the other network. The exam often describes this as “two environments in different accounts need private connectivity for many components,” which leans toward peering. The important caveat is that the reachability becomes broad, so segmentation must be explicit and monitored. If the scenario emphasizes collaboration across multiple services, peering is often the correct model.
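Because the peering itself makes the data network broadly reachable, the scoping lives in the security rules; a minimal sketch of that scoping, assuming AWS and boto3 with placeholder identifiers, might admit only the peered application range and only on the database port.

```python
import boto3  # assumption: an AWS environment with the boto3 SDK available

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow only the peered application VPC's address range, and only the
# database port, into the data tier security group. Values are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa7777aaaa7777a",  # database tier security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [
                {"CidrIp": "10.10.0.0/16", "Description": "peered application VPC"}
            ],
        }
    ],
)
```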
A typical Private Link scenario is consuming a managed database privately with tight scope, where the consumer needs only database access and should not gain any broader network visibility. In this model, the consumer network creates a private endpoint and connects to the database service over that endpoint, keeping the traffic private and limiting reachability to that specific service interface. The provider side can enforce who is allowed to connect and can expose only what is necessary, which supports least privilege at the connectivity layer. This is especially valuable when the consumer environment is less trusted, such as a development account consuming a central service, because it prevents the consumer from probing or connecting to other provider resources. It is also valuable when compliance requirements demand strict isolation while still enabling service consumption. The exam often frames this as “private access to a single service, avoid broad network connectivity,” which points toward Private Link. The distinguishing feature is that the consumer gains service access, not network adjacency.
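On the provider side of a scenario like this, access can be pinned to explicitly allowed consumers; here is a minimal sketch, assuming AWS and boto3, where the endpoint service identifier and the consumer account are hypothetical placeholders.

```python
import boto3  # assumption: an AWS environment with the boto3 SDK available

ec2 = boto3.client("ec2", region_name="us-east-1")

# Only the listed consumer principal may request a connection to the
# endpoint service that fronts the shared database. Values are placeholders.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId="vpce-svc-0example000000000",                   # provider's endpoint service (placeholder)
    AddAllowedPrincipals=["arn:aws:iam::111122223333:root"],  # hypothetical consumer account
)
```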
Internet Protocol address overlap is a practical constraint that can prevent peering, and the exam frequently uses it as a trap for learners who choose peering automatically. If two networks use overlapping address ranges, routing becomes ambiguous, and peering between those networks is typically not feasible without redesigning address space or introducing translation mechanisms. Overlap is common in enterprises that grew networks organically or reused private address ranges across environments. When overlap exists, peering can be blocked or can create unpredictable routing, which makes it an unsafe choice. Private Link often sidesteps this issue because it does not require full network routing between the address spaces, instead exposing a specific endpoint within the consumer network. For exam questions, if overlap is mentioned, it is usually a strong signal that peering is problematic or impossible as stated. Recognizing overlap as a hard constraint helps you avoid answers that look correct in principle but fail in practice.
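Checking for the overlap constraint is straightforward; here is a small sketch using the Python standard library, with illustrative address ranges.

```python
import ipaddress

# Illustrative address ranges for two networks someone wants to peer.
vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.0.128.0/17")  # sits inside vpc_a's range

if vpc_a.overlaps(vpc_b):
    print(f"{vpc_a} and {vpc_b} overlap: peering routes would be ambiguous")
else:
    print(f"{vpc_a} and {vpc_b} do not overlap: peering is at least feasible")
```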
Another pitfall is over peering, where teams create too many broad network connections and unintentionally create lateral movement opportunities. Every peering relationship expands the reachable surface area, and if security rules are misconfigured or if a workload is compromised, the attacker may move across networks that were never intended to be part of the same trust zone. Over peering also creates operational risk because troubleshooting becomes complex, route tables drift, and unintended access paths appear over time. This is the same “spaghetti network” effect seen in other contexts, but peering makes it easy to create because it feels like a simple connectivity fix. Private Link reduces this lateral movement risk by limiting what is exposed, but it can also be misused if teams expose too many services without governance. For the exam, when a scenario highlights strict segmentation or minimizing blast radius, broad peering is often the wrong instinct. The safe mindset is to treat peering as a high trust linkage and Private Link as a controlled service interface, then choose accordingly.
A useful memory anchor is “peering shares networks, link shares services,” because it forces the correct scope comparison in a single phrase. Peering is about network reachability, which implies broad potential access controlled by routes and security policies. Private Link is about service exposure, which implies narrow access to a defined interface without full routing. This anchor helps you avoid getting lost in provider specific names, because the exam may use slightly different terminology while testing the same concept. It also helps you map risk to choice, because network sharing generally increases blast radius while service sharing generally reduces it. The anchor is not a shortcut to avoid thinking, but it keeps the central distinction stable while you evaluate constraints like overlap, trust boundaries, and transitive limits. When you can explain the anchor and then apply it to a scenario, you are aligned with the exam’s logic.
To practice selection under constraints, imagine you are given three requirements and must choose a model that satisfies them without creating unnecessary exposure. If the constraints include broad two way communication between many systems, stable address space without overlap, and a shared trust boundary, peering becomes a natural fit. If the constraints include strict least exposure, a need to consume only one service, or a risk that consumer environments are not fully trusted, Private Link becomes the stronger fit. If overlap is present, peering may be blocked, pushing you toward service scoped patterns that avoid full routing. If the scenario implies multi hop transit through peerings, that should raise a flag because peering typically avoids multi hop routing, and a hub model would be needed instead. The exam rewards answers that match the connectivity model to the scope and constraints rather than choosing based on habit. When you can state which model fits and which risk it avoids, you show the reasoning the test is looking for.
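If it helps to see that walk through as a tiny decision function, here is a sketch; the inputs and the ordering are illustrative, not an official rubric.

```python
def choose_connectivity_model(
    needs_transit_across_networks: bool,
    cidr_overlap: bool,
    many_services_both_ways: bool,
    shared_trust_boundary: bool,
) -> str:
    """Illustrative mapping from scenario constraints to a connectivity model."""
    if needs_transit_across_networks:
        return "Neither alone: peering avoids multi hop transit, so use a hub design"
    if cidr_overlap:
        return "Private Link: overlapping address space blocks clean peering routes"
    if many_services_both_ways and shared_trust_boundary:
        return "Peering: broad network reachability under shared governance"
    return "Private Link: expose only the needed service and keep exposure low"

# Example: one service, less trusted consumer, no overlap, no transit need.
print(choose_connectivity_model(False, False, False, False))
```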
To close Episode Forty Six, titled “VPC Peering vs Private Link: choosing the right private connectivity model,” keep the differences crisp and tied to risk. Peering connects networks and enables broad two way routing, which supports shared resources and many services but increases reachable surface area and requires careful segmentation. Private Link exposes a specific service without full routing, which supports least exposure designs and tight scope consumption, often relying heavily on Domain Name System alignment to make private endpoints usable. Peering has transitive limits and can be blocked by Internet Protocol address overlap, while over peering can create lateral movement opportunities that undermine isolation goals. The memory anchor that peering shares networks and link shares services helps you choose quickly when the scenario is noisy. Your rehearsal assignment is a decision tree walk through where you ask whether the requirement is network connectivity or service access, then check for overlap, trust boundary, and transit assumptions, because that structured reasoning is exactly how the exam expects you to choose the right model.