Episode 39 — Split Tunneling: security and performance tradeoffs in plain language

In Episode Thirty-Nine, titled “Split Tunneling: security and performance tradeoffs in plain language,” we frame split tunneling as the decision to send some traffic outside the secure tunnel rather than forcing everything through it. This choice shows up whenever remote access becomes a daily norm and performance complaints begin to compete with security expectations. Split tunneling is not a binary good-or-bad decision; it is a design tradeoff that shifts where controls live and who is responsible for enforcing them. The exam often presents it as a subtle option inside a larger remote access question, and the best answer depends on whether the environment can tolerate shifting some responsibility to endpoints. Understanding split tunneling requires thinking about traffic paths, inspection points, and user experience at the same time, rather than optimizing one at the expense of the others. The goal here is to make the tradeoffs clear enough that you can justify a choice confidently without hiding behind vague language.

Before we continue, a quick note: this audio course is a companion to the Cloud Net X books. The first book covers the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook containing 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

The main benefit of split tunneling is improved performance, because avoiding hairpin routing reduces latency and relieves bottlenecks at centralized gateways. When all traffic is forced through a central tunnel, even simple internet browsing or software updates must travel to a distant gateway and then back out to the internet, adding delay and consuming bandwidth unnecessarily. Split tunneling allows traffic that does not need corporate inspection or access to internal resources to go directly to the internet from the user’s location. This can dramatically improve responsiveness for cloud services, video streaming, and collaboration tools, especially when users are geographically dispersed. It also reduces load on virtual private network gateways and wide area network links, which can improve stability for the traffic that truly must traverse the secure path. In exam scenarios, when the prompt mentions congestion, poor performance, or overloaded gateways, split tunneling is often the mechanism that addresses the performance side of the problem.
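
To make the hairpin cost concrete, here is a minimal worked sketch in Python; the round-trip times are illustrative assumptions, not measurements from any real deployment.

    # Hypothetical round-trip times in milliseconds (illustrative only).
    user_to_gateway = 80     # remote worker to the central VPN gateway
    gateway_to_service = 90  # gateway back out to the cloud service
    user_to_service = 15     # direct path from the worker to the same service

    hairpin_rtt = user_to_gateway + gateway_to_service  # full tunnel path
    direct_rtt = user_to_service                        # split tunnel path

    print(f"Full tunneling adds roughly {hairpin_rtt - direct_rtt} ms per round trip")

With these assumed numbers the detour adds over 150 milliseconds to every round trip, which is exactly the kind of penalty users notice in interactive applications.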

The risk side of split tunneling is that unmanaged internet traffic bypasses centralized security controls, which changes the threat model in a meaningful way. When traffic leaves the device directly to the internet, it does not pass through corporate firewalls, intrusion detection systems, or centralized inspection tools that might otherwise block malicious destinations or detect suspicious behavior. This means that the endpoint itself becomes the primary enforcement point for many security decisions, and any weakness in endpoint protection is now more consequential. It also means that visibility shifts, because security teams may no longer see all user traffic in central logs, making detection and investigation more dependent on endpoint telemetry. The exam often tests whether you recognize this shift by offering split tunneling as a performance fix without mentioning compensating controls, which is usually incomplete. The correct reasoning acknowledges that bypassing centralized controls increases reliance on endpoint security and policy discipline. The key is that split tunneling trades centralized enforcement for distributed responsibility, and that trade must be intentional.

A common compromise pattern is to tunnel business applications while allowing local internet browsing to go direct, which balances performance gains with controlled access to sensitive resources. In this model, traffic destined for internal applications, private cloud resources, or regulated services is explicitly routed through the secure tunnel, ensuring it is inspected, logged, and governed by corporate policy. Traffic destined for general internet destinations, such as public websites and content delivery networks, bypasses the tunnel and uses the local connection, improving speed and reducing gateway load. This approach recognizes that not all traffic carries the same risk or governance requirement, and it aligns controls with sensitivity rather than with convenience. The success of this compromise depends on correctly identifying business destinations and keeping those definitions current as applications evolve. In exam reasoning, this pattern often appears as the “best answer” when the scenario demands both improved performance and continued protection of core services. The important lesson is that split tunneling does not have to be all or nothing; it can be scoped intentionally.
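
As a minimal sketch of that split-include logic, assuming hypothetical corporate prefixes: real VPN clients implement this in the routing table, but the decision reduces to a prefix match like the one below.

    import ipaddress

    # Hypothetical prefixes that must traverse the secure tunnel.
    TUNNELED_PREFIXES = [
        ipaddress.ip_network("10.0.0.0/8"),      # internal applications
        ipaddress.ip_network("203.0.113.0/24"),  # example range for a regulated service
    ]

    def path_for(destination: str) -> str:
        """Return 'tunnel' for corporate destinations and 'direct' for the rest."""
        addr = ipaddress.ip_address(destination)
        return "tunnel" if any(addr in net for net in TUNNELED_PREFIXES) else "direct"

    print(path_for("10.20.30.40"))    # tunnel: internal application
    print(path_for("93.184.216.34"))  # direct: general internet destination

The design choice that matters is that the tunneled set is an explicit, maintained list, because every prefix missing from it silently goes direct.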

To make split tunneling safe, policy must clearly define what must traverse the secure path, because ambiguity leads to accidental exposure or broken access. Policy decisions include which address ranges, application categories, or service identities are considered corporate and therefore must be routed through the tunnel. These policies should be explicit and auditable so that changes in application architecture or cloud provider endpoints do not silently move sensitive traffic outside the secure path. Policy also includes fallback behavior, such as what happens when the tunnel is unavailable and whether certain applications should fail closed rather than fail open. Without clear policy, split tunneling becomes a guessing game, and users may experience inconsistent behavior depending on resolver responses or transient routing conditions. In exam scenarios, answers that mention policy-driven routing rather than vague “allow split tunneling” language tend to be stronger because they acknowledge the need for precision. The key is that split tunneling is governed by policy, not by user choice.
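
The fallback decision can be made explicit too. Here is a rough sketch, with invented application categories, of a policy table that fails closed for sensitive traffic when the tunnel is down:

    # Hypothetical policy: category -> (route, behavior when the tunnel is down).
    POLICY = {
        "internal-apps":  {"route": "tunnel", "on_tunnel_down": "fail_closed"},
        "regulated-saas": {"route": "tunnel", "on_tunnel_down": "fail_closed"},
        "video-meetings": {"route": "direct", "on_tunnel_down": "allow"},
        "general-web":    {"route": "direct", "on_tunnel_down": "allow"},
    }

    def allowed(category: str, tunnel_up: bool) -> bool:
        rule = POLICY[category]
        if rule["route"] == "direct":
            return True   # direct traffic never depends on the tunnel
        if tunnel_up:
            return True   # tunneled traffic flows normally
        return rule["on_tunnel_down"] != "fail_closed"  # block sensitive traffic if down

    print(allowed("internal-apps", tunnel_up=False))  # False: fails closed
    print(allowed("general-web", tunnel_up=False))    # True: unaffected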

Domain Name System considerations are central because the resolver that answers a query often determines where traffic goes, and split tunneling changes which resolver is consulted. If a device uses a corporate resolver for internal names and a local resolver for public names, the split is predictable and controlled. If resolver configuration is inconsistent, internal names may be resolved by external resolvers, or public names may resolve differently than expected, leading to routing surprises. Domain Name System also affects security because query patterns can reveal internal structure or because malicious domains may be resolved without centralized filtering when split tunneling is active. The exam often hides this dependency by describing name resolution issues rather than explicitly mentioning split tunneling, so recognizing the resolver path is critical. A well-designed split tunnel ensures that corporate names are resolved only through corporate resolvers, keeping internal destinations on the secure path. The practical takeaway is that split tunneling is as much a name resolution design as it is a routing design.
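
A minimal sketch of that split-horizon behavior using the dnspython library, with hypothetical resolver addresses and an invented internal suffix; in practice the operating system or VPN agent enforces this rather than application code:

    import dns.resolver  # pip install dnspython

    CORPORATE_RESOLVER = "10.0.0.53"  # hypothetical internal resolver, reached via the tunnel
    LOCAL_RESOLVER = "192.168.1.1"    # hypothetical local resolver
    INTERNAL_SUFFIXES = (".corp.example",)

    def resolve(name: str):
        """Send internal names to the corporate resolver, everything else locally."""
        resolver = dns.resolver.Resolver(configure=False)
        if name.endswith(INTERNAL_SUFFIXES):
            resolver.nameservers = [CORPORATE_RESOLVER]
        else:
            resolver.nameservers = [LOCAL_RESOLVER]
        return [rr.address for rr in resolver.resolve(name, "A")]

The important property is that the suffix match, not the user's default resolver, decides where each query goes.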

Consider a scenario where remote workers complain that video meetings are choppy and unreliable when connected through the corporate tunnel, especially during peak hours. Video and real-time collaboration tools are sensitive to latency, jitter, and packet loss, and forcing their traffic through a distant gateway can degrade quality significantly. By allowing video traffic to use the local internet connection while keeping access to internal applications tunneled, the organization can improve user experience without exposing sensitive resources. This approach also reduces load on the gateway, which can improve stability for other tunneled traffic. The design must still ensure that the device is protected, because the video traffic is now leaving the device directly, but modern endpoint controls can often handle that risk. In exam reasoning, this scenario often points toward split tunneling as the performance fix, provided security controls at the endpoint are acknowledged. The key is that the performance problem is a path problem, and split tunneling changes the path intentionally.
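
One way to express that exception, sketched with placeholder values: exempt real-time media toward the provider's published ranges (the prefix and ports below are stand-ins, so check your provider's documentation) while everything else keeps its normal policy.

    import ipaddress

    # Placeholder prefix standing in for a meeting provider's published ranges.
    VIDEO_BYPASS = [ipaddress.ip_network("198.51.100.0/24")]

    def video_goes_direct(dest_ip: str, dest_port: int, proto: str) -> bool:
        """Let real-time media to the provider's ranges use the local internet path."""
        addr = ipaddress.ip_address(dest_ip)
        in_range = any(addr in net for net in VIDEO_BYPASS)
        is_media = proto == "udp" and 3478 <= dest_port <= 3481  # common media-relay ports
        return in_range and is_media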

One pitfall is corporate Domain Name System leaks that reveal internal names outside controlled paths, because split tunneling can cause internal queries to escape if resolver rules are not tight. If a device sends queries for internal services to a public resolver, those names may be logged or cached externally, exposing information about internal structure. It can also cause functional failures, because internal names may not resolve correctly outside the corporate resolver context, leading to intermittent access issues. This pitfall often appears after a split tunnel change when users report that some internal services fail unpredictably depending on location. In exam scenarios, when internal names appear in external logs or resolution fails only for some users, resolver leakage is a strong suspect. The correct answer typically involves ensuring that internal domains are always resolved through corporate resolvers and that resolver configuration aligns with split tunnel policy. The lesson is that name resolution is part of the security boundary, not just a convenience service.
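
One coarse check for this pitfall, sketched with dnspython and an invented internal name: an internal-only name should produce no answer from a public resolver, so a successful public lookup suggests the name has escaped the controlled path.

    import dns.resolver  # pip install dnspython

    INTERNAL_NAME = "payroll.corp.example"  # hypothetical internal-only name
    PUBLIC_RESOLVER = "8.8.8.8"

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [PUBLIC_RESOLVER]
    try:
        resolver.resolve(INTERNAL_NAME, "A")
        print("LEAK: internal name is resolvable on the public internet")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print("OK: public resolvers do not know this name")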

Another pitfall is that local malware can reach the internet while the device is still connected to the corporate environment, increasing risk if endpoint controls are weak. When split tunneling is enabled, malicious software on an endpoint may communicate directly with command and control servers without passing through centralized inspection. This increases reliance on endpoint detection and response, host firewalls, and secure configuration to prevent or detect such activity. If endpoint controls are outdated, misconfigured, or absent, split tunneling can create a blind spot where malicious traffic flows unnoticed. The exam often tests this by pairing split tunneling with a question about endpoint security posture, implying that one decision affects the other. The best answer usually pairs split tunneling with strong endpoint controls rather than presenting it as a standalone optimization. The key is that split tunneling shifts risk toward the endpoint, and the endpoint must be ready to carry that responsibility.

There are quick wins that make split tunneling safer, such as requiring strong endpoint controls when it is enabled, because compensating controls reduce the risk introduced by bypassing centralized inspection. Strong controls include up-to-date endpoint protection, host-based firewalls, device compliance checks, and rapid patching, all of which reduce the chance that a compromised device becomes a gateway to the internet. Device posture checks can ensure that only healthy devices are allowed to connect with split tunneling enabled, reducing exposure from unmanaged or outdated systems. Central policy can also restrict which users or roles are allowed to use split tunneling, reserving it for those whose workflows genuinely require it. In exam scenarios, answers that mention endpoint controls alongside split tunneling tend to be correct because they show awareness of the shifted trust model. The practical lesson is that performance gains should be paired with security maturity, not treated as free improvements.
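
A minimal sketch of that posture gate, with invented check names; a real deployment would read these signals from its endpoint management agent rather than a hard-coded dictionary.

    # Hypothetical posture signals reported by an endpoint agent.
    def split_tunnel_permitted(posture: dict) -> bool:
        """Allow split tunneling only when the device passes every health check."""
        required = ("edr_running", "host_firewall_on", "disk_encrypted", "patched_recently")
        return all(posture.get(check, False) for check in required)

    healthy = {"edr_running": True, "host_firewall_on": True,
               "disk_encrypted": True, "patched_recently": True}
    stale = dict(healthy, patched_recently=False)

    print(split_tunnel_permitted(healthy))  # True: split tunneling allowed
    print(split_tunnel_permitted(stale))    # False: fall back to full tunnel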

Monitoring also changes with split tunneling: tracking tunnel usage and anomalies becomes important because not all traffic will appear in central logs anymore. Monitoring should include visibility into which applications are using the tunnel, how much traffic remains inside versus outside, and whether unusual patterns suggest misconfiguration or abuse. Endpoint telemetry becomes more valuable, because it may be the only place where certain flows are visible once they bypass the tunnel. Correlating tunnel usage with performance complaints can also validate whether split tunneling is delivering the intended benefit or whether further tuning is needed. In exam reasoning, when a prompt mentions lack of visibility or difficulty investigating incidents after a change, improved monitoring is often part of the correct response. The key is that observability must follow the traffic, and split tunneling changes where traffic flows. Designing monitoring alongside split tunneling prevents blind spots from becoming long-term risks.
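
As a rough sketch of that tracking, assuming per-path byte counts summarized from flow records; the baseline and tolerance are invented numbers a team would tune to its own traffic.

    # Hypothetical flow records: (path, bytes) summarized per day.
    flows = [("tunnel", 42_000_000), ("direct", 310_000_000), ("tunnel", 9_000_000)]

    tunnel_bytes = sum(b for path, b in flows if path == "tunnel")
    total_bytes = sum(b for _, b in flows)
    tunnel_share = tunnel_bytes / total_bytes

    # Alert if the tunneled share drifts far from the expected baseline,
    # which may signal misrouted corporate traffic or policy drift.
    EXPECTED_SHARE, TOLERANCE = 0.15, 0.10  # illustrative baseline values
    if abs(tunnel_share - EXPECTED_SHARE) > TOLERANCE:
        print(f"Investigate: tunneled share is {tunnel_share:.1%}")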

A useful memory anchor is “speed up, reduce load, increase endpoint responsibility,” because it captures the essence of split tunneling in one phrase. “Speed up” reminds you that the primary motivation is improved performance by avoiding unnecessary detours. “Reduce load” reminds you that gateways and central links benefit when non-essential traffic bypasses them. “Increase endpoint responsibility” reminds you that security enforcement and visibility shift toward the device, requiring stronger endpoint controls and policy. This anchor helps you evaluate exam options quickly because it forces you to ask whether the environment described can handle increased endpoint responsibility. If the answer is no, split tunneling may be inappropriate despite performance benefits. When you can recite this anchor, you can articulate both sides of the tradeoff clearly.

To end the core, decide a split tunnel policy under stated constraints, because the exam often presents mixed requirements that force you to balance speed and security. If the constraint is poor user experience for cloud applications and the organization has mature endpoint security, split tunneling for non-sensitive traffic can be justified. If the constraint is strict compliance and centralized inspection requirements, full tunneling may be required even at the cost of performance, or split tunneling may be limited to a narrow set of trusted destinations. If the constraint includes limited gateway capacity and widespread remote work, a hybrid policy that tunnels business-critical applications and allows local internet for the rest may be the best fit. The correct answer will usually mention both the routing decision and the compensating controls that make it safe. When you can state the policy and its assumptions explicitly, you are applying the decision logic the exam is testing.

In the conclusion of Episode Thirty-Nine, titled “Split Tunneling: security and performance tradeoffs in plain language,” the core tradeoff is that split tunneling improves performance and reduces central load by sending some traffic outside the tunnel, but it increases reliance on endpoint security and careful policy. The benefits include avoiding hairpin routing, reducing gateway congestion, and improving user experience for latency-sensitive applications. The risks include bypassing centralized security controls, potential Domain Name System leakage, and increased exposure if endpoint protections are weak. Effective designs define clearly which traffic must traverse the secure path, align resolver behavior with routing intent, and require strong endpoint controls and monitoring. You avoid pitfalls like leaking internal names and allowing local malware unchecked internet access by pairing split tunneling with identity, posture, and observability measures. Assign yourself one policy reasoning rehearsal by taking a remote access scenario and stating which traffic you would tunnel, which you would allow to go direct, what endpoint controls you require, and what you would monitor to confirm the policy is working, because that reasoning pattern is exactly what the exam is looking for.
