Episode 27 — Network Zones: trusted, untrusted, and screened subnet decisions

In Episode Twenty-Seven, titled “Network Zones: trusted, untrusted, and screened subnet decisions,” we frame zones as controlled trust levels that guide every control, because zoning is how you turn abstract security goals into enforceable network reality. When you label parts of a network as trusted, untrusted, or screened, you are not making a moral statement, you are declaring what assumptions are allowed and what assumptions are forbidden. The exam uses zone thinking as a foundation for many scenario questions because zones determine where services should live, where inspection should occur, and what kinds of controls are necessary at each boundary. Without zones, everything becomes a flat network where policy is either too permissive or too complex, and both outcomes increase risk. With zones, you can reason quickly about what should be reachable, what must be protected, and where monitoring should be strongest. The goal is to make zone decisions feel like a repeatable classification process that naturally leads to the “best answer” in design and troubleshooting prompts.

Before we continue, a quick note: this audio course is a companion to the Cloud Net X books. The first book covers the exam in depth and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook with 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A trusted zone is for internal systems where you enforce strict identity and monitoring, and where you expect stronger governance, tighter change control, and clearer ownership. Trusted does not mean safe by default, it means you are willing to treat the zone as internal because you can apply identity gates, segmentation, and visibility measures consistently. Systems in a trusted zone typically include core business services, internal application tiers, directories, logging pipelines, and data stores that must be protected from broad exposure. The trust comes from controls, not from geography, which is why modern designs emphasize identity and policy enforcement even inside internal networks. In exam scenarios, the trusted zone is where the most valuable assets often reside, and the correct answers usually involve limiting access into it, logging meaningful actions, and reducing blast radius within it. A trusted zone should also be where monitoring is strong because internal compromises are common, and visibility is what turns incidents into manageable events rather than mysteries. When you hear “internal systems” and “sensitive data,” trusted zone thinking is usually part of the solution.

An untrusted zone is where internet-facing exposure exists and where minimal assumed safety is the right posture, because you do not control the endpoints, the paths, or the intentions of external traffic. The public internet, partner networks outside your governance, unmanaged devices, and open wireless segments are typical examples of untrusted space. In an untrusted zone, you assume traffic can be malicious, identities can be spoofed, and requests can be automated at scale, so controls must be designed accordingly. The goal in untrusted zones is to limit what can be reached, to authenticate aggressively, to rate-limit and inspect where appropriate, and to keep failure from cascading inward. In exam scenarios, untrusted zone cues include phrases like public access, internet-facing, external clients, and exposed endpoints, and those cues should immediately increase your skepticism of designs that place sensitive services directly in that space. Untrusted zones are not where you store secrets or host management planes, and the “best answer” often reinforces that separation. Treating untrusted as truly untrusted is how you avoid building castles on sand.

A screened subnet is the buffer zone that hosts services needing controlled access, sitting between untrusted exposure and trusted internal assets, and it is often the most practical way to publish services safely. The screened subnet is designed for systems that must accept inbound connections from untrusted sources, such as web tiers, application gateways, or reverse proxies, but that should not have unrestricted access to trusted internal systems. This zone allows you to concentrate inspection, enforce strict inbound rules, and keep the blast radius limited if a public-facing service is compromised. The screened subnet also supports clear routing and policy boundaries, because traffic entering the environment can be inspected and then forwarded inward only through defined paths. In exam scenarios, the screened subnet is often implied by the idea of a demilitarized zone, even if that phrase is not used, and correct answers often place public endpoints here rather than deep inside the internal network. The key is that screened subnet services are exposed by necessity, so they must be treated as higher-risk and surrounded by controls and logging. A well-designed screened subnet is a deliberate compromise that enables business access while protecting internal assets.

A strong design rule is to place public endpoints in screened areas and never deep inside networks, because deep placement makes internal assets one misconfiguration away from exposure. Public endpoints need to be reachable from untrusted sources, and that reachability should terminate in a zone that is built for hostile traffic, not in the same zone where your databases and directories live. When you publish a service, you are creating an entry point, and entry points should be surrounded by inspection, rate limiting, and strict policy boundaries so that compromise does not become a straight line to sensitive systems. Placing public endpoints deep inside often forces you to punch holes through multiple internal boundaries and can create scattered exceptions that are hard to audit and easy to forget. In exam questions, if the scenario describes a public web application, the best answer usually places the web tier in a screened subnet with controlled access to internal services and data. This is not a matter of tradition, it is a matter of keeping the trust gradient intact. When you respect the gradient, security and troubleshooting become simpler because boundaries are clear.

Control layers at boundaries include firewalls, web application firewalls, gateways, and logging, and the exam expects you to understand that boundaries are where control density should be highest. Firewalls provide basic filtering and segmentation, enforcing which sources and destinations can communicate and on which ports. Web application firewalls focus on application-layer patterns, inspecting requests for malicious input and abnormal behavior, which is especially relevant for public web tiers. Gateways can include reverse proxies, application gateways, or identity-aware proxies that enforce authentication and can centralize policy and observability. Logging around boundaries is essential because boundary events are high signal for both security and troubleshooting, and without logs you often learn about problems only after users complain. These controls are layered because no single layer covers everything, and the correct design is usually defense in depth rather than a single magic device. In exam scenarios, answers that place control layers at boundaries often align with best practice because they reflect the reality that boundaries are where trust changes. The goal is to make boundary crossings observable and enforceable, not implicit.
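The layered-controls idea can be sketched as a small pipeline where a request must pass every layer before crossing a boundary. This is purely illustrative; the function names, checks, and request fields are hypothetical and not tied to any real firewall or WAF product:

```python
# Illustrative sketch: a boundary modeled as an ordered stack of control layers.
# All names, checks, and request fields below are hypothetical.

def firewall(req):
    # Basic filtering: only the published port is reachable.
    return req.get("port") in {443}

def waf(req):
    # Application-layer inspection: reject obviously malicious input.
    return "<script>" not in req.get("body", "")

def gateway(req):
    # Identity-aware proxy: require an authenticated caller.
    return req.get("authenticated", False)

def crosses_boundary(req, layers=(firewall, waf, gateway)):
    """Defense in depth: every layer must allow the request, and each
    decision is logged, because boundary events are high signal."""
    for layer in layers:
        allowed = layer(req)
        print(f"{layer.__name__}: {'allow' if allowed else 'deny'}")
        if not allowed:
            return False
    return True
```

The point of the sketch is that no single layer decides alone, and every crossing produces a log line, which mirrors the episode's advice that boundary crossings should be observable and enforceable rather than implicit.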

Inbound and outbound rules should differ by zone purpose and risk, because the direction of traffic and the trust level change what is reasonable to allow. Inbound rules into a screened subnet should be tightly scoped to the published services, limiting ports, sources when possible, and exposure to only what the business requires. Outbound rules from a screened subnet to a trusted zone should be even more constrained, allowing only the specific calls needed to fulfill requests, such as the web tier reaching the application programming interface tier or the application tier reaching a database on specific ports. In a trusted zone, inbound rules should generally be restrictive, because internal does not mean open, and lateral movement is a common attacker strategy. Outbound rules in trusted zones can also be scoped to reduce data exfiltration paths and to prevent compromised systems from reaching unnecessary destinations. The exam often tests this by offering answers that allow broad east-west traffic “because it is internal,” which is usually a trap. A mature zone model uses different rules per direction to reflect risk, necessity, and the principle of least privilege.
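The direction-aware rules described above can be sketched as a tiny default-deny policy table. The zone names and port numbers here are illustrative assumptions, not a real firewall syntax:

```python
# Sketch of direction-aware zone rules (zone names and ports are hypothetical).
# Each rule permits one specific (source zone, destination zone, port) tuple;
# anything not listed is denied, which is least privilege in miniature.

ALLOW_RULES = {
    ("untrusted", "screened", 443),   # inbound: external clients reach the web tier only
    ("screened",  "trusted",  8443),  # web tier calls the application tier on one port
    ("trusted",   "trusted",  5432),  # app tier reaches the database on its port
}

def is_allowed(src_zone, dst_zone, port):
    """Default-deny: traffic passes only if an explicit rule exists."""
    return (src_zone, dst_zone, port) in ALLOW_RULES
```

Notice that the untrusted-to-trusted direction has no rule at all, so a request from the internet straight to the database port is denied by default rather than by an explicit block, which is exactly the posture the episode recommends.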

A practical scenario is hosting the web tier in a screened subnet while keeping the database in a trusted zone, because it illustrates how zones create a safe service chain. The web tier must accept traffic from untrusted clients, so it belongs in the screened subnet where it can be inspected, rate limited, and isolated from deeper systems. The database holds high-value data and should not be exposed directly to untrusted sources, so it belongs in the trusted zone where access is limited to the application components that require it. The connection between the screened subnet and the trusted zone is tightly controlled, often through an application tier or through specific allow rules that permit only necessary database ports from known sources. Logging at the boundary captures incoming requests and internal calls, supporting both security monitoring and troubleshooting when the application behaves unexpectedly. This placement also supports containment, because if the web tier is compromised, the attacker still must cross an enforced boundary to reach the database, rather than inheriting direct access. In exam logic, this pattern is frequently the best answer because it aligns exposure with the zone built to handle it while protecting core data.
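The containment property of this web-tier and database placement can be shown in a few lines. The source identities and port below are illustrative assumptions, standing in for whatever enforcement the boundary device actually applies:

```python
# Hypothetical sketch of the web / app / database service chain across zones.
# Source identities and the database port are illustrative only.

DB_ALLOWED_SOURCES = {"app-tier"}  # only the application tier may open DB connections
DB_PORT = 5432

def db_accepts(source, port):
    """The trusted-zone database boundary enforces both the source
    identity and the port, not merely the zone the traffic came from."""
    return source in DB_ALLOWED_SOURCES and port == DB_PORT
```

Under this rule, a compromised web tier in the screened subnet cannot connect to the database directly; the attacker still has to cross the enforced boundary through the application tier, which is the containment benefit the scenario describes.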

A severe pitfall is placing administrative interfaces in untrusted zones, because it increases compromise likelihood and reduces the effectiveness of your defenses by exposing high-privilege entry points. Administrative interfaces include management consoles, remote administration ports, and control planes that allow configuration changes, and they should be treated as crown jewels. If these interfaces are reachable from untrusted networks, you have effectively invited brute force attempts, credential stuffing, and exploitation of management vulnerabilities. Even strong authentication does not fully offset the risk of broad exposure, because misconfigurations, bypasses, and password reuse can still occur, and management interfaces are often high-impact targets. In exam scenarios, if you see “admin interface exposed to the internet,” the best answer usually involves moving it into a trusted management zone, restricting access through secure gateways, and enforcing strong identity controls. The correct reasoning is that administrative access should be narrow, auditable, and controlled, not broadly reachable. Keeping admin paths out of untrusted space is one of the simplest ways to reduce catastrophic compromise risk.

Another pitfall is allowing unrestricted east-west traffic inside the trusted zone, because “trusted” is not the same as “flat,” and flat internal networks enable lateral movement and blast radius expansion. When systems inside a trusted zone can talk to everything freely, a compromise in one system can spread quickly to others, turning a contained incident into an environment-wide breach. Unrestricted east-west also makes troubleshooting harder because traffic patterns become noisy, and it becomes more difficult to spot anomalous connections when everything is allowed. A better approach is to treat trusted zones as internally segmented, using network segmentation, security groups, and service-to-service policies to allow only what is needed. This does not require creating a separate subnet for every workload, but it does require meaningful boundaries that reduce implicit trust and constrain reachability. In exam questions, answers that suggest “allow all internal traffic” often conflict with modern security expectations and are usually wrong when the prompt hints at sensitive data or prior incidents. The best answer is typically the one that maintains internal segmentation and logs meaningful flows so that trust remains controlled rather than assumed.

There are quick wins that improve zone design even before you add new tools, such as documenting zone intent and mapping addresses accordingly, because clarity reduces mistakes and makes enforcement easier. Zone intent documentation defines what belongs in each zone, what the default stance is for inbound and outbound traffic, and what controls must exist at the boundaries. Address mapping means your subnets and address ranges reflect zones so that an address gives a hint about trust level and purpose, which helps logs and troubleshooting. When address plans align with zones, it becomes easier to write policies and to validate that traffic is flowing between appropriate places. Documentation also helps during change, because new services can be placed correctly without ad hoc decisions, and reviews can detect drift earlier. In exam scenarios, operational discipline cues often separate good answers from brittle ones, and clear zone mapping is a disciplined move. The point is that zoning is both a technical design and an operational agreement that must be maintained.
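Address-to-zone mapping can be sketched with the standard library's ipaddress module. The CIDR ranges and zone names are illustrative assumptions; the useful idea is that anything not explicitly mapped defaults to untrusted:

```python
# Sketch of mapping address ranges to zones so an IP hints at trust level.
# The CIDR ranges and zone names are illustrative assumptions.
import ipaddress

ZONE_MAP = {
    "screened": ipaddress.ip_network("10.1.0.0/16"),  # public-facing tiers
    "trusted":  ipaddress.ip_network("10.2.0.0/16"),  # internal services and data
}

def zone_of(addr):
    """Return the zone an address belongs to; unmapped space is untrusted."""
    ip = ipaddress.ip_address(addr)
    for zone, net in ZONE_MAP.items():
        if ip in net:
            return zone
    return "untrusted"  # default stance: anything unknown is hostile
```

When the address plan follows the zones like this, a single IP in a log line immediately tells you which trust level the traffic came from, which is the troubleshooting benefit the episode describes.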

A useful memory anchor is that trust level decides exposure, controls, and monitoring, because those are the three things zones are really about. Exposure means who can reach the service and from where, which is tightly tied to whether the zone is untrusted, screened, or trusted. Controls means what enforcement exists at boundaries and within zones, such as firewalls, gateways, web application firewalls, and segmentation policies. Monitoring means what visibility is required, because higher risk zones and boundary crossings demand more logging and more alerting to detect abuse and misconfiguration quickly. This anchor helps you avoid focusing only on placement, because placement without controls and monitoring is not a zone model, it is a diagram. It also keeps you from overcomplicating the concept, because most scenario questions can be answered by asking how exposure, controls, and monitoring should differ across trust levels. When you can recite this anchor, you can quickly justify why a service belongs in one zone and not another.

To end the core, classify three services into appropriate zones, such as a public web front end, an administrative management console, and a customer data database, because this exercise matches how exam scenarios test your instincts. The public web front end belongs in a screened subnet because it must accept traffic from untrusted sources but should be isolated and surrounded by inspection and tight inbound rules. The administrative management console belongs in a trusted management zone, not in untrusted space, because it is a high-privilege entry point that should be reachable only through controlled access paths with strong identity checks and auditing. The customer data database belongs in the trusted zone, with access limited to the application components that require it and with strong monitoring because the data is high value. The relationships between these services should be defined with least privilege rules, such as allowing the web tier to talk to the application tier on necessary ports and allowing only the application tier to talk to the database on specific database ports. Logging should be strongest at the boundaries and for sensitive access paths, supporting both detection and troubleshooting. When you can classify services like this quickly, you can answer zone-related scenario questions with a steady, repeatable logic.
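The closing exercise reduces to two questions per service: does it accept untrusted traffic, and is it high privilege? A minimal sketch of that decision, simplifying the episode's reasoning into hypothetical rules:

```python
# Sketch of the classification exercise; the rules are a deliberate
# simplification of the episode's reasoning, not a formal standard.

def classify(accepts_untrusted_traffic, high_privilege):
    """Map two yes/no questions to a zone placement."""
    if high_privilege:
        return "trusted management zone"  # admin consoles never face untrusted space
    if accepts_untrusted_traffic:
        return "screened subnet"          # public endpoints live in the buffer zone
    return "trusted zone"                 # internal data stores stay inside

services = {
    "public web front end":     classify(accepts_untrusted_traffic=True,  high_privilege=False),
    "admin management console": classify(accepts_untrusted_traffic=False, high_privilege=True),
    "customer data database":   classify(accepts_untrusted_traffic=False, high_privilege=False),
}
```

Running the three services through these two questions reproduces the placements the episode walks through, which is the kind of steady, repeatable logic the exam rewards.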

In the conclusion of Episode Twenty-Seven, titled “Network Zones: trusted, untrusted, and screened subnet decisions,” the main lesson is that zones are controlled trust levels that dictate where services live and how traffic is governed. Trusted zones house internal systems under strict identity and monitoring, untrusted zones represent internet-facing exposure where minimal safety is assumed, and screened subnets provide a buffer for services that must be reachable while limiting blast radius. You place public endpoints in screened areas rather than deep inside, surround boundaries with layered controls like firewalls, web application firewalls, gateways, and strong logging, and you treat inbound and outbound rules differently based on zone purpose and risk. You avoid pitfalls like exposing administrative interfaces in untrusted zones and allowing unrestricted east-west traffic inside trusted zones, because both increase compromise likelihood and blast radius. You gain quick wins by documenting zone intent and mapping addressing to zones so policies and logs stay clear and consistent. Assign yourself one boundary review exercise by taking a service chain you know and stating which zone each component belongs in, what the inbound and outbound rules should be at each boundary, and what logs you would review to confirm the trust model is actually being enforced.
