Understanding Port Control, Application Control

What is the difference (not the mechanical (?), but the practical (?)) between network application controls and port access controls?

If you have network application controls, is port control necessary?

Or, …

I posit that simple (is it?) port access controls are less resource intensive:
a wall that you put holes in, versus
a group of applications that each have individual policies which need to be checked every time a network call is made.

So does combining the two then create a compromise?

  • Less Resource Intensive:
    So, I open “a” port that allows incoming connections. This means any program can receive information (act as a server) through this port. But with application controls only the programs I specify will receive this information (correct?).

  • More Resource Intensive:
    So I open “all” my ports and allow incoming connections. This means any program can receive information (act as a server) through these ports. But with application controls only the programs I specify will receive this information (correct?).

I’m coming from a ZoneAlarm background where port control is not the main or visible facet of the program, but program access control is. I wonder if this is not why ZoneAlarm requires more memory and/or has … (difficulty) …

If an admin has a better title, please rename this thread.

G’day Zoofield,

Firstly, welcome to the forums.

As I understand it, the application monitor rules determine WHAT can send or receive and the network monitor rules determine HOW permitted apps can send or receive.

Comodo Personal Firewall uses a hierarchical approach that varies depending on whether it’s an inbound or an outbound request. Inbound requests have to satisfy the network monitor rules first before they are allowed in; then they are checked against the application monitor and/or the component monitor. Outbound requests are first checked against the component/application monitor and then against the network monitor rules.

While this may seem “fussier” than ZA or other firewalls, it is, IMHO, a better approach as it verifies not only the attempted method of transmission or reception, but also what is trying to transmit or receive.
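To make that order of checks concrete, here is a minimal sketch in Python - not CPF’s actual code, and the Packet fields, port set and application set are purely illustrative assumptions - showing inbound traffic hitting the network monitor before the application monitor, and outbound traffic being checked the other way around.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    direction: str       # "inbound" or "outbound"
    port: int            # local port involved in the connection
    owner_process: str   # application that wants to send or receive the data

def network_monitor_allows(packet: Packet, open_ports: set) -> bool:
    # Network monitor: HOW traffic may flow - is this port open per the network rules?
    return packet.port in open_ports

def application_monitor_allows(packet: Packet, allowed_apps: set) -> bool:
    # Application monitor: WHAT may send or receive - is this program permitted?
    return packet.owner_process in allowed_apps

def permitted(packet: Packet, open_ports: set, allowed_apps: set) -> bool:
    if packet.direction == "inbound":
        # Inbound: network monitor first, then the application/component monitor.
        return network_monitor_allows(packet, open_ports) and \
               application_monitor_allows(packet, allowed_apps)
    # Outbound: application/component monitor first, then the network monitor.
    return application_monitor_allows(packet, allowed_apps) and \
           network_monitor_allows(packet, open_ports)

# An inbound connection on port 80 for a permitted web server is let through.
print(permitted(Packet("inbound", 80, "httpd.exe"), {80, 443}, {"httpd.exe"}))  # True
```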

- Less Resource Intensive: So, I open “a” port that allows incoming connections. This means any program can receive information (act as a server) through this port. But with application controls only the programs I specify will receive this information (correct?).

Correct

- More Resource Intensive: So I open “all” my ports and allow incoming connections. This means any program can receive information (act as a server) through these ports. But with application controls only the programs I specify will receive this information (correct?).

INCORRECT (emphasis, not shouting). Opening all ports is just that - opening all ports. Anything attempting to access your PC would therefore satisfy the network monitor rule. You’d be leaving your system open to abuse by opening all ports.

IMHO, the golden rule with firewalls is to open as little as possible, while still allowing yourself to work the way you want to work.

If anyone else has a better explanation or if I’ve gotten anything wrong, feel free to jump in.

Hope this helps,
Ewen :slight_smile:

Sorry, I’m still not quite understanding this yet.

For instance,

  • ZoneAlarm:
    Application Control is the first response, and if necessary you can customize the ports, protocols, and IP ranges through the Application Control settings.

With your applications’ I/O communications controlled, isn’t this nearly the same thing as having port control? You would still be invisible to a network?

  • In Comodo:
    When an application tries to respond to a port closed by the Network Rules, do the Application Rules still get activated? I wonder, because this would seem redundant.

When you open a port or set of ports through the Network Rules, isn’t the functionality then comparable to that of ZoneAlarm, for the reasons mentioned above?

  • With a program like uTorrent, in which random ports can be used, ZoneAlarm’s method would seem to cope with this better.

Welcome Zoofield.

See if this helps? Maybe you will get a better understanding after reading this.

Thanks,
rki.

The level at which application control operates is too high to protect the TCP/IP stack. The network monitor protects your TCP/IP stack. When a request has reached the application layer, all the data has already been received and processed by your host (i.e. the TCP/IP stack in your kernel got everything).
An application-level filter can only allow or deny passing this data to the application waiting for it. It cannot protect the TCP/IP stack at all.

For example, a DoS attack can easily knock out your computer without such protection.

Also, not all traffic is visible to the application filter. For example, if your PC is acting as a gateway, the application filter will see nothing.

Another advantage of this design is fault tolerance. If CPF’s application filter somehow fails to operate, you will still be protected against incoming threats.

The only disadvantage of this design is the configuration difficulty, which will be solved with the new application rules interface.

Egemen

Thank you, rki, for your response and the link.

Thank you, Egemen, that is the answer I believe I was looking for.

An extremely simplified clarification (I hope not to annoy or be redundant):

  • An Application Controls-based firewall (e.g. ZoneAlarm) will automatically control the network rules based on this system’s simple yes/no Application Controls policy.

When an application is started and requires network access, the firewall opens up the port requested by the application based on its Application Controls.
For this application to receive information from the network (act as a server), the firewall needs to check whether this program exists and whether it is allowed to receive; if so, it opens up the port and allows access.

When the Application Controls fail, the whole system fails.
Because of the amount of communication and checking required between the Application Controls and the Network Controls, this is a possible source of resource waste.
More user-friendly and dynamic (port control).

  • A rules-based or Network Controls-based firewall with Application Controls (e.g. Comodo) controls the network through Network Controls, independent of the Application Controls.

When an application is started and requires network access, its Application Controls give it access to the Network Controls. Based on the Network Controls it is given access to the network. For this application to receive information from the network (act as a server), the Network Controls allow (or deny) the information. If information passes the Network Controls, then, based on the application’s Application Controls, it receives the information.

Because the systems (Controls) are independent of one another, the failure of one does not cause the failure of the other.
With fewer checks and less communication, this is possibly more resource-friendly.
Less user-friendly and dynamic (port control).

Thanks

You are welcome. Glad we could help.

Thanks,
rki.

OK, this answer is pretty generic, and may actually be wrong for Comodo Personal Firewall, but I think they’ve managed to implement it “normally” (and in that case this answer will be right).

A standard port access control (what network techs call an “access control list”) is a very simple firewall rule. The rule is formed as follows:

PROTOCOL, RULE, SOURCE IP, SOURCE PORT, DESTINATION IP, DESTINATION PORT.

Example:

TCP, Permit, 192.168.0.2, all, any, 80

This rule would permit the host 192.168.0.2 to connect to any IP (via TCP) on port 80 (HTTP), i.e. it would permit the host 192.168.0.2 to browse web pages. It applies to all traffic from that host, regardless of what program generates it. This is a simple layer 4 access control list (for explanations of the different layers of the OSI model, take a look at the Wikipedia article on the OSI Model).

This is your basic filtering, and it requires very little CPU overhead.
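As a hypothetical illustration of how cheap that check is, here is a short Python sketch evaluating the rule above against a packet’s header fields (the dictionary layout and field names are my own assumptions, not any particular firewall’s format):

```python
def matches(rule: dict, packet: dict) -> bool:
    # Compare only the header fields; "any"/"all" act as wildcards.
    for field in ("protocol", "src_ip", "src_port", "dst_ip", "dst_port"):
        want = rule[field]
        if want not in ("any", "all") and want != packet[field]:
            return False
    return True

rule = {"action": "permit", "protocol": "TCP",
        "src_ip": "192.168.0.2", "src_port": "all",
        "dst_ip": "any", "dst_port": 80}

packet = {"protocol": "TCP", "src_ip": "192.168.0.2", "src_port": 51234,
          "dst_ip": "203.0.113.10", "dst_port": 80}

print(matches(rule, packet))  # True: 192.168.0.2 may browse the web
```

Note that the matcher never asks which program generated the packet; it reads nothing but the header - which is exactly the flaw described next.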

However, there is one flaw to this type of filtering: the filter is “stupid”, and only looks at the source and destination of the packet (i.e. it only reads the packet header) to decide whether the packet should be dropped or forwarded (transmitted).

This type of rule CANNOT stop applications accessing internet addresses “behind your back”, hence if you get a worm infection, the worm can still spread from your box onto others. And since it doesn’t look at what application is involved (application filters can be used for inbound traffic as well), it doesn’t block a crafted worm packet entering your system. This is a weak solution.

A much better approach is to extend the filter to cover layer 7 (application) as well. This means that the traffic is matched against the software transmitting it, and thus you can specify which applications are actually allowed to “surf the web”.

Since much of the malware hackers install on compromised machines consists of remotely controlled applications (bots), the ability to select which applications are actually allowed to connect to the internet is vital - not only to keep yourself from spreading worms or having your machine used in a DDoS (Distributed Denial of Service) attack, but also in removing such infections. A lot of the “smarter” flood drones are capable of being instructed to “upgrade themselves” over the web, and the new, upgraded version will of course have a different signature and be harder for your antivirus package to find.

Thus, if you can stop the application from being remotely controlled, stop it from being upgraded, and, more importantly, get a warning that an application is connecting to the internet without you asking it to, you are in a much better position to regain control of your machine.

This is of course a worst case scenario.

A more day-to-day example would be stopping spyware from connecting “home” to dump its info on you to the marketing corps.

Application layer firewalling is only possible on a personal firewall (it can’t be done on a network firewall, since a network firewall doesn’t have access to your machine’s internal task list). The downside to application-layer security is that it creates a small overhead on traffic. This adds a few clock cycles of extra latency on the first packet of a stream, but for the rest of the stream the latency is minimal (since the connection is already established).

The best setup is to use a mixture of layer 3, layer 4 and layer 7 filtering.

Start out with your layer 3 filtering. The first thing to do is to filter out all addresses that don’t belong on an interface. For instance: your Ethernet interface has NO business talking on the 127.0.0.0/24 address space at all. Block it. When you have filtered out all addresses that have no need to pass through that interface, move on to ports (TCP/UDP ports, ICMP services, etc.), and do the same thing there. Filter out all that you can there, since layer 3 and 4 filtering is “cheap” CPU-wise.
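Here is a small illustration of that first layer 3 step, assuming a simple blocklist of address ranges that have no business arriving on the Ethernet interface (the ranges and function names are examples only, not a recommendation for your setup):

```python
import ipaddress

# Example only: source ranges we never expect to see on the Ethernet interface.
BLOCKED_ON_ETHERNET = [ipaddress.ip_network("127.0.0.0/24")]

def layer3_drop(src_ip: str) -> bool:
    # Drop the packet if its source address falls inside any blocked range.
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLOCKED_ON_ETHERNET)

print(layer3_drop("127.0.0.1"))    # True  - loopback space has no place here
print(layer3_drop("192.168.0.2"))  # False - passes on to the layer 4 checks
```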

Now that you have a basic ruleset, it’s time to put the application filter into learning mode, and start binding applications to those ports you opened at layer 4.

By filtering this way, you reduce the number of rules in the application ruleset (since a lot of unwanted traffic is filtered at a lower level), and thus you reduce the CPU load.
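Under the same illustrative assumptions as the sketches above (a toy, not any real firewall), the payoff of that ordering looks like this: most unwanted traffic is rejected by the cheap layer 3/4 checks before the application binding is ever consulted.

```python
import ipaddress

def accept(packet: dict, blocked_nets, open_ports, port_to_app) -> bool:
    # Layer 3: cheap source-address check.
    if any(ipaddress.ip_address(packet["src_ip"]) in net for net in blocked_nets):
        return False
    # Layer 4: cheap port check.
    if packet["dst_port"] not in open_ports:
        return False
    # Layer 7: only now do we look up which application is bound to this port.
    return port_to_app.get(packet["dst_port"]) == packet["app"]

print(accept({"src_ip": "10.0.0.5", "dst_port": 443, "app": "firefox.exe"},
             blocked_nets=[ipaddress.ip_network("127.0.0.0/24")],
             open_ports={80, 443},
             port_to_app={80: "firefox.exe", 443: "firefox.exe"}))  # True
```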

Building a firewall ruleset this way is a LOT of work, but the payoff is that you can be 99% secure without needing the CPU of a Cray supercomputer (not anywhere near your budget, alas).

When building your rules, try to aggregate them. Instead of opening one port at a time, specify things as a range where you can. It reduces the number of rules (improves readability and reduces CPU load).
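For example, a single range rule can stand in for a handful of per-port rules while permitting exactly the same traffic (the port numbers below are just illustrative):

```python
# Nine per-port rules versus one aggregated range rule, as (action, low, high).
one_per_port = [("permit", p, p) for p in range(6881, 6890)]
aggregated   = [("permit", 6881, 6889)]

def port_permitted(port: int, rules) -> bool:
    return any(action == "permit" and lo <= port <= hi for action, lo, hi in rules)

# Both rulesets permit exactly the same ports, but the second is one line.
assert all(port_permitted(p, one_per_port) == port_permitted(p, aggregated)
           for p in range(1, 65536))
```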

Remember that a firewall reads the rules from the bottom upwards.

This means that you can do some tricks.

Let’s say that you want ports 1022, 1023, 1025 and 1026 to be allowed.

Use two lines.
deny 1024
permit 1022-1026

The bottom rule opens the entire range (including the one that you don’t want)
The next rule (the one above) denies that port.

For a packet to be successfully transmitted, it has to pass the entire ruleset (from the bottom and up) without being implicitly or explicitly dropped. For a packet to be dropped, it just has to fail one test.
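Here is a sketch of that evaluation model exactly as described in this post (note that, per the follow-up below, CPF itself applies rules starting from Rule 0 at the top of the list; the rule tuples are my own illustrative format): a packet is dropped as soon as any deny rule matches it, and is transmitted only if at least one permit rule matches it.

```python
# Rules as (action, low_port, high_port), listed from the bottom and up.
rules_bottom_up = [
    ("permit", 1022, 1026),   # bottom rule: opens the whole range
    ("deny",   1024, 1024),   # the rule above it: closes the one unwanted port
]

def allowed(port: int) -> bool:
    permitted = False
    for action, lo, hi in rules_bottom_up:   # walk the ruleset from the bottom and up
        if lo <= port <= hi:
            if action == "deny":
                return False                 # failing one test drops the packet
            permitted = True
    return permitted                         # implicit drop if no permit rule matched

for p in (1022, 1023, 1024, 1025, 1026, 1050):
    print(p, allowed(p))   # 1024 and 1050 are dropped; the rest pass
```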

Writing your rulesets this way provides you with control, since it’s readable, and computer networking security is all about KNOWING what traffic goes in and out, and having CONTROL over it. If you write too complex a ruleset, you lose the ability to read it, and thus lose the KNOWLEDGE. Without KNOWLEDGE, you cannot have control, and hence no security.

I hope I made myself somewhat understandable (especially since English is NOT my native language)

//Svein

Again, an excellent post, Svein.

Just to clarify one point, CPF rules are applied starting at Rule 0 (which appears at the top of the rules list), then Rule 1, Rule 2, etc.

Cheers,
Ewen :-

(Added to FAQ.)