Web Filtering

But what about web filtering? Doesn’t that solve this problem? Unfortunately, the answer is “Not entirely.” However, web filters play an important role in protecting us from pornographic material, and I strongly recommend the use of an effective web filter.

Filtering pornographic images is a very difficult computing problem. As humans, we’re blessed with brains that are very good at making sense of a visual world with its nuances of color, shape, and motion. But to a software program, your computer screen is just a collection of millions of tiny colored dots called pixels. Image filtering programs use complex algorithms to make sense of those dots and decide whether an image contains pornography.

A closer look at pixels.
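To make the difficulty concrete, here is a deliberately crude sketch, in Python using the Pillow imaging library, of the kind of pixel-level heuristic a filter might start from: count the pixels whose colors fall in a rough skin-tone range and flag the image if the fraction is high. The file name and threshold are hypothetical, and the rule misfires constantly, which is exactly the point.

```python
# A deliberately crude pixel-level heuristic: count pixels whose colors
# fall in a rough skin-tone range and flag the image if the fraction is
# high. Real filters are far more sophisticated; this only shows why the
# problem is hard.
from PIL import Image

def is_skin_tone(r, g, b):
    # A classic rule-of-thumb RGB test for skin tones. It also matches
    # sand, wood, and ordinary faces.
    return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

def looks_suspicious(path, threshold=0.40):
    pixels = list(Image.open(path).convert("RGB").getdata())
    skin = sum(1 for r, g, b in pixels if is_skin_tone(r, g, b))
    return skin / len(pixels) > threshold

print(looks_suspicious("photo.jpg"))  # hypothetical file; a beach scene will often trigger it
```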

Actually, one of the most reliable pieces of information for an image filtering program is the website that serves it up. Blocking all content from sites known to serve up pornographic images is much more reliable than trying to analyze images in real time. Of course, new pornographic sites come online every day, so no list of offending sites is ever entirely complete. Further, some inappropriate material is served up from websites that wouldn’t be considered patently pornographic. Personal opinion also comes into play, since one viewer may find pornographic what another finds acceptable; even if an image filtering program can tell what a picture contains, the value judgments that enter into appropriate filtering are almost impossible for software to make accurately. To muddy the water even further, legitimate information on topics involving sexuality and human physiology is part of the mix of online material.

Ultimately, filtering software takes three approaches to providing protection: real-time analysis, whitelisting, and blacklisting. Whitelisting restricts browsing to a list of approved sites; blacklisting blocks a list of known offenders while allowing everything else.

Real-time analysis involves the filtering software looking through web pages and images as they are delivered to the browser. The main constraint on real-time analysis is that it is time-intensive, so computer performance may suffer. Since no decision can be perfect, such analysis stops at some point and makes a best guess as to the appropriateness of the content.
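Here is a minimal sketch, in Python, of how a filter might combine the three approaches: consult the whitelist first, then the blacklist, and only then fall back to slow real-time analysis. The list entries and the analyze callback are hypothetical; commercial filters ship with databases of millions of categorized sites.

```python
# Minimal sketch of combining whitelisting, blacklisting, and real-time
# analysis. List entries are hypothetical placeholders.
from urllib.parse import urlparse

WHITELIST = {"wikipedia.org"}      # always allow
BLACKLIST = {"bad-example.com"}    # always block

def matches(host, domains):
    # True if host is one of the domains or a subdomain of one.
    return any(host == d or host.endswith("." + d) for d in domains)

def allow(url, analyze=None):
    host = urlparse(url).netloc.lower()
    if matches(host, WHITELIST):
        return True
    if matches(host, BLACKLIST):
        return False
    # Unknown site: fall back to real-time content analysis, which is
    # slow and must eventually settle for a best guess.
    return analyze(url) if analyze else True

print(allow("https://en.wikipedia.org/wiki/Pixel"))  # True
```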

Since real-time filtering is a problem with imperfect solutions at best, failures occur with regularity. Filtering failures fall broadly into two categories: false positives and false negatives.

A false positive is a situation in which the filter determines that a site or image is inappropriate when in fact it isn’t. Most content filters have an override feature that allows users to exercise their own judgment and proceed to a site they perceive as safe; overriding a block is simply a matter of entering a password. However, for children and others who don’t have access to an override password, false positives can be very frustrating. A child may know that a particular site is not problematic, and may even desperately need information from it for homework, but is left waiting for mom or dad to come override the block. Such frustrations aside, false positives are a necessary evil, the cost of a filter that errs on the side of over-protection.

A false negative is the other side of the coin: offensive content that the filter fails to identify as such and hence allows through to the browser. This sort of failure exposes us to corrosive content despite the fact that we were conscientious enough to install a filter in the first place. A good filter yields false negatives in only a small percentage of the cases in which content should have been blocked, which in practice means that the vast majority of the garbage you don’t want to see won’t make it to you. But as I pointed out earlier, given the enormous volume of pornographic content on the web, even a false negative rate of 1% lets plenty of inappropriate content through, particularly if a user is committed to getting to it.
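The arithmetic here is sobering. The numbers below are purely hypothetical, but they show how a miss rate that sounds tiny still leaves an enormous absolute amount of content unblocked:

```python
# Illustrative arithmetic only; both numbers are hypothetical.
offensive_pages = 100_000_000      # suppose 100 million offensive pages exist
false_negative_rate = 0.01         # a filter that misses just 1 in 100
missed = int(offensive_pages * false_negative_rate)
print(f"{missed:,} pages slip through")  # prints: 1,000,000 pages slip through
```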

Some have suggested that, since filters aren’t foolproof, we shouldn’t use them. Others argue that since our kids can gain access to this material with or without a filter, why bother? I strongly disagree with both positions. The fact is that this generation will experience virtually 100% exposure to pornography in some form. The question is one of magnitude. It’s sort of like the difference between getting a virus blast straight in the face and getting an inoculation. While filters aren’t perfect, they accomplish two very important things: 1) they significantly reduce both the frequency and severity of exposure; and 2) reporting mechanisms for blocked sites give parents an opportunity to have a conversation with their children, placing the issue back firmly where it belongs—as a parenting issue rather than a technology issue.

I consider web filtering to be an essential tool in managing our moral lives online. It helps protect children from accidental exposure. It also helps when someone acknowledges that they have a problem and wants to draw upon all the help they can get in protecting themselves. We have to be absolutely honest with ourselves and admit that if someone has a desire to access pornographic or other inappropriate material on the web, there is nothing that you, or I, or anyone else can do to stop them. So while we do all we can to protect ourselves from exposure, our ultimate protection stems from a deep internal commitment to keep ourselves clean from the filth of this generation, buoyed by the reality of the atonement of Jesus Christ and the strength that flows from the Holy Ghost.

As we teach and counsel family, friends, and ward members, it’s imperative that we focus our attention on the nature of their internal conversion to true principles and their relationship with God, even while we put into place practical protections to help avoid unwanted exposure.

Suggested Listening:
Internet Safety Podcast Episode 4: Content and Filtering


Image from Wikimedia Commons.
