Back to school

Filtering kids' Net access sends all the wrong messages

My son’s been back to school for a couple of weeks now. Getting to sleep on time and putting away the summer’s camping gear have been a drag, but it’s offset by the promise of new friends and untried rhinoviruses.

Kids seem to adapt quickly to new teachers no matter how extensive their idiosyncrasies, but I always dread the first "curriculum night," at which each teacher explains in excruciating detail how he intends to leverage personal neuroses into a yearlong traumatic experience for my offspring.

I attended my obligatory session with the primary teacher, then escaped to the tranquil bliss of the computer room. In the soothing glare of the fluorescents, a warmly geekish former construction worker explained to a room full of parents how a collection of donated hardware and unpatched copies of Windows XP would provide limitless educational opportunity for their children. Not for a minute do I believe that access to a keyboard will fill their heads with useful content, but that sort of talk is relatively harmless -- until the topic comes to Internet content filtering in the school’s computer lab and library.

Not long before I became a parent, I attended a Usenix conference that forever changed my opinion on the topic. Usenix -- formally called "The Advanced Computing Systems Professional and Technical Association" -- draws a very different collection of geeks to its gatherings than other system- and security-related organizations. SANS seems to attract diligent system administrators intent upon becoming security experts, and policy administrators who'll live up to everything we’ve come to expect from years of government training. BlackHat is filled with wizards, up-and-coming h@x3r dudes and 1337 ladies, and DefCon is rife with fanboys and girls.

However, Usenix conferences are attended by mature systems and security technorati who don’t typically have "con" or "owned" in their common parlance. Conversations tend to shift away from the latest tools to the most effective methods, from war stories to work-life balance, and from confidentiality and access to responsible disclosure and personal ethics. Rainbow suspenders and gray beards may not be rampant, but these are the people who think long and hard about the consequences of their actions. They’re not usually impressed by technologies that promise to solve social problems.

I’d ducked out of an uninspiring session and into a small hallway gathering. Clustered around some chairs and the snack table were the heads of network security for most of the world’s major telecom networks and two of the largest financial companies, notable federal and law enforcement figures, and several other gurus and mavens I’d heard of.  They were talking about Internet content filtering -- not for their respective organizations, but at home with their children and teenagers. 

I discovered that I’m not the only one who thinks most content-filtering software is junk. Worse, given the experiences of those wiser than I, it seems to have a negative effect on all but the most docile and unimaginative of children.

Consider the goals of child-oriented content filtering. Broadly speaking, parents want to prevent bad stuff from reaching the eyes and ears of children. A more primal concern is to prevent children from interactive contact with persons who might want to abuse or harm them, ranging from inappropriate information gathering ("give me Mommy’s credit card number") to the ever-present bogeyman of webcam-enabled van-driving sex nuts. 

In my city, an organization of do-gooders posted billboards claiming that "One in five children is sexually solicited online." This obsessive nonsense actually comes from a survey conducted by the Department of Justice, which fails to adequately define "stranger" and made only passing distinctions between prank chat messages from giggling boys, breathy missives from recidivist Level III sex offenders, and off-color comments between sexually active 17-year-olds in relationships.  But claims like those on the signage drive many parents into a paranoid tizzy, and their first reaction is to erect barriers at home and school between their children and the perceived bad stuff. But barriers rarely work, and when they do, they don’t last. 

Filtering works by either preventing access to a list of known-bad sites (a blacklist), or allowing access only to a list of known-good sites (a whitelist). From a practical standpoint, a list of known-bad sites can never be up to date, and the providers of most objectionable content are highly motivated to defeat filters and reach out to immature and ill-advised audiences that have access to their own or their parents’ financial resources.

Blacklisting sites is a game of catch-up that can’t possibly be won. (But it’s a great business model; there’s excellent money in selling an unending stream of security updates to paranoid people.) Broad blacklisting of objectionable content or communications by protocol or keyword is also ineffective, and the technique is criticized as preventing access to legitimate educational content. Meanwhile, whitelist filtering that only allows access to a limited set of known-good sites doesn’t work beyond the first couple of grades, because it prevents teaching anything about exploration, reading, research and critical thinking. 
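The asymmetry between the two approaches is easy to see in a few lines of Python. This is a purely illustrative sketch -- the domain names and lists here are hypothetical, not drawn from any real filtering product:

```python
# Hypothetical sketch of the two filtering strategies discussed above.
# All domains and list contents are made up for illustration.

BLACKLIST = {"badsite.example", "spam.example"}          # known-bad sites
WHITELIST = {"school.example", "encyclopedia.example"}   # known-good sites

def blacklist_allows(domain: str) -> bool:
    # Permits everything not yet on the list -- so a site registered
    # yesterday sails through until the list vendor catches up.
    return domain not in BLACKLIST

def whitelist_allows(domain: str) -> bool:
    # Permits only pre-approved sites -- nothing new gets through,
    # including legitimate sites nobody thought to add.
    return domain in WHITELIST

# A brand-new objectionable site evades the blacklist:
print(blacklist_allows("new-badsite.example"))   # True: slips through
# ...while the whitelist blocks even a legitimate research source:
print(whitelist_allows("library.example"))       # False: over-blocks
```

The blacklist fails open on anything unknown, which is exactly the catch-up game described below; the whitelist fails closed, which is why it chokes off exploration and research.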

Even the National Center for Missing and Exploited Children's Youth Internet Safety Survey in 2006 ("YISS-2") says, "The increase in exposure to unwanted sexual material occurred despite increased use of filtering, blocking, and monitoring software in households[.]"  Use of "filtering, blocking, and monitoring software" ranked 15th on the survey's list of recommendations for protecting children. Unfortunately, there doesn’t seem to be any alternative for the concerned parent, or at least not one that comes at a reasonable price at the local computer retailer.  But that’s the wrong place to be shopping for a solution.

What was the opinion of the Usenix hallway gang? At risk of infringing on children’s innocence, it’s worth teaching them how to survive in an increasingly communication-saturated world. While most saw value in logging their children’s activities, every last one derided Net filtering software as conveying one of two messages to children: 1) "I don’t trust you," or 2) "I don’t care enough to educate you myself, so I’m going to put you in the care of an ineffective watchdog."

Instead, the methods for protecting the children should focus on the children themselves. In outbound communications, we ought to teach and foster independent judgment regarding appropriateness. When content is brought in, we want the squinty-eye from young kids, and genuine critical thinking from older ones. This includes a willingness to ask for help judging information or communications that seem "off" without the threat of excessive parental meddling in kids’ own social development and rituals. And most of all, we want to breed a responsibility for one’s own actions -- including knowledge that perceived anonymity on the net is often a false comfort.

Their wisdom included several rich quotes. Regarding reaching out for content: "Browsing the internet is like walking around in the city.  Have a purpose, or you’ll get lost, and lost people look vulnerable." Regarding trust and critical thinking: "Most of the stuff on the internet is whatever someone felt like making up [opinion, not fact].  You just have to keep asking ‘Is this reasonable?’ or ‘Would my friend ask me this?’" 

Another person offered up the priceless "Have you been completely honest with everyone you’ve met online?" and "Do people lie to make themselves more friendly and attractive, or more mean and ugly?"  And on the topic of parental attentiveness, I vividly remember the top security administrator for a worldwide telecom company paraphrasing years of conversations with his junior-high-school daughter: "You do know what I do for a living, don’t you?  Everything we do is logged by the family’s firewall or the ISP, but I trust you. Don’t give me a reason to invade your privacy, and I promise I won’t."

A family shares many interaction and communication traits with most other enterprises.  Ask any network administrator about border-based enterprise security, and you’re likely to elicit a variant of the phrase "crunchy outside, nougaty inside" regarding systems and networks.  It’s not enough to shield individual systems from the hostile Internet; they must survive on their own.  At a minimum, new systems should be configured to recognize and raise notification when bad things happen.  A more mature and hardened system ought to be able to withstand more direct or hostile interaction with outside entities.

Surely a child deserves no less care than we apply to inanimate data processing devices, and if we observe the same principles in child rearing, there’s a continuum with room for both innocence and maturity. From our hallway conversation, it was clear that these parents are highly involved with their children but let them roam far enough into unconstrained realms to grow strong and intelligent on their own. Granted, the comments came from a technically elite group. But care and involvement can be exercised by any parent. And these days the tools for logging and review are available to nontechnical parents in a wide variety of consumer network devices. They just need to be used wisely.

Jon Espenschied has been at play in the security industry for enough years to become enthusiastic, blasé, cynical, jaded, content and enthusiastic again. He is currently a senior security consultant in Seattle, where his advice has been ignored by CEOs, auditors and sysadmins alike.


Copyright © 2006 IDG Communications, Inc.
