My original blog - I have moved to http://shannonclark.wordpress.com so this remains only as an archive.
 
 

Searching for the Moon
by Shannon Clark
 

Sunday, January 11, 2004


Salon.com | Homeland insecurity

I will read this article in more detail and comment later, but first a few points.

1. Salon is WRONG on a fundamental point: more data is fundamental to better analysis. Especially with automated tools (applied AI), it is inherent in the nature of such systems that more data is better. The systems "learn" and recognize patterns, and more data means more opportunities to learn and find "better" patterns.

As well, reducing the data by means of human assumptions - for example, that people of certain ages cannot be security risks - will weaken the accuracy and effectiveness of computer systems, since in a fundamental way they are designed to recognize patterns that humans don't (and/or can't, because of the amounts of data needed).

This is something that most people don't understand. Computers programmed well are inherently able to do things that humans can't. The goal of applied AI is not to make computers "think like humans" - rather, it is to build systems that learn and recognize patterns, often in ways that humans will never match, because we don't retain all the same details and/or can't process the same type of data as quickly.
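To make that concrete, here is a toy simulation in Python (all the feature names and rates below are invented for illustration - this is a sketch of the principle, not any real screening system). A rare combination of two facts is strongly tied to an event, and the more records the system sees, the more reliably it can estimate that pattern:

    import random

    random.seed(4)

    # Invented toy pattern: a pair of facts (say, one-way ticket AND cash
    # payment) raises the event rate 20x over the base rate.  Nothing here
    # reflects real screening data.
    def make_record():
        one_way = random.random() < 0.2
        cash = random.random() < 0.1
        rate = 0.02 if (one_way and cash) else 0.001
        return one_way, cash, random.random() < rate

    def learned_lift(n):
        """Estimated event-rate multiplier for the combined pattern."""
        records = [make_record() for _ in range(n)]
        both = [e for ow, c, e in records if ow and c]
        rest = [e for ow, c, e in records if not (ow and c)]
        if not both or not sum(rest):
            return None  # too little data to see the pattern at all
        return (sum(both) / len(both)) / (sum(rest) / len(rest))

    # With more records, the estimate converges toward the true 20x lift.
    for n in (1000, 10000, 100000, 1000000):
        print(n, learned_lift(n))

On small samples the estimated lift bounces around or can't be computed at all; on large samples it settles near the true multiplier. That is the sense in which more data is simply better for these systems.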

Just to follow up on the concept of "age": a human might simplistically conclude that age is a clear negative signal for the risk of being a terrorist and then set an arbitrary cutoff for that risk (say 16, leaving aside Columbine etc.).

At that point, however, either the human has to "prove" age in some way, or assume that the data in the systems is accurate.

A computer, on the other hand, if given all of the data available (say, about passengers), could use the following (just an example; as I stated, AI systems would probably come up with something much different):

1. Look at passengers' names (as given) and compare them with a list of names (and known aliases) of people wanted for questioning.

2. Look at data related to the traveling passenger, say the credit card used to pay for the ticket, and look for discrepancies - i.e. names that don't match (passenger/card) - especially, perhaps, names that don't quite match, which might indicate misspellings that in turn might indicate a false name being used (a rough sketch of such fuzzy matching appears after this list). As well, look for data on the card (such as the billing address) that might match a watch list (i.e. someone who lives at the same address as someone wanted).

3. Look at other known behavior: missed flights; checked bags vs. no bags; passengers traveling alone; one-way tickets vs. round trips; travel events in the future (i.e. a vacation booked at the same time) vs. nothing - especially with a one-way ticket; etc.

4. Look at past events and see if anyone traveling matches closely.

For example, I have read of a gun being discovered inside a stuffed animal carried by a young child - the article stated that the mother claimed it had been a gift from a stranger. Nothing definitive, and certainly just one isolated news story, but I'd think it warrants some random screening of other stuffed animals to be on the safe side - especially if anything unique about that mother/child seemed like a relevant fact.
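Here is a rough sketch of the near-miss name matching in point 2, using Python's standard difflib (the watch list and passenger names are entirely invented, and the 0.8 similarity cutoff is my guess - a real system would tune it against actual data):

    import difflib

    # Invented watch list (stored normalized) and passenger manifest.
    watch_list = ["john q terrorist", "jane smith", "robert jones"]
    passengers = ["Jon Q. Terorrist", "Jane Smith", "Alice Walker"]

    def normalize(name):
        # Lowercase, drop punctuation, collapse whitespace.
        cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
        return " ".join(cleaned.split())

    for p in passengers:
        name = normalize(p)
        # Exact hits and near-misses (possible misspellings or aliases);
        # the cutoff below is an assumption, not a tested threshold.
        hits = difflib.get_close_matches(name, watch_list, n=1, cutoff=0.8)
        if not hits:
            print(p, "-> no match")
        elif name == hits[0]:
            print(p, "-> exact match:", hits[0])
        else:
            print(p, "-> near-miss (check for misspelling):", hits[0])

The interesting output is the middle case: a name that is not on the list as typed, but is suspiciously close to one that is.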

As well, one of the patterns that a computer might find (and a determined observer might as well) is a negative fact - i.e. "we do not screen anyone who is an elderly woman or traveling with young children."

Unfortunately, such a pattern is likely to be noticed by the "bad guys" as well. So a very smart approach to avoid this is to vary procedures and truly check random people (with a computer looking all the time to see if what "seems" random really is - one way to test that is sketched below).
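A minimal sketch of that kind of check, assuming for simplicity that four invented passenger groups each make up a quarter of travelers - a hand-computed chi-square statistic compares logged screenings against what true random selection would produce:

    # Invented screening log: 400 screenings across four equally sized
    # groups.  True random selection would screen each group ~100 times.
    groups   = ["elderly women", "with children", "business", "other"]
    observed = [12, 18, 180, 190]
    expected = [100, 100, 100, 100]

    def chi_square(obs, exp):
        return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

    stat = chi_square(observed, expected)
    # With 3 degrees of freedom, anything above ~11.3 is significant at
    # the 1% level; a value this large means the "random" screening is
    # visibly skipping some groups - exactly the pattern an observer
    # (or an adversary) could exploit.
    print("chi-square =", round(stat, 1))  # ~289.7 here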

2. I disagree with Salon in their blithe dismissal of using computers to find patterns, and in their concern about too many false alarms. The challenge that all security faces is complacency. "False alarms," while disruptive, also shake off cobwebs and are vital real-time learning exercises for the system as a whole. As well, as the data collected improves, and as patterns are proven to be important (or not), the systems as a whole can get better.

It is, unfortunately, very costly to make a mistake of inaction when it comes to people's lives and security - that is, if they miss a terrorist who does indeed exploit a hole in the security systems, that single person (or group of people) can harm a great number of people very quickly.

For example, there have been reports over the years since 9/11 of stolen uniforms and badges belonging to airline employees. As well, I could easily imagine that terrorists might respond to enhanced security around passengers by focusing on where the security is weaker, perhaps around employees and workers at airports and airlines. Who knows - and that uncertainty is a major part of the problem.

As a developer and thinker on AI systems, I have a sense of what they can do. The power, in most cases, comes from letting a system loose on lots of data, much (indeed perhaps most) of it seemingly nonsensical and irrelevant. The real power comes from the system being able to combine disparate facts and data into something unseen and unknown.

This has always been a part of security/intelligence operations. The famous anecdote about being able to predict major world events by the number of pizzas delivered to the Pentagon is not completely without merit - such patterns exist all over the place. It takes much human effort, and much computer effort, to tease these out - the challenge is to do so in advance of the events in question, not after.
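As a toy illustration (every number below is invented - this is just the pizza anecdote reduced to arithmetic), a hand-rolled Pearson correlation between a mundane daily signal and a crisis indicator:

    import math

    # Invented daily counts: late-night deliveries spike on crisis days.
    deliveries = [12, 14, 11, 13, 35, 38, 12, 10, 33, 14]
    crisis_day = [0,  0,  0,  0,  1,  1,  0,  0,  1,  0]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # A strong correlation flags the delivery count as a signal worth
    # watching *before* the next event - which is the hard part.
    print("correlation =", round(pearson(deliveries, crisis_day), 2))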

In the case of homeland security, I, for one, am not unhappy to see more proactive and random security inspections at airports; if anything, I'd like to see more of them, not less. As well, I hope that the people running the computers are using as much data as they can get and have tools that are capable of really "learning" from those sets of data. And as alarms occur, I hope that the systems continue to be given data - especially about the alarms themselves - and begin to learn what might have indicated that an alarm was not "real."
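A minimal sketch of that feedback loop, with features and outcomes that are placeholders of my own invention: keep running counts of how often each feature shows up in confirmed versus false alarms, and score new alarms by the learned ratios.

    from collections import defaultdict

    # Counts start at 1 so an unseen feature doesn't zero out a score.
    real_counts  = defaultdict(lambda: 1)
    false_counts = defaultdict(lambda: 1)

    def learn(features, was_real):
        counts = real_counts if was_real else false_counts
        for f in features:
            counts[f] += 1

    def score(features):
        """Crude likelihood ratio: >1 leans real alarm, <1 leans false."""
        ratio = 1.0
        for f in features:
            ratio *= real_counts[f] / false_counts[f]
        return ratio

    # Each resolved alarm flows back in as a labeled example...
    learn({"one-way ticket", "cash payment"}, was_real=True)
    learn({"name mismatch"}, was_real=False)
    learn({"name mismatch"}, was_real=False)

    # ...so the next alarm with the same features is ranked accordingly.
    print(score({"one-way ticket", "cash payment"}))  # > 1: take seriously
    print(score({"name mismatch"}))  # < 1: likely another false alarm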

The risk is that, as the Salon article implies, humans will intervene too much and short-circuit the capabilities of the systems - at which point I would agree that they do not offer much help.

1/11/2004 12:59:00 PM


 


