
The logic of control

Meet Al. Al is new to your organization, but once trained, everyone relies upon Al to know the right thing to do. Don't worry, Al's got this. Teams have Al help them make decisions; customers and even senior management rely upon Al every day. Working tirelessly in the background, Al can support virtually any part of the organization. Al is in control.

Al is usually brought in by the technology organization, which has thoroughly reviewed Al's background, qualifications, and references. But there have been problems. Virtually everywhere Al has worked, Al has consistently made mistakes. In and of itself this is no reason for concern, as we all make mistakes. The problem is that Al cannot be corrected, nor can you understand why a mistake occurred or where Al's training failed. Your only option with Al is to train, and train again.

Has Al worked in your organization? Yes, I suspect Al has.

Al also goes by the name algorithm. You know, that automagic and omniscient black box that knows exactly what to do. That is, if you have absolute confidence that your expertise in training an algorithm is up to the task. Oh yes, about that training. I often say, "I have a crystal ball, but I don't know where to put the batteries." That is particularly relevant when training these opaque amalgams of logic, because as you endeavor to compile the most complete and representative training sample for the algorithm to digest, you must also include what you will require in the future. If you are unable to gaze into your functioning crystal ball and therefore cannot include your future needs within the training sample, consider the training you are about to perform as your own training: a practice run for the ongoing training your algorithm will require.

About those mistakes I mentioned earlier: unfortunately, Al doesn't know when Al is wrong. Al does not self-correct. Al only knows what Al knows. This is the fundamental problem with employing Al: you cannot set and forget. You must be vigilant and monitor Al's performance continuously, all the more so in a dynamic environment, because Al does not say oops.
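By way of illustration, here is a minimal Python sketch of what that vigilance might look like in practice, assuming you can periodically collect ground-truth labels for Al's predictions. The PerformanceMonitor class, the window size, and the alert threshold are all illustrative assumptions, not any vendor's actual API.

```python
from collections import deque

# Minimal sketch of continuous monitoring: track accuracy over a
# sliding window of recently labeled outcomes and raise an alert
# when it drops below a fraction of the baseline. The window size
# and threshold here are illustrative assumptions.

WINDOW = 500            # number of recent predictions to evaluate
ALERT_FRACTION = 0.90   # fraction of baseline accuracy that triggers an alert

class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float):
        self.baseline = baseline_accuracy
        self.outcomes = deque(maxlen=WINDOW)  # 1 = correct, 0 = incorrect

    def record(self, predicted, actual) -> None:
        self.outcomes.append(1 if predicted == actual else 0)

    def degraded(self) -> bool:
        # Not enough evidence yet; withhold judgment.
        if len(self.outcomes) < WINDOW:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline * ALERT_FRACTION

# Usage: feed each prediction and its eventual ground truth to the
# monitor, and investigate (or retrain) when degraded() returns True.
monitor = PerformanceMonitor(baseline_accuracy=0.95)
for predicted, actual in [("approve", "approve"), ("deny", "approve")]:
    monitor.record(predicted, actual)
    if monitor.degraded():
        print("Al's accuracy has drifted; time to investigate.")
```

The point of the sketch is not the particular numbers but the discipline: Al will never volunteer that something has gone wrong, so the check has to live outside the black box.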

I think there is an amount of hubris at work among those who produce these algorithmic systems that are opaque and completely sealed off in black boxes. The siren's song of rapid deployment and automagic operation is very seductive to the under-resourced organization. But the sigh of relief often heard when a vendor emphasizes how easy it is to deploy is so often followed by frustration when the rabbit you expect to emerge from the magical algorithm hat is actually celery. Why can't these very capable and intelligent developers provide a control panel that gives those who deploy algorithm-based systems a mechanism, beyond the never-ending cycle of train and retrain, to fine-tune and proactively provide for future requirements? This would avoid the typical reduction in performance and acuity until the next iteration of training has been completed. It seems that in exchange for the perception of simplified deployment, a new skill must be developed and applied for the life of the system: you must become an expert at training Al.

Can you imagine if commercial aircraft manufacturers employed the same approach? Modern passenger aircraft are highly automated, but there is more up front than an antenna. Would you choose to fly in a highly automated modern aircraft operated without competent pilots, or with no pilots at all? I think not, at least not yet anyway. If you are considering bringing Al into your organization, please consider this: when inclined to take a leap of faith, have a clear understanding of what determines where you land.

