The long-awaited threat to human existence from Artificial
Intelligence (AI) is here!
Finally, we have something against which we can measure human intelligence. We have been limited far too long by having to rely upon our own self-referential abilities as the yardstick. We created the notion of the Intelligence Quotient (IQ) to measure some general metric of where a human falls within the population of humans. It is not a hugely useful or reliable measure, which is pretty consistent with human nature. The ‘Natural’ Intelligence Level (NIL) of humans has been an elusive quantity and quality since the first human asked themselves the salient question about another human: “Is that person really that ‘stupid’?” *
Before you take immediate offense, I want to alert you that I will shortly bring up politicians, government, and corporate executives/leaders discussing how to ‘solve’ the problems and risks that AI presents in many contexts.
* Note: The term ‘stupid’ is likely to be considered offensive by some, so please forgive me for its use. I apologize for not being able to find another term that would be as clearly understood and recognized by others without being seen as just as offensive or more so. I thought about listing some of the other terms I considered, except that just creates an even larger issue. Suffice it to say that I could find none that would fulfill the requirements of Information Theory as easily.
Artificial Intelligence (AI) has apparently come close to passing the Turing Test, initially proposed in 1950 by Alan Turing. For those unfamiliar with the Turing Test, it posits that if a person cannot tell whether they are interacting with another person or with a machine (an AI in this context), then the AI has passed the test, i.e., exhibits an intelligence equal to that of a human. Whether AI or a machine can be considered to have ‘passed’ the Turing Test is debatable in terms of when, in what context, and compared to which human(s). Some claim the test was passed in situations from a decade or two ago. However, defining human intelligence is not a simple thing to do, and even if I attempted to do so, my definition would not be agreed upon or accepted by the vast majority of those who might be considered knowledgeable. It absolutely would not be accepted by most people.
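The structure of Turing’s imitation game can be sketched as a tiny simulation: an interrogator questions two hidden respondents, one human and one machine, and must guess which is which. Everything below — the canned replies, the coin-flip guess — is a hypothetical illustration of the test’s *structure* under the assumption that the machine’s answers are indistinguishable from the human’s; it is not a real implementation of any AI.

```python
import random

def human_respondent(question):
    # Hypothetical canned human reply, for illustration only.
    return "I'd have to think about that one."

def machine_respondent(question):
    # The machine gives an indistinguishable reply.
    return "I'd have to think about that one."

def run_trial(rng):
    """One round of the imitation game: hide the respondents' order,
    ask a question, and guess which one is the machine. Identical
    replies give the interrogator nothing to go on, so the guess
    reduces to a coin flip."""
    respondents = [("human", human_respondent),
                   ("machine", machine_respondent)]
    rng.shuffle(respondents)
    replies = [fn("What is it like to be you?") for _, fn in respondents]
    assert replies[0] == replies[1]  # indistinguishable in this sketch
    guess = rng.choice([0, 1])
    return respondents[guess][0] == "machine"

rng = random.Random(0)  # fixed seed for reproducibility
trials = 10_000
hit_rate = sum(run_trial(rng) for _ in range(trials)) / trials
# If the interrogator can do no better than chance (about 0.5),
# the machine "passes" in Turing's sense.
print(f"interrogator accuracy: {hit_rate:.2f}")
```

The point of the sketch is only that “passing” is defined behaviorally: the machine succeeds when the interrogator’s accuracy collapses to chance, regardless of what is happening inside either respondent.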
Regardless of whether AI has or hasn’t passed the Turing Test, it is currently capable of fooling a significant portion of the public on any number of things. It is also capable of being used by humans for good or ill purposes. This is the source of the risks and threats from AI for the foreseeable future.
This brings us to the question of the day. How likely is it
that politicians, government entities, and AI corporate leaders are going to be
able to provide the answers, solutions, and policies that will effectively
eliminate (or more realistically mitigate) the risks, threats, and harms that
AI technology will produce? This is the appropriate question because we have
empirical data upon which to base an assessment.
To judge how well AI’s use and impact on people will be managed, regulated, controlled, or adapted to, we can look at another technology that preceded it, one that gives us valuable insight into how well our politicians and technology companies and leaders have done at managing, regulating, controlling, and adapting it. In fact, we have several, but let’s just focus on social media. I think we can all agree that the following assessments of the issues around social media are reasonably good predictors of how well politicians, government, and technical entities will do with respect to AI.
All these entities have done very poorly when it comes to social media. The problems surrounding and embedded in social media are many and mostly unresolved. For one thing, the problems have been very much enabled by the technology and the companies providing social media. Politicians and government entities are completely lost when it comes to understanding the problems and certainly have no idea how to address them.
When it comes to AI, this propensity for inaction and failure will be exponentially worse. This does not bode well for the public, the nation, or the world being competently protected. This is not because the issues that must be addressed or the problems that have to be solved are exceptionally difficult; many of them are not hard at all to deal with, or even to benefit from. We know this partly because the same was true of the even simpler problems and issues surrounding social media, and they still went unresolved.
The underlying problem isn’t AI or social media; it is that the individuals involved in finding the solutions do not possess the skills, competencies, and perspectives to do the required problem-solving. If you do not know how to understand a problem, you are relying on simply being lucky in what you choose to do or not do.