
Technology and Invention

Technology has traditionally evolved in response to human needs. Invention, when prized and rewarded, will invariably rise up to meet the free-market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged. Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness.

Exhaustive debate on consciousness and the possibility of consciousness in machines has adequately, in my opinion, revealed that it is most unlikely that we will ever converse or interact with a machine of artificial consciousness. In John Searle’s collection of lectures, Minds, Brains and Science, the arguments centering on the mind-body problem alone are sufficient to convince a reasonable person that science will never unravel the mysteries of consciousness. Key to Searle’s analysis of consciousness in the context of Artificial Intelligence machines are his refutations of the strong and weak AI theses.

Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than, humans. To them, present technology constrains this achievement. The Weak AI Theorists (WATs), almost the converse of the SATs, believe that if a machine performs functions that resemble a human’s, then there must be a correlation between it and consciousness. To them, there is no technological impediment to thinking machines, because our most advanced machines already think.

It is important to review Searle’s refutations of these theorists’ respective propositions to establish a foundation (for the purpose of this essay) for discussing the applications of Artificial Intelligence, both now and in the future.

Strong AI Thesis

The Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. Given enough computational power, memory, inputs, etc., machines will be able to think, if you believe this proposition. Proposition two, in essence, relegates the human mind to the software bin.

Proponents of this proposition believe that humans just happen to have biological computers that run “wetware” as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced, through context-input manipulation, that a machine is intelligent, then it is. Proposition four is where the ends will meet the means. It purports that when we are able to finally understand the brain, we will be able to duplicate its functions. Thus, if we replicate the computational power of the mind, we will then understand it.

Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to “understand” syntax, but not the semantics, or meaning, communicated thereby. Essentially, he makes his point by citing the famous “Chinese Room Thought Experiment.” It is here he demonstrates that a “computer” (a non-Chinese speaker, a book of rules and the Chinese symbols) can fool a native speaker, yet have no idea what it is saying. By showing that entities do not have to understand what they are processing in order to appear to understand, Searle refutes proposition one.
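To see the syntactic point in miniature, here is a toy sketch in Python of the kind of rule-following the thought experiment describes: a rulebook maps incoming symbols to canned replies, so the “room” answers convincingly while grasping nothing. The symbol names and the rulebook are invented placeholders for illustration, not real Chinese and not any program Searle discusses.

# Toy sketch of the Chinese Room idea: a rulebook maps input symbols to
# canned responses, producing plausible output with zero understanding.
# The symbol strings below are invented placeholders, not real Chinese,
# and the rulebook itself is an illustrative assumption.

RULEBOOK = {
    "symbol-greeting": "symbol-greeting-reply",
    "symbol-question-weather": "symbol-statement-sunny",
    "symbol-farewell": "symbol-farewell-reply",
}

def chinese_room(incoming_symbol):
    """Follow the rulebook: pure syntactic lookup, no grasp of meaning."""
    return RULEBOOK.get(incoming_symbol, "symbol-default-reply")

if __name__ == "__main__":
    # To an outside observer the replies may look informed; inside the room,
    # it is nothing but symbol manipulation.
    for message in ("symbol-greeting", "symbol-question-weather"):
        print(message, "->", chinese_room(message))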

Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices. Proposition two is thus a matter of science fiction rather than a plausible theory. A good chess program, like my (as yet undefeated) Chessmaster 4000 Turbo, refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation. The Chessmaster 4000 example is also an adequate refutation of Professor Simon’s fourth proposition: “you can understand a process if you can reproduce it.”

Just because the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game doesn’t mean that the computer is indeed intelligent.

Weak AI Thesis

There are five basic propositions that fall in the Weak AI Thesis (WAT) camp. The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human inventions. The second WAT proposition states that if a machine’s output, compared to that of a human counterpart, appears to be the result of intelligence, then the machine must be intelligent.

Proposition three concerns itself with the similarity between how humans solve problems and how computers do so. By solving problems based on information gathered from their respective surroundings and memory, and by obeying rules of logic, the argument goes, machines can indeed think. The fourth WAT proposition deals with the fact that brains are known to have computational abilities and that a program therein can be inferred. Therefore, the mind is just a big program (“wetware”). The fifth and final WAT proposition states that, since the mind appears to be “wetware”, dualism is valid.

Proposition one of the Weak AI Thesis is refuted by gazing into the past. People have historically credited the state-of-the-art technology of their time with elements of intelligence and consciousness. An example of this is the telegraph system of the latter part of the last century: people at the time saw correlations between the brain and the telegraph network itself. Proposition two is readily refuted by the fact that semantic meaning is not addressed by this argument. The fact that a clock can compute and display time doesn’t mean that it has any concept of counting or the meaning of time.

Defining the nature of rule-following is where the weakness lies with the fourth proposition. Proposition four fails, again, to account for the semantic nature of symbol manipulation. Referring to the Chinese Room Thought Experiment best refutes this argument. By examining the way humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy. Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactical information with seemingly semantic output, but again, is it really cognizant?

We have, through Searle’s arguments, amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks. Technologically, there is hope for some of the proponents of the Strong AI Thesis. This hope lies in the advent of neural networks and the application of fuzzy logic engines. Fuzzy logic was created as an extension of Boolean logic designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1965, fuzzy logic enabled the modelling of the uncertainties of natural language.

Dr. Zadeh regards fuzzy theory not as a single theory, but as “fuzzification”, or the generalization of specific theories from discrete forms to continuous (fuzzy) forms. The meat and potatoes of fuzzy logic is the extrapolation of data from sets of variables. A fairly apt example of this is the variable lamp. Conventional Boolean logical processes deal well with the binary nature of lights: they are either on or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and this is where applications demanding fuzzy logic come in.
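As a rough illustration of how the variable lamp escapes an on/off treatment, the following Python sketch assigns a dimmable lamp’s brightness a degree of membership in the fuzzy sets “low”, “medium” and “high”. The triangular membership shapes and their breakpoints are assumptions chosen for illustration, not values taken from Zadeh’s work.

# Minimal sketch: triangular membership functions for a hypothetical
# dimmable lamp whose brightness runs from 0.0 (off) to 1.0 (fully on).
# The set shapes and breakpoints are illustrative assumptions.

def triangular(x, left, peak, right):
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x == peak:
        return 1.0
    if x < peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def lamp_memberships(brightness):
    """Map a crisp brightness value onto the fuzzy sets low/medium/high."""
    return {
        "low":    triangular(brightness, -0.5, 0.0, 0.5),
        "medium": triangular(brightness,  0.0, 0.5, 1.0),
        "high":   triangular(brightness,  0.5, 1.0, 1.5),
    }

if __name__ == "__main__":
    # A half-on lamp is fully "medium"; a quarter-on lamp is partly "low"
    # and partly "medium" at the same time.
    for b in (0.0, 0.25, 0.5, 0.9):
        print(b, lamp_memberships(b))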

Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level based upon an analysis of the data. Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. Such a system takes collections of data in fuzzy rule format. According to Dr. Zadeh, the rules in a fuzzy logic expert system will usually follow this simple form: “if x is low and y is high, then z is medium”.

Under this rule, x is the low value of a set of data (the light is off) and y is the high value of the same set of data (the light is fully on). The variable z is the output of the inference, based upon the degree of fuzzy logic application desired. It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The collection of rules in a fuzzy logic expert system is described as the rulebase. The fuzzy logic inference process follows three firm steps and sometimes an optional fourth.

They are:

1. Fuzzification is the process by which the membership functions determined for the input variables are applied to their true values, so that the truthfulness of each rule’s premise may be established.
2. Under inference, truth values for each rule’s premise are calculated and then applied to the output portion of each rule.
3. Composition is where all of the fuzzy subsets of a particular problem are combined into a single fuzzy variable for a particular outcome.
4. Defuzzification is the optional process by which the fuzzy result is converted to a crisp variable. In the lighting example, a level of illumination can be determined (such as a potentiometer or lux value).
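To make the four steps concrete, here is a minimal Python sketch of a hypothetical two-input lighting controller that uses the rule format above. The membership shapes, the rulebase, and the choice of max-min inference with centroid defuzzification are illustrative assumptions, one common way of realizing the steps rather than a prescribed method.

# Minimal sketch of the four inference steps for a hypothetical lighting
# controller: ambient light and room activity in, lamp level out.
# Membership shapes, the rulebase, and the max-min/centroid operators are
# illustrative assumptions, not a reproduction of any particular system.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over a 0..1 universe, reused for both inputs and the output.
SETS = {
    "low":    lambda v: tri(v, -0.5, 0.0, 0.5),
    "medium": lambda v: tri(v,  0.0, 0.5, 1.0),
    "high":   lambda v: tri(v,  0.5, 1.0, 1.5),
}

# Rulebase in the "if x is ... and y is ... then z is ..." format.
RULES = [
    (("ambient", "low"),  ("activity", "high"), ("lamp", "high")),
    (("ambient", "low"),  ("activity", "low"),  ("lamp", "medium")),
    (("ambient", "high"), ("activity", "high"), ("lamp", "low")),
]

def infer(ambient, activity, steps=101):
    inputs = {"ambient": ambient, "activity": activity}
    universe = [i / (steps - 1) for i in range(steps)]
    composed = [0.0] * steps            # buffer for the composed output set

    for (v1, s1), (v2, s2), (_, out_set) in RULES:
        # 1. Fuzzification: degree to which each premise term holds.
        d1 = SETS[s1](inputs[v1])
        d2 = SETS[s2](inputs[v2])
        # 2. Inference: premise truth (min for "and") clips the output set.
        strength = min(d1, d2)
        clipped = [min(strength, SETS[out_set](z)) for z in universe]
        # 3. Composition: combine all rules' output subsets with max.
        composed = [max(a, b) for a, b in zip(composed, clipped)]

    # 4. Defuzzification: centroid of the composed set -> crisp lamp level.
    area = sum(composed)
    return sum(z * m for z, m in zip(universe, composed)) / area if area else 0.0

if __name__ == "__main__":
    # A dark but busy room comes out at a lamp level of roughly 0.8.
    print(infer(ambient=0.2, activity=0.8))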

A new form of information theory is possibility theory. This theory is similar to, but independent of, fuzzy theory. By evaluating sets of data (either fuzzy or discrete), rules regarding relative distribution can be determined and possibilities can be assigned. It is logical to assert that the more data is available, the better the possibilities can be determined.

The application of fuzzy logic to neural networks (properly known as artificial neural networks) will revolutionize many industries in the future. Though we have determined that conscious machines may never come to fruition, expert systems will certainly gain “intelligence” as the wheels of technological innovation turn.
