Whenever a person wants to present themselves as an industry expert, one credible approach is to paint a shining picture of future technology and what people can expect from hopeful visions of things to come. One issue that has long bothered me is the current general perception of artificial intelligence technology.
There are a few key concepts that are rarely included in the general discussion of creating machines that think and act like us. First, the problem with artificial intelligence is that it is artificial. Trying to create machines that work like the human brain, with its special creative properties, has always seemed pointless to me; we already have people to do all that. If we succeed in building a system every bit as able as the human brain to create and solve problems, that achievement will also carry the same limitations.
There is no benefit in creating an artificial life form that surpasses us only to further degrade the value of humanity. Creating machines to enhance and complement the wonders of human thinking, however, has many appealing benefits. One significant advantage of building artificially intelligent systems is the teaching process itself. Like people, machines have to be taught what we want them to learn, but unlike us, machines can be imprinted with instructions in a single pass.
Our brains allow us to selectively discard information we do not want to retain, and they rely on repetition to imprint long-term memories. Machines cannot “forget” what they are taught unless they are damaged, reach their memory capacity, or are specifically instructed to erase the information they were tasked to retain. This makes machines excellent candidates for performing tediously repetitive tasks and for storing all the information we do not want to burden ourselves with absorbing.

With a little creativity, computers can be adjusted to respond to people in ways that are more pleasing to the human experience, without any need to replicate the processes that make up that experience. We can already teach machines to issue polite responses, offer helpful hints, and walk us through learning processes that mimic the niceties of human interaction, without requiring them to understand the nuances of what they are doing. Machines can repeat these actions because a person has programmed them to execute the instructions that produce these results. If a person takes the time to express aspects of their own personality in a sequence of mechanical instructions, computers can faithfully repeat those processes when called upon to do so.
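The point that a machine's "personality" is nothing more than programmed instructions can be sketched in a few lines of Python. This is a hypothetical illustration of my own; the dictionary, event names, and function are invented for the example, not taken from any real system:

```python
# A "polite" machine personality is just a table of scripted responses.
# The machine never understands them; it only retrieves and repeats them.
POLITE_RESPONSES = {
    "greeting": "Hello! How may I help you today?",
    "error": "I'm sorry, something went wrong. Let's try that again.",
    "farewell": "Thank you for your time. Goodbye!",
}

def respond(event: str) -> str:
    """Return the scripted response for an event.

    Unlike a person, the machine will not forget or vary these lines
    unless someone rewrites the table.
    """
    return POLITE_RESPONSES.get(event, "I'm not sure how to respond to that.")

print(respond("greeting"))  # the canned greeting, exactly as programmed
print(respond("unknown"))   # the fallback line for anything untaught
```

The niceties here are imprinted in a single pass: edit the table once, and the machine repeats the new behavior faithfully ever after, which is the whole appeal described above.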
In today’s marketplace, most software developers do not invest the extra effort required to make their applications seem more polite and conservatively friendly to end users. If the commercial appeal of doing so were more apparent, more software vendors would race to jump on this bandwagon. Since the consuming public understands so little about how computers really work, many people seem nervous about machines that project a personality too human in the flavor of its interaction. A computer personality is only as good as the creativity of its originator, which can be quite entertaining. For this reason, if computers with personality are to gain ground in their appeal, friendlier system design should partner with end users themselves in building and understanding how an artificial personality is constructed. When a new direction is needed, a person can incorporate that information into the process, and the machine learns this new aspect as well.
People can teach a computer to cover all the contingencies that arise in accomplishing a given information-management purpose. We do not have to take ourselves out of the loop in training computers to work with people. The goal of achieving the highest form of artificial intelligence, the self-teaching computer, also reflects the highest form of human laziness. My objective in design is a system that does what I want it to do, without my having to negotiate over what the system wants to do instead. This approach is already easier to achieve than most people think, but it requires consumer interest to become more prevalent.