Artificial intelligence now has the ability to make decisions for us, and that is exciting for us and for the world. But a question, or a "danger", always lingers: who will be the servant and who will be the master? Should we look at AI differently? Should we really hand over control and let AI make its own choices?
Maybe at some point, or under some circumstances, we will have to give up our absolute control. Artificial intelligence without decision-making power is not true artificial intelligence.
Machine learning sometimes has to act on its own
We can't take advantage of machine learning if it has to wait for human input before every decision. If a self-driving car needed to ask a human before turning or slowing down, it simply couldn't do the job; it would be no different from the cars we drive today.
With so many decisions to make in our lives, we naturally want to hand some of them over to machines. If you believe the numbers from Microsoft To Do, we make 35,000 decisions every day. There is no research to back that figure, but Cornell University researchers found that we make more than 200 decisions a day about food alone, so it is a plausible guess.
What is interesting is that most of these decisions are instinctive. We don't even think about them because they are hardwired into our brains. When we subconsciously steer away from a potential hazard, or slow down because of an accident ahead, those are decisions we make. When we turn our eyes toward a sudden noise, we make another one.
I believe that if we want to automate tasks, we need to empower machines to make similar decisions, and I don't think it's a problem to leave many non-subjective decisions to AI. There is a difference between avoiding danger while driving and deciding which direction to drive in. We can draw a line in the sand between those kinds of decisions.
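To make the distinction concrete, here is a minimal sketch in Python of where such a line might sit; the decision categories and names are hypothetical, invented purely for illustration. Reflexive, safety-style decisions are resolved by the machine, while subjective ones are escalated to a person.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DecisionKind(Enum):
    REFLEXIVE = auto()    # hazard avoidance, braking, swerving
    SUBJECTIVE = auto()   # destination, route preference, value trade-offs


@dataclass
class Decision:
    name: str
    kind: DecisionKind


def decide(decision: Decision, ask_human) -> str:
    """Reflexive decisions are handled by the machine; subjective ones are escalated."""
    if decision.kind is DecisionKind.REFLEXIVE:
        return f"machine: {decision.name}"
    return f"human: {ask_human(decision.name)}"


if __name__ == "__main__":
    line_in_the_sand = [
        Decision("brake for an obstacle", DecisionKind.REFLEXIVE),
        Decision("take the scenic route or the fast one", DecisionKind.SUBJECTIVE),
    ]
    for d in line_in_the_sand:
        print(decide(d, ask_human=lambda name: f"driver decides ({name})"))
```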
And fortunately, I’m not the only one who feels the need to draw the line. How and where to draw that line, however, remains to be debated.
Where do we draw the line?
Stephen Hawking, Elon Musk, Bill Gates, and just about everyone else in tech have warned that artificial intelligence could take over the world and wipe us off the face of the earth. The Future of Life Institute, of which Musk is a major benefactor, is determined to keep AI under human control and is prepared to push for legislation to that effect. Legislative action would be one way to draw a line in the sand.
The big question is whether machines designed for a particular task can become self-aware and influence other machines through the Internet and the Internet of Things.
It is a logical leap to claim that a car or a robot could become self-aware, yet such systems could, in principle, infiltrate our networks from behind the scenes and start a threatening revolution. That loophole worries many AI critics. If such a loophole exists, appropriate legislative action must be taken as soon as possible, because by the time we discovered it had been exploited, it would be too late to regain control. Whether the danger is real, however, remains an open question.
There is another aspect of the discussion about artificial intelligence that goes far beyond our concerns about our place in the world. Why do we need artificial intelligence, and why are we so keen to hand control over to machines? What is the meaning of our existence in this world? If we leave the big decisions to robots, what is left for us to do?
Do we want to give control to AI?
Artificial intelligence has recently made its debut in the legal system. One example currently under way is the proposal to create a perfect robot judge that could take the place of a human judge in making the final decision.
Another suggestion is to have the robot judge assist rather than replace the human judge, since humans bring empathy, intuition and other qualities that machines cannot replicate. This sounds like the more sensible solution: the AI would apply the letter of the law and offer a suggested range of sentences from which the human judge could choose when passing judgment.
Similarly, when innovating on a new product, designers can use the remarkable power of machine learning to evaluate thousands of different permutations, while keeping a human with common sense at the end of the chain to make sure the computer's work really does have our best interests at heart, as in the sketch below.
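A minimal sketch of that "human at the end of the chain" idea might look like the following Python; the design parameters and the scoring function are placeholders I have invented, standing in for a real learned model. The machine enumerates and ranks every permutation, and a person applies common sense to the shortlist.

```python
from itertools import product

# Hypothetical design parameters; a real product would have far more.
MATERIALS = ["aluminium", "steel", "polymer"]
THICKNESS_MM = [1.0, 1.5, 2.0, 3.0]
FINISHES = ["matte", "gloss"]


def score(material: str, thickness: float, finish: str) -> float:
    """Placeholder scoring model standing in for a learned one."""
    weight_penalty = {"aluminium": 0.2, "steel": 0.6, "polymer": 0.1}[material]
    return 10.0 - weight_penalty * thickness - (0.3 if finish == "gloss" else 0.0)


def top_candidates(k: int = 3):
    """Let the machine enumerate and rank every permutation."""
    candidates = product(MATERIALS, THICKNESS_MM, FINISHES)
    return sorted(candidates, key=lambda c: score(*c), reverse=True)[:k]


if __name__ == "__main__":
    # The machine ranks; a human applies common sense to the shortlist.
    for i, (material, thickness, finish) in enumerate(top_candidates(), 1):
        print(f"{i}. {material}, {thickness} mm, {finish} "
              f"(score {score(material, thickness, finish):.2f})")
    choice = input("Pick the design to build (1-3): ")
    print(f"Human approved option {choice}.")
```

The point is not the scoring itself; it is that the final approval stays an explicit, human step.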
Surgeons, we are told, will be replaced by robots with steadier manipulators that can observe patients more closely and operate more quickly. But can those robots become qualified surgeons?
This is where another question arises. If surgical mortality rates really would fall, and AI doctors really would be safer and more successful than human ones, it would make sense to give robotic surgeons greater autonomy over their choices.
So even our choices may eventually be automated by artificial intelligence.
But we have to think about human nature. If computers are always right, we will eventually hand over control.
In the same way, once designers find that the computer does better in certain cases, they will approve its choices more readily and relax their oversight. If the AI judge makes a perfect decision every time, the human judge will eventually just defer to it.
So we may not need a Terminator-style uprising after all; perhaps that story is fiction and only an analogy for what will actually happen. We will willingly hand our decision-making power to a good enough AI. A powerful AI could make human decision-making irrelevant within a generation, and children raised in a perfect AI environment will probably never make an impulsive decision of their own.
The human brain is lazy: it prefers to engage with the world through simple shortcuts, while artificial intelligence brings ever more processing power to its reasoning. We could end up handing over the world, and every major decision, to an artificial intelligence that never even asked for control.
Artificial intelligence must be used to empower humans
Instead of giving AI total control of the world, shouldn't we use it to enhance our own decisions?
I come back to this question whenever I think about how AI should work, especially as we are still building complete AI platforms in Germany.
We are constantly distracted. Our ability to focus on the important decisions in our lives is hampered by the flood of information around us, the constant pull of our smartphones, and the stress of day-to-day life. We should delegate the most mundane tasks to AI, but make sure that delegating them remains a conscious human decision.