This article is about 2,322 words and takes roughly 7 minutes to read.
The craze for AI has been accompanied by plenty of negative reporting about AI algorithms.
Whether it is the cinematic future of The Terminator and The Matrix, in which artificial intelligence and robots wipe out, enslave, and replace humans, or real-world cynicism about "Sophia, the first AI citizen," which critics say is really nothing more than a chatbot wearing a strange puppet head, AI seems to have become synonymous with terror, discrimination, and prejudice.
And opinions differ on the biases inherent in the AI systems that help us make decisions.
Many people take it for granted that AI algorithms discriminate by race, gender, and even age, and are quick to point fingers.
An article published in New Scientist identifies five biases in existing AI systems that can subtly affect people's real lives. One of the best-known cases is the COMPAS scandal. COMPAS was introduced in the United States to predict the likelihood that a criminal would reoffend, and thus to guide sentencing. But according to ProPublica's investigation, the system was far more likely to incorrectly flag Black defendants as future reoffenders.
But humans are the creators of these algorithms. An algorithm does not exist in order to discriminate.
Often it is the way an algorithm is used that leads to bias. Moreover, the data the algorithm learns from is itself a source of bias. There is no perfect algorithm for most situations, especially social ones.
Deviations occur when people try to force round pegs into oval holes, and today's AI capabilities still fall short of the tasks we hand them.
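The point about biased data can be sketched with a toy example (hypothetical numbers, plain Python): if historical labels over-record one group, then even the simplest "model" that learns rates from the data will faithfully reproduce the bias, without any discriminatory intent in the code itself.

```python
import random

random.seed(0)

# Hypothetical illustration: groups A and B have the SAME true reoffense
# rate (30%), but historical records over-report group B (say, due to
# heavier policing), so the training labels are biased.
def historical_label(group):
    truly_reoffends = random.random() < 0.30            # same base rate for both
    over_recorded = group == "B" and random.random() < 0.25
    return truly_reoffends or over_recorded             # biased label

train = [(g, historical_label(g)) for g in ["A", "B"] * 5000]

# The simplest possible "risk model": learn each group's observed rate.
# It inherits the data's bias even though the math is neutral.
def learned_rate(group):
    labels = [y for g, y in train if g == group]
    return sum(labels) / len(labels)

print(round(learned_rate("A"), 2))  # close to the true 0.30
print(round(learned_rate("B"), 2))  # inflated (~0.47), purely from biased labels
```

The model here is deliberately trivial; the same effect appears in sophisticated classifiers, because they too can only fit the labels they are given.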
What do we need in this situation?
The answer is understanding.
While politicians wrestle with proposed regulations, lawyers and investigators need to work with technologists to understand the innovation process from end to end. They do not need every detail, but they do need to grasp the main issues and the bigger picture.
Lawyers need this understanding for the same reason AI products need product managers: these systems affect people's lives.
China's social credit system is proof of how deeply AI can reach into people's lives: it has changed the structure of society, daily life, and commercial markets. In the West, as innovation continues, lawyers must learn to handle complexities that go beyond contract language and into the language of algorithms.
They have to truly grasp what an algorithm is trying to accomplish: its original purpose, its true intent, and its impact.
The whole process resembles applying the law to individuals, with the AI system treated as a kind of individual entity. But unlike humans, who are capable of higher-order thinking, today's systems exist for specific purposes and perform specific functions.
Under this premise, an AI system's intent comes from the organizations that create, plan, and use it. When third-party vendors of AI algorithms enter the model, the situation becomes far more complicated: the picture includes the organizations that design the algorithms, the companies that build software applications on them, and the companies that use that software for business. It becomes more complicated still when government departments are among the parties involved.
Behind the surface math and patterns, algorithms have other goals to accomplish. In algorithmic software, those goals are often buried under layers of innovation, and in the business processes built on top of the software, the original intent is reshaped at every step.
Innovation is moving faster than ever. Much of the time, the technologists who design new algorithms are still optimizing them even as businesses chase their own needs, and commercial use can distort an algorithm's original intent considerably.
This is where a lot of grey areas come in. In these grey areas, law must work with innovation to move forward.
Laws require flexibility, in both application and enforcement. Most people do not associate the law with flexibility, but AI is still so young that existing law cannot yet account for the gray areas it creates.
A total absence of rules leads to chaos, and some regulation is better than none. But regulations must be carefully crafted so that they do not stifle technological innovation.
That is why tech experts and lawyers need to work together. The two perspectives are not contradictory: the goal of cooperation is not to control the world, nor to let tech companies entrench their dominance over data.
We live in an unstable world where everyone's data is pooled together. When security is breached, we all face the same level of risk; when privacy laws are violated, we are all affected in the same way.
The question is how do we work together?
In the West, the entire legal system is constantly being tested by innovation and development. When law and innovation conflict, opportunities arise to revise regulations and enact new laws.
Collaborative models for the age of artificial intelligence already exist: projects like Google's AI for Social Good and the AI Now workshops bring together tech experts, companies, social scientists, governments, lawyers, and managers.
The media may have identified these problems and made people think about them, but solving the deeper ones requires larger efforts that can bring technologists, companies, and governments to the same table.
If the problem is not addressed in depth together with companies and governments, it will only worsen as more people are affected by it.
When the problem is magnified, the gray area becomes bigger than the problem itself.
Lawyers have their work cut out for them in the age of artificial intelligence.
Lawyers will play a crucial role in the gray areas of artificial intelligence as it comes under regulation. They are the key to collaboration between businesses, technologists, and governments, serving not only as mediators but also as a force for proposing changes to existing laws.
FPF (the Future of Privacy Forum) has just released its privacy expert's guide to artificial intelligence and machine learning. It is a good starting point for lawyers learning about AI and machine learning.
By studying the subtleties of AI technology, lawyers can come to understand the gray areas where innovation and regulation must be balanced.
By cooperating better with technical experts, we can build a tomorrow in which humans and artificial intelligence coexist in harmony.
We share practical insights on AI learning and development. Follow our AI-focused channel "Core Reading Tech" across platforms.
(Add WeChat: DXSXBB to join our readers' circle and discuss the latest artificial intelligence technology.)