Developing "Trustworthy" AI

Henry Ford developed mass production of the automobile. It was, however, roughly 60 years before we made seat belts mandatory.

This shows how we can advance a technology while overlooking some of its basic safeguards.

When it comes to AI, we are at the beginning of a very long journey. The foundation we lay today is what everything else will be built upon. For this reason, we need to establish a few guidelines now so that we end up with "trustworthy" AI.



It all starts with democratizing AI. This is an imperative: AI should be accessible to everyone in society. Projects such as SingularityNet are paving the way by putting AI services on a blockchain, meaning anyone can put out a request and have AI developed for them by experts from all over the world.

Once this happens, we need to ensure that everyone in society benefits. The idea is that nobody is left behind. AI should be designed to help as many people as possible. At the moment, we are seeing AI used for surveillance, for killing, and for selling people stuff. That is not the best use of this technology.

Movements have to arise to push those developing the technology to design it with the benefit of society in mind. Keeping it out of the sole control of governments and major corporations is a starting point. It is encouraging to see the major technology companies opening up their AI platforms.

AI might be one of the most powerful technologies we have ever seen, which means special care needs to be taken. As the Ford example shows, the balance is between making progress and ensuring things are done in a sane manner. Striking that balance is often tricky.

A large part of this can come from the industry itself. The community has already accepted six pillars to guide the march forward:

There is already a consensus in the international community about the six dimensions of “Trustworthy AI”: fairness, accountability, value alignment, robustness, reproducibility and explainability. While fairness, accountability and value alignment embody our social responsibility; robustness, reproducibility and explainability pose massive technical challenges to us.

weforum.org/agenda/2020/02/where-is-artificial-intelligence-going/

The world has already proven that it can handle a dangerous technology and not let it get out of hand. Biological weapons were an equally dangerous technology. The global community came together and decided that their use was not in anyone's best interest, and outside of a few rogue incidents, we have seen little in the way of their use.

We need to take the same approach to AI. Many fret about AI killing us. The sad reality is that AI will only enhance our ability to kill each other and, ultimately, ourselves.

"Trustworthy" AI is a step in the right direction. To achieve that end, we need to find some trustworthy people to head up the initiatives.


If you found this article informative, please give an upvote and resteem.


Posted via Steemleo



1 comment

The more I read your content on AI, the more scared I become. Though I subscribe to technological development, your way of expositing the subject glaringly shows how many people could go hungry for the sake of AI development.

Posted via Steemleo
